Control UI From External Source?

[Hi @bert @thetechnobear and friends - congrats on the new release and wishing the team well out at Superbooth!]

So I’ve had my SSP for a little over two years now (wow, time has flown!) and while it has shown itself to be an amazing instrument, there is so much more ground that I can see it covering with some minor extensions. Having done a ton of thinking over this time about how I can better make use of the system and contribute back, I’ve finally come up with what I hope is received as a straightforward, easy to implement, and valuable-to-the-community feature request: a simple piece of new functionality that I’d build useful things with immediately, contribute back, and that would enable others to do the same.

Given that Percussa’s UI has been stable for quite some time, it seems ripe for extension via automation. I’d like to propose a small but powerful feature that would enable me and the community to extend, customize, and streamline SSP’s physical interface externally, while minimizing any effect on higher priority internal development efforts. I have a couple of other questions / wishlist items listed at the end, but in essence I’m really just looking for a way to map every physical UI input to an external MIDI control[1][2] (I’m aware of the Macro functionality, which does help in limited cases, but it doesn’t cover the scope of the proposed feature.) With full automation of all UI controls, users could not only choose whatever high end physical interfaces are available to navigate the UI in realtime (a Monome Arc as the encoders would be incredible, for example), but in some contexts[3] they could also extend/customize/streamline areas like the patch interface in novel, efficient, and interesting ways that could also inform or prototype the next generation of the UI itself.

Having programmatic access to all of the buttons and encoders would allow a user to write valuable automation[4] of repetitive tasks, but not only that; new ways of interacting with the SSP in a “flow” state could then materialize. “Jamming” on the SSP in the context of patch editing could become super efficient[5] in ways not conceivable right now. Granted, being able to interact with a complex patch in arbitrary ways on the fly is already possible (I get that this is kind of the heart of what the SSP is all about), but reducing friction in that dimension would IMO produce super interesting opportunities given the SSP’s inherent flexibility and capability around patching.

Again, I would agree that in a practical sense the UI already provides the functionality to build arbitrarily complex patches right now, so there are grounds for pushback on a feature request like this within the scope of that specific automation use case. That is just one example of the opportunity this feature can create, though. My hope is that (a) connecting UI functions to external MIDI controls would be relatively trivial to implement, so it would not materially cut into higher priority internal development efforts, and (b) the majority of SSP users would agree that they’d significantly benefit from this in some form (mostly the tactile option immediately), and that (eventually) there would be valuable engagement in extending the SSP’s capabilities through UI automation.

It is clear to me that the community wants to see the SSP succeed in the long term, as I genuinely do, and I strongly believe a cheap-to-implement feature like this would pay for itself so many times over that it’s a no-brainer to implement in short order. If it’s really tough to implement, or for whatever reason the devs don’t expect it to ever see the light of day, then maybe the PBP format spec [6] would take only a few minutes to send out on the forum, and we could quickly get a different type of traction on at least improving the patch builder interface that way? I just hope that this feature request is feasible, and that it resonates with the community, or better yet, that I am wrong about it not existing so we can start using it right away :slight_smile:

Appreciate the Percussa team’s time, and thanks for your hard work creating such a deeply musical and uniquely capable system. Safe travels in/out of Superbooth and hope you all have a great time out there!

Footnotes:

  1. Or I2C, or OSC, or whatever is best. I don’t personally care which protocol. Even something akin to Quartz/Carbon on MacOS would be great (and probably much more powerful, if anything similar is available within the SSP runtime.)

  2. Does the SDK API have some kind of facility that could be used to do this already? Could someone just write a VST module that can turn encoders and push buttons, which could then be connected with the MIDI module as normal? I don’t know how the internals work, but on the surface maybe extending the API that way is the right approach to expose these systems to external developers.

  3. Is there an API to read the value of whatever is in focus under the cursor, or of an encoder (or of an encoder while the shift button is held down, etc.)?

  4. Can I disable encoder and menu rollover behavior, so that a naive automation implementation without knowledge of the current value could simply roll the encoder a bunch (or hit the button many times) in the “down” direction to index itself to zero before proceeding to set an intended value?

  5. What is the reasoning behind requiring the user to manually whitelist the input of a destination module after its connection from the source module has already been defined in the source module’s output list? Could there be a user configuration to make that happen automatically by default (and conversely to automatically un-whitelist the input when disconnected)? Pardon if I’m getting the directional part of this wrong or something else (I haven’t had time to interact with the UI for a little while), but IIRC there is duplication of effort in this process: the user has to configure both ends of what conceptually should be a single relationship edge.

  6. It would be enormously useful to get an official PBP file format specification. With such documentation it would seem straightforward to write an arbitrary patch editor, which could (maybe) run in realtime in a VST or externally, so users could prepare new patches or modify existing ones with all manner of edit facilities, outside the confines of the internal patch editing interface. Imagine being able to write a boilerplate patch in YAML, for instance, manually or programmatically manipulate the object until it is ready to go with sufficiently interesting new settings and patch I/O connections, check it into source control, allow it to be forked/shared/etc., and at that point translate it back into PBP. Then load a bunch of iterations that were efficiently created, to cycle between and tweak inside of SSP from there. Does a complete specification exist internally that could easily be shared with the community? I’m not sure why this part of SSP would need to be proprietary or closed… Percussa could just give community devs a disclaimer that warns of no guarantees of compatibility between releases, that a corrupt file could crash the system when loaded, etc.
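
  To make that concrete, a patch authored in such a (purely hypothetical) YAML schema might look something like this; every module type, parameter name, and field here is invented for illustration, since no public PBP spec exists:

  ```yaml
  # hypothetical schema, invented for illustration - not a real PBP mapping
  modules:
    - id: osc1
      type: oscillator
      params: { pitch: 0.5, level: 0.8 }
    - id: lfo1
      type: lfo
      params: { rate: 0.2 }
  connections:
    - from: lfo1.out
      to: osc1.pitch      # and auto-whitelisted on the input side, per footnote 5
  ```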

4 Likes

wow, long post… so please forgive what’s likely to be a long reply :slight_smile:
(for users interested in the tl;dr: skip to the end!)

please, bear in mind, this is MY personal opinion… others may vary!


first a general statement about this area…
this is something dear to my heart… the concept of extending the ‘control surface’ of the SSP to other controllers is, I believe, paramount to making it a performable instrument, and something we can improve.
I’ve already discussed it a few times with Bert, and hope to spend some more time discussing it whilst at Superbooth with Percussa (*)
( * ) this is true of many topics; it’s going to be great for talking thru a number of areas.

> I strongly believe a cheap-to-implement feature like this would pay for itself so many times over that it’s a no-brainer to implement in short order. If it’s really tough to implement, or for whatever reason the devs don’t expect it to ever see the light of day, then maybe the PBP format spec [6] would take only a few minutes to send out on the forum, and we could quickly get a different type of traction on at least improving the patch builder interface that way

the issue is not one of ‘lack of creative ideas’ but rather of development time/resources.
I have tons of ideas for the SSP, so does Percussa… but time is not so generous.

also, frankly (and I mean no disrespect here), what is a cheap / quick win pretty much 100% depends on where the current code base is… which users cannot know (you need to know the source code).
quick wins/hacks are almost always something that’s ‘kind of there already’.
also, some things, and I think this is one of them… need to be done ONCE, properly.

I say all of this to set expectations… it’s definitely recognised; it’s more a matter of priority and development time.


BUT on the more positive side…
my modules already have the basics pretty much in place… even if synthor does not.

you can see my approach in ALL my modules (released and unreleased)!
it comes in 4 parts:
a) parameter automation
b) midi learn
c) controller mapping
d) bidirectional mapping

(I’m sure you have seen this in my modules… if not, then go check it out :slight_smile: )

of course, this is not ‘new’; think about it - this is how every DAW works, and I’m 100% sure Bert has considered this all before too… again, it’s time/resources.

Parameter Automation

all of my modules are built upon parameter automation… any ‘control’ which makes sense, I’ve made automatable.
these parameters are exposed to the ‘host’ (aka synthor), and in development this is actually part of how I test these modules without any ‘hardware’.
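
for illustration, the shape of this is roughly the following - a plain C++ sketch, not the actual ssp-sdk API (all names here are made up):

```cpp
// illustrative sketch only - not the ssp-sdk API.
// a module registers named, normalised (0..1) parameters; the host (or a
// midi mapping layer) can then set them without touching the module's UI.
#include <cstdio>
#include <functional>
#include <map>
#include <string>

struct Parameter {
    float value = 0.0f;                    // normalised 0..1
    std::function<void(float)> onChange;   // notify the module / displays
};

class Module {
public:
    Parameter& addParameter(const std::string& id, float initial) {
        auto& p = params_[id];
        p.value = initial;
        return p;
    }
    // entry point used by host automation (and by midi learn, below)
    void setParameter(const std::string& id, float v) {
        auto it = params_.find(id);
        if (it == params_.end()) return;
        it->second.value = v;
        if (it->second.onChange) it->second.onChange(v);
    }
private:
    std::map<std::string, Parameter> params_;
};

int main() {
    Module m;
    m.addParameter("cutoff", 0.5f).onChange =
        [](float v) { std::printf("cutoff -> %.3f\n", v); };
    m.setParameter("cutoff", 0.75f);   // what host automation would call
}
```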

Midi Learn

the MIDI module is great, but it requires extra wiring… and frankly, it’s very time consuming for something you want to map quickly. midi learn is a great approach to this, as it’s so immediate.
and, of course, I have implemented this using parameter automation… so it’s applicable to every control.
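
continuing the sketch above, a midi-learn layer is essentially this (again, illustrative names only, not my actual implementation):

```cpp
// while "learning", the next CC received is bound to the selected parameter;
// thereafter, matching CCs drive the parameter via setParameter().
#include <string>
#include <vector>

struct CCBinding { int channel; int cc; std::string paramId; };

class MidiLearn {
public:
    // arm learn mode for the currently selected parameter
    void startLearn(const std::string& paramId) { learning_ = paramId; }

    // call this for every incoming CC message
    void onCC(int channel, int cc, int value, Module& m) {
        if (!learning_.empty()) {                 // first CC seen -> bind it
            bindings_.push_back({channel, cc, learning_});
            learning_.clear();
        }
        for (const auto& b : bindings_)
            if (b.channel == channel && b.cc == cc)
                m.setParameter(b.paramId, value / 127.0f);  // 7-bit -> 0..1
    }
private:
    std::string learning_;
    std::vector<CCBinding> bindings_;
};
```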

Controller Mapping

my modules all support independent midi controller mapping - channel/device.
so you can use MANY controllers, not just one (as with the MIDI module).
note: I’ve fixed the issue synthor had with this in the next firmware.

Bidirectional mapping

a really important feature for me was to have it so that if I change a parameter on the SSP, the change is reflected on controllers that support displays - I hate having control surfaces that are ‘out of sync’ with the sound engine.
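
in the terms of the sketch above, the feedback path is just a listener on the parameter (sendCC standing in for whatever midi-out call the platform provides):

```cpp
// sketch: reflecting SSP-side parameter changes back to the controller.
// reuses the Parameter struct above; a real implementation would support
// multiple listeners rather than a single onChange slot.
#include <functional>

void attachFeedback(Parameter& p, int channel, int cc,
                    std::function<void(int, int, int)> sendCC) {
    // whenever the parameter moves (encoder, automation, ...), echo it out
    p.onChange = [=](float v) {
        sendCC(channel, cc, static_cast<int>(v * 127.0f + 0.5f)); // 0..1 -> 0..127
    };
}
```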

it’s not perfect yet… there are a couple of things missing in my implementation so far.
most obvious: midi learn range - basically scale/offset.
7-bit midi is most common, and is limited to 0-127… and a lot of the time you don’t want to move a parameter over that full range… you’d rather have more precise control over a small range.

note learn: currently it only learns CCs… but notes are useful for buttons.
(buttons do work with CC, but it’d be nice to allow notes too)
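
(for what it’s worth, the missing ‘learn range’ is just a linear remap - illustrative:)

```cpp
// map a 7-bit CC into a learned sub-range of a normalised parameter.
// e.g. scale = 0.1, offset = 0.45 confines CC 0..127 to the 0.45..0.55 band.
float ccToParam(int value, float scale, float offset) {
    return offset + (value / 127.0f) * scale;
}
```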

there’s a ton of other stuff; as I said, I’m not short of ideas for improvements here :wink:


BUT the main drawbacks here come from the limitations of what’s possible inside a module:
a) I cannot do anything about automating SSP modules
b) automation is done on a per-module basis.
this is actually not ‘bad’, as it nicely organises things, but it means getting an overview is difficult.

ideally I think we would:
a) add automation parameters to synthor modules
b) have synthor centrally manage parameters
c) implement midi learn in synthor
d) have synthor support multiple midi controllers.

the macro module - this is the more contentious one. I personally don’t think the macro module is really needed. I think the ‘macro mode’ (off the preset screen) should really be a configurable view which allows you to select modules’ parameters… so that we have labels.

k, so for me that’s the direction - but as perhaps you can see, that is a LOT of work.
unfortunately, it also implies a certain implementation in some areas which I think could prove challenging.
it all adds up to significant development effort.
therein lies the rub…
as I said, it is not a lack of ideas on the direction forward - but time/resources.

k, so on to your specific questions

  1. I2C or OSC
    midi is obviously the priority, as it’s by far the most supported protocol and does not require extra hardware.
    however, once you have parameter automation in place it’s pretty easy to expose this via other protocols (see the OSC sketch after this list).
    there’s some stuff in the new firmware that will help on this front :wink:
    I have also considered doing a full control surface implementation on the Electra One with a proprietary protocol, as the E1 is a perfect companion to the SSP :slight_smile:

  2. ssp-sdk
    this is not really viable across modules, as I detailed above.
    basically my implementation is pretty much as far as you can go with what’s available now.
    … of course, you could extend the ssp-sdk to cover more features, but honestly, most of what we are talking about here is considered a ‘host feature’… hence why it’s not in the VST/AU specs.

  3. read value of encoder/button

  4. disable encoder
    (taking these two together) a module gets notification of encoder/button events via the api, but only when it is active.
    you could, at a hardware level, read all events.
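
purely for illustration (since, as I say below, I don’t like this route): reading raw events on Linux looks roughly like this - the device path is a placeholder, and the real devices/event codes are board-specific:

```cpp
// read raw Linux input events (evdev); hypothetical path for illustration.
// on the SSP the actual encoder/button devices are defined in the dtb.
#include <fcntl.h>
#include <linux/input.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int fd = open("/dev/input/event0", O_RDONLY);   // placeholder device
    if (fd < 0) { perror("open"); return 1; }

    input_event ev{};
    while (read(fd, &ev, sizeof(ev)) == sizeof(ev)) {
        if (ev.type == EV_REL)        // relative change, e.g. an encoder step
            std::printf("encoder axis %d moved %d\n", ev.code, ev.value);
        else if (ev.type == EV_KEY)   // button: 1 = press, 0 = release
            std::printf("key %d -> %d\n", ev.code, ev.value);
    }
    close(fd);
    return 0;
}
```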

generally, however, this reading of hardware directly, intercepting, or creating virtual hardware events (very 1980’s :laughing: ) is really a poor way of achieving our aims. frankly, it’s an overly complex approach that has too many limitations and will be a constant source of issues and headaches.
… so it’s not something I’d be interested in. (again, personal opinion!)
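
as an aside, to illustrate the point in (1) about other protocols: once a setParameter() entry point exists, an OSC bridge is only a few lines. sketch using liblo; the port, and the idea of using the OSC path as the parameter id, are made up:

```cpp
// sketch: exposing parameter automation over OSC with liblo (link with -llo).
#include <lo/lo.h>
#include <cstdio>

// stand-in for the host's parameter entry point
static void setParameter(const char* id, float v) {
    std::printf("set %s = %f\n", id, v);
}

static int paramHandler(const char* path, const char* types,
                        lo_arg** argv, int argc, lo_message, void*) {
    if (argc == 1 && types[0] == 'f')
        setParameter(path, argv[0]->f);   // e.g. "/param/cutoff 0.5"
    return 0;
}

int main() {
    lo_server_thread st = lo_server_thread_new("9000", nullptr);  // made-up port
    lo_server_thread_add_method(st, nullptr, "f", paramHandler, nullptr); // any path
    lo_server_thread_start(st);
    std::getchar();                       // serve until a key is pressed
    lo_server_thread_free(st);
    return 0;
}
```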

  5. opening input ports
    this has been discussed many times, so I’m not going to repeat it.
    it’s linked to a few different behaviours of the SSP, so removing it would require quite a few changes.
    some of these are noticeable in the UI to users; others are more about how the source code is structured… so are hidden to users - so if it was to be removed, it’d be part of a larger change.

  6. pbp file format
    I agree 100%, but unfortunately, I don’t think this is viable/possible.
    it’s a binary format, and is written in a very particular way that would make editing it very difficult (*) without having access to the (closed) source code.
    it’s been done this way because previous versions (based around editable text) were taking too long to load.
    as I say, as an open source developer, I completely understand the motivation, but it doesn’t look practical at the moment.
    ( * ) it might even be impossible, but I’d need to review the code to check this.

phew… that’s a lot…

hmm, let’s see if I can summarise, briefly!

Summary

  1. you can extend the control surface for @technobear modules using midi learn… it’s how I do it, and whilst it has some limitations it works well… I’m pretty happy with it at least
  2. I believe extending the control surface (particularly via midi) is important as a performance tool, particularly for large patches - but there’s not really a quick fix here; it needs doing well, and I look forward to talking to Bert about it over the next few days at Superbooth
  3. development time/resources.
    we all (users and devs) have so many ideas of what we would like to do; that’s never the limitation… rather it’s actually having the time (10s-100s of hours) to do them - it’s very time consuming ( * )

anyway, I think there’s some exciting possibilities…
for me, I’m pretty happy with what I’ve got already in my modules; the midi learn works really well for my setup with the Electra One… it makes the SSP performable. So it would be great to build on this approach.

again, these are my personal opinions, which I’m throwing out there as a possible way forward based on what I see as goals/objectives.


( * ) it’s hard to overstate this…
a recent simple module took me maybe one hour to develop, but 3-4+ hours of testing etc.
a simple bug fix, sure, took me 15 min to fix… but hours to find the exact cause and test.
… and bear in mind, I’m a very experienced developer, I’ve got a highly optimised setup, and my code (framework) is built to make dev fast and efficient.

development is just very time-consuming… if you think it takes a long time to write an album, try writing code :wink:

( ** ) important note:
I still need to consider where (personally) I am going to commit my (free) development time.
how much to the closed source of the SSP?
how much to open source module development for the SSP?
how much to other open source projects I’ve created or am involved in?

I also have a couple of very ambitious ideas for the SSP, which might come into play here.
it’s too early to discuss these, as they’d require some crazy time commitment;
however, I may come back to the community to discuss possibilities about how these might come to fruition.

Hi @thetechnobear! Thank you so much for your rapid and thorough response! There is a lot of super useful information in there; however, it looks like my original post was probably too verbose and unclear to effectively communicate the feature I’m requesting. Perhaps it’s easier for me to translate this by speaking in more abstract terms, so as not to conflate functionality specific to the SSP with the primitive I’m trying to explore.

Take some piece of computing hardware, like a laptop, with human interface devices to control the operating system running on it. Imagine this laptop is wildly useful for a niche purpose, but it turns out that it is physically very small, so the mouse and keyboard are fine to use for some people but present significant difficulties for others. All you need to do is just connect your own mouse & keyboard, right? Well, unfortunately the operating system does not support external mice and keyboards :(. Well then, some users decide that the physical limitations presented are too great to continue using the laptop and move on to something else, which is totally understandable. Others in this category, however, are willing to push on because they feel the capabilities of the operating system platform are so deep and wildly useful for what they’re trying to accomplish. They figure out a way to make the tiny mouse and keyboard work for them, however tedious and painful.

Next, there is the interaction with the operating system itself. Since the OS doesn’t get updated very often, it turns out that there are quite a few repetitive tasks related to commanding the UI to configure the applications to work together in order to realize the full potential of the device. The OS provides facilities to connect any peripheral of one’s wildest dreams to control each application directly, and in infinite ways, which is an incredibly powerful paradigm. But the problem is that it only allows the use of a peripheral to control an application once it has been configured to accept the right type of messages from that peripheral, so the user is forced to use the tiny mouse and keyboard to set all this up.

All I’m asking for is to be able to plug my own mouse and keyboard into the laptop to operate its OS more ergonomically. That way, I could (a) use my favorite mouse & keyboard as my daily driver to interact with the OS, and (b) have a robot push the keys and move the mouse for me in certain situations, as I am able to decompose the repetitive tasks the OS requires me to perform to achieve my desired application configuration. In the SSP’s case, I just want the ability to plug my own encoders and buttons in so I can use them instead of the front panel interface to walk around SYNTHOR and set up modules and patches before they can make any sound…

Thanks again for all your hard work on the amazing modules you’ve contributed and for getting involved with SYNTHOR, great to have a chance to communicate with you on these topics! There are many super interesting sub-threads you’ve brought to light and as I have time I’ll see about breaking those out into separate topics for discussion. I just didn’t want to overload this one so we can try to get at the heart of this feature request directly.

Best,

Linux (on which the SSP is based) does support keyboard and mouse :wink:
it’s just that synthor doesn’t currently use them.

I think your analogy is also flawed…
for a laptop, the interface (e.g. Windows) was designed around a keyboard and mouse; in your example you are just substituting a different one…
however, the SSP is primarily a musical device, not a laptop: its interface was designed around encoders and buttons… not a keyboard/mouse or anything else.

to use your analogy, this is kind of like asking Microsoft to allow you to control Windows (operations) via a midi keyboard… good luck with that one :wink:
sure, I know there are third party apps that do it, but it’s not something built into the OS.

But anyway, I just don’t personally see the advantage of allowing the use of different encoders/buttons…
for one, it means you need more things attached to use/patch the SSP.
it’s just a tiny use-case; I can see very few actually using it…

also, it’s of little to no use in a performance environment,
simply because one of the issues with the current UI for performance is that you have to keep navigating to modules to change parameters, since only one module can be active at a time.
your approach has the same issue…

even if you involved scripting to do things like auto-switching, you are still making the SSP switch the display around continually, and altering only a max of 4 parameters at a time.
… and scripting reduces the number of potential users dramatically.

also, there is no way in this approach to get the current values back (*) to the ‘controller’.
so if you base things on state, it’s all going to fail horribly.
many apps that do this kind of thing will have the ability to read values off the screen/dialog to prevent this kind of stuff…
don’t get me wrong, I’ve used these utilities before… and for sure, they are excellent for applications that will not change and offer no better way… but these days there are better ways.
and in electronic music, that way is midi/cv and parameter automation :wink:

(*) to do so would require parameterising, but that’s half the effort of automation, which is more flexible.

my approach has none of these drawbacks - and given my modules will soon account for 50% of the modules for the SSP… it’s already widely available for use.

as I say, it really only works for using an alternative set of encoders and buttons…
and personally, I don’t see many users using this, and frankly I think the dev effort is better spent on many other things that will be used by a higher percentage of the user base.

again, I go back to one of my earlier points…
I do not see the SSP as a computer when I design things for it; I see it as a musical instrument.
… if I want a computer, I’d use one… an rPi is much cheaper :wink:
so for me, I think the design and functionality should always keep this in mind: the more musical we make it, the less general purpose, the more value we get from it for its primary purpose.
in my humble opinion!!

BUT that’s just my opinion; it’s not something I personally would want to work on.
however, I’m sure Bert will read this; perhaps he is more interested in the idea.

[Oops, looks like I accidentally deleted this post a moment ago - yikes, the keyboard shortcuts are ruthless in this forum interface, watch out for ‘d’! Good thing I draft everything in Gmail ahead of time, whew!]

Hello again @thetechnobear! Super great to have your perspective and thanks again for the fast response :slight_smile:

It seems like there is still a bit lost in translation here, so maybe I’ll go back to describing what I’m looking for in terms of the SSP user experience. I am not interested in adding any new functionality to the way the SSP’s UI works internally at all. I just want to be able to bring my own interface devices to the table. Maybe this is a totally unpopular opinion across the community, and if so I totally respect that… And if that’s where this ends up, I would really appreciate any guidance/documentation from the core devs (hi @bert!) on how the HIDs in /dev/ are addressed, so I could put on my Linux systems hat and take a stab at doing this myself via I2C or what have you :slight_smile:

It’s just that in almost all use cases I can conceive of, an SSP user already has additional peripherals that they enjoy using to make music connected to the SSP, with enough encoders and buttons to map one to one to the SSP’s physical front panel encoders and buttons. Presumably SSP users could benefit from having the option to explore using those peripherals as their primary control interface to the OS itself? I dunno if folks would agree with that, but I hope so!

To me it’s really centered around opening up new and more ergonomic ways of interacting with the existing interface, which seems consistent with making progress on the age-old problem of navigating the challenges of deeply engaging musical endeavors.

To be sure though, I want to be clear that I don’t intend this feature request to cause or require any changes to the existing software UI, or to the way the physical encoders and buttons on the front panel function (i.e., ideally they should continue to function in parallel with whatever supplemental peripheral a user chooses to connect as an alternative physical interface.)

Hope that clears up any miscommunication thus far; thanks again for helping suss this out! It’s quite the meta problem space, so I find it valuable to have this type of dialogue as a filtering function to hone the discussion :slight_smile:

Short addendum:

Maybe a simpler way to describe the net result of what I’m looking for: I want to be able to power the SSP on and configure a patch from scratch without ever having to physically touch the SSP itself. The user would still engage in the same overall workflow; e.g., they would continue to use whatever control surfaces, external modulation, logic modules, internal modules, etc. are best to manipulate each module during “live performance mode”. But the entire workflow to set all of this up from scratch during, let’s call it “preset configuration mode”, would be possible using whatever control surface(s) the user prefers, as opposed to being required to use the built-in encoders and buttons to achieve the performance-ready preset state.

i see a quick win here: enable synthor to use a keyboard to navigate and to input (exact) numbers/parameters

1 Like

That would be a fantastic feature! I haven’t reasoned about how difficult it would be to implement a keyboard input interface, but if it’s straightforward for the devs internally, I’m all for it as a feature request.

I’m traveling to Superbooth, so can’t give details at the moment.

But if you poke around in Linux, you’ll find the encoder and button setup - hint: look at the dtb.
I figured this out long before I had access to the source code :wink:

It’s pretty low level stuff to inject events into this, but doable.

Note: I’m not going to go into this in detail.
I do understand your use-case , even if it’s not how I’d approach it.
However, I also believe this could make it easier for someone else to run the ssp software on non-Percussa hardware, which I do not support.

Percussa hardware funds the Percussa firmware development, so this is solely Bert’s choice, which we should totally respect.

Yes, I know it can be done without anything from me, so I’d not be imparting something that isn’t public. But I don’t want to be seen to encourage/endorse it, out of respect for Percussa’s work/investment in this area. I’d expect the same of others :wink:

Note: this is not aimed at you @waxcorp, as you have an SSP, so have a different use-case - but this is a public forum, so who knows who will read this and with what intentions in the future.

1 Like

Others: keyboard / mouse.
I’ve been considering adding this for my plugins.
Honestly, I’m not sure it’s that useful, but it makes things handy for my testing on a Mac :joy:

I get it would be cool if you could patch the ssp remotely… so remote display, keyboard, mouse on your desktop - like vnc.
However, last time I tried this… even with hardwired Ethernet, I found the display too slow to be comfortable.

also, really making it comfortable with keyboard/mouse would require big changes too - in particular to the network screen, imho.

Again , I repeat as I did at the start of this.

I’m not the ‘guardian of change’!
I’m merely expressing my personal opinions, views, and also development interests as an (independent) developer.
others may have very different views/opinions; in particular, Percussa’s views may very well differ.

I know this is obvious to many, but my engagement is out of interest, and to hopefully give the community some benefit from my experience… but don’t read more into it :wink:

2 Likes