Running applications other than SYNTHOR on the SSP

While setting up SSH access to the SSP for development purposes, I’ve seen how easy it is to just boot into Linux instead of running SYNTHOR. What kind of possibilities does this open up? @bert has alluded to being able to run things like Pd or SuperCollider on the SSP, and @thetechnobear has even tried running the live-coding environment Orca.

This latter example is particularly exciting to me… how easy is it to access the ins and outs of the SSP? Can we have a live coding environment that sends CV out to our modulars? Or, e.g., a SuperCollider server that does the same? Or is that SYNTHOR’s job, and would it be a big deal to replicate?

Anyone done any other experiments in this regard?


Something like the Monome Teletype, for instance, would be amazing.


Having recently sold my Teletype, if there were a way to emulate it on the SSP I would be tickled pink.


I guess another question: is running Linux applications outside of SYNTHOR the best way forward, or is wrapping them in VSTs actually the more feasible approach, in that it leverages SYNTHOR and provides access to the SSP’s hardware via a standardised API? What restrictions are there on running as a VST?

Found a relevant post in another topic: What's your reason to use SSP rather than just a computer?

Another relevant post about accessing the hardware from Linux: LAN Port?

the audio interfaces (so cv/audio) are exposed via alsa — though I hit some quirks when I tried to access them… but I only spent 5 mins with it :wink:
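if you want to poke at this yourself, something like the following will list whatever pcm devices alsa exposes (untested on the SSP itself - the device names/descriptions you see are whatever the SSP’s alsa config actually provides):

```cpp
// list_pcm.cpp - list the PCM (audio/cv) devices alsa exposes
// build on the SSP (or against a cross-compile sysroot): g++ list_pcm.cpp -lasound -o list_pcm
#include <alsa/asoundlib.h>
#include <cstdio>
#include <cstdlib>

int main() {
    void **hints = nullptr;
    // -1 = all cards, "pcm" = playback/capture interfaces
    if (snd_device_name_hint(-1, "pcm", &hints) < 0) {
        std::fprintf(stderr, "no alsa pcm devices found\n");
        return 1;
    }
    for (void **n = hints; *n != nullptr; ++n) {
        char *name = snd_device_name_get_hint(*n, "NAME");
        char *desc = snd_device_name_get_hint(*n, "DESC");
        char *io   = snd_device_name_get_hint(*n, "IOID"); // NULL means both input and output
        std::printf("%s [%s]\n  %s\n", name ? name : "?", io ? io : "in/out", desc ? desc : "");
        free(name);
        free(desc);
        free(io);
    }
    snd_device_name_free_hint(hints);
    return 0;
}
```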

best way forward is always going to be dependent on what you are trying to do…

a vst has the advantage of integrating audio and cv with other modules; however, there are some limitations imposed by the host (e.g. midi support)… though perhaps these could be worked on.

but - it would be impractical to port many existing ‘applications’ to a vst.

there are a couple of issues with running other applications on the SSP.
the main one is how to do it so it’s usable by end-users…
if all I wanted was to run supercollider on my SSP, I could get that working in probably half a day.
however… that install process, and way of using it, would not be reasonable for most users.

e.g. the current way to access a LAN network, whilst easy enough, is not exactly user-friendly

I’ve done this on a couple of other platforms (Nebulae, Norns, Organelle, Terminal Tedium), so I pretty much know how I want to approach this.
(but of course, I’ve a few projects on the go, so this has not made it to the top of the list yet)

Teletype is kind of interesting, but personally I’d prefer to get something like SuperCollider or Csound going on the SSP… it’s much more mature and flexible - again, I’ve some sketches of ideas in my notebook, awaiting some time/motivation :wink:

SuperCollider would be pretty interesting, but it would be good to think about how to make the best use of the screen, buttons, and encoders (while still allowing for the flexibility of arbitrary SuperCollider code).

yeah, of course the main thing with adding something like supercollider or csound is that we also need to be able to add a UI to control things like parameters.

however, I’ve already ‘done’ this on Nebulae, Norns, Organelle, Terminal Tedium… using ‘sidekick’, which basically allows UI interaction via OSC.

some might prefer I embed lua, but I actually prefer using OSC…
reasons:

  • it’s a little easier for non-programmers
  • allows tighter integration with the environment (e.g. OSC support is native to sclang)
  • allows remote control… this is really cool for live coding (rough sketch below)
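to give a rough idea of what that looks like in practice - the /ssp/… paths below are made up for illustration (not sidekick’s actual namespace), and 57120 is just sclang’s default port:

```cpp
// osc_ui.cpp - illustrative only: forward an encoder turn and a button press
// as OSC messages, the way a sidekick-style host process could.
// the /ssp/... paths are invented for this example, not sidekick's real namespace;
// 57120 is sclang's default port.
// build: g++ osc_ui.cpp -llo -o osc_ui
#include <lo/lo.h>

int main() {
    // send to sclang on the same box (swap in a laptop's IP for remote control)
    lo_address patch = lo_address_new("127.0.0.1", "57120");

    lo_send(patch, "/ssp/encoder/1", "i", 5);  // encoder 1 turned +5 detents
    lo_send(patch, "/ssp/button/3", "i", 1);   // button 3 pressed
    lo_send(patch, "/ssp/button/3", "i", 0);   // ... and released

    lo_address_free(patch);
    return 0;
}
```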

of course, I could do some of this with a lua layer… which I’m still considering (it’s not hard to embed lua), but I’m not totally convinced… I played with this on Norns, and didn’t like it much.
that said, the ER-301 community seems to like it there - so perhaps I could look at how O|D did it; perhaps that was a bit better/more usable!?

Trying to remember my SuperCollider… so the SC server would be running on the SSP, and then can sclang run in a terminal mode, or does it need XWindows or something? So you might have to just use OSC to talk to it from a separate computer? I would be perfectly happy doing that, or, even better, attaching a keyboard to run sclang on the SSP as an MVP, if that were possible.

Am I right in saying building things for the SSP is a bit like building them for a Raspberry Pi? https://github.com/supercollider/supercollider/blob/develop/README_RASPBERRY_PI.md

Sorry, despite being an ER-301 owner, I haven’t delved into the world of custom units much (yet) - I’ve had it just a bit longer than the SSP. I know that the strapline is “patching with Lua”, in that you can put together sound objects exposed by the underlying C++ firmware. Brian published Doxygen docs of those available with v0.5.

(Edit: Sorry not sorry :stuck_out_tongue:)

supercollider - there are lots of options… what I like to do is have the option to run sclang code either remotely or locally.
local allows me to just use supercollider as a standalone ‘patch’.
remote allows me to do live coding.
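to make the ‘remote’ case concrete: once scsynth is running on the SSP, any box on the network can drive it with the server’s standard OSC command set. a rough sketch - the IP is made up, and it assumes a “default” synthdef has already been loaded on the server:

```cpp
// sc_remote.cpp - driving scsynth directly over OSC from another machine,
// using the server's standard command set. the IP is made up; 57110 is
// scsynth's default port, and this assumes a "default" synthdef has
// already been loaded on the server.
// build: g++ sc_remote.cpp -llo -o sc_remote
#include <lo/lo.h>
#include <unistd.h>

int main() {
    lo_address server = lo_address_new("192.168.1.20", "57110");

    lo_send(server, "/notify", "i", 1);                        // ask the server to send us replies
    lo_send(server, "/s_new", "siii", "default", 1000, 0, 0);  // start node 1000 at the head of the root group
    lo_send(server, "/n_set", "isf", 1000, "freq", 330.0f);    // tweak its freq control
    sleep(2);
    lo_send(server, "/n_free", "i", 1000);                     // free the node again

    lo_address_free(server);
    return 0;
}
```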

I’ve no interest in running the scide over xwindows - it’ll be too slow, and I don’t really want to attach a keyboard/mouse to the SSP.

yes, building is similar to the rPi.

note: however, the issue really is not you and me building/installing it; the main issue is to find a way that end-users can do this without having to use a network connection.
I’ve got an idea on how to do this :slight_smile:

(though perhaps @bert could include supercollider and csound pre-installed in a future image)


ER-301 - yeah, so I do know a bit about the tech on the er-301… as I’m interested in how these things tick.
so brian uses embedded lua, as you say, for the middleware - this is pretty much all about binding objects together and exposing them in the UI.
however, lua is not used for the dsp audio code; this is all done in C++, which lua binds to.

this is really what the 0.6 firmware is all about, and why it’s been made open source.
before, whilst users could add lua code to bind, they could not build new audio objects since this needs to be in C++… 0.6 opens this up.
(I think everything was made open source partly to just make it easier for developers, given the er301 ‘api’ is specific to that one device - unlike a vst)

btw: brian binds lua to c++ using swig
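
to make that concrete, here’s a tiny hand-rolled sketch of the same idea - not how brian/swig actually generate the glue, just the plain lua C API - where the ‘dsp’ stays in C++ and lua only gets a control hook:

```cpp
// lua_bind.cpp - hand-rolled sketch of the "lua as middleware" idea:
// the audio object lives in C++, lua only gets a thin control hook.
// (the er-301 uses swig to generate this glue; this is the same idea
// done by hand with the plain lua C API.)
// build: g++ lua_bind.cpp -llua -ldl -o lua_bind   (lib name varies, e.g. -llua5.3)
#include <lua.hpp>
#include <cstdio>

// stand-in for a C++ dsp object's parameter
static float g_cutoff = 1000.0f;

// exposed to lua as setCutoff(hz)
static int l_setCutoff(lua_State *L) {
    g_cutoff = (float)luaL_checknumber(L, 1);
    std::printf("cutoff set to %.1f Hz\n", g_cutoff);
    return 0; // number of lua return values
}

int main() {
    lua_State *L = luaL_newstate();
    luaL_openlibs(L);

    lua_pushcfunction(L, l_setCutoff);
    lua_setglobal(L, "setCutoff");

    // the "patch" script is pure control logic - no dsp in lua
    luaL_dostring(L, "for i = 1, 4 do setCutoff(200 * i) end");

    lua_close(L);
    return 0;
}
```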


as for the SSP, I might look at some point at adding a lua binding…
however, it’s questionable how much value it adds… given that, like the er301, you still need the audio code to be in C++… so it really only helps with the UI code and some logic.
(I don’t really see myself building a library of objects that could be bound together via lua)

so… currently my main interest in using lua would probably be in conjunction with another higher-level dsp language… e.g. csound could be an interesting candidate.
there are other options that could be interesting, like faust, but I think that’s pretty difficult for many.
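
for example, a csound host could look something like the sketch below (untested on the SSP, and it assumes the csound 6 C API is available) - csound owns the dsp, and the host loop is exactly the slot where lua / OSC / the SSP’s encoders would push control values in:

```cpp
// csound_host.cpp - sketch of hosting csound from C++: csound owns all the
// dsp, the host just pushes control values in. untested on the SSP; assumes
// the csound 6 C API, and "-odac" picks the default alsa output.
// build: g++ csound_host.cpp -lcsound64 -o csound_host
#include <csound/csound.h>

static const char *orc =
    "sr = 48000\n"
    "ksmps = 64\n"
    "nchnls = 2\n"
    "0dbfs = 1\n"
    "instr 1\n"
    "  kfreq chnget \"freq\"\n"   // control value comes from the host
    "  aout  vco2 0.2, kfreq\n"
    "        outs aout, aout\n"
    "endin\n";

int main() {
    CSOUND *cs = csoundCreate(nullptr);
    csoundSetOption(cs, "-odac");              // default audio device
    csoundCompileOrc(cs, orc);
    csoundReadScore(cs, "i 1 0 10");           // run instr 1 for 10 seconds
    csoundSetControlChannel(cs, "freq", 110.0);
    csoundStart(cs);

    int block = 0;
    while (csoundPerformKsmps(cs) == 0) {      // one ksmps block per iteration
        // sweep the pitch every ~100 blocks, standing in for encoder/OSC input
        if (++block % 100 == 0)
            csoundSetControlChannel(cs, "freq", 110.0 + (block % 1000));
    }
    csoundDestroy(cs);
    return 0;
}
```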

anyway… lots of options, but it’s quite a bit more involved than just building and installing… as that approach would pretty much only be usable by you and me :wink:
(that’s ok, if that’s your aim - but I’d like to do something that others can use as well)

… so, as always, it’s a matter of time!

If you are expecting a UI experience similar to SYNTHOR or the ER-301, then I suppose you would need a “middle layer”, and it might as well be done in lua. That being said, I could also imagine a primarily text-driven experience that happens to have several buttons and a few knobs available. A middle layer in lua might be total overkill if that’s all you wanted to achieve. Depends on your goal.

The other day I was daydreaming about two instances of Norns running side by side across the wide screen on the SSP :drooling_face:

Seems like something @thetechnobear could knock out :wink:
