Hi.
I am looking for a programmable (text-mode) sequencer solution.
I know that Linux has a few small languages for creating
MIDI files, like MMA. Even LilyPond can be tricked into acting as a
MIDI-file-generating language. However, none of the solutions I have
seen so far could easily serve as the center/hub of a full composition
workflow.
I am imagining a workflow where I do not need to click my way through a
sequencer, setting up all the content and connections, but rather define
a composition in terms of source code. For this to be useful, it should
include conventional sample playback, as well as real time MIDI event
generation. I am not sure if we have a sufficiently remote-controllable
sampler without GUI requirements, but if we do, I might be able to get
away with using that via OSC or MIDI, instead of re-inventing the sampler wheel.
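Just to make the idea concrete: driving a sampler over OSC from a
composition script would not even need a dedicated library, since an OSC
message is only a null-padded address, a type-tag string, and big-endian
arguments. Here is a minimal sketch in Python; the address
"/sampler/play", its arguments, and port 9000 are entirely hypothetical
and would depend on whatever the sampler actually exposes:

```python
import socket
import struct

def osc_pad(b: bytes) -> bytes:
    """Null-terminate and pad to a multiple of 4 bytes, per the OSC spec."""
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str, *args) -> bytes:
    """Encode an OSC message with int32 and float32 arguments."""
    tags = ","
    payload = b""
    for a in args:
        if isinstance(a, float):
            tags += "f"
            payload += struct.pack(">f", a)
        elif isinstance(a, int):
            tags += "i"
            payload += struct.pack(">i", a)
        else:
            raise TypeError("only int/float shown in this sketch")
    return osc_pad(address.encode()) + osc_pad(tags.encode()) + payload

# Hypothetical sampler endpoint: play sample 3 at volume 0.8.
msg = osc_message("/sampler/play", 3, 0.8)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(msg, ("127.0.0.1", 9000))
```

The point being: the transport side is trivial, so the real question is
only whether a GUI-less, fully OSC-addressable sampler exists.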
However, it feels like it would be good to have the sample definitions
be part of the composition source code. After all, I ultimately want all
the metadata required to play my composition together in more or less
one place (modulo include files).
This composition compiler should ideally support JACK, with stuff like
transport control. It should be able to support optional hardware
synths, which will be controlled via MIDI messages and mixed back into
the full result via an input JACK port.
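To illustrate the "composition as source code" idea at its most basic: a
note list in plain Python can be compiled into a Type-0 Standard MIDI
File with nothing but the standard library. This is obviously a toy
sketch, not the tool I am asking for, and the note values are just made
up:

```python
import struct

def var_len(n: int) -> bytes:
    """MIDI variable-length quantity (7 bits per byte, MSB = continuation)."""
    out = [n & 0x7F]
    while n > 0x7F:
        n >>= 7
        out.append((n & 0x7F) | 0x80)
    return bytes(reversed(out))

def compile_smf(notes, ticks_per_beat=480):
    """notes: list of (start_tick, duration_ticks, midi_note, velocity)."""
    # Turn notes into absolute-time on/off events, then sort by time.
    events = []
    for start, dur, pitch, vel in notes:
        events.append((start, bytes([0x90, pitch, vel])))      # note on, ch. 1
        events.append((start + dur, bytes([0x80, pitch, 0])))  # note off
    events.sort(key=lambda e: e[0])
    track = b""
    now = 0
    for when, msg in events:
        track += var_len(when - now) + msg                     # delta times
        now = when
    track += var_len(0) + b"\xFF\x2F\x00"                      # end of track
    header = b"MThd" + struct.pack(">IHHH", 6, 0, 1, ticks_per_beat)
    return header + b"MTrk" + struct.pack(">I", len(track)) + track

# A two-note "composition" as source code: C4 then E4, one beat each.
song = [(0, 480, 60, 100), (480, 480, 64, 100)]
with open("demo.mid", "wb") as f:
    f.write(compile_smf(song))
```

A real composition compiler would of course add tracks, tempo maps,
sample definitions and JACK output on top of something like this, but
the "source code in, playable result out" shape is the same.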
I am aware of the KISS principle and actually love it very much. So if
anyone has suggestions on how to implement such a workflow/tool with
existing tools and plumbing code, I am very open to ideas and
suggestions. However, I get the feeling that what I want is only
convenient if it is relatively tightly integrated, so that I do not have
to tinker with too many individual tools while trying to be productive.
Any hint on how to get such an environment going is very much appreciated.
This is actually a long-long-term project of mine: ever since I started
playing with computers, I have been frustrated by the lack of accessible
tools for creating electronic music. I have occasionally managed to get
limited solutions working for me, and have always had a lot of fun
creating content when things sort of worked. Back in the good old DOS
days, there were (due to the limits of what a PC could do) still some
people implementing pure text-mode solutions, which sometimes worked
really well with a braille display.
I remember creating several tracks with ModEdit on MS-DOS in one
particular summer in the late 90s. Using that felt quite productive,
but also constraining (due to the 4-track limit).
When I switched to Linux in '97, I had many new things to learn and was
quite busy, not really caring about the sequencer thing. But later on, I
discovered that the situation had gotten a lot worse for me: all the big
Linux sequencers were purely graphical and not accessible through other
means either. The same is mostly true for Windows and Mac OS X,
unfortunately. The obvious solutions like Reaktor, FruityLoops or
Ableton Live are all far from being even remotely usable for blind
musicians.
As far as I currently understand, the chances of finding usable support
for some professional screen-reading solution plus music composition
software on Windows are relatively low, and it might cost me a lot of
money. So I might as well try once again and stay on Linux, where I
actually belong.
--
CYa,
⡍⠁⠗⠊⠕