[linux-audio-dev] Developing a music editor/sequencer

Fons Adriaensen fons.adriaensen at skynet.be
Sun Jan 30 10:54:17 UTC 2005


On Sat, Jan 29, 2005 at 09:24:58PM -0500, NadaSpam wrote:

> End Notes for the Curious
> ...
> My degree is in applied mathematics.

Since I am curious, are you also a musician or composer ? Would you be a
_user_ of the kind of system you propose ?

If the answer is yes, and you want such a tool, then my pragmatic response
would be to bite the bullet and learn to use things like SuperCollider.
They will give you complete freedom (and a hard time exploiting it), and
a virtually complete absence of the 'cultural bias' of traditional tools.

Some other points.

1. I don't think it would be a good idea to put everything in an 'integrated
environment'. Even now we have all it takes to make applications work
together and to sync them to sample accuracy. Why should instruments
be built in, or limited to what MIDI banks have to offer ? We have good
synths, sample players and general synthesis engines such as scsynth.
Why should a sequencer have audio tracks ? Just fire up Ardour and make
the two work together. While it would be nice (in some cases) to have
a WYSIWYG editor, in many cases that's just a pain (if parts of the score
are defined algorithmically, for instance). Anyway, if you look at some
contemporary scores, you'll see they start off with some pages that
just define the notation - there is no standard for many things.
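
To make that concrete: JACK transport is one way applications can stay
sample-locked today. A minimal sketch of a client that follows the
transport could look like the code below - the client name and the
send_events_between() dispatcher are placeholders, and error handling
is left out, so take it as an illustration rather than a recipe:

    #include <unistd.h>
    #include <jack/jack.h>
    #include <jack/transport.h>

    static jack_client_t *client;

    /* Called by JACK for every period: ask the transport where we are
       and dispatch whatever falls inside this period.                 */
    static int process (jack_nframes_t nframes, void *arg)
    {
        jack_position_t        pos;
        jack_transport_state_t state = jack_transport_query (client, &pos);

        if (state == JackTransportRolling)
        {
            double t0 = pos.frame / (double) pos.frame_rate;
            double t1 = (pos.frame + nframes) / (double) pos.frame_rate;
            /* send_events_between (t0, t1);  -- hypothetical dispatcher */
            (void) t0; (void) t1;
        }
        (void) arg;
        return 0;
    }

    int main (void)
    {
        client = jack_client_open ("seq-follower", JackNullOption, NULL);
        if (! client) return 1;
        jack_set_process_callback (client, process, NULL);
        jack_activate (client);
        sleep (60);                  /* run until told to stop */
        jack_client_close (client);
        return 0;
    }

The same transport position is visible to Ardour, scsynth front-ends and
so on, so 'integration' reduces to agreeing on a timebase rather than
living inside one monolithic program.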

Starting with an existing sequencer (Rosegarden or any other I know of)
could be hard. They have many built-in cultural dependencies (such as
using a 'beat count' as the independent variable), and these ripple
through from the initial design assumptions down to all levels of the
architecture and the interfaces. It could be very hard to change that.

I have a long-term project of developing a sequencer that would be free
of this kind of limitation, but don't wait for it. It will record events
(e.g. notes), parameter trajectories and arbitrary data as functions
of time, and allow you to edit all of this. It will probably accept
MIDI (sometimes it's practical to just play things on a keyboard rather
than having to write them) and OSC, and output mainly OSC.
If there is any notion of tempo or meter, it could be defined in a
hierarchical way, down to being local to a track, and the final
interpretation of these elements will not always be done by the sequencer
itself but could for example be delegated to an (external) instrument.
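
Just to illustrate the direction (this is not a finished design, and the
names seq_event and tempo_map and their fields are invented for the
example), the core data could be as simple as events against absolute
time, with tempo/meter maps as an optional layer on top:

    #include <stddef.h>

    /* Illustrative only: an event is anything that has a time in seconds. */
    typedef struct
    {
        double  time;   /* seconds from the track origin                   */
        int     type;   /* note, trajectory point, raw OSC blob, ...       */
        size_t  size;   /* payload size in bytes                           */
        void   *data;   /* e.g. an OSC message to be forwarded as-is       */
    } seq_event;

    /* A tempo/meter map is optional and hierarchical: a track may have
       its own, inherit one from a section or the session, or have none
       at all, in which case it simply schedules in seconds.               */
    typedef struct tempo_map
    {
        struct tempo_map *parent;
        double (*to_seconds) (const struct tempo_map *self, double musical_time);
    } tempo_map;

Because the payload can be an OSC message, the final reading of bars,
beats or anything else can indeed be pushed out to an external instrument.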

2. The model you use is based on a 'score' and 'instruments'. That's
too simple to reflect the realities of making music. In between the two
there are players, and maybe a conductor. All of them interpret parts
or aspects of the score, and they interact to achieve the end result.
Learning to interpret (rather than just play) a score, and to play
together in an ensemble or orchestra, is an important part of any
musician's education and training, and much of the 'magic' of music
happens right at that level.
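
In software terms (purely as an illustration, with all names made up),
that means a layer of interpreting state sits between the score data and
whatever finally renders the sound:

    /* Hypothetical types: the 'player' owns interpretation state and
       turns notated values into performed ones before anything reaches
       an instrument.                                                   */
    typedef struct { double when; int pitch; float dynamic; } score_note;
    typedef struct { double rubato; float balance; } player_state;

    /* A performed note, after listening to the conductor and the other
       players - not just a copy of the notation.                       */
    score_note interpret (const score_note   *n,
                          const player_state *self,
                          const player_state *conductor);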


-- 
FA




