Dear list,
I am currently designing a new kind of music sequencer and I
need your help in making some crucial decisions.
Introduction
My project is a sequencer for composing Just Intonation music.
Just Intonation is not a new idea in the music landscape, not by a long
shot: it has roots in the first studies of music by the ancient Greeks.
The GUI I'm designing though will (hopefully) be the first of its kind.
My sequencer is going to be just that: a sequencer. It will be hard
enough to design an efficient, user-friendly and solid GUI for composing
music without a scale (yes, you read that right), so I'm not going to put
synthesis modules in the same software package. Not at first, anyway.
MIDI
Here comes the biggest problem. I cannot use MIDI as a protocol between
my sequencer and the synthesizers, because most (if not all) of the notes
produced by my software will not lie in the equal-tempered scale (the 12
notes per octave everyone knows), nor in any other fixed scale for that
matter. Please correct me if I'm wrong: MIDI doesn't allow for microtonal
notes. The next best things MIDI has to offer are Custom Scales and
Pitch Bend.
Custom Scales is not a feature of MIDI; it's more like a reinterpretation
of the protocol. It happens when both the sequencer and the synthesizer
are still talking in terms of C, C#, D, D#... but the synthesizer renders
those notes with custom pitches, coming from a custom scale set by the
user. This approach is unsuitable for my project, mainly because there
could be more than 12 notes (pitches) in an octave.
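To make the 12-slot limitation concrete, here is a rough sketch of how
such a custom-scale synthesizer ends up interpreting note numbers. The
ratio table is only an illustrative just scale, not anyone's standard,
and the reference note and frequency are arbitrary choices:

    #include <math.h>

    /* Illustrative only: a "custom scale" synth still receives plain
     * MIDI note numbers, but maps each of the 12 note names to a fixed
     * ratio above a reference. The table has exactly 12 slots per
     * octave, which is precisely what rules this approach out for
     * tunings with more than 12 pitches per octave. */
    static const double ratio[12] = {
        1.0, 16.0/15.0, 9.0/8.0, 6.0/5.0, 5.0/4.0, 4.0/3.0,
        45.0/32.0, 3.0/2.0, 8.0/5.0, 5.0/3.0, 9.0/5.0, 15.0/8.0
    };

    /* Frequency of a MIDI note under this table, taking note 69 = 440 Hz
     * as the reference. */
    static double note_freq(int note)
    {
        int oct  = (note - 69) / 12;
        int step = (note - 69) % 12;
        if (step < 0) { step += 12; oct -= 1; }  /* fix C's negative modulo */
        return 440.0 * pow(2.0, oct) * ratio[step];
    }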
Pitch Bend is not any better, because (to my knowledge) there is only
one pitch bend setting per channel. I could certainly use it to play
microtonal notes, but the pitch bend applies simultaneously to ALL notes
being played on that channel. This limits the applicability of pitch
bend to monophonic instruments, or at least to playing one voice per
MIDI channel.
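For completeness, here is a rough sketch of that one-voice-per-channel
workaround: map an arbitrary (e.g. just) frequency to the nearest
equal-tempered note plus a 14-bit pitch-bend offset. It assumes the
synthesizer's bend range is known in advance (+/- 2 semitones is the
usual default); the names are mine, not part of any API:

    #include <math.h>

    typedef struct {
        int note;   /* MIDI note number 0..127          */
        int bend;   /* 14-bit pitch bend, 8192 = centre */
    } midi_pitch;

    /* Map a frequency in Hz to the nearest equal-tempered note plus the
     * pitch-bend value that corrects the remaining deviation, assuming
     * the receiving synth uses the given bend range (in semitones). */
    static midi_pitch freq_to_midi(double freq, double bend_range)
    {
        midi_pitch p;
        double semis = 69.0 + 12.0 * log2(freq / 440.0);
        double dev;

        p.note = (int)floor(semis + 0.5);        /* nearest ET note      */
        dev    = semis - p.note;                 /* within +/- 0.5 semis */
        p.bend = 8192 + (int)floor(dev / bend_range * 8192.0 + 0.5);
        if (p.bend < 0)     p.bend = 0;
        if (p.bend > 16383) p.bend = 16383;
        return p;
    }

Each sounding voice then needs its own channel, which is exactly the
restriction described above.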
Alternatives
Is there a common protocol with the same scope as MIDI (transferring
notes from a sequencer to a synthesizer) but which allows for microtonal
notes? I fear not.
So I am left with the only option of manually interfacing my sequencer
to a select few software synthesizers. I'm designing my project in an
extensible way (support for plugins), so that's not as bad as it seems.
The problem is that I don't know of any software synthesizer that is:
1. good enough for decent music production;
2. easy to use by non-experts (this is a direct stab at CSound, or
rather at its lack of a decent GUI, of a standard instrument exchange
file format and of a decent, centralized library of presets);
3. free software.
A final note: outputting SCO files for use in CSound seems like an
obvious solution, but this would greatly limit the usability of my
project. This is because (to my knowledge) there is no decent GUI one
can use to merge the SCO file coming from a sequencer with a few ORC/SCO
file pairs coming from an instrument library without having to know
the CSound language. I don't want to target CSound programmers only.
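For what it's worth, the SCO output itself is trivial to generate; the
hard part is everything around it. A minimal sketch, assuming an
orchestra whose instrument 1 happens to read amplitude from p4 and
frequency in Hz from p5 (that field layout is a convention of the
orchestra, not of CSound itself):

    #include <stdio.h>

    /* Write one note as a CSound score statement:
     *   i <instr> <start> <dur> <p4> <p5>
     * Here p4 = amplitude and p5 = frequency in Hz, by assumption. */
    static void write_note(FILE *sco, double start, double dur,
                           double amp, double freq_hz)
    {
        fprintf(sco, "i1 %g %g %g %g\n", start, dur, amp, freq_hz);
    }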
I hope I've managed to explain my problem. Please feel free to discuss
these matters. Any constructive criticism, any note of mistakes on my
part and any practical advice for my project will be appreciated.
Toby
--
«A computer is a state machine. Threads are for people
who can't program state machines.» —Alan Cox
>From: Thorsten Wilms <t_w_(a)freenet.de>
>Subject: Re: [linux-audio-dev] Common synthesizer interface -or-
> microtonal alternative to MIDI?
>
>A sequencer is a device for recording and playback of signals
>with the possibility to arrange several recordings.
[ ... ]
Thinking of the differences between "sequencer" and "editor",
I would call a sequencer a device with which one can place events
into a timed sequence. The events may cause MIDI data to be
sent or an audio player to be started.
Juhana
Hi Steve, thanks for the reply.
I will definitely look into using DSSI; it looks like it
could be good once it's as well supported as LADSPA is (I'd
never even heard of it before your post, although
that's probably just me). Is it intended as an
eventual LADSPA replacement? I never really saw the
need to divide plugins into 'instruments' and
'effects', and it seems like DSSI can do both.
Stefan Turner
> It would be more practical to do it as a DSSI plugin; LADSPA has no
> way to indicate that you want to load files during runtime, and no UI.
>
> In DSSI you can load the impulse in the "UI" process, perform the FFT
> on it and send it to the DSP code with configure(). Once there, the
> DSP code can do the overlap-add/save on it.
>
> - Steve
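The configure() entry point mentioned above is just a key/value callback
on the plugin side. A bare sketch follows; the "impulse" key is invented
purely for illustration, and a real plugin would still have to decode
the value and hand the result to run() safely:

    #include <stdlib.h>
    #include <string.h>
    #include <ladspa.h>

    /* Hypothetical plugin instance; impulse_data is assumed to be set
     * to NULL in instantiate(). */
    typedef struct {
        char *impulse_data;
    } Conv;

    /* configure() handler as wired into a DSSI_Descriptor: the host
     * passes on a key/value pair sent by the UI process. Returning NULL
     * means the key was accepted; a non-NULL string is an error message
     * that the host will free. */
    static char *conv_configure(LADSPA_Handle instance,
                                const char *key, const char *value)
    {
        Conv *c = (Conv *)instance;
        if (strcmp(key, "impulse") == 0) {
            free(c->impulse_data);
            c->impulse_data = strdup(value);
            return NULL;
        }
        return strdup("unrecognised configure key");
    }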
>From: Jens M Andreasen <jens.andreasen(a)chello.se>
>
>Suggestion for running headless:
>
> if(getenv("DISPLAY"))
> isGraphic = TRUE;
> else
> isGraphic = FALSE;
I hope there is a command line option for turning the
GUI off. Otherwise I would always get the GUI.
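Something like the following would cover both cases; the --no-gui flag
name is just an example:

    #include <stdlib.h>
    #include <string.h>

    /* Default to a GUI only when DISPLAY is set, but let an explicit
     * --no-gui option (name invented here) turn it off regardless. */
    static int want_gui(int argc, char **argv)
    {
        int i;
        for (i = 1; i < argc; i++)
            if (strcmp(argv[i], "--no-gui") == 0)
                return 0;
        return getenv("DISPLAY") != NULL;
    }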
Juhana
Hi all,
The round-trip delay meter mentioned recently on jackit-devel
is now available at
<http://users.skynet.be/solaris/linuxaudio>
The tarball is just 2.6k, so you won't waste much bandwidth :-)
Jdelay is a JACK client that measures the delay between its output
and input, assuming the channel in between has a linear phase
response (i.e. delay is independent of frequency). If you connect
it to your soundcard and make a loopback from out to in, it will
give you the round-trip latency of your system.
Precision is around 1/1000 of a sample if you have a decent sound
card. Even in adverse conditions (S/N ratio reduced to 0 dB by adding
white noise) it will still measure the delay to within 1/10 of a sample.
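The basic idea behind such a measurement can be sketched as follows
(this is only an illustration of the principle, not jdelay's actual
code): play a test tone through the loopback and recover its phase lag,
which gives the delay to a small fraction of a sample as long as enough
whole periods are averaged.

    #include <math.h>

    /* Estimate the delay (in samples) of a loopback channel from the
     * phase of a captured test tone of frequency f (Hz) at sample rate
     * sr, given n captured samples. The result is modulo one period of
     * the tone; averaging over many whole periods gives sub-sample
     * precision even with a poor signal-to-noise ratio. */
    static double estimate_delay(const float *in, int n, double f, double sr)
    {
        double w = 2.0 * M_PI * f / sr;
        double s = 0.0, c = 0.0;
        double phase;
        int i;

        for (i = 0; i < n; i++) {
            s += in[i] * sin(w * i);
            c += in[i] * cos(w * i);
        }
        phase = atan2(-c, s);          /* phase lag of the captured tone */
        if (phase < 0.0)
            phase += 2.0 * M_PI;
        return phase / w;
    }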
Enjoy !
--
FA
Hello,
Is there a simple way (i.e. simpler than getting the pollfds and using
them) to force snd_seq_event_input() in blocking mode to return, so that
the calling thread can close the handle and clean up?
Neither snd_seq_close() nor snd_seq_nonblock() seems to have any effect.
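For comparison, the pollfd route looks roughly like this: wait on the
sequencer's descriptors together with the read end of a pipe, so that
another thread can write a byte to the pipe to break the loop (the
actual event handling is elided):

    #include <poll.h>
    #include <alsa/asoundlib.h>

    /* Poll the sequencer descriptors plus a wake-up pipe (wake_fd), so
     * another thread can make this loop return and let the caller close
     * the handle. */
    static void event_loop(snd_seq_t *seq, int wake_fd)
    {
        int nseq = snd_seq_poll_descriptors_count(seq, POLLIN);
        struct pollfd pfds[nseq + 1];

        for (;;) {
            snd_seq_poll_descriptors(seq, pfds, nseq, POLLIN);
            pfds[nseq].fd = wake_fd;
            pfds[nseq].events = POLLIN;

            if (poll(pfds, nseq + 1, -1) < 0)
                break;
            if (pfds[nseq].revents & POLLIN)
                break;                       /* told to shut down */

            while (snd_seq_event_input_pending(seq, 1) > 0) {
                snd_seq_event_t *ev;
                snd_seq_event_input(seq, &ev);
                /* handle ev ... */
            }
        }
    }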
--
FA
Hi all,
Om is a modular synthesizer that runs under JACK and uses LADSPA and/or
DSSI plugins for processing. The engine is an independent process
entirely controlled via OSC, is polyphonic, and supports subpatches.
More information, screenshots, and downloads available at
http://www.nongnu.org/om-synth/.
Please report bugs, feedback, feature requests, etc. on the Savannah
bugs page; or feel free to email me privately.
Enjoy,
-DR-
Hi, I'm extremely new to audio programming. I have a million questions,
but the one burning my brain now is: how do I get a program written with
the Qt widget library to display an audio waveform? Also, any links to
good documentation for audio programming would be appreciated.
Thanks
Mike Fisher