Hi all,
I just realized why my example code "didn't work." It was simple, stupid
human error (surprise, surprise): the change did occur, but the sine tone
was at 400 Hz and the slider output was altering it by only 0-1 Hz, which
on my lousy laptop speakers in a noisy room made no audible difference
(at least not to my ears :-).
Anyhow, thank you very much all for your help in this matter!
Btw, Stefan, you mentioned that there were some updates to SCUM. Are these
included in the latest 0.0.2a version? Also, is this the same version as
the one presented at LAC last year?
Many thanks!
Best wishes,
Ico
Greetings all,
I guess the title says it all. I do have working examples (i.e. the
sequencer example from LAD 2004) that make new instances of synths, but
none of them actually do real-time updates to existing instances.
At any rate I would greatly appreciate your help in this matter.
This is as far as I got and cannot figure it out beyond this point (my brain
is pretty darn fried :-):
s = Server.local.boot;
SynthDef("onetwoonetwo", { arg out=0, freq;
	w = SCUMWindow.new;
	w.title = "Slider example";
	w.initialSize = Size(20, 300);
	c = SCUMVBox(w);
	v = SCUMSlider(c, { |v|
		v.expand = 1;
		v.fill = 1;
		v.bgColor = Color.white;
		v.fgColor = Color.black;
		v.action = {
			freq = v.value * 100;
		};
		v.doAction;
	});
	w.show;
	Out.ar(out,
		SinOsc.ar(freq + 400, 0, 0.5)
	)
}).play;
All I am trying to do is affect the frequency of the sine tone by moving
the slider, yet nothing changes when I do.
Any help would be greatly appreciated!
Best wishes,
Ico
i've just compiled a 2.6.12.RC4 kernel with the rt-preempt patch
(configured with PREEMPT_RT) and it all went ok, but after installing
the binary Nvidia drivers, i get this bug message:
BUG: modprobe/2085, lock held at task exit time!
[de2887e4] {(struct semaphore *)(&os_sema->wait)}
.. held by: modprobe: 2085 [dfceec30, 118]
... acquired at: os_alloc_sema+0x40/0x76 [nvidia]
should i send this to the kernel list? will this cause problems on my
system?
shayne
Hello all,
I'm planning the OSC-fication of Aeolus, and would like to have
some comments / feedback on the current ideas (they could well
be braindead, in which case you are kindly requested to say so).
The setup I have in mind is as follows:
- There will be one UDP server socket. This socket is left
unconnected, and will receive OSC commands from any source.
These commands give you complete control over most aspects
of Aeolus (excluding the stop definition editor which will
remain in the local GUI only).
They will also include note on/off commands, and that leads
to my first question: is there any "standard" OSC format for
these?
- This interface will provide everything required by sequencers
etc. that want to 'play' Aeolus. Clients that want to implement
their own user interface will need more: they also require
feedback on the current state (e.g. if a MIDI message recalls a
program, this should be reflected in the user interface).
The way I currently foresee providing for this is to add
the commands:
/addclient ,s host:port
/remclient ,s host:port
Registered clients can request information (e.g. a list of all
available stops), and will receive notification of everything
that may affect a user interface, again by UDP messages to
their port of choice. They will still send all their commands
to the unique server port above. Second question: is this a
good idea, or would it be better to create a TCP connection
for this type of client?
Comments invited !!
--
FA
Hey LADs
The first public release of smack is now here. Smack is a drum synth,
100% sample free. In this release there are
TR808 bass, snare, hihats, cowbell and clave,
TR909 bass and snare,
a frequency shifter based snare and some FM hihats.
It's built with LADSPA plugins and the Om modular synth. For source and
RPMs go to http://smack.berlios.de/ where some audio demos are also available.
Cheers,
Loki
hi all..
i'm trying to sleep for some very short time slices ... about 100 to 1000
us ...
but i can't get below about 1 ms ... are there any workarounds to sleep
for very small time slices?
i tested nanosleep(), usleep() and select(0, 0, 0, 0, &timeout) ...
thanks .... tim
--
mailto:TimBlechmann@gmx.de ICQ: 96771783
http://www.mokabar.tk
latest mp3: kMW.mp3
http://mattin.org/mp3.html
latest cd: Goh Lee Kwang & Tim Blechmann: Drone
http://www.geocities.com/gohleekwangtimblechmannduo/
After one look at this planet any visitor from outer space
would say "I want to see the manager."
William S. Burroughs
Dear list,
I am currently designing a new kind of music sequencer and I
need your help in making some crucial decisions.
Introduction
My project is a sequencer for composing Just Intonation music.
Just Intonation is not a new idea in the music landscape, not by a long
shot: it has roots in the first studies of music by the ancient Greeks.
The GUI I'm designing though will (hopefully) be the first of its kind.
My sequencer is going to be just that: a sequencer. It will be hard
enough to design an efficient, user-friendly and solid GUI for composing
music without a scale (yes, you read it right) so I'm not going to put
synthesis modules in the same software package. Not at first, anyway.
MIDI
Here comes the biggest problem. I cannot use MIDI as a protocol between
my sequencer and the synthesizers, because most (if not all) of the notes
produced by my software will not lie in the equal tempered scale (the 12
notes per octave everyone knows) nor in any other scale for that matter.
Please correct me if I'm wrong: MIDI doesn't allow for microtonal notes.
The next best things MIDI has to offer are Custom Scales and Pitch Bend.
Custom Scales is not a feature of MIDI; it's more like a reinterpretation
of the protocol. It happens when both the sequencer and the
synthesizer are still talking of C, C#, D, D#... but the synthesizer
renders those notes with custom pitches, coming from a custom scale set
by the user. This approach is unsuitable to my project, mainly because
there could be more than 12 notes (pitches) in an octave.
Pitch Bend is not any better, because (to my knowledge) there is only
one pitch bend setting per channel. I could certainly use it to play
microtonal notes, but the pitch bend applies simultaneously to ALL notes
being played. This limits the applicability of pitch bend to monophonic
instruments, or at least to playing one voice per MIDI channel.
Alternatives
Is there a common protocol with the same scope as MIDI (transferring
notes from a sequencer to a synthesizer) but which allows for microtonal
notes? I fear not.
So I am left with the only option of manually interfacing my sequencer
to a select few software synthesizers. I'm designing my project in an
extensible way (support for plugins) so that's not so bad as it seems.
The problem is that I don't know of any software synthesizer that is:
1. good enough for decent music production;
2. easy to use by non-experts (this is a direct stab at CSound, or
rather at its lack of a decent GUI, a standard instrument-exchange
file format and a decent, centralized library of presets);
3. free software.
A final note: outputting SCO files for use in CSound seems like an
obvious solution, but this would greatly limit the usability of my
project. This is because (to my knowledge) there is no decent GUI one
can use to merge the SCO file coming from a sequencer with a few ORC/SCO
file-couples coming from an instrument library, without having to know
the CSound language. I don't want to target CSound programmers only.
I hope I've managed to explain my problem. Please feel free to discuss
these matters. Any constructive criticism, any note of mistakes on
my part and any practical advice for my project will be appreciated.
Toby
--
«A computer is a state machine. Threads are for people
who can't program state machines.» —Alan Cox
>From: Thorsten Wilms <t_w_(a)freenet.de>
>Subject: Re: [linux-audio-dev] Common synthesizer interface -or-
> microtonal alternative to MIDI?
>
>A sequencer is a device for recording and playback of signals
>with the possibility to arrange several recordings.
[ ... ]
Thinking of the differences between "sequencer" and "editor",
I would call a sequencer something with which one can place events
on a timed sequence. The events may cause MIDI data to be
sent or an audio player to be started.
Juhana
--
http://music.columbia.edu/mailman/listinfo/linux-graphics-dev
for developers of open source graphics software