>From: Benno Senoner <sbenno(a)gardena.net>
>
>Since we cannot increase the speed at which sound travels, and even
>DACs add some latency (1 msec or so),
>I see any effort to reduce latency below 2-3 msec as quite useless.
Lucasfilm's sound processor in 1983 had a fixed 1.5 ms latency.
But did they have DACs as lousy as we have today? Don't know.
Did they mean only the DSP effect-processing latency? Not sure.
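For scale, the latency contributed by buffering alone is easy to compute; a quick Python sketch, assuming a 48 kHz sample rate and a two-period setup (my assumptions, not figures from the quoted mail):

```python
def period_ms(frames, rate):
    """Duration of one audio period in milliseconds."""
    return 1000.0 * frames / rate

# Worst-case buffering latency is roughly periods * period length:
# two periods of 32 frames at 48 kHz already sit near the 1.5 ms
# figure quoted for the Lucasfilm processor.
print(period_ms(32, 48000))       # one period: ~0.667 ms
print(2 * period_ms(32, 48000))   # two periods: ~1.333 ms
```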
I dislike that the JACK buffer size must be turned up for all
clients when one client does not perform well. It could well
be that I would like to use a buffer size of 32 for
A/D --> EQ --> M --> D/A
and a buffer size of 256 for
Zyn --> M --> D/A (the part M --> D/A is the same as above)
where M is a magic processing node which mixes audio streams
that have different buffer sizes. M would be quite simple, actually.
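M need not be complicated: the slow chain deposits its big blocks into a FIFO, and on every fast cycle M pops one small block's worth of frames and adds them in. A toy Python sketch of the idea (the names and the zero-padding policy are my own invention, not an existing JACK API):

```python
from collections import deque

class MixNode:
    """Mix a large-buffer stream into a small-buffer stream."""

    def __init__(self, small=32):
        self.small = small
        self.fifo = deque()            # frames queued by the slow client

    def push_big(self, block):
        """Called once per slow cycle, e.g. with a 256-frame block."""
        self.fifo.extend(block)

    def mix_small(self, fast_block):
        """Called once per fast cycle: pop `small` frames and mix.

        If the slow client has not delivered yet, mix in silence."""
        return [x + (self.fifo.popleft() if self.fifo else 0.0)
                for x in fast_block]

m = MixNode(small=4)
m.push_big([0.5] * 8)              # one "big" block = two fast cycles
print(m.mix_small([0.25] * 4))     # -> [0.75, 0.75, 0.75, 0.75]
print(m.mix_small([0.25] * 4))     # -> [0.75, 0.75, 0.75, 0.75]
print(m.mix_small([0.25] * 4))     # FIFO empty -> [0.25, 0.25, 0.25, 0.25]
```

The real version would also have to delay the fast chain by one big period so the FIFO never underruns, which is exactly where the extra latency of the 256-frame client would come from.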
Juhana
--
http://music.columbia.edu/mailman/listinfo/linux-graphics-dev
for developers of open source graphics software
Hi LADs
As you might remember from my recent questions about MIDI tuning and
microtonality, I'm currently designing a music editor unlike any other
(that I've seen.) It will be a graphical music editor, similar to
existing "piano roll" editors on the surface, but with several important
differences. It will be a tool for composing music in Just Intonation.
It will also be free software (that goes without saying :-)
Anyway, I need to make a few key decisions about it and I'd like to have
some feedback and advice from you experienced people!
The most important issue is how to integrate my editor with the greatest
possible number of synths and other existing music software/hardware.
Unfortunately my software will need to set the pitch of every single
note independently of the others, so "common" MIDI will not suffice.
Pitch bend will not work either, or rather it will limit the output to
one note per MIDI channel. Not adequate at all for most uses.
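To illustrate the channel-rotation scheme I mean: each sounding note gets its own channel plus a 14-bit pitch-bend offset for its detune. A rough Python sketch (the +/-2 semitone bend range is an assumption on my part; hardware differs):

```python
BEND_RANGE = 2.0      # assumed pitch-bend range in semitones (+/-)
CENTER = 8192         # 14-bit pitch-bend center value

def bend_value(cents):
    """14-bit pitch-bend value for a detune given in cents."""
    v = CENTER + round(cents / (BEND_RANGE * 100) * 8192)
    return max(0, min(16383, v))   # clamp to the 14-bit range

def allocate(notes, channels=16):
    """Assign each (midi_note, cents) pair its own MIDI channel."""
    return [(i % channels, n, bend_value(c))
            for i, (n, c) in enumerate(notes)]

# A just major third (ratio 5/4) is about -13.7 cents away from
# the equal-tempered major third, so the second note needs a bend.
print(allocate([(69, 0.0), (73, -13.7)]))
# -> [(0, 69, 8192), (1, 73, 7631)]
```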
In the other thread you kindly provided me with some advice and links,
including mentioning the MIDI Tuning Standard and OSC.
I'm designing my software with extensibility in mind, so adding new
protocols will not be a problem. Nonetheless, the more protocols and
APIs I know of in advance, the more extensible I can make it!
I have heard of several standards in the Linux audio world, things like
JACK and ALSA. But I'm quite new to all this, so I don't have a good
idea of which standards a modern music editor is supposed to support.
Could you please mention them to me? I will gladly study the APIs on my
own, but I don't want to waste time studying stuff that has no practical
value for building a graphical music editor. Also, please keep in mind
my special needs about note tuning/microtonality.
I hope this is not considered a repeat of my previous email... I've read
quite a bit of docs in the meantime, but I'm still confused. Is an
editor supposed to do something with Jack? I'm handling the "editing"
part on my own, but how should I embed playback/recording functionality
into my editor? Is there a way to interface it with VST plugins (where
binary compatible) and/or any free alternatives? Should I do all this
on my own, or are there any architectures that I could simply plug into?
Regards,
Toby
Hi all,
I just realized why my example code "didn't work." It was a simple, stupid
human error (surprise, surprise): the change did occur, it was just that the
sine tone was 400 Hz and I was altering it by 0-1 Hz with the slider output,
which on my lousy laptop speakers in a noisy room seemingly did not make a
bit of difference (at least not to my ears :-).
Anyhow, thank you very much all for your help in this matter!
Btw, Stefan, you mentioned that there were some updates to SCUM. Are these
included in the latest 0.0.2a version? Also, is this the same version as the
one presented at LAC last year?
Many thanks!
Best wishes,
Ico
Greetings all,
I guess the title says it all. I do have working examples (i.e. the sequencer
example from LAD 2004) that make new instances of synths, but none of them
actually does real-time updates to existing instances.
At any rate I would greatly appreciate your help in this matter.
This is as far as I got and cannot figure it out beyond this point (my brain
is pretty darn fried :-):
s = Server.local.boot;

SynthDef("onetwoonetwo", { arg out=0, freq;
    w = SCUMWindow.new;
    w.title = "Slider example";
    w.initialSize = Size(20, 300);
    c = SCUMVBox( w );
    v = SCUMSlider(c, { |v|
        v.expand = 1;
        v.fill = 1;
        v.bgColor = Color.white;
        v.fgColor = Color.black;
        v.action = {
            freq = v.value * 100;
        };
        v.doAction;
    });
    w.show;
    Out.ar(out,
        SinOsc.ar(freq + 400, 0, 0.5)
    )
}).play;
What I am trying to do is simply to affect the frequency of the sine tone by
moving the slider, yet nothing changes when I move it.
Any help would be greatly appreciated!
Best wishes,
Ico
i've just compiled a 2.6.12-rc4 kernel with the rt-preempt patch
(configured with PREEMPT_RT) and it all went ok, but after installing
the binary NVIDIA drivers, i get this bug message:
BUG: modprobe/2085, lock held at task exit time!
[de2887e4] {(struct semaphore *)(&os_sema->wait)}
.. held by: modprobe: 2085 [dfceec30, 118]
... acquired at: os_alloc_sema+0x40/0x76 [nvidia]
should i send this to the kernel list? will this cause problems on my
system?
shayne
Hello all,
I'm planning the OSC-fication of Aeolus, and would like to have
some comments / feedback on the current ideas (they could well
be braindead, in which case you are kindly requested to say so).
The setup I have in mind is as follows:
- There will be one UDP server socket. This socket is left
unconnected, and will receive OSC commands from any source.
These commands give you complete control over most aspects
of Aeolus (excluding the stop definition editor which will
remain in the local GUI only).
They will also include note on/off commands, and that leads
to my first question: is there any "standard" OSC format for
these ?
- This interface will provide everything required by sequencers
etc. that want to 'play' Aeolus. Clients that want to implement
their own user interface will need more: they also require
feedback on the current state (e.g. if a MIDI message recalls
a program, this should be reflected in the user interface).
The way I currently foresee providing for these is to add
the commands:
/addclient ,s host:port and
/remclient ,s host:port
Registered clients can request information (e.g. a list of all
available stops), and will receive notification of everything
that may affect a user interface, again by UDP messages to
their port of choice. They will still send all their commands
to the unique server port above. Second question: is this a
good idea, or would it be better to create a TCP connection
for this type of client ?
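For what it's worth, the UDP registry itself would be tiny; here is a hypothetical Python sketch with a hand-rolled OSC encoder (string arguments only; the names are mine and not based on any existing Aeolus code):

```python
import socket

def osc_string(s):
    """NUL-terminate and pad an OSC string to a multiple of 4 bytes."""
    b = s.encode() + b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address, *args):
    """Encode an OSC message whose arguments are all strings."""
    tags = "," + "s" * len(args)
    return (osc_string(address) + osc_string(tags)
            + b"".join(osc_string(a) for a in args))

class ClientRegistry:
    """Track registered UIs and fan notifications out over UDP."""

    def __init__(self):
        self.clients = set()
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def handle(self, path, arg):
        """Serve /addclient and /remclient; arg is "host:port"."""
        host, port = arg.rsplit(":", 1)
        if path == "/addclient":
            self.clients.add((host, int(port)))
        elif path == "/remclient":
            self.clients.discard((host, int(port)))

    def notify(self, address, *args):
        """Send one OSC message to every registered client."""
        msg = osc_message(address, *args)
        for dest in self.clients:
            self.sock.sendto(msg, dest)

r = ClientRegistry()
r.handle("/addclient", "localhost:9876")
print(r.clients)                       # -> {('localhost', 9876)}
r.handle("/remclient", "localhost:9876")
print(r.clients)                       # -> set()
```

The downside, of course, is that notifications are fire-and-forget, which is part of why TCP might be attractive for this type of client.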
Comments invited !!
--
FA
Hey LADs
The first public release of smack is now here. Smack is a drum synth,
100% sample free. In this release there are
TR808 bass, snare, hihats, cowbell and clave,
TR909 bass and snare,
a frequency-shifter-based snare and some FM hihats.
It's built with LADSPA plugins and the Om modular synth. For source and RPMs go to http://smack.berlios.de/ . Some audio demos are also on the site.
Cheers,
Loki
hi all..
i'm trying to sleep for some very short time slices ... about 100 to 1000
us ...
but i can't get below about 1 ms ... are there any workarounds to sleep
for very short time slices?
i tested nanosleep(), usleep() and select(0, 0, 0, 0, &timeout) ...
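one workaround i know of is a hybrid: sleep for all but the last tick, then busy-wait the remainder against a high-resolution clock. a rough python sketch of the idea (the 1 ms margin is a guess matched to the timer tick, not a measured value):

```python
import time

def microsleep(us, spin_margin_us=1000):
    """Sleep for `us` microseconds: coarse sleep, then spin.

    The coarse sleep stops `spin_margin_us` early to absorb the
    scheduler's rounding; the remainder is burned in a busy loop."""
    deadline = time.perf_counter() + us / 1e6
    coarse = (us - spin_margin_us) / 1e6
    if coarse > 0:
        time.sleep(coarse)           # may overshoot by a tick
    while time.perf_counter() < deadline:
        pass                         # spin off the last fraction

t0 = time.perf_counter()
microsleep(300)                      # 300 us
elapsed = time.perf_counter() - t0
print(elapsed >= 300e-6)             # never returns early
```

the obvious cost is that the spin burns a full CPU for the margin, so it is only tolerable for very short waits like these.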
thanks .... tim
--
mailto:TimBlechmann@gmx.de ICQ: 96771783
http://www.mokabar.tk
latest mp3: kMW.mp3
http://mattin.org/mp3.html
latest cd: Goh Lee Kwang & Tim Blechmann: Drone
http://www.geocities.com/gohleekwangtimblechmannduo/
After one look at this planet any visitor from outer space
would say "I want to see the manager."
William S. Burroughs