Hi.
Sorry to come in very late. The Mustajuuri plugin interface includes all
the bits you need. In fact I already have two synthesizer engines under
the hood.
With Mustajuuri you can write the synth as a plugin and the host is only
responsible for delivering the control messages to it.
Alternatively you could write a new voice type for the Mustajuuri synth,
which can lead to lower overhead ... or not, depending on what you are
after.
http://www.tml.hut.fi/~tilmonen/mustajuuri/
On 3 Jul 2002, nick wrote:
> Hi all
>
> I've been scratching my head for a while now, planning out how I'm going
> to write amSynthe (aka amSynth2)
>
> Ideally I don't want to be touching low-level stuff again, and it makes
> sense to write it as a plugin for some host. Obviously in the Win/Mac
> world there's VST/DXi/whatever - but that doesn't really concern me as I
> don't use 'em ;) I just want to make my music on my OS of choice..
>
> Now somebody please put me straight here - as far as I can see, there's
> LADSPA and JACK. (and MuSE's own plugins?). Now, I'm under the
> impression that these only deal with the audio data - only half what I
> need for a synth. Or can LADSPA deal with MIDI?
>
> So how should I go about it?
> Is it acceptable to (for example) read the MIDI events from the ALSA
> sequencer in the audio callback? My gut instinct is no, no, no!
>
> Even if that's feasible with the ALSA sequencer, it still has problems -
> say the host wanted to "render" the `song' to an audio file - using the
> sequencer, surely it would have to be done in real time?
>
> I just want to get on, write amSynthe and then everyone can enjoy it,
> but this hurdle is bigger than it seems.
>
> Thanks,
> Nick
>
>
Tommi Ilmonen Researcher
>=> http://www.hut.fi/u/tilmonen/
Linux/IRIX audio: Mustajuuri
>=> http://www.tml.hut.fi/~tilmonen/mustajuuri/
3D audio/animation: DIVA
>=> http://www.tml.hut.fi/Research/DIVA/
After troubleshooting configuration problems with several users, I believe I
have fixed the problem with the configure script that prevented some people from
building freqtweak. Anyone who had problems should definitely try the new
release.
Along with that, you get a 4096 freq bin mode, and a few tiny bugfixes.
http://freqtweak.sourceforge.net/
I added a short example of how to pipe alsaplayer through freqtweak and out your
speakers without messing with JACK patchbays (see Usage Tips section).
jlc
Hi Guys,
I thought I'd mention this project that's been going on in that bad bad Windows
world to provide free WDM drivers for all emu10k-based soundcards. It seems
they've done quite a good job of it. I've heard reports that it works in
w2k with Cubase at 4ms latency with a standard SB Live soundcard (sound quality
not accounted for).
Though this is not really Linux-related (there does not seem to be any source
for this driver), it would be interesting to know how they've crafted the
driver to provide such good performance.
More related to Linux is the bundled DSP compiler. The package bundles a number
of DSP effect algorithms made for the emu10k processor. I don't know if the
source to the effects is in it, but they have a message board where one of the
topics seems to be purely about using/programming the DSP.
The DSP algorithms, even the DSP binaries, should be very interesting to test
with the emu10k in Linux; they _should_ be usable "off the shelf".
Oh, right! The site:
http://kxproject.spb.ru/
Regards
Robert
Changes:
- Fixed a bug when playing recorded tracks (VERY IMPORTANT!!!)
- Fixed a bug when trying to export .ecs
- Quote filenames that contain spaces
Download it from:
http://www.sourceforge.net/projects/tkeca
Regards,
Luis Pablo
> -----Original Message-----
> From: Ivica Bukvic [mailto:ico@fuse.net]
...
> introduces a problem of porting apps into its API, and that
> again poses
> the same problem of excluding a lot of older audio apps that
...
this might be solved by userspace device drivers - they do not want the
mixer in the kernel... once userspace device drivers are available it will
be possible to implement what you propose, I guess. not sure what the
latest news is as far as userspace device drivers go...
the other option is that one of the APIs will become roughly as common as
OSS is today (so that virtually no applications will go straight to device
drivers). JACK?
erik
Announcing the initial release of FreqTweak (v0.4)
http://freqtweak.sourceforge.net
FreqTweak is a tool for FFT-based realtime audio spectral manipulation
and display. It provides several algorithms for processing audio data
in the frequency domain and a highly interactive GUI to manipulate the
associated filters for each. It also provides high-resolution spectral
displays in the form of scrolling-raster spectrograms and energy vs.
frequency plots displaying both pre- and post-processed spectra.
It currently relies on JACK for low latency audio interconnection and
delivery. Thus, it is only supported on Linux.
FreqTweak is an extremely addictive audio toy; I have to pry myself
away from playing with it so I can work on it! I hope it has value
for serious audio work too (sound design, etc.). The spectrum analysis
is pretty useful in its own right.
FreqTweak supports manipulating the spectral filters at several
frequency resolutions (64, 128, 256, 512, 1024, or 2048 bands) depending
on your needs and resources. Overlap and windowing are also
selectable.
The GUI filter graph manipulators (and analysis plots) have selectable
frequency scale types: 1x and 2x linear, and two log scales to help
with modulating the musical frequencies. Filters can be linked across
multiple channels. The plots are resizable and zoomable (y-axis) to
allow precise editing of filter values.
The current processing filters are described below in the order audio
is processed in the chain. Any or all of the filters can be
bypassed. The state of all filters can be stored or loaded as presets.
Spectral Analysis -- Multicolor scrolling-raster spectrogram,
or energy vs. freq line or bar plots... one shows
pre-processed, another shows post-processed.
EQ -- Your basic multi-band frequency attenuation. But you get
an unhealthy number of bands...
Pitch Scaling -- This is an interesting application of
Sprenger's pitch scaling algorithm (used in Steve Harris'
LADSPA plugin). If you keep all the bins at the same scale, it
is equivalent to Steve's plugin, but when you start applying
different scales per frequency bin, things quickly get weird.
Gate -- This is a double filter where a given frequency band is
allowed to pass through (unaltered) if the power on that band
is between two dB thresholds... otherwise its gain is clamped
to 0.
Delay -- This lets you delay the audio on a per frequency-bin
basis yielding some pretty wild effects (or subtle, if you are
careful). A feedback filter controls the feedback of the delay
per bin (be careful with this one). This is basically what
Native Instruments' Spektral Delay accomplishes. Granted, I
don't have all the automated filter modulations (yet ;). See
their website for audio examples of what is possible with this
cool effect.
Have fun... report bugs...
Jesse Chappell <jesse(a)essej.net>
Hi list,
sorry for abusing this list yet again for a not directly LAD-related
issue, but..
does anyone here own (or have experience with) the Novation Supernova II
synthesizer? I want to extend my MIDI rig at home a little around
Christmas, and this one looks like an interesting candidate to me.
Things I'm interested in are: Overall opinion, weak spots, support from
Novation, reasons why you would not buy it. If there are
recommendations for comparable systems from other companies, I'm
interested in those as well.
And, does the machine come with a complete SysEx implementation chart? I
downloaded the PDF manual, but it doesn't contain anything like that..
Thanks, and please reply in private mail unless lots of people on this
list start shouting "me wanna know, too" :-).
Frank
Hi all,
can anyone give me pointers on how the overview cache for a zoomable
waveform display is organized?
One can see accurate and fast displays in a lot of applications, but I
guess rendering them is not straightforward.
best greetings,
Thomas
Pick up sfront 0.85 -- 10/13/02 at:
http://www.cs.berkeley.edu/~lazzaro/sa/index.html
[1] Mac OS X support for real-time MIDI control, using the -cin
coremidi control driver. Up to four external MIDI sources are
recognized. Virtual sources are ignored; expect virtual source
support in a future release.
[2] Mac OS X memory locking now works in normal user processes,
and is no longer limited to root.
-----
All the changes in 0.85 are OS X specific, but I thought I'd post this
here in case people are curious about OS X porting ...
With this release, all of the real-time examples in the sfront
distribution run under Mac OS X. Specifically, it's now possible
to use OS X as a Structured Audio softsynth -- I've been running my
PowerBook this way with 2ms CoreAudio buffers, with MIDI input from my
controller via an Edirol UM-1S USB MIDI interface and audio output
via the headphone jack on the PowerBook, and things work glitch-free.
Also, because audio and MIDI are both virtualized under OS X, it's
possible to run multiple ./sa softsynths in parallel (i.e. from
different Terminal windows) and get usable layering ... although in
most cases, you'd be better off doing your layering inside a single SA
engine.
To see the -cin coremidi control driver in action, run the
sfront/examples/rtime/linbuzz softsynth; it will find external MIDI
sources (up to 4, no virtual source support ...) and use them to drive
the SA program in real time. In the linbuzz example, the pitch wheel
(set up to do vibrato), mod wheel (spectral envelope), and channel
volume controllers are all active -- you can look at the linbuzz.saol
SAOL program to see how they are used.
The actual CoreMIDI code is in:
sfront/src/lib/csys/coremidi.c
The most interesting aspect of this code is that a single
AF_UNIX SOCK_DGRAM socketpair pipe (named csysi_readproc_pipepair) is
used for communication between an arbitrary number of CoreMIDI
readprocs (one for each active source) and the SA sound engine (which
runs inside the CoreAudio callback -- the actual main thread sleeps
and does nothing). Writes to the pipe are blocking (but should rarely
block, and never for significant time); reads from the pipe are
non-blocking.
The semantics of AF_UNIX SOCK_DGRAM (AF_UNIX is reliable,
SOCK_DGRAM guarantees the messages from the CoreMIDI readprocs don't
mix) make it a good choice for doing the multi-source MIDI merge. The
actual messages sent in the pipe consist of a preamble to identify
the readproc, and the (error-checked for SA semantics) MIDI commands
in each MIDIPacket.
At this point, the Linux and OS X real-time implementations
support all of the same features (audio input, audio output, MIDI In,
RTP networking) ... I'm not sure if AudioUnits support makes sense for
sfront; I'll probably take a closer look at the issue soon ...
-------------------------------------------------------------------------
John Lazzaro -- Research Specialist -- CS Division -- EECS -- UC Berkeley
lazzaro [at] cs [dot] berkeley [dot] edu www.cs.berkeley.edu/~lazzaro
-------------------------------------------------------------------------