Hi all,
has anyone compiled the LADSPA SDK for Mac OS X? I expect lots of
people have, and that it's very easy..!
In the src directory, I type make and get this:
ld: unknown flag: -shared
The targets in the makefile are:
../plugins/%.so: plugins/%.c ladspa.h
$(CC) $(CFLAGS) -o plugins/$*.o -c plugins/$*.c
$(LD) -o ../plugins/$*.so plugins/$*.o -shared
../plugins/%.so: plugins/%.cpp ladspa.h
$(CPP) $(CXXFLAGS) -o plugins/$*.o -c plugins/$*.cpp
$(CPP) -o ../plugins/$*.so plugins/$*.o -shared
So I guess ld is different on OS X, and looking at the ld man page, I
guess maybe I need -dylib instead:
../plugins/%.so: plugins/%.c ladspa.h
$(CC) $(CFLAGS) -o plugins/$*.o -c plugins/$*.c
$(LD) -o ../plugins/$*.so plugins/$*.o -dylib
../plugins/%.so: plugins/%.cpp ladspa.h
$(CPP) $(CXXFLAGS) -o plugins/$*.o -c plugins/$*.cpp
$(CPP) -o ../plugins/$*.so plugins/$*.o -dylib
But now I get this error:
ld: plugins/amp.o illegal undefined reference for multi module
MH_DYLIB output file to symbol: dyld_stub_binding_helper from section
(__DATA,__la_symbol_ptr) relocation entry: 0
and that's where I give up! But I guess there's a solution which is
obvious to anyone who knows about dynamic libraries on a Mac?
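For what it's worth, the usual fix on OS X is to build loadable plugins as Mach-O bundles with `-bundle` rather than `-shared` or `-dylib` (the dyld_stub_binding_helper error is typical of linking object files into a dylib when a bundle was intended), and to link through the compiler driver instead of calling ld directly. A hedged sketch of the modified targets, untested on my end:

```make
../plugins/%.so: plugins/%.c ladspa.h
	$(CC) $(CFLAGS) -o plugins/$*.o -c plugins/$*.c
	$(CC) -bundle -o ../plugins/$*.so plugins/$*.o

../plugins/%.so: plugins/%.cpp ladspa.h
	$(CPP) $(CXXFLAGS) -o plugins/$*.o -c plugins/$*.cpp
	$(CPP) -bundle -o ../plugins/$*.so plugins/$*.o
```

On older toolchains you may also need `-flat_namespace -undefined suppress` on the link line; someone with a Mac at hand should confirm.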
Hi! I'm writing this mail because at this point I'm seeing that the
Linux Audio Development community is still going aimlessly about how to
develop powerful audio applications. As we know, one of the key factors
missing nowadays is audio plugins and synth plugins. Some audio plugin
needs can be covered by LADSPA, but in other cases you need much more
complex interfaces than just a few sliders (like a compressor with
curves, graphical equalizers with per-band feedback, parametrics, multiband
compressors, voice auto-tuning, surround spatial repositioning... all of
which would become more intuitive and useful with richer feedback).
Also there is the need for instrument plugins, single or multitimbral
(so you don't have to load all the instances at once). DSSI provides this
(except the multitimbral part), but you have to write the interface separately
from the core, which is not only more difficult to program, but also
very limiting, because in many cases you can't show in much detail what
is going on inside the synth, like editing a sample, drawing freeform
envelopes, editing the oscillator generator, etc. And better not to talk about
effects, which need spectrums, VUs, and other kinds of complex feedback.
So, I'm sure that many of you have asked yourselves: why can't we have a
VST-like plugin architecture for Linux?
The following are the usual excuses:
- Because we need core and interface separation; this way we guarantee a
clean design and network transparency.
- Because it would force both interface and core to be memory locked, as
a result of being JACK clients, forcing more physical memory to be used.
- Because separating interface and core makes for a clear design.
And I say, ALL OF THIS IS LIES. Not a single one of these reasons is of any
concern to musicians. If I just want to make music, I don't care
about any of them. I'd just fire up an application and make
music. All of that sounds to me more like academia-speak than real reasons.
You can believe them or not, but it's PROVEN that none of these are
needed to make music.
However, there is one single reason why we don't have VST-like
plugins on Linux:
- It is impossible to make a plugin in one widget toolkit that will be
hosted by another widget toolkit (say, GTK on Qt, or FLTK on FOX).
Why is this? The answer is simple: X11. An X client connects to the
X server (via unix socket/tcp/whatever), then stays blocked on that
connection until it receives events (keypress/mouse/etc), which are sent
to the event loop. It is then the toolkit (GTK/Qt/etc) that is
responsible for handling the events. This way, even if you load many
toolkits at the same time, and even if each opens its own connection, one of
them has to stay blocked in the event loop.
This is the REAL reason why we can't make a VST-like plugin API on Linux,
or EVEN port open-source VST plugins and use them here.
Up to now, nobody has really bothered to solve this problem; instead
most just attempt to find a workaround.
So what I am proposing is to SOLVE this problem. How?
- Designing a library that will act as an intermediary between the toolkits
and the X connection, handling the event loops for all of them, so they
can run together.
- Placing this library on freedesktop.org.
- Encouraging toolkit makers to support this library, and porting them to it.
As you see, this is not a simple task, and much less something I can do
alone, so I will need help!! I am very sure that at some point we will
get support from other groups which will also benefit from this.
So before anything, I'd like to know who is interested in this, and who
would like to help, offer experience, etc.
Well, I hope I get some feedback.
Cheers!
Juan
I have decided to put together a release and free Chionic to the world.
Chionic is by far THE most advanced sampler that you can find for
GNU/Linux, *BSD, etc. It is not only extremely powerful, but it has
everything configurable via the UI.
You can obtain the baby here: http://www.reduz.com.ar/chionic/
Lengthy, Lengthy, Insane List of features:
1.2.1 General
* OSS and JACK Support (sorry, no time to make ALSA)
* Integrated Virtual Keyboard for testing samples and patches
(automatically detecting the mode you are in).
* Spectrum display for the output sound.
* Voice Counter, so you can always see how many voices Chionic is
processing.
* Customizable Colors for some widgets.
* Customizable keybindings for many widgets.
* Sliders all show their respective values, and offer a small button so
you can input a value yourself, with the range and all presented.
* Spinboxes go to top and bottom if you right click on the arrows.
* Fully Functional ``Settings'' Menu... no need to go like
``chionic -help''; tweak everything from the UI.
1.2.2 Samples
* No, you don't control samples directly; you have to make instruments.
* Up to 1000 samples per bank
* Loading of stereo/quadraphonic/whatever samples into as many slots
as needed, or just merge into a monophonic sample, all upon import
* Supports every single sample format through libsndfile
* Built in extremely complete sample editor with extensible plugins.
This sample editor is meant for tweaking everything sample related:
1. Loop Begin/ Loop End.
2. Tuning, Autotuner.
3. Realtime centered zoom, from full view to individual samples
using a slider.
4. Play Position Display! It displays the playback pointer
INSIDE the sample, for every voice being played.
* Sample Looping at any point, plus ping-pong (bidirectional) looping
* Samples are stored in 16 bits normalized, then converted to 32-bit
floating point for mixing (no real audible quality loss, since
it's CD quality).
* Samples are stored in a pool and loaded on demand, when an
instrument is using them or you are editing them. They are automatically
freed when not used anymore.
* If you perform modifications to a sample in realtime, it will
remain in memory until you save it, to avoid any kind of mistake.
* Sample management is easy, with an interface full of indicators
for the usage/refcount of every single sample.
* Extremely fast resampler using a selectable (cubic, linear, cosine
or raw) interpolation method, which uses fixed-point increment sections.
You can have over 512 simultaneous voices with very little CPU usage
(yup!) on a 1GHz machine.
* The resampler also performs volume ramping and filtering, so the
quality is perfect.
* Integrated declicker, which will smooth out samples that don't
start or end at DC. The result is a clean sound no matter if the samples
have wrong DC. It also declicks samples that are being killed.
* Experimental sample mip-mapping feature... in a way, it's the best
upsampling interpolation you'd ever hear; in another way, it works like
shit with looped samples, as I need to figure out again how to handle
them, so it's off by default.
1.2.3 Channels/Buffers/Buses/Sends/Inserts
* Integrated Buffer Routing System, for assigning inserts and sends
to your mix.
* Integrated set of effects and presets for insert (Chorus, Reverb,
Stereo FX, Stereo Echo, Band Limiter, 6,8,10,20,30 bands graphical EQs,
OA-Pitch Shifter as internal plugins, plus LADSPA is supported too).
* Sends can be assigned to midi controllers.
* Very Smart algorithm for detecting inactivity of mixing channel
buffers, so it can turn off inserts automatically when they are not
used, for both internal and LADSPA inserts.
* Buffer counter, so you can see how well the previous feature works :)
* PANIC!!! button, to turn off all the voices.
1.2.4 Instruments
* Unlimited Banks of 128 instruments for Patches
* Patches are made by using samples from the pool
* When a Patch is loaded, the samples are loaded too
* Adjustable Velocity Sensitivity Curve
* Pitch/Pan Scale and Center Separation, to be able to make
piano-like instruments, where the bass goes to the left and the treble
to the right of the pan.
* Inserts PER PATCH. Besides the other inserts, you can make an
effect chain, with the same internal and LADSPA effects, that is bound to
a Patch. Also, if you modify any parameter, it will be modified in
realtime in all the channels where this Patch has been assigned.
* Up to 4 ``tones'' (sub voices) per Patch, each tone has:
1.2.5 Tones
* Velocity Range (from which to which velocity the tone accepts
notes)
* Sensitivity Curve Editing
* Note Scaling, Coarse/Fine Tuning, Pan, Start Delay
* Filters: Resonant LPF, HPF, BPF, BRF with Frequency Tracking for
Cutoff, and Volume->Cutoff Sensitivity in both directions.
* Widget to edit the filter shape (cutoff/resonance) by just drawing
on the Frequency Domain curve.
* Randomness factor in all of the above parameters, for a more
natural sound.
* Freeform Envelopes (just draw a polygon) with Loop and Sustain
Loops (which can take any 2 different points, so you can loop with any
shape you want). The display has auto-zooming and guides. Envelopes also
display one or more position guides which represent the place the
envelopes are being played, in realtime!
* LFOs (which can be synced or not) with sine, saw, square and noise
waves, and editable Delay, Rate, Depth and Phase.
* Envelopes and LFOs for: Volume, Pan, Pitch, Cutoff and Resonance.
Use them all together if you want.
* Sample/Note Table, to assign to EACH note a different sample
from the sample pool.
* Of course, helper buttons for the above :) so it doesn't take
much time to do.
1.2.6 MIDI
* ALSA_seq MIDI input driver, which can open up to FOUR ports, so you
get 64 input channels. All channels are mapped into the buffers
screen, and due to the optimized nature of Chionic, you can use all 64
of them without any slowdown.
* MIDI Jitter Correction. This means that you don't need to run JACK
at low latency in order to avoid jitter; even at 16k buffer sizes, the
MIDI sent to Chionic will sound fine, with no jitter. This was achieved
using a very clever and complex hack, which shouldn't be there because
JACK should support MIDI in the first place :)
* Due to the wonderful nature of Linux, you have to make sure that
nothing is stressing the /proc filesystem, or else the above feature will
not work correctly! Don't run any 'top' operations and BE CAREFUL with
KSysguardd. Failure to comply with this will result in the MIDI sensing
thread improperly timestamping incoming events... heck if I know why.
* On the bright side, if nothing of the sort is being done, you
can run JACK and Chionic even as a normal user and it's going to work fine.
****************************
Sad News.
I'm done with the development of Chionic, at its first release. I have
to sadly accept that, as much as it works the way I want, I can't
connect my sequencer to Chionic in the way I need (because jackd is
too poor to do it the proper way).
So in short, anyone who is interested in taking care of Chionic for me
and continuing development is MORE THAN WELCOME. I will gladly offer
assistance with its development. Also, Chionic's core and UI are
separated at the code level, so if you want to make a GTK app, or port
it to Windows, it's not a hard task. Just let me know, so...
CHIONIC IS DEAD UNTIL SOMEONE DECIDES TO PICK IT UP AND CONTINUE
DEVELOPMENT (or until jackd matures enough... yeah right :)
Jean-Marc Valin:
> Hi,
>
> I think a part of the conversation I had with Con Kolivas may be of
> interest here:
>
>
> <jmspeex> I still don't understand why this [unprivileged real-time]
> hasn't gone in the kernel a long time ago.
>
<---->
> <con> jmspeex since lkml is so hopeless as a forum for that sort of
> thing, feel free to start an online petition to show how many people
> there are. 1 of 2 things will happen - 1. you'll have thousands signing
> up, or 2. there are less people doing it than you think
This shouldn't be a problem. There are about 2300 subscribers to the
linux-audio-user mailing-list:
http://www.linuxdj.com/audio/lad/subscribe.php
If someone sets up this petition, and more than two of us sign
up, that should show those arrogant lkml people that there are really
_a lot_ of us, and that we are strong, and very angry. Hah!
hi all ...
another jack newbie question ...
sometimes jackd kicks my application out, since the subgraph times out
...
basically i have two questions about that:
- is there an api function that tells jack not to kick the application
out, or a callback that tells the application that it has been kicked out?
- is it possible to query the timeout, so that i can adapt the callback
to finish before this timeout?
thanks ... tim
--
mailto:TimBlechmann@gmx.de ICQ: 96771783
http://www.mokabar.tk
latest mp3: kMW.mp3
http://mattin.org/mp3.html
latest cd: Goh Lee Kwang & Tim Blechmann: Drone
http://www.geocities.com/gohleekwangtimblechmannduo/
After one look at this planet any visitor from outer space
would say "I want to see the manager."
William S. Burroughs
Dear friends,
The real problem is: I need DMA to fill the
codec controller FIFO every 600us (8 samples still
in the codec FIFO and the codec controller half full; at
that point it will generate an interrupt). This will
generate a sine wave which is used to control another
device. If the codec controller does not get data in this
period, it sends the last available sample again and again,
thus producing a straight line. In normal Linux (2.4
kernel), interrupts and process scheduling may cause
problems. This is the reason for porting the Linux audio
driver to RTLinux, and it is not possible to access a
normal Linux driver from RTLinux threads.
Nobin Mathew
__________________________________________________
Do You Yahoo!?
Tired of spam? Yahoo! Mail has the best spam protection around
http://mail.yahoo.com
I have recently acquired a XITEL INPORT device, to capture music from my
stereo system, and although [as shown below] it seems to be supported by
the FC3 kernel, I have not yet managed to find any way to access the USB
data stream.
Any clues on how I should proceed?
arecord -l
**** List of CAPTURE Hardware Devices ****
card 0: Solo1 [ESS ES1938 (Solo-1)], device 0: es-1938-1946 [ESS Solo-1]
Subdevices: 1/1
Subdevice #0: subdevice #0
card 1: Audio [USB Audio], device 0: USB Audio [USB Audio]
Subdevices: 1/1
Subdevice #0: subdevice #0
/proc/asound/card1
total 0
-r--r--r-- 1 root root 0 Mar 29 17:00 id
-rw-r--r-- 1 root root 0 Mar 29 17:00 oss_mixer
dr-xr-xr-x 3 root root 0 Mar 29 17:00 pcm0c
dr-xr-xr-x 3 root root 0 Mar 29 17:00 pcm0p
-r--r--r-- 1 root root 0 Mar 29 17:00 stream0
-r--r--r-- 1 root root 0 Mar 29 17:00 usbbus
-r--r--r-- 1 root root 0 Mar 29 17:00 usbid
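One thing worth trying, hedged: since the arecord -l listing above shows the USB device as card 1, you can capture from it directly through ALSA's plughw layer (which handles format conversion), without needing raw USB access. Something like:

```shell
# Record 10 seconds of CD-quality stereo from card 1 (the USB device)
arecord -D plughw:1,0 -f cd -d 10 test.wav

# Play it back to verify the capture worked
aplay test.wav
```

If plughw:1,0 produces silence or errors, check the supported rates/formats in /proc/asound/card1/stream0 and adjust -f and -r accordingly.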
Dear friends,
I am porting the PXA audio driver to RTLinux (RTLinux
Pro). I am mainly using pxa-ac97.c and
pxa-audio.c (eliminating sound_core.c). I will try to
register the two devices /dev/dsp and /dev/mixer in
it. Since the RTLinux kernel APIs are not well documented,
it seems very difficult to port the DMA and
synchronization code.
Is it possible to use DMA in RTLinux (since it needs
dynamic memory allocation)?
I am sending the source code. Can anybody tell me how
to port this?
Nobin Mathew
Hi.
I released version 2.2.0 of ZynAddSubFX.
News:
- the VST version of ZynAddSubFX is removed
from the installation until it is more stable
(soon, I hope :) )
- the instrument banks now contain over 300
high quality instruments
- added an "Apply" button to the OscilGen window
for PADsynth
- added another parameter to ADsynth that
controls the amount of detune across all voices
- adaptive harmonics postprocessing
- improved the VU-meter and added an RMS plot
- Dvorak support for the Virtual Keyboard
- many bugs fixed and code cleanups
ZynAddSubFX is an open-source software synthesizer for
Linux and Windows.
You can download it from
http://zynaddsubfx.sourceforge.net
Paul
The Open-Source Speech Recognition Initiative is holding its first annual conference, to focus on the state of speech recognition geared toward Linux and the open-source community.
DATES: Thursday evening, April 28, to Saturday noon, April 30, 2005.
LOCATION: The Old Salt Inn / Lamie's Tavern, 490 Lafayette Road
Hampton, NH 03842 USA.
http://www.oldsaltnh.com
Toll Free: 1.800.805.5050
Direct: 1.603.926.0330
Fax: 1.603.929.0017
Pre-registration is not required for day sessions. However, room reservations are due by April 15th to get the OSSRI rate ($89) and in-room DSL.
CONFERENCE SCHEDULE:
THURSDAY, APRIL 28
6:00 - 8:00 PM Registration, cheeses and crackers, socializing. (Cash bar.)
FRIDAY, APRIL 29
8:00 - 9:00 Continental Breakfast
9:00 - 9:15 Opening Announcements and Welcome. (Susan Cragin.)
10:00 - 11:00 Discussion of the State of Speech Recognition today, and the issues developing a Linux model. (Led by Eric Johansson.)
11:00 - 12:00 Discussion of Small-Dialogue Applications. (Led by Turner Rentz.)
12:00 - 1:00 Lunch (Included in $55 fee.)
1:00 - 2:00 Demo / Video - NASA's Uses of Speech Recognition. (John Dowding, NASA.)
2:00 - 3:00 Breakout Session - Creating a Tool That Prompts for Speech.
3:00 - 4:00 Probabilistic Grammar-Based Language Models for the Open-Microphone Task.
(John Dowding.)
4:00 - 5:00 Breakout Session - Quality Problems in Speech-Gathering.
SATURDAY, APRIL 30
8:00 - 9:00 Continental Breakfast
9:00 - 10:00 Sphinx as an engine - Current Status and Future Developments. (Willie Walker, Sun Microsystems / Sphinx 4 Developer.)
10:00 - 11:00 Deaf-Voiced Communication Using a Phonetic Text Display. (Tristram Metcalf.)
11:00 - 11:15 Check-out. Bring luggage to conference room.
11:15 - 12:00 Open.
COST: $55 per person includes entire conference and lunch on Saturday. $15 to attend each half-day (no lunch).
ABOUT THE LOCATION:
Hampton, New Hampshire is located about 1 hour north of Boston, right off Route 95. It is a 5-minute drive to Hampton Beach, a summertime resort, and about 15 minutes to Portsmouth NH, a charming seacoast village with many restaurants and shops.
Swimmers should bring wetsuits.
Questions, please feel free to contact me directly:
Susan Cragin, Clerk, OSSRI
susancragin(a)earthlink.net
781-416-1987 home
781-801-6829 cell