Lee Revell:
> > Won't help if the code is to be part of a GPL'd
> > application.
>
> The Linux kernel is a GPL'ed application yet Nvidia
> gets away with linking into it.
Quite different. Anyone can distribute the kernel
without caring about the existence of the nVidia
drivers. But if an application includes the VSTSDK, it
presumably isn't complete without it.
Chris
Screenshot:
http://www.kvraudio.com/forum/viewtopic.php?
t=114488&postdays=0&postorder=asc&highlight=linux&start=105
I will be interested to see just how good its audio handling
capabilities are.
--p
Lee Revell writes:
> On Fri, 2006-01-27 at 15:57 +0100, Michael Bohle wrote:
> > But anyway, VST on Linux is dead now, because
> > most of the users are not
> > able to compile it for themselves.
>
> Wrong. You just need to write a wrapper that
> handles the compiling.
Won't help if the code is to be part of a GPL'd
application.
Also, I think (?) you have to register to download the
VSTSDK.
Chris
One Harold Chu on LKML is insisting that POSIX requires
pthread_mutex_unlock to reschedule if other threads are waiting on the
mutex, and that even if the calling thread immediately tries to lock the
mutex again another thread must get it. I contend that both of these
assertions are wrong - first, I just don't read the standard that way,
and second, it would lead to obviously incorrect behavior - unlocking a
mutex would no longer be an RT-safe operation. What would be the point
of trylock() in RT code if unlocking is going to cause a reschedule
anyway?
Can anyone back me up on this?
Lee
Hello
I started a session in Ardour - drag-and-dropped a .wav file, then recorded (successfully) from a JACK input.
Checking the session folder, Ardour appears to record to disk as it goes along (on the fly).
I presume Ardour can also route the incoming sound to its outputs, for monitoring.
Does anyone know what mechanism Ardour uses to do this?
(I'll walk blindly into speculation. Ardour uses a JACK ringbuffer on its JACK input port, in the approved way - but that only leaves one read on the ringbuffer output...)
Robert
Hacked some test code here and discovered something "interesting" with
lo_server_add_method() and method handling. If I try to do it like in
the examples and add the default/debug ("match all") method first, it
gets called for every incoming message, before the real handler is
called. That is, I see an error message - and then the correct method
handler is invoked anyway.
Looking quickly at the code, I'd actually expect this behavior, as
lo_server_add_method() does indeed add methods at the end of the
list.
Trying the examples again, I realize that they demonstrate this
behavior as well, so maybe it's not just my code doing something
funny. :-)
This is with liblo 0.22 on Gentoo/AMD64.
//David Olofson - Programmer, Composer, Open Source Advocate
.------- http://olofson.net - Games, SDL examples -------.
| http://zeespace.net - 2.5D rendering engine |
| http://audiality.org - Music/audio engine |
| http://eel.olofson.net - Real time scripting |
'-- http://www.reologica.se - Rheology instrumentation --'
Hello everybody !
I've just added a resampling function to my code thanks to the
excellent work of Erik de Castro Lopo (thanks a lot !). Combined with
libsndfile (thanks again) it is really easy to load any sound file I
want. But I'd like to make sure I'm using it correctly.
I process my input data in one pass using src_simple() and I have to
compute the length of the output data buffer beforehand. So I did
something like this:
out_len = (long int) ceil((double) in_len * ratio);
It seems that my output buffer is always one frame too big (I checked
this by reading the output_frames_gen field of the SRC_DATA structure
after the processing is done).
Is it safe to assume that using floor() instead of ceil() will never
leave the output buffer too short in some cases?
I can live with wasting one malloced float, but I'd like to know
whether this can be done in a prettier way.
Thanks.
--
David
Announcing the 20060122 release of WhySynth, a DSSI softsynth
plugin.
New since the last major release:
* A new oscillator mode, based on Nasca O. Paul's gorgeous
PADsynth algorithm.
* A new filter mode, essentially the low-pass filter from amSynth.
* A new dual delay effect.
* Improved and extended wavetables.
* More patches.
* Lots of cleanups and bug fixes, including fixes for more stable
operation especially under Rosegarden, and for compilation on
Mac OS X 10.4 'Tiger'.
Find WhySynth here:
http://home.jps.net/~musound/whysynth.html
More information on the DSSI plugin standard, available hosts
and plugins can be found here:
http://dssi.sourceforge.net/
WhySynth is written and copyright (c) 2006 by Sean Bolton,
under the GNU General Public License, version 2.
Hi all,
is there a recommended way to write/read additional chunks in
WAV files using libsndfile (assuming it's possible at all - I
didn't find any hints about this in the docs)?
What I need in particular is some way to calibrate the time
axis - i.e. to say frame #N corresponds to t = 0, and some
other similar info, mostly sample indices.
TIA,
--
FA
Hi all,
I am new to this list, living in Switzerland and working mainly with
electronic music. Some time ago I wrote a monophonic realtime note
recognition program on an embedded DSP56k system, and I want to port it
to Linux.
It will take me some time to do that, because I know almost nothing
about C and C++ programming, or about the ALSA and JACK libraries. But I
know my algorithm, and it is working just fine on my DSP.
The latency is very low - 1+(~1/4) periods of the sound, where the
(~1/4) term depends on the harmonic content of the sound. I believe
it is worth turning it into a JACK/ALSA application.
A GUI will be needed, giving the possibility to change, save and recall
some parameters of the recognition loop on a per-instrument basis.
Can you recommend a good MIDI library - if possible, one that is simple
to use and fast? I will only use basic functionality, such as sending
MIDI notes to the sound server.
Ciao,
Dominique