Thanks Daniel,
I haven't looked at the LV2 specs, so I have no idea if it is possible or
not.
hermann
Am Freitag, den 18.12.2009, 16:42 -0200 schrieb Daniel Roviriego:
> Great job!!
> I'm using it right now and it's an awesome tool!!
> Is there any chance of building it as an LV2 plugin? Is it
> difficult?
>
> Thanks a lot for sharing!!
>
> Daniel D2 Roviriego
>
> 2009/12/18 hermann <brummer-(a)web.de>
> snip
>
> > > So with a free impulse-response all should be well?
> > >
> > > But why use a VST plugin under linux to apply an IR when
> there is already
> > > jconvolver present?
> > >
> > > *me wonders*
> > >
> > > Arnold
> > >
> >
> > Hi Arnold,
> >
> > Some reasons come to this mind:
> >
> > A functional GUI.
> >
> > Better sound.
> >
> > Choice.
> >
> > I prefer the color yellow with my reverb's UI.
> >
> > VST plugins mean the world to me.
> >
> >
> > Just some possible answers to your question. They are not
> necessarily
> > valid, true, or believable, they're just possible replies.
> >
> > HTH,
> >
> > dp
> >
>
> Hi
>
> I have broken the jconv settings GUI out of guitarix and made
> it available as a
> stand-alone app.
> It acts as a (stereo) host for jconv/jconvolver, to
> create/save/load/run
> configuration files for use with jconvolver.
> Admittedly this GUI doesn't cover all the advanced features of
> jconv/jconvolver, but maybe
> someone will find it useful.
> 
> Additionally, jcgui provides master gain, left/right gain,
> balance, left/right delay and
> bass/middle/high tone controls.
> It's designed for use in a realtime environment (JACK) to
> process data, not to be applied
> to a file.
>
> get it here :
> http://sourceforge.net/projects/jcgui/
>
> have fun hermann
>
> _______________________________________________
> Linux-audio-user mailing list
> Linux-audio-user(a)lists.linuxaudio.org
> http://lists.linuxaudio.org/mailman/listinfo/linux-audio-user
>
Hey all,
I've been keeping myself busy lately, mostly with Python and OSC,
and I'm using multiple clients/controllers to send messages to a
sampler with an OSC interface.
From this experience it seems to me a good idea to have a kind of "central"
place where all audio programs "announce" their OSC port, program name, and
version,
so the Linux audio desktop can be a little more "coherent", for lack of a
better word... ;-)
We could create a kind of "Master OSC Host" which would keep track of
which clients are running and which ports they are using. This would be
easiest
to do if EACH client "registered" itself on an agreed port.
So if a client wants to read some information, it could query the
"Master OSC Host" with a standardised set of questions about the current
state of QTractor/Ardour/<any OSC-capable program>.
This would mean that a program that wants to find out which JACK B:B:T or
frame we're on,
or whether there have been xruns, wouldn't have to be a JACK client, because that
information has been abstracted into
the "Master OSC Host".
I think this method would allow better inter-program operability, as any
"useful" information
that one program shares, others could read.
This would be most effective if we also asked that programs follow a
specific naming
convention for the features each program has.
E.g.:
Save: /<prog_name>/save_state
Load: /<prog_name>/load_state
etc.
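
To make the idea a bit more concrete, here is a rough client-side sketch in C
using liblo (the equivalent would only be a few lines of Python with an OSC
library). The registry port (7770), the /register path, and its argument
layout are pure assumptions on my part; no such protocol exists yet.

/* Hypothetical sketch: a client announcing itself to the proposed
 * "Master OSC Host" with liblo.  Port 7770, the /register path and
 * its arguments are all invented for this example. */
#include <lo/lo.h>
#include <stdio.h>

int main(void)
{
    /* assumed well-known port where the Master OSC Host listens */
    lo_address registry = lo_address_new(NULL, "7770");
    if (!registry)
        return 1;

    /* announce program name, version, and the OSC port we listen on */
    if (lo_send(registry, "/register", "ssi",
                "my_sampler", "0.1.0", 9000) == -1)
        fprintf(stderr, "registration failed: %s\n",
                lo_address_errstr(registry));

    /* later, ask another program to save, following the proposed
     * /<prog_name>/save_state naming convention (session name is
     * likewise an invented argument) */
    lo_send(registry, "/qtractor/save_state", "s", "session1");

    lo_address_free(registry);
    return 0;
}

(Compile with something like: gcc announce.c -o announce `pkg-config --cflags --libs liblo`.)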
I'd like to get the community's feelings towards an initiative like this;
feedback/criticism/opinions welcome!
-Harry Haaren
Lennart Poettering:
>
> On Fri, 11.12.09 14:24, Kjetil S. Matheussen (k.s.matheussen(a)notam02.no) wrote:
>
>> After about 10 years of frustration, I'm a bit tired of alsa.
>>
>> Does anyone know if OSS supports proper software mixing?
>> Is the alsa emulation working somewhat okay?
>> Are there any problems configuring the machine to use more than one card?
>
> One should note that OSS on Linux is a dead-end. The latest two Fedora
> releases have disabled kernel support for the OSS APIs by default, and
> Ubuntu is expected to do this now too. (or already did?)
>
> Also, note that the big distributors have folks working on ALSA. OTOH
> nobody who has any stakes in Linux supports OSS anymore or even has
> people working on it.
>
No. Not providing mixing for all devices is a design fault
in alsa. I'm going to install OSS as soon as I get time.
Hi devs,
I would like to make a humble request:
I would like to have a simple, stand-alone application for hosting
native Linux VSTs (.so). It would have very simple GUI requirements
(GTK?), be simple to use, and be multichannel like ghostess is, able to
load a plugin per channel. It would support JACK MIDI.
I would be willing to donate to such a project...I know $100 USD isn't
enough to feed anyone's kids, but I would be willing to give that
amount (today, to a registered paypal acct) to see this simple app
written by someone who has a track record of doing good projects
(Nedko immediately jumps to mind, but I'm sure there are others). Is
there someone who would like to do this?
(energyXT hosts native plugins, as do Renoise and Ardour, but I just
want an app that can be called up live. Jost exists, but I've never
been able to get it to play for more than a few minutes without
crashing...maybe something else exists that I'm not aware of?)
--
Josh Lawrence
http://www.hardbop200.com
Lennart Poettering:
>> by itself plus providing low-latency performance (with mixing) when
>> that is required. Leaving mixing to third parties, plus exposing
>> a very complicated low-level API and a complicated
>> plugin/configuration system (which probably has taken more time to
>> develop than implementing a proper mixing engine), has created lots
>> of chaos.
>
> You cannot blame the ALSA folks that they didn't supply you a full
> audio stack from top to bottom from day one with the limited amount of
> manpower available. Just accept that there are different layers in the
> stack and that different projects need to tackle them. And in the end
> it doesn't matter which part of the stack has what name and is written
> by whom.
>
Please read what I wrote one more time. I pointed out a very low-level
API, no mixing, and a complicated configuration system. This
causes software (programmed against ALSA) very often to be buggy, or
simply not run because alsa doesn't do mixing. I'm certainly not
blaming the ALSA folks for not doing enough work (quite the
opposite), but instead I'm pointing out some design decisions
which I think have been quite disastrous. (As far as I know, mixing
for all devices has never been on any TODO list in ALSA because
they think it's wrong)
Lennart Poettering:
>
> On Thu, 17.12.09 13:52, Kjetil S. Matheussen (k.s.matheussen(a)notam02.no) wrote:
>
>> Mixing works just fine, even when using ASIO. Maybe you have to start
>> the ASIO program first though, I don't know. But still, there's no
>> reason why you shouldn't have a global option, let's say 256 frames at
>> 48000 Hz, that everything mixes down to, and then software which needs
>> hardcore low-level performance must obey that setting.
>
> Uh, that's a great way to burn your battery.
>
> If you care about more than pro audio, then you want to dynamically
> adjust the sleep times based on the requirements of the clients
> connected. That means you cannot use fixed sized hardware fragments
> anymore, but need to schedule audio more dynamically using system
> timers.
>
> This in fact is where most of the complexity in systems such as
> PulseAudio stems from.
>
Okay, I didn't know that. But this is still no reason
why ALSA shouldn't take care of mixing/scheduling/etc.
by itself plus providing low-latency performance
(with mixing) when that is required. Leaving
mixing to third parties, plus exposing a very
complicated low-level API and a complicated
plugin/configuration system (which probably
has taken more time to develop than implementing
a proper mixing engine), has created lots of chaos.
After about 10 years of frustration, I'm a bit tired of alsa.
Does anyone know if OSS supports proper software mixing?
Is the alsa emulation working somewhat okay?
Are there any problems configuring the machine to use more than one card?
< However you can also use them to
< implement circular FIFOs for example, which is a trick used all the
< time in audio as well as in kernel programming.
For anyone interested...
The Effo libraries seem to have a fair choice of lock-free queue
and ringbuffer implementations in the addon project:
http://code.google.com/p/effoaddon/
But I have to admit that I find these concepts quite confusing...
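
To illustrate the circular-FIFO trick mentioned in the quote above, here is a
minimal single-producer/single-consumer ring buffer sketch in C. It is not
taken from Effo or any other library; it simply leans on GCC's
__sync_synchronize() full barrier and assumes exactly one writer thread and
one reader thread.

/* Minimal SPSC ring buffer (illustration only).  One thread calls
 * rb_push(), one other thread calls rb_pop(); no locks are needed
 * because each index is written by only one side. */
#include <stddef.h>

#define RB_SIZE 1024                 /* must be a power of two */

typedef struct {
    float           data[RB_SIZE];
    volatile size_t write_pos;       /* advanced only by the producer */
    volatile size_t read_pos;        /* advanced only by the consumer */
} ringbuffer_t;

/* producer side: returns 0 if the buffer is full */
static int rb_push(ringbuffer_t *rb, float x)
{
    if (rb->write_pos - rb->read_pos == RB_SIZE)
        return 0;                    /* full */
    rb->data[rb->write_pos & (RB_SIZE - 1)] = x;
    __sync_synchronize();            /* publish the data before the index */
    rb->write_pos++;
    return 1;
}

/* consumer side: returns 0 if the buffer is empty */
static int rb_pop(ringbuffer_t *rb, float *x)
{
    if (rb->read_pos == rb->write_pos)
        return 0;                    /* empty */
    *x = rb->data[rb->read_pos & (RB_SIZE - 1)];
    __sync_synchronize();            /* finish the read before freeing the slot */
    rb->read_pos++;
    return 1;
}

This is essentially the pattern that jack_ringbuffer_t and most audio FIFOs
boil down to: the producer only ever moves write_pos forward and the consumer
only ever moves read_pos forward, so neither can corrupt the other's view.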
Recently on this list, Paul referred to an "atomic integer
swap" and an "atomic pointer swap." This was a new concept
to me (and possibly others), and this e-mail shares what
I've learned.
If you access a variable from multiple threads -- even a
built-in type like 'int' -- it is important to control
access to the variable with a mutex, semaphore, or an atomic
operation.
Alexander Sandler, on his blog, wrote a couple of good
articles on the subject:
"Do you need a mutex to protect an int?"
http://www.alexonlinux.com/do-you-need-mutex-to-protect-int
"Multithreaded simple data type access and atomic
variables"
http://www.alexonlinux.com/multithreaded-simple-data-type-access-and-atomic…
The first article contains code that calculates a wrong
answer on multiprocessor machines. I've attached a similar
example that will even fail on a single-processor machine.
There is a wealth of reading material on using Mutexes and
Semaphores. However, information on atomic operations
appears to be sparse and hard-to-follow. So, here's what
I've found:
+ At the moment, there is no built-in support in
C/C++ for atomic operations. You will need to use
a library, compiler extension, or write your own
in assembly code.
+ The GCC compiler has the built-in __sync_*()
functions[1] that provide atomic operations.
Note that the attached example is using this.
+ glib provides the g_atomic_*() functions[2].
+ Qt 4 has the q_atomic_*() functions.[3] While
they are accessible, they are /not/ a part of
their stable, public API.
+ The next version of ISO C++ (codenamed C++0x)
is expected to have support for atomic operations
(E.g. the std::atomic<T> template) and memory
barriers. It may even require that all built-in
types be atomic.
+ In the x86 instruction set, these are usually
implemented using the 'LOCK' instruction prefix.[5]
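As a quick illustration of the GCC route (this is only a sketch along the
same lines as the attached example, not the attachment itself), the
following counter can be bumped from two threads without a mutex:

/* Sketch: incrementing a shared counter from two threads with the
 * GCC __sync builtins.  Build with: gcc -O2 -pthread counter.c */
#include <pthread.h>
#include <stdio.h>

static volatile int counter = 0;         /* shared between threads */

static void *worker(void *arg)
{
    int i;
    (void)arg;                           /* unused */
    for (i = 0; i < 1000000; i++)
        __sync_fetch_and_add(&counter, 1);   /* atomic read-modify-write */
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    /* prints 2000000 every time; with a plain counter++ in the loop it
     * usually comes out short on a multiprocessor machine */
    printf("counter = %d\n", counter);
    return 0;
}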
When using atomic operations, perhaps the best advice I
found is near the end of Sandler's second article:
"When using atomic variables, some extra
precautions have to be taken.... There is nothing
that prevents you from incrementing value of the
atomic variable with __sync_fetch_and_add() as I
just demonstrated and later in the code doing same
thing with regular ++ operator.
"To address this problem, I strongly suggest
wrapping around atomic functions and variables with
either ADT in C or C++ class."[4]
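Following that advice, a wrapper might look something like this (a sketch of
the "ADT in C" option, with invented names):

/* Tiny atomic-counter ADT around the GCC builtins, so the value can't
 * accidentally be touched with a plain ++ elsewhere in the code. */
typedef struct {
    volatile int value;
} atomic_counter_t;

static inline int atomic_counter_add(atomic_counter_t *c, int n)
{
    return __sync_add_and_fetch(&c->value, n);    /* returns the new value */
}

static inline int atomic_counter_get(atomic_counter_t *c)
{
    return __sync_fetch_and_add(&c->value, 0);    /* atomic read */
}

static inline int atomic_counter_cas(atomic_counter_t *c, int expected, int desired)
{
    /* non-zero if the swap happened; compare-and-swap is one common
     * flavour of the atomic swaps mentioned at the top of this mail */
    return __sync_bool_compare_and_swap(&c->value, expected, desired);
}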
Peace,
Gabriel
[1] http://gcc.gnu.org/onlinedocs/gcc-4.1.0/gcc/Atomic-Builtins.html
[2] http://www.gtk.org/api/2.6/glib/glib-Atomic-Operations.html
[3] http://doc.trolltech.com/4.3/atomic-operations.html
See also the Qt header file QtCore/qatomic_i386.h, and
its brothers.
[4] http://www.alexonlinux.com/multithreaded-simple-data-type-access-and-atomic…
[5] http://siyobik.info/index.php?module=x86&id=159