snip
> > So with a free impulse-response all should be well?
> >
> > But why use a VST plugin under linux to apply an IR when there is already
> > jconvolver present?
> >
> > *me wonders*
> >
> > Arnold
> >
>
> Hi Arnold,
>
> Some reasons come to this mind:
>
> A functional GUI.
>
> Better sound.
>
> Choice.
>
> I prefer the color yellow with my reverb's UI.
>
> VST plugins mean the world to me.
>
>
> Just some possible answers to your question. They are not necessarily
> valid, true, or believable, they're just possible replies.
>
> HTH,
>
> dp
>
Hi
I have broken out the jconv settings GUI from guitarix and made it available
as a stand-alone app.
It acts as a (stereo) host for jconv/jconvolver, to create/save/load/run
configuration files for use with jconvolver.
Admittedly this GUI doesn't cover the full feature set of jconv/jconvolver,
but maybe one or the other of you will find it useful.
Additionally, jcgui provides master gain, left/right gain, balance, left/right
delay and tone (bass/middle/high) controls.
It's designed for use in a realtime environment (JACK) to process data, not
for applying an IR to a file.
Get it here:
http://sourceforge.net/projects/jcgui/
Have fun,
hermann
The FFADO team is proud and happy to announce the release of FFADO 2.0.0.
As the release candidates have been around for almost one year now
without a significant amount of bug reports we feel confident that the
current code-base has matured. Around the end of November the 1000th
device was registered as being used with FFADO, which seemed like a
nice number to trigger the release.
Furthermore, on December 2 Linux kernel version 2.6.32 was
released. This version fixes the new kernel FireWire drivers such that
they are compatible with FFADO. So once the distributions pick up this
kernel the old/new kernel stack confusion should be history.
Thanks go out to the vendors that provided us with gear to support
the 2.0 release: Echo Digital Audio, Edirol, Ego Systems Inc, Focusrite,
Mackie and Terratec. Kudos for their early-bird support!
Special thanks also go to BridgeCo and TC Applied for providing us with
their development platforms and for helping with vendor contacts. Their
support means that FFADO covers the most widely used platforms for
FireWire audio and that we can quickly implement support for new devices.
Looking ahead to the 2.1 release we can announce that we have
implemented (basic) support for additional devices from Focusrite,
Behringer, Stanton and TC Electronic. We plan to move to beta-testing
2.1 fairly soon as development on it has been ongoing for more than a
year now. Additionally, work is being done on the RME devices, but it's
not yet known when that will be finished. Support for some other vendors
is in the pipeline, so stay tuned for more announcements.
A second major development is the move of the streaming infrastructure
to kernel space. A kernel-space implementation will bring significant
improvements with respect to reliability and efficiency. Furthermore it
will allow to expose an ALSA interface, meaning that the scope of
FireWire audio on Linux is extended significantly. Thanks to the Google
Summer of Code and the Linux Foundation, work on this has been done
during the summer. The code is not yet ready for use, but things are moving.
More information can be found here:
http://www.ffado.org/?q=release/2.0.0
For the eager, a direct download link:
http://www.ffado.org/files/libffado-2.0.0.tar.gz
On behalf of the FFADO team,
Pieter Palmers
Thanks Daniel
I haven't looked at the LV2 specs; I have no idea if it is possible or
not.
hermann
On Friday, 18 Dec 2009, at 16:42 -0200, Daniel Roviriego wrote:
> Great job!!
> I'm using right now and its a awsome tool!!
> Is there any chance of building it as a lv2 plugin ? Is there
> difficult ?
>
> Thanks a lot for sharing!!
>
> Daniel D2 Roviriego
>
> 2009/12/18 hermann <brummer-(a)web.de>
> snip
>
> > > So with a free impulse-response all should be well?
> > >
> > > But why use a VST plugin under linux to apply an IR when
> there is already
> > > jconvolver present?
> > >
> > > *me wonders*
> > >
> > > Arnold
> > >
> >
> > Hi Arnold,
> >
> > Some reasons come to this mind:
> >
> > A functional GUI.
> >
> > Better sound.
> >
> > Choice.
> >
> > I prefer the color yellow with my reverb's UI.
> >
> > VST plugins mean the world to me.
> >
> >
> > Just some possible answers to your question. They are not
> necessarily
> > valid, true, or believable, they're just possible replies.
> >
> > HTH,
> >
> > dp
> >
>
> Hi
>
> I have break out the jconv settings gui from guitarix and make
> it available as
> stand alone app.
> It act as a (stereo) Host for jconv/jconvolver, to
> create/save/load/run
> configuration files for the use with jconvolver.
> Indeed this Gui didn't cover the full advance of
> jconv/jconvolver, but may be
> the one or other found it useful.
>
> Additional jcgui provide master gain, left/right gain,
> balance, left/right delay and
> tone bass/middle/high controllers.
> It's designed to use in realtime environment (jack) to
> processing data, not for apply
> to a file.
>
> get it here :
> http://sourceforge.net/projects/jcgui/
>
> have fun hermann
>
> _______________________________________________
> Linux-audio-user mailing list
> Linux-audio-user(a)lists.linuxaudio.org
> http://lists.linuxaudio.org/mailman/listinfo/linux-audio-user
>
Hey all,
I've been keeping myself busy lately, mostly with Python and OSC,
and I'm using multiple clients/controllers to send messages to a
sampler with an OSC interface.
From this experience it seems to me a good idea to have a kind of "central"
place where all audio programs announce their OSC port, program name and
version, so the Linux audio desktop can be a little more "coherent", for
lack of a better word.. ;-)
We could create a kind of "Master OSC Host" which would keep track of
which clients are running and which ports they are using. This would be
easiest to do if EACH client "registered" itself on an agreed port.
So if a client wants to read some information, it could query the
"Master OSC Host" with a standardised set of questions about the current
state of QTractor/Ardour/<Any OSC capable program>.
This would mean that any program that wants to find out which JACK B:B:T
or frame we're on, or whether there have been xruns, wouldn't have to be a
JACK client, because that's been abstracted into the "Master OSC Host".
I think this method would allow better inter-program operability, as any
"useful" information that one program could share, others could read.
This would be most effective if we also requested that programs follow a
specific naming convention for the features each program has.
EG:
Save : /<prog_name>/save_state
Load: /<prog_name>/load_state
etc
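To make the idea concrete, here is a minimal sketch of the bookkeeping such a "Master OSC Host" might do, in plain Python with the actual OSC transport left out. All names here (the class, its methods, the example port numbers) are hypothetical illustrations of the proposal, not an existing protocol or library:

```python
class OscRegistry:
    """Track which OSC-capable programs are running and on which ports."""

    def __init__(self):
        # prog_name -> (port, version), filled in as clients announce themselves
        self.clients = {}

    def register(self, prog_name, port, version):
        # A client announces itself on the agreed registry port.
        self.clients[prog_name] = (port, version)

    def unregister(self, prog_name):
        # A client withdraws cleanly (or is reaped after a timeout).
        self.clients.pop(prog_name, None)

    def address(self, prog_name, feature):
        # Build an address following the proposed /<prog_name>/<feature>
        # naming convention, plus the port to send it to.
        if prog_name not in self.clients:
            raise KeyError("%s is not registered" % prog_name)
        port, _version = self.clients[prog_name]
        return port, "/%s/%s" % (prog_name, feature)


reg = OscRegistry()
reg.register("ardour", 3819, "2.8")
print(reg.address("ardour", "save_state"))  # (3819, '/ardour/save_state')
```

A real implementation would sit behind an OSC server on the agreed port and answer the "standardised set of questions" over OSC, but the registry itself is just this small table.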
I'd like to get the community's feelings towards an initiative like this;
feedback/criticism/opinions welcome!
-Harry Haaren
Lennart Poettering:
>
> On Fri, 11.12.09 14:24, Kjetil S. Matheussen (k.s.matheussen(a)notam02.no) wrote:
>
>> After about 10 years of frustration, I'm a bit tired of alsa.
>>
>> Does anyone know if OSS supports proper software mixing?
>> Is the alsa emulation working somewhat okay?
>> Are there any problems configuring the machine to use more than one card?
>
> One should note that OSS on Linux is a dead-end. The latest two Fedora
> releases have disabled kernel support for the OSS APIs by default, and
> Ubuntu is expected to do this now too. (or already did?)
>
> Also, note that the big distributors have folks working on ALSA. OTOH
> nobody who has any stakes in Linux supports OSS anymore or even has
> people working on it.
>
No. Not providing mixing for all devices is a design fault
in ALSA. I'm going to install OSS as soon as I get time.
Hi devs,
I would like to make a humble request:
I would like a simple, stand-alone application for hosting native Linux
VSTs (.so). It would have very simple GUI requirements (GTK?), be simple
to use, and be multichannel like ghostess is, able to load one plugin per
channel. It would support JACK MIDI.
I would be willing to donate to such a project. I know $100 USD isn't
enough to feed anyone's kids, but I would be willing to give that amount
(today, to a registered PayPal account) to see this simple app written by
someone who has a track record of doing good projects (Nedko immediately
jumps to mind, but I'm sure there are others). Is there someone who would
like to do this?
(energyXT hosts native plugins, as do Renoise and Ardour, but I just
want an app that can be called up live. Jost exists, but I've never
been able to get it to play for more than a few minutes without
crashing... maybe something else exists that I'm not aware of?)
--
Josh Lawrence
http://www.hardbop200.com
Lennart Poettering:
>> by itself plus providing low-latency performance (with mixing) when
>> that is required. Leaving out mixing to third-parties, plus exposing
>> a very complicated low-level API and a complicated
>> plugin/configuration system (which probably has taken a more time to
>> develop than implementing a proper mixing engine), has created lots
>> of chaos.
>
> You cannot blame the ALSA folks that they didn't supply you a full
> audio stack from top to bottom from day one with the limited amount of
> manpower available. Just accept that there are different layers in the
> stack and that different projects need to tackle them. And in the end
> it doesn't matter which part of the stack has what name and is written
> by whom.
>
Please read what I wrote one more time. I pointed out a very low-level
API, no mixing, and a complicated configuration system. This causes
software programmed against ALSA very often to be buggy, or simply not
to run, because ALSA doesn't do mixing. I'm certainly not blaming the
ALSA folks for not doing enough work (quite the opposite); instead I'm
pointing out some design decisions which I think have been quite
disastrous. (As far as I know, mixing for all devices has never been on
any TODO list in ALSA, because they think it's wrong.)
Lennart Poettering:
>
> On Thu, 17.12.09 13:52, Kjetil S. Matheussen (k.s.matheussen(a)notam02.no) wrote:
>
>> Mixing works just fine, even when using ASIO. Maybe you have to start
>> the asio program first though, I don't know. But still, there's no
>> reason why you shouldn't have a global option, lets say 256 frames
>> 48000Hz, that everything mixes down to, and then software which needs
>> hardcore low-level performance must obey to that setting.
>
> Uh, that's a great way to burn your battery.
>
> If you care about more than pro audio, then you want to dynamically
> adjust the sleep times based on the requirements of the clients
> connected. That means you cannot use fixed sized hardware fragments
> anymore, but need to schedule audio more dynamically using system
> timers.
>
> This in fact is where most of the complexity in systems such as
> PulseAudio stems from.
>
Okay, I didn't know that. But this is still no reason
why ALSA shouldn't take care of mixing/scheduling/etc.
by itself, plus provide low-latency performance
(with mixing) when that is required. Leaving
mixing to third parties, plus exposing a very
complicated low-level API and a complicated
plugin/configuration system (which has probably
taken more time to develop than implementing
a proper mixing engine), has created lots of chaos.
After about 10 years of frustration, I'm a bit tired of ALSA.
Does anyone know if OSS supports proper software mixing?
Is the alsa emulation working somewhat okay?
Are there any problems configuring the machine to use more than one card?