Hi,
If there are any users of the new RME HDSP 9652 card who have been able
to successfully install and use it, would you please get in touch with
me and let me know what your system configurations are? I understand
that there are at least a couple of you out there somewhere. Please let
me know what distribution, kernel, C compiler, ALSA revision and
anything else you think might be important.
Using the Planet CCRMA flow I am unable to get this card configured and
running. We believe that we have patched the ALSA layer correctly to
add the 0x64 check, but I am still unsuccessful.
Thanks in advance.
With best regards,
Mark
> -----Original Message-----
> From: Joshua Haberman [mailto:joshua@haberman.com]
> Paul Davis <paul(a)linuxaudiosystems.com> wrote:
> > >Has anybody actually tried to get gtk+ and qt working in the same
> > >application?
> >
> > its been done.
> >
> > it was ugly as sin.
>
> This is a strong counterexample to the oft-repeated maxim that
> "choice is the strength of OSS."
??? This is a property/feature of X.
And BTW, I think you would run into the same kind of problems in MS
Windows (remember, a lot of the toolkits available for X are also
available for MS Windows, and there are various MS Windows-specific
toolkits as well - can you combine them easily in one program?)
> But I guess the fact that 10 random linux audio applications are
> written to 10 different APIs that can't interoperate is another.
What do you mean? _One_ program will probably use several different
APIs, depending on what it needs to interface to (e.g. there would be
the Linux (or POSIX) API (system calls), a standard library API (most
programming languages have some standard library), one (or more) sound
APIs, some UI API...).
Just because applications use the same API does not mean that they
will be able to interoperate. They have to be designed to interoperate
(and the first part of that would be to define what interoperation
actually means in the context of the given applications).
> In my opinion, this is why the desktop projects (KDE and Gnome) are so
> important. They give consistency of behavior and interoperability
> between applications.
IMO in the wrong way - instead of providing protocols to communicate,
they lock you into a specific implementation. And they are quite messy.
It looks like it's getting better, somewhat...
...
> Think about the difference between writing a game for Win32 vs. Linux.
> With Win32 you keep your Direct* reference handy and away you go; it's
> an entire platform. With Linux you have to make umpteen decisions
You also have OpenGL, and probably other options (Macromedia for kiddie
games?).
> about what system to use for graphics, sound, networking, timers,
> etc. People often make less-than-optimal decisions due simply to lack
> of knowledge. What the
It's all still in the process of development, so confusion is expected.
The graphics area is fairly stabilized, and as far as networking etc.
goes there shouldn't be any confusion. Sound is a big mess (remember,
OSS is considered not good, and ALSA has only recently stabilized its
API - it's still not 1.0).
The problem is not that there are many choices; the problem is that
there are not good enough choices in some areas (but that's changing
rapidly).
erik
And now I just realized something else...
Why argue whether or not there should be a single event port per
plugin instance, or one per Channel, or whatever? Just have the
*plugin* set up the ports, and have the host call a callback

	XAP_event_port *get_event_port(XAP_cnx_descriptor *cd);

when it wants the port to use to reach a specific Bay:Channel:Slot.
Most plugins will probably consider only the Bay and Channel fields,
but some may ignore all fields and always return the same port, while
others may split the range of Slots over multiple ports in arbitrary
ways.
That way, you can have *one Event Port per Control* if you like. :-)
Well, yes, there is one issue, of course: When you use the same port
for multiple Channels and/or multiple Bays, you'll need the Channel
and/or Bay indices as arguments in the event struct.
I'm thinking that you might still just have a 32-bit "index" or
"slot" field in the event struct, but optionally split it up, encode
it or whatever, as you like. It would be easy enough to have the
plugin do that as part of the get_event_port() callback above;
something like:
	typedef struct XAP_target_descriptor
	{
		XAP_event_port	*port;
		unsigned long	id;	/* Plugin's reference;
					 * may be *anything*.
					 */
	} XAP_target_descriptor;

	int map_target(XAP_cnx_descriptor *cd,
	               XAP_target_descriptor *td);
That is, the map_target() call converts <port, bay, channel, slot>
into <port, id> in whatever way the plugin author wants. The
ID/index/slot thing is just "that value you're supposed to write into
the 'index' field of events sent to this target" anyway, so it
doesn't matter the slightest to senders - or the host - what the
value actually means.
Example:
* Plugin wants *everything* on one port.
* Plugin will return the same physical port whichever
  event input Bay:Channel:Slot you ask for.
	int map_target(XAP_cnx_descriptor *cd,
	               XAP_target_descriptor *td)
	{
		MY_plugin *me = cd->plugin;	/* Get "this". */
		td->port = me->my_universal_event_port;
		td->id = cd->bay << 24;
		td->id |= cd->channel << 16;
		td->id |= cd->slot;
		return 0;	/* Ok! --> */
	}
* In the single event processing loop, the plugin will
  just extract the bay, channel and slot fields like this:

	int bay = event->index >> 24;
	int channel = (event->index >> 16) & 0xff;
	int slot = event->index & 0xffff;
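(For illustration, a sender might use this roughly as follows. Note
that this is just a sketch; XAP_event, its "index" and "value" fields
and XAP_send_event() are made-up names, not defined anywhere above:)

	/* Hypothetical sender-side sketch. XAP_event, its fields
	 * and XAP_send_event() are assumed names for illustration,
	 * not part of the proposal above.
	 */
	XAP_cnx_descriptor cd;
	XAP_target_descriptor td;
	XAP_event ev;

	cd.plugin = target_plugin;
	cd.bay = 0;		/* First event input Bay */
	cd.channel = 2;		/* Third Channel */
	cd.slot = 5;		/* Sixth Slot */
	if(map_target(&cd, &td) < 0)
		;	/* ...no such target; handle the error... */

	ev.index = td.id;	/* Opaque to the sender! */
	ev.value = 0.5f;
	XAP_send_event(td.port, &ev);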
Is this ok?
//David Olofson - Programmer, Composer, Open Source Advocate
.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
.- M A I A -------------------------------------------------.
| The Multimedia Application Integration Architecture |
`----------------------------> http://www.linuxdj.com/maia -'
--- http://olofson.net --- http://www.reologica.se ---
Some thoughts on that SILENT event for reverb tails and stuff...
(Currently implemented as a "fake" spontaneous state change in
Audiality FX plugins, BTW.)
I would assume that since there is no implicit relation between
Channels on different Bays (remember the input->output mapping
discussion?), this event is best sent from some kind of Master Event
Output Channel. (That is, now we have one Master Event Input Channel,
and one Master Event Output Channel. Each will be in its own Bay,
and there can be one and only one Channel on each of those.)
So, the SILENT event would need Bay and Channel (but not Slot)
fields, in order to tell the host (or whoever gets the event) which
audio output just went silent.
And it would probably be a rather good idea to have a "NOT_SILENT"
event as well, BTW!
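(For illustration, such an event could carry something like the
following. Just a sketch - the type codes and field names are made up,
not anything that has been decided:)

	/* Hypothetical layout; type codes and field names are
	 * assumptions for illustration only.
	 */
	typedef struct XAP_silence_event
	{
		int type;		/* XAP_SILENT or XAP_NOT_SILENT */
		unsigned timestamp;	/* Sample accurate time */
		unsigned short bay;	/* Audio output Bay... */
		unsigned short channel;	/* ...and Channel that changed */
	} XAP_silence_event;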
Anyway, what I was thinking was: How about allowing plugins to
*receive* SILENT and NOT_SILENT events, if they like?
That way, you could use the plugin API for things like
audio-to-disk-thread "gateways" for recording and that kind of stuff,
without forcing the host to be involved in the details.
Not that recording half a buffer extra of silence would be a
disaster, but I bet someone can or eventually will think of a reason
why their plugin should know the whole truth about the audio inputs.
Now, there's just one problem: Put a plugin with a tail, but without
sample-accurate "tail management" support, in between a plugin that
sends (NOT_)SILENT events and one that can receive them - and the
information is useless! All you can do is have the host fake the
(NOT_)SILENT events sent to the latter plugin, since the plugin in
the middle thinks only in whole buffers WRT inputs and/or outputs...
And there's another problem: If you got a (NOT_)SILENT event
*directly* from another plugin, how on earth would you know which one
of *your* audio inputs that other plugin is talking about, when the
event arguments are about *that* plugin's audio outputs?
Only the host knows where audio ports are connected, so the host
would have to translate the events before passing them on.
//David Olofson - Programmer, Composer, Open Source Advocate
.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
.- M A I A -------------------------------------------------.
| The Multimedia Application Integration Architecture |
`----------------------------> http://www.linuxdj.com/maia -'
--- http://olofson.net --- http://www.reologica.se ---
> personally, i think ardour is an excellent proof-by-implementation
> that yes, busses are really just a special class of strip,
Well, no. Busses are not strips. Busses are not signal paths. Busses
are unity-gain summing nodes that facilitate many-to-one connections.
Ardour depends on JACK for all of its busses.
> with no basic difference in the kinds of controls you'd want for each.
> these days, an AudioTrack in ardour is derived from the object that
> defines a Bus. the only differences are that a Bus takes input from
> "anywhere", whereas an AudioTrack takes input from its playlist (via a
> DiskStream) and can be rec-enabled. other than that, they are
> basically identical.
Main outs, aux sends, and sub outs are a special class of strip that
receive their input exclusively from busses. Other than that, there is
no difference between these and any other kind of strip.
Tom
Hi all,
I've been beavering away on a session/config management system, and it's
just reached the point where projects can be properly saved and
restored. It's an implementation of the API proposal,
http://reduz.dyndns.org/api/ , that originated from this discussion:
http://marc.theaimsgroup.com/?l=linux-audio-dev&m=102736971320850&w=2 .
This is more of an RFC/alpha release than a proper "you can make your
apps work with this" release; a lot of the API will undoubtedly change.
What's right with this release: it saves/restores sessions, it saves data,
it exists. What's wrong with this release: the code is barely commented,
there's no documentation, it's quite inconsistent, the code is scrappy in
many places, and it's not very stable.
So, download it, have a bash, tell me what works/what doesn't, what's good/
what's not, what should stay the same/what should change.
http://pkl.net/~node/software/ladcca-0.1.tar.gz
Bob
Does anyone here know if splitting code across different files - or,
for that matter, reordering the layout of one source file so that
functions called together are now "far apart" - can actually affect
execution speed?
--p
Please follow-up this discussion to LAD.
On Sun, 8 Dec 2002, Paul Davis wrote:
> >> the situation, as i said before, is miserable. we just don't have a
> >> situation in linux where a single point of control can say "*this* is
> >> the GUI toolkit you will use". X is clearly the standard, but its not
> >> a toolkit (see below) that anyone can feasibly use alone.
> >>
> >I have asked you this two times already, but I'm trying for a third
> >time now. Why do you need this functionality? Unix is built from the
> >ground up to support multiple processes running at the same time, and
> >therefore does it very well, at least linux does. And unix has things
> >such as sockets, pipes, semaphores and shared memory. WHY do you need
> >to run everything from the same process? I only see disadvantages in
> >doing that, except that it uses a tiny bit more memory, but that's
> >it. Please explain to me...
>
> because when running a real-time low-latency audio system, the cost of
> context switches is comparatively large. if you've got 1500usecs to
> process a chunk of audio data, and you spend 150usecs of it doing
> context switches (and the cost may be a lot greater if different tasks
> stomp over a lot of the cache), you've just reduced your effective
> processor power by 10%.
>
I don't believe you. I just did a simple context-switching/sockets
test after I sent the last mail. For doing 2*1024*1024 synchronized
context switches between two programs, my old 750MHz Duron uses 2.78
seconds. That should be about 1.3usecs per switch or something. With
a blocksize of, let's say, 128 bytes, that means that over 25 minutes
of 44100Hz sound processing, 2.78 seconds is used for context
switching. Not much.
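Roughly, the test was a ping-pong over a socketpair, something like
this (a reconstructed sketch, not the exact code I ran):

/* Synchronized context-switch test sketch: parent and child
 * bounce one byte back and forth over a socketpair, so each
 * round trip costs two context switches.
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>

#define SWITCHES (2 * 1024 * 1024)

int main(void)
{
	int sv[2], i;
	char c = 0;
	if(socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
	{
		perror("socketpair");
		return 1;
	}
	if(fork() == 0)
	{	/* Child: echo every byte straight back. */
		for(i = 0; i < SWITCHES / 2; i++)
		{
			read(sv[1], &c, 1);
			write(sv[1], &c, 1);
		}
		return 0;
	}
	/* Parent: time this loop, e.g. with time(1). */
	for(i = 0; i < SWITCHES / 2; i++)
	{
		write(sv[0], &c, 1);
		read(sv[0], &c, 1);
	}
	return 0;
}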
I'm not talking about JACK tasks, I'm talking about doing a simple
plug-in task inside a standalone program, the way the VST server works.
I believe the advantages of making a proper plug-in server, with an
easy-to-use library binding "plug-ins" and hosts together, are large:
1. Stability. The host cannot crash, and the server cannot crash; only
   the plug-in can crash.
2. Runs better on multiprocessor machines. (At least I think so.)
3. Ease of use. By extending the interface with a library, common tasks
   such as finding lists of plug-ins, loading plug-ins and handling
   GUIs are available as ready-to-use functions.
4. A "plugin" is a program, which means that it can choose whatever GUI
   system it wants. LADSPA plug-ins can have GUIs.
5. All sorts of plug-ins can be supported by one such system: VST,
   LADSPA, DX, MAIA, etc.
6. By making a simple wrapper, all LADSPA plug-ins can automatically be
   made available as "plugins" of such a server, complete with a GUI
   (see the sketch below).
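As an illustration of point 6, a minimal sketch of how such a wrapper
could enumerate the LADSPA plug-ins in a shared object via the standard
ladspa_descriptor() entry point (discovery only; the actual wrapping
and GUI serving are left out):

/* List the LADSPA plug-ins in a .so. Link with -ldl. */
#include <stdio.h>
#include <dlfcn.h>
#include <ladspa.h>

int main(int argc, char **argv)
{
	void *lib;
	LADSPA_Descriptor_Function desc_fn;
	const LADSPA_Descriptor *d;
	unsigned long i;

	if(argc < 2)
		return 1;
	lib = dlopen(argv[1], RTLD_NOW);
	if(!lib)
	{
		fprintf(stderr, "%s\n", dlerror());
		return 1;
	}
	desc_fn = (LADSPA_Descriptor_Function)
			dlsym(lib, "ladspa_descriptor");
	if(!desc_fn)
		return 1;
	for(i = 0; (d = desc_fn(i)) != NULL; i++)
		printf("%lu: %s (%s)\n", d->UniqueID, d->Name, d->Label);
	dlclose(lib);
	return 0;
}

Point 3's "finding lists of plug-ins" would then essentially be this
loop run over every .so on the LADSPA_PATH.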
--
>From: Anthony <avan(a)uwm.edu>
>...
>RTsynth it is impossible for me to tell whether a given synth is on or
>off. That pixmap LED seems like a good idea, but maybe blue would be
>better ;) ...
Thank you for the hint! I will fix it in upcoming versions :)
- Stefan
http://www.vischeck.com/vischeck
The above link allows you to upload an image or parse a webpage through
some software written at Stanford that imitates what a colour blind
person would see. It may be worth the time for you GUI designers to
run a screenshot through it and see what 1 in 10 people see. Case in
point: in RTsynth it is impossible for me to tell whether a given synth
is on or off. That pixmap LED seems like a good idea, but maybe blue
would be better ;) Not that I'm picking on RTsynth. Web page designers
may want to look at this too, although the Linux webpage community
usually does a good job.
--ant