Paul,
I think that if the dmesg/var/log/messages output said there were
buffers, then it was getting loaded automatically. Most likely this is
because it is in modules.conf.
However, when I ran insmod snd-hdsp by hand, I got some error messages
that were a bit different from the init_module message that shows up at
boot time. These messages (from memory) had something to do with memory
buffers or memory addresses or other memory-related names. I don't remember
right now, but will check it out later and report back.
Mostly, Fernando and I wanted to know what configurations successful users
were using. Since I'm on 7.3, I have this funky C compiler, and I just wanted
to know if others were successful. I'm still considering a RH 8.0 upgrade
simply to get rid of some of these issues with Ardour and Rosegarden. (And
probably create new issues... who knows?)
thanks,
Mark
-----Original Message-----
From: Paul Davis [mailto:paul@linuxaudiosystems.com]
Sent: Tuesday, December 10, 2002 11:36 AM
To: Mark Knecht
Cc: D R Holsbeck; linux-audio-dev(a)music.columbia.edu; PlanetCCRMA;
Ardour; Ardour-User-List; alsa-user(a)lists.sourceforge.net
Subject: Re: [ardour-dev] Re: [linux-audio-dev] HDSP 9652 Users -
Request for info
> Hi. Actually, I tried that last night, but I didn't load the
> snd-hammerfall-mem. It complained about some sort of missing references.
> I'll give this a try later today. Thanks.
in the message you sent recently, snd-hammerfall-mem worked just fine,
and reported allocating buffers for the card.
if snd-hammerfall-mem doesn't load, then neither snd-hdsp nor
snd-rme9652 (not relevant here) can do so.
--p
hi guys !
as most of you will know, the linux-audio-user list is members-only.
many of you post helpful and perfectly on-topic replies there, which is
great, but unfortunately from a non-subscribed address, which is not :(
i used to hand-approve them, but there are just too many of those
messages now to do that, so i'm getting anal about the policy and
rejecting everything regardless of sender or topic.
please, everyone, do not cease to follow lau, and please do share your
expertise, but also please do it from a subscribed account. it really
hurts me to stash messages from bill s. or paul d. into the bit bucket,
but i just can't keep up. :(
unfortunately, there is no easy and reliable way to tell mailman to
accept all members of another list as posters, and i do not have the
necessary privileges on the list server to roll my own
cross-authentication.
check out the possibility described below if you need to post from
several mail accounts.
best,
jörn
forwarded from lau:
Jörn Nettingsmeier wrote:
>
> hi everyone !
>
> lately, the number of posting attempts from non-members has risen to a
> point where i'm just rejecting them without reading each one and checking
> if it's on-topic or has already been sent to the list from another
> account.
>
> our current list policy is members-only posting, for the following
> reasons:
> * i believe people who want to profit from a community should take part in
> it
> * spam prevention. so far, it has saved everyone from about .5k spam msgs.
>
> a number of list subscribers seem to post from different addresses, so
> some of their mails are being rejected. if this applies to you, there is
> always the possibility of subscribing multiple addresses and disabling mail
> delivery for all but one via the web interface.
>
> best,
>
> jörn
--
Jörn Nettingsmeier
Kurfürstenstr 49, 45138 Essen, Germany
http://spunk.dnsalias.org (my server)
http://www.linuxdj.com/audio/lad/ (Linux Audio Developers)
Hi everybody. I've been reading this list for a week. Thought I'd "pitch"
in here because I'm also writing a softstudio; it's pretty far along already,
and the first public release is scheduled for Q1/2003.
First, I don't understand why you want to design a "synth API". If you
want to play a note, why not instantiate a DSP network that does the job,
connect it to the main network (where system audio outs reside), run it
for a while and then destroy it? That is what events are in my system -
timed modifications to the DSP network.
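To make that concrete, a note in such a system might look roughly like
this - emphatically a sketch, since MONKEY's API isn't public yet and
every name below is made up:

	/* Hypothetical pseudo-C only; none of these calls exist.
	 * A "note" is just a temporary subnetwork with a scheduled
	 * lifetime. */
	DSP_network *note = dsp_network_new();
	dsp_network_add(note, oscillator_new(440.0));	/* the voice */
	dsp_connect(note, main_network());	/* to system audio outs */
	dsp_schedule_start(note, t_on);		/* timed start...      */
	dsp_schedule_destroy(note, t_off);	/* ...and teardown     */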
On the issue of pitch: if I understood why you want a synth API, I would
prefer 1.0/octave because it carries fewer cultural connotations. In my
system (it doesn't have a name yet, but let's call it MONKEY so I won't
have to use the refrain "my system") you just give the frequency in Hz;
there is absolutely no concept of pitch. However, if you want, you can
define functions like C x = exp((x - 9/12) * log(2)) * middleA, where
middleA is another function that takes no parameters. Then you can give
pitch as "C 4" (i.e. C in octave 4), for instance. The expression is
evaluated, and when the event (= a modification to the DSP network) is
instantiated it becomes an input to it: constant if it is constant,
linearly interpolated at a specified rate otherwise. I should explain more
about MONKEY for this to make much sense, but maybe later.
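For the curious, here is what such a helper might look like in plain C -
a sketch only, assuming A4 = 440 Hz and middle C in octave 4 (hence the
(octave - 4) offset; the formula above works as written if middleA is
defined to match whatever octave numbering you prefer):

	#include <math.h>

	/* Sketch: frequency of C in a given octave, assuming A4 = 440 Hz
	 * and middle C in octave 4. The name c_in_octave is illustrative;
	 * MONKEY itself only ever sees the resulting Hz value. */
	static double c_in_octave(int octave)
	{
		return exp(((octave - 4) - 9.0 / 12.0) * log(2.0)) * 440.0;
	}
	/* c_in_octave(4) ~= 261.63 Hz, i.e. "C 4". */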
Anyway, the question I'm most interested in is: why a synth API?
--
Sami Perttu "Flower chase the sunshine"
Sami.Perttu(a)hiit.fi http://www.cs.helsinki.fi/u/perttu
Hi,
If there are any users of the new RME HDSP 9652 card who have been able
to successfully install and use it, would you please get in touch with me
and let me know what your system configurations are? I understand that
there are at least a couple of you out there somewhere. Please let me know
what distribution, kernel, C compiler, ALSA revision, and anything else you
think might be important.
Using the PlanetCCRMA flow, I am unable to get this card configured and
running. We believe that we have patched the ALSA layer correctly to add the
0x64 check, but I am still unsuccessful.
Thanks in advance.
With best regards,
Mark
> -----Original Message-----
> From: Joshua Haberman [mailto:joshua@haberman.com]
> Paul Davis <paul(a)linuxaudiosystems.com> wrote:
> > >Has anybody actually tried to get gtk+ and qt working in the same
> > >application?
> >
> > its been done.
> >
> > it was ugly as sin.
>
> This is a strong counterexample to the oft-repeated maxim
> that "choice is
> the strength of OSS."
??? this is a property/feature of X.
and btw I think you would run into the same kind of problems in ms windows
(remember, a lot of toolkits available for X are also available for ms
windows, and there are different ms-windows-specific toolkits as well - can
you combine them easily in one program?)
> But I guess the fact that 10 random linux audio applications
> are written to 10
> different APIs that can't interoperate is another.
what do you mean? _one_ program will probably use several different APIs,
depending on what it needs to interface to (e.g. there would be the linux
(or posix) API (system calls), a standard library API (most programming
languages have some standard library), one (or more) sound API, some UI
API...)
just because applications use the same API does not mean that they will be
able to interoperate. they have to be designed to interoperate (the first
part of that would be to define what interoperation actually means in the
context of the given applications)
> In my opinion, this is why the deskop projects (KDE and Gnome) are so
> important. They give consistency of behavior and
> interoperability between
> applications.
IMO in a wrong way - instead of providing protocols to communicate, they
lock you into a specific implementation. and they are quite messy. it looks
like it's getting better, somewhat...
...
> Think about the difference between writing a game for Win32
> vs. Linux. With
> Win32 you keep your Direct* reference handy and away you go;
> it's an entire
> platform. With Linux you have to make umpteen decisions
you also have OpenGL. probably others too (macromedia for kiddie games?)
> about what system to
> use for graphics, sound, networking, timers, etc. People often make
> less-than-optimal decisions due simply to lack of knowledge. What the
it's all in the process of development. confusion is expected. the graphics
area is fairly stabilized, and as far as networking etc. goes there
shouldn't be any confusion. sound is a big mess (remember, oss is considered
not good, and alsa only recently stabilized its API - it's still not 1.0).
the problem is not that there are many choices; the problem is that there
are no good enough choices in some areas (but that's changing rapidly)
erik
And now I just realized something else...
Why argue whether or not there should be a single event port per
plugin instance, or one per Channel, or whatever? Just have the
*plugin* set up the ports, and have the host call a callback

	XAP_event_port *get_event_port(XAP_cnx_descriptor *cd);

when it wants the port to use to reach a specific Bay:Channel:Slot.
Most plugins will probably consider only the Bay and Channel fields,
but some may ignore all fields and always return the same port, while
others may split the range of Slots over multiple ports in arbitrary
ways.
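For instance (just a sketch - I'm making up the XAP_cnx_descriptor
field names here), a host that wants to reach Bay 0, Channel 3, Slot 7
of a plugin might do:

	XAP_cnx_descriptor cd;
	XAP_event_port *port;

	cd.plugin = plugin;	/* the target plugin instance */
	cd.bay = 0;
	cd.channel = 3;
	cd.slot = 7;
	port = get_event_port(&cd);	/* however the plugin actually
					 * exports the callback */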
That way, you can have *one Event Port per Control* if you like. :-)
Well, yes, there is one issue, of course: When you use the same port
for multiple Channels and/or multiple Bays, you'll need the Channel
and/or Bay indices as arguments in the event struct.
I'm thinking that you might still just have a 32 bit "index" or
"slot" field in the event struct, but optionally split it up, encode
it or whatever, as you like. It would be easy enough to have the
plugin do that as part of the get_event_port() callback above;
something like:
	typedef struct XAP_target_descriptor
	{
		XAP_event_port	*port;
		unsigned long	id;	/* Plugin's reference;
					 * may be *anything*.
					 */
	} XAP_target_descriptor;

	int map_target(XAP_cnx_descriptor *cd,
	               XAP_target_descriptor *td);
That is, the map_target() call converts <port, bay, channel, slot>
into <port, id> in whatever way the plugin author wants. The
ID/index/slot thing is just "that value you're supposed to write into
the 'index' field of events sent to this target" anyway, so it
doesn't matter in the slightest to senders - or the host - what the
value actually means.
Example:

* Plugin wants *everything* on one port.

* Plugin will return the same physical port whichever
  event input Bay:Channel:Slot you ask for:

	int map_target(XAP_cnx_descriptor *cd,
	               XAP_target_descriptor *td)
	{
		MY_plugin *me = cd->plugin;	/* Get "this". */
		td->port = me->my_universal_event_port;
		td->id = cd->bay << 24;
		td->id |= cd->channel << 16;
		td->id |= cd->slot;
		return 0;	/* Ok! --> */
	}

* In the single event processing loop, the plugin will
  just extract the bay, channel and slot fields like this:

	int bay = event->index >> 24;
	int channel = (event->index >> 16) & 0xff;
	int slot = event->index & 0xffff;
Is this ok?
//David Olofson - Programmer, Composer, Open Source Advocate
.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
.- M A I A -------------------------------------------------.
| The Multimedia Application Integration Architecture |
`----------------------------> http://www.linuxdj.com/maia -'
--- http://olofson.net --- http://www.reologica.se ---
Some thoughts on that SILENT event for reverb tails and stuff...
(Currently implemented as a "fake" spontaneous state change in
Audiality FX plugins, BTW.)
I would assume that since there is no implicit relation between
Channels on different Bays (remember the input->output mapping
discussion?), this event is best sent from some kind of Master Event
Output Channel. (That is, now we have one Master Event Input Channel,
and one Master Event Output Channel. Each will be in its own Bay,
and there can be exactly one Channel on each of those.)
So, the SILENT event would need Bay and Channel (but not Slot)
fields, in order to tell the host (or who ever gets the event) which
audio output just went silent.
And it would probably be a rather good idea to have "NOT_SILENT"
event as well, BTW!
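Just to pin the idea down, a (NOT_)SILENT event might look something
like this - pure sketch, of course; neither the struct nor the field
names are settled anywhere:

	typedef struct XAP_silence_event
	{
		unsigned	action;		/* SILENT or NOT_SILENT */
		unsigned	timestamp;	/* sample frame in buffer */
		unsigned	bay;		/* which output Bay... */
		unsigned	channel;	/* ...and Channel went
						 * silent; no Slot field */
	} XAP_silence_event;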
Anyway, what I was thinking was: How about allowing plugins to
*receive* SILENT and NOT_SILENT events, if they like?
That way, you could use the plugin API for things like
audio-to-disk-thread "gateways" for recording and that kind of stuff,
without forcing the host to be involved in the details.
Not that recording an extra half buffer of silence would be a
disaster, but I bet someone can or eventually will think of a reason
why their plugin should know the whole truth about the audio inputs.
Now, there's just one problem: put a plugin with a tail, but without
sample accurate "tail management" support, in between a plugin that
sends (NOT_)SILENT events and one that can receive them - and the
information is useless! All you can do is have the host fake the
(NOT_)SILENT events sent to the latter plugin, since the plugin in
the middle thinks only in whole buffers WRT inputs and/or outputs...
And there's another problem: if you get a (NOT_)SILENT event
*directly* from another plugin, how on earth would you know which one
of *your* audio inputs that other plugin is talking about, when the
event arguments are about *that* plugin's audio outputs?
Only the host knows where audio ports are connected, so the host
would have to translate the events before passing them on.
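Conceptually (sketch only; connection_lookup() and the other names are
made up), the host-side translation would be something like:

	/* Remap a SILENT event from the sender's output coordinates
	 * to the receiver's input coordinates, using the host's own
	 * connection graph. */
	XAP_silence_event out = *ev;	/* copy the incoming event */
	connection_lookup(host, sender, ev->bay, ev->channel,
	                  &out.bay, &out.channel);
	deliver_event(receiver, &out);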
//David Olofson - Programmer, Composer, Open Source Advocate
.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
.- M A I A -------------------------------------------------.
| The Multimedia Application Integration Architecture |
`----------------------------> http://www.linuxdj.com/maia -'
--- http://olofson.net --- http://www.reologica.se ---
> personally, i think ardour is an excellent proof-by-implementation
> that yes, busses are really just a special class of strip,
Well, no. Busses are not strips. Busses are not signal paths. Busses
are unity gain summing nodes that facilitate many-to-one connections.
Ardour depends on JACK for all of its busses.
> with no
> basic difference in the kinds of controls you'd want for each. these
> days, an AudioTrack in ardour is derived from the object that defines
> a Bus. the only differences are that a Bus takes input from
> "anywhere", whereas an AudioTrack takes input from its playlist (via a
> DiskStream) and can be rec-enabled. other than that, they are basically
> identical.
Main outs, aux sends, and sub outs are a special class of strip that
receive their input exclusively from busses. Other than that, there is
no difference between these and any other kind of strip.
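In code terms, a bus in this sense is nothing more than the following
(a sketch - bus_sum and its signature are mine, not Ardour's actual
implementation):

	/* A unity gain summing node: every source is added into the
	 * bus buffer with no per-source gain applied. */
	void bus_sum(float *bus, float *const *src, int nsrc, int nframes)
	{
		int i, s;
		for (i = 0; i < nframes; i++)
			bus[i] = 0.0f;
		for (s = 0; s < nsrc; s++)
			for (i = 0; i < nframes; i++)
				bus[i] += src[s][i];
	}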
Tom
Hi all,
I've been beavering away on a session/config management system, and it's just
reached the point where projects can be properly saved and restored. It's
an implementation of the API proposal, http://reduz.dyndns.org/api/ , that
originated from this discussion:
http://marc.theaimsgroup.com/?l=linux-audio-dev&m=102736971320850&w=2 .
This is more an RFC/alpha release than a proper "you can make your
apps work with this" release; a lot of the API will undoubtedly change.
What's right with this release: it saves/restores sessions, it saves data,
it exists. What's wrong with this release: the code is barely commented,
there's no documentation, it's quite inconsistent, the code is scrappy in
many places, and it's not very stable.
So, download it, have a bash, tell me what works/what doesn't, what's good/
what's not, what should stay the same/what should change.
http://pkl.net/~node/software/ladcca-0.1.tar.gz
Bob
does anyone here know if splitting code across different files, or for
that matter, reordering the layout of one source file so that
functions called together are now "far apart", can actually affect
execution speed?
--p