Fred Gleason wrote:
> On Sunday 16 November 2008 04:16:19 pm you wrote:
>
>> Programmers are not stupid. However, the way typical sound
>> applications are implemented is wrong.
>>
>
> So this is a reason to cripple the API -- because 'typical'
> application programmers don't know what they're doing? What about
> those who *do*?
>
>
Programmers who know what they are doing don't do stupid things. All the
problems are caused by programmers who only *think* they know what they
are doing. They are so clever that they don't need to open any manuals.
>> Network or disk performance analyzers/monitors/optimizers are good
>> tools. They are not "ordinary" applications but system tools.
>>
>
> Whatever the semantic difference between 'applications' and 'system
> tools' might be, *both* end up having to interact with the hardware
> via some sort of API, so I'm not sure that the distinction is
> particularly meaningful in this context.
>
Audio (or MIDI) applications produce and consume audio streams. All they
need to do is say what kind of stream (rate, format, etc.) they want to
play/record. They can also adjust their input/output volume or select
the recording source if necessary. In this way the 'system tools' (or
any application dedicated to these purposes) can be used to route and
mix the streams.
However, if the application also tries to interact with the hardware (by
opening /dev/mixer in OSS or by doing similar things with ALSA) then
shit will happen. This kind of interaction with the hardware may mean
that the application refuses to work with a pseudo/loopback device that
hands the signal directly to an Icecast server. All this just because
the developer of the application wanted to add nice features in the
wrong place.
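To make this concrete: the only 'hardware interaction' a well-behaved
player needs fits in a few lines of OSS code like the sketch below (the
device path and format are just an example). Note that /dev/mixer never
appears:

  #include <fcntl.h>
  #include <sys/ioctl.h>
  #include <sys/soundcard.h>
  #include <unistd.h>

  /* Sketch of a well-behaved playback open: declare the stream
     format and play. No mixer, no unmuting, no poking the card. */
  int open_playback(int rate, int channels)
  {
      int fd = open("/dev/dsp", O_WRONLY);
      int fmt = AFMT_S16_LE;
      if (fd < 0)
          return -1;
      ioctl(fd, SNDCTL_DSP_SETFMT, &fmt);         /* sample format */
      ioctl(fd, SNDCTL_DSP_CHANNELS, &channels);  /* channel count */
      ioctl(fd, SNDCTL_DSP_SPEED, &rate);         /* rate in Hz */
      return fd;  /* the caller just write()s audio data */
  }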
>
>> However it's wrong if programs like Mozilla try to do this kind of
>> thing.
>> All a web browser does is open a TCP/IP socket to the
>> http/ftp/whatever server, send the request and wait for the response.
>> Equally well, an audio player application should just open a
>> connection to the audio device, set the rate/format and then start to
>> play. It should not try to do things like automatically unmuting the
>> sound card.
>>
>
> Right, but I think you're kind of missing the point. We're not
> talking about garden variety 'audio player' applications here. The
> world of audio -- especially professional audio -- is a much larger
> place. This doesn't make such applications 'system tools', merely
> applications that work outside of the simple assumptions adequate when
> designing garden-variety 'audio players'. To hardwire those simple
> assumptions into the driver system is IMO a design error, one that
> imposes serious limits on the usefulness of the overall system.
> Effectively, it's dictating policy in a layer that should be primarily
> concerned with mechanism.
>
Professional and professional. One definition of 'professional' is
available in Wikipedia (http://en.wikipedia.org/wiki/Professional). An
audio professional is a professional specialized in audio, and
'professional audio' is what an audio professional does for a living.
Professionals indeed use hardware/software with many more features than
ordinary users do. However, they will not buy a (say) sampler that
tries to reset the master mixing console every time it's powered on.
On the other hand, 'professional audio' is also a marketing term that
originates from popular _consumer_ sound cards such as the Sound
Blaster Pro, Pro Audio Spectrum and many others. In this context "pro"
means that the device has all the bells and whistles. Usually there is
also a load of more or less useless bundled software included in the
package.
So what kind of professional do you mean?
Best regards,
Hannu
Hello all,
Reading the file produced by 'alsactl store', I learn
that my sound hardware has a number of control parameters
that have names, types, values, ranges, etc. etc.
I now want to write some hopefully not too convoluted
C or C++ code to read and write these parameters.
Is there, after X years of ALSA, any documentation that
explains the basic concepts and tells me how to do this?
If such a thing exists I can't find it.
The Doxygen info on the ALSA site is completely useless
for the purpose of learning to understand and use the
control interface.
The textual information there usually contains *nothing* that can't
already be read from the C types, structs or functions it is supposed
to document. It just repeats the jargon used in the code, and is at
least 99.9% redundant.
What these things actually mean, how they fit together
and what the big picture is, is AFAIK nowhere and never
explained. Which is strange, because if you design a
system such as this, that would be the absolute first
thing you need to define. No doubt the designers have it
in their heads. No doubt it's well structured and also
abstracted to almost absurd levels. But it remains a
complete mystery unless you have the time and energy and
someone is paying you to spend at least half a year to
reverse-engineer the so-called docs. If ever there was
an example of Doxygen or similar system being no more
than a pretext to keep the quality department happy,
ALSA is the best one I know of.
Now if someone can point me to some existing docs that
explain how I can e.g. set the sample clock source on a
RME MADI card in less than ten lines of C code (knowing
the parameter names, ranges, etc - no need to find them
out dynamically, I can read them from asound.state) then I'll
eat my hat. It shouldn't be difficult. On some competing
systems all it takes is one ioctl().
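From staring at the headers, my best guess is something like the
sketch below - the card name and control name are placeholders from
asound.state, and whether the interface should be MIXER or something
else I simply don't know. If this is indeed the intended use, it is
explained nowhere:

  #include <alsa/asoundlib.h>

  /* Best-guess sketch: set an enumerated control by name.
     "hw:0" and the control name are placeholders. */
  int set_enum_control(const char *name, unsigned int item)
  {
      snd_ctl_t *ctl;
      snd_ctl_elem_id_t *id;
      snd_ctl_elem_value_t *val;
      int err = snd_ctl_open(&ctl, "hw:0", 0);
      if (err < 0)
          return err;
      snd_ctl_elem_id_alloca(&id);
      snd_ctl_elem_id_set_interface(id, SND_CTL_ELEM_IFACE_MIXER);
      snd_ctl_elem_id_set_name(id, name);
      snd_ctl_elem_value_alloca(&val);
      snd_ctl_elem_value_set_id(val, id);
      snd_ctl_elem_value_set_enumerated(val, 0, item); /* channel 0 */
      err = snd_ctl_elem_write(ctl, val);
      snd_ctl_close(ctl);
      return err;
  }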
Ciao,
--
FA
Laboratorio di Acustica ed Elettroacustica
Parma, Italia
Lascia la spina, cogli la rosa.
Hi,
I can't find anything online that gives me a way to run /sbin/mkdosfs as
a normal user.
Is it just that I need to add the user to the mkdosfs group or something
similar?
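Or is a sudo rule the usual way? Just guessing, something like this
(the group name is made up):

  # hypothetical sudoers entry: let members of group 'floppy'
  # run mkdosfs as root without a password
  %floppy ALL = (root) NOPASSWD: /sbin/mkdosfs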
Cheers.
--
Patrick Shirkey
Boost Hardware Ltd
Hello folks!
One question, I hope it's not too dumb. :-(
If you have your average patchbay, how does it know when new MIDI/audio
ports/clients come to life or die? And how does it know that some
connection was killed by some other application?
Does it simply query all the time? I wouldn't think so... But perhaps
I'm wrong...
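My only guess so far: at least for JACK, the server seems to offer
notification callbacks, so no polling would be needed. Something like
this sketch (the client name is made up, and this is unverified):

  #include <cstdio>
  #include <unistd.h>
  #include <jack/jack.h>

  // Guesswork sketch: a patchbay-style client asking JACK to
  // notify it about port and connection changes.
  static void on_port(jack_port_id_t port, int registered, void *arg)
  {
      jack_client_t *c = static_cast<jack_client_t *>(arg);
      std::printf("port %s %s\n",
                  jack_port_name(jack_port_by_id(c, port)),
                  registered ? "appeared" : "went away");
  }

  static void on_connect(jack_port_id_t a, jack_port_id_t b,
                         int connected, void *)
  {
      std::printf("ports %u/%u %s\n", a, b,
                  connected ? "connected" : "disconnected");
  }

  int main()
  {
      jack_client_t *c = jack_client_open("port-monitor",
                                          JackNullOption, nullptr);
      if (!c)
          return 1;
      // Callbacks must be registered before activating the client.
      jack_set_port_registration_callback(c, on_port, c);
      jack_set_port_connect_callback(c, on_connect, c);
      jack_activate(c);   // notifications arrive from here on
      for (;;)
          sleep(1);
  }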
Thanks for hints on this.
Kindest regards
Julien
--------
Music was my first love and it will be my last (John Miles)
======== FIND MY WEB-PROJECT AT: ========
http://ltsb.sourceforge.net
the Linux TextBased Studio guide
======= AND MY PERSONAL PAGES AT: =======
http://www.juliencoder.de
Ok, things have settled down, and i've tweaked a little here and there.
Seems to be running nicely now, and fairly stable.
A screenshot of a generic setup:
http://shup.com/Shup/81262/patchage3.png
Alex.
lpatchage
jackdbus
rosegarden
linuxsampler
ardour2
jconv
On Tue, Nov 11, 2008 at 10:35 PM, alex stone <compose59(a)gmail.com> wrote:
> Nedko,
>
> This is what i get in the messages window of lpatchage, when i try
> to connect linuxsampler audio out:
>
> [JACKDBUS] ConnectPortsByName() failed.
>
> jackdbus log is attached. (I've renamed a copy for your perusal)
>
> Alex.
>
>
>
>
> On Tue, Nov 11, 2008 at 8:55 PM, Nedko Arnaudov <nedko(a)arnaudov.name>wrote:
>
>> "alex stone" <compose59(a)gmail.com> writes:
>>
>> > But i'm still at a loss as to why i can't connect LS audio out, to
>> Ardour
>> > audio in, in lpatchage, visibly.
>> > It works in Qjackctl, but stubbornly refuses to connect in lpatchage,
>> even
>> > though the actual connections are made in Ardour, and most importantly,
>> > work.
>>
>> Do you get any errors in jackdbus log file when you are trying to
>> connect using lpatchage?
>>
>> --
>> Nedko Arnaudov <GnuPG KeyID: DE1716B0>
>>
>
>
release candidate 2 has some important fixes:
* Fix for #46 - on first save of newly appeared clients, their state
was not correctly recorded as being saved, and thus was not being
restored on subsequent project loads.
* Memory corruption fixes for a bug in the stdout/stderr handling
code. It was triggered when a LASH client wrote a lot of data to
stdout or stderr.
* Improved handling of repeated lines sent to stdout/stderr.
I would like to ask LASH believers and other interested parties to test
the 0.6.0 release candidate. Juuso Alasuutari and I have been making
some major changes to the LASH code. We have done a lot of work, we've
fixed several big implementation issues, and we need a stable point
before doing more changes (0.6.1 and 1.0 milestones).
The tarball contains a simple lash_control script. One can also control
LASH through patchage-0.4.2 and through lpatchage (available through
git).
User visible changes since 0.5.4:
* Use jack D-Bus interface instead of libjack, enabled by default, can
be disabled. Ticket #1
* Allow controlling LASH through D-Bus. Ticket #2
* Use D-Bus autolaunching instead of old mechanism. Ticket #3
* Log file (~/.log/lash/lash.log) for LASH daemon. Ticket #4
* Client stdout/stderr are logged to lash.log, when clients are
launched by LASH daemon (project restore). Ticket #5
* Improved handling of misbehaved clients. Ticket #45
* Projects can now have comments and notes associated with them. Ticket #13
Download:
http://download.savannah.gnu.org/releases/lash/lash-0.6.0~rc2.tar.bz2
http://download.savannah.gnu.org/releases/lash/lash-0.6.0~rc2.tar.bz2.sig
--
Nedko Arnaudov <GnuPG KeyID: DE1716B0>
For a new audio application I need to code a JACK client in C++. So
far I have only done this in C, and I have a problem with passing a
pointer to the process callback function, which is now a method. What
is the best performing solution? Is a delegate function a good idea,
i.e. a static function that calls the method on the object instance?
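To make clear what I mean, something like this sketch (the class and
all names are invented):

  #include <jack/jack.h>

  class Engine
  {
  public:
      bool start()
      {
          client = jack_client_open("engine", JackNullOption, nullptr);
          if (!client)
              return false;
          // Pass 'this' as the user argument; JACK hands it back
          // to the static delegate on every process cycle.
          jack_set_process_callback(client, &Engine::process_cb, this);
          return jack_activate(client) == 0;
      }
  private:
      // Static delegate with the plain C signature JACK expects.
      static int process_cb(jack_nframes_t nframes, void *arg)
      {
          return static_cast<Engine *>(arg)->process(nframes);
      }
      int process(jack_nframes_t nframes)
      {
          (void) nframes;   // real per-cycle work would go here
          return 0;
      }
      jack_client_t *client = nullptr;
  };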
Cheers,
Malte
--
----
media art + development
http://www.block4.com
current events:
exhibition spame-moi La Motte-Servolex, France 17.10.-20.12.2008
Hi all,
I need to use a microphone input as a trigger. In other words, my idea
is to connect a switch to the microphone input. In this way, when the
switch is turned on it generates a spike in the captured track.
I would like to create a program that triggers an event for every spike
it receives.
I succeeded in capturing the mic input through a simple program that
uses the ALSA driver, but I don't know how to "parse" the raw data to
search for the spikes. Any hints?
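The kind of thing I imagine, assuming signed 16-bit mono samples (the
levels here are pure guesses to be tuned), is:

  #include <cstddef>
  #include <cstdint>
  #include <cstdlib>

  // Rough idea only: scan S16 mono samples, fire once per spike.
  void scan_for_spikes(const int16_t *buf, std::size_t n,
                       void (*on_spike)(std::size_t index))
  {
      const int threshold = 8000;  // trigger level
      const int rearm     = 2000;  // must fall below this to re-arm
      bool armed = true;
      for (std::size_t i = 0; i < n; ++i) {
          int mag = std::abs(static_cast<int>(buf[i]));
          if (armed && mag > threshold) {
              on_spike(i);         // rising edge: one event per spike
              armed = false;
          } else if (!armed && mag < rearm) {
              armed = true;
          }
      }
  }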
Second question: on a "full duplex" sound card, can I capture at 8-bit,
mono, 22050 Hz, and at the same time play back at 16-bit, stereo,
44100 Hz?
Thank you!
Lorenzo
I'm a home studio enthusiast and also a former Java programmer.
I'm looking at combining these interests and contributing to a
project, but most apps seem to be written in C++.
Any suggestions? Anyone involved in any good Java projects?
--
Cheers, Craig
http://craiglawton.info
http://romansandals.wordpress.com