Greetings all;
Has anyone got a URL where I might be able to purchase the expansion
interface gizmo for an Audigy 2 Value card?
Thanks.
--
Cheers, Gene
People having trouble with vz bouncing email to me should add the word
'online' between 'verizon' and the dot, which bypasses vz's stupid
bounce rules. I do use spamassassin too. :-)
Yahoo.com and AOL/TW attorneys please note, additions to the above
message by Gene Heskett are:
Copyright 2006 by Maurice Eugene Heskett, all rights reserved.
( LAU folk: this is an initial outline of an email I want to dispatch to
the desktop-architects list in the very near future. Your comments
are eagerly sought. Note that this section specifically seeks to
avoid any discussion of implementations or specific approaches. I
would like to fully flesh out the list of tasks ASAP )
Making Sound Just Work
------------------------
One of the "second tier" of requirements mentioned several times at
the OSDL Portland Linux Desktop Architects workshop was "making audio
on Linux just work". Many people find it easy to leave this
requirement lying around in various lists of goals and requirements,
but before we can make any progress on defining a plan to implement
the goal, we first need to define it rather more precisely.
DEFINING THE GOAL
=================
The list below is a set of tasks that a user could reasonably expect
to perform on a computer running Linux that has access to zero, one
or more audio interfaces.
The desired task should either work, or produce a sensible and
comprehensible error message explaining why it failed. For example,
attempting to control input gain on a device that has no hardware
mixer should produce a message explaining that the device has no
controls for input gain.
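( Although this outline deliberately avoids implementations, a minimal
sketch may make that expectation concrete. This is only an
illustration, assuming alsa-lib's simple mixer API; the device name and
the error wording are mine, not taken from any existing tool:

/* check_gain.c: report sensibly when a device has no input gain control.
   Build with: gcc check_gain.c -lasound */
#include <stdio.h>
#include <alsa/asoundlib.h>

int main(int argc, char **argv)
{
    const char *dev = (argc > 1) ? argv[1] : "default";
    snd_mixer_t *mixer;
    snd_mixer_elem_t *elem;
    int found = 0;

    if (snd_mixer_open(&mixer, 0) < 0 ||
        snd_mixer_attach(mixer, dev) < 0 ||
        snd_mixer_selem_register(mixer, NULL, NULL) < 0 ||
        snd_mixer_load(mixer) < 0) {
        fprintf(stderr, "cannot open mixer on '%s'\n", dev);
        return 1;
    }

    /* Walk the simple mixer elements looking for a capture volume. */
    for (elem = snd_mixer_first_elem(mixer); elem != NULL;
         elem = snd_mixer_elem_next(elem))
        if (snd_mixer_selem_has_capture_volume(elem))
            found = 1;

    if (!found)
        fprintf(stderr, "'%s' has no hardware controls for input gain; "
                "adjust the level in software instead.\n", dev);

    snd_mixer_close(mixer);
    return 0;
}

Rather than failing silently, the tool explains why the requested
operation cannot work on this device. )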
PLAYBACK
- play a compressed audio file
* user driven (e.g. play(1))
* app driven (e.g. {kde,gnome}_play_audiofile())
- play a PCM encoded audio file (specifics as above)
- hear system sounds
- VOIP
- game audio
- music composition
- music editing
- video post production
RECORDING
- record from hardware inputs
* use default audio interface
* use other audio interface
* specify which h/w input to use
* control input gain
- record from other application(s)
- record from live (network-delivered) compressed audio
streams
MIXING
- control h/w mixer device (if any)
* allow use of a generic app for this
* NOTE to non-audio-focused readers: the h/w mixer
is part of the audio interface that is used
to control signal levels, input selection
for recording, and other h/w specific features.
Some pro-audio interfaces do not have a h/w mixer,
most consumer ones do. It has almost nothing
to do with "hardware mixing" which describes
the ability of the h/w to mix together multiple
software-delivered audio data streams.
- multiple applications using soundcard simultaneously
- control application volumes independently
- provide necessary apps for controlling specialized
hardware (e.g. RME HDSP, ice1712, ice1724, liveFX)
ROUTING
- route audio to specific h/w among several installed devices
- route audio between applications
- route audio across network
MULTIUSER
- which of the above should work in a multi-user scenario?
MISC
- use multiple soundcards as a single logical device
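( On the last item: ALSA can already approximate this today with the
'multi' plugin in ~/.asoundrc. The sketch below is illustrative only;
the device names are assumed, and the cards must share a sample clock
or the two streams will slowly drift apart:

# ~/.asoundrc: expose two stereo cards as one 4-channel device
pcm.combined {
    type multi
    slaves.a.pcm "hw:0"      # first card, channels 0-1
    slaves.a.channels 2
    slaves.b.pcm "hw:1"      # second card, channels 2-3
    slaves.b.channels 2
    bindings.0.slave a
    bindings.0.channel 0
    bindings.1.slave a
    bindings.1.channel 1
    bindings.2.slave b
    bindings.2.channel 0
    bindings.3.slave b
    bindings.3.channel 1
}

An application can then open the combined device by name, e.g.
aplay -D combined somefile.wav. )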
Hello list,
I've been going through the options available in qjackctl, and also
the options mentioned here: http://www.djcj.org/LAU/jack/. I've
noticed two options:
soft mode
-a (ASIO support)
What's odd is that neither option is mentioned in the other's doc.
For example, I cannot find a reference to ASIO support in qjackctl.
Likewise, I cannot find an explanation for soft mode in jack's doc.
Two questions: what is soft mode, and is ASIO support still supplied
and useful? One additional question: is there a place that provides a
comprehensive explanation of jack's features and how to use them?
--
Josh Lawrence
http://www.hardbop200.com
Here's an incredibly simple trick I discovered for synthesizing hard,
groovy snare sounds... the typical THWACK that just makes the crowds
move.
You need nothing more than a noise source, one (yes, one!) band-pass
filter and a flexible envelope.
Hook up the source and your envelope, and tune your snare with the
band-pass filter's cutoff frequency and bandwidth (also known as CF and
Q-factor).
Then set up your envelope to have a "knee". If the volume graph for your
typical snare envelope looks like this:
|\
| \
|  \
|   \
|_______
Make it look like this:
|\
| \
|  \
|   \__
|      \__
|         \__
|_______________
The psychological effect is that the listener is 'punched' towards the
knee with great force and then gently released, which keeps him or her
in that gentle musical trance while the sound remains extremely
crowd-moving. It's great for inducing that 'dance trance' we pop
musicians are all looking for in our shows.
Actually, you can get the same effect with an extremely strong
compressor; however, with this little trick you do the same thing and
use no extra CPU power.
Carlo
PS: ZynAddSubFX is a great way to implement this; use a Free-Mode
envelope and add an additional 'point' for the knee.
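PPS: For anyone who wants to try the trick outside a synth, here is a
minimal C sketch of the same idea under my own assumptions: white noise
through one band-pass filter (an RBJ-cookbook biquad here) shaped by a
two-segment envelope with a knee. All the constants are illustrative;
tune cf and q to taste.

/* snare.c: band-passed noise with a "knee" envelope.
   Build with: gcc -std=c99 snare.c -lm */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define SR 44100

int main(void)
{
    /* One band-pass filter (RBJ cookbook biquad): tune the snare with
     * cf (center frequency) and q (bandwidth), as described above. */
    double cf = 180.0, q = 1.2;
    double w0 = 2.0 * M_PI * cf / SR;
    double alpha = sin(w0) / (2.0 * q);
    double b0 = alpha, b2 = -alpha;
    double a0 = 1.0 + alpha, a1 = -2.0 * cos(w0), a2 = 1.0 - alpha;
    double xm1 = 0, xm2 = 0, ym1 = 0, ym2 = 0;

    /* Two-segment envelope: a fast punch down to the knee, then a
     * slow release, mimicking the second graph above. */
    double knee_level = 0.25;          /* level at the knee */
    int knee_n  = (int)(0.02 * SR);    /* 20 ms to the knee */
    int total_n = (int)(0.35 * SR);    /* 350 ms total      */

    for (int n = 0; n < total_n; n++) {
        double x = 2.0 * rand() / RAND_MAX - 1.0;   /* white noise */
        double y = (b0 * x + b2 * xm2 - a1 * ym1 - a2 * ym2) / a0;
        xm2 = xm1; xm1 = x;
        ym2 = ym1; ym1 = y;

        double env = (n < knee_n)
            ? 1.0 - (1.0 - knee_level) * n / knee_n
            : knee_level * (1.0 - (double)(n - knee_n) / (total_n - knee_n));

        float s = (float)(y * env);
        fwrite(&s, sizeof s, 1, stdout);   /* raw mono 32-bit float */
    }
    return 0;
}

Pipe the output through, for example, aplay -f FLOAT_LE -r 44100 -c 1
to hear it.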
Greetings:
My publisher, Bill Pollock, has been gently pressuring me to commit to
completing the 2nd edition of The Book Of Linux Music & Sound.
Unfortunately, I'm in a precarious position when it comes to committing
myself to the work. The first book nearly wiped me out, and I'm not
sure I can sustain the effort to bring the next edition to light.
Nevertheless, I'm still interested in seeing this book through to
completion. So I have some questions for the community:
1. Is there a real need for another book such as The Book Of Linux
Music & Sound?
2. If so, would I be wise to ignore the 2.4 kernel series? (It would
make it easier to ignore material re: OSS/Free.)
3. Would anyone be interested in co-authoring the book? I've
considered offering some chapters to certain people on these lists, but
the issue of reimbursement gets sticky WRT royalties and other
compensation. I made very little money from the first book, but money
wasn't the true reward anyway, so perhaps there's a way to turn it into
a community-based work.
4. Is anyone else already working on such a project? I don't want to
duplicate efforts.
Btw, this is the last hurrah for this project. If I don't take it on
now, I won't be taking it on at all. I have a life, it's pretty full, and
committing to this edition would be a major disruption. I can guarantee
that it would be the last book I'll ever write.
I look forward to your comments and advice.
Best regards,
dp
Hi list.
I generally DI either my electro-acoustic or solidbody into a Terratec DMX
6Fire 24/96. I have managed to get really good latency (1.45ms) with the
rt-lsm, and am now experimenting with effects chains in ardour.
I've started to accumulate a number of useful sounds, but they are all
saved in separate sessions, and I can't find any way of recreating them
without writing down the settings and rebuilding them from scratch.
Is it possible to export an entire effects chain for re-use in another ardour
session?
--
David Haggett
Hi all,
Yesterday I got back to my computer to do some music. The result is a
bad little piece I uploaded here:
http://dillenburg.dyndns.org/~arnold/node/308
Feel free to play and use it and have a nice weekend,
Arnold
--
visit http://dillenburg.dyndns.org/~arnold/
---
If pirate copies could really prevent bands like Brosis or Britney
Spears, I would go out today and buy myself a stack of burners and a
sack of blank discs.
> > > Need it always be so? Or can we get a little organized and greatly
> > > improve the situation? My own opinion is that the community can do
> > > this with a little organization and motivation. Someone well-respected
> > > and experienced in documenting (e.g. Dave) could head the organization,
> > > and the publishing carrot would provide just enough motivation for
> > > many people. Without the publishing carrot I think we would still
> > > benefit from a little organization.
> >
> > Before anyone starts writing new documentation, what is most desperately
> > needed is for someone to remove all the bad documentation out there (for
> > example most of the ALSA wiki dealing with .asoundrc files and dmix).
>
> Yes, absolutely.
It seems to me that what could solve both the issue of consolidation
and that of duplicate, mostly outdated documentation is a central
website that provides one Wiki page for every pertinent topic, whether
that is a specific piece of software, a system setup topic (e.g. ALSA),
or a distribution-specific how-to. End-users and/or project
devs/contributors could help generate the material, while Dave would
assist in shaping it into a well-structured learning resource and,
subsequently, a book-friendly format. Eventually, Dave could sum all
this up into a book and everyone's happy ;-). If this proves to be
something the LA community wishes to pursue, Linuxaudio.org could help
provide the space. Just like the new home for LAD (lad.linuxaudio.org),
we could host the aforementioned Wikis within the same domain (e.g.
documentation.linuxaudio.org/pd, doc.linuxaudio.org/muse,
doc.ardour.linuxaudio.org, or something along those lines), using
similar formatting and thereby presenting a reliable, uniform, and
familiar interface for users, regardless of the topic they wish to
pursue.
FWIW, I am looking to do exactly this with one of my ongoing projects,
simply because the scope of the project I am engaged in encourages
feedback from as many artists as possible. Considering that anything
associated with technology is an elusive target, books covering such
topics are practically outdated as soon as they hit the shelves. Hence,
the aforementioned approach may, IMHO, soon become the only reliable
option for authors who wish to generate content that is more
time-resistant. This kind of approach could also help speed up the
publishing of subsequent editions: Dave could simply recompile project
summaries from the aforementioned Wikis once per year and, publisher
willing, release a new edition of the book for those who prefer to have
documentation in paper form (e.g. for teaching and/or reference
purposes).
Best wishes,
Ico
Hi,
I'm trying to set up my Ubuntu Dapper system to work reliably with
reasonably low latency, but I'm unable to really eliminate xruns. It works
quite well most of the time, but sooner or later I'll inevitably get
xruns, usually when starting/closing jack-apps, or sometimes when
starting/stopping playback.
My machine is an Athlon XP 3000+, with VIA chipset and 2 SATA disks.
The sound card is an M-Audio Audiophile 2496, and my aim would be to run
jack at 5.8ms latency (-r44100 -p128 -n2) or less.
I already tried pretty much everything I could think of, that is:
- Installed a realtime kernel (currently 2.6.16.1-rt11)
- Used the nv display driver instead of nvidia
- Switched PCI slots around so that the Audiophile doesn't share an IRQ
with the graphics card (an IRQ solely for the Audiophile doesn't seem
possible; it now shares an IRQ with an SBLive)
- Changed filesystems from reiserfs to ext3 (/) and xfs (/home)
I'm running jack at rtprio 70, and also set the sound card IRQ's priority
to 99, but none of that seems to make a difference.
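(For reference, here is roughly what that setup looks like as commands;
the PIDs and device names below are illustrative, not from my actual
system. On an -rt kernel the IRQ handlers run as kernel threads, so
their priority can be changed with chrt:

ps -eo pid,rtprio,comm | grep -i irq   # find the soundcard's IRQ thread
chrt -f -p 99 <pid-of-irq-thread>      # SCHED_FIFO, priority 99

# jackd with realtime priority 70 and the settings mentioned above,
# assuming hw:0 is the Audiophile:
jackd -R -P70 -d alsa -d hw:0 -r 44100 -p 128 -n 2
)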
I have latency tracing enabled, but I can't make head or tail of it; to
me it looks like the xruns occur at random...
Now I'm at a loss. What else can I do?
Thanks,
Dominic