I'm new to OSS programming, and I'm attempting to play some 8-bit WAV files.
However, OSS is telling me that my sound card will not play 8-bit, only 16-bit.
If I force it, the sound changes pitch and is very fast (obviously).
Is there any way to convert 8-bit to 16-bit on the fly? I've noticed that XMMS
also fails to play the 8-bit WAV file correctly.
I've even tried to convert the file from 8-bit to 16-bit using SOX, but with the
same results. I would like to support 8-bit WAV files in my program, as MOST of
the WAVs available are in 8-bit format...
Anyone have some pointers?
PS: The command I used with sox is " sox -V -r 11025 -w -c 1 backup.wav
temp.wav "
--
From the moment I picked your book up until I put it down I was convulsed
with laughter. Some day I intend reading it.
-- Groucho Marx, from "The Book of Insults"
Q-Audio is a digital audio library for the Q programming language, which
interfaces to Phil Burk's PortAudio and Erik de Castro Lopo's libsndfile
libraries.
Q-Midi is a MIDI interface for Q, built on top of Grame's MidiShare.
Q is an interpreted functional programming language based on symbolic
expression rewriting, providing a high-level interactive programming
environment for scientific, computer music and other advanced
applications. All software is distributed under the GPL. Sources,
binaries and further information can be found on the Q homepage:
http://www.musikwissenschaft.uni-mainz.de/~ag/q/
Enjoy!
Albert Graef
--
Dr. Albert Gräf
Email: Dr.Graef(a)t-online.de, ag(a)muwiinfa.geschichte.uni-mainz.de
WWW: http://www.musikwissenschaft.uni-mainz.de/~ag
On Tuesday, 10 June 2003 13:21, Frank van de Pol wrote:
> On Tue, Jun 10, 2003 at 08:30:39AM +0200, Robert Jonsson wrote:
> > Hi,
> >
> > > In fact the bounce feature in MusE is "realtime". It means that you
> > > have to wait the real duration of the track for it to be rendered.
> > > In a non-"realtime" mode the track is rendered as fast as the computer can.
> >
> > AFAICT the realtimeness of the bounce feature is like that because of
> > design constraints. Okay, bouncing wavetracks should be possible in
> > non-realtime, but not when using softsynths.
> >
> > This is because all softsynths use alsa-sequencer as the input interface.
> > And if I'm not missing anything, this interface is strictly realtime
> > based. (perhaps it can be tweaked by timestamping every note and sending
> > them in batches? it seems very hard though.)
>
> You are right, with the current alsa-sequencer the softsynths are driven by
> realtime events. Though an application can enqueue the events to the
> priority queues with a delivery timestamp, the scheduling is handled
> internally by the alsa sequencer. This causes some problems, especially for
> sample-accurate synchronisation with JACK or LADSPA synth plugins (XAP?),
> but also for network transparency and for support of MIDI interfaces which
> accept timing hints (Steinberg LTB or Emagic AMT ... if specs of the
> protocol were available :-( ).
>
> During the LAD meeting at Karlsruhe we discussed this and sketched an
> alsa-sequencer roadmap that focuses on the transition of the alsa-sequencer
> from kernel to userspace and better integration with softsynths / JACK.
> A few things from this are very much related to your track bouncing /
> off-line rendering thing:
>
> - Provide a facility to delegate scheduling to the client. The implementation
> would be to deliver the events directly (without queuing) with the
> timestamp attached to the registered client port. This would allow the
> client to get the events before the deadline (time at which the event
> should be played) and use that additional time to put the events at the
> right sample position.
>
> Note that for the softsynth to take advantage of this, the application
> should enqueue the events (a bit) ahead of time and pass the timestamp.
> Some of the current applications (including MusE) use the alsa-sequencer
> only as an event router and drive it in real time.
>
> Since the softsynth/plugin has no notion of the actual time (only the
> media time and sample position), rendering at arbitrary speed should be
> possible: bounce faster than realtime or even slower than realtime for
> those complex patches.
>
> - JACK is real-time, and bound to the sample rate of the soundcard. Since
> the audio sample rate can also be used as a clock master for the alsa
> sequencer this would be a good option to ensure synchronisation. The
> transport of JACK and alsa sequencer can be tied together (either one of
> the two acting as master, a run-time configurable option) to provide
> uniform transport and media time amongst the applications that hook into
> the JACK and/or alsa sequencer framework.
>
> For the offline rendering no nice scheme has been worked out yet; I guess
> it would be something along the lines of the application that owns the
> sequencer queue having full control over the transport, moving media time
> at the speed the frames are actually rendered, and the app(s) generating
> the events keeping at least one sample frame ahead of time.
>
> Frank.
Okay, I didn't know that this had been on the table. How far has this work
progressed? Was it just the Karlsruhe meeting, or has more thinking occurred
since? (FYI, I'm CC:ing LAD, it might be a more appropriate place for this
discussion...)
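(Just to check that I follow the "enqueue ahead of time with a delivery
timestamp" part: with the current alsa-lib API I picture it roughly like
this. The port/queue setup is stripped down and the note data is invented,
so treat it as a sketch, not working application code.)

#include <unistd.h>
#include <alsa/asoundlib.h>

int main(void)
{
    snd_seq_t *seq;
    snd_seq_event_t ev;
    int port, queue;
    snd_seq_real_time_t when = { 1, 0 };   /* 1 second after queue start */

    if (snd_seq_open(&seq, "default", SND_SEQ_OPEN_OUTPUT, 0) < 0)
        return 1;
    snd_seq_set_client_name(seq, "timestamp-sketch");
    port = snd_seq_create_simple_port(seq, "out",
               SND_SEQ_PORT_CAP_READ | SND_SEQ_PORT_CAP_SUBS_READ,
               SND_SEQ_PORT_TYPE_MIDI_GENERIC);
    queue = snd_seq_alloc_queue(seq);
    snd_seq_start_queue(seq, queue, NULL);
    snd_seq_drain_output(seq);

    /* The event is handed over now, but carries a delivery timestamp;
       the sequencer's priority queue schedules the actual delivery. */
    snd_seq_ev_clear(&ev);
    snd_seq_ev_set_source(&ev, port);
    snd_seq_ev_set_subs(&ev);
    snd_seq_ev_set_noteon(&ev, 0, 60, 100);          /* ch 0, middle C */
    snd_seq_ev_schedule_real(&ev, queue, 0, &when);  /* absolute time  */
    snd_seq_event_output(seq, &ev);
    snd_seq_drain_output(seq);

    sleep(2);
    snd_seq_close(seq);
    return 0;
}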
Regards,
Robert
Hi!
new release ... horgand-092
The program is released under the GNU GPL version 2.
Horgand is an organ, JACK-capable, and recognizes MIDI Program Change (1-32).
Horgand is tested on a PIII 966 (Debian Sid) and a PII 300 (Gentoo).
Changes in 0.92
----------------------
-Added Reverb; at the moment only presets are available.
-All the slider and dial widgets respond in realtime.
-Solved a bug in keyboard level scaling that reduced noise;
unfortunately the sound changes a little bit.
-Small look changes and other minor bugs solved.
REQUIREMENTS:
* FAST COMPUTER
* LINUX
* ALSA
* JACK
* FLTK 1.1
Web Page :
http://personal.telefonica.terra.es/web/soudfontcombi/
Josep
I am a Computer Support technician at the Marine Research Lab, near
Leigh, North Island, New Zealand. We are a remote department of the
University of Auckland.
We are building a computer based on a JENLOGIX miniature industrial
motherboard into an underwater housing in order to record on the HDD
the output from 4 hydrophones.
We have a MIDIMAN DELTA 44 four-channel sound card and hope to be
able to write a simple script to record and save the sound streams to
disk automatically whenever the computer wakes up. An external timer
would wake up the computer for 10mins every hour.
I am a newbie to Linux but have just ordered Redhat9 and intend to
get the system working on a desktop (DELL OptiplexGXa, 233MHz PII)
before installing it on the U/W computer.
I have found the snd-ice1712 driver module on the ALSA site.
Does anyone know if there are scripts already written that would do
what we want or could be readily modified?
Any other advice welcomed.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Jo Evans, Network Manager, Computer Support ph ext.3601
University of Auckland ph 64 - 9 - 422 6111
Leigh Marine Laboratory fax 64 - 9 - 422 6113
P.O. Box 349, Warkworth j.evans(a)auckland.ac.nz
NEW ZEALAND - - - - - - - - - - - - - - - - - - - - - -
Last week there was a thread on LAD about DSP kits and integration
with Linux, and I mentioned that I knew of a few small DSP projects
with promise, but couldn't find the URL.
Found the URL today, and it is:
http://www.gweep.net/~shifty/death/
And:
http://www.gweep.net/~shifty/ezkit/
Seems like this might give some of you some inspiration in the "put a
cheap DSP into Linux, integrate it ..." department.
--
;
Jay Vaughan
r&d>>music:technology:synthesizers - www.access-music.de/
Hi!
I ported Sebastien Metrot's libakai to Linux a couple of weeks ago.
Until it is in CVS one day, you can get it from:
http://stud.fh-heilbronn.de/~cschoene/projects/libakai/
I added some code to the demo application to extract samples from an Akai disc
yesterday, so you can now actually not only see but also hear what's on a
disc.
You will notice that there's still a small bug; I will fix it ASAP, but I
won't complain if someone else looks for it ;)
Best regards.
Christian
Hi! I'm doing some studying on DSP, and one thing I could never properly
understand is the term "excitation signal". I seem to find it associated
with environmental or natural sources, but I can't really find a definition.
Could someone with enough knowledge of this subject please
give me a brief explanation of it? Thanks in advance!
Juan Linietsky
JACK 0.72.4
JACK is a low-latency audio server, written primarily for the GNU/Linux
operating system. It can connect a number of different applications to
an audio device, as well as allowing them to share audio between
themselves. Its clients can run in their own processes (ie. as normal
applications), or they can run within the JACK server (ie. as a
"plugin").
JACK is different from other audio server efforts in that it has been
designed from the ground up to be suitable for professional audio work.
This means that it focuses on two key areas: synchronous execution of
all clients, and low latency operation.
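For anyone new to the API, the skeleton of an out-of-process client looks
roughly like this (only a sketch, not one of the bundled examples; the
client/port names and the silence-only process callback are placeholders):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <jack/jack.h>

static jack_port_t *out_port;

/* Called by the JACK server once per period, in its realtime thread. */
static int process(jack_nframes_t nframes, void *arg)
{
    jack_default_audio_sample_t *out = jack_port_get_buffer(out_port, nframes);
    memset(out, 0, nframes * sizeof(jack_default_audio_sample_t)); /* silence */
    return 0;
}

int main(void)
{
    jack_client_t *client = jack_client_new("skeleton");
    if (client == NULL) {
        fprintf(stderr, "jackd not running?\n");
        return 1;
    }
    jack_set_process_callback(client, process, NULL);
    out_port = jack_port_register(client, "out", JACK_DEFAULT_AUDIO_TYPE,
                                  JackPortIsOutput, 0);
    jack_activate(client);   /* from here on, process() is being called */
    sleep(30);               /* do real work here instead of sleeping   */
    jack_client_close(client);
    return 0;
}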
**CHANGES**
* Updated documentation
* Bug fixes
* MacOSX port. Includes a ProjectBuilder file to help compilation.
Requires PortAudio to be installed.
* Ringbuffer example files added
* New example client: simple transport master to demonstrate Jack's
transport API. Requires GNU readline to compile.
* Removed software monitoring and improved hardware monitoring
semantics.
Taybin Rutkin
> I think someone else (Paul?) hit it on the head. If you load 2 8-bit
> samples and pass them to a soundcard that is expecting 16-bit samples,
> you'll get a LOT of garbage - like white noise - and it will be 1/2 as long
> as your input sample.
>
> See, each pair of 8-bit samples will become one 16-bit sample.
>
> input 8 8bit samples:
> dec: 0 64 127 64 0 -64 -127 -64
> hex: 0x00 0x40 0x7f 0x40 0x00 0xc0 0x81 0xc0
>
> read as 4 16bit samples (ignore endianness):
> hex: 0x0040 0x7f40 0x00c0 0x81c0
> dec: 64 32576 192 -32320
>
>
> Notice how the waveforms don't resemble each other at all.
>
>
> Clearer, now?
I see... So if I wanted to convert to 16-bit, how would you recommend I do
this? It would seem I would need some type of filler... err, white noise or
just blank noise to fill in the extra 8 bits. I guess I could convert every
sample read from the file from 8-bit (unsigned char) to 16-bit (signed short)
and then write it to /dev/dsp?
Bear with me here, I'm not a veteran... So I would have to convert the
unsigned char to a signed char, then to a signed short... Not knowing how
the conversion is done in the OS, I'm assuming that the resulting signed
short would be padded with 'off' bits, which would come out as silence,
correct? (But it's in the same sample, so you really wouldn't hear the
silence.)
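In code, I'm picturing something like this (assuming the 8-bit data is
unsigned with 128 as the silence level, which I believe is the WAV
convention, that the card wants signed 16-bit little-endian on a
little-endian machine, and skipping proper WAV header parsing, so it's
only a sketch):

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/soundcard.h>

int main(int argc, char **argv)
{
    int fmt = AFMT_S16_LE, channels = 1, rate = 11025;
    unsigned char u8buf[4096];
    short s16buf[4096];
    size_t n, i;
    FILE *in;

    int fd = open("/dev/dsp", O_WRONLY);
    if (fd < 0 || argc < 2 || !(in = fopen(argv[1], "rb")))
        return 1;

    ioctl(fd, SNDCTL_DSP_SETFMT, &fmt);       /* card only does 16 bit */
    ioctl(fd, SNDCTL_DSP_CHANNELS, &channels);
    ioctl(fd, SNDCTL_DSP_SPEED, &rate);

    fseek(in, 44, SEEK_SET);                  /* crude: skip the WAV header */

    while ((n = fread(u8buf, 1, sizeof u8buf, in)) > 0) {
        for (i = 0; i < n; i++)
            /* unsigned 8 bit (0..255, silence = 128) ->
               signed 16 bit (-32768..32767, silence = 0) */
            s16buf[i] = (short)(((int)u8buf[i] - 128) << 8);
        write(fd, s16buf, n * sizeof(short));
    }
    fclose(in);
    close(fd);
    return 0;
}

If that's right, there's no "filler" to add at all: every 8-bit sample
becomes exactly one 16-bit sample, so the sample count (and therefore the
length and pitch) stays the same.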
Werd?
--
It looked like something resembling white marble, which was
probably what it was: something resembling white marble.
-- Douglas Adams, "The Hitchhikers Guide to the Galaxy"