Hi folks,
How can I, from my Linux C program, simultaneously capture sound from the
sound card's line input (perhaps changing the level or doing other
processing) and send it back to the line output? The program below does
it, but with pauses of about a second.
Any ideas?
Thanks in advance,
Ralfs K
#include <unistd.h>
#include <fcntl.h>
#include <sys/types.h>
#include <sys/ioctl.h>
#include <stdlib.h>
#include <stdio.h>
#include <linux/soundcard.h>
#define LENGTH 3 /* how many seconds of speech to store */
#define RATE 8000 /* the sampling rate */
#define SIZE 8 /* sample size: 8 or 16 bits */
#define CHANNELS 1 /* 1 = mono 2 = stereo */
/* this buffer holds the digitized audio */
unsigned char buf[LENGTH*RATE*SIZE*CHANNELS/8];
int main(void)
{
  int fd;     /* sound device file descriptor */
  int arg;    /* argument for ioctl calls */
  int status; /* return status of system calls */

  /* open sound device */
  fd = open("/dev/dsp", O_RDWR);
  if (fd < 0) {
    perror("open of /dev/dsp failed");
    exit(1);
  }

  /* set sampling parameters */
  arg = SIZE; /* sample size */
  status = ioctl(fd, SOUND_PCM_WRITE_BITS, &arg);
  if (status == -1)
    perror("SOUND_PCM_WRITE_BITS ioctl failed");
  if (arg != SIZE)
    fprintf(stderr, "unable to set sample size\n");

  arg = CHANNELS; /* mono or stereo */
  status = ioctl(fd, SOUND_PCM_WRITE_CHANNELS, &arg);
  if (status == -1)
    perror("SOUND_PCM_WRITE_CHANNELS ioctl failed");
  if (arg != CHANNELS)
    fprintf(stderr, "unable to set number of channels\n");

  arg = RATE; /* sampling rate */
  status = ioctl(fd, SOUND_PCM_WRITE_RATE, &arg);
  if (status == -1)
    perror("SOUND_PCM_WRITE_RATE ioctl failed");
  if (arg != RATE)
    fprintf(stderr, "unable to set sampling rate\n");

  while (1) { /* loop until Control-C */
    printf("Say something:\n");
    status = read(fd, buf, sizeof(buf)); /* record some sound */
    if (status != sizeof(buf))
      perror("read wrong number of bytes");
    printf("You said:\n");
    status = write(fd, buf, sizeof(buf)); /* play it back */
    if (status != sizeof(buf))
      perror("wrote wrong number of bytes");
    /* wait for playback to complete before recording again */
    status = ioctl(fd, SOUND_PCM_SYNC, 0);
    if (status == -1)
      perror("SOUND_PCM_SYNC ioctl failed");
  }
}
Clemens Ladisch <clemens(a)ladisch.de> wrote:
>
> chris.wareham(a)btopenworld.com wrote:
> > Has anyone had succes in getting SysEx data flowing back and forth
> > between Roland sound modules and their computer?
>
> Yes, SC-8820, over USB.
>
> > I have attached my
> > simple test program in case I'm doing something obviously wrong.
>
> I see nothing obviously wrong.
>
> It might be time-saving to try the low-tech approach first:
> do a "cat /dev/midi42 > somefile", then, on another console, run:
>
> echo -ne '\xf0\x41\x10\x16\x11\x04\x01\x76\x00\x01\x76\x0e\xf7' > /dev/midi42
>
I'll try this approach tonight.
> > > > However, whenever I send a Request Data
> > > >message the unit doesn't appear to respond. It definitely receives the
> > > >message as the MIDI activity light comes on. But when I try to read from
> > > >the MIDI device file my call blocks forever.
>
> My unit's light is active while the dump is being transmitted (several
> seconds, depending on the size of the data). Are you sure yours is
> sending anything?
>
The MIDI activity light flashes *very* briefly, so it could be that
nothing's being sent back.
> > I have tried using Linux, but gave up when I couldn't get ALSA or
> > OSS to recognise my USB MIDI interface, (a Yamaha UX96). Under
> > NetBSD it's recognised as soon as I plug it in.
>
> Any somewhat recent version of ALSA does support the UX96.
>
I tried Red Hat 9.0 and the CCRMA packages, but I guess I must not have
set up the modules.conf file properly. I'm used to the monolithic kernel
approach, and I found the whole modules.conf thing confusing. I assume I
need an entry in there to load the USB MIDI interface driver - could you
send me your modules.conf so I can see what sort of entries are
required?
Chris
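(For what it's worth, under a 2.4 kernel the modules.conf entries for an ALSA USB MIDI interface typically look something like the lines below. This is an era-typical sketch, not a config tested against the UX96 specifically; the generic snd-usb-audio driver is what handles USB MIDI devices in ALSA.)

```
# ALSA core
alias char-major-116 snd
# treat the USB interface as card 0
alias snd-card-0 snd-usb-audio
# OSS emulation hooks
alias char-major-14 soundcore
alias sound-slot-0 snd-card-0
```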
tisdagen den 10 juni 2003 13.21 skrev Frank van de Pol:
> On Tue, Jun 10, 2003 at 08:30:39AM +0200, Robert Jonsson wrote:
> > Hi,
> >
> > > In fact the bounce feature in MusE is "realtime". It means that you
> > > have to wait the real duration of the track to be rendered.
> > > In a non "realtime" mode the track is rendered as fast as the computer can.
> >
> > AFAICT the realtimeness of the bounce feature is like that because of
> > design constraints. Okay, bouncing wavetracks should be possible in
> > non-realtime, but not when using softsynths.
> >
> > This is because all softsynths use alsa-sequencer as the input interface.
> > And if I'm not missing anything, this interface is strictly realtime
> > based. (perhaps it can be tweaked by timestamping every note and sending
> > them in batches? it seems very hard though.)
>
> You are right, with the current alsa-sequencer the softsynths are driven by
> realtime events. Though an application can enqueue the events to the
> priority queues with delivery timestamp, the scheduling is handled
> internally by the alsa sequencer. This causes some problems (especially for
> sample accurate synchronisation with JACK or LADSPA synth plugins (XAP?)),
> but also for network transparency and support for MIDI interfaces that
> accept timing hints (Steinberg LTB or Emagic AMT ... if only specs of
> the protocol were available :-( ).
>
> During the LAD meeting at Karlsruhe we discussed this and sketched an
> alsa-sequencer roadmap that focuses on transition of the alsa-sequencer
> from kernel to userspace and better integration with softsynths / JACK.
> A few things from this are very much related to your track bouncing /
> off-line rendering thing:
>
> - Provide facility to delegate scheduling to the client. The implementation
> would be to deliver the events directly (without queuing) with the
> timestamp attached to the registered client port. This would allow the
> client to get the events before the deadline (time at which the event
> should be played) and use that additional time to put the events at the
> right sample position.
>
> Note that for the softsynth to get advantage of this the application
> should enqueue the events (a bit) ahead of time and pass the timestamp.
> Some of the current applications (including MusE) use the alsa-sequencer
> only as event router and drive it real-time.
>
> Since the softsynth/plugin has no notion of the actual time (only the
> media time and sample position), rendering at arbitrary speed should be
> possible: bounce faster than realtime or even slower than realtime for
> those complex patches.
>
> - JACK is real-time, and bound to the sample rate of the soundcard. Since
> the audio sample rate can also be used as a clock master for the alsa
> sequencer this would be a good option to ensure synchronisation. The
> transport of JACK and alsa sequencer can be tied together (either one of
> the two acting as master, a run-time configurable option) to provide
> uniform transport and media time amongst the applications that hook into
> the JACK and/or alsa sequencer framework.
>
> For the offline rendering no nice scheme has been worked out yet; I guess
> it would be something along the lines where the application that owns the
> sequencer queue has full control on the transport, moving media time at the
> speed the frames are actually rendered, and the app(s) generating the
> events keeping at least one sample frame ahead of time.
Okay, I didn't know that this had been on the table. How far has this work
progressed - was it just the Karlsruhe meeting, or has more thinking
occurred since? (FYI, I'm CC:ing LAD; it might be a more appropriate place
for this discussion.)
Regards,
Robert
Jay Vaughan <seclorum(a)mac.com> wrote:
>
> >I'm trying to write a patch editor and librarian for Roland's CM-32L
> >synth module. This was a repackaged version of the popular MT-32 with 8
> >part multimbral support, separate drums and a basic reverb facility. The
> >only controls on the front panel are a power switch and master volume
> >knob - everything else has to be done via MIDI.
>
> I don't understand why you're re-inventing the wheel ... why don't
> you try to write your app using one of the MIDI libs out there which
> already work, such as MidiShare for example?
>
> I can't see any obvious glaring problems with your code, but I'd
> recommend you try to write your app using an already-working MIDI
> library... there's just no point re-inventing a MIDI API for Linux,
> the ones that are there already work perfectly well.
>
Hi Jay,
I took a look at MidiShare, but unfortunately it won't work on NetBSD as
it depends on a kernel module. I have tried using Linux, but gave up
when I couldn't get ALSA or OSS to recognise my USB MIDI interface, (a
Yamaha UX96). Under NetBSD it's recognised as soon as I plug it in.
I did Google around expecting there to be lots of convenient MIDI
libraries, but I couldn't find any. Is ALSA so good that no one uses
anything else? Using a high-level library seems to be overkill for
System Exclusive stuff anyway, especially as the details differ on
virtually every MIDI device.
Chris
Hi
I can't answer your question, but this email has answered one of mine,
and I have successfully edited my asound.conf file to do what you are
doing: recording in through discrete channels and outputting through
discrete channels.
Maybe someone on the Ardour list can answer us both by confirming that
this will allow multichannel recording in Ardour (a possible solution
to the question I think you are asking).
cheers
Allan
P.S. I'm still working on a way of combining my Delta 66 with my
intel8x0 (built into the motherboard) so that I can use the stereo out
of the intel8x0 for monitoring while using the outputs of the Delta 66
for channel recording (i.e. sending output into the mixing desk to use
outboard processing gear or for external mixdown of tracks).
On Mon, 2003-06-09 at 20:25, Akos Maroy wrote:
> In my quest to record from the different channels of the Delta 1010LT I
> have come so far that I can address the different hardware inputs using
> ALSA device names, while using an appropriate /etc/asound.conf file.
> e.g. I can
>
> arecord -f cd -d 5 -D channel2 test.wav
>
> and this would record from hardware input channel #2. I have four stereo
> input channels, e.g. channel1 ... channel4, plus the spdif channel. So
> far, so good.
>
> Now I would need a way to map these channels to OSS /dev/dsp interfaces.
> something like:
>
> /dev/dsp1 -> channel1
> /dev/dsp2 -> channel2
> /dev/dsp3 -> channel3
> /dev/dsp4 -> channel4
>
> I understand that I would need to use the kernel module snd-pcm-oss to
> achieve this. how can I tell this module to map to the appropriate ALSA
> devices?
>
> BTW, the contents of this asound.conf file are:
>
> pcm.ice1712 {
> type hw
> card 0
> device 0
> }
>
> # adcdac 1&2
> pcm.channel1 {
> type plug
> ttable.0.0 1
> ttable.1.1 1
> slave.pcm ice1712
> }
>
> # adcdac 3&4
> pcm.channel2 {
> type plug
> ttable.0.2 1
> ttable.1.3 1
> slave.pcm ice1712
> }
>
> #adcdac 5&6
> pcm.channel3 {
> type plug
> ttable.0.4 1
> ttable.1.5 1
> slave.pcm ice1712
> }
>
> # adcdac 7&8
> pcm.channel4 {
> type plug
> ttable.0.6 1
> ttable.1.7 1
> slave.pcm ice1712
> }
>
> #SPDIF channels only
> pcm.ice1712_spdif {
> type plug
> ttable.0.8 1
> ttable.1.9 1
> slave.pcm ice1712
> }
--
Allan Klinbail <allank(a)labyrinth.net.au>
Hi, I've been playing a lot with the Bristol synth and really love it -
so much so that I've been trying to 'Jackify' it. Actually, I'm pretty
much done, but I can't figure out the internal audio format. It's
interleaved floats, I think, but not normalised to [-1, 1]. If any of
the developers are here, could you help me out? I can hear noise, but I
need to tune the maths. TIA.
--ant
http://www.sequencer.de/neuron/neuronal.html
This synth has a mainboard running Linux inside :)
It seems Stanton (Final Scratch) is not the only one relying on
Linux in the pro audio world these days ..
regards,
Vincent