> Syncing video playback to Ardour would be a great example of the
> usefulness of jack-transport. Unfortunately, I guess the fact
> is that the design of mplayer means it can never be a real jack
> client. Perhaps a jack client could control mplayer via its slave
> mode, for use with non-keyframe-based video formats, but that's a
> bit of a hack.
I needed a jack-driven video player to allow me to compose the
soundtrack for a video clip in MusE. Since there was none available, I
made my own. I think it is a not-so-great but adequate example of the
usefulness of jack-transport ;-D
One of the solutions I thought of was the one you suggested: mplayer
slave mode. I don't remember exactly why I dropped it.
Finally I ended up gluing some libraries together to make a very quick
hack that did the job. Don't expect good video performance, nor support
for keyframed video formats; I only spent a couple of days and got what
I needed.
I use it successfully with MusE and ardour. You can check the project
page and a screenshot:
http://sourceforge.net/projects/xjadeo/
http://sourceforge.net/project/screenshots.php?group_id=131926
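In case it helps anyone trying the same thing: the core of the sync is
just polling jack for the transport position and mapping it to a video
frame. A minimal sketch in C (not xjadeo's actual code; the client name
and the 25 fps rate are arbitrary):

  /* Poll jack-transport and map the position to a video frame.
   * Compile with: gcc -o transport_poll transport_poll.c -ljack */
  #include <stdio.h>
  #include <unistd.h>
  #include <jack/jack.h>
  #include <jack/transport.h>

  int main(void)
  {
      jack_client_t *client =
          jack_client_open("video-sync", JackNullOption, NULL);
      if (!client)
          return 1;
      const double fps = 25.0;          /* assumed video frame rate */
      for (;;) {                        /* runs until killed */
          jack_position_t pos;
          if (jack_transport_query(client, &pos) == JackTransportRolling) {
              double seconds = (double)pos.frame / pos.frame_rate;
              printf("show video frame %ld\r", (long)(seconds * fps));
              fflush(stdout);
          }
          usleep(1000000 / 25);         /* poll about once per frame */
      }
  }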
Regards,
Luis
Hi!
The aim is to investigate a signal which consists of a main
harmonic and others at rather low levels. I'd like to reject the
main harmonic _and_not_affect_ the other harmonics. What
LADSPA plugin is the most suitable for such work?
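To make the requirement concrete: what I am after behaves like a very
narrow notch, e.g. the standard RBJ biquad notch sketched below (just
an illustration; fc and Q are placeholders I would tune, and a high Q
keeps the neighbouring harmonics almost untouched):

  /* Sketch of an RBJ biquad notch: rejects fc, leaves other
   * harmonics nearly untouched when Q is high. */
  #include <math.h>

  typedef struct { double b0, b1, b2, a1, a2, x1, x2, y1, y2; } Notch;

  void notch_init(Notch *n, double fc, double q, double fs)
  {
      double w0 = 2.0 * M_PI * fc / fs;
      double alpha = sin(w0) / (2.0 * q);
      double a0 = 1.0 + alpha;
      n->b0 = 1.0 / a0;
      n->b1 = -2.0 * cos(w0) / a0;
      n->b2 = 1.0 / a0;
      n->a1 = -2.0 * cos(w0) / a0;
      n->a2 = (1.0 - alpha) / a0;
      n->x1 = n->x2 = n->y1 = n->y2 = 0.0;
  }

  double notch_process(Notch *n, double x)
  {
      double y = n->b0 * x + n->b1 * n->x1 + n->b2 * n->x2
               - n->a1 * n->y1 - n->a2 * n->y2;
      n->x2 = n->x1; n->x1 = x;
      n->y2 = n->y1; n->y1 = y;
      return y;
  }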
Thanks!
Andrew
Hi all,
I did a bit of hacking on my app trying to make it ALSA sequencer
compatible, but did not want to change too much in terms of how it deals
with raw MIDI data. From looking at the API reference it seems that I
have two choices:
1) Use the raw MIDI option and specify a "virtual" name, which makes my
app's MIDI I/O appear in the ALSA sequencer, *but* does not give me an
option of changing the node names, which obviously is very important
when it comes to working with a lot of apps concurrently. So therefore
here's my first question:
Is there a way to specify a "virtual" port so that I can receive raw
MIDI data, have the ports show up in the ALSA sequencer, and on top of
that be able to *rename* the port as necessary?
2) The other option is obviously to use the ALSA sequencer API, but in
that case is there a way to simply convert the stream of received MIDI
data into raw MIDI format, so that I can use my built-in raw MIDI
parsing engine for parsing the messages?
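For option 2, what I am imagining is something along the lines of the
sketch below, using alsa-lib's snd_midi_event_* encoder/decoder, which
I believe exists for exactly this job (my_raw_midi_parser is a stand-in
for my own engine; error handling omitted):

  /* Sketch: turn incoming ALSA sequencer events back into raw MIDI
   * bytes so an existing raw-MIDI parsing engine can be reused. */
  #include <alsa/asoundlib.h>

  /* hypothetical: the app's existing raw-MIDI parser */
  extern void my_raw_midi_parser(const unsigned char *buf, long len);

  void pump_events(snd_seq_t *seq)
  {
      snd_midi_event_t *codec;
      unsigned char buf[1024];
      snd_seq_event_t *ev;

      snd_midi_event_new(sizeof(buf), &codec);
      while (snd_seq_event_input(seq, &ev) >= 0) {
          long n = snd_midi_event_decode(codec, buf, sizeof(buf), ev);
          if (n > 0)
              my_raw_midi_parser(buf, n);
          snd_seq_free_event(ev);
      }
      snd_midi_event_free(codec);
  }

As far as I can tell, going through the sequencer API would also solve
the renaming problem from question 1, since snd_seq_create_simple_port()
takes an arbitrary port name.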
Any help is greatly appreciated!
Best wishes,
Ivica Ico Bukvic, composer & multimedia sculptor
http://meowing.ccm.uc.edu/~ico/
So I messed around for a little bit and found something that kinda
works. I was routing alsaplayer into ardour (all this with jack of
course), along with my sound card input, mixing the two channels
in ardour, and sending ardour's master out into oddcast, which
then sent to an icecast server, which I connected to with xmms, which
sent the audio to the soundcard output.
And here was the trick. I increased the buffer size in jack from 1024 to
4096, which consumed far fewer cpu resources, thereby decreasing the
ringbuffer-full errors from oddcast, and I also used jack.plumbing to do
whatever it does between oddcast and ardour, which kept the stream from
crashing. So now whenever I get the ringbuffer-full error, the stream
just jitters as long as the error persists, and it seemed to return to
normal as soon as cpu resources became available again. So... sort of a
solution.
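(For reference, the buffer size change is just the period size given
when starting jackd with the alsa backend, e.g.
"jackd -R -d alsa -d hw:0 -p 4096 -n 2", where hw:0 stands in for
whatever device you actually use.)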
I'm hoping that in actual practice I'll be able to divide some of these
tasks between multiple machines, probably having oddcast running on its
own machine and pumping in an audio stream with jack.udp. I'll try to
check on this later today, and will report back if it's stable at all.
Pretty impressive, though, that my g4 1.5GHz could handle all that. All
this would probably work better on an intel system though?
- Ben
> I've been trying to get oddcast to work the past couple days, but keep
> running into this problem
>
> ringbuffer full, tried to write 4096, but wrote 0
>
> it's usually in response to something distracting the processor for a
> second, like dragging a window across the screen. I have tried on a
> 1.5GHz PPC processor and a 700MHz PIII, and the problem persists. I
> have searched for other users reporting similar problems, and found
> one hint on the oddsock site
>
> .... but unfortunately their site is in a period of transition; plug
> the above line into google and you'll find a couple of similar
> complaints. Apparently, the code for oddcast was scraped from
> ices2-jack, which would explain why I have no problems when trying to
> do this with Ogg, but maybe Ogg is less processor intensive? I'm only
> having problems when trying to encode with LAME.
>
> If anybody has any ideas, I would love to get this working for our
> radio station. But if not oddcast, does anybody know if there are any
> other GNU mp3 encoders out there (darkice?) that are jack compatible?
> It seems like the darkice guy has been working on livesupport, which I
> can't wait to see in action.
>
> - Ben Racher
>
Steve, this is from ardour-users. I know the problem too: SC1 to SC4
more or less always stop working after some twiddling, meaning the
audio gets through only when the SC* plugins are bypassed or removed. I
suppose you don't know about the problem, but maybe you can tell me what
exactly you need to know to debug this.
Wolfgang
Hello!
I want to set up a multichannel audio environment for use
together with software that synthesizes the waveforms for
each channel.
My newest audio card is a "Hollywood@home 7.1" card based on the
ENVY24HT-S chip.
I have managed to use the card to output mono and/or stereo sound
under Slackware 10.1, but still want to find out whether the card
supports the conversion of uncompressed multichannel audio (for
example 6 or 8 16-bit (or 24-bit) samples per frame, from a FIFO
through the operating system and hardware) into analog signals.
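To make the question concrete, what I would like to work is roughly the
following alsa-lib sketch ("hw:0", the S16 format, and 48 kHz are just
guesses for this card; error handling is mostly omitted). If the
set_channels call accepts 8 here, the card can do what I want:

  /* Sketch: open an ALSA PCM device for 8-channel interleaved
   * 16-bit playback. */
  #include <stdio.h>
  #include <alsa/asoundlib.h>

  int main(void)
  {
      snd_pcm_t *pcm;
      snd_pcm_hw_params_t *hw;
      unsigned int rate = 48000;

      if (snd_pcm_open(&pcm, "hw:0", SND_PCM_STREAM_PLAYBACK, 0) < 0)
          return 1;
      snd_pcm_hw_params_alloca(&hw);
      snd_pcm_hw_params_any(pcm, hw);
      snd_pcm_hw_params_set_access(pcm, hw, SND_PCM_ACCESS_RW_INTERLEAVED);
      snd_pcm_hw_params_set_format(pcm, hw, SND_PCM_FORMAT_S16_LE);
      if (snd_pcm_hw_params_set_channels(pcm, hw, 8) < 0)
          fprintf(stderr, "card refuses 8 independent channels\n");
      snd_pcm_hw_params_set_rate_near(pcm, hw, &rate, 0);
      snd_pcm_hw_params(pcm, hw);    /* also prepares the device */

      short frame[8] = { 0 };        /* one frame: 8 independent samples */
      snd_pcm_writei(pcm, frame, 1);
      snd_pcm_close(pcm);
      return 0;
  }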
If this is not possible with the internal DACs of the card, then
I hope that it at least will be possible through ALSA with external
DACs connected to the card's optical output, which seems to be of
a high-speed kind and is expected to support ADAT (a standard
established by Alesis, I think).
I am not so interested in encoding the audio in DTS or some
Dolby multichannel format or similar, since I want the channels
to be handled independently. For example, I do not want to label a
speaker as "left" or "right" and then select the speaker by associating
a direction with the sound. Instead I want to be able to assign D/A
channels to MIDI file tracks or MIDI channels and then place the
speakers where I want them (not where some multichannel decoder expects
them to be).
If I try to use several cards, then a lot of synchronization
problems need to be solved, to avoid divergence in FIFO usage
between cards and to maintain phase coherency. In addition to
arrangements like patching the cards to use a common crystal
oscillator and/or reserving some DAC channels for synchronization
purposes, I also need to convince the motherboard to accept several
sound cards without causing problems during resource allocation
(for example around IRQ mapping), or distribute the cards between
several PCs connected together with maybe 100Mbps network cards.
Once I have solved these questions, I may have to think more
about the best algorithms for synthesizing sound, by looking at
existing open source programs and/or implementing additive synthesis,
with or without the use of FFT or other optimized code.
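(The additive synthesis I have in mind is nothing fancier than summing
sine partials, as in this little sketch; the 1/k amplitude rolloff is
arbitrary, and a real implementation would carry phase across buffers:)

  /* Sketch of plain additive synthesis: sum N sine partials
   * of fundamental f0 into a buffer at sample rate fs. */
  #include <math.h>

  #define N_PARTIALS 16

  void render(float *buf, int frames, double f0, double fs)
  {
      for (int i = 0; i < frames; i++) {
          double t = i / fs;
          double s = 0.0;
          for (int k = 1; k <= N_PARTIALS; k++)
              s += (1.0 / k) * sin(2.0 * M_PI * k * f0 * t);
          buf[i] = (float)(0.25 * s);    /* crude headroom scaling */
      }
  }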
I am new to this list, so please excuse me if the answers to my
questions are hidden in any recent messages in the archive.
Thanks in advance for feedback/answers!
Hans Davidson
Hi,
first of all, sorry for the cross-posting, but I thought the
topic was worth it.
This year the Linuxtag will again take place in Karlsruhe,
Germany. It's Europe's most important event concerning free,
especially Linux-based, software.
During the last few years, Frank Neumann has kindly organized an
audio booth there. Unfortunately, it may happen that he'll
not be able to do all the management work this year.
So, I'll try my very best to help out this year; otherwise we
wouldn't have a booth.
We're still looking for people who are interested in helping,
mainly by being at the booth, answering questions, and demoing
software.
If you'd like to contribute, please let me know. For detailed
information about the Linuxtag, see
http://www.linuxtag.org/2005/en/home.html
or contact me via personal mail.
Best regards
ce
>Andrew Burgess wrote:
>> Mine comes up as 102, I've had to edit the driver source in the
>> past to get it to work. Could you make sure 102 will work too?
>The firmware for IDs 0100 and 0102 is indeed the same.
>Editing drivers/usb/misc/emi26.c in the kernel source and replacing
>0x0100 with 0x0102, and then recompiling the kernel, should enable it
>to work.
Yep, that's what I did. I posted the note hoping you'd make it work for
both, since you were going to look at the code anyway...
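For reference, I believe the clean fix is just a second entry in the
driver's USB device ID table, something like the sketch below (the
exact macro and table names in drivers/usb/misc/emi26.c may differ
between kernel versions):

  /* Sketch: accept both product IDs in emi26's device table. */
  static struct usb_device_id id_table[] = {
          { USB_DEVICE(EMI26_VENDOR_ID, 0x0100) },
          { USB_DEVICE(EMI26_VENDOR_ID, 0x0102) },  /* later revision */
          { }                                       /* terminator */
  };
  MODULE_DEVICE_TABLE(usb, id_table);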
>From: David Cournapeau <cournape(a)enst.fr>
>
> I think I am not the only one here to have heard about the so-called
>'3d ray tracing cpu' (e.g.:
>http://graphics.stanford.edu/papers/rtongfx/rtongfx.pdf )
Saarcor has previously demonstrated its ray tracing graphics chips.
One of the screenshots featured the Quake game, in a ray-traced version.
http://graphics.cs.uni-sb.de
People have also coded ray tracers on existing GPUs.
>I wondered if this could be used for audio technology, e.g. RT reverbs,
>etc... Does anyone have an idea about the possible usage of this kind
>of chip for audio processing?
Even if the GPU cannot handle audio streams directly, it could be used
to compute the early reflections for fairly complicated buildings; only
the reflection taps would be copied back from the GPU. Both mirror
(image-source) and ray tracing methods could be used. The final dense
reverberation would still have to be computed separately, since the
mirror and ray tracing stages generate only an approximation, but that
is easier once you have a lot of reflection taps.
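To make the reflection-tap idea concrete, here is a sketch of
first-order image sources for a simple shoebox room; each mirrored
source yields one tap whose delay and gain follow from the distance
(the positions are arbitrary examples, and a GPU version would do the
same geometry for many more reflections):

  /* Sketch: first-order image-source reflections in a shoebox room.
   * Each wall mirrors the source; each image gives one tap with
   * delay = distance / c and gain ~ 1 / distance. */
  #include <math.h>
  #include <stdio.h>

  int main(void)
  {
      double room[3] = { 6.0, 4.0, 3.0 };  /* room size in metres */
      double src[3]  = { 2.0, 1.5, 1.2 };  /* source position */
      double lst[3]  = { 4.0, 2.5, 1.2 };  /* listener position */
      double c = 343.0, fs = 44100.0;

      for (int axis = 0; axis < 3; axis++) {
          for (int wall = 0; wall < 2; wall++) {
              double img[3] = { src[0], src[1], src[2] };
              /* mirror the source in the wall plane (x=0 or x=L, etc.) */
              img[axis] = wall ? 2.0 * room[axis] - src[axis] : -src[axis];
              double d = 0.0;
              for (int i = 0; i < 3; i++)
                  d += (img[i] - lst[i]) * (img[i] - lst[i]);
              d = sqrt(d);
              printf("tap: %ld samples, gain %.3f\n",
                     (long)(d / c * fs), 1.0 / d);
          }
      }
      return 0;
  }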
Juhana
--
http://music.columbia.edu/mailman/listinfo/linux-graphics-dev
for developers of open source graphics software