Hi All
I think I know the answer to this, but I was wondering if it's
possible to mix two separate audio sources in software. I'd like to be
able to play announcement-type audio atop the currently playing main
track. An example might be when driving down the Autobahn and a
traffic report temporarily pre-empts whatever it is you are listening
to. In my case, though, I'd prefer to keep the main audio playing, but
at a very low volume level compared to the short announcement.
My setup is a roll-your-own embedded Linux distro with kernel 2.6.28.
I'm using Alsa SoC audio driving an I2S output on my Marvell processor
(which is now working just fine, BTW). The I2S signals feed an FM
transmitter chip which is broadcasting audio to a nearby FM receiver.
The FM chip only has the one input so no mixing possible within that
part. Anyhow, considering I only have a single I2S output, it seems to me
that I'd need to do the mixing somewhere upstream of that serial port
within the Linux machine. But there is no special hardware for this
and I think the answer is "no, there is no way to mix two separate
audio tracks without a DSP and another device driver". But I thought
I'd ask just the same.
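To be clear about what I mean by mixing in software: something like the sketch below, done on the PCM buffers before they ever reach the single ALSA/I2S device (purely an illustration; the function and the gain values are made up, not from any real driver):

#include <stddef.h>
#include <stdint.h>

/* Mix n samples of the announcement over the main track, "ducking" the
 * main track down.  Both inputs are assumed to be 16-bit PCM already
 * decoded to the same sample rate; gain values are only illustrative. */
static void duck_mix(const int16_t *main_in, const int16_t *announce,
                     int16_t *out, size_t n)
{
    const float main_gain = 0.15f;      /* main track pushed way down     */
    const float announce_gain = 0.95f;  /* announcement nearly full scale */

    for (size_t i = 0; i < n; i++) {
        float s = main_in[i] * main_gain + announce[i] * announce_gain;
        if (s >  32767.0f) s =  32767.0f;   /* clamp to 16-bit range */
        if (s < -32768.0f) s = -32768.0f;
        out[i] = (int16_t)s;
    }
}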
Regards,
Rory
Hi all,
In keeping with the usual Xsynth-DSSI hackery I like to do (since it
makes such an awesome experimental base) I'd like to present my latest
horrible unlistenable noise generator:
http://www.gjcp.net/~gordonjcp/xsynthhack.ogg
Warning: only the voice generation code has been changed. This will
overwrite or otherwise badly affect an existing install of Xsynth-DSSI.
Be careful!
http://www.gjcp.net/~gordonjcp/xsynth-dssi-0.9.4.tar.gz
So what's different about it? Well, the minblep band-limited
oscillators have been replaced by Tomisawa sine-feedback oscillators.
These resemble "operator 4" in a DX21 or other four-op FM synth, in that
by applying FM feedback around a sine function it starts to approximate
a sawtooth wave. If you take two sawtooth waves, offset the phases, and
subtract, you get a squarewave. By varying the offset, you vary the
pulsewidth.
Now here's the clever bit - I've modified things slightly so that the
two Tomisawa generators can run at different speeds. So, by offsetting
the frequencies you get either a deep PWM squarewave or a kind of
"supersaw"-type sound. By varying the amount of modulation (beta) you
can determine the "shape" of the waveform.
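In case it helps to picture it, the core of each generator is roughly this (a sketch of the idea, not the exact code in the tarball; the averaged feedback and all the names here are mine):

#include <math.h>

typedef struct {
    double phase;    /* current phase in radians                    */
    double y1, y2;   /* last two outputs, for the averaged feedback */
} TomisawaOsc;

/* One sample of a single sine-feedback generator:
 * y = sin(phase + beta * feedback), with the feedback averaged over the
 * last two outputs to keep it stable when beta is cranked up. */
static double tomisawa_tick(TomisawaOsc *o, double freq_hz,
                            double beta, double sample_rate)
{
    double fb = 0.5 * (o->y1 + o->y2);
    double y  = sin(o->phase + beta * fb);

    o->y2 = o->y1;
    o->y1 = y;

    o->phase += 2.0 * M_PI * freq_hz / sample_rate;
    if (o->phase >= 2.0 * M_PI)
        o->phase -= 2.0 * M_PI;
    return y;
}

/* One output sample of the pair: the second generator starts with a phase
 * offset (that sets the pulse width) and can be detuned slightly (that
 * gives the drifting "supersaw"-ish sound); subtracting the two saw-like
 * outputs gives the square-like wave. */
static double pair_tick(TomisawaOsc *a, TomisawaOsc *b,
                        double freq_hz, double detune_hz,
                        double beta, double sample_rate)
{
    double sa = tomisawa_tick(a, freq_hz, beta, sample_rate);
    double sb = tomisawa_tick(b, freq_hz + detune_hz, beta, sample_rate);
    return 0.5 * (sa - sb);
}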
So how are the oscillator controls affected? Pitch remains the same.
Waveshapes are Sine (as you'd expect), Tri (saw with not much beta,
really), Saw up and down are both just saw, Square (adjustable
pulsewidth), Square with adjustable drift, and Saw with adjustable
drift. Pitch mod and sync don't currently work. I don't know if sync
can be made to work without introducing aliasing.
Have a play and let me know how you get on.
Gordon MM0YEQ
Hi all,
I understand that a lot of you develop for free software and are
passionate about what you do. But how do you pay the bills? What do you
do for a living? Are you a student? Do you do software development
just as a hobby, or do you want to make a living doing this kind of work?
The reason I ask is that I am curious about what kinds of
backgrounds free software developers have. As for me, I am a student
majoring in Music and minoring in Computer Science. I got the idea of
writing this email, actually, because Google had an internship panel at
my school. Google just loooooves open source and those involved. It
sounds like Google has quite a friendly and cooperative working
atmosphere, and they treat their employees very well. Yeah, I'd like to
work for Google, but who doesn't, right? :)
-Kris
Hello,
I'm writing a synth module on top of JACK and I'm starting to contemplate stereo.
I looked up "pan law" and understand that the centre should be -3 dB (or some say -3.5 or -4.5, whatever) given unity gain when panned hard L or R. It was also said that in an ideal room I should be down -6 dB at the centre. That would mean transitioning linearly from unity gain to completely off as one pans, I *think* (-6 dB at the centre = 0.5).
So that's one (very easy) way to go...
There was also mention of "equal power". Since power is proportional to signal squared, this means with parametrized L and R functions
L(t)^2 + R(t)^2 = 1, 0 <= t <= 1
We need an f(t) such that f(t) = L(t) and f(1-t) = R(t).
I played around with this for a while and, using sin^2 + cos^2 = 1 and the like, I got an f(t) of
f(t) = cos( pi * t / 2 ) (details available on request)
So L(t) = cos(t * pi / 2) and R(t) = cos((1 - t) * pi / 2) = sin(t * pi / 2), which gives L(t)^2 + R(t)^2 = 1.
And that seems to work out correctly.
Is this equal power version worth spending the processing cycles on? I intend to make pan envelope and LFO controllable so it's not going to be the case that the pan value can be thought of as relatively static.
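For concreteness, the two curves I'm comparing look something like this (just a sketch, function names made up):

#include <math.h>

/* t runs from 0.0 (hard left) to 1.0 (hard right). */

static void pan_linear(float t, float *l, float *r)
{
    *l = 1.0f - t;   /* 0.5 each at the centre: -6 dB per side */
    *r = t;
}

static void pan_equal_power(float t, float *l, float *r)
{
    *l = cosf(t * (float)M_PI * 0.5f);            /* ~0.707 at centre: -3 dB */
    *r = cosf((1.0f - t) * (float)M_PI * 0.5f);   /* same as sinf(t*pi/2)    */
}

Since the cosines depend only on the pan position, I suppose I could evaluate them once per control period (whenever the envelope/LFO updates the pan) rather than per sample, so the extra cost may well be small anyway.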
Thoughts?
Thanks
Eric
PS now that I know what I'm looking for web search turned up this:
http://www.midi.org/techspecs/rp36.php
Hey all,
I've begun learning about the LADSPA standard, read the comments in the
header, and read the ladspa.org info, but I still wouldn't know where to
start with writing my own code to make a plugin process a buffer.
So my request is as follows: is there a "here's 20 lines of code to process
a buffer" tutorial somewhere?
I've downloaded the SDK, but reading through the "applyplugin.c" code
confused me more than it helped...
800+ lines is more than I can understand in one go. :-)
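For what it's worth, here is my own guess at the minimal shape of a plugin, pieced together from ladspa.h (a simple gain plugin; treat it as a sketch rather than gospel):

#include <ladspa.h>
#include <stdlib.h>

enum { PORT_GAIN, PORT_INPUT, PORT_OUTPUT, PORT_COUNT };

typedef struct {
    LADSPA_Data *gain;    /* control port: one value per run() call */
    LADSPA_Data *input;   /* audio input buffer                     */
    LADSPA_Data *output;  /* audio output buffer                    */
} Gain;

static LADSPA_Handle instantiate(const LADSPA_Descriptor *desc,
                                 unsigned long sample_rate)
{
    (void)desc; (void)sample_rate;
    return calloc(1, sizeof(Gain));
}

static void connect_port(LADSPA_Handle h, unsigned long port,
                         LADSPA_Data *data)
{
    Gain *g = (Gain *)h;
    if (port == PORT_GAIN)   g->gain   = data;
    if (port == PORT_INPUT)  g->input  = data;
    if (port == PORT_OUTPUT) g->output = data;
}

/* This is the part that actually processes a buffer: the host hands us
 * sample_count frames and we write gain * input into the output buffer. */
static void run(LADSPA_Handle h, unsigned long sample_count)
{
    Gain *g = (Gain *)h;
    LADSPA_Data gain = *(g->gain);
    for (unsigned long i = 0; i < sample_count; i++)
        g->output[i] = g->input[i] * gain;
}

static void cleanup(LADSPA_Handle h) { free(h); }

static const char * const port_names[PORT_COUNT] =
    { "Gain", "Input", "Output" };

static const LADSPA_PortDescriptor port_descs[PORT_COUNT] = {
    LADSPA_PORT_INPUT  | LADSPA_PORT_CONTROL,
    LADSPA_PORT_INPUT  | LADSPA_PORT_AUDIO,
    LADSPA_PORT_OUTPUT | LADSPA_PORT_AUDIO
};

static const LADSPA_PortRangeHint port_hints[PORT_COUNT] = {
    { LADSPA_HINT_BOUNDED_BELOW | LADSPA_HINT_DEFAULT_1, 0.0f, 0.0f },
    { 0, 0.0f, 0.0f },
    { 0, 0.0f, 0.0f }
};

static const LADSPA_Descriptor descriptor = {
    .UniqueID        = 9991,            /* made-up ID, not registered */
    .Label           = "simple_gain",
    .Properties      = LADSPA_PROPERTY_HARD_RT_CAPABLE,
    .Name            = "Simple Gain (example)",
    .Maker           = "nobody",
    .Copyright       = "None",
    .PortCount       = PORT_COUNT,
    .PortDescriptors = port_descs,
    .PortNames       = port_names,
    .PortRangeHints  = port_hints,
    .instantiate     = instantiate,
    .connect_port    = connect_port,
    .run             = run,
    .cleanup         = cleanup
    /* activate/deactivate/run_adding left as NULL */
};

/* The one symbol a LADSPA host looks for. */
const LADSPA_Descriptor *ladspa_descriptor(unsigned long index)
{
    return index == 0 ? &descriptor : NULL;
}

If it builds (something like "gcc -shared -fPIC -o simple_gain.so simple_gain.c"), it should show up for the SDK's analyseplugin/applyplugin tools once the .so is somewhere on LADSPA_PATH.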
Cheers for any suggestions.. -Harry
The Problem:
I have a poor wireless card in some notebooks. The card is hard to get
working, but since kernel 2.6.37-rc2 it has started to work, some laptop
features have been added, and the built-in video card now works.
Some apps still don't work very well, and I would like to apply the RT
patch to this kernel to get rtprio.
I made some attempts with patch-2.6.33.7-rt29 but failed, so the question is:
How can I apply the RT patch to a 2.6.37+ kernel?
--
yermandu
There was a previous discussion, "Musescore "music trainer"?", about
polyphonic audio-to-MIDI recognition. I found a Windows program that
claims to achieve good results: TallStick TS-AudioToMIDI.
On this web page: http://tallstick.com/webhelp/algorithm.htm,
they make some interesting claims:
- "They (3 of the 4 algorithms) all are based on the set of oscillator
circuits named sensors. Each sensor gets wave signal as input and
produces some reply. Sensor's reply is a value proportional to the
amplitude of component with frequency about equal to sensor's
resonance one."
This is what I would call a "filtre en peigne" in French: a comb filter.
Each "tooth" of the comb tests for one frequency.
- After the sensor's output is multiplied by the corresponding Equalizer
value, it arrives at the Spectrum Window. All these methods analyse the
spectrum data at each instant of time from left to right (from low to
high pitches). When a spectral maximum is detected, it is assumed to be
the fundamental frequency of a note. This assumption is tested by
comparing the spectrum to the Harmonic model setting. After this, if the
assumed note is greater than the Threshold value the note is accepted,
otherwise it is rejected. If a note is accepted, all of its spectral
components are subtracted from the corresponding components of the whole
spectrum.
This shows that the whole algorithm is more complex than simple
recursive filtering. They take into account the spectrum of the music. You
can (and must) assign the instruments that play the music before doing
the conversion.
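To make the "sensor" idea concrete, here is a small sketch (my own, not TallStick's code) of measuring the amplitude near one target frequency with a Goertzel-style resonator, one sensor per candidate note:

#include <math.h>
#include <stdio.h>

/* Magnitude of the component near target_hz in x[0..n-1] (Goertzel). */
static double sensor_magnitude(const float *x, int n,
                               double target_hz, double sample_rate)
{
    double w = 2.0 * M_PI * target_hz / sample_rate;
    double coeff = 2.0 * cos(w);
    double s_prev = 0.0, s_prev2 = 0.0;

    for (int i = 0; i < n; i++) {
        double s = x[i] + coeff * s_prev - s_prev2;
        s_prev2 = s_prev;
        s_prev  = s;
    }
    /* Power at the target frequency, then magnitude. */
    double power = s_prev * s_prev + s_prev2 * s_prev2
                 - coeff * s_prev * s_prev2;
    return sqrt(power > 0.0 ? power : 0.0);
}

int main(void)
{
    /* Hypothetical block of audio; in practice it comes from a file or stream. */
    static float block[4096];
    double sample_rate = 44100.0;

    /* One "sensor" per MIDI note from A1 (33) to C8 (108). */
    for (int note = 33; note <= 108; note++) {
        double f = 440.0 * pow(2.0, (note - 69) / 12.0);
        double mag = sensor_magnitude(block, 4096, f, sample_rate);
        printf("note %d  %.1f Hz  magnitude %g\n", note, f, mag);
    }
    return 0;
}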
Ciao,
Dominique
--
"We have the heroes we deserve."