Hello all,
We are looking for a developer who can extend and complete Qtractor's
features into an audio/MIDI styles player and editor.
http://qtractor.sourceforge.net/qtractor-index.html
We need to add a 16-pattern player mode and merge our existing
arranger-style engine's chord features into the Qtractor sequencer, so
that the 16 patterns can be switched in real time and the chord system
is recognized.
Arranger styles are the typical style engine used in the most popular
Roland, Yamaha, and Korg keyboards.
We are looking to replace the basic MIDI styles player on our
Mediastation Linux keyboards.
If someone is really interested and has audio/MIDI styles know-how,
just contact me for more information and the specification.
Cheers
Domenico
Lionstracs Italy
www.lionstracs.com
2007/9/27, Georg Holzmann <grh(a)mur.at>:
> Hello list!
>
> I am just thinking about the right strategy for denormal handling in a
> floating point (single or double prec) audio application (and yes I
> already read the docs of the different methods at musicdsp and so on ...)
>
> Basically my question is whether it is enough to simply turn on the
> flush-to-zero and denormals-are-zero modes and then compile everything
> with -msse -mfpmath=sse?
> I know it won't run on older Pentium 3/2 machines etc. - but for the
> machines which support this feature, is this enough?
>
> Thanks for any hint,
> LG
> Georg
I have a copy of a paper by Laurent de Soras of Ohm Force which
compares different solutions; you can find it here:
http://rtfm.osslab.eu/english/audio/denormal.pdf
Regards
Elthariel
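For the machines that do support SSE, the approach Georg asks about can be sketched in C. This is only a minimal illustration: the bit positions are the documented MXCSR FTZ/DAZ bits, but note that DAZ needs SSE2-class hardware, and MXCSR is per-thread state.

```c
#include <xmmintrin.h>  /* SSE intrinsics; compile with -msse -mfpmath=sse */

/* MXCSR control bits: flush-to-zero (bit 15) and denormals-are-zero
   (bit 6). FTZ flushes denormal *results* to zero; DAZ treats denormal
   *inputs* as zero. Returns the updated MXCSR value. */
#define MXCSR_FTZ (1u << 15)
#define MXCSR_DAZ (1u << 6)

static unsigned int enable_ftz_daz(void)
{
    _mm_setcsr(_mm_getcsr() | MXCSR_FTZ | MXCSR_DAZ);
    return _mm_getcsr();
}
```

Because MXCSR is per-thread, this has to be called once in every thread that does float DSP (e.g. in the thread that runs the process callback), not just in main(). Both modes change results (tiny values become exactly zero), so strictly they break IEEE 754 conformance, which is usually an acceptable trade-off in audio code.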
> _______________________________________________
> Linux-audio-dev mailing list
> Linux-audio-dev(a)lists.linuxaudio.org
> http://lists.linuxaudio.org/mailman/listinfo.cgi/linux-audio-dev
>
On 9/24/07, linux-audio-dev-request(a)lists.linuxaudio.org
> Date: Mon, 24 Sep 2007 21:59:01 +0200
> From: Fons Adriaensen <fons(a)kokkinizita.net>
>
> On Mon, Sep 24, 2007 at 11:50:55AM -0700, Maitland Vaughan-Turner wrote:
>
> > Intuitively, one could also say that more sample points yield a
> > waveform that is closer to a continuous, analog waveform. Thus it
> > sounds more analog.
>
> This is completely wrong. Sorry to be rude, but such a statement
> only shows your lack of understanding.
Why is it wrong? If I drew some dots on a waveform and then connected
the dots, to try to reconstruct the waveform, wouldn't I get a better
result with more dots?
>
> > Thanks for the link. My whole point of digging up this old thread
> > though, was to say that I've tried it, and my ears tell me that the
> > papers are incorrect.
>
> Then please point out the errors in the paper by Lipshitz and Vanderkooy.
my ears tell me that... that's all; it's just subjective. haha, I see
subjective reports don't get you far around here.
>
> I'm not saying that DSD is crap. It sounds good. But it doesn't meet
> the claims set for it (as shown by L&V - you need at least two bits
> to have a 'linear' channel) and as a storage or transmission format
> it's inefficient compared to PCM. That means that if you use PCM with
> the same number of bits per second as used by DSD, you get a better
> result than what DSD delivers.
well, what do you mean by better? It seems like 24 bit is already
better in terms of dynamic range at any sample rate, but if you mean
more detailed representation of a waveform (in time), it seems like
you necessarily need to have the highest possible sample rate.
Like, if I were just recording an acoustic guitar and vocals, of
course 24 bit would be the best choice.
But if I'm recording a live band, there is just so much stuff
happening at once... You can't pinpoint an exact time when the
keyboard player presses the key, and you can't pinpoint just when I
pluck that bass string. A 96 kHz 24-bit system might say that the two
events happened at exactly the same time, when really it was closer to
1/100000 of a second apart. Now think about how many times something
like that could happen in a live recording with many instruments and
vocals and background noise from the crowd, etc. I'd rather have the
detail than the dynamic range in that case...
~Maitland
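As a concrete aside, Fons's "same number of bits per second" comparison above can be worked out with standard rates (DSD64 runs at 64 times the CD sample rate, one bit per sample):

```c
/* Bit-budget comparison, as in Fons's remark: DSD64 delivers
   64 * 44100 * 1 bits per second. */
static long dsd64_bits_per_sec(void)
{
    return 64L * 44100L * 1L;          /* = 2,822,400 bit/s */
}

/* At the same bit budget, n-bit PCM could sample at this rate:
   for 24-bit PCM that is about 117.6 kHz. */
static long pcm_equiv_rate_hz(int bits_per_sample)
{
    return dsd64_bits_per_sec() / bits_per_sample;
}
```

So the fair PCM comparison point for DSD64 is roughly 117.6 kHz / 24-bit, not 44.1 or 96 kHz.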
Quoting nescivi <nescivi(a)gmail.com>:
> Hiho,
>
> I am having a discussion on the supercollider front about what is the
> proper
> way for dynamic linking.
>
> as far as I know, you use ldconfig and have the library location that
> programs
> dynamically link to defined in /etc/ld.so.conf
>
> but what is supposed to happen if the user just installs the program to a
>
> directory in his home directory?
> how should the dynamic linking be defined?
Ardour installs its own versions of the included libraries in its own
directory, PREFIX/lib/ardour2/, and the executable it installs in
PREFIX/bin/ is actually a shell script. That script uses the LD_LIBRARY_PATH
environment variable to make sure the versions installed with Ardour are
loaded. After setting that variable, the script executes the actual binary,
which is also installed in PREFIX/lib/ardour2/ .
I think this is the proper way to do it. It is also the way programs like
firefox do it (as a quick 'less $(which firefox)' will tell you).
Sampo
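The scheme Sampo describes can also be sketched as a tiny C launcher instead of a shell script. The paths below are hypothetical stand-ins for something like PREFIX/lib/ardour2/, and the sketch assumes POSIX setenv(3) and execv(3):

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Hypothetical private library dir and real binary, standing in for
   the PREFIX/lib/... layout described above. */
#define PRIVATE_LIBDIR "/usr/local/lib/myapp"
#define REAL_BINARY    "/usr/local/lib/myapp/myapp-bin"

/* Prepend libdir to the previous LD_LIBRARY_PATH value (may be NULL). */
static const char *build_ld_path(char *dst, size_t n,
                                 const char *libdir, const char *old)
{
    if (old && *old)
        snprintf(dst, n, "%s:%s", libdir, old);
    else
        snprintf(dst, n, "%s", libdir);
    return dst;
}

/* The launcher itself: set the variable, then replace ourselves with
   the real binary so its private libraries are found first. */
static int launch(char **argv)
{
    char path[4096];
    build_ld_path(path, sizeof path, PRIVATE_LIBDIR,
                  getenv("LD_LIBRARY_PATH"));
    setenv("LD_LIBRARY_PATH", path, 1);   /* 1 = overwrite */
    execv(REAL_BINARY, argv);             /* returns only on error */
    perror("execv");
    return 1;
}
```

An alternative with no wrapper at all is to link the binary with an rpath, e.g. -Wl,-rpath,'$ORIGIN/../lib/myapp', which avoids touching LD_LIBRARY_PATH entirely.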
Hello, list!
I'm a regular GNU & Linux user. I'm also a musician. I would like to
know:
How can I make a cool sampler for live performance using GNU & Linux?
I know you guys know. Help me out on this and I volunteer to publish a
HowTo somewhere.
Please, remember that the sampler should be sophisticated enough... not
just a "working sampler".
Any cool ideas and stuff are welcome!
--
Renich Bon Ciric <renich(a)woralelandia.com>
Woralelandia
On 9/25/07, Fons Adriaensen <fons(a)kokkinizita.net> wrote:
> On Mon, Sep 24, 2007 at 09:59:22PM -0700, Maitland Vaughan-Turner wrote:
>
> > Erik de Castro Lopo <mle+la(a)mega-nerd.com> sez:
>
> > > That means the only sensible place to do DSD processing efficiently
> > > is in silicon; either FPGAs or ASICs.
> >
> > ooo, I've been meaning to get one of those! Can you (or anyone) point
> > me toward a FOSS-friendly FPGA?
>
> Sony used to sell a line of HW modules for DSD processing some years
> ago. Not sure if they still exist - I'd be surprised.
Ugh, no, I just want an FPGA to play with and do whatever: make a DSD
processor or maybe an evil atomic-powered robot. Anyway, I was just
wondering if anyone can tell me of any FPGAs that are made by
free/open-source-friendly companies.
I basically just switched topics randomly there, ahaha, sorry about that.
But about the Sony thing, are you talking about the Sonoma? Or
something else? Haha, I think the Sonoma is out of my price range
(for now...). Besides, I hate Sony (although I'm thinking about
buying a PS3. hahaha. Anybody wanna help me write a Wii emulator for
PS3?)
~Maitland
Hi all,
Seems I have the exact same problem. I'm running under 2.6.20-16
(ubuntu, feisty), however.
It wasn't detected correctly just like in the OP, so I upgraded the alsa
driver(s) as suggested in this thread, and now it's detected properly
(seemingly). But, to my dismay, no midi events are received whatsoever...
Is there some tracing somewhere that can be enabled to see if anything
is received at all?
André
> Ok, that gives me a port in my qjackctrl-connection-window, but
> unfortunately nothing comes out of it... Any suggestions as to what could
> be wrong?
>
> Thanks and regards,
> Michael
>
> Clemens Ladisch schrieb:
>> Bengt Gördén wrote:
>>> tisdag 18 september 2007 06:25 skrev Mark Watkins:
>>>> I am trying to get the ESI MIDImate (EGO SYstems) to work.
>>>> I am running 2.6.17-5mdv and here is the /proc/bus/usb/devices entry:
>>>>
>>>> T: Bus=01 Lev=01 Prnt=01 Port=00 Cnt=01 Dev#= 3 Spd=1.5 MxCh= 0
>>>> D: Ver= 2.00 Cls=00(>ifc ) Sub=00 Prot=00 MxPS= 8 #Cfgs=1
>>>> P: Vendor=0a92 ProdID=1001 Rev= 1.04
>>> It sounds like you're having this trouble:
>>> http://www.ussg.iu.edu/hypermail/linux/kernel/0706.0/2282.html
>>
>> The workaround is in alsa-driver-1.0.15rc2.
Erik de Castro Lopo <mle+la(a)mega-nerd.com> sez:
> Jussi Laako wrote:
>
> > In some cases, direct DSP processing of the DSD stream would be
> > feasible.
>
> Really?
>
> Current CPUs are designed to work on integer or float samples. They
> are not designed to work on single bits. Attempting to do single-bit
> processing on regular CPUs will be slow.
yeah, well it's time we rethink the way CPUs work anyway. It looks
to me like we are running into the ceiling as far as Moore's law
goes. How about a generic, digitally controlled analog ALU as a
coprocessor for non-critical operations? (like real-time audio
processing). That could be fast... (no throwing rocks, please ;)
>
> That means the only sensible place to do DSD processing efficiently
> is in silicon; either FPGAs or ASICs.
>
ooo, I've been meaning to get one of those! Can you (or anyone) point
me toward a FOSS-friendly FPGA?
~Maitland
On 9/24/07, Paul Davis <paul(a)linuxaudiosystems.com> wrote:
> On Mon, 2007-09-24 at 13:13 -0700, Maitland Vaughan-Turner wrote:
> > oh yeah, why is that? acoustic waves are continuous, analog
> > representations are continuous. The more samples we can get the more
> > closely digital representation can mimic the analog which is far more
> > like the pressure waves than a series of pulses could ever be.
>
> there are lots of reasons why it's wrong. information theory is one angle
> to take: how much information is being delivered per unit time. biology
> is another angle to take: how the human ear actually decodes acoustic
> pressure waves. non-linearities in pressure transducers (speakers etc)
> are another angle. the moment you convert an acoustic pressure wave into
> an electrical signal, its properties start to change. leaving it in
> analog form doesn't change it a lot. converting it to digital of any
> type changes the properties quite a bit, but this makes no difference if
> a symmetrical operation is possible when converting back to an analog
> electrical.
>
> but basically: "more pulses with less information per pulse" isn't
> equivalent to "less pulses with more information per pulse" and it
> certainly isn't equivalent to "continuously varying analog signal".
I dig what you're saying, but when was the last time you listened to
just a single sample? Granted a 24 bit sample contains a lot more
data than a 1 bit sample. This is totally obvious. But when you look
at a whole chunk of samples, only the first several samples of the 1
bit stream are a question mark. After several samples it falls into
line and the amplitude can be accurately represented.
Now, I understand that 1-bit 2.8 MHz cannot achieve the dynamic range
of 24 bit samples, but it can surely represent more detailed
waveforms. Besides, there is only one variable to maximize (instead
of two). What happens when we turn that mega into a giga? (I know, I
know, an even bigger processing nightmare... hahaha)
Oh, and btw, just because I like DSD doesn't mean I don't like PCM! I
totally dig your work, and I use Ardour all the time. (well, ok, it's
broken on my box right now, but when it's working I use it all the
time! :) I mean, c'mon dude, you're like a celebrity! Thanks for
even talking to me =)
~Maitland
> On Tue, 22 Apr 2003 19:54:09 -0500
> "Dustin Barlow" <duslow at hotmail.com> wrote:
>
> > I read an interesting article on Direct Stream Digital (DSD) / Pulse Density
> > Modulation (PDM) entitled "A Better Mousetrap" by Brian Smithers in the May
> > 2003 issue of Electronic Musician. Since, Brian did a good job explaining
> > PDM/DSD in quasi-layman terms, I'll just quote snippets from his article to
> > set the stage for my questions.
>
> <snip>
>
> > DSD/PDM appears to be a superior technique for recording and playing audio
> > material.
>
> Having been around digital audio and digital signal processing for over 10
> years, I am still far from convinced.
>
> > Granted, this technology may never catch on because of all the
> > hardware and software changes that would be required to mirror what a
> > typical PCM based DAW currently does. But, if DSD/PDM does catch on, and
> > DAWs start being produced, how will this affect current audio DSP
> > techniques?
>
> I have not looked into the maths behind algorithm development in DSD/PDM,
> but I doubt it is anywhere near as easy as with PCM.
>
> > The article mentions a program called Pyramix (Windows) which features DSD
> > support. However, for Pyramix to do EQ, dynamics, reverb processing, and to
> > display waveforms and vu levels, it converts DSD to a "high quality" PCM
> > format.
>
> That should tell you something :-).
<snip>
So..? Most PCM converters utilize a 1-bit stream also. Why not
utilize all the tools available for the task at hand?
As for processing, you can look at a PCM representation of a waveform
to ease the processing load and then just apply the changes to the
original DSD stream without ever having to process in the 1-bit domain
directly (which is way more processor intensive since you have to look
at a huge chunk of the stream in order to extract the amplitude data
that is available in each multi-bit sample).
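The "extract the amplitude data" step can be sketched with the crudest possible decimator, a boxcar (moving-average) over a window of 1-bit samples. The bit packing here is a hypothetical choice (MSB-first in bytes), and a real DSD converter would use proper multi-stage low-pass/decimation filtering:

```c
#include <stdint.h>
#include <stddef.h>

/* Toy DSD-to-PCM decimator: average WINDOW 1-bit samples (packed
   MSB-first in bytes) into one float PCM sample in [-1, 1]. */
#define WINDOW 64   /* 64:1 decimation: 2.8224 MHz -> 44.1 kHz */

static float decimate_window(const uint8_t *bits, size_t start_bit)
{
    int ones = 0;
    for (size_t i = 0; i < WINDOW; i++) {
        size_t b = start_bit + i;
        ones += (bits[b >> 3] >> (7 - (b & 7))) & 1;
    }
    /* Map bit density to amplitude: all zeros -> -1, all ones -> +1. */
    return 2.0f * (float)ones / WINDOW - 1.0f;
}
```

Even this crude average shows how multi-bit amplitude is implicit in the local density of ones in the stream; the serious versions differ only in how carefully they filter before decimating.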
IMHO, though, the hippest alternative at present is to process a DSD
stream in the analog domain and re-record it to DSD. This results in
a very "analog" sound. These days you can get analog gear with a
respectable dynamic range for a song (Mackie Onyx anyone?). When you
can get a 130 dB S/N ratio in the analog domain you really don't lose
too much converting back and forth from 1-bit domain. It's freakin
sweet!
If you haven't tried recording 1-bit, do yourself a favor and demo
one of the new Korg recorders. It really is good, no kidding.
~Maitland