Here is one of the music-dsp articles (the date will become important):
http://aulos.calarts.edu/pipermail/music-dsp/2004-May/027009.html
Perhaps they read that article, then patented and implemented the idea
during the summer. I don't know.
Juhana
== cut ==
Bill Baldwin wbaldwin at austin.rr.com
Sat May 1 15:18:11 PDT 2004
It's definitely possible, but far from convenient or optimal with today's
hardware.
The h/w is steadily getting better, but you'd likely run into FP precision
issues on many cards.
The new pixel shader 3.0 h/w is starting to appear and will become more
common over the next year - this extends the instruction length of shaders,
adds better flow control, and lots of other stuff that increases the
potential for doing interesting audio processing.
Getting the data in and out is another issue - you could treat an audio
channel as a 1-dimensional texture, turn off h/w filtering, etc. - but you'd
end up locking resources very frequently to write your input and read back
your output, which tends to stall the graphics pipeline and thus slow things
down.
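[A rough sketch of this texture round trip follows the quoted article.]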
Still, it's worth exploring, given all that GPU power your
sequencer is ignoring...
-Bill
-----Original Message-----
On Behalf Of vesa norilo
Hi all,
Has anyone thought about doing audio processing on the GPU? I bet someone has.
There's plenty of horsepower sitting idle in PCs with modern video
cards. The shaders in them are completely programmable, but I don't
really know about streaming data to and fro. I just thought that it
would be very neat to write a VSTi synth that runs on, say, Radeon X800
or GF6800. Is it possible at all?
Vesa
== end ==
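To make the round trip Bill describes concrete, here is a rough sketch
in C with OpenGL/GLUT. None of it is from the posts above: the API
choice, block size, and structure are my own assumptions, the shader
pass is elided, and both the texture format and the readback are 8-bit
here, so the float-precision problem is glossed over (full precision
needs the float-texture extensions of that era). It only shows where
the two stalls sit.

/* Sketch only: audio block in as a 1-D texture, result read back. */
#include <GL/glut.h>
#include <stdio.h>

#define N 512                             /* samples per block */

int main(int argc, char **argv)
{
    float in[N];
    unsigned char pix[N * 4];
    int i;

    for (i = 0; i < N; i++)               /* fake input: a ramp */
        in[i] = (float)i / N;

    glutInit(&argc, argv);                /* just to get a GL context */
    glutInitWindowSize(N, 1);
    glutCreateWindow("gpu-audio");

    /* turn off filtering: we want raw samples, not interpolation */
    glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    /* stall point 1: write the audio block into a 1-D texture */
    glTexImage1D(GL_TEXTURE_1D, 0, GL_LUMINANCE, N, 0,
                 GL_LUMINANCE, GL_FLOAT, in);

    /* ... bind a fragment shader and draw a textured quad here ... */

    /* stall point 2: read the processed block back; doing this every
     * block is what keeps the graphics pipeline waiting */
    glReadPixels(0, 0, N, 1, GL_RGBA, GL_UNSIGNED_BYTE, pix);

    printf("first sample back: %f\n", pix[0] / 255.0f);
    return 0;
}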
Hello, everybody!
I am developing a small speech server for emacspeak. For audio output I
am using /dev/dsp, but I don't know how to stop playback at exactly the
moment I choose. Please tell me how to implement this behavior
correctly, or give me a link where I can read more about it.
--
Best wishes. Michael Pozhidaev. E-mail: msp(a)altlinux.ru.
Tomsk State University.
Computer science department. (http://www.inf.tsu.ru)
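An aside on the /dev/dsp question above, assuming plain OSS is in use:
the usual way to abort playback at a chosen moment is the
SNDCTL_DSP_RESET ioctl, which discards whatever is still queued in the
driver's buffer (a plain close() would drain it first). A minimal
sketch:

/* Abort /dev/dsp playback immediately via OSS. */
#include <sys/soundcard.h>
#include <sys/ioctl.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/dsp", O_WRONLY);
    if (fd < 0)
        return 1;

    /* ... write() audio, then decide to stop mid-buffer ... */

    /* discard queued data; playback stops right here */
    if (ioctl(fd, SNDCTL_DSP_RESET, 0) < 0)
        return 1;

    close(fd);
    return 0;
}

For a precise stop point you would also keep the kernel fragments small
(see SNDCTL_DSP_SETFRAGMENT), since RESET can only discard audio that
has not yet been played.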
>From: Jens M Andreasen <jens.andreasen(a)chello.se>
>
>> The article says quite clearly that the invention is patented.
>> They would be fools not to try to patent it because the market
>> is huge.
>
>I did not find any references to patents except for the word
>"invention". Not even "patent pending"?
It was this version of the news:
http://www.tomshardware.com/hardnews/20040902_135943.html
"Pricing was not announced yet, but Cann says he will make his technology
available for "far less" than the cost of professional studio DSP solutions
which can run into the high five-figure range. He estimates the price
will be somewhere between $200-$800."
The "technology" is the way how audio is stored to texture memory.
And the audio is apparently stored as float data as the text below
could indicate.
"At this time, Cann plans to only support Nvidia graphics cards. "When I
started, ATI had a problem with floating point data. I have heard they
have resolved it, but I won't have time to purchase and research their
newest cards until after this is released," he said."
Juhana
Hi.
Some of you might remember that I once started a thread
regarding the not-so-good separation of the GUIs from the DSP engines
of typical LAD applications.
I just took the time and looked at DSSI, and as far as I can see,
it would solve most of the problems I am having regarding
UIs. The OSC-based UI-to-host communication concept
nicely separates the actual UI from the DSP code and forces the
developer to separate the GUI code from the actual backend.
This would make it as simple as possible to replace existing
UIs with alternative approaches. The DSSI plugin could be
reused as a whole, unmodified, only the UI part would need to
be re-written.
Thanks to those who work on DSSI, it looks very promising to me.
Any early adopters yet?
--
CYa,
Mario
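To make the separation Mario describes concrete, here is a sketch of
the UI half in C with liblo. The OSC paths and argument types follow my
reading of the DSSI UI protocol, so treat them as assumptions to check
against the DSSI docs; the point is only that the GUI process talks to
the host purely over OSC and never links against the DSP code.

/* Sketch of a DSSI GUI process: all host communication is OSC. */
#include <lo/lo.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    char path[256];
    char *base;
    lo_address host;

    /* DSSI hosts launch the UI with their OSC URL as an argument,
     * e.g. osc.udp://localhost:19383/dssi/synth */
    if (argc < 2)
        return 1;

    host = lo_address_new_from_url(argv[1]);
    base = lo_url_get_path(argv[1]);      /* e.g. "/dssi/synth" */

    /* (a real UI would first start its own OSC server and send its
     * URL to <base>/update so the host can talk back; elided) */

    /* user turned a knob: set control input port 3 to 0.5 */
    snprintf(path, sizeof(path), "%s/control", base);
    lo_send(host, path, "if", 3, 0.5f);

    /* GUI is going away; the DSP side keeps running unmodified */
    snprintf(path, sizeof(path), "%s/exiting", base);
    lo_send(host, path, "");

    free(base);
    lo_address_free(host);
    return 0;
}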
Greetings:
I'm doing some research for an article about Linux MIDI support. In my
text I briefly describe the evolution of the MIDI specification since
its adoption, mentioning things like MIDI Time Code, MMC, the sample
dump standard, and the standard MIDI file. However, one item has me a
bit mystified. I'm unable to ascertain whether multi-port interfaces are
in fact described and supported by the spec. I checked the MMA docs
on-line, and I also have the Scacciaferro/De Furia MIDI Programmer's
Handbook, but nowhere do those sources indicate explicit support for
multi-port hardware. Are multi-port MIDI interfaces vendor-specific
solutions or is there actually an extension to the MIDI spec somewhere
that I'm just missing? TIA!
Best regards,
dp
Invitation for testing and API comments.
http://plugin.org.uk/libgdither/
Libgdither is a GPL'd library for performing audio dithering on
PCM samples. The dithering process should be carried out before reducing
the bit width of PCM audio data (e.g. float to 16-bit int conversions) to
preserve audio quality. (A generic sketch of the dithering step follows
the table below.)
It can do conversions between any combination of:

in                          out (optionally interleaved)
-------------------------------------------------------------
normalised mono float       8bit unsigned ints
normalised mono double      16bit signed ints
                            32bit signed ints
                            normalised float
                            normalised double

At any bit depth supported by the input and output formats.
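Not the library's API, just an illustration of the process described
above: a generic triangular-PDF (TPDF) dither from normalised floats to
16-bit ints, in plain C. For the real interface, see gdither.h at the
URL below.

/* Generic TPDF dither, float -> 16-bit; not the libgdither API. */
#include <stdint.h>
#include <stdlib.h>

/* uniform random value in [-0.5, 0.5), in units of one LSB */
static float urand(void)
{
    return rand() / (float)RAND_MAX - 0.5f;
}

void dither_to_s16(const float *in, int16_t *out, unsigned n)
{
    unsigned i;

    for (i = 0; i < n; i++) {
        /* scale normalised [-1, 1] input to the 16-bit range */
        float s = in[i] * 32767.0f;

        /* sum of two uniforms = triangular PDF, two LSBs wide;
         * decorrelates the quantisation error from the signal */
        s += urand() + urand();

        /* clamp, then round to nearest */
        if (s > 32767.0f)  s = 32767.0f;
        if (s < -32768.0f) s = -32768.0f;
        out[i] = (int16_t)(s < 0.0f ? s - 0.5f : s + 0.5f);
    }
}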
Instructions for testing are in
http://plugin.org.uk/libgdither/TESTING
Basic docs can be found in
http://plugin.org.uk/libgdither/libgdither-0.2/gdither.h
Examples of use can be found in
http://plugin.org.uk/libgdither/libgdither-0.2/examples/ex1.c
Comments welcome,
Steve