Hello, everybody!
I am developing a small speech server for emacspeak. For audio output I
am using /dev/dsp. But I don't know how to stop playback at exactly the
moment I choose. Please tell me how to implement this behavior correctly,
or point me to a link where I can read more about it.
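(For reference: on OSS devices like /dev/dsp, the usual way to abort playback
immediately is the SNDCTL_DSP_RESET ioctl, which discards whatever is still
queued in the driver's buffer instead of letting it drain. A minimal sketch,
assuming you already have the /dev/dsp file descriptor open for writing:)

```c
#include <sys/ioctl.h>
#include <sys/soundcard.h>

/* Abort playback at once: SNDCTL_DSP_RESET drops all audio still
   queued in the driver's buffer instead of letting it play out.
   Returns 0 on success, -1 on error (like ioctl itself). */
int stop_playback(int dsp_fd)
{
    return ioctl(dsp_fd, SNDCTL_DSP_RESET, 0);
}
```

(If you only want to wait until queued audio has finished, SNDCTL_DSP_SYNC
is the counterpart; RESET is the "stop right now" one.)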
--
Best wishes. Michael Pozhidaev. E-mail: msp(a)altlinux.ru.
Tomsk state university.
Computer science department. (http://www.inf.tsu.ru)
>From: Jens M Andreasen <jens.andreasen(a)chello.se>
>
>> The article says quite clearly that the invention is patented.
>> They would be fools not to try to patent it because the market
>> is huge.
>
>I did not find any references to patents except for the word
>"invention". Not even "patent pending"?
It was this version of the news:
http://www.tomshardware.com/hardnews/20040902_135943.html
"Pricing was not announced yet, but Cann says he will make his technology
available for "far less" than the cost of professional studio DSP solutions
which can run into the high five-figure range. He estimates the price
will be somewhere between $200-$800."
The "technology" is the way audio is stored in texture memory.
And the audio is apparently stored as float data, as the text below
suggests.
"At this time, Cann plans to only support Nvidia graphics cards. "When I
started, ATI had a problem with floating point data. I have heard they
have resolved it, but I won't have time to purchase and research their
newest cards until after this is released," he said."
Juhana
--
http://music.columbia.edu/mailman/listinfo/linux-graphics-dev
for developers of open source graphics software
Hi.
Some of you might remember that I once started a thread
regarding the not-so-good separation of the GUIs from the DSP engines
of typical LAD applications.
I just took the time and looked at DSSI, and as far as I can see,
it would solve most of the problems I am having regarding
UIs. The OSC-based UI-to-host communication concept
nicely separates the actual UI from the DSP code and forces the
developer to keep the GUI code apart from the actual backend.
This would make it as simple as possible to replace existing
UIs with alternative approaches. The DSSI plugin could be
reused as a whole, unmodified, only the UI part would need to
be re-written.
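(To make the separation concrete: the UI and host only exchange small OSC
packets, so either side can be replaced independently. Here is a sketch of
what such a message looks like on the wire; the path "/plugin/volume" is
made up, and a real DSSI UI would use an OSC library rather than hand-rolled
packets:)

```c
#include <stdint.h>
#include <string.h>

/* Build a minimal OSC message: a NUL-terminated address string padded
   to a 4-byte boundary, a padded type-tag string (",f"), then one
   big-endian 32-bit float argument.  Returns the packet length.
   The path used by the caller is hypothetical. */
size_t build_osc_float(uint8_t *buf, const char *path, float value)
{
    size_t plen = strlen(path) + 1;      /* include the NUL */
    memcpy(buf, path, plen);
    size_t n = (plen + 3) & ~(size_t)3;  /* pad address to 4 bytes */
    memset(buf + plen, 0, n - plen);
    memcpy(buf + n, ",f\0\0", 4);        /* type tags, already padded */
    n += 4;
    uint32_t bits;
    memcpy(&bits, &value, 4);
    buf[n++] = bits >> 24;               /* argument in network byte order */
    buf[n++] = bits >> 16;
    buf[n++] = bits >> 8;
    buf[n++] = bits;
    return n;
}
```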
Thanks to those who work on DSSI, it looks very promising to me.
Any early adopters yet?
--
CYa,
Mario
Greetings:
I'm doing some research for an article about Linux MIDI support. In my
text I briefly describe the evolution of the MIDI specification since
its adoption, mentioning things like MIDI Time Code, MMC, the sample
dump standard, and the standard MIDI file. However, one item has me a
bit mystified. I'm unable to ascertain whether multi-port interfaces are
in fact described and supported by the spec. I checked the MMA docs
on-line, and I also have the Sciacciaferro/De Furia MIDI Programmer's
Handbook, but nowhere do those sources indicate explicit support for
multi-port hardware. Are multi-port MIDI interfaces vendor-specific
solutions, or is there actually an extension to the MIDI spec somewhere
that I'm just missing? TIA!
Best regards,
dp
Invitation for testing and API comments.
http://plugin.org.uk/libgdither/
Libgdither is a GPL'd library for performing audio dithering on
PCM samples. The dithering process should be carried out before reducing
the bit width of PCM audio data (e.g. float to 16-bit int conversions) to
preserve audio quality.
It can do conversions between any combination of:
in                        out (optionally interleaved)
-------------------------------------------------------------
normalised mono float     8bit unsigned ints
normalised mono double    16bit signed ints
                          32bit signed ints
                          normalised float
                          normalised double
at any bit depth supported by the input and output formats.
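(Not libgdither's actual code, but the underlying idea -- add a small amount
of noise before truncating, so the quantisation error is decorrelated from
the signal -- can be sketched for the float-to-16-bit case as:)

```c
#include <stdint.h>
#include <stdlib.h>

/* Quantise one normalised float sample (-1.0 .. 1.0) to 16-bit with
   TPDF dither: the sum of two uniform random values spanning roughly
   one LSB each gives triangular-PDF noise.  A sketch of the general
   technique, not libgdither's implementation. */
int16_t dither_to_s16(float s)
{
    float r1 = (float)rand() / RAND_MAX - 0.5f;
    float r2 = (float)rand() / RAND_MAX - 0.5f;
    float scaled = s * 32767.0f + r1 + r2;  /* scale, then add dither */
    if (scaled > 32767.0f) scaled = 32767.0f;   /* clip to range */
    if (scaled < -32768.0f) scaled = -32768.0f;
    /* round to nearest integer */
    return (int16_t)(scaled >= 0.0f ? scaled + 0.5f : scaled - 0.5f);
}
```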
Instructions for testing are in
http://plugin.org.uk/libgdither/TESTING
Basic docs can be found in
http://plugin.org.uk/libgdither/libgdither-0.2/gdither.h
Examples of use can be found in
http://plugin.org.uk/libgdither/libgdither-0.2/examples/ex1.c
Comments welcome,
Steve
On Sep 13, 2004, at 9:01 AM, Eric Rz wrote:
> So at what level in the tcp/ip stack does a collision get detected?
> From
> what I understand, if there is a collision on a network segment each
> end
> will backoff for a randomly chosen time and then retransmit. Is this at
> the ethernet, IP, or TCP level?
If you're designing the interface between Layer-2 (Ethernet,
Wi-Fi, what have you ...) and IP, as a rule, the right thing to
do is to pass through packet loss rates in the 1-2% range
to the IP layer. If the Layer-2 sees loss rates significantly
above that on a regular basis, IP applications are known
to not cope well, and so the right thing to do is to make
the Layer-2 appear to have a 1-2% packet loss rate, by
using techniques like retransmission or FEC.
Modern Ethernet (what you buy new from Linksys or Netgear
or Cisco in 2004) is switched, not shared. It achieves 1-2% loss
rates extremely easily. So, stacks usually pass through
the tiny loss rates of switched Ethernet up to the IP layer.
This means that yes, occasionally you will see lost packets
if you run a UDP application (UDP is a thin layer on top of IP:
one IP packet to the OS == one UDP packet to an app)
on a local switched Ethernet. I've seen it with real hardware.
Usually, the network is having a burst of traffic, and
something -- probably the receiving network stack --
gives up and throws away a packet. But it's very
rare -- 0.1% or less, if I had to put a number on it.
But if that 0.1% was a NoteOff sent to a Hammond
organ patch, you care :-). Thus, the recovery journal
technology in RTP MIDI.
Shared-media wired Ethernet technology got us through the
80's. Which was a good thing :-). But it really is a technology
for the history books now ... it's really good history; it's good
to know about because it was such a classic design, but
it's not what people mean anymore when they say "wired
Ethernet". All that is left from that era is the bit-field -- the
pattern of bits in the packet -- and the semantics of the bits.
Modern wired Ethernet is switched Ethernet.
---
John Lazzaro
http://www.cs.berkeley.edu/~lazzaro
lazzaro [at] cs [dot] berkeley [dot] edu
---
>In practice people don't really demand hard realtime and it will be OK, but
>the maximum time taken to transmit a UDP packet is unbounded, it uses
>exponential backoff IIRC.
That sounds like TCP. I think UDP is send-and-forget; if you want guaranteed
delivery or sequencing you need a higher-level protocol like TCP.
Or are you thinking of ethernet level collision detect and retransmit? Does
that go on forever (unbounded)?
(Sorry I know it's an older thread, I'm trying to catch up ;-)