On Tue, April 2, 2013 7:36 am, Ralf Mardorf wrote:
> On Tue, 2013-04-02 at 16:33 +0200, Ralf Mardorf wrote:
>> On Tue, 2013-04-02 at 15:34 +0200, Peder Hedlund wrote:
>> > Your car can probably do 140 mph even though you never go that fast.
>> > Being able to use the card in 192 kHz probably doesn't cost that much
>> > extra for the manufacturer and I guess the marketing department really
>> > loves being able to use it in the advertising.
>>
>> Yes, it's the second sentence :D
>> http://www.rme-audio.de/en_products_hdspe_aio.php .
RME has no choice really. If they want their gear to be used to make
Blu-ray soundtracks, they have to support 192k; that is the certification
needed by equipment for that use. True, the sound is not any better than
if the studio used 48k and resampled the finished product to 192k (maybe
worse), but this is not about sound quality or RME doing marketing... it
is Hollywood doing the marketing...
--
Len Ovens
www.OvenWerks.net
Hello everyone!
I know this is no longer strictly on topic, but the wealth of information
out there is too much for me to sort through, so I'm asking those who know.
I'm looking for an A/D converter, more precisely a converter from analog ins
to ADAT out. I've seen the Q-ADAT, which sounds almost reasonable. I've also
seen the ADA8000. But frankly both seem a bit much, and I mean that literally:
"a bit". Is any cheaper alternative known, or does anyone here want to sell
their old analog-to-ADAT converter?
Thanks for any advice!
Warm regards
Julien
----------------------------------------
http://juliencoder.de/nama/music.html
I have a series of wave files that I want to combine into one long mp3 and
one long flac. I have checked each file individually, and I've confirmed my
earlier estimate that these files combined last a total of just above 12
hours.
Now, normally I've been using sox to combine the wave files and then feeding
the result to lame and flac to create the output files, but in this instance
I'm getting wrong results. A normal run creates a wave file (and resulting
mp3) of just above 5 hours. I figured this might be because of a limitation
in the wave headers, so I made sox output to a w64 file instead. Now both
the w64 file and the resulting mp3 turned out at 18+ hours. Any idea why sox
is getting a wrong result here? Do I need to tell it that the input files are
regular wave?
When I tried to pipe the output, like this:
sox 01.wav 02.wav -t wav - | lame - result.mp3 (or something like that)
the file turned out at only 2+ hours, while
sox 01.wav 02.wav -t raw - | lame -r -s 44.1 - result.mp3
turned out a 56 hour file.
Nevertheless, I think I'm on to something with the last command, but I
might have misunderstood some sox or lame documentation. Tips?
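In case it helps, this is the kind of fully explicit raw pipeline I have in
mind (just a sketch; it assumes every input file is 16-bit signed stereo at
44.1 kHz):

  # spell out the raw format on both sides so neither tool has to guess
  sox 01.wav 02.wav -t raw -e signed-integer -b 16 -c 2 -r 44100 - | \
    lame -r -s 44.1 --bitwidth 16 -m s - result.mp3

If the resulting mp3 sounds like static, the byte order probably doesn't
match; lame's -x option swaps it.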
Arve
Hi,
I'm looking for something which I can use together with my 88-key master
keyboard, which has no knobs. It would be nice if it's small and light
enough to put on the keyboard itself. Something like the Evolution UC-16,
but it would be nice if I could route my midi master keyboard via the
controller to the computer (midi in / midi out / midi thru?).
I'd like to control stuff like setBfree and probably some effects in
Non-Mixer.
Thanks in advance,
\r
Hello,
There is a very bad ground noise when I connect the M-Audio Pulsar
mic into a mic input of the 1010LT. As soon as I touch the mic, the
ground noise disappears. Complete silence. But then, try to play bass
like that. The mic goes through a Radial JDI passive direct box. It
has the 15 dB pad on because that makes a bit less ground noise. I tried
various settings on the DI box, but none gets rid of the noise.
The electric plug has only two prongs. I thought of getting some
copper wire and somehow running a length of it from the electric plug
receptacle to a nearby set of copper water pipes. That should get the
rig grounded, shouldn't it?
Before getting into that or anything else, is that ground noise a
factor of a cheap microphone? Or is it the quality of the 1010LT input,
which uses not-so-good decoupling capacitors (or some such)? Is it
possible to get silence without getting pro hardware?
Thanks for suggestions/ideas/hints !
Hi,
after a somewhat productive weekend I'm happy to announce some
alpha-quality software (i.e. bug-ridden, not feature complete) for your
consideration and feedback :)
But: release early, release often XD
I went a little overboard with modularization and separation of
concerns, so in the end it became four packages (with possibly one more
in the future: an LV2 plugin to load the synths/instruments).
Documentation is also very much lacking, but each package contains at
least one example file to illustrate the usage.
* ladspa.m - https://github.com/fps/ladspa.m
ladspa.m is a header-only C++ library to build and run general synthesis
graphs made up of LADSPA plugins. The interface is deliberately kept
simple and unsafe, as it is expected that one uses higher level tools to
build these synthesis graphs (e.g. a library on top of ladspa.m.swig or
ladspa.m.proto).
* ladspa.m.swig - https://github.com/fps/ladspa.m.swig
ladspa.m.swig provides SWIG-generated python bindings for ladspa.m. This
allows building and running general synthesis graphs made up of LADSPA
plugins from within python. It requires ladspa.m. NOTE: I just saw that
the swig interface definition lacks the ability to connect outside
buffers to plugin ports. This will be fixed in the next few days.
* ladspa.m.proto - https://github.com/fps/ladspa.m.proto
ladspa.m.proto contains Google protobuf definitions for general
synthesis graphs made up of LADSPA plugins. It also contains a
definition for an instrument file format. This library does not depend
on either of the two above. It becomes useful with the last package
(ladspa.m.jack) and possibly in the future with an LV2 plugin to load
and run these files (to be announced when done). The python bindings
generated for ladspa.m.proto can be used to generate synth and
instrument files that can be loaded by ladspa.m.jack. The instrument
file definition allows for polyphony while putting no constraints on
the inner structure of the instrument (each voice is made up of
plugins, and the voices may or may not be identical).
An example is included which defines a simple sawtooth instrument with
exponential envelopes and five voices that are identical except for a
different delay setting on each:
https://github.com/fps/ladspa.m.proto/blob/master/example_instrument.py
Pipe its output into a file called e.g. instrument.pb, which you can
then load into ladspa.m.jack.instrument.
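E.g. something along these lines (the exact invocation of the example
client may differ; this sketch assumes it takes the instrument file as
its first argument):

  python example_instrument.py > instrument.pb
  ladspa.m.jack.instrument instrument.pb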
Here's a little example of the generated instrument file loaded into
ladspa.m.jack.instrument and playing a little 120bpm loop (from ardour3)
with it:
https://soundcloud.com/fps-2/t-m
This also highlights the need for a higher-level interface on top of it
to ease the process.
* ladspa.m.jack - https://github.com/fps/ladspa.m.jack
ladspa.m.jack is a library which allows loading ladspa.m.proto synth and
instrument definition files into jack hosts, which are provided as
example clients.
ladspa.m.jack.synth allows loading a synth definition file and running
it in the jack graph.
ladspa.m.jack.instrument allows loading an instrument definition file
and provides a midi in port which allows playing the instrument.
But like I said, this is all ALPHA software and I'm just announcing it
because someone else might have fun with it. Please report any issues
that you find either by email to me, on LAD or LAU, or on the issue
trackers of the github projects.
Have fun,
Flo
--
Florian Paul Schmidt
http://fps.io
Hi,
I don't know if what I'm going to ask makes sense or is terribly stupid.
Don't be too harsh on me if it's the latter, please :-)
I've got two audio "outputs" (headphones and a 4.1 set of speakers) and two
audio "inputs" (a computer and a TV). In an ideal world, I'd be able to switch
between them with some sort of device: let's say I'm watching a film through
the speakers but it's getting late and I want to switch to the headphones. Or
I'm done with the TV and I want to do something with the computer, and want to
listen to that instead.
I don't know what kind of devices are used in the real world to do this, but
to me it looks quite similar to what we do with Jack. So I thought that with
some small Linux-capable device (hence the Raspberry Pi) and some external
audio card (with two inputs and two outputs; or maybe even two different
external cards, each with one input and one output), you should be able to
build a DIY audio mixer for not-so-much money. You could ssh into it and use
qjackctl or the like to adjust inputs and outputs.
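For the actual switching, I imagine jack's command-line tools over ssh would
already be enough; roughly like this sketch (the client and port names are
made up for illustration and would depend on the actual cards and setup):

  # move the TV signal from the speakers to the headphones
  jack_disconnect tv:capture_1 speakers:playback_1
  jack_disconnect tv:capture_2 speakers:playback_2
  jack_connect tv:capture_1 headphones:playback_1
  jack_connect tv:capture_2 headphones:playback_2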
Am I completely off the mark, or is this a viable/common/established thing?
Thanks in advance,
--
Roberto Suarez Soto Hurt me plenty