On Sat, Oct 9, 2010 at 2:55 AM, Geoff Beasley
<geoff(a)laughingboyrecords.com> wrote:
> until Clemens chimes in... is it your own kernel Luis ? looks like usb2
> device in usb1 port maybe? check your usb config in the kernel perhaps
Hi, Geoff, thanks for answering. Clemens is perhaps the ultimate USB
audio guru, but everyone else's experience in this issue is of course
valuable and appreciated.
I tried this on Ubuntu and Fedora Core stock kernels (no -rt patch).
On Ubuntu jackd simply failed with no clue about what was going on;
Fedora's kernel at least supplied the message I quoted in the top
message (it probably has some debugging flag enabled that Ubuntu's
doesn't).
If it helps (probably not, since all the hubs on the market will
be some kind of wrapper or another around the same Taiwanese chip),
the hub model is Manhattan 160612:
http://komputercenter.com/usb-gadgets-c-7/hub-usb-2-0-manhattan-160612-p-16
Has someone else here managed to successfully run jack over USB audio
through an external hub? It is probably not the best setup out there
latency-wise (how long does it actually take for a USB frame to pass
across a hub, anyway?), but may be worth considering if low latency is
not critical, providing in return integrated USB port protection and
perhaps some degree of power supply noise isolation (or yet another
noise source, you never know, but I'd hazard the guess that anything
that separates audio equipment from LCD inverters in laptops should be
a good thing.)
Cheers,
L
Hello all,
Two new Jack apps are available at the usual place:
Zita-at1: Autotuner.
Zita-rev1: Stereo or Ambisonic reverb.
More info at <http://www.kokkinizita.net/linuxaudio>
Enjoy !
--
FA
There are three of them, and Alleline.
On Mon, Oct 11, 2010 at 01:37:16AM +0400, Oleg Ivanenko wrote:
> Your tools, as always, are like a katana -- lightweight, visually simple, and precise.
:-) But not intended to slice off someone's head :-)
Ciao,
--
FA
On Mon, Oct 11, 2010 at 10:29:57PM +0200, Jostein Chr. Andersen wrote:
> > Zita-rev1: Stereo or Ambisonic reverb.
>
> I tried it on a snare and did a test on a whole drum set; damn, it sounds good,
> it's too good to be true! I'm seldom satisfied with the reverbs I hear, but this
> is amazing. The controls work exactly as expected when I tweak them:
> natural and responsive.
I didn't expect it to be used on drums, but if it sounds OK, why not !
> These two new additions fit perfectly with what I consider to be the
> philosophy of your EQ channel strip you kindly sent me: great, musically
> natural sound that does precisely what I want and makes me trust my ears again.
Never let technology get in the way of your ears !!
Ciao,
--
FA
>> BUT never ever will a licensed Windows + a bought Cubase cause such an
>> issue, assuming you didn't install a cracked Windows Office too.
>clearly you have no idea who Jeff McClintock is, or you wouldn't be
> offensive.
;)
I do use licensed software. I am quite anti-piracy and have made submissions
to the government on the subject, even got a letter published in PC World
magazine.
Off-Topic: IMHO Piracy hurts Linux by providing a competing 'low cost'
alternative to *real* free (FOSS) software.
Now that I have an ADAT-capable card ($20 ebay ice1712-based Terratec
EWS88D) I'm curious what would happen if I combined it with
something like http://www.kellyindustries.com/computer/alesis_ai4.html
( http://www.alesis.com/ai4 ) and used the
S/MUX mode built in to the AI4 across eight channels to create four
24/96 channels.
//// //// //// ////
In order for the AI-4 to operate at the 96 kHz samplerate it has to be
run in S/MUX mode or sample split mode which means that you get 4
channels of conversion and not 8. This is standard and perfectly
acceptable. The first two channels will be routed out to the ADAT
lightpipe outputs 1 through 4 and the 2nd two channels will be routed
out ADAT lightpipe outputs 5 through 8.
//// //// //// ////
Would I be able to "transparently" use these S/MUX'd channels in a
Linux DAW, by simply recording/playing back a higher channel count per
track (e.g. 4 for a stereo track, 2 for a mono)? This seems like a
good way to achieve "studio quality" 24/96 record&playback at a
distance from the computer -- via ADAT cable -- using high quality
outboard A/D and D/A or interfacing to external equipment already
presenting AES/EBU format I/O.
If a device like the AI4 actually does all the bit-splitting and other
fu, both for input and output -- wouldn't it then not matter that the
actual 2-track or 4-track contents are essentially "noise", since
nothing in Linux-land would understand the S/MUX format?
Next question: to avoid the hack suggested above, is there some kind
of ALSA plugin that would recombine synchronized pairs of S/MUX
channels on the same soundcard into single 24/96 streams, both for
input and output? How is S/MUX handled in Linux & ALSA?
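For what it's worth, the S/MUX "sample split" idea can be sketched in a few lines. This is only an illustrative model (the round-robin distribution is the documented principle, but the exact channel pairing and bit packing here are assumptions, not taken from the AI4 docs):

```python
# Illustrative model of ADAT S/MUX "sample split": one 96 kHz stream is
# distributed round-robin across two 48 kHz ADAT channels, and recombined
# on the other side. Channel pairing/bit packing are device-specific.

def smux_encode(samples_96k):
    """Split one 96 kHz stream into two 48 kHz ADAT channel streams."""
    return samples_96k[0::2], samples_96k[1::2]

def smux_decode(ch_a, ch_b):
    """Re-interleave two 48 kHz channel streams back into one 96 kHz stream."""
    out = []
    for a, b in zip(ch_a, ch_b):
        out.extend((a, b))
    return out

stream = list(range(8))
ch_a, ch_b = smux_encode(stream)
assert ch_a == [0, 2, 4, 6] and ch_b == [1, 3, 5, 7]
assert smux_decode(ch_a, ch_b) == stream
```

Seen this way, a DAW that records the raw 48 kHz channel pairs is indeed recording per-channel "noise", but the information survives as long as the pairs stay sample-synchronized -- which is why the recombination step needs a common word clock.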
Thanks.
Niels
http://nielsmayer.com
PS: The Alesis AI4 seems nicer than the http://www.aphex.com/144.htm
-- for one I won't need to make 110 ohm cables with XLR connectors to
DB25. The AI4's support for S/MUX mode sounds especially nice --
if only there were a way for it to work in Linux. Any other suggestions for
converting ADAT to AES/EBU (or spdif) w/ decent synchronization
options for input?
Hi all,
Latest release version 1.0.23 is available here:
http://www.mega-nerd.com/libsndfile/#Download
Changes are:
* Add version metadata to Windows DLL.
* Add a missing 'inline' to sndfile.hh.
* Update docs.
* Minor bug fixes and improvements.
Cheers,
Erik
--
----------------------------------------------------------------------
Erik de Castro Lopo
http://www.mega-nerd.com/
> It could be useful to have some anecdotal evidence to quantify measures
> of jitter like "annoying" and "drunk", so:
>
> What is your buffer-size?
Hi Jens,
I test with several sound cards: M-Audio, Creative Audigy, ASIO4ALL, and a
generic motherboard driver. I've found the jitter at settings over 30-50 ms
difficult for serious recording. Without ASIO drivers latency can be
100-200 ms or more, which is very bad.
ASIO at 5-10ms seems the best I can use on Windows without stutter, that
feels nice and responsive to me.
Best Regards,
Jeff
> I'm also a bit puzzled by people complaining about jitter. I don't have
> any exceptional kit, but in reality I can't say I've ever noticed it.
> Latency yes, but that's easily corrected with a bit of post record
> nudging.
Cubase is particularly bad when playing a soft-synth live, esp with larger
audio buffer sizes, because even though VST supports sample-accurate MIDI,
all note-ons are sent with timestamp of zero (the exact start of the
buffer).
It's like trying to play drunk, like glue in the keys, I keep looking at my
fingers thinking "did my finger slip off that note?".
Playing a pre-recorded MIDI track is different; timestamps are then
honoured.
Why did Steinberg implement it like this? I think it's a misguided attempt
at reducing latency. It doesn't help: the worst-case notes are still delayed
exactly one 'block' period. There's no upside.
It's far better to have small latency and no jitter because your brain will
compensate very accurately for consistent latency, you will instinctively
hit the keys a fraction early. All will sound fine.
Jitter is baked-in timing error, once it's in your tracks you can't get it
out. Latency can always be compensated for and eliminated later.
The right way is to timestamp the MIDI, send it to the synth delayed by one
block period. Since audio is already buffered with the same delay, you will
get perfect audio/MIDI sync.
IMHO -- after writing my own plugin standard, I found sample-accurate MIDI no
more difficult to support than block-quantized MIDI.
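The scheme described above (timestamp the MIDI, delay it one block, render each note at its exact sample offset) can be sketched as follows. This is a hypothetical illustration, not code from any actual host; the names and block size are assumptions:

```python
# Sketch of jitter-free MIDI scheduling: every event is delayed by a
# constant one block period, then rendered at its exact sample offset
# within the audio block. Constant latency, zero jitter.
BLOCK_SIZE = 256  # frames per audio block (assumed)

def schedule(events, block_start):
    """Return (sample_offset, note) pairs for events that land in the
    block [block_start, block_start + BLOCK_SIZE) after the delay."""
    out = []
    for timestamp, note in events:
        delayed = timestamp + BLOCK_SIZE  # constant one-block latency
        if block_start <= delayed < block_start + BLOCK_SIZE:
            out.append((delayed - block_start, note))
    return out

# An event arriving at frame 100 of the previous block keeps its
# sub-block position in the next block instead of snapping to offset 0:
events = [(100, "C4"), (300, "E4")]
print(schedule(events, 256))  # [(100, 'C4')]
```

Compare this with stamping every note-on at offset 0: the average delay is similar, but the timing error varies per note, which is exactly the baked-in jitter Jeff describes.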
Jeff McClintock
> Message: 8
> Date: Tue, 5 Oct 2010 21:22:23 +0100
> From: Folderol <folderol(a)ukfsn.org>
> Subject: Re: [LAD] on the soft synth midi jitters ...
> To: linux-audio-dev(a)lists.linuxaudio.org
> Message-ID: <20101005212223.5a7fbb61@debian>
> Content-Type: text/plain; charset=US-ASCII
>
> On Tue, 5 Oct 2010 22:00:11 +0200
> fons(a)kokkinizita.net wrote:
>
> > On Tue, Oct 05, 2010 at 02:50:10PM +0200, David Olofson wrote:
> >
> > > Not only that. As long as the "fragment" initialization overhead can
> be kept
> > > low, smaller fragments (within reasonable limits) can also improve
> throughput
> > > as a result of smaller memory footprint.
> >
> > 'Fragment initialisation' should be little more than
> > ensuring you have the right pointers into the in/out
> > buffers.
> >
> > > Depending on the design, a synthesizer with a large number of voices
> playing
> > > can have a rather large memory footprint (intermediate buffers etc),
> which can
> > > be significantly reduced by doing the processing in smaller fragments.
> >
> > > Obviously, this depends a lot on the design and what hardware you're
> running
> > > on, but you can be pretty certain that no modern CPU likes the
> occasional
> > > short bursts of accesses scattered over a large memory area -
> especially not
> > > when other application code keeps pushing your synth code and data out
> of the
> > > cache between the audio callbacks.
> >
> > Very true. The 'bigger' the app (voices for a synth, channels for
> > a mixer or daw) the more this will impact the performance. Designing
> > the audio code for a fairly small basic period size will pay off.
> > As will some simple optimisations of buffer use.
> >
> > There are other possible issues, such as using FFT operations.
> > Calling a large FFT every N frames may have little impact on
> > the average load, but it could have a big one on the worst case
> > in a period, and in the end that's what counts.
> >
> > Zyn/Yoshimi uses FFTs for some of its algorithms IIRC. Getting
> > the note-on timing more accurate could help to distribute those
> > FFT calls more evenly over Jack periods, if the input is 'human'.
> > Big chords generated by a sequencer or algorithmically will still
> > start at the same period, maybe they should be 'dispersed'...
> >
> > Ciao,
>
> I'm all in favour of a bit of dispersal.
>
> When I started out with a Yamaha SY22 and Acorn Archimedes it was all
> too easy to stuff too much down the pipe at once. However, doing some
> experimenting, I was surprised at how much you could delay or advance
> Note-On events undetectably although it depended to some extent on the
> ADSR envelope.
>
> I don't need to do that any more, but old habits die hard, so if I'm
> copy-pasting tracks I tend to be deliberately a bit sloppy.
>
> I'm also a bit puzzled by people complaining about jitter. I don't have
> any exceptional kit, but in reality I can't say I've ever noticed it.
> Latency yes, but that's easily corrected with a bit of post record
> nudging.
>
> --
> Will J Godfrey
> http://www.musically.me.uk
> Say you have a poem and I have a tune.
> Exchange them and we can both have a poem, a tune, and a song.
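The fragment-based processing discussed in the quoted thread can be sketched in a few lines. A minimal illustration only (the sizes and names are assumptions, and "fragment initialisation" really is just advancing offsets into the in/out buffers, as Fons says):

```python
# Sketch: process one Jack period in small fragments so per-voice scratch
# buffers stay small enough to remain cache-resident between callbacks.
PERIOD = 1024    # frames delivered per Jack callback (assumed)
FRAGMENT = 64    # small basic period the DSP code is designed around

def process_period(inp, out, process_fragment):
    """Walk the period in FRAGMENT-sized steps; initialising a fragment
    is little more than computing the right offsets into the buffers."""
    for off in range(0, PERIOD, FRAGMENT):
        out[off:off + FRAGMENT] = process_fragment(inp[off:off + FRAGMENT])

def half_gain(frag):
    """Stand-in for real per-fragment DSP work."""
    return [0.5 * x for x in frag]

buf_in = [1.0] * PERIOD
buf_out = [0.0] * PERIOD
process_period(buf_in, buf_out, half_gain)
assert buf_out == [0.5] * PERIOD
```

The point is that intermediate buffers are sized to FRAGMENT, not PERIOD, so a synth with many voices touches a much smaller working set per burst of processing.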
Here is an example of electromyography (EMG) sensors:
http://www.biometricsltd.com/analysisemg.htm
I'd like to be able to control a sequencer with muscle movements. I'd write
some code to process the inputs and convert them to MIDI, but I need to find
some inexpensive EMG sensors I can read data from under Linux.
Anyone have any recommendations?
Thanks
Nathanael
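The EMG-to-MIDI conversion described above could be sketched like this. Purely hypothetical: it assumes the sensor reading has already been rectified and smoothed into a 0.0-1.0 amplitude, and the controller number is arbitrary:

```python
# Hypothetical sketch: map a smoothed EMG amplitude (assumed 0.0-1.0)
# to a 3-byte MIDI Control Change message.
def emg_to_cc(amplitude, controller=1, channel=0):
    """Return one MIDI CC message for one smoothed EMG sample."""
    value = max(0, min(127, int(amplitude * 127)))  # clamp to 7-bit range
    status = 0xB0 | (channel & 0x0F)  # Control Change on given channel
    return bytes([status, controller, value])

print(emg_to_cc(0.5).hex())  # b0013f -> CC#1 (mod wheel), value 63
```

The resulting bytes could then be pushed out through any ALSA sequencer or raw MIDI binding; the interesting part in practice is the smoothing and calibration stage upstream, not the MIDI encoding.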