Hi all,
Latest release version 1.0.23 is available here:
http://www.mega-nerd.com/libsndfile/#Download
Changes are:
* Add version metadata to Windows DLL.
* Add a missing 'inline' to sndfile.hh.
* Update docs.
* Minor bug fixes and improvements.
Cheers,
Erik
--
----------------------------------------------------------------------
Erik de Castro Lopo
http://www.mega-nerd.com/
> It could be useful to have some anecdotal evidence to quantify measures
> of jitter like "annoying" and "drunk", so:
>
> What is your buffer-size?
Hi Jens,
I test with several sound cards: M-Audio, Creative Audigy, ASIO4ALL, and a
generic motherboard driver. I've found the jitter at settings over 30-50 ms
difficult for serious recording. Without ASIO drivers latency can be
100-200 ms or more, which is very bad.
ASIO at 5-10 ms seems the best I can use on Windows without stutter; that
feels nice and responsive to me.
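For reference, the millisecond figures above map directly onto buffer sizes:
one buffer of N frames at sample rate R contributes 1000 * N / R milliseconds
of one-way latency (and drivers typically chain two or more buffers). A
quick sketch:

```cpp
#include <cassert>

// One-way latency contributed by a single audio buffer of `frames`
// samples at `sample_rate` Hz, in milliseconds.
double buffer_latency_ms(int frames, int sample_rate) {
    return 1000.0 * static_cast<double>(frames) / sample_rate;
}
// buffer_latency_ms(256, 48000)  -> ~5.3 ms ("nice and responsive")
// buffer_latency_ms(2048, 44100) -> ~46 ms  (hard for serious recording)
```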
Best Regards,
Jeff
> I'm also a bit puzzled by people complaining about jitter. I don't have
> any exceptional kit, but in reality I can't say I've ever noticed it.
> Latency yes, but that's easily corrected with a bit of post record
> nudging.
Cubase is particularly bad when playing a soft-synth live, esp with larger
audio buffer sizes, because even though VST supports sample-accurate MIDI,
all note-ons are sent with timestamp of zero (the exact start of the
buffer).
It's like trying to play drunk, like glue in the keys, I keep looking at my
fingers thinking "did my finger slip off that note?".
Playing a pre-recorded MIDI track is different; timestamps are then
honoured.
Why did Steinberg implement it like this? I think it's a misguided attempt
at reducing latency. It doesn't: the worst-case notes are still delayed
exactly one 'block' period. There's no upside.
It's far better to have small latency and no jitter because your brain will
compensate very accurately for consistent latency, you will instinctively
hit the keys a fraction early. All will sound fine.
Jitter is baked-in timing error, once it's in your tracks you can't get it
out. Latency can always be compensated for and eliminated later.
The right way is to timestamp the MIDI, send it to the synth delayed by one
block period. Since audio is already buffered with the same delay, you will
get perfect audio/MIDI sync.
IMHO - After writing my own plugin standard, sample-accurate MIDI is no more
difficult to support than block-quantized MIDI.
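The splitting pattern Jeff describes can be sketched as follows. The event
and synth types here are hypothetical placeholders, not any real plugin
API; the point is simply that the block is rendered in segments, with each
event applied exactly at its sample offset instead of at offset zero:

```cpp
#include <cassert>
#include <vector>

// Illustrative types only -- not from any real plugin standard.
struct MidiEvent {
    int offset;   // sample offset within the current block
    int note;
};

struct Synth {
    std::vector<int> noteOnAt;  // records (for this sketch) where notes landed
    void render(float* out, int from, int to) {
        for (int i = from; i < to; ++i) out[i] = 0.0f;  // silence placeholder
    }
    void noteOn(int note, int atSample) {
        (void)note;
        noteOnAt.push_back(atSample);
    }
};

// Sample-accurate processing: render audio up to each event's timestamp,
// apply the event, continue. Block-quantized hosts instead fire every
// note-on at offset 0, which is the jitter complained about above.
void processBlock(Synth& s, float* out, int blockSize,
                  const std::vector<MidiEvent>& events) {  // sorted by offset
    int pos = 0;
    for (const MidiEvent& e : events) {
        s.render(out, pos, e.offset);   // audio up to the event
        s.noteOn(e.note, e.offset);     // event lands exactly on its sample
        pos = e.offset;
    }
    s.render(out, pos, blockSize);      // remainder of the block
}
```

With events at offsets 17 and 200 in a 512-sample block, the note-ons land
at samples 17 and 200 rather than both at sample 0.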
Jeff McClintock
> Message: 8
> Date: Tue, 5 Oct 2010 21:22:23 +0100
> From: Folderol <folderol(a)ukfsn.org>
> Subject: Re: [LAD] on the soft synth midi jitters ...
> To: linux-audio-dev(a)lists.linuxaudio.org
> Message-ID: <20101005212223.5a7fbb61@debian>
> Content-Type: text/plain; charset=US-ASCII
>
> On Tue, 5 Oct 2010 22:00:11 +0200
> fons(a)kokkinizita.net wrote:
>
> > On Tue, Oct 05, 2010 at 02:50:10PM +0200, David Olofson wrote:
> >
> > > Not only that. As long as the "fragment" initialization overhead can
> be kept
> > > low, smaller fragments (within reasonable limits) can also improve
> throughput
> > > as a result of smaller memory footprint.
> >
> > 'Fragment initialisation' should be little more than
> > ensuring you have the right pointers into the in/out
> > buffers.
> >
> > > Depending on the design, a synthesizer with a large number of voices
> playing
> > > can have a rather large memory footprint (intermediate buffers etc),
> which can
> > > be significantly reduced by doing the processing in smaller fragments.
> >
> > > Obviously, this depends a lot on the design and what hardware you're
> running
> > > on, but you can be pretty certain that no modern CPU likes the
> occasional
> > > short bursts of accesses scattered over a large memory area -
> especially not
> > > when other application code keeps pushing your synth code and data out
> of the
> > > cache between the audio callbacks.
> >
> > Very true. The 'bigger' the app (voices for a synth, channels for
> > a mixer or daw) the more this will impact the performance. Designing
> > the audio code for a fairly small basic period size will pay off.
> > As will some simple optimisations of buffer use.
> >
> > There are other possible issues, such as using FFT operations.
> > Calling a large FFT every N frames may have little impact on
> > the average load, but it could have a big one on the worst case
> > in a period, and in the end that's what counts.
> >
> > Zyn/Yoshimi uses FFTs for some of its algorithms IIRC. Getting
> > the note-on timing more accurate could help to distribute those
> > FFT calls more evenly over Jack periods, if the input is 'human'.
> > Big chords generated by a sequencer or algorithmically will still
> > start at the same period, maybe they should be 'dispersed'...
> >
> > Ciao,
>
> I'm all in favour of a bit of dispersal.
>
> When I started out with a Yamaha SY22 and Acorn Archimedes it was all
> too easy to stuff too much down the pipe at once. However, doing some
> experimenting, I was surprised at how much you could delay or advance
> Note-On events undetectably although it depended to some extent on the
> ADSR envelope.
>
> I don't need to do that any more, but old habits die hard, so if I'm
> copy-pasting tracks I tend to be deliberately a bit sloppy.
>
> I'm also a bit puzzled by people complaining about jitter. I don't have
> any exceptional kit, but in reality I can't say I've ever noticed it.
> Latency yes, but that's easily corrected with a bit of post record
> nudging.
>
> --
> Will J Godfrey
> http://www.musically.me.uk
> Say you have a poem and I have a tune.
> Exchange them and we can both have a poem, a tune, and a song.
Here is an example of electromyography (EMG) sensors:
http://www.biometricsltd.com/analysisemg.htm
I'd like to be able to control a sequencer with muscle movements. I'd write
some code to process the inputs and convert them to MIDI, but I need to find
some inexpensive EMG sensors that I can read data from under Linux.
Anyone have any recommendations?
Thanks
Nathanael
Hi all,
Latest libsndfile is now available here:
http://www.mega-nerd.com/libsndfile/#Download
Changes are:
* Couple of fixes for SDS file writer.
* Fixes arising from static analysis.
* Handle FLAC files with ID3 meta data at start of file.
* Handle FLAC files which report zero length.
* Other minor bug fixes and improvements.
Cheers,
Erik
--
----------------------------------------------------------------------
Erik de Castro Lopo
http://www.mega-nerd.com/
It's TYOQA (The Year Of Qtractor Automation:) what else?
But wait, there's three months to go yet. Meanwhile, the foundations
have already been laid and one can now tell that a rocky milestone is
ready to get bumped. Ouch!
Qtractor 0.4.7 (furious desertrix) is out!
Release highlights:
- MIDI learn/controller mapping for all plugin parameters (NEW)
- Extended Clip fade-in/out WYSIWYG curves (NEW)
- MIDI resolution overflow (FIX)
- MIDI tempo standard base on quarter-note (FIX)
- Extended MIDI controller mapping for mixer/tracks (NEW)
- Audio metronome gain control (NEW)
- Mute/solo tracks while looping (FIX)
- MIDI Clock support (NEW)
- Audio clip import while looping (FIX)
- MIDI track bank-select/program-change transparency (FIX)
- VeSTige headers included for native VST plugin support (NEW)
- JACK transport sync support (FIX)
- Clip tempo-adjust tool (NEW)
- Audio tracks auto-monitoring (FIX)
- Transport back/forward stops on loop points (NEW)
- MIDI tracks redundant mute/solo (FIX)
See also:
http://www.rncbc.org/drupal/node/235
Website:
http://qtractor.sourceforge.net
Project page:
http://sourceforge.net/projects/qtractor
Downloads:
- source tarball:
http://downloads.sourceforge.net/qtractor/qtractor-0.4.7.tar.gz
- source package (openSUSE 11.3):
http://downloads.sourceforge.net/qtractor/qtractor-0.4.7-1.rncbc.suse113.sr…
- binary packages (openSUSE 11.3):
http://downloads.sourceforge.net/qtractor/qtractor-0.4.7-1.rncbc.suse113.i5…
http://downloads.sourceforge.net/qtractor/qtractor-0.4.7-1.rncbc.suse113.x8…
- user manual (albeit outdated):
http://downloads.sourceforge.net/qtractor/qtractor-0.3.0-user-manual.pdf
Weblog (upstream support):
http://www.rncbc.org
License:
Qtractor is free, open-source software, distributed under the terms of
the GNU General Public License (GPL) version 2 or later.
Change-log:
- While moving multi-selected MIDI events around the clip editor (aka
piano-roll) with the keyboard arrow keys, it was not clear which one was
the so-called "anchor" event, the one whose positioning gets honored for
snap-to-beat business. Not anymore: the anchor event now defaults to the
earliest in time, or the one the user last point(-click)ed.
- MIDI control observer pattern implementation has sneaked in, making it
ready for the so-called and long-awaited "MIDI Learn" feature and
arbitrary MIDI controller assignment, for plugin parameters in particular.
- MMC DEFERRED PLAY doesn't cause transport state to stop if currently
rolling (mitigating bug #3067264).
- Audio clip merge processing might have been skipping a few initial
frame blocks, now fixed.
- Clip selection and plugin parameter hash optimization.
- Anti-glitch audio clip macro fade-in/out fixed again.
- New clip fade-in/out slopes (curves) are introduced, partially adapted
and refactored from those easing equations of Robert Penner's fame.
- Clip fade-in/out non-linear slopes are now shown as actual WYSIWYG curves.
- Escape key now closes generic plugin widgets, as is usual elsewhere.
- Picking nits: unselect current track when clicking on any gray empty
area, also accessible from a new menu item: Track/Navigate/None.
- A nasty and deadly MIDI resolution overflow has been finally fixed,
allowing for long MIDI sequences (1h+) to load correctly on 32bit
machines from now on (was perfectly fine on 64bit though).
- MIDI editor selection hash optimization in face of reasonably huge
event sequences.
- MIDI controller mapping finally refactored to support some other MIDI
event types than just CC (0xBn) ones.
- Nitpicking fix: corrected main track-list (left pane) display when no
track is currently selected.
- libX11 is now being added explicitly to the build link phase, as seen
necessary on some bleeding-edge distros eg. Fedora 13, Debian 6. (fixing
bug #3050944).
- New audio metronome bar and beat sample gain options.
- Progressively, the observer pattern is being finally introduced,
targeting all potential automation controls and widgets as plain
ground-zero for the (ultra-)long overdue automation feature.
- MIDI controller mappings of still non-existing tracks were being
implicitly assigned to the last, highly numbered, existing track. Now fixed.
- Moving from old deprecated Qt3'ish custom event post handling into
regular asynchronous signal/slot strategy.
- Muting/soloing tracks while playback is looping was leaving current
audio clip out-of-sync whenever that same track is later un-muted on any
other preceding clip. Now hopefully fixed.
- MIDI Clock support makes its first appearance.
- All tempo (BPM) calculations are now compliant with the conventional
MIDI equivalence between beat and quarter note (1/4, crotchet) as the
common standard time division.
- Automatic audio time-stretch option is not enabled by default anymore.
- Standard warning Apply button is now only shown when dismissing dialog
changes are actually valid.
- Make sure non-dedicated metronome and player buses are properly reset
and reopen when changing regular audio buses (hopefully fixing bug item
#3021645 - Crash after changing audio bus).
- Hopefully, an outrageously old bug got squashed away, which was
causing random impromptu crashes, most often when importing audio clips
while looping and play-head is any near the loop end point.
- General standard dialog buttons layout is now in place.
- Fixed main track view off-limits play-head positioning.
- Main tool-bar Time and Tempo spin-boxes may now have their colors
correct, as for most non-Qt based theme engines (ie. Gnome). Green text
on black background has been and still is the intended aspect design ;)
- MIDI file import and internal sequence representation has been changed
to be inclusive on all bank-select (CC#0,32) and program-change events
which were previously discarded while honoring MIDI track properties.
Interleaved SysEx events are now also preserved on their original
sequence positions instead of squashing a duplicate into the MIDI bus
SysEx setup.
- Attempt to include the VeSTige header by default, as for minimal VST
plugin support.
- JACK transport support has been slightly rewritten, in fact the sync
callback is now in effect for repositioning.
- The MIDI clip editor (piano roll) widget won't be flagged as a tool
window anymore.
- A tempo adjustment tool is making inroads from the menu, as
Edit/Clip/Tempo... (factory shortcut: F7).
- Audio tracks auto-monitoring is now effective on playback.
- Make sure to ask whether a dirty MIDI clip should be saved, upon
resizing or stretching its edges (fixes bug #3017723).
- Backward and Forward transport commands are now taking additional
stops on loop points.
- Attempt to optimize track solo/mute redundant transactions, in special
regard to MIDI track events which were being duplicated on soloing and
temporarily muted on unsoloing.
Cheers && Enjoy (be happy!)
--
rncbc aka Rui Nuno Capela
rncbc(a)rncbc.org
I'm writing on behalf of a friend who's having trouble dealing with midi
latency in a soft synth (possibly yoshimi) ...
Given a jack period of N frames, the midi latency with the original code
effectively ranges from N frames to 2 * N frames, which I guess qualifies
it as jittery. So far my friend has tried a few things, but there's no
workable solution as yet.
What seemed most promising was to break the audio generation into smaller
blocks, applying pending midi events between blocks. Sadly, that drags the
creation and destruction of note objects into the realtime jack process
callback path. Latency improves, but the number of notes you can get
away with before it all falls in a heap is significantly reduced.
Getting the destruction of dead notes out of the realtime path is trivial,
not so the creation of new ones. Even with a pool of pre-allocated note
objects, it seems the amount of initialization code per note is still a
real limiting factor on how busy things can get before it all falls apart.
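A minimal sketch of the sub-block scheme described above, with a fixed pool
of pre-allocated note slots so the realtime callback never touches the heap.
All names here are illustrative, not yoshimi's actual code; and as Cal notes,
the per-note initialization work (elided here) remains the real cost:

```cpp
#include <array>
#include <cassert>
#include <vector>

struct Note {
    bool active = false;
    int  pitch  = 0;
    // real per-note init (wavetable setup etc.) would go here -- the
    // costly part even when the object itself is pre-allocated
};

struct SynthState {
    std::array<Note, 64> pool;          // pre-allocated, fixed size
    Note* allocate(int pitch) {         // linear scan, no heap traffic
        for (Note& n : pool)
            if (!n.active) { n.active = true; n.pitch = pitch; return &n; }
        return nullptr;                  // voice-stealing would go here
    }
};

struct Event { int frame; int pitch; };

// Render nframes in sub-blocks of `step` frames, starting events in the
// sub-block they fall into: worst-case MIDI latency drops from the
// original [N, 2N) frame range to at most `step` frames.
void process(SynthState& s, int nframes, int step,
             const std::vector<Event>& pending) {
    for (int start = 0; start < nframes; start += step) {
        for (const Event& e : pending)
            if (e.frame >= start && e.frame < start + step)
                s.allocate(e.pitch);
        // ... render sub-block [start, start + step) here ...
    }
}
```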
Such is life (for my "friend").
cheers, Cal
Hallo,
I am new here, so I hope this is the right place to talk about what I
want to do.
I want to make a CUDA implementation of the algorithms from the
calf-plugins. On the front end there should be a button (or
something similar) to (de-)activate the CUDA support. I have already
written a JACK program (which makes some simple changes to audio data)
using CUDA. It works well, and at first sight the performance looks
promising.
I have read a part of the mailing list archive and found out that there
has already been a discussion about audio processing with CUDA. I know
there are some reasons for not using CUDA, like the requirement to use the
proprietary Nvidia driver, the limitation that only people who have an
Nvidia card will benefit, and so on. But the CUDA implementation may show
what performance can be reached, and may be useful for Nvidia users
immediately.
I know there is OpenCL, but it is not as sophisticated as CUDA at the
moment, will have less performance than CUDA, and I do not have the time
to learn OpenCL at the moment (but the project has to be finished soon).
I heard it is not too much work to port existing CUDA code to OpenCL
later (assuming there is already an OpenCL equivalent for all the
CUDA functions which were used).
So I want to do this with CUDA.
At the moment I have some questions:
1. Has anybody already done, or is anybody doing, something like this?
2. Where can I get information on making the specific changes to the calf
code? (I examined it a bit, but it will take time to understand the
structure of the program from the code alone; especially the part
for the GUI seems rather complex.)
It would be nice if I can get some help here.
Regards
Max Tandetzky
http://www.youtube.com/watch?v=AoAOx97G8ew
http://www.gizmag.com/roger-linn-linnstrument-digital-music-interface/15155/
........
Sadly, that's unlikely to happen anytime soon - because the TouchCo
multitouch pad that Linn used in the production of his prototype has
been withdrawn from production. Apparently Amazon bought up the
technology earlier this year (
http://www.nytimes.com/2010/02/04/technology/04amazon.html?_r=1 ) with
a view to using it in the Kindle eBook reader, but has completely
shelved it and shut down the TouchCo operation, presumably due to the
ongoing Intellectual Property chest-beating, suing and counter-suing
going on in the multitouch arena right now.
So the TouchCo website has nothing but a sad placeholder to offer (
http://touchco.com/ ), and Linn has nothing but his pre-production
prototype to work with, ruling out the possibility of a LinnStrument
hitting the market in the immediate future.
........
Could the meego touch API support the features needed by the
instrument used in the above Youtube Video of Roger Linn? Some of the
swipes and other gestures demonstrated in the video are available in
http://apidocs.meego.com/mtf/gestures.html but an important one isn't:
multitouch pressure sensitivity that is equivalent to "polyphonic
aftertouch" on a MIDI keyboard -- allowing analog pressure readings to
be taken continuously and simultaneously per touch.
Is there interest in making such devices part of the "use case" for meego touch?
And are there any such displays & touch sensors available for use with
MeeGo-capable hardware such as the BeagleBoard?
I imagine one could extrapolate touch pressure by looking at how much
area each touch covers, dynamically (assuming a normal human finger
whose tip would deform and cover more surface area as more pressure is
applied).
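That area-to-pressure idea could be sketched as a simple mapping from
measured contact area to a MIDI polyphonic aftertouch value (0..127). The
two calibration constants below are entirely invented for illustration; a
real device would need per-user calibration:

```cpp
#include <cassert>
#include <cmath>

// Map fingertip contact area (mm^2) to an aftertouch value in 0..127.
// A light touch covers little area; pressing harder flattens the
// fingertip and covers more. Calibration constants are assumptions.
int areaToAftertouch(double area_mm2) {
    const double rest_area = 30.0;   // assumed light-touch contact area
    const double full_area = 120.0;  // assumed hard-press contact area
    double t = (area_mm2 - rest_area) / (full_area - rest_area);
    if (t < 0.0) t = 0.0;            // clamp below calibration range
    if (t > 1.0) t = 1.0;            // clamp above calibration range
    return static_cast<int>(std::lround(t * 127.0));
}
```

Sampling this continuously per touch point would give the per-note analog
pressure stream that polyphonic aftertouch carries on a MIDI keyboard.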
Niels
http://nielsmayer.com
Hey guys!
I have two questions.
1. How does Sound Stretch work? It is incredible the way it can produce a
tone which has no noticeable vibrations, just a wall of sound. How is that
accomplished, in layman's terms if possible :)
2. Can this program be jackified and is that a lot of work?
--
Louigi Verona
http://www.louigiverona.ru/