Hi,
I was thinking about the latencies of softsynths and how Cubase VST handles
this when playing back recorded MIDI tracks, as opposed to playing the
softsynth directly. I'm talking about VST softsynths here, not the
ones that are controlled via the usual MIDI ports.
It seems that during playback of a prerecorded MIDI track, Cubase VST
knows the latency of the softsynth and compensates for it, so that the
softsynth is really tight [no offset to other recorded audio material].
This is, of course, only possible because MIDI routed VST-internally is
handled differently than MIDI that gets sent out.
I suppose that today's audio/MIDI sequencers support MIDI in a form that
is synchronized to the latency of the outgoing audio signal. So, if the
audio signal takes 2*128/44100 s [2 buffers of 128 frames] to reach the
ear of the user, then the prerecorded MIDI signal is delayed by exactly
this amount of time, because MIDI is assumed to have zero latency. For
external hardware synthesizers this is pretty much true. This way,
prerecorded audio and MIDI tracks are nicely synchronized during
playback.
Now, for a softsynth this scenario doesn't fit, because the synth does
not have zero latency. This means that a "tight" MIDI track is audible
with the softsynth's output latency.
Now, JACK is really an audio server, but it could also be used to
communicate the latency of the softsynth to the audio/MIDI sequencer. I
know there are calls to get the output latency of physical output ports,
but it would be nice to have calls that ask for the physical output
latency of clients [this value can differ between clients, perhaps with
different output devices].
This way, the audio/MIDI sequencer could ask JACK for a list of clients
and offer this choice to the user, who could then select which MIDI
tracks correspond to which softsynth. The sequencer could then send the
MIDI events a tad earlier [by the output latency of the softsynth].
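To illustrate the idea, here is a rough Python sketch of the compensation I
mean. All names and numbers here are purely illustrative; a real sequencer
would query the buffer size, sample rate and per-client latency from JACK
instead of hard-coding them:

```python
# Sketch of latency compensation for a prerecorded MIDI track.
# All values are hypothetical; a real sequencer would query them
# from the audio server rather than hard-coding them.

SAMPLE_RATE = 44100
BUFFER_FRAMES = 128
N_BUFFERS = 2

# Total output latency of the audio path, in seconds
# (the 2*128/44100 s figure from the text).
audio_latency = N_BUFFERS * BUFFER_FRAMES / SAMPLE_RATE

def schedule_events(events, synth_latency):
    """Shift each (time, note) event so the softsynth's own output
    latency is compensated: events are sent early by synth_latency,
    relative to the audio-delayed timeline."""
    scheduled = []
    for t, note in events:
        send_time = t + audio_latency - synth_latency
        scheduled.append((max(0.0, send_time), note))
    return scheduled

# Two notes, softsynth reporting 10 ms of output latency:
events = [(1.0, 60), (1.5, 64)]
for t, note in schedule_events(events, synth_latency=0.010):
    print(round(t, 6), note)
```

The point is only the arithmetic: audio tracks are already delayed by the
buffer path, so a MIDI track aimed at a softsynth has to be advanced by that
synth's own latency to land on the same timeline.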
What do you think? I have no insight into the implementation details of
JACK, so I don't know how feasible this scenario is or whether it fits
JACK's design decisions.
Regards
Florian Schmidt
Cheesetracker is a portable Impulse Tracker clone. It supports all the main
Impulse Tracker and FastTracker/SoundTracker features, plus many more.
It is licensed under the GNU General Public License. It runs under
Linux/BSDs, Mac OS X (Qt/Mac or Qt/X11), or Win32 (Cygwin).
It can be obtained at http://cheesetronic.sf.net
For those unfamiliar with trackers, this is basically an
all-in-one sequencer/sampler/sample editor/mixer/fx processor bundle,
which provides fast and flexible means for professional-grade
music composing.
Included in the release are tutorials and documentation.
Volunteers to help better document it would
be highly appreciated.
The ChangeLog for this release follows:
v0.9.0
------
-Removed sample mode (Scream Tracker 3 mode) as it's obsolete and not needed
for backwards compatibility.
-Instruments are now layered and can perform up to 4 simultaneous voices with
individual parameters each.
-Added an effect buffer system. Instruments are now routed to custom buffers
(each with individual effect chains), which can also re-route to other
buffers. This makes it possible to create very complex effect routes for
realtime processing.
-Effect buffers are "process on demand", which means they are smart enough to
notice when they are doing nothing, thus disabling themselves.
-Added a few internal effects: Amplifier, Clipping/Distortion, Recursive Delay
Line, Stereo Enhancer, Chorus and Reverb.
-Added a LADSPA effect source plugin. LADSPA plugins can be added to the
chains.
-Created new file formats that save all the new features: .CT (CheeseTracker
Module) .CI (Cheesetracker Instrument) and .CS (CheeseTracker Sample)
-Added preview to the sample file selection box; just highlight a file and use
your keyboard to play notes (/ and * work in there too).
-Readded JACK Driver (Kasper Souren)
-Added RtAudio driver, which allows porting to Win32/ASIO and OSX/CoreAudio
-Fixed some big endian compatibility issues. CheeseTracker should work fine
again on big endian machines.
-MacOSX port and build system/build fixes courtesy of Benjamin Reed
-Fixed tons and tons of bugs.
AND NOW PLEASE READ: How fast CheeseTracker reaches version 1.0 depends on
YOU. The focus of this version is STABILITY. Because of this, I need to
receive as many bug reports as I can, for both the program and the build
system. If you find a bug, I'd be enormously grateful if you submit it. Even
if it is an obvious bug to you, the chances that other people will find and
report the same bug are much smaller than you may think. If you don't report
a bug that annoys you, the chances of it reappearing in the next version will
always be higher.
Planned for 1.0.0:
-=-=-=-=-=-=-=-=-
-Rock Solid stability
-WAV exporting
-A hopefully working Windows port. This depends mainly on the Qt-Win32
project. If you are a good Windows programmer and would like to see
CheeseTracker working in there sooner, please give those guys a hand!
Enjoy!!
Juan Linietsky
On Monday 27 October 2003 15:08, Benno Senoner wrote:
>Assume I press C2 with velocity 50 pedal up, the C2-pedalup (associated
>to velocity 50) sample sounds.
>Now I press the sustain pedal and press C2 with velocity 100.
>What should the sampler do ? Quickly fade out the C2-pedalup
>(velocity-50) note and trigger the
>C2-pedaldown (velocity 100) note ?
>And of course when you release the pedal all sustained notes will get a
>note-off.
I am a piano player.
I would expect the C2 note to be replaced by the next attack on that note.
If the pedal is pressed, the damper stays up, so the next attack will be the
hammer hitting the strings, and the sounding note is not faded out beforehand.
So I think you should fade out the previous note after, or at, the attack of
the next one. Maybe even a lot later. There is an additive resonance effect
if you keep hitting the same note with the pedal down, but that is probably
the sympathetic vibration from the other strings.
If the pedal is not pressed, the damper returns to the string as the key is
released to play the next attack. You will get a note-off message then
anyway.
Note that releasing the pedal should only send note-off to those notes that
are sounding but whose keys are not pressed. If you have keys pressed down,
their dampers will be up regardless of the position of the sustain pedal, so
these notes should continue. I have no idea whether digital pianos actually
do this correctly, but it is the way an acoustic piano works.
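The rules above can be written down as a tiny state model. This is just a
conceptual Python sketch (all names invented), not code from any real
sampler:

```python
# Conceptual model of acoustic-piano damper behaviour, following the
# rules described above. All names are made up for illustration.

class DamperModel:
    def __init__(self):
        self.pedal_down = False
        self.keys_down = set()   # keys physically held
        self.sounding = set()    # notes currently sounding

    def note_on(self, key):
        self.keys_down.add(key)
        self.sounding.add(key)   # a new attack on a key that is
                                 # already sounding replaces it

    def note_off(self, key):
        self.keys_down.discard(key)
        if not self.pedal_down:
            self.sounding.discard(key)   # the damper falls back

    def pedal(self, down):
        self.pedal_down = down
        if not down:
            # Releasing the pedal silences only notes whose keys are
            # no longer held; held keys keep their dampers raised and
            # continue to sound.
            self.sounding &= self.keys_down

p = DamperModel()
p.note_on(60); p.pedal(True); p.note_off(60)
p.note_on(64)               # key 64 is still held down
p.pedal(False)              # 60 is silenced, 64 continues
print(sorted(p.sounding))   # -> [64]
```

The key point is the last rule: pedal-up intersects the sounding set with
the held-key set, which is exactly the "only dampen un-held notes"
behaviour described above.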
Gerard
electronic & acoustic musics-- http://www.xs4all.nl/~gml
Hi all,
There's now a mailing list specifically for LADCCA discussion,
ladcca-devel. You can subscribe from here:
http://mail.nongnu.org/mailman/listinfo/ladcca-devel
Bob
--
Bob Ham <rah(a)bash.sh>
"At some point, keystroke recorders got installed on several machines at
Valve. Our speculation is that these were done via a buffer overflow in
Outlook's preview pane." -- Gabe Newell on the Half-Life 2 source leak
I am looking for a simple adaptive echo cancellation algorithm for a
project I'm working on. Does anybody know of one under a
GPL-compatible license, preferably optimized for real-time use? This
might be found in a VoIP application, although I want it for musical
purposes.
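For reference, the textbook core of such an algorithm is an adaptive FIR
filter updated with NLMS (normalized least mean squares). The following is a
bare-bones, purely illustrative Python sketch of the idea, not taken from
any existing project and not optimized for real time:

```python
# Minimal NLMS (normalized least-mean-squares) adaptive filter, the
# textbook core of acoustic echo cancellation. Pure Python for
# clarity; a real-time version would be vectorized in C.
import math

def nlms_cancel(far, mic, taps=32, mu=0.5, eps=1e-8):
    """far: far-end (reference) signal; mic: microphone signal
    containing an echo of `far`. Returns the echo-reduced output."""
    w = [0.0] * taps     # adaptive filter coefficients
    buf = [0.0] * taps   # recent far-end samples, newest first
    out = []
    for x, d in zip(far, mic):
        buf = [x] + buf[:-1]
        y = sum(wi * xi for wi, xi in zip(w, buf))  # echo estimate
        e = d - y                                   # error = cleaned signal
        norm = eps + sum(xi * xi for xi in buf)     # input power
        w = [wi + mu * e * xi / norm for wi, xi in zip(w, buf)]
        out.append(e)
    return out

# Toy check: mic is a delayed, scaled copy of the far-end signal,
# so the canceller should drive the residual toward zero.
far = [math.sin(0.1 * n) for n in range(2000)]
mic = [0.0, 0.0] + [0.6 * x for x in far[:-2]]   # echo: 2-sample delay
residual = nlms_cancel(far, mic)
print("late residual:", max(abs(e) for e in residual[-100:]))
```

The normalization step (dividing the update by the input power) is what makes
NLMS stable across signal levels, which matters for musical material with a
wide dynamic range.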
I couldn't find anything at freshmeat.net. Then again, I can never find
anything on that site.
Regards,
Mark
markrages(a)mlug.missouri.edu
--
To invent, you need a good imagination and a pile of junk. -Thomas Edison
BEAST/BSE version 0.5.5 is available for download at:
ftp://beast.gtk.org/pub/beast/v0.5
or
http://beast.gtk.org/beast-ftp/v0.5
BEAST (the Bedevilled Audio SysTem) is a graphical front-end to
BSE (the Bedevilled Sound Engine), a library for music composition,
audio synthesis, MIDI processing and sample manipulation.
The project is hosted at:
http://beast.gtk.org
This new development series of BEAST comes with a lot of
the internals redone, many new GUI features and a sound
generation back-end separated from all GUI activities.
The most outstanding new features are the demo song, the effect and
instrument management abilities, the track editor which allows
for easy selection of synthesizers or samples as track sources, loop
support in songs and unlimited Undo/Redo capabilities.
Note: if you encounter problems with .bse files from previous BEAST
versions, this may indicate bugs in the compatibility layer.
A bug report accompanied by the problematic file can be sent to the
mailing list and is likely to get you a fixed file in return.
Overview of Changes in BEAST/BSE 0.5.5:
* New (or ported) modules:
  DavCanyonDelay - Canyon Echo by David A. Bartold
  BseMidiInput - Monophonic MIDI Keyboard input module
  BseBalance - Stereo panorama position module
  ArtsCompressor - Mono and stereo compressor [Stefan Westerfeld]
* Added utility script to crop and duplicate parts [Stefan Westerfeld]
* Added "Party Monster" demo song [Stefan Westerfeld]
* Implemented ability to use sequencer as modulation source
* Added support for external MIDI events in song tracks
* Added .bse file playback facility to bsesh
* Added support for C++ Plugins
* Now installs bse-plugin-generator for simple creation of C++ Modules
* Added manual pages for installed executables
* Lots of small MIDI handling fixes
* Fixed MP3 loader
* Major GUI improvements
* Registered MIME types for .bse files, provided .desktop file
* Made search paths for various resources user configurable
* Added prototype support to IDL compiler [Stefan Westerfeld]
* Work around PTH poll() bug on NetBSD [Ben Collver, Tim Janik]
* Support NetBSD sound device names [Ben Collver]
* Added i18n infrastructure for BEAST and BSE [Christian Neumair, Tim Janik]
* Added Azerbaijani translation [Metin Amiroff]
* Added Russian translation [Alexandre Prokoudine]
* Added Serbian translation [Danilo Segan]
* Added Swedish translation [Christian Rose]
* Added German translation [Christian Neumair]
* Added Czech translation [Miloslav Trmac]
* Added Dutch translation [Vincent van Adrighem]
* Lots of bug fixes
---
ciaoTJ
Hi,
I was wondering what the correct way is to handle the sustain pedal when
implementing a MIDI sound-generating module.
From the MIDI specs:
-----------
Hold Pedal, controller number: 64:
When on, this holds (ie, sustains) notes that are playing, even if the
musician releases the notes. (ie, The Note Off effect is postponed until
the musician switches the Hold Pedal off). If a MultiTimbral device,
then each Part usually has its own Hold Pedal setting.
Note: When on, this also postpones any All Notes Off controller message
on the same channel.
Value Range: 0 (to 63) is off. 127 (to 64) is on.
--------------
My question is about ".... holds (ie, sustains) notes that are playing,
even if the musician releases the notes."
Assume I play a chord, press the hold pedal, which causes the notes to
be sustained. When I play new notes those are sustained too.
So far so good.
The question arises when I press the same key twice.
Assume no sustain pedal for now.
When I press C2 I hear the note. When I release it, the sound does not
vanish immediately but takes a small amount of time to decay due to the
release envelope. If after releasing C2 I immediately press C2 again, I
hear two C2 notes for a brief time.
Now, the same situation as above but with the sustain pedal pressed.
You hear the first C2, release it (the corresponding note-off is
postponed) and then press C2 again.
In that case, is it correct that you must hear two sustained C2 notes,
or must the first C2 be forced to fade out / be muted?
If the first is not muted (i.e. you hear two sustained C2 notes), how far
can this go? Can there be 3, 4, etc. sustained notes on the same key too?
While I am not a piano player, common sense tells me that a piano has only
one string per key, so IMHO it would sound unnatural to play two
notes on the same key.
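To make the question concrete, here is a small Python sketch of one possible
CC64 policy, where note-offs are postponed while the pedal is down and
repeated presses of the same key stack voices. This is purely illustrative
and nothing in it is existing sampler code:

```python
# Conceptual sketch of Hold Pedal (CC64) handling in a sampler.
# All names are invented for illustration.

class SustainHandler:
    def __init__(self):
        self.pedal = False
        self.active = []   # (key, held) voices; several voices may
                           # share the same key when the pedal is down

    def cc64(self, value):
        self.pedal = value >= 64   # per the spec: 0-63 off, 64-127 on
        if not self.pedal:
            # pedal released: keep only voices whose key is still held
            self.active = [(k, h) for k, h in self.active if h]

    def note_on(self, key):
        self.active.append((key, True))

    def note_off(self, key):
        if self.pedal:
            # postpone the note-off: mark the voice as no longer held
            self.active = [(k, False) if (k == key and h) else (k, h)
                           for k, h in self.active]
        else:
            self.active = [(k, h) for k, h in self.active
                           if not (k == key and h)]

s = SustainHandler()
s.cc64(127)
s.note_on(36); s.note_off(36)   # first C2, sustained by the pedal
s.note_on(36)                   # second C2: two voices now sound
print(len(s.active))            # -> 2
s.cc64(0)                       # pedal up: only the held C2 remains
print(len(s.active))            # -> 1
```

Under this policy, repeated presses keep stacking voices until the pedal
comes up; the alternative policy discussed above would instead fade the
earlier voice in note_on() when a voice with the same key already exists.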
As you might have guessed, I am asking because we want to add support for
sustain in linuxsampler.
Thanks for your info.
PS: a new CVS repository for linuxsampler is up: cvs.linuxsampler.org.
Interested developers and users, please check it out and give us feedback
via our mailing list
(subscription info at http://www.linuxsampler.org ).
cheers,
Benno
http://www.linuxsampler.org
Hello, (I'm new to this list, so hi everyone!)
I'm rather stuck on the following: I'm writing an app that uses JACK for its
audio output. I now want to control this app via MIDI, but I'm having
trouble figuring out how to synchronize the rendered sound to the incoming
events. The events, MIDI notes for example, come in with timestamps in one
thread. Another thread (the one entered by process()) renders the audio. To
render properly, it would need to calculate the exact sample at which the
incoming note should begin to take effect in the rendered output stream.
If you view this in a fixed-width font, here's a graphical representation of
the problem:
|...e.....e|e....e....|...ee...e.|.....e.e.e|....e...e.| midi events
|..........|...rrr....|.rr.......|......rrr.|....rrrr..| rendering
|..........|..........|ssssssssss|ssssssssss|ssssssssss| sound
Here, the e's represent MIDI events (but could be GUI events just as well).
The r's in the second bar represent the calls to my app's process function.
During this time, the audio that will be played back during the next cycle
is rendered. The s's in the third bar represent the actual sound as it was
rendered during the previous block. The vertical bars represent blocks of
time equivalent to the buffer size.
The best I can think of is that I have to record MIDI events during the
first block and process them into audio during the second block (because I
want to take into account all events that occurred during the first block),
so it can be played back during the third. Now, all is fine, but time in the
event bar is measured in seconds and fractions thereof, while time in the
third bar is measured in samples. How can I translate the time recorded in
the events (seconds) into time in samples? How can I know at what exact
time, relative to the current playback time, my process() method was called?
If I just measure time at the start of my application, I'm afraid things
will drift. Is that correct? How have other people solved this problem? Hope
somebody can help!
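The translation itself is simple arithmetic once you have a wall-clock/frame
anchor for the current cycle (in JACK's C API that anchor comes from calls
like jack_last_frame_time()). A Python sketch of the arithmetic, with all
names and numbers illustrative:

```python
# Sketch of translating an event timestamp (seconds, wall clock) into
# a sample position within the rendered stream. Variable names are
# illustrative; in JACK's C API the anchor values would come from the
# frame-time query functions rather than being passed in by hand.

SAMPLE_RATE = 48000
BUFFER_FRAMES = 256

def event_to_frame(event_time, cycle_start_time, cycle_start_frame):
    """event_time:        wall-clock time of the MIDI event (s)
    cycle_start_time:     wall-clock time at which the current
                          process() cycle began (s)
    cycle_start_frame:    frame counter at that same instant
    Returns the absolute frame at which the event should sound."""
    dt = event_time - cycle_start_time
    return cycle_start_frame + int(round(dt * SAMPLE_RATE))

# An event that arrived 3 ms into the cycle starting at frame 100000:
frame = event_to_frame(1.003, 1.000, 100_000)
print(frame)                               # -> 100144
offset_in_block = frame % BUFFER_FRAMES    # position within a buffer
```

Because the anchor pair (cycle_start_time, cycle_start_frame) is re-read at
the start of every process() cycle, the mapping is re-synchronized each
block, which avoids the long-term drift you would get from a single
measurement at application startup.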
Regards,
Denis
Hi list,
in a local music store's "clearance corner" I found a Roland CR80
Rhythm Composer, built in 1991. It doesn't seem to have much similarity
with the classic CR78, but at 50 Euro it sounded like a nice bargain.
Can anyone comment on it - sound quality, stability? MusicMachines etc.
don't have much info on it, and I can't even find samples to download
anywhere.
Thanks,
Frank
Hi,
please excuse my stupidity, but:
Why can't the ladccad daemon start up without the JACK server running?
From my point of view it would be right for ladccad to bring JACK up,
thus allowing, for example, multiple JACK sessions with different JACK settings.
horsh