Thanks to the great reverse-engineering work of Paul Kellett and Ruben van
Royen, I was able to create this library. libgig is a cross-platform C++
loader library for Gigasampler files.
The library consists of three parts:
- RIFF classes for parsing arbitrary RIFF files
- DLS classes which use the RIFF classes to parse and provide access to DLS
level 1 and 2 files
- gig classes which are based on the DLS classes and provide the necessary
extensions for the Gigasampler file format
So you can also use the library for loading DLS files or RIFF files in
general, but the main focus is the Giga format, as you might guess from the
name of this lib ;)
You can get the sources, which come with tools/demo apps, API documentation,
a UML diagram and a short quick-start document, at:
http://stud.fh-heilbronn.de/~cschoene/projects/libgig/
I claim that this library provides access to all the articulation data the
Gig format contains. If you think I might have missed some, let me know!
The library should compile on all platforms. However, I don't own a
non-Intel system, so I can't test it; it may need some minor adjustments,
but I took care of endianness and word-size correctness when I wrote the
lib, so I'm quite confident it will work! Let me know if you try it on a
non-Intel system!
Best regards
Christian Schoenebeck
Hi,
I regularly read the German "Keyboards" magazine, and on some occasions
they have tested Windows softsynth/HDR apps and measured the voice count /
number of parallel plugin instances.
They usually talk about the "3 msec latency" case, but on one occasion I've
seen them talk about "1.5 msec latency" too.
I could be wrong, but I assume they are confusing per-fragment latency with
total latency, fooling users into believing that the latency numbers they
gave referred to the total latency.
My question is: since the Hammerfall cards always use two fragments, do the
buffer size (latency) figures shown in the diagram below:
http://www.rme-audio.de/images/hdsp/mf_set.gif
refer to the whole buffer or to a single fragment?
For example, take the 1.5 msec (64 samples) case. Does that mean 64 samples
in total (32 samples/fragment = 0.75 msec per fragment), or 64 samples per
fragment (1.5 msec per fragment, thus 3 msec in total)?
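To make the two readings concrete, here is a quick sketch (the 44.1 kHz
sample rate is my assumption, inferred from the "1.5 msec = 64 samples"
figure in the question):

```python
RATE = 44100  # Hz; assumed sample rate behind the "1.5 msec = 64 samples" figure

def ms(frames):
    """Convert a frame count to milliseconds at RATE."""
    return 1000.0 * frames / RATE

# Reading 1: 64 samples is the WHOLE buffer (two fragments of 32 samples).
print(f"reading 1: {ms(32):.2f} ms/fragment, {ms(64):.2f} ms total")

# Reading 2: 64 samples is ONE fragment (two fragments of 64 samples).
print(f"reading 2: {ms(64):.2f} ms/fragment, {ms(2 * 64):.2f} ms total")
```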
I'm sure Paul D. and other Hammerfall experts will be able to enlighten me.
Sorry for the question, but I just wanted to be sure this is not a case
where per-fragment latencies are sold as total latencies, since lower
latency numbers are always cool :-)
cheers,
Benno
http://www.linuxsampler.org
Hi,
I was thinking about the latencies of softsynths and how Cubase VST handles
this when playing back recorded MIDI tracks, as opposed to playing the
softsynth directly. I'm talking about the VST softsynths here, not the ones
that are controlled via regular MIDI ports.
It seems that during playback of a prerecorded MIDI track, Cubase VST knows
the latency of the softsynth and compensates for it, so that the softsynth
is really tight [no offset relative to other recorded audio material].
This is, of course, only possible because MIDI routed VST-internally is
handled differently from MIDI that gets sent out.
I suppose that today's audio/MIDI sequencers support MIDI in a form that is
synchronized to the latency of the outgoing audio signal. So, if the audio
signal takes 2*128/44100 s [2 buffers of 128 frames] to reach the ear of
the user, then the prerecorded MIDI signal is delayed by this exact amount
of time, because MIDI is assumed to have zero latency. And for external
hardware synthesizers this is pretty much true. This way, prerecorded audio
and MIDI tracks are nicely synchronized during playback.
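As a concrete sketch of that compensation (the buffer numbers are the ones
from the example above; the helper name is my own invention):

```python
SAMPLE_RATE = 44100
BUFFERS = 2
FRAMES_PER_BUFFER = 128

# Time a rendered sample needs to travel through both buffers before it
# reaches the listener: 2*128/44100 s, roughly 5.8 ms.
audio_latency = BUFFERS * FRAMES_PER_BUFFER / SAMPLE_RATE

def playback_time(midi_event_time):
    """Hypothetical sequencer helper: delay a prerecorded MIDI event by
    the audio output latency so it lines up with recorded audio
    (external MIDI gear is treated as zero latency)."""
    return midi_event_time + audio_latency
```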
Now, for the case of a softsynth this scenario doesn't fit, because the
synth does not have zero latency. This means that a "tight" MIDI track is
audible with the softsynth's output latency.
Now, JACK is really an audio server, but it could also be used to
communicate the latency of the softsynth to the audio/MIDI sequencer. I
know there are calls to get the output latency of physical output ports.
But it would be nice to have calls to ask for the physical output latency
of clients [this value can differ between clients - maybe with different
output devices].
This way, the audio/MIDI sequencer could ask JACK for a list of clients
and offer this choice to the user, who could then select which MIDI tracks
correspond to which softsynth. The sequencer could then send the MIDI
events a tad earlier [by the output latency of the softsynth].
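The scheduling side of this proposal could look roughly like the following
sketch. Note that none of these names are real JACK calls; the per-client
latency table stands in for the hypothetical query API:

```python
SAMPLE_RATE = 48000  # arbitrary example rate

# Stand-in for the proposed per-client query: in the real proposal the
# sequencer would ask JACK for each client's output latency in frames.
client_latency_frames = {
    "softsynth_a": 256,   # e.g. 2 buffers of 128 frames
    "softsynth_b": 512,   # a client on a different output device
}

def send_time(event_time, client):
    """Emit a prerecorded MIDI event early by the client's output
    latency, so the audible result lands exactly on event_time."""
    return event_time - client_latency_frames[client] / SAMPLE_RATE
```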
What do you think? I have no insight into the implementation details of
JACK, so I don't know how feasible this scenario is, or whether it fits
its design decisions.
Regards
Florian Schmidt
Cheesetracker is a portable Impulse Tracker clone. It supports all the main
Impulse Tracker and FastTracker/SoundTracker features, plus many more.
It is licensed under the GNU General Public License. It runs under
Linux/BSDs, MacOSX (Qt/Mac or Qt/X11), or Win32 (Cygwin).
It can be obtained at http://cheesetronic.sf.net
For those unfamiliar with trackers, this is basically an
all-in-one sequencer/sampler/sample editor/mixer/fx processor bundle,
which provides fast and flexible means for professional-grade
music composing.
Included in the release are tutorials and documentation.
Volunteers to help document it better would
be highly appreciated.
The ChangeLog for this release follows:
v0.9.0
------
-Removed sample mode (Scream Tracker 3 mode), as it's obsolete and not
needed for backwards compatibility.
-Instruments are now layered and can perform up to 4 simultaneous voices with
individual parameters each.
-Added an effect buffer system. Instruments are now routed to custom buffers
(each with an individual effect chain), which can also re-route to other
buffers. This makes it possible to create very complex effect routes for
realtime processing.
-Effect buffers are "process on demand", which means they are smart enough to
notice when they are doing nothing, thus disabling themselves.
-Added a few internal effects: Amplifier, Clipping/Distortion, Recursive Delay
Line, Stereo Enhancer, Chorus and Reverb.
-Added a LADSPA effect source plugin. LADSPA plugins can be added to the
chains.
-Created new file formats that save all the new features: .CT (CheeseTracker
Module) .CI (Cheesetracker Instrument) and .CS (CheeseTracker Sample)
-Added preview to the sample file selection box; just highlight a file and
use your keyboard to play notes (/ and * work in there too).
-Re-added JACK driver (Kasper Souren)
-Added RTAUDIO driver, allows for porting to Win32/ASIO and OSX/CoreAudio
-Fixed some big endian compatibility issues. CheeseTracker should work fine
again on big endian machines.
-MacOSX port and build system/build fixes courtesy of Benjamin Reed
-Fixed tons and tons of bugs.
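The "process on demand" behaviour of the effect buffers mentioned above can
be sketched like this (a toy illustration, not CheeseTracker's actual code;
the names and the silence threshold are my assumptions):

```python
class EffectBuffer:
    """An effect buffer that disables itself after its input has been
    silent for a number of blocks, and wakes up when audio returns."""

    def __init__(self, effects, silence_blocks=8):
        self.effects = effects            # chain of callables on a block
        self.silence_blocks = silence_blocks
        self.quiet = 0                    # consecutive silent blocks seen
        self.active = False

    def process(self, block):
        if any(abs(s) > 1e-6 for s in block):
            self.quiet = 0
            self.active = True
        else:
            self.quiet += 1
            if self.quiet >= self.silence_blocks:
                self.active = False       # nothing to do, go to sleep
        if not self.active:
            return block                  # pass silence through cheaply
        for fx in self.effects:           # keep running while active so
            block = fx(block)             # effect tails can ring out
        return block
```

The countdown before deactivating gives delay or reverb effects in the
chain time to let their tails decay before the buffer goes dormant.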
AND NOW PLEASE READ: How fast CheeseTracker reaches version 1.0 depends on
YOU. The focus of this version is STABILITY. Because of this, I need to
receive as many bug reports as I can, for both the program and the build
system. If you find a bug, I'd be enormously grateful if you submit it. Even
if the bug seems obvious to you, the chances that other people will find and
report the same bug are much smaller than you may think. If you don't report
a bug that annoys you, the chances of it reappearing in the next version
will always be higher.
Planned for 1.0.0:
-=-=-=-=-=-=-=-=-
-Rock Solid stability
-WAV exporting
-A hopefully working Windows port. This depends mainly on the Qt-Win32
project. If you are a good Windows programmer and would like to see
CheeseTracker working there sooner, please give those guys a hand!
Enjoy!!
Juan Linietsky
On Monday 27 October 2003 15:08, Benno Senoner wrote:
>Assume I press C2 with velocity 50 pedal up, the C2-pedalup (associated
>to velocity 50) sample sounds.
>Now I press the sustain pedal and press C2 with velocity 100.
>What should the sampler do ? Quickly fade out the C2-pedalup
>(velocity-50) note and trigger the
>C2-pedaldown (velocity 100) note ?
>And of course when you release the pedal all sustained notes will get a
>note-off.
I am a piano player.
I would expect the C2 note to be replaced by the next attack on that note.
If the pedal is pressed, the damper stays up, so the next attack will be the
hammer hitting the still-sounding strings; the sounding note is not faded
out beforehand.
So I think you should fade out the previous note on, or after, the attack of
the next one. Maybe even a lot later. There is an additive resonance effect
if you keep hitting the same note with the pedal down, but that is probably
sympathetic vibration from the other strings.
If the pedal is not pressed, the damper returns to the string as the key is
released to play the next attack. You will get a note-off message then
anyway.
Note that releasing the pedal should only send note-off to those notes that
are sounding but whose key is not pressed. If you have keys pressed down,
the dampers will be up, regardless of the position of the sustain pedal. So
these notes should continue. I have no idea whether digital pianos actually
do this correctly, but that is the way an acoustic piano works.
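The pedal-release rule described above can be sketched as a small state
machine (an illustration of the behaviour, not any particular sampler's
code; all names are my own):

```python
class PedalState:
    """Track held keys and sounding notes; on pedal release, stop only
    those notes whose key has already been let go."""

    def __init__(self):
        self.pedal_down = False
        self.keys_down = set()        # keys physically held
        self.sounding = set()         # notes currently producing sound

    def note_on(self, note):
        self.keys_down.add(note)
        self.sounding.add(note)

    def note_off(self, note):
        self.keys_down.discard(note)
        if not self.pedal_down:
            self.sounding.discard(note)   # damper falls, note stops

    def pedal(self, down):
        self.pedal_down = down
        if not down:
            # Pedal up: keep only notes whose key is still pressed.
            self.sounding &= self.keys_down
```

For example: play C4, press the pedal, release the key (C4 keeps
sounding), play E4 and hold it, then lift the pedal: C4 stops, but E4
continues because its key is still down.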
Gerard
electronic & acoustic musics-- http://www.xs4all.nl/~gml
Hi all,
There's now a mailing list specifically for LADCCA discussion,
ladcca-devel. You can subscribe from here:
http://mail.nongnu.org/mailman/listinfo/ladcca-devel
Bob
--
Bob Ham <rah(a)bash.sh>
"At some point, keystroke recorders got installed on several machines at
Valve. Our speculation is that these were done via a buffer overflow in
Outlook's preview pane." -- Gabe Newell on the Half-Life 2 source leak
I am looking for a simple adaptive echo cancellation algorithm for a
project I'm working on. Does anybody know of one under a GPL-compatible
license, preferably optimized for real-time use? This might be found in
a VoIP application, although I want it for musical purposes.
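For reference, the textbook building block for this is an adaptive FIR
filter with a normalised LMS (NLMS) update; a minimal sketch follows (an
illustration of the algorithm, not a pointer to any existing GPL
implementation, and not tuned for real-time use):

```python
import numpy as np

def nlms_echo_cancel(far, mic, taps=128, mu=0.5, eps=1e-8):
    """Normalised LMS adaptive echo canceller: estimate the echo of the
    far-end signal present in the mic signal and subtract it.
    Returns the echo-reduced signal."""
    w = np.zeros(taps)          # adaptive filter coefficients
    x = np.zeros(taps)          # delay line of recent far-end samples
    out = np.empty(len(mic))
    for n in range(len(mic)):
        x = np.roll(x, 1)
        x[0] = far[n]
        e = mic[n] - w @ x      # error = mic minus estimated echo
        out[n] = e
        # Normalising by the input power keeps adaptation stable.
        w += (mu / (eps + x @ x)) * e * x
    return out
```

The filter converges as long as the true echo path fits within the
`taps`-sample window; real systems add double-talk detection on top.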
I couldn't find anything at freshmeat.net. Then again, I can never find
anything on that site.
Regards,
Mark
markrages(a)mlug.missouri.edu
--
To invent, you need a good imagination and a pile of junk. -Thomas Edison
BEAST/BSE version 0.5.5 is available for download at:
ftp://beast.gtk.org/pub/beast/v0.5
or
http://beast.gtk.org/beast-ftp/v0.5
BEAST (the Bedevilled Audio SysTem) is a graphical front-end to
BSE (the Bedevilled Sound Engine), a library for music composition,
audio synthesis, MIDI processing and sample manipulation.
The project is hosted at:
http://beast.gtk.org
This new development series of BEAST comes with a lot of
the internals redone, many new GUI features and a sound
generation back-end separated from all GUI activities.
The most outstanding new features are the demo song, the effect and
instrument management abilities, the track editor, which allows for easy
selection of synthesizers or samples as track sources, loop support in
songs, and unlimited Undo/Redo capabilities.
Note: if you encounter problems with .bse files from previous BEAST
versions, this may indicate bugs in the compatibility layer.
A bug report accompanied by the problematic file can be sent to the
mailing list and is likely to get you a fixed file in return.
Overview of Changes in BEAST/BSE 0.5.5:
* New (or ported) modules:
DavCanyonDelay - Canyon Echo by David A. Bartold
BseMidiInput - Monophonic MIDI Keyboard input module
BseBalance - Stereo panorama position module
ArtsCompressor - Mono and stereo compressor [Stefan Westerfeld]
* Added utility script to crop and duplicate parts [Stefan Westerfeld]
* Added "Party Monster" demo song [Stefan Westerfeld]
* Implemented ability to use sequencer as modulation source
* Added support for external MIDI events in song tracks
* Added .bse file playback facility to bsesh
* Added support for C++ Plugins
* Now installs bse-plugin-generator for simple creation of C++ Modules
* Added manual pages for installed executables
* Lots of small MIDI handling fixes
* Fixed MP3 loader
* Major GUI improvements
* Registered MIME types for .bse files, provided .desktop file
* Made search paths for various resources user configurable
* Added prototype support to IDL compiler [Stefan Westerfeld]
* Work around PTH poll() bug on NetBSD [Ben Collver, Tim Janik]
* Support NetBSD sound device names [Ben Collver]
* Added i18n infrastructure for BEAST and BSE [Christian Neumair, Tim Janik]
* Added Azerbaijani translation [Metin Amiroff]
* Added Russian translation [Alexandre Prokoudine]
* Added Serbian translation [Danilo Segan]
* Added Swedish translation [Christian Rose]
* Added German translation [Christian Neumair]
* Added Czech translation [Miloslav Trmac]
* Added Dutch translation [Vincent van Adrighem]
* Lots of bug fixes
---
ciaoTJ
Hi,
I was wondering what the correct way is to handle the sustain pedal when
implementing a MIDI sound-generating module.
From the MIDI specs:
-----------
Hold Pedal, controller number: 64:
When on, this holds (ie, sustains) notes that are playing, even if the
musician releases the notes. (ie, The Note Off effect is postponed until
the musician switches the Hold Pedal off). If a MultiTimbral device,
then each Part usually has its own Hold Pedal setting.
Note: When on, this also postpones any All Notes Off controller message
on the same channel.
Value Range: 0 (to 63) is off. 127 (to 64) is on.
--------------
My question is about ".... holds (ie, sustains) notes that are playing,
even if the musician releases the notes."
Assume I play a chord, press the hold pedal, which causes the notes to
be sustained. When I play new notes those are sustained too.
So far so good.
The question arises when I press the same key twice.
Assume no sustain pedal for now.
When I press C2 I hear the note. When I release it, the sound does not
vanish immediately but takes a small amount of time to decay, due to the
release envelope. If, after releasing C2, I immediately press C2 again, I
hear two C2 notes for a brief time.
Now same situation as above but with the sustain pedal pressed.
You hear the first C2, release it (the corresponding note-off is
postponed) and then press C2 again.
In that case, is it correct that you hear two sustained C2 notes,
or must the first C2 be forced to fade out / be muted?
If the former (i.e. you hear two sustained C2 notes), how far can this go?
Can there be 3, 4, etc. sustained notes on the same key too?
While I am not a piano player, common sense tells me that a piano has only
one string per key, so IMHO it would sound unnatural to hear two
notes sounding on the same key.
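One possible policy, matching the piano-like behaviour discussed above, can
be sketched as follows. The class, names and per-key voice limit are my
assumptions; only the CC64 on/off threshold comes from the spec, which
otherwise leaves the retrigger behaviour to the implementation:

```python
SUSTAIN_CC = 64

class Voice:
    def __init__(self, note, velocity):
        self.note, self.velocity = note, velocity
        self.released = False     # note-off received (possibly postponed)

class SustainSampler:
    """Toy voice manager: retriggering a sustained key drops the oldest
    voice on that key instead of stacking voices without bound."""

    def __init__(self, max_per_key=2):
        self.pedal = False
        self.voices = []
        self.max_per_key = max_per_key

    def note_on(self, note, velocity):
        same_key = [v for v in self.voices if v.note == note]
        # Keep at most max_per_key voices per key: kill the oldest ones.
        for v in same_key[:max(0, len(same_key) - self.max_per_key + 1)]:
            self.voices.remove(v)
        self.voices.append(Voice(note, velocity))

    def note_off(self, note):
        for v in self.voices:
            if v.note == note:
                v.released = True     # postponed while the pedal is down
        if not self.pedal:
            self.voices = [v for v in self.voices if not v.released]

    def control(self, cc, value):
        if cc == SUSTAIN_CC:
            self.pedal = value >= 64  # spec: 0-63 is off, 64-127 is on
            if not self.pedal:
                # Pedal up: postponed note-offs take effect now.
                self.voices = [v for v in self.voices if not v.released]
```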
As you might have guessed I ask this stuff because we want to add
support of sustain in linuxsampler.
Thanks for your info.
PS: a new CVS repository for linuxsampler is up: cvs.linuxsampler.org
interested developers and users please check it out and give us feedback
via our mailing list.
(subscription infos at http://www.linuxsampler.org ).
cheers,
Benno
http://www.linuxsampler.org