Hi all,
Thanks to suggestions from people here I now have a relatively
complete C++ wrapper for libsndfile:
http://www.mega-nerd.com/tmp/sndfile.hh
There is also a pre-release of libsndfile which includes a
test for this wrapper:
http://www.mega-nerd.com/tmp/libsndfile-1.0.17pre7.tar.gz
C++ users, please comment.
Cheers,
Erik
--
+-----------------------------------------------------------+
Erik de Castro Lopo
+-----------------------------------------------------------+
"Even among Europe's Muslim minorities, roughly one-in-seven in France,
Spain, and Great Britain feel that suicide bombings against civilian targets
can at least sometimes be justified to defend Islam against its enemies."
-- http://pewglobal.org/reports/display.php?ReportID=253
New source location:
http://www.notam02.no/~kjetism/src/
(Sorry, I have temporarily lost access to both my previously used upload
directories)
das_watchdog
*************************************************************************
Whenever a program locks up the machine, das_watchdog temporarily sets
all realtime processes to non-realtime for 8 seconds. An xmessage window
pops up on the screen whenever that happens.
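The demote-and-restore idea can be sketched with the POSIX scheduler
calls; this is a hypothetical illustration (the function name is
invented, and it is not das_watchdog's actual code), and changing
another process's scheduling class needs root:

```python
import os
import time

def demote_temporarily(pid: int, rt_priority: int, seconds: int = 8) -> None:
    """Drop `pid` from realtime (SCHED_FIFO) to the normal scheduler,
    wait, then restore its realtime priority. Requires root."""
    os.sched_setscheduler(pid, os.SCHED_OTHER, os.sched_param(0))
    time.sleep(seconds)
    os.sched_setscheduler(pid, os.SCHED_FIFO, os.sched_param(rt_priority))
```

A real watchdog would first walk /proc to find every SCHED_FIFO/SCHED_RR
process rather than act on a single pid.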
Changes 0.2.3->0.2.4
--------------------
*Test if the xmessage program found during the make process is a valid
executable. If not, search the $PATH instead. This should fix it for
Gentoo when the pro-audio overlay is updated to at least this version.
*Various modifications for the High Res Timer, which should be used
instead of setting the timer interrupt process to SCHED_FIFO/99.
jack_capture
*************************************************************************
jack_capture is a small program to capture whatever sound is going out to
your speakers into a file, without having to patch jack connections, fiddle
around with file formats, or set options on the command line.
This is the program I always wanted to have for jack, but no
one made. So here it is.
Changes 0.3.1 -> 0.3.7:
-----------------------
*Fixed potential buffer underrun error.
*Fixed potential ringbuffer size allocation miscalculation.
*Better way to set leading zeros in filename. Thanks to Melanie.
*Better underrun handling. Thanks to Dmitry Baikov.
*Added support for jack buffer size change.
*Removed some unnecessary code and comments.
*Beautified code a bit.
*Fixed a bug in the reconnection code.
*Beautified code a lot.
*Changed bufsize argument to accept seconds instead of frames. Default
buffer size is 60 seconds.
*Improved documentation and help option.
*Beautified source a bit.
*Fixed bug in ringbuffer size allocation.
*Fixed so that more than one instance of jack_capture can run at once.
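Converting the new seconds-based bufsize argument into a ringbuffer
allocation involves the sample rate, channel count, and sample size;
a hypothetical sketch of that arithmetic (names are invented, this is
not jack_capture's actual code):

```python
def ringbuffer_bytes(seconds: float, sample_rate: int, channels: int,
                     bytes_per_sample: int = 4) -> int:
    """Size in bytes for a ringbuffer holding `seconds` of interleaved
    float audio, rounded up to the next power of two (a common choice
    for lock-free ringbuffers such as JACK's)."""
    needed = int(seconds * sample_rate) * channels * bytes_per_sample
    size = 1
    while size < needed:
        size <<= 1
    return size

# The default: 60 seconds of stereo float audio at 48 kHz.
print(ringbuffer_bytes(60, 48000, 2))
```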
Hi all,
I am working on LADSPA support in Jokosher and we want Jokosher to
depend on particular plug-ins in different parts of the application.
Specifically, I would like to see a powerful compressor and equalizer
as part of the application.
So, my question to you all is which compressor and equalizer do we
depend on? Importantly, the chosen plug-ins need to exhibit the
following qualities:
* packages for all major distributions (Ubuntu, Debian, Red Hat,
Fedora, Gentoo, SuSE etc.)
* well maintained
* very high audio quality
Cheers,
Jono
I've never been a MIDI expert but I'm now having to learn. I have a
question about this excerpt of a MIDI file viewed with hexedit.
00001BB0 22 80 3D 35 31 80 3A 39 0E 80 37 31 03 80 31 1F ".=51.:9..71..1.
00001BC0 81 0C 90 30 5B 00 90 3C 79 81 70 90 39 73 00 90 ...0[..<y.p.9s..
00001BD0 36 69 4B 80 36 43 0A 80 3C 26 01 80 30 44 0A 80 6iK.6C..<&..0D..
00001BE0 39 42 82 08 90 37 63 00 90 43 7B 81 70 90 3E 5E 9B...7c..C{.p.>^
00001BF0 00 90 3A 66 08 80 37 30 02 80 43 32 31 80 3E 11 ..:f..70..C21.>
Take the sequence "80 3D 35 31 80 3A 39 0E 80 37 31 03 80 31 1F" in
the first line, for example. I know that 0x80 is note-off, 0x3D is the
note number, and 0x35 the velocity of the note-off. But what the heck is
the next byte, 0x31? The MIDI standard says note-off is one status byte
followed by 2 data bytes!
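For reference, Standard MIDI File tracks interleave a variable-length
delta-time before every event, so the puzzling 0x31 is a timestamp (49
ticks), not note-off data. A minimal sketch decoding the quoted bytes
(it assumes no running status, which happens to hold for this excerpt):

```python
def read_vlq(data: bytes, i: int):
    """Read a MIDI variable-length quantity (7 bits per byte, high bit
    set on all but the last byte); returns (value, next_index)."""
    value = 0
    while True:
        b = data[i]
        i += 1
        value = (value << 7) | (b & 0x7F)
        if not b & 0x80:
            return value, i

def parse_track_bytes(data: bytes):
    """Yield (delta, status, note, velocity) for a run of 3-byte
    note-on/off events, each preceded by its delta-time."""
    i = 0
    while i + 3 < len(data):
        delta, i = read_vlq(data, i)
        yield delta, data[i], data[i + 1], data[i + 2]
        i += 3

# The quoted sequence plus the bytes around it:
raw = bytes.fromhex("22 80 3D 35 31 80 3A 39 0E 80 37 31 03 80 31 1F 81 0C 90 30 5B")
for delta, status, note, vel in parse_track_bytes(raw):
    kind = "off" if status & 0xF0 == 0x80 else "on "
    print(f"wait {delta:3d} ticks, note-{kind} note={note:#04x} vel={vel:#04x}")
```

Note the 0x81 0x0C pair later in the dump: a two-byte delta-time of 140
ticks, which is why delta-times and data bytes are easy to confuse in a
raw hex view.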
Lee
On Jul 25, 2006, at 9:33 AM, Dave Robillard
<drobilla(a)connect.carleton.ca> wrote:
> But you don't "just get plug and play" with MIDI. It's all about
> learning with MIDI.
"Common things should be easy, and unusual things
should be possible". The common things in MIDI are
plug-and-play. Only the "unusual things" are "all about
learning".
NoteOn and NoteOff, sustain pedal, volume control,
stereo pan, pitch-bend, mod-wheel ... these are all
plug-and-play, and have been since the earliest days of MIDI.
Manufacturers who make controllers know to send out these
commands in a stylized way, and sound designers who write
patches for synths (soft and hard) know to make their synths
respond in an appropriate way to these controllers. And for
a lot of musicians, this is enough for them to do what they want
to do. This is the MIDI world Garageband lives in, for example,
and the biggest problem Apple has with Garageband is that
it is an entry-level program that makes most of its users so happy
that they aren't interested in upgrading to semi-pro software.
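For concreteness, the plug-and-play messages listed above all map to
fixed status bytes and controller numbers that every conforming device
agrees on; a minimal sketch (the helper names are mine, channel 0 shown):

```python
def note_on(note: int, velocity: int, channel: int = 0) -> bytes:
    return bytes([0x90 | channel, note, velocity])

def note_off(note: int, velocity: int, channel: int = 0) -> bytes:
    return bytes([0x80 | channel, note, velocity])

def control_change(controller: int, value: int, channel: int = 0) -> bytes:
    return bytes([0xB0 | channel, controller, value])

# The standardized controller numbers behind the "common things":
MOD_WHEEL, VOLUME, PAN, SUSTAIN = 1, 7, 10, 64

def pitch_bend(value: int, channel: int = 0) -> bytes:
    """value is 14-bit (0..16383), sent LSB first; 8192 is centre."""
    return bytes([0xE0 | channel, value & 0x7F, (value >> 7) & 0x7F])
```

Because these numbers are fixed by the MIDI spec, any controller and any
synth that honour them interoperate with no configuration at all.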
---
John Lazzaro
http://www.cs.berkeley.edu/~lazzaro
lazzaro [at] cs [dot] berkeley [dot] edu
---
On Jul 24, 2006, at 1:39 PM, linux-audio-dev-request(a)music.columbia.edu wrote:
> what about applying the journal data to an OSC-over-UDP stream. the
> journal data could be encapsulated in OSC. sounds like a paper and
> liblo patch waiting to happen ;)
Personally, my suggestion is that the community starts by
defining OSC profiles for specific classes of gestural input
and synthesis methods that are widely used in the community.
These profiles should standardize syntax and semantics. If
you are working on a music project that is doing something
that fits a profile, use the profile. Otherwise, do as you do today.
If OSC goes down this route, one can imagine developing a
recovery-journal system with recovery semantics for all the
standard profiles. Part of developing a new OSC profile would
be defining the recovery journal for the profile.
The least of the benefits of a design like this would be
network resiliency. The big win is by defining OSC profiles
with semantics, it starts to make sense to create a hardware
or software synth that "understands OSC profile X" out of
the box, in the same way a synth understands MIDI. And
you can also create mass-market controller hardware that
"puts out OSC data using profile X". And so, you can
connect the two boxes up and get plug and play -- just
like MIDI.
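To make the profile idea concrete: a hypothetical keyboard profile might
fix an address like /keys/noteon carrying two int32 arguments. A minimal
sketch of encoding such a message in raw OSC (the address and profile
are invented for illustration; the padding rules are from the OSC 1.0
spec):

```python
import struct

def osc_pad(b: bytes) -> bytes:
    """NUL-terminate and pad to a 4-byte boundary, per the OSC spec."""
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str, *args: int) -> bytes:
    """Encode an OSC message whose arguments are all int32."""
    msg = osc_pad(address.encode())
    msg += osc_pad(("," + "i" * len(args)).encode())
    for a in args:
        msg += struct.pack(">i", a)
    return msg

# Hypothetical "keyboard" profile: note-on with pitch and velocity.
packet = osc_message("/keys/noteon", 60, 100)
```

The point of a profile is that both ends agree on this address and
argument layout in advance, which is exactly what makes a recovery
journal for it definable.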
On Jul 24, 2006, at 7:43 AM, Dave Robillard
<drobilla(a)connect.carleton.ca> wrote:
> Anyway, as soon as you go sysex you lose the semantics and you have
> the same
> problem anyway - retransmission is the only possible solution if you
> know nothing about the data (it becomes application specific).
RTP MIDI has several ways to deal with this. For senders that know the
semantics of what they are sending (like, say, Novation would if they
were adding Wi-Fi to their keyboard line), the recovery journal syntax
for SysEx lets the sender specify recovery data in a way that's suitable
to the semantics, and this encoding lets a receiver figure out how to
use the data to recover.
For senders that don't know the semantics of what they are sending
(like a box with MIDI DIN jacks on one end and a WiFi antenna on
the other), there are several options. One is to use the recovery
journal encoding for SysEx that is a simple list of all commands for a
type of SysEx, and rely on more frequent RTCP feedback from receiver
to sender to keep the journal trimmed to a reasonable length.
Alternatively, it's possible to split a MIDI cable into two RTP MIDI
streams -- one TCP and one UDP -- and gate the SysEx onto the TCP
stream.
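The TCP/UDP split in the last option amounts to a gate on the status
byte; the lists below stand in for the two streams (a hypothetical
illustration of the idea, not code from any RTP MIDI implementation,
and real code would add RTP framing around each event):

```python
# Stand-in queues for the two RTP MIDI streams.
tcp_stream: list[bytes] = []   # reliable, ordered: carries SysEx
udp_stream: list[bytes] = []   # low-latency, loss-tolerant: everything else

SYSEX_START = 0xF0

def route(event: bytes) -> None:
    """Gate SysEx commands onto the reliable stream; all other
    events go over the loss-tolerant stream."""
    (tcp_stream if event[0] == SYSEX_START else udp_stream).append(event)

route(bytes([0x90, 0x3C, 0x64]))        # note-on  -> UDP
route(bytes([0xF0, 0x43, 0x10, 0xF7]))  # SysEx    -> TCP
```

The trade-off is that SysEx then arrives with TCP's latency and
head-of-line blocking, which is acceptable precisely because bulk SysEx
is rarely time-critical.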
> (especially custom sysex evil that throws interoperability completely
> out the window).
Most industry folks who need to do unusual things with MIDI don't start
with SysEx. They start by making analogies of what they need to
do with the standard MIDI command set, and repurposing. This is
partially done to make sure DAWs can edit the data, and partially
done to get the efficiency of running status over the wire. SysEx
is used for secondary features. You can see this design philosophy in
the Logic Control specification in Appendix B of:
http://manuals.info.apple.com/en/Logic7_DedicatedCntrlSurfaceInfo.pdf
If I were rewriting an OSC application to use MIDI, with an eye
towards good RTP MIDI loss behavior, I'd take this re-purposing
approach ... it would be interesting to see how Jazzmutant did it,
since in their latest release of the Lemur, MIDI is now a full-fledged
transport and not sent via OSC, if I read this web page correctly:
http://www.jazzmutant.com/lemur_lastupdate.php
> Human readability and interoperability IS often
> important (eg using supercollider or pd or whatever to control
> things).
I use Structured Audio in my own work, and Eric Scheirer's language
support design for MIDI has many good aspects. See:
http://www.cs.berkeley.edu/~lazzaro/sa/book/control/midi/index.html
In 2006, if I were designing a replacement language, I'd do the MIDI
interface language design differently, given my experience using and
implementing SAOL. But I don't consider MIDI's use in SAOL
hard to program in its present state, apart from some details of
extend() and turnoff for handling NoteOff release activities.
Hello,
My name is Rene Bon Ciric, I am a Linuxer/musician/designer/webdev.
I am trying to assemble a team to start developing an Open Source OS for
the AKAI MPC4000 Sampler/Workstation. We have just started and are at
irc://freenode/openmpc
If you're interested, please join the group.
Thanks for your time!
Renich