On Sat, Feb 16, 2013 at 7:15 AM, Pedro Lopez-Cabanillas <
pedro.lopez.cabanillas(a)gmail.com> wrote:
> I like to explain this issue with an analogy: it is similar to the image
> world, where you have vector graphics (MIDI) and bit-mapped or raster
> graphics (digital audio). There are Linux programs working from both
> points of view, for instance: Inkscape (vector graphics) and Gimp
> (bitmaps). You can use bitmaps with Inkscape and vector graphics in Gimp,
> and they can interoperate quite well, but when you mix both worlds, each
> program tries to convert the alien object to its own point of view. There
> are graphic artists who start a design with Inkscape, producing SVG files,
> and for the final product they import an SVG into Gimp. There are also
> people working directly with Gimp from scratch.
well i guess as the Great Dictator and Malevolent Guru, i am forced to
comment again just so that you can remain sufficiently angry that you keep
coding useful apps for people to use. let me know what else i can add that
will get you sufficiently riled up. do i need to harp on about KDE some
more? possibly critique some long-dead technology like artsd once more?
just let me know - it would be a shame to lose your talent and skills just
because i stopped irritating you.
that said ... your analogy is a good one, but it doesn't mention an
important detail that matters a great deal from a development perspective.
you can't see an SVG image without rendering it as a bitmap. inkscape does
this for you, both while you work and when you decide to use "Export as
bitmap".
likewise, you can't hear MIDI without rendering it to audio.
now, if someone uses a workflow in which they edit MIDI, then render, then
listen, then go back to editing MIDI again, it is entirely reasonable to
keep the audio and MIDI aspects of software rather separate. this is, in
fact, how systems like CSound and RTMix and SuperCollider worked for many
years.
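(as a concrete illustration of that offline cycle, here's a rough sketch in
C of rendering a MIDI file to a wav file - i'm using fluidsynth's file
renderer purely as an example; the soundfont path and file names are
placeholders and error handling is omitted:)

  #include <fluidsynth.h>

  int main(void)
  {
      fluid_settings_t *settings = new_fluid_settings();
      /* render to a file instead of the soundcard, and let the player
         advance per synthesized sample rather than by wall clock */
      fluid_settings_setstr(settings, "audio.file.name", "rendered.wav");
      fluid_settings_setstr(settings, "player.timing-source", "sample");
      fluid_synth_t *synth = new_fluid_synth(settings);
      fluid_synth_sfload(synth, "/usr/share/sounds/sf2/example.sf2", 1);

      fluid_player_t *player = new_fluid_player(synth);
      fluid_player_add(player, "song.mid");   /* placeholder MIDI file */
      fluid_player_play(player);

      /* pull synthesized blocks until the MIDI file is done */
      fluid_file_renderer_t *renderer = new_fluid_file_renderer(synth);
      while (fluid_player_get_status(player) == FLUID_PLAYER_PLAYING) {
          if (fluid_file_renderer_process_block(renderer) != FLUID_OK)
              break;
      }

      delete_fluid_file_renderer(renderer);
      delete_fluid_player(player);
      delete_fluid_synth(synth);
      delete_fluid_settings(settings);
      return 0;
  }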
but as soon as someone wants to either (a) have their tool respond to
incoming MIDI in realtime and/or (b) avoid the requirement for an
edit/render/listen cycle, it becomes important to integrate the audio and
MIDI parts of a piece of software fairly tightly. this by itself isn't so
different from what inkscape does - after all, it can render bitmap images
all the time as you edit. the added dimension for audio (and MIDI) though
is ... time. not only can the user edit the data represented as MIDI,
position it in different tracks/channels, and arrange for different
renderers (think "instrument plugins"), but the MIDI data itself contains a
notion of "time" in the sense of "don't play me yet, do it in 1.2
seconds".
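to illustrate (a): with the ALSA sequencer, a client that reacts to
incoming MIDI the instant it arrives looks roughly like this (a minimal
sketch - the client and port names are made up, error handling omitted):

  #include <stdio.h>
  #include <alsa/asoundlib.h>

  int main(void)
  {
      snd_seq_t *seq;
      snd_seq_open(&seq, "default", SND_SEQ_OPEN_INPUT, 0);
      snd_seq_set_client_name(seq, "midi-monitor");
      snd_seq_create_simple_port(seq, "in",
          SND_SEQ_PORT_CAP_WRITE | SND_SEQ_PORT_CAP_SUBS_WRITE,
          SND_SEQ_PORT_TYPE_MIDI_GENERIC | SND_SEQ_PORT_TYPE_APPLICATION);

      for (;;) {
          snd_seq_event_t *ev;
          snd_seq_event_input(seq, &ev);  /* blocks until an event arrives */
          if (ev->type == SND_SEQ_EVENT_NOTEON)
              printf("note on: key %d, velocity %d\n",
                     ev->data.note.note, ev->data.note.velocity);
      }
  }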
that embedded notion of time adds an entirely new dimension to the
development task, one that simply isn't there for graphics (but is of
course part of video).
and it is in handling this temporal aspect that tools like the ALSA
sequencer come into play.
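for example, here's roughly what "play this note-on in 1.2 seconds" looks
like when the ALSA sequencer's queue does the timing (again just a sketch -
names are placeholders and error handling is omitted):

  #include <unistd.h>
  #include <alsa/asoundlib.h>

  int main(void)
  {
      snd_seq_t *seq;
      snd_seq_open(&seq, "default", SND_SEQ_OPEN_OUTPUT, 0);
      snd_seq_set_client_name(seq, "timed-note");
      int port = snd_seq_create_simple_port(seq, "out",
          SND_SEQ_PORT_CAP_READ | SND_SEQ_PORT_CAP_SUBS_READ,
          SND_SEQ_PORT_TYPE_MIDI_GENERIC | SND_SEQ_PORT_TYPE_APPLICATION);

      /* the queue is what gives events their sense of time */
      int queue = snd_seq_alloc_named_queue(seq, "demo");
      snd_seq_start_queue(seq, queue, NULL);
      snd_seq_drain_output(seq);

      snd_seq_event_t ev;
      snd_seq_ev_clear(&ev);
      snd_seq_ev_set_source(&ev, port);
      snd_seq_ev_set_subs(&ev);               /* send to subscribers */

      /* "don't play me yet, do it in 1.2 seconds" */
      snd_seq_real_time_t when = { 1, 200000000 };
      snd_seq_ev_schedule_real(&ev, queue, 1, &when);  /* 1 = relative */
      snd_seq_ev_set_noteon(&ev, 0, 60, 100); /* channel 0, middle C */

      snd_seq_event_output(seq, &ev);
      snd_seq_drain_output(seq);  /* the kernel delivers it on schedule */

      sleep(2);                   /* stay alive until the event fires */
      return 0;
  }

the point being: the application hands the event off once, and the
sequencer's queue owns the "when".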