On Tue December 20 2005 20:22, Emiliano Grilli wrote:
> IMHO there are tasks that are best expressed by "gestures" (mouse
> clicks and acting on icons), and others that are best expressed by
> "words" (command line interface, scripting), and I think both
> approaches are valuable. Having the two cooperate well is a good
> thing for me. I don't understand why one should be denigrated in
> favor of the other.
Actually, I *am* a programmer (just one who's not especially good
at writing stable C++ code), and I wish I could "program" songs
in a way that made sense to me both as a programmer and as a
composer. Something higher level than Csound and less Lisp-like
than Nyquist, for example, but which could still talk to all the
nifty audio stuff like JACK, ALSA synthesizers, MIDI and LADSPA
filters. "emacs mysong" (or "kwrite mysong") is always gonna be
more comfortable to me than a mouse-driven sequencer interface,
because I live most of my life in a text editor. I'd like to be
able to play my song back whenever I want to, or type "make all"
and get nice big honkin' wav, ogg and mp3 files of it.
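Just to make that concrete, here's a rough sketch of the flavor of
thing I mean, in plain Python. The note list, the file names, and
the encoder calls are all made up for illustration, and it assumes
the oggenc and lame encoders happen to be installed:

    #!/usr/bin/env python3
    # A toy "song as a program": the score is a few (frequency, duration)
    # pairs in plain text, the standard library renders the wav, and the
    # external oggenc/lame encoders (assumed installed) do the rest.
    import math
    import struct
    import subprocess
    import wave

    RATE = 44100

    def tone(freq, seconds, volume=0.5):
        """Return raw 16-bit mono samples for a plain sine tone."""
        n = int(RATE * seconds)
        return b"".join(
            struct.pack("<h", int(volume * 32767 *
                                  math.sin(2 * math.pi * freq * i / RATE)))
            for i in range(n)
        )

    # The "score": (frequency in Hz, duration in seconds) pairs.
    SONG = [(261.63, 0.5), (329.63, 0.5), (392.00, 0.5), (523.25, 1.0)]

    with wave.open("mysong.wav", "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(RATE)
        for freq, dur in SONG:
            w.writeframes(tone(freq, dur))

    # Hand the wav off to external encoders on the PATH.
    subprocess.run(["oggenc", "-o", "mysong.ogg", "mysong.wav"], check=True)
    subprocess.run(["lame", "mysong.wav", "mysong.mp3"], check=True)

The point isn't this particular toy; it's that the "score" is just
text I can edit, diff, and rebuild from a Makefile whenever I like.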
However, I recognize that I am not like most musicians. Since no
one else here is either, I felt someone needed to stick up for
them. So many people seem to wonder why more musicians aren't
using Linux... the "use the command line, it's better" mentality
is one of the reasons. Most musicians and many recording
engineers are going to think something's gone horribly wrong if
they see a command line.
Rob