I just had to come out of lurk mode for this:
tim wrote:
> all of them.
>
> rhythm is always based on one integral periodic 'pulse'. if
> time is not divisible by this atom, there is no musical time.
Nancarrow, Ives, Stockhausen, Xenakis, Boulez, Schaeffer, Henry etc. etc. in
the classical field;
Taylor, Sun Ra, Ornette Coleman, Coltrane, Mengelberg, Brötzmann, Zorn,
Ayler etc. etc. in jazz/impro;
lots of ambient stuff that I don't know the names of;
lots of a cappella vocal music from various cultures.
There can easily be multiple time-frames happening in a single piece
of music that have non-linear relationships.
A computer can also be used to make sounds that a player cannot make.
A sequencer/DAW will also be used for non-musical ordering of sounds in
time. It might be handy to use an extended beat/measure structure for
setting event frames when editing dialog for a radio play.
BTW, measures are much more complicated than just A/B. Even a 6/8 is really
2/2.6666... in a way, unless it is divided differently. See Brahms for
nice examples of playing with the groupings of eighth notes in 4/4.
The notation x/y is just a shorthand in classical music _notation_, one that
only becomes meaningful in the context of other notation parameters, such
as note-beam groupings etc.
So notating 17/16 instead of 4.25/4 is fine, because the score gives the
grouping information (to the player and conductor).
Although I have written (4+1/2) / 4 because I wanted to make sure that the
piece is counted that way and not in 9/8 (=3+3+3).
Anyway, my point is that the A/B concept of measure is only really relevant
if you're dealing with western _notation_, and then together with the entire
score.
going back to lurk mode now
Gerard
You move the play position marker.
Plugins get the position changes from the timeline,
and those that need to, do their best to prebuffer
audio data from disk, or whatever. While doing that,
they put a "1" on their "READY" Control Outputs, which
are connected to the transport control.
You press "Start".
The transport control simply waits until it has
received a "0" from each one of the "READY" Controls
it's watching. Then it actually starts the sequencer.
If there are no READY Controls in the net, the
sequencer will just start instantly.
Sounds reasonable?
//David Olofson - Programmer, Composer, Open Source Advocate
.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
--- http://olofson.net --- http://www.reologica.se ---
The Rosegarden team have the pleasure of announcing the
latest release of their MIDI and audio sequencer and score
editor for Linux.
The source code is available now from the project homepage:
http://www.all-day-breakfast.com/rosegarden
The availability of binary packages depends on their various
maintainers; please check the project homepage for more
information.
New features since the 0.8 release include:
o Improved MIDI file I/O: better support for banks and
merging in import, better support for delay, transpose
etc in export
o MIDI device Bank and Program editor, including import
and export of Studio data
o MIDI Panic Button for clearing down stuck notes
o Added some keyboard controls to matrix
o Added real-time segment delays
o Progress display completely overhauled
o ALSA clients can be added dynamically (you can change
your soft synth configuration while Rosegarden is running)
o MIDI events filter dialog for MIDI THRU and MIDI record
o MIDI/ALSA recording bug fixed (stopping after so many
recorded events)
o Many bug fixes, tweaks and performance improvements
Unfortunately we've had to drop KDE2 support from this release
onwards as it was getting too difficult to maintain both KDE2
and KDE3 in the same development tree. We're hoping this won't
affect too many users in the long term.
Finally, we would like to thank the cachegrind/kcachegrind team
for a truly useful development tool.
Chris
Hi, guys,
I am working on a project in which I need to implement
playback and recording on the same sound card at the
same time.
I open the sound card in RDWR mode and use 'select'
to wait for sound data on the card, then read it. After
that, I receive audio data from a socket and write that
data into the card's buffer. I tested it and found that
one direction alone, i.e. just read or just write, works
fine and the sound quality is good, but with both at
once, i.e. play & record, the sound is horrible, and the
weird thing is that I always get better sound quality
from playback than from recording. (I heard the audio
at both ends.) By the way, I used SNDCTL_DSP_TRIGGER to
synchronize them.
Can this method implement full-duplex operation? If
not, what should I do? Thanks in advance.
leo
I was just thinking about the details of Voice vs Channel Controls.
The events are identical in format and semantics, except that Voice
Controls must have a Virtual Voice ID argument.
As a result, if you have only Channel controls, those could be
exported as Voice controls, if you say you have only 1 Voice. (*)
Seeing it from the sender side, we can also conclude that you can
control a single voice synth with Channel Control events, *provided*
the synth really ignores the VVID. (Which we cannot assume just like
that, of course.)
Or, we could require that Channel Control events carry VVIDs as well!
We wouldn't really have to waste VVIDs on this, because when
connecting Channel Control outputs to Channel Control inputs, no
VVIDs are needed. (They will be ignored, since Channel Controls can't
have that dimension of addressing.)
When connecting a Channel Control output to a Voice Control input, a
fixed VVID will be allocated, giving you a single voice to do what
you want with - as if you were controlling a monophonic synth!
If you're into polyphony, you'll want to be able to grab a bunch of
VVIDs, so you can control multiple Voices. Obviously, this is what
most sequencers will do. But then, what happens if you connect a
sequencer to a monophonic plugin that has only Channel Controls?
Well, sending Voice Control events with a single (possibly fake) VVID
to Channel Controls works just fine... :-)
We just have to tell the sequencer that there really is only one
Voice. It'll ask for a VVID for that, and the host will go "aha;
Voice -> Channel" and hand the sequencer a fake VVID. (0, that is.)
I'll have to think some more about this, but I think it would be
possible to handle channel->voice and voice->channel Control
connections pretty much transparently. I think this is quite
important, especially for monophonic synths (controlling them from a
standard sequencer), and for building monophonic control processing
nets that run polyphonic synths. (Each Voice will act as a mono
synth.)
(*) Which is probably something we should support, BTW. Some synths
may have a fixed number of voices that are not independent, and
it might be useful to be able to express that in some way.
Documentation or naming might be sufficient, though, as it's
really a matter of how you control the synth. A dual voice
"interference" synth would probably best be controlled from two
sequencer tracks, set to use only one Virtual Voice each - ie
monophonic tracks, like on a traditional tracker.
My current event struct is 32 bytes on 32 bit CPUs, and 36 bytes for
64 bit. (That is, hosts on 64 bit archs should make it 64.)
That means 12 bytes for arguments. 8 bytes on 64 bit platforms, if we
stick with 32 bytes/event.
What I'm thinking is just that we won't need 4 billion different
actions, and we won't need 4 billion VVIDs.
How about using 24 bits for 'id' and 8 bits for 'action'? 16 Mvoices
and 256 actions? I think we should stay away from dynamically
assigned user actions anyway, so that should be enough. (We have only
some 20 events or so, timeline events included.)
for no special reason and with the intent of public delight,
here's an excerpt from "The Raga Guide", published by Nimbus
Records in association with the Rotterdam Conservatory of Music,
by Joep Bor, Suvarnalata Rao, Wim van der Meer and Jane Harvey,
musicians on the CD set are: Hariprasad Chaurasia, flute,
Buddhadev DasGupta, sarod, and Shruti Sadolikar-Katkar,
Vidhyadhar Vyas, both vocal.
5 Talas in performance
[...] A composition in Hindustani music is set to a particular
rhythm cycle (tala), which consists of a fixed number of time
units or counts (matras) and is made up of two or more sections.
[...]
Among the talas which are in common use, the sixteen-beat tintal
(or trital: 4+4+4+4) is perhaps the most popular today [33].
Other common talas are:
dadra - six counts: 3+3
rupak - seven counts: 3+2+2
kaharva - eight counts: 4+4
jhaptal - ten counts: 2+3+2+3
ektal and chautal - twelve counts: 2+2+2+2+2+2
dhamar - fourteen counts: 5+2+3+4
dipchandi - fourteen counts: 3+4+3+4
addha tintal or sitarkhani - sixteen counts: 4+4+4+4
[33] Over eighty-five percent of the ragas on the CDs have been
performed in tintal.
tim
I couldn't resist it, so I hacked up a quick script to try the blockless,
dynamically compiled processing we were discussing the other day.
http://plugin.org.uk/blockless/
just "make" if you want to test it
It's really hacky insomnia Perl code, so don't look at it ;)
It works by defining graphs (.g), that are made up of atoms (C code) and
other graphs. The perl script turns it all into one giant lump of C and
builds it.
Graph files look like this, e.g. pinknoise.g:
noise n(); // declares an instance n
zm1 d();
mix2to1 m();
gain half(0.5f);
n:out -> m:in1; // connect the 'out' port of n to the 'in1' port of m
n:out -> d:in;
d:out -> m:in2;
m:out -> half:in;
half:out -> this:out; // an output from this module to the parent graph
I went as far as defining a biquad filter in the graph format
(http://plugin.org.uk/blockless/blockless/modules/biquad.g), but it
doesn't quite work because the execution order is more or less random.
I used the biquad in a simple toplevel graph
(http://plugin.org.uk/blockless/blockless/graphs/test4.g); it takes about
50 cycles per sample on a PIII (interestingly, it compiles to slightly
worse code with gcc 3.2). The source my script produces is very tangled
and function-heavy, but gcc manages to untangle it and inline it all, e.g.
http://plugin.org.uk/blockless/blockless/intem/test4.c
It's too much work to create a reasonably complex synth or anything in this
as there's no UI, and keeping all the links straight in your head is
painful, so I don't know how well it scales up.
It's quite cool building up modules from gain and z^-1 units though :)
- Steve