Hello.
Recently Logic 6's Freeze feature was mentioned here, and Freeze
is mentioned in a new interview at Emagic's webpage:
Q: Any other favorite feature in Logic 6, that you use most?
Boris Blank [of Yello]: I think Freeze is ingenious. [ ... ]
I'm wondering whether Emagic borrowed the idea from this list, as the
freeze feature was discussed here in November 2001, and soon after
that the feature appeared in their software.
((Does anyone have Logic 6's PDF manuals? The Evoc manual? For research purposes.))
Regards,
Juhana
Hi everyone,
Sorry about sending random ASCII to the list -- I
was editing an "is this, or is this not, RTP" reply and
clicked the wrong button (oops). Basically, I should
note that
[1] Low latency itself is not a problem with RTP --
if one runs RTP over a transport with low latency,
it can take full advantage of that low latency; see:
http://www.cs.berkeley.edu/~lazzaro/sa/pubs/pdf/nossdav01.pdf
[2] RTP came out of the multi-cast WAN videoconferencing
world (early 1990s) and then slowly migrated into other
domains (like content-streaming, etc). The Peak folks, like
the MIDI Show control folks, came to the problem from the
LAN direction. That said, any LAN that is running IP can
set up a link-local multicast group and run RTP on top of it,
and access the "broadcast" nature of the LAN environment.
[3] The problem Peak solves in hardware is akin to the
problem SPDIF and AES/EBU solve in hardware -- if the
sender and receiver have free-running clocks, sooner or
later underflow or overflow occurs, and so if your goal is
to "never click", you have to address the issue somehow.
Note this isn't an issue with non-continuous-media RTP
payload formats like RTP MIDI, as the command nature
of MIDI lets you slip time without artifacts. Neither is it an
issue for voice codecs used in conversational speech,
because you can resync at the start of each voice-spurt
(packets don't get sent for the side not talking -- this is part
of the efficiency advantage of VoIP over switched-circuit telephony).
For continuous-stream audio over RTP, the state of the art
to avoid this problem is a software sample-rate converter on
the receiver end, which speeds up or slows down the
sample rate of the sent stream by tiny amounts to null out
the tiny differences from "nominal" in the sender and receiver
sample-rate clocks. Quicktime does this, according to a thread
on the AVT mailing list a few years ago that discussed this issue.
Note this method isn't modifying the sender's actual sample rate;
rather, it's modifying the receiver's effective sample rate
to match the intentions of the sender.
---
John Lazzaro
http://www.cs.berkeley.edu/~lazzaro
lazzaro [at] cs [dot] berkeley [dot] edu
---
On Feb 26, 2004, at 8:31 AM, linux-audio-dev-request(a)music.columbia.edu
wrote:
>> It appears to be ethernet, not IP-based.
>
> Ah. Silly me.
>
>> http://www.mkpe.com/articles/2001/Networks_2001/networks_2001.htm
>
> "so, where are the products?" (referring to RTP and RTCP). Silly
> author. Expecting people to productize[sic] publically owned protocols.
>
>
> ------------------------------
>
> Message: 4
> Date: Thu, 26 Feb 2004 14:58:38 +0000
> From: Steve Harris <S.W.Harris(a)ecs.soton.ac.uk>
> Subject: Re: [linux-audio-dev] is this, or this not, RTP?
> To: "The Linux Audio Developers' Mailing List"
> <linux-audio-dev(a)music.columbia.edu>
> Message-ID: <20040226145838.GD14603(a)login.ecs.soton.ac.uk>
> Content-Type: text/plain; charset=us-ascii
>
> On Thu, Feb 26, 2004 at 09:33:25AM -0500, Paul Davis wrote:
>>> It appears to be ethernet, not IP-based.
>>
>> Ah. Silly me.
>>
>>> http://www.mkpe.com/articles/2001/Networks_2001/networks_2001.htm
>>
>> "so, where are the products?" (referring to RTP and RTCP). Silly
>> author. Expecting people to productize[sic] publically owned
>> protocols.
>
> I guess he was talking about routers and the like; in 2001 there
> weren't any that I know of, but now there are plenty, e.g.:
> http://www.cisco.com/en/US/products/hw/routers/ps221/
>
> - Steve
>
>
> ------------------------------
>
> Message: 5
> Date: Thu, 26 Feb 2004 10:00:07 -0500
> From: Paul Davis <paul(a)linuxaudiosystems.com>
> Subject: Re: [linux-audio-dev] [ANN] Website
> To: "The Linux Audio Developers' Mailing List"
> <linux-audio-dev(a)music.columbia.edu>
> Message-ID: <200402261500.i1QF07P6008166(a)dhin.linuxaudiosystems.com>
>
>>> * two ogg files recorded using the current alpha version of Aeolus,
>>> the pipe organ synthesiser I'll present at the second LAD
>>> conference
>>> in Karlsruhe.
>>
>> these files sound incredible. i can't wait to hear your presentation
>> on aeolus!
>
> absolutely! i'm calling off my talk so we can spend extra time
> listening to Aeolus perform. incredible work!
>
> --p
>
>
> ------------------------------
>
> Message: 6
> Date: Thu, 26 Feb 2004 16:03:11 +0100
> From: kloschi <linux-lists(a)web.de>
> Subject: [linux-audio-dev] Announcement: Camp Music 2004, techlab,
> call for participiants/exhibitors
> To: A list for linux aduio developers
> <linux-audio-dev(a)music.columbia.edu>
> Message-ID: <20040226160311.3b2060ea(a)magrathea.funk.subsignal.org>
> Content-Type: text/plain; charset=US-ASCII
>
> Hi list,
>
> we are doing the camp music, a festival for electronic music on
> 14.05.-15.05.2004 in the 'motorpark' near magdeburg [germany].
> this event includes a usual festival with 2 stages and additionally
> a forum for musicians and independent labels [labelforum] and also
> the so called 'techlab'.
> techlab will be a place where music-software and -hardware producers
> show their [new] stuff, give workshops and so on. So far companies like
> native instruments and emagic and some more commercial
> manufacturers/developers will show up.
> I would also like to invite free software developers with showable
> products to present and give workshops, introducing musicians and
> producers to the possibilities of the free software world.
>
> please contact me _soon_ at kloschi(a)seekers-event.com or
> kloschi(a)subsignal.org feel free to forward this announcement.
>
> kloschi
>
>
> ------------------------------
>
> Message: 7
> Date: Thu, 26 Feb 2004 10:04:47 -0500
> From: Paul Davis <paul(a)linuxaudiosystems.com>
> Subject: Re: [linux-audio-dev] Freeze?
> To: "The Linux Audio Developers' Mailing List"
> <linux-audio-dev(a)music.columbia.edu>
> Message-ID: <200402261504.i1QF4lwA008182(a)dhin.linuxaudiosystems.com>
>
>> It was, however, the automatic merits that I wished mainly to explore.
>> Freezing has it's merits, but it requires that you dedicate some
>> brain cycles
>> to deciding when and where you wish to freeze/unfreeze something. I
>> could
>> sure use those cycles for keeping creativity flowing.
>
> remember: freezing has no merits at all unless you need to save CPU
> cycles.
>
> the problem is that in a typical DAW session, you can potentially
> freeze most tracks most of the time. so how can you tell what the user
> wants frozen and what they don't? More importantly, freezing consumes
> significant disk resources. Can you afford to do this without it being
> initiated by the user? A typical 24 track Ardour session might consume
> 4-18GB of audio. Freezing all or most of it will double that disk
> consumption (and it's not exactly what you would call quick, either :)
>
> thus, either you have s/w smart enough to figure out what to freeze
> and then do it, which does not come without certain costs, or the user
> has to play a significant role in the process.
>
>> ... and I do think you can freeze a bus, but it requires that the app
>> has full
>> knowledge of the connection graph. Mmmm, I see a jack extension
>> forming ;))
>
> sorry, but i don't think so. if i have a bus that is channelling audio
> in from an external device (say, a h/w sampler), you cannot possibly
> freeze it.
>
> --p
>
>
> ------------------------------
>
> Message: 8
> Date: Thu, 26 Feb 2004 15:17:31 +0000
> From: Steve Harris <S.W.Harris(a)ecs.soton.ac.uk>
> Subject: Re: [linux-audio-dev] Freeze?
> To: "The Linux Audio Developers' Mailing List"
> <linux-audio-dev(a)music.columbia.edu>
> Message-ID: <20040226151731.GG14603(a)login.ecs.soton.ac.uk>
> Content-Type: text/plain; charset=us-ascii
>
> On Thu, Feb 26, 2004 at 10:04:47AM -0500, Paul Davis wrote:
>> the problem is that in a typical DAW session, you can potentially
>> freeze most tracks most of the time. so how can you tell what the user
>> wants frozen and what they don't? More importantly, freezing consumes
>> significant disk resources. Can you afford to do this without it being
>> initiated by the user? A typical 24 track Ardour session might consume
>> 4-18GB of audio. Freezing all or most of it will double that disk
>> consumption (and it's not exactly what you would call quick, either :)
>
> No, but you can do it (semi-)transparently when the user presses play.
> I don't know if that would be acceptable or not, but if you imagine
> adding a few effects, auditioning, rinse, repeat, it might work out ok.
>
> I'd always go for more CPU if possible - I'm not a huge fan of multiple
> code paths :)
>
> - Steve
>
>
> ------------------------------
>
> Message: 9
> Date: Thu, 26 Feb 2004 09:06:59 -0600
> From: Benjamin Flaming <lad(a)solobanjo.com>
> Subject: Re: [linux-audio-dev] Freeze?
> To: "The Linux Audio Developers' Mailing List"
> <linux-audio-dev(a)music.columbia.edu>
> Message-ID: <200402260906.59583.lad(a)solobanjo.com>
> Content-Type: text/plain; charset="iso-8859-1"
>
> On Thursday 26 February 2004 09:04 am, Paul Davis wrote:
>> the problem is that in a typical DAW session, you can potentially
>> freeze most tracks most of the time. so how can you tell what the user
>> wants frozen and what they don't?
>
> I want my plug-ins frozen the instant I close the parameter
> editor. ;)
>
>> More importantly, freezing consumes
>> significant disk resources. Can you afford to do this without it being
>> initiated by the user? A typical 24 track Ardour session might consume
>> 4-18GB of audio. Freezing all or most of it will double that disk
>> consumption (and it's not exactly what you would call quick, either :)
>
> Agreed, it's a very definite trade-off - storage space for CPU cycles.
> It is my observation, however, that storage space is cheap, and
> readily available.
>
>> sorry, but i don't think so. if i have a bus that is channelling audio
>> in from an external device (say, a h/w sampler), you cannot possibly
>> freeze it.
>
> However, buses which simply contain a submix of several audio
> tracks can
> be safely frozen, saving both processing power and disk bandwidth.
>
> The purpose of my project is to create a working environment which
> encourages songs to be organized in such a way that offline rendering
> can
> *usually* be done transparently. Thus, the hierarchical tree structure.
>
> When I finish comping the vocals for a chorus, I want to be left
> with 1
> fader, and 1 editable audio track, for the chorus. If I need to make
> one of
> the voices softer, I can bring up the underlying tracks within a second
> (which is *at least* how long it usually takes me to find a single
> fader in a
> 48-channel mix). While I'm making adjustments, Tinara will read all
> the
> separate chorus tracks off the disk, mixing them in RT. When I move
> back one
> layer in the mix hierarchy (thereby indicating that I'm finished
> adjusting
> things), Tinara will begin re-rendering the submix in the background
> whenever
> the transport is idle. When the re-rendering is done, Tinara will go
> back to
> pulling a single interleaved stereo track off the disk, instead of 6-8
> mono
> tracks.
>
> The basic idea is to turn mixing into a process of
> simplification. When
> I'm finishing up a mix, I don't want to deal with a mess of tracks and
> buses,
> with CPU power and disk bandwidth being given to things I haven't
> changed in
> days. I want to be able to focus on the particular element or submix
> that
> I'm fine-tuning - and have as much DSP power to throw at it as
> possible.
>
> This will also make the use of automated control surfaces much
> nicer,
> IMHO. Since there will be fewer elements in each layer of the
> hierarchy,
> fewer faders would be needed. Additionally, it would be easier to
> keep track
> of what's going on. I've worked extensively with Digi Design's
> Control|24,
> and my feeling is that things start to get messy when there are more
> than
> about 12 faders (not to mention how easy it is to get lost when there
> are two
> or more banks of 24 faders!).
>
> Just for the record, please understand that any negativity I may
> express
> toward conventional DAW systems is *not* directed toward Ardour. It's
> just
> pent-up frustration with Pro Tools ;)
>
> |)
> |)enji
>
>
>
> ------------------------------
>
> Message: 10
> Date: Thu, 26 Feb 2004 11:25:33 -0500
> From: Paul Davis <paul(a)linuxaudiosystems.com>
> Subject: Re: [linux-audio-dev] Freeze?
> To: "The Linux Audio Developers' Mailing List"
> <linux-audio-dev(a)music.columbia.edu>
> Message-ID: <200402261625.i1QGPXbe015706(a)dhin.linuxaudiosystems.com>
>
>> I want my plug-ins frozen the instant I close the parameter
>> editor. ;)
>
> oh, you don't want to do any graphical editing of plugin parameter
> automation? :))
>
>> Agreed, it's a very definite trade-off - storage space for CPU
>> cycles.
>> It is my observation, however, that storage space is cheap, and
>> readily
>> available.
>
> not my experience.
>
>>> sorry, but i don't think so. if i have a bus that is channelling
>>> audio
>>> in from an external device (say, a h/w sampler), you cannot possibly
>>> freeze it.
>>
>> However, buses which simply contain a submix of several audio
>> tracks can
>> be safely frozen, saving both processing power and disk bandwidth.
>
> sure, but that's a subset of all busses; it's not a bus per se.
>
>> When I finish comping the vocals for a chorus, I want to be left
>> with 1
>> fader, and 1 editable audio track, for the chorus. If I need to make
>> one of
>> the voices softer, I can bring up the underlying tracks within a
>> second
>> (which is *at least* how long it usually takes me to find a single
>> fader in a
>> 48-channel mix). While I'm making adjustments, Tinara will read all
>> the
>> separate chorus tracks off the disk, mixing them in RT. When I move
>> back one
>> layer in the mix hierarchy (thereby indicating that I'm finished
>> adjusting
>> things), Tinara will begin re-rendering the submix in the background
>> whenever
>> the transport is idle.
>
> have you actually experienced how long it takes to "re-render"?
> steve's suggestion is an interesting one (use regular playback to
> render), but it seems to assume that the user will play the session
> from start to finish. if you're mixing, the chances are that you will
> be playing bits and pieces of of the session. so when do you get a
> chance to re-render? are you going to tie up disk bandwidth and CPU
> cycles while the user thinks they are just editing? OK, so you do it
> when the transport is idle - my experience is that you won't be done
> rendering for a long time, and you're also going to create a surprising
> experience for the user at some point - CPU utilization will vary
> notably over time, in ways that the user can't predict.
>
> you also seem to assume that the transport being stopped implies no
> audio streaming by the program. in ardour (and most other DAWs), this
> simply isn't true. ardour's CPU utilization doesn't vary very much
> whether the transport is idle or not, unless you have a lot of track
> automation, in which case it will go up a bit when rolling.
>
>> The basic idea is to turn mixing into a process of
>> simplification. When
>> I'm finishing up a mix, I don't want to deal with a mess of tracks
>> and buses,
>> with CPU power and disk bandwidth being given to things I haven't
>> changed in
>> days. I want to be able to focus on the particular element or submix
>> that
>> I'm fine-tuning - and have as much DSP power to throw at it as
>> possible.
>
> the focusing part seems great, but seems to be more of a GUI issue
> than a fundamental backend one. it would be quite easy in ardour, for
> example, to have a way to easily toggle track+strip views rather than
> display them all.
>
> the DSP power part seems like a good idea, but i think it's much, much
> more difficult than you are anticipating. i've been wrong many times
> before though.
>
> and btw, the reason Ardour looks a lot like PT is that it makes it
> accessible to many existing users. whether or not ardour's internal
> design looks like PT, i don't know. i would hope that ardour's
> development process has allowed us to end up with a much more powerful
> and flexible set of internal objects that can allow many different
> models for editing, mixing and so forth to be constructed. the backend
> isn't particularly closely connected in any sense, including the
> object level, to the GUI.
>
> --p
>
>
>
>
> ------------------------------
>
> _______________________________________________
> linux-audio-dev mailing list
> linux-audio-dev(a)music.columbia.edu
> http://music.columbia.edu/mailman/listinfo/linux-audio-dev
>
>
> End of linux-audio-dev Digest, Vol 5, Issue 57
> **********************************************
>
>
Hello List,
I've finally found the time to put my Linux Audio things online, at
<http://users.skynet.be/solaris/linuxaudio>.
You will find there
* the latest releases of the MCP, REV and VCO plugins (previously on
the alsamodular site),
* some things that are under construction,
* two ogg files recorded using the current alpha version of Aeolus,
the pipe organ synthesiser I'll present at the second LAD conference
in Karlsruhe.
--
Fons
Hi!
The four-way phase/amplitude cross-modulating multichannel realtime
polysynthesizer for Intel MMX located at http://hem.passagen.se/ja_linux
is now updated ... (phew!)
Changes:
New 2inOne oscillator (see below)
Finer frequency resolution
Smooth, squared envelope decay
Enhanced touch response
Gtk2
The new version will load old patches but they will be tuned down by two
octaves and have reduced cross-modulation values.
Use "mx4 -s" to split the oscillators across separate windows
cheers // Jens M Andreasen
PS: The screenshot and text on the site are not updated ...
Q: What is a "2inOne oscillator"?
A: A 2inOne oscillator is an oscillator that calculates two related
frequencies for the price of one.
It was originally intended as a sine table replacement: two slightly
"bumpy" versions of the same oscillator (shifted 60 degrees apart) are
averaged to approximate one smooth and round wave. By simple integer
multiplication of one of the components, you get the second oscillator
(tuned to a harmonic of the other component) "for free" :-)
#include <limits.h> /* SHRT_MIN */

int sxin(short w) // One half of a pseudo sin()
{
    register short a, b;
    a = w;
    b = a;
    b += SHRT_MIN;
    a = (a * b) >> 16;  // mmx: pmulhw_r2r(mm1, mm0);
    b >>= 15;           /* b equals -1 when 'w' is positive, else 0 */
    b ^= a;
    /* Alternating positive/negative halfwaves are in 'b'.
     * Always-negative halfwaves are left in 'a'.
     *
     * If you'd rather keep the squarewave that was in 'b',
     * then do ..
     a ^= b;
     * .. and return a instead.
     */
    return b;
}
Greetings:
I've received notes and announcements regarding changes of URLs for
various LADSPA collections. I have updated the LADSPA section of the
Linux soundapps site to reflect those changes. Please let me know of any
other errors or changes, and my thanks to all of you who sent in the new
URLs.
Best regards,
dp
>From: Frank Barknecht <fbar(a)footils.org>
>
>I still don't understand what "Script UI" means. For example: In what
>way has jMax a "Script UI" and Pd not?
My mistake.
For example, Csound has a script UI because that is all there is.
If Pd can be fully programmed via text files, then it has a script UI.
Script support is not exactly the same thing as a script UI if the
user has to use a GUI at some point. Maybe "text UI" would be better
than "script UI".
Regards,
Juhana