Hi all (sorry for the cross-post, but this may not be just a problem
with Ardour),
Over the last week and the weekend, I recorded a song in full using
Jack, Hydrogen, Ardour and Jamin. I'm not sure if Ardour/LAD is the
best place to send this, since some of the things I noticed may cut
across different software, but I thought I'd list a few issues that came
up, as well as some delights. I'm not on the Jack or Hydrogen lists,
but if any of this turns out to be a Jack or Hydrogen problem, please
let me know and I'll post it there.
The main problem I had was the sync between Hydrogen and Ardour. I had
Hydrogen set as Jack transport slave, and Ardour as master. Both
programs were set to 130 bpm. If I recorded something to a track in
Ardour and then played it back, it sounded fine (in time), but on
screen the recorded material did not line up with the bar lines in
Ardour. The recorded material appeared a few millimetres before the bar
line.
Another interesting thing: if I changed the period size in Jack from
512 to 1024, the Hydrogen playback was out of time with the Ardour
playback. If I switched back to the original setting the material was
recorded with, it was fine.
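For what it's worth, the on-screen offset is consistent with uncompensated capture latency. A back-of-envelope sketch in Python (assuming a 44.1 kHz sample rate and two periods of latency, neither of which the post states):

```python
# Rough estimate of how far a recorded region lands ahead of the bar
# line if JACK capture latency is not compensated for on screen.
SAMPLE_RATE = 44100  # Hz (assumed; not stated in the post)

def latency_ms(period_frames, periods=2):
    """Uncompensated capture latency in milliseconds."""
    return periods * period_frames / SAMPLE_RATE * 1000.0

for period in (512, 1024):
    print(f"{period} frames/period -> ~{latency_ms(period):.1f} ms ahead")
```

At 512 frames that works out to roughly 23 ms, which at a typical zoom level really is "a few millimetres", and it doubles at 1024, which would also explain why playback shifts after changing the period size.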
I had a few stability problems, but I didn't test them very much; they
seemed to be related to having certain plugins enabled in Ardour. Jack
was kicking Ardour out when a particular plugin was in use. I'll
have to test that another time to get more detail.
Overall though, things went fairly smoothly.
The result of the weekend is available at http://danharper.org/songs.php
if anyone is interested. It was all done in Linux:
HYDROGEN -> ARDOUR ------------------------> JAMIN -> QARECORD
Electric Guitar (3 tracks)
Vocals (3 tracks)
Bass Guitar
Vocals Bus
Hydrogen Out to an Ardour Bus
Master Bus
Feel free to give feedback on the song, mix, and mastering. One thing
that I loved was Jack. Getting a nice-sounding mix and master was so
easy because I could change a track level in Hydrogen and immediately
hear the results through Jamin. The same applied if I needed to change a
plugin parameter or track level in Ardour: the results were immediate.
There is no other set of audio tools around that I know of that can do
this. A very powerful and useful feature of the design of Jack and its
clients.
Overall, I should mention that the majority of my time was spent
wrestling with LADSPA plugins. Some caused reliability issues in Ardour
(see above; more info to come). Some gave me OK sounds, but I have
noticed in the mixdown that the guitar overdrive doesn't have a nice
warm sound. I can't recall the exact plugins I used, but I did find it
hard to find plugins that would give me a nice warm sound on guitar
tracks. Maybe that is something to improve upon.
Dan
>The University of Miami is pleased to announce the general call for
>submissions to ICMC 2004, to be held 1-6 November in Miami, Florida USA.
Hello. Will these papers be freely available in PDF format?
E.g., just like the DAFX papers are.
There are many interesting ICMC papers published during the last 20
years, but the papers seem to be available only to a rich man.
Could you ICMC people make all the older papers freely available on
a webpage? I could help by scanning the papers, if only somebody
would lend me the proceedings and clear the copyrights.
I remember a company is selling the PDF versions. Does anyone here
have them who could lend/share them privately for my personal
free software development use? If more people could join the
effort, we could turn all the interesting algorithms into free
software.
Regards,
Juhana
http://plugin.org.uk/liblo/
liblo is a simple to use, lightweight OSC C library implementation
(http://www.cnmat.berkeley.edu/OpenSoundControl/)
changes since the pre-release include type coercion and API cleanups.
this is a candidate release to get feedback on the API. usage example can
be found in src/testlo.c
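For readers who haven't seen OSC on the wire: the message format is simple enough to sketch by hand. A minimal encoder in Python for float-only messages (an illustration of the OSC 1.0 layout, not of liblo's own API):

```python
import struct

def _pad(b: bytes) -> bytes:
    # OSC strings are NUL-terminated and padded to a 4-byte boundary.
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str, *args: float) -> bytes:
    """Encode an OSC message whose arguments are all float32."""
    typetags = "," + "f" * len(args)      # e.g. ",ff" for two floats
    data = _pad(address.encode()) + _pad(typetags.encode())
    for value in args:
        data += struct.pack(">f", value)  # big-endian float32
    return data

print(osc_message("/foo/bar", 0.5).hex())
```

liblo, of course, handles all of this (plus the type coercion mentioned above) for you.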
- Steve
Hi,
I have been quite frustrated with the difficulty of getting multiple OSS
applications using my sound hardware simultaneously. The LD_PRELOAD
hacks that I have tried leave much to be desired in terms of usability,
especially when dealing with arts and mozilla plugins.
oss2jack uses Jeremy Elson's useful fusd library
(http://www.circlemud.org/~jelson/software/fusd/) to create a userspace
character device, which is also a jackd client. It supports mono and
stereo streams, with virtually any sample rate thanks to libsamplerate
(http://www.mega-nerd.com/SRC/). Only the commonly-used OSS ioctls are
currently supported.
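As a toy illustration of what the libsamplerate step does, here is a naive linear interpolator in Python (a stand-in of my own, not libsamplerate's actual algorithm, which is far higher quality):

```python
def resample_linear(samples, src_rate, dst_rate):
    """Convert a block of samples between rates by linear interpolation."""
    ratio = dst_rate / src_rate
    n_out = int(len(samples) * ratio)
    out = []
    for i in range(n_out):
        pos = i / ratio                      # position in the source block
        j = int(pos)
        frac = pos - j
        nxt = samples[min(j + 1, len(samples) - 1)]
        out.append(samples[j] * (1.0 - frac) + nxt * frac)
    return out

# A 22.05 kHz stream doubled to 44.1 kHz yields twice the samples:
print(len(resample_linear([0.0, 1.0, 0.0, -1.0], 22050, 44100)))
```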
I have tested oss2jack with the following applications, and they behave
fairly well:
- mplayer
- artsd (and thus all the kde applications)
- xmms
- Macromedia Flash
TODO:
- support mmap for quake and other games
(requires support in fusd)
- detect jackd period for audio sync (currently assumes 64 samples;
should not be too noticeable unless you get above a 256-sample
period)
- lower CPU usage with artsd by blocking until the minimum fragment
size is available (rather than the jackd period)
- support for the OSS mixer.
Available at:
http://fort.xdas.com/~kor/oss2jack/
WARNING: I have not yet tested oss2jack on the 2.4 series kernel.
Currently I am using a self-created 2.6 patch for fusd available at
the site above. It has been stable for the past 3 weeks on my machine,
but no guarantees about stability or safety... :)
fusd currently requires that devfs be enabled in the kernel.
Any comments are appreciated.
Kor
Hello,
DRC 2.4.0 has been released and it is available at:
http://freshmeat.net/projects/drc/
Changes in this release:
The Takuya Ooura and GNU Scientific Library FFT routines have been included
in the program. These routines are about 10 times faster than the previous
ones, while providing about the same accuracy.
Best of listening,
--
Denis Sbragion
InfoTecna
Tel: +39 0362 805396, Fax: +39 0362 805404
URL: http://www.infotecna.it
Please pardon cross postings.
The ICMC 2004 submission deadline for music, video art, and
installations which was originally February 27, 2004 has been changed to
reflect an additional grace period of two business days.
The updated submissions deadline for music, video art, and installations
is now March 2, 2004 (Tuesday). Please note that this is the receipt
deadline and not postmark deadline.
The deadline for papers, posters, roundtables, and demonstrations has
not changed; these are to be submitted by
midnight EST, Friday, March 12, 2004.
Forms, submission guidelines, and further details are available at
http://www.icmc2004.org. Thank you.
Best Regards,
Tae Hong Park, ICMC 2004 Publicity Chair
Hello.
Recently Logic 6's Freeze feature was mentioned here, and Freeze
is mentioned in a new interview at Emagic's webpage:
Q: Any other favorite feature in Logic 6, that you use most?
Boris Blank [of Yello]: I think Freeze is ingenious. [ ... ]
I'm wondering whether Emagic borrowed the idea from this list, as the
freeze feature was discussed here in November 2001, and soon after
that the feature appeared in their software.
((Does anyone have Logic 6's PDF manuals? The Evoc manual? For research purposes.))
Regards,
Juhana
Hi everyone,
Sorry about sending random ASCII to the list, I
was editing an "is this, or is this not, RTP" reply and I
clicked the wrong button (oops). Basically, I should
note that
[1] Low-latency itself is not a problem with RTP --
if one runs RTP over a transport with low latency,
it can fully utilize the latency, see:
http://www.cs.berkeley.edu/~lazzaro/sa/pubs/pdf/nossdav01.pdf
[2] RTP came out of the multicast WAN videoconferencing
world (early 1990s) and then slowly migrated into other
domains (like content-streaming, etc). The Peak folks, like
the MIDI Show control folks, came to the problem from the
LAN direction. That said, any LAN that is running IP can
set up a link-local multicast group and run RTP on top of it,
and access the "broadcast" nature of the LAN environment.
[3] The problem Peak solves in hardware is akin to the
problem SPDIF and AES/EBU solves in hardware -- if the
sender and receiver have free-running clocks, sooner or
later underflow or overflow occurs, and so if your goal is
to "never click", you have to address the issue somehow.
Note this isn't an issue with non-continuous-media RTP
payload formats like RTP MIDI, as the command nature
of MIDI lets you slip time without artifacts. Neither is it an
issue for voice codecs used in conversational speech,
because you can resync at the start of each voice-spurt
(packets don't get sent for the side not talking -- this is part
of the efficiency advantage of VoIP over switched-circuit telephony).
For continuous-stream audio over RTP, the state of the art
to avoid this problem is a software sample-rate converter on
the receiver end, which speeds up or slows down the
sample rate of the sent stream by tiny amounts to null out
the tiny differences from "nominal" in the sender and receiver
sample-rate clocks. Quicktime does this, according to a thread
on the AVT mailing list a few years ago that discussed this issue.
Note this method isn't modifying the actual sender's sample rate;
on the contrary, it's modifying the receiver's actual sample rate
to match the intentions of the sender.
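That correction loop can be sketched as a simple proportional controller (a hypothetical sketch of mine; the thread doesn't describe QuickTime's actual control law):

```python
def adjusted_ratio(buffer_fill, target_fill, gain=1e-6):
    """Nudge the receiver's sample-rate-conversion ratio by
    parts-per-million amounts: a jitter buffer running above target
    means the sender's clock is fast relative to ours, so shrink the
    ratio slightly to drain it; below target, grow it. The changes
    are far too small to hear."""
    return 1.0 + gain * (target_fill - buffer_fill)

print(adjusted_ratio(4800, 4410))  # buffer too full -> ratio just under 1.0
```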
---
John Lazzaro
http://www.cs.berkeley.edu/~lazzaro
lazzaro [at] cs [dot] berkeley [dot] edu
---
On Feb 26, 2004, at 8:31 AM, linux-audio-dev-request(a)music.columbia.edu
wrote:
>> It appears to be ethernet, not IP-based.
>
> Ah. Silly me.
>
>> http://www.mkpe.com/articles/2001/Networks_2001/networks_2001.htm
>
> "so, where are the products?" (referring to RTP and RTCP). Silly
> author. Expecting people to productize[sic] publicly owned protocols.
>
>
> ------------------------------
>
> Message: 4
> Date: Thu, 26 Feb 2004 14:58:38 +0000
> From: Steve Harris <S.W.Harris(a)ecs.soton.ac.uk>
> Subject: Re: [linux-audio-dev] is this, or this not, RTP?
> To: "The Linux Audio Developers' Mailing List"
> <linux-audio-dev(a)music.columbia.edu>
> Message-ID: <20040226145838.GD14603(a)login.ecs.soton.ac.uk>
> Content-Type: text/plain; charset=us-ascii
>
> On Thu, Feb 26, 2004 at 09:33:25AM -0500, Paul Davis wrote:
>>> It appears to be ethernet, not IP-based.
>>
>> Ah. Silly me.
>>
>>> http://www.mkpe.com/articles/2001/Networks_2001/networks_2001.htm
>>
>> "so, where are the products?" (referring to RTP and RTCP). Silly
>> author. Expecting people to productize[sic] publicly owned
>> protocols.
>
> I guess he was talking about routers and the like, in 2001 there
> weren't
> any that I know of, now there are plenty, eg:
> http://www.cisco.com/en/US/products/hw/routers/ps221/
>
> - Steve
>
>
> ------------------------------
>
> Message: 5
> Date: Thu, 26 Feb 2004 10:00:07 -0500
> From: Paul Davis <paul(a)linuxaudiosystems.com>
> Subject: Re: [linux-audio-dev] [ANN] Website
> To: "The Linux Audio Developers' Mailing List"
> <linux-audio-dev(a)music.columbia.edu>
> Message-ID: <200402261500.i1QF07P6008166(a)dhin.linuxaudiosystems.com>
>
>>> * two ogg files recorded using the current alpha version of Aeolus,
>>> the pipe organ synthesiser I'll present at the second LAD
>>> conference
>>> in Karlsruhe.
>>
>> these files sound incredible. i can't wait to hear your presentation
>> on aeolus!
>
> absolutely! i'm calling off my talk so we can spend extra time
> listening to Aeolus perform. incredible work!
>
> --p
>
>
> ------------------------------
>
> Message: 6
> Date: Thu, 26 Feb 2004 16:03:11 +0100
> From: kloschi <linux-lists(a)web.de>
> Subject: [linux-audio-dev] Announcement: Camp Music 2004, techlab,
> call for participants/exhibitors
> To: A list for linux audio developers
> <linux-audio-dev(a)music.columbia.edu>
> Message-ID: <20040226160311.3b2060ea(a)magrathea.funk.subsignal.org>
> Content-Type: text/plain; charset=US-ASCII
>
> Hi list,
>
> we are doing the camp music, a festival for electronic music on
> 14.o5.-15.o5.2oo4 in the 'motorpark' near magdeburg [germany].
> this event includes a usual festival with 2 stages and additionally
> a forum for musicians and independent labels [labelforum] and also
> the so called 'techlab'.
> techlab will be a place where music-software and -hardware producers
> show their [new] stuff, give workshops and so on. So far companies like
> native instruments and emagic and some more commercial
> manufacturers/developers will show up.
> I would also like to invite free software developers with showable
> products, to present and give workshops, to introduce musicians and
> producers in the possibilities of the free software world.
>
> please contact me _soon_ at kloschi(a)seekers-event.com or
> kloschi(a)subsignal.org feel free to forward this announcement.
>
> kloschi
>
>
> ------------------------------
>
> Message: 7
> Date: Thu, 26 Feb 2004 10:04:47 -0500
> From: Paul Davis <paul(a)linuxaudiosystems.com>
> Subject: Re: [linux-audio-dev] Freeze?
> To: "The Linux Audio Developers' Mailing List"
> <linux-audio-dev(a)music.columbia.edu>
> Message-ID: <200402261504.i1QF4lwA008182(a)dhin.linuxaudiosystems.com>
>
>> It was, however, the automatic merits that I wished mainly to explore.
>> Freezing has its merits, but it requires that you dedicate some
>> brain cycles
>> to deciding when and where you wish to freeze/unfreeze something. I
>> could
>> sure use those cycles for keeping creativity flowing.
>
> remember: freezing has no merits at all unless you need to save CPU
> cycles.
>
> the problem is that in a typical DAW session, you can potentially
> freeze most tracks most of the time. so how can you tell what the user
> wants frozen and what they don't? More importantly, freezing consumes
> significant disk resources. Can you afford to do this without it being
> initiated by the user? A typical 24 track Ardour session might consume
> 4-18GB of audio. Freezing all or most of it will double that disk
> consumption (and its not exactly what you would call quick, either :)
>
> thus, either you have s/w smart enough to figure out what to freeze
> and then do it, which does not come without certain costs, or the user
> has to play a significant role in the process.
>
>> ... and I do think you can freeze a bus, but it requires that the app
>> has full
>> knowledge of the connection graph. Mmmm, I see a jack extension
>> forming ;))
>
> sorry, but i don't think so. if i have a bus that is channelling audio
> in from an external device (say, a h/w sampler), you cannot possibly
> freeze it.
>
> --p
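(As an aside, the 4-18GB figure above is easy to sanity-check, assuming 32-bit float samples at 44.1 kHz - both assumptions, since Ardour sessions vary:)

```python
# Disk footprint of a multitrack session before any freezing.
TRACKS = 24
RATE = 44100           # Hz (assumed)
BYTES_PER_SAMPLE = 4   # 32-bit float (assumed)

def session_gb(minutes):
    return TRACKS * RATE * BYTES_PER_SAMPLE * minutes * 60 / 1e9

print(f"{session_gb(30):.1f} GB for 30 minutes of 24-track audio")
```

Thirty minutes of material comes to roughly 7.6 GB, so with multiple takes the 4-18GB range is entirely plausible - and freezing everything would indeed double it.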
>
>
> ------------------------------
>
> Message: 8
> Date: Thu, 26 Feb 2004 15:17:31 +0000
> From: Steve Harris <S.W.Harris(a)ecs.soton.ac.uk>
> Subject: Re: [linux-audio-dev] Freeze?
> To: "The Linux Audio Developers' Mailing List"
> <linux-audio-dev(a)music.columbia.edu>
> Message-ID: <20040226151731.GG14603(a)login.ecs.soton.ac.uk>
> Content-Type: text/plain; charset=us-ascii
>
> On Thu, Feb 26, 2004 at 10:04:47AM -0500, Paul Davis wrote:
>> the problem is that in a typical DAW session, you can potentially
>> freeze most tracks most of the time. so how can you tell what the user
>> wants frozen and what they don't? More importantly, freezing consumes
>> significant disk resources. Can you afford to do this without it being
>> initiated by the user? A typical 24 track Ardour session might consume
>> 4-18GB of audio. Freezing all or most of it will double that disk
>> consumption (and its not exactly what you would call quick, either :)
>
> No, but you can do it (semi-)transparently when the user presses play. I
> don't know if that would be acceptable or not, but if you imagine adding a
> few effects, auditioning, rinse, repeat, it might work out ok.
>
> I'd always go for more CPU if possible - I'm not a huge fan of multiple
> code paths :)
>
> - Steve
>
>
> ------------------------------
>
> Message: 9
> Date: Thu, 26 Feb 2004 09:06:59 -0600
> From: Benjamin Flaming <lad(a)solobanjo.com>
> Subject: Re: [linux-audio-dev] Freeze?
> To: "The Linux Audio Developers' Mailing List"
> <linux-audio-dev(a)music.columbia.edu>
> Message-ID: <200402260906.59583.lad(a)solobanjo.com>
> Content-Type: text/plain; charset="iso-8859-1"
>
> On Thursday 26 February 2004 09:04 am, Paul Davis wrote:
>> the problem is that in a typical DAW session, you can potentially
>> freeze most tracks most of the time. so how can you tell what the user
>> wants frozen and what they don't?
>
> I want my plug-ins frozen the instant I close the parameter
> editor. ;)
>
>> More importantly, freezing consumes
>> significant disk resources. Can you afford to do this without it being
>> initiated by the user? A typical 24 track Ardour session might consume
>> 4-18GB of audio. Freezing all or most of it will double that disk
>> consumption (and its not exactly what you would call quick, either :)
>
> Agreed, it's a very definite trade-off - storage space for CPU
> cycles.
> It is my observation, however, that storage space is cheap, and
> readily
> available.
>
>> sorry, but i don't think so. if i have a bus that is channelling audio
>> in from an external device (say, a h/w sampler), you cannot possibly
>> freeze it.
>
> However, buses which simply contain a submix of several audio
> tracks can
> be safely frozen, saving both processing power and disk bandwidth.
>
> The purpose of my project is to create a working environment which
> encourages songs to be organized in such a way that offline rendering
> can
> *usually* be done transparently. Thus, the hierarchical tree structure.
>
> When I finish comping the vocals for a chorus, I want to be left
> with 1
> fader, and 1 editable audio track, for the chorus. If I need to make
> one of
> the voices softer, I can bring up the underlying tracks within a second
> (which is *at least* how long it usually takes me to find a single
> fader in a
> 48-channel mix). While I'm making adjustments, Tinara will read all
> the
> separate chorus tracks off the disk, mixing them in RT. When I move
> back one
> layer in the mix hierarchy (thereby indicating that I'm finished
> adjusting
> things), Tinara will begin re-rendering the submix in the background
> whenever
> the transport is idle. When the re-rendering is done, Tinara will go
> back to
> pulling a single interleaved stereo track off the disk, instead of 6-8
> mono
> tracks.
>
> The basic idea is to turn mixing into a process of
> simplification. When
> I'm finishing up a mix, I don't want to deal with a mess of tracks and
> buses,
> with CPU power and disk bandwidth being given to things I haven't
> changed in
> days. I want to be able to focus on the particular element or submix
> that
> I'm fine-tuning - and have as much DSP power to throw at it as
> possible.
>
> This will also make the use of automated control surfaces much
> nicer,
> IMHO. Since there will be fewer elements in each layer of the
> hierarchy,
> fewer faders would be needed. Additionally, it would be easier to
> keep track
> of what's going on. I've worked extensively with Digidesign's
> Control|24,
> and my feeling is that things start to get messy when there are more
> than
> about 12 faders (not to mention how easy it is to get lost when there
> are two
> or more banks of 24 faders!).
>
> Just for the record, please understand that any negativity I may
> express
> toward conventional DAW systems is *not* directed toward Ardour. It's
> just
> pent-up frustration with Pro Tools ;)
>
> |)
> |)enji
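The background re-rendering scheme Benjamin describes reduces to a small state machine (names here are invented for illustration; this is not Tinara's actual code):

```python
class Submix:
    """A bus that falls back to live mixing after an edit and re-freezes
    to a cached stereo render whenever the transport goes idle."""

    def __init__(self):
        self.dirty = True    # parameters changed since the last render
        self.cache = None    # path of the rendered stereo file, or None

    def edit(self):
        # Any fader/plugin change invalidates the cache immediately.
        self.dirty = True
        self.cache = None    # playback now mixes the component tracks live

    def on_transport_idle(self):
        # Cheap stand-in for the offline render that would happen here.
        if self.dirty:
            self.cache = "submix-stereo.wav"
            self.dirty = False

s = Submix()
s.on_transport_idle()   # idle time: render the submix
print(s.cache)
s.edit()                # tweak a voice: back to live mixing
print(s.cache)
```

As Paul notes in his reply, the hard part this sketch glosses over is whether the idle windows are actually long enough for the render to finish.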
>
>
>
> ------------------------------
>
> Message: 10
> Date: Thu, 26 Feb 2004 11:25:33 -0500
> From: Paul Davis <paul(a)linuxaudiosystems.com>
> Subject: Re: [linux-audio-dev] Freeze?
> To: "The Linux Audio Developers' Mailing List"
> <linux-audio-dev(a)music.columbia.edu>
> Message-ID: <200402261625.i1QGPXbe015706(a)dhin.linuxaudiosystems.com>
>
>> I want my plug-ins frozen the instant I close the parameter
>> editor. ;)
>
> oh, you don't want to do any graphical editing of plugin parameter
> automation? :))
>
>> Agreed, it's a very definite trade-off - storage space for CPU
>> cycles.
>> It is my observation, however, that storage space is cheap, and
>> readily
>> available.
>
> not my experience.
>
>>> sorry, but i don't think so. if i have a bus that is channelling
>>> audio
>>> in from an external device (say, a h/w sampler), you cannot possibly
>>> freeze it.
>>
>> However, buses which simply contain a submix of several audio
>> tracks can
>> be safely frozen, saving both processing power and disk bandwidth.
>
> sure, but thats a subset of all busses. its not a bus per se.
>
>> When I finish comping the vocals for a chorus, I want to be left
>> with 1
>> fader, and 1 editable audio track, for the chorus. If I need to make
>> one of
>> the voices softer, I can bring up the underlying tracks within a
>> second
>> (which is *at least* how long it usually takes me to find a single
>> fader in a
>> 48-channel mix). While I'm making adjustments, Tinara will read all
>> the
>> separate chorus tracks off the disk, mixing them in RT. When I move
>> back one
>> layer in the mix hierarchy (thereby indicating that I'm finished
>> adjusting
>> things), Tinara will begin re-rendering the submix in the background
>> whenever
>> the transport is idle.
>
> have you actually experienced how long it takes to "re-render"?
> steve's suggestion is an interesting one (use regular playback to
> render), but it seems to assume that the user will play the session
> from start to finish. if you're mixing, the chances are that you will
> be playing bits and pieces of the session. so when do you get a
> chance to re-render? are you going to tie up disk bandwidth and CPU
> cycles while the user thinks they are just editing? OK, so you do it
> when the transport is idle - my experience is that you won't be done
> rendering for a long time, and you're also going to create a surprising
> experience for the user at some point - CPU utilization will vary
> notably over time, in ways that the user can't predict.
>
> you also seem to assume that the transport being stopped implies no
> audio streaming by the program. in ardour (and most other DAWs), this
> simply isn't true. ardour's CPU utilization doesn't vary very much
> whether the transport is idle or not, unless you have a lot of track
> automation, in which case it will go up a bit when rolling.
>
>> The basic idea is to turn mixing into a process of
>> simplification. When
>> I'm finishing up a mix, I don't want to deal with a mess of tracks
>> and buses,
>> with CPU power and disk bandwidth being given to things I haven't
>> changed in
>> days. I want to be able to focus on the particular element or submix
>> that
>> I'm fine-tuning - and have as much DSP power to throw at it as
>> possible.
>
> the focusing part seems great, but seems to be more of a GUI issue
> than a fundamental backend one. it would be quite easy in ardour, for
> example, to have a way to easily toggle track+strip views rather than
> display them all.
>
> the DSP power part seems like a good idea, but i think its much, much
> more difficult than you are anticipating. i've been wrong many times
> before though.
>
> and btw, the reason Ardour looks a lot like PT is that it makes it
> accessible to many existing users. whether or not ardour's internal
> design looks like PT, i don't know. i would hope that ardour's
> development process has allowed us to end up with a much more powerful
> and flexible set of internal objects that can allow many different
> models for editing, mixing and so forth to be constructed. the backend
> isn't particularly closely connected in any sense, including the
> object level, to the GUI.
>
> --p
>
>
>
>
> ------------------------------
>
> _______________________________________________
> linux-audio-dev mailing list
> linux-audio-dev(a)music.columbia.edu
> http://music.columbia.edu/mailman/listinfo/linux-audio-dev
>
>
> End of linux-audio-dev Digest, Vol 5, Issue 57
> **********************************************
>
>
---
John Lazzaro
http://www.cs.berkeley.edu/~lazzaro
lazzaro [at] cs [dot] berkeley [dot] edu
---