Hello all,
(Hoping there's an SC3 expert on the list)
I haven't been using SC3 for some time, but now I have
to revive some of my work from years ago. As before I'm
controlling it from Emacs.
The current release refuses to run my old code, apparently
because it loads/stores synthdefs from/to
~/share/SuperCollider/synthdefs instead of ./synthdefs.
Re-running all the definitions stores them in the new
place and then things work. But the last thing I want
is to have all synthdefs in one giant heap. They are
always very specific to a particular composition, and
I want to keep them together with the other files for
each work.
Is there a configuration option to make SC3 revert to
the old behaviour?
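(If not, I suppose I could set the directory myself from the startup
file -- something along these lines, assuming SynthDef.synthDefDir is
still the class variable that controls where the def files are written
and read; I haven't verified this against the current release:)

// Untested sketch -- point sclang back at the per-piece directory.
SynthDef.synthDefDir = "./synthdefs/";

// Individual defs can also be written to an explicit directory:
d = SynthDef(\ping, { |out = 0| Out.ar(out, SinOsc.ar(440, 0, 0.1)) });
d.writeDefFile("./synthdefs");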
More generally, I deeply dislike apps creating
non-specific directories such as ~/share. If SC3
wants a per-login directory it should be ~/.SC3
or something like that.
Ciao,
--
FA
Laboratorio di Acustica ed Elettroacustica
Parma, Italia
How the moon looks tonight!
Is it not a strange sight?
I've been playing with and watching various tutorials for Processing,
the Java framework for generative video and video effects.
http://processing.org/
What blew me away recently, though, was this video from Wesen on
building a "game of life" MIDI sequencer with it (watch the whole thing,
it's worth it):
http://vimeo.com/1824904?pg=embed&sec=1824904
(Of course, Paul reads the same blog that I do, so he'll know about this
already.)
Notice that Processing has its own editor, with single-click controls
to compile and run any program you write in it. It's not much
different from an IDE, I suppose, though I'd hesitate to say an IDE is
better simply because it's more powerful; I'd have to disagree with
that. What makes Processing so powerful and so popular is that it's so
specific to its niche. Combine that focus with a very thorough (and
expandable) framework and it becomes very powerful.
Why couldn't we make something like that for audio? It would most likely
be C++ rather than Java, but the idea of building up DSP networks using
a large framework of code, plus some pre-defined functions and settings,
and being able to launch our new code with a one-touch button into a
JACK client (or whatever), is extremely appealing to me. Throw in some
GUI-building elements (Cairo-based, perhaps) that can handle
mouse-clicks, keyboard input, and the like, and suddenly people who are
good at math and DSP but not so good at coding might have a shot at
making some great programs.
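To make concrete what that one-touch button would have to hide: below
is roughly the boilerplate a bare JACK client needs before a single
sample of DSP is heard. This is a minimal sketch against the standard
JACK C API; the client name and the sine-wave "DSP" are placeholders,
not part of any existing project.

// toy_client.cpp -- minimal JACK client sketch (illustration only).
// Build, assuming the JACK headers are installed:
//   g++ toy_client.cpp -o toy_client -ljack
#include <jack/jack.h>
#include <math.h>
#include <unistd.h>

static jack_port_t *out_port;
static float phase = 0.0f;
static float srate = 48000.0f;

static int process(jack_nframes_t nframes, void *arg)
{
    // The only part a user of such an environment should have to write:
    float *out = (float *) jack_port_get_buffer(out_port, nframes);
    for (jack_nframes_t i = 0; i < nframes; ++i) {
        out[i] = 0.2f * sinf(phase);
        phase += 2.0f * 3.14159265f * 440.0f / srate;
        if (phase > 6.2831853f) phase -= 6.2831853f;
    }
    return 0;
}

int main()
{
    jack_client_t *client = jack_client_open("toy-synth", JackNullOption, 0);
    if (!client) return 1;
    srate = (float) jack_get_sample_rate(client);
    out_port = jack_port_register(client, "out", JACK_DEFAULT_AUDIO_TYPE,
                                  JackPortIsOutput, 0);
    jack_set_process_callback(client, process, 0);
    jack_activate(client);
    sleep(60);                  // run for a minute, then tear down
    jack_client_close(client);
    return 0;
}

Everything outside the loop in process() is exactly the kind of thing
such an environment could generate and hide.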
Consider this a feeler post for a potential project. I am unfortunately
not a great coder, but at this point, I can't help but think that
something badly-coded and working is still better than well-written code
that never actually gets written.
-- Darren Landrum
Heya!
At the audio microconf at the Linux Plumbers Conference, one thing
became very clear: it is very difficult for programmers to figure out
which audio API to use for which purpose and which API not to use when
doing audio programming on Linux. Someone needed to sit down and write
up a small guide. And that's what I just finished doing.
I'd thus like to draw your attention to this new (long) blog story of
mine containing this guide:
http://0pointer.de/blog/projects/guide-to-sound-apis
I'd be very thankful for comments!
Lennart
--
Lennart Poettering Red Hat, Inc.
lennart [at] poettering [dot] net ICQ# 11060553
http://0pointer.net/lennart/ GnuPG 0x1A015CC4
That was not my experience. I put together a pulseaudio IO module
for Csound using the simple API (pulse/simple.h) in about half an hour.
It seemed much simpler than any alternative. And it seemed to do
everything I needed from it.
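For anyone curious, the core of such a module is not much more than
the following (a stripped-down sketch written from memory, not the
actual Csound code; the names, rate and buffer size are arbitrary):

/* pa_sketch.c -- play roughly 9 seconds of a 440 Hz tone via the
   simple API.  Build, assuming pkg-config knows about libpulse-simple:
   gcc pa_sketch.c -o pa_sketch $(pkg-config --cflags --libs libpulse-simple) -lm */
#include <pulse/simple.h>
#include <pulse/error.h>
#include <math.h>

int main(void)
{
    static const pa_sample_spec ss = {
        .format = PA_SAMPLE_FLOAT32NE,   /* native-endian 32-bit float */
        .rate = 44100,
        .channels = 1
    };
    int error, k, i;
    float buf[1024];

    pa_simple *s = pa_simple_new(NULL, "pa-sketch", PA_STREAM_PLAYBACK,
                                 NULL, "output", &ss, NULL, NULL, &error);
    if (!s) return 1;

    for (k = 0; k < 400; k++) {
        for (i = 0; i < 1024; i++)
            buf[i] = 0.2f * sinf(2.0f * 3.14159265f * 440.0f
                                 * (k * 1024 + i) / 44100.0f);
        if (pa_simple_write(s, buf, sizeof(buf), &error) < 0)
            break;                       /* write failed; give up */
    }
    pa_simple_drain(s, &error);
    pa_simple_free(s);
    return 0;
}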
Victor
At 13:59 29/09/2008, you wrote:
>On Sun, 28.09.08 09:38, Paul Davis (paul(a)linuxaudiosystems.com) wrote:
>
> > > Also, I guess it depends on how you upgrade, because my workstation is
> > > 8.04, which has been upgraded every year or so for two and a half years
> > > now, and I don't have pulseaudio. One of the packages I want to add sound
> > > support to is mostly for science, and many people are still using
> > > Ubuntu Dapper, Fedora 3, etc...
> > >
> > > So it does not look like pulseaudio is that great if you want to
> > > support various Linux distributions and have very little need for audio.
> >
> > As Lennart tried to make reasonably clear, the primary goal of
> > PulseAudio is NOT to act as a new API, but to act as a new
> > *infrastructure* that supports existing APIs transparently.
> > I am sure that he would be happy if it eventually takes over the world
> > and everybody writes apps using its API, but that doesn't appear to be
> > the goal right now.
>
>The reason why I don't ask application developers at this time to
>adopt the native PA API is that it is a relatively complex API since
>all calls are asynchronous. It's comprehensive and not redundant, but
>simply too complex for everyone but the most experienced.
>
>Lennart
>
>--
>Lennart Poettering Red Hat, Inc.
>lennart [at] poettering [dot] net ICQ# 11060553
>http://0pointer.net/lennart/ GnuPG 0x1A015CC4
Victor Lazzarini
Music Technology Laboratory
Music Department
National University of Ireland, Maynooth
The following is a cross-post of an exchange that took
place on the rosegarden-devel mailing list. I'm posting
it here because I think it hints at something fairly
serious about the current state of open audio specifications
and issues with their implementation.
It is not my intent to start a flame war. I come at this
with over twelve years of involvement (albeit chiefly at a
hobbyist level) with audio development and production on
Windows systems (predominantly VSTis). I've used Linux
since the late nineties, but these issues have become more
important to me since fully ditching Windows about 18
months ago.
The open source philosophy has always been that, if something is not
to your tastes, you are completely free to build your own or to alter
something that already exists. This is true, and it is one of the
things I most appreciate about the system. The key issue, though, is
the ability to actually use that freedom; sometimes it is not as
available as it appears. There are a great many people who would offer
much more to open audio, I feel, but issues like the following need to
be addressed.
all the best,
Chris Williams
Chris Williams <yikyak> wrote:
> Chris Cannam wrote:
>> Chris Williams <yikyak> wrote:
>>> Hello,
>>>
>>> I've been developing a toy DSSI plugin, partly to become
>>> more familiar with the spec and partly as a platform for
>>> writing base code for use with less toy instruments. I've
>>> been using Rosegarden and jack-dssi-host as test hosts.
>>>
>>> All was going well until I increased the number of DSSI /
>>> LADSPA output audio ports in the plugin. For some reason,
>>> I expected that rosegarden would create extra synth
>>> channels in the audio mixer for these outputs, but this
>>> didn't happen. Instead, one 'synth' audio channel was
>>> maintained in the mixer for the synth, out of which all
>>> outputs could seemingly be heard and controlled globally.
>>>
>>> Does Rosegarden mix down all the audio outputs of a given
>>> synth before feeding the signal to its own internal signal
>>> path, or am I misunderstanding something / being an idiot?
>>> The number of outputs shows up correctly in the synth's
>>> 'controls' dialog along with its ID, but this seems to be
>>> the only place at which they're discernible outside of the
>>> synth. (I'm using v.1.7.0 from Arch's repository).
>>>
>>
>> No, you aren't misunderstanding anything. Rosegarden is
>> very simplistic in this respect -- it mixes the number of
>> audio outputs up or down to match the number of channels
>> on the track, which is always either 1 or 2 depending on
>> whether the stereo button in the instrument parameters is
>> pushed in or not.
>>
>> If the aim is to accept multi-MIDI-channel input and send
>> multi-channel output in order to support multiple effective
>> instances on separate tracks, the "DSSI way" to do that is
>> to have several actual instances (perhaps sharing data
>> behind the scenes) and then call run_multiple_synths once
>> for all of them instead of run_synth on each one.
>> Rosegarden will do this if you have the same plugin set for
>> more than one track (and it supports run_multiple_synths).
>>
>> Unfortunately, that mechanism is rather different from any
>> other plugin format.
>>
>
> Thanks Chris, that's just the information I was looking for.
>
> I was thinking more of the situation where you have
> *single*-midi-channel input but, as with some synths and
> samplers, want to run the output to different banks of
> effects (e.g. LADSPA plugins) depending on the specific
> midi note or the range in which that note lies (output
> groups). This seems "theoretically" possible under DSSI
> using only run_synth() (given an idiosyncratic parsing of
> the DSSI spec) but not if the host routes all output ports
> through the same audio channel. At the same time, I can see
> the problem from the host developers' perspective: the DSSI
> spec uses LADSPA's port system and there's no good reason
> for an effect's output ever to be routed to multiple 'host
> audio channels'.
>
> The two other ways of doing it would seem to be:
>
> 1) Incorporate a 'channel' system and ladspa hosting system
> *internal* to the instrument which would then only ever
> need a stereo output to the host (the massive downside of
> this being that it unnecessarily replicates complex
> functionality already provided by the host); or
>
> 2) Use run_multiple_synths() in a hackish manner for which
> it probably wasn't fully intended, as you'd be writing to
> separate LADSPA handles but essentially using only one midi
> event queue (this would also have the nasty side effect of
> requiring multiple redundant instances simply to use their
> output ports). You could probably do something equally
> nasty by playing with the descriptor-returning and
> instantiate functions.
>
> Anyway, thanks again. It's not a Rosegarden-specific issue,
> I know; more strangeness in the DSSI spec coupled with my
> own ignorance.
I've been thinking about this some more, and can't let it
lie. It seems to me that there's definitely a bug or broken
implementation involved somewhere.
Under jack-dssi-host, if I request n outputs, I get n
outputs, any one of which is routable to wherever I wish
to send it using jack_connect or qjackctl. Under
rosegarden, if I request n outputs, I always get either one
or two, depending on the stereo settings of the host-
ascribed audio channel.
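To be clear about what "requesting n outputs" involves: it is nothing
more exotic than declaring additional audio output ports in the LADSPA
descriptor that the DSSI descriptor wraps. Roughly like this (a
fragment for illustration only -- the port names and grouping are
invented, not lifted from my plugin):

/* Illustration only: four audio outputs, intended as two stereo groups. */
#include <ladspa.h>
#include <dssi.h>

enum { OUT_G1_L, OUT_G1_R, OUT_G2_L, OUT_G2_R, PORT_COUNT };

static const LADSPA_PortDescriptor port_descriptors[PORT_COUNT] = {
    LADSPA_PORT_AUDIO | LADSPA_PORT_OUTPUT,
    LADSPA_PORT_AUDIO | LADSPA_PORT_OUTPUT,
    LADSPA_PORT_AUDIO | LADSPA_PORT_OUTPUT,
    LADSPA_PORT_AUDIO | LADSPA_PORT_OUTPUT
};

static const char * const port_names[PORT_COUNT] = {
    "Output group 1 (L)", "Output group 1 (R)",
    "Output group 2 (L)", "Output group 2 (R)"
};

/* These arrays go into the wrapped LADSPA_Descriptor as PortCount,
   PortDescriptors and PortNames; connect_port() then hands the plugin
   one buffer per declared port. */

jack-dssi-host exposes each of those ports as its own JACK port;
Rosegarden folds them all down to the track's one or two channels,
which is exactly the ambiguity I'm describing.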
This is a problem, because any summing rosegarden does
*could* have been done internally in the plugin. What's the
point of requesting, configuring and filling >2 output
ports if the client is never *effectively* going to get
them? If the client wanted them bumped down to one (or two)
buffers, it could -- would -- have done this itself and
needn't have requested and configured >2 outputs in the
first place.
Now, one way of approaching this is to say, "Oh, well; it's
an implementation bug in rosegarden and there's nothing
that can be done". That's OK as far as it goes, but it
doesn't actually solve the problem (or the ambiguity), and it's
questionable whether it's even true.
Another approach is to say that rosegarden implements the
DSSI spec correctly and the bug is actually in the spec
itself. This is *far more serious* -- and it's far more
serious for two reasons.
Firstly, rosegarden is currently the only fully-featured
standalone DSSI host. How are the other hosts currently
implementing it (MusE, etc.) going to behave? Are they going
to implement rosegarden's behaviour, or are they going to
implement jack-dssi-host's behaviour? In either case, you
have a situation where plugin developers simply don't know
what the host is going to do, which kind of defeats the
point of a specification.
At this point a cynic might say that the uptake of DSSI is
such that it doesn't matter too much and that the likes of
LV2 will solve this problem anyway, but this brings us to
the second serious problem (which the more well-versed
among you will already have realised):
If this ambiguity isn't specific to rosegarden but is
inherent in the spec, then *LV2 doesn't solve it either*.
Part of the problem with DSSI, if this is where the
problem lies, is that it encourages host developers to
treat synth / sampler plugins as extensions of effects
plugins for the sake of simplicity. This works to an
extent, but there's a subset of functionality that they do
not have in common (the ability to leverage other effects
plugins and host capabilities, to name two).
There is no output-negotiation scheme in either LV2 *or*
DSSI beyond the output port request and instantiation
stage, and neither spec makes clear what a plugin can
expect once it has acquired those outputs, or, from the other
side, what a host is fully expected to provide in response
to a given request. The assumption goes unstated, which is
why there is this problem to begin with.
Perhaps an extra negotiation is required, so that both the
host *and* the plugin know unambiguously the contexts in
which they are operating. It's not reasonable to expect a
host to grant any and all resources that a synth plugin
might request (2^32 outputs, for example), but at the same
time the *plugin* needs to *know* the context in which it's
operating, and not have to guess it based on external
knowledge of specific host implementations.
Anyway, I appreciate that the rosegarden developers are not
fully responsible (if at all) for this situation, but it is
a major issue, I feel, when it comes to standard adoption
and plugin development with open systems. I may cross-post
this to the LAD list as it seems just as relevant to them
as to you.
Thanks for your time,
Chris Williams
2008/9/29 Darren Landrum <darren.landrum(a)sbcglobal.net>:
> Sorry for starting this entire argument. I'm just tired of getting
> nowhere with all of the same tools that everyone else seems to have no
> problem with. I have a very bad habit of putting myself under a great
> deal of pressure to exceed everyone's expectations of me.
>
> Look, I know that everything I'm asking for exists on the Linux
> platform. The problem is, it doesn't all exist in one place, or under a
> single language. I'm convinced at this point that starting over from
> scratch with a solid design is preferable to trying to use several
> disparate tools and somehow glue them all together.
>
> I've already played around with code here and there to try out some
> different approaches to this problem, but nothing that I've bothered
> keeping around. Starting tonight, I'm going to draft a very detailed
> design, create a code hosting account somewhere (probably Google Code),
> and get started. I will keep the list apprised of any progress with
> regular emails.
>
> It's been pointed out to me that many people on the list seem to think
> that I'm trying to get someone else to code this for me. That is not and
> never was my intention, and I apologize for any miscommunication on my
> part. I am a very slow and not very good coder, though, and it
> might take a little while to see any progress.
>
> First things first, though. A solid design.
>
> -- Darren
>
I don't know if it is relevant to this discussion (at least within an
"acceptable" amount of time), but I just wanted you to know about my
attempt: NASPRO (http://naspro.atheme.org). I hope people here don't
take this message as spam, because it simply is not.
The ideas here are:
* to make different existing and not-yet-existing sound processing
technologies interoperate, both general-purpose sound processing stuff
(for example plugins a la LADSPA, LV2, etc.) and special-purpose stuff
(for example see
http://naspro.atheme.org/content/ndf-01-file-format-overview), in both
compiled and interpreted forms.
* be technology neutral (support for each technology implemented in
external modules).
* define distinct "layers", each dealing with a specific aspect of the
whole problem (one for sound processing, one for GUIs, one for session
handling, etc.), so that a "DSP coder" can only work on the DSP part
and have all the rest automagically implemented and working (for
example, you write a LADSPA plugin or write an NDF file and you get an
automatically generated GUI without writing one more line of code);
* have "back bridges" when possible, so that applications with support
for one of NASPRO-supported technologies gets support for all other
technologies without writing a single line of code.
* build dead-easy-to-use tools on top of that to make it easy for
non-demanding applications to support DSP stuff.
* build tools on top of that to do data routing among the "sound
processing components" (in other words, chain-like and/or graph-like
processing) - plus, since we have those back bridges, you could also
use, for example, CLAM networks (as soon as CLAM is supported) as an
alternative to these tools and get the same degree of supported
technology (the same goes for GStreamer, Pd, etc.).
* be cross-platform (apart from Mac/Windows, alternative
desktop-oriented OSes like Haiku or Syllable are getting stronger
these days and could become viable to do sound processing in some near
or distant future).
The result will hopefully also be to make it easier to develop new
technologies, AND to do so without breaking interoperability.
Now, since I am the only one working on this, it will probably take
an insane amount of time, and getting each of these abstraction layers
right is already astonishingly difficult (anyone remember GMPI?) - at
the moment I'm fighting with core-level stuff and will be doing that
for at least another year or two.
If you can wait, I will probably give a talk about NASPRO by the end
of October and will put together some slides trying to describe its
inner workings (a lot of people complained that I wasn't clear enough
on the website)...
Maybe this helps :-\
Stefano
[[Sorry Darren, this was meant for the list. I hit the wrong button.]]
On Mon, Sep 29, 2008 at 1:38 PM, AlgoMantra <algomantra(a)gmail.com> wrote:
>> Look, I know that everything I'm asking for exists on the Linux
>> platform. The problem is, it doesn't all exist in one place, or under a
>> single language.
>>
>
> I have exactly the same crib. I could not have said it better, and I am
> convinced that this is a pressing issue. Let me list out the languages I
> set out to learn in sequence, and finally where I stand.
>
> Python - Pure Data - Csound - ChucK - Processing (after this I shifted to
> Ubuntu from XP
> and found the kind of freedom I wanted) - C/C++ (full stop)
>
> ------- -.-
> 1/f ))) --.
> ------- ...
> http://www.algomantra.com
>
> "I'm not your personal army. I'm MY personal army."
>
Hi!
I'm trying to merge two ICE1712-based soundcards, so they appear as one big
soundcard in JACK. I tried it as described on John Lapeyre's website:
http://www.johnlapeyre.com/linux_audio/linux_audio.html
That is, I recompiled libasound2 1.0.14-rc3 with the following patch:
http://www.johnlapeyre.com/linux_audio/pcm_multi.patch
Then I compiled JACK 0.109 accordingly, and finally installed the .asoundrc file
listed here:
http://www.sound-man.co.uk/linuxaudio/ice1712multi.html
However when I try to start jackd with:
/usr/bin/jackd -t 4999 -s -R -m -d alsa -C multi_capture -P multi_playback -p
128 -n 2 -r 44100
it fails with the following messages:
jackd 0.109.2
Copyright 2001-2005 Paul Davis and others.
jackd comes with ABSOLUTELY NO WARRANTY
This is free software, and you are welcome to redistribute it
under certain conditions; see the file COPYING for details
JACK compiled with System V SHM support.
loading driver ..
apparent rate = 44100
creating alsa driver ... multi_playback|multi_capture|128|2|44100|0|0|nomon|
swmeter|-|32bit
ALSA lib pcm_hw.c:1357:(_snd_pcm_hw_open) Invalid value for card
ALSA lib pcm_hw.c:1357:(_snd_pcm_hw_open) Invalid value for card
cannot load driver module alsa
no message buffer overruns
The target system is a Debian system and I wasn't really sure where
to install the .asoundrc file, so I installed it as /etc/asound.conf,
as /etc/alsa/asound.conf, and in the home directory as .asoundrc.
Does anybody have an idea what could be wrong, or could give me
pointers on how to debug it?
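The only further idea I have is to take JACK out of the picture and
poke at the multi PCM directly, something along these lines (device
names as defined in the .asoundrc above):

# Check whether ALSA sees the PCM definitions at all:
aplay -L | grep -i multi
cat /proc/asound/cards      # the card names/indices the .asoundrc must match

# Try the playback half on its own, outside JACK:
speaker-test -D multi_playback -c 2 -r 44100

# And the capture half:
arecord -D multi_capture -f S32_LE -r 44100 -c 2 -d 5 /tmp/multi_test.wav

From what I have read, the "Invalid value for card" message often just
means that the card names or indices used in the slave definitions
don't match what /proc/asound/cards reports, so that is the first
thing I will compare.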
CU
Christian
Hello there!
Is there a nice and simple awk/sed trick to just capitalise a word? I
haven't come up with one so far. I could do it by hand, since it's a
known set of words, but it would be nicer to have a real,
word-independent mechanism.
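Something along these lines might be what I'm after, if GNU sed's \u
escape behaves the way its documentation suggests (untested here, and
presumably not portable beyond GNU sed):

# Capitalise a single word (\u upper-cases the next character):
echo "studio" | sed 's/^./\u&/'              # -> Studio

# Capitalise every word on a line:
echo "linux textbased studio" | sed 's/\b./\u&/g'

# A plain awk alternative for a single word:
echo "studio" | awk '{ print toupper(substr($0,1,1)) substr($0,2) }'

But maybe there is a neater or more standard way?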
Kindest regards
Julien
--------
Music was my first love and it will be my last (John Miles)
======== FIND MY WEB-PROJECT AT: ========
http://ltsb.sourceforge.net
the Linux TextBased Studio guide
======= AND MY PERSONAL PAGES AT: =======
http://www.juliencoder.de
Hello everyone!
You might have been wondering what I'm up to. So here we go.
BSS is the "BBC7 Shell Script Suite", a package of a few shell
scripts to search the BBC7 website via a simple command-line
interface. You can also listen to the "listen again" service by
directly providing a day and time. There's also a small audio-file
converter.
BBC7 is the BBC's station for comedy and drama: from the criminal to
the horrifying to the hilarious, from quiz shows to readings of
everything from everyday to high literature.
Please don't investigate the hidden bbc7-listen.sh option too much.
They make their living from that, and they offer a very nice service
which I would hate to see go down. But of course it can occasionally
be helpful to dump files to your disk.
The link:
http://juliencoder.de/bss-0.5.tar.bz2
Requirements: bash and the usual utilities for all of them; the
listen and convert scripts additionally need mplayer and ecasound, and
the search script requires wget.
Hope you'll enjoy it!
Kindest regards
Julien
--------
Music was my first love and it will be my last (John Miles)
======== FIND MY WEB-PROJECT AT: ========
http://ltsb.sourceforge.net
the Linux TextBased Studio guide
======= AND MY PERSONAL PAGES AT: =======
http://www.juliencoder.de