Hello Lists,
As some of you may know, I'm the guy who wrote Specimen. And as a tiny
fraction of that some may know (or care), Specimen hasn't seen an update
in an exceedingly long time.
What, exactly, is going on?
The answer, dear friends, is simple: music. My introduction to the art
of electronic music composition began precisely when Specimen
development halted. For a few months, I fumbled around with countless
little songs in an attempt to figure out how one goes about the process
of coaxing the electrons in such a fashion as to result in sonic
euphoria. And once I reached that point where the magnitude of my
suckiness became bearable, I began writing songs.
This process began in February with the assistance of an old friend, who
lent his discerning ear and exceptional bass talent to the effort. And
yesterday, we finished step one --- general composition. We now have 11
rough drafts that will be polished into a full length album and released
next fall --- 100% Linux and OSS produced.
It occurred to us that we might share some of this with all you folks
out there in Internet land, both to stoke the flames of anticipation and
to remind my adoring public that I have not yet shed this mortal coil.
So, I am pleased to present a collection of snippets from the
aforementioned tracks, pieced together in an easy-to-swallow
medley:
http://www.gazuga.net/preliminary_beats.ogg
http://www.gazuga.net/preliminary_beats.mp3
And for those interested in staying abreast of the latest and greatest,
be sure to tune into The State of the Beat:
http://www.gazuga.net/blog
That's all for now, but don't worry --- I'll be back before you know it.
Kinda like herpes, only better.
Peace out,
-Pete
Hi,
I have a little problem:
I'm using OpenGL in my program, in combination with OSS via PortAudio.
Updates of the OpenGL display are triggered by the audio callback,
protected using the locking methods available in the GUI toolkit that
I use (FLTK).
And I'm experiencing very strange effects: without audio, the OpenGL
works fine; without the OpenGL, the audio works fine. However, when
the two are combined, the audio thread (the callback) stops very soon,
and I have no idea why.
I used efence to discover whether I'm screwing up memory somewhere,
but I couldn't find anything. I ran valgrind, but it couldn't find
anything helpful either.
I ran it from within gdb, but the program behaves correctly if run
from gdb (heisenbug?). It even works better whenever I insert some
cout's for debugging purposes.
In gdb it also reports a signal like this when initializing the
OpenGL, but I believe that is probably unrelated:
http://music.columbia.edu/pipermail/linux-audio-user/2004-October/016821.ht…
BTW, I'm running an unmodified Ubuntu Warty.
Any tips or pointers??
Thanks
Richard
PS: (writing this message led me to some more experiments ;) )
I'm fairly sure that something is screwed up in the thread
synchronisation, because I can "fix" it by inserting a usleep() call at
a certain position, but I have a fairly hard time imagining what
could be wrong. I'm considering switching to blocking I/O for audio,
because latency and such is not really an issue.
The RT rlimits patch (nice-and-rt-prio-rlimits.patch) has been proposed
as a solution to allow audio users to run their applications with
realtime priorities. While more complicated to configure, it's a much
cleaner patch than realtime-lsm and it's likely to get merged soon _if_
enough audio users test it and confirm that it works.
To encourage this, I have created a wiki page containing installation
instructions, links to prebuilt PAM packages, etc:
http://www.steamballoon.com/wiki/Rlimits
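(For reference, once the patch and a PAM with rlimits support are installed, the runtime configuration amounts to a few lines in /etc/security/limits.conf. The item names below follow the rlimits interface as it later appeared in mainline, so check the wiki in case the patch's spelling differs:)

```
# /etc/security/limits.conf -- let members of the audio group request
# realtime scheduling, raised nice levels, and locked memory
@audio  -  rtprio   90
@audio  -  nice    -10
@audio  -  memlock  unlimited
```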
If this works for you, I am collecting success reports. Please email
rlimits-success(a)steamballoon.com .
If you have any problems with it, LAU is probably the best place to ask
for help. Unfortunately I don't have large amounts of time to spend
helping people with this, so any help requests emailed to me directly
may be deleted without a reply. Sorry.
Thanks for testing!
Jody
Through local contacts, I ended up meeting with a person close to
Philadelphia who is starting a company to build a sample playback
engine that will be used to enhance and extend the sound of an
existing large scale instrument. They need a relatively simple sample
playback engine:
- MIDI in
- up to 400 samples being played at once
- all samples assumed to live in RAM
- up to 50-70 channels of output
- no functional GUI (simple one required at
install/configuration time)
The work needs doing quickly (they have existing customers who are
waiting), and well (the system will have no monitor; stuff has to Just
Work All The Time).
If you think you might be interested in doing this, get in touch with
me, estimating time to completion and cost. Available compensation is
reasonable, and may be adjustable to include stock in the company.
--p
ps. you know who you are :)
Looks nice, Kjetil!
OS X??
Also (OT): are you still developing/maintaining Mammut?
cheers~
Patrick
On Fri May 20 08:32:42 EDT 2005, Kjetil Svalastog Matheussen
<k.s.matheussen(a)notam02.no> wrote:
>
> The Realtime Extension for the sound editor SND consists of two
> parts:
>
> 1. The RT Engine - An engine for doing realtime signal processing.
> 2. The RT Compiler - A compiler for a scheme-like programming language
>    to generate realtime-safe code understood by the RT Engine.
>
> Homepage:
> http://www.notam02.no/arkiv/doc/snd-rt/
>
> Screenshot:
> http://www.notam02.no/arkiv/doc/snd-rt/screenshot.png
>
> ********************************************
>
> Snd-ls v0.9.3.0
> ---------------
> Released 19.5.2005
>
> About
> -----
> Snd-ls is a distribution of the sound editor Snd. Its target is
> people that don't know scheme very well, and don't want
> to spend too much time configuring Snd. It can also serve
> as a quick introduction to Snd and how it can be set up.
>
> Changes 0.9.2.0 -> 0.9.3.0
> ---------------------------
> -Updated SND to v7.13 from 18.5.2005. Many important changes.
> -Fixed a small error in the installation script.
>
> http://www.notam02.no/arkiv/src/snd/
>
> _______________________________________________
> Cmdist mailing list
> Cmdist(a)ccrma.stanford.edu
> http://ccrma-mail.stanford.edu/mailman/listinfo/cmdist
Patrick Pagano, M.F.A.
Digital Media Specialist
Digital Worlds Institute
University Of Florida
(352) 294-2082
The Realtime Extension for the sound editor SND consists of two parts:
1. The RT Engine - An engine for doing realtime signal processing.
2. The RT Compiler - A compiler for a scheme-like programming language
   to generate realtime-safe code understood by the RT Engine.
Homepage:
http://www.notam02.no/arkiv/doc/snd-rt/
Screenshot:
http://www.notam02.no/arkiv/doc/snd-rt/screenshot.png
********************************************
Snd-ls v0.9.3.0
---------------
Released 19.5.2005
About
-----
Snd-ls is a distribution of the sound editor Snd. It is aimed at
people who don't know Scheme very well and don't want
to spend too much time configuring Snd. It can also serve
as a quick introduction to Snd and how it can be set up.
Changes 0.9.2.0 -> 0.9.3.0
---------------------------
-Updated SND to v7.13 from 18.5.2005. Many important changes.
-Fixed a small error in the installation script.
http://www.notam02.no/arkiv/src/snd/
--- Dave Robillard <drobilla(a)connect.carleton.ca> wrote:
> Hi all,
>
> A while ago I started a thread about the proper way to refer to LADSPA
> plugins (in save files or whatever) and the consensus was library
> filename + label.
>
> People have been having problems with library name - different packages
> seem to make different names for the libraries (prefixing blop_, for
> example) so it doesn't always work. Basically I think using shared
> library file name is an awful way to reference plugins for numerous
> reasons.
As the guilty party (author of blop), I admit that when this option
(--program-prefix for library files) was added, it was done in blissful
ignorance of the use of the library basename as an identifier, as I had
assumed the Unique ID served that purpose, as claimed in ladspa.h.
The purpose was to avoid name clashes with generic names such as
'sawtooth'. I ended up copying swh and appending the UID (so it does
have a use after all :) to the filenames. I'd meant to remove the
--program-prefix option from configure, but forgot.
> So why wasn't the unique ID the thing to use? There is a unique plugin
> ID in LADSPA, if not for this then for what reason?
Going by what is said on ladspa.org, I think it was originally intended
to be the way to refer to plugins, and this changed as development
progressed.
IIRC, the UID is still required to look up metadata with liblrdf, but
this may have changed since I last looked.
> In a similar vein, I really think the current system for LADSPA
> distribution sucks - big tarballs from various devs containing heaps of
> completely unrelated plugins. A centralized site where plugins can be
> submitted on their own (or in related groups) would be a great thing,
> IMO, and would make it easy to verify that unique IDs are in fact unique
> to solve the above problem.
I don't think you'll get very far arguing the case for UIDs - the arguments
against were pretty clear in the previous discussion and summarised by Chris
Cannam in his response. I think we're stuck with basename+label until a better
scheme can be implemented, possibly for LADSPA 2.
Regarding unrelated plugins in one library, I personally don't have a problem
with this as liblrdf does a fine job of categorising plugins where it's really
needed (in the host).
> Right now if a developer wants to make just one random plugin, they
> don't really have a sane way of getting it out there.
I agree here. The best option short of creating a new library distribution is
to get your plugin 'adopted' into an existing library.
I'm certainly willing to merge any homeless plugins into blop.
> I'm willing to full-time maintain the site, but I don't really have the
> hosting/abilities to create it. What do the other plugin authors think
> about this?
I'm all for it. Maybe liaise with Richard Furse to update the ladspa.org
site itself? There's already a list of links there, so all that is really
needed is to add details for maintainers willing to adopt plugins, with
appropriate provisions (kind of plugin, language, build system and so on)?
-
Mike
>From: Alfons Adriaensen <fons.adriaensen(a)alcatel.be>
>
>IFF JACK could be modified to allow this then probably you wouldn't
>need M, its function would be taken care of by jackd.
That was the idea.
>The problem with such a scheme could be that the load is not spread
>evenly, so it would be necessary to give those clients that use
>longer buffers a lower priority. There may be other trouble hidden
>somewhere...
Yes. The priority should depend on the buffer size.
Longer buffers would run with non-soft-RT schedulers.
Juhana
--
http://music.columbia.edu/mailman/listinfo/linux-graphics-dev
for developers of open source graphics software
>From: Benno Senoner <sbenno(a)gardena.net>
>
>Since we cannot increase the speed at which the sound travels and even
>DACs add some latency (1msec or so)
>I see any effort to reduce latency below 2-3msec quite useless.
Lucasfilm's sound processor in 1983 had a fixed 1.5 ms latency.
But did they have DACs as lousy as we have today? I don't know.
Did they mean only the DSP effect-processing latency? Not sure.
I dislike that the JACK buffer size must be turned up for all
clients when one client does not perform well. It could well
be that I would like to use a buffer size of 32 for
A/D --> EQ --> M --> D/A
and a buffer size of 256 for
Zyn --> M --> D/A (the part M --> D/A is the same as above)
where M is a magic processing node that mixes audio streams
having different buffer sizes. M would be quite simple, actually.
Juhana
Hi LADs
As you might remember from my recent questions about MIDI tuning and
microtonality, I'm currently designing a music editor unlike any other
(that I've seen). It will be a graphical music editor, similar to
existing "piano roll" editors on the surface, but with several important
differences. It will be a tool for composing music in Just Intonation.
It will also be free software (that goes without saying :-)
Anyway, I need to make a few key decisions about it and I'd like to have
some feedback and advice from you experienced people!
The most important issue is how to integrate my editor with the greatest
possible number of synths and other existing music software/hardware.
Unfortunately, my software will need to set the pitch of every single
note independently of the others, so "common" MIDI will not suffice.
Pitch bend will not work either; or rather, it would limit the output to
one note per MIDI channel. Not adequate at all for most uses.
In the other thread you kindly provided me with some advice and links,
including mentioning the MIDI Tuning Standard and OSC.
I'm designing my software with extensibility in mind, so adding new
protocols will not be a problem. Nonetheless, the more protocols and
APIs I know of in advance, the more extensible I can make it!
I have heard of several standards in the Linux audio world, things like
JACK and ALSA. But I'm quite new to all this, so I don't have a good
idea of which standards a modern music editor is supposed to support.
Could you please mention them to me? I will gladly study the APIs on my
own, but I don't want to waste time studying stuff that has no practical
value for building a graphical music editor. Also, please keep in mind
my special needs about note tuning/microtonality.
I hope this is not considered a repeat of my previous email... I've read
quite a bit of documentation in the meantime, but I'm still confused. Is
an editor supposed to do something with JACK? I'm handling the "editing"
part on my own, but how should I embed playback/recording functionality
into my editor? Is there a way to interface it with VST plugins (where
binary compatible) and/or any free alternatives? Should I do all this
on my own, or are there existing architectures that I could simply plug into?
Regards,
Toby