Hi!
First Release.
gmorgan is a rhythm station: an organ with auto-accompaniment. It uses
MIDI and the ALSA sequencer to play the rhythm patterns. Styles,
patterns, sounds and the mixer settings can all be edited and saved.
Tested on Gentoo and Debian, on a PIII 933 and a PII 300.
REQUIREMENTS
--------------------------
Linux
ALSA
Fltk
Take a look at http://personal.telefonica.terra.es/web/soudfontcombi/
And please... if you enjoy this program and want to share patterns,
send them to me and I will include them in future versions. I have a
large TODO list and I need some help.
Josep
Could anyone check shmserver.tar.gz, compile it, and test
whether you get the same problem? Additional instructions are below.
I have thought about why the shared memory freezes. It could be
that Linux copies the shared memory pages for each client:
one shared memory segment, multiple copies. Would locking
the memory help? Maybe, maybe not.
I know there are high-performance applications using shared
memory well (at least their authors may think so). My own
alsashmrec seems to work well too (or so I thought). If there indeed
is a severe (or minor?) problem in the kernel, then in all these
applications it may cause very short freezes (until a read, write,
sleep, suspend, etc. call releases it): jitter in updating
lock-free FIFOs on shared memory.
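On the locking question, one thing I could try is SysV's own lock call
(a sketch only, and an experiment rather than a known fix; SHM_LOCK is
Linux-specific and needs the right privileges):

  /* Pin the segment so its pages cannot be swapped out.  Whether
     this has anything to do with the freeze is exactly the open
     question. */
  #include <sys/ipc.h>
  #include <sys/shm.h>

  int lock_segment(int shmid)
  {
      /* returns 0 on success, -1 on error (often EPERM) */
      return shmctl(shmid, SHM_LOCK, (struct shmid_ds *) 0);
  }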
If you don't get it, or otherwise cannot find out what is going on,
then where should I post this? Is there a list that the kernel
and shared memory experts read?
And hey, why did shmget not work? See shmalloc_named_get().
Best regards,
Juhana
Additional instructions and the previous mail follow.
1. Compile it without the sleep(1) [ as in the code below ] and without
the fprintf of "ggggg" [ you could change that to "g" so that the
screen does not scroll too fast ].
Then compile and test with the sleep, and then test with the fprintf
of "g".
2. First run shmserver in a terminal. Copy the mid value
(second value printed).
3. Then run shmclient in another terminal. Paste the mid value:
% shmclient <midvalue>
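For reference, here is roughly what the client boils down to (a sketch,
not the exact tarball code; the volatile qualifier is an addition of
mine, on the assumption that without it an optimizing compiler may
cache nums[1] in a register and never re-read the segment):

  #include <stdio.h>
  #include <stdlib.h>
  #include <sys/ipc.h>
  #include <sys/shm.h>

  int main(int argc, char **argv)
  {
      int shmid;
      volatile int *nums;
      int k = -1;

      if (argc < 2) return 1;
      shmid = atoi(argv[1]);           /* the mid value printed by the server */
      nums = (volatile int *) shmat(shmid, 0, 0);
      if (nums == (volatile int *) -1) { perror("shmat"); return 1; }

      for (;;) {
          if (k != nums[1]) {          /* the server increments nums[1] */
              k = nums[1];
              fprintf(stderr, "%i\n", k);
          }
      }
  }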
>From: Juhana Sadeharju <kouhia(a)nic.funet.fi>
>To: linux-audio-dev(a)music.columbia.edu
>Subject:
>Date: Fri, 4 Jul 2003 16:46:25 +0300
>Reply-To: linux-audio-dev(a)music.columbia.edu
>
>>From: Ralfs Kurmis <kurmisk(a)inbox.lv>
>>
>>Try out this example
>>Split it into separate files
>>( Needs x )
>
>Hello. Thanks for the example, but I see some problems there:
>if the second process does not find the segment given by the key,
>your example makes two distinct segments. That is what happens with
>me. Because I don't have IPC_CREAT in the second process, my
>program simply fails instead of creating a second segment.
>
>I got it working another way, but there are severe problems.
>In the client, I simply skipped the shmget() and attached
>the segment immediately with shmat(), using the mid value
>printed by the server.
>
>The example mailed here used shmget() with IPC_CREAT.
>When I used IPC_CREAT for both server and client, I got, as I
>expected, two separate shared memories. In fact, since I create the
>shared memory in shmserver, which is run first, the shmclient should
>not use IPC_CREAT at all.
>
>It works, but while the server seems to fill the shared memory
>with increasing integer numbers, the client behaves strangely.
>I have this code in shmclient now:
>
> k = -1;
> for (;;) {
> if (k != nums[1]) {
> k = nums[1];
> fprintf(stderr,"%i\n",k);
> }
> // sleep(1);
> // fprintf(stderr,"ggggg\n");
> }
>
>What should it do? Ideally it should print the increasing numbers:
>5435, 5436, 5437, etc. With sleep(1) it prints a new value once per
>second. However, without sleep(1), it prints only one number and then
>never prints anything again. It looks like Linux does not update
>the shared memory. Why?
>
>When the "ggggg" is printed (without sleep), the shmclient prints only
>one number and then repeats the "ggggg". Why is the shared memory not
>updated in this case? I remember I had a similar problem with the old
>XWave software in 1998, with a much earlier kernel version (now I have
>2.4.18 from RedHat 7.3).
>
>This looks like a serious problem. It may be that nobody has noticed
>it because one uses either sleep() or read()/write() in an
>audio system. That is, your software may work, but the problem
>may degrade performance (as it certainly froze the
>printing in my shmclient). Perhaps the problem keeps an audio
>engine from ever working as well as it could.
>
>If you get the shmclient to work while the sleep(1) is commented out,
>please let me know :-)
>
>http://www.funet.fi/~kouhia/shmserver.tar.gz
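For completeness, the key-based setup discussed above would look
roughly like this (a sketch; the key value is an arbitrary example,
the two processes only have to agree on it). The point is that only
the creator passes IPC_CREAT; the client passes 0 so it fails loudly
instead of silently making a second segment:

  #include <sys/ipc.h>
  #include <sys/shm.h>

  #define SHM_KEY  0x4c414431   /* arbitrary example key, same on both sides */
  #define SHM_SIZE 4096

  /* server: create, or fail if the segment already exists */
  int server_get(void) { return shmget(SHM_KEY, SHM_SIZE, IPC_CREAT | IPC_EXCL | 0600); }

  /* client: attach to the existing segment only */
  int client_get(void) { return shmget(SHM_KEY, SHM_SIZE, 0600); }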
Hello all,
I've been working on VLevel, a LADSPA plugin to keep me from having to
fiddle with the volume, and it's now in a useful state, so I'm looking
for some feedback. Basically, VLevel keeps track of the peak
amplitudes, and adjusts the volume smoothly to make the quiet parts
louder. Since it looks ahead a few seconds, the gain change is always
smooth.
<http://vlevel.sourceforge.net>
VLevel is written in C++. I have two questions. First, why do most
other plugins allocate and free copies of their strings and structures,
instead of just passing the literals (as I do)? The declarations in
ladspa.h don't allow the host to modify what the pointers reference.
Second, I keep a buffer of length n in my code, so the first n seconds
of data I return are useless, and after the audio is sent, I need n more
seconds of input before all the audio is returned. Is there any way of
informing the host about this?
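(One candidate I have run across, though I have not verified how many
hosts honor it, is a control-rate output port literally named
"latency", holding the plugin's delay in samples. A sketch of the
declarations, with illustrative names rather than VLevel's actual
code:)

  #include <ladspa.h>

  static const char * const port_names[] = {
      "Input", "Output", "latency",
  };
  static const LADSPA_PortDescriptor port_descriptors[] = {
      LADSPA_PORT_INPUT  | LADSPA_PORT_AUDIO,
      LADSPA_PORT_OUTPUT | LADSPA_PORT_AUDIO,
      LADSPA_PORT_OUTPUT | LADSPA_PORT_CONTROL,  /* latency, in samples */
  };

  /* in run(), assuming lookahead_frames is the look-ahead length:
     *plugin->latency_port = (LADSPA_Data) lookahead_frames; */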
In the future I plan to make some performance improvements, and perhaps
a nice cross-platform GUI for applying VLevel to files. I may also try
to get XMMS-LADSPA to save its state, which would be very useful to me.
I suppose VLevel could use RMS or a psychoacoustic model to estimate
volume, but that would make it very complex and make clipping more
difficult to avoid. Even so, it serves my purpose, playing classical
music on the road, quite well.
Have fun,
--
Tom Felker <tcfelker(a)mtco.com>
Hi everyone... I guess it's been more than a year since the last time we
discussed such issues here. I am sure that everyone here, myself
included, works very hard to maintain and improve their respective
apps. Because of that, the intention of this post is to inform myself,
as well as possibly other developers, about the status of many things
that affect the way we develop our apps under Linux.
As many of you might remember, there were many fundamental issues
regarding the APIs and toolkits we use. I will try to enumerate and
explain each one of them as best as I can. I ask everyone who is more
up to date on the status of each to answer, comment, or even add more
items.
1- GUI programming, and interface/audio synchronization. As far as I
can remember, a great problem for many developers is how to synchronize
the interface with the audio thread. Usually we have the interface
running at normal priority and the audio thread running at high
priority (SCHED_FIFO) to ensure that it won't get preempted while
mixing, especially when working at low latency. For many operations we
do (if not most) we can resort to shared memory to make changes, as
long as they are not destructive. But when we want to lock, it is
almost certain that we will run into a priority inversion scenario.
Although POSIX specifies functionality to avoid such scenarios
(priority ceiling/inheritance), there are no plans to support it in
Linux anytime soon (at least for 2.6, from what Andrew Morton told me).
Although some projects exist, it will likely not become mainstream for
a couple of years (well, the low latency patches are not mainstream
either, with good reason).
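For reference, the POSIX API in question looks like the sketch below
(expect the setprotocol call to fail, or to not exist at all, on the
Linux setups described above; where it works, a low-priority holder of
the mutex gets temporarily boosted so the audio thread is not stuck
behind it):

  #include <pthread.h>

  static pthread_mutex_t gui_audio_lock;

  static int init_pi_mutex(void)
  {
      int err;
      pthread_mutexattr_t attr;

      pthread_mutexattr_init(&attr);
      err = pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
      if (err == 0)
          err = pthread_mutex_init(&gui_audio_lock, &attr);
      pthread_mutexattr_destroy(&attr);
      return err;   /* 0 on success; ENOTSUP where unimplemented */
  }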
I came to find out that the preferred method is to transfer data
through a FIFO (lock-free in userspace), although that can be very
annoying for very complex interfaces.
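To make that concrete, the FIFO idea is roughly the following
(a minimal single-writer/single-reader sketch, not any particular
app's code; a production version also needs memory barriers on weakly
ordered CPUs, since volatile alone does not order stores):

  #define FIFO_SIZE 1024                   /* must be a power of two */

  typedef struct {
      char buf[FIFO_SIZE];
      volatile unsigned int write_pos;     /* advanced only by the GUI thread */
      volatile unsigned int read_pos;      /* advanced only by the audio thread */
  } fifo_t;

  /* GUI thread: returns -1 if full, caller retries later */
  static int fifo_write(fifo_t *f, const char *data, unsigned int len)
  {
      unsigned int i, used = f->write_pos - f->read_pos;
      if (len > FIFO_SIZE - used)
          return -1;
      for (i = 0; i < len; i++)
          f->buf[(f->write_pos + i) & (FIFO_SIZE - 1)] = data[i];
      f->write_pos += len;                 /* publish only after the data */
      return 0;
  }

  /* audio thread: returns -1 if not enough bytes queued yet */
  static int fifo_read(fifo_t *f, char *data, unsigned int len)
  {
      unsigned int i, used = f->write_pos - f->read_pos;
      if (len > used)
          return -1;
      for (i = 0; i < len; i++)
          data[i] = f->buf[(f->read_pos + i) & (FIFO_SIZE - 1)];
      f->read_pos += len;
      return 0;
  }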
What are your experiences on this subject? Is it acceptable to lock in
cases where a destructive operation is being performed? (Granted, if
you are doing a mixdown you are not supposed to be doing that.)
From my own perspective, I've seen even commercial HARDWARE lose the
audio, or even kill voices, when you do a destructive operation, but I
don't know what users are supposed to expect. One thing I also have to
say about this is JACKit's (and apps written for it) low tolerance for
xruns. I found many apps (or even JACKit itself) would crash or exit
when one happens. I understand xruns are bad, but I don't see how they
could be a problem if you are "editing" (NOT recording/performing) and
some destructive operation needs to lock the audio thread for a
relatively long time.
2- The role of low latency, and audio/MIDI timing. As much as we love
working with low latency (and I personally like controlling softsynths
from my Roland keyboard), in many cases, if not most, it is not really
necessary, and it can be counterproductive, since working in such a
mode eats a lot of CPU. Low latency is ideal when you perform on a LIVE
input and want to hear a processed output: examples are input from a
MIDI controller with output from a softsynth, or input through a line
(a guitar, for example) with processed output (effects). But imagine
that you don't really need to do that: you could simply increase the
audio buffer size to get latencies in the 20-25 millisecond range (see
the arithmetic at the end of this point), saving CPU and preventing
xruns, while the latency stays perfectly acceptable for working in a
sequencer, or for mixing pre-recorded audio tracks. Doing things this
way should also ease the pain for softsynth writers, as they wouldn't
be FORCED to support low latency for their app to work properly. And
despite the point of view of many people, many audio users and
programmers don't care about low latency and/or don't need it. But such
a scenario, at least a year ago, was (is?) not possible under Linux, as
softsynths (using ALSA and/or JACKit) have no way to synchronize audio
and MIDI unless they run in low latency mode, where it no longer
matters (the audio update interval is so small that it works as a
relatively high resolution timer). Last time I checked, ALSA could not
deliver useful timestamping information for this, and JACKit would not
deliver info on when the audio callback happened either. I know back
then there were ideas floating around about integrating MIDI
capabilities into JACKit to get around this problem and provide a more
standardized framework. I don't see how MIDI sync/clocks would help in
this case, since they are basically meant for wires or "low latency"
frameworks.
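To put numbers on the buffering claim above (my own arithmetic, nothing
mandated by ALSA or JACKit):

  latency = frames_per_period * periods / sample_rate
  e.g.  512 frames * 2 periods / 44100 Hz  =  ~23 ms
       1024 frames * 2 periods / 44100 Hz  =  ~46 ms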
3- Host instruments. I remember some discussion of XAP a while ago, but
having visited the page recently, I saw no progress at all. Is there
still really a need for this (besides the previous point)? Or is it
that ALSA/JACKit do this better, besides providing interface
abstraction? Also, I was never very clear on what the limitation is
regarding implementing the VST API under Linux, given that so many open
source plugins exist. Is it because the API is proprietary, or similar
legal reasons?
4- Interface abstraction for plugins. We all know how our lovely X11
does not allow a sane way of sharing the event loop between toolkits
(might this be a good idea for a proposal?), so it is basically
impossible to have more than one toolkit in a single process. Because
of this, I guess it's impossible and unfair to settle on one toolkit
for configuring LADSPA plugins from a GUI. I remember Steve Harris
proposed the use of metadata (RDF, was it?), and plugins may also
provide hints, but I still think that may not be enough if you want
advanced features such as an envelope editor, or visualizations of
things such as filter responses, cycles of an oscillator, etc. Has
anything happened in recent months regarding this issue?
5- Project framework/session management. After much discussion and a
proposal, Bob Ham started implementing LADCCA. I think this is a vital
component, and it will grow even more important given the complexity
that an audio setup can reach. Imagine you are running a sequencer,
many softsynths, effect processors and then a multitrack recorder, all
interconnected via ALSA or JACKit: saving the setup for working on
later can be a lifesaver. What is its state nowadays? And how many
applications support it?
Well, those are basically my main concerns on the subject. I hope I
have not sounded like a moron, since that is not my intention at all. I
am very happy with the progress of the apps, and it's great to see apps
like Swami, Rosegarden or Ardour mature with time.
Well, cheers to everyone, and let's keep working hard!
Juan Linietsky
Hi all,
Yesterday I visited a demo event for Ableton Live in Switzerland. I've
read quite a lot about this thing in magazines but I had never tried it
myself (I haven't used Windows since 1994). But man, I was really
impressed. This is by far the most intuitive sequencer for any kind of
music I've ever seen.
The concept is a bit hard to describe: if you know trackers from the
good old days, imagine that mixed with a real-time timestretcher for
all your samples, a harddisk recording tool and many nice enhancements
like effects and a crossfader (like a DJ mixing table)...
The best thing is to download the demo at http://www.ableton.de/
("Products->live->demo download") and give it a try; it should also
work inside VirtualPC, VMware or whatever. The thing is quite fast,
even on old machines.
Anyway, after years of trying to find the most intuitive sequencer
interface, I think I found it yesterday; too bad it was not my idea ;)
So I'm tempted to start a project to write something like that as open
source. The most important part of it is definitely the timestretching
(they call it "elastic" audio...). But as far as I know, timestretching
algorithms are 1. not easy to implement and 2. not open source if they
sound good :)
Because I'm an absolute newbie at timestretching, I am requesting
comments. Can anyone point me to some papers or reference
implementations of (realtime) timestretching algorithms? It won't be
needed in the first stage of the application, but in the long term it
is a must.
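From the little I have gathered so far, the terms to search for seem to
be OLA (plain overlap-add), SOLA/WSOLA, and the phase vocoder. As a
starting point, here is a naive sketch of the plain OLA idea, just to
fix terms (it smears transients and is nowhere near "elastic" quality,
and all the names are illustrative): grains are read from the input at
one hop size and overlap-added to the output at another.

  #include <math.h>

  /* stretch > 1.0 slows the audio down (output is longer).  Grain
     size N with 50% overlap and a Hann window, so overlapping grains
     sum to unity except during the very first fade-in.  The caller
     allocates roughly in_len * stretch frames for out. */
  void ola_stretch(const float *in, long in_len,
                   float *out, long out_len, double stretch)
  {
      const long N = 2048, Hs = N / 2;    /* grain size, synthesis hop */
      const double Ha = Hs / stretch;     /* analysis hop in the input */
      long g, i;

      for (i = 0; i < out_len; i++)
          out[i] = 0.0f;

      for (g = 0; g * Hs + N <= out_len; g++) {
          long in_start = (long)(g * Ha);
          if (in_start + N > in_len)
              break;
          for (i = 0; i < N; i++) {
              double w = 0.5 - 0.5 * cos(2.0 * M_PI * (double)i / N);  /* Hann */
              out[g * Hs + i] += (float)(w * in[in_start + i]);
          }
      }
  }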
Also, if someone is working on something like a "Live" for Linux, let
me know :)
cu
Adrian
--
Adrian Gschwend
@ netlabs.org
ktk [a t] netlabs.org
-------
Free Software for OS/2 and eCS
http://www.netlabs.org
hi everyone!
while i'm slowly getting into linuxtag mode, it occurred to me it might
be nice to direct booth visitors with very specific questions to the
developers themselves...
so if any of you are hanging out on #lad anyway, let me know when and
where you can be reached (and which time zone you are in); it might be
useful. in any case, it will be fun :)
just a little idea...
jörn
--
All Members shall refrain in their international relations from
the threat or use of force against the territorial integrity or
political independence of any state, or in any other manner
inconsistent with the Purposes of the United Nations.
-- Charter of the United Nations, Article 2.4
Jörn Nettingsmeier
Kurfürstenstr 49, 45138 Essen, Germany
http://spunk.dnsalias.org (my server)
http://www.linuxdj.com/audio/lad/ (Linux Audio Developers)
Hello,
as both the participants of the 1st LAD conference and people at ZKM
enjoyed the meeting, Frank and I asked for the possibility to hold a
second meeting at ZKM next year.
The answer was positive, and therefore we can announce that the 2nd LAD
conference is planned to take place April 29th - May 2nd, 2004, at ZKM
Karlsruhe, this time with the option of more room. In addition to the
rooms we had for the last conference, we now have the option to also
use a hall which is about double the size of the lecture hall we used
for the last meeting. This hall is even more attractive since it is the
recording studio of ZKM and can also serve as a concert hall. This time
there is also the option to invite artists who actually make music with
Linux software.
Early registrations (email either me or Frank Neumann
<Frank.Neumann_AT_st.com>) would help us estimate the approximate scale
of the event, which could be even larger than last time. If you can
give a talk or presentation, please let us know the subject and the
estimated time you need for it. Depending on the number of talks, we
can decide whether we will have two parallel sessions.
If the program of the event is fixed earlier than last time, this will
help in advertising it in journals and among relevant companies. It
might also help in finding possible sponsors.
Updates on this will follow from time to time.
Matthias
--
Dr. Matthias Nagorni
SuSE Linux AG
Deutschherrnstr. 15-19 phone: +49 911 74053375
D - 90429 Nuernberg fax : +49 911 74053483
Hi all,
Announcing immediate availability of Soundmesh Internet2 audio streaming
software.
1. What is Soundmesh
Soundmesh is the result of collaborative work with Mara Helmuth. It
originally started as the "Internet Sound Exchange" Internet2 project
and has since grown to become a full-fledged audio streaming front-end.
The sole purpose of this app is to provide a mechanism for streaming
multiple CD-quality (or better) audio soundfiles over a fast Internet2
connection, utilizing a hacked version of RTcmix v3.1.0. Hence,
Soundmesh provides a unique "jamming" tool over the Internet for larger
groups of participants.
2. Obtaining Soundmesh
Soundmesh is currently available only in source form and is
downloadable from my website. The download is broken into 2 parts: the
soundmesh front-end (~530KB) and the hacked full version of RTcmix
3.1.0 (8.2MB). They can be obtained using the following direct URLs:
http://meowing.ccm.uc.edu/~ico/soundmesh/soundmesh-latest.tar.gz (~530KB)
http://meowing.ccm.uc.edu/~ico/soundmesh/rtcmix-soundmesh.tar.gz (8.2MB)
Alternately, you can also find the download links on my website.
Documentation is also available:
http://meowing.ccm.uc.edu/~ico/soundmesh/Documentation.txt
Screenshot:
http://meowing.ccm.uc.edu/~ico/soundmesh/Screenshot.jpg
3. Current Limitations
*Soundmesh obviously does not currently support modular numbers of
incoming and outgoing streams. This is something that is planned for a
future release.
*Perl (.pl) scorefiles are supported in soundmesh but do not work in
RTcmix
*Python RTcmix scorefiles are currently not supported and do not work
with
either soundmesh or this version of RTcmix
*A number of playable streams before "gapping" occurs varies depending
on the quality of a stream and the internet connection. Considering that
this is an Internet2 project, chances are your modem connection will
simply not work [well or at all].
*Sound played via network is not heard locally (should be a quick fix).
*Connections are not secure.
4. Disclaimer
Copyright Mara Helmuth & Ivica Ico Bukvic 2001-3
Linux version distributed under the GPL license (see
http://www.gnu.org/licenses/gpl.html for more info)
This software comes with no warranty whatsoever! Use at your own risk!
Best wishes,
Ivica Ico Bukvic, composer & multimedia sculptor
http://meowing.ccm.uc.edu/~ico
Hi all,
This will probably be the last release for this version; there's
something Bigger And Better (TM) in the works:
http://pkl.net/~node/misc/jr2-proto.png
* fixed some segfaults
* fixed some bad configure script checks
* i18n support and Russian translation, courtesy of Alexey Voinov and
Alexandre Prokoudine
http://pkl.net/~node/jack-rack.html
Bob
--
Bob Ham <rah(a)bash.sh>
Can you say "death chambers at Guantanamo with no real trial"?