Hello LADs!
I am trying to build CheeseTracker 0.8.0
on debian-stable, with g++ 2.95 and a custom-installed Qt 3,
and I am running into trouble.
Here's what the make log looks like:
=================================8<=================
scons .
scons: Reading SConscript files ...
Detecting if PKG-CONFIG is installed...
0.11.0
pkg-config found!
Checking for libsigc++-1.2...
1.2.5
libsigc++-1.2 found!
QT Check:
$QTDIR exists at, using QTDIR instead of harcdoded pathlist /export/qt3/
Looking for QT 3.x Includes:
/export/qt3//include/qt
Checking QT version..
#define QT_VERSION_STR "3.0.3"
Looking for QT 3.x Libraries:
using: -lqt-L/export/qt3//lib
Looking for QT 3.x 'moc' Binary:
moc command: /export/qt3//bin/moc
QT was found!
Detecting if OSS exists on the system..
OSS was found.
Dependency check successful, writing cache
MOC check: property_bridge_edit.h
MOC check: cspinbutton.h
MOC check: ccolor_bridge.h
MOC check: ccolor_list.h
MOC check: font_bridge.h
MOC check: keyboard_input_config.h
MOC check: audio_config.h
MOC check: sample_editor.h
MOC check: sample_editor_format.h
MOC check: sample_viewer.h
MOC check: sample_viewer_zoom.h
MOC check: envelope_point_editor.h
MOC check: envelope_editor.h
MOC check: resampler_config.h
MOC check: note_bridge.h
MOC check: sample_editor_clipboard.h
MOC check: sample_editor_effects.h
MOC check: pattern_edit.h
MOC check: pattern_edit_widget.h
MOC check: sample_edit.h
MOC check: instrument_edit.h
MOC check: interface.h
MOC check: order_and_defaults_editor.h
MOC check: variables_edit.h
MOC check: mdi_main_window.h
mdi_main_window.h:201: Warning: Variable as signal or slot.
['sigc-1.2']
scons: done reading SConscript files.
scons: Building targets ...
c++ -I/export/local/lib/sigc++-1.2/include -I/export/local/include/sigc++-1.2 -DPOSIX_ENABLED -O3 -ffast-math -DOSS_ENABLED -Icommon/ -Icommon/defines/ -Icommon/components/data -I/export/qt3//include/qt -Icommon -I. -Icommon/defines -c -o common/interface__QT/audio/moc__audio_config.o common/interface__QT/audio/moc__audio_config.cpp
common/interface__QT/audio/moc__audio_config.cpp:30: no `void Audio_Config::initMetaObject()' member function declared in class `Audio_Config'
common/interface__QT/audio/moc__audio_config.cpp: In method `void Audio_Config::initMetaObject()':
common/interface__QT/audio/moc__audio_config.cpp:34: implicit declaration of function `int badSuperclassWarning(...)'
common/interface__QT/audio/moc__audio_config.cpp: At top level:
common/interface__QT/audio/moc__audio_config.cpp:41: prototype for `class QString Audio_Config::tr(const char *)' does not match any in class `Audio_Config'
common/interface__QT/audio/audio_config.h:41: candidate is: static class QString Audio_Config::tr(const char *, const char * = 0)
common/interface__QT/audio/moc__audio_config.cpp: In function `static class QMetaObject * Audio_Config::staticMetaObject()':
common/interface__QT/audio/moc__audio_config.cpp:95: no method `QMetaObject::new_metadata'
common/interface__QT/audio/moc__audio_config.cpp:96: no method `QMetaObject::new_metaaccess'
common/interface__QT/audio/moc__audio_config.cpp:98: `struct QMetaData' has no member named `ptr'
common/interface__QT/audio/moc__audio_config.cpp:98: `QMember' undeclared (first use this function)
common/interface__QT/audio/moc__audio_config.cpp:98: (Each undeclared identifier is reported only once
common/interface__QT/audio/moc__audio_config.cpp:98: for each function it appears in.)
common/interface__QT/audio/moc__audio_config.cpp:98: parse error before `;'
common/interface__QT/audio/moc__audio_config.cpp:101: `struct QMetaData' has no member named `ptr'
multiple parse errors follow.
nikodimka
Hello,
I have been doing some tests with OpenAFS, and it hangs on me
frequently. From what I found on the OpenAFS side, this is very likely
caused by the preemptible kernel patch. Knowing that a lot of people
around here are using that patch, I'd like to ask if any of you
have experienced the same and know of a solution, or of an
alternative (other than NFS) which would work well with low-latency kernels.
Maarten
Hi!
First Release.
gmorgan is a... rhythm station: an organ with auto-accompaniment. It uses MIDI
and the ALSA sequencer to play the rhythm patterns. Styles, patterns,
sounds and the mixer settings can all be edited and saved.
Tested on Gentoo and Debian, on a PIII 933 and a PII 300.
REQUIREMENTS
--------------------------
Linux
ALSA
FLTK
Take a look at http://personal.telefonica.terra.es/web/soudfontcombi/
And please... if you enjoy this program and want to share patterns, send them
to me and I will include them in future versions. I have a large TODO list and
I need some help.
Josep
Could anyone grab shmserver.tar.gz, compile it, and test
whether you see the same problem? Additional instructions below.
I have thought about why the shared memory freezes. It could be
that Linux copies the shared memory pages for each client:
one shared memory segment, multiple copies. Would locking
the memory help? Maybe, maybe not.
I know there are high-performance applications using shared
memory well (at least their authors may think so). But then my own
alsashmrec seems to work well too (or so I thought). If there indeed
is a severe (or minor?) problem in the kernel, then in all these
applications it may cause very short freezes (until a read, write,
sleep, suspend, etc. call releases it): jitter in updating
lock-free FIFOs in shared memory.
If you cannot reproduce it, or otherwise cannot find out what is going on,
then where should I post this? Is there a list where the kernel
and shared memory experts are reading?
And hey, why did shmget not work? See shmalloc_named_get().
Best regards,
Juhana
Additional instructions, followed by the previous mail.
1. Compile it without the sleep(1) [as in the code below] and without
the fprintf of "ggggg" [one could change that to "g" so that the
screen does not scroll too fast].
Then compile and test with the sleep, and then test with the fprintf
of "g".
2. First run shmserver in a terminal. Copy the mid value
(the second value printed).
3. Then run shmclient in another terminal and paste the mid value:
% shmclient <midvalue>
>From: Juhana Sadeharju <kouhia(a)nic.funet.fi>
>To: linux-audio-dev(a)music.columbia.edu
>Subject:
>Date: Fri, 4 Jul 2003 16:46:25 +0300
>Reply-To: linux-audio-dev(a)music.columbia.edu
>
>>From: Ralfs Kurmis <kurmisk(a)inbox.lv>
>>
>>Try out this example
>>Split it into separate files
>>( Needs x )
>
>Hello. Thanks for the example, but I see some problems there:
>if the second process does not find the segment given by the key,
>your example makes two distinct segments. That is what happened with
>me. Because I don't have IPC_CREAT in the second process, my
>program simply fails instead of creating a second segment.
>
>I got it working another way, but there are severe problems.
>In the client, I simply skipped the shmget() and attached the
>segment immediately with shmat(), using the mid value
>printed by the server.
>
>The example mailed here used shmget() with IPC_CREAT.
>When I used IPC_CREAT for both server and client, as I expected,
>I got two separate shared memory segments. In fact, since I create the
>shared memory in shmserver, which is run first, shmclient should not use
>IPC_CREAT at all.
>
>It works, but while the server seems to fill the shared memory
>with increasing integer numbers, the client behaves strangely.
>I have this code in shmclient now:
>
> k = -1;
> for (;;) {
> if (k != nums[1]) {
> k = nums[1];
> fprintf(stderr,"%i\n",k);
> }
> // sleep(1);
> // fprintf(stderr,"ggggg\n");
> }
>
>What should it do? Ideally it should print the increasing numbers:
>5435, 5436, 5437, etc. With sleep(1) it prints a new value once per
>second. However, without sleep(1), it prints only one number and then
>never prints anything again. It looks like Linux does not update
>the shared memory. Why?
>
>When the "ggggg" is printed (without the sleep), shmclient prints only
>one number and then repeatedly the "ggggg". Why is the shared memory not
>updated in this case? I remember having a similar problem with the old
>XWave software in 1998, with a much earlier kernel version (now I have
>the 2.4.18 of Red Hat 7.3).
>
>This looks like a serious problem. It may be that nobody has noticed
>it because one uses either sleep() or read()/write() in an
>audio system. That is, your software may work, but the problem
>may degrade its performance (as it certainly did freeze the
>printing in my shmclient). Perhaps the problem keeps an audio
>engine from ever working as well as it could.
>
>If you get the shmclient to work while the sleep(1) is commented out,
>please let me know :-)
>
>http://www.funet.fi/~kouhia/shmserver.tar.gz
Hello all,
I've been working on VLevel, a LADSPA plugin to keep me from having to
fiddle with the volume, and it's now in a useful state, so I'm looking
for some feedback. Basically, VLevel keeps track of the peak
amplitudes, and adjusts the volume smoothly to make the quiet parts
louder. Since it looks ahead a few seconds, the gain change is always
smooth.
<http://vlevel.sourceforge.net>
VLevel is written in C++. I have two questions. First, why do most
other plugins allocate and free copies of their strings and structures,
instead of just passing the literal (as I do)? The declarations in
ladspa.h don't allow the host to modify what the pointers reference.
Second, I keep a buffer of length n in my code, so the first n seconds
of data I return are useless, and after the audio is sent, I need n more
seconds of input before all the audio is returned. Is there any way of
informing the host about this?
In the future I plan to make some performance improvements, and perhaps
a nice cross-platform GUI for applying VLevel to files. I may also try
to get XMMS-LADSPA to save its state, which would be very useful to me.
I suppose VLevel could use RMS or a psychoacoustic model to estimate
volume, but that would make it very complex, and more difficult to avoid
clipping. Despite that, it serves my purpose, to play classical music
on the road, quite well.
Have fun,
--
Tom Felker <tcfelker(a)mtco.com>
Hi everyone... I guess it's been more than a year since the last time we
discussed such issues here. I am sure that everyone here, myself included,
works very hard to maintain and improve their respective apps. Because of
that, the intention of this post is to inform myself, as well as
possibly other developers, about the status of many things that affect the way
we develop our apps under Linux.
As many of you might remember, there were many fundamental issues regarding
the APIs and toolkits we use. I will try to enumerate and explain each one of
them as best as I can. Please, I ask everyone who is more up to date on the
status of each to answer, comment, or even add more items.
1- GUI programming, and interface/audio synchronization. As far as I can
remember, a great problem for many developers is how to synchronize the
interface with the audio thread. Usually we have the interface running at
normal priority and the audio thread running at high priority (SCHED_FIFO) to
ensure that it won't get preempted while mixing, especially when working at
low latency. For many operations we do (if not most) we can resort to shared
memory to make changes, as long as they are not destructive. But when we want
to lock, we will almost certainly suffer from a priority inversion
scenario. Although POSIX supports functionality to avoid such scenarios
(priority ceiling/inheritance), there are no plans to include
support for it in Linux anytime soon (at least for 2.6, from what Andrew
Morton told me). Although some projects exist, it will not likely become
mainstream for a couple of years (well, the low latency patches are not
mainstream either, with good reason).
I came to find out that the preferred method is to transfer data through a
FIFO (due to its lock-free userspace nature), although that can be very
annoying for very complex interfaces.
What are your experiences on this subject? Is it acceptable to lock in
cases where a destructive operation is being performed? (Granted,
if you are doing a mixdown you are not supposed to be doing that.)
From my own perspective, I've seen even commercial HARDWARE lose
the audio, or even kill voices, when you do a destructive operation, but I
don't know what users are supposed to expect. One thing I have to say about
this also is JACKit's (and the apps written for it) low tolerance for xruns.
I found that many apps (or even JACKit itself) would crash or exit when one
happens. I understand xruns are bad, but I don't see how they can be a
problem if you are "editing" (NOT recording/performing) and some destructive
operation needs to lock the audio thread for a relatively long time.
2-The role of low latency, and audio and MIDI timing. As much as we love
working at low latency (and I personally like controlling softsynths from my
Roland keyboard), in many cases, if not most, it is not really necessary, and
it can be counterproductive, since working in such a mode eats a lot of CPU.
Low latency is ideal when you are performing LIVE input and want to
hear a processed output. Examples of this are input from a MIDI controller
with output from a softsynth, or input through a line (a guitar, for example)
with processed output (effects). But imagine that you don't really need to do
that: you could simply increase the audio buffer size to have latencies
up to 20/25 milliseconds, saving CPU and preventing xruns, while the latency
is still perfectly acceptable for working in a sequencer, for example, or for
mixing pre-recorded audio tracks. Doing things this way should also ease the
pain for softsynth writers, as they wouldn't be FORCED to support low latency
for their app to work properly. And despite the point of view of many people,
many audio users and programmers don't care about low latency and/or don't
need it. But such a scenario, at least a year ago, was (is?) not possible
under Linux, as softsynths (using ALSA and/or JACKit) have no way to
synchronize audio and MIDI unless they run in low latency mode, where it no
longer matters (the audio update interval is so small that it works as a
relatively high resolution timer). Last time I checked, ALSA could not
deliver useful timestamping information for this, and JACKit would also not
deliver info on when the audio callback happened. I know back then there were
ideas floating around about integrating MIDI capabilities into JACKit to
overcome this problem and provide a more standardized framework. I don't see
how MIDI sync/clocks would help in this case, since they are basically meant
for wires or "low latency" frameworks.
3-Host instruments. I remember some discussion about XAP a while ago, but
having visited the page recently, I saw no progress at all. Is there still
really a need for this (besides the previous point), or is it that
ALSA/JACKit do it better, besides providing interface abstraction? Also, I
never had very clear what the limitation is regarding implementing the VST
API under Linux, granting that so many open-source plugins exist. Is it
because the API is proprietary, or for similar legal reasons?
4-Interface abstraction for plugins. We all know how our lovely X11 does not
allow a sane way of sharing the event loop between toolkits (might this
be a good idea for a proposal?), so it is basically impossible to have more
than one toolkit in a single process. Because of this, I guess it's
impossible and unfair to settle on one toolkit for configuring LADSPA plugins
from a GUI. I remember Steve Harris proposed the use of (RDF, was it?), and
plugins may also provide hints, but I still think that may not be enough if
you want to do advanced features such as an envelope editor, or
visualizations of things such as filter responses, cycles of an oscillator,
etc. Has anything happened in the last months regarding this issue?
5-Project framework/session management. After much discussion and a proposal,
Bob Ham started implementing LADCCA. I think this is a vital component, and
it will grow even more important given how complex an audio setup can
get. Imagine you are running a sequencer, many softsynths, effect
processors and then a multitrack recorder, all interconnected via ALSA or
JACKit: saving the setup for working on later can be a lifesaver. What is
its state nowadays? And how many applications support it?
Well, those are basically my main concerns on the subject. I hope I have not
sounded like a moron, since that is not my intention at all. I am very happy
with the progress of the apps, and it's great to see apps like Swami,
Rosegarden or Ardour become mature with time.
Well, cheers to everyone, and let's keep working hard!
Juan Linietsky
Hi all,
Yesterday I visited a demo event for Ableton Live in Switzerland. I've
read quite a lot about this thingy in magazines but I had never tried it
myself (I haven't used Windows since 1994). But man, I was really
impressed. This is by far the most intuitive sequencer for all kinds of
music I've ever seen.
The concept is a bit hard to describe: if you know trackers from the
good old days, imagine that mixed with a real-time timestretcher for all
your samples, a hard disk recording tool and many nice enhancements like
effects and a crossfader (DJ-mixtable-like)...
The best thing is to download the demo at http://www.ableton.de/
("Products->Live->demo download") and give it a try; it should also
work inside VirtualPC, VMware or whatever. The thingy is quite fast,
even on old machines.
anyway, after trying to find the most intuitive sequencer interface for
years, I think I found it yesterday; too bad it was not my idea ;)
So I'm tempted to start a project to write something like that as open
source. The most important part of it is definitely the timestretching
(they call it "elastic" audio...). But as far as I know, timestretching
algorithms are 1. not easy to implement and 2. not open source if they
sound good :)
Because I'm an absolute newbie at timestretching, I'm requesting
comments. Can anyone point me to papers or reference
implementations of (realtime) timestretching algorithms? It won't be
needed in the first stage of the application, but in the long term it
is a must.
Also, if someone is working on something like a "live" for Linux let me
know :)
cu
Adrian
--
Adrian Gschwend
@ netlabs.org
ktk [a t] netlabs.org
-------
Free Software for OS/2 and eCS
http://www.netlabs.org
hi everyone !
while i'm slowly getting into linuxtag mode, it occurred to me it might be
nice to direct booth visitors with very specific questions to the
developers themselves...
so if any of you are hanging out on #lad anyway, let me know when and
where you can be reached (and which time zone you are in); it might be
useful. in any case, it will be fun :)
just a little idea...
jörn
--
All Members shall refrain in their international relations from
the threat or use of force against the territorial integrity or
political independence of any state, or in any other manner
inconsistent with the Purposes of the United Nations.
-- Charter of the United Nations, Article 2.4
Jörn Nettingsmeier
Kurfürstenstr 49, 45138 Essen, Germany
http://spunk.dnsalias.org (my server)
http://www.linuxdj.com/audio/lad/ (Linux Audio Developers)