Greetings LADders:
I'm working with Kjetil's vstserver, testing various plugins, and
occasionally I get this error from the server:
VSTSERVER/SMS_new: Unable to allocate 12288 bytes of shared memory.
VSTSERVER/CH_new: Could not set up shared memory.
The plugin then politely refuses to load. 'cat /proc/meminfo' reports:
        total:     used:     free:  shared:  buffers:    cached:
Mem:  526311424 511303680  15007744        0  70569984  234008576
Swap: 337195008 223801344 113393664
MemTotal: 513976 kB
MemFree: 14656 kB
MemShared: 0 kB
Buffers: 68916 kB
Cached: 208332 kB
SwapCached: 20192 kB
Active: 253612 kB
Inactive: 177876 kB
HighTotal: 0 kB
HighFree: 0 kB
LowTotal: 513976 kB
LowFree: 14656 kB
SwapTotal: 329292 kB
SwapFree: 110736 kB
I tried raising the shared memory limit with 'echo some-big-number >
/proc/sys/kernel/shmmax', but I still got the error. I'm obviously not
understanding something, so I turn to the wizards for help. Is there a
fix for this problem?
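For what it's worth, here is the little test program I used to check
whether a SysV segment of that size can be created at all. That
SMS_new boils down to a plain shmget() is only my guess; I haven't
read the vstserver source:

/* try to create and remove a 12288-byte SysV shm segment */
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
    int id = shmget(IPC_PRIVATE, 12288, IPC_CREAT | 0600);
    if (id < 0) {
        /* ENOSPC would point at SHMMNI/SHMALL rather than SHMMAX */
        perror("shmget");
        return 1;
    }
    printf("got segment %d\n", id);
    shmctl(id, IPC_RMID, NULL);
    return 0;
}

'ipcs -m' might also show whether stale segments from crashed plugins
are eating up the limits.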
Best regards,
== dp
Me too!
For me JACK has been confusing, as I'm not sure in what process/thread
my audio routine runs. Somebody here wrote that the interesting thing
about JACK is that it can route audio through applications. I don't
want that. I want JACK to take my C function (or its compiled object)
and execute it within the real-time audio engine.
Even if JACK is used, one should be prepared to process non-realtime
audio within one's own application; you cannot send background audio
processing to JACK. Threads in the application are important, because
it is not good for the whole application to freeze when one opens a
big audio file in the editor, for example.
I have not yet seen even moderately good explanations or guides on
those topics. We have discussed the Model-View-Controller (MVC)
scheme, but nobody seemed to have any practical tool-level explanation
that would let me easily use MVC in my applications.
I suggest starting with JACK, not with ALSA or OSS.
It took me an hour to write a Theremin synth, sort of.
Very easy. (The GUI was built with GTK+.)
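To answer my own confusion above: with JACK, you register a process
callback and libjack calls it from its real-time thread; main() and
the GUI live in ordinary threads. A minimal skeleton against the
documented jack/jack.h API (a sketch, not my actual Theremin code;
error handling omitted):

#include <jack/jack.h>
#include <math.h>
#include <unistd.h>

static jack_port_t *out_port;
static jack_nframes_t srate;
static double phase;

/* JACK executes this C function inside its real-time audio engine,
 * once per period.  No malloc, no locks, no disk I/O in here. */
static int process(jack_nframes_t nframes, void *arg)
{
    jack_default_audio_sample_t *out =
        jack_port_get_buffer(out_port, nframes);
    jack_nframes_t i;
    for (i = 0; i < nframes; i++) {
        out[i] = 0.2f * (float)sin(phase);
        phase += 2.0 * M_PI * 440.0 / srate;
    }
    return 0;
}

int main(void)
{
    jack_client_t *client = jack_client_open("sine", JackNullOption, NULL);
    if (client == NULL)
        return 1;
    out_port = jack_port_register(client, "out", JACK_DEFAULT_AUDIO_TYPE,
                                  JackPortIsOutput, 0);
    srate = jack_get_sample_rate(client);
    jack_set_process_callback(client, process, NULL);
    jack_activate(client);
    sleep(30);               /* the GUI/main thread just idles here */
    jack_client_close(client);
    return 0;
}

Compile with something like
gcc -o sine sine.c `pkg-config --cflags --libs jack` -lm
and then connect "sine:out" to the playback ports.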
Best regards,
Juhana
I was hoping someone could help me with my first newbie steps in Linux audio
programming. I finally made it over to Linux, and discovered that I arrived
here before Cubase and FruityLoops did. They never did what I wanted them to
anyway. *sob*.
Anyhoo, now that I'm over them, I would love to get cracking on some chunky
audio projects, but I'm a bit unsure of the "best practices" approach to
audio programming. What should I study? What's the best way to go?!
I have done some messing around with SDL and that seems pretty cool: cross-
platform, good graphics capabilities, and the audio is really simple. The FAQ
says "low level support for audio" - is it low enough?! Is it useless?
I've really gotten into audio programming, but the stuff I'm making is
currently all over the place (like this message), using different snippets of
code and tutorials I've found here and there. Half of the sources are old,
so I don't know which path I should be following.
Any hints would be appreciated!
Earle
Wow, thanks a lot, everyone. I'm certainly going to check out JACK, but
is GTK+ flexible enough (can you make your own buttons and stuff)?
Actually, don't answer that, that's just laziness on my part. It's just
that I've noticed that the graphics sometimes take a back seat in the
Linux world :)
... Well, that uses up my one free "really newbie" question. I promise
the next question I post will be thoroughly researched!
Thanks again,
Earle
Hallo,
I'm having a strange problem with libmpeg3 while compiling DJplay
(http://linux1.t-data.com/djplay/).
All goes well until the final linker step, which gives this error:
g++ -o djplay djplay.o display.o [...] mp3.o [...] recorder.o \
-L$QTDIR/lib -lqt-mt `pkg-config --libs glib jack` -laudiofile -lmad \
-lmpeg3
mp3.o(.text+0x4ac): In function `mpeg3demux_read_char':
/usr/include/mpeg3demux.h:104: undefined reference to `mpeg3demux_read_char_packet(mpeg3_demuxer_t*)'
mp3.o(.text+0x4e9): In function `mpeg3demux_read_prev_char':
/usr/include/mpeg3demux.h:118: undefined reference to `mpeg3demux_read_prev_char_packet(mpeg3_demuxer_t*)'
collect2: ld returned 1 exit status
make: *** [djplay] Error 1
This is using the Debian testing packages of libmpeg3. I also tried
recompiling the library, but it still gives this error. Anyone have any
tips on where to look for the error?
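One thing I do notice: the undefined symbols are printed with full C++
signatures, so g++ is apparently compiling mpeg3demux.h as C++ and
looking for mangled names, while the library itself exports plain C
symbols. Maybe the include needs C linkage, something like this
(untested guess on my part):

/* untested guess: force C linkage for the libmpeg3 headers */
extern "C" {
#include <libmpeg3.h>
}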
ciao
--
Frank Barknecht _ ______footils.org__
>yes, suse kernel (since 8.1) already includes most of the
>necessary changes. some parts are missing but they are on
>the rare code path, which has not been audited very well,
>anyway.
Well, I do have a LL patch for the original SuSE 8.1 kernel that comes
with the distribution, including the capability patch. The original
SuSE 8.1 kernel does not work well without that patch on my machines.
I also have a patch that adds cpufreq and a patch for BIOSes with
broken ACPI. Those patches are only tested on my Dell i8500, but for
me they work perfectly.
I don't have patches for SuSE 8.2 ... I'll wait until the first stable
2.6.x-kernel-based SuSE distro is out.
- Stefan
Hey there...
I have an incredibly naive question... How does one mix n channels of
audio down to one channel? I've scoured the net as best I could and
haven't really found anything very authoritative. Suppose I'm dealing
with floating-point data between -1.0 and 1.0 and have n channels. I
know that physically it's all a summation of the individual waves, but
strictly speaking, a summation of multiple waves between -1.0 and 1.0
doesn't keep you between -1.0 and 1.0. Looking through sox, they tend
to multiply by some gain value relative to the number of channels
(namely, an average). So is averaging considered a valid professional
audio method of digital mixing? What if your source samples are all
16-bit? Wouldn't you need to dither in a case like that? Doesn't
averaging also imply truncation of your source signal? Am I missing
something here? I've tried looking to see what JACK and Ardour do, but
was left wondering WHERE to look.
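To make the question concrete, this is the kind of loop I mean: a
plain summation scaled by 1/n, the way sox appears to do it (just a
sketch of the alternatives, not anybody's actual code):

/* Mix n channels of floats down to one.  A plain sum can leave
 * [-1.0, 1.0]; scaling by 1/n (averaging) keeps the bound but
 * attenuates every source -- which is exactly the trade-off I'm
 * asking about. */
void mixdown(const float **in, float *out, int channels, int frames)
{
    const float gain = 1.0f / (float)channels;   /* averaging */
    int f, c;
    for (f = 0; f < frames; f++) {
        float sum = 0.0f;
        for (c = 0; c < channels; c++)
            sum += in[c][f];
        out[f] = sum * gain;    /* or: clamp the raw sum instead? */
    }
}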
respectfully,
d!
The M-Audio MobilePre/Sonica/Transit/Ozone devices need a firmware
download before they can be used with Linux.
I've released a first beta of the madfu-firmware package, which tries
to accomplish this.
Please note that I do not have any of these devices, and that the
loader is not tested at all. That's why I'm searching for volunteers.
:-)
To download the beta, go to
http://sourceforge.net/project/showfiles.php?group_id=87777&release_id=1833…
Please send any success/failure reports to
usb-midi-fw-user(a)lists.sf.net
Regards,
Clemens
Hi all,
I am working towards using a set of Linux audio apps in a live context.
With the recent development of JACK and apps such as djEQ, things look
promising. In this light, an idea came to mind. (Keep in mind that I am
thinking aloud, and that I am no expert in audio or MIDI programming.)
How feasible would it be to implement a MIDI controller API: a
standard, graphical (a la qJackconnect) application API that synths
and such could use to change MIDI controller assignments on the fly or
to load existing presets?
I don't think that the audio framework being worked on for Linux had
live use in mind, but I am convinced that it lends itself to it.
In the same context (and this might already be possible): is it
possible to have a script calling different apps with patches/presets
for each one of them? I am thinking along these lines, during a
performance:
Call script -> opens, say, Zynaddsubfx with a certain master, Freqtweak
with a certain session, Hydrogen with a certain song, etc. -> call end
of piece from the script, which exits all these apps -> call another
script for the next piece. Something like the sketch below.
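As a shell sketch (the app names are real, but the preset-loading
flags are my guesses; check each app's --help before trusting any of
this):

#!/bin/sh
# piece1.sh -- hypothetical launcher for one piece; flags are guesses
zynaddsubfx -l piece1_master.xmz &  Z=$!
freqtweak piece1_session &          F=$!
hydrogen -s piece1.h2song &         H=$!
read dummy        # piece plays; hit Enter when it is over...
kill $Z $F $H     # ...then take everything down for the next script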
Regards
Ant-
--
antoine rivoire <antoine.rivoire(a)ntlworld.com>
-------- directBOX Reply ---------------
From: harold_zhu(a)hotmail.com
To : linux-audio-dev(a)music.columbia.edu
Date: 15.09.2003 04:13:24
Hi, Folks:

I am new here and I am also kind of new to Linux, so I have a basic
question. I have worked on a multimedia project on the Windows
platform, and I used the WaveIn and WaveOut functions as the audio I/O
interface to capture and play back real-time audio. Now I need to do a
port to Linux... so what's the best audio I/O API on Linux?

In my search so far, I understand that both OSS and ALSA can do the
job (real-time audio capture/playback)... but what's the difference?
Stability-wise? Ease of use? Performance-wise? Also, any other
candidates, in particular cross-platform wrapper APIs?

Thanks
----
Hi!
Well, the main differences between the OSS layer and ALSA are:
1.) ALSA is the new driver layer in Linux kernels >= 2.6.x.
2.) The OSS layer needs ioctl() calls to manage/configure sound devices
(and ioctl() calls are only possible for root users!), so OSS is really
user-unfriendly if you want to program an application.
3.) ALSA is easier to use for MIDI / PCM capturing and playback.
4.) Both are poorly documented (I can tell you that! :)), but ALSA has
a few basic programming tutorials for C (C++).
5.) ALSA is, basically speaking, a "wrapper", a front-end to the OSS
architecture, and it is highly recommended to use ALSA.
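To give you an idea of point 3, WaveOut-style blocking playback with
ALSA looks roughly like this minimal sketch, using the documented
snd_pcm_* calls (error checking mostly omitted):

/* minimal blocking playback: open, configure, write -- WaveOut-ish */
#include <alsa/asoundlib.h>
#include <math.h>

int main(void)
{
    snd_pcm_t *pcm;
    snd_pcm_hw_params_t *hw;
    short buf[4410 * 2];                     /* 0.1 s of 16-bit stereo */
    int i, n;

    if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0) < 0)
        return 1;
    snd_pcm_hw_params_alloca(&hw);
    snd_pcm_hw_params_any(pcm, hw);
    snd_pcm_hw_params_set_access(pcm, hw, SND_PCM_ACCESS_RW_INTERLEAVED);
    snd_pcm_hw_params_set_format(pcm, hw, SND_PCM_FORMAT_S16_LE);
    snd_pcm_hw_params_set_channels(pcm, hw, 2);
    snd_pcm_hw_params_set_rate(pcm, hw, 44100, 0);
    snd_pcm_hw_params(pcm, hw);              /* apply the configuration */

    for (i = 0; i < 4410; i++) {             /* 440 Hz test tone */
        short s = (short)(3000 * sin(2 * M_PI * 440 * i / 44100.0));
        buf[2 * i] = buf[2 * i + 1] = s;
    }
    for (n = 0; n < 20; n++)                 /* ~2 s; writei blocks */
        snd_pcm_writei(pcm, buf, 4410);

    snd_pcm_drain(pcm);
    snd_pcm_close(pcm);
    return 0;
}

Link with -lasound -lm. Capture is symmetric: open with
SND_PCM_STREAM_CAPTURE and use snd_pcm_readi().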
I hope this helps,
Sascha Retzki