Hi!
I guess all of you know that allocating memory on the heap at runtime in
RT threads is not a good idea. This can easily be prevented with user-space /
application-managed memory pools.
But what about the stack? Even if you try to use the heap for local variables
as well, you (usually) won't be able to avoid the stack entirely due to ABI
definitions; e.g. on most platforms function arguments are passed on
the stack. So what can you do to prevent physical pages from being
allocated for the stack at RT-critical time? Because this would lead to the
same problems as calling malloc(), new and co.
I read: "Although it is possible to give memory back to the system and shrink
a process's address space, this is almost never done." [1] That sounds to me
as if, once physical memory pages have been assigned to the virtual stack
range (due to stack growth), those physical pages are never freed, even if
the virtual stack shrinks (that is, the stack pointer increments). Is that
true? And what does "almost" mean in this context? Unfortunately I'm not into
the mm internals of Linux yet.
If the above claim is true, then we could simply grow the stack for a
short time at the beginning of an RT application, e.g. by calling alloca()
(maybe not good - dangerous and not portable), by defining a big array on
the stack in a helper function, and/or by making that helper function recurse
to a certain depth to cover the required stack range.
Anybody with deep Linux mm knowledge around?
CU
Christian
[1] http://www.informit.com/articles/article.asp?p=173438
Hi all,
I've posted this before on the LAU list but unfortunately received no
response. Please accept my apologies for cross-posting, but I am hoping that
someone on LAD might be able to help me learn more about this particular
topic. I would greatly appreciate help in this matter!
At any rate, here's the question:
I have been messing with the ambisonic plugins provided in the CMT LADSPA
collection. Namely, I am looking into a simple implementation of a B-format
1ch->4ch encoder. The problem is that I cannot figure out how to use it on a
simple mono (non-encoded) sound input in order to control its diffusion over
4 channels. When running it via Ardour and sending it to 4 separate busses
via this plugin, even though the x, y, and z coordinates do affect the output
of the 4 channels, they do not conform to the expected 3,4,2,1 output (I
presume these correspond to 4 speaker outputs, but I have doubts about this
as well, so help in understanding it better would be most appreciated).
By now you can see that although I know a bit about ambisonics (mainly the
theory), I am quite a newbie in this area, but for what it's worth I am
eager to learn :-).
The Pd set of plugins works pretty much as expected, but those plugins are
also designed differently, likely with discrete 4-8 outputs that can then be
patched directly to the main outs. In the case of the LADSPA plugins I am not
even sure whether what I am getting is an encoded version of a 4-channel
stream that then needs to be decoded. Yet, when I tried:
mono sound -> b-format 1ch to 4ch encoder (LADSPA) -> 4ch to 4ch decoder
(LADSPA) -> out
I got a bunch of garbage with two out of the 4 channels consistently
clipping, so this is probably not the case either...
So, to conclude, I must not be using the CMT plugins properly and would
therefore greatly appreciate it if someone could enlighten me. I am already
aware of the theory behind ambisonics; I am just a bit puzzled how the CMT
plugins (if at all) can be used with mono and/or stereo non-encoded streams
in order to spatialize them in real time. I would most appreciate a simple
practical example of how to do this (if it proves to be possible).
Thank you very much!
Best wishes,
Ico
Hi all. I'd like to receive MIDI note events from
another MIDI player. But before I attempt to dive into
a wad of documentation, I'd like to know if it's even
possible in the following situation...
1) I don't have a hardware sequencer, so I use
timidity as an ALSA sequencer device and pmidi as a player:
timidity -iA -B8,8 -Os &
pmidi -p 128:0 <midi file>
2) That plays fine, but how do I tap into the events
pmidi is sending to 128:0? I've tried modifying sample
code such as dump-alsa.c, but got no output. I also tried
the tips at "ALSA related stuffs", but was unable to get
"aseqview" to compile.
Any tips? Thank you in advance. - Sean
http://www.home.unix-ag.org/simon/files/dump-alsa.c
http://mitglied.lycos.de/iwai/alsa.html
Here is an oscilloscope DSSI plugin that I've been hacking on for the last
few weeks:
http://www.student.nada.kth.se/~d00-llu/music_dssi.php?lang=en
It has two audio input ports and will display the two input signals as
two waveforms. The trigger level and direction are controllable, as are
the amplification and offset for each channel and the time resolution.
Note that this plugin will NOT work with jack-dssi-host from version 0.9
of the DSSI package, since that does not support audio input ports. It
will work with jack-dssi-host or Om from current CVS (I haven't tested
it with Rosegarden). Om 0.1.1 and earlier will not work because of an
optimisation that breaks this type of plugin.
The plugin includes a DSSIUIClient class that might be of interest to
plugin UI writers using libsigc++ and Glibmm - it handles all the
required communication with the plugin host and exposes public member
functions and signals for the UI -> host and host -> UI OSC commands
specified in the DSSI RFC. It can also handle the nasty business of
setting up an initial shared memory segment for the plugin and the UI
(the actual implementation of the shared memory handling is in a
separate C file, so it could be useful for people who don't use
libsigc++ and Glibmm, or even C++). Source code documentation is here:
http://www.student.nada.kth.se/~d00-llu/plugins/ll-scope/dox/html/classDSSI…
and here:
http://www.student.nada.kth.se/~d00-llu/plugins/ll-scope/dox/html/dssi__shm…
For more details about the plugin, read the README file in the source
package.
It's all GPL.
--
Lars Luthman
PGP key: http://www.d.kth.se/~d00-llu/pgp_key.php
Fingerprint: FCA7 C790 19B9 322D EB7A E1B3 4371 4650 04C7 7E2E
Paul Davis:
>
> >what is the 'easiest' MIDI interface to get working under the average
> >Linux kernel? what sort of experiences do folks have with getting
> >MIDI working (on a programming level) using API's such as (but not
> >limited to) ALSA, and MidiShare?
> >
> >for me so far, MidiShare seems to offer the most direct and usable
> >approach .. while ALSA is fraught with complexity and dependencies
> >which often seem out of control. i'd like to know what your
> >experience, as developers, has been with getting a working MIDI
> >subsystem under Linux ...
>
> i can speak up as someone who has avoided any interactions with the
> ALSA sequencer for years because of a similar perception. i recently
> converted Tim Thompson's KeyKit from its raw MIDI port-based
> implementation under Linux to one that uses the ALSA sequencer (and
> adding multiport support along the way). it was surprisingly easy and
> obvious, given a simple example (in this case, the "ALSA sequencer
> MIDI Port object" that someone contributed to Ardour). even more
> gratifying was comparing the code to the keykit implementations for
> win32 and CoreMidi. the ALSA sequencer one was smaller, more logical
> and less bogged down in details.
>
Yep, same experience as you. Before using the ALSA seq API myself,
it seemed horribly complicated. But it really wasn't.
Tip: the source for jack-rack by Bob Ham has some clean alsa-seq code
to look at.
--
Hi all,
I have a soundblaster live.
I also am using debian/demudi and have a dual PIII 500 mhz.
I am looking for hints on how to optimize my card with ALSA, JACK, etc.
I once found on the ALSA site an example .asoundrc tailored for the
SBLive, but now when searching I don't see such an animal.
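From what I remember it looked roughly like this - a sketch from memory, not a tested config; the card name "Live" is an assumption, and I believe the emu10k1 runs at 48 kHz internally, so pinning the rate avoids software resampling:

```
# untested sketch; check your card name with "cat /proc/asound/cards"
pcm.!default {
    type plug
    slave {
        pcm "hw:Live"
        rate 48000    # emu10k1 native rate; avoids resampling overhead
    }
}
ctl.!default {
    type hw
    card Live
}
```

If anyone has the real thing, or knows better settings for this card, please post it.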
I want to use Skype or another voice-over-IP application in addition to
audio processing. I find that with my setup I get clicks and can't hear
the other person.
I am doing some mixes where I need as much oomph from my system as
possible, especially from my sound card. When I check my IRQs, my sound
card is:
theone:/home/aamehl# cat /proc/interrupts
            CPU0        CPU1
  0:  190504469    36529323   IO-APIC-edge    timer                     0/33792
  2:          0           0   XT-PIC          cascade                   0/0
  7:          0           0   IO-APIC-edge    parport0                  0/0
 14:     852744      235050   IO-APIC-edge    ide0                      0/87773
 15:         14           0   IO-APIC-edge    ide1                      1/12
145:   15477006     1925905   IO-APIC-level   aic7xxx, aic7xxx, nvidia  0/2911
153:   15069558     1527074   IO-APIC-level   EMU10K1                   0/96632
161:    3021752      703622   IO-APIC-level   uhci_hcd, eth0            0/25374
NMI:          0           0
LOC:  227042766   227042753
ERR:          0
MIS:          0
153 and most of the other devices are not add-on cards but part of the
motherboard. I can't move my video card, and the ethernet is onboard.
Is all lost, or can I give more priority to the SBLive?
Any other suggestions are welcome as far as squeezing performance out
of my 128 MB of RAM.
Thanks
Aaron
Greetings all,
Apologies for cross-posting.
Please allow me to use this opportunity to bring to your attention an
upcoming contemporary multimedia art concert titled "0th Sound." The
evening-long event will take place in the Cincinnati area on May 29th
(Sunday) 2005 at 8pm, and will feature a portfolio of my latest works,
including a number of compositions created exclusively using GNU/Linux
tools. For more info, sound clips and other goodies, please visit:
http://meowing.ccm.uc.edu/~ico
For promotional and press-release materials please visit:
http://meowing.ccm.uc.edu/cgi-bin/ico/yabb/YaBB.cgi?board=News_id;action=display;num=1116212106;start=0
Many thanks!
Best wishes,
Ivica Ico Bukvic, composer & multimedia sculptor
http://meowing.ccm.uc.edu/~ico/
Hello everyone,
I'm looking to lend a hand on an audio project. I'm not a new programmer,
but I'm new to audio programming. Are there any good references out there
to help me get up to speed? How about essential libraries or system code
that I should learn?
Thanks a bunch,
Kevin