Hi,
Assembling piezo microphones, cardboard, foam, wood and a cymbal stand,
I have just made my first DIY electronic pads. Actually, they are more
like electronic percussion, since I will play them mostly with my hands.
I've found a few sites about DIY "edrums" (1), as well as some detailed
documentation about how to build a trigger-to-MIDI hardware controller
(2). But since I started this little project, I've been thinking about
plugging the piezo mikes directly into my soundcard inputs. My first
tests are very promising: the signal is clean, and faithfully indicates
how hard the pads are hit.
I am now about to code a software controller that will:
1 - either interpret the signal and produce MIDI (or OSC) events
2 - or interpret the signal and play samples by itself, in a
standalone manner (no MIDI)
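Whichever option wins, the signal-interpretation core would be the same: scan each captured block for a peak above a threshold and use the peak level as the hit velocity. A rough sketch of what I have in mind (function name and threshold are made up, not taken from any existing project):

```c
#include <math.h>
#include <stddef.h>

/* Scan one block of captured samples; if the peak exceeds `threshold`,
 * report a hit by returning the peak level (to be mapped to velocity),
 * otherwise return -1.0f. */
static float detect_hit(const float *buf, size_t n, float threshold)
{
    float peak = 0.0f;
    for (size_t i = 0; i < n; ++i) {
        float a = fabsf(buf[i]);
        if (a > peak)
            peak = a;
    }
    return (peak >= threshold) ? peak : -1.0f;
}
```

In the standalone variant, a positive return value would directly select and scale a preloaded sample; a retrigger-suppression window of a few milliseconds would also be needed so one stroke does not fire twice.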
I tend to prefer the second option, for the following reasons (criticism
welcome):
- I want to reduce latency _as_much_as_possible_ (keeping the rhythm is
hard enough ;-). To me, the MIDI layer introduces useless buffering.
- I'd like the whole thing to use minimal resources, so that I can run
it on old hardware (simple boxes to be carried in studios and on stage).
In this regard, a small tool built upon ALSA, which runs without any
sound server or MIDI layer, looks to me like the lightest solution.
- I don't want to use hardware expanders, such as the one that is built
into my soundcard. I want to play samples. With MIDI, I would then need
a tool such as TiMidity, which is pretty heavy according to my latest
tests. Jtrigger seems lighter (http://sparked.zadzmo.org/jtrigger), but
it relies on JACK: I like JACK, but it seems too heavy to me for this
specific job.
- With regard to clicks and other noises, I believe a lightweight
application running on a dedicated barebones Linux system is more
reliable than a realtime thread under a complex layer such as JACK,
especially on slow hardware.
Do these reasons seem sound and coherent to you? Do you have any other
advice? Is there already some software that could help me, or some
pieces I could reuse, for any part of this process?
Best regards
References :
(1) http://edrum.for.free.fr
(2) http://www.midibox.org/edrum
--
og
qjackLaM is a latency meter for JACK.
There are now two JACK clients: one only outputs, the other only
receives. This should have zero impact on JACK's graph ordering with
respect to the other clients.
Also new: qjackLaM measures ALL possible paths by itself and displays
the results in a table.
Get the source, CVS access and an FC3 binary via
http://developer.berlios.de/projects/qjacklam
have fun
Karsten
Version 0.2.1 of the Oscilloscope DSSI plugin is now available here:
http://www.student.nada.kth.se/~d00-llu/music_dssi.php?lang=en
It has fixes for some bugs spotted by Sean Bolton:
- The GUI will now actually quit when it receives a /quit command from
the plugin host
- There will be no /tmp/dssi_shm_tmpfile_* files left behind when the
plugin and GUI exit
- The Makefile uses pkg-config to get the compiler flags for DSSI, so
it should work now even if your DSSI header is installed in a
non-standard directory
Thanks to feedback from Sean Bolton and Chris Cannam on the DSSI
mailing list, I now also know that the plugin works with
ghostess-20050516 and the latest CVS version of Rosegarden (but not
with any released version of Rosegarden).
--
Lars Luthman
PGP key: http://www.d.kth.se/~d00-llu/pgp_key.php
Fingerprint: FCA7 C790 19B9 322D EB7A E1B3 4371 4650 04C7 7E2E
Hi!
I guess all of you know that allocating memory on the heap at runtime in
RT threads is not a good idea. This can easily be prevented with
user-space / application-managed memory pools.
But what about the stack? Even if you try to use the heap for local
variables as well, you (usually) won't be able to avoid using the stack,
due to ABI definitions; e.g. on most platforms function arguments are
passed on the stack. So what can you do to prevent physical pages from
being allocated for the stack at RT-critical time? Because this would
lead to the same problems as calling malloc(), new and co.
I read: "Although it is possible to give memory back to the system and
shrink a process's address space, this is almost never done." [1] That
sounds to me like once physical memory pages have been assigned to the
virtual stack range (due to stack growth), those physical pages are not
freed even if the virtual stack shrinks (that is, the stack pointer
increments). Is that true? And what does "almost" mean in this context?
Unfortunately I'm not into the mm internals of Linux yet.
If the above claim is true, then we could simply grow the stack for a
short time at the beginning of an RT application, e.g. by calling
alloca() (maybe not good - dangerous and not portable), by defining a
big array on the stack in a helper function, and/or by making that
helper function recurse to a certain depth, to defuse the stack danger.
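For illustration, the big-array variant could look like this; the 64 KiB figure is a guess, and mlockall() is what keeps the faulted-in pages resident afterwards:

```c
#include <stddef.h>
#include <sys/mman.h>

#define PREFAULT_STACK_BYTES (64 * 1024)  /* guess: enough for our RT code paths */

/* Touch a block of stack in a helper function so the kernel faults the
 * pages in now, before any RT-critical code runs.  Returns the number
 * of bytes touched.  `volatile` keeps the compiler from optimizing the
 * dead stores away. */
static size_t prefault_stack(void)
{
    volatile unsigned char buf[PREFAULT_STACK_BYTES];
    for (size_t i = 0; i < sizeof buf; i += 4096)  /* one write per page */
        buf[i] = 0;
    return sizeof buf;
}

/* Early in main(), before starting RT threads:
 *     mlockall(MCL_CURRENT | MCL_FUTURE);  // keep present and future pages resident
 *     prefault_stack();
 */
```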
Anybody with deep Linux mm knowledge around?
CU
Christian
[1] http://www.informit.com/articles/article.asp?p=173438
Hi all,
I've posted this one before on the LAU list but unfortunately I've not
received any response. Please accept my apologies for cross-posting, but I
am hoping that someone on LAD might be able to help me learn more on this
particular topic. I would greatly appreciate help in this matter!
At any rate, here's the question:
I have been messing with the ambisonic plugins provided in the CMT ladspa
collection. Specifically, I am looking into a simple implementation of
the B-format 1ch->4ch encoder. The problem is that I cannot figure out
how to use
this on a simple mono (non-encoded) sound input in order to control its
diffusion over 4 channels. When running it via Ardour and sending it to 4
separate busses via this plugin, even though the x, y, and z coordinates do
affect the output of the 4 channels, they do not conform to the expected
3,4,2,1 output (I presume these correspond to the 4 speaker outputs, but
I have doubts about this as well, so your help in understanding this
better would be most appreciated).
By now you can see that although I know a bit about ambisonics (mainly
the theory), I am quite a newbie in this area, but for what it's worth I
am eager to learn :-).
The pd set of plugins works pretty much as expected, but those are also
designed differently, likely with discrete 4-8 outputs that can then be
patched directly to the main outs. In the case of the LADSPA plugins, I
am not even sure whether what I am getting is an encoded version of a
4-channel stream that then needs to be decoded. Yet, when I tried
having:
mono sound-> b-format 1ch to 4ch encoder (LADSPA)->4ch to 4ch decoder
(LADSPA)
-> out
That gave me a bunch of garbage, with two out of the 4 channels
consistently clipping, so this is probably not the case either...
So, to conclude, I must not be using the CMT plugins properly, and would
therefore greatly appreciate it if someone could enlighten me. I am
already aware of the theory behind ambisonics; I am just a bit puzzled
as to how the CMT plugins could (if at all) be used with mono and/or
stereo non-encoded streams in order to spatialize them in real-time. I
would most appreciate a simple practical example of how to do this (if
it proves to be possible).
Thank you very much!
Best wishes,
Ico
Hi all. I'd like to receive Midi note events from
another Midi Player. But before I attempt to dive into
a wad of documentation, I'd like to know if it's even
possible under the following situation...
1) I don't have a hardware sequencer, so I use
timidity as an ALSA sequencer device and pmidi as a player...
timidity -iA -B8,8 -Os &
pmidi -p 128:0 <midi file>
2) That plays fine, but how to tap into the events
pmidi is sending to 128:0? I've tried modifying sample
code such as dump-alsa.c, but no output. Also tried
tips at "ALSA related stuffs", but was unable to get
"aseqview" to compile.
Any tips? Thank you in advance. - Sean
http://www.home.unix-ag.org/simon/files/dump-alsa.c
http://mitglied.lycos.de/iwai/alsa.html
Here is an oscilloscope DSSI plugin that I've been hacking on the last
few weeks:
http://www.student.nada.kth.se/~d00-llu/music_dssi.php?lang=en
It has two audio input ports and will display the two input signals as
two waves in the display. The trigger level and direction are
controllable, as well as the amplification and offset for each channel
and the time resolution.
Note that this plugin will NOT work with jack-dssi-host from version 0.9
of the DSSI package since that does not support audio input ports. It
will work with jack-dssi-host or Om from current CVS (I haven't tested
it with Rosegarden). Om 0.1.1 and earlier will not work because of an
optimisation that breaks this type of plugin.
The plugin includes a DSSIUIClient class that might be of interest to
plugin UI writers using libsigc++ and Glibmm - it handles all the
required communication with the plugin host and exposes public member
functions and signals for the UI -> host and host -> UI OSC commands
specified in the DSSI RFC. It can also handle the nasty business of
setting up an initial shared memory segment for the plugin and the UI
(the actual implementation of the shared memory handling is in a
separate C file, so it could be useful for people who don't use
libsigc++ and Glibmm, or even C++). Source code documentation is here
http://www.student.nada.kth.se/~d00-llu/plugins/ll-scope/dox/html/classDSSI…
and here
http://www.student.nada.kth.se/~d00-llu/plugins/ll-scope/dox/html/dssi__shm…
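As a generic illustration of that last point (this is plain POSIX shared memory, not the actual dssi_shm code, and the segment name is made up): setting up such a segment boils down to creating, sizing and mapping a named region that the other process can then open by the same name.

```c
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

/* Create (or open, if it already exists) a named shared memory segment
 * of `size` bytes and map it read/write.  Returns the mapping, or NULL
 * on failure. */
static void *create_shared_buffer(const char *name, size_t size)
{
    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
    if (fd < 0)
        return NULL;
    if (ftruncate(fd, (off_t)size) < 0) {
        close(fd);
        return NULL;
    }
    void *p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);  /* the mapping stays valid after the fd is closed */
    return p == MAP_FAILED ? NULL : p;
}
```

The second process calls the same function with the identical name and sees the same bytes; one side calls shm_unlink(name) once both are done with the segment.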
For more details about the plugin, read the README file in the source
package.
It's all GPL.
--
Lars Luthman
PGP key: http://www.d.kth.se/~d00-llu/pgp_key.php
Fingerprint: FCA7 C790 19B9 322D EB7A E1B3 4371 4650 04C7 7E2E
Paul Davis:
>
> >what is the 'easiest' MIDI interface to get working under the average
> >Linux kernel? what sort of experiences do folks have with getting
> >MIDI working (on a programming level) using API's such as (but not
> >limited to) ALSA, and MidiShare?
> >
> >for me so far, MidiShare seems to offer the most direct and usable
> >approach .. while ALSA is fraught with complexity and dependencies
> >which often seem out of control. i'd like to know what your
> >experience, as developers, has been with getting a working MIDI
> >subsystem under Linux ...
>
> i can speak up as someone who has avoided any interactions with the
> ALSA sequencer for years because of a similar perception. i recently
> converted Tim Thompson's KeyKit from its raw MIDI port-based
> implementation under Linux to one that uses the ALSA sequencer (and
> adding multiport support along the way). it was surprisingly easy and
> obvious, given a simple example (in this case, the "ALSA sequencer
> MIDI Port object" that someone contributed to Ardour). even more
> gratifying was comparing the code to the keykit implementations for
> win32 and CoreMidi. the ALSA sequencer one was smaller, more logical
> and less bogged down in details.
>
Yep, same experience as you. Before using the ALSA seq API myself,
it seemed horribly complicated. But it really wasn't.
Tip: the source for jack-rack by Bob Ham has some clean ALSA seq code
to look at.
--
Hi all,
I have a soundblaster live.
I am also using Debian/DeMuDi and have a dual PIII 500 MHz box.
I am looking for hints on how to optimize my card with alsa and jack
etc.
I once found on the ALSA site an example .asoundrc tailored for
the SBLive, but now when searching I don't see such an animal.
I want to use Skype or another voice-over-IP application in addition to
audio processing. I find that with my setup I get clicks during which I
can't hear the other person.
I am doing some mixes where I need as much oomph from my system as
possible, especially from my sound card. When I check my IRQs, I see
that my sound card is:
theone:/home/aamehl# cat /proc/interrupts
           CPU0       CPU1
  0: 190504469   36529323   IO-APIC-edge   timer                     0/33792
  2:         0          0   XT-PIC         cascade                   0/0
  7:         0          0   IO-APIC-edge   parport0                  0/0
 14:    852744     235050   IO-APIC-edge   ide0                      0/87773
 15:        14          0   IO-APIC-edge   ide1                      1/12
145:  15477006    1925905   IO-APIC-level  aic7xxx, aic7xxx, nvidia  0/2911
153:  15069558    1527074   IO-APIC-level  EMU10K1                   0/96632
161:   3021752     703622   IO-APIC-level  uhci_hcd, eth0            0/25374
NMI:         0          0
LOC: 227042766  227042753
ERR:         0
MIS:         0
IRQ 153 is my sound card; most of the other devices are not add-on
cards but part of the motherboard. I can't move my video card, and the
ethernet is onboard. Is all lost, or can I give more priority to the
SBLive?
Any other suggestions are welcome as far as squeezing performance out
of my 128 MB of RAM.
Thanks
Aaron