> I'm hoping that you're thinking of a realtime display, in which the
> peaks roll off to create a true waterfall effect.
Baudline (http://www.baudline.com) is a fantastic viewer that does an FFT
cascade. I've used it for a couple of years, and it is great for figuring out
how different sounds "work"; it has an oscilloscope-type display as well.
Cheers,
Jason Downer
Hello.
I finally started making my pet music project and realized I need a
drum synth to make some cool sounds. psindustrializer is good, but I also
need some TR-909-style sounds. I remember from my old windoze days I
used a nice piece of software called Stomper. Does anybody know of any
software for Linux with comparable capabilities? Or do we need to write
one?
Stomper does not work under wine :(
Thanks.
Hello.
I had a couple of articles on drum synths. Check
ftp://ftp.funet.fi/pub/sci/audio/devel/lad/drumsynth/
I built the circuit in a00*.jpg back when that article
was fresh. The b00*.jpg article mentions an earlier one.
I will check that out at the library.
Hmm.. I coded a drum synth for the Commodore VIC-20 back then.
The VIC provided an audio chip with three oscillators, a noise source,
and a common volume control, if I remember correctly. What I did was
modulate the oscillator pitch and volume parameters with fast and
accurate (compared to Basic) assembly code. The drum sounds were
assigned to keys. This was around 1984, inspired by Yamaha's digital RX
drum machines, not by analog drums.
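The same trick still works today. Here is a toy sketch (all parameters
invented, nothing VIC-specific): sweep an oscillator's pitch down
quickly while its volume decays, and you get a passable kick drum:

    import math
    import struct
    import wave

    RATE = 44100
    DUR = 0.4                                      # seconds
    n = int(RATE * DUR)

    samples = []
    phase = 0.0
    for i in range(n):
        t = i / float(RATE)
        freq = 50.0 + 100.0 * math.exp(-t * 20.0)  # pitch sweeps 150Hz -> 50Hz
        amp = math.exp(-t * 8.0)                   # volume decays away
        phase += 2.0 * math.pi * freq / RATE
        samples.append(int(32767 * amp * math.sin(phase)))

    with wave.open("kick.wav", "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)                          # 16-bit
        w.setframerate(RATE)
        w.writeframes(struct.pack("<%dh" % n, *samples))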
Juhana
-- oops, the first one was sent as a reply, here's the whole thing as a
new thread.
frustrated by the poor implementation of the jack bindings for python
(pyjack), i wrote my own in native python using ctypes.
the first test client mixed a 440hz sine wave using native python lists,
and the cpu usage was about ~11%.
i reimplemented the sine generator with numerics, and got it down to
~2%.
i believe that, considering the overhead of the python implementation,
that result isn't too bad, and maybe allows for more than just
prototyping.
i've attached the jack wrapper with the test client included, for those
who are interested. it's not entirely wrapped and lacks some functionality.
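for flavor, here is a minimal sketch of the ctypes approach against the
standard libjack C API (this is not the attached wrapper, just the same
idea; a pure-python per-sample loop like the one below is exactly what
costs the ~11%):

    import ctypes
    import math
    import time

    libjack = ctypes.CDLL("libjack.so.0")

    # declare the C signatures we use, so pointers survive on 64-bit
    libjack.jack_client_open.restype = ctypes.c_void_p
    libjack.jack_client_open.argtypes = [ctypes.c_char_p, ctypes.c_int,
                                         ctypes.c_void_p]
    libjack.jack_get_sample_rate.restype = ctypes.c_uint32
    libjack.jack_get_sample_rate.argtypes = [ctypes.c_void_p]
    libjack.jack_port_register.restype = ctypes.c_void_p
    libjack.jack_port_register.argtypes = [ctypes.c_void_p, ctypes.c_char_p,
                                           ctypes.c_char_p, ctypes.c_ulong,
                                           ctypes.c_ulong]
    libjack.jack_port_get_buffer.restype = ctypes.c_void_p
    libjack.jack_port_get_buffer.argtypes = [ctypes.c_void_p, ctypes.c_uint32]

    PROCESS_CB = ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_uint32,
                                  ctypes.c_void_p)
    libjack.jack_set_process_callback.restype = ctypes.c_int
    libjack.jack_set_process_callback.argtypes = [ctypes.c_void_p, PROCESS_CB,
                                                  ctypes.c_void_p]
    libjack.jack_activate.restype = ctypes.c_int
    libjack.jack_activate.argtypes = [ctypes.c_void_p]

    client = libjack.jack_client_open(b"ctypes_sine", 0, None)
    rate = libjack.jack_get_sample_rate(client)
    port = libjack.jack_port_register(client, b"out",
                                      b"32 bit float mono audio",  # default audio type
                                      2, 0)                        # 2 == JackPortIsOutput
    step = 2.0 * math.pi * 440.0 / rate
    phase = [0.0]  # mutable cell, updated from inside the callback

    def process(nframes, arg):
        buf = ctypes.cast(libjack.jack_port_get_buffer(port, nframes),
                          ctypes.POINTER(ctypes.c_float))
        for i in range(nframes):               # the slow, pure-python part
            buf[i] = 0.2 * math.sin(phase[0])
            phase[0] += step
        return 0

    cb = PROCESS_CB(process)  # keep a reference so it is not collected
    libjack.jack_set_process_callback(client, cb, None)
    libjack.jack_activate(client)
    time.sleep(10)  # connect the "out" port to your speakers meanwhile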
--
-- leonard "paniq" ritter
-- http://www.mjoo.org
-- http://www.paniq.org
>From: "Levi D. Burton" <ldb(a)puresimplicity.net>
>
>does the idea of documenting various lad design patterns make
>sense to anyone?
Such "LAD Gems" doc would be much needed here too.
(For audio dsp gems, take a look at "musicdsp.org".)
I would appreciate if somebody would take a look at
Ardour and document best gems found there. E.g., the GUI
and audio thread separation and start up sequences.
Likewise for Linuxsampler and one of its GUI frontends.
Juhana
http://www.notam02.no/arkiv/src/
ABOUT
-----
jack_capture is a small, simple program for capturing whatever
sound is going out to your speakers into a file.
This is the program I always wanted to have for jack, but no
one had made. So here it is.
USAGE
-----
jack_capture [-f filename] [ -b bitdepth ] [-c channels] [ -B bufsize ]
Filename is by default autogenerated to something like "jack_capture_<date+exact_time>.wav"
Bitdepth is by default FLOAT.
Channels is by default 2.
Bufsize is by default 262144.
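For example, to capture four channels into a named file at 16-bit
(assuming -b accepts an integer bit depth; only the options listed
above are used):
  jack_capture -f session.wav -b 16 -c 4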
ACKNOWLEDGMENT
--------------
Mostly based on the jackrec program in the jack distribution,
made by Paul Davis and Jack O'Quin. Automatic filename generation
code taken from the timemachine program by Steve Harris.
--
Hi folks.
This message does get into driver specifics to an extent, but I'm mostly
coming to the list for advice on how to find out what "mystery codec" is
being used.
I've been putting some of the tiny bit of actual free time I get during
this winter holiday :) into trying to get a libusb-based driver going
for my little Olympus VN480PC. It's a digital voice recorder that comes
with a USB cable for transferring voice recordings to a computer, and
the accompanying software is Windows-only.
I'd very much like to be able to do this transfer using Linux instead of
needing Windows, though.
I think I -might- have the protocol mostly figured out, using a USB
sniffer on the Windows side, but that may prove to be the easy part of
this project. :-S
What I'm faced with now is 10K of data which I'm assuming just about
has to be the voice data in some format or other - but it isn't clear
what format it's in.
Toward trying to figure out the format, I've:
1) Computed the difference in the lengths of the apparently-voice data
transferred via USB, and the "data" section of the resulting .wav file.
They do not differ by a constant - so it's not just a matter of tacking
on a header.
2) Computed the quotient of the lengths of the apparently-voice data
transferred via USB, and the "data" section of the resulting .wav file.
They do not differ by a constant factor - so it's not just a matter of,
for example, converting shorts to floats sample by sample, in which case
the lengths should differ by a constant factor of 2.
3) Symlinked a file containing the data transferred via USB, to all of
the file extensions known to sox, and attempted to use sox to convert
those files to .wav. None of the conversions succeeded.
4) Wrote a small python program to treat the data transferred via USB as
the "data" section of a .wav file, and created a series of .wav files
with all format types from 0 to 999 (a sketch of that kind of script
follows this list). sndfile-info did not give errors for 9 of these, but
none of them look or sound right in gnusound.
5) Googled for Olympus and voice/audio codecs, to see if there is a
proprietary one they favor. It appears they were involved in the design
of the "DSS" format.
6) Downloaded "DSS Player Lite" from Olympus' web site, and copied the
data transferred via USB to "hi.dss". However, DSS Player Lite did not
recognize the file format.
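For reference, here's a sketch of the kind of brute-force script from
(4). The sample rate, channel count, and bit depth are guesses, and the
file names are made up:

    import struct

    raw = open("usb_payload.bin", "rb").read()

    def write_wav(fmt_tag, data, rate=8000, channels=1, bits=16):
        # build a minimal RIFF/WAVE container around the raw payload,
        # varying only the format tag in the "fmt " chunk
        block_align = channels * bits // 8
        fmt = struct.pack("<HHIIHH", fmt_tag, channels, rate,
                          rate * block_align, block_align, bits)
        body = (b"WAVE"
                + b"fmt " + struct.pack("<I", len(fmt)) + fmt
                + b"data" + struct.pack("<I", len(data)) + data)
        with open("try_%03d.wav" % fmt_tag, "wb") as f:
            f.write(b"RIFF" + struct.pack("<I", len(body)) + body)

    for tag in range(1000):  # all format types from 0 to 999
        write_wav(tag, raw)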
Does anyone have any thoughts about what else I might try to see what
format this data is in, and/or convert it to a known format?
I've got detailed documentation of most of what I've done so far on this
project at http://dcs.nac.uci.edu/~strombrg/VN480PC/ The page includes
some .wav's, a binary file I'm assuming is voice data in a mystery
codec, full USB sniffer logs, and so on.
Does anyone have any suggestions - especially toward how to convert that
"likely voice data" in the USB Sniff to some sort of known and
supported-on-linux codec?
Thanks!
Hi!
Seems like the father of FM synthesis has joined Wikipedia. Some of you
guys might care to take a brief look at the FM synthesis page, just once
in a while, so it won't get vandalised again?
--
mvh // Jens M Andreasen
Florian Schmidt writes:
> I further assume that the alsa seq event system
> is used
This is true of Rosegarden,
> and midi events are not queued
> for future delivery but always delivered immediately.
but this isn't -- Rosegarden always queues events
from a non-RT thread and lets the ALSA sequencer
kernel layer deliver them. (Thru events are delivered
directly, with potential additional latency because of
the lower priority used for the MIDI thread.) In
principle this should mean that only the priority of
the receiving synth's MIDI thread is significant for
the timing of sequenced events. We also have a
mechanism to compensate for gradual drift between
the MIDI timing source (kernel timers or RTC) and
soundcard clock, when synchronising to audio, by
adjusting the sequencer skew factor. (This happens
to be similar to the mechanism for slaving to MTC,
which is handy.)
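As a toy illustration of that drift compensation (nothing here is
Rosegarden code; the names are invented): compare elapsed time on the
MIDI timing source against the soundcard clock and derive a skew factor
to apply to the queued event times:

    def skew_factor(timer_elapsed_s, audio_frames_elapsed, sample_rate):
        # >1.0 means the timer runs slow relative to the card, so
        # queued timestamps must be stretched to stay in sync
        audio_elapsed_s = audio_frames_elapsed / float(sample_rate)
        return audio_elapsed_s / timer_elapsed_s

    # e.g. after 10s on the timer the card has consumed 441441 frames
    # at 44100Hz -> skew ~= 1.001, i.e. stretch timestamps by 0.1%
    print(skew_factor(10.0, 441441, 44100))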
In my experience this is all a long way from
foolproof. The most common problems for users
seem to be:
- ALSA sequencer uses kernel timers by default and
of course they only run at 100 or 250Hz in many
kernels.
- ALSA sequencer can sync to RTC, but the
associated module (snd-rtctimer) appears to hang
some kernels solid when loaded or used. I don't have
much information about that, but I can probably find
out some more.
- ALSA sequencer can sync to a soundcard clock,
but this induces jitter when used with JACK and has
caused confusion for users who find themselves
inadvertently sync'd to an unused soundcard (the
classic "first note plays, then nothing" symptom).
The biggest advantage of course is not having to run
an RT MIDI timing thread. My impression is that this
aspect of MusE (which does that, I think) causes
as many configuration problems for its users as using
ALSA sequencer queue timers does for Rosegarden's.
Any more thoughts on this?
Chris
On Friday 30 December 2005 17:37, Werner Schweer wrote:
> The ALSA seq api is from the ancient time when no realtime threads were
> available in linux. Only a kernel driver could provide usable
> midi timing. But with the introduction of RT threads the
> ALSA seq api is obsolete IMHO.
I don't agree with this statement. IMHO, a design based on raw MIDI ports
used like simple Unix file descriptors, with every user application
implementing its own event scheduling mechanism, is the ancient and
traditional way, and it is what should be considered obsolete now in
Linux, since we have the advanced queueing capabilities provided by the
ALSA sequencer.
You guys are talking here about MIDI timing, considering only the event
scheduling point of view, as if Rosegarden or MusE were simple MIDI players.
Of course, playing beats on time is a required feature. But my bigger concern
about MIDI timing issues is when you are *recording* events. Here is where
ALSA queues, providing accurate timestamps for incoming events, are so good.
It could be the absolute winner if problems like audio synchronization
and slave MTC synchronization were solved likewise.
Regards,
Pedro