Figured this might be of some interest to someone around here - and either
way, it's all done on Linux, and it will be released on Linux. ;-)
(Original announcement post at the end!)
-------------------------------------------------------------------------
The whole game engine will probably go Free/Open Source eventually; older
versions of parts of it already are. The synth engine will be JACKified and
open sourced as soon as I get around to it! Going to support JACK in the game
as well, as I use it on my devsystem all the time anyway.
No idea if anyone will ever understand or care for this strange beast of a
sound engine, but anyway... :-D For your amusement, here's the "lead synth"
used for the theme and some other melodic features in the song:
CuteSaw(P V=1)
{
    !P0 sp, +P sp, w saw, p P, a 0, !er .05, !vib 0, !tvib .01
    .rt wg (V + a - .001) {
        sp (vib * 6 + rand .01)
        12 { -sp vib, +p (P - p * .8), +a (V - a * er), d 5 }
        12 { +sp vib, +p (P - p * .8), +a (V - a * er), d 5 }
        +vib (tvib - vib * .1)
    }
    a 0, d 5
    1(NV) {
        V NV
        if NV {
            vib .005, tvib .005, er .05, wake rt
        } else {
            tvib .02, er .02
        }
    }
    2(NP) { P (P0 + NP), p P }
}
Yeah, I was in a neurotically minimalistic kind of mood when I designed that
language... But, it Works For Me(TM)! Less typing ==> quicker editing. ;-)
(The original version of ChipSound, with a more assembly-like scripting
language, was less than 2000 lines of C code. It's slightly below 4500 lines
now, compiler included.)
When playing a note, a voice with its own VM is started, and set to run this
script. The VM runs in unison with the voice, alternating between audio
processing and code execution. Thus, timing is sub-sample accurate, allowing
the implementation of hardsync, granular synthesis and the like without
specific engine support.
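As a rough illustration of the idea (a sketch in Python, not ChipSound's actual implementation, which is C), a generator can play the role of the script VM: it yields how many samples to render before the next script step, so script actions like a hardsync phase reset land on exact sample boundaries. The script command name "reset_phase" and everything else here is made up for the example.

```python
# Hypothetical sketch: a voice whose script "VM" runs interleaved with
# audio rendering. The generator yields (command, samples_to_wait), so
# script steps execute at exact sample positions -- the essence of
# sample-accurate hardsync without special engine support.
SAMPLE_RATE = 44100

def saw_script():
    """Toy 'script': retrigger (hardsync) the oscillator every 100 samples."""
    while True:
        yield ("reset_phase", 100)   # render 100 samples, then reset phase

def run_voice(script, freq, n_samples):
    out = []
    phase = 0.0
    step = freq / SAMPLE_RATE
    vm = script()
    cmd, wait = next(vm)
    while len(out) < n_samples:
        # Render audio until the next scheduled script step.
        chunk = min(wait, n_samples - len(out))
        for _ in range(chunk):
            out.append(2.0 * phase - 1.0)   # naive sawtooth in [-1, 1)
            phase = (phase + step) % 1.0
        wait -= chunk
        if wait == 0 and len(out) < n_samples:
            if cmd == "reset_phase":
                phase = 0.0                 # the hardsync point
            cmd, wait = vm.send(None)
    return out

samples = run_voice(saw_script, 220.0, 1000)
```

Because the script decides the wait in samples, the same loop could just as well schedule grains for granular synthesis.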
Timing commands can deal in milliseconds or musical ticks, making it easy to
implement rhythm effects, or even to write the music itself in the same
language, as I've done here.
Voices (microthreads) are arranged in a tree structure, where each voice can
spawn as many sub-voices as it needs. Messages can be sent to these voices
(broadcast, or to a single voice), allowing full real time control.
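The voice tree and message delivery can be sketched like this (illustrative Python only; the class and method names are invented, not ChipSound's API):

```python
# Hypothetical sketch of the voice tree: each voice keeps a list of
# sub-voices, and a message can be delivered to one voice or broadcast
# down the whole subtree.
class Voice:
    def __init__(self, name):
        self.name = name
        self.children = []
        self.inbox = []          # messages waiting for this voice's VM

    def spawn(self, name):
        """Create a sub-voice under this voice."""
        child = Voice(name)
        self.children.append(child)
        return child

    def send(self, msg):
        """Deliver a message to this voice only."""
        self.inbox.append(msg)

    def broadcast(self, msg):
        """Deliver a message to this voice and its entire subtree."""
        self.send(msg)
        for child in self.children:
            child.broadcast(msg)

root = Voice("song")
lead = root.spawn("lead")
drums = root.spawn("drums")
lead.send(("note", 60))          # single-voice message
root.broadcast(("volume", 0.5))  # reaches root, lead and drums
```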
The oscillators currently available are a wavetable oscillator (a mipmapped,
Hermite-interpolating sample player supporting arbitrary waveform lengths)
and "SID style" S&H noise. It's possible to use arbitrary sampled sounds and
waveforms, but so far I've only been using the pre-defined sine, triangle,
saw and square waveforms, and the noise generator.
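For reference, 4-point, third-order Hermite interpolation of the kind such a sample player typically uses between sample points looks like this (a generic sketch, not ChipSound's actual code):

```python
# 4-point, third-order Hermite (Catmull-Rom) interpolation, as commonly
# used in wavetable/sample-player resampling. y0..y3 are four consecutive
# samples; x in [0, 1) is the fractional position between y1 and y2.
def hermite4(x, y0, y1, y2, y3):
    c0 = y1
    c1 = 0.5 * (y2 - y0)
    c2 = y0 - 2.5 * y1 + 2.0 * y2 - 0.5 * y3
    c3 = 0.5 * (y3 - y0) + 1.5 * (y1 - y2)
    return ((c3 * x + c2) * x + c1) * x + c0
```

At x = 0 it returns y1 exactly, at x = 1 it returns y2, and for linear input it reproduces the line exactly, which is what makes it attractive for audio resampling.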
-------------------------------------------------------------------------
Kobo II: Another song WIP
-------------------------
"Yesterday, I started playing around with a basic
drum loop made from sound effects from the game.
The usual text editor + ChipSound exercise, using
only basic waveforms. I sort of got caught in the
groove, and came up with this: [...]"
Full story:
http://olofsonarcade.com/2011/11/06/kobo-ii-another-song-wip/
Direct download:
http://www.olofson.net/music/K2Epilogue.mp3
--
//David Olofson - Consultant, Developer, Artist, Open Source Advocate
.--- Games, examples, libraries, scripting, sound, music, graphics ---.
| http://consulting.olofson.net | http://olofsonarcade.com |
'---------------------------------------------------------------------'
Paul Davis:
> On Wed, Nov 2, 2011 at 12:23 AM, Iain Duncan <iainduncanlists(a)gmail.com>
> wrote:
>> Hi, I'm working on a project that I intend to do using the STK in a
>> callback style, but am hoping I can prototype the architecture in Python
>> until I've figured out the various components and their responsibilities
>> and dependencies. Does anyone know of any kind of Python library (or
>> method?) that would let me simulate the way callback-based STK apps using
>> RtAudio work? I.e., I want to have a Python master callable that gets
>> called once per audio sample and has a way of sending its results to the
>> audio subsystem.
>>
>> I've found a bunch of Python audio libs, but it doesn't seem like they
>> work that way. Maybe I'm missing something?
>
> the obvious choices would be the python wrappers for JACK and/or
> PortAudio.
>
> note, however, that the chances of using Python for per-sample
> processing at low latency without intense dropouts are low.
>
Actually, since Python has an incremental garbage collector, in theory it
should only require replacing the memory allocator with a real time one to
make Python pretty hard real time capable. I think.
I also think I remember someone using Python for real time, sample by
sample signal processing in Pd...
Efficiency is another story, though.
I looked into this about five years ago, but didn't get too far. I'm
wondering if anyone on here has experience splitting apps up into:
- a realtime, low latency engine in C++ using per-sample callback audio
(RtAudio, JACK or PortAudio)
- user interfaces and algorithmic composition routines in Python that
communicate with the engine over shared memory or a queue
Basically, I want to be able to do the GUI and data transforming code in
Python whenever possible, and to allow plugins that work on the data to be
written in Python.
I'm hoping to arrive at some sort of design that ultimately lets the engine
act as an audio server with multiple user interface clients, possibly not
even on the same machines, but definitely not on the same cores. If anyone
has tips, war stories, suggestions on reading or projects to look at, I'd
love to hear them.
Thanks
Iain
Hi!
Well, the subject is the more technical way of saying "cutting my favourite
show from a radio stream". :D
I guess the usual approach is to simply define the start and end times and
trim the data to that range.
Pretty straightforward, but in my case neither the start time nor the show
length is known exactly. My intention is to analyse the data (which
isn't necessarily streaming) to detect the signature of the show in some way.
Now the case-specific details:
The show I'm interested in is readily recognizable on the waveform already.
(see this pic: http://home.venta.lv/s9_smedin_j/files/waveform.png )
The show has an "intro", so I can detect that with a "sliding" correlator.
The naive algorithm would be to take one or more sample chunks of this
"intro" and detect a correlation spike while "sliding" and testing them
over some range of the larger audio data.
I have already started implementing this, but haven't tested it yet.
Maybe there is a good implementation of this already? And does a phase
shift between the sampling positions in the test chunk and the data skew
the results significantly?
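The naive sliding-correlator idea can be sketched like this (pure Python and quadratic-time, fine as an illustration but far too slow for hours of audio, where an FFT-based correlation would be used instead; all names here are invented):

```python
# Illustrative sliding correlator: slide a known "intro" template over the
# recording and look for the offset with the highest normalized correlation.
# Normalizing by the energies of both windows makes the peak score ~1.0 for
# an exact match, largely independent of level.
import math

def normalized_correlation(a, b):
    ea = math.sqrt(sum(x * x for x in a))
    eb = math.sqrt(sum(x * x for x in b))
    if ea == 0.0 or eb == 0.0:
        return 0.0
    return sum(x * y for x, y in zip(a, b)) / (ea * eb)

def find_intro(data, template):
    """Return (best_offset, best_score) of template within data."""
    best = (0, -1.0)
    for i in range(len(data) - len(template) + 1):
        score = normalized_correlation(data[i:i + len(template)], template)
        if score > best[1]:
            best = (i, score)
    return best

# Toy check: embed the template at offset 300 in otherwise silent data.
template = [math.sin(2 * math.pi * 440 * t / 8000) for t in range(200)]
data = [0.0] * 300 + template + [0.0] * 300
offset, score = find_intro(data, template)
```

On sub-sample phase shift: a fractional-sample offset does lower the peak somewhat, but with a template thousands of samples long the spike should still stand well clear of the background.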
The end (or duration) of the show is a bit more complicated, since it is
highly variable. But as can be seen in the picture, the end is quite
visible on the waveform, and the show has a "spikey" waveform, since it's
all talking. I have only very vague ideas how to detect this with a
program, though. Any ideas on this?
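One rough approach for the end: compute a short-time RMS envelope and declare the show over once the level stays below a threshold for some minimum duration (the speech is "spikey", so the quiet run has to be required to be longer than pauses between words). A sketch, with window length and threshold as made-up values to be tuned against real recordings:

```python
# Sketch: find where a sustained quiet region starts, via a short-time
# RMS envelope and a minimum run of below-threshold windows.
import math

def rms_envelope(samples, window):
    env = []
    for i in range(0, len(samples) - window + 1, window):
        chunk = samples[i:i + window]
        env.append(math.sqrt(sum(x * x for x in chunk) / window))
    return env

def find_end(samples, window=1000, threshold=0.01, min_quiet_windows=5):
    """Return the sample index where sustained quiet begins, or None."""
    env = rms_envelope(samples, window)
    run = 0
    for i, level in enumerate(env):
        run = run + 1 if level < threshold else 0
        if run >= min_quiet_windows:
            return (i - min_quiet_windows + 1) * window
    return None

# Toy check: 8000 samples of "speech" followed by near-silence.
talk = [math.sin(0.3 * n) for n in range(8000)]
quiet = [0.0] * 8000
end = find_end(talk + quiet)
```

For a radio stream the "quiet" region after the show may actually be the next programme, so in practice the threshold might be better expressed relative to the show's own average level.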
Maybe there are good programs, scripts or LADSPA/LV2 plugins that already
do something like this, which I could use?
Thanks in advance,
JohnLM
Sounds like there are a number of options for me to look at, and I'll spend
some time getting the communication right. I'm wondering if anyone can give
me a suggestion on how I can start experimenting with object design while
*deferring* the question of getting communication *right* until I've
studied the options. I'd like to come up with a well encapsulated API, and
wondered if anyone has ideas for what would 'sort of work' for now while
I'm writing experimental code, but would still be layered properly, so that
when it's time to examine the threading and timing issues in detail, I can.
Or maybe this isn't possible?
For example, what kind of queuing system would one suggest for just getting
started, where occasional blocking is OK? Does anyone use Boost queues, or
is this strictly a roll-your-own endeavor?
Is planning on sending messages with an OSC protocol realistic as a
starting point?
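One "sort of works for now" option is Python's standard queue.Queue between a GUI thread and an engine thread: it blocks and allocates, so it is not RT-safe, but it lets the API boundary take shape before being swapped for a lock-free ring buffer or OSC. A sketch with made-up message names:

```python
# Prototype message plumbing between a "GUI" thread and an "engine" thread
# using queue.Queue. Unsuitable for a real-time audio thread (blocking,
# allocating), but fine for sketching the command API; the queue can later
# be replaced by a lock-free ring buffer or OSC without changing callers.
import queue
import threading

commands = queue.Queue()   # GUI -> engine
replies = queue.Queue()    # engine -> GUI

def engine():
    """Stand-in for the C++ engine loop: drain commands, update state."""
    state = {}
    while True:
        msg = commands.get()
        if msg[0] == "quit":
            replies.put(state)   # report final state back, then exit
            return
        if msg[0] == "set":
            _, key, value = msg
            state[key] = value

t = threading.Thread(target=engine)
t.start()
commands.put(("set", "volume", 0.8))
commands.put(("quit",))
t.join()
final = replies.get()
```

The important part is that the rest of the code only ever sees put()/get() of small tuples, which maps naturally onto OSC messages later.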
thanks
iain
Hi, I'm working on a project that I intend to do using the STK in a
callback style, but am hoping I can prototype the architecture in Python
until I've figured out the various components and their responsibilities
and dependencies. Does anyone know of any kind of Python library (or
method?) that would let me simulate the way callback-based STK apps using
RtAudio work? I.e., I want to have a Python master callable that gets
called once per audio sample and has a way of sending its results to the
audio subsystem.
I've found a bunch of Python audio libs, but it doesn't seem like they
work that way. Maybe I'm missing something?
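For prototyping, one way to sidestep audio I/O entirely is to drive the per-sample callback offline and write the result to a WAV file with the standard library. That exercises the object design before RtAudio or JACK enter the picture. A sketch (all names here are illustrative, not any library's API):

```python
# Offline stand-in for an RtAudio-style per-sample callback: call a Python
# "tick" callable once per sample and write the output to a 16-bit mono
# WAV file. No real-time behaviour at all -- purely for prototyping.
import math
import struct
import wave

SAMPLE_RATE = 44100

def render(tick, n_samples, path):
    """Call tick(n) once per sample; write the result to path.

    Returns the number of frames written."""
    frames = bytearray()
    for n in range(n_samples):
        s = max(-1.0, min(1.0, tick(n)))          # clip to [-1, 1]
        frames += struct.pack("<h", int(s * 32767))
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(SAMPLE_RATE)
        w.writeframes(bytes(frames))
    return len(frames) // 2

def sine_tick(n):
    """Toy per-sample callback: a 440 Hz sine at half amplitude."""
    return 0.5 * math.sin(2 * math.pi * 440 * n / SAMPLE_RATE)

nframes = render(sine_tick, SAMPLE_RATE, "test_tone.wav")  # one second
```

Once the callback object works offline, swapping render() for a real audio callback (JACK or PortAudio wrappers, per-block rather than per-sample) is a localized change.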
thanks!
Iain
QasMixer version 0.15 is now available.
QasMixer is an ALSA mixer with a size adaptive Qt GUI.
Changes:
* New simpler device selection view
* New user setting: Mixer device on startup
* ALSA configuration view moved to a separate application: QasConfig
* Localizations moved to a separate package: qasmixer-l10n
* Default fallback translation
* Code merges and cleanups
* Version code shortened to two numbers instead of three
Homepage with more information
http://xwmw.org/qasmixer
Project page
http://sourceforge.net/projects/qasmixer/
Happy volume changing!
-- Sebastian Holtermann