Hi all,
I'm working on an open source (well, public domain) computer music system for Python
called Pippi. It's a library for non-realtime composition: Python scores. It's currently
in the 4th beta of the 2.0 series -- the 1.0 series was all Python 2 and built around
bytestrings, while 2.0 targets Python 3 and is hopefully a bit more Pythonic, using
numpy-backed classes that overload some operators and provide a set of affordances for
transforming buffers and processing or synthesizing sounds.
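Roughly, a score looks something like this (a minimal sketch -- the method names are
illustrative and may not match the current API exactly, so check the docs/source):

    from pippi import dsp

    # load a couple of soundfiles into numpy-backed buffers
    guitar = dsp.read('guitar.wav')
    voice = dsp.read('voice.wav')

    # operator overloading: & mixes two buffers together
    both = guitar & voice

    # transform the mix -- slow it to half speed and write it out
    out = both.speed(0.5)
    out.write('out.wav')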
I'm also working on a live, "just in"-realtime system called astrid (meaning sounds are
processed asynchronously and mixed in realtime), meant for performance, live-coding,
development, etc. It's very much alpha in its current form, and is a rewrite of features
that were built into the 1.0 series...
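astrid's internals are their own story, but the general shape of "asynchronous processing
mixed in realtime" can be sketched with asyncio -- every name below is made up for
illustration, none of it is astrid code. Buffers get rendered as fast as they can be
computed, and each one is handed to the realtime mix as soon as it's finished:

    import asyncio
    import numpy as np

    SR = 44100

    async def render(freq, length=0.5):
        """Pretend 'offline' render: compute a whole buffer off the realtime path."""
        t = np.arange(int(SR * length)) / SR
        buf = 0.2 * np.sin(2 * np.pi * freq * t)
        await asyncio.sleep(0)  # yield; real rendering would take actual time
        return buf.astype('float32')

    async def player(queue):
        """Stand-in for the realtime mixer: pull finished buffers as they arrive."""
        while True:
            buf = await queue.get()
            if buf is None:
                break
            # a real system would mix this into the soundcard stream here
            print(f'mixing {len(buf)} frames, peak {abs(buf).max():.2f}')

    async def main():
        queue = asyncio.Queue()
        mixer = asyncio.create_task(player(queue))
        # fire off several renders concurrently; each lands in the mix when done
        for buf in await asyncio.gather(*(render(f) for f in (220, 330, 440))):
            await queue.put(buf)
        await queue.put(None)
        await mixer

    asyncio.run(main())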
Anyway! Here's a snippet of two instruments running in astrid:
https://www.dropbox.com/s/5fbglgx7ts1656l/asmallthing.mp3?dl=0
One is just a simple granular smoosh of a recording of a particularly harmonic water
pump. The other is a 2D pulsar oscillator using a stack of wavetables derived from a
convolution of some vocal and guitar recordings my friend made, occasionally also
processed with waveset-based procedures (ripped from Trevor Wishart's Audible Design
book) and some other time-domain processes.
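If "granular smoosh" is unfamiliar: the idea is just chopping a source into short
windowed grains and scattering overlapping copies of them across an output buffer.
A bare-bones numpy version of the concept (not how pippi or astrid actually do it):

    import numpy as np

    SR = 44100

    def graincloud(src, length=10.0, grainlength=0.06, density=200, seed=1):
        """Scatter short Hann-windowed grains from src across an output buffer."""
        rng = np.random.default_rng(seed)
        out = np.zeros(int(SR * length))
        glen = int(SR * grainlength)
        window = np.hanning(glen)
        for _ in range(int(density * length)):
            start = rng.integers(0, len(src) - glen)  # where to read a grain
            pos = rng.integers(0, len(out) - glen)    # where to drop it
            out[pos:pos + glen] += src[start:start + glen] * window
        peak = np.abs(out).max()
        return out / peak if peak > 0 else out        # normalize

    # stand-in source; in practice this would be the water pump recording
    t = np.arange(SR * 2) / SR
    src = np.sin(2 * np.pi * 110 * t) * np.exp(-t)
    cloud = graincloud(src)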
This is pippi:
https://github.com/luvsound/pippi
I've been working on this for a long time, but the documentation sucks... I'm more
than happy to field questions from anyone kind enough to dive in, and I really do plan
to work on the docs soon!
Here's a start:
http://htmlpreview.github.io/?https://github.com/luvsound/pippi/blob/master…
Erik