- sorry for crossposting -
Hi all
The winners of the Hydrogen Drumkit Contest have just been announced on the
Hydrogen site!
Check out the announcement at
<http://www.hydrogen-music.org/hcms/node/2327> and listen to the demo
songs of the submitted drumkits.
A big thank you to all the people who submitted a drumkit and to the jury!
Enjoy :-)
The Hydrogen team
As you may know from the other sampling thread on this list, I have written several emails to sample developers over the last two days and suggested CC BY-SA as a sampling license.
Clearly the intention of the sample developers (they all state it in their current licenses) is that the resulting music is not covered by the samples' license, i.e. it is not considered a derived work.
But what about Creative Commons ShareAlike? Is music a derived work of the samples under CC BY-SA?
If yes, I made a dumb error which could have a negative impact on further talks with those developers, since I was obviously talking about things I didn't know enough about.
Also, if yes: is there even a pre-packaged license that allows the following?
- Music or other resulting works are not derived works, and the license conditions do not apply to the music itself.
- Sharing the sample packages is allowed.
- Editing, repackaging (e.g. sf2 -> sfz), etc. is allowed.
- Selling the sample package itself is allowed or not (two different flavours).
Nils
For a few years I have used an Atom UMPC as my "mobile development
terminal", allowing me to build and run code at much lower performance
than I would expect from a "real" system but good enough for working on
new bits of code and finding performance bottlenecks :)
I am migrating over to a Nexus 7 with a Debian chroot environment and
tightvnc for display. This is working great for most things. The last
bit is to be able to run jackd so I can actually test audio parts of the
programs I work on the most.
I don't care about latency, or even xruns. This isn't for production
use. I just want to be able to exercise the jack client library, make
connections, and hopefully get SOME audio output. The current problem I
am having is that the SysV shared memory API is not supported.
$ jackd -d alsa
jackdmp 1.9.9
Copyright 2001-2005 Paul Davis and others.
Copyright 2004-2012 Grame.
jackdmp comes with ABSOLUTELY NO WARRANTY
This is free software, and you are welcome to redistribute it
under certain conditions; see the file COPYING for details
JACK semaphore error: semget creation (Function not implemented)
jack_shm_lock_registry fails...
No access to shm registry
Failed to open server
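For what it's worth, the failure is easy to reproduce outside of jackd. Here is a
minimal sketch of my own (nothing jackd-specific, just the same SysV semaphore
call) that checks whether the kernel implements semget at all:

/* semtest.c - does this kernel implement SysV semaphores at all?
 * build: gcc -o semtest semtest.c */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/sem.h>

int main(void)
{
    int id = semget(IPC_PRIVATE, 1, IPC_CREAT | 0600);
    if (id < 0) {
        printf("semget failed: %s\n", strerror(errno));
        return 1;
    }
    printf("semget ok (id %d)\n", id);
    semctl(id, 0, IPC_RMID);    /* clean up the semaphore set again */
    return 0;
}

If that also fails with "Function not implemented", then I assume the tablet
kernel was simply built without SysV IPC support at all.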
Has anybody done anything like what I'm working toward? Any workarounds
for missing shm API?
Thanks.
Bill Gribble
Hi
I'm kinda new to Linux audio, still a bit new to development in general, and
trying to understand the basics of Linux audio.
By that, I mean the ALSA API:
I would like to use the PCM interface and the mixer interface to mix two
sounds, and to understand the real role of a mixer in audio
architectures.
So far I have understood that the mixer interface uses the high-level control
interface (hcontrol), which builds on the basic kernel modules.
I was wondering if anyone had some docs or examples of a program that uses
the mixer interface?
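To give an idea of what I'm after, this is roughly how I currently picture a
minimal mixer program with alsa-lib (just a sketch from reading the headers, so
it may well be wrong; the card name "default" is an assumption on my part):

/* list simple mixer controls on the default card
 * build: gcc -o mixertest mixertest.c -lasound */
#include <stdio.h>
#include <alsa/asoundlib.h>

int main(void)
{
    snd_mixer_t *mixer;
    snd_mixer_elem_t *elem;

    snd_mixer_open(&mixer, 0);
    snd_mixer_attach(mixer, "default");          /* which card to look at */
    snd_mixer_selem_register(mixer, NULL, NULL); /* the "simple element" layer */
    snd_mixer_load(mixer);

    /* walk all simple controls (Master, PCM, Capture, ...) */
    for (elem = snd_mixer_first_elem(mixer); elem; elem = snd_mixer_elem_next(elem)) {
        printf("control: %s\n", snd_mixer_selem_get_name(elem));
        if (snd_mixer_selem_has_playback_volume(elem)) {
            long min, max, vol;
            snd_mixer_selem_get_playback_volume_range(elem, &min, &max);
            snd_mixer_selem_get_playback_volume(elem, SND_MIXER_SCHN_MONO, &vol);
            printf("  playback volume: %ld (range %ld..%ld)\n", vol, min, max);
        }
    }

    snd_mixer_close(mixer);
    return 0;
}

Is that simple-element layer the right one to be working at, or should I be
going through hcontrol directly?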
Thanks
--
Alex
> I think the most reasonable standard for an absolute 1/oct
> frequency unit is 0.0 = 440Hz
My modular plugins use a reference of 440Hz. Parameters are also ranged
between 0.0 and 10.0, but can exceed that if need be (in a modular synth,
everything needs to interoperate).
So for frequency, 5.0 is 440Hz (middle A), i.e. the middle of the range is
the standard 'middle' key.
Great idea though. Octaves are far more universal than western semitones,
yet trivial to convert between. 440Hz is a good choice.
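For the record, the conversion in that scheme is trivial (a small sketch, the
helper names are my own):

#include <math.h>

/* 1/octave pitch (0.0 .. 10.0, with 5.0 == 440Hz) to Hz and back */
static double pitch_to_hz(double pitch) { return 440.0 * pow(2.0, pitch - 5.0); }
static double hz_to_pitch(double hz)    { return 5.0 + log2(hz / 440.0); }

So 6.0 is 880Hz, 4.0 is 220Hz, and every whole 1.0 step is exactly one octave.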
Best Regards,
Jeff
Hello everyone!
I have something not related to Linux; it's mainly about how to read a few things.
Is anyone here knowledgeable about the Fourier transform in general who might
wish to help me?
The background is a university exam that I have to take, but having only the
lecture notes and not having found the right information anywhere else, I'd mainly
like to know how to read/speak a few things.
Thank you for being with me so far.
Kindest regards
Julien
050e010d0f12010401-0405-0d09-030f12011a0f0d-
Such Is Life: Very Intensely Adorable;
Free And Jubilating Amazement Revels, Dancing On - FLOWERS!
******** Find some music at ********
http://juliencoder.de/nama/music.html
---------------------------------------------------------------
"If you live to be 100, I hope I live to be 95 and 37 days,
so I can be sure, there's someone at your site, who loves you."
(Not Winnie the Puh)
Hi :-)
I'm writing a tool for monitoring JACK2 (actually, the only thing I
need right now is to be able to check the xruns).
I'm using the jacklib.py
[https://raw.github.com/falkTX/Cadence/master/src/jacklib.py] and it
opens the client connection to jack ok. For example:
==================
import jacklib

client = jacklib.client_open("test-client", jacklib.JackNoStartServer, None)
xruns = 0

def cb(*args):
    global xruns
    xruns += 1
    return 0

jacklib.set_xrun_callback(client, cb, None)

while True:
    raw_input("(%d) > " % xruns)
==================
This runs ok, but my callback (cb) is never called.
I'm sure it's registered to receive XRun notifications because
whenever I call "jacklib.set_xrun_callback" it starts showing me some
jack debug messages like "Jack: JackClient::ClientNotify ref = 3 name
= test-client notify = 3" for each xrun.
Am I missing anything?
Thanks!
--
Bruno Gola <brunogola(a)gmail.com>
http://bgo.la/ | +55 11 9294-5883
Hi All,
I wrote two scripts for tetrafile, an ambisonics A-format to B-format
converter, made by Fons Adriaensen.
They convert a folder of A-format recordings done in Ardour to B-format
files.
If you don't have an ambisonic microphone, you probably don't need this.
https://github.com/StudioDotfiles/DotRepo/blob/master/i3/scripts/ardour2Bfo…
https://github.com/StudioDotfiles/DotRepo/blob/master/i3/scripts/ardour2ard…
Ardour should record 4-channel tracks into, for example:
audio1-7%a.wav audio1-7%b.wav audio1-7%c.wav audio1-7%d.wav
Sometimes Ardour records them to other numbers, for example:
audio1-7%a.wav audio1-7%b.wav audio1-7%c.wav audio1-6%d.wav
These scripts work around that.
There are two versions: one that outputs 4-channel files, and one that
outputs 4 mono files per input file.
These are my first zsh scripts, and almost my first shell scripts in
general, so feedback is welcome.
Also: if it eats your hamster, don't look at me.
Enjoy!
Hi everybody!
I'm interested in wavetable synthesis, so I've read around a bit on how wavetables
work, how they are best used, etc., but I can find precious little information that
describes how to best "create" a wavetable.
Pre-recorded material seems to be the go-to choice, rather than using Csound
or friends to generate wavetables.
The issue of tuning is where I currently struggle the most: How should that
be approached?
I should perhaps specify I'm hoping to achieve electronic bassline / dirty
instrument sounds: I'm not attempting to create orchestral wavetables...
sorry! :D
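The one bit of tuning I think I do understand (please correct me if this is the
wrong way to think about it): if the table holds exactly one cycle of N samples,
playing it at a given frequency just means stepping through it by N * freq /
samplerate table entries per output sample, with some interpolation. Roughly like
this (my own sketch, names and table size made up):

#define TABLE_SIZE 2048   /* one cycle of the waveform */

/* Read one sample from a single-cycle wavetable at the given frequency.
 * 'phase' is kept in table-index units and wraps at TABLE_SIZE. */
static float wavetable_tick(const float *table, double *phase,
                            double freq, double samplerate)
{
    int    i0   = (int)*phase;
    int    i1   = (i0 + 1) % TABLE_SIZE;
    double frac = *phase - i0;

    /* linear interpolation between neighbouring table entries */
    float out = (float)((1.0 - frac) * table[i0] + frac * table[i1]);

    *phase += TABLE_SIZE * freq / samplerate;   /* tuning happens here */
    if (*phase >= TABLE_SIZE)
        *phase -= TABLE_SIZE;

    return out;
}

Where I get lost is how that applies once the table comes from pre-recorded
material that isn't exactly one cycle long.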
-Harry
The "I went on a diet and look at me now!" version. There's been a lot
of fat cut out in this version, namely floor modulation. I tried to
remove the things that really weren't making much of a difference in
regards to the breadth of sounds that are capable of being generated.
Changes in v0.6:
- removed floor modulation altogether; wasn't getting enough bang for the buck, sound-wise
- removed Gravity Readjust
- removed Switch Velocity
- removed Channel Separation
- removed patched Stk source code from code-base, now it compiles against dynamic lib
- added limit to velocity
- made stereo synthesis optional in UI
Next version will probably focus on some new ideas, but if any given control doesn't make a big difference in the sound being made, it won't make
the cut.
The Newtonator is an LV2 soft
synth that uses a unique algorithm based on simple ideas of velocity and acceleration to produce some unpredictable sounds. More documentation
can be found on the project website at http://newtonator.sf.net/.
Thanks,
Michael Bechard