Hi :-)
I'm writing a tool for monitoring Jack2 (actually the only thing I
need right now is to be able to check the XRuns).
I'm using the jacklib.py
[https://raw.github.com/falkTX/Cadence/master/src/jacklib.py] and it
opens the client connection to jack ok. For example:
==================
import jacklib

client = jacklib.client_open("test-client", jacklib.JackNoStartServer, None)
xruns = 0

def cb(*args):
    global xruns
    xruns += 1
    return 0

jacklib.set_xrun_callback(client, cb, None)

while True:
    raw_input("(%d) > " % xruns)
==================
This runs OK, but my callback (cb) is never called.
I'm sure it's registered to receive xrun notifications, because once I
call "jacklib.set_xrun_callback" JACK starts showing me debug messages
like "Jack: JackClient::ClientNotify ref = 3 name = test-client
notify = 3" for each xrun.
Am I missing anything?
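One thing I notice I never call is jack_activate - does JACK only
deliver callbacks once the client is activated? If so (and assuming
jacklib wraps it as "activate"), the missing line would be:
==================
# guess: callbacks may only fire for an activated client
jacklib.activate(client)
==================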
Thanks!
--
Bruno Gola <brunogola(a)gmail.com>
http://bgo.la/ | +55 11 9294-5883
Hi All,
I wrote two scripts around tetrafile, an ambisonics A-format to
B-format converter by Fons Adriaensen.
They convert a folder of A-format recordings made in Ardour into
B-format files.
If you don't have an ambisonic microphone, you probably don't need this.
https://github.com/StudioDotfiles/DotRepo/blob/master/i3/scripts/ardour2Bfo…
https://github.com/StudioDotfiles/DotRepo/blob/master/i3/scripts/ardour2ard…
Ardour should record 4-channel tracks into files named, for example:
audio1-7%a.wav audio1-7%b.wav audio1-7%c.wav audio1-7%d.wav
Sometimes Ardour records them under other numbers, for example:
audio1-7%a.wav audio1-7%b.wav audio1-7%c.wav audio1-6%d.wav
These scripts work around that.
There are two versions: one that outputs 4-channel files, and one that
outputs 4 mono files per input file.
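The core of the workaround, sketched here in Python rather than zsh
(the glob pattern and file names are illustrative only):
==================
import glob
import re

# Group Ardour's A-format files by channel letter (a-d), ignoring
# the track number, which sometimes drifts between channels.
channels = {}
for path in sorted(glob.glob("audio*%[abcd].wav")):
    letter = re.search(r"%([abcd])\.wav$", path).group(1)
    channels.setdefault(letter, path)

if len(channels) == 4:
    # hand the four files to tetrafile in a-b-c-d order
    print(" ".join(channels[c] for c in "abcd"))
==================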
These are my first zsh scripts and almost my first shell scripts in
general, so feedback is welcome.
Also: if it eats your hamster, don't look at me.
Enjoy!
Hi everybody!
I'm interested in wavetable synthesis, so I have read around a bit on
how they work, how they're best used, etc., but I can find precious
little information that describes how best to "create" a wavetable.
Pre-recorded material seems a pretty common go-to choice rather than
using csound or friends to generate wavetables.
The issue of tuning is where I currently struggle the most: How should that
be approached?
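The only hard relation I've pieced together so far is the basic
playback math, which at least frames the tuning question as "what
pitch does the raw table represent?":
==================
# A single-cycle table of N samples scanned once per period plays at
# f = rate / N, so sounding a pitch f means advancing the read index
# by N * f / rate table samples per output sample.
N, rate = 2048, 48000.0

def phase_increment(freq):
    return N * freq / rate  # fractional, so interpolation is needed

print(phase_increment(440.0))  # ~18.77 table samples per output sample
==================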
I should perhaps specify I'm hoping to achieve electronic bassline / dirty
instrument sounds: I'm not attempting to create orchestral wavetables...
sorry! :D
-Harry
The "I went on a diet and look at me now!" version. There's been a lot
of fat cut out in this version, namely floor modulation. I tried to
remove the things that really weren't making much of a difference in
regards to the breadth of sounds that are capable of being generated.
Changes in v0.6:
- removed floor modulation altogether; wasn't getting enough bang for the buck, sound-wise
- removed Gravity Readjust
- removed Switch Velocity
- removed Channel Separation
- removed patched Stk source code from code-base, now it compiles against dynamic lib
- added limit to velocity
- made stereo synthesis optional in UI
Next version will probably focus on some new ideas, but if any given control doesn't make a big difference in the sound being made, it won't make
the cut.
The Newtonator is an LV2 soft
synth that uses a unique algorithm based on simple ideas of velocity and acceleration to produce some unpredictable sounds. More documentation
can be found on the project website at http://newtonator.sf.net/.
Thanks,
Michael Bechard
hi *!
this unfortunate announcement from nedko seems to have spawned a
discussion on LAA. LAA list policy used to be no follow-ups except for
factual corrections. the idea was to keep the traffic low for people who
want to stay informed but not have to wade through too much mail. (think
lwn.net among others.)
can i suggest that all further contributions to this LAA thread be NAKed
by the moderator, please? the discussion is certainly important, but it
should continue on LAD or LAU.
that said, while i can understand nedko's POV, the initial message never
really belonged on LAA either. obviously, such announcements _will_
cause heated discussion, so they should be made in a forum which allows
them.
best,
jörn
-------- Original Message --------
Subject: [LAA] my lv2-related projects
Date: Tue, 21 Aug 2012 06:34:49 +0300
From: Nedko Arnaudov <nedko(a)arnaudov.name>
To: linux-audio-announce(a)lists.linuxaudio.org
I'm abandoning all lv2 related projects that I currently maintain.
Here is a list:
* zynjacku/lv2rack
* lv2fil
* ssg
* lv2vocoder
* lv2dynparam
* external ui extension
* lv2zynadd [partially, see below]
* maybe something else I don't recall right now
The zyn-rewrite project that produced lv2zynadd stays but will be
cleared from all lv2 code. If anyone wants to take over the
maintainership of any project of mine, contact me. I'll wait a month
before wiping out all online lv2-related resources I control.
I don't want to participate in the lv2 madness anymore. I admit I cannot
communicate rationally with David Robillard. If contributing is not a
pleasure, then one doesn't belong in the community. I wish everyone
involved more luck than I had.
--
Nedko Arnaudov <GnuPG KeyID: 5D1B58ED>
> > My concept with GMPI (not everyone agreed) was that MIDI was not
> > required *in* the plugin.
> > I decided the *host* should provide that routine (mapping MIDI to
> > port value). Written once, available to every plugin developer.
> This is almost exactly what I proposed as an LV2 extension in this
> previous thread - "In practice, however, there are a few border cases
> where the plugin would want to indicate its default MIDI bindings".
Cool, I like it. I disagree that synthesisers are 'border cases' though ;)
> The only real worry is that hosts will be unhappy about the "bloat"
> added to the library they are using.
Yeah, host developers want the 'bloat' in the plugins; plugin developers
want the 'bloat' in the host.
I think a good experiment is to imagine you have to write both an LV2 host
and 100 LV2 plugins, and you have to write the MIDI-binding code. Do you
put it in the plugin or the host?
-If a feature consumes 100 kB of RAM and disk space and it's implemented
on the host side, that's 100 kB total.
-If it's implemented in each of the 100 plugins, that's 10,000 kB.
Which choice is more 'bloated'?
A very real scenario: you write this MIDI-binding support, ship 50
plugins, then 6 months later discover a bug. If that feature is in the
host, that's one fix and everyone is happy. If that bug is in the 50
plugins, already shipped to 1000 customers, then you have a much bigger
problem.
It's not a question of 'bloat' YES/NO. The code has to go *somewhere*;
there is only a tradeoff - HOST vs PLUGIN.
My choice was to have very lightweight plugins, and a more sophisticated
host.
P.S.
The one other reason you want the host handling the MIDI Binding...
> On Fri, 2012-06-08 at 09:45 +0000, Jeremy Salwen wrote:
> > Currently, a plugin writer is in a bit of a sticky situation: if the
> > plugin supports MIDI CC events, then the internal parameters are
> > hidden from the host. You can do something where you have a switch
> > which toggles between MIDI CC control, and Control Port control, but
> > this is not a fun thing to do, and I think it is additionally
> > confusing for the user.
True, a plugin's parameters can be set by many sources:
* pre-recorded automation
* the user tweaking the GUI
* MIDI messages
What if they all happen at once? Only the host is in a position to mediate.
For example, if you click down on a plugin's GUI slider, you don't expect
that slider to continue jumping around in response to MIDI automation. The
human is in control until the mouse-up happens; then automation resumes
control. This is a host feature, and it can only be implemented if the MIDI
message is decoded by the host; the plugin can't be changing its own
parameters 'in secret'.
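A sketch of what I mean by host-side mediation (the names here are
illustrative, not any real host API):
==================
class ParamMediator(object):
    """Host-side arbiter between GUI, MIDI, and automation writes."""

    def __init__(self, value=0.0):
        self.value = value
        self.user_dragging = False

    def on_mouse_down(self):
        self.user_dragging = True   # the human takes control

    def on_mouse_up(self):
        self.user_dragging = False  # automation resumes control

    def set_from_gui(self, v):
        self.value = v              # the human always wins

    def set_from_midi(self, v):
        if not self.user_dragging:  # ignored while the user holds the slider
            self.value = v
==================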
Best Regards,
Jeff
Hi all,
I am porting to LV2 some AMS-influenced plugins (mainly those by Fons)
which have odd 1/Oct frequency ports. I understand why it is sometimes
convenient to use octaves rather than the more typical Hz for frequency,
but after some digging to figure out how to precisely describe this
unit, I discovered the central frequency is middle C, i.e. C4, i.e.
around 262Hz.
For hosts to be able to use such plugins effectively, I need to
precisely describe this unit (and then other plugins can implement to
spec and they will all get along). We already have an 'octaves' unit,
but no base frequency is defined. I can add one, but I am not sure
about this strange choice.
Nobody tunes anything based on middle C, with its odd frequency of
261.62556... writing this in a spec gives me pause. I suspect it
evolved from MIDI code in AMS, where the note number 60 of middle C
looks as reasonable as anything else, but when you try to actually
define/document the unit it looks silly.
I think the natural central frequency to use is A440, at precisely a
nice round 440.0 Hz, so 0.0 is A4=440.0Hz, 1.0 is A5=880.0Hz, -1.0 is
A3=220Hz, and so on. Whenever a default or center or tuning frequency
is needed, you use A4/440Hz...
tl;dr: I think the most reasonable standard for an absolute 1/oct
frequency unit is 0.0 = 440Hz
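Or, pinning the proposed unit down in code:
==================
from math import log2

def oct_to_hz(v):
    # v is in octaves relative to A4 = 440 Hz
    return 440.0 * 2.0 ** v

def hz_to_oct(f):
    return log2(f / 440.0)

print(oct_to_hz(1.0), oct_to_hz(-1.0))  # 880.0 220.0
==================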
Thoughts?
-dr
Hi,
Does anyone know of a timecode library that allows converting, adding,
and subtracting SMPTE timecode, with knowledge of drop-frame timecode,
etc., that can be used in C programs?
The only timecode lib I was able to find is 'mffm-timecode'.
It is C++ only and not really concerned with SMPTE timecode.
I'm thinking about writing one - unless I can find something suitable.
Five of my current projects include replicated code for dealing with
SMPTE. Some use libltcsmpte.sf.net functions to some extent.
I'm currently refactoring libltcsmpte. As it was the first lib I've ever
written, I did make a couple of mistakes in designing its API/ABI. The
new incarnation of libltc only concerns itself with LTC frames and
drop-frame SMPTE. -- So I'll need a libsmpte of some kind.
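The fiddly bit any such lib has to get right is the drop-frame
arithmetic; a rough Python sketch of the usual 29.97 fps
frame-count-to-label conversion (for illustration, not from any
existing lib):
==================
def frames_to_df(fn):
    # 29.97 fps drop-frame: labels ;00 and ;01 are skipped at the
    # start of every minute except each tenth minute.
    drop, fp10m, fpm = 2, 17982, 1798  # 17982 frames per 10 minutes
    d, m = divmod(fn, fp10m)
    fn += drop * 9 * d                 # 18 labels skipped per 10 minutes
    if m > drop:
        fn += drop * ((m - drop) // fpm)
    return "%02d:%02d:%02d;%02d" % (
        fn // 108000, fn // 1800 % 60, fn // 30 % 60, fn % 30)

print(frames_to_df(1800))  # 00:01:00;02 -- ;00 and ;01 were dropped
==================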
TIA,
robin
> Since the unofficial wiki seems to have disappeared, the documentation
> of the "Jack and Loopback device as Alsa-to-Jack bridge" has gone with
> it. Neither Google cache nor the Wayback Machine are able to serve a
> copy of the page. There are plenty of references to the wiki page on
> the web, but no-one seems to have mirrored the page. Does anyone
> happen to have a copy of the documentation lying around?
> Michael
The documentation has just come back after some database backup issues
faced by Mark Constable (maintainer of the wiki page).
Maybe someone can mirror the content somewhere? (jackaudio.org, for
example.)
http://alsa.opensrc.org/Jack_and_Loopback_device_as_Alsa-to-Jack_bridge
Cheers!
J.
Hello, this is my first communication here.
I'm a former Windows user and recent Linux convert. After switching, I
noticed some utilities I regularly used in music production were missing
from the major repositories, simple things like tap-tempo, delay/Hz
calculator, and note-to-frequency conversion. I was looking for an
excuse to learn programming so I started working on this "music toolkit"
of mine. It's all the stuff I need for music calculations, in one place
(like a producer's Swiss Army knife). Maybe you have a use for it too?
It includes: a tap-tempo tool, a delay/Hz calculator, a song time
calculator, a note-to-frequency converter, a simple frequency generator,
and a metronome.
http://www.brianhilmers.com/code/rasp/
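For the curious, the two core conversions are the standard formulas:
==================
def delay_ms(bpm):
    # quarter-note delay time: one beat at `bpm` lasts 60000/bpm ms
    return 60000.0 / bpm

def note_to_hz(midi_note):
    # equal temperament, A4 = MIDI note 69 = 440 Hz
    return 440.0 * 2.0 ** ((midi_note - 69) / 12.0)

print(delay_ms(120))   # 500.0 ms
print(note_to_hz(60))  # ~261.63 Hz, middle C
==================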
I'm a novice programmer and this is my first project. Advice and help
are welcome. Thanks.
Brian Hilmers