Hi guys,
I've run into a problem while restructuring the audio backend of
TerminatorX. When I want to debug the Jack process callback, Jack throws
out my client. Is there some way of stepping through the process
callback without having my client shut down?
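For reference, the callback in question follows the usual JACK pattern; below is a minimal, hypothetical sketch (not TerminatorX's actual code) of where a breakpoint stalls the audio thread. One common workaround while debugging is to start jackd with a much larger client timeout (its -t option, in milliseconds), so the server does not evict the stalled client.

    #include <jack/jack.h>
    #include <cstdio>

    // Hypothetical minimal client for illustration only.
    static jack_port_t* out_port = 0;

    static int process(jack_nframes_t nframes, void* /*arg*/)
    {
        // Pausing here in a debugger exceeds JACK's client timeout, so the
        // server kicks the client out. Raising jackd's -t value while
        // debugging gives you time to step through.
        float* out = static_cast<float*>(jack_port_get_buffer(out_port, nframes));
        for (jack_nframes_t i = 0; i < nframes; ++i)
            out[i] = 0.0f;   // silence
        return 0;
    }

    int main()
    {
        jack_client_t* client = jack_client_open("debug-demo", JackNullOption, 0);
        if (!client) { std::fprintf(stderr, "could not connect to JACK\n"); return 1; }
        out_port = jack_port_register(client, "out", JACK_DEFAULT_AUDIO_TYPE,
                                      JackPortIsOutput, 0);
        jack_set_process_callback(client, process, 0);
        jack_activate(client);
        // ... run, then jack_client_close(client);
        return 0;
    }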
Gerald
Has anybody been able to build freqtweak 0.7.2 with gcc 4.4.x recently? There are some missing stdint.h includes, but beyond that there are some loss-of-precision errors that are over my head.
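In case it helps, the two kinds of errors usually come down to headers that relied on stdint.h being pulled in transitively (gcc 4.4 stopped doing that) and pointers being cast to plain int on 64-bit builds. A hedged sketch of the typical fixes (hypothetical code, not freqtweak's actual sources):

    // Add the missing include explicitly at the top of the offending header:
    #include <stdint.h>   // or <cstdint> in C++ translation units

    // "loss of precision" usually means a pointer was cast to plain int on a
    // 64-bit build; an integer type wide enough for a pointer fixes it:
    static inline intptr_t as_int(void* p)
    {
        return reinterpret_cast<intptr_t>(p);   // instead of (int)p
    }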
-Reuben
On Fri, March 19, 2010 11:53, Ralf Mardorf wrote:
> IMO automation is overrated. It's useful, but OTOH how often do you
> need to change settings during an opus? Most times a mix, selected
> synth etc. are fixed from the start to the end of an opus. For example,
> a musician normally plays an instrument dynamically by touch or by
> using a volume pedal. Loudness dynamics are seldom done with a fader
> after the recording is done.
it obviously depends on what type of music you're making. loop-based music
especially makes extensive use of automation.
Hi,
How do you do automation on Linux when you work the 'modular way', and how
good are those features on Linux at the moment? That question came
up and was followed by some quick research. There are possible or
promising things in this area, but we also have some weak spots.
Non-daw and non-mixer seem very promising. You can make 'CV
tracks' or automation tracks in non-daw (when you add controls), draw
an automation line and connect that to a mixer strip to automate gain
(and LADSPA parameters). DSSI is lacking here, but support is 'planned'.
LV2 is not very popular in the Non project, so it seems it will not be
possible to automate LV2 plugin parameters, unless I'm missing something
or a developer steps in and builds LV2 support into non-mixer. (It would
be a missed chance imo if LV2 isn't supported here...)
But what do you do when the softsynth is not a plugin but a standalone
application, like phasex and zynaddsubfx/yoshimi? You can use a MIDI
sequencer to control the synth by sending MIDI CC messages.
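Under the hood that is just an ALSA sequencer client emitting controller events. A minimal, hypothetical sketch (made-up client/port names and CC values) of sending one MIDI CC this way:

    #include <alsa/asoundlib.h>

    // Sends a single MIDI CC through the ALSA sequencer; a sequencer app
    // does essentially this for every automation point.
    int main()
    {
        snd_seq_t* seq = 0;
        if (snd_seq_open(&seq, "default", SND_SEQ_OPEN_OUTPUT, 0) < 0)
            return 1;
        snd_seq_set_client_name(seq, "cc-demo");
        int port = snd_seq_create_simple_port(seq, "out",
                SND_SEQ_PORT_CAP_READ | SND_SEQ_PORT_CAP_SUBS_READ,
                SND_SEQ_PORT_TYPE_MIDI_GENERIC | SND_SEQ_PORT_TYPE_APPLICATION);

        snd_seq_event_t ev;
        snd_seq_ev_clear(&ev);
        snd_seq_ev_set_source(&ev, port);
        snd_seq_ev_set_subs(&ev);                   // deliver to all subscribers
        snd_seq_ev_set_direct(&ev);                 // no queue, send immediately
        snd_seq_ev_set_controller(&ev, 0, 7, 100);  // channel 0, CC 7 (volume), value 100
        snd_seq_event_output(seq, &ev);
        snd_seq_drain_output(seq);

        snd_seq_close(seq);
        return 0;
    }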
Let me quote some quick (!) test results:
"My quick test shows that *Muse* can display cc's on the piano roll, but
only seems to allow me to edit them in the event list. Not very practical.
*Non-sequencer* only offers the event list.
*Non-Daw* allows me to add lots of controls, and edit them happily on
the timeline, but it only outputs them as "CV", which means I'd need a
pd patch or similar to convert them into midi cc's.
I'll have a look at Rosegarden, once it's finished installing."
"*Rosegarden *could work, but apart from being bloated, it is also
rather complicated to add automation:
Draw event
Open in Martrix editor
Then under "View" I can add a controller, but for some reason only a
select few.
This controller can then be edited by right-clicking and adding
controller line.
I tried *Seq24*, but it keeps crashing on me. At least it allowed me to
choose a midi CC by number.
Damn, this should be a lot easier. Even the ancient Cakewalk was
lightyears better.
_I think we have a great opportunity here for some developer to make the
world a better place. Simply make *non-daw's* timeline controllers
output midi CCs._"
"*QTractor* actually works very well, only problem is it doesn't have a
curve-drawing feature, so you need to write lots of little automation
points. But it lets you select from all possible midi CCs. Just select
"controller" in the midi clip editor, where it normally says "note" (or
similar)
Then of course connect the midi output to your softsynth, and remember
to define what synth knob is controlled by what CC (the midi mapping) in
the synth."
"it looks like *Dino *has a nice curve editor."
"Midi mapping in *Phasex* and *AMS* seems to be ok.
And here is the *Zynaddsubfx* MIDI implementation (seriously lacking, if
you ask me):
http://zynaddsubfx.sourceforge.net/doc_3.html
"
* So it looks like Qtractor is pretty good for the job, although it
misses the nice curve editor that Dino has, for automation with
external synths.
* Zynaddsubfx needs improvements when it comes to midi mapping.
* A big improvement would be to make *non-daw's* timeline
controllers output midi CCs
Another open question: "Is it possible atm to automate LV2 plugins in a
host (lv2rack) and DSSI plugins (ghostess)?"
Consider this post a sort of feedback from some Linux audio users. Maybe it
can lead to some improvements...
Thanks,
\r
ps. see also thread here: http://linuxmusicians.com/viewtopic.php?f=4&t=2535
for more information, read here:
http://en.wikipedia.org/wiki/MIDI_beat_clock
my question: does something like this exist for alsa? i am interested in sending midi beat clock
signals from hydrogen to external hardware synthesisers/arpeggiators. and i am explicitly
not interested in syncing them to any timecode, because the external machines have to run
independently and in a random order. they only have to sync their beats.
here are the mbc specs.
midi beat clock defines the following real time messages:
* clock (decimal 248, hex 0xF8)
* tick (decimal 249, hex 0xF9)
* start (decimal 250, hex 0xFA)
* continue (decimal 251, hex 0xFB)
* stop (decimal 252, hex 0xFC)
and about ticks:
i found out that linux audio apps all have different or their own definitions of the number of ticks per beat.
does it make sense to agree on a common ticks-per-beat value, or is this irrelevant for syncing? i especially mean syncing via jack-transport here.
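For what it's worth, the ALSA sequencer API defines event types for exactly these real-time messages (SND_SEQ_EVENT_CLOCK, _TICK, _START, _CONTINUE, _STOP). A minimal, hypothetical sketch of sending them by hand, with a made-up client name and a fixed tempo in place of hydrogen's real transport:

    #include <alsa/asoundlib.h>
    #include <unistd.h>

    // Emits MIDI beat clock through the ALSA sequencer: start, then
    // 24 clock events per quarter note at a fixed BPM, then stop.
    int main()
    {
        snd_seq_t* seq = 0;
        if (snd_seq_open(&seq, "default", SND_SEQ_OPEN_OUTPUT, 0) < 0)
            return 1;
        snd_seq_set_client_name(seq, "mbc-demo");
        int port = snd_seq_create_simple_port(seq, "clock out",
                SND_SEQ_PORT_CAP_READ | SND_SEQ_PORT_CAP_SUBS_READ,
                SND_SEQ_PORT_TYPE_MIDI_GENERIC);

        const double bpm = 120.0;
        const useconds_t us_per_clock = static_cast<useconds_t>(60e6 / (bpm * 24.0));

        auto send = [&](snd_seq_event_type_t type) {
            snd_seq_event_t ev;
            snd_seq_ev_clear(&ev);
            ev.type = type;                  // CLOCK (0xF8), START (0xFA), STOP (0xFC), ...
            snd_seq_ev_set_source(&ev, port);
            snd_seq_ev_set_subs(&ev);
            snd_seq_ev_set_direct(&ev);
            snd_seq_event_output_direct(seq, &ev);
        };

        send(SND_SEQ_EVENT_START);
        for (int i = 0; i < 24 * 4; ++i) {   // one bar of 4/4 at 24 PPQN
            send(SND_SEQ_EVENT_CLOCK);
            usleep(us_per_clock);            // a real app would use its own transport timing
        }
        send(SND_SEQ_EVENT_STOP);

        snd_seq_close(seq);
        return 0;
    }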
greetings wolke
AudioGrapher is a C++ library for managing signal flow within
applications or plugins. It is mainly a bunch of utility classes that
ease passing data around and debugging error situations. Currently it
also includes all the functionality that is used in Ardour's export,
including the following:
- sample rate conversion (libsamplerate)
- sample format conversion (gdither)
- file i/o (libsndfile)
- interleaving/deinterleaving
- normalizing
- threading parallel datapaths
- unit tests for most of these
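To give a flavour of what "passing data around" looks like in this style, here is a generic sink/source sketch. This is only an illustration of the pattern, not AudioGrapher's actual classes or API (and it allocates inside process(), which a real RT-safe graph would avoid):

    #include <cstddef>
    #include <vector>
    #include <iostream>

    // A source pushes typed buffers into sinks; a processing stage is both.
    template <typename T>
    struct Sink {
        virtual void process(T const* data, std::size_t frames) = 0;
        virtual ~Sink() {}
    };

    template <typename T>
    struct Source {
        void add_output(Sink<T>& s) { outputs.push_back(&s); }
    protected:
        void output(T const* data, std::size_t frames) {
            for (std::size_t i = 0; i < outputs.size(); ++i)
                outputs[i]->process(data, frames);
        }
    private:
        std::vector<Sink<T>*> outputs;
    };

    // A stage that is both sink and source: scales the signal and passes it on.
    struct Gain : public Sink<float>, public Source<float> {
        explicit Gain(float g) : gain(g) {}
        void process(float const* data, std::size_t frames) {
            buffer.assign(data, data + frames);          // not RT-safe, sketch only
            for (std::size_t i = 0; i < frames; ++i) buffer[i] *= gain;
            output(&buffer[0], frames);
        }
    private:
        float gain;
        std::vector<float> buffer;
    };

    struct Printer : public Sink<float> {
        void process(float const* data, std::size_t frames) {
            std::cout << frames << " frames, first sample " << data[0] << "\n";
        }
    };

    int main()
    {
        Gain gain(0.5f);
        Printer printer;
        gain.add_output(printer);
        const float in[4] = { 1.0f, 0.5f, -1.0f, 0.25f };
        gain.process(in, 4);   // prints "4 frames, first sample 0.5"
        return 0;
    }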
I'm planning on using AudioGrapher in future dsp code (plugins
probably), so more functionality will be available, just no promises
when :)
This might sound a lot like GStreamer, but here are some differences:
- AudioGrapher is "modern" C++ instead of C
- The core of AudioGrapher is only 12 headers
- AudioGrapher is designed to be usable in RT applications (AFAIK most
of GStreamer is not)
- Extending AudioGrapher is extremely simple if you know your C++
- AudioGrapher was designed for audio only, but can stream other data as well
Error checking and debugging can be adjusted using C++ template
parameters on a per-class basis. This means that only the chosen level
of error checking and debugging is built in at compile time. This makes
it possible to remove all overhead from performance critical parts.
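As a generic illustration of the idea (not AudioGrapher's actual interface), the check level can be a template parameter, so the compiler removes the disabled branch entirely:

    #include <stdexcept>
    #include <cstddef>

    enum CheckLevel { NoChecks, FullChecks };

    template <CheckLevel level>
    class Gain
    {
    public:
        explicit Gain(float g) : gain(g) {}

        void process(float* data, std::size_t frames)
        {
            if (level == FullChecks) {           // constant condition, dead branch is optimized away
                if (!data) throw std::runtime_error("null buffer");
            }
            for (std::size_t i = 0; i < frames; ++i)
                data[i] *= gain;
        }
    private:
        float gain;
    };

    int main()
    {
        float buf[3] = { 1.0f, 2.0f, 3.0f };
        Gain<FullChecks> debug_gain(0.5f);   // checks compiled in
        Gain<NoChecks>   fast_gain(2.0f);    // no checking overhead at all
        debug_gain.process(buf, 3);
        fast_gain.process(buf, 3);
        return 0;
    }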
Doxygen documentation is available at
http://beatwaves.net/files/software/audiographer/doc/index.html
and you can get the code via SVN from
http://svn.beatwaves.net/svn/libaudiographer/trunk
Some history for those interested:
The very first ideas behind AudioGrapher were born in the summer of 2008
during my summer of code work on Ardour. I needed something that was
able to move around different amounts of data in different data formats.
So I made very simple Sink and Source classes. At the end of last year I
started making some changes to the data flow in Ardour's (3.0) export,
and noticed it would be worth making this a separate library which would
be usable in other projects also. I started working on AudioGrapher in
the beginning of November, and by January I had something I thought was
worth publishing. And over a month later here we are...
-Sakari-
> I would want to be able to assign midi
> control to triggering loops, volume and panning - at least that. Otherwise,
> Kluppe is very difficult to use in a live performance.
>
> However, instead of proposing to allow creating separate controls for each
> looper like they have in SooperLooper, I would advise (and actually, ask
> for
> this feature to be implemented in such a manner) to instead go for the
> selected-looper scheme, so that one would not need a dozen knobs to
> control things. There should be an ability to have one "Selected" looper.
>
sooperlooper already has this feature; look for the latest releases.
simple MIDI programming with bash/mididings, pd or csound could easily add
the "random" feature to it
olivier
Hello all,
Some updates are available on
<http://www.kokkinizita.net/linuxaudio/downloads>
1. libclalsadrv-2.0.0
This is the Alsa interface library used by Aeolus, Jaaa,
Japa and AMS.
The new release allows you to specify separate Alsa device
names for playback, capture and control, i.e. it allows
the use of 'split' devices that may result from e.g.
combining several soundcards into one device (a generic
sketch of the idea follows below).
The old API is still available but will be removed in
future releases.
Note that 2.0.0 is *not* binary compatible with the
previous release (hence the major version increment),
and may require a recompile of the apps using it.
There are two example programs in the 'apps' directory.
* alsa-loopback: just copies stereo input to output.
* alsa-latency: latency measurement, same algorithm
as used in jack_delay.
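For readers who have not met 'split' devices before: it simply means the playback, capture and control handles are opened on different ALSA device names. A generic sketch of the idea in plain ALSA calls (made-up device names; this is not the libclalsadrv API):

    #include <alsa/asoundlib.h>
    #include <cstdio>

    // Open separate ALSA devices for playback, capture and control.
    int main()
    {
        snd_pcm_t* play = 0;
        snd_pcm_t* capt = 0;
        snd_ctl_t* ctl  = 0;

        if (snd_pcm_open(&play, "hw:0,0", SND_PCM_STREAM_PLAYBACK, 0) < 0 ||
            snd_pcm_open(&capt, "hw:1,0", SND_PCM_STREAM_CAPTURE,  0) < 0 ||
            snd_ctl_open(&ctl,  "hw:0", 0) < 0)
        {
            std::fprintf(stderr, "failed to open one of the devices\n");
            return 1;
        }

        // ... configure hw params, run, then close:
        snd_ctl_close(ctl);
        snd_pcm_close(capt);
        snd_pcm_close(play);
        return 0;
    }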
2. jaaa-0.6.0 and japa-0.6.0
* General cleanup, now compile without warnings using
gcc-4.4.3.
* When using Jack, a new option (-s) allows you to specify
the Jack server to use.
* Requires libclalsadrv-2.0.0.
* When using Alsa, two new options (-P, -C, used instead
of -d) allow you to specify a split Alsa device.
* Added $(DESTDIR) to the Makefiles.
3. Aeolus-0.8.4
* Same changes as for Jaaa and Japa (except for -C, -P).
* Five new temperaments added, provided by Hanno
Hoffstadt and Adam Sampson.
Note to AMS users: if you have a binary install of AMS
make sure not to remove the current libclalsadrv. If you
have a source install, update libclalsadrv and recompile
AMS.
Ciao,
--
FA
O tu, che porte, correndo si ?
E guerra e morte !
Indamixx Is Hiring - Developers Wanted
Trinity Audio Group Inc. is seeking core developers for the Indamixx project
and Transmission Distribution and Audio OS.
Maintenance and developing specifics:
- Custom kernel work, systems building.
- Real Time (RT) kernel (low latency)
- JACK
- ALSA
We also need Ardour experts for ongoing customer support, as we are
looking to significantly increase our support efforts for Ardour globally
and for the entire Indamixx project going forward.
contact and resume:
ronaldjstewart(a)gmail.com
Thank you
Ronald Stewart
Creative Director
Trinity Audio Group Inc.
9854 National Blvd. #322
Los Angeles CA 90034
213-915-6020
ronaldjstewart(a)gmail.com
The CLAM project[1] is delighted to announce the long awaited 1.4.0 release of
the C++ framework for Audio and Music, code name '3D molluscs in the space'.
[1] http://clam-project.org
In summary, this long-term release includes a lot of new spatialization
modules for 3D audio; MIDI, OSC and guitar effects modules; architectural
enhancements such as typed controls; nice usability features for the
NetworkEditor interface; convenience tools and scripts to make the CLAM experience
better; enhanced building of LADSPA plugins and new support for building LV2 and VST
plugins; a new, easy-to-use application to explore song chords, called
Chordata; and many optimizations, bug fixes and code cleanups.
Many thanks to the people who contributed to this release, including but not
limited to the GSoC 2008 students and all the crew at Barcelona Media's Audio
Group.
Some details follow:
* Chordata is a new CLAM application which offers a user-friendly way to
explore the chords of your favourite songs, using already existing technology
in the CLAM framework but with a much simpler interface. [2]
[2] http://www.youtube.com/watch?v=xVmkIznjUPE
* The spatialization modules and helper tools, contributed by the Barcelona Media
[3] audio group, turn CLAM, in tandem with Blender and Ardour, into a powerful
3D audio authoring and exhibition platform. [4]
[3] http://barcelonamedia.org
[4] http://www.youtube.com/watch?v=KSfqJUIAiXk
* Typed controls extend CLAM with the ability to use any C++ type as the
message for a control. So not just floats, but also bools, enums, integers, or
envelopes can be sent as asynchronous controls. Examples of boolean and MIDI
controls are provided (a generic sketch of the idea follows below).
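A generic sketch of the typed-control idea (conceptual only, not CLAM's actual classes): the value type is a template parameter, so any copyable C++ type can travel over a control connection:

    #include <functional>
    #include <iostream>

    // A control input whose message type is a template parameter.
    template <typename T>
    class InControl
    {
    public:
        explicit InControl(std::function<void(const T&)> handler)
            : on_receive(handler) {}
        void send(const T& value) { on_receive(value); }
    private:
        std::function<void(const T&)> on_receive;
    };

    struct Envelope { float attack, release; };

    int main()
    {
        InControl<bool> gate([](const bool& on) {
            std::cout << "gate " << (on ? "on" : "off") << "\n";
        });
        InControl<Envelope> env([](const Envelope& e) {
            std::cout << "attack " << e.attack << " release " << e.release << "\n";
        });

        gate.send(true);                 // a boolean control message
        Envelope e = { 0.01f, 0.5f };
        env.send(e);                     // a structured control message
        return 0;
    }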
* NetworkEditor has been ported to the QGraphicsView [5] framework. Dealing
with heavy networks such as the big ones used at Barcelona Media has pushed
many usability enhancements into its interface: multi-wire dragging, wire
highlighting, default port and control actions, network and in-canvas
documentation... [6]
[5] http://doc.trolltech.com/latest/qgraphicsview.html
[6] http://www.youtube.com/watch?v=0kt0WDmvMwo
* This also made it necessary to provide a tool such as clamrefactor.py to perform
batch high-level changes to CLAM network XML files: renaming processing
types, ports, or configuration parameters; changing configuration values;
duplicating sets of processings; connecting them...
* The Music Annotator application is now designed to aggregate several sources of
descriptors and update them after editing. Descriptors are mapped to a work
description schema that can be defined graphically. Semantic web
descriptor sources to access web services such as MusicBrainz have also been
implemented.
You can download sources, and Windows, Debian and Ubuntu packages, from the
download page[7]. Contributed binaries for other platforms are welcome.
[7] http://clam-project.org/download/
See also:
Screenshots: http://clam-project.org/wiki/Development_screenshots
Youtube channel: http://www.youtube.com/group/clamproject
Detailed changelog: http://clam-project.org/clam/trunk/CLAM/CHANGES
Version migration guide: http://clam-project.org/wiki/Version_Migration_Guide
--
David García Garzón
(Work) david dot garcia at upf anotherdot edu
http://www.iua.upf.edu/~dgarcia