just babbling around
- Back from LAC2013@IEM-Graz
http://www.rncbc.org/drupal/node/646
so much to tell, so short on time ...
cheers
--
rncbc aka Rui Nuno Capela
rncbc(a)rncbc.org
MFP -- Music For Programmers
Release 0.04, "More Fun Patches"
I'm pleased to announce a new version of MFP, containing many fixes
and improvements. It is still not at a "production" level, but is
becoming more and more usable. Your interest and participation are
invited!
A summary of changes is below. Please see the GitHub issue tracker
for complete details:
http://github.com/bgribble/mfp
Thanks to the great environment, continuous hacking time, and great
feedback at LAC-2013, I was able to put in a record (for me) number of
commits (50+) during the last week and make many small and
not-so-small bugfixes and improvements. Please see http://lac.iem.at
for a video of my talk, and/or have a look at the paper and slides in
the source repository under doc/lac2013.
Changes since release v0.03.1:
----------------------------------------
* #31: Support exported UIs from user patches ("graph-on-parent")
* #64: Improved implementation of "Operate" mode, making
  editing/control fully modal
* #66: Expanded information in tooltips and "badges"
* #87: New Dial object (round slider)
* #85: Support audio input/output in user patches
* #111: Bind "app" so that messages can be sent to it from within a patch
* Many other bugfixes and improvements.
About MFP:
----------------------------------------
MFP is an environment for visually composing computer programs, with
an emphasis on music and real-time audio synthesis and analysis. It's
very much inspired by Miller Puckette's Pure Data (pd) and Max/MSP,
with a bit of LabVIEW and TouchOSC for good measure. It is targeted
at musicians, recording engineers, and software developers who like
the "patching" dataflow metaphor for constructing audio synthesis,
processing, and analysis networks.
MFP is a completely new code base, written in Python and C, with a
Clutter UI. It has been under development by a solo developer (me!),
as a spare-time project for several years.
Compared to Pure Data, its nearest relative, MFP is superficially
pretty similar but differs in a few key ways:
* MFP uses Python data natively. Any literal data entered in the
UI is parsed by the Python evaluator, and any Python value is a
legitimate "message" on the dataflow network (see the sketch after
this list)
* MFP provides fairly raw access to Python constructs if desired.
For example, the built-in Python console allows live coding of
Python functions as patch elements at runtime.
* Name resolution and namespacing are addressed more robustly,
with explicit support for lexical scoping
* The UI is largely keyboard-driven, with a modal input system
that feels a bit like vim. The graphical presentation is a
single-window style with layers rather than multiple windows.
* There is fairly deep integration of Open Sound Control (OSC), with
every patch element having an OSC address and the ability to learn
any other desired address.
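To make the first point concrete, here is a purely illustrative sketch in
plain Python (not MFP's own code): text entered in the UI is handed to the
Python evaluator, and whatever value comes back is a legitimate message.

    # hypothetical literals of the kind one might type into a patch;
    # each one is evaluated and the resulting Python value is the message
    for text in ('440', '"note_on"', '[60, 64, 67]', '{"freq": 440, "amp": 0.5}'):
        message = eval(text)  # literal input goes through the Python evaluator
        print("%s: %r" % (type(message).__name__, message))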
The code is still in its early days, but has reached a point in its
lifecycle where at least some interesting workflows are operational
and it can be used for a good number of things. I think MFP is now
ripe for those with an experimental streak and/or development skills
to grab it, use it, and contribute to its design and development.
The code and issue tracker are hosted on GitHub:
https://github.com/bgribble/mfp
You can find the LAC-2013 paper and accompanying screenshots, some
sample patches, and a few other bits of documentation in the doc
directory of the GitHub repo. The README at the top level of the
source tree contains dependency, build, and getting-started
information.
Thanks,
Bill Gribble
Hiho,
here's a first preview of XOSC, an OSC patchbay (partly coded during
the LAC ;) ).
What it does:
It connects different OSC-capable programs. Consider the situation where
you have two programs talking via OSC, but then you want a third or
fourth application to also use the OSC data from those same applications,
without having to rewrite any code.
Solution:
Just change the OSC messages' target host/port to XOSC's port, and
XOSC will let you patch OSC between applications (a small sketch of the
idea follows the points below).
- XOSC does not require any changes to original software to use it
- XOSC has an OSC interface to create connections.
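To make that concrete, here is a minimal sketch of the only change a
sending application needs (Python with pyliblo; the port numbers and the
OSC path are made up for illustration):

    import liblo

    # before: the sender talked straight to the receiving application
    # target = liblo.Address("localhost", 57120)

    # after: point the sender at the port XOSC listens on instead;
    # XOSC then forwards the message to whatever is patched to receive it
    target = liblo.Address("localhost", 7000)

    liblo.send(target, "/example/fader", 0.42)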
Find it at:
https://github.com/sensestage/xosc
Basic functionality works as far as I have tested it (with the included
SuperCollider script).
The program runs as a command-line application and has an OSC
interface. By registering with it as a watcher, another program will
receive updates whenever connections change, e.g. in order to display
them in a GUI.
I'd be happy if someone feels like implementing such a GUI. (I imagine
some of the classes can be reused for this purpose, and there is a
libxosc to link to)... think of an extra tab in QJackCtl (Rui will
be happy with a patch ;) ), Patchage, or others...
* Caveats:
Currently clients and hosts are kept in memory based on their port
number, so there may be conflicts when mapping between different
hosts. Similarly, multiple programs sending the same tags might cause
problems.
Happy about feedback, suggestions, bug reports (use the issue tracker),
and patches :)
sincerely,
Marije Baalman
Hi All,
A new build of Praxis LIVE is now available for download.
Praxis LIVE is an open-source, graphical environment for rapid
development of intermedia performance tools, projections and
interactive spaces. This release brings a range of new features for the
video pipeline, OSC control, some major editing improvements (including
copy & paste - who knew that would be so difficult to implement! :-) ), and
the rather obviously missing stereo sample player, amongst many other
tweaks and bug fixes.
Praxis LIVE is available as a Linux .deb package, Windows .exe installer
and as a .zip for un-installed usage (installed usage is recommended).
Website (with downloads and manual) - http://code.google.com/p/praxis
Release notes - http://code.google.com/p/praxis/wiki/ReleaseNotes
Some of the new features are conspicuously under-documented at the moment.
I'll improve this over the next few weeks, but feel free to fire questions
via email.
Best wishes,
Neil
--
Neil C Smith
Artist : Technologist : Adviser
http://neilcsmith.net
Praxis LIVE - open-source, graphical environment for rapid development of
intermedia performance tools, projections and interactive spaces -
http://code.google.com/p/praxis
OpenEye - specialist web solutions for the cultural, education, charitable
and local government sectors - http://openeye.info
Dear Linux Audio users and developers
We at AGR/HackLab are very proud to announce our newest creation: the MOD.
In a nutshell, the MOD is a programmable Linux-based hardware
processor/controller with LV2 support.
Its main objective is to take the processing of any LV2 plugin to the
stage.
We will give a presentation at LAC 2013 on Saturday, 11 May, at
17:10. We hope to see some of you there!
To make things more interesting, we have also created the following:
- MOD Cloud, an online plugin repository
- MOD Social Network, a place where MOD users can exchange their virtual
pedalboards
- MOD SDK - a software development kit
- Control Chain - a hardware interface for external controllers
You can find all customer-related info on the website
www.portalmod.com/en and you can watch a video of the prototype working
here:
http://portalmod.com/blog/2013/03/video-1-testando-o-prototipo/
The core software inside the MOD is open source and is being published on
GitHub (https://github.com/portalmod).
You can download the LAC paper at
http://portalmod.com/blog/2013/05/mod-on-lac-and-berlin/. In it you will
find an explanation of how the MOD works, both software- and
hardware-wise.
As the MOD comprises both software (host and web GUI) and hardware that
were not entirely anticipated by the LV2 specification, some code has to be
added to LV2 bundles in order to make them work nicely on the MOD. All of
this added code concerns only the GUI and/or the controller; the actual
audio code (the plugin .so file) is left intact.
An LV2 plugin without this extra code will still work, but will not reach
100% of its potential: it will have a generic dashboard icon, no visible
knobs on the icon, and a generic controller display type.
When the MOD is connected to your PC or tablet and you use the web GUI, you
can browse the locally installed plugins (inside the MOD) as well as the
ones available online in the MOD Cloud, provided your PC/tablet is
connected to the internet. Plugins from the cloud can be installed with a
simple drag gesture.
The MOD Cloud is where we expect to have the most interaction with the LAD
community. It is a plugin repository divided into four sections: official,
testing, contrib and commercial (any resemblance to apt's sources.list is
a mere coincidence...).
The official branch is where you find the plugins uploaded by the MOD team.
Most of them are well-known open source plugins which we packaged with the
GUI and controller code they need. CAPS, CALF, INVADA, GUITARIX, MDA
and many others are all there, with custom HTML GUIs and some tweaks where
needed.
The testing branch is where you find all the plugins the MOD team wants to
promote to the official branch but, for one reason or another, hasn't yet.
The contrib branch is something like Arch Linux's AUR: an open
repository where you, the developers, can upload open source plugins for
the MOD community.
The commercial branch is just like contrib, but for closed-source plugins
to be sold to MOD users. We hope this creates a viable business model for
LAD developers who intend to make a living from audio plugin programming.
Last but not least, there is the MOD SDK.
The main goal of the SDK is to make it simple to set up a GUI for your
plugin before installing it on your MOD.
We think that, using the SDK, developers will be able to concentrate on
their audio code and spend as little time as possible on interface
programming.
The SDK comes with a package of ready-made resources (pedal and rack
skins, knobs, layout templates) with which you can package your plugin by
completing a simple wizard.
There is also the documentation needed to create new screen widgets in
order to develop your own custom plugin GUI; the included resource code can
also serve as an example.
We would like to thank the whole LAD community for its ongoing efforts
towards a decent plugin infrastructure for Linux audio.
From the developers of the plugins we are packaging, we'd like to know
whether you have any objections.
We believe that a lively MOD user community would expand the user base of
LAD plugins and thus open new possibilities for developers.
We hope you all like what we are doing and we would love to discuss further
details with you.
Kind Regards
Gianfranco Ceccolini
The MOD Team
Hi,
These last few days I found some time to work on ladspa.m.lv2, an LV2
plugin to load ladspa.m.proto instrument definition files:
https://github.com/fps/ladspa.m.lv2
It is in a somewhat usable state, i.e. used in Ardour3 it loads the
example instrument generated by this Python script:
https://github.com/fps/ladspa.m.proto/blob/master/example_instrument.py
which is a very simple polyphonic sawtooth synth with exponential
envelopes and an echo with differing delay times per voice. There are
still some things to do (e.g. exposing control ports, implementing
All-Notes-Off MIDI messages, lots of optimizations - right now I care
more about correctness than efficiency, etc.) and I also have some
questions:
1] This one is regarding waf. I'm not used to writing wscript files;
I adapted the whole thing from the example sampler in the LV2
distribution. I wonder how I can make waf use e.g. -fPIC and the other
compiler flags needed on my 64-bit system. Right now I have put a
little makefile into the repository which passes the missing options
along in the CXXFLAGS environment variable. This is a dirty hack, so if
any waf guru wants to take a look, I'd be ever so grateful (one
untested idea is sketched after these questions).
https://github.com/fps/ladspa.m.lv2/blob/master/makefile
https://github.com/fps/ladspa.m.lv2/blob/master/wscript
2] I'm a little bit puzzled by how the patch_set messages work together
with the LV2 worker extension. If you take a look at this run() function
https://github.com/fps/ladspa.m.lv2/blob/master/instrument.cc#L702
you'll see that I have an extra LV2_ATOM_SEQUENCE_FOR_EACH at the start
of the function to process the patch_set messages. I tried to integrate
that into the later loop (the one that iterates over the sample_count
frames and lets the MIDI events take effect at their respective frames),
but once I do that, patch loading stops working. I must be missing
something fundamental. So if any LV2 guru wants to take a look, I'd be
very grateful, too.
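Regarding 1], one thing I have not tried yet (a rough sketch only,
assuming waf's standard ConfigSet API and untested against this
repository's wscript) would be to append the flags during configuration
instead of going through the makefile:

    # fragment for the existing wscript (wscript files are plain Python)
    def configure(conf):
        conf.load('compiler_cxx')
        # -fPIC (and any other flags the 64-bit build needs) get added to
        # every compile command, so the CXXFLAGS/makefile hack goes away
        conf.env.append_unique('CXXFLAGS', ['-fPIC'])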
Thanks and have fun,
Flo
--
Florian Paul Schmidt
http://fps.io
Dear all,
at 10:00am (in about 10 minutes), the Linux Audio Conference 2013 is
about to start.
There is a live stream available, so you can follow the event even if
you have not made it to Graz in person:
http://lac.linuxaudio.org/2013/stream
Remote participants are invited to join #lac2013 on irc.freenode.net[1],
to be able to take part in the discussions, ask questions, and get
technical assistance in case of stream problems.
See you at the conference - here and everywhere
fgmdasr
IOhannes
[1] http://webchat.freenode.net/?channels=lac2013
Dear all,
There she is, Yoshimi 1.1.0! Looking better than ever, working better
than ever, capable of doing more than ever. Simply put, better than
ever. Made possible by the much appreciated and very valuable help,
contributions and feedback from the Linux Audio community.
http://downloads.sourceforge.net/yoshimi/yoshimi-1.1.0.tar.bz2
For this release I'd like to thank the following people in particular
but in no particular order:
* Kristian Amlie for making Yoshimi less CPU hungry.
* Andrew Deryabin for making Yoshimi a little more CPU hungry again, but
for a good reason: Yoshimi now has per-part JACK outputs!
* Nikita Zlobin for having Yoshimi handle state files better.
* Florian Dupeyron for his custom best of Mysterious bank.
* Will J. Godfrey for his continuous testing and monitoring of new
developments.
* Alessandro Preziosi for Yoshimi's lovely new knobs.
* David Adler for the AZERTY virtual keyboard support.
* Rob Couto for the helpful insights and general help.
* Alan Calvert.
Best regards,
Jeremy Jongepier
I'm preparing a seminar that will take place tomorrow (4 May);
it will involve live mixing of surround sound using Ardour.
While setting up the PC to be used, two problems occurred.
The first one is solved, but its cause remains mysterious.
At some point it appeared that the sound card (an HDSP-MADI,
used every day for years) had decided to meet its creator.
Every program trying to access it - from jackd to 'aplay -L' -
not only blocked, but turned out to be impossible to kill.
I replaced the card - same result. It turned out to be a
corrupt /var/lib/alsa/asound.state. The systemd service
doing the 'alsactl restore' at boot choked on it,
blocked, and this apparently blocked all other processes trying
to use the card later. What I don't understand is how exactly
alsactl failed, and why these processes seemed to be immune
to a 'kill -9'.
The second one is an Ardour2 session that worked perfectly
on another, similar PC, but becomes completely unresponsive
on the one to be used. It has four mono tracks and three
10-channel ones, and little else. DSP load is not the problem,
it's less than 15%. It looks like a graphics issue. When
Ardour's playhead reaches the right end of the editor window,
it takes something like 5 seconds for the editor display to
update - everything seems to freeze during that time, but audio
is not interrupted. Same when trying to scroll or change the
zoom factor. The video card is an Nvidia GeForce 7300, indeed
quite old, but the same PC plays full-screen HD YouTube videos
without problems. The video driver is nouveau, as nv seems to be
no longer supported by Arch Linux. I'm pretty sure this system
didn't have such problems when it was using nv. OTOH nouveau works
very well on other systems... I could try to install the proprietary
Nvidia driver, but with just a few hours to go I'd rather not
take any risks...
Any hints ?
Ciao,
--
FA
A world of exhaustive, reliable metadata would be an utopia.
It's also a pipe-dream, founded on self-delusion, nerd hubris
and hysterically inflated market opportunities. (Cory Doctorow)