It was so terribly cold. Snow was falling, and it was almost dark.
Evening came on, the last evening of the year. In the cold and gloom a
poor little girl, bareheaded and barefoot, was walking through the
streets. Of course when she had left her house she'd had slippers on,
but what good had they been? ...
The Little Match Girl
http://mx44.linux.dk/~jens/unpublished/mtchgirl.mp3
...
"She wanted to warm herself," the people said. No one imagined what
beautiful things she had seen, and how happily she had gone with her old
grandmother into the bright New Year.
Please excuse cross-posting.
Dear friends and fellow FOSS enthusiasts,
It is my great pleasure to share with the community a belated Holiday
present :-) in the form of the latest snapshot of the L2Ork iteration of
Pure-Data. Better than ever, the latest version comes with the following
improvements:
*implemented apply undo for array properties and partially implemented
apply undo for graph-on-parent object properties (currently does not
apply to abstractions or top-level windows until I figure out how to
address the indexing of top-level windows inside the glist, as well as
how to determine which window such an undo belongs to).
*properties are disabled when right-clicking on an abstraction, as
modifying its settings externally does not make sense when one cannot
see its actual contents. So, to edit the properties of an abstraction,
one has to open the abstraction itself.
*fixed how new arrays are created so that they always fit within the
specified boundaries. Please note that arrays already created in prior
patches remain untouched in terms of graph auto-resizing (legacy code
that deals with this is provided in g_editor.c canvas_vis if anyone
wishes to convert their arrays, but it is incomplete in that it assumes
all arrays require resizing--this is, however, unnecessary, as simply
recreating said arrays or manually readjusting their settings ought to
do the trick).
-This feature needs further testing--feedback is most appreciated.
*fixed how arrays deal with moving array points via the mouse by
restricting them within the array bounds--this should work for all
GUI-driven array operations, while array alterations via snapshots and
other external ways of manipulating arrays remain unbounded so as to
allow for traditional data-flow debugging. This may change down the
road, in part due to the introduction of the magicGlass option and in
part due to the belief that data monitoring should only report ranges
specified by the graph.
-This feature needs further testing--feedback is most appreciated.
*added a new feature for arrays whereby they report a bang through the
<arrayname>_changed send (if one is provided) whenever they have been
altered by a mouse click'n'drag--this, in conjunction with array graph
auto-resizing, makes arrays a formidable alternative to multisliders.
-This feature needs further testing--feedback is most appreciated.
*when an array subpatch is opened and resized, the array now
automatically resizes to properly fill the window.
-This feature needs further testing--feedback is most appreciated.
*fixed a bug where an array was not visible after reopening the patch if
any of its points touched the graph's y limits.
*fixed a couple of segfaults caused by gridflow incompatibility--more
problems remain with gridflow library compatibility, likely due to
widgetbehavior and possibly also magicGlass incompatibility. Further
investigation is necessary.
*fixed a memory leak in the disis_phasor~ external, where the destructor
was never properly called, and updated its documentation (available in
the l2ork_addons package).
*fixed highlighting of signal nlets, where an nlet would revert to its
non-signal appearance after being highlighted/connected.
*reintroduced the array listview (this was a regression with respect to
pd-extended).
*improved appearance of the array listview.
*fixed a few broken links in the pddp documentation and added new
l2ork-specific array features to the pddp documentation.
The latest snapshot is available from the usual place:
http://l2ork.music.vt.edu/main/?page_id=56
Complete changelog since 11/25/2010 is available here:
http://l2ork.music.vt.edu/data/pd/Changelog
Happy belated Holidays!
Best wishes,
Ico
I love attractive UIs like those from Bristol; I have to try those ...
I want to use them in e.g. Qtractor or Rosegarden as softsynths with
some live character, with external MIDI controllers or with automation.
regards, saschas
2011/1/2 Ricardo Wurmus <ricardo.wurmus(a)gmail.com>:
> Hi Sascha,
>
> I found the AlsaModularSynth to be a great sounding "analog-ish" modular
> synthesizer with a very direct and very usable interface.
>
> I don't quite understand your vision just yet. Is the idea basically to
> write an attractive and usable GUI for an existing synth (engine)?
>
>
>
> On 2 January 2011 21:47, Julien Claassen <julien(a)c-lab.de> wrote:
>>
>> Hello Sascha!
>> I'm not good at coding at all, but I think a more usable framework for a
>> softsynth, if you like to build it with an existing one, might be Bristol.
>> Bristol is a synth emulator. It has a couple of synths already. But it
>> might not hurt to have a new filter or a different oscillator in it, if
>> Nick is OK with that. The synths it emulates are basically built from the
>> components (filters, oscs, etc.) that are in the engine. Then they are
>> connected in a particular way and get a GUI/CLI put on top of them.
>> Bristol has what I would call MIDI learning. You can easily assign MIDI
>> controls to controls of the currently loaded synth, and I think you can
>> save them as well. Have a
>> look at his site:
>> http://bristol.sf.net
>> The sweet thing about using this would be that you have to implement the
>> new components, and then there is an API - so I believe - for relatively
>> easily constructing the connections and the UIs. I know only of the textUI,
>> which is very clever and helpful!
>> Kindly yours
>>     julien
>>
>> --------
>> Music was my first love and it will be my last (John Miles)
>>
>> ======== FIND MY WEB-PROJECT AT: ========
>> http://ltsb.sourceforge.net
>> the Linux TextBased Studio guide
>> ======= AND MY PERSONAL PAGES AT: =======
>> http://www.juliencoder.de
Hello everybody,
We have a present for you, a new release of MusE.
The alpha indicates that this is an early version, so it's mainly:
- a teaser to spread the word.
- an early adopters build.
- to welcome developers who want to port MusE to other platforms.
MusE has now been completely ported to the Qt4 architecture, and we (mainly
Tim and Orcan) are busy making it even better than before, with lots of GUI
stuff being reworked.
MusE now also sports a new version of DeicsOnze, the DX11-emulating
softsynth, up from version 0.2 to 1.0.
The homepage has received a new look that we hope will give a better
indication of what MusE is and does.
Do visit http://muse-sequencer.org.
The full changelog is available at:
http://lmuse.svn.sourceforge.net/viewvc/lmuse/trunk/muse2/ChangeLog?revisio…
Find the download at:
https://sourceforge.net/projects/lmuse/files/
Happy Holidays!
The MusE Team
Hi all,
So I'm writing some LV2 plugins to wrap up the Aubio audio analysis
library <http://aubio.org/>, and I'm not sure exactly how to handle
functions like "onset" detection.
Currently, it just outputs clicks to an audio port (0 when no beat, 1 when
a beat is detected). However, this doesn't take advantage of the power of
LV2. It is unclear to hosts and other plugins exactly what sort of data is
coming out.
I was thinking that maybe it would output MIDI, as MIDI matches the
"event-based" aspect of beat detection. Perhaps sending out MIDI beat clock
signals <http://en.wikipedia.org/wiki/MIDI_beat_clock>? However, that
doesn't really match the sort of data that the Aubio functions detect. Maybe
it could just send a MIDI note-on event?
It seems like maybe some sort of LV2-specific extension might be in order?
The Event Port <http://lv2plug.in/ns/ext/event/#EventPort> extension seems
to define everything I need: sending timestamped events with no extra
information attached. The question is: does it require some sort of further
extension to define what a beat port is in the way the MidiEvent extension
does?
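(To make the mechanics concrete, here is a rough, untested sketch of what
emitting a note-on from run() might look like using the event extension's
helper API; midi_type is assumed to have been obtained at instantiation
from the host's URI-map feature for the MidiEvent URI, and all other names
here are made up.)

  #include "lv2/lv2plug.in/ns/ext/event/event-helpers.h"
  #include <stdint.h>

  struct BeatDetector {
      LV2_Event_Buffer* event_out;  // connected event output port
      uint16_t          midi_type;  // numeric id mapped for the MidiEvent URI
  };

  static void emit_beat(BeatDetector* self, uint32_t frame)
  {
      LV2_Event_Iterator iter;
      lv2_event_begin(&iter, self->event_out);
      while (lv2_event_is_valid(&iter))  // seek past events already written
          lv2_event_increment(&iter);
      // A note-on on channel 10 (kick), velocity 100 -- purely illustrative.
      const uint8_t note_on[3] = { 0x99, 35, 100 };
      lv2_event_write(&iter, frame, 0, self->midi_type, 3, note_on);
  }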
Of course, this raises the question: Is a port specific for "beats" even
necessary? I can think of a few cases:
- DAW uses beat signals to set up markers on a track
- DAW uses beat signals to break a percussive track up into beats.
- Delay effect uses beat signals to have a timed delay
- Automatic drummer adds on a percussion part to audio with varying tempo
- ? other beat-synchronous effects?
Anyway, I'm wondering what you all think would be the best option, or
whether you think functionality like this is even warranted in an LV2 plugin
(should analysis plugins stick to VAMP?). So let me know what you think.
Jeremy Salwen
hi...
since the jack1 release is taking pretty long, I decided to stop waiting
and put out a tschack release.
tschack is an SMP-aware fork of jack1.
It's a drop-in replacement, like jack2.
features:
- jack1 mlocking
- control API which works even when libjackserver.so is loaded RTLD_LOCAL
- SMP aware
- backend switching
- strictly synchronous like jack1 (-> no latency penalty)
- clickless connections
- shuts down audio processing when the CPU is overloaded for too long
I also released PyJackd, which is a wrapper around libjackserver.
features:
- command line for backend switching
- PulseAudio D-Bus reservation
get it here:
http://hochstrom.endofinternet.org/files/tschack-0.120.1.tar.gz
http://hochstrom.endofinternet.org/files/PyJackd-0.1.0.tar.gz
--
torben Hohn
Does anyone have any experience with the speed of traversal through a
Boost multi_index container? I'm pondering its use to manage notes
currently in play, e.g. indexed by MIDI channel and ordered by MIDI event
time/frame stamp.
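Something like this minimal sketch (field names made up) is the kind of
thing I'm pondering:

  #include <boost/multi_index_container.hpp>
  #include <boost/multi_index/ordered_index.hpp>
  #include <boost/multi_index/member.hpp>

  struct Note {
      int           channel;  // MIDI channel
      int           pitch;
      unsigned long frame;    // MIDI event time/frame stamp
  };

  namespace mi = boost::multi_index;

  typedef mi::multi_index_container<
      Note,
      mi::indexed_by<
          mi::ordered_non_unique<mi::member<Note, int, &Note::channel> >,
          mi::ordered_non_unique<mi::member<Note, unsigned long, &Note::frame> >
      >
  > ActiveNotes;

  // Traversal in time order is plain iteration over the second index:
  void dump_in_time_order(const ActiveNotes& notes)
  {
      const ActiveNotes::nth_index<1>::type& by_time = notes.get<1>();
      for (ActiveNotes::nth_index<1>::type::const_iterator i = by_time.begin();
           i != by_time.end(); ++i)
          ; // visit *i here
  }

As far as I can tell the ordered indices are node-based like std::map, so I'd
expect traversal to cost about the same as iterating a std::map rather than a
contiguous vector.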
cheers, Cal
Hi,
I've been trying to come up with a nice program architecture for a live
performance tool (audio looping etc.),
and I've kind of hit a wall:
Input will be taken via OSC, the "engine" will be written in C++, and the
GUI is up in the air. I've written most of the engine (working to a degree,
needs some bugfixes), and now I've started implementing the GUI in the same
binary, i.e. it's all compiled together: double-click it and it shows on
screen & loads the JACK client.
The GUI code has a nasty habit of segfaulting, which is also killing the
engine. That's a no-go for live performance. The engine is rock solid
stable, so it's the GUI thread running around that's segfaulting things.
So I'm wondering if it is feasible to keep the audio/other data in shared
memory (SHM), and then write the GUI in Python reading from the same
memory. Is this considered "ugly" design? I have no experience with SHM,
so I thought I'd ask.
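To make it concrete, something like this rough sketch (names and layout
made up) is what I mean; the engine would write into the region, and the
Python GUI would open the same name and mmap it read-only:

  #include <fcntl.h>
  #include <sys/mman.h>
  #include <unistd.h>

  struct SharedState {
      float peaks[64];       // per-loop peak levels for metering
      float waveform[4096];  // decimated waveform for drawing
  };

  SharedState* open_shared_state()
  {
      int fd = shm_open("/looper-gui", O_CREAT | O_RDWR, 0600);
      if (fd < 0) return 0;
      if (ftruncate(fd, sizeof(SharedState)) < 0) { close(fd); return 0; }
      void* p = mmap(0, sizeof(SharedState),
                     PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
      close(fd);  // the mapping stays valid after close()
      return p == MAP_FAILED ? 0 : static_cast<SharedState*>(p);
  }

(Link with -lrt; on Linux, Python could then mmap the same region via the
file that appears under /dev/shm.)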
The other option I was considering is writing the front-end GUI part using
only info obtained from OSC, but that would exclude the waveforms of the
audio, and lots of other nice features...
Help, advice, laughter etc welcomed :-) -Harry
Hello everyone,
I am trying to understand how a simple sound server could be implemented. I will
not necessarily develop this, but I'm trying to clarify my ideas.
As in JACK, it would allow clients to register, and their process callback to be
called with input and output buffers of a fixed size. The server would then mix
all output data provided by clients and pass the result to the audio hardware.
It would also read audio input from the hardware and dispatch it to the clients.
There wouldn't be any ports, routing, etc. as provided by JACK. The main
purpose of such a server would be to allow several applications to record
and play audio without acquiring exclusive access to the audio hardware. In
this regard it's similar to PulseAudio and many others.
The server itself could have a realtime thread for accessing audio. Therefore,
for a proof of concept, it could be developed on top of JACK. However, none of
the clients could run in realtime: this is a given of my problem. The clients
would be standard applications with very limited privileges. They wouldn't be
able to increase their own thread priorities at all. Each client would run as
a separate process.
The only solution that came to my mind so far is to have the clients communicate
with the server through shared memory. For each client, a shared memory region
would be allocated, consisting of one lock-free ringbuffer for input, another
for output, as well as a shared semaphore for server-to-client signaling.
At each cycle, the server would read and write audio data from/to the
ringbuffers of each registered client, and then call sem_post() on all
shared semaphores.
A client-side library would handle all client registration details, as well
as thread creation. It would then sem_wait(), and when awakened, read from
the input ringbuffer, call the client process callback with the I/O buffers,
and write to the output ringbuffer.
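In rough C++ the client side might look like this (RingBuffer is only a
placeholder for a real lock-free ringbuffer living inside the shared
region; all names are made up):

  #include <semaphore.h>
  #include <cstddef>

  struct RingBuffer {  // placeholder interface for a lock-free ring
      void read(float* dst, std::size_t nframes);
      void write(const float* src, std::size_t nframes);
  };

  struct ClientShm {
      sem_t      wake;    // process-shared: sem_init(&wake, 1, 0)
      RingBuffer input;   // server -> client audio
      RingBuffer output;  // client -> server audio
  };

  typedef void (*ProcessCallback)(float* in, float* out, std::size_t n);

  void client_loop(ClientShm* shm, ProcessCallback process,
                   float* in, float* out, std::size_t nframes)
  {
      for (;;) {
          sem_wait(&shm->wake);             // server posted: a new cycle
          shm->input.read(in, nframes);     // this cycle's input
          process(in, out, nframes);        // the client's callback
          shm->output.write(out, nframes);  // queued for the server's mix
      }
  }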
Does this design sound good to you? Do you think it could achieve reliable
I/O and reasonable latency? Keeping latency as low as possible, what do you
advise for the size of the ringbuffers?
--
Olivier
Hello
I bought the Natural Drum samples (http://www.naturaldrum.com/). The library
contains WAVs and presets for Kontakt and Halion. Now I'd like to create some
GigaSampler files in order to use it with LinuxSampler.
The documentation of the Natural Drum sample library is quite good. The only
thing missing is the "loudness" of each sample, needed to map each sample to
a velocity level from 0-127.
What would you recommend in order to calculate the "peak" of each drum sample
automatically? Is there a library which could do this? I would also be happy
with a command line tool like this:
$ peak bla.wav
Peak value: 12345
I could then write a C++ app using libgig.
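Something like this untested libsndfile sketch is roughly what I picture:

  #include <sndfile.h>
  #include <algorithm>
  #include <cmath>
  #include <cstdio>
  #include <cstring>
  #include <vector>

  int main(int argc, char** argv)
  {
      if (argc < 2) { std::fprintf(stderr, "usage: peak file.wav\n"); return 1; }
      SF_INFO info;
      std::memset(&info, 0, sizeof(info));
      SNDFILE* f = sf_open(argv[1], SFM_READ, &info);
      if (!f) { std::fprintf(stderr, "%s\n", sf_strerror(0)); return 1; }
      std::vector<float> buf(4096 * info.channels);
      double peak = 0.0;
      sf_count_t n;
      while ((n = sf_read_float(f, &buf[0], buf.size())) > 0)
          for (sf_count_t i = 0; i < n; ++i)
              peak = std::max(peak, static_cast<double>(std::fabs(buf[i])));
      sf_close(f);
      std::printf("Peak value: %f\n", peak);
      return 0;
  }

(Though I wonder whether a plain peak tracks perceived loudness; maybe RMS
over the attack portion would map better to velocity levels.)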
Any ideas? Libraries? Algorithms?
Thanks!
Oliver