I love attractive UIs like those from Bristol; I'll have to try those...
I want to use them as softsynths in e.g. Qtractor or Rosegarden, with
some live character via external MIDI controllers or with automation.
regards, saschas
2011/1/2 Ricardo Wurmus <ricardo.wurmus(a)gmail.com>:
> Hi Sascha,
>
> I found the AlsaModularSynth to be a great sounding "analog-ish" modular
> synthesizer with a very direct and very usable interface.
>
> I don't quite understand your vision just yet. Is the idea basically to
> write an attractive and usable GUI for an existing synth (engine)?
>
>
>
> On 2 January 2011 21:47, Julien Claassen <julien(a)c-lab.de> wrote:
>>
>> Hello Sascha!
>> I'm not good at coding at all, but I think a more usable framework for a
>> softsynth, if you'd like to build it on an existing one, might be Bristol.
>> Bristol is a synth emulator. It has a couple of synths already. And it might
>> not hurt to have a new filter or a different oscillator in it, if Nick is OK
>> with that. The synths it emulates are basically built from the components
>> (filters, oscillators, etc.) that are in the engine. They are then connected in a
>> particular way and get a GUI/CLI put on top. Bristol has what I
>> would call MIDI learning: you can easily assign MIDI controls to controls of
>> the currently loaded synth, and I think you can save them as well. Have a
>> look at his site:
>> look at his site:
>> http://bristol.sf.net
>> The sweet thing about using this would be that you only have to implement the
>> new components, and then there is an API - so I believe - for relatively
>> easily constructing the connections and the UIs. I know only of the text UI,
>> which is very clever and helpful!
>> Kindly yours
>>     julien
>>
>> --------
>> Music was my first love and it will be my last (John Miles)
>>
>> ======== FIND MY WEB-PROJECT AT: ========
>> http://ltsb.sourceforge.net
>> the Linux TextBased Studio guide
>> ======= AND MY PERSONAL PAGES AT: =======
>> http://www.juliencoder.de
>> _______________________________________________
>> Linux-audio-dev mailing list
>> Linux-audio-dev(a)lists.linuxaudio.org
>> http://lists.linuxaudio.org/listinfo/linux-audio-dev
>
>
Hello everybody,
We have a present for you, a new release of MusE.
The alpha indicates this is an early version, so it's mainly:
- a teaser to spread the word.
- an early adopters build.
- to welcome developers who want to port MusE to other platforms.
MusE has now been completely ported to the Qt4 architecture, and we (mainly
Tim and Orcan)
are busy making it even better than before, with lots of GUI stuff being
reworked.
MusE now also sports a new version of DeicsOnze, the DX11-emulating
softsynth, up from version 0.2 to 1.0.
The homepage has received a new look that we hope will give a better
indication of what MusE is and does.
Do visit http://muse-sequencer.org.
The full changelog is available at:
http://lmuse.svn.sourceforge.net/viewvc/lmuse/trunk/muse2/ChangeLog?revisio…
Find the download at:
https://sourceforge.net/projects/lmuse/files/
Happy Holidays!
The MusE Team
Hi all,
So I'm writing some LV2 plugins to wrap up the Aubio audio analysis
library <http://aubio.org/>,
and I'm not sure exactly how to handle functions like "onset" detection.
Currently, it just outputs clicks to an audio port (0 when no beat, 1 when a
beat is detected). However, this doesn't take advantage of the power of
LV2: it is unclear to hosts and other plugins exactly what sort of data is
coming out.
I was thinking that maybe it should output MIDI, as MIDI matches the "event
based" aspect of beat detection. Perhaps sending out MIDI beat clock
signals <http://en.wikipedia.org/wiki/MIDI_beat_clock>? However, that
doesn't really match the sort of data that the Aubio functions detect. Maybe
it could just send a MIDI note-on event?
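If the plugin does go the MIDI note-on route, the conversion from the current click output is small. A hedged sketch (the note number, velocity, and 0.5 threshold are arbitrary choices of mine, not anything Aubio or LV2 prescribes): scan the per-sample onset signal for rising edges and emit a timestamped 3-byte note-on.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// One timestamped MIDI event: frame offset within the buffer plus raw bytes.
struct MidiEvent {
    uint32_t frame;
    uint8_t bytes[3];
};

// Convert a per-sample onset signal (0.0 = silence, 1.0 = onset "click")
// into MIDI note-on events. Only rising edges produce an event, so a
// multi-sample click still yields a single note-on.
std::vector<MidiEvent> onsets_to_midi(const float* onset, uint32_t nframes,
                                      uint8_t channel = 0,
                                      uint8_t note = 60,      // arbitrary pitch
                                      uint8_t velocity = 100) // arbitrary velocity
{
    std::vector<MidiEvent> out;
    bool above = false;
    for (uint32_t i = 0; i < nframes; ++i) {
        bool hit = onset[i] > 0.5f;
        if (hit && !above) {
            MidiEvent ev;
            ev.frame = i;
            ev.bytes[0] = static_cast<uint8_t>(0x90 | (channel & 0x0F)); // note-on status
            ev.bytes[1] = note;
            ev.bytes[2] = velocity;
            out.push_back(ev);
        }
        above = hit;
    }
    return out;
}
```

In a real plugin these events would then be written to the MIDI/event output port with the same frame offsets, whatever port type ends up being chosen.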
It seems like maybe some sort of LV2 specific extension might be in order?
The Event Port <http://lv2plug.in/ns/ext/event/#EventPort> extension seems
to define everything I need: sending timestamped events with no extra
information attached. The question is: does it require some sort of further
extension to define what a beat port is in the way the MidiEvent extension
does?
Of course, this raises the question: Is a port specific for "beats" even
necessary? I can think of a few cases:
- DAW uses beat signals to set up markers on a track
- DAW uses beat signals to break a percussive track up into beats.
- Delay effect uses beat signals to have a timed delay
- Automatic drummer adds on a percussion part to audio with varying tempo
- ? other beat-synchronous effects?
Anyway, I'm wondering what you all think would be the best option, or
whether you think functionality like this is even warranted in an LV2 plugin
(should analysis plugins stick to VAMP?). So let me know what you think.
Jeremy Salwen
hi...
since the jack1 release is taking pretty long, i decided to stop waiting
and make a tschack release.
tschack is an SMP-aware fork of jack1.
it's a drop-in replacement, like jack2.
features:
- jack1 mlocking
- controlapi which works even when libjackserver.so is loaded RTLD_LOCAL
- smp aware
- backendswitching
- strictly synchronous like jack1. (-> no latency penalty)
- clickless connections.
- shuts down audio processing when cpu is overloaded for too long.
i also released PyJackd which is a wrapper around libjackserver.
features:
- commandline for backendswitching
- pulseaudio dbus reservation.
get it here:
http://hochstrom.endofinternet.org/files/tschack-0.120.1.tar.gz
http://hochstrom.endofinternet.org/files/PyJackd-0.1.0.tar.gz
--
torben Hohn
Does anyone have any experience with the speed of traversal through a
Boost multi_index container? I'm pondering their use to manage notes
currently in play, e.g. indexed by MIDI channel and ordered by MIDI event
time/frame stamp.
cheers, Cal
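For intuition about traversal cost: a hedged, standard-library-only sketch of the bookkeeping Boost.MultiIndex automates (all names here are made up). Each extra index is an ordered map of iterators into common storage, so walking one index is essentially a std::multimap walk over tree nodes:

```cpp
#include <cassert>
#include <cstdint>
#include <iterator>
#include <list>
#include <map>

// A note currently in play.
struct Note {
    uint8_t  channel;
    uint8_t  pitch;
    uint64_t frame; // MIDI event time/frame stamp
};

// Hand-rolled equivalent of a two-index multi_index_container: one
// storage list plus two ordered indexes that must be kept in sync.
struct ActiveNotes {
    std::list<Note> storage;
    std::multimap<uint8_t, std::list<Note>::iterator> by_channel;
    std::multimap<uint64_t, std::list<Note>::iterator> by_frame;

    void insert(const Note& n) {
        storage.push_back(n);
        auto it = std::prev(storage.end()); // list iterators stay valid
        by_channel.emplace(n.channel, it);
        by_frame.emplace(n.frame, it);
    }
    std::size_t size() const { return storage.size(); }
};
```

With Boost.MultiIndex the sync work disappears and each element is a single node carrying all index hooks, which tends to be friendlier to the cache than the pointer-chasing above; either way, ordered-index traversal is node-by-node, not contiguous like a vector.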
Hi,
I've been trying to come up with a nice program architecture for a live
performance tool (audio looping etc.),
and I've kind of hit a wall:
Input will be taken via OSC, the "engine" will be written in C++, and the
GUI is up in the air.
I've written most of the engine (working to a degree; it needs some bugfixes),
and now I've started implementing
the GUI in the same binary. I.e. it's all compiled together: double-click it
and it shows on screen and loads a JACK client.
The GUI code has a nasty habit of segfaulting, which also kills the
engine. That's a no-go for live performance.
The engine is rock-solid stable, so it's the GUI thread
running around that's segfaulting things.
So I'm wondering if it's feasible to keep the audio/other data in shared
memory (SHM), and then write the GUI in Python, reading
from the same memory. Is this considered "ugly" design? I have no experience
with SHM, so I thought I'd ask.
The other option I was considering is writing the front-end GUI using
only info obtained via OSC, but that would
exclude the waveforms of the audio and lots of other nice features...
Help, advice, laughter etc welcomed :-) -Harry
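On the SHM idea itself: it isn't inherently ugly; JACK moves audio between processes through shared memory. A minimal sketch of the mechanics, assuming POSIX; an anonymous mapping plus fork() stands in for the engine/GUI pair, and all names are mine. A real engine with an independent Python GUI would instead use shm_open() with a well-known name so the GUI can attach on its own (Python's mmap module can map the same object):

```cpp
#include <cassert>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

// Fixed-layout block the engine writes and the GUI only reads. If the
// GUI maps it read-only, a GUI crash can never corrupt engine state.
struct SharedState {
    volatile float peak_level;
    volatile unsigned long frames_processed;
};

// Demo of the mechanics: an anonymous shared mapping survives fork(),
// so the child ("engine") writes and the parent ("GUI") reads.
// Returns true if the GUI saw the engine's writes.
bool demo_engine_gui_shm() {
    void* mem = mmap(nullptr, sizeof(SharedState), PROT_READ | PROT_WRITE,
                     MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (mem == MAP_FAILED) return false;
    auto* state = static_cast<SharedState*>(mem);
    state->peak_level = 0.0f;
    state->frames_processed = 0;

    pid_t pid = fork();
    if (pid == 0) {
        // "Engine" process: writes its state into shared memory.
        state->peak_level = 0.8f;
        state->frames_processed = 4096;
        _exit(0);
    }
    // "GUI" process: waits, then reads. If this process segfaulted,
    // the engine would keep running untouched.
    waitpid(pid, nullptr, 0);
    bool ok = (state->peak_level == 0.8f && state->frames_processed == 4096);
    munmap(mem, sizeof(SharedState));
    return ok;
}
```

The key property is exactly what you want for live use: the processes share only the mapped region, so one side dying cannot take the other down.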
Hello everyone,
I am trying to understand how a simple sound server could be implemented. I will
not necessarily develop this, but I'm trying to clarify my ideas.
As in JACK, it would allow clients to register, and their process callback to be
called with input and output buffers of a fixed size. The server would then mix
all output data provided by clients and pass the result to the audio hardware.
It would also read audio input from the hardware and dispatch it to the clients.
There wouldn't be any ports, routing, etc. as provided by JACK. The main
purpose of such a server would be to allow several applications to record and
play audio without acquiring exclusive access to the audio hardware. In this
regard it's similar to PulseAudio and many others.
The server itself could have a realtime thread for accessing audio; therefore,
for a proof of concept, it could be developed on top of JACK. However, none of
the clients could run in realtime: this is a given of my problem. The clients
would be standard applications, with very limited privileges. They wouldn't be
able to increase their own thread priorities at all. Each client would run as a
separate process.
The only solution that came to my mind so far is to have the clients communicate
with the server through shared memory. For each client, a shared memory region
would be allocated, consisting of one lock-free ringbuffer for input, another
for output, as well as a shared semaphore for server-to-client signaling.
At each cycle, the server would read and write audio data from/to the
ringbuffers of each registered client, and then call sem_post() on all shared
semaphores.
A client-side library would handle all client registration details, as well as
thread creation. It would then sem_wait(), and when awakened, read from the input
ringbuffer, call the client's process callback with I/O buffers, and write to the
output ringbuffer.
Does this design sound good to you? Do you think it could achieve reliable I/O,
and reasonable latency? Keeping latency as low as possible, what do you advise
for the size of the ringbuffers?
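The design sounds workable; it is essentially how RT-to-non-RT bridges are usually built. A minimal sketch of the per-client ringbuffer, assuming exactly one producer (the server's RT thread) and one consumer (the client thread); the class name and float-sample granularity are illustrative, not a real API:

```cpp
#include <atomic>
#include <cstddef>
#include <vector>

// Single-producer single-consumer lock-free ringbuffer, one per direction
// per client. Capacity must be a power of two (indices are masked).
// The read/write counters grow monotonically; full/empty is decided by
// their difference, so no slot is wasted.
class SpscRing {
public:
    explicit SpscRing(std::size_t capacity_pow2)
        : buf_(capacity_pow2), mask_(capacity_pow2 - 1) {}

    // Called only by the producer (e.g. the server's RT thread).
    bool push(float sample) {
        std::size_t w = write_.load(std::memory_order_relaxed);
        std::size_t r = read_.load(std::memory_order_acquire);
        if (w - r == buf_.size()) return false; // full: drop and count an xrun
        buf_[w & mask_] = sample;
        write_.store(w + 1, std::memory_order_release);
        return true;
    }

    // Called only by the consumer (the non-realtime client thread).
    bool pop(float& sample) {
        std::size_t r = read_.load(std::memory_order_relaxed);
        std::size_t w = write_.load(std::memory_order_acquire);
        if (r == w) return false; // empty
        sample = buf_[r & mask_];
        read_.store(r + 1, std::memory_order_release);
        return true;
    }

private:
    std::vector<float> buf_;
    const std::size_t mask_;
    std::atomic<std::size_t> write_{0};
    std::atomic<std::size_t> read_{0};
};
```

On sizing: two periods per direction is the usual floor; that buys the client one full period of scheduling slack at the cost of roughly one period of extra latency each way. Smaller rings underrun whenever a client is scheduled late; larger ones only add latency. Calling sem_post() from the RT thread is fine, since it does not block.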
--
Olivier
Hello
I bought the Natural Drum samples (http://www.naturaldrum.com/). The library contains
WAVs and presets for Kontakt and Halion. Now I'd like to create some GigaSampler
files in order to use them with LinuxSampler.
The documentation of the Natural Drum sample library is quite good. The only
thing missing is the "loudness" of each sample, needed to map each sample to a
velocity level from 0-127.
What would you recommend to calculate the "peak" of each drum sample
automatically? Is there a library which could do this? I would also be happy
with a command line tool like this:
$ peak bla.wav
Peak value: 12345
I could then write a C++ app using libgig.
Any ideas? Libraries? Algorithms?
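Once libsndfile (or anything else) has handed you float samples, the computation itself is a few lines. A hedged sketch; `analyze` and `Levels` are made-up names, and RMS is included because perceived loudness often tracks it better than the raw peak, which matters when assigning velocity layers:

```cpp
#include <cmath>
#include <cstddef>

// Peak = largest absolute sample value; RMS = root of the mean squared
// sample, a rough loudness measure. Both in the same [0,1] float scale
// that libsndfile's float reads use.
struct Levels { float peak; float rms; };

Levels analyze(const float* samples, std::size_t n) {
    float peak = 0.0f;
    double sum2 = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        float a = std::fabs(samples[i]);
        if (a > peak) peak = a;
        sum2 += static_cast<double>(samples[i]) * samples[i];
    }
    return { peak, n ? static_cast<float>(std::sqrt(sum2 / n)) : 0.0f };
}
```

Command-line-wise, `sox bla.wav -n stat` prints a "Maximum amplitude" line, which may already be all you need for scripting the velocity mapping.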
Thanks!
Oliver
Hi all,
I've been battling a kind of DSP-writer's-block as of late. Namely, I am
dealing with a project where (at least as of right now) I would like to explore
human whisper and its percussive/rhythmic power. This would take place in an
ensemble of "voices." I am also looking to combine whisper with some sort of
DSP. A vocoder is one of the obvious choices, but it sounds IMHO
cliché, and as a result I would like to avoid it as much as possible (unless I
can somehow come up with a cool spin on it, which I haven't yet). I also tried
amp mod, additive, filtering, etc., but none of these struck me as something
interesting. I do think delays will be fine in terms of "punctuating" the
overall pattern but I think this should take place at the end of the DSP chain.
Granular synthesis is also a consideration but I've done so much of it over the
past years I am hoping to do something different.
So, as of right now I have:
1) whisper
2) ???
3) delays
4) profit! :-)
Given the mental constipation I have been battling particularly over the past
couple of days, I wanted to turn to you my fellow LA* enthusiasts for some
thoughts/ideas/inspiration. Your help would be most appreciated and I will
gladly credit your ideas in the final piece.
Many thanks!
Ivica Ico Bukvic, D.M.A.
Composition, Music Technology
Director, DISIS Interactive Sound and Intermedia Studio
Assistant Co-Director, CCTAD
CHCI, CS, and Art (by courtesy)
Virginia Tech
Department of Music
Blacksburg, VA 24061-0240
(540) 231-6139
(540) 231-5034 (fax)
ico.bukvic.net
If you need something to push you over the edge and port your existing
Qt/KDE music-making or multimedia app to the N900 running Symbian,
Maemo, or MeeGo: http://qt-apps.org/news/?id=340 (see below).
Some ideas (please?):
http://sv1.sourceforge.net/ == http://www.sonicvisualiser.org/
http://kmid2.sourceforge.net
http://kmetronome.sourceforge.net
http://kmidimon.sourceforge.net
http://vmpk.sourceforge.net/
http://qtractor.sourceforge.net
http://qmidictl.sourceforge.net
http://qmidinet.sourceforge.net
http://qjackctl.sourceforge.net/
....................
Win 10.000,- EUR at the "Qtest Mobile App Port"
Published: Dec 20 2010
Qtest Mobile App Port
Contest for Qt and KDE applications
Welcome to the Qtest Mobile App Port! As developers of applications
using Qt, you already know how great it is to work with - but how
about on mobile platforms, such as Symbian and MeeGo? How would you
like to take that step you have been wanting to take, but not been
able to justify: Take your application from the desktop and bring it
into the hand-held world via the Ovi store.
Let this contest be the justification, with the possibility of a new
phone or even 10,000 euros waiting at the end.
Dates:
The contest starts on the 20th of December, 2010, and runs till the 28th of
February. The 31st of December is important for you if you wish to
take part in the Early Bird competition. If you do not win, you will
still take part in the main competition, and will be allowed to
continue your work and submit new versions to the Ovi Store. The 28th
of February is the deadline for taking part in the main competition.
Developer Sprint: There will be a sponsored developer sprint organized
together with the KDE e.V. during the competition. The travel and stay
can be paid for if you do not have the budget yourself. Further
details will be made public at a later time, and all participants will
be notified of this information via email.
Judging and prizes:
The Qtest Mobile App Port is evaluated by a panel of judges which will
be announced in the next few days. The jury will pick 5 winners on the 31st
of December as the early bird winners. Every winner gets a free N900
phone. The main competition's first prize is EUR 10,000, which will be
awarded to the application which the judges find to be the best ported
application. The second to sixth prizes will be another 5 N900 phones.
And, finally: Everybody who takes part in the competition will be
awarded a gift bag, with a T-shirt and other merchandise.
Eligibility:
To be able to take part in the contest, the ported application must be
submitted for Ovi Store signing by one of the two deadlines:
- Early bird entries must be submitted by December 31st
- Standard entries must be submitted by February 28th
You also have to submit your application to the "Mobile Contest"
category on Qt-Apps.org or MeeGo-Central.org.
You can submit your application to the Ovi Store as many times as you
wish during the competition. This allows you to get feedback from the
public on your software. It's possible to submit new or existing
KDE/Qt applications.
So have fun and good luck everybody!
.......................
Niels
http://nielsmayer.com