Greetings,
I'm testing the Linuxsampler plugin with Ardour3 SVN. The plugin loads
correctly, and when I launch the Fantasia GUI it identifies itself
correctly in the interface. I can load a MIDI file in A3 and watch it
play the keyboard on the LS plugin. Everything looks good, but there's
no sound. There's also no instance of Linuxsampler in QJackCtl.
When I run LS + Fantasia standalone I noticed that it doesn't
autoconnect to JACK. Perhaps this is the problem, but I can't find where
to configure LS to autoconnect.
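As a workaround I could presumably wire the standalone sampler's
ports up by hand through the JACK C API, something like the sketch
below (not tested; the "LinuxSampler:0"/"1" port names are a guess
on my part, jack_lsp shows the real ones), but it would be nicer if
LS simply did it itself:

    #include <jack/jack.h>
    #include <cstdio>

    int main()
    {
        // Open a throwaway client just to make connections.
        jack_client_t* client =
            jack_client_open("connect-helper", JackNullOption, NULL);
        if (!client) return 1;

        // List every registered port so the sampler outputs can be found.
        const char** ports = jack_get_ports(client, NULL, NULL, 0);
        for (int i = 0; ports && ports[i]; ++i)
            std::printf("%s\n", ports[i]);
        jack_free(ports);

        // Wire the (assumed) sampler outputs to the soundcard.
        jack_connect(client, "LinuxSampler:0", "system:playback_1");
        jack_connect(client, "LinuxSampler:1", "system:playback_2");

        jack_client_close(client);
        return 0;
    }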
Any suggestions? This is a low-priority issue for me, but it'd be nice
to figure out what's happening.
Best,
dp
LoMus 2010
In search of free/open-source software for sound and intermedia creation
For its third edition, LoMus 2010 addresses everyone venturing into the development of free music software, or of free software that can contribute to the process of musical creation.
Echoing one of the two themes highlighted at this edition of the JIM conference, "The musical work facing heterogeneity: the problem of mixity", the LoMus competition particularly encourages contributions that integrate or hybridize with other media. This theme is not, however, restrictive.
A prize will be awarded to software that proves not only innovative, but also inventive in the face of the current challenges of musical creation.
Calendar
Call for submissions: February 4th, 2010
Submission deadline: April 1st, 2010
Notification of acceptance: May 1st, 2010
Prize ceremony at the JIM conference: May 20th, 2010
info: concours.afim-asso.org
AFIM : http://www.afim-asso.org/spip.php?article1
JIM2010 : http://jim.afim-asso.org/ocs/index.php/jims/index
Hi guys, I'm looking at reworking the mixer in a medium/large
C++ application. Any pointer/opinions/theories on how this
should be approached? (I know many of you have encountered
the problem.)
In particular, I'm thinking about having a central Mixer
object that the other parts of the application interface
with. This Mixer object would own, maintain, send, return,
and mix the buffers for everybody.
For those wanting/needing more details... read on.
APPLICATION
-----------
The application is Composite (http://gabe.is-a-geek.org/composite/).
It is intended to have DAW-like attributes... but is more
along the lines of a sequencer. Mixing is currently
implemented directly in the process() callback and inside
the sampler.[1]
RATIONALE
---------
The Sampler currently has a pointer to a parent class Engine
so that it can have access to the output buffers (currently
tied to AudioOutput 'drivers'). I would like to make
Sampler a more self-contained class.
APPROACHES I'M CONSIDERING
--------------------------
A. Create an abstract Mixer class. This class will manage
the audio buffers, their connections, send/return, etc.
The sampler class would, for instance, request a buffer
allocation from the Mixer. On every process cycle, the
Sampler would get a fresh copy of the buffer pointer.
In this way, the Mixer could, if it wanted to, serve up
the exact same pointer that jack_port_get_buffer() returns.
(A rough sketch of this interface follows the list.)
B. Have Sampler own all its output buffers. Force other
applications to query or connect to them in order to do
mixing. An example of this approach would be the LV2 amp
example.[2] However, with this approach I'm concerned
that I won't be able to avoid buffer thrashing.
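To make option A concrete, here's roughly the kind of interface
I have in mind. It is purely a sketch; none of these names exist
in Composite yet:

    // Sketch only -- hypothetical names, not existing Composite code.
    #include <cstddef>

    class Mixer
    {
    public:
        virtual ~Mixer() {}

        // A client (e.g. the Sampler) asks for an output channel once,
        // at setup time, and keeps the returned handle.
        virtual int allocate_channel(size_t max_frames) = 0;

        // Called at the top of every process() cycle.  The Mixer may
        // return its own scratch buffer, or the very pointer obtained
        // from jack_port_get_buffer(), so it must be re-fetched
        // each cycle rather than cached.
        virtual float* channel_buffer(int channel, size_t nframes) = 0;

        // After all clients have written their audio, apply gains and
        // sends and sum everything into the real output buffers.
        virtual void mixdown(size_t nframes) = 0;
    };

The Sampler would then hold only a channel handle and re-fetch the
pointer at the top of every cycle, which keeps it free of the
Engine/AudioOutput pointer mentioned above.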
THINGS I'VE SEEN ELSEWHERE
--------------------------
Most applications I've looked at have a very de-centralized
approach. If you are the author of one of these -- forgive
me if I've failed to grok your code! :-)
* Ingen handles mixing as a feature of a "connection."
It also appears that gain has to be handled elsewhere
(like an amplifier insert). However, in Composite
I'd like to avoid setting up an arbitrary connection
framework at this stage.
* Ardour appears to handle channel gain internal to
each channel/buffer object. The output mix-downs
are more or less handled directly in the process()
callback.[3]
* Non-daw appears to handle it similarly to Ardour.
* All of them, at the core, implement the mixing as
some manner of basic function... like a specialized
memcpy(). (A sketch of such a function follows the list.)
* All of them implement SSE optimizations in mixing, or
at least have them on their TO-DO list.
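For concreteness, here's the kind of "specialized memcpy" I mean,
in its plain, unoptimized form; the SSE variants mentioned above
would replace the loop body:

    // Add 'src' into 'dst' with a constant gain -- the basic mix primitive.
    #include <cstddef>

    static inline void mix_add_with_gain(float* dst, const float* src,
                                         size_t nframes, float gain)
    {
        for (size_t i = 0; i < nframes; ++i)
            dst[i] += src[i] * gain;   // obvious candidate for SSE
    }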
Any thoughts or comments are appreciated!
Thanks,
Gabriel
[1] If digging in the code, it's roughly at:
src/Tritium/src/EnginePrivate.cpp:438
src/Tritium/src/Sampler.cpp:419
src/Tritium/src/Sampler.cpp:598
Current git revision cfebf2058...
[2] http://lv2plug.in/plugins/Amp-example.lv2/
[3] See, e.g. AudioTrack::roll() in libs/ardour/audio_track.cc
Hi to all!
First post to this list :) and I will use it to present a small project
I've been working on.
VocProc is a real time JACK application for vocal processing including
pitch shifting, automatic pitch correction and vocoder.
It is basically the same thing as fons' jretune or Tom's autotalent. I
wanted that functionality, but at the time fons hadn't made jretune yet
and I wasn't aware of autotalent, so I made it myself and decided to
release it. I have not tried either of the above yet, so I cannot say
anything about sound quality differences.
I made a working version of VocProc some time ago, but have now finally
found some time to clean up the code and prepare it for release.
You can grab the code at:
http://hyperglitch.com/dev/VocProc
So far the code has only been compiled and tested on my computer (Arch
Linux, fftw-3.2.2, Qt 4.5), where it works OK (for me).
Any feedback and bug reports are appreciated.
Cheers!
Igor
Maitland Vaughan-Turner:
>
>> Maybe. But I would say something like this:
>>
>> http://www.pawfal.org/Software/fastbreeder/
>>
>> would be more fitting for actually being
>> an audio function generator.
>>
>>
> Oh man! That is glitchtastic! I love you!
Happy to share the link!
And just to clear up a potential misunderstanding,
fastbreeder is made entirely (as far as I know) by
David Griffith.
(for once I wasn't plugging my own software. :-) )
It is my pleasure to announce the latest release of Aqualung,
an advanced, cross-platform, gapless music player.
This release adds some features and many bugfixes - all users
are encouraged to upgrade.
Please see the Aqualung website for general information,
downloads, documentation etc: http://aqualung.factorial.hu
The Win32 build is up-to-date with the release. The OSX bundle
will be updated at a later time.
The release changelog is listed below.
Enjoy,
Tom
* * *
Aqualung 0.9beta11
http://aqualung.factorial.hu
* Add PulseAudio support as contributed by PCMan plus a few minor
fixes.
* Added option for starting Aqualung hidden in tray. Useful when
running Aqualung automatically after login.
* Implement auto roll to active track functionality. Thanks to Chris
Craig for the excellent patch.
* Support new Musepack API (patch by Yavor Doganov)
* New keybinding: Ctrl-S to stop after currently playing song has
ended. Thanks to cobines for the patch.
* Add support for more versatile mouse-systray interaction. Thanks to
cobines for the excellent patch.
* Added support for new GtkTooltip API (since 2.12). Fixed tooltip
disappearing issue because of too frequent tooltip updates.
* Automatically add/remove stores when they become available or
disappear (most likely due to mount/unmount operations). Modified
stores will not be removed automatically.
* Add support for an application_title lua function separate from the
playlist_title lua function, so that the window title and the main
title label of the player is configurable from Lua.
* Don't require restart to update programmable title format file
* Don't use sndfile's Ogg decoder (always use native Ogg library)
* Fix FFmpeg headers detection in configure script
* Fix compiler warnings on 64 bit. Thanks to Zoltan Kovacs for the
patch.
* Fix crash on 64 bit when Aqualung is compiled without SRC support
and file contains metadata. Thanks to Zoltan Kovacs for tracking the
problem and providing the patch.
* Fixed crash when pasting into playlist without copying first (empty
clipboard).
* Fix a suspected regression: space toggles state of combined
play/pause button when a file is loaded.
* Fix lockup at end of playlist.
* Fixed a crash that occurred when clicking on a picture of a file in
the File Info dialog when the file format did not support metadata.
* Fix playlist column size allocation by eliminating manual/delayed
calculations and utilizing the built-in COLUMN_AUTOSIZE feature
instead.
* Fix crash when invoking the File Info dialog for an MPEG internet
radio.
* Fix inversion of enabled/disabled state of tooltips.
* Fix crash when loading .m3u with invalid filename.
* Updated translations: German, Hungarian, Russian, Ukrainian
* New translations:
Japanese by Norihiro Yoneda
French by Julien Lavergne
* Up-to-date user documentation
Since the last release of fastbreeder is now about four years old, it
won't compile out of the box. To get it to compile *at all* you need to
add:
#include <cstdlib>
somewhere near the top of Synth.cpp (presumably because newer GCC
releases no longer pull that header in indirectly). I could spend more
time getting it to compile cleanly, but with that change it works.
Gordon MM0YEQ
"Kjetil S. Matheussen" <k.s.matheussen(a)notam02.no> sez:
> Maybe. But I would say something like this:
>
> http://www.pawfal.org/Software/fastbreeder/
>
> would be more fitting for actually being
> an audio function generator.
>
>
Oh man! That is glitchtastic! I love you!
Dear fellow LA* members,
As some of you may be aware, instead of a static news page, Linuxaudio.org now has a direct LAA feed as its front page. Consequently, I would like to encourage everyone to put special care into crafting your LAA posts, much more so than those destined for the lau/lad lists, as this is in part what everyone sees when they visit Linuxaudio.org (and if our awstats <http://stats.linuxaudio.org/cgi-bin/awstats.pl?config=www.linuxaudio.org> are any indication, we get plenty of exposure there, and it is steadily growing). I say this not because there have been any grave offenses recently, but rather because I think it would be really nice if, as a community, we collectively paid extra attention to this facet, which is considerably more "public" than a typical lau/lad post. So, I guess what I am trying to say is that having a post on the lau/lad lists mirrored on laa may not always be a good idea.
If I had to single out one post in there that could use some TLC :-) it would be the call for submissions for the upcoming LAC. Namely, suggesting that there has been little interest may end up looking like a self-fulfilling prophecy: new and incoming potential contributors to LAC who come across this post could easily be discouraged by the way it reads, despite the fact that we all know most conference submissions are usually uploaded in the last 72 hours before the deadline.
At any rate, I don't mean to be preaching, so I hope no one will get offended. And if you do, I guess I owe you a pint (hear that, Frank? ;-)
Just my 5-cents worth...
Best wishes,
Ivica Ico Bukvic, D.M.A.
Composition, Music Technology
Director, DISIS Interactive Sound & Intermedia Studio
Director, L2Ork Linux Laptop Orchestra
Assistant Co-Director, CCTAD
CHCI, CS, and Art (by courtesy)
Virginia Tech
Dept. of Music - 0240
Blacksburg, VA 24061
(540) 231-6139
(540) 231-5034 (fax)
ico(a)vt.edu
http://www.music.vt.edu/faculty/bukvic/