I am pleased to announce the release of Composite 0.006.2. This is a
bug-fix release.
ABOUT
-----
Composite is (or will be) a software application/system for
real-time, in-performance sequencing, sampling, and looping.
Currently the main feature is the LV2 Sampler that supports Hydrogen
drumkits.
CHANGES
-------
* LV2 Sampler: Fix logic error with uri-map by passing
  the Event URI as the `map` parameter. This also works
  around crashes in slv2 and zynjacku (the crashes themselves
  are due to a bug on their part).
STATUS
------
Composite is a project with a large vision. Here is the status of the
different components:
composite-gui: Alpha (i.e. "a broken version of Hydrogen")
composite_sampler (LV2): production/stable, no GUI
libTritium: Not a public API, yet.
LINKS
-----
Composite: http://gabe.is-a-geek.org/composite/
Plugin Docs:
file:///home/gabriel/code/composite-planning/plugins/sampler/1
Tarball:
http://gabe.is-a-geek.org/composite/releases/composite-0.006.2.tar.bz2
Git: http://gitorious.org/composite
     git://gitorious.org/composite/composite.git
HOW TO USE THE PLUGIN
---------------------
To use the plugin, you need the following:
* A program (host) that loads LV2 plugins.
* A MIDI controller.
* An audio output device. :-)
The following LV2 hosts are known to work with this plugin:
Ingen http://drobilla.net/blog/software/ingen/
ardour3 (alpha) http://ardour.org/
lv2_jack_host http://drobilla.net/software/slv2/
zynjacku http://home.gna.org/zynjacku/
If you don't have a hardware MIDI controller, I suggest using
jack-keyboard (http://jack-keyboard.sourceforge.net/).
The first time you run the sampler, it will create a file
~/.composite/data/presets/default.xml, which will set up presets on
Bank 0 for the two default drum kits (GMkit and TR808EmulationKit).
Sending MIDI PC 0 and PC 1 will switch between the two kits. See
composite_sampler(1) for more information on setting up presets.
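As a convenience, here is a minimal sketch of sending those Program Change
messages from Python. It assumes the python-mido package (with the
python-rtmidi backend) is installed; the port name used below is a
hypothetical placeholder, so check the names your system actually exposes.

import time
import mido

# Hypothetical port name; list the real ones with mido.get_output_names().
PORT_NAME = "composite_sampler:midi_in"

with mido.open_output(PORT_NAME) as port:
    # MIDI PC 0 selects the first preset in Bank 0 (GMkit by default).
    port.send(mido.Message('program_change', channel=0, program=0))
    time.sleep(2.0)
    # MIDI PC 1 selects the second preset (TR808EmulationKit by default).
    port.send(mido.Message('program_change', channel=0, program=1))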
ACKNOWLEDGMENTS
---------------
With this release, I would especially like to thank:
Alessio Treglia - For quickly packaging 0.006.1 and getting
it into Debian sid.
Benoit Delcour - For quickly testing and reporting this
issue with the LV2 Sampler.
Peace,
Gabriel M. Beddingfield
spectmorph-0.2.0 has been released. This is the first version that supports
sound morphing, and some really interesting sounds can be created with this
version; there are examples on the web page.
Overview of Changes in spectmorph-0.2.0:
----------------------------------------
* implemented user-defined morphing using a MorphPlan consisting of operators
- graphical representation of the operators
- graphical editing of the MorphPlan
- implement actual morphing (in per-operator per-voice module object)
- added MorphPlanSynth/MorphPlanVoice, which allow running MorphPlans easily
- added LPC (linear prediction) during encoding, and LPC/LSF based morphing
* BEAST plugin:
- added GUI required for editing a MorphPlan
- support four output channels, as well as two control inputs
- delay compensation plugin (to compensate SpectMorph delay)
* JACK client:
- support GUI MorphPlan editing
* added sminspector (graphical tool for displaying SpectMorph instruments)
- zoomable time/frequency view
- configurable (FFT/CWT/LPC) time/frequency view transform parameters
- spectrum, sample, LPC visualization
- graphical loop point editing
- allow storing changes in .smset files (for editing loop points)
- play support via JACK
* improved smtool (old name: smextract); it's now installed by default
- lots of new commands (like "total-noise", "auto-volume", ...)
- support .smset as input (in addition to .sm); command is executed on all
.sm files in the .smset
* added shared libraries for gui and jack code
* new integrated memory leak debugger (to find missing deletes)
* support ping-pong loops
* doxygen API docs updates
* migrated man pages from Doxer to testbit.eu wiki (and use wikihtml2man.py)
* performance improvements
What is SpectMorph?
-------------------
SpectMorph is a free software project which allows you to analyze samples of
musical instruments and to combine them (morphing). It can be used to
construct hybrid sounds, for instance a sound between a trumpet and a flute, or
smooth transitions, for instance a sound that starts as a trumpet and then
gradually changes to a flute.
Interpolating between two samples of the same instrument (for instance,
different attack velocities of a piano) could also be interesting.
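To give a rough intuition of what "morphing" means here (this is only an
illustration; SpectMorph's actual algorithm works on its own spectral model
with LPC/LSF-based morphing, as listed in the changes above), one can think
of crossfading the magnitude spectra of two sounds:

import numpy as np

def morph_frame(frame_a, frame_b, t):
    """Toy spectral morph of two equally sized frames: t=0.0 -> A, t=1.0 -> B."""
    spec_a = np.fft.rfft(frame_a)
    spec_b = np.fft.rfft(frame_b)
    # Interpolate the magnitudes linearly; keep the phase of the dominant source.
    mag = (1.0 - t) * np.abs(spec_a) + t * np.abs(spec_b)
    phase = np.angle(spec_a if t < 0.5 else spec_b)
    return np.fft.irfft(mag * np.exp(1j * phase), n=len(frame_a))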
SpectMorph is implemented in C++ and licensed under the GNU LGPL version 3.
SpectMorph is still under development. This means:
* the file format is not yet stable - instruments or morph plans may not work
with newer versions of SpectMorph
* the algorithms for synthesizing sounds are still under development - newer
versions may sound different
To sum it up: if you compose music using SpectMorph, don't expect newer
versions to be compatible in any way.
Links:
------
Website: http://www.spectmorph.org
Download: http://www.spectmorph.org/downloads/spectmorph-0.2.0.tar.bz2
There are many sound samples on the website which demonstrate morphing between
instruments.
--
Stefan Westerfeld, Hamburg/Germany, http://space.twc.de/~stefan
Howdy!
TYOQA (the year of qtractor automation for the clueless) is now pretty
real. I'd say it's all been my prerogative, again and again, doing
things my own way (do I hear Frank S. singing? nope. move along...).
Is this the time to do the unthinkable? Should I tag it as beta now?
Should I? There's one single reason for not doing so and a couple of
others to make it through:
1. basically it's all the same functionality that stays put or is improved
in a few spots;
2. it just feels like it! :)
Now comes the mighty corrosive one: I'll be off on vacation soon. Summer
is waiting for me. And I just hate to miss that kind of deadline. Woohoo!
Is there anything else to mention? Go ahead, make your day:
Qtractor 0.5.0 (alpha zulu) is now released!
Release highlights:
* TYOQA! Audio/MIDI track and plugin parameter automation (NEW)
* MIDI controller catch-up behavior (NEW)
* All zooming in/out relative to views center (NEW)
* Audio gain/panning smoothing changes (FIX)
Happy summer 2 y'all!
Website:
http://qtractor.sourceforge.net
Project page:
http://sourceforge.net/projects/qtractor
Downloads:
- source tarball:
http://downloads.sourceforge.net/qtractor/qtractor-0.5.0.tar.gz
- source package (openSUSE 11.4):
http://downloads.sourceforge.net/qtractor/qtractor-0.5.0-3.rncbc.suse114.sr…
- binary packages (openSUSE 11.4):
http://downloads.sourceforge.net/qtractor/qtractor-0.5.0-3.rncbc.suse114.i5…
http://downloads.sourceforge.net/qtractor/qtractor-0.5.0-3.rncbc.suse114.x8…
- from the dusty shelf: user manual (anyone?):
http://downloads.sourceforge.net/qtractor/qtractor-0.3.0-user-manual.pdf
Weblog (upstream support):
http://www.rncbc.org
License:
Qtractor is free, open-source software, distributed under the terms
of the GNU General Public License (GPL) version 2 or later.
Change-log:
- MIDI controller learn/catch-up sees the way in: MIDI controller
changes are now only effective after catching-up with their respective
program parameters, avoiding abrupt jumps and keeping a safe and
continuous behavior.
- Track/Height menu is now featured, giving access to Increase, Decrease
or Reset the current track height.
- All changes to audio gain and panning on tracks and buses are now
applied following a piece-wise linear ramp, reducing the old nasty
clicks, pops or zipper artifacts that could be awfully audible in some
situations, especially with automation (an illustrative sketch of the
ramp idea follows this change-log).
- All zooming in/out is now relative to either the viewport center or
the current mouse cursor position, if it lies inside the viewport.
- TYOQA! the underground sources have emerged: after years in the
making, track automation, or dynamic curves as some like to call it, is
finally a reality, tricky but real ;)
- Audio clip anti-glitch/ramp-smoothing effect is now slightly
independent of current buffer-size period (mitigating bug #3338113 effect).
- Once buried under the Edit menu, the Clip menu has finally been
promoted to the top main menu.
- Debugging stacktrace now applies to all working threads.
- Fixed muted loop playback on audio clips ending coincidentally with
the loop-turn/end point.
- Old/deprecated JACK port latency support added to audio recording
latency compensation.
- Audio clip merge/export lock-ups now untangled (fixes bug #3308998).
- LV2 extension headers update.
- Fixed configure of newer LV2 host implementation stack (LILV) when
older (SLV2) is not present.
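As mentioned in the gain/panning item above, here is an illustrative sketch
of the piece-wise linear ramp idea (Python/NumPy for brevity; Qtractor itself
is written in C++, so this is not its actual code):

import numpy as np

def apply_gain_smoothed(buffer, old_gain, new_gain):
    """Ramp the gain linearly across one audio period instead of jumping,
    so no click, pop or zipper artifact is produced."""
    ramp = np.linspace(old_gain, new_gain, num=len(buffer), endpoint=False)
    return buffer * ramp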
Enjoy && Cheers!
--
rncbc aka Rui Nuno Capela
rncbc(a)rncbc.org
I am pleased to announce the release of Composite 0.006.1. This is a
bug-fix release.
ABOUT
-----
Composite is (or will be) a software application/system for
real-time, in-performance sequencing, sampling, and looping.
Currently the main feature is the LV2 Sampler that supports Hydrogen
drumkits.
CHANGES
-------
* Fix FTBFS: t_AudioPort test missing link to QtCore
* Fix FTBFS Song.cpp: Replace use of QString(int) with
  QString(const char*)
* LV2 Sampler: Initialize QCoreApplication
* LV2 Sampler: Fix crash by moving Logger instance to module level
* LV2 Sampler: Check LV2 Event types and add extension meta-data
* tests: t_AudioPort had a degenerate test
* tests: t_SeqScript now passes (was failing as dev reminder)
* Replace -lQtCore with ${QT_LIBRARIES} in build system
STATUS
------
Composite is a project with a large vision. Here is the status of the
different components:
composite-gui: Alpha (i.e. "a broken version of Hydrogen")
composite_sampler (LV2): production/stable, no GUI
libTritium: Not a public API, yet.
LINKS
-----
Composite: http://gabe.is-a-geek.org/composite/
Plugin Docs:
file:///home/gabriel/code/composite-planning/plugins/sampler/1
Tarball:
http://gabe.is-a-geek.org/composite/releases/composite-0.006.tar.bz2
Git: http://gitorious.org/composite
     git://gitorious.org/composite/composite.git
HOW TO USE THE PLUGIN
---------------------
To use the plugin, you need the following:
* A program (host) that loads LV2 plugins.
* A MIDI controller.
* An audio output device. :-)
The following LV2 hosts are known to work with this plugin:
Ingen http://drobilla.net/blog/software/ingen/
ardour3 (alpha) http://ardour.org/
lv2_jack_host http://drobilla.net/software/slv2/
zynjacku http://home.gna.org/zynjacku/
If you don't have a hardware MIDI controller, I suggest using
jack-keyboard (http://jack-keyboard.sourceforge.net/).
The first time you run the sampler, it will create a file
~/.composite/data/presets/default.xml, which will set up presets on
Bank 0 for the two default drum kits (GMkit and TR808EmulationKit).
Sending MIDI PC 0 and PC 1 will switch between the two kits. See
composite_sampler(1) for more information on setting up presets.
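If you are not sure which MIDI port belongs to the sampler in your host,
here is a quick way to list the available ports from Python (assuming the
python-mido package with the python-rtmidi backend is installed):

import mido

# Print every MIDI output port the backend can see; the sampler's input
# port name depends on your host and setup.
for name in mido.get_output_names():
    print(name)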
ACKNOWLEDGMENTS
---------------
With this release, I would especially like to thank:
Alessio Treglia - For dutifully reporting bugs, submitting
patches, and *patiently* waiting for me to review them.
Paul Davis - For working with me on the crashes that were
discovered on ardour3 alpha.
Peace,
Gabriel M. Beddingfield
*REMINDER* deadline for submission of abstracts is Sunday July 17.
If you need an extension, please contact us before Monday.
Call for Abstracts:
Versatile Sound Models for Interaction in
Audio–graphic Virtual Environments:
Control of Audio-graphic Sound Synthesis
Workshop @ Conference on Digital Audio Effects DAFx-11
Friday September 23, 2011 at Ircam, Paris
The use of 3D interactive virtual environments is becoming more
widespread in areas such as games, architecture, urbanism, information
visualization and sonification, interactive artistic digital media,
serious games, and gamification. The limitations of sound generation
in existing environments are increasingly obvious given current
requirements.
This workshop will look at recent advances and future prospects in
sound modeling, representation, transformation and synthesis for
interactive audio-graphic scene design.
Several approaches to extending sound generation in 3D virtual
environments have been developed in recent years, such as sampling,
modal synthesis, additive synthesis, corpus-based synthesis, granular
synthesis, description-based synthesis, and physical modeling. These
techniques can be quite different in their methods and results, but
may also become complementary towards the common goal of versatile and
understandable virtual scenes, covering a wide range of object types,
of interactions between objects, and of interactions with them.
The purpose of this workshop is to sum up these different approaches,
present current work in the field, and to discuss their differences,
commonalities and complementarities.
Authors of accepted abstracts will be invited to submit an extended
version to a special issue of the Springer Journal on Multimodal User
Interfaces (JMUI) or the SpringerOpen EURASIP Journal on Audio, Speech,
and Music Processing.
Detailed information about the workshop can be found here:
http://www.topophonie.fr/event/3
http://dafx11.ircam.fr/?page_id=224
The workshop is free for attendees of the DAFx conference, and open
to non-attendees by invitation. Registration for the DAFx
conference can be found here: http://dafx11.ircam.fr
Call for Abstracts
------------------
Abstracts (max. 1 A4/Letter page, PDF format)
on the topics of the workshop should be sent by July 17
to Diemo Schwarz (schwarz(a)ircam.fr)
The submissions will be reviewed by a program committee,
and accepted communications will be presented at the workshop.
Authors will be notified by the end of July 2011 at the latest.
Important Dates
---------------
* July 17, 2011: Abstract Submission Deadline
* July 31, 2011: Notification of Acceptance
* September 23, 2011: Workshop
Program Chairs
--------------
Roland Cahen, ENSCI-les Ateliers
Diemo Schwarz, IRCAM
Christian Jacquemin, LIMSI-CNRS & University Paris Sud 11
Hui Ding, LIMSI-CNRS & University Paris Sud 11
Program committee
-----------------
Nicolas Tsingos (Dolby Laboratories)
Lonce Wyse (National University of Singapore)
Andrea Valle (University of Torino)
Hendrik Purwins (University Pompeu Fabra)
Thomas Grill (Institut für Elektronische Musik IEM, Graz)
Charles Verron (McGill University, Montreal)
Cécile Le Prado (Centre National des Arts et Metiers CNAM)
Annie Luciani (Ingénierie de la Création Artistique ICA, ACROE)
Topics in detail
----------------
What other, better alternatives to traditional sample triggering
exist for producing comprehensive, flexible, expressive, realistic
sounds in virtual environments? How can rich interaction with scene
objects be produced, for instance with physically informed models for
contact and friction sounds? How can audio-graphic scenes be edited
and structured other than by mapping one event to one sound? There is
no standardized architecture, representation and language for auditory
scenes and objects, as OpenGL is for graphics. The workshop will treat
higher level questions of architecture and modeling of interactive
audio-graphic scenes, down to the detailed question of sound modeling,
representation, transformation and synthesis. These questions cannot
be detached from implementation issues: novel and hybrid synthesis
methods, comparison and improvement of existing platforms, software
architecture, plug-in systems, standards, formats, etc.
New possibilities regarding the use of audio descriptors and dynamic
access to audio databases will also be discussed.
Beyond these main questions, the workshop will cover other recent
advances in audio-graphic scene modeling such as:
* audio-graphic object rendering, and physically and geometrically driven
sound rendering,
* interactive sound texture synthesis, based on signal models, or
physically informed
* joint representation of sound and graphic spaces and objects,
* sound rendering for audio-graphic scenes:
* level of detail, which is a very advanced concept in graphics, but is
rarely treated in audio.
* representation of space and distance,
* masking and occlusion of sources,
* clustering of sources
* audio-graphic interface design,
* sound and graphic localization,
* cross- and bi-modal perceptual evaluations,
* interactive audio-graphic arts,
* industrial audio-graphic data:
* architectural acoustics,
* sound maps,
* urban soundscapes...
* platforms and tools for audio-graphic scene modeling and rendering,
These areas are interdisciplinary in nature and interrelated. New
advancements in each area will benefit the others. This workshop will
allow participants to exchange the latest developments and to point
out current challenges and new directions.
--
Diemo Schwarz, PhD -- http://diemo.concatenative.net
Real-Time Music Interaction Team -- http://imtr.ircam.fr
IRCAM - Centre Pompidou -- 1, place Igor-Stravinsky, 75004 Paris, France
Phone +33-1-4478-4879 -- Fax +33-1-4478-1540
xjadeo is a video player that synchronizes to an external time-source.
http://xjadeo.sf.net/
After half a year of being stuck at release-candidate 7, version 0.6.0
went out quietly last week.
Version 0.6.1 - released today - fixes a small bug (Russian and Greek
translations were not installed), adds JACK-Session support to xjadeo
[1], and makes use of JACK's newer latency compensation API.
As a reminder: version 0.6.0 introduced support for win32, features a
complete manual rewrite/overhaul, includes the long overdue Qt3->Qt4
port of the GUI, adds support for parsing LTC timecode from audio,
switches to more user-friendly default settings, and adds support for
newer versions of ffmpeg/libav*, amongst many other small details (see
the changelog).
Thanks to Alessio Treglia (debian packaging, bug reports), Alexandre
Prokoudine (testing, bug reports, Russian translation), Geoff Beasley
(testing and Manual contributions), Michales Michaloudes (Greek
translation) and Natanael Olaiz (bug squashing).
-=-
[1] More on JACK-Session support:
The [win32 and OSX] binaries available from sf.net do _not_ include
JACK-Session support, yet. On GNU/Linux (or self-compiled binaries for
other OS) the complete state of xjadeo is saved for each session, but
the optional GUI (qjadeo) is not restored.
However, [re-]launching the GUI will [re-]attach it to an already
running xjadeo instance.
Known issue: If multiple instances of xjadeo are running, it is only
possible to re-attach to all of them by setting the xjadeo session-ID
using the XJREMOTE environment variable _before_ launching the qjadeo
remote-control GUI.
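For example, a small sketch of that workaround (the session-ID value below
is a hypothetical placeholder; use the one assigned to your xjadeo instance):

import os
import subprocess

# Set XJREMOTE *before* launching the qjadeo remote-control GUI.
env = dict(os.environ, XJREMOTE="xjadeo-session-1")  # hypothetical session-ID
subprocess.Popen(["qjadeo"], env=env)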
Hi all,
Great feedback from some users, especially Louigi Verona, Sascha Schneider and Jeremy Jongepier,
has kept the inspiration level up, and some bugs could be squashed since the last release.
Louigi has even used QMidiArp in live sessions that you can now listen to on the QMidiArp website at
http://qmidiarp.sourceforge.net/qmidiarp_demos_en.html
I'd also like to thank Nedko again for great discussions and for answering most of my questions.
Two major issues had already been addressed in a previously provided patch, and other small ones
are now fixed in this release. There are also some new features, most of them inspired by you guys:
qmidiarp-0.4.2 (2011-07-10)
New Features
o LFO wave lengths up to 32 bars for very low frequencies
o Groove Settings and LFO & Seq resolutions now also MIDI-controllable
o One-click duplication of LFO and Seq modules
o Option to add new modules in muted state
o Vertical Zoom switch for Seq module display
o ToolBars can be positioned vertically
o Nested arrangement of modules allows more flexible layouts
Fixed Bugs
o 0.4.1-patch had been available for the following two:
o Jack Transport sync arbitrarily stopping with only arp modules
o Instability with ALSA clock with only Seq and LFO modules
o Faster response to Jack Transport state changes
o Incorrect response to two Seq sliders
General Changes
o Jack Transport sync now uses the JACK process callback instead of the sync callback
Enjoy!
------------------------------------------------------
http://qmidiarp.sourceforge.net/
http://sourceforge.net/projects/qmidiarp/files/qmidiarp/0.4.2/
------------------------------------------------------
guitarix/gx_head is a simple mono guitar tube amplifier simulation.
Please refer to our project page for more information:
http://guitarix.sourceforge.net/
new features in short:
* fixed jack session support
* add amp-model (push/pull)
* add amp-model (feedback)
* fix build/runtime issue on OSX
* reformat source to the Google C++ Style Guide conventions
* some minor fixes and maybe new bugs
have fun
_________________________________________________________________________
guitarix is licensed under the GPL.
screen-shots and sound examples:
http://guitarix.sourceforge.net/
direct download:
http://sourceforge.net/projects/guitarix/files/guitarix/guitarix2-0.17.0.ta…
download site:
http://sourceforge.net/projects/guitarix/
please report bugs and suggestions in our forum:
http://sourceforge.net/apps/phpbb/guitarix/
________________________________________________________________________
For extra impulse responses, gx_head uses the
zita-convolver library, and
for resampling we use zita-resampler,
both written by Fons Adriaensen.
http://kokkinizita.linuxaudio.org/linuxaudio/index.html
We use the marvellous Faust compiler to build the amps and effects, and would
like to say thanks to
: Julius Smith
http://ccrma.stanford.edu/realsimple/faust/
: Albert Graef
http://q-lang.sourceforge.net/examples.html#Faust
: Yann Orlarey
http://faust.grame.fr/
________________________________________________________________________
For Faust users:
All Faust .dsp files used are included in /gx_head/src/faust;
the resulting .cc files are in /gx_head/src/faust-generated.
The tools we use to convert (post-process and plot)
the resulting Faust C++ files to the needed include format
are in the /gx_head/tools directory.
________________________________________________________________________
regards
guitarix development team