Hello
I'd like to announce an update release of gxtuner, a simple, small and
lightweight guitar/bass tuner for JACK.
It's a break-out of the guitarix tuner module.
changes:
* fully arbitrary scaling of the interface
* show the octave number within the note
gxtuner comes with an analogue-like interface (scale), shows the note
(as a character) and the measured frequency (Hz), and is licensed under the GPL.
gxtuner uses an equal-tempered scale based on A4 = 440 Hz.
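For illustration, here is a minimal standalone C sketch of that equal-tempered
mapping (A4 = 440 Hz); it is not gxtuner's actual code. Given a detected
frequency, it prints the nearest note with its octave number and the deviation
in cents (compile with -lm):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        static const char *names[12] = { "C", "C#", "D", "D#", "E", "F",
                                         "F#", "G", "G#", "A", "A#", "B" };
        double freq  = 82.9;  /* example input: a slightly sharp low E string */
        /* semitones relative to A4 = 440 Hz, expressed as a MIDI note number */
        double midi  = 69.0 + 12.0 * log2(freq / 440.0);
        int    note  = (int)lround(midi);
        double cents = 100.0 * (midi - note);  /* deviation from nearest note */
        printf("%s%d %+.1f cents (%.2f Hz)\n",
               names[((note % 12) + 12) % 12], note / 12 - 1, cents, freq);
        return 0;
    }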
for more information please read the included README file.
get it here:
http://sourceforge.net/projects/guitarix/files/gxtuner/
direct link
http://sourceforge.net/projects/guitarix/files/gxtuner/gxtuner-1.3.tar.bz2/…
have fun
hermann
I have just uploaded the binaries for jMax Phoenix 0.7 to the SourceForge
site (http://jmax-phoenix.sourceforge.net/).
This release includes the recent work on restructuring the DSP engine
and provides the results of some optimization work; the performance
increase is (on my Core i7 Sandy Bridge) around 3 times on Mac OS X
and 2 times under Linux, for patches that have a typical mix of simple
and complex DSP objects; simple DSP objects run around 4 times faster
than before. (Also, if you used the 0.6.2 distribution, the performance
increase is more in the 10x range, because that build was compiled using
the wrong flags.)
Most of the performance increase comes, on Mac OS X, from using the clang
vector extensions and, on Linux, from using the latest compiler with
the right flags.
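For the curious, this is the kind of compiler vector extension meant here;
the snippet below is only an illustrative sketch (not jMax Phoenix code)
showing that a packed 4-float type declared with vector_size supports
element-wise arithmetic, which clang and gcc compile to SIMD instructions
where available:

    #include <stdio.h>

    typedef float v4sf __attribute__((vector_size(16)));  /* 4 packed floats */

    /* Scale a block of samples, four at a time, with element-wise SIMD ops. */
    static void gain_block(v4sf *buf, float gain, int nvec)
    {
        v4sf g = { gain, gain, gain, gain };
        for (int i = 0; i < nvec; i++)
            buf[i] = buf[i] * g;
    }

    int main(void)
    {
        v4sf buf[2] = { { 1, 2, 3, 4 }, { 5, 6, 7, 8 } };
        gain_block(buf, 0.5f, 2);
        for (int i = 0; i < 2; i++)
            for (int j = 0; j < 4; j++)
                printf("%g ", buf[i][j]);
        printf("\n");
        return 0;
    }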
The Linux distribution includes the usability fix that shipped in the last
Mac OS distribution.
A couple of incompatible changes: the ISPW-like switch~ object has been
dropped, because its semantics prevented any changes in the DSP engine
(the ISPW switch~ was different from the pd switch~). New forms of
conditional execution may be provided later.
On Mac OS X, the personal preference file has moved to
~/Library/Preferences/org.jmax-phoenix.jMax/jmax.xml, and the caches to
the ~/Library/Caches/org.jmax-phoenix.jMax directory.
Maurizio
Hello
I'd like to announce the first release of gxtuner, a simple, small and
lightweight guitar/bass tuner for JACK.
gxtuner comes with an analogue-like interface (scale), shows the note
(as a character) and the measured frequency (Hz), and is licensed under the GPL.
It's a break-out of the guitarix tuner module; you can download it here:
http://sourceforge.net/projects/guitarix/files/gxtuner/gxtuner-1.0.tar.bz2/…
have fun
hermann
I am pleased to announce the release of Composite 0.006.2. This is a
bug-fix release.
ABOUT
-----
Composite is (or will be) a software application/system for
real-time, in-performance sequencing, sampling, and looping.
Currently the main feature is the LV2 Sampler that supports Hydrogen
drumkits.
CHANGES
-------
* LV2 Sampler: Fix logic error with uri-map by passing
the Event URI as the `map` parameter. This also works
around crashes in slv2 and zynjacku (which is a bug on
their part).
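For context, here is a minimal standalone C sketch of the corrected call;
the uri-map declarations are copied inline (mirroring the deprecated LV2
uri-map extension) rather than included from the LV2 headers, the toy host
callback is a stand-in, and none of this is Composite's actual code:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Mirrors the (now deprecated) LV2 uri-map extension declarations. */
    typedef void *LV2_URI_Map_Callback_Data;
    typedef struct {
        LV2_URI_Map_Callback_Data callback_data;
        uint32_t (*uri_to_id)(LV2_URI_Map_Callback_Data callback_data,
                              const char *map, const char *uri);
    } LV2_URI_Map_Feature;

    #define EVENT_URI      "http://lv2plug.in/ns/ext/event"
    #define MIDI_EVENT_URI "http://lv2plug.in/ns/ext/midi#MidiEvent"

    /* Toy host-side mapper, purely for demonstration. */
    static uint32_t toy_uri_to_id(LV2_URI_Map_Callback_Data d,
                                  const char *map, const char *uri)
    {
        (void)d; (void)map;
        return strcmp(uri, MIDI_EVENT_URI) == 0 ? 1 : 42;
    }

    int main(void)
    {
        LV2_URI_Map_Feature feat = { NULL, toy_uri_to_id };
        /* The fix: pass the Event extension URI as the `map` parameter when
         * mapping an event type, instead of NULL, so the host can return an
         * id that fits the event type field. */
        uint32_t midi_type = feat.uri_to_id(feat.callback_data,
                                            EVENT_URI, MIDI_EVENT_URI);
        printf("MIDI event type id: %u\n", midi_type);
        return 0;
    }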
STATUS
------
Composite is a project with a large vision. Here is the status of the
different components:
composite-gui: Alpha (i.e. "a broken version of Hydrogen")
composite_sampler (LV2): production/stable, no GUI
libTritium: Not a public API, yet.
LINKS
-----
Composite: http://gabe.is-a-geek.org/composite/
Plugin Docs:
file:///home/gabriel/code/composite-planning/plugins/sampler/1
Tarball:
http://gabe.is-a-geek.org/composite/releases/composite-0.006.2.tar.bz2
Git: http://gitorious.org/composite
     git://gitorious.org/composite/composite.git
HOW TO USE THE PLUGIN
---------------------
To use the plugin, you need the following:
* A program (host) that loads LV2 plugins.
* A MIDI controller.
* An audio output device. :-)
The following LV2 hosts are known to work with this plugin:
Ingen http://drobilla.net/blog/software/ingen/
ardour3 (alpha) http://ardour.org/
lv2_jack_host http://drobilla.net/software/slv2/
zynjacku http://home.gna.org/zynjacku/
If you don't have a hardware MIDI controller, I suggest using
jack-keyboard (http://jack-keyboard.sourceforge.net/).
The first time you run the sampler, it will create a file
~/.composite/data/presets/default.xml, which will set up presets on
Bank 0 for the two default drum kits (GMkit and TR808EmulationKit).
Sending MIDI PC 0 and PC 1 will switch between the two kits. See
composite_sampler(1) for more information on setting up presets.
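If you prefer to script the program change instead of using a keyboard,
something like the following ALSA raw MIDI sketch also works; the device
name "hw:1,0" is just a placeholder (pick yours from amidi -l), and it
compiles with gcc pc.c -lasound:

    #include <alsa/asoundlib.h>
    #include <stdio.h>

    int main(void)
    {
        snd_rawmidi_t *out = NULL;
        if (snd_rawmidi_open(NULL, &out, "hw:1,0", 0) < 0) {
            fprintf(stderr, "cannot open MIDI output\n");
            return 1;
        }
        /* Program Change on MIDI channel 1: status byte 0xC0, data byte =
         * program number.  PC 0 selects the first preset (GMkit),
         * PC 1 the second (TR808EmulationKit). */
        unsigned char pc[2] = { 0xC0, 0x01 };
        snd_rawmidi_write(out, pc, sizeof pc);
        snd_rawmidi_drain(out);
        snd_rawmidi_close(out);
        return 0;
    }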
ACKNOWLEDGMENTS
---------------
With this release, I would especially like to thank:
Alessio Treglia - For quickly packaging 0.006.1 and getting
it into Debian sid.
Benoit Delcour - For quickly testing and reporting this
issue with the LV2 Sampler.
Peace,
Gabriel M. Beddingfield
spectmorph-0.2.0 has been released. This is the first version that supports
sound morphing, and some really interesting sounds can be created with this
version; there are examples on the web page.
Overview of Changes in spectmorph-0.2.0:
----------------------------------------
* implemented user-defined morphing using a MorphPlan consisting of operators
- graphical representation of the operators
- graphical editing of the MorphPlan
- implemented actual morphing (in a per-operator, per-voice module object)
- added MorphPlanSynth/MorphPlanVoice, which allow running MorphPlans easily
- added LPC (linear prediction) during encoding, and LPC/LSF-based morphing
* BEAST plugin:
- added GUI required for editing a MorphPlan
- support four output channels, as well as two control inputs
- delay compensation plugin (to compensate SpectMorph delay)
* JACK client:
- support GUI MorphPlan editing
* added sminspector (graphical tool for displaying SpectMorph instruments)
- zoomable time/frequency view
- configurable (FFT/CWT/LPC) time/frequency view transform parameters
- spectrum, sample, LPC visualization
- graphical loop point editing
- allow storing changes in .smset files (for editing loop points)
- play support via JACK
* improved smtool (old name: smextract); it's now installed by default
- lots of new commands (like "total-noise", "auto-volume", ...)
- support .smset as input (in addition to .sm); command is executed on all
.sm files in the .smset
* added shared libraries for gui and jack code
* new integrated memory leak debugger (to find missing deletes)
* support ping-pong loops
* doxygen API docs updates
* migrated man pages from Doxer to testbit.eu wiki (and use wikihtml2man.py)
* performance improvements
What is SpectMorph?
-------------------
SpectMorph is a free software project which makes it possible to analyze
samples of musical instruments and to combine them (morphing). It can be
used to construct hybrid sounds, for instance a sound between a trumpet
and a flute, or smooth transitions, for instance a sound that starts as a
trumpet and then gradually changes to a flute.
Interpolating between two samples of the same instrument (for instance,
different attack velocities of a piano) can also be interesting.
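To give a rough feel for what combining two sounds means, here is a toy C
sketch that linearly interpolates two made-up magnitude spectra; SpectMorph's
actual analysis/synthesis and LPC/LSF-based morphing are far more involved
than this:

    #include <stdio.h>

    #define NBINS 4

    int main(void)
    {
        /* One spectral frame of instrument A and one of instrument B
         * (values made up for the example). */
        double a[NBINS] = { 1.00, 0.50, 0.25, 0.10 };
        double b[NBINS] = { 0.20, 0.80, 0.60, 0.30 };
        double morph = 0.5;                   /* 0 -> pure A, 1 -> pure B */

        for (int i = 0; i < NBINS; i++) {
            double m = (1.0 - morph) * a[i] + morph * b[i];
            printf("bin %d: %.3f\n", i, m);
        }
        return 0;
    }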
SpectMorph is implemented in C++ and licensed under the GNU LGPL version 3.
SpectMorph is still under development. This means:
* the file format is not yet stable - instruments or morph plans may not work
with newer versions of SpectMorph
* the algorithms for synthesizing sounds are still under development - newer
versions may sound different
To sum it up: if you compose music using SpectMorph, don't expect newer
versions to be compatible in any way.
Links:
------
Website: http://www.spectmorph.org
Download: http://www.spectmorph.org/downloads/spectmorph-0.2.0.tar.bz2
There are many sound samples on the website, which test morphing between
instruments.
--
Stefan Westerfeld, Hamburg/Germany, http://space.twc.de/~stefan
Howdy!
TYOQA (the year of qtractor automation for the clueless) is now pretty
real. I'd say it's all been my prerogative, again and again, doing
things my own way (do I hear Frank S. singing? nope. move along...).
Is this the time to do the unthinkable? Should I tag it as beta now?
Should I? There's one single reason for not doing so and a couple of
others to make it through:
1. basically it's all the same functionality, which stays put or is improved
in a few spots;
2. it just feels like it! :)
Now comes the mighty corrosive one: I'll be off on vacation soon. Summer
is waiting for me. And I just hate to miss that kind of deadline. Woohoo!
Is there anything else to mention? Go ahead, make your day:
Qtractor 0.5.0 (alpha zulu) is now released!
Release highlights:
* TYOQA! Audio/MIDI track and plugin parameter automation (NEW)
* MIDI controller catch-up behavior (NEW)
* All zooming in/out relative to the view center (NEW)
* Audio gain/panning smoothing changes (FIX)
Happy summer 2 y'all!
Website:
http://qtractor.sourceforge.net
Project page:
http://sourceforge.net/projects/qtractor
Downloads:
- source tarball:
http://downloads.sourceforge.net/qtractor/qtractor-0.5.0.tar.gz
- source package (openSUSE 11.4):
http://downloads.sourceforge.net/qtractor/qtractor-0.5.0-3.rncbc.suse114.sr…
- binary packages (openSUSE 11.4):
http://downloads.sourceforge.net/qtractor/qtractor-0.5.0-3.rncbc.suse114.i5…
http://downloads.sourceforge.net/qtractor/qtractor-0.5.0-3.rncbc.suse114.x8…
- from the dusty shelf: user manual (anyone?):
http://downloads.sourceforge.net/qtractor/qtractor-0.3.0-user-manual.pdf
Weblog (upstream support):
http://www.rncbc.org
License:
Qtractor is free, open-source software, distributed under the terms
of the GNU General Public License (GPL) version 2 or later.
Change-log:
- MIDI controller learn/catch-up sees the way in: MIDI controller
changes are now only effective after catching up with their respective
program parameters, avoiding abrupt jumps and keeping behavior safe and
continuous (a short sketch of the idea follows after this change-log).
- Track/Height menu is now featured, giving access to Increase, Decrease
or Reset the current track height.
- All changes to audio gain and panning on tracks and buses are now
applied following a piece-wise linear ramp, reducing the old nasty
clicks, pops or zipper artifacts that could be awfully audible in some
situations, especially with automation (see the ramp sketch after this
change-log).
- All zooming in/out is now relative to either the viewport center or
the current mouse cursor position if it lies inside the viewport (see
the zoom sketch after this change-log).
- TYOQA! the underground sources have emerged:... after years in the
making, track automation, or dynamic curves as some like to call it, is
finally a reality, tricky but real ;)
- Audio clip anti-glitch/ramp-smoothing effect is now slightly
independent of the current buffer-size period (mitigating the effect of
bug #3338113).
- Once buried under the Edit menu, the Clip menu has finally been
promoted to the top main menu.
- Debugging stacktrace now applies to all working threads.
- Fixed muted loop playback on audio clips ending coincidentally with
the loop-turn/end point.
- Old/deprecated JACK port latency support added to audio recording
latency compensation.
- Audio clip merge/export lock-ups now untangled (fixes bug #3308998).
- LV2 extension headers update.
- Fixed configure of newer LV2 host implementation stack (LILV) when
older (SLV2) is not present.
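As mentioned in the catch-up entry above, here is a small standalone C
sketch of the soft-takeover idea; it is only an illustration, not Qtractor's
implementation: incoming controller values are ignored until they cross the
parameter's current value, after which normal tracking resumes.

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct {
        float value;      /* current program parameter value, 0..1 */
        bool  caught_up;  /* has the hardware controller reached it yet? */
        float last_cc;    /* previous controller value, 0..1 */
    } Param;

    static void param_cc(Param *p, float cc)
    {
        if (!p->caught_up) {
            /* Ignore controller moves until they cross the parameter value. */
            bool crossed = (p->last_cc <= p->value && cc >= p->value) ||
                           (p->last_cc >= p->value && cc <= p->value);
            p->last_cc = cc;
            if (!crossed)
                return;               /* still catching up: no abrupt jump */
            p->caught_up = true;
        }
        p->last_cc = cc;
        p->value   = cc;              /* normal tracking after catch-up */
    }

    int main(void)
    {
        Param gain = { 0.80f, false, 0.0f };
        float moves[] = { 0.10f, 0.40f, 0.70f, 0.85f, 0.60f };
        for (int i = 0; i < 5; i++) {
            param_cc(&gain, moves[i]);
            printf("cc=%.2f -> value=%.2f\n", moves[i], gain.value);
        }
        return 0;
    }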
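And the promised ramp sketch: instead of jumping to a new gain at a block
boundary, the change is spread as a linear ramp across the period; again,
just an illustration with made-up values, not Qtractor's code.

    #include <stdio.h>

    /* Apply a gain change as a linear ramp over one process period. */
    static void apply_gain_ramp(float *buf, int nframes,
                                float gain_from, float gain_to)
    {
        float step = (gain_to - gain_from) / (float)nframes;
        float g = gain_from;
        for (int i = 0; i < nframes; i++) {
            buf[i] *= g;          /* each sample gets a slightly different gain */
            g += step;
        }
    }

    int main(void)
    {
        float buf[8] = { 1, 1, 1, 1, 1, 1, 1, 1 };
        apply_gain_ramp(buf, 8, 0.0f, 1.0f);   /* ramp from silence toward unity */
        for (int i = 0; i < 8; i++)
            printf("%.3f ", buf[i]);
        printf("\n");
        return 0;
    }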
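Finally, the zoom sketch: zooming about a focus point (viewport center or
mouse position) keeps the content coordinate under that point fixed by
recomputing the scroll offset; illustrative only, with made-up numbers.

    #include <stdio.h>

    /* Recompute the scroll offset so the content point under focus_px stays
     * under focus_px after the zoom factor changes. */
    static double rezoom_offset(double offset, double focus_px,
                                double old_zoom, double new_zoom)
    {
        double content = (offset + focus_px) / old_zoom;  /* point under cursor */
        return content * new_zoom - focus_px;             /* keep it there */
    }

    int main(void)
    {
        double offset = 1000.0;   /* current horizontal scroll, in pixels */
        double focus  = 400.0;    /* mouse x inside the viewport, in pixels */
        offset = rezoom_offset(offset, focus, 1.0, 2.0);   /* zoom in 2x */
        printf("new offset: %.1f\n", offset);              /* -> 2400.0 */
        return 0;
    }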
Enjoy && Cheers!
--
rncbc aka Rui Nuno Capela
rncbc(a)rncbc.org
I am pleased to announce the release of Composite 0.006.1. This is a
bug-fix release.
ABOUT
-----
Composite is (or will be) a software application/system for
real-time, in-performance sequencing, sampling, and looping.
Currently the main feature is the LV2 Sampler that supports Hydrogen
drumkits.
CHANGES
-------
* Fix FTBFS: t_AudioPort test missing link to QtCore
* Fix FTBFS in Song.cpp: Replace use of QString(int) with
QString(const char*)
* LV2 Sampler: Initialize QCoreApplication
* LV2 Sampler: Fix crash by moving Logger instance to module level
* LV2 Sampler: Check LV2 Event types and add extension meta-data
* tests: t_AudioPort had a degenerate test
* tests: t_SeqScript now passes (was failing as dev reminder)
* Replace -lQtCore with ${QT_LIBRARIES} in build system
STATUS
------
Composite is a project with a large vision. Here is the status of the
different components:
composite-gui: Alpha (i.e. "a broken version of Hydrogen")
composite_sampler (LV2): production/stable, no GUI
libTritium: Not a public API, yet.
LINKS
-----
Composite: http://gabe.is-a-geek.org/composite/
Plugin Docs:
file:///home/gabriel/code/composite-planning/plugins/sampler/1
Tarball:
http://gabe.is-a-geek.org/composite/releases/composite-0.006.tar.bz2
Git: http://gitorious.org/composite
     git://gitorious.org/composite/composite.git
HOW TO USE THE PLUGIN
---------------------
To use the plugin, you need the following:
* A program (host) that loads LV2 plugins.
* A MIDI controller.
* An audio output device. :-)
The following LV2 hosts are known to work with this plugin:
Ingen http://drobilla.net/blog/software/ingen/
ardour3 (alpha) http://ardour.org/
lv2_jack_host http://drobilla.net/software/slv2/
zynjacku http://home.gna.org/zynjacku/
If you don't have a hardware MIDI controller, I suggest using
jack-keyboard (http://jack-keyboard.sourceforge.net/).
The first time you run the sampler, it will create a file
~/.composite/data/presets/default.xml, which will set up presets on
Bank 0 for the two default drum kits (GMkit and TR808EmulationKit).
Sending MIDI PC 0 and PC 1 will switch between the two kits. See
composite_sampler(1) for more information on setting up presets.
ACKNOWLEDGMENTS
---------------
With this release, I would especially like to thank:
Alessio Treglia - For dutifully reporting bugs, submitting
patches, and *patiently* waiting for me to review them.
Paul Davis - For working with me on the crashes that were
discovered on ardour3 alpha.
Peace,
Gabriel M. Beddingfield
*REMINDER* deadline for submission of abstracts is Sunday July 17.
If you need an extension, please contact us before Monday.
Call for Abstracts:
Versatile Sound Models for Interaction in
Audio–graphic Virtual Environments:
Control of Audio-graphic Sound Synthesis
Workshop @ Conference on Digital Audio Effects DAFx-11
Friday September 23, 2011 at Ircam, Paris
The use of 3D interactive virtual environments is becoming more
widespread in areas such as games, architecture, urbanism, information
visualization and sonification, interactive artistic digital media,
serious games, and gamification. The limitations of sound generation in
existing environments are increasingly obvious given current
requirements.
This workshop will look at recent advances and future prospects in
sound modeling, representation, transformation and synthesis for
interactive audio-graphic scene design.
Several approaches to extending sound generation in 3D virtual
environments have been developed in recent years, such as sampling,
modal synthesis, additive synthesis, corpus-based synthesis, granular
synthesis, description-based synthesis, and physical modeling. These
techniques can be quite different in their methods and results, but
may also become complementary towards the common goal of versatile and
understandable virtual scenes, covering a wide range of object types
and of interactions between objects and with them.
The purpose of this workshop is to sum up these different approaches,
present current work in the field, and to discuss their differences,
commonalities and complementarities.
Authors of accepted abstracts will be invited to submit an extended
version to a special issue of the Springer Journal on Multimodal User
Interfaces (JMUI) or the SpringerOpen EURASIP Journal on Audio, Speech,
and Music Processing.
Detailed information about the workshop can be found here:
http://www.topophonie.fr/event/3
http://dafx11.ircam.fr/?page_id=224
The workshop is free for attendees of the DAFx conference and, by
invitation, for non-attendees. Registration for the DAFx conference can
be found here: http://dafx11.ircam.fr
Call for Abstracts
------------------
Abstracts (max. 1 A4/Letter page, PDF format)
on the topics of the workshop should be sent by July 17
to Diemo Schwarz (schwarz(a)ircam.fr).
Submissions will be reviewed by a program committee,
and accepted communications will be presented at the workshop.
Authors will be notified by the end of July 2011 at the latest.
Important Dates
---------------
* July 17, 2011: Abstract Submission Deadline
* July 31, 2011: Notification of Acceptance
* September 23, 2011: Workshop
Program Chairs
--------------
Roland Cahen, ENSCI-les Ateliers
Diemo Schwarz, IRCAM
Christian Jacquemin, LIMSI-CNRS & University Paris Sud 11
Hui Ding, LIMSI-CNRS & University Paris Sud 11
Program committee
-----------------
Nicolas Tsingos (Dolby Laboratories)
Lonce Wyse (National University of Singapore)
Andrea Valle (University of Torino)
Hendrik Purwins (University Pompeu Fabra)
Thomas Grill (Institut für Elektronische Musik IEM, Graz)
Charles Verron (McGill University, Montreal)
Cécile Le Prado (Centre National des Arts et Metiers CNAM)
Annie Luciani (Ingénierie de la Création Artistique ICA, ACROE)
Topics in detail
----------------
What better alternatives to traditional sample triggering exist for
producing comprehensive, flexible, expressive, realistic sounds in
virtual environments? How can rich interaction with scene objects be
produced, for example with physically informed models for contact and
friction sounds? How can audio-graphic scenes be edited and structured
other than by mapping one event to one sound? There is no standardized
architecture, representation and language for auditory scenes and
objects, as OpenGL is for graphics. The workshop will treat
higher-level questions of architecture and modeling of interactive
audio-graphic scenes, down to the detailed questions of sound modeling,
representation, transformation and synthesis. These questions cannot
be detached from implementation issues: novel and hybrid synthesis
methods, comparison and improvement of existing platforms, software
architecture, plug-in systems, standards, formats, etc.
New possibilities regarding the use of audio descriptors and dynamic
access to audio databases will also be discussed.
Beyond these main questions, the workshop will cover other recent
advances in audio-graphic scene modeling such as:
* audio-graphic object rendering, and physically and geometrically driven
sound rendering,
* interactive sound texture synthesis, based on signal models or
physically informed ones,
* joint representation of sound and graphic spaces and objects,
* sound rendering for audio-graphic scenes:
* level of detail, which is a very advanced concept in graphics, but is
rarely treated in audio.
* representation of space and distance,
* masking and occlusion of sources,
* clustering of sources
* audio-graphic interface design,
* sound and graphic localization,
* cross- and bi-modal perceptual evaluations,
* interactive audio-graphic arts,
* industrial audio-graphic data:
* architectural acoustics,
* sound maps,
* urban soundscapes...
* platforms and tools for audio-graphic scene modeling and rendering.
These areas are interdisciplinary in nature and interrelated. New
advances in each area will benefit the others. This workshop will allow
participants to exchange the latest developments and to point out
current challenges and new directions.
--
Diemo Schwarz, PhD -- http://diemo.concatenative.net
Real-Time Music Interaction Team -- http://imtr.ircam.fr
IRCAM - Centre Pompidou -- 1, place Igor-Stravinsky, 75004 Paris, France
Phone +33-1-4478-4879 -- Fax +33-1-4478-1540