spectmorph-0.4.1 has been released.
Overview of Changes in spectmorph-0.4.1:
* macOS is now supported: provide VST plugin for macOS >= 10.9
* Include instruments in source tarball and packages
* Install instruments to system-wide location
* New Instruments: Claudia Ah / Ih / Oh (female version of human voice)
* Improved tools for instrument building
- support displaying tuning in sminspector
- implement "smooth-tune" command for reducing vibrato from recordings
- minor encoder fixes/cleanups
- smlive now supports enable/disable noise
* VST plugin: fix automation in Cubase (define "effCanBeAutomated")
* UI: use Source A / Source B instead of Left Source / Right Source
* UI: update dB label properly on grid instrument selection change
* Avoid exporting symbols that don't belong to the SpectMorph namespace
* Fix some LV2 ttl problems
* Fix locale-related problems when using atof()
* Minor fixes and cleanups
What is SpectMorph?
SpectMorph is a free software project which allows you to analyze samples of
musical instruments and to combine them (morphing). It can be used to
construct hybrid sounds, for instance a sound between a trumpet and a flute; or
smooth transitions, for instance a sound that starts as a trumpet and then
gradually changes to a flute.
SpectMorph ships with many ready-to-use instruments which can be combined using
morphing.
SpectMorph is implemented in C++ and licensed under the GNU LGPL version 3.
Integrating SpectMorph into your Work
SpectMorph is currently available for Linux, Windows and macOS users. Here is a
quick overview of how you can make music using SpectMorph.
- VST Plugin, especially for proprietary solutions that don't support LV2.
(Available on Linux, 64-bit Windows and macOS)
- LV2 Plugin, for any sequencer that supports it.
- JACK Client.
- BEAST Module, integrating into BEAST's modular environment.
Note that at this point, we may still change the way sound synthesis works, so
newer versions of SpectMorph may sound (slightly) different from the current
version.
There are many audio demos on the website, which demonstrate morphing between
different instruments.
Stefan Westerfeld, Hamburg/Germany, http://space.twc.de/~stefan
I just released GSequencer v2.0.0-beta, the versatile audio sequencer and editor.
* AgsWaveWindow - new wave form editor
* AgsAudiorec - new built-in machine
* AgsEqualizer10 - new built-in machine
* AgsSpectrometer - new built-in machine
Since February of this year I have been reworking the API; almost every file
was revisited. There is a new concurrent dispatcher.
Not every aspect has been tested yet, but most of the functionality is
here; only a few implementations are missing.
It is a pleasure to announce ICAD 2019, the 25th International Conference on Auditory Display. The conference is hosted by the Department of Computer and Information Sciences, Northumbria University and will take place in Newcastle upon Tyne, UK on 23-27 June, 2019. The graduate student Think Tank (doctoral consortium) will be on Sunday, June 23, before the main conference.
The theme of ICAD 2019 will be Sonification for Everyday Life.
Digital technology and artificial intelligence are becoming embedded in the objects all around us, from consumer products to the built environment. Everyday life happens where People, Technology, and Place intersect. Our activities and movements are increasingly sensed, digitised and tracked. Of course, the data generated by modern life is a hugely important resource, not just for the companies who use it for commercial purposes: it can also be harnessed for the benefit of the individuals it concerns. Sonification research that has hit the news headlines in recent times has often been related to big science done at large publicly funded labs, with little impact on the day-to-day lives of people.
At ICAD 2019 we want to explore how auditory display technologies and techniques may be used to enhance our everyday lives. From giving people access to what's going on inside their own bodies, to the human concerns of living in a modern networked and technological city, the range of opportunities for auditory display is wide. The ICAD 2019 committee is seeking papers, extended abstracts, multimedia, concert pieces, demos, installations, workshops, and tutorials that will contribute to knowledge of how sonification can support everyday life.
ICAD is a highly interdisciplinary academic conference with relevance to researchers, practitioners, musicians, and students interested in the design of sounds to support tasks, improve performance, guide decisions, augment awareness, and enhance experiences. It is unique in its singular focus on auditory displays and the array of perception, technology, and application areas that this encompasses. Like its predecessors, ICAD 2019 will be a single-track conference, open to all, with no membership or affiliation requirements.
A full Call for Participation with details of the submission classes and dates will be sent out in September.
For all general enquiries, please contact: icad2019chairs(a)icad.org
Paul Vickers, co-chair of ICAD 2019
https://icad2019.icad.org
Dr Paul Vickers BSc PhD CEng MIEE FHEA
Associate Professor & Reader in Computer Science & Computational Perceptualisation
Department of Computer and Information Sciences
Ellison Building, Ellison Place
Newcastle upon Tyne, NE1 8ST, United Kingdom
T: +44 (0)191 243 7614
Personal Profile: https://paulvickers.github.io/
After quite a bit of polishing and last-minute regression fixing, Beast 0.12
is finally out.
The latest stable release is now tracked by a new git branch
named 'wip/latest-stable'. Beware of non-linear updates to this
branch in the future.
This release removes the Rapicorn dependency as well as the
runtime dependency on CPython. To achieve that, a number of
utilities from Rapicorn had to be integrated, which has made the
code base a fair bit larger:
651 files changed, 75581 insertions(+), 44596 deletions(-)
Most notably, this is the first release that installs the new
ebeast UI. Tracks, piano rolls and dB meters are already displayed,
but not much beyond that as it's still in pre-alpha stage.
However, it's a good showcase for our future UI direction; you can
start it and take a quick look with:
Please file any bugs you encounter in the Beast bug tracker:
The release NEWS in all its glory is online:
Here is the abbreviated shortlog.
Vincent Bermel (1):
DATA: unhardcode launcher icon file type
Stefan Westerfeld (10):
BUILD: display summary message at the end of configure
AF-TESTS: compute soundfont test threshold based on fluidsynth version
BUILD: provide fluidsynth version instead of test threshold to af-tests
AF-TESTS: add version comparison helper script
AF-TESTS: fix soundfont test for newer (>= 1.1.7) fluidsynth versions
BUILD: determine soundfont test threshold from fluidsynth version
BSE: suppress resampler filter test output if no errors found
TESTS: testresampler: print test result as one single line by default
TOOLS: bsefcompare: print test result as one single line by default
BEAST: fix crash when adding a bus to the mixer while song is playing
Tim Janik (933):
NEWS: updates for beast-0.12.0
Free software author.
DrumGizmo 0.9.16 Released!
DrumGizmo is an open-source, multichannel, multilayered, cross-platform
drum plugin and stand-alone application. It enables you to compose drums
in MIDI and mix them with a multichannel approach, comparable to mixing
a real drum kit that has been recorded with a multi-mic setup.
This is mainly a bugfix release. If you encountered timing issues when
using the humanizer features of 0.9.15, this is the release to get. It
also optimizes the resampling and a bunch of other stuff. For the full
list of changes, check the roadmap for 0.9.16.
And now, without further ado, go grab 0.9.16!