---------- Forwarded message ---------
From: Romain Michon <rmnmichon(a)gmail.com>
Date: Wed, Oct 30, 2019 at 4:16 PM
Subject: [Faudiostream-users] Forum Acusticum 2020: Special Session on
Hybrid Acoustical and Digital Musical Instruments
To: faudiostream-users <faudiostream-users(a)lists.sourceforge.net>
[Sorry for cross-posting]
Dear Colleagues,
We are organizing a special session on *Hybrid Acoustical and Digital
Musical Instruments* at the next European Acoustics Meeting (*Forum
Acusticum*: https://fa2020.universite-lyon.fr/) which will take place in *Lyon
(France) on April 20-24, 2020*:
---
*Summary of the Session*
Nowadays, physical modeling, increasingly light and powerful embedded
systems, and new digital fabrication techniques and sensor technologies
allow us to approach lutherie and musical instrument design in a
completely hybrid way. A digital 3D model can serve both as the source
for a sound synthesizer thanks to physical modeling techniques (e.g.,
finite difference schemes) and be "materialized" with 3D printing,
blurring the boundary between the physical/acoustical and the
virtual/digital worlds. Embedded systems based on an expanding range of
architectures (microcontrollers, GPUs, FPGAs, microprocessors, etc.)
play an important role in this context by providing the computational
power to run complex models and by making hybrid instruments more
portable and standalone. Dedicated tools and platforms - both hardware
(e.g., Bela, Owl, smart devices) and software (e.g., Faust, Kronos,
Soul) - have been developed in recent years to facilitate the design of
such instruments. They complement the wide range of new sensor
technologies and specialized commercial interfaces (e.g., Sensel Morph,
ROLI Seaboard, LinnStrument) created in this field in recent years.
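To make the physical modeling angle concrete, here is a minimal, generic
finite difference sketch of the ideal string: a textbook leapfrog scheme
with fixed ends, not tied to any particular tool mentioned above, and
with all names purely illustrative.

```python
def pluck(n, peak):
    """Triangular initial displacement, peaking at index `peak`, ends at 0."""
    return [min(i / peak, (n - 1 - i) / (n - 1 - peak)) for i in range(n)]

def step(u_prev, u, c2):
    """One leapfrog update of the 1D wave equation u_tt = c^2 u_xx, fixed ends.

    c2 is the squared Courant number (c * dt / dx)**2; c2 = 1.0 is the
    stability limit and happens to be exact for the ideal lossless string.
    """
    n = len(u)
    u_next = [0.0] * n  # boundaries stay clamped at zero (fixed ends)
    for i in range(1, n - 1):
        u_next[i] = 2 * u[i] - u_prev[i] + c2 * (u[i + 1] - 2 * u[i] + u[i - 1])
    return u_next

n = 32
u = pluck(n, peak=8)
u_prev = list(u)  # zero initial velocity
frames = []
for _ in range(64):
    u_prev, u = u, step(u_prev, u, c2=1.0)
    frames.append(u[n // 2])  # "listen" at the string's midpoint
```

Reading the displacement at one point over time, as `frames` does here,
is the simplest way to turn such a model into an audio signal.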
In this special session on "Hybrid Acoustical and Digital Musical
Instruments," we invite paper submissions on the following *topics*
(non-exhaustive):
- Programming languages for computer music and real-time signal processing
- Digital sound synthesis and processing
- Physical modeling of musical instruments
- Hardware and embedded platforms for real-time DSP
- Human Computer Interaction (HCI)
- New Interfaces for Musical Expression
- New sensor technologies
- Digital fabrication for lutherie and acoustics
- Mobile music
---
*Abstracts can be submitted* to the following address:
https://fa2020.universite-lyon.fr/fa2020/english-version/navigation/abstrac…
*until Dec. 1, 2019*.
We look forward to seeing you in Lyon this spring!
Best,
Romain Michon and Yann Orlarey
GRAME-CNCM
Lyon, France
_______________________________________________
Faudiostream-users mailing list
Faudiostream-users(a)lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/faudiostream-users
DrumGizmo 0.9.18 Released!
DrumGizmo is an open source, multichannel, multilayered, cross-platform
drum plugin and stand-alone application. It enables you to compose drums
in MIDI and mix them with a multichannel approach, comparable to mixing
a real drum kit recorded with a multi-mic setup.
This release is primarily a bugfix release, but a few new features also
managed to sneak in.
Highlights:
* Sample selection algorithm now behaves a lot better when using a
small sample set.
* Error reporting has been drastically improved when loading
drum-kits.
* Sample normalization option has been added.
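The new normalization option is a good occasion to recall what peak
normalization does in general: the sample buffer is scaled so its
loudest point hits a target level. A generic sketch, not DrumGizmo's
actual implementation (the function name and default target are
illustrative):

```python
def normalize(samples, target_peak=1.0):
    """Scale a sample buffer so its absolute peak equals target_peak.

    Generic peak normalization; DrumGizmo's actual option may differ
    (e.g., it could normalize per velocity layer or per channel).
    """
    peak = max(abs(s) for s in samples)
    if peak == 0.0:
        return list(samples)  # silence: nothing to scale
    gain = target_peak / peak
    return [s * gain for s in samples]

quiet_hit = [0.0, 0.1, -0.25, 0.2, 0.0]
loud_hit = normalize(quiet_hit)  # peak is now 1.0
```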
As usual, read the detailed description of all the shiny new features,
including some audio samples [1].
And now, without further ado, go grab 0.9.18!!! [2]
[1]: https://drumgizmo.org/wiki/doku.php?id=changelog:drumgizmo-0.9.18
[2]: http://www.drumgizmo.org/wiki/doku.php?id=getting_drumgizmo
The end is near! Fortunately only the end of our preparations.
Only four days left until this year's Sonoj Convention.
For the spontaneous among you there is still an opportunity to sign up
and visit. Just enter your name on https://sonoj.org/register.html .
Judging by the many mails you've received so far without signing up, it
can only be assumed that you're currently in Antarctica and therefore
can't actually participate.
Therefore now for the last time (this year):
Please help us finance our non-profit event by donating an amount of
your choice. We accept bank transfers and PayPal:
https://sonoj.org/donate.html
And finally, I would be happy to see you during the two days in our
video stream. There will be a chat this year (it is already up), so you
can ask questions and comment.
The stream is on the Sonoj page, or here:
https://streaming.media.ccc.de/sonoj2019
Best regards,
Nils
Sonoj Convention
Dear Open Source Musicians and Music Lovers,
the submission period for the Open Source Music Nexus 2019 Challenge
has ended and the competition entries are now waiting for your votes!
There are eight submissions to the competition, and they are
stylistically very diverse, so there should be something enjoyable for
everybody.
You can listen to the entries here:
https://nexus-challenge.osamc.de/vote/
(If the audio players do not show up in your browser, click the
"Download" button for each track to go to its page on archive.org.)
Voting is open to everybody* and runs until the end of Saturday of the
coming week (2019-10-26 23:59 UTC).
Please honor the labor that went into the competition entries and show
your appreciation by casting your vote!
* (Email registration required. The email address is only used for
logging in and sending out the results.)
--
Christopher Arndt
Open Source Audio Meeting Cologne
https://nexus-challenge.osamc.de/
challenge(a)osamc.de
The first release of GxMatchEq.
A matching equalizer that applies the EQ curve of one source to another.
GxMatchEq analyzes the spectral profile of a sound source, analyzes the
spectral profile of a sound destination, and calculates the EQ settings
needed to make the destination match the source.
Source analyses can be saved as 'profiles' and reused for as many
destinations as you wish.
Match curves can be saved as presets, so the same settings can be reused
whenever needed.
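The matching step itself boils down to per-band arithmetic: for each
frequency band, the required gain is the source-to-destination magnitude
ratio expressed in dB. A hypothetical sketch of that step, not
GxMatchEQ's actual code (in practice the band magnitudes would come from
an FFT analysis of each recording):

```python
import math

def match_gains_db(source_mag, dest_mag, floor=1e-9):
    """Per-band EQ gains (dB) that make `dest` match `source`.

    Inputs are average magnitude spectra over the same set of bands;
    `floor` guards against log of zero in silent bands.
    """
    return [20.0 * math.log10(max(s, floor) / max(d, floor))
            for s, d in zip(source_mag, dest_mag)]

source = [1.0, 0.5, 0.25]  # hypothetical band magnitudes
dest = [0.5, 0.5, 0.5]
gains = match_gains_db(source, dest)  # ~ [+6.0, 0.0, -6.0] dB
```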
Project page:
https://github.com/brummer10/GxMatchEQ.lv2
release:
https://github.com/brummer10/GxMatchEQ.lv2/releases
enjoy.
Yo:
I know this project has been in beta for a ridiculous amount of time,
but I still haven't mustered the time and effort to give everything a
proper test. However, we have had a few community-contributed fixes,
which warrant a new release. Thanks to Robin (x42), we now have a
working Vibe effect, and the bypass has been improved in all the effects
(which probably won't be very noticeable to users).
Happy LARD (Linux Audio Release Day)!
https://github.com/ssj71/rkrlv2/releases/tag/beta_3
_Spencer (ssj71)
Hello,
I'm pleased to announce version 0.1.0-beta2 of the *midiomatic* plugin
collection as the first public beta release.
What is it?
-----------
*midiomatic* is a small collection of MIDI filter, generator and
processor plugins in LV2 and VST2 format.
This collection arose from the desire to test whether it is viable to
implement MIDI processing plugins with the DISTRHO Plugin Framework (DPF).
This is the first public release, and the project is still considered to
be in beta stage. The MIDISysFilter and MIDIPBToCC plugins are already
considered stable, but the MIDICCRecorder plugin is still experimental.
I'm planning to add more plugins to the collection. Requests and ideas
for plugins are welcome.
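A pitch-bend-to-CC conversion of the kind MIDIPBToCC performs reduces to
rescaling the 14-bit bend value to a 7-bit controller value. A
hypothetical sketch of that mapping (the plugin's actual parameters,
target CC and ranges will differ):

```python
def pb_to_cc(status, lsb, msb, cc_number=1):
    """Convert a MIDI pitch-bend message to a control change message.

    Pitch bend carries a 14-bit value (msb << 7 | lsb); CC values are
    7-bit, so only the most significant 7 bits are kept. cc_number=1
    (mod wheel) is an arbitrary choice for this sketch.
    """
    if status & 0xF0 != 0xE0:
        raise ValueError("not a pitch-bend message")
    channel = status & 0x0F
    value14 = (msb << 7) | lsb
    return [0xB0 | channel, cc_number, value14 >> 7]

msg = pb_to_cc(0xE0, 0x00, 0x60)  # -> [0xB0, 1, 0x60]
```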
Where to get it?
----------------
https://github.com/SpotlightKid/midiomatic
or, for Arch Linux users, from the AUR:
https://aur.archlinux.org/packages/midiomatic/
*Share & Enjoy*
Christopher Arndt
Hello everyone!
I'm pleased to announce that version 1.3.1 of *python-rtmidi* has just
been released!
What is it?
-----------
python-rtmidi is a Python binding for RtMidi, a set of C++ classes that
provide a cross-platform API for realtime MIDI input/output.
python-rtmidi supports Python 2 and Python 3 (3.4+) and is available for
Linux, macOS (OS X) and Windows.
What's new?
-----------
This is a bugfix release with only minor enhancements.
The major changes are:
* RtMidi C++ level exceptions, when thrown, do not print the error
message to stderr anymore.
* RtMidi C++ exceptions are now caught when creating RtMidiIn/Out
instances and converted into a Python rtmidi.SystemError exception.
* Helper functions in rtmidi.midiutil now raise sub-classes of
rtmidi.RtMidiError wherever appropriate.
* When the JACK backend can't be initialized (e.g. when the server isn't
running), it now raises a DRIVER_ERROR instead of just printing a warning.
* Various improvements to the included example scripts.
* Various small documentation wording changes and typo fixes.
For a detailed list of changes, see the change log here:
https://github.com/SpotlightKid/python-rtmidi/blob/master/CHANGELOG.rst
Where to get it?
----------------
https://github.com/SpotlightKid/python-rtmidi
or via pip:
pip install python-rtmidi
(Pre-compiled binary wheels for Windows and macOS for several Python
versions in 32 and 64 bit variants are provided.)
or, for Arch Linux users, from the AUR:
https://aur.archlinux.org/packages/python-rtmidi/
How to use it?
--------------
Please read the documentation here:
https://spotlightkid.github.io/python-rtmidi/
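As a quick taste of the API: `send_message()` takes raw MIDI bytes as a
plain Python list. A minimal sketch of building a note-on/note-off pair
(the surrounding rtmidi calls are shown only as comments, since they
need an open MIDI port to run):

```python
NOTE_ON, NOTE_OFF = 0x90, 0x80

def note_on(channel, note, velocity):
    """Raw note-on bytes, as python-rtmidi's send_message() expects."""
    return [NOTE_ON | (channel & 0x0F), note & 0x7F, velocity & 0x7F]

def note_off(channel, note):
    """Raw note-off bytes (velocity 0)."""
    return [NOTE_OFF | (channel & 0x0F), note & 0x7F, 0]

# Typical use with python-rtmidi:
#   import rtmidi, time
#   midiout = rtmidi.MidiOut()
#   midiout.open_port(0)                      # or open_virtual_port("demo")
#   midiout.send_message(note_on(0, 60, 112)) # middle C
#   time.sleep(0.5)
#   midiout.send_message(note_off(0, 60))

msg = note_on(0, 60, 112)  # -> [0x90, 60, 112]
```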
*Share & Enjoy*
Christopher Arndt