----- Forwarded message from Fons Adriaensen <fons(a)linuxaudio.org> -----
Date: Thu, 19 May 2022 22:22:12 +0200
From: Fons Adriaensen <fons(a)linuxaudio.org>
To: "Jeanette C." <julien(a)mail.upb.de>
On Thu, May 19, 2022 at 07:52:33PM +0200, Jeanette C. wrote:
> I know about one or two applications that use the timeofday/sleep mechanism,
> but from first hand experience I know that these tend to drift and wobble.
The key to doing this is to have a high-priority thread waiting for an
*absolute* time, and then each time increment that time by the
required delta.
Note that this is fundamentally different from using sleep() or similar
functions. With those you wait for a relative duration. So if your
previous event was late, the next one will be late as well, simply
because you start waiting for it too late. All the errors add up, and
you will *never* get the correct event frequency.
When you wait until an absolute time, any latency on the previous
event does not affect the following ones. The errors don't accumulate.
So what to wait for? That could be any system call that takes an
absolute timeout rather than a maximum waiting time. On Linux I'd
use something like sem_timedwait(). To set the initial timeout,
the corresponding clock can be read with clock_gettime(), using the
CLOCK_MONOTONIC option.
Don't know about Apple. Last time I looked it didn't have clock_gettime(),
but it has gettimeofday(). Note that it is not gettimeofday() that is
the cause of the problem you mentioned, it is using sleep() or usleep().
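A minimal sketch of the idea (assuming Linux; it uses clock_nanosleep()
with TIMER_ABSTIME instead of sem_timedwait(), since it can take the
absolute CLOCK_MONOTONIC deadline directly; sending the actual MIDI
clock byte is left as a placeholder):

    #include <time.h>

    int main (void)
    {
        /* 24 MIDI clocks per quarter note at 120 BPM -> 48 ticks/s. */
        const long period_ns = 60000000000LL / (120 * 24);
        struct timespec next;

        /* In a real application this loop would run in a high-priority
           (e.g. SCHED_FIFO) thread. */
        clock_gettime (CLOCK_MONOTONIC, &next);   /* initial reference */

        while (1)
        {
            /* Advance the target by a fixed delta. Latency on the
               previous wakeup does not shift the following targets,
               so errors do not accumulate. */
            next.tv_nsec += period_ns;
            while (next.tv_nsec >= 1000000000L)
            {
                next.tv_nsec -= 1000000000L;
                next.tv_sec += 1;
            }
            /* Sleep until the absolute target, not for a relative delay. */
            clock_nanosleep (CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);

            /* send the MIDI clock byte (0xF8) here -- placeholder */
        }
        return 0;
    }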
Ciao,
--
FA
----- End forwarded message -----
Hey hey,
I wonder how you would best derive a steady MIDI clock in software for a
cross-platform application. Cross-platform in this case almost certainly
means Linux and Mac.
I know about one or two applications that use the timeofday/sleep mechanism,
but from first-hand experience I know that these tend to drift and wobble.
This becomes apparent when syncing a synth with clock-synced delays to that
clock: there is a slight chorus effect. Furthermore, subsequent renderings of
the same song are too far out of sync. Exact re-runs are necessary with
connected mono-timbral instruments that have to contribute several sounds to
the same song.
I did think about using RtAudio with a very low blocksize. The callback
function could supply a steady time source at the connected soundcard's
samplerate (or any subdivision thereof). Is that feasible? Or would some
function from one of the Boost libraries do? I haven't found the right one
yet. The Thread library contains its own versions of sleep (e.g. sleep_for),
and other libraries supply all kinds of finely-grained time formats, but
apparently they too are based on a timeofday-style mechanism.
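Roughly, what I have in mind is something like the sketch below (purely
illustrative; the callback signature and names are made up and are not
RtAudio's actual API, and a real version would queue the 0xF8 clock byte
to a MIDI output instead of printing):

    #include <stdio.h>

    #define SAMPLE_RATE 48000.0
    #define BPM         120.0

    static double frames_per_tick;    /* frames between MIDI clocks    */
    static double next_tick_frame;    /* frame index of the next tick  */
    static unsigned long frame_count; /* frames processed so far       */

    /* Called once per audio block, e.g. from the soundcard callback. */
    static void audio_callback (unsigned int nframes)
    {
        for (unsigned int i = 0; i < nframes; i++)
        {
            if ((double)(frame_count + i) >= next_tick_frame)
            {
                printf ("MIDI clock at frame %lu\n", frame_count + i);
                next_tick_frame += frames_per_tick;
            }
        }
        frame_count += nframes;
    }

    int main (void)
    {
        /* 24 clocks per quarter note at 120 BPM -> 48 clocks/s,
           i.e. 1000 frames per clock at 48 kHz. */
        frames_per_tick = SAMPLE_RATE / (BPM / 60.0 * 24.0);
        next_tick_frame = 0.0;

        /* Simulate one second of audio in 64-frame blocks. */
        for (int b = 0; b < 48000 / 64; b++)
            audio_callback (64);
        return 0;
    }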
Being new to this kind of task, I am a bit at a loss. So any practical
pointers and hints are welcome.
Best wishes,
Jeanette
--
* Website: http://juliencoder.de - for summer is a state of sound
* Youtube: https://www.youtube.com/channel/UCMS4rfGrTwz8W7jhC1Jnv7g
* Audiobombs: https://www.audiobombs.com/users/jeanette_c
* GitHub: https://github.com/jeanette-c
If there's nothing missing in my life
Then why do these tears come at night <3
(Britney Spears)
Hello all,
Version 0.10.1 of Aeolus is now available at the usual place:
<http://kokkinizita.linuxaudio.org/linuxaudio/downloads/index.html>
* Cleanup, maintenance, bug fixes.
The biggest bug was probably that the 'instability' and 'release
detune' parameters set in the stops editor were correctly stored
into the *.ae0 files which contain the stop definitions, but NOT
copied into the *.ae1 files which contain the precomputed wavetables
and run-time synthesis parameters.
So they would work only when the wavetables were recomputed
on a running Aeolus instance (e.g. by changing tuning or
temperament), and not when previously stored ones were reloaded.
This makes quite a difference, as without the random delay
modulation which is controlled by 'instability', the looped
parts of the wavetables just become a static sound.
You may also get stops-0.4.0. This includes some tweaks that I
have done on my local copy over the past years, but is probably
not much different from 0.3.0. You may need to modify your
~/.aeolusrc to use these.
-------
Apart from bug fixes, this will be the last release using the
current Aeolus framework.
A completely new one is in the pipeline, but it still requires
a lot of new code, testing and tuning. This will provide:
* 'Chiff', the filtered noise that some pipes generate.
I've finally found an algorithm that produces realistic
results and that is efficient enough to work on lots
of pipes.
* Using multiple CPU cores.
* Higher order Ambisonics output.
* Binaural output (with optional head tracking).
* Full separation of UI and synthesis processes,
connected via a network connection.
Ciao,
--
FA
Hi,
I just released a new plugin, the first one based on the new B.Widgets
toolkit. Try it out and have fun.
B.Low is the unique sample-based sound generator plugin you have always
been waiting for. It blows out low sounds from below to spice up your
music production with a special flavour. The high-quality samples were
gratefully provided by numerous international artists.
Github: https://github.com/sjaehn/BLow
Release: https://github.com/sjaehn/BLow/releases/tag/1.2.0
Regards,
Sven
Greetings,
The 2022 Sound and Music Computing (SMC) Summer School will take place on June 5-7, 2022 in Saint-Étienne (France), prior to the SMC conference (https://smc22.grame.fr). It will consist of three one-day workshops by Michel Buffa, Ge Wang, and Yann Orlarey (see program below). The SMC-22 Summer School is free and targeted towards grad students in the cross-disciplinary fields of computer science, electrical engineering, music, and the arts in general. Attendance will be limited to 25 students.
Applications to the SMC-22 Summer School can be made through this form: https://forms.gle/HF2Xv7QtbZG5U4hE6 (you will be asked to provide a resume as well as a letter of intent). Applications will be reviewed "on the fly" on a "first come, first served" basis: if the profile of a candidate seems acceptable, they will be selected automatically. The SMC-22 Summer School will happen in person (no video streaming): accepted candidates will be expected to physically come to the conference venue.
Additional information about this event can be found on the SMC-22 website: https://smc22.grame.fr/school.html
---
SMC-22 SUMMER SCHOOL PROGRAM
--------------------------------------------------
Michel Buffa -- Web Audio Modules 2.0: VSTs For the Web
During this tutorial, you will first follow a WebAudio API presentation with examples and learn how to program simple effects or instruments with JavaScript. In the second part you will be introduced to "WebAudio Modules 2.0" (WAM), a standard for developing "VSTs on the Web." The new WAM ecosystem covers many use cases for developing plugins, from the amateur developer writing simple plugins using only JavaScript/HTML/CSS to the professional developer looking for maximum optimization, using multiple languages and compiling to WebAssembly. It was designed by people from the academic research world and by developers who are experts in Web Audio and have experience developing professional computer music applications. In its current state, the open source WAM 2.0 standard is still considered a "beta version," but it is in a stable state. The framework provides most of the best features found in native plugin standards, adapted to the Web. We regularly add new plugins to the wam-examples GitHub repository, but there are also dozens of WAMs developed by the community, such as the set of plugins created by the author of sequencer.party, who has open sourced them in their entirety. During this tutorial you will learn how to reuse existing plugins in a host web application, but also how to write your own reusable plugins using JavaScript, TypeScript or Faust.
Bio of Michel Buffa
Michel Buffa (http://users.polytech.unice.fr/~buffa/) is a professor/researcher at University Côte d'Azur and a member of the WIMMICS research group, common to INRIA and the I3S Laboratory (CNRS). He has contributed to the development of the WebAudio research field, having participated in all WebAudio Conferences and served on each program committee between 2015 and 2019. He actively works with the W3C WebAudio working group. With other researchers and developers he co-created a WebAudio Plugin standard. He has been the national coordinator of the French research project WASABI, which consists in building a knowledge database of 2M songs that mixes metadata from cultural sources, lyrics, and audio analysis.
--------------------------------------------------
Ge Wang -- Chunity! Interactive Audiovisual Design with ChucK in Unity
In this workshop, participants will learn to work with Chunity -- a programming environment for the creation of interactive audiovisual tools, instruments, games, and VR experiences. It embodies an audio-driven, sound-first approach that integrates audio programming and graphics programming in the same workflow, taking advantage of the strongly-timed audio programming features of the ChucK audio programming language and the state-of-the-art real-time graphics engine found in Unity.
Through this one-day workshop, participants will learn:
1) THE FUNDAMENTALS OF CHUNITY WORKFLOW FROM CHUCK TO UNITY,
2) HOW TO ARCHITECT AUDIO-DRIVEN, STRONGLY-TIMED SOFTWARE USING CHUNITY,
3) DESIGN PRINCIPLES FOR INTERACTIVE AUDIOVISUAL/VR SOFTWARE
Any prior experience with ChucK or Unity would be helpful but is not necessary for this workshop.
Bio of Ge Wang
Ge Wang (https://ccrma.stanford.edu/~ge/) is an Associate Professor at Stanford University’s Center for Computer Research in Music and Acoustics (CCRMA). He researches the artful design of tools, toys, games, instruments, programming languages, virtual reality experiences, and interactive AI systems with humans in the loop. Ge is the architect of the ChucK audio programming language (https://chuck.stanford.edu/) and the director of the Stanford Laptop Orchestra (https://slork.stanford.edu/). He is the Co-founder of Smule and the designer of the Ocarina and Magic Piano apps for mobile phones. A 2016 Guggenheim Fellow, Ge is the author of /Artful Design: Technology in Search of the Sublime/ (https://artful.design/), a photo comic book about how we shape technology -- and how technology shapes us.
--------------------------------------------------
Yann Orlarey -- Audio Programming With Faust
The objective of this one-day workshop is to discover the Faust programming language (https://faust.grame.fr) and its ecosystem and to learn how to program your own plugins or audio applications. No prior knowledge of Faust is required.
Faust is a functional programming language specifically designed for real-time signal processing and synthesis. It targets high-performance signal processing applications and audio plug-ins for a variety of platforms and standards. A distinctive feature of Faust is that it is not an interpreted, but a compiled language. Thanks to the concept of architecture, Faust can be used to generate ready-to-use objects for a wide range of platforms and standards including audio plugins (VST, MAX, SC, PD, Csound,...), smartphone apps, web apps, embedded systems, etc.
At the end of the workshop, you will have acquired basic Faust programming skills and will be able to develop your own audio applications or plugins. You will also have a good overview of the main libraries available, of the documentation, and of the main programming tools that constitute the Faust ecosystem.
Bio of Yann Orlarey
Born in 1959 in France, Yann Orlarey is a composer, researcher, member of the Emeraude research team (INRIA, INSA, GRAME), and currently scientific director of GRAME (https://www.grame.fr), the national center for musical creation based in Lyon, France. His musical repertoire includes instrumental, mixed, and interactive works as well as sound installations. His research work focuses in particular on programming languages for music and sound creation. He is the author or co-author of several pieces of music software, including the programming language FAUST, specialized in audio signal synthesis and processing.