Greetings,
The 2022 Sound and Music Computing (SMC) Summer School will take place on June 5-7, 2022 in Saint-Étienne (France), prior to the SMC conference (https://smc22.grame.fr). It will consist of three one-day workshops by Michel Buffa, Ge Wang, and Yann Orlarey (see program below). The SMC-22 Summer School is free and targeted towards grad students in the cross-disciplinary fields of computer science, electrical engineering, music, and the arts in general. Attendance will be limited to 25 students.
Applications to the SMC-22 Summer School can be made through this form: https://forms.gle/HF2Xv7QtbZG5U4hE6 (you will be asked to provide a resume as well as a letter of intent). Applications will be reviewed "on the fly," on a first-come, first-served basis: if the profile of a candidate seems acceptable, the candidate will be selected automatically. The SMC-22 Summer School will happen in person (no video streaming): accepted candidates will be expected to physically come to the conference venue.
Additional information about this event can be found on the SMC-22 website: https://smc22.grame.fr/school.html
---
SMC-22 SUMMER SCHOOL PROGRAM
--------------------------------------------------
Michel Buffa -- Web Audio Modules 2.0: VSTs For the Web
During this tutorial, you will first follow a presentation of the WebAudio API with examples, and you will learn how to program simple effects or instruments with JavaScript. In the second part, you will be introduced to "Web Audio Modules 2.0" (WAM), a standard for developing "VSTs on the Web." The new WAM ecosystem covers many use cases for developing plugins, from the amateur developer writing simple plugins using only JavaScript/HTML/CSS to the professional developer looking for maximum optimization, using multiple languages and compiling to WebAssembly. It was designed by people from the academic research world and by developers who are experts in Web Audio and have experience developing professional computer music applications. In its current state, the open-source WAM 2.0 standard is still considered a "beta version," but it is stable. The framework provides most of the best features found in native plugin standards, adapted to the Web. We regularly add new plugins to the wam-examples GitHub repository, but there are also dozens of WAMs developed by the community, such as the set of plugins created by the author of sequencer.party, who has open sourced them in their entirety. During this tutorial you will learn how to reuse existing plugins in a host web application, but also how to write your own reusable plugins using JavaScript, TypeScript or Faust.
Bio of Michel Buffa
Michel Buffa (http://users.polytech.unice.fr/~buffa/) is a professor and researcher at University Côte d'Azur and a member of the WIMMICS research group, joint between INRIA and the I3S Laboratory (CNRS). He has contributed to the development of the Web Audio research field, having participated in every WebAudio Conference and served on the program committee of each edition between 2015 and 2019. He actively works with the W3C WebAudio working group. With other researchers and developers, he co-created a WebAudio plugin standard. He has been the national coordinator of the French research project WASABI, which consists in building a knowledge database of 2M songs that mixes cultural metadata with lyrics and audio analysis.
--------------------------------------------------
Ge Wang -- Chunity! Interactive Audiovisual Design with ChucK in Unity
In this workshop, participants will learn to work with Chunity -- a programming environment for the creation of interactive audiovisual tools, instruments, games, and VR experiences. It embodies an audio-driven, sound-first approach that integrates audio programming and graphics programming in the same workflow, taking advantage of the strongly-timed features of the ChucK audio programming language and the state-of-the-art real-time graphics engine found in Unity.
Through this one-day workshop, participants will learn:
1) THE FUNDAMENTALS OF CHUNITY WORKFLOW FROM CHUCK TO UNITY,
2) HOW TO ARCHITECT AUDIO-DRIVEN, STRONGLY-TIMED SOFTWARE USING CHUNITY,
3) DESIGN PRINCIPLES FOR INTERACTIVE AUDIOVISUAL/VR SOFTWARE
Any prior experience with ChucK or Unity would be helpful but is not necessary for this workshop.
Bio of Ge Wang
Ge Wang (https://ccrma.stanford.edu/~ge/) is an Associate Professor at Stanford University’s Center for Computer Research in Music and Acoustics (CCRMA). He researches the artful design of tools, toys, games, instruments, programming languages, virtual reality experiences, and interactive AI systems with humans in the loop. Ge is the architect of the ChucK audio programming language (https://chuck.stanford.edu/) and the director of the Stanford Laptop Orchestra (https://slork.stanford.edu/). He is the Co-founder of Smule and the designer of the Ocarina and Magic Piano apps for mobile phones. A 2016 Guggenheim Fellow, Ge is the author of /Artful Design: Technology in Search of the Sublime/ (https://artful.design/), a photo comic book about how we shape technology -- and how technology shapes us.
--------------------------------------------------
Yann Orlarey -- Audio Programming With Faust
The objective of this one-day workshop is to discover the Faust programming language (https://faust.grame.fr) and its ecosystem and to learn how to program your own plugins or audio applications. No prior knowledge of Faust is required.
Faust is a functional programming language specifically designed for real-time signal processing and synthesis. It targets high-performance signal processing applications and audio plug-ins for a variety of platforms and standards. A distinctive feature of Faust is that it is not an interpreted, but a compiled language. Thanks to the concept of architecture, Faust can be used to generate ready-to-use objects for a wide range of platforms and standards including audio plugins (VST, MAX, SC, PD, Csound,...), smartphone apps, web apps, embedded systems, etc.
At the end of the workshop, you will have acquired basic Faust programming skills and will be able to develop your own audio applications or plugins. You will also have a good overview of the main libraries available, of the documentation, and of the main programming tools that constitute the Faust ecosystem.
Bio of Yann Orlarey
Born in 1959 in France, Yann Orlarey is a composer and researcher, a member of the Emeraude research team (INRIA, INSA, GRAME), and currently the scientific director of GRAME (https://www.grame.fr), the national center for musical creation based in Lyon, France. His musical repertoire includes instrumental, mixed, and interactive works as well as sound installations. His research focuses in particular on programming languages for music and sound creation. He is the author or co-author of several pieces of music software, including the programming language FAUST, specialized in acoustic signal synthesis and processing.
Hi people,
Google Summer of Code (https://summerofcode.withgoogle.com) is a global, online program focused on bringing new contributors into open source software development. GSoC Contributors work with an open source organization on a 12+ week programming project under the guidance of mentors.
GRAME has been selected as a mentor organization for the Faust project (https://summerofcode.withgoogle.com/programs/2022/organizations/grame).
If you are interested, feel free to contribute!
Stéphane
Dear past and future visitors of the Sonoj Convention,
the following message is made under the assumption that the world will return to a healthier and more peaceful state by October 2022.
At this point in time, I am unable to make any guarantees. Sorry.
However, the date is fixed. The convention will either take place on this weekend (see below) or it will not happen at all.
The fourth annual Sonoj Convention ( https://www.sonoj.org ) will maybe(!) be taking place this upcoming October 8th-9th, 2022 in Cologne, Germany.
The convention focuses on the combination of music production and free/open source software, with an emphasis on practical music production.
We want to welcome everyone, regardless of your musical or technical background. As in previous years, admission to the convention is free.
While the website https://sonoj.org/ is still in its "in-between" mode, it will soon be replaced by the real site again, with more information.
To get a better idea of what to expect from Sonoj, you can find information and recordings from the previous conventions in our archive ( https://sonoj.org/archive ).
If you want to contribute with a demonstration or talk, please contact me ( info(a)sonoj.org ). A short, informal expression of intent will be enough for now.
Yours,
Nils Hilbricht
Cologne, Germany
https://www.sonoj.org
Hello all,
See below...
What I want to achieve is to take some action when instr 85 ends.
I naively tried using 'gidur' as p3 for instr 85 in the score, but
that doesn't work.
So how do I trigger instr 86 at the right time?
<CsInstruments>
instr 84
gidur filelen $INPFILE
print gidur
endin
instr 85 ; set its duration from the value found in instrument 84
p3 = gidur
; process input from $INPFILE
endin
instr 86
; Should do something when instr 85 ends.
endin
</CsInstruments>
<CsScore>
i84 0 0.1
i85 + 1 ; p3 is just a dummy
</CsScore>
TIA,
--
FA
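One way this can be handled (a minimal sketch, assuming Csound's schedule opcode, whose start time is measured from the init time of the note that calls it; untested with the setup above) is to schedule instr 86 from inside instr 85 once p3 has been set:
instr 85 ; set its duration from the value found in instrument 84
  p3 = gidur
  ; start instr 86 (here with a 1-second duration) exactly when this note ends
  schedule 86, p3, 1
  ; process input from $INPFILE
endin
With this, the score only needs the i84 and i85 statements; instr 86 is started by instr 85 itself.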
Hi there,
My name is Roman, I'm studying Computer Science and I'd like to participate in the GSoC this year.
Also I'd like to do this in a linux audio related project if possible, because I want to help improve the Linux Audio world!
So if anyone here is part of or knows an organisation that would like to mentor a GSoC student and have them work on one of their projects, please let me know!
My background:
I'm a Master's student in CS, and my focus so far has centered on operating systems (incl. kernel development), security, concurrency and (hard) real-time systems.
At university I also took a few signals, systems and DSP courses, so I know what an LTI system is, how digital filters work and what a Hilbert transform does.
To pay for rent and food I work part time as a repair technician for electronic musical instruments and equipment, and therefore have a background in electronics as well.
I'm also a passionate hobby musician and live mixing technician.
I know how to write C code that doesn't blow up. I'm familiar enough with C++ to get around comfortably.
Recently I started writing an 8-bit microcontroller emulator as a University project in Rust and so far I really like the language.
Python is also a very nice language in my opinion.
Audio-related things I've written include Python bindings for the jack dbus interface, a jack application management tool to start/stop/mute applications via hardware buttons, and a mididings setup that maps MIDI CC to Sysex for my hardware synths.
I've also written a bit of FAUST code to create a number of effects I want to use.
A few ideas and fields I can imagine working on (non-exhaustive, no particular order):
- Mididings backend for embedded devices
- Polyrhythmic sequencing
- Sysex integration in sequencers
- Linux kernel work
- Emulation of analog hardware (setBfree still needs a nice overdrive afaik ;) )
- jack and/or pipewire
Something I'd really like to see someday is being able to sit down at (or stand up with) my instrument and just jam, and when I've played something I like, go back to, say, 28s ago, extract a few bars, and build a song from there.
Also I'd really like to hear your ideas and suggestions! :)
I'm really looking forward to your responses and hopefully a great collaboration as a result!
Feel free to ask any questions, as will I! ;)
Cheers,
Roman / etXzat