Does anyone know of software that can generate MIDI messages from a touchpad?
The idea would be to send CCs to a sequencer or soft synth, but being able to
send it to an external hardware device would also be very useful.
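In case no ready-made program turns up, the mapping itself is simple enough to prototype. Here is a rough sketch of the core conversion only; the CC numbers and the 0..1 coordinate range are arbitrary assumptions, and a real program would read touch events via evdev and write the bytes to an ALSA sequencer port:

```python
# Sketch: map normalized touchpad coordinates to raw MIDI CC messages.
# CC numbers (1 and 74) and the 0..1 coordinate range are assumptions;
# a real program would read events via evdev and write to an ALSA port.

def touch_to_cc(x, y, channel=0, cc_x=1, cc_y=74):
    """Return two 3-byte MIDI Control Change messages for a touch point.

    x, y are floats in [0.0, 1.0]; each is scaled to the 0..127 CC range.
    """
    def scale(v):
        return max(0, min(127, round(v * 127)))
    status = 0xB0 | (channel & 0x0F)          # Control Change on `channel`
    return (bytes([status, cc_x, scale(x)]),
            bytes([status, cc_y, scale(y)]))

msg_x, msg_y = touch_to_cc(0.5, 1.0)
# msg_x == b'\xb0\x01\x40' (CC1 = 64), msg_y == b'\xb0\x4a\x7f' (CC74 = 127)
```

The same three-byte messages could be sent to an external hardware device through any raw MIDI interface.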
--
Will J Godfrey
https://willgodfrey.bandcamp.com/
http://yoshimi.github.io
Say you have a poem and I have a tune.
Exchange them and we can both have a poem, a tune, and a song.
This is the Steinway_IMIS soundfont, version 2.2.
ftp://musix.ourproject.org/pub/musix/sf2/Steinway_IMIS2.2
This version fixes the issue with loops. I hope this is the good one
and that there are no remaining major bugs.
Marcos is a little busy right now, so he asked me to make this fix. He
is thinking about making other improvements, so expect more updates soon.
Does anybody out here in LAU land have experience with PISound?
https://www.blokas.io/pisound/
I have just bought one and am having quite severe teething problems with it.
It keeps freezing for ~45 seconds when running X and I cannot get it to
use the full display.
cheers
Worik
--
If not me then who? If not now then when? If not here then where?
So, here I stand, I can do no other
root(a)worik.org 021-1680650, (03) 4821804 Aotearoa (New Zealand)
Dear list,
I recently bought a LinnStrument from Roger Linn Design:
http://www.rogerlinndesign.com/linnstrument.html
It is a great isomorphic midi-controller, and as such it is immediately
recognized on Linux.
The distinguishing feature of the LinnStrument is that it senses 3
degrees of freedom on each note: x-direction, y direction and
z-direction (pressure). The x-direction is mapped to pitch-bend, and
y-direction to CC74.
A cool feature is the "slide", where the pitch-bend is used to slide
between all notes in a row.
To allow individual pitch and CC74 values for each note, it sends each
note on a separate midi-channel ("MPE"):
http://www.rogerlinndesign.com/implementing-mpe.html
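For readers new to MPE, the per-note channel rotation can be sketched like this. This is a toy allocator of my own, not the LinnStrument's actual firmware behaviour, and the channel range is an assumption (MPE "lower zone" member channels 2..16, i.e. 0-based 1..15):

```python
# Toy MPE channel allocator: each sounding note gets its own member
# channel, so per-note pitch-bend and CC74 don't collide. The channel
# range is an assumption, not the LinnStrument's actual algorithm.

class MpeAllocator:
    def __init__(self, first=1, last=15):    # 0-based channels 1..15
        self.free = list(range(first, last + 1))
        self.held = {}                       # note number -> channel

    def note_on(self, note):
        ch = self.free.pop(0)                # hand out the oldest free channel
        self.held[note] = ch
        return ch

    def note_off(self, note):
        ch = self.held.pop(note)
        self.free.append(ch)                 # recycle the channel at the back
        return ch

alloc = MpeAllocator()
a = alloc.note_on(60)   # first note gets channel 1
b = alloc.note_on(64)   # second simultaneous note gets channel 2
alloc.note_off(60)
c = alloc.note_on(67)   # gets channel 3; 1 is only reused after the others
```

Recycling channels at the back of the queue (rather than reusing the most recently freed one) gives a note's release tail time to finish before its channel is reassigned.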
Bitwig has added support for this, and there are 20 presets in version
1.3.11 that use it (tag: linnstrument). The LinnStrument
controller is not recognized automatically on Linux in version 1.3.11,
but it can be configured manually, and then it works fine. Note that
both midi-in and midi-out have to be configured; otherwise there is no
sound! It should look like this: https://ibin.co/2msBJVgpKtf9.png
Now I would like to also use it with the free Linux synths.
Here's what I have been able to make work this far.
Synthv1:
MPE works reasonably well: I can play polyphonically in MPE mode, but it
tends to miss the "note off"s.
I can get the slide to work by setting
<param index="36" name="DEF1_PITCHBEND">2</param>
<param index="78" name="DEF2_PITCHBEND">2</param>
in a preset.
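For anyone puzzling over those numbers: with the bend range set to 2 semitones, a pitch offset maps to the 14-bit wheel value roughly as below. This is my own back-of-envelope helper showing the standard mapping, not synthv1's internal code:

```python
# Back-of-envelope: convert a pitch offset in semitones to a 14-bit
# MIDI pitch-bend value, given the synth's bend range in semitones.
# This is the standard mapping, not synthv1's internal code.

def bend_value(semitones, bend_range=2.0):
    """14-bit pitch-bend value (0..16383, centre 8192) for an offset."""
    v = 8192 + round(semitones / bend_range * 8192)
    return max(0, min(16383, v))

assert bend_value(0) == 8192        # centre: no bend
assert bend_value(2) == 16383       # full up with a +/-2 semitone range
assert bend_value(-2) == 0          # full down
```

The trade-off is resolution: with a wide range like Zyn's PWheelB.Rng at 2400 cents (±24 semitones), each semitone only spans about 341 of the 8192 upward steps instead of 4096.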
Zynaddsubfx:
I cannot get MPE to work.
Sending only on one channel, and setting PWheelB.Rng to 2400 cents, I
can get the sliding to work, but only when playing with one finger.
If I enable MPE on the LinnStrument there is only an occasional sound,
when it happens to send on the channel that Zyn is listening on.
I'd love to hear if other LinnStrument users have been able to do more
with any of the free synths on Linux.
All the best,
Thomas
Hello all,
I know this might be a weird place to ask, but I thought some of you might
have some insight. I'm setting up four Raspberry Pis for an installation
to just loop through videos on four TVs with vlc and openbox.
Do you think I could set up the image on one Pi, and then clone it onto
SD cards for the other three Pis? There should be no issues with doing
that, right? Especially since I don't plan on giving them internet
access. The Pis all have the same size SD card too, making this even
easier. I think I'm just going to throw 32-bit Arch Linux ARM on one of the
Pis, set it up the way I want, and then clone it for the other Pis.
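The clone itself is just a byte-for-byte copy plus a verification pass, which can be sketched like this. The paths below are placeholders; on real hardware you would read the whole block device (e.g. /dev/mmcblk0, typically with dd as root) rather than an image file, and the target card must be the same size or larger:

```python
# Sketch: byte-for-byte copy of a disk image plus checksum verification.
# Paths are placeholders; on real hardware you would copy the block
# device itself (often with dd), which needs root and an equal-or-larger
# target card.

import hashlib
import shutil

def clone_and_verify(src, dst, chunk=1024 * 1024):
    """Copy src to dst in chunks and return True if the checksums match."""
    with open(src, "rb") as s, open(dst, "wb") as d:
        shutil.copyfileobj(s, d, chunk)

    def sha256(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(chunk), b""):
                h.update(block)
        return h.hexdigest()

    return sha256(src) == sha256(dst)
```

Since the Pis are identical and offline, a verified clone like this should be all that's needed; the usual caveat is only when clones share a network and need unique hostnames or SSH keys.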
Thank you very much for your help and input,
Brandon Hale
Hi, there!
I have a problem with my pipewire installation on a debian testing
machine (Thinkpad T530). And I have a very similar machine running
pipewire (Thinkpad T410) which does not have these issues.
I *guess* I followed the same instructions for activating pipewire on
both machines. Chances are I forgot something…
On both machines, audacious with the jack-output-plugin works.
I use the linux-show-player which has several modules to output the
sound (alsa/pulse/jack). I mostly use jack to have the freedom to send
specific audio to a specific channel. (In this case that's not
necessary, but there are tasks where I depend on it.)
On the T410 linux-show-player does work as expected. On the T530 it does
not. It blocks the output, no sound is heard and the time-counter stays
on 0:00.
Tried pw-top, but didn't find a clue. Neither "[sudo] journalctl -f" nor
linux-show-player (in debug mode) prints any useful information.
"pw-metadata -n settings" shows the exact same settings on both machines.
How can I debug this?
And how can I return to separate pulseaudio and jackd, if I'm not able
to debug it?
Greets!
Mitsch
Greetings,
The 2022 Sound and Music Computing (SMC) Summer School will take place on June 5-7, 2022 in Saint-Étienne (France), prior to the SMC conference (https://smc22.grame.fr). It will consist of three one day workshops by Michel Buffa, Ge Wang, and Yann Orlarey (see program below). The SMC-22 Summer School is free and targeted towards grad students in the cross-disciplinary fields of computer science, electrical engineering, music, and the arts in general. Attendance will be limited to 25 students.
Applications to the SMC-22 Summer School can be made through this form: https://forms.gle/HF2Xv7QtbZG5U4hE6 (you will be asked to provide a resume as well as a letter of intent). Applications will be reviewed "on the fly" on a first-come, first-served basis: if the profile of a candidate seems acceptable, the candidate will be automatically selected. The SMC-22 Summer School will happen in person (no video streaming): accepted candidates will be expected to physically come to the conference venue.
Additional information about this event can be found on the SMC-22 website: https://smc22.grame.fr/school.html
---
SMC-22 SUMMER SCHOOL PROGRAM
--------------------------------------------------
Michel Buffa -- Web Audio Modules 2.0: VSTs For the Web
During this tutorial, you will first follow a presentation of the WebAudio API with examples, and you will learn how to program simple effects or instruments with JavaScript. In the second part you will be introduced to "WebAudio Modules 2.0" (WAM), a standard for developing "VSTs on the Web." The new WAM ecosystem covers many use cases for developing plugins, from the amateur developer writing simple plugins using only JavaScript/HTML/CSS to the professional developer looking for maximum optimization, using multiple languages and compiling to WebAssembly. It was designed by people from the academic research world and by developers who are experts in Web Audio and have experience developing professional computer music applications. In its current state, the open source WAM 2.0 standard is still considered a "beta version," but it is in a stable state. The framework provides most of the best features found in native plugin standards, adapted to the Web. We regularly add new plugins to the wam-examples GitHub repository, but there are also dozens of WAMs developed by the community, such as the set of plugins created by the author of sequencer.party, who has open sourced them in their entirety. During this tutorial you will learn how to reuse existing plugins in a host web application, but also how to write your own reusable plugins using JavaScript, TypeScript or Faust.
Bio of Michel Buffa
Michel Buffa (http://users.polytech.unice.fr/~buffa/) is a professor/researcher at University Côte d'Azur and a member of the WIMMICS research group, common to INRIA and to the I3S Laboratory (CNRS). He has contributed to the development of the WebAudio research field since its beginnings, having participated in all WebAudio Conferences and served on each program committee between 2015 and 2019. He actively works with the W3C WebAudio working group. With other researchers and developers he co-created a WebAudio Plugin standard. He has been the national coordinator of the French research project WASABI, which consists in building a 2M-song knowledge database that mixes cultural metadata with lyrics and audio analysis.
--------------------------------------------------
Ge Wang -- Chunity! Interactive Audiovisual Design with ChucK in Unity
In this workshop, participants will learn to work with Chunity -- a programming environment for the creation of interactive audiovisual tools, instruments, games, and VR experiences. It embodies an audio-driven, sound-first approach that integrates audio programming and graphics programming in the same workflow, taking advantage of the strongly-timed audio programming features of the ChucK audio programming language and the state-of-the-art real-time graphics engine found in Unity.
Through this one-day workshop, participants will learn:
1) THE FUNDAMENTALS OF CHUNITY WORKFLOW FROM CHUCK TO UNITY,
2) HOW TO ARCHITECT AUDIO-DRIVEN, STRONGLY-TIMED SOFTWARE USING CHUNITY,
3) DESIGN PRINCIPLES FOR INTERACTIVE AUDIOVISUAL/VR SOFTWARE
Any prior experience with ChucK or Unity would be helpful but is not necessary for this workshop.
Bio of Ge Wang
Ge Wang (https://ccrma.stanford.edu/~ge/) is an Associate Professor at Stanford University’s Center for Computer Research in Music and Acoustics (CCRMA). He researches the artful design of tools, toys, games, instruments, programming languages, virtual reality experiences, and interactive AI systems with humans in the loop. Ge is the architect of the ChucK audio programming language (https://chuck.stanford.edu/) and the director of the Stanford Laptop Orchestra (https://slork.stanford.edu/). He is the Co-founder of Smule and the designer of the Ocarina and Magic Piano apps for mobile phones. A 2016 Guggenheim Fellow, Ge is the author of /Artful Design: Technology in Search of the Sublime/ (https://artful.design/), a photo comic book about how we shape technology -- and how technology shapes us.
--------------------------------------------------
Yann Orlarey -- Audio Programming With Faust
The objective of this one-day workshop is to discover the Faust programming language (https://faust.grame.fr) and its ecosystem and to learn how to program your own plugins or audio applications. No prior knowledge of Faust is required.
Faust is a functional programming language specifically designed for real-time signal processing and synthesis. It targets high-performance signal processing applications and audio plug-ins for a variety of platforms and standards. A distinctive feature of Faust is that it is not an interpreted, but a compiled language. Thanks to the concept of architecture, Faust can be used to generate ready-to-use objects for a wide range of platforms and standards including audio plugins (VST, MAX, SC, PD, Csound,...), smartphone apps, web apps, embedded systems, etc.
At the end of the workshop, you will have acquired basic Faust programming skills and will be able to develop your own audio applications or plugins. You will also have a good overview of the main libraries available, of the documentation, and of the main programming tools that constitute the Faust ecosystem.
Bio of Yann Orlarey
Born in 1959 in France, Yann Orlarey is a composer, researcher, member of the Emeraude research team (INRIA, INSA, GRAME), and currently scientific director of GRAME (https://www.grame.fr), the national center for musical creation based in Lyon, France. His musical repertoire includes instrumental, mixed, and interactive works as well as sound installations. His research work focuses in particular on programming languages for music and sound creation. He is the author or co-author of several pieces of music software, including the programming language FAUST, specialized in acoustic signal synthesis and processing.
Hello, all,
I tried to replace my audio stack on my everyday, all-purpose Linux
Ubuntu 18.04 desktop with PipeWire. Overall, it went smoothly, with one
exception. So far, I've mainly used it to listen to audio from Firefox,
from YouTube videos, etc. Every time I perform an action on the YouTube
video (start or stop the video, move the timeline back and forth, change
the playback speed, etc.), the Sound control panel either shows a new
Firefox source, under the Applications tab, with the volume set to zero,
or the existing Firefox source is slid to zero. Before I can hear
anything, I have to bring up the Sound control panel and slide the
control up.
I know this isn't a PipeWire support forum, but I haven't been able to
find one for PipeWire under Linux or Ubuntu. Does anyone know of a
support or user's forum where I could ask this question? Where would you
go to ask this question?
Thanks for your help and advice.
-Kevin