(sorry for cross-posting, please distribute)
The IEM – Institute of Electronic Music and Acoustics – in Graz, Austria
is happy to announce the call for its 2026 residency program.
# Artist in Residence
<https://go.iem.at/residency>
The residency is aimed at individuals wishing to pursue an artistic
project related to the research fields of the IEM:
• Algorithmic Composition
• Algorithmic Experimentation
• Audio-Visuality
• Dynamical Systems
• Experimental Game Design
• Live Coding
• Sonic Interaction Design
• Spatialization/higher-order Ambisonics
• Standard and non-standard Sound Synthesis
• ...
Duration of residency: 4 months
Start date: September 1st 2026 (negotiable)
APPLICATION DEADLINE: March 31st, 2026
Please respond to the official call by KUG for a Senior Artist (scroll
down in the PDF for the English version):
<https://go.iem.at/residency26>
# The Institute
The Institute of Electronic Music and Acoustics is a department of the
University of Music and Performing Arts Graz, founded in 1965. It is a
leading institution in its field, with a staff of more than 35
researchers and artists. IEM offers education to students in composition
and computer music, sound engineering, sound design, contemporary music
performance, and musicology. It is well connected to the University of
Technology, the University of Graz as well as to the University of
Applied Sciences Joanneum through three joint study programs.
The project results will be released through the Institute’s own Open
CUBE and Signale concert series, as well as through various
collaborations with international artists and institutions.
What we expect from applicants:
• A project proposal that adds new perspectives to the Institute’s
activities and resonates well with the interests of IEM.
• Willingness to work on-site in Graz for most of the Residency.
• Willingness to exchange and share ideas, knowledge and results with
IEM staff members and students, and engage in scholarly discussions.
• The ability to work independently within the Institute.
• A dissemination strategy as part of the project proposal that ensures
the publication of the work, or documentation thereof, in a suitable
format. This could be achieved, for example, through a media release, a
journal or conference publication, a project website, or other means
that help to preserve the knowledge gained through the Residency and
make it available to the public.
• A public presentation, e.g. a concert or installation, presenting the
results of the Residency.
What we offer:
• 24/7 access to the facilities of the IEM.
• Exchange with competent and experienced staff members.
• A desk in a shared office space for the entire period and access to
studios including the CUBE [1], according to availability.
• Extensive access to the studios of the IEM during the period from July
1st until the end of September.
• Access to the IKOsahedron loudspeaker [2]
• Access to the “Autoklavierspieler” [3]
• Access to infrared motion tracking systems
• Regular possibilities for contact and exchange with peers from similar
or other disciplines.
• Concert and presentation facilities (the CUBE, a 30-channel
loudspeaker concert space).
What we cannot offer to the successful applicant:
• We cannot provide any housing.
• We also cannot provide continuous assistance and support, although the
staff is generally willing to help where possible.
• We cannot host artist duos or groups because of spatial limitations.
• We cannot offer any additional financial support for travel or
material expenses.
Feel free to contact residency(a)iem.at if you have any questions.
[1] The Cube has a 30-channel loudspeaker system
[2] <https://iko.sonible.com/>
[3] <https://algo.mur.at/projects/autoklavierspieler/>
--
please do not CC me for list-emails
[I wanted to send this to LAD/LAU yesterday, but used the old list addresses by accident]
Hi all,
just wanted to share the good news here that in 2026 the LAC (Linux Audio Conference) is
taking place again, on June 18-20 (Thu-Sat), this time coming back to Maynooth (Ireland)
where it was already hosted in 2011.
Victor Lazzarini, conference organizer, asked me to help in spreading the word about
it, so here we go.
All details on the music & paper submission process, deadlines, travel and accommodation etc. can
be found at the conference web site: https://lac26.mucs.club/
Greetings, and please feel free to spread the word wherever possible,
Frank
This release introduces major improvements to DSP performance and tuning
flexibility.
New Features
*Multithreaded Audio Engine*
Loopino now supports multithreaded audio processing to reduce load on
the main audio thread.
* Audio processing can be buffered as half-frame or full-frame blocks
* Buffered DSP blocks are processed in a worker thread
* Significantly reduces DSP load and xruns in the main audio thread
* Designed to improve stability under high polyphony and complex
modulation scenarios
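The release notes do not show code, but the general double-buffering pattern they describe can be sketched roughly as follows (a hypothetical Python toy, not Loopino's actual C++ implementation): the audio thread hands each new block to a worker thread and returns the previously finished block, so heavy processing never runs on the audio thread, at the cost of one block of latency.

```python
import queue
import threading

class BufferedDSP:
    """Toy double-buffered DSP engine: expensive processing runs in a
    worker thread; the 'audio thread' only swaps blocks."""

    def __init__(self, process, block_size):
        self.process = process                # the expensive DSP function
        self.block_size = block_size
        self.inbox = queue.Queue(maxsize=1)   # blocks waiting for the worker
        self.outbox = queue.Queue(maxsize=1)  # finished blocks
        self.primed = False
        threading.Thread(target=self._worker, daemon=True).start()

    def _worker(self):
        # Heavy processing happens here, off the "audio thread".
        while True:
            self.outbox.put(self.process(self.inbox.get()))

    def push_pull(self, block):
        # Audio-thread side: return the previously processed block and
        # queue the new one -- exactly one block of added latency.
        out = self.outbox.get() if self.primed else [0.0] * self.block_size
        self.inbox.put(block)
        self.primed = True
        return out

# First call returns silence; afterwards each call returns the
# processed version of the previous block.
engine = BufferedDSP(lambda blk: [x * 2.0 for x in blk], block_size=2)
```

A real-time implementation would use lock-free ring buffers rather than blocking queues, since `Queue.get` may stall the audio callback.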
*Micro Tuning Support (Scala)*
Loopino now supports microtonal tuning via Scala.
* Built-in factory tuning scales included
* Drag & drop support for Scala .scl files
* Drag & drop support for Scala .kbm key mapping files
* Flexible keyboard-to-scale mapping for alternative tuning systems
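For readers unfamiliar with the Scala scale format: a pitch line in an .scl file is read as cents if it contains a decimal point, and as an integer ratio otherwise. A minimal sketch of that rule (the helper name is invented for illustration, not Loopino's API):

```python
def parse_scl_pitch(line):
    """Convert one Scala .scl pitch line into a frequency ratio.

    Per the Scala format: a value containing a period is in cents;
    otherwise it is a ratio such as '3/2' or a bare integer like '2'
    (meaning 2/1).
    """
    token = line.split()[0]  # Scala ignores text after the pitch value
    if "." in token:
        return 2.0 ** (float(token) / 1200.0)  # cents -> ratio
    if "/" in token:
        num, den = token.split("/")
        return int(num) / int(den)
    return float(token)

# A just fifth and its 12-TET neighbour:
just_fifth = parse_scl_pitch("3/2")       # 1.5
tempered_fifth = parse_scl_pitch("700.0")
```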
Notes
This update improves real-time performance and expands Loopino’s musical
language beyond standard equal temperament, making it suitable for
high-load sound design and microtonal composition alike.
Project Page:
https://github.com/brummer10/Loopino
Release Page:
https://github.com/brummer10/Loopino/releases/tag/v0.9.5
This release focuses on workflow improvements, clearer signal routing,
and new creative options.
### New Features
- **Drag & Drop Processing Chains**
- Filter and Machine chains can now be reordered via drag and drop
- Machine chain changes trigger a full key cache rebuild
- Filter chain changes apply immediately in real time
- **Reverse Sample Playback**
- Samples can now be played in reverse
- Fully integrated into the existing voice and filter pipeline
- **New Machine: Vintage (TimeMachine)**
- A new offline machine focused on temporal character and coloration
- Operates during key cache generation
- Designed for non-destructive experimentation with timing and feel
---
### Architecture & Workflow
- Clear separation between **offline machines** and **real-time filters**
- Deterministic signal flow from sample → machine → key cache → voices →
filters
- Improved internal consistency and predictability
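As a rough illustration of that offline/real-time split (invented names, not Loopino's internals): machines run once, when the key cache is rebuilt, while filters run on every block a voice renders.

```python
def build_key_cache(sample, machines):
    # Offline stage: machines (e.g. reverse, Vintage) run once, when
    # the key cache is (re)built -- not in the audio callback.
    buf = list(sample)
    for machine in machines:
        buf = machine(buf)
    return buf

def render_block(key_cache, filters):
    # Real-time stage: filters run on every block pulled by a voice.
    buf = list(key_cache)
    for f in filters:
        buf = f(buf)
    return buf

# Example: a "reverse" machine followed by a gain filter.
cache = build_key_cache([1.0, 2.0, 3.0], [lambda b: b[::-1]])
out = render_block(cache, [lambda b: [x * 0.5 for x in b]])
```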
---
### Documentation
- Added a new [**Loopino
Wiki**](https://github.com/brummer10/Loopino/wiki/User-Documentation)
- User-facing documentation covering:
- Sample loading and destructive trimming
- Machines vs Filters
- Signal flow and processing stages
- Documentation aims to be precise, technical, and transparent
---
### Notes
- Existing projects remain compatible
---
Project Page:
https://github.com/brummer10/Loopino
Release Page:
https://github.com/brummer10/Loopino/releases/tag/v0.9.0
As always, feedback is welcome.
Hi all,
just a quick "addendum" to the earlier announcement here:
The organizing team of this year's Linux Audio Conference (see below) has also
given this conference a "theme" that should spark some ideas for papers or
discussion, and has now added the following blurb to the home page
(see https://lac.linuxaudio.org/2026/) -
###
Conference theme:
Large language models and Free/Libre/open source software.
"I am not comfortable contributing to a project that extensively uses AI."
from a post found in a pull request to a GitHub repository.
This year's LAC theme explores questions relating to the (sometimes uneasy)
relationships that may emerge between LLMs and FLOSS. This of course has many
dimensions, from the purely technical, through to the practical, and finally
to the ethical.
As code hosting platforms such as GitHub roll out support for third-party and
their own LLM agents, this is an area that needs vigorous discussion and assessment.
It is probably not a good idea to ignore it, as it is unlikely to go away.
It may be possible to formulate a position from the LAC community, which we might
carry forward for further consideration in other forums.
Even if no such shared position can be reached, it is still important that we put
forward our ideas in relation to this issue. Therefore, it makes good sense to invite
contributions to this theme and make it a central point of discussion at the LAC.
The LAC2026 organising team.
###
Greetings,
Frank
On Fri, 9 Jan 2026 17:33:01 +0100
Frank Neumann <beachnase(a)web.de> wrote:
> Hi all,
>
> just wanted to share the good news here that in 2026 the LAC (Linux Audio Conference) is
> taking place again, on June 18-20 (Thu-Sat), this time coming back to Maynooth (Ireland)
> where it was already hosted in 2011.
>
> Victor Lazzarini, conference organizer, asked me to help in spreading the word about
> it, so here we go.
>
> All details on music&paper submission process, deadlines, travel and accomodation etc can
> be found at the conference web site: https://lac26.mucs.club/
>
> Greetings, and please feel free to spread the word wherever possible,
> Frank
Hello everybody,
I've made some small changes to the LAD and LAA mailing lists that will
hopefully reduce the number of bounces that have occurred lately:
- The unsubscribe footer has been removed.
- The [LAD] and [LAA] prefixes that were added to the subject have been
removed.
- Converting HTML mail to plain text has been disabled.
With these changes, messages sent to the lists should go through
unaltered, which should make DMARC/DKIM happier. Apologies beforehand for
any inconvenience these changes might cause. If any problems arise from
these changes, let me know; you can also contact me personally, if you
wish, through either this mail address or jeremy(a)linuxaudio.org.
Best regards,
Jeremy Jongepier
linuxaudio.org sysadmin
NeuralRack is a neural model and impulse response file loader for
Linux/Windows, available as a standalone application and in the CLAP,
LV2, and VST2 plugin formats.
It supports *.nam files <https://www.tone3000.com/search?tags=103> as
well as *.json or *.aidax files <https://www.tone3000.com/search?tags=23562>
via the NeuralAudio <https://github.com/mikeoliphant/NeuralAudio>
engine.
For impulse response convolution it uses FFTConvolver
<https://github.com/HiFi-LoFi/FFTConvolver>.
Resampling is done with libzita-resampler
<https://kokkinizita.linuxaudio.org/linuxaudio/zita-resampler/resampler.html>.
New in this release:
* Implemented an option to move (drag and drop) the EQ
NeuralRack allows loading up to two model files and running them in
series. Input/output levels can be controlled separately for each model.
It features a noise gate, and a 6-band EQ can be enabled for tone
shaping.
Additionally, a separate impulse response file can be loaded for each
output channel (stereo), or two IR files can be mixed to a two-channel
mono output.
NeuralRack provides a buffered mode which introduces one frame of
latency when enabled.
It can move one neural model, or the complete processing chain, into a
background thread, reducing CPU load when needed.
The resulting latency is reported to the host so that it can be
compensated.
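The serial two-model chain with per-model input/output control can be pictured as a simple fold over stages (a hypothetical sketch for illustration, not NeuralRack's API):

```python
def run_chain(signal, stages):
    """Run samples through serial stages, each described by an
    (in_gain, model, out_gain) triple; names are invented here."""
    out = signal
    for in_gain, model, out_gain in stages:
        out = [out_gain * y for y in model([in_gain * x for x in out])]
    return out

# Two stages: a toy "model" adding a DC offset, then a pass-through.
stages = [(0.5, lambda xs: [x + 1.0 for x in xs], 2.0),
          (1.0, lambda xs: xs, 1.0)]
result = run_chain([2.0, 4.0], stages)
```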
ProjectPage:
https://github.com/brummer10/NeuralRack
Release Page:
https://github.com/brummer10/NeuralRack/releases/tag/v0.3.0