Hi all,
I’d like to share some news about audio plugins originally written by
the brilliant developer Oto Spál, whose work I’ve been a big fan of for
a long time.
## Oto’s original Senderspike plugins:
https://senderspike.wordpress.com/
## Original non-GUI plugins (Linux VST2 builds):
https://github.com/xjjx/senderspike_plugins/releases/
## JUCE-based ports (work in progress)
I’ve started porting these plugins to JUCE. The ports are available in
VST2, VST3, and LV2 formats.
This is still under active development, but the plugins are already
usable and ready for testing.
Current status:
* SN06 is complete – OpAmp has a fully working GUI editor
* The others are fully functional at the DSP level, but their GUI
editors are still work in progress
* Builds are produced via GitHub Actions for both Linux and Windows
Latest CI build:
https://github.com/xjjx/senderspike_plugins/actions/runs/21938203666
## Notable differences from the original versions
* Originally, parameters were exposed to hosts only in normalized form
(0–1). I’ve added meaningful parameter display (e.g. VST2
effGetParamName support) where possible.
* Knob behavior is slightly different — I wasn’t able to perfectly
recreate the original behavior, but it’s close enough.
* Product codes have been changed to allow side-by-side testing with
older versions.
* SN06 contains resizable GUI code, but it's currently disabled.
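To illustrate what a meaningful parameter display means compared to a bare normalized value, here is a minimal sketch of mapping a host-facing 0–1 value onto a human-readable range. The dB range and function names are illustrative, not taken from the actual plugins.

```python
# Sketch: turning a normalized 0-1 host value into a readable display
# string, e.g. for a gain knob spanning -60 dB to +12 dB.
# The range values here are illustrative assumptions.

def normalized_to_db(value: float, lo: float = -60.0, hi: float = 12.0) -> float:
    """Linearly map a normalized 0-1 value onto a dB range."""
    return lo + value * (hi - lo)

def display_gain(value: float) -> str:
    """Format the mapped value the way a host would show it."""
    return f"{normalized_to_db(value):+.1f} dB"
```

With this, a host that only sees 0.5 can still show the user "-24.0 dB" instead of a unitless number.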
## Repository branches
* master – original plugins, recompiled for Linux
* juce – current JUCE-based development branch
* vstgui – unsuccessful attempt to upgrade to VSTGUI v4
Additionally, the repository includes a small script to fix the VST2
cache in Qtractor.
https://github.com/xjjx/senderspike_plugins/blob/juce/tools/qtractor_fix_vs…
Feedback, testing, and bug reports are very welcome.
Best regards,
Pawel / Xj
* New feature: Kit Mode now has a crossfade Volume option as well as Velocity.
* New feature: Yoshimi now recognises old and new versions of MXML and FLTK.
* New feature: Yoshimi can run on the wayland windowing system without issues.
* Various code refinements.
### Building
Full build instructions are in [INSTALL](INSTALL).
### Source
Yoshimi source code is available from either:
* Sourceforge: https://sourceforge.net/projects/yoshimi
* Github: https://github.com/Yoshimi/yoshimi
### Community
Our list archive is at: https://www.freelists.org/archive/yoshimi
To post, email to: yoshimi(a)freelists.org
### License
GPLv2+ see [COPYING](COPYING) for license details.
--
Will J Godfrey
This release introduces major improvements to DSP performance and tuning
flexibility.
New Features
*Multithreaded Audio Engine*
Loopino now supports multithreaded audio processing to reduce load on
the main audio thread.
* Audio processing can be buffered as half-frame or full-frame blocks
* Buffered DSP blocks are processed in a worker thread
* Significantly reduces DSP load and xruns in the main audio thread
* Designed to improve stability under high polyphony and complex
modulation scenarios
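The block-buffering scheme described above can be sketched in a few lines: the audio callback hands each block to a worker and returns the block the worker finished previously, trading one block of latency for less work on the audio thread. This is a simplified illustration, not Loopino's actual engine code, and all names are made up.

```python
import queue
import threading

def heavy_dsp(block):
    # Stand-in for expensive per-block processing
    return [x * 0.5 for x in block]

inbox, outbox = queue.Queue(), queue.Queue()

def worker():
    # Worker thread: process blocks off the audio thread, in order
    while True:
        block = inbox.get()
        if block is None:
            break
        outbox.put(heavy_dsp(block))

threading.Thread(target=worker, daemon=True).start()

def audio_callback(block, state={"primed": False}):
    inbox.put(block)
    if not state["primed"]:           # first call: nothing processed yet
        state["primed"] = True
        return [0.0] * len(block)     # one block of silence = the latency
    return outbox.get()               # result of the previously submitted block
```

A real implementation would use lock-free queues and non-blocking reads in the callback; the structure, however, is the same one-block pipeline.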
*Micro Tuning Support (Scala)*
Loopino now supports microtonal tuning via Scala.
* Built-in factory tuning scales included
* Drag & drop support for Scala .scl files
* Drag & drop support for Scala .kbm key mapping files
* Flexible keyboard-to-scale mapping for alternative tuning systems
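For readers unfamiliar with the Scala format: an .scl file lists comment lines starting with `!`, a description line, a note count, and then one pitch per line, where values containing a `.` are cents and everything else is a ratio like `3/2`. Below is a simplified reader following that documented layout; it is not Loopino's own code.

```python
# Minimal Scala .scl reader (simplified: assumes a non-empty
# description line and well-formed input).

def parse_scl(text: str) -> list[float]:
    """Return the scale degrees as frequency ratios above the base note."""
    lines = [ln.strip() for ln in text.splitlines()
             if ln.strip() and not ln.startswith("!")]
    count = int(lines[1])              # lines[0] is the description
    ratios = []
    for entry in lines[2:2 + count]:
        value = entry.split()[0]       # ignore trailing comments
        if "." in value:               # cents value
            ratios.append(2.0 ** (float(value) / 1200.0))
        elif "/" in value:             # ratio n/d
            n, d = value.split("/")
            ratios.append(int(n) / int(d))
        else:                          # bare integer ratio
            ratios.append(float(value))
    return ratios

# A just-intonation fifth and octave:
example = "! demo.scl\n Demo scale\n 2\n 3/2\n 2/1\n"
```

Multiplying a base frequency by the returned ratios yields the pitches of each scale degree; a .kbm file then decides which MIDI keys map to which degrees.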
Notes
This update improves real-time performance and expands Loopino’s musical
language beyond standard equal temperament, making it suitable for
high-load sound design and microtonal composition alike.
Project Page:
https://github.com/brummer10/Loopino
Release Page:
https://github.com/brummer10/Loopino/releases/tag/v0.9.5
This release focuses on workflow improvements, clearer signal routing,
and new creative options.
### New Features
- **Drag & Drop Processing Chains**
- Filter and Machine chains can now be reordered via drag and drop
- Machine chain changes trigger a full key cache rebuild
- Filter chain changes apply immediately in real time
- **Reverse Sample Playback**
- Samples can now be played in reverse
- Fully integrated into the existing voice and filter pipeline
- **New Machine: Vintage (TimeMachine)**
- A new offline machine focused on temporal character and coloration
- Operates during key cache generation
- Designed for non-destructive experimentation with timing and feel
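The reason reverse playback can slot into the existing voice and filter pipeline is that it reduces to an index mapping over the same sample buffer the forward path reads. A minimal sketch of that idea, with illustrative names only:

```python
# Reverse playback as an index mapping over an unchanged buffer.

def read_sample(buffer, position, reverse=False):
    """Fetch one frame, mirroring the position when reverse is enabled."""
    if reverse:
        position = len(buffer) - 1 - position
    return buffer[position]

def render(buffer, reverse=False):
    # Everything downstream (voices, filters) sees ordinary frames
    return [read_sample(buffer, i, reverse) for i in range(len(buffer))]
```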
---
### Architecture & Workflow
- Clear separation between **offline machines** and **real-time filters**
- Deterministic signal flow from sample → machine → key cache → voices →
filters
- Improved internal consistency and predictability
---
### Documentation
- Added a new [**Loopino
Wiki**](https://github.com/brummer10/Loopino/wiki/User-Documentation)
- User-facing documentation covering:
- Sample loading and destructive trimming
- Machines vs Filters
- Signal flow and processing stages
- Documentation aims to be precise, technical, and transparent
---
### Notes
- Existing projects remain compatible
---
Project Page:
https://github.com/brummer10/Loopino
Release Page:
https://github.com/brummer10/Loopino/releases/tag/v0.9.0
As always, feedback is welcome.
Hello all,
First release of VoyeSeq is out!
VoyeSeq mimics the Voyetra Sequencer Plus Gold workflow to create patterns,
which can be triggered using MIDI notes from the host.
- *Pattern Bank:* 128 patterns (0-127).
- *Keyboard-Driven Workflow:* Optimized for speed without heavy mouse
reliance.
- *Cross-Platform:* Built with DISTRHO DPF to run as JACK standalone,
LV2, VST3, or CLAP.
- *Start/stop with spacebar:* transport commands are sent from VoyeSeq
to the host via OSC (tested in Ardour).
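As background on the OSC transport control mentioned above: an argument-less OSC message is just a null-padded address string followed by a `,` type-tag string, sent over UDP. The sketch below uses Ardour's documented `/transport_play` and `/transport_stop` paths and its default OSC port 3819; it is an illustration of the mechanism, not VoyeSeq's actual code.

```python
import socket

def osc_packet(address: str) -> bytes:
    """Encode an argument-less OSC message (address + ',' type tag)."""
    def pad(b: bytes) -> bytes:
        # Null-terminate and pad to a multiple of 4 bytes, per the OSC spec
        return b + b"\x00" * (4 - len(b) % 4)
    return pad(address.encode()) + pad(b",")

def send_transport(play: bool, host="127.0.0.1", port=3819):
    """Send a play or stop command to an OSC-enabled host."""
    path = "/transport_play" if play else "/transport_stop"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(osc_packet(path), (host, port))
```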
Find it here: https://github.com/yellius/VoyeSeq
Enjoy and let me know what you think,
Cheers,
Jelle.
NeuralRack is a Neural Model and Impulse Response File loader for
Linux/Windows, available as a standalone application and as a CLAP, LV2,
or VST2 plugin.
It supports *.nam files <https://www.tone3000.com/search?tags=103> as
well as *.json and *.aidax files
<https://www.tone3000.com/search?tags=23562> by using the NeuralAudio
<https://github.com/mikeoliphant/NeuralAudio> engine.
For Impulse Response convolution it uses FFTConvolver
<https://github.com/HiFi-LoFi/FFTConvolver>
Resampling is done by Libzita-resampler
<https://kokkinizita.linuxaudio.org/linuxaudio/zita-resampler/resampler.html>
New in this release:
* Implemented an option to move the EQ around via drag and drop
NeuralRack allows loading up to two model files and running them in
series. The input/output levels can be controlled separately for each
model. It features a noise gate, and a 6-band EQ can be enabled for
tone shaping.
Additionally, a separate Impulse Response file can be loaded for each
output channel (stereo), or two IR files can be mixed to a two-channel
mono output.
NeuralRack provides a buffered mode which introduces one frame of
latency when enabled. It can move one Neural Model, or the complete
processing chain, into a background thread, reducing the CPU load when
needed. The resulting latency is reported to the host so that it can be
compensated.
ProjectPage:
https://github.com/brummer10/NeuralRack
Release Page:
https://github.com/brummer10/NeuralRack/releases/tag/v0.3.0
SpectMorph 1.0.0-beta3 has been released.
This version contains a new pitch detection algorithm for the instrument
editor and it can read mp3 files. Compared to 1.0.0-beta2 there are
mostly smaller fixes, but since some of them address critical problems
we strongly recommend updating to beta3 if you use a previous beta.
There is a tutorial on YouTube for the new features in the 1.0.0 series:
- https://youtu.be/mwVUsuOTcN0
Feedback for any issues you might experience with the beta version is
appreciated.
What is SpectMorph?
-------------------
SpectMorph is a free software project which makes it possible to analyze
samples of musical instruments, and to combine them (morphing). It can
be used to
construct hybrid sounds, for instance a sound between a trumpet and a
flute; or smooth transitions, for instance a sound that starts as a
trumpet and then gradually changes to a flute.
SpectMorph ships with many ready-to-use instruments which can be
combined using morphing.
SpectMorph is implemented in C++ and licensed under the GNU LGPL version
2.1 or later.
Integrating SpectMorph into your Work
-------------------------------------
SpectMorph is currently available for Linux, Windows and macOS (Intel
and Apple Silicon), with CLAP/LV2/VST plugins. Under Linux, there is
also JACK Support.
Links:
------
Website: https://www.spectmorph.org
Download: https://www.spectmorph.org/downloads
There are many audio demos on the website, which demonstrate morphing
between instruments.
List of Changes in SpectMorph 1.0.0-beta3:
------------------------------------------
## SpectMorph 1.0.0 beta3
#### New Features
* Implement pitch detection algorithm for instrument editor and smenc (#31).
* Support mp3 format for static plugins and builds with new libsndfile.
#### Instrument Updates
* Trumpet, French Horn: ping pong loop, better tuning
* Bass Trombone: ping pong loop, volume normalization, tuning
* Alto Saxophone: ping pong loop
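For context on the ping-pong loops mentioned in the instrument updates: instead of jumping back to the loop start, playback bounces between the loop boundaries, so the read direction reverses rather than jumping, which avoids a discontinuity at the loop point. A small index-mapping sketch of the idea (illustrative names, not SpectMorph's code):

```python
def ping_pong_index(pos, loop_start, loop_end):
    """Map a monotonically increasing position into a ping-pong loop."""
    if pos < loop_end:
        return pos                      # still in the one-shot section
    span = loop_end - loop_start
    phase = (pos - loop_end) % (2 * span)
    if phase < span:
        return loop_end - 1 - phase     # travelling backwards
    return loop_start + (phase - span)  # travelling forwards again
```

This variant repeats the boundary samples when reversing direction; implementations differ on that detail.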
#### Reduce Memory Usage after Unload
* Avoid global constructors / destructors.
* Use our own TextRenderer instead of cairo to be able to free font cache.
* Ship necessary fonts on macOS for TextRenderer.
* Free various tables and other bits of static data when unloading.
#### Fixes
* Don't crash on invalid utf8 during conversion (use replacement char).
* Fix crash caused by multiple threads modifying control events.
* Fix CLAP's get factory implementation (#30).
* Various ASAN / UBSAN fixes.
* Fix RTSAN issue: make FFT realtime safe.
* Avoid allocating memory in RT thread if events need to be sorted.
* Fix (unlikely) LineEdit crash.
* Validate input for smenc -m and other utils where an integer is
expected (#31).
* Fix smooth tune performance for long input files.
* Build system updates.
* Convert manpages to markdown.
* Documentation updates.
--
Stefan Westerfeld, http://space.twc.de/~stefan