Does anyone know of software that can generate MIDI messages from a touchpad?
The idea would be to send CCs to a sequencer or soft synth, but being able to
send it to an external hardware device would also be very useful.
--
Will J Godfrey
https://willgodfrey.bandcamp.com/
http://yoshimi.github.io
Say you have a poem and I have a tune.
Exchange them and we can both have a poem, a tune, and a song.
Hi!
Not sure if I'm in the right place, but I guess the LAU people are
trained to find solutions to extraordinary problems…
I have a vision 8-) :
I'm sitting at FOH, driving a theater show. I have - let's say - 3
projectors available: one behind me covering the stage from the front,
two behind the stage doing rear projection on the right and the left.
On every projector, there is a Raspberry Pi connected via HDMI, waiting
to send videos to the projector. All Raspberries are connected to the LAN,
just like my Linux laptop from which the show is controlled.
I know I can achieve something similar with QLC+: install the app on all
the computers involved, set up 3 different Art-Net channels, configure one
or more video functions and make each one accessible through a DMX channel.
For that, the videos to be presented have to be on the Raspberries; I can
copy them to the devices and configure the triggers on each before the
show.
But my goals are different: keep it simple, keep it fast (in terms of
latency, but also in terms of using light and fast apps, and finally in
terms of not running through the venue to make some last-minute
configurations) and let only one machine be the one that has to be
configured - the main laptop at FOH.
I'm not so far away from that - the tools and the technology seem to be
there already. With ffmpeg, for example, it's possible to stream video
from point to point in realtime:
[code]ffmpeg -i [input-video] -f [output container format]
udp://[receiver's network address]:[port][/code]
(There are options to speed things up and/or relieve the CPU, but take it
as an easy example.) On the other side of the chain, ffplay or mpv can
catch the stream and decode it in no time.
[code]mpv udp://[transmitter's network address]:[port][/code]
(Again: Optimizations left aside)
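For illustration, an untested sketch of such a pair - file name, addresses
and port are placeholders, and the exact listen URL may need adjusting for
your network:
[code]
# sender (FOH laptop): low-latency H.264 in an MPEG-TS container over UDP
ffmpeg -re -i clip.mp4 -c:v libx264 -preset ultrafast -tune zerolatency \
  -f mpegts udp://192.168.1.21:5000

# receiver (Raspberry Pi): listen on the port and play fullscreen
mpv --fs --profile=low-latency udp://0.0.0.0:5000
[/code]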
I tried this myself in a LAN between a Ryzen 5 2400G desktop and a 10 year
old Thinkpad and achieved latencies under 1s - which is good enough,
even for professional use. Once you have found the best options for your
setup you can use them over and over again with different video inputs
and destinations. Best of all: being a command line, it can be integrated
into QLC+ or Linux Show Player (LiSP). And with ffmpeg I can split (tee)
the video from the audio stream if I like, and keep the audio at FOH.
(Or send it back from one of the Raspberries to FOH via NetJACK or
comparable. Keeping video and audio in sync will be another challenge, I
see…)
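A rough sketch of that split (untested; assumes ffmpeg was built with ALSA
output, file name and address are placeholders):
[code]
# video only goes to the Pi, audio stays on the local sound card at FOH
ffmpeg -re -i clip.mp4 \
  -map 0:v -c:v libx264 -preset ultrafast -tune zerolatency \
    -f mpegts udp://192.168.1.21:5000 \
  -map 0:a -f alsa default
[/code]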
But there is one downside: once the receiver is already playing the
stream, there is no big latency between sender and receiver (if the
options are chosen well, of course). Catching the stream in the first
place, however, can take several seconds. So what I need is a continuous
stream onto which I can send my videos. OBS can do this, but it's another
resource-intensive app and - as far as I know - I cannot send commands to
it from QLC+ or LiSP. (I want ONE cue player for everything, you know…!)
Also, I *guess* OBS can't handle more than one stream at once (sending to
the different RPi receivers) - but with ffmpeg commands that's easy…!
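What I mean by "easy with ffmpeg commands" - again an untested sketch, one
process per Pi, file names and addresses are placeholders:
[code]
# three independent streams, one per Raspberry Pi, started in parallel
ffmpeg -re -i front.mp4      -c:v libx264 -preset ultrafast -tune zerolatency -f mpegts udp://192.168.1.21:5000 &
ffmpeg -re -i rear-left.mp4  -c:v libx264 -preset ultrafast -tune zerolatency -f mpegts udp://192.168.1.22:5000 &
ffmpeg -re -i rear-right.mp4 -c:v libx264 -preset ultrafast -tune zerolatency -f mpegts udp://192.168.1.23:5000 &
[/code]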
I had the idea of sending a continuous stream by screencasting a virtual
desktop and configuring mpv to play on it in fullscreen on demand. But I
guess this won't be so handy with more than one projector.
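A sketch of that screencast idea (untested; assumes an X11 session -
display number, resolution and address are placeholders):
[code]
# stream one X display continuously; whatever is shown on that desktop
# (an mpv window in fullscreen, a still image, black) ends up on the projector
ffmpeg -f x11grab -framerate 25 -video_size 1920x1080 -i :0.0 \
  -c:v libx264 -preset ultrafast -tune zerolatency -f mpegts udp://192.168.1.21:5000
[/code]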
Any ideas on how to reach my goals? (You can suggest apps other than
ffmpeg or mpv, of course!)
(Disclaimer: I have also posted this to the Linux Audio Users mailing
list and will try to send it to a place where ffmpeg nerds are common. I
will let you know if I get good ideas from the other sources…)
Greets!
Mitsch
Hi all,
The first pre-release of SoundTracker v1.0.5 is done. The main feature
of the 1.0.5 release is full support for stereo samples (MODPlug XM / XI
extension), including conversion between mono and stereo samples and
some specific editing functions. This pre-release also includes some
less important updates and fixes.
You can find more new features in the NEWS file.
SoundTracker download page:
https://sourceforge.net/projects/soundtracker/files/
Regards,
Yury.
SpectMorph 0.6.1 has been released.
The main changes are:
- Instrument editor improvements
- Support for multiple banks for WavSources
- New standard instruments
- The code is now hard RT capable
- UI fixes for macOS
What is SpectMorph?
-------------------
SpectMorph is a free software project which allows you to analyze samples
of musical instruments and to combine them (morphing). It can be used to
construct hybrid sounds, for instance a sound between a trumpet and a
flute; or smooth transitions, for instance a sound that starts as a
trumpet and then gradually changes to a flute.
SpectMorph ships with many ready-to-use instruments which can be
combined using morphing.
SpectMorph is implemented in C++ and licensed under the GNU LGPL version
2.1 or later.
Integrating SpectMorph into your Work
-------------------------------------
SpectMorph is currently available for Linux, Windows and macOS (Intel
and Apple Silicon), with CLAP/LV2/VST plugins. Under Linux, there is
also JACK Support.
Links:
------
Website: http://www.spectmorph.org
Download: http://www.spectmorph.org/downloads
There are many audio demos on the website, which demonstrate morphing
between instruments.
List of Changes in SpectMorph 0.6.1:
------------------------------------
#### Instrument Editor
* Support click & drag sample to scroll & zoom (#22).
* Support stereo to mono conversion when loading stereo samples (#14).
* Add manual volume editing / normalization.
* Implement automatic selection triggered by MIDI.
#### New instruments
* Bass Flute
* Soprano Saxophone
* Clarinet, Bass Clarinet
* Tenor Trombone
* Viola, Double Bass
* Make samples and meta information for standard instruments available
on GitHub.
#### Improvements
* Support multiple banks for WavSources / instrument editor.
* Avoid allocations in DSP thread to be hard RT capable.
* Allow overriding analysis parameter for frame stepping to get higher
time resolution.
#### Fixes
* Make UI work properly in Ableton Live (and possibly other hosts) on macOS.
* Fix UI scaling problem on M1 macOS builds.
* Fix crash if instrument editor is closed without any samples.
* Fix cases of undefined behaviour.
* Fix timing problems for long notes, reproduce long WavSource notes
with exact tempo.
* Fix use-after-free for outdated control events.
* Fix freetype related memory leak.
--
Stefan Westerfeld, http://space.twc.de/~stefan