Does anyone know of software that can generate MIDI messages from a touchpad?
The idea would be to send CCs to a sequencer or soft synth, but being able to
send it to an external hardware device would also be very useful.
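To be clear about what I'm after, here is a rough, untested sketch of the kind
of thing such a tool would do on Linux (python-evdev and mido are assumed to be
installed; the device path and CC numbers are placeholders): read absolute X/Y
from the touchpad and turn it into two CCs on a virtual MIDI port.

import mido
from evdev import InputDevice, ecodes

pad = InputDevice("/dev/input/event5")                # your touchpad's event node
out = mido.open_output("Touchpad CC", virtual=True)   # virtual ALSA/JACK port

x_info = pad.absinfo(ecodes.ABS_X)                    # axis ranges for rescaling
y_info = pad.absinfo(ecodes.ABS_Y)

def to_cc(value, info):
    # rescale the raw axis value to the 0-127 MIDI CC range
    return max(0, min(127, (value - info.min) * 127 // (info.max - info.min)))

for event in pad.read_loop():
    if event.type != ecodes.EV_ABS:
        continue
    if event.code == ecodes.ABS_X:
        out.send(mido.Message("control_change", control=1,
                              value=to_cc(event.value, x_info)))
    elif event.code == ecodes.ABS_Y:
        out.send(mido.Message("control_change", control=2,
                              value=to_cc(event.value, y_info)))

# Note: many touchpads report multitouch axes (ABS_MT_POSITION_X/Y) instead;
# an existing, ready-made tool would obviously be preferable.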
--
Will J Godfrey
https://willgodfrey.bandcamp.com/
http://yoshimi.github.io
Say you have a poem and I have a tune.
Exchange them and we can both have a poem, a tune, and a song.
This is Steinway_IMIS soundfont, version 2.2.
ftp://musix.ourproject.org/pub/musix/sf2/Steinway_IMIS2.2
This version fixes the issue with loops. I hope this is the right one
and there are no remaining major bugs.
Marcos is a little busy right now, so he asked me to make this fix. He
is thinking of making other improvements, so expect more updates soon.
Does anybody out here in LAU land have experience with PISound?
https://www.blokas.io/pisound/
I have just bought one and am having quite severe teething problems with it.
It keeps freezing for ~45 seconds when running X and I cannot get it to
use the full display.
cheers
Worik
--
If not me then who? If not now then when? If not here then where?
So, here I stand, I can do no other
root(a)worik.org 021-1680650, (03) 4821804 Aotearoa (New Zealand)
Dear list,
I recently bought a LinnStrument from Roger Linn Design:
http://www.rogerlinndesign.com/linnstrument.html
It is a great isomorphic midi-controller, and as such it is immediately
recognized on Linux.
The distinguishing feature of the LinnStrument is that it senses 3
degrees of freedom on each note: x-direction, y-direction and
z-direction (pressure). The x-direction is mapped to pitch-bend, and
y-direction to CC74.
A cool feature is the "slide", where the pitch-bend is used to slide
between all notes in a row.
To allow individual pitch and CC74 values for each note, it sends each
note on a separate midi-channel ("MPE"):
http://www.rogerlinndesign.com/implementing-mpe.html
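For those who haven't met MPE before, here is a minimal sketch of what such a
controller sends for a single note, using the mido Python library (the output
port name is a placeholder): the note, its pitch bend (x) and its CC74 (y) all
travel on that note's own member channel.

import mido  # assumed installed, with the python-rtmidi backend

out = mido.open_output("Synth MIDI In")        # placeholder port name
ch = 1                                         # first MPE member channel (0-based)

out.send(mido.Message("note_on", channel=ch, note=60, velocity=100))
out.send(mido.Message("pitchwheel", channel=ch, pitch=1024))                 # x / slide
out.send(mido.Message("control_change", channel=ch, control=74, value=90))   # y
out.send(mido.Message("note_off", channel=ch, note=60, velocity=0))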
Bitwig has added support for this, and there are 20 presets in version
1.3.11 that use it (tag: linnstrument). The LinnStrument controller is not
recognized automatically on Linux in version 1.3.11, but it can be configured
manually, and then it works fine. Note that both MIDI-in and MIDI-out have to
be configured, otherwise there is no sound! It should look like this:
https://ibin.co/2msBJVgpKtf9.png
Now I would also like to use it with the free Linux synths.
Here's what I have been able to make work so far.
Synthv1:
MPE works reasonably well: I can play polyphonically in MPE mode, but it
tends to miss the "note off"s.
I can get the slide to work by setting
<param index="36" name="DEF1_PITCHBEND">2</param>
<param index="78" name="DEF2_PITCHBEND">2</param>
in a preset.
ZynAddSubFX:
I cannot get MPE to work.
Sending only on one channel and setting PWheelB.Rng to 2400 cents, I can get
the sliding to work, but only when playing with one finger.
If I enable MPE on the LinnStrument there is only an occasional sound, when it
happens to send on the channel that Zyn is listening on.
I'd love to hear if other LinnStrument users have been able to do more
with any of the free synths on Linux.
All the best,
Thomas
Hey hey,
this is a very energetic, driven rock/prog track.
https://youtu.be/H0qAu1U9U3o
OGG version:
https://www.dropbox.com/s/z19uw7ksqlf0nrp/going_rascal.ogg
I had this theme lying around for ages. It seemed too good to just get lost at
the back of the sofa. So I rerecorded it from scratch.
In terms of sounds it's mostly an exercise in analogue, subtractive synthesis
and patching on my Behringer Neutron. The drums are DrumGizmo with the
Aasimonster kit, which I love a lot. It's more difficult to edit, but it brings
a lot more potential for shaping. The drums especially took a lot of shaping
and processing, from simple filtering, EQ'ing and compression to creative
processing. :)
Hope you enjoy it, have a great Sunday!
Jeanette
--
* Website: http://juliencoder.de - for summer is a state of sound
* Youtube: https://www.youtube.com/channel/UCMS4rfGrTwz8W7jhC1Jnv7g
* Audiobombs: https://www.audiobombs.com/users/jeanette_c
* GitHub: https://github.com/jeanette-c
Just hang around and you'll see,
There's nowhere I'd rather be <3
(Britney Spears)
Hi!
Don't know if this is actually the academic or scientifically correct
way to do it, but I got good-sounding results - in fact, I managed to
insert mono-recorded voices into an ambisonic recording with other
people and it didn't sound that weird…
But I'm always interested in getting things closer to perfect, so I hope
many of the ambisonics pros will join this thread to discuss the topic.
This is the software and hardware I used:
* Røde NT-SF1 1st order Ambisonics microphone
* Sound Devices MixPre-6 recording device
* Ubuntu Studio 22.04
* Carla 2.4.2
* Røde Soundfield PlugIn (Windows 64bit VST3)
* Wine
* Ardour 6.9
* Audacity 2.4.2
* LSP-PlugIn Suite (LV2)
* IEM Ambisonics PlugIn Suite (LinuxVST)
And here is the whole procedure I used…
I made an ambisonics recording with people in a room. After that I used a
paper bag to create an impulse, about 50 cm away from the NT-SF1. (Just
clapping my hands was too quiet in my opinion, and I didn't want to use
balloons because of the plastic waste. Paper bags seem to be a relatively
environmentally friendly and cheap alternative to me.) I retried as often as
needed to get a good-sounding and undistorted recording of the impulse and
its response.
Back home, I copied the A-format recordings to my computer, fired up Ardour,
created a 4-channel track and inserted the 16-channel version of the Carla
patchbay into that channel strip. I had to deactivate the panner in every
channel strip to make Ardour "ambisonics compatible" - yes, also in the
"Master" channel, which must be blown up to 4 channels, too. I imported the
tracks that I wanted to be converted.
The only tool in my procedure that wasn't free and open was Røde's Soundfield
plugin, so I had to use some tricks to get a manufacturer-approved A-to-B
conversion of my recordings. FalkTX made Carla Windows-VST-capable, but it's
not bulletproof… So I got the Soundfield exe file, installed it with Wine and
finally got a DLL I could drag and drop into the Carla patchbay's GUI. There
it is! Chances are Ardour crashes, so I made sure to save after every step.
As the input format I had to choose "NT-SF1"; for the output the best choice
is "B-Format (Ambix)", because Ambix is the current standard and the IEM
plugins only handle Ambix. Of course, I had to connect Carla's output
to the input of Soundfield and vice versa.
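(For the technically curious: stripped of the calibration filters the
Soundfield plugin applies, A-to-B conversion for a tetrahedral mic is just a
small sum/difference matrix. Here is a rough, uncalibrated Python sketch; the
capsule order and file names are assumptions, so check the mic's documentation
before using anything like this for real.)

import numpy as np
import soundfile as sf  # assumed installed (pip install soundfile)

# A-format capsule order assumed to be FLU, FRD, BLD, BRU
# (front-left-up, front-right-down, back-left-down, back-right-up);
# the NT-SF1's actual order and calibration may differ.
a, sr = sf.read("room_impulse_a_format.wav")   # shape: (frames, 4)
flu, frd, bld, bru = a.T

w = flu + frd + bld + bru                      # omni
x = flu + frd - bld - bru                      # front-back
y = flu - frd + bld - bru                      # left-right
z = flu - frd - bld + bru                      # up-down

# Ambix channel order is W, Y, Z, X; gain scaling and the frequency-dependent
# correction filters a calibrated converter would apply are omitted here.
b = np.stack([w, y, z, x], axis=1)
sf.write("room_impulse_b_format_ambix.wav", b, sr)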
Time for the A-to-B conversion. Drag and drop the 4-channel audio into the
editor window of Ardour, set "Start" and "End" and use export. Make sure all
4 channels are exported into one file. You can use either WAV or FLAC. The
next step is Audacity. (You don't need to close Ardour; sooner or later it
will crash anyway… :) )
Unlike stereo files, Audacity handles 4-channel audio as 4 separate mono
tracks. (But it can be exported as 4-channel audio again if you choose
"advanced mix options" in the "import/export" section of the preferences. You
won't need that for the virtual ambisonics IR reverb.)
But - it is mandatory to keep the audio sample-accurate between all the
tracks! If you want to delete, delete from all of them simultaneously!
So, choose your favourite BANG! out of the impulse series you recorded and
delete the rest. Export every mono track as WAV. Use Ambix nomenclature: the
1st track is W, the 2nd is Y, the 3rd is Z and the 4th is X. The best bet is
to write the number and the letter in the name of the file. I chose:
[nameoftheplace]_1_(W).wav and so on…
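(If you prefer scripting the split instead of doing it in Audacity, a few
lines of Python with the soundfile module do the same thing - this is just a
sketch, and "myroom" is a placeholder name:)

import soundfile as sf  # assumed installed

# Split a 4-channel Ambix B-format file into four sample-aligned mono WAVs,
# named with the convention above ([nameoftheplace]_1_(W).wav and so on).
data, sr = sf.read("myroom_b_format_ambix.wav")   # shape: (frames, 4)
for i, letter in enumerate("WYZX"):               # Ambix order: W, Y, Z, X
    sf.write(f"myroom_{i + 1}_({letter}).wav", data[:, i], sr)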
Now, to actually build the virtual ambisonics IR reverb, fire up Ardour and
make it ambisonics compatible (deactivate the panners). Create a 4-channel
audio bus and place a 16-channel Carla patchbay on it. In the Carla patchbay,
create 4 instances of "LSP Impulse Responses Mono" - one for each channel.
(Using LV2 in Carla works pretty stably!) Open the GUI of each "LSP Impulse
Responses Mono" instance and load the corresponding WAV by clicking into the
GUI and choosing the right file. Inside the Carla patchbay it should look as
follows:
Carla channel 1 output ---> LSP Impulse Responses Mono #1 (with
[nameoftheplace]_1_(W).wav) ---> Carla channel 1 input
Carla channel 2 output ---> LSP Impulse Responses Mono #2 (with
[nameoftheplace]_2_(Y).wav) ---> Carla channel 2 input
Carla channel 3 output ---> LSP Impulse Responses Mono #3 (with
[nameoftheplace]_3_(Z).wav) ---> Carla channel 3 input
Carla channel 4 output ---> LSP Impulse Responses Mono #4 (with
[nameoftheplace]_4_(X).wav) ---> Carla channel 4 input
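(For reference, here is an offline Python sketch of what these four convolvers
effectively do - each B-format channel of the encoded source is convolved with
the matching channel of the impulse response. scipy and soundfile are assumed
to be installed, and all file names are placeholders.)

import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve  # assumed installed

src, sr = sf.read("voice_encoded_b_format_ambix.wav")   # (frames, 4), already
                                                        # placed in the sphere
ir, sr_ir = sf.read("myroom_b_format_ambix.wav")        # (frames, 4) impulse response
assert sr == sr_ir

# convolve channel-wise: W with W, Y with Y, Z with Z, X with X
wet = np.stack([fftconvolve(src[:, ch], ir[:, ch]) for ch in range(4)], axis=1)
wet /= max(np.abs(wet).max(), 1e-9)                     # crude peak normalisation
sf.write("voice_with_room_reverb_ambix.wav", wet, sr)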
Make sure that every LSP IR Mono instance uses exactly the same values for
head cut, tail cut and amplification. Use the head cut to trim everything
before the impulse. Now you can take a mono audio file, place it in the
ambisonics sphere, e.g. with the IEM StereoEncoder, use an auxiliary send
(don't forget to disable its panner, too!) and enjoy the reverberation! (Play
with Ardour's "Strict I/O" setting to get as many channels per strip as you
need!) Using an EQ after the virtual reverb is always a good idea. LSP IR
Mono has one integrated, but my impulses don't seem to have a flat frequency
response and needed more than the +/-12 dB the plugin offers… (I use IEM's
"MultiEQ" for this task - one setting for all the channels is another
advantage.)
Unfortunately, Carla can't recall LSP IR Mono's WAV file, so you have to
reload it every time you start your session.
Greets!
I hope this can be useful to someone…
Mitsch
On Sat, 2022-09-17 at 12:00 +0200,
linux-audio-user-request(a)lists.linuxaudio.org wrote:
>
>
> I sort of found an old track of indeterminate age, and thought I'd
> like to try
> a remix, so looked for the project folder. It wasn't there! That was
> a bit of a
> shock, as I never delete these, so how it came to be missing is a
> mystery.
>
> It would have been possible to keep listening to the original audio
> to (slowly)
> piece it together, but that was more tedious than I fancied. At this
> point I
> didn't know just how old it was - the audio had a file date in 2010,
> but I
> suspected it was older. Eventually I remembered I had a compressed
> archive of
> my very early pre-linux work, and there it was!
>
> Well, it is a not quite kosher MIDI file. This has the last modified
> date in
> 1994, so indeed older than I though. Importing it into Rosegarden
> produced a
> lot of strange bits of tracks separated from the actual notes, and no
> sign of
> track names, but it was enough to get started. That was about a month
> ago. I
> now have the entire recording, expanded and reproduced with Yoshimi
> (the
> original would have been a mix of Sound Canvas and SY22).
>
> So here it is. Personally I find it quite difficult to keep still
> while it's
> playing :)
>
> https://soundcloud.com/soft-sounds/skipping-rope
>
> --
> Will J Godfrey {apparently now an 'elderly'}
> https://willgodfrey.bandcamp.com/
> http://yoshimi.github.io
> Say you have a poem and I have a tune.
> Exchange them and we can both have a poem, a tune, and a song.
>
This is delightful, Will, thank you for sharing. I love the low-
pitched rhythm and the way you let each instrument take the melody.
John Sauter (John_Sauter(a)systemeyescomputerstore.com)
--
get my PGP public key with gpg --locate-external-keys
John_Sauter(a)systemeyescomputerstore.com
I sort of found an old track of indeterminate age, and thought I'd like to try
a remix, so I looked for the project folder. It wasn't there! That was a bit of a
shock, as I never delete these, so how it came to be missing is a mystery.
It would have been possible to keep listening to the original audio to (slowly)
piece it together, but that was more tedious than I fancied. At this point I
didn't know just how old it was - the audio had a file date in 2010, but I
suspected it was older. Eventually I remembered I had a compressed archive of
my very early pre-linux work, and there it was!
Well, it is a not quite kosher MIDI file. This has the last modified date in
1994, so indeed older than I thought. Importing it into Rosegarden produced a
lot of strange bits of tracks separated from the actual notes, and no sign of
track names, but it was enough to get started. That was about a month ago. I
now have the entire recording, expanded and reproduced with Yoshimi (the
original would have been a mix of Sound Canvas and SY22).
So here it is. Personally I find it quite difficult to keep still while it's
playing :)
https://soundcloud.com/soft-sounds/skipping-rope
--
Will J Godfrey {apparently now an 'elderly'}
https://willgodfrey.bandcamp.com/
http://yoshimi.github.io
Say you have a poem and I have a tune.
Exchange them and we can both have a poem, a tune, and a song.
Hey hey,
does anyone know of a sonic visualiser with a text-based programming language?
I'm not familiar with the territory at all. I'd like to create more
interesting visuals for my Youtube material. Since I'm (almost) blind, I'd
like to create visuals from the music itself, with a certain degree of
algorithmic magic, I suppose, but with some control through a simple,
dedicated programming interface. Think something like the Csound language or
POV-Ray. It could be based on pure audio or with additional support for MIDI.
If it's not too complicated and is well-defined (well documented), I'd be
happy to use something with no direct link to audio.
The languages and paradigms I know are c/c++, Csound, bash scripting, POV-Ray
and HTML. I would prefer something which is not a library in a programming
language, so I don't have to write a whole program around it.
Given these rough ideas, does anything spring to mind? Graphic patterns as
such don't mean that much to me, since they'd mostly be too delicate for me
to see. I personally set a lot of store by colours, though.
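To illustrate the kind of audio-to-colour mapping I have in mind, here is a
toy Python sketch (numpy, soundfile and Pillow assumed installed; the file
names and colour choice are arbitrary) that maps the loudness of each audio
block to the brightness of a solid-colour frame:

import numpy as np
import soundfile as sf
from PIL import Image

audio, sr = sf.read("track.wav")
if audio.ndim > 1:
    audio = audio.mean(axis=1)                    # mix to mono

fps = 25
block = sr // fps
for n in range(len(audio) // block):
    rms = np.sqrt(np.mean(audio[n * block:(n + 1) * block] ** 2))
    level = int(min(1.0, rms * 4) * 255)          # crude loudness scaling
    frame = np.zeros((360, 640, 3), dtype=np.uint8)
    frame[:, :, 2] = level                        # blue follows loudness
    Image.fromarray(frame).save(f"frame_{n:05d}.png")
# The PNG sequence can then be turned into a video, e.g. with ffmpeg.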
Best wishes and thanks for any ideas,
Jeanette
--
* Website: http://juliencoder.de - for summer is a state of sound
* Youtube: https://www.youtube.com/channel/UCMS4rfGrTwz8W7jhC1Jnv7g
* Audiobombs: https://www.audiobombs.com/users/jeanette_c
* GitHub: https://github.com/jeanette-c
And I love the way with just one whisper
You tell me everything <3
(Britney Spears)
XUiDesigner v0.7 released
An easy-to-use tool to generate and design X11-based LV2 plugins.
Besides generating and installing GUIs for existing LV2 plugins, XUiDesigner
also supports generating LV2 plugins from scratch.
Special support is implemented for FAUST dsp files, which allows you to
generate an LV2 plugin with an X11-based UI by simply dragging and dropping a
FAUST dsp file onto the XUiDesigner interface. This now works for
MIDI-capable FAUST modules as well.
Either way, you don't need to deal with any of the annoying LV2
implementation details; XUiDesigner handles all of that for you.
The same is true when you want to turn your own dsp code (C or C++) into an
LV2 plugin: you create the GUI, save the plugin bundle, and implement the
needed calls to init, activate and run your dsp.
This release comes with a couple of bug fixes and aims to be nearly stable.
Here is an introductory wiki entry
<https://github.com/brummer10/XUiDesigner/wiki/XUiDesigner> showing the
first steps.
New in this release:
implement an interim save format (JSON)
<https://github.com/brummer10/XUiDesigner/commit/8e94678ad5e1abde7c8d4dfae05…>
(allows you to load and rework the generated UI in XUiDesigner at any time)
Project page:
https://github.com/brummer10/XUiDesigner
Download Release:
https://github.com/brummer10/XUiDesigner/releases/download/v0.7/XUIDesigner…
Enjoy anyway.