Dear List,
I'd like to play a MIDI file through setBfreeUI but have not found a way
to do it. Is there a method to send a MIDI file to the already launched
program, or a command line parameter to make setBfree play a given MIDI
file?
My feeling is that setBfree cannot play MIDI files on its own, so I was
looking for a simple JACK-aware MIDI file player that sends its output to
JACK, to be routed to setBfree. Any hint is highly appreciated.
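To make the question concrete, here is the kind of flow I'm imagining,
sketched as a shell script. The player (jack-smf-player, from the
jack-smf-utils package) and the JACK port names are guesses on my part;
I haven't verified any of this:

```shell
#!/bin/sh
# Hypothetical sketch: play a standard MIDI file into setBfree over JACK.
# jack-smf-player and the port names below are assumptions; check
# 'jack_lsp' for the real port names on your system.
play_midi() {
    file="$1"
    if ! command -v jack-smf-player >/dev/null 2>&1; then
        echo "skipped: jack-smf-player not installed"
        return 0
    fi
    jack-smf-player "$file" &          # exposes a JACK MIDI output port
    sleep 1                            # give the port time to register
    # route the player's output into setBfree's MIDI input
    jack_connect jack-smf-player:midi_out setBfree:midi_in
}
play_midi song.mid
```

If something like this works, the same two-step pattern (start player,
then jack_connect) should apply to any JACK-aware MIDI player.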
Another setBfree question: can I find out which program is currently in
use? E.g. I send a Program Change MIDI message to setBfreeUI and notice
that the sliders change, but I have no indication of which program is now
active.
thanks for your time
Gerhard
Let me state the goal up front. I have a room with several speakers
installed. It's not a traditional home theater setup; there's a
subwoofer, and then 8 speakers. 8.1, if you will. It's not used for
movies or other commercially recorded multitrack audio.
There are also 2 computers involved; one is an older Raspberry Pi running
the usual Raspbian, the other a previous-generation Intel NUC running
recent Linux Mint.
My goal is to be able to drive each channel independently, from my own
software. I'm not trying to play movies through this setup; this is
strictly a non-commercial attempt to play special effects (thunder,
wind, forest noises) on demand, by starting and stopping .wav files as
needed.
What I have today is that the raspberry pi determines what needs to be
played and when, and since it doesn't handle 9 channels of audio by
itself, it sends network messages to the NUC to cue it to play some of
the sounds. The pi has 4 channels of output, obtained by plugging in 2
cheap Plugable USB-to-stereo adapters. Two channels go into
the sub; the other two drive a pair of speakers. The NUC has 3 of the
same cheap USB audio devices, giving six channels for the other
speakers. In both cases the software is spawning (and killing) aplay and
using -D to pick the stereo device to use. All that actually works fine;
happily, the latency of sending messages and spawning aplay isn't a problem.
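For concreteness, the spawning logic amounts to something like this
sketch (the device name and file are illustrative, not my exact code):

```shell
#!/bin/sh
# Sketch of how one stereo pair gets driven: spawn aplay on a named
# ALSA device and keep the PID so the clip can be stopped early.
# The device string is illustrative -- see 'aplay -L' for real names.
play_clip() {
    device="$1"; file="$2"
    aplay -q -D "$device" "$file" &   # -q: quiet; &: allow overlapping clips
    echo $!                           # PID, so the caller can kill it later
}
# pid=$(play_clip sysdefault:CARD=Device thunder.wav)
# ... later, to cut the effect short:
# kill "$pid" 2>/dev/null
```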
In some cases the pi will decide it needs to play different audio clips
to the same speaker simultaneously; I need ALSA's ability to mix inputs
to a single output to keep working. (Currently my software limits things
to 3 sounds at a time on any given channel.)
It's important to me that when I play a sound intended to come out of
the front left speaker, /it actually go to that speaker/.
You can probably guess my problem: on any reboot, the cheap USB devices,
which don't have serial numbers, get randomly assigned to ALSA devices.
On the pi, in ALSA, sysdefault:CARD=Device and sysdefault:CARD=Device_1
both show up, but it's random which speakers they drive. Ditto for the 3
devices on the NUC. Result: when the pi decides to generate a flash of
lightning in the lights on the left, the thunder comes out of the front.
Or bird sounds end up in the subwoofer.
Basically I'm doing it wrong. How do I do it right?
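For concreteness, here is the direction I've been poking at: udev already
creates stable /dev/snd/by-path symlinks named after the physical USB
port, so in principle the card number could be looked up at runtime. A
sketch (the path is illustrative, and I haven't verified this on my
machines):

```shell
#!/bin/sh
# Map a physical USB port to its current ALSA card number via the
# stable /dev/snd/by-path symlinks, so the speaker assignment can
# survive reboots. Run 'ls -l /dev/snd/by-path' to see the real names.
card_for_port() {
    link=$(readlink -f "$1")       # e.g. resolves to /dev/snd/controlC1
    echo "${link##*controlC}"      # keep just the card number
}
# card=$(card_for_port /dev/snd/by-path/platform-3f980000.usb-usb-0:1.2:1.0)
# aplay -D "plughw:$card,0" front-left.wav
```

One caveat I can see: addressing hw:/plughw directly bypasses the dmix
mixing I rely on, so the looked-up card number would probably need to
feed a dmix definition in .asoundrc rather than be used raw.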
Note that my recorded sources are generally all stereo .wav files; when
I want sound to come out of just one speaker, I mix that sound file to
put everything into one channel. That's maybe not ideal, but I'm
comfortable messing with the audio files and used to thinking of the
outputs as 5 groups of 2 channels each. But I'm not wedded to that and
would be ok with controlling channels independently.
I am fine with using either or both computers to produce sounds.
Experience says that the pi doesn't handle more than two USB devices
without running out of bandwidth, which is why it's currently 2 devices
on the pi and 3 on the NUC. But that's changeable.
I keep looking at the HDMI outputs on both these devices and wondering
if that's not a total of 16 channels of audio that won't move around,
that I could be using. Maybe I could dump USB entirely. But I keep
reading articles online that suggest that using HDMI audio on Linux just
doesn't work well?
I'm trying to keep costs down. I already have the 5 USB audio devices
and they work, so if I could get them to stop moving randomly I could
call it done. If I have to buy different USB audio devices, or HDMI
audio extractors, I want to spend $$, maybe $$$, and definitely not $$$$.
I'm willing to move the audio duties between the two computers (if the
NUC can drive all 10 channels, 8 from HDMI and 2 from a single USB
device, that works.) Rewriting my software isn't a problem, but the time
and expense of buying hardware, trying it, realizing it won't work,
having to send it back/take a loss, lather rinse repeat, is exactly what
I need to avoid.
I did experiment with an HDMI audio extractor once, a few years ago.
When I got it to work at all I discovered that it would somehow go to
sleep during periods of silence, and then when audio signal was
presented, it had a "wake up" period of over a half second, during which
the audio was dropped. That ruined a number of effects.
I've experimented with using the headphone output on the pi. The audio
quality was too low. I don't need the highest of audio fidelity for
this, but the sub gets driven as low as 8 Hz and the upper end is around
18 kHz, and when I play crickets I want it to sound like there are
crickets in the room. The pi's headphone output wasn't convincing.
Basically: what do I need to buy that's known to Just Work Every Time?
And will keep working for years?
Level of expertise: application programming on Linux, not a problem.
Configuring ALSA is scary and I'd need step by step instructions. Once
we get into modprobe and custom drivers I'm acutely nervous. These
computers do other important things and I don't want them bricked.
Solid suggestions welcome, please nothing of the "well you could try..."
variety. I'm sure someone on this list has been here and done this. What
did you use? TIA.
Dear Linux Audio Users,
The second annual Sonoj Convention ( https://www.sonoj.org ) will take
place October 26th-27th, 2019, in Cologne, Germany.
The convention focuses on the combination of music production and open
source software, with an emphasis on practical music production.
At Sonoj there will be talks, demonstrations, and workshops, ranging
from basic workflows to detailed instructions on how to subtly improve
your sound.
While the event will take place in Germany, all presentations and
workshops will be in English, recorded and streamed to make them easily
accessible to a wide audience.
We want to welcome everyone, regardless of your musical or technical
background.
To get a better idea of what to expect out of Sonoj, you can find
information and recordings from last year's convention in our archives:
( https://sonoj.org/archive )
As it was last year, admission to the convention is free, though
donations are very much welcome.
- Visitor registration is now open. Space is limited, so please register
on our website. ( https://sonoj.org/register.html )
- We are looking for talks, demonstrations and workshops. If you would
like to contribute, please send a short, informal e-mail with your
ideas to info(a)sonoj.org
- Donations are very welcome. Even if you don't attend in person, please
consider supporting a non-profit event that promotes open source music
making and production. We accept direct bank transfer and PayPal (
https://www.sonoj.org/donate.html )
Yours,
Nils Hilbricht
https://www.sonoj.org
Hey everyone!
I am very excited to present my new album, "Nocturnal Creatures".
As is usual for my latest ambient work, the textures and pads were
created with a Linux audio setup, using JACK, Carla, Rakarrack and
Qtractor, while full song arrangement, automation and mixing were done
in FL Studio running under WINE.
The album can be streamed and purchased from Bandcamp:
https://louigi.bandcamp.com/album/nocturnal-creatures
If you like the album, I would greatly appreciate a review.
You can also stream the album from SoundCloud. Be sure to follow! :)
https://soundcloud.com/louigiverona/sets/nocturnal-creatures
Cheers!
Louigi Verona
https://louigiverona.com/
Atte tried to post a reply to a thread here that he'd actually started, but
it was rejected as spam. I've seen a copy, and there's nothing in it that I
would have thought could be taken as such.
Has anyone else had this recently?
Any ideas what he can do about it?
--
Will J Godfrey
http://www.musically.me.uk
Say you have a poem and I have a tune.
Exchange them and we can both have a poem, a tune, and a song.
On Mon, 2 Sep 2019 09:23:35 +0200
Atte <atte(a)youmail.dk> wrote:
>On Sun, 1 Sep 2019 19:44:26 +0100
>Will Godfrey <willgodfrey(a)musically.me.uk> wrote:
>
>> We also have a shiny new website. I've asked our web guy to make it as
>> accessible as possible.
>> It is at:
>> http://yoshimi.github.io/
>
>Website looks really good! Nice touch with the menu, which reflects the button look of Yoshimi!
>
>One small suggestion: it's kind of standard that clicking on the logo takes you to the root of the website. A small addition, but it might be worth considering :-)
>
>Cheers
Thanks for the suggestion - done now :)
--
Will J Godfrey
http://www.musically.me.uk
Say you have a poem and I have a tune.
Exchange them and we can both have a poem, a tune, and a song.
This is a 2010 album. With the benefit of today's experience I can see that
the production quality is absolutely not where I want it to be, but back
then I paid very little attention to sound engineering, unfortunately.
However, the album is still kinda interesting, and completely produced with
the Linux Audio stack. So, just wanted to share this with everyone as I was
going through my older releases.
https://louigi.bandcamp.com/album/tranquility
Louigi Verona
https://louigiverona.com/
V1.6.0
Yoshimi is now 10 years old and (while fully respecting its origins) is
forging its own path into the future. Do come along for the ride.
Our headline feature is extensions to AddSynth voices and modulators.
There is a new AddSynth noise type.
There are extra mute options.
There is a global bank search entry in the main window's instrument menu, and a
button in the instrument bank window.
Also in the main window there is a button to temporarily disable an individual
system effect.
In the part editor window there is now a 'Humanise Velocity' slider.
We've made an improvement to the way recent histories are managed.
All the above features are, of course, also available to the command line
interface.
'Reports' and 'Midi Learn' openers have been swapped.
There is a new group of easy to use NRPNs.
There have been improvements to Copy/Paste.
There is tighter control of startup.
Incidentally, whenever we add new features, the default is always to keep the
existing behaviour.
The Advanced User Manual has been considerably expanded.
Under the hood
Ring buffers have now been changed to a bespoke type.
Almost all file system operations have been moved to a single source file.
As well as running headless, Yoshimi can now be built headless.
The command line has additional protection against overlength lines and
corrupted data.
More details in /doc/Yoshimi_1.6.0_features.txt
Yoshimi source code is available from either:
https://sourceforge.net/projects/yoshimi
Or:
https://github.com/Yoshimi/yoshimi
Full build instructions are in 'INSTALL'.
Our list archive is at:
https://www.freelists.org/archive/yoshimi
To post, email to:
yoshimi(a)freelists.org
--
Will J Godfrey
http://www.musically.me.uk
Say you have a poem and I have a tune.
Exchange them and we can both have a poem, a tune, and a song.
dear list members,
I'm trying to have my computer automatically establish the connections
between qmidinet, qjackctl and reaper, so that I can just start the
jack server via qjackctl, then start reaper, and have touchdaw working
(via qmidinet). The connections are easy to make by hand; I just don't
know how to automate that flow. The issue is that qmidinet needs to be
started (or reset) after jack is up. I thought I would use the option
in qjackctl to run a script after jack is set up, but if I put
'qmidinet' there it doesn't work, because of some conflicting processes
I don't understand so far.
does anybody of you know what I could do? I thought of a command to
reset qmidinet just like its gui offers; that would be the easiest and
sanest way. is there such a command?
thank you very much!
christoph
Yep, sounds like you need a LADISH manager. You might already have Claudia
on your system, which is quite nice, but if not, you could also try out
GLadish, which is more basic (and also works great).
Claudia: https://kx.studio/Applications:Claudia
Gladish: sudo apt install gladish
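Failing a session manager, qjackctl's "Execute script after Startup"
option plus a tiny wrapper might already be enough. A sketch (untested;
it assumes the conflict comes from a qmidinet instance left over from
before JACK started):

```shell
#!/bin/sh
# Wrapper for qjackctl's "Execute script after Startup" option.
# Assumption (not confirmed): the conflict is a stale qmidinet instance
# started before JACK, so we restart it once JACK is up.
restart_qmidinet() {
    pkill -x qmidinet 2>/dev/null    # stop any stale instance (ok if none)
    sleep 1                          # give JACK a moment to settle
    if command -v qmidinet >/dev/null 2>&1; then
        qmidinet &                   # relaunch against the running server
    fi
    return 0
}
restart_qmidinet
```

Point qjackctl's post-startup script setting at a file containing this,
and check 'qmidinet --help' for any flags your version wants.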
On Sat, 31 Aug 2019, 12:25 , <linux-audio-user-request(a)lists.linuxaudio.org>
wrote:
> Message: 1
> Date: Fri, 30 Aug 2019 13:01:29 +0200
> From: Francesco Ariis <fa-ml(a)ariis.it>
> Subject: Re: [LAU] Nocturnal Creatures
>
> Hello Louigi,
>
> On Fri, Aug 30, 2019 at 11:49:35AM +0200, Louigi Verona wrote:
> > Hey everyone!
> >
> > I am very excited to present my new album, "Nocturnal Creatures".
>
> I am listening to "Rumble of a Distant Migration"; very good quality
> track, atmosphere- and production-wise. Good job!
> -F
>
>
> ------------------------------
>
> Message: 2
> Date: Sat, 31 Aug 2019 11:08:42 +0530
> From: Banibrata Dutta <banibrata.dutta(a)gmail.com>
> Subject: Re: [LAU] qjackctl, qmidinet, touchdaw, reaper
>
> Not an expert on this, but I think you need a "session manager". The
> session manager is what saves existing JACK connections / patching as a
> "session", such that every time you start the combination of independent
> Linux Audio applications (that use JACKd), the connections are set up
> automagically for you (by the session manager, for the selected session).
>
>
> --
> regards,
> Banibrata
> http://www.linkedin.com/in/bdutta
> http://twitter.com/edgeliving
>