Dear all,
Unfortunately I had to disable signing up for all linuxaudio.org mailing
lists. This is due to a flood of fake accounts being created that send
out subscription confirmations which sometimes get flagged as spam. This
in turn leads to so-called Feedback Loop complaints from the abuse team
of our hosting service (Hetzner). If we get too many of those, we risk
getting our mail traffic blocked or ending up on an RBL (Realtime Blackhole List).
So at the moment new subscriptions can only be done manually through me.
This is of course very inconvenient, so if anyone knows a better way to
shield Mailman 3 against bots creating fake accounts, please contact
me. Thanks in advance!
Best regards and happy holidays!!!
Jeremy
linuxaudio.org admin
I've recently made "prettified" accordion videos using closeups of both
hands. That provides a pretty good distraction from the faces I pull
when playing, while still not nominally going for the "headless" look
popular with some players who likely have the same problem.
<https://youtu.be/spAP7ODPCyg>
Since the left hand moves all over the place with the bass part of the
accordion, it requires keyframes to rein that movement in for the
closeup. Shotcut recently acquired a motion tracker, but so far I
haven't got it to work: it doesn't track rotation, and there doesn't
seem to be a way to apply its results to create a working closeup crop.
So basically I went through the video to each bellows reversal and then
straightened up the closeup, creating a keyframe.
The greatest annoyance was probably that the rotation angle tended to be
in the interval 350° to 20°, and I had to manually convert every angle
just below 360° into a negative angle in order to keep Shotcut from
performing caprioles with the bass side of the accordion between
keyframes. An option to constrain the rotation angle to some interval
when using the visual controls for straightening things could be useful.
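The manual conversion described here - mapping angles just below 360° to
small negative angles so interpolation takes the short way round - could
be sketched as a tiny helper (a hypothetical script, not anything Shotcut
provides):

```shell
# Map a rotation angle in [0, 360) to the signed range [-180, 180),
# so that e.g. 350 becomes -10 and keyframe interpolation between
# 350° and 20° passes through 0° instead of spinning the long way round.
signed_angle() {
    awk -v a="$1" 'BEGIN { a = (a % 360 + 360) % 360; if (a >= 180) a -= 360; print a }'
}

signed_angle 350   # prints -10
signed_angle 20    # prints 20
```

With angles expressed this way, the two keyframe values differ by only
30° rather than 330°.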
I've worked with three cameras: one for the main video, one for each
hand. The bass hand camera was tilted in a way to get as large an image
as possible while keeping the bass side of the accordion somewhere in
frame.
On the audio side, this was recorded in Ardour at 96 kHz on an Echo
Audiofire card using jackd2 over FireWire (I had to ditch PipeWire on
Ubuntu Studio because it got in the way). I employed the Guitarix
wrapping of Zita Reverb, which does a much more convincing job than what
Shotcut itself offers as "Reverb".
Now if only my playing skills were up to what the tools do... Silk
purses and all that.
--
David Kastrup
Hi list,
I discovered that jackd can use different devices for input and output,
which is great! I am wondering whether there will be the occasional
click if these two devices do not have synchronized word clocks. Or is
there something clever (resampling?) going on behind the scenes?
Thanks!
Peter
I failed to double-check that a headless build compiles, and one of our
blind users reported a build failure :(
Now fixed, and a note added to my checklist.
--
Will J Godfrey {apparently now an 'elderly'}
https://willgodfrey.bandcamp.com/
http://yoshimi.github.io
Say you have a poem and I have a tune.
Exchange them and we can both have a poem, a tune, and a song.
This is mostly a bugfix release; however, there are a few useful improvements.
From the Readme file:

Version 2.3.1.2
- Copy/Paste has been improved and unified between CLI and GUI.
- Improved discovery of the most recent HTML guide location. Also available to the CLI.
- Small corrections and updates in the User Guide.
- Various bugfixes.

For developers:
- Improved access to control data/descriptions, with updates.
- Updated various explanatory texts.
Full details are in /doc/Yoshimi_2.3.1.2_features.txt
Yoshimi source code is available from either:
https://sourceforge.net/projects/yoshimi
Or:
https://github.com/Yoshimi/yoshimi
Full build instructions are in 'INSTALL'.
Our list archive is at:
https://www.freelists.org/archive/yoshimi
To post, email to:
yoshimi(a)freelists.org
--
Will J Godfrey {apparently now an 'elderly'}
https://willgodfrey.bandcamp.com/
http://yoshimi.github.io
Say you have a poem and I have a tune.
Exchange them and we can both have a poem, a tune, and a song.
Greetings LAU,
Long time no read..
Time truly flies but then again, it means that it's already Acid
December season!
https://acid.datapop.se
For those who don't know, Acid December is a thousand-year-old internet
tradition where people from all over the world send in acid tracks.
Then every day in December, the magic elves release one track. If there
are 42 tracks, it goes on until the 42nd of December.
Submissions for this edition are open until midnight, 31st December!
Would be lovely to hear your take this year! Either way, stay safe in
cyberspace and take good care of each other!
From cold and grey Stockholm, with warm and colorful regards,
--
Set Sakrecoer
Hi!
Not sure if I'm in the right place, but I guess the LAU people are
trained to find solutions to extraordinary problems…
I have a vision 8-) :
I'm sitting at FOH, driving a theater show. I have - let's say - 3
projectors available: one at my back to cover the stage from the front,
two behind the stage doing a rear projection on the right and the left.
On every projector there is a Raspberry Pi connected via HDMI, waiting
to send videos to the projector. All Raspberries are connected to the
LAN, just like my Linux laptop from which the show is controlled.
I know I can achieve something similar with QLC+: install the app on
all the computers involved, set up 3 different Art-Net channels,
configure some video function(s) and make each one accessible through a
DMX channel. For that, the videos to be presented have to be on the
Raspberries. I can copy them to the devices and configure the triggers
on each before the show.
But my goals are different: keep it simple, keep it fast (in terms of
latency, but also in terms of using light and fast apps, and finally in
terms of not running through the venue to make some last-minute
configurations) and let only one machine be the one that has to be
configured - the main laptop at FOH.
I'm not so far away from that - the tools and the technology seem to be
there already. With ffmpeg, for example, it's possible to stream videos
from point to point in realtime.
[code]ffmpeg -i [input-video] -f [container format to use]
udp://[receiver's network address]:[port][/code]
(There are options to speed things up and/or relieve the CPU, but take
it as an easy example.) On the other side of the chain, ffplay or mpv
can catch the stream and decode it in no time.
[code]mpv udp://[transmitter's network address]:[port][/code]
(Again: optimizations left aside)
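To make that concrete - the address, port, filename, and codec choices
below are made-up examples for illustration, not a tested recipe - a
latency-biased variant of the generic commands could look like:

```shell
# Sender (FOH laptop): x264 settings that trade compression efficiency
# for minimal encoding delay; 192.168.1.42:5000 is a placeholder for
# the receiving Pi's address.
ffmpeg -re -i show-clip.mp4 \
       -c:v libx264 -preset ultrafast -tune zerolatency \
       -f mpegts udp://192.168.1.42:5000

# Receiver (Raspberry Pi): play fullscreen with buffering kept low.
mpv --fs --profile=low-latency --cache=no udp://192.168.1.42:5000
```

The `-re` flag makes ffmpeg read the input at its native frame rate, as
a live source would, and MPEG-TS is a common container for UDP streams
because receivers can join mid-stream.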
I tried this myself in a LAN between a Ryzen 5 2400G desktop and a
10-year-old Thinkpad and achieved latencies under 1 s - which is good
enough, even for professional use. Once you've found the best options
for your setup, you can use them over and over again with different
video inputs and destinations. Best of all: as a command line, it can
be integrated in QLC+ or Linux Show Player (LiSP). And: with ffmpeg I
can tee the video off from the audio stream, if I like, and keep the
audio at the FOH. (Or send it back from one of the Raspberries to FOH
via net-jack or comparable. Keeping video and audio in sync will be
another challenge, I see…)
But there is one downside: if the receiver is already playing the
video, there is no big latency between sender and receiver (if the
options are chosen well, of course). But catching the stream can take
several seconds. So what I need is a continuous stream on which I can
send my videos. OBS can do this, but it's another resource-intensive
app and - as far as I know - I cannot send commands to it from QLC+ or
LiSP. (I want ONE cue player for all, you know…!) Also: I *guess* OBS
can't handle more than one stream at once (sending to the different
RPi receivers) - but with ffmpeg commands it's easy…!
I had the idea of sending a continuous stream by streamcasting a
virtual desktop page and configuring mpv to play it fullscreen on
demand. But I guess this won't be so handy with more than one beamer.
Any ideas on how to reach my goals? (You can suggest other apps than
ffmpeg or mpv, of course!)
(Disclaimer: I have also posted this to the Linux Audio Users mailing
list and will try to send it to a place where ffmpeg nerds are common.
I will inform you if I get good thoughts from the other sources…)
Greets!
Mitsch