Version 1.3.8 has been out for a while now, but we didn't announce it generally,
as almost immediately, a bug in cmake was discovered that caused a segfault on
build. This is a curious one and only seems to affect the December build of
cmake 3.4.1 on debian testing. Fedora and associated distros have no problem.
However, one of yoshimi's little helpers found a way to bypass the issue.
Codenamed 'The Swan', version 1.3.8.2 is now available from both:
http://sourceforge.net/projects/yoshimi
https://github.com/Yoshimi/yoshimi
Full details are in the tarball /doc/Yoshimi_1.3.8-features.txt but in brief:
Program changes from any source while actually playing multiple tracks are now
virtually silent - and are silent if the part being changed is not sounding.
Root & Bank changes are always silent.
Storage of Audio & MIDI preferences has been improved, along with preserving
your CLI/GUI working environment choice.
The CLI can now set almost all the 'top level' controls, and the major 'user'
settings for parts. The parser allows highly abbreviated commands for fast
working.
e.g: s p 4 pr 6
(set part 4 program 6)
This sets part 4 to the instrument with ID 6 from the current bank and root. It
also then leaves you at part context level and pointed to part 4. Additionally,
it will activate that part if it was off (and the config setting is checked).
This release is sound/instrument compatible with Zyn 2.5.2.
--
Will J Godfrey
http://www.musically.me.uk
Say you have a poem and I have a tune.
Exchange them and we can both have a poem, a tune, and a song.
Hi list,
I am wondering when more recent realtime kernel packages will be
available in Debian. Does anyone have an idea (besides asking
the Debian maintainers)?
The current one on Debian testing seems to be linux-image-3.14-2
3.16-4 and 4.3.0-1 seem to not offer realtime versions.
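For what it's worth, a quick way to check which -rt images a given Debian
release currently ships (assuming an apt-based system; exact package names
vary by release):

```shell
# List kernel image packages and filter for realtime (-rt) builds
apt-cache search linux-image | grep -i -- '-rt'

# Check whether the kernel you are running right now is a PREEMPT build
uname -v | grep -i preempt
```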
Thank you for all ideas!
best, Peter
Hey hey everyone,
I want to put a string section in a quasi 3d room. I've looked up the
classical seating of a string section, but now I'm wondering about real
distances in a concert hall. Assume I have a 60-player string
section with 1st and 2nd violins, violas, celli and double basses. What
distances am I looking at in a straight line from left to right and front to back?
From there I can work out the finer points.
Thanks for any help!
Ta-ta
----
Ffanci
* Homepage: https://freeshell.de/~silvain
* Twitter: http://twitter.com/ffanci_silvain
* GitHub: https://github.com/fsilvain
Hi all,
In need of a (console) wave file player which is also jack-transport
aware, searching around I came up with this ecasound command:
ecasound -c -i ultimo_sole_audio_dial_eff.wav -o jack,system \
-G:jack,ecasound,recv
However I have two problems:
1. The above starts in interactive mode while I'd like ecasound to
autostart when run. It seems that omitting -c will cause ecasound to
exit upon jack transport stop, which is a no-go for me
2. The wave file is mono: I tried various combinations such as
jack_multi but this will always either create two outputs or one output
and connect it only to one of the system (stereo) outputs.
I guess I could work around problem 2 by either converting the file to
stereo, or by using jack_connect in my script to connect the output to
the second system port. Any idea about 1?
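For problem 2, a sketch of that connection workaround (the ecasound port
name below is an assumption; check the real one with jack_lsp first):

```shell
# List ecasound's JACK ports to find the actual output port name
jack_lsp | grep '^ecasound:'

# Fan the single mono output out to both system playback ports
jack_connect ecasound:out_1 system:playback_1
jack_connect ecasound:out_1 system:playback_2
```

Alternatively, ecasound's own -chcopy:1,2 chain operator (together with -f
to force a two-channel format) may let you duplicate the channel inside
ecasound itself, avoiding the external connection step.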
Lorenzo
Hi,
today we have published the video of another single of John Option:
Sunday morning.
The song is published under the terms of the Creative Commons License
(CC-BY-SA) and it's completely produced with free software:
Ardour, Hydrogen, Jack, Qsynth, CALF, and many other
great free audio software that we used under Debian GNU/Linux.
Here you can listen to the single and see the video (made with kdenlive):
https://youtu.be/fpEoKFlU1fQ
As with the previous songs, we have gone a little further in the direction of
freedom and published on our website[1] the single's recording tracks
and the complete Ardour session. All this material is published under
the terms of the Creative Commons Attribution-ShareAlike license, so
that anyone can use our tracks to produce a remix of our song, or even a
new song, which must then be published under the same license.
You can find all about our project here: http://johnoption.org
I hope that you like our choice of freedom. If you feel like it, I'd love
to read your feedback, because the encouragement of the people who
listen to us and appreciate the philosophy of our project is the only
fuel we have to continue. And if you'd like to be updated about our next
release, please subscribe to our YouTube channel or any other social
network you like (see the links to our profiles on our website[1]).
Best regards,
Max-B
1. http://johnoption.org
--
IM: massimo(a)jabber.fsfe.org - GnuPG Public Key-Id: 0x5D168FC1
Some may remember, I tried a full-court press on MIDI over wifi about
eighteen months ago, using a couple of different methods; there were
occasional losses in realtime play, enough key-up commands lost that it
could not be used. So I gave it up for then and eventually found a
different use for my Rpi2.
But the prevalence of tablets as OSC+MIDI controllers, over wifi, has not
ceased to nag at me. I'm not interested in having a tablet -- my keyboard
is all I need -- but I am very interested in configuring an RPI2 or
equivalent in place of that tablet, so that I can wire keyboard to RPI2 and
then go wireless to the synth.
Now if I understand this Wikipedia article
<https://en.wikipedia.org/wiki/Open_Sound_Control> correctly, SLIP is
commonly used to encapsulate OSC signals. This could be a great clue,
because SLIP provides well-defined packet framing, on top of which
verification and recovery of lost signals, i.e., signals lost in the
normal wifi situation, can be built. And I have seen the Pure Data method
of converting OSC to MIDI and back, so that is not too much of a problem.
The question I have is: what would be the best method of verifying,
configuring, and debugging SLIP encapsulation? To do this well, for
production on stage, I would not only have to be able to tick a checkbox
for SLIP, but also have it trigger warnings if it detects a certain rate
of errors.
Thoughts, anyone?
--
*Jonathan E. Brickman jeb(a)ponderworthy.com
(785)233-9977*
*Hear us at http://ponderworthy.com <http://ponderworthy.com> -- CDs and
MP3s now available! <http://ponderworthy.com/ad-astra/ad-astra.html>*
*Music of compassion; fire, and life!!!*
Thank you Mr. Hawaii, I forgot I had installed that, but looking in its man page,
I see it makes references to non-destructive editing. If I understand the concept, I
would think editing out portions of sound would certainly be destroying an
original. Meanwhile, if I run nama -t and a file name, I get the following
error:
Found config file: /home/chime/.namarc
YAML::Tiny found bad indenting in line ' consumer:' at
/usr/share/perl5/Audio/Nama/Assign.pm line 283.
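YAML::Tiny is strict about indentation: it rejects tabs and inconsistent
nesting. Purely as an illustration (I don't know your actual .namarc
contents), the 'consumer:' key would need to be indented with spaces,
consistently with its sibling keys, something like:

```yaml
# hypothetical fragment -- adjust to your real .namarc keys
jack:
  consumer: system      # indented with two spaces, no tabs
  producer: system
```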
So please, how do I fix this, and would Nama be an interactive editor? Thanks in
advance
Hart
Hello,
There is a free reverb tool by u-he with which they are trying to get
insights by asking users to send in random codes that they
either hate or like. The VST2 plugin can generate random model and
delay value codes. The idea is to see what works outside of
algorithmic reverbs and into combinations of networked, serial and
parallel delays. The tool is free and has some presets. It is also
possible to save presets. It has a 'send code' button.
It can add up to 20% CPU usage, though, so this is not a typical reverb
unit, hence the prefix 'proto'.
Runs fine in Ardour.
Details:
https://www.u-he.com/cms/179-protoverb
Hey everyone!
For those of you who have followed my project all these years this is the
usual news; for those who don't know, "droning" is a lifelong project to explore
various forms of drone music. Currently the project has 262 tunes released
since April 2011.
If you like ambient from time to time, you might find some of these tunes
to your liking.
The new batch adds tunes 258-262. Although these are just 5 tunes, each of
them required huge amounts of work.
*droning258* is a calm haunting drone, something I enjoy doing from time to
time.
*droning259* is a grand soundscape that took several weeks to produce, with
many challenges, both creative and technical.
*droning260* is one of those borderline sequence-based tunes, which might
seem very close to falling out of the format. Nevertheless I decided to
keep this in the project. I like the thought of having very different tunes
represented.
*droning261* explores this "underwater" feeling that you can find in
several other droning tunes. This one, however, has lots of detail and is
also sequence-based.
*droning262* is a very elaborate soundscape. I had been trying to get
something like this done for quite a while and am quite happy with the result,
although this is definitely not the last time I venture into this kind of ambient.
Get ogg files here:
http://www.louigiverona.com/?page=projects&s=music&t=droning
If you would like to support the project financially, one of the tunes from
the batch is released on Bandcamp, offering you lossless quality:
https://louigi.bandcamp.com/album/droning259
--
Louigi Verona
http://www.louigiverona.com/
> Message: 3
> Date: Sun, 3 Jan 2016 08:33:58 -0600
> From: "Jonathan E. Brickman" <jeb(a)ponderworthy.com>
> To: linux-audio-user(a)lists.linuxaudio.org
> Subject: Re: [LAU] MIDI over wifi on Linux, revisited
> Message-ID: <56893156.5080502(a)ponderworthy.com>
> Content-Type: text/plain; charset="windows-1252"; Format="flowed"
>
>
> <snip>
>
This is just a curious question from me on this topic.
I work in a theater that has a really crowded wifi spectrum and the
building is essentially a big metal box.
This includes a generic wifi node, all the wifi on the audience cell
phones, 2.4GHz body packs, and 2.4GHz crew comm.
We have noticeable dropout issues all the time.
I'd be afraid to run my MIDI over wifi...
PS: If anyone has a good solution to this sort of crowding, I'd love to
hear it. The best we've come up with is to disconnect the wifi node and
plead with the audience to use airplane mode. (Moving the directional
antennas for the body packs helped them, but still didn't completely
alleviate the issue.)
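One low-tech starting point for mapping the congestion (assuming a Linux
laptop with wireless-tools installed and an interface named wlan0 -- both
assumptions) is to survey which channels nearby access points occupy:

```shell
# Survey nearby access points and the 2.4GHz channels they sit on
sudo iwlist wlan0 scan | grep -E 'ESSID|Channel'
```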
Regards,
Mac