Hi fellow LAUs,
Over the past summer I had the opportunity to co-produce an EP in our new
'Bandshed' studio with the band 'Evergreen', which my son drums in. It was
a great learning experience, and they are a talented and funny bunch of
guys aged 17-19. They brought the songs and performances, and I did the
recording, mixing and mastering... Unsurprisingly, we used AV Linux 2016
and various builds of Mixbus 3 to complete this project, and the main
interface used was a PreSonus 1818VSL.
This is the first released single, 'I'll Get To You'. It has a curious
balance of a super-tight rhythm section and bare-knuckled bar-fight vocals
and guitars, precisely the kind of thing that makes these guys unique and
fun to work with...
https://soundcloud.com/evergreen-548575357/ill-get-to-you
Dear list,
I'm happy to announce two open positions in our group at Oldenburg
University and partner institutions:
NIH-funded software engineer for 5 years (audio signal processing,
computer science, communications engineering, or similar)
at HoerTech gGmbH, a transfer institution of Oldenburg University, see
http://www.hoertech.de/de/hoertech/karriere.html for details, deadline
Sept. 30th
NIH-funded PostDoc position until 06/30/2021
in the field of Signal Processing for Hearing Devices at Oldenburg
University, see http://www.uni-oldenburg.de/stellen/?stelle=65009 for
details, deadline Sept. 20th
The software engineer position in particular is based on open-source
audio software development on Linux; we presented the basis of that
position at LAC2009:
http://lac.linuxaudio.org/2009/cdm/Friday/07_Grimm/07.pdf
Our group on Auditory Signal Processing (head: Prof. Volker Hohmann)
https://www.uni-oldenburg.de/en/auditory-signal-processing/ is active in
bridging the gap between basic research on perceptual principles in
hearing and applications in hearing devices. The group offers a creative
and collaborative research environment within a large hearing research
cluster http://hearing4all.eu/EN/
Best regards,
Giso
Hi,
We are working on a new project focused on children's education and
language-learning tools.
RauRau TV
http://raurau.org
https://www.youtube.com/channel/UC6uUlBbtctR6jIzl9eV6iww
Of course everything is done with Linux Multimedia tools.
We need some community support so that the YouTube search-ranking
algorithm(s) will give priority to the channel. It would be very helpful
if y'all could like/subscribe/comment/share/link to the channel.
In return I will be happy to do the same for your videos and assist with
promoting via the "Linux Music Videos" playlist.
https://www.youtube.com/playlist?list=PLGRkb-jpsg0XYGVkE1zpJzQtwrEGKBYO6
and also at the Linux Music Videos blog: http://videos.linux-audio.com
Please contact me directly or leave a comment on the videos and I will
link back to you.
- It should go without saying, but if you want to collaborate (with
profit share) on new content for the channel, please contact me directly.
--
Patrick Shirkey
Boost Hardware Ltd
Dear LAU friends!
A long time ago I acquired a Korg X5DR synth module. I then wrote a
simple MIDI sequencer that wrote stuff out to the module via the serial
port (the X5DR has a 56Kb serial input in addition to the conventional
32Kb MIDI ports). This worked fine and I had lots of fun playing it.
Things moved on and my machines now no longer have serial ports, but a
USB/serial adaptor cable got me over that one.
More recently again, things have moved on a lot, and after a certain
amount of trial and error I managed to adapt my sequencer to talk to
Yoshimi via JACK, using the ALSA routines and QJackCtl to set up the
links. One odd thing I noticed was that QJackCtl shows my program's
output port and the Yoshimi input ports on the ALSA page, rather than
the MIDI page. I now want to connect up the Calf Fluidsynth, but that
shows its input ports on the MIDI page.
Is anyone with enough know-how of programming the ALSA interfaces
willing to sort out my problem?
My present code for setting up the port has:

    out_port = snd_seq_create_simple_port(seq, "MFE: output port",
                   SND_SEQ_PORT_CAP_READ | SND_SEQ_PORT_CAP_SUBS_READ,
                   SND_SEQ_PORT_TYPE_APPLICATION);

which sounds odd (READ?), but I gather the capability flags actually
describe what the far end may do with the port, rather than the near end.
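For context: QJackCtl's ALSA page lists ports on the ALSA sequencer
graph, while the MIDI page lists JACK MIDI ports, a separate transport.
A port made with snd_seq_create_simple_port() always lands on the ALSA
page. Here is a minimal sketch of such a port setup, additionally tagged
SND_SEQ_PORT_TYPE_MIDI_GENERIC so that tools recognize it as a MIDI port
(the client name "MFE" is taken from the snippet above; everything else
is illustrative, not the original program):

    /* sketch: ALSA sequencer client with one readable (output) port */
    #include <alsa/asoundlib.h>

    snd_seq_t *seq;
    int out_port;

    int open_seq(void)
    {
        /* open the sequencer for output and name the client */
        if (snd_seq_open(&seq, "default", SND_SEQ_OPEN_OUTPUT, 0) < 0)
            return -1;
        snd_seq_set_client_name(seq, "MFE");

        /* READ/SUBS_READ: other clients may read from (subscribe to)
           this port, i.e. it is an output from our point of view */
        out_port = snd_seq_create_simple_port(seq, "MFE: output port",
                       SND_SEQ_PORT_CAP_READ | SND_SEQ_PORT_CAP_SUBS_READ,
                       SND_SEQ_PORT_TYPE_MIDI_GENERIC |
                       SND_SEQ_PORT_TYPE_APPLICATION);
        return out_port;
    }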
I'd like to be able to use this with Fluidsynth and Yoshimi at
different times, but I guess I'd need a command-line option to vary the
code to get the two different classes of outputs.
Any pointers to something that can explain the difference between the
two categories would be most helpful.
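In case it helps: the second category is JACK MIDI, where a port is
registered directly with JACK rather than with the ALSA sequencer
(a2jmidid can also bridge an existing ALSA sequencer port into the JACK
graph). A minimal, hypothetical sketch of opening such a port; the
client and port names are illustrative:

    /* sketch: register a JACK MIDI output port (the other category) */
    #include <jack/jack.h>
    #include <jack/midiport.h>

    int main(void)
    {
        /* connect to the JACK server as a client named "MFE" */
        jack_client_t *client = jack_client_open("MFE", JackNullOption, NULL);
        if (client == NULL)
            return 1;

        /* this port appears on QJackCtl's MIDI page, not the ALSA page */
        jack_port_t *port = jack_port_register(client, "output",
                                JACK_DEFAULT_MIDI_TYPE,
                                JackPortIsOutput, 0);
        if (port == NULL)
            return 1;

        /* a real client would install a process callback with
           jack_set_process_callback() and fill MIDI buffers there */
        jack_activate(client);
        /* ... run ... */
        jack_client_close(client);
        return 0;
    }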
Many thanks
Bill
--
+----------------------------------------+
| Bill Purvis |
| email: bill(a)billp.org |
+----------------------------------------+
Dear linux-audio-user list,
I am using pico2wave on Debian 8 to synthesize voice from text. I
encounter the problem that the WAV file it produces clips, regularly
producing harsh scratching sounds. I've downloaded the source code of
the Debian packages "libttspico0" and "libttspico-utils" (containing
pico2wave), but I am an absolute layperson (I'm not even sure how to
install the modified source as a Debian package, but I'll address that
problem later on...). Can any of you tell me which line of code I have
to edit to lower the volume of the synthesized voice (and thereby
minimize the clipping)? (Or maybe, as an alternative, how to increase
the maximum volume capability of the WAV file?)
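One possible workaround that avoids patching the library is to
attenuate the rendered file afterwards. A minimal sketch, assuming the
canonical 44-byte WAV header, 16-bit little-endian PCM (which pico2wave
writes) and a little-endian host; the file names are illustrative:

    /* sketch: scale 16-bit PCM samples in a WAV file by a fixed gain */
    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        const float gain = 0.5f;            /* roughly -6 dB */
        FILE *in  = fopen("in.wav",  "rb"); /* illustrative names */
        FILE *out = fopen("out.wav", "wb");
        if (!in || !out)
            return 1;

        /* copy the 44-byte header unchanged (assumption: plain PCM
           header with no extra chunks) */
        uint8_t header[44];
        if (fread(header, 1, 44, in) != 44)
            return 1;
        fwrite(header, 1, 44, out);

        /* scale every sample; casting back to int16_t keeps range */
        int16_t sample;
        while (fread(&sample, sizeof sample, 1, in) == 1) {
            sample = (int16_t)(sample * gain);
            fwrite(&sample, sizeof sample, 1, out);
        }
        fclose(in);
        fclose(out);
        return 0;
    }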
Any help is appreciated. Apart from the clipping, the pico voices are
quite agreeable to listen to, so I really want to get this done.
Regards!
Amie
Hello,
Does anyone know of a good plugin that will generate subharmonics?
I would like to put a little more low-frequency "oomph" into my bass
track. Preferably LADSPA, but VST would work, too.
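For anyone curious about the underlying trick: many octave-down and
subharmonic effects multiply the signal by a square wave that flips once
per input cycle, which puts energy at half the input frequency. A naive
per-sample sketch of that general idea, not any particular plugin's
algorithm (mixing and filtering are left out):

    /* sketch: naive octave-down via polarity flip once per input cycle */
    #include <stddef.h>

    void sub_octave(const float *in, float *sub, size_t n)
    {
        static float prev = 0.0f;
        static float flip = 1.0f;

        for (size_t i = 0; i < n; i++) {
            /* toggle on each rising zero crossing: one flip per cycle
               yields a square wave at half the input frequency */
            if (prev <= 0.0f && in[i] > 0.0f)
                flip = -flip;
            prev = in[i];
            sub[i] = flip * in[i];   /* ring-mod puts energy at f/2 */
        }
    }

In practice you would low-pass the result and blend it under the dry
signal; on polyphonic material the zero-crossing tracking gets
unreliable, which is why dedicated plugins do considerably more work.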
Thanks for any help!
-TimH
Maybe not the prettiest, but this bird sure knows how to create great
sounds!
First of all, we have a new quick guide that's in Yoshimi's 'doc' directory.
It's just something to help new users get started.
We've always logged warnings if it wasn't possible to run either audio or MIDI,
but now we also give a GUI alert.
From this version onward it is possible to autoload a default state on startup,
so you have Yoshimi already configured exactly as you like, with patches loaded
and part destinations set etc.
To make it easier to position program change CCs in a MIDI file, there is a new
option to report the time these take to load.
Vector control settings are now stored in patch set and state files.
We implemented a simpler way to perform channel switching, so the
'current' MIDI instrument can seem to change instantly while retaining
the note tails of the previous one.
All the usual background improvements.
When installed, full details are in:
/usr/local/share/doc/yoshimi/Yoshimi_1.4.1-features.txt
To build Yoshimi, fetch the tarball from either:
http://sourceforge.net/projects/yoshimi
Or:
https://github.com/Yoshimi/yoshimi
Our user list archive is at:
https://www.freelists.org/archive/yoshimi
--
Will J Godfrey
http://www.musically.me.uk
Say you have a poem and I have a tune.
Exchange them and we can both have a poem, a tune, and a song.
I have a Fireface UCX, but I never actually use it to its full
potential. I now have an opportunity to sell it, and was wondering what
I should get instead. Requirements: USB, 4 channels, Linux-compatible,
low latency.
Thanks for your suggestions.
Since I was a little boy, I have loved fuzz.
A lot has changed over time. One could say I'm established these days;
true, maybe. One could say I'm no longer the revolutionary from the old
days; that may be true as well. But this one thing, for sure, has
remained: I love fuzz. :-)
https://github.com/brummer10?tab=repositories
regards
hermann
Hi folks!
Just passing along the release of AV Linux 2016.8.30; if you're
interested, please take a look at the Release Announcement.
http://bandshed.net/forum/index.php?topic=3827
Best Regards, Glen MacArthur