I'm the (un?)lucky owner of an M-Audio Fast Track Pro USB audio
interface and I'm having some serious problems getting this device to
record audio reliably under Linux.
I've been using arecord and occasionally Audacity for all of my
testing. My problem is this: Recording a take works about 80% of the
time. In the remaining ~20% of cases, the captured audio is extremely
loud with severe digital distortion. Once this problem shows up, it
persists for any subsequent takes. The only way I've found to make
the problem go away, at least temporarily, is to power-cycle the Fast
Track Pro.
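For reference, a typical test take on my end looks something like this
(the card number comes from "arecord -l" and will differ per machine, so
treat it as an example only):

    # record a 10-second stereo test take from the Fast Track Pro
    arecord -D plughw:2,0 -f S16_LE -r 44100 -c 2 -d 10 take.wav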
I considered the possibility that this particular device might be
defective, but it seems to work wonderfully under Windows.
I'm calling out to other Fast Track Pro users in the hope that someone
out there has encountered the same problem and, better still, found a
solution.
Any suggestions at all would be greatly appreciated!
.lewis
[Apologies for cross-postings] [Please distribute]
Save the date: Linux Audio Conference 2018
The 2018 Linux Audio Conference will take place at
c-base, Berlin
7th to 11th June 2018
LAC 2018 will feature a full program of talks, workshops and music.
The official call for papers and works will be sent shortly after the
winter break, so use the quiet of the holidays to think about possible
submissions.
All relevant information will be made available on:
http://lac.linuxaudio.org/2018/
The LAC 2018 Organizing Team
--
https://sleepmap.de
Hi everyone,
This is my first email to this list. Apologies in advance if I break any
conventions I am unaware of.
I am a long-time musician, but I am just getting acquainted with audio on
Linux. I have a very simple goal: getting sound from a plugged-in
instrument into my computer via a USB interface, the Cakewalk UA-25EX,
which I know works with Linux.
So far the instrument's output is received by the interface, and the
interface is recognized by the computer. I asked in the Ardour forums
how to get Ardour to recognize the USB interface as its audio input.
They suggested <https://community.ardour.org/node/15577> that I use ALSA
instead of JACK to connect to Ardour. Ardour's audio setup lets me select
the Cakewalk interface as input and output, but when I play the
instrument I can see the interface receiving the signal while no audio
reaches Ardour. If I change the input to my laptop's microphone, I do get
a signal. The problem, then, seems to lie in the connection between the
interface and Ardour (or the computer?).
I checked the alsamixer levels and they are not at 0. The advice from
the Ardour forum was to contact this mailing list, as I probably need
some help checking the connection of the USB interface to the computer.
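In case it helps, this is how I have been checking that the card shows up
as a capture device and trying a direct recording outside of Ardour (the
card number below is just an example from my machine):

    # list ALSA capture devices; the UA-25EX should appear here
    arecord -l

    # record 5 seconds straight from the card, letting ALSA convert the
    # format (replace plughw:1,0 with the numbers from the listing above)
    arecord -D plughw:1,0 -f cd -d 5 test.wav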
Thanks in advance
I am looking for impulse response files for ir.lv2, the only convolution
reverb I have found working within Ardour so far. I am looking for
natural, long, lush reverbs and thought I had found a good starting point
with openairlib.net.
However, all the big reverbs I have found so far produce a lot of
rhythmic clicking (or audible transients) during playback, rendering them
unusable. The shorter ones partly work, but I happen to be looking for
large spaces: cathedrals and the like. Why not the whole universe?
Any sources, or hints as to what I should look out for? Wrong file
format? Or something else I may be doing wrong? It does not seem to be a
CPU or load issue at all, and the settings I have experimented with so
far have not noticeably affected the problem.
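For what it is worth, one thing I plan to try is inspecting and
converting the files with sox before loading them (assuming sox is
installed; the file name and target rate are just examples):

    # show sample rate, channels and bit depth of the impulse response
    soxi cathedral_ir.wav

    # resample to the session rate (48 kHz here) as 32-bit float WAV
    sox cathedral_ir.wav -b 32 -e floating-point cathedral_ir_48k.wav rate 48000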
Any ideas?
Thanks
Hello! I am a long-time Linux user and I used to participate in the LAU
group many years ago. A while ago, I decided to do my music "out of the
box" with mostly analog gear.
I am returning once again to LAU. My goal here is to edit video from
several cameras, using different angles, together into one video, and to
use the WAV generated from the F8 as the audio. Actually I plan on having
some professional mastering done on the tracks from the F8, but I am
doing the video editing part myself.
I need to sync videos from multiple Android phone cameras that are
recording some "live" music instrument performances in my home studio.
The videos contain LTC audio on CH1 (sounds nasty!) and a scratch track
on CH2, both coming from the Zoom F8's outputs. The phone camera inputs
come from an iRig DUO over USB. The F8 is recording the performances via
its 8 inputs to a BWF WAV file, containing all 8 tracks, on a removable
SD card.
I can see the timecode metadata of the WAV using bwftool. I can see all
the tracks in Ardour when importing the WAV (latest release compiled from
source on openSUSE Leap 42.2) and also in Audacity.
About the LTC and timecode: lots of people use "Plural Eyes" and other
closed-source software to sync audio and video. I believe LAU will
understand why that is not an option for me. Really, though, I have a
personal interest in timecode, and some equipment that uses it, so I want
to try it.
Nice cameras like RED can accept timecode on a BNC input, or generate it.
Phone cameras, and even many Canon cameras AFAIK, do not have that
capability. While not ideal, I believe it is common to record the
timecode as LTC audio on one of the stereo tracks on the camera. These
performances are mostly under 5 minutes and I don't expect drift to be an
issue; if it is, I will need a better camera later on.
My idea was to start up XJadeo with the "-l" (LTC) option and receive
events via "mplayer -ao jack F8.WAV".
I have it backwards, though: the MP4 loaded into XJadeo has the LTC audio
track, while the WAV has the timecode metadata, and I have several MP4
files with LTC. Ardour doesn't read MP4 files, so I am stuck on the next
step here. I am a little weak in my video skills; I am trying to use this
project to learn more.
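One route I have been considering, assuming ffmpeg and the ltcdump
utility from ltc-tools are available, is to pull the LTC channel out of
each MP4 and decode its start timecode, then line the clips up against
the F8's WAV by hand (file names are just examples):

    # extract channel 1 (the LTC track) of the camera audio to a mono WAV
    ffmpeg -i cam1.mp4 -vn -af "pan=mono|c0=c0" cam1_ltc.wav

    # decode the LTC and print the timecode it carries
    ltcdump cam1_ltc.wav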
Any advice? Well, besides skipping timecode... I got that a lot already
on the "gearslutz" forum :-).
I've recently pinned down a number of oddities concerning these, and thought
what I've learned would be interesting to anyone working on their own
instrument patches.
The first thing to keep in mind is that amplitude envelopes (particularly
release time) set the point at which a note ceases. Frequency/filter envelopes
can be shorter, so their effect stops part way, but if they are longer, the
last part will be ineffective.
Across all three engines, and kits (if kit mode is active), it is
whichever is the longest that sets the overall time of the note, and you
may well hear the others stop if the times are sufficiently different.
Also, within AddSynth itself, it is whichever voice has the longest
envelope that sets the overall voice time, and if you set voices with
very different characteristics you can hear the shorter ones finish
before the overall sound stops. Bear in mind that each voice can also
have a start delay set, so you can get a late sound pickup that is then
the last bit you hear, even if it's quite short. However, if the start
time of one voice is after all the others have finished, it will never
sound.
This sort of idea works best with 'Forced Release' disabled.
An unexpected twist to this is that, taking the combined voice envelope
time against the main AddSynth envelope, although attack and decay times
follow the above pattern, it is whichever has the *shortest* release that
sets the AddSynth time as a whole. This can really catch you out!
With regard to the modulator amplitude envelopes: they don't change the
overall time, but if they are shorter than their voice's length (or that
of any voice the modulator is slaved to) the modulation may end a bit
strangely. If they are longer, then part of their action will be missed.
Finally, there is what I think is a bug (that goes back to Zyn 2.2.1). If
an AddSynth voice is enabled, its amplitude envelope time is active, even
if the envelope is apparently deactivated and not editable. Oh, and by
default all the voice times are quite long, so again you could be puzzled
as to why a sound is longer than you expected. This has always been
there, so I don't believe it should be changed; to do so would quite
likely alter many existing instrument patches, but do keep it in mind.
In the latest Yoshimi commit, there is a new instrument in my 'Companion' bank
called 'AddSynth Morph' that demonstrates some of these points - I think it
sounds nice too :)
--
Will J Godfrey
http://www.musically.me.uk
Say you have a poem and I have a tune.
Exchange them and we can both have a poem, a tune, and a song.
Has anyone got any experience of these?
Are they in fact any good?
I would imagine that running one at 96 kHz, 16-bit should give good
enough bandwidth and resolution for checking most audio kit.
Which rather begs the question: why are almost all digital scopes only
8-bit... unless you spend a fortune on them?
--
Will J Godfrey
http://www.musically.me.uk
Say you have a poem and I have a tune.
Exchange them and we can both have a poem, a tune, and a song.
Hello,
For a university project we're building a custom audio system with our own
input and amplifier.
We will most likely use an FPGA that communicates sound data over SPI to a
Raspberry Pi.
On the Raspberry Pi the sound can be further processed by, for example,
Sonic Pi.
Sonic Pi uses SuperCollider, which uses JACK, which uses ALSA.
At some point in this chain we need to be able to interface with our FPGA.
Initially I thought it would be easy to write a JACK client, and it is.
The problem with that seems to be that JACK is in control of the sampling
rate.
So if I read data from the FPGA into a buffer and the clocks drift, I get
overruns or underruns.
I found a few potential solutions.
What alsa_in and alsa_out do is resample between the two clocks. Maybe a
bit of work, but it definitely works (a sketch of how I imagine invoking
them is below).
There is some business about clockmaster in JACK, which seems to be
something different, but maybe I don't understand it.
There is a freerunning mode, which makes it OK to do I/O in the callback.
I'm not sure if this plays well with SuperCollider. It seems that in this
case the processing is driven directly by how fast I get data from the
FPGA, which is what I want.
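Here is roughly what I have in mind for the alsa_in/alsa_out route,
assuming the FPGA ends up exposed as an ALSA device (hw:FPGA is only a
placeholder name):

    # bridge the FPGA capture side into the running JACK graph,
    # resampling between the two clocks
    alsa_in -j fpga_in -d hw:FPGA -c 2 -r 48000 &

    # and the other direction, for playback back out to the FPGA
    alsa_out -j fpga_out -d hw:FPGA -c 2 -r 48000 &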
If all of the above turn out to be bad ideas, I need to look at a
different location in the chain.
It would make sense to write an ALSA driver for what is pretty much a
custom sound card.
However, it seems that writing an ALSA driver is orders of magnitude more
complex than registering a callback with JACK.
Any ideas what would be the easiest way to get sound from our FPGA into
SuperCollider and back?
Regards,
Pepijn de Vos
I can't recall ever seeing this discussed here.
I have always purchased non-ECC RAM because it is less expensive, but do
people have opinions or information on this? Does anyone here always
purchase ECC RAM?
Thanks.