Greetings!
I've been messing with MusE a lot recently, and there is one
thing I haven't been able to get working completely. That
thing is, you've guessed it, MIDI syncing.
I own a DR-880 drum machine which I connect via a USB cable
to my computer and use as a MIDI device in the QjackCtl patchbay.
I then proceed to add the DR-880 under Midi Ports/Soft Synths in MusE, after
which Midi Ports/Soft Synths shows an instrument named "generic midi"
and a device named "DR-880" followed by "Play: Device or resource busy".
I don't understand why MusE says it's busy when nothing is playing
on it.
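Just to rule things out, I suppose I could also check whether some other
program has already grabbed the raw MIDI device. If I understand it right,
something like the following should show who is holding it (the midiC1D0
device node is only a guess on my part):
ls /dev/snd/
fuser -v /dev/snd/midiC1D0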
I set up MIDI Sync so that MusE is the master, turn on "MIDI Clock"
and "MIDI Machine Control", and set the port to 3 because that's the port
I used in Midi Ports/Soft Synths for the DR-880.
So, finally, when I press play in MusE the DR-880 starts playing.
Yay, great. But my enthusiasm didn't last long, as I soon
found out that when I skip playback in MusE to a certain point
or rewind it back, the DR-880 just keeps on playing.
And this is EXTREMELY annoying and discouraging, especially
for long songs, where I have to play them from the start each time
just to record a MIDI solo on my keyboard at the last minute,
and it also makes music making tedious and boring.
Now, what am I missing? Is MusE supposed to work this way and only
sync with a MIDI device when played from the start? Or, more probably,
have I misconfigured something, or is my lack of knowledge causing this?
I kindly ask for any sort of help or some light on my dark path
of playing the song from start to end each time I want to
record a new part or edit it.
Note: I figured out that I could place markers at certain points
of the track where a pattern on the DR-880 starts, and then
manually rewind the DR-880 to that part when I want to
start from a particular point in the song, but this is also
time-consuming and driving me mad.
Sorry for the extremely long mail but I am starting to lose
my patience with this particular issue.
Thnx, Hiram.
--
"I happen to think that computers are the most important thing to
happen to musicians since the invention of cat-gut which was a long
time ago. "
Robert Moog
> in terms of automating the alignment, it seems that ardour should
> probably add a new feature: "position sync point at loudest sample".
> then you just make sure you have a loud reference point in each file,
> set up the sync points, and then align them all to a single reference
> region.
>
> alas, it does not have that feature at this time.
Hi, Paul,
My worst days in audio and audio for video usually involved the kind of
"wild sync" that is being discussed here.
If the drift between channels is small enough to avoid problems, it will
be easy enough to align by hand. If not, it won't help anyway.
If you're ever going to marry the audio to video -- and the audio is
longer than a minute or two -- then God have mercy on your soul! One wild
clock is bad enough; two isn't even funny.
Loudest sample is a criterion that can result in false positives --
especially if the recording is a field recording, not a studio recording.
There is a point -- reached quite soon -- where it would be better to take
the two sources back to analog, so that they can be rerecorded in the
studio with a single A/D, than to monkey with them in their asynchronous
state.
Be a pal, and remember that good habits don't have to be broken! "One clock
to rule them all..." ;) Don't encourage sloppy sync! Save yourself the
work, and add a cooler feature instead -- like "Auto-Improve Lyrics"!
I've been playing with Ardour, but I'm not sure whether it has the equivalent
of the "Sync to Mark" that I used to use in Sonic; that's a nice compromise
between automated and manual.
Cheers,
Phil M
--
Dept. of Mathematics, 342 Machray Hall
U. of Manitoba, Winnipeg, Manitoba, Canada R3T 2N2
Office: 446 Machray Hall, 204-474-6470
http://www.rephil.org/ phil at rephil dot org
> Date: Tue, 7 Mar 2006 14:22:48 +0100
> From: "Alex Polite" <notmyprivateemail(a)gmail.com>
> Subject: [linux-audio-user] Aligning audiofiles for different
> recorders.
> Howdy
>
> I make a lot of interviews. Right now I have a two channel recorder. I
> connect one headset mic to each channel, put one on myself and one on
> the interviewee. Works kind of alright but I would rather have the
> interviewee wear one recorder and wear another myself. This gives us
> more freedom to move around during the interview.
>
> The tricky part will be to align the two separate recordings. I could
> probably do it manually in ardour, dragging the regions back and forth
> and stretching them until they line up exactly.
>
> I've googled a bit to find a tool that does this aligning
> automatically but haven't come up with anything.
>
> Does anybody out here know of something?
You could try using a clapper board, or anything that makes a sharp
click.
It should get picked up by both mics. Then, when you come to align the
two recordings, you just line up the clicks at the start and they will
be in sync.
If both recorders are digital then they should stay in sync for quite a
while. You should not need to do any time stretching.
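To put a rough number on it: if each recorder's clock is accurate to within,
say, 50 ppm (a guess at typical consumer hardware), the worst-case relative
drift is about 100 ppm, which works out to roughly 6 ms per minute, or about
a third of a second over an hour. For an interview of ordinary length that
should be well within tolerance; only over very long sessions does it start
to add up.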
This is the old way of getting two recorders (a camera and location
sound) to sync, and works quite well. It's still used nowadays when
people are too cheap (like me) to do proper time code.
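Once you have spotted the click in each file, trimming the extra head off
each recording is a one-liner in sox. For example (filenames and click
offsets here are made up), if the click sits at 1.234 s in your file and
2.345 s in the interviewee's:
sox mine.wav mine-aligned.wav trim 1.234
sox interviewee.wav interviewee-aligned.wav trim 2.345
Both files then start right at the click and can be dropped into ardour at
the same position.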
>
> alex
> I make a lot of interviews. Right now I have a two channel recorder. I
> connect one headset mic to each channel, put one on myself and one on
> the interviewee. Works kind of alright but I would rather have the
> interviewee wear one recorder and wear another myself. This gives us
> more freedom to move around during the interview.
>
> The tricky part will be to align the two separate recordings. I could
> probably do it manually in ardour, dragging the regions back and forth
> and stretching them until they line up exactly.
It's a lot easier to synchronize the word clocks when recording!
The best solution is *one* wireless mic (for the interviewee), and you get
to keep the recorder, your own mic, and the receiver. That way you can
both still walk around.
With two recorders, you really want a way to share word clock. You might
get away without it, but if you *do* develop a problem, it will
_really_ drive you nuts in the editing room!
If you have the budget for a second recorder, I'd get one to have a spare (or
a redundant recording!).
Cheers,
Phil M
--
Dept. of Mathematics, 342 Machray Hall
U. of Manitoba, Winnipeg, Manitoba, Canada R3T 2N2
Office: 446 Machray Hall, 204-474-6470
http://www.rephil.org/ phil at rephil dot org
Greetings,
So here comes the time for another public release of the (cute) JACK Audio
Connection Kit - Qt Interface: QjackCtl 0.2.20 is out!
Just as one can read from the change log:
- Server path setting now accepts custom command line parameters (after a
kind suggestion from Jussi Laako); see the example command line below.
- The internal XRUN callback notification statistics and reporting have
been changed to be a bit less intrusive.
- Patchbay socket dialog gets some more eye-candy as icons have been added
to the client and plug selection (combobox) widgets.
- Connections and patchbay lines coloring has changed just slightly :)
- New patchbay socket forwarding feature. Any patchbay socket can now be
set to have all its connections replicated (i.e. forwarded) to another
one, which will behave actively as a clone of the former. Forward
connections are shown by vertical directed colored lines, and can be
selected either on socket dialog or from context menu (currently
experimental, only applicable to input/writable sockets).
- Optional specification of alternate JACK and/or ALSA installation paths
at configure time (after a patch from Lucas Brasilino, thanks).
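As an illustration of the new server path setting, the whole command line
can now be entered directly; the parameters below are just an example and
should of course be adapted to one's own setup:
/usr/bin/jackd -R -dalsa -dhw:0 -r44100 -p128 -n2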
Available from the usual places:
http://qjackctl.sourceforge.net
http://sourceforge.net/projects/qjackctl
Enjoy.
--
rncbc aka Rui Nuno Capela
rncbc(a)rncbc.org
I was once able to convert an 8K-samples-per-second unsigned
audio file, gotten by sending /dev/dsp >thefile, to a 44,100-samples-per-
second stereo .wav file suitable for burning onto an audio CD. I
found what I believe to have been the command I used, in the form of a
shell script, but I get the following errors:
sox: Do not support unsigned with 16-bit data. Forcing to Signed.
sox: Invalid options specified to avg for this channel combination
The script is:
#! /bin/sh
sox -r8000 -t ub cdda.ub -t wav -c 2 -w -r44100 output.wav resample .95 avg 1,1
Just for laughs, I removed the avg effect and actually got a
.wav file which was the correct pitch and all, but which was full of
clicks and missing pieces of sound, obviously not usable.
According to the sox manual, the -r8000 isn't actually
necessary, since sox defaults to an 8K sample rate.
The only thing I can think of is that I have forgotten some
step that I put into the script originally, or that sox has tightened
up some of the syntax in the last couple of years. All I remember is
that the resulting CD worked and didn't sound any worse than the
original file gotten from /dev/dsp.
The avg effect makes the levels correct for both channels which,
in this case, are the same.
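My best guess at what the command ought to look like these days (assuming
the raw file really is mono, unsigned 8-bit, at 8000 samples per second,
and letting the -c 2 on the output do the mono-to-stereo duplication
instead of avg) would be something like:
sox -t ub -r 8000 -c 1 cdda.ub -t wav -s -w -c 2 -r 44100 output.wav resample
but I haven't been able to confirm that this matches what I originally had.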
Thanks for any help.
Martin McCormick WB5AGZ Stillwater, OK
Systems Engineer
OSU Information Technology Department Network Operations Group
Hi LinuxSamplers!
I've just seen a nice gigasample library and I wonder if it could be used
with LS?
For those who want to have a look themselves:
http://www.postpiano.com
Search the products under pianos for "OLD LADY".
The library has, I think, 12 velocity layers; the same goes for the pedal-down
and key-release samples. It _CAN_ also use IR to give body resonance and
concert-hall reverb (but that's not so important).
Anyone any ideas about that?
Kindest regards
Julien
--------
Music was my first love and it will be my last (John Miles)
======== FIND MY WEB-PROJECT AT: ========
http://ltsb.sourceforge.net - the Linux TextBased Studio guide
Hello,
I'm looking for a web-based application that supports working on and
discussing music and pieces within a band. What I would like to
do with such an app:
- Upload mp3 takes which have been recorded during rehearsals or
sessions.
- Click somewhere in a timeline based image (e.g. amplitude/time or
spectrum/time) and the playback starts there.
- Define views on parts of an mp3 recording, because they are
sometimes several hours long and often contain different songs and
lots of talk
- Attach comments (from different musicians) to certain parts or
timestamps of a recording
- Discuss pieces, possibly with the inclusion of some staves (which
should be rendered on the server).
- All editing should be stored in a versioning repository, like most
wikis do.
- It should be usable by musicians who don't have a degree in
information sciences and who don't want to spend too much time on
the platform itself. Perhaps something like a wiki for musicians.
- Hardware requirements on the user side should not be more than a web
browser (graphics based) and an mp3-playing app that is capable of
playing back streams.
What I'm _not_ looking for:
- Something like Ardour with a web frontend (OK, it wouldn't be a
showstopper if its features were available as well).
So, before starting to write something like that on my own from scratch,
I would like to know if there is some similar project out there that
needs some contribution instead and that would be usable in
the short term.
Thanks for reading,
Yours,
Jacob
I've been trying to get my Radium USB keyboard to work for
days now; it's not nice.
Where could the problem be? Well... I don't have a
modules.conf or conf.modules file in /etc. There is a
modprobe.conf, and I've been messing with it.
Second, in a console:
# amidi -l
Device Name
and nothing more.
# aconnect -i -o
gives me client 0: 'system' [type kernel]
and client 62: 'midi through' [type kernel]
and nothing more...
I am using ALSA for my Delta 66 and it is doing fine.
My Fedora Core 3 with Planet CCRMA is doing fine too.
When ALSA is started, it loads snd-usb-audio as it
should. And I have hotplug installed.
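I guess the next things I can try are checking whether the kernel sees
the keyboard at all and loading the driver by hand; if I have the
commands right, something like:
lsusb
cat /proc/asound/cards
/sbin/modprobe snd-usb-audio
amidi -l
aconnect -i -o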
Help?
thanx
Renato