Frank Barknecht:
> OSCServer.cpp:2: 'lo_address' is used as a type,
> but is not defined as a type.
Wasn't lo_address called something else in very old
versions of liblo? Maybe you have stale headers
lying around. Or maybe I'm inventing things.
Chris
What are people's favorite applications for splitting up large sound
files?
I have a bunch of IMA ADPCM .wav's that I want to burn to CD,
but often they're too large for a single CD.
BTW, I've been converting the files to .cdr on a linux system, and then
transferring that result to a windows machine for burning via a windows
version of cdrecord.
I'm interested in both CLI and GUI ways of doing this.
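On the CLI side, one way is a short Python script using the standard-library wave module (a sketch, assuming the files are first converted to plain PCM, e.g. with sox, since the wave module doesn't read IMA ADPCM; the function and file names are mine):

```python
import wave

def split_wav(path, chunk_seconds=600, prefix="part"):
    """Split a PCM .wav into fixed-length pieces.

    Hypothetical helper: writes part000.wav, part001.wav, ...
    each holding chunk_seconds of audio, and returns the names.
    """
    with wave.open(path, "rb") as src:
        params = src.getparams()
        frames_per_chunk = src.getframerate() * chunk_seconds
        index = 0
        names = []
        while True:
            frames = src.readframes(frames_per_chunk)
            if not frames:
                break
            name = "%s%03d.wav" % (prefix, index)
            with wave.open(name, "wb") as dst:
                dst.setparams(params)  # nframes is fixed up on close
                dst.writeframes(frames)
            names.append(name)
            index += 1
        return names
```

For CD-sized pieces you'd pick chunk_seconds so each part stays under the disc capacity at the file's byte rate.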
Thanks!
Hi. I'm writing to the list as opposed to the individual app owners
since I can't really tell where the bug is. I have a sequence in which I
want to use zynaddsubfx as the bass and another instance of zynaddsubfx
for synth strings. The problem I see is that when I add the bass
instance to the bass track, muse then grays out both instances (which
have the same name, so I assume that muse is going through the list
graying out any instance with the given name). I do notice in qjackctl's
connection screen that the two instances of zynaddsubfx midi out have
the same name although obviously they are on different ports. The
zynaddsubfx audio instances have different names. This could be a bug -
I'm not familiar enough with the jack naming conventions to definitively
state this though. However, when I do the same in rosegarden4, it works
fine. Rosegarden4 seems to key its list of softsynths with the name and
port number.
In conclusion, it seems that zynaddsubfx might be at fault in that
multiple instances advertise themselves with the same midi out name. I
tested several other softsynths. It seems that ams, spiral modular synth
and hydrogen also share this characteristic. AmSynth appends a "serial
number" to new instances in the midi client list and I tested that muse
does indeed work fine with multiple instances of AmSynth.
Muse might be at fault because it uses only the midi out name instead of
looking also at the port number. Given the number of softsynths that
follow the convention of identifying themselves by non-unique names, I
think muse should probably add the port number to the identifier to tell
instances of the same softsynth apart.
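A toy sketch of the difference between the two keying schemes (the client name and port numbers here are made up for illustration):

```python
# Two hypothetical sequencer clients as a host might see them:
# same client name, different ports.
clients = [("ZynAddSubFX", 128), ("ZynAddSubFX", 129)]

# Keying by name alone (what muse appears to do) collapses
# the two instances into one entry:
by_name = {name for name, port in clients}

# Keying by (name, port) (what rosegarden4 appears to do)
# keeps both instances distinct:
by_name_and_port = {(name, port) for name, port in clients}
```

With the first scheme both instances are indistinguishable, which would explain the graying-out behaviour; the second scheme tells them apart even when every softsynth advertises the same name.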
The Independent picked their top 10 free software projects:
http://news.independent.co.uk/media/article337369.ece
and dyne:bolic made the list. I think all Linux audio developers
deserve some credit for this.
Lee
Hi, I'm coding a VST host for Windows and Linux. The Linux version will
support VSTs compiled on Linux, without using Wine or anything. Of
course, there are not a lot of native Linux VST plugins around, but that
will change (I already made one :P )
There is one challenge though: event dispatching in X11. Unlike
Windows, X11 windows don't have an associated window proc for
dispatching events. I can overcome this in my own GUI toolkit by passing
a Display* pointer to the plugin etc., but it wouldn't work with other
GUI toolkits. So how do I make a solution that works with any toolkit on
Linux?
1) the plugin calls its own event loop in effEditIdle
2) make a new atom "wndproc" for storing a wndproc function per window;
the host will send XEvents to the wndproc if found.
I prefer 1), but I don't know whether toolkits support calling the event
loop manually?
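Option 2) amounts to the host keeping a per-window handler lookup and forwarding events to it. Here is a conceptual stand-in in Python (in real X11 code the function pointer would live in a window property named by the atom, and the events would be XEvents; the names and placeholder types here are mine):

```python
# Host-side registry: window id -> "wndproc" callable.
wndprocs = {}

def set_wndproc(window_id, proc):
    """Plugin registers a handler for its editor window."""
    wndprocs[window_id] = proc

def dispatch_event(window_id, event):
    """Host forwards an event to the window's wndproc, if any.

    Returns True if a handler consumed the event, False if no
    wndproc was registered (the host then ignores the event).
    """
    proc = wndprocs.get(window_id)
    if proc is not None:
        proc(event)
        return True
    return False
```

The design question is the same one raised above: this works only if every toolkit the plugin might use is willing to have its events fed to it one at a time, rather than running its own blocking loop.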
cheers
jorgen
Hi everyone,
The Internet-Drafts for the RTP MIDI protocols
for sending MIDI over IP are now in "IETF Last Call" --
this is a process where the Internet Engineering Steering
Group (IESG) solicits comments from the community at large,
before making a decision on whether the protocol should
be blessed with standards-track RFC status.
See below for information on how to send comments
to the IESG (don't send them to me directly -- I can't pass
them on). Thanks!
-----
From: The IESG <iesg-secretary(a)ietf.org>
Date: January 6, 2006 8:46:17 AM PST
To: IETF-Announce <ietf-announce(a)ietf.org>
Cc: avt(a)ietf.org
Subject: Last Call: 'RTP Payload Format for MIDI' to Proposed Standard
Reply-To: iesg(a)ietf.org
The IESG has received a request from the Audio/Video Transport WG to
consider
the following documents:
- 'RTP Payload Format for MIDI '
<draft-ietf-avt-rtp-midi-format-14.txt> as a Proposed Standard
- 'An Implementation Guide for RTP MIDI '
<draft-ietf-avt-rtp-midi-guidelines-14.txt> as an Informational RFC
The IESG plans to make a decision in the next few weeks, and solicits
final comments on this action. Please send any comments to the
iesg(a)ietf.org mailing lists by 2006-01-20.
The file can be obtained via
http://www.ietf.org/internet-drafts/draft-ietf-avt-rtp-midi-format-14.txt
http://www.ietf.org/internet-drafts/draft-ietf-avt-rtp-midi-guidelines-14.txt
---
John Lazzaro
http://www.cs.berkeley.edu/~lazzaro
lazzaro [at] cs [dot] berkeley [dot] edu
---
Hi folks.
This message does get into driver specifics to an extent, but I'm
mostly coming to the list for advice on how to find what "mystery
codec" is being used.
I've been putting some of the tiny bit of actual freetime I get during
this winter holiday :), into trying to get a libusb-based driver going
for my little Olympus VN480PC. It's a digital voice recorder that comes
with a USB cable for transferring voice recordings to a computer, and
the accompanying software is windows only.
I'd very much like to be able to do this transfer using linux instead of
needing windows though.
I think I -might- have the protocol mostly figured using a USB sniffer
on the windows side, but that may prove to be the easy part of this
project. :-S
What I'm faced with now is 10K of data which I'm assuming just about
has to be the voice data in some format or other - but it isn't clear
what format it's in.
Toward trying to figure out the format, I've:
1) Computed the difference in the lengths of the apparently-voice data
transferred via USB, and the "data" section of the resulting .wav file.
They do not differ by a constant - so it's not just a matter of tacking
on a header.
2) Computed the quotient of the lengths of the apparently-voice data
transferred via USB, and the "data" section of the resulting .wav file.
They do not differ by a constant factor - so it's not just a matter of
repeatedly converting, for example, shorts to reals, in which case the
lengths, I would think, should differ by a constant factor of 2.
3) Symlinked a file containing the data transferred via USB, to all of
the file extensions known to sox, and attempted to use sox to convert
those files to .wav. None of the conversions succeeded.
4) Wrote a small python program to treat the data transferred via USB as
data to be stuffed into the "data" section of a .wav file, and created a
series of .wav files with all format types from 0 to 999. sndfile-info
did not give errors for 9 of these, but none of them look or sound right
in gnusound.
5) Googled about olympus and voice/audio codecs, to see if there is a
proprietary one they favor. It appears they were involved in the design
of the "DSS" format.
6) Downloaded "DSS Player Lite" from Olympus' web site, and copied the
data transferred via USB to "hi.dss". However, DSS Player Lite did not
recognize the file format.
Does anyone have any thoughts about what else I might try to see what
format this data is in, and/or convert it to a known format?
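For anyone who wants to repeat the experiment in step 4, the brute-force wrapping can be sketched like this (a hypothetical, minimal container: a real ADPCM header would also carry an "extra data" section in the fmt chunk and a "fact" chunk, omitted here, and the sample-rate/bit-depth guesses are mine):

```python
import struct

def wrap_in_wav(raw, fmt_tag, channels=1, rate=8000, bits=4):
    """Wrap raw bytes in a minimal RIFF/WAVE container with the
    given format tag, so tools like sndfile-info can probe it."""
    block_align = max(1, channels * bits // 8)
    byte_rate = rate * block_align
    # fmt chunk body: tag, channels, sample rate, byte rate,
    # block align, bits per sample (all little-endian).
    fmt = struct.pack("<HHIIHH", fmt_tag, channels, rate,
                      byte_rate, block_align, bits)
    data = struct.pack("<4sI", b"data", len(raw)) + raw
    body = b"WAVE" + struct.pack("<4sI", b"fmt ", len(fmt)) + fmt + data
    return struct.pack("<4sI", b"RIFF", len(body)) + body
```

Looping fmt_tag over candidate values and writing each result to disk reproduces the series of test files described above.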
I've got detailed documentation of most of what I've done so far on this
project at http://dcs.nac.uci.edu/~strombrg/VN480PC/ The page includes
some .wav's, a binary file I'm assuming is voice data in a mystery
codec, full USB sniffer logs, and so on.
Does anyone have any suggestions - especially toward how to convert that
"likely voice data" in the USB Sniff to some sort of known and
supported-on-linux codec?
Thanks!
Hi!
Seems like the father of FM synthesis has joined Wikipedia. Some of you
guys might care to take a brief look at the FM synthesis page, just once
in a while, so it won't get vandalised again?
--
mvh // Jens M Andreasen
Florian Schmidt writes:
> I further assume that the alsa seq event system
> is used
This is true of Rosegarden,
> and midi events are not queued
> for future delivery but always delivered immediately.
but this isn't -- Rosegarden always queues events
from a non-RT thread and lets the ALSA sequencer
kernel layer deliver them. (Thru events are delivered
directly, with potential additional latency because of
the lower priority used for the MIDI thread.) In
principle this should mean that only the priority of
the receiving synth's MIDI thread is significant for
the timing of sequenced events. We also have a
mechanism to compensate for gradual drift between
the MIDI timing source (kernel timers or RTC) and
soundcard clock, when synchronising to audio, by
adjusting the sequencer skew factor. (This happens
to be similar to the mechanism for slaving to MTC,
which is handy.)
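The drift-compensation idea can be sketched numerically. This is my guess at the shape of such an update rule, not Rosegarden's actual code; the names and the gain value are made up:

```python
def update_skew(skew, timer_elapsed, audio_elapsed, gain=0.1):
    """Nudge the sequencer skew factor toward the ratio of elapsed
    soundcard time to elapsed MIDI-timer time, so queued events
    gradually track the audio clock instead of drifting."""
    if timer_elapsed <= 0:
        return skew  # nothing measured yet; leave skew alone
    target = audio_elapsed / timer_elapsed
    return skew + gain * (target - skew)
```

Applied once per measurement interval, the skew converges on the true clock ratio; the gain trades convergence speed against jitter in the measurements, which is presumably also the trade-off when slaving to MTC.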
In my experience this is all a long way from
foolproof. The most common problems for users
seem to be:
- ALSA sequencer uses kernel timers by default and
of course they only run at 100 or 250Hz in many
kernels.
- ALSA sequencer can sync to RTC, but the
associated module (snd-rtctimer) appears to hang
some kernels solid when loaded or used. I don't have
much information about that, but I can probably find
out some more.
- ALSA sequencer can sync to a soundcard clock,
but this induces jitter when used with JACK and has
caused confusion for users who find themselves
inadvertently sync'd to an unused soundcard (the
classic "first note plays, then nothing" symptom).
The biggest advantage of course is not having to run
an RT MIDI timing thread. My impression is that this
aspect of MusE (which does that, I think) causes
as many configuration problems for its users as using
ALSA sequencer queue timers does for Rosegarden's.
Any more thoughts on this?
Chris