On Sun, Mar 27, 2005 at 09:39:31AM -0800, Matt Wright wrote:
> When we wrote that part of the OSC Spec, we were thinking
> of the case in which an OSC Method doesn't need to know
> the address through which it was invoked, i.e., "usual"
> cases like setting a parameter. That's why the spec
> doesn't mention sending either the expanded or unexpanded
> OSC address to a handler --- sorry about that.
>
> Why not simply always send both? That seems more general
> and easier to understand than a special case, at least for
> me.
Well, that would require changing the API, which is a Bad Thing, and there
is a user_data parameter that can encode that kind of contextual
information when it's needed. Also, the method callback functions are too
complicated already :)
- Steve
(Forwarding to LAD)
cheers,
Christian
--------------------------------------
I've created a mailing list for the discussion of defining an open
instrument standard. So far the agreement seems to be to create an XML
standard which references external audio files. The use of FLAC has
also been mentioned. All who are interested may join at the following
link:
http://resonance.org/mailman/listinfo/open-instruments
The address to post is:
open-instruments at resonance.org
Archives:
http://resonance.org/pipermail/open-instruments/
If you feel another email list should be notified, please send this
information on. Perhaps CC the new list so that others may check which
lists have already been notified (check the archives). At this point I
have sent this to:
swami-devel
linuxsampler-devel
fluid-dev (FluidSynth devel list)
Best regards,
Josh Green
re all,
i'm writing a program that can read a C++ header and generate code to
expose the public functions of the classes it finds: the generated code
can be compiled into the original app, which should also link to a
library; then, with 4 new lines of code, OSC and XMLRPC servers will be
active, accepting remote calls to those functions.
in fakiir, i make use of liblo-0.18, also announced on this list.
the software is in an early phase of development, but as of today it is
able to accept concurrent OSC and XMLRPC calls in the testclass.cpp
application that is bundled with the source.
http://fakiir.dyne.org
i'm happy to hear any comments, suggestions or criticism.
ciao
--
jaromil, dyne.org rasta coder, http://rastasoft.org
Hi,
If playing a sound file that has a different framerate from Jack, using
libsamplerate, should I:
- convert in real time, in the process callback?
- convert the whole file into memory when loading it?
Actually, I've already coded the second option, but I just discovered the
jack_set_samplerate_callback() method, which seems to require the first
option... Is it important to support framerate changes while rolling?
--
og
Thanks for the replies everyone! :)
I was afraid that the computers would not be able to handle more than 8
channels of recording, which is why I was limiting them that way. How
many channels can one computer handle? What kind of specs (i.e.
processor/RAM) would that computer need? What interfaces can handle more
than 8 analog mics? I would need to record about 32 mic inputs.
Also, OpenMosix would be disabled while recording, and would only be
active in a non-recording mode. This network would be an island: the
gigabit ethernet switch would be dedicated to just the audio stuff.
-jordan
Hello all. I don't really have any business asking, but I am more and
more interested in digital HD recording, and I have spent many hours
recently studying hardware, software, techniques, et cetera. I don't
really have the funds to create such a system, but it is fun to plan it
out, in case my church or someone else would be interested in such a
system in the future.
The recent discussion of jack over networks has gotten me wondering a
few things.
Here is my current fantasy rack setup:
1U: UPS
1U: KVM
2U: RAID
2U: Master/DAW
1U: Slave Recorder/Node
1U: Slave Recorder/Node
1U: Slave Recorder/Node
1U: Slave Recorder/Node
1U: Gigabit ethernet switch (all computers connected through it)
Basically, all slaves would boot off of the RAID server (for easier
maintenance) into a cut-down Linux kernel, in text-only mode, starting
only the programs absolutely required to run Ecasound.
The slave node would begin recording when the master tells it to, and
all audio data would be sent to the RAID server. I am thinking that each
slave would be able to handle 8 channels of mono input. All nodes would
be equipped with the OpenMosix software, so that they can assist the
Master when they are not busy.
The master would boot into GUI mode, so that the operator wouldn't be
intimidated by learning Linux command line. The master would be able to
see the status of all slaves, and what each channel is doing. The master
would be able to assign friendly names to each of the input channels
(e.g. vocal 1, vocal 2, drums 1, bass, etc.). The operator checks a box
at each channel to indicate whether it should record or not (no point
in recording if nothing is plugged into it). The operator can also
enter a friendly name for the project. When the operator issues forth
the "BEGIN RECORDING!!!!" command, a directory is created on the
server, and all nodes begin recording. All data is sent directly to the
server, then opened on the DAW from there.
The DAW operator would be able to record a live mix into a JACK-capable
recorder, such as ReZound, then burn it to CD. The operator would also
be able to back up the project (i.e. all of the raw audio) to a data
DVD.
So, I guess the question is, would this work?
-jordan
jack.udp for me means many underruns in this configuration:
x86 pc
jackd -d alsa -d hw:0
jack.udp recv
^
| (FastEthernet network (100mbps))
|
|
powerpc pc
jackd -d dummy
jack.udp -r IP send
the first thing that came to mind was some network trouble, like the
MTU.. further investigation (involving two x86 hosts, working directly
without the switch, working over 802.11b networks...) made me think
that jack.udp uses the audio card driver for some kind of timing.
now, i have some trouble running an audio card with jack on the powerpc
(my main computer). is there a way to make this system usable with the
dummy driver? i've noticed that the dummy driver has a "delay in
microsecs" parameter; i think if i can calculate the right number the
problem will fade away, but i don't know how to do the math :(
does someone have an idea on this? or has someone solved a similar
problem?
another question, apart from this one: does someone know how to raise
the IRQ priority of the USB controller to a usable value? possibly
without using the realtime patch, which is unusable on ppc-powermac.
thanks for your attention and any replies, and keep up the good work on
this audio software :)
willy
On Monday 21 March 2005 12:06 pm, linux-audio-dev-request(a)music.columbia.edu
wrote:
> jack.udp for me means many underrun in this configuration:
>
> x86 pc
> jackd -d alsa -d hw:0
> jack.udp recv
>       ^
>       |   (FastEthernet network (100mbps))
>       |
>       |
> powerpc pc
> jackd -d dummy
> jack.udp -r IP send
In order for this to work, the sending side needs to know "how many" samples
it needs to send. In other words, if the x86 side needs exactly 44,100
samples per second, there is no way in your existing setup to make sure that
the powerpc doesn't send 44,101 or 44,099 samples per second. You'll need to
run an actual alsa client on the sending side, and use wordclock or some
other mechanism to keep the two soundcards in sync. You might be able to
hack something into the receiving side which tells the sender each time it
consumes a buffer, but that's probably not going to get you where you want to
be.
-Ben Loftis
Hi all,
after some of the latest discussions about audio apps without a GUI, my
head is filled with ideas of giving JackMix[1] OSC support, and perhaps
splitting it into a text-based / OSC-based server doing the mixing,
plus a GUI...
So my question arises: Which OSC-implementation to use?
I had a look into Steve Harris' liblo and libOSC++. The latter seems
more appealing to me since I am a C++ guy.
What do you folks think? What do you propose? What are you using?
Arnold
[1] http://roederberg.dyndns.org/~arnold/jackmix/
--
There is a theory which states that if ever anyone discovers exactly what the
Universe is for and why it is here, it will instantly disappear and be
replaced by something even more bizarre and inexplicable.
There is another theory which states that this has already happened.
-- Douglas Adams, The Restaurant at the End of the Universe