On Mon, 16 Feb 2015, Rick Green wrote:
> On Mon, 16 Feb 2015, Len Ovens wrote:
>
>> I think OSC is missing parts though (just like AES67?). I should be able to
>> add a control surface/keyboard to a network and the unit should show me
>> every other unit (SW or HW) that it can talk to. From that list I should be
>> able to connect my output to a number of inputs and connect my input to
>> whatever output. Then if I play notes on a keyboard I should get the right
>> sounds, or move a control the right thing should change. That is a lot of
>> things to happen. My controller has to be able to set its networking up
>> and then discover what other OSC stuff is around. OSC right now seems to be
>> totally manual. The user has to know the target IP and port (99% musicians
>> just blanked out) and have the server running and enter those things just
>> to have a connection. Even with a standard set of commands this is already
>> a fail.
>>
> Sorry for the off-list reply, but the list server is rejecting my posts
> today. It seems that my buddy who hosts the aapsc.com domain for me has
> gotten himself dropped into a dnsrbl hole...
>
> I think you're on the right track here. MIDI isn't extensible. OSC is too
> open. It looks like XML to me - very wordy with all the 'label=value' pairs,
> and ASCII to boot. The extra bandwidth of ethernet is eaten up by the
> wordiness of the command structure, so you don't really gain any
> commands/second advantage over MIDI.
> What if we had a protocol more like SNMP: generalized like OSC, but with a
> MIB or DTD-like dictionary delivered once at initialization by the controlled
> device. The controller could use this to configure itself for the
> application, and then the actual datastream would be in a compact binary
> 'bytecode' more like MIDI. Best of both worlds?
considering the variety of uses for a control protocol, I think there is
room for more than one. There are varying amounts of acceptable latency:
what counts as low latency for audio, synth control, lighting control,
video, and DAW control are all different. MIDI for synth control is well
known, and the biggest thing it needs is a faster transport... can't beat
Jack for this.
It will be interesting to see the new HD MIDI, whether it is a help or a
hindrance, and how well supported it gets. If it allows standard MIDI
transport over IP, that alone may make it worthwhile, just because it
would give us a standard way of doing MIDI over the net. However, from
reading the blurbs on HD, it seems they have gone beyond one step up.
More channels, yes... merely widening the 4-bit channel field to 8 bits
would go from 16 channels to 256, but they are talking "thousands". This
would indicate to me that they are increasing the bit width by 4x, to 16
bits. A 16-bit word would be used just for the channel, another just for
velocity, another for release velocity, another for pressure, another for
tuning of the note (which can change on a per-note-hit basis... maybe
during the note hold too), not to mention the note itself... sounds like
12 bytes per key press already (by my guess). Not exactly compact, but
perhaps more so than some OSC.
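The channel and byte counts above are easy to sanity-check. A back-of-envelope sketch (the 16-bit field widths and the list of fields are my guesses from the blurbs, not anything from an actual HD MIDI spec):

```python
# How many channels each channel-field width allows.
for bits in (4, 8, 16):
    print(f"{bits}-bit channel field -> {2**bits} channels")

# A hypothetical HD note-on with a 16-bit field for each of these
# (guessed fields, for illustration only):
fields = ["channel", "note", "velocity", "release velocity",
          "pressure", "per-note tuning"]
print(f"{len(fields)} x 16-bit fields = {len(fields) * 2} bytes per note-on")
```

That gives 16, 256, and 65536 channels respectively, and 12 bytes per key press, matching the guess above.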
One thing to remember though: Ethernet does have a minimum packet size (a
46-byte minimum payload as it happens... minus a few more bytes for the
UDP/IP headers). So a single note-on in MIDI would take the same Ethernet
traffic as an OSC message in many cases. OSC does have a sort of running
status, where a base path (like a working directory) can be set, and it
can also use wildcards to send the same message to a number of different
addresses. There is an OSC standard set of performance messages (note
on/off/whatever/etc.) in the works as well that seems to be very close to
what HD MIDI is doing. The reality is that GB Ethernet is what makes
low-latency audio/MIDI possible, or at least stable: not because it is
designed for low-latency work, but because it is fast enough that a small
packet has low latency... and on GB, even a 1500-byte MTU is a small
packet. 100M networks can, if they are really quiet, do very well too,
but to be reliable one has to design for at least moderately busy
networks.
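The "fast enough that a small packet has low latency" point can be quantified with a rough sketch that counts only serialization time on the wire (it ignores switch, driver, and protocol-stack overhead, which dominate on a busy network):

```python
# Serialization time of one frame: bits on the wire / link speed.
def wire_time_us(frame_bytes, link_bps):
    return frame_bytes * 8 / link_bps * 1e6

for size in (64, 1500):                      # minimum frame vs full MTU
    for speed, name in ((100e6, "100M"), (1e9, "1G")):
        print(f"{size:5d}-byte frame on {name}: "
              f"{wire_time_us(size, speed):7.2f} us")
```

A full 1500-byte frame takes about 12 microseconds on gigabit but 120 on 100M, which is why a 1500-byte MTU still counts as "small packets" on GB.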
OSC does have (or can have) self-documenting signals. The path name tells
the intended destination program, the channel (by name or number), what is
to be controlled, and by how much... all in English (or Greek, or
whatever). The paths could even be multilingual without changing the
bandwidth. OSC has a query setup so the sending client can find out what
commands are available and what they do... I have more to read on this
part :)
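To make the size comparison with MIDI concrete, here is a minimal sketch of OSC's wire encoding (the address and value are made up for illustration; real implementations like liblo or python-osc do this for you):

```python
import struct

def osc_pad(b: bytes) -> bytes:
    """Null-terminate and pad to a 4-byte boundary, as OSC requires."""
    b += b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address: str, value: float) -> bytes:
    """Encode one OSC message carrying a single float32 argument."""
    return (osc_pad(address.encode()) +
            osc_pad(b",f") +              # type tag string: one float
            struct.pack(">f", value))     # big-endian float32

msg = osc_message("/mixer/channel/3/fader", 0.75)
print(len(msg), "bytes, vs 3 bytes for a MIDI control change")
```

The human-readable path costs bytes (32 here), but once UDP/IP headers and the Ethernet minimum frame are added, the difference from a 3-byte MIDI message often disappears on the wire.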
Using ethernet as a transport for anything has to assume that the
transport is not locked to our use. This is new to the audio/music world
which is used to having its own MIDI line and its own audio port/card.
Sharing resources has been something we have tried to avoid for years and
now with networking, sharing has to be assumed... sort of like plugging
one's USB audio into a hub that also has a USB drive, mouse, keyboard and
wireless dongle using it. Fun stuff, and really not that different than
sharing a PCI bus.
--
Len Ovens
www.ovenwerks.net
Ralf Mardorf wrote:
> A PPA, perhaps kxstudio, might provide linux-rt for the OP's distro.
Tried that, haven't found anything like an rt or preempt kernel there.
> <A nice detailed Debian-specific instruction and explanation skipped>
Thank you, the make-kpkg procedure looks convenient, since that way I'm
getting a native deb. If there's no binary alternative, that's probably
the way I'll be building my kernel.
Artem
Gene Heskett wrote:
> Look in the repo for a kernel with rt-preempt in its name string.
Unfortunately, no such thing here.
> <A nice detailed instruction and explanation skipped>
Thanks, if I don't find a suitable ready-made kernel, I'll try building
my own, though I haven't done that in quite a while.
Artem
Beware old buildings with additions :P It appears we have power coming
from two power entrances with two different earth grounds to the panels. I
had found two power cords with the ground pin pulled, which I replaced...
big ground loop noise. I tried (as read from a number of sources) making
unbalanced cables with a small resistor in the ground path, but this made
things worse, not better. I will be running a power cord from the stage
back to the mixer next. All of our signal-in paths are already isolated.
It is only the two monitor mixes to powered monitors that are unbalanced
and causing problems. The outputs from the mixer are balanced, but the
lines to the stage are not... I may make cables to allow me to use two of
the balanced input lines instead, plus two more isolation boxes, as a
longer-term fix. There will be a new stage going in and I have already
been asking that power and grounding be corrected for the whole signal
flow.
--
Len Ovens
www.ovenwerks.net
Hi
The topic says it all: is there any graphic designer around here who would
like to create a new, overall design for the guitarix project?
If so, please contact me.
regards
hermann
Hi Daniel. Sorry for late followup here.
I'm changing the thread (was "LAU: OpenMusic 6.9 - new build for Ubuntu
(.deb) and Fedora (.rpm)") - it's sliding sideways now.
>>>>> On Thu, 5 Feb 2015 11:54:48 +0100, Daniel Appelt said:
D> Maybe you could try to run the small programs I provided as
D> attachments to the GDK bug report? They should just work on your 64bit
D> system.
D> https://bugzilla.gnome.org/show_bug.cgi?id=735995
Checking both the 64- and 32-bit versions of the programs doesn't show the
errors you reported on bugzilla:
What does this mean?
-anders
Hi,
My laptop (running ubuntu) has the following (audio) specs. I know it's
probably subjective, but is this considered "good enough" for pro audio
work or should I invest in an external sound card? (Or for that matter a
better laptop?!) And if so, which soundcard is recommended?
Thanks a lot,
Peter
Memory: 3.8 GiB
Processor: Intel® Pentium(R) Dual CPU T3400 @ 2.16GHz × 2
00:1b.0 Audio device: Intel Corporation 82801I (ICH9 Family) HD Audio
Controller (rev 03)
Subsystem: Toshiba America Info Systems Device ff66
Flags: bus master, fast devsel, latency 0, IRQ 46
Memory at d6700000 (64-bit, non-prefetchable) [size=16K]
Capabilities: <access denied>
Kernel driver in use: snd_hda_intel
00:1c.0 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express
Port 1 (rev 03) (prog-if 00 [Normal decode])
Hey hey,
I want to sample the waveform of an oscillator of a hardware synth. One period
of the waveform must have 65536 points in the end. Will the following process
work:
Samplerate 48kHz, Oscillator pitch 71.0449218750000Hz, because:
71.0449218750000Hz / 48000Hz = 97/65536
In theory I can now create an empty table with 65536 entries and fill every
97th entry (with wraparound) with successive samples from the original audio
recording.
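The fill-every-97th-entry trick works because 97 and 65536 share no common factor (97 is prime), so stepping by 97 modulo 65536 visits every table index exactly once before wrapping back to the start. A quick sketch of the fill process, with a stand-in for the real recording:

```python
import math

N = 65536     # points per period in the final table
STEP = 97     # each recorded sample advances 97/65536 of a waveform period
assert math.gcd(STEP, N) == 1   # guarantees every index gets visited once

table = [None] * N
for n in range(N):
    # sample n of the recording lands at phase (n * STEP) mod N
    table[(n * STEP) % N] = n   # stand-in for recording[n]

print(all(v is not None for v in table))   # every slot filled
```

If the step shared a factor with 65536 (any even step would), only a fraction of the table would ever be written, which is why the odd ratio 97/65536 was chosen.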
I see a problem: the soundcard's ADC must average/interpolate the digital
samples from the analogue input arriving in between samples. I can't tune my
synth to 48000/65536Hz.
I can increase my samplerate to 96kHz and work through the sample process as
above, but will that help? Am I completely on the wrong track for creation of
an adequate copy of an analogue waveform?
If a correct process is more complicated and requires more mathematics, I will
drop the idea as such.
Ta-ta
----
Ffanci
* Internet: https://freeshell.de/~silvain
Twitter: http://twitter.com/ffanci_silvain
I have a cheap (hipstreet i8) android tablet I have been playing with. It
has Android 4.4.4, kernel 3.10.20 and says the cpu is an Intel product.
I have tried all the audio-related applets I could find and have generally
been disappointed with the latency. I hit the drum on a drum kit and the
sound is delayed enough to make it hard to play, though it does help to not
listen to the audio and just play. I mention this as background for the
rest.
One of the apps is "WiFi Audio" http://www.ajeetv.info/wifiaudio/ (the
link is only the PC send end; there is actually very little info about
this).
Anyway, this app works quite well. Considering the background info above
about how much latency there is in Android audio to begin with, I would
say that the WiFi transfer adds no noticeable delay to the already long
Android delay. The quality is good, with dropouts only when I have the
Android device obscured from the AP by a metal desk or something like
that. Certainly, if the Android audio latency could be cleared up, this
would make a usable personal monitor for in-ear use by a musician.
In my case the audio chain looks something like this:
player -> pulse -> jack -> alsa -> out.
Jack is set to 128 frames / 2 periods, which does some interesting things
to pulse (it forces pulse to deal with it at that latency).
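For reference, 128 frames with 2 periods works out to only a few milliseconds of buffering on the JACK side (assuming a 48 kHz sample rate, which the post doesn't actually state):

```python
# JACK buffering latency: frames-per-period * periods / sample-rate.
# The 48 kHz rate is an assumption; the post only gives 128/2.
frames, periods, rate = 128, 2, 48000
latency_ms = frames * periods / rate * 1000
print(f"{latency_ms:.2f} ms of buffering")   # ~5.33 ms
```

So the long delay heard on the tablet is coming from the Android audio stack and the WiFi path, not from the JACK settings on the PC end.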
The chain to my android is:
player -> pulse -> WiFi Audio -> android.
According to Pavucontrol, WiFi Audio is being fed by "Monitor of Jack
Sink".
The point of this is that if I had gotten a tablet/phone that was on the
Ubuntu Touch dev list (which, while using the Android kernel, bypasses
most of the Android audio stack and goes straight to the ALSA bit), there
may be a usable monitoring system for audio over WiFi.
--
Len Ovens
www.ovenwerks.net