I'm playing around with a no-longer-developed Java-based editor for
Kurzweil synths called "VAST Programmer" (VP), and I have a question
about MIDI on my Ubuntu Studio 14.04 system: how to use a MIDI keyboard
together with the editing software.
My MIDI hardware includes:
A Behringer UMX490 USB controller keyboard
Kurzweil K2000R Synth
Hammerfall DSP Multiface I.
Editing with the VP, which works by sending and receiving system
exclusive messages, requires that the K2000R's MIDI in and out are
connected to the out and in of the Multiface. The UMX490 therefore needs
to be connected to the computer via USB. Both the UMX490 and the
Hammerfall DSP multiface show up in Qjackctl under the ALSA tab as both
readable and writable clients.
When VP starts for the first time it asks the user to select from a list
of available MIDI inputs and outputs. I selected the HDSP, which is
displayed as "DSP [hw:0,0,0]". VP then opens and allows me to edit the
K2000. However, VP does not show up in QjackCtl, and once it is connected
to the HDSP I can no longer make a connection between the UMX490
(readable client) and the Hammerfall DSP (writable client) in order to
play the synth. The HDSP seems to be fully occupied.
It seems that I need some kind of "MIDI merge" function to have the
editor and keyboard functioning at once. I remember that back in the
80s(!) little hardware boxes existed for this, but there must be a way of
doing this in software. Can someone please offer suggestions?
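One software approach worth trying (a sketch only; the client numbers
below are hypothetical examples, so check yours with aconnect -l first)
is the ALSA sequencer layer, which allows several readable clients to
feed one writable port, effectively acting as a software MIDI merge:

```shell
# List ALSA sequencer clients and ports; the numbers below are examples only.
aconnect -l
# Route two readable clients into the same writable port. ALSA seq permits
# many-to-one connections, which merges the MIDI streams in software.
aconnect 24:0 20:0   # hypothetical: UMX490 keyboard -> Multiface MIDI out
aconnect 28:0 20:0   # hypothetical: second source  -> same Multiface port
```

Note this only helps if the application talks to the ALSA sequencer; a
program that opens the raw MIDI device (hw:0,0,0) directly gets exclusive
access to it, which may be exactly why VP blocks the HDSP port.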
If anyone googles this and wishes to run VP on Linux, note that it was
necessary to create the directory "/etc/.java/.systemPrefs/com" and make
it user-writable with "sudo chmod 777 /etc/.java/.systemPrefs/com" in
order to get it to run. VP can be downloaded from here:
To run it, unpack the zip file, change into the top-level directory, and
enter: java -Xms256m -Xmx256m -jar "vastp.jar"
You'll also need a "VAST"-type Kurzweil synth connected via MIDI for it
to do anything useful.
There's a "MIDI Compatibility issue" patch on the download page that
didn't work for me: the GUI was sized strangely and unusable.
I've done nothing with video except play videos from YouTube.
I've noticed that YouTube separates HD1080 video from the audio tracks,
meaning I can't just download the HD1080 vid and watch it comfortably
using my USB sound card. But I can download the HD1080 vid and the audio
David W. Jones
authenticity, honesty, community
> On Wed, 21 Jan 2015, Leonardo Gabrielli wrote:
> > In reality of the 16ms latency I mentioned, 5.3ms are the A/D and 5.3ms
> > are the D/A (48khz, 2 periods of 128 samples in JACK at both sides). The
> That does not need to be. My 10 year old P4 was running an ice1712 at
> 48k/16/2 (.66ms) with no xruns.
The machine I'm referring to is a Cortex A8 using its own audio codec, and
I think the drivers are not exceptional. Lowering the period size was not
possible. I was using Debian Stable.
(going off-topic): what sound card did you use to get the 0.66ms latency
and what distro?
Other comments: network audio requires serving not only audio card
interrupts but also NIC interrupts in the case of Ethernet, or USB/SPI
interrupts in the case of WiFi chips. The drivers must be written very
carefully; I've often seen the NIC interrupt routine preempt JACK audio
clients. Furthermore, network drivers are written for throughput, not
latency. These two issues make me wonder whether a general-purpose OS can
serve the purpose well. Or maybe it's just that I tend to prefer
low-level solutions...
One thing yet to check: the original 802.11 specification allowed for a
contention-free period; I still have to check whether current 802.11n and
802.11ac APs allow for it.
> From: Hermann Meyer <brummer-(a)web.de>
> To: linux-audio-user <linux-audio-user(a)lists.linuxaudio.org>
> Subject: [LAU] Which kernel do you use?
> Message-ID: <54C22928.2000704(a)web.de>
> Content-Type: text/plain; charset=utf-8; format=flowed
> I've got a new (used) mobo, and played with the kernel version to use.
> So far I've tested everything from 3.19.0-rc5 down to 3.4.104.
> Those where rt-patches were available I tested as rt-kernels.
> The best results I get only with the 3.4 series; all later kernels
> introduce xruns here, sporadically, unrelated to the DSP load, even when
> just jack is running.
> I then built them all with the same configuration, played a bit with the
> configurations, deselected hyper-threading on the latest kernels, used
> threadirqs, all to no avail. Only the 3.4.xxx-rt kernels run absolutely
> smoothly.
> So, which kernels with which configuration do you all use?
> Graphics: Card: NVIDIA GT216 [GeForce GT 220]
> bus-ID: 01:00.0 chip-ID: 10de:0a20
> Display Server: X.Org 126.96.36.1991 drivers: nouveau (unloaded:
I see you are using the nouveau driver for NVIDIA. Other RT users with
NVIDIA have commented on performance issues with nouveau and had better
performance with the NVIDIA akmods.
Thanks Len for the useful feedback.
> In such a case, it would not be possible to add a WIFI that went to AP
> then snake then FOH mix then monitor as there would be close to 30ms. The
> WIFI would have to be a direct input to the FOH mix.
In reality, of the 16ms latency I mentioned, 5.3ms are the A/D and 5.3ms
are the D/A (48kHz, 2 periods of 128 samples in JACK at both sides). The
remaining approx. 5.3ms is a buffer large enough to compensate for the
jitter in network delay. In other words, a packet needs reasonably less
than 5.3ms to fly from one end to the other. With an AP, things are a bit
more delicate for the sole reason that the packet must fly twice: the
transmitter must get access to the medium ASAP, and then the AP must get
access to the medium ASAP to relay the packet to the recipient. And they
must meet the deadline imposed by the receiver's JACK cycle. Medium
access with 802.11 is the biggest issue in my experience. However, with
an AP it won't be 30ms; it will be much lower. Obviously a TDMA mechanism
would be much more useful for the transmission than the 802.11 MAC, but
improvements can be made. There is one AES paper from a guy in Finland
who managed to send 8-channel audio at 192kHz in packets of about 50
samples each; see http://www.aes.org/e-lib/browse.cfm?elib=16138
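The arithmetic above is quick to check, assuming the usual JACK rule that
one converter side contributes periods * frames / rate of latency:

```shell
# Per-side converter latency for 2 periods of 128 samples at 48 kHz,
# plus the full A/D + D/A + jitter-buffer budget (three equal chunks).
awk 'BEGIN {
  side = 1000 * 2 * 128 / 48000
  printf "per side: %.2f ms, total budget: %.1f ms\n", side, 3 * side
}'
```

which matches the quoted 5.3ms per stage and ~16ms overall.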
I will take a look again at the AES67 minimum requirements; I guess a lot
of development must be done to reach them. Basically the A/D and D/A must
be done with 2 ping-pong buffers of 32 samples at 48kHz, and one must
hope that network access and transport can be carried out reliably in
1.8ms. Nothing you could do with a general-purpose platform or OS at the
moment, but a good challenge for the coming years.
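The same per-side formula applied to those figures (an assumption: two
32-sample buffers per converter side at 48 kHz, as described above):

```shell
# Converter latency with 2 ping-pong buffers of 32 samples at 48 kHz.
awk 'BEGIN { printf "per side: %.2f ms\n", 1000 * 2 * 32 / 48000 }'
```

i.e. roughly 1.3ms per converter side, alongside the quoted 1.8ms budget
for network access and transport.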
Thanks Len for the useful feedback.
> I was going to say 10ms is about where I start to feel disconnected from
> what I am playing, but I realize I am thinking one way and round trip
> would be about 20ms. So 16ms might be well workable.
Psychoacoustic tests (many of them from the CCRMA SoundWire team and
others) show that 21-25ms is acceptable for keeping a steady tempo.
Larger values make performers slow down; very low values make them
accelerate! AFAIK, in musical instrument design the 10ms figure is taken
as a threshold for instrument response (e.g. keypress --> sound), but in
ensemble performance the values can be higher, as noted.
Your inputs are interesting, and yes, I think AES67 could be the first
step.
Hi list, I have a Kurzweil K2000R synth and a SCSI Zip 100 drive. I'd
like to be able to download files from the web and transfer them to the
synth. I found a USB Zip 100 drive available which I'd like to connect
to my Linux PC so that I can transfer the disks across to the SCSI
drive on the K2000.
The USB drive is here:
Does anyone know if this will work? Will I have any trouble with this
drive on Linux working with K2000 formatted disks?
Just to bring the discussion back to its original topic: I see and know
already that audio over 802.11 is always thought of as a geek thing, when
not an insanity, but at my institution we felt this was a good challenge
for engineering research, and I've been working on it as part of my PhD
studies over the last two years. Updates and material about the project
are reported on our research group webpage
The project, called WeMUST (Wireless Music Studio), was started to test
current network technologies in a *studio*, but later we also addressed
live stage usage, and a concert was performed last summer on the sea. In
that case we acquired the signal with Debian-based ARM platforms
(BeagleBoard-xM) and sent it to dedicated devices from MikroTik over
Ethernet. The MikroTik devices have directional antennas and created
802.11a bridges from sea to land. The network topology allowed for
monitoring, and the round-trip latency achieved by the system was 16ms at
its lowest (it could be reduced with a different hardware choice). The
musicians could synchronize well, and all had the same latency, imposed
by JACK running on both the ARM boards and the PC mixing the signal on
land.
To recap, my opinion is that 802.11 can be as good as any other wireless
technology for carrying music, and compared to legacy analog techniques
the quality is not compromised (unless the link is so bad that the
connection breaks or packets get lost). Of course, the 802.11 family of
protocols is mainly targeted at throughput and best-effort delivery; this
is why the audio community must demand amendments that allow a robust
audio link, instead of neglecting the opportunity and relegating wireless
connectivity to a marketing feature (as it is up to now: just a gimmick
for product brochures to boost sales, showing ultra-cool iPad apps for
remote control and nothing more).
With this I don't mean to say that we should replace affordable and
reliable audio cables with ultra-expensive wireless gear. My goal as a
researcher is to dive into the technical and usability challenges of a
technology so widespread nowadays, show what can be done, and find out
from industry or potential users whether there are new applications
opened up by wireless and, most importantly, by networking.
Networking is a key aspect, indeed, because a point-to-point link is not
novel, but if wireless connectivity can provide integration of
functionalities and devices at low extra cost, it is worth investigating,
IMO, at least at the academic level.