2014-10-27 15:45 GMT+00:00 Leonardo Gabrielli <leodardo(a)gmail.com>:
> Carlos,
> for audio over Ethernet/wireless on ARM, I suggest you read my
> recent papers at:
> http://a3lab.dii.univpm.it/research/wemust
>
> Those didn't involve Fons' zita-[nj|jn]bridge, which has recently been
> released; I will probably use it in the next refinements of the research
> outcome, as it is tiny and functional.
>
> So far my experience with ARM cores is that you must be careful with Jack.
> Old platforms such as XScale will require some effort to get it compiling
> and working properly. And by the way, I suggest you go directly with a 2.6
> kernel for real-time audio. Also, I expect it to be a tight platform for
> running Jack (how much memory do you have?), especially at low period sizes
> (the CPU risks being overwhelmed with interrupts).
>
> Definitely Jack has a lot of features that are important even for this
> simple task, but I'm wondering if there is any gain in embedding only those
> needed in a library and using that instead of the whole JACK.
>
> BTW: a nice paper you may want to read:
> Reuter, "Case Study: Building an Out Of The Box Raspberry Pi Modular
> Synthesizer", LAC 2014
>
> Leonardo
>
>
Thanks so much for the info, Leonardo. I'll check it ASAP.
>
>
> On 26/10/2014 13:00, linux-audio-user-request(a)lists.linuxaudio.org wrote:
>
>>> In practice that is not very likely to happen, the reason
>>> being that interfacing to Jack is so much easier than
>>> writing an ALSA driver. Also, passing via Jack does not
>>> add any latency, and in most cases users will want the
>>> flexibility it provides.
>>
>> Thanks for the answer. I was expecting this, but hadn't measured the
>> difference between the Jack client and the ALSA driver.
>> So now it looks like I need to learn how to cross-compile Jack for
>> various ARM devices to have it on the lightweight clients :/
>>
>> Raphaël
>> _______________________________________________
>> Linux-audio-user mailing list
>> Linux-audio-user(a)lists.linuxaudio.org
>> http://lists.linuxaudio.org/listinfo/linux-audio-user
>>
>> I've thought about a similar idea some time in the past: a distributed
>> audio network with thin clients/Raspberry Pis for a home studio, or
>> distributed via some network. I'd be interested in following whatever
>> progress you make.
>>
>> Regarding the "distributed band" idea, I read a little about programs
>> for jamming via the internet: NetJack, NINJAM, Midikit.
>>
>
>
--
C. sanchiavedraZ:
* NEW / NUEVO: www.sanchiavedraZ.com
* Musix GNU+Linux: www.musix.es
Hi
Trying my luck again with recordmydesktop. I found that the following
eventually starts recording my desktop:
https://dl.dropboxusercontent.com/u/4343030/recordmydesktop
However it seems to only record in mono. How do I capture two audio
channels?
Even better: is there an alternative out there that actually works?
--
Atte
http://atte.dk
http://a773.dk
MFP -- Music For Programmers
Release 0.05, "Mighty Fine Patching"
I'm pleased to announce a new version of MFP, containing many new
features, fixes and improvements. This is still a very early
release that is missing a lot of expected functionality, but it's
a significant step forward from 0.04 in every way and I thought
it might be of interest to the wider community.
A summary of changes is below. Please see the GitHub issue tracker
for complete details:
http://github.com/bgribble/mfp
This version is still source-code-only, but the new build system
should make it a bit easier for those who would like to try it.
Significant changes since release v0.04
----------------------------------------
* MFP patches can be saved as LV2 plugins that can be
live-edited while loaded in a host (see doc/README.lv2)
* New build system using 'waf' for one-line build and install
(see doc/README.build)
* Support for user patches with dynamic creation of
inlets/outlets and other objects at instantiation time (with
examples) using the "@clonescope" method
* Lazy evaluation of expressions using a leading "," syntactic
sugar is available in message boxes (i.e. the message
"datetime.now()" is a constant, but ",datetime.now()" is
evaluated each time the message is emitted)
* More sample patches, including a basic tutorial covering app
interaction, "hello, world", and patterns for things like
iteration, conditionals, etc
* Improvements to stability and error handling
* Many other bugfixes and improvements. The complete list of
60+ tickets closed since the 0.04 release is in the 0.05
milestone:
http://github.com/bgribble/mfp/issues?q=milestone%3A%22mfp+0.05%22+is%3Aclo…
About MFP
----------------------------------------
MFP is an environment for visually composing computer programs,
with an emphasis on music and real-time audio synthesis and
analysis. It's very much inspired by Miller Puckette's Pure Data
(pd) and Max/MSP, with a bit of LabView and TouchOSC for good
measure. It is targeted at musicians, recording engineers, and
software developers who like the "patching" dataflow metaphor for
coding up audio synthesis, processing, and analysis.
MFP is a completely new code base, written in Python and C, with
a Clutter UI. It has been under development by a solo developer
(me!), as a spare-time project for several years.
Compared to Pure Data, its nearest relative, MFP is superficially
pretty similar but differs in a few key ways:
* MFP uses Python data natively. Any literal data entered in the
UI is parsed by the Python evaluator, and any Python value is a
legitimate "message" on the dataflow network
* MFP provides fairly raw access to Python constructs if desired.
For example, the built-in Python console allows live coding of
Python functions as patch elements at runtime.
* Name resolution and namespacing are addressed more robustly,
with explicit support for lexical scoping
* The UI is largely keyboard-driven, with a modal input system
that feels a bit like vim. The graphical presentation is a
single-window style with layers rather than multiple windows.
* There is fairly deep integration of Open Sound Control (OSC), with
every patch element having an OSC address and the ability to learn
any other desired address.
* MFP has just a fraction of the builtin and addon functionality
provided by PD. It's not up to being a replacement except in
very limited cases!
The code and issue tracker are hosted on GitHub:
https://github.com/bgribble/mfp
You can find the LAC-2013 paper and accompanying screenshots,
some sample patches, and a few other bits of documentation in the
doc directory of the GitHub repo. The README files at the top
level of the source tree contain dependency, build, and
getting-started information.
Thanks,
Bill Gribble <grib(a)billgribble.com>
I'm posting this in a number of places to try and get some idea of ALSA/JACK
usage, so I'd be grateful for responses.
https://www.surveymonkey.com/s/JZVV7K9
--
Will J Godfrey
http://www.musically.me.uk
Say you have a poem and I have a tune.
Exchange them and we can both have a poem, a tune, and a song.
From: Atte:
>
> Hi
>
> Trying my luck again with recordmydesktop. Found that the following
> eventually starts record my desktop:
>
> https://dl.dropboxusercontent.com/u/4343030/recordmydesktop
>
> However it seems to only record in mono. How do I capture two audio
> channels?
>
> Even better: is there an alternative out there that actually works?
>
>
I have been using ffmpeg for my recordings.
It's a bit tricky to get sound from JACK, though: if you start
pulseaudio with the JACK sink loaded, you get the sound by connecting
the output of whatever you want to record to the pulseaudio JACK input.
I make that connection manually.
So:
1. Start pulseaudio (make sure it is configured to use JACK):
$ pulseaudio
2. Connect the output JACK ports of what you want to record to pulseaudio.
3. Run something like this:
$ ffmpeg -f alsa -ac 2 -i pulse -f x11grab -r 30 -s 720x646 -i :0.0 -acodec pcm_s16le -vcodec libx264 -preset ultrafast -crf 0 -y output.mkv
(this works for me)
Here are a couple of links with info I used to configure ffmpeg:
http://www.commandlinefu.com/commands/view/148/capture-video-of-a-linux-des…
http://www.commandlinefu.com/commands/view/7109/capture-video-of-a-linux-de…
And here is my version of ffmpeg, which works:
ffmpeg version 0.10.7 Copyright (c) 2000-2013 the FFmpeg developers
The ffmpeg version might be relevant, since there are two different
projects distributing an "ffmpeg": the fork made by the libav folks,
and the original one.
Hello,
I have a situation in mind: on a LAN, a central computer running a JACK server with a lot of audio I/O, and lightweight clients with stereo I/O.
I'd like those clients to be able to send and receive 2 channels of audio to/from the central JACK server, with the lowest latency possible.
With zita-njbridge and a JACK server running on the lightweight clients it works, but considering I don't need a JACK server on the lightweight clients (no effect plugins, no routing, no synths...), could a "zita-nabridge" exist that captures and plays the streams directly on the client's embedded ALSA device?
Raphaël
http://www.jerashmusic.fr
Hey hey everyone,
I heard that the Bash (Bourne Again Shell) has a serious security issue that
was only fixed very recently. So if you rely on Bash, better update. I _THINK_
the problem was only fixed last week or so. Let your friends know! :)
Don't ask me about specifics, I just got the info and passed it along, since
it sounded like good advice.
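[Given the date, this is presumably the "Shellshock" bug (CVE-2014-6271). The widely circulated one-liner for checking whether a given bash is affected exports a crafted function definition and sees whether bash executes the trailing command while importing it; a patched bash prints only "ok", a vulnerable one prints "vulnerable" first:]

```shell
# Check for CVE-2014-6271 ("Shellshock").
# Patched bash: prints only "ok" (possibly with a warning on stderr).
# Vulnerable bash: prints "vulnerable" before "ok".
env x='() { :;}; echo vulnerable' bash -c 'echo ok'
```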
Thanks and sorry for the OT.
Ta-ta
----
Ffanci
* Internet: http://freeshell.de/~silvain
Hi all,
I'm the team leader of Voice Communication Systems development at the ARTISYS company. We are trying to use an RME RayDAT sound device on a Linux system with the ALSA sound interface. After a lot of experiments we are not able to get proper sound output from this device, so we kindly ask for your help in solving our problem. Here is a brief description:
The general problem is that our audio output is not continuous; the signal is interrupted. Here is our configuration:
A PC with Linux (kernel v. 3.17.0) with the ALSA sound drivers compiled into the kernel. We also have additional packages installed: alsa-tools, alsa-utils and alsa-lib (all version 1.0.28).
Inside the PC there is a PCI Express card, the RME RayDAT, whose drivers are also compiled into the kernel. This interface is connected by optical fibres (ADAT) to a Ferrofish A16 MK-II (2 pairs of TOSLINK cables).
Playing test files using commands
aplay -D pcm.out_test -r 48000 -f S32_LE /usr/share/sounds/alsa/Front_Left.wav -vv
and
aplay -D pcm.out_test2 -r 48000 -f S32_LE /usr/share/sounds/alsa/Front_Right.wav -vv
causes audible interruptions. The ALSA device configuration is as follows:
pcm.out_dmix {
type dmix
ipc_key 56874
ipc_key_add_uid false
ipc_perm 0666
slave {
pcm "hw:2,0"
period_size 2048
channels 36
rate 48000
}
bindings {
0 0 # from 0 => to 0
1 1 # from 1 => to 1
}
}
pcm.out_test {
type plug
slave.pcm "out_dmix"
ttable.0.0 1
}
pcm.out_test2 {
type route
slave.pcm "out_dmix"
ttable.0.1 1
}
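[Editor's note: not from the original setup, just an untested sketch. Since dmix's default buffering can interact badly with large period sizes, one thing worth ruling out is the implicit buffer size; the same dmix definition with the buffer pinned explicitly would look like this (the buffer_size of 8192 frames, i.e. four periods, is an assumption):]

```
# Untested variant of the dmix definition above; the explicit
# buffer_size (4 x period_size = 8192 frames) is an assumption.
pcm.out_dmix {
    type dmix
    ipc_key 56874
    ipc_key_add_uid false
    ipc_perm 0666
    slave {
        pcm "hw:2,0"
        period_size 2048
        buffer_size 8192
        channels 36
        rate 48000
    }
}
```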
Let me describe two other example experiments:
1. If I try to record sound using the same parameters as the out_dmix device but with type "dsnoop" (the capture equivalent of dmix), there are no interruptions and the recorded sound is perfect.
2. If I play a single sound file to the output of the RME card while the ALSA device is of type "route", the sound output is perfect. However, when playing a second file to a different output channel this way, I'm not able to open the other output channel simultaneously from another program, because the device is busy. Here is that experiment's ALSA device configuration:
pcm.out_test {
type route
slave.pcm "hw:2,0"
slave.format "S32_LE"
slave.channels 36
ttable.0.0 1
}
pcm.out_test2 {
type route
slave.pcm "hw:2,0"
slave.format "S32_LE"
slave.channels 36
ttable.0.1 1
}
This should be solved by a "dmix"-type ALSA device, which in our current configuration causes the interruptions.
The question is: how can we play multiple audio streams to multiple output channels using our equipment?
Thank you.
Yours sincerely
Ing. Vaclav Mach
Voice Communications System team leader
ARTISYS
www: http://www.artisys.aero
On 22/10/2014 11:45, linux-audio-user-request(a)lists.linuxaudio.org wrote:
> In my experience there's a greater risk of overheating without a fan, and
> the ARM (Allwinner) chipsets are prone to that. My bet is a low-power x86
> processor/unit with a (quiet) fan will outperform and outlast an ARM
> chipset without one.
I did some simple benchmarks on an Allwinner A20 board (Cubieboard)
recently. The benchmark consists of computing a bunch of sine oscillators
(second-order resonator filters), generally used for modal synthesis and
other types of sound synthesis. The results I got from the A20 when
clocked at 1 GHz are surprisingly good: 1000 oscillators can be
computed in a 128-sample period, while on my quad-core i5 I get 1500.
On a 7-year-old Centrino Duo I get about 850. While this doesn't stand
as a real-world benchmark (buffer transfers are not taken into account)
and I haven't optimized for the architectures (I just let g++ go with
-O2), you get the idea.
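[Editor's note: for context, the standard second-order resonator recurrence for generating a sinusoid (one multiply and one subtract per sample) is, in my notation rather than the original poster's, for an oscillator at frequency f and sample rate f_s:]

```latex
% y[n] = 2*cos(w0) * y[n-1] - y[n-2],  with  w0 = 2*pi*f/fs.
% Seeding y[-1] = -sin(w0) and y[-2] = -sin(2*w0) yields y[n] = sin(w0*n).
\[
  y[n] = 2\cos(\omega_0)\, y[n-1] - y[n-2], \qquad \omega_0 = \frac{2\pi f}{f_s}
\]
```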
I didn't experience overheating on the A20, but the tests were not run
continuously as they would be during a performance, so I won't bet it
will last long. :)
My impression is that the kernel is generally also quite unstable on
most platforms unless a silicon manufacturer is there to help (as
happens with some TI chips), and in general I would prefer Intel for
reliable live performance. However, as a researcher I am trying to
squeeze ARMs to perform as musical instruments, and I think they can
work well if the industry supports kernel development. But I'm wondering
whether this will continue to happen, since the eastern mobile market is
crushing the sales of the reliable manufacturers.
Going slightly OT: I really hate how the market is pushing for short
product lifecycles, following the trend of the mobile industry. On one
side, the audio and music market is similar to the consumer market, as
users want ever new and fancy products with appeal. On the other
side, it is similar to the industrial/automotive market, as you need
reliable products that last for years. What you should find inside is
sturdy electronics with >10 years of support and the possibility of
getting new pin-compatible ICs from the manufacturer after those 10
years. There's too much consumerism in the silicon industry following
the mobile "revolution", meaning that everything that contains
electronics is destined to last less and die sooner. Or to enter the
market in pre-beta stage (which nowadays is considered a "feature").
The only way to get long-term support is sticking to the good old
silicon manufacturers, hoping they won't discontinue your MCU/CPU/DSP
soon (as they are doing to cut costs).
I hope someday people will realize that not all electronic products are
like smartphones.