Hi everybody
I’ve just stumbled on a very simple issue that I can’t solve.
Is there a simple command line player for JACK that plays .ogg files?
My first option would be the mplayer but it has got loads of dependencies and , as I’m working on a embedded device, I’d like to keep things as lean as possible.
Any help or suggestion is appreciated
Kind regards
Gianfranco
The MOD Team
actually, you are right Ben.
I intended to prepare RPi to use some external battery to play "on street",
or somewhere more mobile, instead of taking my precious VL70m synthesizer
all the time.
Thank you for reminder. You say "rapid stuff gets laggy", I will try it
anyway, using minimalistic raspbian and some performance tuning, but thank
you for your "kind warning", will keep it in mind. Will try to use some
more simple soundfonts (but probably not any "cheap" 8-bit like
synthesizers..)
Regards
Milan
2014-02-24 20:44 GMT+01:00 Ben Bell <bjb-linux-audio-user(a)deus.net>:
> On Sun, Feb 23, 2014 at 07:09:06PM +0100, Milan Lazecky wrote:
> > i have a yamaha wx5 (midi saxophone). i was thinking to use my midi2usb
> > cable to plug into raspberry pi which would use some fluidsynth
> soundfonts
> > to synthesize music realtime.
>
> There are some examples of people trying to do this and I think the general
> impression is that the latency is touch and go depending on what you're
> playing. I have one here running fluidsynth, alsa midi using an Evolution
> USB keyboard, and a set of mellotron soundfonts. It's OK for chords and
> single note runs, but if I try anything rapid, it feels laggy. Of course,
> compared with a real mellotron that's not so bad, but playing a WX5 may be
> sore.
>
> As others have pointed out the audio out isn't audiophile quality, but I'd
> have thought if you were in a studio you'd use proper hardware and this
> would be for live use? In which case, factory in an amp, an audience
> talking
> and so on, and I don't think it's as big an issue as people make out.
>
>
Hi,
I'm completely new to Common Music
(http://sourceforge.net/projects/commonmusic/), so I'm hoping somebody
can give me a hand here. Before I go to the effort of starting to learn
it, I wanted to know if it is possible to somehow import audio files
with it and treat them as objects (say for example that I have a number
of noise sounds and that I want to generate some music by randomly
playing them for a length of time)? A quick look at the Common Music
webpage and some of the examples there didn't clear this up for me.
At the same time, what would be the best way to start learning how to
use it? From what I read, the book
http://www.amazon.com/Notes-Metalevel-Introduction-Computer-Composition/dp/…
is a very good starting point, but apparently all the examples, etc. are
for an old version of Common Music (CM2), so I'm not sure if whatever
one can learn from that book will be of much use with the current
version of CM3.
Any pointers welcome. Thanks a lot,
--
Ángel de Vicente
http://www.iac.es/galeria/angelv/
---------------------------------------------------------------------------------------------
ADVERTENCIA: Sobre la privacidad y cumplimiento de la Ley de Protecci�n de Datos, acceda a http://www.iac.es/disclaimer.php
WARNING: For more information on privacy and fulfilment of the Law concerning the Protection of Data, consult http://www.iac.es/disclaimer.php?lang=en
Hi
I have "some success" with getting input from a ps3 controller over usb:
This shows up in lsusb:
atte@skagen:~$ lsusb | grep Sony
Bus 003 Device 013: ID 054c:0268 Sony Corp. Batoh Device / PlayStation 3
Controller
"cat /dev/input/js0" (as regular user) show the expected garbage when
moving the controller around or touching buttons. I also get input in
chuck opened with Hid.openJoystick(0), so I'm pretty sure the controller
is recognized and sending stuff into the system.
Now, how do I get it running over bluetooth (wireless). Google led me to
lot's of outdated info, so I'm hoping someone here with hands-on
experience on a recent system could provide a few starting points,
hints, links or clues.
Thanks in advance!
--
Atte
http://atte.dkhttp://modlys.dk
Hi dear all.
Just wanted to let you know about this amazing course that the
University of Edimburg has online during 7 weeks, and there's still
days to sign in:
https://www.futurelearn.com/courses/higgs
As maybe you've notice, I'm quite eclectic and curious about anything,
and Science (Physics in this case) is one of my favourite matters, so
I'm in the course. It's now in the thick of the 2nd week and I tell
you that it is amazing the quality and also the simplicity they've
achieved given the complexity of the subjects, with a lot of
instructional videos and some articles and texts, and even Mr. Higgs
is there himself.
Kindest Regards.
--
Carlos sanchiavedraz
* Musix GNU+Linux
http://www.musix.es
Thank you! This looks fascinating!
Grekim
Hi, has anyone tried a usb sound card with the BBB? I want to connect my
guitar to it and run some PD patches so something with low latency would be
niiice :)
Also wondering how hard it would be to directly connect some ADCs and DACs.
Has anyone tried?
--
Rafael Vega
email.rafa(a)gmail.com
On Fri, Feb 21, 2014 at 08:34:16PM +0100, Jörn Nettingsmeier wrote:
> On 02/21/2014 07:52 PM, Lieven Moors wrote:
> >>it was part of the API very early on, then we decided we didn't want to
> >>impose the possibility of change on clients. as time goes on, it becomes
> >>clear (to me at least) that we should have implemented it.
> >
> >What would be use cases for changing the sample rate dynamically?
>
>
> having wired up a complex signal graph, which for the most part depends on
> the studio, not on the project at hand, and then having to deal with
> different projects in different sample rates.
>
> say your studio involves three monitoring setups, one main stereo, one
> nearfield, and one surround, you are using jack to do EQ on those things, in
> my case there's an ambisonic decoder in the loop as well. that means the
> jack graph is already quite elaborated. in that case, it would be nice to
> leave it running while switching from, say, a cd project at 44k1 to a tv
> thing at 48k.
>
> as it is now, i have decided to do _everything_ at 48k (i have no second
> thoughts about a final resampling step), but if a client brings material at,
> say, 96k, i have to downsample first. sometimes i wish for an easy way to
> reclock a graph. obviously, nobody expects this to be gapless. fading
> everthing down and then taking a few seconds to reclock everything would be
> fine.
>
> but then, many pieces of software in my chain would need changes. for
> instance, an important piece of dsp for me is jconvolver, as it sits in
> front of all my speakers.
> of course, the impulse responses i use for EQ and room correction only make
> sense for a given sample rate - it would have to be changed to swap one set
> of IRs for another during a reclocking call, and of course that needs to be
> configured and the user actually needs to provide those different IRs.
>
Yes, I see...
I got into the habbit of using the same sample rate for all my projects
as well. And I can remember a few times I wished to change the sample rate
on the fly.
Now I wonder if this would be difficult to implement. Do many clients
expect the sample rate to remain stable? Aren't most clients checking for
the sample rate in the process callback anyway? Of course, clients depending
on samples or IR's would have to play back at the wrong rate...
lieven