Hi everybody,
Following up on the long discussion, I'm trying to provide some of the
information you all seem to be missing.
There was a mention of this already, but not many of you paid any
attention:
Here you can find an article about the current status of the protocol mess:
http://prosoundnewseurope.com/pdf/PSNLive/PSNLive_2009.pdf (page 28)
Obviously AES50 is good to go, but Ethernet AVB is really the right thing.
The only catch is that it's still a work in progress, but many of the
proprietary vendors that already have their own networking solutions
(like Harman with HiQnet, the one I can name off the top of my head)
are involved in the AVB standard development.
The idea of AVB is to bypass the IP layer, which really is the right thing:
you don't need to assign IPs to your audio nodes at all!
In AVB you just select the channels that the nodes want to listen to.
There is a fair bit of documentation on the IEEE 802.1 AVB task group's pages,
but XMOS is looking to be the best point of reference:
http://www.xmos.com/news/15-jun-2009/xmos-simplifies-ethernet-avb-implement…
I think we should forget everything else and crack on with the XS1 AVB
implementation!
Their XS1 chips seem to be really great;
they are basically very innovative and open-source minded.
The official toolchain is LLVM-GCC based, and
you can use C, C++ or their own XC.
XC is basically C with some things omitted (like goto and floats)
and XMOS I/O stuff added. Don't just say WTF, look at it first!
You should also watch the videos here:
http://www.xmoslinkers.org/conference-online-wf
especially the two about the "XMOS Architecture" and the AVB
presentation.
Some dev-kits are quite expensive, but that's due to low volume really
;)
There is also a nice USB Audio kit!
Plus there is a little board that is cheap and has two RJ45s on it
already :)
I'm studying the XC book myself at the moment, and getting familiar with
the toolset :)
It looks very exciting, because these are the innovative chips!
OK, maybe an FPU really is missing on the XCore, but how many DSPs have
one anyway? Well, quite a few these days, but DSPs went without an FPU for ages! :))
Also, XC or C/C++ are so much more obvious than the bloody HDL-whatever nonsense!
Cheers Everyone,
Hope you will appreciate my excitement :) (lol)
--
ilya .d
For more information read here:
http://en.wikipedia.org/wiki/MIDI_beat_clock
My question: does something like this exist for ALSA? I am interested in sending MIDI beat clock
signals from Hydrogen to external hardware synthesisers/arpeggiators, and I am explicitly
not interested in syncing them to any timecode, because the external machines have to run
independently and in a random order. They only have to sync their beats.
Here are the MBC specs.
MIDI beat clock defines the following real-time messages:
* clock (decimal 248, hex 0xF8)
* tick (decimal 249, hex 0xF9)
* start (decimal 250, hex 0xFA)
* continue (decimal 251, hex 0xFB)
* stop (decimal 252, hex 0xFC)
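To illustrate what I want to send: the ALSA sequencer already has event types for these messages, so emitting one clock pulse would look roughly like the sketch below. This is just a sketch, assuming a sequencer client and output port already exist, with error handling omitted.

#include <alsa/asoundlib.h>

/* Rough sketch: emit one MIDI beat clock pulse (0xF8) through the ALSA
 * sequencer.  'seq' and 'port' are assumed to be an already-opened
 * sequencer handle and an already-created output port. */
void send_clock(snd_seq_t *seq, int port)
{
    snd_seq_event_t ev;
    snd_seq_ev_clear(&ev);
    ev.type = SND_SEQ_EVENT_CLOCK;      /* start/continue/stop work the same way */
    snd_seq_ev_set_source(&ev, port);
    snd_seq_ev_set_subs(&ev);           /* deliver to all subscribers */
    snd_seq_ev_set_direct(&ev);         /* no queue, send immediately */
    snd_seq_event_output_direct(seq, &ev);
}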
And about ticks:
I found out that Linux audio apps each have different, or their own, definitions of the quantity of ticks per beat.
Does it make sense to agree on a common ticks-per-beat value, or is this irrelevant for syncing? I especially mean syncing via jack-transport here.
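If I understand correctly, MIDI beat clock itself is fixed at 24 clock messages per quarter note, so the per-app ticks-per-beat (PPQN) values are only each app's internal resolution. For example, the spacing of clock messages at a given tempo is just:

/* MIDI beat clock is 24 pulses per quarter note, so the gap between
 * successive 0xF8 messages, measured in audio frames, is: */
double clock_period_frames(double sample_rate, double bpm)
{
    return sample_rate * 60.0 / (bpm * 24.0);
}
/* e.g. 48000 Hz at 120 BPM -> 1000 frames between clock messages */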
greetings wolke
As a power user who's modestly (just kidding) keen on saving time,
having a great workflow, and avoiding as much as possible of the drudgery
of editing work over and over again to get an end result, I've had
the privilege and pleasure of testing and working with a data protocol
called CV, or control voltage, over the last two weeks.
Non-DAW, and its new buddy Non-Mixer, let me write function
data in Non-DAW control sequences, or "lanes", at will. From my POV it's
like turbo automation, and I'm still surprised and delighted at how
easy and FAST it is to work with. Without the layered complexity that
is MIDI, in a simple 1:1 format, this is a very clever way to handle
automation data between apps, imho.
I ask devs who are building up or modifying their Linux audio and
video apps to cast a brief eye over this protocol, and at
least spare a thought for the opportunities it offers to stream
data directly from one app to another. It seems to be an ideal solution for a
modular framework, without a lot of complexity involved. Best of all,
it uses JACK ports to do the routing work, so there's no additional
work devs have to do when trying to stream data across apps. I know
some of you will be familiar with this protocol, so this quick note
could be considered a reminder. :)
Non-DAW and Non-Mixer are CV capable, and I can enthusiastically
testify to the system working very well indeed.
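For devs curious about what's involved: since a CV port is just an ordinary JACK audio port carrying control data, exposing one needs nothing beyond the standard JACK API. A minimal sketch (the client and port names here are made up for the example):

#include <unistd.h>
#include <jack/jack.h>

/* Minimal sketch of a client exposing one CV output port.  A "CV"
 * port is a normal JACK audio port whose samples are interpreted as
 * control values, one per frame. */
static jack_port_t *cv_out;
static float cv_value = 0.5f;          /* whatever the control source produces */

static int process(jack_nframes_t nframes, void *arg)
{
    float *buf = jack_port_get_buffer(cv_out, nframes);
    for (jack_nframes_t i = 0; i < nframes; i++)
        buf[i] = cv_value;
    return 0;
}

int main(void)
{
    jack_client_t *c = jack_client_open("cv-example", JackNullOption, NULL);
    if (!c)
        return 1;
    cv_out = jack_port_register(c, "cv_out", JACK_DEFAULT_AUDIO_TYPE,
                                JackPortIsOutput, 0);
    jack_set_process_callback(c, process, NULL);
    jack_activate(c);
    for (;;)
        sleep(1);                      /* run until killed */
    return 0;
}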
I guess you could call this a quick heads-up for a community inter-app
opportunity, and given the recent resurgence of the Session discussion
(woohoo), I'm thinking the CV protocol might be complementary as a
component in such a framework, from a user's perspective.
Alex.
--
www.openoctave.org
midi-subscribe(a)openoctave.org
development-subscribe(a)openoctave.org
I wanted a very simple SDR with jack inputs and outputs for a
demonstration I was doing. I had a look at the DSP guts of dttsp and
quisk, and sat down to code.
Now, since I wanted to demonstrate how you could use LADSPA filters to
clean up received audio, it occurred to me that I should implement my
SDR core as a LADSPA plugin. So, I did.
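For anyone curious what that involves, the plumbing is pretty small. Here's a bare-bones sketch of a LADSPA plugin (just a pass-through; the real DSP would go in run(), and the label, name and unique ID below are placeholders, not those of the actual ladspa-sdr plugin):

#include <stdlib.h>
#include <ladspa.h>

enum { PORT_IN, PORT_OUT, PORT_COUNT };

typedef struct { LADSPA_Data *in, *out; } Plugin;

static LADSPA_Handle instantiate(const LADSPA_Descriptor *d, unsigned long rate)
{
    return calloc(1, sizeof(Plugin));
}

static void connect_port(LADSPA_Handle h, unsigned long port, LADSPA_Data *buf)
{
    Plugin *p = (Plugin *)h;
    if (port == PORT_IN)  p->in  = buf;
    if (port == PORT_OUT) p->out = buf;
}

static void run(LADSPA_Handle h, unsigned long nframes)
{
    Plugin *p = (Plugin *)h;
    unsigned long i;
    for (i = 0; i < nframes; i++)
        p->out[i] = p->in[i];          /* real DSP core goes here */
}

static void cleanup(LADSPA_Handle h)
{
    free(h);
}

static const char *const port_names[PORT_COUNT] = { "Input", "Output" };
static const LADSPA_PortDescriptor port_desc[PORT_COUNT] = {
    LADSPA_PORT_INPUT  | LADSPA_PORT_AUDIO,
    LADSPA_PORT_OUTPUT | LADSPA_PORT_AUDIO,
};
static const LADSPA_PortRangeHint port_hints[PORT_COUNT];

static const LADSPA_Descriptor descriptor = {
    .UniqueID        = 9999,           /* placeholder ID */
    .Label           = "passthrough_example",
    .Properties      = LADSPA_PROPERTY_HARD_RT_CAPABLE,
    .Name            = "Pass-through example",
    .Maker           = "example",
    .Copyright       = "None",
    .PortCount       = PORT_COUNT,
    .PortDescriptors = port_desc,
    .PortNames       = port_names,
    .PortRangeHints  = port_hints,
    .instantiate     = instantiate,
    .connect_port    = connect_port,
    .run             = run,
    .cleanup         = cleanup,
};

/* LADSPA hosts find plugins through this entry point. */
const LADSPA_Descriptor *ladspa_descriptor(unsigned long index)
{
    return index == 0 ? &descriptor : NULL;
}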
It "works for me". If you try it out, let me know how you get on. At
256 frames/period it sits at about 3% usage on my P4-2.8 without any
other LADSPAs running - not bad, but it probably could be better.
If you want to build it, get the code with:
git clone git://lovesthepython.org/ladspa-sdr.git
then build it with scons. You'll need to manually copy the resulting
sdr.so to wherever your LADSPA plugins live. Load it up in jack-rack
and add in an amplifier plugin (there's no AGC) and some sort of filter
(I recommend the Glame Bandpass Filter).
Performance and quality aren't exactly amazing, but for less than 300
lines of code - much of that used to set up the plugin - it's not too
bad.
Gordon MM0YEQ
We seem to be fairly interested in the same things, James!
I don't know if you have access to university lecturers... if you do, go
have a chat with
the software engineering lecturer. I've only had positive experiences when
approaching them
about "totally-unrelated-to-course" projects.
On the other hand, I bought a book (forget the exact name.. can find out)
which covered some of the basic object-oriented stuff, but at the same time
I found it to be relatively useless when trying to apply it to
"music software"
(i.e. Ardour, Seq24, Dino, that kind of program).
Spending time drawing out program diagrams (you know, the "standard" boxes
approach
to explaining how classes interact) has been my approach. I didn't really
find any great
resources online. If you do find any, please post back here! :-)
Good luck, -Harry
Jorn, Fons, I'm looking for a LADSPA UHJ encoder, and can't seem to
find one. Any idea if such a beast exists? Or if there's a standalone
instance or ambdec preset I can use, and route in and out of?
Jorn, I've had several browses over your web examples of using AMB
plugins with Ardour, and have reflected the setup where possible in
Non-Mixer.
I'm using samples (a la LSampler) for noise, but I'll ask here: what's
the function of using the tetraproc mic plugin over something else?
I'm lost in your explanation.
I'm still getting my feet wet in ambisonics, and making plenty of
errors along the way, but progress seems imminent (as it always does,
I guess, for the optimistic among us).
Some general questions.
When I use jconvolver standalone (my preference) and test with a
*amb.conf, I get 1 input and 4 outputs, WXYZ. Is this correct for 4
signals coming into 1 input of the *amb.conf, or do I need to change this
to reflect individual WXYZ routing, from something like a MASTER
strip, or from an ambdec plugin in a channel strip? (I'm trying to get
the signal chain sorted out correctly.) I.e. 4 in, 4 out.
I'm using all mono inputs for sound sources, and want to handle
positioning in the busses, as I have multitrack 1st violins,
2nd violins, etc...
So my 1st violins (4 mono tracks) go into a 1st-violin buss (4
ins), and in the buss signal chain I'm adding a LADSPA AMB mono
panner, which naturally gives me 4 outs; then the chain continues to
the MASTER and jconvolver, back into a jconv buss in the mixer, with
the intent of finally routing that to the UHJ buss...
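For my own reference (and correct me if I've got this wrong), my understanding is that a first-order mono B-format panner derives the four WXYZ signals from the mono source roughly like this, with azimuth and elevation in radians:

#include <math.h>

/* Sketch of first-order B-format encoding of a mono sample s.
 * The 0.7071 on W is the conventional -3 dB weighting. */
void encode_bformat(float s, float az, float el,
                    float *w, float *x, float *y, float *z)
{
    *w = s * 0.7071f;
    *x = s * cosf(az) * cosf(el);
    *y = s * sinf(az) * cosf(el);
    *z = s * sinf(el);
}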
Should I then stay "faithful" to that signal chain, and maintain the
4-port WXYZ stream right up to a UHJ encode to stereo (which I hope
exists as a LADSPA plugin)?
The intent with this is to provide ambisonic positioning, and the convolver
tail, right up until downmixing to stereo as the last part of the
signal chain.
I'm finding the challenge of this interesting, and may have more
questions as more of this slowly seeps into my head.
Feel free to point out obvious errors, or alternative (meaning
smarter) suggestions.
Alex.
--
www.openoctave.org
midi-subscribe(a)openoctave.org
development-subscribe(a)openoctave.org
Good day...
Just coming to grips with, and learning, the alias system...
Under what conditions might a Jack port not have any alias names?
When might I expect to encounter that situation?
Because our app supports both ALSA midi and Jack midi, the app's very own
ALSA ports are showing up in its list of Jack midi ports. We don't want our
own ALSA ports listed in there.
So to filter them out of our Jack midi ports list, I look at the port's (first)
alias name and see if the app's name is in there, and filter out the port if
it's a match.
So far so good, but I'm worried what happens if there's no alias to work with.
I can't figure out a way to determine if a non-aliased name like
"system:midi_playback_4" actually belongs to our app's own ALSA ports.
I don't know how or if 'alias renaming' will affect my plans.
Still learning + investigating much about this system.
Thanks. Tim.
Hey, has anyone been seeing strange behavior from this combination?
kernel 2.6.31.x rt20 + alsa 1.0.22 userland
RME card (pcmcia card + multiface)
hdspmixer is not doing the right thing (it does not initialize the card in
a way in which playback works), it does not see the hwdep interface (or
something like that) and disables metering, and alsamixer even segfaults
when I reach the end of the listed controls. Plain weird. Smells like
something changed deep in the kernel that makes alsa-lib very unhappy.
Rebuilding alsa-tools from source does not make a difference.
Weirdness goes away when I boot into 2.6.29.6 rt23...
Is there anything in alsa-* that depends on which _kernel_ is available
at compile time?
-- Fernando