Hello LADs
I'm trying to make a GUI for an LV2 synth based on so-404. In fact, I'm
trying to learn C / C++.
I got everything to build OK, but I'm stuck on a hard-to-read runtime
error: when I load the plugin in jalv, the plugin itself loads OK, but
it fails to load the UI and says:
suil error: Unable to open UI library
/usr/local/lib/lv2/kis.lv2/kis_gui.so
(/usr/local/lib/lv2/kis.lv2/kis_gui.so: undefined symbol:
_Z17instantiateSO_404PK15_LV2_DescriptordPKcPKPK12_LV2_Feature)
(The architecture is based on
https://github.com/harryhaaren/lv2/tree/master/plugins/eg-sinsynth.lv2
in fact it is exactly the same: synth.c/h, synth_gui.cpp, widget.cpp/h,
built as synth.so, synth_gui.so, synth.ttl, and manifest.ttl using waf).
I must say I have yet to fully understand the concept of descriptors...
Can somebody see what I'm doing wrong?
Thanks
--Philippe "xaccrocheur" Coatmeur
I'm almost there:
⚡ jalv.gtk https://bitbucket.org/xaccrocheur/kis
Plugin: https://bitbucket.org/xaccrocheur/kis
UI: https://bitbucket.org/xaccrocheur/kis#gui
UI Type: http://lv2plug.in/ns/extensions/ui#X11UI
JACK Name: Kis
Block length: 512 frames
MIDI buffers: 32768 bytes
Comm buffers: 524288 bytes
Update rate: 25.0 Hz
SO-404 v.1.2 by 50m30n3 2009-2011
controlmode = 1.000000
volume = 50.000000
cutoff = 50.000000
resonance = 100.000000
envelope = 80.000000
portamento = 64.000000
release = 100.000000
channel = 1.000000
frequency = 440.000000
suil error: Failed to find descriptor for
<https://bitbucket.org/xaccrocheur/kis#gui> in
/usr/local/lib/lv2/kis.lv2/kis_gui.so
Please, give me a pointer to a basic implementation of that descriptor
thing, somebody, please end my misery.
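The suil error means the host called `lv2ui_descriptor()` in kis_gui.so and did not get back a descriptor whose URI matches the one declared in the .ttl. Below is a minimal, hedged sketch of that entry point. To keep it self-contained it uses simplified stand-in types rather than the real `lv2/lv2plug.in/ns/extensions/ui/ui.h` header (the real struct has the same shape but fully specified callback signatures); the URI is taken from the error message above, and everything else is hypothetical:

```cpp
#include <cstddef>
#include <cstring>

// Simplified stand-ins for the types normally pulled in from the
// LV2 UI extension header <lv2/lv2plug.in/ns/extensions/ui/ui.h>.
typedef void* LV2UI_Handle;
typedef void* LV2UI_Widget;

struct LV2UI_Descriptor {
    const char* URI;
    LV2UI_Handle (*instantiate)(const LV2UI_Descriptor* descriptor,
                                const char* plugin_uri,
                                const char* bundle_path,
                                void* write_function,   // simplified here
                                void* controller,
                                LV2UI_Widget* widget,
                                const void* const* features);
    void (*cleanup)(LV2UI_Handle ui);
    void (*port_event)(LV2UI_Handle ui, unsigned port_index,
                       unsigned buffer_size, unsigned format,
                       const void* buffer);
    const void* (*extension_data)(const char* uri);
};

// Must match the UI URI declared in manifest.ttl / synth.ttl exactly.
#define KIS_UI_URI "https://bitbucket.org/xaccrocheur/kis#gui"

static LV2UI_Handle ui_instantiate(const LV2UI_Descriptor*, const char*,
                                   const char*, void*, void*,
                                   LV2UI_Widget* widget,
                                   const void* const*)
{
    *widget = NULL;          // a real UI would create its widget here
    return (LV2UI_Handle)1;  // opaque handle to the UI's state
}

static void ui_cleanup(LV2UI_Handle) {}

static const LV2UI_Descriptor kis_ui_descriptor = {
    KIS_UI_URI, ui_instantiate, ui_cleanup, NULL, NULL
};

// The entry point suil looks up by name.  In a .cpp file it MUST be
// extern "C", or the name gets mangled and the lookup fails.
extern "C" const LV2UI_Descriptor* lv2ui_descriptor(unsigned index)
{
    return index == 0 ? &kis_ui_descriptor : NULL;
}
```

The eg-sinsynth synth_gui.cpp follows the same pattern with the real header; for an X11UI the widget handed back through `widget` would be the X11 window ID.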
--Phil
On 15/10/14 20:17, Paul Davis wrote:
>
>
> On Wed, Oct 15, 2014 at 2:24 PM, Phil CM <philcm(a)gnu.org
> <mailto:philcm@gnu.org>> wrote:
>
> Hello LADs
>
> I'm trying to make a GUI for a LV2 synth based on so-404. In fact,
> I'm trying to learn C / C++.
>
> I got everything to build OK, but I'm stuck on a hard to read
> execution error ; when I load the plugin in jalv, it loads it OK,
> but fails to load the UI and says :
>
> suil error: Unable to open UI library
> /usr/local/lib/lv2/kis.lv2/kis_gui.so
> (/usr/local/lib/lv2/kis.lv2/kis_gui.so: undefined symbol:
> _Z17instantiateSO_404PK15_LV2_DescriptordPKcPKPK12_LV2_Feature)
>
>
>
> % echo _Z17instantiateSO_404PK15_LV2_DescriptordPKcPKPK12_LV2_Feature
> | c++filt
> instantiateSO_404(_LV2_Descriptor const*, double, char const*,
> _LV2_Feature const* const*)
> %
>
> your plugin is missing a symbol (instantiateSO_404(...)), apparently.
>
>
>
> _______________________________________________
> Linux-audio-dev mailing list
> Linux-audio-dev(a)lists.linuxaudio.org
> http://lists.linuxaudio.org/listinfo/linux-audio-dev
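Paul's c++filt output points at the usual cause: the file was compiled as C++, so the symbol name was mangled, and the other .so cannot resolve the plain name it expects. A hedged sketch of the usual fix (the function name and parameter types are taken from the demangled error message; the return type, the opaque struct declarations, and the dummy body are my assumptions):

```cpp
#include <cstddef>

// Opaque stand-ins so the sketch compiles on its own; the real tags
// come from the lv2core header.
struct _LV2_Descriptor;
struct _LV2_Feature;

// Without extern "C", g++ exports this as
// _Z17instantiateSO_404PK15_LV2_DescriptordPKcPKPK12_LV2_Feature,
// which is exactly the name suil reports as undefined.  Wrapping it
// in extern "C" keeps the unmangled name "instantiateSO_404".
extern "C" void* instantiateSO_404(const _LV2_Descriptor* descriptor,
                                   double sample_rate,
                                   const char* bundle_path,
                                   const _LV2_Feature* const* features)
{
    (void)descriptor; (void)bundle_path; (void)features;
    // Dummy body for the sketch: a real plugin allocates its state here.
    return sample_rate > 0.0 ? (void*)1 : nullptr;
}
```

The same rule applies to `lv2_descriptor()` and `lv2ui_descriptor()` themselves whenever they live in a .cpp file.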
Hello all,
For some time now I have been getting duplicate messages on this list;
for example, the last one arrived just a few minutes ago:
> Date: Sun, 5 Oct 2014 23:35:21 +0200
> From dspam(a)snarchi.io Sun Oct 5 21:35:55 2014
> From: tom(a)trellis.ch
> To: Fons Adriaensen <fons(a)linuxaudio.org>
> Cc: linux-audio-dev(a)lists.linuxaudio.org
This seems to happen only to messages which are a reply to one
I sent. The 'dspam(a)snarchi.io' address seems to be related
to 'marcochapeau', a name I've seen before on this list.
I just wonder if I'm the only one getting these.
Ciao,
--
FA
A world of exhaustive, reliable metadata would be an utopia.
It's also a pipe-dream, founded on self-delusion, nerd hubris
and hysterically inflated market opportunities. (Cory Doctorow)
Hi Matthias,
I'm trying to build your ambix plugin suite on Linux using the LV2
wrapper, which fails with the following problem during makefile generation:
CMake Error at CMakeLists_subprojects.txt.inc:104 (ADD_LIBRARY):
Cannot find source file:
/local/build/ambix/JUCE/modules/juce_audio_plugin_client/LV2/juce_LV2_Wrapper.cpp
Tried extensions .c .C .c++ .cc .cpp .cxx .m .M .mm .h .hh .h++ .hm .hpp
.hxx .in .txx
Call Stack (most recent call first):
ambix_binaural/ambix_binaural/CMakeLists.txt:20 (INCLUDE)
It seems that the JUCE tree in your repo does not have an LV2 directory
at all, and neither does the official JUCE repo - has it been dropped?
Any hints much appreciated, best greetings from Essen to Vilnius,
Jörn
--
Jörn Nettingsmeier
Lortzingstr. 11, 45128 Essen, Tel. +49 177 7937487
Meister für Veranstaltungstechnik (Bühne/Studio)
Tonmeister VDT
http://stackingdwarves.net
Hi all
Please take a look at http://ags.sf.net/api/ags and let me know what
you're missing.
Corrections may follow; if you find mistakes, let me know too.
At the moment there are probably plenty of them.
kind regards
Joël
It's job hunting time, and while I am skeptical that there are very many
paying jobs for Linux audio developers out there, I thought I'd at least
ask on this list. What companies are doing work in the Linux audio
universe?
Or, on the other side, anybody know of interesting
music-instrument/audio companies located in or around NYC (where I live)
who might need Linux folks for their web presence or other online stuff?
Any tips appreciated!
Thanks,
Bill Gribble
I have been playing with numbers (a little) for AoIP and AES67 (because I
can actually read the spec). The media clock (AKA wordclock) is derived
from the wall clock, which is synced via PTP. There are three sample
rates supported: 44.1k (not sure if any physical devices really use it),
48k, and 96k.
The first thing I find is that it is not possible to get an even word
clock via simple math. The wall clock moves one tick per usec, which at
48k is 20.8333... ticks per sample (44.1k is a mess). I would suggest
this is why AVB and AES67 at the lowest latency already use 6 sample
frames, which is a nice even 125 usec.
It is obvious to me from this that the true media clock is derived by
external circuitry. The one "must support" format is 48k at 48
samples/packet (both 16- and 24-bit on the receive end of a stream).
This is a 1 ms packet time. (There are AoIP devices that only support
AES67 with this format.) So I am guessing that it must be easy to derive
a media clock from a 1 ms clock input. The computer (CPU board) would
have a GPIO with a 1 ms clock out derived from the CPU wall clock, which
in turn is kept in sync with the network via PTP. I am guessing there is
already a chip in production that does this with almost no support
circuitry.
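Those numbers are easy to sanity-check with integer arithmetic. A quick sketch; the one-tick-per-microsecond wall clock is taken from the paragraph above, everything else is just division:

```cpp
#include <cstdint>

// Wall clock ticks once per microsecond (1 MHz), per the text above.
constexpr int64_t USEC_PER_SEC = 1000000;

// Does a run of `frames` sample frames at `rate` Hz land exactly on a
// microsecond tick?
constexpr bool lands_on_tick(int64_t frames, int64_t rate) {
    return (frames * USEC_PER_SEC) % rate == 0;
}

// Packet time in whole microseconds (only meaningful when it lands).
constexpr int64_t frames_to_usec(int64_t frames, int64_t rate) {
    return frames * USEC_PER_SEC / rate;
}

// One 48k sample is 20.8333... us: not a whole number of ticks.
static_assert(!lands_on_tick(1, 48000), "single 48k sample is uneven");
// The AVB/AES67 low-latency size: 6 frames at 48k is exactly 125 us.
static_assert(frames_to_usec(6, 48000) == 125, "6 frames = 125 us");
// The "must support" format: 48 frames at 48k is exactly 1 ms.
static_assert(frames_to_usec(48, 48000) == 1000, "48 frames = 1 ms");
// 44.1k really is a mess: even 6 frames does not land on a tick.
static_assert(!lands_on_tick(6, 44100), "6 frames at 44.1k is uneven");
```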
The side effect of the 1 ms/packet standard is that streams are limited
to 8 channels at 48k and 4 channels at 96k, so for more channels more
streams are required. This maximum comes from the requirement that the
full packet must fit into one standard ethernet frame with the MTU at
1500 (minus headers etc.); the transport does not reassemble fragmented
packets and basically drops them.
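The channel limits follow from the same back-of-the-envelope math. A sketch under stated assumptions: 24-bit (L24) samples at 3 bytes each, and roughly 40 bytes of IP/UDP/RTP headers inside the 1500-byte MTU; the spec's own accounting may differ slightly:

```cpp
// Audio payload of one packet, in bytes.
constexpr int payload_bytes(int channels, int frames, int bytes_per_sample) {
    return channels * frames * bytes_per_sample;
}

constexpr int MTU     = 1500;
constexpr int HEADERS = 40;            // IP(20) + UDP(8) + RTP(12), roughly
constexpr int BUDGET  = MTU - HEADERS; // what is left for audio samples

// 8 channels of 1 ms (48 frames) at 48k, L24: fits in one frame.
static_assert(payload_bytes(8, 48, 3) == 1152, "8 ch @ 48k = 1152 bytes");
static_assert(payload_bytes(8, 48, 3) <= BUDGET, "fits the MTU");
// 4 channels of 1 ms (96 frames) at 96k, L24: same 1152 bytes.
static_assert(payload_bytes(4, 96, 3) == 1152, "4 ch @ 96k = 1152 bytes");
// Doubling the channel count would need fragmentation, which is dropped.
static_assert(payload_bytes(16, 48, 3) > BUDGET, "16 ch would fragment");
```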
1 ms latency in the jack world is the same as 16 frames x 3 periods.
This may explain why /3 works better for other formats such as USB or
FireWire. It may also be the bridge that works best with Opus at 5 ms.
--
Len Ovens
www.ovenwerks.net
Is anyone in contact with Lars? I tried to send him an email but it
immediately bounced, so I think I've got an address that's no longer valid.
--
Will J Godfrey
http://www.musically.me.uk
Say you have a poem and I have a tune.
Exchange them and we can both have a poem, a tune, and a song.
Hello all,
Today I got an email of a user asking me to help him make a plugin
called 'zitaretuner' work. I never wrote such a plugin, and I didn't
even know it existed. So I can't help this user.
Of course this made me curious, and I managed to get a copy of
the source code of this lv2 plugin. And I wasn't very amused.
As expected it's based on zita-at1, and again a complete disaster.
The DSP code of zita-at1 is written as a neat self-contained C++
class with a very clean interface, and this is done explicitly to
make it re-usable.
But instead of re-using it, the author of the plugin decided to
rewrite it in C, and combine it in the same source file with parts
of libzitaresampler (instead of using that as a library as it is
meant to), and with whatever is required to turn it into an lv2.
The whole thing is just a single source file.
The same author (who is known only as 'jg') didn't bother to add
a decent GUI, relying on the plugin host to create one. That means
for example that the note selection buttons (which also double as
'current note' indicators in zita-at1), are replaced by faders.
Only $GOD knows what they are supposed to control.
And as a final topping on the cake, that whole crappy thing is
presented as if I were the author of it all. No mention at all
that things have been modified, and by whom or why. This alone
is a clear violation of the license under which zita-at1 was
released. And whoever did it doesn't even have the courage to
identify him/herself.
I've complained about this sort of thing before, and this time
I'm really pissed. So let one thing be clear: I will never again
release any code under a license that allows this sort of thing
to happen.
Ciao,
--
FA
A world of exhaustive, reliable metadata would be an utopia.
It's also a pipe-dream, founded on self-delusion, nerd hubris
and hysterically inflated market opportunities. (Cory Doctorow)
I have been doing some reading on audio over IP (or networking of any
kind) and one of the things that comes up from time to time is collisions.
Anything I read about ethernet talks about collisions and how to deal with
them. When I was thinking of a point to point layer two setup, my first
thought was there should be no collisions. Having read all the AES67 and
other layer 3 protocols there does not seem to be mention of collisions
really. My thought is that on a modern wired network there should be no
collisions at all. The closest thing would be a delay at a switch while it
transmitted another packet that in a hub would have been a collision.
So my thought is that AoIP at low latencies depends on a local net where
no collision is possible. Am I making sense, or am I missing something?
The various documents do talk about three kinds of switches: home,
enterprise, and AVB. It is quite clear what makes an AVB switch, but
what does an "enterprise" switch have over any other switch aside from
speed? I am sure I am being small-minded in my thoughts here. For
example, I am expecting very little non-audio bandwidth, and I am
guessing that the average home switch does not prioritize any style of
packet over another.
I guess I am asking: what parts of a switch are important for audio? I
am guessing that HW encryption (offloading the SSL from the server to
the switch) is not something that helps audio. Even with proprietary
protocols, it would not make sense to use encryption. There seems to be a
real plug and play emphasis where the audio enabled LAN is firewalled from
the rest of the world. The streams are multicast so any box that
recognizes the packets can use those channels and offer its own audio
channels. Any box can send control messages to reroute audio... there must
be some kind of authentication for that.... (light bulb) this is why AES67
does not include controls and perhaps discovery either. I can just see
someone walking into a concert with a notepad and deciding they want
their own mix... I am guessing the promoter would not want people walking
away with their own "direct from the mixer" audio track of the event
either. Not quite plug and play then. The wireless router wants to limit
what traffic it deals with.
So AES67 allows the control to be offloaded to a web interface with access
control (switch SSL offloading would not help here anyway). Discovery ends
up being manual at first glance. I am thinking it will not be long before
the control and setup will automate logging into the web interface and
setting things up. A system-wide user/password and an interface that
defaults to streaming its own inputs as multicast is already discoverable.
--
Len Ovens
www.ovenwerks.net