On 5/29/10, akjmicro(a)gmail.com <akjmicro(a)gmail.com> wrote:
> Hey all,
>
> Yes, grepping for the port type which appears underneath with a 'jack_lsp
> -t' will be more consistent and dependable. Or, using a python-jack lib
> function and not depending on any system shell calls. The problem then
> becomes, is the jack lib for python well documented? If so, I think that's
> the real future of jackctl.py
>
> PS Qjackctl may be lightweight, but installing the entire QT toolkit just to
> use it is not!
>
> AKJ
>
> Sent from my Verizon Wireless BlackBerry
>
> -----Original Message-----
> From: Robin Gareus <robin(a)gareus.org>
> Date: Sat, 29 May 2010 14:00:52
> To: Julien Claassen<julien(a)c-lab.de>
> Cc: Aaron Krister Johnson<aaron(a)akjmusic.com>;
> <linux-audio-user(a)lists.linuxaudio.org>;
> <linux-audio-dev(a)lists.linuxaudio.org>
> Subject: Re: [LAD] [LAU] like "qjackctl", but trimmed of all fat
>
> Hi Julien, Hey Aaron,
>
> read 'jack_lsp --help'.
>
> '-t' does not take any arguments; it just makes jack_lsp print the type.
> The filter-string only acts on the port-name (and BTW, not only on the
> beginning of the port-name; but it is case-sensitive: strstr()).
>
> Anyway, I can reproduce the problem: some jack-midi ports show up in the
> audio-tab of jackctl20100528b.py.
>
> jackctl20100528b checks for lowercase 'midi' in the port-name instead of
> looking up the port-type. So a2jmidi for example with an upper-case M
> "Midi.." ends up in the audio-panel.
>
> Your suggestion to parse the output of 'jack_lsp -t -c' is spot on.
> The (currently two) possible type strings are (indented by a tab):
>
> #define JACK_DEFAULT_AUDIO_TYPE "32 bit float mono audio"
> #define JACK_DEFAULT_MIDI_TYPE "8 bit raw midi"
>
> ..or, as you suggest, using the Python module for JACK may also simplify
> things and make jackctl easier to maintain.
>
> Cheers!
> robin
>
> PS. Oh, and which of qjackctl's features makes it 'fat'? it's not
> bloated in any way. I'd rather put it the other way 'round and say that
> jackctl is 'slim'. Sorry could not resist.
>
>
> On 05/29/2010 12:23 PM, Julien Claassen wrote:
>> Hello Aaron and Jack-Team!
>> There seems to be a bug in my jack_lsp. I just started a2jmidid and
>> j2amidi_bridge. When I do a jack_lsp, I get all the ports.
>> When I do: jack_lsp -t midi I only get one port from jack_midi_clock,
>> but none of the other ones.
>> When I type: jack_lsp -t, I can't see a difference between the
>> jack_midi_clock port and the others:
>> jack_lsp -t
>> [...]
>> a2j:Virtual Raw MIDI 0-0 [16] (capture): VirMIDI 0-0
>> 8 bit raw midi
>> a2j:Virtual Raw MIDI 0-0 [16] (playback): VirMIDI 0-0
>> 8 bit raw midi
>> a2j:Virtual Raw MIDI 0-1 [17] (capture): VirMIDI 0-1
>> 8 bit raw midi
>> a2j:Virtual Raw MIDI 0-1 [17] (playback): VirMIDI 0-1
>> 8 bit raw midi
>> a2j:Virtual Raw MIDI 0-2 [18] (capture): VirMIDI 0-2
>> 8 bit raw midi
>> a2j:Virtual Raw MIDI 0-2 [18] (playback): VirMIDI 0-2
>> 8 bit raw midi
>> a2j:Virtual Raw MIDI 0-3 [19] (capture): VirMIDI 0-3
>> 8 bit raw midi
>> a2j:Virtual Raw MIDI 0-3 [19] (playback): VirMIDI 0-3
>> 8 bit raw midi
>> a2j:M Audio Delta 1010LT [20] (capture): M Audio Delta 1010LT MIDI
>> 8 bit raw midi
>> a2j:M Audio Delta 1010LT [20] (playback): M Audio Delta 1010LT MIDI
>> 8 bit raw midi
>> j2a_bridge:playback
>> 8 bit raw midi
>> a2j:j2a_bridge [129] (capture): capture
>> 8 bit raw midi
>> Jack MIDI Clock:midi_out
>> 8 bit raw midi
>>
>> Or is the argument "midi" only seen as the start of a port_name?
>> If so, Aaron, you must rewrite this part of jackctl (I guess you do
>> what I described, because I get exactly your output). You should rewrite
>> it using:
>> jack_lsp -t
>> And then parse the type info underneath each name. I think simply
>> grepping for "audio" or "midi" will do. But I guess that, in the long
>> run, using the Python module for JACK will be more efficient and easier
>> to use.
>> Kindly yours
>> Julien
>>
>
--
Aaron Krister Johnson
http://www.akjmusic.com
http://www.untwelve.org
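The parsing approach Robin and Julien converge on above (read 'jack_lsp -t' and classify each port by the type string printed underneath its name, rather than matching "midi" in the port name) can be sketched in Python. The function names here are illustrative, not part of jackctl.py:

```python
import subprocess

def classify_ports(lsp_output):
    """Split 'jack_lsp -t' output into (audio, midi) port-name lists.

    Port names start in column 0; the type string for each port
    ("32 bit float mono audio" / "8 bit raw midi") is printed on the
    following line, indented (by a tab in jack_lsp's output).
    """
    audio, midi = [], []
    name = None
    for line in lsp_output.splitlines():
        if line and not line[0].isspace():
            name = line                      # a port name
        elif name is not None:
            port_type = line.strip().lower()
            if 'audio' in port_type:
                audio.append(name)
            elif 'midi' in port_type:
                midi.append(name)
            name = None
    return audio, midi

def list_ports():
    """Run jack_lsp -t and classify (requires a running JACK server)."""
    out = subprocess.run(['jack_lsp', '-t'],
                         capture_output=True, text=True, check=True).stdout
    return classify_ports(out)
```

This sidesteps the case-sensitivity trap entirely: a2j's "Midi" ports are classified by type, not by name.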
I've got yoshimi JACK session support behaving semi-sensibly with pyjacksm and
jsweeper, but there's one point I'm struggling with: JACK MIDI connections aren't
being stored by the session save -
<?xml version="1.0" ?>
<jacksession>
<jackclient cmdline="yoshimi -d ${SESSION_DIR} -u 4" infra="False" jackname="yoshimi-yoshimi" uuid="4">
<port name="yoshimi-yoshimi:Left" shortname="Left">
<conn dst="system:playback_1"/>
</port>
<port name="yoshimi-yoshimi:Right" shortname="Right">
<conn dst="system:playback_2"/>
</port>
<port name="yoshimi-yoshimi:In" shortname="In"/>
</jackclient>
</jacksession>
I figure it has to be some deficiency in my port registration, but that seems
straightforward enough -
midi.port = jack_port_register(jackClient, port_name, JACK_DEFAULT_MIDI_TYPE, JackPortIsInput, 0);
I'm not asking for in depth debugging/analysis support here (I'll figure it out
eventually!) but I'm hopeful someone might have an 'off the top of the head'
suggestion.
cheers, Cal
I just came to understand that SourceForge some time ago added a
setting for declaring whether your project uses encryption. This, it seems,
is to cater to US export regulations... paah.
What they did is set it, by default, to 'yes, project uses encryption',
resulting in a large part of the world not being able to access
content on SourceForge.
I won't go into the politics of this, this is not the right forum. It is done.
Though, as I'm sure very few of us use encryption, please log in to
your SourceForge account and change this setting to 'No' so your
project remains accessible to everybody.
Regards,
Robert
Well I did the switch: jackd here is now jackdmp.. and [almost]
everything works just like before.
The motivation for this was to benefit from the re-loadable backend
feature of jackdmp for two reasons:
- to be able to quickly switch between internal and external soundcards
- have JACK sessions survive suspend/resume cycles (using dummy backend)
After some initial testing I was quite enthusiastic. It looks very
smooth on the surface, but - of course - the devil is in the details:
Calling `jack_control sm` drops existing connections to system:* ports.
OK, they may be different, but here they're not. No problem: this can be
remedied with a simple shell script.
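The save/restore remedy mentioned above can be sketched (here in Python rather than shell, driving the jack_lsp and jack_connect command-line tools; the helper names are my own, not from Robin's scripts):

```python
import subprocess

def parse_connections(lsp_c_output):
    """Parse 'jack_lsp -c' output into (port, peer) pairs.

    Port names start in column 0; each connected peer is listed
    underneath, indented. Note that jack_lsp -c reports every
    connection twice, once from each end.
    """
    pairs = []
    port = None
    for line in lsp_c_output.splitlines():
        if line and not line[0].isspace():
            port = line
        elif port is not None and line.strip():
            pairs.append((port, line.strip()))
    return pairs

def save_connections():
    """Snapshot the current connection graph before switching backends."""
    out = subprocess.run(['jack_lsp', '-c'],
                         capture_output=True, text=True, check=True).stdout
    return parse_connections(out)

def restore_connections(pairs):
    """Re-make saved connections after the backend switch."""
    for a, b in pairs:
        # jack_connect exits non-zero for already-made connections
        # (each pair appears twice in the snapshot), so no check=True.
        subprocess.run(['jack_connect', a, b])
```

Usage: call save_connections() before `jack_control sm`, then restore_connections() on the saved list afterwards.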
But worse: both patchage (v0.4.4) & qjackctl (v0.3.6.22) go haywire
(either 100% CPU usage, disconnecting, or crashing) when replacing the
back-end while they're running. I have not yet found a pattern in the
apps' behaviour.
Qjackctl's issue can be worked around by stopping it before doing the
switch: 'dbus-send --system /org/rncbc/qjackctl org.rncbc.qjackctl.stop'
but re-starting it after the switch fails if the qjackctl setup does not
match the current active hardware (it tries to start a 2nd jackd
instead); otherwise it works just fine.
ardour2 disconnects if I switch directly between two ALSA backends;
however, going alsa,hw:0 -> dummy -> alsa,hw:1 works.
Clearly there are some issues remaining to be worked out.
The good news: both mplayer and alsa-plug have no problem with me
changing the jackd-backend (while retaining sample-rate and buffersize)
even while they're playing; so I'm good most of the time :)
The scripts I use are available at http://rg42.org/wiki/jack2contol
Has anyone else ventured down that road and has a similar setup running?
Can someone reproduce these problems?
Cheers!
robin
Good evening everyone!
I was wondering, is there a difference in opening a device like
plughw:0,0
and
plug:pcm.my_own_device
I've looked at the code of aplay, but small as this tool is, it still
contains a good deal of functionality, it seems. I've compared it to the
other code in question; there are differences. So now I think it's best to
ask rather than guess which is which, and which pieces of code and function
signatures are actually used for which purpose.
The application in question is chan_alsa from asterisk. So it's quite
limited: configured for 8kHz, 16bit (either se or le) and mono. They use a
fixed period size and connected buffer.
I found that plughw:0,0 actually lets me hear things, if not really
capture input myself, but I'll pounce on that when it's time.
I want to connect a PCM device wired to JACK to the asterisk software, yet
it either tells me that the argument is invalid (plug:pcm.name or plug:name),
or it gets read/write errors, or it simply crashes. :-(
I've seen that they use nonblocking mode and interleaved data.
So any suggestions on that?
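One possible route (an untested sketch, assuming the JACK PCM plugin from alsa-plugins is installed; the PCM names "jackpcm" and "asterisk" are placeholders) is to define the JACK PCM in ~/.asoundrc and wrap it in a 'plug' layer so ALSA converts chan_alsa's 8 kHz mono 16-bit stream:

```
pcm.jackpcm {
    # requires the JACK PCM plugin from the alsa-plugins package
    type jack
    playback_ports {
        0 system:playback_1
        1 system:playback_2
    }
    capture_ports {
        0 system:capture_1
        1 system:capture_2
    }
}

# 'plug' converts rate/format/channels (e.g. 8 kHz S16 mono from chan_alsa)
pcm.asterisk {
    type plug
    slave { pcm "jackpcm" }
}
```

chan_alsa would then be pointed at the device name "asterisk" rather than at "plug:..." directly.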
Warmly yours
Julien
--------
Music was my first love and it will be my last (John Miles)
======== FIND MY WEB-PROJECT AT: ========
http://ltsb.sourceforge.net
the Linux TextBased Studio guide
======= AND MY PERSONAL PAGES AT: =======
http://www.juliencoder.de
sorry...
no 'reply' buttons (thunderbird) or any other link (list archives) i
tried has worked so far, so i give up.
i don't have the patience for these things...
at least i tried..
i just wanted to let you know that there's something new out there, that
some of you might find interesting, that's all...
if there are any questions, ask in our discussion group instead
(http://groups.google.com/group/axonlib), or email me directly...
or in the discussion thread over at kvr-audio
(http://www.kvraudio.com/forum/viewtopic.php?t=288888)
- ccernn
hello,
to clarify the current format capabilities of the library:
the library for now can be used to create:
- vst plugins for linux and windows (so / dll)
- standalone executables for linux and windows ('elf' exe/ 'pe' exe).
the standalone executables are not plugin host containers, but they can use
most of the library features such as gui widgets, drawing capabilities,
image decoding, file i/o etc. such executables can be useful, for example,
for debugging a plugin gui, so that no host is needed to load it into.
also the standalone executables can be created with no plugin ideas in mind
at all. an example of that would be a gui plotting program that can write
the resulting data from a buffer to the hard-drive.
standalone executables may eventually have some sound processing
capabilities, for example: reading a wavetable from disk and sending the
signal to the sound interface. also some primitive host capabilities for
plugins (i personally do not see much use for that).
on the plugin side there are a lot of possibilities and we are eventually
going to expand to more plugin formats like ladspa, other linux formats and
possibly au (while adding some osx platform code).
the general idea for the plugin formats is to have *unified plugin syntax
that can be compiled easily for different formats by simply switching a
compiler flag e.g. -DAX_FORMAT_VST / -DAX_FORMAT_LADSPA.
*or at least unified as much as possible ;)
of course any suggestions and comments are welcome and appreciated in the
same notion.
lubomir
i'm new to lists like this, and don't know how to reply to specific
posts in a discussion.. (teach me?)
some clarification is needed for axonlib (earlier post), i guess...
yes, axonlib is:
>> "standalone/vst-plugin library for linux/win32"
but the plan/idea is to expand the 'vst' part to other plugin formats
too, linux things like ladspa, dssi, (lv2), and a future mac port is
possible. support for the various plugin/binary formats of course
depends on the platform you compile for. there's no point in compiling a
lv2 for windows, for example... vst and standalone binaries (exe) are
probably the most 'universal', since they exist for all three major
platforms..
and it's also good to be able to compile in one batch, (from a script)
to all supported formats and platforms, with no changes to the source
code...
and
>> " But the standalone part doesn't quite make sense"
some things can be nice to have available as a standalone binary on
linux, for example to use with jack.. ( but we're missing some parts for
standalones yet, audio and midi i/o being some of them :-/ haven't
decided on which lib/backend yet, possibly portaudio/midi )
and, it helps development a lot when you can compile to an exe/binary
and run it from within an ide or something, and test various small
changes almost instantly, rather than needing to go through the often
lengthy process of starting up a host and loading the plugin there,
- ccernn
aarg, answering in the list is confusing!
thunderbird gives me just errors when trying to click on any "reply"
links in the emails or the list archives.
please, someone tell me how this is supposed to work...
---
I think it is a library for building audio processors - you write your
code to do whatever you want with audio and/or MIDI, and make GUI using
library functions. After that, with one parameter, you choose if you
want to compile it as a VST plugin or as a standalone app for Linux or
Windows.
Cheers!
Igor
---
yeah, that is more or less correct..
audio processor, plugin, different names for kind of the same thing...
and if everything goes as planned, you can later compile to a dssi or
ladspa or whatever, if we manage to put the various (plugin-)format
abstractions in place. i've never done any code for these formats,
except for some ladspa testing, and i don't use any audio host that
supports these plugin formats, ... so we might need some help with this
part, or it might take some time to get done (and bugfixed)..
same thing with the mac support...
everything has been abstracted into separate layers for both the
plugin/processor format (standalone binary is considered just one of
these formats) and the platform (currently windows/linux). there's a
bunch of #ifdef AX_FORMAT_VST and #ifdef AX_LINUX, etc, inside the code,
so you just have to define a couple of these for the format and platform
you want to compile to (probably in a makefile, or in a compile script),
and most things are 'automagically' handled for you...
you write your plugin (or whatever) with a specific set of
functions/methods from the base classes, and via some #ifdefs, etc, the
correct implementations for your platform/format of choice are
'dragged in' and compiled into the resulting binary or shared library..
and sorry if my english isn't too perfect (i'm norwegian)... if something
sounds confusing, just (continue to) ask..
- ccernn
[forwarding here, hope this is not too off topic or unwelcome]
==
Software Developer: Audio and Digital Music
Centre for Digital Music
Queen Mary, University of London
School of Electronic Engineering and Computer Science
The Centre for Digital Music (C4DM) at Queen Mary, University of
London, is seeking an experienced Software Developer with a background
and knowledge in Audio and Digital Music, to work on a new
EPSRC-funded project "Sustainable Software for Digital Music and
Audio Research". The aim of this project is to provide a Service to
support the development and use of software, data and metadata to
enable high quality research in the Audio and Digital Music research
community.
The postholder will undertake a range of software development
activities in this project, including: developing cross-platform
robust engineered software from research prototype software; tailoring
or adapting existing research software to make it usable by other
researchers; creating and maintaining software and data repositories;
providing documentation, training and advice on the use of developed
software; and engagement and outreach to the research community and
beyond.
The C4DM, part of the School of Electronic Engineering and Computer
Science, is a world-leading multidisciplinary research group in the
field of Digital Music & Audio Technology. C4DM already develops
robust software and technologies for music and audio research,
including Sonic Visualiser (SV), a popular open source cross-platform
framework for analysis of music and audio. Details about the School
can be found at www.eecs.qmul.ac.uk and about the Centre for Digital
Music at www.elec.qmul.ac.uk/digitalmusic
The post is full time and for 40 months (starting in July 2010 or as
soon as possible thereafter). Starting salary will be in the range
£27,913 - £33,659 per annum inclusive of London Allowance. Benefits
include 30 days annual leave, final salary pension scheme and
interest-free season ticket loan.
Candidates must be able to demonstrate their eligibility to work in
the UK in accordance with the Immigration, Asylum and Nationality Act
2006. Where required this may include entry clearance or continued
leave to remain under the Points Based Immigration Scheme.
Informal enquiries should be addressed to the Principal Investigator,
Prof Mark Plumbley at mark.plumbley(a)elec.qmul.ac.uk
Further details and an application form can be found at:
www.hr.qmul.ac.uk/vacancies
(http://webapps.qmul.ac.uk/hr/vacancies/jobs.php?id=1815)
To apply for the Software Developer position, please email the
following documents to Ms Julie Macdonald at
applications(a)eecs.qmul.ac.uk: Completed application form quoting
10212/CE; a CV listing any publications and a statement describing
your previous software development experience, outlining the relevance
to this project. Postal applications should be sent to Ms Julie
Macdonald, School of EECS, Queen Mary University of London, Mile End
Road, London, E1 4NS
The closing date for applications is 12 noon on 25 June 2010.
Interviews are expected to be held on 7 July 2010.
If you have not heard from us by 12 July 2010 then you should assume
that you have not been shortlisted on this occasion.
Valuing Diversity & Committed to Equality