Is there any good reason why the B-format port groups refer
to the 2nd-order horizontal components as R and S?
--
FA
Laboratorio di Acustica ed Elettroacustica
Parma, Italia
Lascia la spina, cogli la rosa.
Hello folks!
I'm sorry for posting here, but I hope someone can help me. I'm just
compiling a bit of code and gcc gives a strange error which I can't fix.
Code snippet:
[...]
if (ioctl (vcsa_fd, KBD_SNIFF_SET, &set) < 0)
  {
    sbl_log ("no kernel support for keyboard sniffing\n");
    kbd_sniffing = 0;
    return 0;
  }
else { ... }
/* end of snippet */
The reproducible compiler error complains about the if statement:
kbd.h:96: error: expected ')' before '[' token
It does the same for two other ioctl statements, one included in an if, the
other in an assignment like:
value = ioctl (...);
My gcc is: gcc 4.2.3 (Debian 4.2.3-3)
Please: can someone help me here? It's so frustrating...
Kindest regards
Julien
--------
Music was my first love and it will be my last (John Miles)
======== FIND MY WEB-PROJECT AT: ========
http://ltsb.sourceforge.net
the Linux TextBased Studio guide
======= AND MY PERSONAL PAGES AT: =======
http://www.juliencoder.de
> On Sat, May 3, 2008 03:27, salsaman(a)xs4all.nl wrote:
> > Hi all,
> > after some long and protracted discussion with the jack developers on
> > their mailing list, I have come to the conclusion that they will
> probably
> > never accept the videojack patches into the main jack trunk. I also made
> > some suggestions to make jack startup easier for non-technical users,
> and
> > these were rejected. The attitude seems to be that the goal of jack is
> > simply to make a high quality server for audio, and if this means adding
> > complexity to the startup, requiring kernel patches, etc. then so be it.
> >
> > Some of the jack developers suggested to me that I look into using
> > pulseaudio (http://www.pulseaudio.org/) since it seems more suited to my
> > needs. I haven't had a chance to look into it deeply yet, but it looks
> > like an interesting project.
> >
> > As a result of all this, here is what I suggest:
> >
> > - we maintain our own fork of jack based on the current videojack code
> and
> > clients and backport any fixes from the main jack trunk as well as we
> can.
> > For this it would be nice to have a CVS/SVN set up.
> >
> > - use the vjack-devel mailing list from BEK to discuss development of
> this
> > branch
> >
> > - look into pulseaudio, and see if it might be possible to make a
> > "pulsevideo" - if it seems possible and wise, approach the pulseaudio
> > developers and see if they are more open to accepting non-audio patches.
> >
> > - discuss the pros and cons of writing something like videojack from
> > scratch - basically, what is needed - a server that provides timing and
> > calls callbacks and uses shared memory to pass framebuffers; client code
> > which connects to the server and sets up the callbacks; control
> interfaces
> > to list, connect, and disconnect the clients
> > - or maybe it's better to make a stripped down version of vjack that
> only
> > handles video ?
> > - think about whether we want to be able to synch with audio from jack
> > and/or pulseaudio.
> >
> > Please discuss...
> >
> >
> > Salsaman.
> > http://lives.sf.net
> >
> >
> >
> > _______________________________________________
> > piksel mailing list
> > piksel(a)bek.no
> > https://www.bek.no/mailman/listinfo/piksel
> > http://www.piksel.no
> >
>
>
>
> Well, since nobody else seems to have an opinion on this, I guess I will
> go ahead with what I suggested.
>
> Here is what I plan to do:
>
> 1) register a project on sourceforge (vjack.sf.net ?)
> 2) check the current videojack code in to subversion; set up basic web
> page for vjack (this can later be pointed to jackvideo.org)
>
> 3) check the changelog for jack and backport any important fixes
> 4) trim down the codebase - remove all drivers except the "dummy" driver
> 5) change all occurrences of "jack" to "vjack", e.g. in function names and
> enums; create libvjack
> 6) update all clients "jack" -> "vjack"
> 7) make the changes necessary for vjack - make rate a float; change the
> default rc file to ~/.vjackdrc and allow an environment variable to
> override the default location; change the default server name from
> "default" to "video0"; allow vjack_connect, vjack_disconnect and
> vjack_lsp to specify a server name
> 8) create basic documentation and HOWTOs
>
> Since this is quite a lot to do, are there any volunteers to help with
> this ?
>
>
>
> Salsaman.
> http://lives.sf.net
>
>
>
> --
> "We are called to be architects of the future, not its victims."
> - R. Buckminster Fuller
>
> _______________________________________________
> piksel mailing list
> piksel(a)bek.no
> https://www.bek.no/mailman/listinfo/piksel
> http://www.piksel.no
Although I obviously do not know the whole story, this is highly
disturbing to me. Having video sync within Jack (like the MIDI sync that
is apparently being worked on) would allow us to do professional,
sample-accurate multimedia production. The alternative suggested above
simply reinforces the problem JACK itself faces with respect to the
myriad of other (mostly subpar) solutions. Given that we have a lot of
contributors in our midst who apparently are unable to find a common
language (and so go on to reinvent the wheel with their own, often
incomplete, implementations), it would be nice to see the LAD community
(especially considering how small it is) not propagate this unfortunate
predicament.
Once again, please note that the above is my gut reaction, as I obviously
am not familiar with the innards of this matter.
Best wishes,
ico
On Wed, May 7, 2008 22:04, Stéphane Letz wrote:
>
> Le 7 mai 08 à 21:37, Juuso Alasuutari a écrit :
>
>> Stéphane Letz wrote:
>>> Video in jack1 won't happen, for several reasons that have been
>>> explained before: we want to fix and release jack1 soon, and video
>>> in jack is too big a change to be integrated in the current state
>>> of the proposed patch.
>>> The future of jack is now jack2, based on the new jackdmp
>>> implementation (http://www.grame.fr/~letz/jackdmp.html). A lot of
>>> work has already been done in this code base, which is now API
>>> equivalent to jack1. New features are already being worked on, like
>>> the DBUS-based control (developed in the "control" branch) and the
>>> NetJack rework (developed in the "network" branch).
>>> I think a combined "video + audio in a unique server" approach is
>>> perfectly possible: this would require having two separate graphs
>>> for audio and video, each running at its own rate. Video and audio
>>> would be done in different callbacks and thus handled in different
>>> threads (probably running at two different priorities so that audio
>>> can "interrupt" video). Obviously doing that the right way would
>>> require a bit of work, but it is probably much easier to design and
>>> implement in the jackd2 codebase.
>>> Thus I think a better overall approach to avoid a "video jack fork"
>>> is to work in this direction, possibly by implementing video jack
>>> with the "separated server" idea first (since it is easier to
>>> implement). This could be started right away in a jack2 branch.
>>
>> I'll throw in my 2 Euro cents.
>>
>> If the VideoJACK crowd feel that JACK2 development is moving too
>> slowly and decide to continue with their fork, may I suggest that we
>> all still discuss and draft a proper video API together? If a fork
>> happens for practical reasons, it would be best to make sure
>> that switching video software to use JACK2 later on will be as
>> painless as possible.
>>
>> Technical issues aside, I wish that those affiliated with VideoJACK
>> do not feel that their needs are neglected by the JACK developers.
>> I hope that the recent discussion has proved that people in this
>> camp are willing to improve JACK in this respect. Perhaps we could
>> move on and try to find more common ground?
>>
>> Juuso
>
> Yes sure.
>
> Where is the latest state of the video patch for jack? I can have a
> look and see how easy/difficult it would be to implement that in a
> jackdmp/jack2 branch.
>
> Stephane
>
> _______________________________________________
> piksel mailing list
> piksel(a)bek.no
> https://www.bek.no/mailman/listinfo/piksel
> http://www.piksel.no
>
>
Please forward to the LAD mailing list, since I am not subscribed there
and am unable to post.
I will just describe the current API briefly. It would be nice if
everybody would agree that this is OK; otherwise, please can we resolve
this quickly - the project is going slowly enough as it is.
----------------------------------------------------------
The videojack patch adds a new port type - video.
#define JACK_VIDEO_PORT_TYPE 2
There is no mixing function for video, since there are dozens or maybe
hundreds of different ways to mix two or more video frames together; all
mixing is done by a host application.
Therefore each video buffer can have only one source, but may have
multiple sinks.
The size of the video buffer is initially set to 0x0, and the colorspace
is set to the default (JACK_VIDEO_COLORSPACE_RGBA32).
Only the source client may set the frame size, which it may do at any
time; it may also change the colorspace. Although only RGBA32 is
officially supported, other colorspaces may be used in private networks.
The source client sets width and height using:
int jack_video_set_width_and_height (jack_client_t *client, jack_port_t
*port, uint32_t width, uint32_t height)
and may set the colorspace with:
int jack_video_set_colorspace(jack_client_t *client, jack_port_t *port,
jack_video_colorspace_t cspace)
These functions may only be called by the source client, and only after
it has been activated.
The functions also set the buffer size to 4*width*height.
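As a sketch of the sizing rule just described (my own helper for illustration, not part of the patch): RGBA32 means 4 bytes per pixel, so a port starts with a zero-sized buffer at 0x0 and grows once the source sets its dimensions:

```c
#include <stdint.h>
#include <stddef.h>

/* 4 bytes per pixel for RGBA32; a port starts at 0x0, i.e. a
 * zero-sized buffer, until the source client calls
 * jack_video_set_width_and_height(). */
static size_t video_buffer_size(uint32_t width, uint32_t height)
{
    return (size_t)4 * width * height;
}
```

For example, a 640x480 frame needs a 1,228,800-byte buffer.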
Likewise, there are the functions:
uint32_t jack_video_get_width(jack_client_t *client, jack_port_t *port)
uint32_t jack_video_get_height(jack_client_t *client, jack_port_t *port)
jack_video_colorspace_t jack_video_get_colorspace(jack_client_t *client,
jack_port_t *port)
A client may set callbacks for these:
int jack_set_video_colorspace_callback (jack_client_t *client,
JackVideoColorspaceCallback
video_colorspace_callback,
void *arg);
int jack_set_video_size_callback (jack_client_t *client,
JackVideoSizeCallback
video_size_callback,
void *arg);
In practice the client would probably only get a buffer-size-change
message, since jack can only send one message per change (this seems to
be a limitation in jackd).
There is also a convenience function:
void* jack_video_get_framebuffer(jack_port_t* video_port)
which just does:
return jack_port_get_buffer (video_port, 1);
And that's it! This is enough to get video working in jack.
There are a couple of changes I plan to make, which are just
nice-to-have:
- make rate a float (for the video clock)
- use an environment variable to define the location of ~/.jackdrc (this
will go away if video and audio clocks can be combined in one server)
Also, I will patch jack_connect, jack_disconnect, and jack_lsp to take a
server name. This is necessary right now when running two servers (video
and audio), but it seems like a good idea to keep it in anyway.
And as suggested, jack_set_video_process_callback() will just call
jack_set_process_callback() (for now). Later it could link to a video
clock in a combined server.
There was a further suggestion that the input client could set a "frame
count" which can be read by the output clients. This should also be
feasible.
--------------------------------------------------------
The current code is at:
http://bekstation.bek.no/piksel/pikseldev/jack-vjack5.tar.bz2
The startup parameters I use are:
/usr/bin/jackd -p 10 -t 40 -d dummy -r 25 -p 1 -P 0 -C 0
Note that the last -p 1 is necessary; otherwise you get a floating point
exception. All other parameters can be adjusted as necessary.
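For readers new to jackd's dummy backend, here is the same command line annotated. The flag meanings are my reading of the stock jackd options, so double-check against `jackd --help` for your build:

```shell
#   -p 10     server option: limit the port count to 10
#   -t 40     client timeout, in milliseconds
#   -d dummy  use the dummy backend (no audio hardware involved)
#   -r 25     backend "sample rate" - here the video frame rate, 25 fps
#   -p 1      one frame per period (required, per the note above, to
#             avoid the floating point exception)
#   -P 0 -C 0 no playback or capture ports from the backend itself
/usr/bin/jackd -p 10 -t 40 -d dummy -r 25 -p 1 -P 0 -C 0
```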
There are also various clients - jack_video_test_generator and
jack_video_output are shipped by default.
On the piksel site there is a hacked version of camorama:
http://bekstation.bek.no/piksel/pikseldev/camorama-vjack.tar.bz2
and ekiga:
http://www.xs4all.nl/~salsaman/ekiga-2.0.12-hacked-for-vjack.tar.bz2
LiVES also supports vjack in and vjack out. You must compile it with
--enable-vjack. The vjack output client can be selected in
Preferences/Playback. The input client (generator) must be bound to an
effect key (VJ/Realtime effect mapping) and then activated.
Regards,
Salsaman.
_______________________________________________
Jack-Devel mailing list
Jack-Devel(a)lists.jackaudio.org
http://lists.jackaudio.org/listinfo.cgi/jack-devel-jackaudio.org
> Well, this has been discussed to death on the jack-devel lists. I can
> see that from an audio developer's point of view it would be nice to
> have video within the same server as audio. However, there are
> fundamental differences between video and audio, which make this in my
> mind impractical.
> Firstly, there is the problem of latency - for audio, generally the aim
> is to have a latency < 4 ms. For video - since we are dealing with much
> larger chunks of data - a latency an order of magnitude greater than
> this is usually acceptable.
> Second, the timing is different. For audio you generally have a 1024
> sample buffer and a rate of 44.1 kHz or 48 kHz. Video usually requires
> something like 25 fps. So you can either have two clocks, or you can
> split the video into chunks (ugh); both solutions have problems.
> If you can solve these problems, then there is absolutely nothing
> stopping you running video and audio in the same server (video simply
> adds a new port type).
> Regards, Salsaman.
Video in jack1 won't happen, for several reasons that have been
explained before: we want to fix and release jack1 soon, and video in
jack is too big a change to be integrated in the current state of the
proposed patch.
The future of jack is now jack2, based on the new jackdmp implementation
(http://www.grame.fr/~letz/jackdmp.html). A lot of work has already been
done in this code base, which is now API equivalent to jack1. New
features are already being worked on, like the DBUS-based control
(developed in the "control" branch) and the NetJack rework (developed in
the "network" branch).
I think a combined "video + audio in a unique server" approach is
perfectly possible: this would require having two separate graphs for
audio and video, each running at its own rate. Video and audio would be
done in different callbacks and thus handled in different threads
(probably running at two different priorities so that audio can
"interrupt" video). Obviously doing that the right way would require a
bit of work, but it is probably much easier to design and implement in
the jackd2 codebase.
Thus I think a better overall approach to avoid a "video jack fork" is
to work in this direction, possibly by implementing video jack with the
"separated server" idea first (since it is easier to implement). This
could be started right away in a jack2 branch.
What do you think?
Stephane
2008/5/4, Paul Davis <paul(a)linuxaudiosystems.com>:
> the thing to do is to
> fire up aplay with a long audio file, then put the laptop into suspend.
> if it comes back from suspend with working playback, its a JACK issue.
> otherwise, its an ALSA card-specific driver issue.
>
>
OK,
I now managed to play a wav through alsaplayer (ALSA output driver).
Result:
With the internal card (Intel hd_audio) resume IS working.
With the external usb card (ua-25) resume does NOT work.
no resume from hibernate-ram (playback stops, alsaplayer must be
restarted) with:
$ alsaplayer -r -oalsa -d default:CARD=UA25 ultra_test4.wav
Successful resume from hibernate-ram (playback continues) with:
$ alsaplayer -r -oalsa -d default:CARD=Intel ultra_test4.wav
The usb-card is powered off on hibernation.
A presumption: maybe it doesn't get enough time to power back up.
2008/5/4, Justin Smith <noisesmith(a)gmail.com>:
> Some sound cards have suspend reset issues, could this be part of the problem?
>
I'm using a usb-card (edirol ua-25) as jack-output, if that matters.
Setting jack timeout to 5000 msec didn't help it.
Jack is running in RT mode.
Make jack and jackified apps work with hibernate-ram.
Reasons:
- It's great coming back to my/your pc and having it ready in seconds
- saves energy
- saves money
- might help nature
- better system integration
- no need to close jack and all jack apps when hibernating
- improved acceptance as THE audio server
cheers