Hello folks!
I'm sorry for posting here, but I hope someone can help me. I'm just
compiling a bit of code and gcc gives a strange error which I can't fix.
Code snippet:
[...]
if (ioctl (vcsa_fd, KBD_SNIFF_SET, &set) < 0)
  {
    sbl_log ("no kernel support for keyboard sniffing\n");
    kbd_sniffing = 0;
    return 0;
  }
else { ... }
/* end of snippet */
The reproducible compiler error complains about the if statement:
kbd.h:96: error: expected ')' before '[' token
It does the same for two other ioctl statements, one included in an if, the
other in an assignment like:
value = ioctl (...);
My gcc is: gcc 4.2.3 (Debian 4.2.3-3)
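For reference, gcc is pointing at kbd.h line 96, not at the ioctl calls themselves: it names the line where parsing broke inside the included header. A common cause of "expected ')' before '[' token" in a header is a macro, defined earlier, that collides with an identifier in one of the header's prototypes. A minimal sketch of that failure mode and its fix, with invented names (nothing here is taken from sbl or the real kbd.h):

```c
#include <assert.h>

/* Hypothetical illustration: "expected ')' before '[' token" inside a
 * header usually means an identifier in a prototype was macro-expanded
 * before gcc parsed it. */
#define sniff_buf bogus[2]   /* imagine a stray macro like this ... */
#undef sniff_buf             /* ... #undef-ing it (or renaming the header's
                              * parameter) lets the prototype parse again */

/* After the #undef, a prototype using that name parses cleanly: */
static int count_keys(const char sniff_buf[], int len)
{
    int pressed = 0;
    for (int i = 0; i < len; i++)
        if (sniff_buf[i])
            pressed++;
    return pressed;
}
```

If that is what is happening, reordering the #includes, #undef-ing the offending macro, or renaming the parameter in the header usually clears the error.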
Please: can someone help me here? It's so frustrating...
Kindest regards
Julien
--------
Music was my first love and it will be my last (John Miles)
======== FIND MY WEB-PROJECT AT: ========
http://ltsb.sourceforge.net
the Linux TextBased Studio guide
======= AND MY PERSONAL PAGES AT: =======
http://www.juliencoder.de
> On Sat, May 3, 2008 03:27, salsaman(a)xs4all.nl wrote:
> > Hi all,
> > after some long and protracted discussion with the jack developers on
> > their mailing list, I have come to the conclusion that they will probably
> > never accept the videojack patches into the main jack trunk. I also made
> > some suggestions to make jack startup easier for non-technical users, and
> > these were rejected. The attitude seems to be that the goal of jack is
> > simply to make a high quality server for audio, and if this means adding
> > complexity to the startup, requiring kernel patches, etc. then so be it.
> >
> > Some of the jack developers suggested to me that I look into using
> > pulseaudio (http://www.pulseaudio.org/) since it seems more suited to my
> > needs. I haven't had a chance to look into it deeply yet, but it looks
> > like an interesting project.
> >
> > As a result of all this, here is what I suggest:
> >
> > - we maintain our own fork of jack based on the current videojack code and
> > clients and backport any fixes from the main jack trunk as well as we can.
> > For this it would be nice to have a CVS/SVN set up.
> >
> > - use the vjack-devel mailing list from BEK to discuss development of this
> > branch
> >
> > - look into pulseaudio, and see if it might be possible to make a
> > "pulsevideo" - if it seems possible and wise, approach the pulseaudio
> > developers and see if they are more open to accepting non-audio patches.
> >
> > - discuss the pros and cons of writing something like videojack from
> > scratch - basically, what is needed - a server that provides timing and
> > calls callbacks and uses shared memory to pass framebuffers; client code
> > which connects to the server and sets up the callbacks; control interfaces
> > to list, connect, and disconnect the clients
> > - or maybe it's better to make a stripped-down version of vjack that only
> > handles video?
> > - think about whether we want to be able to synch with audio from jack
> > and/or pulseaudio.
> >
> > Please discuss...
> >
> >
> > Salsaman.
> > http://lives.sf.net
> >
> >
> >
> > _______________________________________________
> > piksel mailing list
> > piksel(a)bek.no
> > https://www.bek.no/mailman/listinfo/piksel
> > http://www.piksel.no
> >
>
>
>
> Well, since nobody else seems to have an opinion on this, I guess I will
> go ahead with what I suggested.
>
> Here is what I plan to do:
>
> 1) register a project on sourceforge (vjack.sf.net ?)
> 2) check the current videojack code in to subversion; set up basic web
> page for vjack (this can later be pointed to jackvideo.org)
>
> 3) check the changelog for jack and backport any important fixes
> 4) trim down the codebase - remove all drivers except the "dummy" driver
> 5) change all occurrences of "jack" to "vjack", e.g. in function names and
> enums; create libvjack
> 6) update all clients "jack" -> "vjack"
> 7) make the changes necessary for vjack - make rate a float; change
> default rc file to ~/.vjackdrc and allow an environment parameter to
> override the default location; change default server name from "default"
> to "video0"; allow vjack_connect, vjack_disconnect and vjack_lsp to
> specify a server name
> 8) create basic documentation and HOWTOs
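The rc-file lookup in step 7 could be as simple as the following sketch. The variable name VJACK_RC and the helper are invented for illustration; the post does not specify them:

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Resolve the rc-file path: an environment variable (hypothetical name
 * VJACK_RC) overrides the default ~/.vjackdrc location. */
static void vjack_rc_path(char *out, size_t outlen)
{
    const char *env = getenv("VJACK_RC");
    if (env && *env) {
        snprintf(out, outlen, "%s", env);   /* explicit override wins */
    } else {
        const char *home = getenv("HOME");
        snprintf(out, outlen, "%s/.vjackdrc", home ? home : ".");
    }
}
```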
>
> Since this is quite a lot to do, are there any volunteers to help with
> this ?
>
>
>
> Salsaman.
> http://lives.sf.net
>
>
>
> --
> "We are called to be architects of the future, not its victims."
> - R. Buckminster Fuller
>
Although I obviously do not know the whole story, this to me is highly
disturbing. Having video sync (like MIDI sync that is apparently being
worked on) within Jack would allow us to do professional sample-accurate
multimedia production. The alternative suggested above simply reinforces the
problem JACK itself faces with respect to the myriad of other (mostly
subpar) solutions. Given that we have a lot of contributors in our midst who
apparently are unable to find a common language (and so go on to reinvent
the wheel with their own, often incomplete, implementations), it would be
nice to see the LAD community (especially considering how small it is) not
propagate this unfortunate predicament.
Once again, please note that the aforesaid is my gut-reaction as I obviously
am not familiar with the innards of this matter.
Best wishes,
ico
> Well, this has been discussed to death on the jack-devel lists. I can see
> that from an audio developer's point of view, it would be nice to have
> video within the same server as audio.
> However, there are fundamental differences between video and audio, which
> make this in my mind impractical.
> Firstly, there is the problem of latency - for audio, generally the aim is
> to have a latency < 4 ms. For video - since we are dealing with much
> larger chunks of data - a latency an order of magnitude greater than this
> is usually acceptable.
> Second, the timing is different. For audio you generally have a
> 1024-sample buffer and a rate of 44.1 kHz or 48 kHz. Video usually
> requires something like 25 fps.
> So you can either have two clocks, or you can split the video into chunks
> (ugh); both solutions have problems.
> If you can solve these problems, then there is absolutely nothing stopping
> you running video and audio in the same server (video simply adds a new
> port type).
> Regards, Salsaman.
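The clock mismatch can be made concrete with a little arithmetic, using the figures quoted above (a 1024-sample buffer at 48 kHz versus 25 fps video); the helper functions are mine, not JACK API:

```c
#include <assert.h>

/* Period lengths in microseconds for the quoted figures. */
static long audio_period_us(long frames, long rate)
{
    return frames * 1000000L / rate;   /* 1024 / 48000 s ~= 21333 us */
}

static long video_period_us(long fps)
{
    return 1000000L / fps;             /* 1 / 25 s = 40000 us */
}

/* 40000 us is not a whole multiple of 21333 us, so one server clock
 * cannot drive both graphs: either run two clocks, or slice each video
 * frame into audio-period-sized chunks. */
```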
Video in jack1 won't happen, for several reasons that can be explained
again: we want to fix and release jack1 soon, and video in jack is too big
a change to be integrated in the current state of the proposed patch.
The future of jack is now jack2, based on the new jackdmp implementation
(http://www.grame.fr/~letz/jackdmp.html). A lot of work has already been
done in this code base, which is now API-equivalent to jack1. New features
are already being worked on, such as the DBUS-based control (developed in
the "control" branch) and the NetJack rework (developed in the "network"
branch).
I think a combined "video + audio in a unique server" approach is perfectly
possible: this would require having two separate graphs for audio and
video, each running at its own rate. Video and audio would be done in
different callbacks and thus handled in different threads (probably running
at two different priorities, so that audio can "interrupt" video).
Obviously doing that the right way would require a bit of work, but it is
probably much easier to design and implement in the jackd2 codebase.
Thus I think a better overall approach to avoid a "video jack fork" is to
work in this direction, possibly by implementing video jack with the
"separate server" idea first (since it is easier to implement). This could
be started right away in a jack2 branch.
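As a rough illustration of the two-graph idea, here is a toy C sketch driven by simulated time. Real jack2 would use two real-time threads at different priorities; here the audio graph simply runs first whenever both are due, and all names and numbers are hypothetical:

```c
#include <assert.h>

/* Two independent graphs, each with its own period, driven by one
 * simulated clock. When both are due at the same instant, audio runs
 * first, mimicking the higher real-time thread priority. */
struct graph {
    long period_us;   /* callback interval in simulated microseconds */
    long next_due;    /* next wakeup time */
    int  calls;       /* number of times the callback has run */
};

static void run_graphs(struct graph *audio, struct graph *video, long end_us)
{
    long now = 0;
    while (now < end_us) {
        long next = audio->next_due < video->next_due ? audio->next_due
                                                      : video->next_due;
        if (next >= end_us)
            break;
        now = next;
        if (audio->next_due == now) {   /* audio "interrupts" video */
            audio->calls++;
            audio->next_due += audio->period_us;
        }
        if (video->next_due == now) {
            video->calls++;
            video->next_due += video->period_us;
        }
    }
}
```

Over one simulated second with the periods computed earlier, the audio graph fires 47 times and the video graph 25 times, each on its own schedule.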
What do you think?
Stephane
2008/5/4, Paul Davis <paul(a)linuxaudiosystems.com>:
> the thing to do is to
> fire up aplay with a long audio file, then put the laptop into suspend.
> if it comes back from suspend with working playback, it's a JACK issue.
> otherwise, it's an ALSA card-specific driver issue.
>
>
OK, I have now managed to play a wav through alsaplayer (alsa output
driver). Result:
With the internal card (Intel hd_audio) resume IS working.
With the external usb card (ua-25) resume does NOT work.
No resume from hibernate-ram (playback stops, alsaplayer must be
restarted) with:
$ alsaplayer -r -oalsa -d default:CARD=UA25 ultra_test4.wav
Successful resume from hibernate-ram (playback continues) with:
$ alsaplayer -r -oalsa -d default:CARD=Intel ultra_test4.wav
The usb card is powered off on hibernation.
A presumption: maybe it doesn't get enough time to power back on.
2008/5/4, Justin Smith <noisesmith(a)gmail.com>:
> Some sound cards have suspend reset issues, could this be part of the problem?
>
I'm using a usb-card (edirol ua-25) as jack-output, if that matters.
Setting jack timeout to 5000 msec didn't help it.
Jack is running in RT mode.
Goal: make jack and jackified apps work with hibernate-ram.
Reasons:
- It's great coming back to my/your pc and having it ready in seconds
- saves energy
- saves money
- might help nature
- better system integration
- no need to close jack and all jack apps when hibernating
- improved acceptance as THE audio server
cheers
I'm sure the linux audio community has something to contribute to
this initiative!
Begin forwarded message:
> From: "Graham Coleman" <gcoleman(a)iua.upf.edu>
> Date: 25. April 2008 18:01:28 GMT+02:00
> Subject: DAFxTRa 2008: announcement and call for participation
>
> Apologies for multiple postings. We will attempt a public evaluation
> of digital audio effects. We encourage your feedback and
> participation.
>
> --
> Salutations!
>
> We announce and call for your participation in a cross-community
> evaluation of audio effects.
>
> DAFxTRa 2008 (DAFx Transformation RAting) is a new initiative promoted by
> MTG-UPF and DAFx-08 aimed at evaluating and comparing algorithms for
> audio effects.
> Our goal is to have the main evaluation in September 2008 during DAFx-08
> (http://www.acoustics.hut.fi/dafx08/), but to make it happen we need the
> involvement of the audio effects research community.
>
> We do not know of any major initiative to compare audio effects
> algorithms, which might be related to the difficulty of the
> evaluation task. It is hard because it requires standardized
> procedures for carefully controlled subjective experiments. But we
> believe that now is the time to try it, so we can all learn from the
> process and in turn improve our audio effects algorithms.
>
> Inspired by the success of MIREX in the evaluation of Music Information
> Retrieval algorithms, and having acquired some experience by organizing
> the audio description contest at ISMIR 2004-Barcelona, we want to promote
> a similar initiative for the digital audio effects community.
>
> However, most audio effects tasks do not afford an objective
> measure of ground truth, as in MIR, and thus we have to define
> specific evaluation strategies. We want to do this by involving the
> developers of algorithms interested in participating and by specifying
> the evaluation process with them. The participants should learn from the
> process, and the whole DAFx community should benefit from it.
>
> Our initial aims are the following:
> 1. Propose several audio effects categories. Initial proposal:
> time-scaling, pitch-shifting, source separation, morphing, distortion
> effects and
> deconstruction.
> 2. Select the categories for which there is a sufficient number of
> participants.
> 3. Select sounds from Freesound (http://freesound.iua.upf.edu) to be
> used as test sounds for each category.
> 4. Define the evaluation procedure for each category.
> 5. Ask the participants to submit the transformed sounds (not the
> algorithms).
> 6. Perform the evaluation both live at DAFx-08 and also online in
> Freesound.
> 7. Publish the results of the evaluation.
>
> Our proposed time-line is:
> - 1st August 2008: Finalize categories and evaluation procedures
> - 30th August 2008: Submit transformed sounds
> - 1st-4th September 2008: Run live evaluations at DAFx-08 Conference
> - 10th-30th September 2008: Run on-line evaluations on the
> Freesound site
> - 15th October 2008: Publish the results
>
> If you are interested in participating or in getting involved in the
> process, join the DAFxTRa mailing list
> (http://iua-mail.upf.es/mailman/listinfo/dafx-eval) for an open
> discussion.
> Results of the discussion and organizational details of the evaluation
> will be posted on a wiki (http://smcnetwork.org/wiki/DafxTRa2008).
>
> Your input and ideas will be most welcome.
>
> Graham Coleman (MTG-UPF) (contact person)
> Jordi Bonada (MTG-UPF)
> Perfecto Herrera (MTG-UPF)
> Xavier Serra (MTG-UPF)