Hi!
I am trying to create a userspace driver with the uinput module, and I am
running into trouble when I terminate the program that creates the
device. After a while the machine will crash ...
The objective is to create a joystick device, and I may be missing some
important bits of documentation. So far I am not writing any data to the
interface, but js_demo will find it and open it, showing the expected
number of axes. The name of the interface looks random though, which
worries me and makes me suspect that something is wrong.
Any idea why the machine is crashing, or a pointer to some example I
could study? As long as I just let the program run there is no
problem ... only on termination.
This is what I have got so far in the uinput department:
--8<-------------------------------------------------
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <signal.h>
#include <linux/input.h>
#include <linux/uinput.h>
#include <unistd.h>

/* file-scope state shared with the signal handler */
static struct uinput_user_dev uinp;
static int uinp_fd = -1;

static void destroy_uinput (int sig);

int create_uinput (void)
{
    int fd = open ("/dev/uinput", O_RDWR);
    if (fd < 0)
    {
        fprintf (stderr, "could not open uinput device\n");
        exit (EXIT_FAILURE);
    }
    memset (&uinp, 0, sizeof (struct uinput_user_dev));
    strncpy (uinp.name, "NoJoy!", sizeof (uinp.name) - 1); // FIXME!
    uinp.id.version = 4;
    uinp.id.bustype = BUS_USB;
    ioctl (fd, UI_SET_EVBIT, EV_ABS);
    ioctl (fd, UI_SET_EVBIT, EV_KEY);
    ioctl (fd, UI_SET_ABSBIT, ABS_X);
    ioctl (fd, UI_SET_ABSBIT, ABS_Y);
    ioctl (fd, UI_SET_ABSBIT, ABS_Z);
    ioctl (fd, UI_SET_ABSBIT, ABS_THROTTLE);
    ioctl (fd, UI_SET_KEYBIT, BTN_TOP);
    ioctl (fd, UI_SET_KEYBIT, BTN_TOP2);
    ioctl (fd, UI_SET_KEYBIT, BTN_BASE);
    ioctl (fd, UI_SET_KEYBIT, BTN_BASE2);
    ioctl (fd, UI_SET_KEYBIT, BTN_BASE3);
    ioctl (fd, UI_SET_KEYBIT, BTN_BASE4);
    // Create device; if this write fails, the device ends up with a
    // garbage name, so check the return value
    if (write (fd, &uinp, sizeof (uinp)) != sizeof (uinp))
    {
        fprintf (stderr, "could not write uinput_user_dev struct.\n");
        exit (EXIT_FAILURE);
    }
    if (ioctl (fd, UI_DEV_CREATE))
    {
        fprintf (stderr, "could not create uinput device.\n");
        exit (EXIT_FAILURE);
    }
    uinp_fd = fd;   // remember the fd for the signal handler
    sigset (SIGINT, destroy_uinput);
    sigset (SIGTERM, destroy_uinput);
    return fd;
}

void destroy_uinput (int sig)
{
    fprintf (stderr, "NoJoy says: BYE!\n");
    if (uinp_fd >= 0)
    {
        ioctl (uinp_fd, UI_DEV_DESTROY);
        close (uinp_fd);
    }
    exit (0);
}
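For completeness, a sketch of how data could later be written to the
interface: position updates go out as struct input_event records on the
same fd, each batch terminated by an EV_SYN/SYN_REPORT event. The helper
name is my own illustration, and timestamps are left zeroed for brevity:

```c
#include <string.h>
#include <unistd.h>
#include <linux/input.h>

/* Sketch: send one absolute-axis value followed by a SYN_REPORT.
 * Returns 0 on success, -1 if either write fails. */
static int send_abs (int fd, int code, int value)
{
    struct input_event ev;

    memset (&ev, 0, sizeof ev);
    ev.type  = EV_ABS;
    ev.code  = code;
    ev.value = value;
    if (write (fd, &ev, sizeof ev) != (ssize_t) sizeof ev) return -1;

    memset (&ev, 0, sizeof ev);
    ev.type = EV_SYN;
    ev.code = SYN_REPORT;
    if (write (fd, &ev, sizeof ev) != (ssize_t) sizeof ev) return -1;
    return 0;
}
```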
--
Fons Adriaensen <fons(a)kokkinizita.net> sez:
>
> On Thu, Sep 27, 2007 at 09:03:32PM -0700, Maitland Vaughan-Turner wrote:
>
> > There was something like a 50% success rate in choosing the audio
> > source correctly.
>
> Which means it was just a random selection...
>
Obviously... (not that it proves anything either way)
I just thought it was an interesting coincidence that this article
appeared right on the heels of our discussion of the subject.
I *did* say maybe you're right and maybe it *is* all in my head.
hehe, no need to rub salt in it. :)
~Maitland
Hiho,
I am having a discussion on the supercollider front about what is the proper
way for dynamic linking.
As far as I know, you list the directories that programs dynamically
link against in /etc/ld.so.conf and then run ldconfig.
But what is supposed to happen if the user just installs the program
into a directory in their home directory? How should the dynamic linking
be set up, especially if the user does not have root rights to change
/etc/ld.so.conf or to run ldconfig?
I did not find anything quickly on the net about this, so maybe one of
you can enlighten me as to the "proper" way of dealing with this.
sincerely,
Marije
Hallo!
> our experience with ardour has been that DC bias is measurably more
> effective at reducing CPU load than DAZ, FTZ or both combined. DAZ and
> FTZ do both help significantly, however.
One more question: is it not necessary to deactivate DAZ and FTZ again
after the application (or operation) finishes? Or is this done
automatically?
(I cannot see it being done in your ardour code, for example.)
Because e.g. in this document:
http://developer.apple.com/documentation/Performance/Conceptual/Accelerate_…
they set it back afterwards.
Thanks,
LG
Georg
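For what it's worth, on x86 the DAZ/FTZ bits live in the per-thread MXCSR
register and nothing resets them automatically, so restoring is up to you
(which is presumably why the Apple document sets them back). A sketch
using the SSE intrinsics, my own illustration rather than Ardour's code:

```c
#include <xmmintrin.h>   /* _mm_getcsr, _mm_setcsr, FTZ macros (SSE)  */
#include <pmmintrin.h>   /* _MM_SET_DENORMALS_ZERO_MODE (SSE3)        */

/* Enable FTZ/DAZ, multiply two small floats whose product is subnormal,
 * then restore the previous MXCSR state.  Returns 1 if the product was
 * flushed to zero, 0 otherwise. */
int denormals_flushed (void)
{
    unsigned int saved = _mm_getcsr ();             /* save current mode */
    _MM_SET_FLUSH_ZERO_MODE (_MM_FLUSH_ZERO_ON);
    _MM_SET_DENORMALS_ZERO_MODE (_MM_DENORMALS_ZERO_ON);

    volatile float a = 1e-20f, b = 1e-20f;  /* 1e-40 is subnormal */
    float c = a * b;                        /* flushed to 0 under FTZ */

    _mm_setcsr (saved);                     /* restore afterwards */
    return c == 0.0f;
}
```

Restoring matters if other code in the same thread relies on subnormals;
if the whole process is your audio application, leaving FTZ/DAZ on for
its lifetime is also a defensible choice.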
Dominique Michel <dominique.michel(a)citycable.ch> sez:
>
> Personally, I don't like it. I prefer very much a good stereo sound in the
> original language (with some kind of text if it is a language that I don't
> understand) like on the Swedish TV.
>
> As for that PCM-DSD stuff, I prefer PCM because we can achieve good
> sound quality with a much lower bandwidth. DSD was fine at the beginning
> of digital recording because there was nothing else (as far as I know),
> but for today's professional audio, DSD is a waste of resources because
> of the huge bandwidth it needs.
>
> Dominique
>
This hits near something I was wondering: wouldn't it be pretty easy
to do lossless compression on a DSD stream? Since it's only one bit,
it seems like you could use simple run-length encoding to achieve
pretty good results.
~Maitland
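A toy version of that idea, just to make the run-length point concrete.
(Illustrative only: the lossless codec actually used on SACDs, DST, is
far more elaborate, partly because a noise-shaped 1-bit stream toggles
so often that plain RLE gains little.)

```c
#include <stddef.h>

/* Toy run-length encoder for a 1-bit stream.
 * bits: input array of 0/1 samples
 * runs: output run lengths; runs alternate value, starting with bits[0]
 * Returns the number of runs written. */
size_t rle_encode (const unsigned char *bits, size_t n, size_t *runs)
{
    if (n == 0) return 0;
    size_t nruns = 0, len = 1;
    for (size_t i = 1; i < n; i++)
    {
        if (bits[i] == bits[i - 1])
            len++;                  /* extend the current run */
        else
        {
            runs[nruns++] = len;    /* emit run, start a new one */
            len = 1;
        }
    }
    runs[nruns++] = len;            /* emit the final run */
    return nruns;
}
```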
Gordon JC Pearce <gordonjcp(a)gjcp.net> sez:
>
> On Tue, 2007-09-25 at 19:31 +0200, Fons Adriaensen wrote:
>
> > A nice variation on this theme occured years ago at an AES conference.
> > The speaker wanted to demonstrate that 'digital' sound was crap, by
> > using the familiar 'push down the extended arm' test. Test persons
> > listening to analog sound could easily resist, while they lost all
> > force when listening to a digital recording.
> >
> > What the speaker didn't know was that the PA system used to play the
> > tracks was fully digital...
>
> I once helped prepare the equipment for a double-blind test of speaker
> cables. All the golden-eared audiophiles picked out one cable as being
> far superior to the others, with better clarity and definition in the
> upper harmonics and tighter, more defined bass or some such bollocks.
>
> I did have to buy my Mum a new extension lead for her lawnmower, though.
> Sixty feet of Black and Decker's finest, with Speakon plugs soldered to
> it.
>
> Gordon
That's pretty freaking funny. Reminds me of the Penn and Teller episode
where they sell the diners in a trendy restaurant water from a garden
hose on the patio. haha http://www.youtube.com/watch?v=XfPAjUvvnIc it's
really funny stuff; worth watching if you have a couple of minutes.
Incidentally, I get my water from a mountain spring up the road (can't
do much better than that, eh?), although I *have* bought a bottle or
two of Evian in my day :)
As for the double-blind audio test, I reckon you guys have all seen
the new AES journal by now (if not, go to your mailbox). There is a
double-blind test where people were played DVD-As and SACDs, but some
of them were passed through an extra A/D/A stage at 16-bit/44.1 kHz.
There was something like a 50% success rate in choosing the audio
source correctly.
Maybe it *is* all in my head...?
Although, I wonder if ear training has anything to do with it. I'm
super dorky sometimes, and I used to mix multitrack projects to
different bit-depths/sample-rates and then try to train myself to hear
the differences between them. hahaha, I figure most people don't do
that sort of thing...
~Maitland
Hello guys,
We are looking for a developer who can extend and complete the Qtractor
features into an audio/MIDI styles player/editor.
http://qtractor.sourceforge.net/qtractor-index.html
We need to include the 16-pattern mode player and merge our existing
arranger-styles engine's chord features into the Qtractor sequencer: we
must be able to switch between the 16 patterns in real time and
recognize the chord system.
Arranger styles are the typical style engine used by the most popular
Roland, Yamaha, and Korg keyboards.
We are looking to replace the basic MIDI styles player on our
Mediastation Linux keyboards.
If someone is really interested and has audio/MIDI styles know-how, just
contact me for more information and a specification.
Cheers
Domenico
Lionstracs Italy
www.lionstracs.com
2007/9/27, Georg Holzmann <grh(a)mur.at>:
> Hallo list!
>
> I am just thinking about the right strategy for denormal handling in a
> floating point (single or double prec) audio application (and yes I
> already read the docs of the different methods at musicdsp and so on ...)
>
> Basically my question is, if it is enough to simply turn on the
> Flush-to-zero and Denormals-are-zero mode and then compile everything
> with -msse -mfpmath=sse ?
> I know it won't run on older Pentium3,2 etc. - but for the machines
> which support this feature, is this enough ?
>
> Thanks for any hint,
> LG
> Georg
I have a copy of a paper written by Laurent de Soras from Ohm Force,
which discusses different solutions; you can find it here:
http://rtfm.osslab.eu/english/audio/denormal.pdf
Regards
Elthariel
> _______________________________________________
> Linux-audio-dev mailing list
> Linux-audio-dev(a)lists.linuxaudio.org
> http://lists.linuxaudio.org/mailman/listinfo.cgi/linux-audio-dev
>
On 9/24/07, linux-audio-dev-request(a)lists.linuxaudio.org wrote:
> Date: Mon, 24 Sep 2007 21:59:01 +0200
> From: Fons Adriaensen <fons(a)kokkinizita.net>
>
> On Mon, Sep 24, 2007 at 11:50:55AM -0700, Maitland Vaughan-Turner wrote:
>
> > Intuitively, one could also say that more sample points yield a
> > waveform that is closer to a continuous, analog waveform. Thus it
> > sounds more analog.
>
> This is completely wrong. Sorry to be rude, but such a statement
> only shows your lack of understanding.
Why is it wrong? If I drew some dots on a waveform and then connected
the dots to try to reconstruct the waveform, wouldn't I get a better
result with more dots?
>
> > Thanks for the link. My whole point of digging up this old thread
> > though, was to say that I've tried it, and my ears tell me that the
> > papers are incorrect.
>
> Then please point out the errors in the paper by Lipshitz and Vanderkooy.
My ears tell me that... that's all; it's just subjective. haha, I see
subjective reports don't get you far around here.
>
> I'm not saying that DSD is crap. It sounds well. But it doesn't meet
> the claims set for it (as shown by L&V - you need at least two bits
> to have a 'linear' channel) and as a storage or transmission format
> it's inefficient compared to PCM. That means that if you use PCM with
> the same number of bits per second as used by DSD, you get a better
> result than what DSD delivers.
Well, what do you mean by better? It seems like 24-bit is already
better in terms of dynamic range at any sample rate, but if you mean
a more detailed representation of a waveform (in time), it seems like
you necessarily need the highest possible sample rate.
Like, if I were just recording an acoustic guitar and vocals, of
course 24-bit would be the best choice.
But if I'm recording a live band, there is just so much stuff
happening at once... You can't pinpoint an exact time when the
keyboard player presses a key, and you can't pinpoint just when I
pluck that bass string. A 96 kHz 24-bit system might say that two
events happened at exactly the same time, when really they were closer
to 1/100000 of a second apart. Now think about how many times something
like that could happen in a live recording with many instruments and
vocals and background noise from the crowd, etc. I'd rather have the
detail than the dynamic range in that case...
~Maitland
Quoting nescivi <nescivi(a)gmail.com>:
> Hiho,
>
> I am having a discussion on the supercollider front about what is the
> proper
> way for dynamic linking.
>
> as far as I know, you use ldconfig and have the library location that
> programs
> dynamically link to defined in /etc/ld.so.conf
>
> but what is supposed to happen if the user just installs the program to a
>
> directory in his home directory?
> how should the dynamic linking be defined?
Ardour installs its own versions of the included libraries in its own
directory, PREFIX/lib/ardour2/, and the executable it installs in
PREFIX/bin/ is actually a shell script. That script uses the
LD_LIBRARY_PATH environment variable to make sure the versions installed
with Ardour are loaded. After setting that variable, the script launches
the actual binary, which is also installed in PREFIX/lib/ardour2/.
I think this is the proper way to do it. It is also the way programs
like Firefox do it (as a quick 'less $(which firefox)' will tell you).
Sampo
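To make that concrete, a launcher along the lines Sampo describes might
look like this. The name "myapp" and the prefix layout are placeholders,
not Ardour's actual paths:

```shell
#!/bin/sh
# Hypothetical launcher: prepend the app's private lib directory to
# LD_LIBRARY_PATH (only for this one command), then run the real
# binary, which lives next to the bundled libraries.
run_wrapped () {
    prefix="$1"; shift
    LD_LIBRARY_PATH="$prefix/lib/myapp${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}" \
        "$prefix/lib/myapp/myapp-bin" "$@"
}
```

An installed copy of the script would hardcode its prefix and exec the
binary so no extra shell process lingers. An alternative that avoids the
wrapper entirely is linking the binary with
-Wl,-rpath,'$ORIGIN/../lib/myapp', which likewise needs neither root
rights nor any change to /etc/ld.so.conf.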