On Wednesday, 29. March 2006 21:08, Lee Revell wrote:
> On Wed, 2006-03-29 at 17:28 +0200, Florian Schmidt wrote:
> > [ 4797.602341] jamin:17931 userspace BUG: scheduling in user-atomic
> > context!
> > [ 4797.602362] [<c01049f5>] show_trace+0x25/0x30 (20)
> > [ 4797.602382] [<c0104a23>] dump_stack+0x23/0x30 (20)
> > [ 4797.602395] [<c02bfa64>] schedule+0x114/0x140 (36)
> > [ 4797.602412] [<c02bfda4>] wait_for_completion+0xa4/0xe0 (48)
> > [ 4797.602426] [<c017bb68>] do_coredump+0x348/0x780 (192)
> > [ 4797.602456] [<c012edf1>] get_signal_to_deliver+0x391/0x510 (60)
> > [ 4797.602473] [<c0102c68>] do_notify_resume+0xb8/0x75c (220)
> > [ 4797.602486] [<c01034fc>] work_notifysig+0x13/0x1b (-8116)
> > [ 4797.602498] ---------------------------
> > [ 4797.602504] | preempt count: 00000000 ]
> > [ 4797.602511] | 0-level deep critical section nesting:
> > [ 4797.602518] ----------------------------------------
> Doesn't this trace mean the problem is that jamin actually crashed and
> dumped core on exit?
You're right... This is the backtrace:
#0 0xffffe410 in __kernel_vsyscall ()
#1 0x451a69a1 in raise () from /lib/tls/i686/cmov/libc.so.6
#2 0x451a82b9 in abort () from /lib/tls/i686/cmov/libc.so.6
#3 0x0805a69e in io_queue (nframes=0, nchannels=2, in=0xb73e03c4,
out=0xb73e03bc) at io.c:477
#4 0x0805a7c5 in io_process (nframes=64, arg=0x0) at io.c:545
#5 0xb7f7d7c7 in jack_client_thread (arg=0x8120c00) at client.c:1465
#6 0xb7f80e04 in jack_thread_proxy (varg=0x82324a8) at thread.c:111
#7 0x453e0341 in start_thread () from /lib/tls/i686/cmov/libpthread.so.0
#8 0x452474de in clone () from /lib/tls/i686/cmov/libc.so.6
So jamin is actually calling abort() at io.c:477. But why? What is
happening here, and what can I do about it?
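My guess is that io_queue() hits some sanity check and calls abort() directly (an assert() would normally leave a __assert_fail frame between io_queue and abort, and there isn't one). What also looks odd is that frame #3 shows nframes=0 even though io_process() was handed nframes=64. Purely as an illustration (the bodies below are made up, only the function names and arguments come from the backtrace), something like this would produce exactly that trace:

  /* Hypothetical illustration only -- not jamin's real io.c.  The function
   * names and arguments are taken from the backtrace above; the bodies are
   * guesses at how abort() could be reached from a JACK process callback
   * (io_process would be registered with jack_set_process_callback()). */
  #include <stdio.h>
  #include <stdlib.h>
  #include <jack/jack.h>

  static void io_queue(jack_nframes_t nframes, int nchannels,
                       float **in, float **out)
  {
      if (nframes == 0) {            /* sanity check on the chunk size */
          fprintf(stderr, "io_queue: nframes == 0, giving up\n");
          abort();                   /* -> raise() -> SIGABRT and a core dump,
                                        which is what the kernel trace shows */
      }
      /* ... copy nframes samples for each of nchannels ports ... */
      (void)nchannels; (void)in; (void)out;
  }

  static int io_process(jack_nframes_t nframes, void *arg)
  {
      float *in[2] = { NULL, NULL }, *out[2] = { NULL, NULL };
      jack_nframes_t queued = 0;     /* e.g. whatever is left in some internal
                                        buffer; 0 here reproduces frame #3 */
      /* ... in/out would normally come from jack_port_get_buffer() ... */
      io_queue(queued, 2, in, out);
      (void)nframes; (void)arg;
      return 0;
  }

If that is roughly what io.c:477 does, the next question is why a zero-frame chunk reaches io_queue() in the first place.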
Dominic