Paul Davis paul at linuxaudiosystems.com
Tue Nov 24 19:46:44 UTC 2009

On Tue, Nov 24, 2009 at 2:24 PM,  <fons at kokkinizita.net> wrote:

> An event loop (as I use the term) is just something of the
> form
> while (running)
> {
>   E = wait_for_events();
>   process_event(E);
> }


> In process_event() the first selection would be on event
> origin. Messages from your other threads, for example, would
> get handled by whatever code you provide for that, while
> X11 events would be handled by code provided by the GUI
> toolset, delivering them somehow to the objects that need
> them.
> From what you write I understand that you call this handling
> of X11 events 'the event loop' of the toolset, which is of
> course something quite different than what I understand by
> this term and try to explain here.

not at all. what i mean by the event loop is exactly the same as you.
but any general purpose event loop needs a way to add and remove event
sources, which can include additional file descriptors, timeouts, the
concept of "idle" and other more esoteric things. so at the core of
any GTK application, for example, is the extremely general purpose
glib event loop. the connection to the X11 display server is just one
source of events that it handles (though clearly, for a GTK app, an
important one).

> The problem I pointed out exists when the 'real' loop (in
> the C, C++ sense), in other words the while() thing above,
> is completely absorbed into a GUI toolkit.

you can view it that way around if you wish. but i think that it's
equally accurate to say that things start with a particular event loop
(in the C, C++ sense) and then the toolkit is built around it.

> He should e.g. not be forced to
> translate all his event/messages sources into e.g. a poll()
> based framework just because the toolkit uses a fd to wait
> on X11.

the glib event loop has a completely abstract notion of what an event
source is. an event source simply has a few simple functions like
"prepare", "check" and "dispatch".

> The loop as written above is a multiplexer (the wait_for_events()),
> and a demultiplexer (the process_event()). The GUI toolkit is on
> an input to the first and on an output of the second. The only
> thing that should matter is that the link between these two
> points exists, not how it is implemented.

the GTK (and now Qt) toolkits simply add handlers for events that come
from the "X11 event source" that they added early in the program's
life. the event loop can be handling other entirely different event
sources using entirely different code - but it is all now centralized
by the single glib event loop. the event loop itself doesn't really
care about what the event types are, or what handles them.

> What do you mean by 'raw events', or a 'normal event stream' ?

raw events: whatever actually happened somewhere in the computer to
make an event source believe that a new event was ready. could be an
X11 event, could be a byte arriving on an arbitrary byte-oriented
communication endpoint, the creation or modification of a file, etc.
etc. etc.

"normal event stream" : whatever the GUI toolkit passes around to widgets.

> What you write seems to suggest that both the GTK and Qt
> dispatching mechanism exist at the same time, that they don't
> really interface to each other, but that raw X events get
> handled either by the one or the other.

That's more or less correct, yes.

> This will work if
> they can somehow work out who should handle what without
> tripping on things from the other one.

And they can, by using a variety of mechanisms, none of which are
particularly clean.

> All raw X events have
> a destination window ID, and for most cases testing on that
> would be all that's required.

for X events, yes. But the event loop handles things other than X events.

> But again this has nothing to do
> with how the event loop (in my sense) is organised.

Agreed. What I consider central is the idea of a set of event sources,
an event loop, and handlers for the events that are injected into the
loop. There is no common framework for this on Unix; there never has
been, and as long as design policy is made by developers who value
choice and flexibility over single frameworks that enforce consistency,
there almost certainly never will be. Hell, on Unix you can't even wait
for file I/O and/or a signal in the same thread.

Again, contrast this to the situation on OS X where time and again,
when I am wondering just how you are supposed to integrate a
particular kind of event-driven programming into some code, it all
ultimately comes back to a CFRunLoop (which conceptually is not far
from being a thread, although it's not).

The Unix programming model is awesome for data-driven programming. It
just hasn't ever stepped up to provide these kinds of abstractions for
event-driven programming.


More information about the Linux-audio-dev mailing list