On Tue, Nov 24, 2009 at 12:02 PM, <fons(a)kokkinizita.net> wrote:
1. Almost all GUI toolkits mix up a few things that should
remain well separated:
- Getting X11 events,
- Handling X11 events,
- Creating a framework for messaging (events + data)
  between threads, and if you are lucky an RT-safe API
  for this or at least part of it,
- Creating the main loop (which, if you separate out
  the three above, becomes a trivial thing).
Neither Qt nor GTK combine these. They are both built on an event loop
that has nothing to do with X11 events, or thread message dispatch.
A well-designed GUI toolkit should do only the first
two and be able to use whatever messaging system and
event loop.
I'm not quite sure what you mean, but my initial interpretation of
this suggests a goal that is a little absurd to me. The design of the
event loop is where the design of a GUI toolkit starts, because the
event loop needs to be able to handle things other than X11 (or Aqua
or GDI or whatever) events. If you use an event loop that doesn't do
this well, then you rule out a class of moderately complex
applications from being implemented in the most obvious way.
So, you start with a given event loop (hopefully a good one) and then
start adding other functionality on top, either as part of the toolkit
or as part of the application itself (via direct interactions with the
event loop API). How do you propose that the toolkit just be able to
switch to "any" event loop? When Trolltech modified Qt to be able to
(optionally) use the glib event loop, it was quite a bit of tricky
work. Perhaps you just mean that the specific event loop that is
chosen is not that relevant, rather than that it should be possible to
switch them?
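To make that concrete, here is a minimal sketch (plain C against the
glib main loop; the pipe and its handler are made up for illustration)
of an application watching some non-X11 i/o with the very same loop a
toolkit would use for its display connection:

#include <glib.h>
#include <unistd.h>

/* hypothetical handler for a non-GUI source of i/o */
static gboolean
on_pipe_readable (GIOChannel *chan, GIOCondition cond, gpointer data)
{
    char buf[64];
    ssize_t n = read (g_io_channel_unix_get_fd (chan), buf, sizeof (buf));
    g_print ("got %ld bytes of non-X11 i/o\n", (long) n);
    return TRUE;    /* keep the watch installed */
}

int
main (void)
{
    int fds[2];
    pipe (fds);     /* fds[0] is the read end watched below */

    GMainLoop  *loop = g_main_loop_new (NULL, FALSE);
    GIOChannel *chan = g_io_channel_unix_new (fds[0]);

    /* register arbitrary i/o with the same loop a toolkit would
       use for its display connection */
    g_io_add_watch (chan, G_IO_IN, on_pipe_readable, NULL);

    g_main_loop_run (loop);
    return 0;
}

Nothing in that sketch knows or cares about X11; a toolkit's display
connection would simply be one more source registered with the same
loop.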
2. Almost all toolkits try to be cross-platform, which
means they will translate X11 events and other data
structures from/into their 'portable' format. That would
be OK if they also provided these translations as
separate functions, which could then be used at the
interface between a host and a plugin using different
toolkits.
I don't think that would really address the issue. Take a look at what
Qt did to allow use of the glib event loop, and thus, by extension,
GTK widgets from within a Qt application. It has nothing to do with
event translation - you simply deliver the raw events from whatever
source (or even, just "i/o") into the relevant event loop handlers and
the translation happens in the same place it normally would. There is
no reason to expose this to any higher level - the plugin
is not interested in doing explicit translation in its GUI - it just
wants a normal event stream delivered to it. What matters here is
being able to *register* the event handler with the event loop, and
this is precisely where the lack of any common event loop
abstraction on Unix becomes a problem. GTK has a very abstract
event handler that picks up
stuff from a communication endpoint that (might) happen to be
connected to an X server. But the API it uses for this (provided by
the glib event loop) is different than the one it would have to use if
it wanted to integrate with the Qt "native" event loop. As a result,
you either have to have N versions of the low level integration
handlers (N == number of event loop APIs supported) or you need a
common event loop API. Unix has never had the latter, and most toolkit
developers are not planning to see their upper layers running on a
different event loop core. What Trolltech did with Qt + glib was quite
remarkable, really. However, it still only makes N=2. If you wanted to
use Qt widgets in an application built on a self-made X11 event
loop, it wouldn't work.
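To be concrete about that registration step, here is a rough sketch
(glib + Xlib, in C; handle_plugin_xevent() is a hypothetical stand-in
for whatever the plugin's toolkit does internally) of hooking a raw
X11 connection into a host's glib loop:

#include <glib.h>
#include <X11/Xlib.h>

/* hypothetical stand-in for the plugin toolkit's own dispatch */
static void handle_plugin_xevent (XEvent *ev) { }

static gboolean
x11_fd_ready (GIOChannel *chan, GIOCondition cond, gpointer data)
{
    Display *dpy = data;

    /* drain everything queued on the X connection; translation into
       whatever the toolkit uses internally happens in here, exactly
       where it always would */
    while (XPending (dpy)) {
        XEvent ev;
        XNextEvent (dpy, &ev);
        handle_plugin_xevent (&ev);
    }
    return TRUE;
}

void
plugin_attach_to_glib_loop (Display *dpy)
{
    /* the X connection is just a file descriptor ... */
    GIOChannel *chan = g_io_channel_unix_new (ConnectionNumber (dpy));

    /* ... and this single call is the registration step - the part
       that would have to be rewritten against Qt's native loop,
       CFRunLoop, or a self-made select() loop */
    g_io_add_watch (chan, G_IO_IN, x11_fd_ready, dpy);
}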
The only reason I can see why most GUI toolkits do not
provide the required modularity is that it would complicate
their 5-line 'hello world in a window' demo programs, and
that is considered bad marketing.
I think this is ridiculous. 99% or more of the people developing
applications with GUI toolkits are not interested in the kind of
modularity that you describe. It would bring them no benefits, and
would complicate the lives of toolkit developers and possibly app
developers too. Why should toolkit developers add complexity to their
work to satisfy a tiny number of developers who want to be able to do
clever things?
OS X put the run loop abstraction (CFRunLoop) into the lowest layer of
their "application stack" (i.e. the part of OS X that isn't Unix). To
me, *that's* the right kind of "modularity by commonality" - you could
build a huge variety of different toolkits and GUIs around this single
abstraction (and indeed, their X11 server does just that, alongside
the Aqua display server). But on Unix, everybody's been pushing for
"modularity by decomposition" for so long that the idea that you'd
have a common core for any application that is event-driven rather
than data-driven just seems ... well, it's disparaged as
"unnecessary".
The fact that such trivial
demos are not representative of real-life applications, let
alone those that do include real-time audio processing, is
beside the point; the user will discover that only much later, when
he's already firmly hooked into a particular toolkit.
The only thing that an app with an RT part and a GUI part really needs
is a way of delivering messages from the RT part to the GUI part in an
RT-safe way. This doesn't require the cooperation of the toolkit at
all, so why would the toolkit matter?
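A sketch of the usual pattern, using the JACK ringbuffer and a glib
timeout purely as examples (the message struct and function names are
invented for illustration):

#include <glib.h>
#include <jack/ringbuffer.h>

typedef struct { float peak; int channel; } Msg;   /* illustrative */

static jack_ringbuffer_t *rb;

/* called from the RT (audio) thread: no locks, no allocation */
static void
rt_post_message (float peak, int channel)
{
    Msg m = { peak, channel };
    if (jack_ringbuffer_write_space (rb) >= sizeof m)
        jack_ringbuffer_write (rb, (const char *) &m, sizeof m);
}

/* called from the GUI thread, e.g. every 50 ms */
static gboolean
gui_poll_messages (gpointer data)
{
    Msg m;
    while (jack_ringbuffer_read_space (rb) >= sizeof m) {
        jack_ringbuffer_read (rb, (char *) &m, sizeof m);
        /* update meters, redraw widgets, etc. */
    }
    return TRUE;    /* keep the timeout running */
}

void
setup_rt_to_gui_channel (void)
{
    rb = jack_ringbuffer_create (4096);
    g_timeout_add (50, gui_poll_messages, NULL);
}

The RT side never blocks or allocates, and the GUI side only needs a
periodic timer - something every toolkit already provides.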
There are lots of different reasons to prefer one toolkit over
another, but the presence of RT code in its own thread within the app
has never seemed to be one of them, for me.