Hi Julien,
Hello!
So, as suggested on LAU, I moved this discussion back on-list, but
thought it would be more relevant to LAD than to LAU for obvious reasons.
Maybe some general points first. Blind or visually impaired people mostly
work with one of these technologies: a Braille display, speech synthesis
(text-to-speech), or a screen magnifier. The latter is, I think, the easiest
to accommodate, as GNOME and even plain X offer screen magnifiers.
Braille displays and speech synths are, in general, one-dimensional tools.
Both can work with graphics, yet there are restrictions.
Very interesting points
there.
I am short-sighted but lucky enough to be able to use a screen (with
tweaked fonts, resolution and other settings), so I am very interested in
this topic and in accessibility in general.
I have always asked myself whether it would make sense, be useful, be
feasible, etc. to support visually impaired people with applications whose
user interfaces give non-speech (i.e. non-TTS) audio feedback. Just a
(probably very naive) example: in an audio application, say a DAW, you
change a fader or an automation point, or move through the timeline, and
the frequency of a 'custom tone' moves up or down following the movement;
a certain frequency marks the zero and another the maximum. All of this
should be totally customisable, of course.
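To make the idea concrete, the fader-to-tone mapping could be sketched like this (a minimal, purely illustrative Python fragment; the function name, the default frequency range, and the logarithmic scaling are my own assumptions, not anything from an existing DAW):

```python
import math

# Hypothetical sonification mapping: a normalised control value in [0, 1]
# (fader position, automation point, timeline position, ...) is mapped to
# a tone frequency between f_min and f_max. A logarithmic scale is used so
# that equal fader moves are heard as equal pitch steps. Both endpoints
# would be user-customisable, as suggested above.
def value_to_freq(value, f_min=220.0, f_max=880.0):
    """Map a value in [0, 1] to a frequency in [f_min, f_max] (log scale)."""
    value = max(0.0, min(1.0, value))  # clamp out-of-range input
    return f_min * (f_max / f_min) ** value

# With the defaults above, the fader midpoint lands on the geometric mean
# of the range:
print(round(value_to_freq(0.5), 1))  # 440.0 Hz
```

The actual tone would then be fed to an oscillator (JACK, ALSA, whatever the host uses); the point is only that the value-to-pitch curve is a tiny, well-defined piece that the user could reconfigure.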
What do you think? Does it make any sense at all or is this idea
completely off?
Kind regards,
Lorenzo.