An idea,

Sean McMahon smcmahon at usgs.gov
Wed Jul 27 14:12:49 EDT 2005


They really should have a device that can be trained to recognize certain
shapes and just say what they are, one you could point at any visual
surface.
----- Original Message ----- 
From: "Janina Sajka" <janina at rednote.net>
To: "Speakup is a screen review system for Linux." <speakup at braille.uwo.ca>
Sent: Wednesday, July 27, 2005 6:05 AM
Subject: Re: An idea,


> Hi, Lorenzo:
>
> Others have responded with reference to what an X server does, and
> doesn't do. I want to respond to two other particular points from your
> message.
>
> Lorenzo Taylor writes:
> > -----BEGIN PGP SIGNED MESSAGE-----
> > Hash: SHA1
> >
> >  ... Gnopernicus, for example, is using libraries that
> > rely on certain information sent by the underlying application libraries.
> > Unfortunately, this implementation causes only some apps to speak while
> > others which use the same widgets but whose libraries don't send messages
> > to the accessibility system will not speak.
>
> This is only partially correct. Any applications using those "same
> widgets," as you put it, will speak. There are no exceptions.
>
> What causes them to not speak is that the properties required to make
> them speak have not been supplied. So, Gnopernicus is getting an empty
> string to render, which I suppose it dutifully renders as silence.
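[That mechanism can be sketched in a few lines of plain Python. This is an
illustration of the idea only: the Widget class, its accessible_name
attribute, and the speak() function below are hypothetical stand-ins, not
the actual ATK/Gnopernicus API.]

```python
# Hypothetical sketch: a screen reader utters a widget's accessible-name
# property. Two widgets of the same class behave differently only because
# one developer supplied the property and the other did not.

class Widget:
    def __init__(self, role, accessible_name=""):
        self.role = role
        # The property the application developer must supply; if it is
        # missing, the screen reader receives an empty string.
        self.accessible_name = accessible_name

def speak(widget):
    """Return what a screen reader would utter for this widget."""
    # An empty accessible name is dutifully "rendered" as silence.
    return widget.accessible_name

labeled = Widget("slider", accessible_name="Volume")
unlabeled = Widget("slider")  # same widget class, property never set

print(repr(speak(labeled)))    # 'Volume'
print(repr(speak(unlabeled)))  # '' -- i.e. silence
```

[The point of the sketch: the widget code path is identical in both cases;
only the missing property data makes the second one silent.]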
>
> Fortunately, these are open source applications and we don't need an
> advocacy campaign to resolve these kinds of problems. A solid example of
> this at work is the Gnome Volume Control. It was written with gtk2, but
> the developers did not supply all the relevant property data. So, a
> blind programmer came along one weekend, fixed it, and submitted the
> patch which has shipped with the rest of Gnome Volume Control ever
> since.
>
> Now the next point ...
>
> >  But it occurs to me that X is simply a
> > protocol by which client applications send messages to a server which
> > renders the proper text, windows, buttons and other widgets on the
> > screen.  I believe that a screen reader that is an extension to the X
> > server itself, (like Speakup is a set of patches to the kernel) would
> > be a far better solution, as it could capture everything sent to the
> > server and correctly translate it into humanly understandable speech
> > output without relying on "accessibility messages" being sent from the
> > client apps.
>
>
> As others have pointed out, there's nothing to be gained by speaking RGB
> values at some particular X-Y mouse coordinate location. But, I'm sure
> that's not what you really intend. If I interpret you correctly you're
> suggesting some kind of mechanism whereby a widget of some kind can be
> reliably identified and assigned values that the screen reader can
> henceforth utter. This is the approach with Windows OSM that has been
> used over the past decade, and it's what allows screen readers, like
> JFW, to develop interfaces based on scripts. For instance: take widget
> number 38,492 and call it "volume slider," speak it before anything
> else on screen when it shows up on screen, and facilitate the method
> that will allow the user to use the up and down arrows to change its
> value, etc., etc.
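[A minimal sketch of that script-driven approach, reusing the widget
number and label from the example above. The table-driven format here is
purely illustrative; it is not JFW's actual scripting language.]

```python
# Illustrative sketch: a script, external to the application, assigns a
# spoken label and key behavior to an otherwise anonymous widget ID.

SCRIPT = {
    38492: {
        "label": "volume slider",
        "speak_first": True,             # announce before anything else
        "keys": {"Up": +1, "Down": -1},  # arrows adjust the value
    },
}

def announce(widget_id, value):
    """Return what the screen reader says when the widget appears."""
    entry = SCRIPT.get(widget_id)
    if entry is None:
        return ""  # unscripted widget: nothing useful to say
    return f"{entry['label']}, {value}"

def handle_key(widget_id, value, key):
    """Apply a scripted key binding to the widget's value."""
    entry = SCRIPT.get(widget_id)
    if entry and key in entry["keys"]:
        return value + entry["keys"][key]
    return value

print(announce(38492, 50))          # volume slider, 50
print(handle_key(38492, 50, "Up"))  # 51
```

[Note the trade-off this illustrates: the knowledge lives in the screen
reader's script table rather than in the application, which is exactly
what per-application scripting entails.]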
>
> It is arguable, and has been cogently argued over the past 18 months,
> that the failure of the original Desktop Accessibility Architecture
> promoted by Sun and Gnome was that it did not provide such mechanisms. A great
> part of the intent of the Orca screen reader proof of concept was to
> provide exactly this kind of functionality. I believe this is now being
> addressed, though I'm not aware that any code for newer Gnopernicus (or
> post-Gnopernicus) readers has yet been released. However, I do fully
> expect that Gnopernicus is not the last word in desktop screen readers.
>
> Janina
>
> _______________________________________________
> Speakup mailing list
> Speakup at braille.uwo.ca
> http://speech.braille.uwo.ca/mailman/listinfo/speakup
