anti-word

Yvonne Smith yvonne at thewatch.net
Sun Jan 13 01:46:28 EST 2002


Good grief, a text mode word clone? The amount of work involved in
that would be out of this world. I think our only hope here, people,
is that the eventual speech access to GNOME will give us access to
some of the word processors in Linux that save in Word format. None of
them are perfect, but I think an xwindows screen reader would be a lot more
productive than trying to write such a beast. 
And no, this is not a prelude to the "we don't need xwindows" rant
that I just know someone is going to reply to this with. I'm with you
on this, ok? I might not use speakup much, being primarily an emacspeak
user <no, *that* isn't worth going on a rant about either, I learnt
emacspeak first and only use speakup occasionally when necessary,
personal preference>. Basically I'm much happier in a console or in
emacs myself. All I'm saying is that, I can't imagine too many people
other than us would have a huge amount of use for a word processor
like that. I seem to have vague memories of a console version of
WordPerfect existing at one point, if you bought the commercial
version, but don't quote me on it. I'm pretty sure it doesn't exist now in any
case. What I'm trying to say is, right now, we just have to live with
what we've got. If we want to do more than read Word documents, we've
got to run Windows until someone writes a screen reader that'll let us
use StarOffice or something of the sort. If we want to use
JavaScript, we've got to use Windows until someone writes a screen
reader that'll let us use Netscape or Galeon or something. I know, I
know, it's harsh, but most sighted people aren't going to write these
for us in console mode. They can already use all this stuff in X, and
open source, like it or not, usually involves people writing what they
personally have a use for. If none of us can or have the time to write
this stuff ourselves, most likely it isn't going to get done.

I know, this is harsh, and is probably going to result in me being
flamed off the list, since I'm not a regular contributor or a regular
user of speakup, but while I'm here, I thought I'd say it. The same
thing applies to Kirk, and whoever else writes speakup. They'll write
what they need first, and afterwards what other people want if they
have the time and feel it's worth it. To get better service than that,
you've either got to get involved in a project that more closely
mirrors what you need in a program, learn to program yourself and
write it yourself, or live with the decisions that the programmers
make. That's just the way it is in the open source world, I'm
afraid. As someone who doesn't know, and probably never will know, C
or C++, I'm in the same position as most of you. We can make suggestions,
we can make bug reports, and we can help new users with what we know
and they don't yet, to pay for what they give us, but that's about
it. 
As it is, at least with speakup or emacspeak or something like that,
we can talk to the developers. It isn't going to cost us thousands of
dollars for access to what software's available and what we get might
more closely resemble what we want, rather than what primarily sighted
developers think we want. 

And finally, just to end this rant and reply to another thread:
hardware vs. software synths. Again, software speech is something we're
all just going to have to live with. I prefer hardware speech myself,
but I'm not using it much right now. I'm moving around all the time,
often have limited space for things, and I just don't want to fuss
with the cables and junk that the hardware synth brings. Not only
that, the amount a hardware synth costs can put it out of reach for a
lot of people. Not to mention, using a laptop with a hardware speech
synth can be a *major* pain in the neck, as a lot of you can
testify. It's not something to get into a religious war about. When
Tuxtalk is eventually written, those people who don't have a hardware
synth, for whatever reason, will just have to live with the fact that
they won't be able to see the early bootup messages. 95% of the time,
that doesn't matter at all. And in my case, I'll probably end up using
both, depending on which is more practical, so if I really get into
trouble, I can plug the hardware synth in and figure out why it is
that my Linux kernel has suddenly decided not to talk to me. But you
aren't going to lose what speakup can currently give you. If you're
still using hardware for speech, you'll still get the same output as
speakup has always given you, and people who can't or don't want to
use a hardware synth will have access to the Linux console, at least,
which'll probably bring more blind people into Linux, which is a good
thing by anyone's standards.

Now I'm out of here before I rant any more, and going to duck into my
flameproof bunker for a while.



