Researching the most powerful personalised interface
A company called Smart Box (https://thinksmartbox.com/), who make what seems to be one of the best interfaces for Assistive Technology, called Grid 3, has offered to work with me to research better and faster ways of interacting with a computer and the environment. They’ve initially lent me a high-end eye-tracking device so I can begin experimenting with different techniques of data entry and environmental control. We plan to develop a special interface that will also allow me to rapidly select from a very large bank of pre-prepared actions (such as full phrases or complex automated actions).
I am exploring how I can pre-record the sorts of ‘infill’ phrases I tend to use in relaxed conversation, so that they can at least partially fill the silence (running as a background subroutine once I kick one or more of them off) while I frantically compose the substantive sentences I want to say as soon as those pre-recorded phrases end, using a combination of core-based systems and phrase-predictive input. I can imagine that, with enough practice, I could get quite proficient at keeping a stream of words emerging, even though in reality it would be a bit of an illusion: most of the time I would be focusing on what I was preparing to say rather than on what my synthesiser was currently saying. For what it’s worth, I suspect that’s not so different from the way most casual conversations run anyway, with about half of the words amounting to ‘talking on autopilot’.
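For the technically curious, the idea of infill phrases running as a background subroutine while the real sentence is composed can be sketched in a few lines. This is purely a hypothetical illustration, not anything Smart Box has built: `FillerPlayer`, `speak`, and the phrases are all invented names, and a real system would hand each phrase to a speech synthesiser rather than a simple callback.

```python
import queue
import threading

class FillerPlayer:
    """Hypothetical sketch: speaks queued 'infill' phrases in a
    background thread while the user composes the substantive
    sentence in the foreground."""

    def __init__(self, speak):
        self.speak = speak              # callback, e.g. a TTS engine
        self.fillers = queue.Queue()
        self.worker = threading.Thread(target=self._run, daemon=True)
        self.worker.start()

    def queue_filler(self, phrase):
        """Kick off a pre-recorded filler; returns immediately."""
        self.fillers.put(phrase)

    def _run(self):
        while True:
            phrase = self.fillers.get()
            if phrase is None:          # sentinel: no more fillers
                break
            self.speak(phrase)          # plays while composing continues

    def finish(self, sentence):
        """Once composing is done: let queued fillers complete,
        then speak the freshly composed sentence."""
        self.fillers.put(None)
        self.worker.join()
        self.speak(sentence)

# Demo: collect 'spoken' output in a list instead of a synthesiser.
spoken = []
player = FillerPlayer(spoken.append)
player.queue_filler("Well, it's funny you should say that...")
player.queue_filler("Hmm, let me think.")
# ...meanwhile the substantive sentence is being composed...
player.finish("Here is what I actually wanted to say.")
print(spoken)
```

The design point is simply that queueing a filler returns immediately, so the interface stays free for composition while the audio plays, and the composed sentence is only spoken once the queued fillers have finished.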