User Interfaces for People with Disabilities

Roughly ten percent of the population has a disability that affects their use of technology. That is too large a group to ignore, and as medical advances help people live longer, the number will only grow. This research concentrates on making technology more accessible across a range of disabilities at once.
For more research conducted in parallel with design work, please see my portfolio.

Deliverables

Software Donated

  • Firefox plug-in donated to Senior Centers, Cerebral Palsy Clinics, and Low Vision Groups in New York

My Contributions

  • Front-end development
  • User testing with low-vision, cerebral palsy, and older adult groups

Accessibility Works

User benefiting from AccessibilityWorks

So much of daily life now runs through the Internet that people who cannot use it effectively are left out. This software lets people with disabilities and older adults change how a web page is rendered in the browser so it is easier to read and interact with. The goal was an easy-to-understand user interface that required no training. Settings ranged from having the page read aloud, to adjusting letter, word, and line spacing, to filtering tremor out of mouse movements. Originally a separate plug-in had been created for each disability; the aim was to combine all of the settings so the software could be tested with older adults, who tend to exhibit mild traits of a whole range of disabilities as they age.

The Accessibility Research Group had created AccessibilityWorks, a Firefox plug-in that reprocesses web pages based on the user's disabilities. Uptake was limited, however, because of user-interface problems. I conducted user surveys with older adults and with users who were blind or had low vision, cognitive disabilities, or cerebral palsy. Users with cognitive disabilities struggled with settings laid out in a continuous ribbon, and low-vision users could not make the settings panels large enough. Based on this I redesigned the interface to be full-screen and driven by a tree-hierarchy menu, and I designed vector icons to replace all the graphics so that pictures would scale along with the text. After building the software I ran user tests to measure ease of use: users completed tasks faster and were more satisfied with the look of the interface. One of the hardest parts of the project was making it configurable for all of these groups without the interface feeling condescending or dumbed-down to someone with a less severe disability. Because it is a Firefox plug-in, the interface was built with XML and JavaScript that communicated with the C code in the plug-in. All graphics were produced in Adobe Illustrator.
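To give a sense of one of these settings, tremor in mouse movement can be reduced by low-pass smoothing of the pointer coordinates. The sketch below is purely illustrative and written in Java for readability; the class name, parameter, and filtering approach are my own stand-ins, not the plug-in's actual JavaScript code.

```java
// Hypothetical sketch: smoothing pointer coordinates to damp tremor.
// The real plug-in logic was JavaScript inside Firefox; this Java version
// only illustrates the filtering idea, not the project's actual code.
public class TremorFilter {
    private final double alpha;   // 0 < alpha <= 1; smaller = stronger smoothing
    private double x, y;
    private boolean started = false;

    public TremorFilter(double alpha) {
        this.alpha = alpha;
    }

    /** Feed a raw pointer sample; returns the smoothed position. */
    public double[] filter(double rawX, double rawY) {
        if (!started) {             // first sample initializes the filter
            x = rawX;
            y = rawY;
            started = true;
        } else {                    // exponential moving average of each axis
            x += alpha * (rawX - x);
            y += alpha * (rawY - y);
        }
        return new double[] { x, y };
    }
}
```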

Evaluation of a haptic tongue device

Camera grid capturing pixels and a picture of the haptic display

If done correctly, a camera hooked to a grid of electrical stimulators can use the sense of touch (haptics) to substitute for vision. Earlier research had shown that haptic input can recruit the visual cortex, letting the brain turn spatial touch data into useful visual-like information. The problem is that the resulting detail is very low, since touch receptors are spaced far apart compared with the receptors in the eye. The highest concentration of touch receptors is on the tongue, yet previous testing had only succeeded with a 16-by-16 pixel grid.
The goal of my thesis was to increase the effective pixel resolution by flashing grid points at higher frequencies, in the same way a cochlear implant creates intermediate frequencies by alternately activating two adjacent electrodes. With enough training, participants could distinguish large objects, but the project largely failed to achieve the desired effect because of technical limitations.

Technical details: a PIC18F4550 microcontroller translated the visual information from the camera into electrical impulses; C# was used to build the user interface for fine-tuning research trials; and I hand-built the tongue pixel array, using multiple multiplexers to address all 256 points.
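For a rough picture of how 256 points can be scanned through multiplexers, and how alternating two neighboring electrodes can produce a sensation between them, here is a minimal illustrative sketch. It is written in Java for readability rather than as the PIC's C firmware, and every name and value in it is a stand-in rather than the device's actual code.

```java
// Hypothetical illustration of driving a 16x16 (256-point) multiplexed array.
// The real device used firmware on a PIC18F4550; this sketch only shows the
// row/column addressing and the alternating-activation idea used to create
// "virtual" pixels between two adjacent electrodes.
public class TongueArraySketch {
    static final int SIZE = 16;

    /** Stand-in for setting the multiplexer select lines and pulsing a point. */
    static void pulse(int row, int col) {
        System.out.printf("pulse row=%d col=%d%n", row, col);
    }

    /** One scan of the grid: pulse every point whose intensity passes a threshold. */
    static void scanFrame(int[][] intensity, int threshold) {
        for (int row = 0; row < SIZE; row++) {
            for (int col = 0; col < SIZE; col++) {
                if (intensity[row][col] >= threshold) {
                    pulse(row, col);
                }
            }
        }
    }

    /**
     * Alternate between two neighboring electrodes over several cycles so the
     * sensation is perceived between them, analogous to how a cochlear implant
     * steers pitch between two physical contacts.
     */
    static void virtualPixel(int row, int colA, int colB, int cycles) {
        for (int i = 0; i < cycles; i++) {
            pulse(row, i % 2 == 0 ? colA : colB);
        }
    }

    public static void main(String[] args) {
        int[][] frame = new int[SIZE][SIZE];
        frame[4][7] = 255;                // a single bright pixel for the demo
        scanFrame(frame, 128);
        virtualPixel(4, 7, 8, 6);         // "pixel" between columns 7 and 8
    }
}
```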

Deliverables

Thesis Written

  • Evaluation of a Haptic Tongue Device, Accepted June 2007

My Contributions

  • Built Hardware
  • Obtained FDA approval for testing
  • Conducted user testing
  • Wrote thesis

Deliverables

Papers Written

  • Human anterior intraparietal and ventral premotor cortices support representations of grasping with the hand or a novel tool, Stéphane Jacobs, J Cogn Neurosci 22:2594-608, 2010
  • Evidence for context sensitivity of grasp representations in human parietal and premotor cortices, Mattia Marangon, J Neurophysiol 105:2536-46, 2011
  • Handedness-dependent and independent cerebral asymmetries in the anterior intraparietal sulcus and ventral premotor cortex during grasp planning, Kimberley Martin, Neuroimage 57:502-12, 2011

My Contributions

  • Wrote infrared body tracking software
  • Built Robot
  • Wrote configurable software interface between the glove and robot

Brain Adaptation to Prosthesis Use

Different orientations of a hand and gripper tool

Humans display a remarkable capacity to use tools instead of their biological effectors. Yet, little is known about the mechanisms that support these behaviors. Here, participants learned to grasp objects, appearing in a variety of orientations, with a novel, handheld mechanical tool. Following training, psychophysical functions relating grip preferences (i.e., pronated vs. supinated) to stimulus orientations indicate a reliance on distinct, effector-specific internal representations when planning grasping actions on the basis of the tool versus the hands.

I developed a program to measure brain processing time for different grip angles, performed by hand or with the tool, using an infrared body-tracking system to record the 3D trajectory of the hand as it reached for the tool. This made it possible to study hesitation and mid-reach corrections as participants reached for the tool.
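As an illustration of how hesitation can be pulled out of tracked motion, the sketch below flags points in a reach where hand speed dips below a threshold. It is a hypothetical Java example; the sample format, threshold, and names are assumptions, not the project's actual tracking code.

```java
// Hypothetical sketch of flagging hesitations in a reach. A hesitation is
// marked wherever hand speed drops below a threshold partway through the
// movement; the data format and threshold are illustrative only.
import java.util.ArrayList;
import java.util.List;

public class ReachAnalysisSketch {
    /** One tracked sample: time in seconds and hand position in meters. */
    record Sample(double t, double x, double y, double z) {}

    /** Return indices of samples where speed falls below minSpeed (m/s). */
    static List<Integer> hesitations(List<Sample> reach, double minSpeed) {
        List<Integer> slowPoints = new ArrayList<>();
        for (int i = 1; i < reach.size(); i++) {
            Sample a = reach.get(i - 1);
            Sample b = reach.get(i);
            double dt = b.t() - a.t();
            double dist = Math.sqrt(Math.pow(b.x() - a.x(), 2)
                                  + Math.pow(b.y() - a.y(), 2)
                                  + Math.pow(b.z() - a.z(), 2));
            if (dt > 0 && dist / dt < minSpeed) {
                slowPoints.add(i);
            }
        }
        return slowPoints;
    }
}
```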

A robot arm and glove with wires running between them.

Robot Prosthetic

The next step was studying how the brain adapts to functioning with a prosthesis. A simulated prosthetic appendage (a robot hand and a control glove) was built to observe the brain while it learns a new prosthetic device. The study's goal was to determine whether the brain area used is the same one that controls the limb, or whether the area associated with tool use is activated instead.

The requirement was an interface that let the experimenters change how the glove controlled the robot on the fly. Quick access to every control was needed, since only about a second was available between MRI scans.

This was designed as an expert interface, with priority given to exposing every control rather than to ease of use. Two people ran the experiment, one on a Mac and one on a PC, so Java was used for cross-platform support, with RXTX handling the serial connections. Based on discussions with the experimenters, glove sensors could be assigned to individual robot servos or to complex movements. The ability to save all presets was added after pilot testing.
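A minimal sketch of the core idea, assigning glove sensors to robot servos on the fly, is shown below. The class, command format, and method names are hypothetical; in the real interface the output stream would come from an RXTX serial port rather than being passed in abstractly.

```java
// Hypothetical sketch of the configurable glove-to-robot mapping: names and
// the 3-byte command format are illustrative, not the project's actual
// protocol. The OutputStream stands in for a serial connection.
import java.io.IOException;
import java.io.OutputStream;
import java.util.HashMap;
import java.util.Map;

public class GloveRobotMapping {
    private final Map<Integer, Integer> sensorToServo = new HashMap<>();
    private final OutputStream serialOut;

    public GloveRobotMapping(OutputStream serialOut) {
        this.serialOut = serialOut;
    }

    /** Reassign a glove sensor to a robot servo on the fly. */
    public void assign(int sensorId, int servoId) {
        sensorToServo.put(sensorId, servoId);
    }

    /** Forward one sensor reading (0-255) to whichever servo it is mapped to. */
    public void onSensorReading(int sensorId, int value) throws IOException {
        Integer servoId = sensorToServo.get(sensorId);
        if (servoId == null) {
            return; // unmapped sensors are ignored
        }
        // Illustrative command: [servo id] [position] [terminator].
        serialOut.write(new byte[] { servoId.byteValue(), (byte) value, (byte) 0xFF });
        serialOut.flush();
    }
}
```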