My current research broadly focuses on the interaction between the environment and response behavior. This primarily involves understanding how goal-oriented actions are influenced by their relationship to features of the environment, a phenomenon known as stimulus-response (S-R) compatibility. Simply put, performance is better when features of the environment match responses than when they mismatch, even when those features are irrelevant to the task. Much of my research centers on this phenomenon.

Representing spatial information (and using it)

There are many ways to present spatial information to people – examples include actual physical locations, spatial words, and symbols such as arrows. Much of my current work focuses on whether these forms of spatial information lead to similar or different mental representations of space. Additionally, if there are different types of spatial representations, are some better suited than others to certain situations? Future research will further examine which types of spatial information work best (i.e., lead to the fastest and most accurate performance) in different types of environments. For example, in a cluttered environment, such as a highway with many signs, is it more useful to indicate a detour to the left with arrows, words, or spatial locations?

“Feature attraction” between controlled objects and targets

I am currently examining a phenomenon I refer to as “feature attraction,” in which actions performed with objects that share features with targets in the world are drawn toward those targets. For example, holding a green hammer may predispose responses toward green nails rather than red nails. I am now extending this work to real-world contexts, such as dart throwing. In the long term, a better understanding of these relationships has strong applications for the design of tools and other devices used to control objects in both real-world and virtual environments.

The costs and benefits of eye gaze control

Because commercial eye-gaze control is only just catching on in consumer electronics, very little is known about how using eye gaze as an input device (e.g., controlling a cursor on a computer display with eye movements) affects our ability to perform other concurrent tasks. For example, how will selecting an icon on a smartphone with eye gaze affect the ability of drivers to pay attention to the road, or of pedestrians to notice obstacles in their path? In many ways, eye-gaze control is a novel way to control stimuli in our environment – traditionally, eye movements guide the input of visual information rather than the output of response selection. In my lab, we have just begun research into the effects of eye-gaze control on attention to computer displays.