Tap, swipe, or move: attentional demands for distracted smartphone input
Matei Negulescu, Jaime Ruiz, Yang Li, and Edward Lank
Smartphones are frequently used in environments where the user is distracted by another task, for example by walking or by driving. While the typical interface for smartphones involves hardware and software buttons and surface gestures, researchers have recently posited that, for distracted environments, benefits may exist in using motion gestures to execute commands. In this paper, we examine the relative cognitive demands of motion gestures and surface taps and gestures in two specific distracted scenarios: a walking scenario, and an eyes-free seated scenario. We show, first, that there is no significant difference in reaction time for motion gestures, taps, or surface gestures on smartphones. We further show that motion gestures result in significantly less time looking at the smartphone during walking than does tapping on the screen, even with interfaces optimized for eyes-free input. Taken together, these results show that, despite somewhat lower throughput, there may be benefits to making use of motion gestures as a modality for distracted input on smartphones.
Citation
Matei Negulescu, Jaime Ruiz, Yang Li, and Edward Lank. 2012. Tap, swipe, or move: attentional demands for distracted smartphone input. In Proceedings of the International Working Conference on Advanced Visual Interfaces (AVI ’12). Association for Computing Machinery, New York, NY, USA, 173–180. https://doi.org/10.1145/2254556.2254589
Bibtex
@inproceedings{10.1145/2254556.2254589,
author = {Negulescu, Matei and Ruiz, Jaime and Li, Yang and Lank, Edward},
title = {Tap, Swipe, or Move: Attentional Demands for Distracted Smartphone Input},
year = {2012},
isbn = {9781450312875},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/2254556.2254589},
doi = {10.1145/2254556.2254589},
booktitle = {Proceedings of the International Working Conference on Advanced Visual Interfaces},
pages = {173–180},
numpages = {8},
keywords = {smartphones, motion gestures, eyes-free interaction},
location = {Capri Island, Italy},
series = {AVI '12}
}