PickSense: Deep-Learning-based Unobtrusive Handedness Prediction for One-handed Smartphone Interaction
Handedness (i.e., the side of the hand that holds and operates the phone) is important contextual information for optimising one-handed smartphone interaction. In this paper, we present PickSense, a deep-learning-based technique for unobtrusive handedness prediction in one-handed smartphone interaction. PickSense is built upon a multilayer LSTM (Long Short-Term Memory) neural network and processes the phone's built-in motion-sensor data in real time. Compared to existing approaches, PickSense eliminates the need for extra user actions (e.g., on-screen tapping and swiping) and predicts handedness from the pick-up motion and holding posture before the user performs any operation on the screen. PickSense predicts handedness while a user is sitting, standing, or walking with accuracies of 97.4%, 94.6%, and 92.4%, respectively. We also show that PickSense is robust to vehicle-induced motion noise, achieving an average accuracy of 94.6% when users are on transportation (e.g., bus, train, and scooter). Furthermore, PickSense classifies users' real-life single-handed smartphone usage into left- and right-handed with an average accuracy of 89.2%.
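To make the pipeline described above concrete, the following is a minimal NumPy sketch of the kind of model the abstract describes: a stacked (two-layer) LSTM that consumes a 6-axis motion-sensor sequence and emits a left/right decision from the final hidden state. All dimensions, weights, and the logistic read-out are illustrative assumptions; they are not the paper's actual architecture or trained parameters.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_layer(x, hidden, seed):
    """Run one LSTM layer over a sequence x of shape (T, input_dim).
    Weights are randomly initialised here purely for illustration;
    a real system would load trained parameters."""
    T, d = x.shape
    rng = np.random.default_rng(seed)
    # Stacked weights for the input, forget, cell, and output gates.
    W = rng.normal(0.0, 0.1, (4 * hidden, d + hidden))
    b = np.zeros(4 * hidden)
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    outputs = []
    for t in range(T):
        z = W @ np.concatenate([x[t], h]) + b
        i, f, g, o = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        g = np.tanh(g)
        c = f * c + i * g          # update cell state
        h = o * np.tanh(c)         # emit hidden state
        outputs.append(h)
    return np.stack(outputs)

def predict_handedness(imu_seq, hidden=32):
    """Two stacked LSTM layers, then a logistic read-out on the
    final hidden state -> P(right-handed). Hypothetical head."""
    h1 = lstm_layer(imu_seq, hidden, seed=0)
    h2 = lstm_layer(h1, hidden, seed=1)
    w_out = np.random.default_rng(2).normal(0.0, 0.1, hidden)
    p_right = sigmoid(w_out @ h2[-1])
    return ("right" if p_right >= 0.5 else "left"), float(p_right)

# 150 timesteps of 6-axis IMU data (3-axis accelerometer + 3-axis
# gyroscope), e.g. ~1.5 s at 100 Hz -- dummy values standing in for
# a real pick-up motion window.
seq = np.random.default_rng(3).normal(0.0, 1.0, (150, 6))
label, prob = predict_handedness(seq)
```

In a deployed version of this idea, the window would start when a pick-up motion is detected and the prediction would be available before the first on-screen touch, which is the "unobtrusive" property the abstract claims.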