About MEI Lab
The Multimodal and Embodied Interaction (MEI) Laboratory is a Human-Computer Interaction research group in the School of Creative Media, City University of Hong Kong.
MEI, written 美 in Chinese, means beautiful and elegant. At the MEI Lab, we aim to create, innovate, and enhance the beauty of human-computer interaction through multimodal and embodied interfaces.
While computer interfaces today are mainly visual and auditory, at MEI we envision a “touchable” future for our society. Our research interests include tangible user interfaces, wearable user interfaces, mobile user interfaces, and virtual and augmented reality, as well as the application of these interfaces and technologies in education, entertainment, accessibility, and beyond.
Please refer to our projects and publications page for more details.
We received RMB 610,000 from the National Natural Science Foundation of China General Programme (國家自然科學基金面上項目) to work on “Data-driven Thermotactile Signal Synthesis and Rendering for Virtual Material Simulation”.
New PhD Student
Xingyu Yang will join us as a PhD student. He received his master's degree from Eindhoven University of Technology (TU/e) and his bachelor's degree from Dalian University of Technology.
Our paper "ThermEarhook: Investigating Spatial Thermal Haptic Feedback on the Auricular Skin Area" is accepted by The 23rd ACM International Conference on Multimodal Interaction (ICMI 2021). Congrats Arshad and Kexin!
Our paper "FritzBot: A Data-Driven Conversational Agent for Physical-Computing System Design" is accepted by International Journal of Human-Computer Studies. Congrats Taizhou and Lantian!
Our paper "Visual-Tactile Cross-Modal Data Generation using Residue-Fusion GAN with Feature-Matching and Perceptual Losses" is accepted by IEEE Robotics and Automation Letters and IEEE IROS 2021. Congrats to Shaoyu, Prof Narumi and Prof Ban from University of Tokyo.
More previous news can be found here.