In collaboration with Intel, Microsoft, and IBM, University College London has developed MotionInput software for fully touchless computing. The solution uses a combination of machine learning and computer vision to analyze and convert interactions like hand gestures, facial expressions, and speech into mouse, keyboard, and joystick signals.
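The core idea of converting a tracked body point into a pointer signal can be sketched in a few lines. The snippet below is an illustrative assumption, not MotionInput's actual code: it maps a normalized fingertip position, as a vision model might report it, to screen pixel coordinates, with exponential smoothing (a common choice for steadying a gesture-driven cursor).

```python
from dataclasses import dataclass


@dataclass
class CursorMapper:
    """Maps normalized landmark coordinates (0..1) to screen pixels.

    Hypothetical helper for illustration; the smoothing factor
    `alpha` is an assumed value, not taken from MotionInput.
    """
    screen_w: int
    screen_h: int
    alpha: float = 0.3   # smoothing weight for the newest sample
    _x: float = 0.0
    _y: float = 0.0

    def update(self, norm_x: float, norm_y: float) -> tuple[int, int]:
        # Exponential moving average damps frame-to-frame jitter
        # from the hand-tracking model before moving the cursor.
        self._x = self.alpha * norm_x + (1 - self.alpha) * self._x
        self._y = self.alpha * norm_y + (1 - self.alpha) * self._y
        return (round(self._x * self.screen_w),
                round(self._y * self.screen_h))
```

In a complete pipeline, the pixel coordinates returned each frame would be handed to an OS-level input API to emit the actual mouse events; keyboard and joystick signals follow the same pattern with different gesture-to-event mappings.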