Technology

Robotic Materials’ autonomous hand fuses 3D perception and tactile sensing into a high-resolution representation of its environment. Integrated, GPU-enabled perception algorithms provide advanced 3D reasoning and object recognition to high-level applications such as pick-and-place, bin picking, and assembly. By co-designing the mechanism, sensors, and algorithms, we achieve unprecedented performance in autonomous manipulation tasks.

Our patent-pending tactile sensing technology allows the robot to see where conventional cameras cannot, for example right before a grasp or inside a bin. High-precision encoders in the hand let us register tactile information with depth images gathered while the robot approaches an object.
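As a rough illustration of this registration step, the sketch below expresses a tactile contact, located via an encoder reading and forward kinematics, in the same frame as a depth image captured during the approach. The frame names, transforms, and fingertip geometry are placeholder assumptions, not the hand's actual kinematics or API.

```python
"""Minimal sketch (illustrative only): register a tactile contact measured
at a fingertip into the depth camera frame used during the approach."""
import numpy as np

def transform(R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def fingertip_pose_from_encoder(joint_angle: float) -> np.ndarray:
    """Toy forward kinematics: one revolute joint followed by a fixed
    40 mm fingertip link (placeholder geometry)."""
    c, s = np.cos(joint_angle), np.sin(joint_angle)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return transform(R, R @ np.array([0.04, 0.0, 0.0]))

# Assumed calibration/localization inputs (hand base and depth camera in a
# common world frame).
T_world_hand = transform(np.eye(3), np.array([0.5, 0.0, 0.3]))
T_world_cam = transform(np.eye(3), np.array([0.4, 0.1, 0.6]))

# A contact sensed at the fingertip origin; the encoder reports the finger
# joint angle at the moment of contact.
joint_angle = np.deg2rad(25.0)
contact_in_tip = np.array([0.0, 0.0, 0.0, 1.0])  # homogeneous point

# Chain the transforms: world <- hand base <- fingertip.
T_world_tip = T_world_hand @ fingertip_pose_from_encoder(joint_angle)
contact_world = T_world_tip @ contact_in_tip

# Express the contact in the depth camera frame so it can be fused with the
# point cloud gathered while approaching the object.
contact_in_cam = np.linalg.inv(T_world_cam) @ contact_world
print("contact in camera frame (m):", contact_in_cam[:3])
```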

For object recognition and 3D perception, we combine deep learning and hand-coded algorithms with self-supervised learning that lets the robot adapt to a user’s environment. All of our applications can be reconfigured in a graphical programming environment, enabling novice programmers to build powerful applications that are operational the same day the robotic hand is unboxed.