Robotic Materials’ autonomous hand fuses 3D perception and tactile sensing to create a high-resolution representation of its environment. Integrated, GPU-enabled perception algorithms provide advanced 3D reasoning and object recognition capabilities that are available to high-level applications like pick-and-place, bin picking, and assembly. By co-designing the mechanism, sensors, and algorithms, we are able to achieve unprecedented performance in autonomous manipulation tasks.
Example programs in RM Studio ship with 3D models that users can 3D print and experiment with; pictured here is an example from the Siemens Learning Challenge.
The Smart Hand is a self-contained embedded Linux system. It can be programmed from any computer with a standard web browser via RM Studio. RM Studio is based on Jupyter Lab and includes a graphical programming environment, a Python editor, and full terminal access. The graphical programming environment is fully integrated with the host robot, allowing users to record robot poses and step through a program.
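The record-and-replay workflow can be sketched in plain Python. Note that the actual RM Studio robot interface is not reproduced here: the `Robot` class below is a hypothetical stand-in used only to illustrate the pattern of recording named poses and stepping through them.

```python
# Illustrative sketch only: `Robot` is a hypothetical stand-in for the
# real RM Studio host-robot interface, which is not reproduced here.
from dataclasses import dataclass, field


@dataclass
class Robot:
    """Hypothetical robot driver: holds a current pose and recorded poses."""
    pose: tuple = (0.0, 0.0, 0.0, 0.0, 0.0, 0.0)
    recorded: dict = field(default_factory=dict)

    def record_pose(self, name):
        # In RM Studio this would capture the robot's current pose
        # (e.g. after the user jogs the arm by hand).
        self.recorded[name] = self.pose

    def move_to(self, name):
        # Stepping through a program replays recorded poses one at a time.
        self.pose = self.recorded[name]
        return self.pose


robot = Robot()
robot.pose = (0.3, 0.0, 0.2, 0.0, 3.14, 0.0)   # jog to approach pose
robot.record_pose("pre_grasp")
robot.pose = (0.3, 0.0, 0.05, 0.0, 3.14, 0.0)  # jog down to the object
robot.record_pose("grasp")

# Step through the recorded program pose by pose.
for name in ("pre_grasp", "grasp"):
    print(name, robot.move_to(name))
```

The same record/step structure is what the graphical environment exposes without requiring the user to write this code by hand.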
Our patent-pending tactile sensing technology allows the robot to see where conventional cameras cannot, for example right before the grasp or inside a bin. High-precision encoders in the hand allow us to register tactile information with depth images gathered as the robot approaches an object.
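Conceptually, this registration expresses a contact point measured in the fingertip frame in the camera frame, using the kinematic chain the encoders provide. The sketch below illustrates the idea with homogeneous transforms; all transform values are made-up examples, not calibration data from the real hand.

```python
# Hedged sketch of tactile-to-depth registration via homogeneous transforms.
# All transforms and the contact point are illustrative values, not real
# calibration or encoder data from the Smart Hand.
import numpy as np


def transform(angle_deg_z, t):
    """4x4 homogeneous transform: rotation about z by angle_deg_z, translation t."""
    th = np.radians(angle_deg_z)
    T = np.eye(4)
    T[:3, :3] = [[np.cos(th), -np.sin(th), 0.0],
                 [np.sin(th),  np.cos(th), 0.0],
                 [0.0,         0.0,        1.0]]
    T[:3, 3] = t
    return T


# Encoder reading gives the fingertip pose in the hand frame (hypothetical).
T_hand_finger = transform(30.0, [0.02, 0.0, 0.08])
# Fixed camera mount on the hand (hypothetical calibration).
T_hand_camera = transform(0.0, [0.0, 0.05, 0.0])

# Contact sensed 5 mm in front of the fingertip, in the fingertip frame.
p_finger = np.array([0.0, 0.0, 0.005, 1.0])

# Same point in the camera frame: camera <- hand <- fingertip.
p_camera = np.linalg.inv(T_hand_camera) @ T_hand_finger @ p_finger
print(np.round(p_camera[:3], 4))
```

With the contact point in the camera frame, it can be fused directly with the depth images captured on approach.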
Our object recognition and 3D perception combine hand-coded algorithms with self-supervised deep learning that lets the robot adapt to a user's environment. In addition, users have access to a wide variety of perception and machine learning tools including OpenCV, Open3D, scikit-learn, TensorFlow, and Keras, allowing them to create their own applications quickly and integrate their perception chain with standard cloud services.
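As a minimal example of the kind of pipeline these tools enable, the sketch below uses scikit-learn's DBSCAN to segment a point cloud into object candidates for pick-and-place. The point data is synthetic for illustration; a real program would use depth images from the hand's own camera.

```python
# Minimal perception-pipeline sketch using scikit-learn: segment a point
# cloud into object candidates. The points are synthetic; a real program
# would build them from the hand's depth images.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# Two "objects" on a table, 10 cm apart, plus a few scattered noise points.
obj_a = rng.normal(loc=[0.00, 0.00, 0.05], scale=0.005, size=(200, 3))
obj_b = rng.normal(loc=[0.10, 0.00, 0.05], scale=0.005, size=(200, 3))
noise = rng.uniform(low=-0.2, high=0.2, size=(10, 3))
points = np.vstack([obj_a, obj_b, noise])

# eps ~ 2 cm groups points belonging to one object; label -1 marks noise.
labels = DBSCAN(eps=0.02, min_samples=10).fit_predict(points)
n_objects = len(set(labels) - {-1})
print(f"found {n_objects} object candidates")

# A simple grasp target: the centroid of each segmented object.
for k in range(n_objects):
    centroid = points[labels == k].mean(axis=0)
    print(f"object {k}: centroid {np.round(centroid, 3)}")
```

The same structure extends naturally to OpenCV preprocessing of the color image or to a TensorFlow/Keras classifier on each segmented object.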