Robotic Materials has developed a series of primitives for peg-in-hole and peg-on-hole assembly that locate both parts and insertion points using built-in 3D perception. Unlike 2D vision systems, Robotic Materials’ SmartHand uses three-dimensional, volumetric data, eliminating perspective error when calculating grasp points and enabling, out of the box, center grasps aligned with an object’s principal axis.
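The principal-axis alignment mentioned above can be illustrated with a standard technique: principal component analysis of the grasped object’s point cloud. The sketch below is not Robotic Materials’ implementation, only a minimal NumPy example of the underlying idea, using an invented synthetic “peg” point cloud.

```python
import numpy as np

def principal_axis_grasp(points):
    """Return a grasp center and axis for a 3D point cloud.

    The center is the centroid; the axis is the first principal
    component (direction of largest variance), along which a
    parallel-jaw center grasp can be aligned.
    """
    pts = np.asarray(points, dtype=float)
    center = pts.mean(axis=0)
    # PCA via eigen-decomposition of the 3x3 covariance matrix.
    cov = np.cov((pts - center).T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    axis = eigvecs[:, np.argmax(eigvals)]  # unit-length principal axis
    return center, axis

# Synthetic elongated "peg": points scattered mostly along the x-axis.
rng = np.random.default_rng(0)
peg = rng.normal(scale=[10.0, 0.5, 0.5], size=(500, 3))
center, axis = principal_axis_grasp(peg)
```

Because the synthetic peg is elongated along x, the recovered axis is (up to sign) close to the x unit vector; a real system would run this on the segmented, volumetric data from the hand’s depth sensor.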
*Parts from the “Siemens Learning Challenge” (left) and after segmentation (right).*
All behaviors are force controlled, letting users specify maximum insertion forces. This allows assembly of standard industrial parts including shafts, pulleys, screws, nuts, and even rubber bands. Assembly primitives can be built from either spiraling or tilting motions with configurable thresholds. The video below shows a task board developed in collaboration between the National Institute of Standards and Technology (NIST) and a Japanese industrial/research consortium.
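A spiraling insertion primitive can be sketched as follows. This is not the RM library’s API, just a self-contained illustration of the idea: generate an outward spiral of search offsets around the nominal insertion point, and probe each offset under a force limit until the peg drops in. The `try_insert` callable and the simulated hole below are invented stand-ins for the force-controlled probe.

```python
import math

def spiral_waypoints(pitch=0.4, step=0.25, turns=3.0):
    """Archimedean spiral of (dx, dy) offsets around the nominal hole.

    pitch: radial growth (e.g. mm) per full turn; step: approximate
    arc length between waypoints; turns: how far out to search.
    """
    pts, theta = [], 0.0
    while theta < turns * 2 * math.pi:
        r = pitch * theta / (2 * math.pi)
        pts.append((r * math.cos(theta), r * math.sin(theta)))
        # Advance by roughly constant arc length for even coverage.
        theta += step / max(r, step)
    return pts

def spiral_insert(try_insert, max_force=15.0):
    """Probe each spiral offset until the insertion succeeds.

    try_insert(dx, dy, max_force) stands in for a force-controlled
    downward probe; it returns True once the peg enters the hole.
    """
    for dx, dy in spiral_waypoints():
        if try_insert(dx, dy, max_force):
            return (dx, dy)
    return None  # hole not found within the search region

# Simulated demo: the hole is offset by (0.6, 0.3) from the nominal
# pose, with a 0.3 capture tolerance (all values invented).
hole = (0.6, 0.3)
found = spiral_insert(
    lambda dx, dy, f: math.hypot(dx - hole[0], dy - hole[1]) < 0.3
)
```

A tilting primitive would follow the same pattern, sweeping wrist angles instead of lateral offsets; in both cases the force threshold bounds how hard the robot is allowed to push while searching.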
Assembly algorithms can be implemented using either a powerful Python library in Jupyter Lab / RM Studio or a graphical programming interface based on Google’s Blockly.
RM Studio’s example libraries also include examples of how machine learning, specifically recurrent convolutional neural networks, can be used for automatic classification of time-series data collected during manipulation.
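While RM Studio’s examples use recurrent convolutional networks, the core idea can be shown with a toy NumPy sketch: convolutional kernels extract features from a force trace, and a classifier reads them out. The kernels, weights, and force traces below are all invented for illustration; a real pipeline would learn them from recorded manipulation data.

```python
import numpy as np

def conv1d_features(signal, kernels):
    """Max response of each 1D convolution kernel over the signal --
    the feature-extraction step a convolutional net applies to a
    force/torque trace."""
    return np.array(
        [np.convolve(signal, k, mode="valid").max() for k in kernels]
    )

def classify(signal, kernels, weights, bias):
    """Toy linear read-out on the conv features: 1 = insertion
    succeeded, 0 = peg jammed. Stand-in for the learned network."""
    return int(conv1d_features(signal, kernels) @ weights + bias > 0)

# np.convolve flips the kernel, so this kernel fires on a force *drop*
# (high force followed by low), the signature of a successful insertion.
kernels = [np.r_[-np.ones(5), np.ones(5)]]
weights, bias = np.array([1.0]), -10.0

# Invented force traces: a success shows a sharp drop, a jam stays high.
success = np.r_[np.full(50, 5.0), np.full(50, 0.5)]
jam = np.full(100, 5.0)
```

Here `classify(success, ...)` returns 1 and `classify(jam, ...)` returns 0; the recurrent layers in the full network additionally capture how such features evolve over the course of a manipulation.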