The majority of artificial intelligence (AI) research to date has focused on vision. Thanks to machine learning, and in particular deep learning, we now have robots and devices with a good visual understanding of their environment. But sight is just one of the human senses. To build algorithms that better mimic human intelligence, researchers are now turning to datasets that draw on sensorimotor systems and tactile feedback. With this extra sense to draw on, future robots and AI devices will have an even greater awareness of their physical surroundings, opening up new use cases and possibilities.
Haptically-trained artificial intelligence systems
Jason Toy, AI enthusiast, technologist and founder of deep learning and natural language processing specialist Somatic, recently set up a project focused on training AI systems to interact with the environment based on haptic input. Called SenseNet: 3D Objects Database and Tactile Simulator, the project focuses on expanding robots’ mapping of their surroundings beyond the visual to include contours, textures, shapes, hardness, and object recognition by touch.
Toy’s initial aim for the project is to create a wave of AI research into sensorimotor systems and tactile feedback. Beyond that, he imagines haptically-trained robots could eventually be used to develop robotic hands for use in factories and distribution centres to perform bin packing, parts retrieval, order fulfilment, and sorting. Other possible applications include robotic hands for food preparation, household tasks, and assembling components.
Robotics and deep reinforcement learning
The SenseNet project relies on deep reinforcement learning (RL), a branch of machine learning in which an agent learns through trial and error: it interacts with its environment, receives rewards based on the outcomes of its actions, and iteratively adjusts its behaviour to improve those rewards.
Many believe that RL offers a pathway to developing autonomous robots that could master certain independent behaviours with minimal human intervention. For example, initial evaluations of deep RL techniques indicate that it is possible to use simulation to develop dexterous 3D manipulation skills without having to manually create representations.
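To make the reward-driven loop concrete, here is a minimal sketch of tabular Q-learning on a hypothetical toy task (not drawn from the SenseNet code base): an agent on a short 1-D track learns, from the reward signal alone, to move toward an object.

```python
import random

# Hypothetical toy task: an agent at positions 0..4 must reach the object at
# position 4. Reward is +1 on contact, 0 otherwise; that reward is the only
# supervision the agent receives.

N_STATES, ACTIONS = 5, (-1, +1)           # move left or move right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1     # learning rate, discount, exploration

random.seed(0)
for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy action selection: mostly exploit, sometimes explore;
        # ties between equal Q-values are broken at random.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: (Q[(s, x)], random.random()))
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # Q-learning update: nudge the estimate toward the reward plus the
        # discounted value of the best follow-up action.
        best_next = max(Q[(s2, a2)] for a2 in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# After training, the greedy policy should consistently step toward the object.
policy = [max(ACTIONS, key=lambda x: Q[(s, x)]) for s in range(N_STATES - 1)]
print(policy)
```

Deep RL replaces the lookup table with a neural network so the same update idea scales to high-dimensional inputs such as tactile sensor readings.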
Using the SenseNet dataset
SenseNet and its supporting resources are designed to overcome many of the common challenges researchers face when embarking on touch-based AI projects. An open source dataset of shapes, most of which can be 3D printed, together with a touch simulator, allows AI researchers to accelerate project work. Figure 1 shows examples of some of the shapes included in the SenseNet dataset.
The SenseNet repository on GitHub* provides numerous resources beyond the 3D object dataset, including training examples, classification tests, benchmarks, Python* code samples, and more.
The dataset is made even more useful through the addition of a simulator that lets researchers load and manipulate the objects. Toy explains: “We have built a layer upon the Bullet physics engine. Bullet is a widely used physics engine in games, movies, and—most recently—robotics and machine learning research. It is a real-time physics engine that simulates soft and rigid bodies, collision detection, and gravity. We include a robotic hand called the MPL that allows for a full range of motion in the fingers and we have embedded a touch sensor on the tip of the index finger that allows the hand to simulate touch.” Figure 2 shows some of the supported hand gestures that are available using MPL.
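To illustrate the kind of signal such a fingertip sensor reports, here is a hypothetical pure-Python stand-in. The real simulator uses Bullet’s collision detection; none of the names below come from the SenseNet code base.

```python
import math

# Hypothetical stand-in for a fingertip touch sensor: it reports contact when
# the fingertip is within a small threshold of (or inside) an object's surface.

class TouchSensor:
    """Binary touch sensor for a spherical test object."""

    def __init__(self, contact_threshold=0.001):
        self.contact_threshold = contact_threshold  # metres

    def read(self, fingertip_pos, sphere_center, sphere_radius):
        # Signed distance from the fingertip to the sphere's surface;
        # negative values mean the fingertip has penetrated the object.
        dist = math.dist(fingertip_pos, sphere_center) - sphere_radius
        return dist <= self.contact_threshold

sensor = TouchSensor()
obj_center, obj_radius = (0.0, 0.0, 0.0), 0.05

print(sensor.read((0.0, 0.0, 0.10), obj_center, obj_radius))  # hovering above: False
print(sensor.read((0.0, 0.0, 0.05), obj_center, obj_radius))  # touching surface: True
```

In the actual simulator the physics engine supplies the contact events, but the agent-facing idea is the same: each simulation step yields a touch reading the learning algorithm can condition on.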
To accelerate the training and testing of many reinforcement learning algorithms, Toy used Intel’s Reinforcement Learning Coach, a machine learning test framework. Functioning within a Python* environment, the Reinforcement Learning Coach lets developers model the interaction between an agent and the environment, as shown in Figure 3.
By combining various building blocks and providing visualisation tools to dynamically display training and test results, the Reinforcement Learning Coach makes the training process more efficient, as well as supporting testing of the agent on multiple environments. The advanced visualisation tools, based on data collected during the training sequences, can be readily accessed through the Coach dashboard and used to debug and optimise the agent being trained.
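The agent-environment interaction that such frameworks model can be sketched in a few lines. The class and method names below are generic illustrations in the common gym style, not the Reinforcement Learning Coach’s actual API.

```python
import random

# Generic agent-environment loop: the environment exposes reset() and step(),
# the agent exposes act(). An RL framework wires these together and adds
# training, logging, and visualisation around the loop.

class GuessEnvironment:
    """Toy environment: the agent is rewarded for guessing a hidden number."""

    def __init__(self, target=3, max_steps=10):
        self.target, self.max_steps = target, max_steps

    def reset(self):
        self.steps = 0
        return 0  # initial observation

    def step(self, action):
        self.steps += 1
        reward = 1.0 if action == self.target else 0.0
        done = reward > 0 or self.steps >= self.max_steps
        return action, reward, done  # observation, reward, episode-finished flag

class RandomAgent:
    """Baseline agent that acts at random; a learning agent would update itself
    from the reward after each step."""

    def act(self, observation):
        return random.randint(0, 5)

random.seed(1)
env, agent = GuessEnvironment(), RandomAgent()
obs, done, total_reward = env.reset(), False, 0.0
while not done:
    action = agent.act(obs)               # agent chooses an action
    obs, reward, done = env.step(action)  # environment responds
    total_reward += reward                # training would consume this signal
print(total_reward)
```

Swapping the toy environment for the SenseNet simulator, and the random agent for a deep RL agent, is exactly the substitution this kind of framework is designed to make easy.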
Opportunities for developers
In terms of opportunities for other developers, Toy says: “Don’t be afraid to go against the norm. Most of the deep learning craze is centred around convolutional neural networks (CNNs) and computer vision, since that is where the most gains have occurred.” Less-explored areas can also yield insights, and sometimes breakthroughs, making these less popular avenues promising directions to pursue.
Finally, Toy says: “Don’t just study artificial intelligence from the point of view of mathematics and computer science. Look at other fields such as computational neuroscience and cognitive science.”