Making a lifelike robot with limited processing resources

It's a simple fact of life that you NEVER have the computational resources you would like, and the more ambitious your project, the truer this becomes. A great example is making lifelike, relatable robots. Consider Sophia, the ubercreepy state-of-the-art humanoid robot. Hanson Robotics has thrown computational resources at this problem that are limited only by the current state of the art in computer chips. And for all that, they've only moved one tiny step past nightmare fuel. How can our tiny little robots, with their tiny little processors, compete with that?


The answer, as is so often the case, is to change the rules. To start, consider how you expect users to interact with your robot. In the case of the mimicArm robot arm, we've found that, especially among our younger users, the gripper is the focal point. No matter what the robot is programmed to do, users expect it to grab an object that's been violently shoved into its mouth and do something with it. Our IR Distance Sensor example uses surprisingly simple code to make the robot react to objects in front of it. There's no need for binocular vision, object recognition, or any other processor-intensive code: just a simple analog output based on the infrared reflection off of a block. With a half dozen lines of code and a 16 MHz processor, our robot suddenly takes on a personality.
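The whole trick reduces to one comparison. Here's a minimal sketch of the idea in plain C++, with the sensor read stubbed out; the threshold value and function names are illustrative, not the actual mimicArm example code.

```cpp
// Hypothetical threshold: a raw analog reading (0-1023) above this
// means enough IR is reflecting back that something is close.
const int GRAB_THRESHOLD = 400;

// Stand-in for reading the IR sensor's analog pin; on the real
// hardware this would be an analogRead() of the sensor output.
int readIrSensor(int simulatedValue) {
    return simulatedValue;
}

// The entire "personality": close the gripper when something is near.
bool shouldGrab(int irReading) {
    return irReading > GRAB_THRESHOLD;
}
```

On the robot, the main loop just calls `shouldGrab(readIrSensor(...))` each pass and commands the gripper accordingly; nothing more is needed for the robot to appear to notice you.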

mimicArm robot arm reaches for and grabs a block

Another method of making a robot appear lifelike is to record and play back human motions. Using the mimicArm robot arm's manual controller and the inputBox, it's easy to record motions onto an SD card. The motions appear organic and lifelike because they are. Once these recordings are made, the programmer can add anything they like to create interactions that go along with the pre-recorded motions.
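Record-and-playback boils down to logging timestamped servo positions and replaying them in order. The sketch below shows the shape of that data in plain C++; the struct fields and servo names are made up for illustration and aren't the mimicArm file format.

```cpp
#include <cstdint>
#include <vector>
#include <iostream>

// One captured pose of the arm, sampled while a human drives the
// manual controller. Field names are illustrative.
struct PoseSample {
    uint32_t millis;                         // time since recording started
    uint8_t base, shoulder, elbow, gripper;  // servo angles, 0-180 degrees
};

// Record: append the current pose each pass through the control loop.
// On the real robot the tape would be written to the SD card.
void record(std::vector<PoseSample>& tape, const PoseSample& pose) {
    tape.push_back(pose);
}

// Playback: step through the tape in order. On hardware, each step
// would wait until sample.millis before commanding the servos.
void playback(const std::vector<PoseSample>& tape) {
    for (const auto& s : tape) {
        std::cout << s.millis << "ms: base=" << int(s.base)
                  << " gripper=" << int(s.gripper) << "\n";
    }
}
```

Because the samples come from a human hand, the replayed motion carries all the small accelerations and hesitations that make movement read as organic.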

mimicArm robot arm dances


Another way to make lifelike robots with limited computing power is to distribute the work across devices. The robot's "brain" might not need to know what color a block is, or be able to interpret the video collected from the camera on its gripper. All the brain needs is the x, y coordinates of the block it's looking for. This can be accomplished with a specialized video-processing chip with limited output. As with the simple IR Distance Sensor, this lets a simple Arduino microcontroller do impressive and lifelike things, like following a block of a specific color.
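Once the vision chip has boiled the whole camera frame down to a single x coordinate, the microcontroller's job is just to steer toward the center. Here's a sketch of that logic; the frame width, deadband, and gain are illustrative values, not tuned mimicArm parameters.

```cpp
// The vision chip reports only the block's x position across the
// frame (assume 0-319 for a 320-pixel-wide image).
const int FRAME_CENTER_X = 160;
const int DEADBAND = 10;  // ignore tiny offsets to avoid jitter

// Returns a signed base-rotation command: negative = turn one way,
// positive = the other, zero = block already centered.
int steerToward(int blockX) {
    int error = blockX - FRAME_CENTER_X;
    if (error > -DEADBAND && error < DEADBAND) return 0;
    return error / 8;  // crude proportional gain, tuned by hand
}
```

All of the expensive pixel-crunching stays on the vision chip; the Arduino only ever sees two small numbers per frame, which is well within a 16 MHz budget.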

mimicArm robot arm follows a colored block

Additionally, our robot needs to make eye contact! One method is complicated facial-recognition algorithms. Another option? Track body heat! Using the gridEye IR sensor, we can follow the hottest thing in the room, which is usually people's faces (the torso is actually slightly warmer than the face, which isn't awkward as long as it's covered up). This is the most relatable and lifelike program we've ever introduced. It's a little more complicated than the others, but the heavy lifting is done in the Arduino library.
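The Grid-EYE style sensors report a small grid of temperatures (8x8 on the common AMG88xx parts), so "find the face" becomes "find the hottest cell." A minimal sketch of that search, assuming an 8x8 grid of Celsius readings already fetched from the library:

```cpp
const int GRID = 8;  // the Grid-EYE reports an 8x8 thermal image

// Find the (row, col) of the hottest pixel. On the robot, the arm
// would then pan/tilt toward that cell to hold "eye contact."
void hottestPixel(const float temps[GRID][GRID], int& row, int& col) {
    row = 0;
    col = 0;
    for (int r = 0; r < GRID; ++r)
        for (int c = 0; c < GRID; ++c)
            if (temps[r][c] > temps[row][col]) {
                row = r;
                col = c;
            }
}
```

Sixty-four comparisons per frame is nothing, even for an 8-bit microcontroller, yet the resulting behavior (the robot turning to look at whoever walks in) reads as uncannily attentive.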

mimicArm Robot Arm

All of these interactions have been one-way, but the addition of a simple, dedicated voice recognition module changes that! The elechouse VR module listens for trained phrases and sends a number that corresponds to a sentence stored on the module.
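Because the module does the recognition on its own chip, the robot's side is just a switch statement on the number it receives. The command numbers, phrases, and responses below are made up for illustration:

```cpp
// The voice module hands us a small integer for whichever trained
// phrase it heard; we only have to map numbers to behaviors.
const char* respond(int commandId) {
    switch (commandId) {
        case 0: return "wave";        // e.g. trained phrase "hello"
        case 1: return "grab block";  // e.g. "pick it up"
        case 2: return "rest";        // e.g. "go to sleep"
        default: return "ignore";     // unrecognized phrase or noise
    }
}
```

The speech processing that would swamp a 16 MHz Arduino never touches it; the main loop just reads one byte and reacts.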

Each of these items, on its own, represents an element of human interaction and relatability. Imagine, if you will, what happens when you combine them all! Our little 16 MHz Arduino can compete with the multi-GHz Sophia in relatability and interaction. With a little imagination and creative use of sensors, we can build a robot that people can accept and relate to.
