While working at 100%FAT, I developed all of the software, electronics, and interaction design for a social robot commissioned by Concordia.
Concordia is a cultural institute that holds art exhibitions, screens films, and hosts theatre, located right in the city centre of Enschede. To attract more people and broaden its audience, Concordia wanted a technological eye-catcher in front of the museum: an interactive banner that invites people to come inside.
The concept is a robot sitting on a bench that passers-by can interact with. The robot engages potential visitors walking through the busy shopping street: people should feel invited to come closer, interact with the robot, and ultimately feel intrigued enough to visit Concordia.
Because the robot is placed outside, all electronics are waterproofed. The whole robot runs off a single 12 V, 30 Ah battery that lasts 12+ hours and can be recharged overnight.
The eyes and mouth are 18×15 (mini) WS2812B LED matrices, giving fully individually addressable LEDs that remain visible outdoors. The main control PCB, which I designed, is shown below. A Teensy 4.0 serves as the robot's main brain, with various other components driving the motors, sensors, LED matrices, and sound.
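Driving a WS2812B matrix means addressing one long LED strip, so each (x, y) pixel has to be mapped to a strip index. As a minimal sketch, assuming the common serpentine (zigzag) wiring where every other row is reversed:

```python
def xy_to_index(x, y, width=18):
    """Map a matrix coordinate to a WS2812B strip index, assuming
    serpentine wiring: even rows run left-to-right, odd rows are
    reversed. The wiring layout here is an assumption, not the
    robot's documented pinout."""
    if y % 2 == 0:
        return y * width + x
    return y * width + (width - 1 - x)
```

With this mapping, drawing an eye or mouth frame is just a loop over (x, y) pixels that writes colours into the strip buffer at `xy_to_index(x, y)`.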
A Raspberry Pi 3B+ stores and plays the audio. It also handles people tracking with an Intel RealSense D435 depth camera, for which I implemented a blob-tracking algorithm on the D435's point-cloud data.
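The idea behind blob detection on a point cloud can be sketched as follows: project the 3D points onto the ground plane, bin them into a coarse 2D grid, and group adjacent occupied cells into blobs. This is an illustrative sketch, not the project's actual implementation; the grid size and point threshold are made-up parameters:

```python
import numpy as np
from collections import deque

def find_blobs(points, cell=0.1, min_points=50):
    """Minimal blob detector on an (N, 3) point cloud in metres.
    Projects points onto the ground plane (x = horizontal, z = depth),
    bins them into `cell`-sized grid cells, and merges neighbouring
    occupied cells with a BFS flood fill. Returns one (x, z) centroid
    per blob with at least `min_points` points."""
    cells = {}
    for x, y, z in points:
        key = (int(x // cell), int(z // cell))
        cells.setdefault(key, []).append((x, z))

    seen, blobs = set(), []
    for start in cells:
        if start in seen:
            continue
        queue, members = deque([start]), []
        seen.add(start)
        while queue:
            cx, cz = queue.popleft()
            members.extend(cells[(cx, cz)])
            # 4-neighbourhood connectivity between occupied cells
            for nb in ((cx + 1, cz), (cx - 1, cz), (cx, cz + 1), (cx, cz - 1)):
                if nb in cells and nb not in seen:
                    seen.add(nb)
                    queue.append(nb)
        if len(members) >= min_points:
            blobs.append(np.array(members).mean(axis=0))  # blob centroid
    return blobs
```

Tracking a person over time then reduces to matching each frame's blob centroids to the previous frame's, e.g. by nearest distance.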
A total of 50 different animations have been implemented, and each can easily be linked to an audio file. Some animations express emotions, others are simply cool effects, and some also trigger linked motor behaviour.
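One way to make such animation-audio-motor links easy to declare is a small clip registry. The structure below is purely illustrative; the field and clip names are hypothetical, not the project's actual data model:

```python
from dataclasses import dataclass, field

@dataclass
class Animation:
    """Hypothetical animation clip: frames of eye/mouth matrix data,
    an optional linked audio file, and optional motor cues keyed by
    frame number. All names here are assumptions for illustration."""
    name: str
    frames: list
    audio_file: str = ""                               # optional linked audio
    motor_cues: dict = field(default_factory=dict)     # frame -> motor command

ANIMATIONS = {
    "greet": Animation("greet", frames=[], audio_file="hello.wav",
                       motor_cues={0: "wave_arm"}),
    "sparkle": Animation("sparkle", frames=[]),        # pure visual effect
}
```

Keeping the links declarative like this means new audio or motor behaviour can be attached to an animation without touching the playback code.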
An attempt was first made at speech synthesis, but the results were not always intelligible and the voice carried little inflection. We therefore decided to have a real person record the audio and apply a filter over it. The mouth moves with the amplitude of the audio.
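Driving the mouth from the audio amplitude can be sketched as computing the RMS level of each audio chunk and scaling it to a number of lit matrix rows. The full-scale level below is an illustrative constant, not a measured value from the project:

```python
import numpy as np

def mouth_height(samples, max_rows=15, full_scale=10000.0):
    """Map the RMS amplitude of one audio chunk to a mouth height in
    LED rows (0..max_rows). `full_scale` is an assumed sample level
    at which the mouth is fully open."""
    rms = float(np.sqrt(np.mean(np.asarray(samples, dtype=float) ** 2)))
    return min(max_rows, int(rms / full_scale * max_rows))
```

Run per audio buffer (e.g. every 20-50 ms), this gives a mouth that opens and closes in sync with the loudness of the recorded speech.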
The robot has multiple interaction states; in each state it says different things and behaves differently. It can detect whether people are walking by and will try to attract passers-by when nobody is close. When someone approaches to interact, the robot says 'hello' and starts a conversation.
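This kind of behaviour fits naturally into a small finite-state machine driven by tracking events. The state and event names below are assumptions for illustration, not the robot's exact behaviour model:

```python
# Hypothetical interaction states and transitions, keyed by
# (current state, event) pairs from the people tracker.
TRANSITIONS = {
    ("idle", "person_detected"): "attract",     # try to draw people in
    ("attract", "person_close"): "converse",    # say hello, start talking
    ("attract", "person_lost"): "idle",
    ("converse", "person_lost"): "idle",        # say goodbye, reset
}

def next_state(state, event):
    """Return the new state; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)
```

Each state then selects its own pool of animations and audio clips, so what the robot says and does follows directly from where it is in the machine.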
The bench also has sensors, so you can sit next to the robot; it will look at you and say different things. When you stand up, the robot says 'goodbye'. Lastly, the robot has a wheel you can turn, which triggers additional events.