About This Project
Human-following robots have potential future applications as aids for the elderly and children. Many combinations of software and hardware have been used to implement the detection and tracking abilities of such robots, which can make them extremely complex and expensive. In this experiment, we will attempt to build a human-following robot from affordable commercial parts and test how well it performs compared to high-end robots.
What is the context of this research?
As robots have become more advanced, we’ve started seeing more applications for them. Mobile robots in particular have potential domestic uses: serving as aids for the elderly, the sick, or children.
One feature of such robots, the ability to follow a person around by detection and tracking, has been studied intensively. Many different sensors have been proposed for this task, one being the laser rangefinder for depth sensing. It’s far more accurate than other types of sensors but also more expensive, which makes it difficult to use in mass production. An alternative is the stereo camera, which has been shown to match the accuracy of LiDAR (Light Detection and Ranging). My experiment tests the effectiveness of a robot that relies on this alternative as its primary sensor.
What is the significance of this project?
Although there are different ways of building a human-following robot, each method has its own pros and cons to consider. The stereo camera, which is increasingly used, differs from a regular camera in that it contains two cameras placed at a set distance from each other. This gives it an ability similar to human binocular vision: just as a person with two eyes can judge which object is closer or farther, the camera can estimate depth from the difference between its two views. My project aims to use an affordable stereo camera and ultrasonic sensors for detection and tracking in a robot, and to compare its accuracy to that of more expensive robots to explore the trade-off between cost and accuracy.
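Concretely, a stereo camera turns the pixel shift (disparity) between its two views into depth by triangulation, Z = f * B / d. A minimal sketch of that formula, with purely illustrative focal-length and baseline values (the function name and numbers are my own, not the project's code):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Triangulate depth from stereo disparity: Z = f * B / d.

    focal_px     -- focal length in pixels
    baseline_m   -- distance between the two cameras, in meters
    disparity_px -- horizontal pixel shift of a point between views
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: 640 px focal length, 55 mm baseline.
z = depth_from_disparity(640, 0.055, 16)
print(z)  # 2.2 (meters)
```

Nearby objects produce large disparities (small Z), distant ones produce small disparities, which is why depth accuracy degrades with range.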
What are the goals of the project?
The goal of this project is to build a dataset for training the detection algorithm to recognize human figures in the stereo camera's images, and to use a stereo camera and ultrasonic sensors on a mobile robot to capture 3D images and build a depth map. This will aid the robot in detecting and tracking its target. In addition, the accuracy of this robot will be compared to that of robots using laser rangefinders and other sensors.
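The core idea behind building a depth map is block matching: for each pixel in the left view, find how far its neighborhood has shifted in the right view. A toy pure-Python sketch of that idea on 1-D "scanlines" (the helper name and synthetic data are illustrative, not the project's implementation, which would use a real stereo-matching library):

```python
def best_disparity(left, right, x, window=2, max_disp=10):
    """Estimate disparity at pixel x by minimizing the sum of
    absolute differences (SAD) between 1-D pixel windows."""
    best_d, best_cost = 0, float("inf")
    for d in range(max_disp + 1):
        # Skip shifts whose window would fall off the scanline.
        if x - d - window < 0 or x + window >= len(left):
            continue
        cost = sum(abs(left[x + k] - right[x - d + k])
                   for k in range(-window, window + 1))
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

# Synthetic scanlines: the bright bump seen at x=10..12 in the left
# view appears shifted 3 pixels to the left in the right view.
left = [0] * 20
left[10:13] = [5, 7, 5]
right = [0] * 20
right[7:10] = [5, 7, 5]

print(best_disparity(left, right, 11))  # 3
```

Repeating this search for every pixel of every scanline yields a disparity map, which the triangulation formula converts into the depth map the robot navigates by.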
This project will be done using a Raspberry Pi 3 Model B ($35) and a robot body, both of which I already have access to. Before building the robot, a dataset will have to be collected to train the algorithm to recognize objects. This will be done by recording footage with the Intel depth camera, mounted on a rolling tripod and feeding images directly to a nearby computer on a storage cart. The sensors used on the robot will be an Intel stereo camera and an ultrasonic sensor.
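The ultrasonic sensor complements the camera by measuring range directly: it emits a ping and times the echo, and distance follows from the round-trip time at the speed of sound. A minimal sketch of that conversion (the constant and helper name are my own; on the actual robot the pulse would be timed via the Raspberry Pi's GPIO pins):

```python
SPEED_OF_SOUND_M_S = 343.0  # in air at roughly 20 °C

def ultrasonic_distance_m(echo_pulse_s):
    """Convert a round-trip echo time (seconds) to one-way
    distance (meters). Divide by 2: the ping travels out and back."""
    if echo_pulse_s < 0:
        raise ValueError("echo time cannot be negative")
    return echo_pulse_s * SPEED_OF_SOUND_M_S / 2

# A 10 ms echo corresponds to an obstacle about 1.7 m away.
print(ultrasonic_distance_m(0.010))  # 1.715
```

Because this measurement is independent of lighting, it can serve as a sanity check on the stereo camera's depth estimate at close range.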
Due to my high school schedule and the nature of the experiment, I expect the programming of the robot to be done over the course of my junior and senior years. Implementing the algorithms used for detecting and tracking will be difficult, as I will have to adapt them for use on a Raspberry Pi. Part of the dataset will be used to train the algorithms, and the other part will be used to test the program at each stage as it is built.
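Splitting the dataset this way is a standard practice so the program is tested on footage it never trained on. A hedged sketch of such a split (the 80/20 fraction, seed, and file names are illustrative assumptions, not the project's actual setup):

```python
import random

def split_dataset(samples, train_fraction=0.8, seed=42):
    """Shuffle samples reproducibly and split into train/test lists."""
    items = list(samples)
    rng = random.Random(seed)  # fixed seed so the split is repeatable
    rng.shuffle(items)
    cut = int(len(items) * train_fraction)
    return items[:cut], items[cut:]

# Hypothetical frame filenames captured with the depth camera.
frames = [f"frame_{i:04d}.png" for i in range(100)]
train, test = split_dataset(frames)
print(len(train), len(test))  # 80 20
```

Keeping the held-out portion untouched until evaluation gives an honest estimate of how the detector will behave on new footage.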
Project Timeline
- Nov 04, 2019
- Nov 21, 2019
- Dec 01, 2019: Share information with backers about programming process
- Dec 20, 2019: Create dataset with Intel D415
- Jan 01, 2020
Meet the Team
I’m a junior at Princeton High School. I’m also in the school’s Research course, which allows students like me to conduct research experiments on their own. Programming has been a huge interest of mine, and I'm grateful to be given the chance to learn it. I'm interested in artificial intelligence and machine learning as well, so this project is an opportunity for me to bring together the things I enjoy and better understand the current advancements in technology.