
TensorFlow Object Detection on the Raspberry Pi

Written on July 08, 2023 by Derek Sun.

Last updated July 08, 2023.


This example uses TensorFlow Lite with Python on a Raspberry Pi to perform real-time object detection on images streamed from the Pi Camera or a USB camera. It draws a bounding box around each detected object in the camera preview (when the object's score is above a given threshold).
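The score-threshold step can be sketched in a few lines of Python. This is a minimal illustration, not the repository's actual code; the `detections` dictionary format and `filter_detections` helper are hypothetical:

```python
# Hypothetical sketch: keep only detections whose score clears the threshold.
SCORE_THRESHOLD = 0.5

def filter_detections(detections, threshold=SCORE_THRESHOLD):
    """Return the detections worth drawing a bounding box for."""
    return [d for d in detections if d["score"] >= threshold]

detections = [
    {"label": "cup", "score": 0.87, "box": (10, 20, 110, 140)},
    {"label": "keyboard", "score": 0.32, "box": (5, 5, 60, 40)},
]
print(filter_detections(detections))  # only the cup clears the 0.5 threshold
```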

At the end of this page, there are extra steps to accelerate the example using the Coral USB Accelerator for faster inference.

Update the Raspberry Pi

First, the Raspberry Pi needs to be fully updated. Open a terminal and issue:

```sh
sudo apt-get update
sudo apt-get dist-upgrade
```

Depending on how long it’s been since you’ve updated your Pi, the upgrade could take anywhere between a minute and an hour.

Install TensorFlow

Update 10/13/19: Changed instructions to just use "pip3 install tensorflow" rather than getting it from lhelontra's repository. The old instructions have been moved to this guide's appendix.

Next, we’ll install TensorFlow. The download is rather large (over 100MB), so it may take a while. Issue the following command:

```sh
pip3 install tensorflow
```

TensorFlow also needs the LibAtlas package. Install it by issuing the following command. (If this command doesn't work, issue "sudo apt-get update" and then try again).

```sh
sudo apt-get install libatlas-base-dev
```

While we’re at it, let’s install other dependencies that will be used by the TensorFlow Object Detection API. These are listed on the installation instructions in TensorFlow’s Object Detection GitHub repository. Issue:

```sh
sudo pip3 install pillow lxml jupyter matplotlib cython
sudo apt-get install python-tk
```
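To confirm the install worked, a quick import check on the Pi is enough. A minimal sketch (the `tensorflow_version` helper is ours, not part of any official API; the import is done lazily so the file can be loaded even where TensorFlow is absent):

```python
def tensorflow_version():
    """Import TensorFlow and report its version; raises ImportError if the install failed."""
    import tensorflow as tf  # lazy import: only succeeds where tensorflow is installed
    return tf.__version__

if __name__ == "__main__":
    print(tensorflow_version())
```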

Alright, that’s everything we need for TensorFlow! Next up: OpenCV.

Install OpenCV

TensorFlow’s object detection examples typically use matplotlib to display images, but I prefer to use OpenCV because it’s easier to work with and less error prone. The object detection scripts in this guide’s GitHub repository use OpenCV. So, we need to install OpenCV.

To get OpenCV working on the Raspberry Pi, there are quite a few dependencies that need to be installed through apt-get. If any of the following commands don't work, issue "sudo apt-get update" and then try again. Issue:

```sh
sudo apt-get install libjpeg-dev libtiff5-dev libjasper-dev libpng12-dev
sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev
sudo apt-get install libxvidcore-dev libx264-dev
sudo apt-get install qt4-dev-tools libatlas-base-dev
```

Now that we’ve got all those installed, we can install OpenCV. Issue:

```sh
sudo pip3 install opencv-python
```

Alright, now OpenCV is installed!
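In this guide, OpenCV's main job is drawing the boxes and labels on each frame. A minimal sketch of that step (the `draw_detection` helper is hypothetical, and assumes `opencv-python` is installed; the import is lazy so the file loads anywhere):

```python
def draw_detection(frame, box, label, score):
    """Draw one bounding box plus a 'label score' caption onto a BGR frame in place."""
    import cv2  # lazy import: assumes opencv-python is installed on the Pi
    x0, y0, x1, y1 = box
    cv2.rectangle(frame, (x0, y0), (x1, y1), (0, 255, 0), 2)  # green 2px box
    cv2.putText(frame, f"{label} {score:.2f}", (x0, max(y0 - 5, 0)),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    return frame
```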

Set up your hardware

Before you begin, you need to set up your Raspberry Pi with Raspberry Pi OS (preferably updated to Buster).

If you're using the Pi Camera, you also need to connect and configure it. The code also works with a USB camera connected to the Raspberry Pi.

And to see the results from the camera, you need a monitor connected to the Raspberry Pi. It's okay if you're using SSH to access the Pi shell (you don't need to use a keyboard connected to the Pi)—you only need a monitor attached to the Pi to see the camera stream.

Download the example files

First, clone this Git repo onto your Raspberry Pi like this:

```sh
git clone https://github.com/tensorflow/examples --depth 1
```

Then use our script to install a couple Python packages, and download the EfficientDet-Lite model:

```sh
cd examples/lite/examples/object_detection/raspberry_pi
# The script installs the required dependencies and downloads the TFLite models.
sh setup.sh
```

In this project, all you need from the TensorFlow Lite API is the Interpreter class. So instead of installing the large tensorflow package, we're using the much smaller tflite_runtime package. The setup script automatically installs the TensorFlow Lite runtime.
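The basic Interpreter workflow looks like this. This is a generic sketch of the tflite_runtime API, not the example's own code; the `run_inference` helper and `model_path` are ours, and the import is lazy so the file loads even without tflite_runtime present:

```python
def run_inference(model_path, input_tensor):
    """Load a .tflite model, feed it one input tensor, and return all output tensors."""
    from tflite_runtime.interpreter import Interpreter  # lazy import; installed by setup.sh

    interpreter = Interpreter(model_path=model_path)
    interpreter.allocate_tensors()  # must be called before set_tensor/invoke

    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()

    interpreter.set_tensor(input_details[0]["index"], input_tensor)
    interpreter.invoke()
    return [interpreter.get_tensor(d["index"]) for d in output_details]
```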

Run the example

```sh
python3 detect.py \
  --model efficientdet_lite0.tflite
```

You should see the camera feed appear on the monitor attached to your Raspberry Pi. Put some objects in front of the camera, like a coffee mug or keyboard, and you'll see boxes drawn around those that the model recognizes, including the label and score for each. It also prints the number of frames per second (FPS) at the top-left corner of the screen. Because the pipeline includes steps other than model inference, such as visualizing the detection results, you can expect a higher FPS if your inference pipeline runs in headless mode without visualization.
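For reference, an FPS counter like the one shown on screen can be built from a sliding window of frame timestamps. A minimal stdlib-only sketch (the `FPSCounter` class is ours, not the example's):

```python
import time
from collections import deque

class FPSCounter:
    """Rolling frames-per-second estimate over the last `window` frames."""

    def __init__(self, window=30):
        self.times = deque(maxlen=window)  # timestamps of recent frames

    def tick(self):
        """Call once per processed frame."""
        self.times.append(time.monotonic())

    def fps(self):
        if len(self.times) < 2:
            return 0.0  # not enough samples yet
        # (n - 1) frame intervals span from the oldest to the newest timestamp.
        return (len(self.times) - 1) / (self.times[-1] - self.times[0])
```

Each loop iteration calls `tick()` after processing a frame and overlays `fps()` on the preview.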

For more information about executing inferences with TensorFlow Lite, read TensorFlow Lite inference.
