Raspberry Pi 3D Time-of-Flight Camera (Lidar)

Bjarne Johannsen
Nov 24, 2020

The time-of-flight technology integrated into the Pieye Nimbus 3D determines the distance between the camera lens and an object from the travel time of modulated light pulses. With each exposure the camera provides a distance value for every image point and makes this so-called point cloud directly available for evaluation. Since time-of-flight technology enables 3D measurements with a single sensor, the result is a particularly compact design that attaches to the Raspberry Pi for object recognition, gesture control or robot navigation.
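To make the measuring principle concrete, here is a minimal sketch (not part of the Nimbus software) of how a continuous-wave time-of-flight camera turns the travel time, measured as a phase shift of the modulated light, into a distance. The 12 MHz modulation frequency is an assumed value for illustration only.

import math

C = 299_792_458.0   # speed of light in m/s
F_MOD = 12e6        # modulation frequency in Hz (assumed for illustration)

def distance_from_phase(phase_rad):
    # The light travels to the object and back, hence the extra factor of 2:
    # one full phase cycle (2*pi) corresponds to a distance of C / (2 * F_MOD).
    return (C * phase_rad) / (4 * math.pi * F_MOD)

print(distance_from_phase(math.pi / 2))  # a quarter cycle is roughly 3.12 m at 12 MHz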

The Nimbus 3D mounted on a Raspberry Pi 4

The following graphic shows the underlying architecture of the Nimbus software. The imager captures the image, and the Linux kernel module makes it available via Video4Linux. From there, two options are available: the data can be read directly from the Video4Linux device, or the nimbus-server can be used, which publishes the data on the local network via WebSockets and lets you change parameters via JSON-RPC. nimbus-python and nimbus-web both use this interface. In this way the data can be consumed asynchronously, both locally and over the network, and a distributed system can be realized. The Python interface is particularly suitable for easy access to the data; the web interface is especially handy for getting a live image and adjusting the exposure.

Nimbus 3D Software
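As a rough illustration of how a client talks to the nimbus-server, the following sketch opens its WebSocket and sends a JSON-RPC request from Python. The port, the method name setExposure and the parameter layout are hypothetical placeholders, not the documented nimbus-server API; they only show the shape of such a call.

import json
from websocket import create_connection  # pip install websocket-client

# Port, method name and parameters below are assumptions for illustration.
ws = create_connection("ws://192.168.0.69:8080")
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "setExposure",          # hypothetical method name
    "params": {"exposure": 5000},     # hypothetical parameter
}
ws.send(json.dumps(request))
print(json.loads(ws.recv()))          # JSON-RPC response with a matching id
ws.close()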

Nimbus-web

After successful installation and setup of nimbus-web, it can be accessed in the browser via the IP address of the Raspberry Pi. On the left side the point cloud is visible, in the middle a greyscale image and on the right side a depth image. A click on the buttons below opens the exposure settings and information about the current image.

Web browser visualizing nimbus-web

Nimbus-python

nimbus-python is the Python interface for the Nimbus 3D. It makes the 3D data accessible in Python over the local network. The package can be installed directly via pip. The following example shows how to fetch an image in Python.

from nimbusPython import NimbusClient

# Connect to the nimbus-server on the Raspberry Pi (use your Pi's IP address).
cli = NimbusClient.NimbusClient("192.168.0.69")

# Fetch one frame: amplitude, radial distance, Cartesian coordinates and
# per-pixel confidence; invalid pixels are returned as NaN.
header, (ampl, radial, x, y, z, conf) = cli.getImage(invalidAsNan=True)
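The call above returns plain arrays, so the data can be inspected directly with NumPy. A small sketch of a sanity check (the radial distances are assumed to be in metres):

import numpy as np

radial = np.asarray(radial)          # per-pixel radial distance from getImage
h, w = radial.shape
print("centre pixel: %.3f m" % radial[h // 2, w // 2])   # unit assumed: metres
print("valid pixels: %d" % np.count_nonzero(~np.isnan(radial)))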

Nimbus-ros

Robot Operating System (ROS) is robotics middleware. Although ROS is not an operating system, it provides services designed for a heterogeneous computer cluster such as hardware abstraction, low-level device control, implementation of commonly used functionality, message-passing between processes, and package management.

To use the Nimbus 3D in ROS, you need nimbus-ros and ROS itself installed on your Raspberry Pi.

ROS is particularly useful for more extensive projects, such as industrial robotics and autonomous systems, as well as for reusing existing algorithms. The ROS driver provides a point cloud, an intensity image and a depth image, which can be visualized with RVIZ or RQT.

Nimbus 3D Pointcloud in RVIZ
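As a minimal sketch of consuming the driver's output in code, the following rospy node subscribes to the point cloud and logs how many valid points each frame contains. The topic name /nimbus/pointcloud is an assumption; check the real name with rostopic list after starting the driver.

import rospy
import sensor_msgs.point_cloud2 as pc2
from sensor_msgs.msg import PointCloud2

def on_cloud(msg):
    # Count the valid (non-NaN) points in this frame as a simple health check.
    points = list(pc2.read_points(msg, field_names=("x", "y", "z"), skip_nans=True))
    rospy.loginfo("received %d valid points", len(points))

rospy.init_node("nimbus_listener")
rospy.Subscriber("/nimbus/pointcloud", PointCloud2, on_cloud)  # topic name assumed
rospy.spin()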

Nimbus-Perception

Nimbus-Perception is a ROS-based perception stack for the Nimbus 3D time-of-flight camera. The deep-learning (TensorFlow) based algorithms run on embedded systems such as a Raspberry Pi 4 (64-bit Raspberry Pi OS) at ~10 Hz. A prepared 64-bit Raspberry Pi OS image can be found here.

Nimbus-Perception visualization in RVIZ

Nimbus-Detection is a 3D object detection based on MobileNets (MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications). This implementation can detect 91 different classes, as listed in COCO_labels.txt. Every detected object is given an estimated 3D position and size, and the bounding box and class are visualized in RVIZ.

Nimbus-Pose is a 3D human pose estimation that extracts the keypoints of the human body. It is based on PoseNet (PersonLab: Person Pose Estimation and Instance Segmentation with a Bottom-Up, Part-Based, Geometric Embedding Model). This implementation can extract all keypoints of a single person and estimate the 3D skeleton.

Nimbus-Segmentation is a semantic point cloud segmentation based on DeepLab (DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs). It assigns one of 21 classes to every 3D point and publishes the colourized point cloud.
