One of the questions we get asked the most by clients is how the various vision sensors differ. There are tried-and-true cameras like the Point Grey Flea3 and Bumblebee, which have been favourites of R&D groups for years, but there are always questions about low-cost depth cameras and vision systems, and especially the differences between the legacy Microsoft Kinect and the more recent Kinect 2, aka Kinect for Windows.

Often, in response to these questions, we scour the internet for other sources of specifications and comparisons of the various imaging systems. We’ve pointed people elsewhere often enough that it’s time to put some of those specs right here on the Clearpath blog.

1 Kinect, 2 Kinect…

If you Google “Kinect v1 vs Kinect v2” you will find blog after blog showing the following list of specifications comparing the two units. If you follow the listed sources back (sometimes several sites deep), you eventually get to the specifications that were originally posted by Imaginative Universal, prior to the official Kinect 2 release:

| Feature | Kinect for Windows 1 | Kinect for Windows 2 |
|---|---|---|
| Color Camera | 640 x 480 @ 30 fps | 1920 x 1080 @ 30 fps |
| Depth Camera | 320 x 240 | 512 x 424 |
| Max Depth Distance | ~4.5 m | 8 m |
| Min Depth Distance | 40 cm in near mode | 50 cm |
| Depth Horizontal Field of View | 57 degrees | 70 degrees |
| Depth Vertical Field of View | 43 degrees | 60 degrees |
| Tilt Motor | yes | no |
| Skeletal Joints Defined | 20 joints | 25 joints |
| Full Skeletons Tracked | 2 | 6 |
| USB Support | 2.0 | 3.0 |

Some of the main differences between the two (and why you can’t necessarily just swap in a Kinect 2 where you used to use a Kinect 1) are the data throughput and the USB support. With the current version of the Turtlebot, the Kinect 2 isn’t an option, because the Kobuki base the mobile platform is built on only has USB 2.0 ports.
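To get a feel for why the USB version matters, here’s a rough back-of-envelope estimate (our own, assuming uncompressed YUY2 color and 16-bit depth frames, and ignoring protocol overhead) of the raw data rate the Kinect 2 has to push:

```python
# Back-of-envelope data rate for the Kinect 2, assuming uncompressed frames.
color_bits = 1920 * 1080 * 2 * 8 * 30   # 1080p color, 2 bytes/pixel (YUY2), 30 fps
depth_bits = 512 * 424 * 2 * 8 * 30     # 512 x 424 depth, 16-bit pixels, 30 fps
total_mbit_per_s = (color_bits + depth_bits) / 1e6

print(f"approx. raw stream: {total_mbit_per_s:.0f} Mbit/s")  # roughly 1100 Mbit/s
# USB 2.0 tops out at a theoretical 480 Mbit/s; USB 3.0 offers 5 Gbit/s,
# which is why the Kinect 2 is a USB 3.0 device.
```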

When we look at larger UGVs, the issue becomes the computing load of the Kinect itself. Microsoft’s developer site states that the Kinect 2 requires a “Physical dual-core 3.1 GHz (2 logical cores per physical) or faster processor”, so if you’re running other sensors off the same computer, or doing on-board path planning or other algorithms, you’re going to run into issues maxing out your CPU.

Typically, we recommend a separate computer for point-cloud reconstruction, like an Intel NUC, since the Kinect can be so hungry for computing power.
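As a minimal sketch of what that off-board setup can look like under ROS (assuming the NUC points at the robot’s ROS master via ROS_MASTER_URI, and that your camera driver publishes on /camera/depth/points; both depend on your configuration), the heavy point-cloud processing runs as a node on the second machine:

```python
# Hypothetical sketch: run this node on a second computer (e.g. an Intel NUC)
# that shares the robot's ROS master, so the point-cloud work stays off the
# robot's main CPU. The topic name is an assumption and varies by driver.
import rospy
from sensor_msgs.msg import PointCloud2
import sensor_msgs.point_cloud2 as pc2

def cloud_callback(msg):
    # Stand-in for the real work (filtering, registration, obstacle detection):
    # count the points within 2 m of the sensor.
    near = sum(
        1
        for _, _, z in pc2.read_points(msg, field_names=("x", "y", "z"), skip_nans=True)
        if z < 2.0
    )
    rospy.loginfo("points closer than 2 m: %d", near)

if __name__ == "__main__":
    rospy.init_node("offboard_cloud_processor")
    rospy.Subscriber("/camera/depth/points", PointCloud2, cloud_callback, queue_size=1)
    rospy.spin()
```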

When looking for an alternative to the Kinect 1, may it rest in peace, there are a couple of options out there, depending on your application. The current solution replacing it on the Turtlebot is the Orbbec Astra, which has seen great adoption in the robotics community.

(Image: a Clearpath Jackal with an Orbbec Astra)

Depth Sensors Galore

For an in-depth look at the details of the options, the team at Stimulant put together a “Depth Sensor Shootout” that they’ve updated with some of the newer contenders.

Here’s a quick summary of some of their findings, but take a look at their original post for more details:

Orbbec Persee

  • ARM-based SoC depth camera
  • Good for distributed sensing where direct access to C++ is helpful
  • Localized processing

Orbbec Astra

  • Infrared depth sensor (Pro version has enhanced RGB)
  • Hand and gesture tracking, not great for full skeletons
  • Okay SDK support
  • Good for longer-range applications where raw point cloud data is required (see the sketch after these summaries)

Intel RealSense R200

  • Designed to be integrated with OEM products (rear-facing model)
  • Infrared sensing
  • Wide variety of SDK support
  • Good range; face and expression tracking, but not hand or skeleton tracking

Intel F200

  • Front-facing counterpart to the R200
  • Great face, hand, object, gesture, and speech tracking
  • Wide SDK support

Intel SR300

  • Successor to the F200
  • Less noisy depth feed than R200
  • Even better 3D tracking
  • Compatible with the same RealSense SDK

Stereolabs ZED

  • Paired visible light cameras for stereo vision
  • Requires powerful host hardware (an NVIDIA GPU) to compute depth
  • Limited SDK
  • Any tracking requires your own development
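
If what you’re after is just raw depth frames from an OpenNI2-compatible sensor like the Astra (or the original Kinect), a minimal sketch looks something like the following; it assumes the OpenNI2 runtime and the primesense Python bindings are installed, and exact package names and library paths vary by platform:

```python
# Minimal sketch: grab one depth frame from an OpenNI2-compatible sensor
# (e.g. an Orbbec Astra or an original Kinect). Assumes the OpenNI2 runtime
# and the `primesense` Python bindings are available on this machine.
import numpy as np
from primesense import openni2

openni2.initialize()                       # loads the OpenNI2 library
device = openni2.Device.open_any()         # first sensor found
depth_stream = device.create_depth_stream()
depth_stream.start()

frame = depth_stream.read_frame()
# Depth comes back as 16-bit millimetre values; reshape into an image.
depth_mm = np.frombuffer(frame.get_buffer_as_uint16(), dtype=np.uint16)
depth_mm = depth_mm.reshape(frame.height, frame.width)
print("centre pixel depth: %d mm" % depth_mm[frame.height // 2, frame.width // 2])

depth_stream.stop()
openni2.unload()
```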

Each application will have different requirements, but it’s great that if you’re in need of a camera system, your options are nearly endless.

Whether you’re doing human-robot interaction, agricultural robotics, or manufacturing applications, there are a plethora of use-cases where these types of sensors would be useful on a mobile robotic platform.

Happy Sensing!
