eCapture is introducing its smallest form-factor stereoscopic 3D depth-sensing camera. The new LifeSense G53 measures only 50×14.9×20 mm and is designed for depth capture and object tracking in industrial, robotics, and other AI-driven applications. Over the next quarter, eCapture plans to introduce a full range of depth-map cameras to address the growing need for stereo imaging equipment.
The first in the cost-effective eCapture line, the G53 provides a 50-degree field of view (FOV) and includes two mono sensor pairs that deliver stereo, mono, and depth disparity/distance map output at various resolutions via USB. The camera is well suited to the development of robots, Automated Guided Vehicles (AGVs), Autonomous Mobile Robots (AMRs), and Goods-to-Person (G2P) delivery systems, as well as fast-motion depth capture.
3D computer vision technology is critical for AI applications, enabling autonomous functionality for software and machines: robotic spatial awareness, scene understanding, object recognition, and improved depth and distance sensing for intelligent vehicles. According to Meticulous Research, the 3D and machine vision market is expected to roughly double from $1.35 billion in 2020 to $2.65 billion in 2027.
Leading robotics companies have recently incorporated eCapture depth-vision cameras into their Autonomous Mobile Robot (AMR) designs, adding a people-following function for everyday use. In one such AMR, the eCapture LifeSense G53 camera is installed on a mobile dolly to track and follow operators. The solution's global shutter (GS) sensors and wide field of view allow it to respond to fast-moving objects, an important capability for this application that lets the device react to sudden turns, for example.
eCapture camera solutions are based on the innovative eYs3D stereo vision processing platform. The eYs3D vision processor computes the stereo depth map on-device, reducing the burden on the host CPU/GPU and enabling higher-performance, lower-power solutions. Synchronized frame data from both cameras supports the development of SLAM algorithms.