What is Dragonfly?
Dragonfly is a robust computer-vision-based indoor location technology that uses advanced visual SLAM (vSLAM) algorithms to localize and track forklifts, robots, AGVs, drones, and any other moving asset that can be equipped (or is already equipped) with only a wide-angle monocular camera and a computing unit.
Dragonfly thus delivers in real time (at 30 Hz or more) the X, Y, Z coordinates (in metric or imperial units) and the 3D spatial orientation (6 degrees of freedom) of forklifts, robots, AGVs, drones, and any other moving asset or industrial vehicle with centimeter-level accuracy, laying the foundation for more complex applications that improve productivity and safety in any vertical.
Why is Dragonfly the best choice for the indoor localization of your assets?
Dragonfly represents the state of the art in indoor localization for all those indoor areas where GNSS systems (such as GPS) cannot be used. Dragonfly is highly competitive compared to other indoor localization technologies based on LiDAR, UWB, or Wi-Fi/Bluetooth RSS.
- Easy setup – no need for error-prone, time-consuming calibration of an ad-hoc UWB infrastructure: just a camera and a computing unit on board each mobile vehicle.
- Maintenance – Dragonfly cameras are easier to calibrate and more robust to changes in the environment than LiDAR, which can lose accuracy in contour recognition when obstacle positions change over time.
- Scalable – Dragonfly grows with the size of your fleet, thanks to its independent computing units and map-sharing mechanism.
- No Internet connection – Dragonfly does not require a continuous Internet connection inside your venue.
How does Dragonfly work?
The principle of operation is very simple:
- SETUP:
- the wide-angle camera, mounted on the moving asset, sends the real-time video feed of its surroundings to the computing unit.
- the computing unit extracts the visual features of the environment from each individual frame.
- the user performs the MAPPING of the environment by saving the features into a 3D map file that represents the digital twin of the environment.
- the user performs the GEO-REFERENCING of the 3D map file, relating it to a chosen X, Y, Z metric/imperial coordinate system (using an accurate DWG file of the environment).
- USAGE IN PRODUCTION:
- the wide-angle camera, mounted on the moving asset, sends the real-time video feed of its surroundings to the computing unit.
- the computing unit extracts the visual features of the environment from each individual frame.
- the computing unit compares the features it sees with those stored in the geo-referenced 3D map file and calculates the X, Y, Z position and orientation of the wide-angle camera in 3D space.
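The GEO-REFERENCING step above amounts to relating coordinates in the 3D map file to real-world venue coordinates taken from the DWG drawing. As an illustration only (Dragonfly's actual procedure is not described here), a 2D similarity transform – scale, rotation, and translation – can be fitted from two reference points whose positions are known in both frames; the function names below are hypothetical:

```python
import math

def fit_similarity_2d(map_pts, venue_pts):
    """Fit scale, rotation, and translation taking map_pts onto venue_pts,
    given two corresponding reference points (hypothetical helper)."""
    (mx0, my0), (mx1, my1) = map_pts
    (vx0, vy0), (vx1, vy1) = venue_pts
    # Vector between the two reference points, in each frame.
    dmx, dmy = mx1 - mx0, my1 - my0
    dvx, dvy = vx1 - vx0, vy1 - vy0
    scale = math.hypot(dvx, dvy) / math.hypot(dmx, dmy)
    theta = math.atan2(dvy, dvx) - math.atan2(dmy, dmx)
    # Translation takes the first map point exactly onto the first venue point.
    c, s = math.cos(theta), math.sin(theta)
    tx = vx0 - scale * (c * mx0 - s * my0)
    ty = vy0 - scale * (s * mx0 + c * my0)
    return scale, theta, (tx, ty)

def map_to_venue(pt, scale, theta, t):
    """Convert a single map-frame point into venue coordinates."""
    c, s = math.cos(theta), math.sin(theta)
    x, y = pt
    return (scale * (c * x - s * y) + t[0],
            scale * (s * x + c * y) + t[1])
```

For example, if map points (0, 0) and (1, 0) correspond to venue points (10, 5) and (10, 7), the fitted transform converts the map point (0.5, 0) to the venue point (10, 6). In practice more than two reference points would be used, with a least-squares fit, to average out measurement error.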
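The comparison step in production can be pictured as nearest-neighbour matching between the descriptors of the features seen in the current frame and those stored in the map. A minimal sketch, assuming binary descriptors (ORB-style) packed into integers and compared by Hamming distance; `match_feature` and its threshold are illustrative, not Dragonfly's actual API:

```python
def hamming(d1: int, d2: int) -> int:
    """Hamming distance between two binary descriptors packed as ints."""
    return bin(d1 ^ d2).count("1")

def match_feature(query: int, map_descriptors, max_dist: int = 40):
    """Return the index of the closest map descriptor, or None if no
    candidate is within max_dist differing bits (illustrative threshold)."""
    best_idx, best_dist = None, max_dist + 1
    for i, d in enumerate(map_descriptors):
        dist = hamming(query, d)
        if dist < best_dist:
            best_idx, best_dist = i, dist
    return best_idx
```

Once enough frame features are matched to map features in this way, the known 3D positions of the matched map points constrain the camera's position and orientation, which is what yields the 6-degrees-of-freedom pose.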