In part 2 we will explore multi-view 3D scanning and processing using sensor networks.
Multi-Sensor Networking
Sensor Discovery, Assignment, and Mapping
Gocator® multi-sensor operation is supported by built-in connection and streaming logic. You can use any web browser to connect to the “main” sensor and then connect additional sensors to the network. Each sensor is assigned a location in a layout, and its physical-to-logical mapping is recorded.
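As a rough illustration (not the Gocator interface itself, which handles this configuration for you), a physical-to-logical mapping can be thought of as a small lookup table; the serial numbers and grid positions below are hypothetical.

```python
# Minimal sketch of a physical-to-logical sensor mapping (illustrative only;
# the real layout is configured through the Gocator web interface).
from dataclasses import dataclass

@dataclass
class SensorSlot:
    serial: str   # physical sensor identity (hypothetical serial numbers)
    column: int   # logical column in the layout (e.g. across the target)
    row: int      # logical row in the layout (e.g. top/bottom ring)

# Example layout: four sensors arranged in two columns and two rows.
layout = [
    SensorSlot(serial="SN-00123", column=0, row=0),
    SensorSlot(serial="SN-00124", column=1, row=0),
    SensorSlot(serial="SN-00125", column=0, row=1),
    SensorSlot(serial="SN-00126", column=1, row=1),
]

# Physical-to-logical lookup: given a serial number, find where its data
# belongs in the stitched result.
logical_position = {slot.serial: (slot.column, slot.row) for slot in layout}
print(logical_position["SN-00125"])  # -> (0, 1)
```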
Aligning to a Common Coordinate System
All sensors in a network are usually aligned to a common coordinate system in order to stitch sensor data into a single 3D model or perform absolute measurement.
Gocator® offers built-in alignment that scans a known shape (such as a 4-sided polygon): each sensor “sees” one of the polygon's vertices, and a transformation from sensor coordinates to world coordinates is computed for that sensor.
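The built-in routine itself isn't shown here, but the underlying idea can be sketched as a rigid-body fit: points measured in sensor coordinates are matched to their known positions on the alignment target, and the best-fit rotation and translation are recovered with a standard SVD-based least-squares fit. The point values below are made up for illustration.

```python
import numpy as np

def fit_rigid_transform(sensor_pts, world_pts):
    """Least-squares rigid transform (rotation R, translation t) mapping
    sensor_pts onto world_pts, both given as (N, 3) arrays."""
    cs = sensor_pts.mean(axis=0)
    cw = world_pts.mean(axis=0)
    H = (sensor_pts - cs).T @ (world_pts - cw)   # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = cw - R @ cs
    return R, t

# Hypothetical example: one corner of the alignment target and two nearby
# points as seen by a single sensor, plus their known world positions.
sensor_pts = np.array([[10.0, 2.0, 50.0], [12.0, 2.0, 50.0], [10.0, 4.0, 50.0]])
world_pts  = np.array([[ 0.0, 0.0,  0.0], [ 2.0, 0.0,  0.0], [ 0.0, 2.0,  0.0]])
R, t = fit_rigid_transform(sensor_pts, world_pts)
```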
Data Transformation
With Gocator®, data merging and resampling are built in. The smart sensor applies the 6-DOF transformations from the alignment process to convert sensor coordinates to the common coordinate system. Overlapping sensor data is intelligently merged, and the final 3D cross-section is resampled to the desired resolution.
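A simplified sketch of what this amounts to, assuming the 6-DOF transform is expressed as a rotation and translation and the merged cross-section is resampled by linear interpolation onto a uniform lateral grid (the data and spacing are illustrative; the sensor performs these steps internally):

```python
import numpy as np

def to_world(points_xyz, R, t):
    """Build a 4x4 homogeneous matrix from rotation R and translation t and
    apply it to an (N, 3) array of sensor-coordinate points."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    homogeneous = np.c_[points_xyz, np.ones(len(points_xyz))]
    return (T @ homogeneous.T).T[:, :3]

def resample_profile(x, z, spacing):
    """Resample an irregular (x, z) cross-section onto a uniform lateral grid."""
    order = np.argsort(x)
    grid = np.arange(x.min(), x.max() + spacing, spacing)
    return grid, np.interp(grid, x[order], z[order])

# Illustrative data: two sensors' cross-sections in their own coordinates,
# with the second sensor offset 1 mm laterally (identity rotation for brevity).
pts_a = np.array([[0.0, 0.0, 5.0], [1.0, 0.0, 5.2], [2.0, 0.0, 5.1]])
pts_b = np.array([[1.5, 0.0, 5.1], [2.5, 0.0, 5.3], [3.5, 0.0, 5.4]])
world = np.vstack([to_world(pts_a, np.eye(3), np.zeros(3)),
                   to_world(pts_b, np.eye(3), np.array([1.0, 0.0, 0.0]))])
x_grid, z_grid = resample_profile(world[:, 0], world[:, 2], spacing=0.5)
```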
Point Cloud Processing
360° cross-sectional profiles from laser profilers are synchronized to an encoder and accumulated to build 3D point clouds. The point clouds are aligned and projected to height maps for fast 3D feature or profile measurement.
Anchoring is used to adjust measurement position for high repeatability.
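The height-map projection described above can be pictured as binning points into an (x, y) grid and keeping one height value per cell; the grid resolution and the choice of reduction for overlapping points in this sketch are assumptions.

```python
import numpy as np

def to_height_map(points, resolution, nodata=np.nan):
    """Project an (N, 3) point cloud onto an (x, y) grid, keeping the maximum
    Z per cell (a simple choice; other reductions such as mean are possible)."""
    xy_min = points[:, :2].min(axis=0)
    cols, rows = (np.ceil((points[:, :2].max(axis=0) - xy_min) / resolution)
                  .astype(int) + 1)
    height = np.full((rows, cols), nodata)
    ix = ((points[:, 0] - xy_min[0]) / resolution).astype(int)
    iy = ((points[:, 1] - xy_min[1]) / resolution).astype(int)
    for x, y, z in zip(ix, iy, points[:, 2]):
        if np.isnan(height[y, x]) or z > height[y, x]:
            height[y, x] = z
    return height

# Hypothetical cloud: four points on a 1 mm grid; two fall in the same cell.
cloud = np.array([[0.2, 0.3, 5.0], [0.4, 0.4, 5.2], [1.6, 0.2, 4.9], [0.3, 1.7, 5.1]])
print(to_height_map(cloud, resolution=1.0))
```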
Data Synchronization
Data acquisition from multiple sensors requires accurate synchronization so that each sensor exposes at exactly the same time, whether triggering is time-based, driven by a discrete input, or encoder-based.
This is achieved using LMI Master hubs. A Master hub transmits sub-microsecond synchronization to each sensor, more reliably than PTP/IEEE 1588 and at much lower cost, with no need for an IEEE 1588-capable Gigabit Ethernet switch. Masters also support time- or encoder-based offsets as required.
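The Master hubs are hardware, but the effect of encoder-based triggering with per-sensor offsets reduces to simple arithmetic: each sensor exposes when the encoder count reaches a multiple of the trigger spacing plus its own configured offset. The resolutions and offsets below are invented for illustration.

```python
# Illustrative arithmetic for encoder-based triggering with per-sensor offsets
# (values are invented; the actual offsets are configured on the Master/sensors).
ENCODER_RESOLUTION_MM = 0.005   # mm of travel per encoder tick (assumed)
TRIGGER_SPACING_MM = 0.2        # desired profile spacing along the transport

ticks_per_trigger = round(TRIGGER_SPACING_MM / ENCODER_RESOLUTION_MM)  # 40 ticks

# Per-sensor encoder offsets compensate for sensors mounted at different
# positions along the direction of travel (hypothetical 10 mm stagger).
sensor_offsets_ticks = {"top": 0, "bottom": round(10.0 / ENCODER_RESOLUTION_MM)}

def next_trigger(encoder_count, offset_ticks):
    """Encoder count at which the next exposure fires for a given sensor."""
    n = (encoder_count - offset_ticks) // ticks_per_trigger + 1
    return n * ticks_per_trigger + offset_ticks

print(next_trigger(encoder_count=12345, offset_ticks=sensor_offsets_ticks["bottom"]))
```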
Read part 3 now, where we cover multi-view processing using surface stitching.