3D Scanners Reproduce Reality
A 3D scanner converts a three-dimensional object into digital data (typically a point cloud), which is used for many processes, such as:
• Ensure proper fit and form of medically implantable devices, such as orthotics, prostheses, and hearing aids. In some medical applications, physicians use scanners in diagnosis and patient follow-up visits.
• Archive and document. Scanners are useful for recording information on legacy parts that do not have an original CAD version.
• Create the negative shape of an object. This capability is particularly useful in packaging applications. Users scan the object and use that data to cut protective packaging to the right shape for the item, saving material and money.
• Inspect the external profiles of parts for quality purposes. By comparing this data against 3D CAD data, you can create measurement reports on overall deviation, cross-sectional deviation, and other measurements, comparing actual with schematic, enabling you to confirm and verify whether parts were manufactured to the design.
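The inspection use above can be sketched in a few lines. This is a minimal illustration, not any vendor's actual workflow: it assumes the scanned points have already been registered to (aligned and paired with) their nominal CAD positions, and simply summarizes the point-to-point deviation.

```python
import math

def deviation_report(scanned, nominal):
    """Summarize how far scanned surface points deviate from their
    CAD nominal positions.

    `scanned` and `nominal` are paired lists of (x, y, z) tuples;
    registration (aligning the scan to the CAD model) is assumed to
    have been done already.
    """
    deviations = [math.dist(s, n) for s, n in zip(scanned, nominal)]
    return {
        "max_deviation": max(deviations),
        "mean_deviation": sum(deviations) / len(deviations),
    }

# A point 0.1 units off its nominal position dominates the report:
report = deviation_report([(0, 0, 0.1), (1, 0, 0)],
                          [(0, 0, 0.0), (1, 0, 0)])
```

A real inspection package would also handle registration, cross-sections, and tolerance bands; this shows only the core comparison of actual against schematic.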
While today’s 3D scanners are better and easier to use than earlier generations, the scanning industry still faces technical challenges in collecting data from certain objects and surfaces, such as shiny or transparent materials and some surface textures. Improvements are underway, however, and the ability to scan such features continues to advance.
3D scanners are typically either contact or non-contact. Coordinate measuring machines are an example of the contact variety. Non-contact 3D digitizers work without touching the object and without damaging it. The data obtained from either version can be stored digitally or output in analog form.
Most 3D scanners use triangulation to gather data. The scanner projects a line from a light source onto the object. Usually the light source is a laser, but in some cases light bulbs, ultrasound, or x-rays are used. A sensor, such as a charge-coupled device or position-sensitive detector, records where the reflected light lands, and the distance to the surface is calculated from the geometry of the projector, sensor, and reflected spot.
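One common way to express the triangulation geometry is the law of sines: the light source and the sensor sit a known baseline apart, each observes the surface point at a measured angle from that baseline, and the depth follows directly. The sketch below assumes this simple two-angle formulation; real scanners fold lens and calibration models into the same calculation.

```python
import math

def triangulate_depth(baseline, laser_angle, sensor_angle):
    """Depth of a surface point via triangulation.

    The laser emitter and the sensor are `baseline` apart.
    `laser_angle` and `sensor_angle` (radians) are the angles each
    line of sight makes with the baseline. The law of sines gives
    the sensor-to-point range; projecting that range perpendicular
    to the baseline gives the depth.
    """
    rng = baseline * math.sin(laser_angle) / math.sin(laser_angle + sensor_angle)
    return rng * math.sin(sensor_angle)

# With both sight lines at 45 degrees, the point sits half the
# baseline away from the baseline:
depth = triangulate_depth(1.0, math.pi / 4, math.pi / 4)  # 0.5
```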
To determine position, the object is either marked with reference features or the scanner uses a form of external tracking, sometimes with its own internal coordinate system. The collected data are recorded as points in three-dimensional space. From there they can be converted to a triangulated mesh for use with CAD software.
Some scanners use “structured light” to obtain scanned data. In this method, a pattern of light is projected onto the scanned object. The scanner observes how this pattern deforms as it moves over the object and records the changes. Most scanners use a one-dimensional line for the pattern, but two-dimensional stripe or grid patterns can also be used. Algorithms then calculate the distance of each point in the pattern.
Two-dimensional patterns can miss parts of an object, such as holes or areas where there is a rapid change in depth. Multistripe laser triangulation algorithms can mitigate this problem.
Structured light scanners are fast because they capture multiple points, or an entire field of view, at once.
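For a single-stripe structured light setup, the pattern deformation has a simple interpretation: a raised surface shifts the stripe sideways in the camera image in proportion to the surface height. The sketch below assumes an idealized geometry (camera looking straight down, projector tilted by a known angle from vertical); it is illustrative, not a full structured light decoder.

```python
import math

def height_from_stripe_shift(shift, projection_angle):
    """Surface height from the lateral displacement of a projected
    stripe.

    Assumes the camera looks straight down and the projector is
    tilted by `projection_angle` (radians) from vertical. A surface
    raised by h displaces the stripe by h * tan(angle), so
    h = shift / tan(angle).
    """
    return shift / math.tan(projection_angle)

# With the projector at 45 degrees, a 2 mm stripe shift means a
# 2 mm surface rise:
h = height_from_stripe_shift(0.002, math.pi / 4)
```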
Some scanners use modulated light to obtain scanned data. In this technique, the light source’s amplitude cycles, usually in a sinusoidal pattern. A camera detects the returning light and measures its phase shift relative to the emitted signal, from which the distance the light traveled is determined.
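The phase-to-distance conversion for modulated light is a standard relation: the light travels out and back, so the one-way distance is c·Δφ / (4π·f). A minimal sketch:

```python
import math

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def distance_from_phase(phase_shift, mod_frequency):
    """One-way distance from the measured phase shift (radians) of
    amplitude-modulated light at `mod_frequency` (Hz).

    The round trip contributes a phase of 4*pi*f*d/c, so
    d = c * phase / (4 * pi * f). The result is unambiguous only
    within half a modulation wavelength, c / (2 * f).
    """
    return SPEED_OF_LIGHT * phase_shift / (4 * math.pi * mod_frequency)

# A half-cycle shift at 10 MHz modulation corresponds to about 7.5 m:
d = distance_from_phase(math.pi, 10e6)
```

The ambiguity noted in the docstring is why practical systems often combine several modulation frequencies.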
With laser scanners, low power versions are usually sufficient for rapid prototyping applications. However, if the application requires details about the object surface, such as textures, then a confocal or 3D laser scanner is used.
These scanners typically have a scan head that consists of two mirrors that deflect a laser beam in X-Y coordinates. The third dimension can be found with a specific optic that moves the laser’s focal point in the Z direction. This information is useful with rapid prototyping systems that slice an object into layers.
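The two-mirror arrangement maps mirror rotations to spot positions in a simple way: rotating a mirror by θ deflects the reflected beam by 2θ. The sketch below assumes an idealized head in which both mirrors pivot about the same point, ignoring the small offset between them that real scan heads must calibrate out.

```python
import math

def spot_position(theta_x, theta_y, working_distance):
    """Approximate X-Y position of the laser spot on a flat target.

    `theta_x` and `theta_y` are the two mirror rotations (radians)
    and `working_distance` is the distance from the scan head to the
    target plane. A mirror rotation of theta deflects the beam by
    2 * theta, so the spot lands at roughly d * tan(2 * theta) along
    each axis.
    """
    x = working_distance * math.tan(2 * theta_x)
    y = working_distance * math.tan(2 * theta_y)
    return x, y

# Rotating the X mirror by 22.5 degrees deflects the beam 45 degrees,
# placing the spot one working distance off-axis:
x, y = spot_position(math.pi / 8, 0.0, 0.5)
```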
In some vendor literature, you will see a reference to “disparity.” Such a feature handles the differences in image location when data are collected from at least two points of reference and is used for depth measurement. For example, binocular disparity refers to the difference in the location of an object as seen by the left and right eyes.
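For a rectified stereo pair, the disparity-to-depth relation is the standard pinhole formula: depth = f·B / disparity, where f is the focal length in pixels, B the baseline between the two viewpoints, and disparity the horizontal offset of the same feature between the two images. A minimal sketch:

```python
def depth_from_disparity(focal_px, baseline, disparity_px):
    """Depth of a feature from binocular disparity.

    `focal_px` is the camera focal length in pixels, `baseline` the
    distance between the two viewpoints (same units as the desired
    depth), and `disparity_px` the pixel offset of the feature
    between the left and right images. Larger disparity means the
    feature is closer.
    """
    return focal_px * baseline / disparity_px

# A 35-pixel disparity with a 700-pixel focal length and a 0.1 m
# baseline puts the feature 2 m away:
z = depth_from_disparity(700.0, 0.1, 35.0)
```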