Machine vision is a technology used to automatically inspect and measure objects and surfaces. Typically, a visible or infrared light source illuminates the target surface, and a machine vision camera or position sensitive detector (PSD) captures the reflection. Software then reconstructs objects in a digital environment, verifies product conformity, performs presence checks, or detects faults along a production line. This can increase inspection speed, reduce costs, and improve the accuracy of quality control processes in industry.


Lasers are widely used as illumination sources in machine vision for the following reasons:

– They can be focussed to fine dots or lines that illuminate a small surface area and, therefore, accentuate small surface features. This allows imagers to measure scratches, pits, or areas of roughness with high resolution.
– Many laser modules have a large depth of focus, allowing objects of various thicknesses to be inspected without the need for re-focussing.
– Various optical patterns, wavelengths, and output powers are available to suit objects of different shapes, sizes, and textures in high or low ambient lighting conditions.
– High signal-to-noise ratio (SNR) can be achieved using a bandpass filter that transmits the laser wavelength and blocks ambient light before it reaches the camera or detector.
– As laser diodes typically emit linearly polarised light, reflections from other nearby sources can be removed using polarisation filters.



Triangulation is a branch of machine vision that uses a laser source and position sensitive detector (PSD) to measure variations in the depth of a target surface. The resulting data is used by software to detect movement or create a depth map illustrating surface features such as areas of roughness, edges, intersections, curvature, etc.

A simple laser dot and linear position sensitive detector (e.g. a photodiode array) can be used for triangulation. The detector, laser source, and laser dot (incident on the target surface) are arranged in a triangle in the working space. The distance between the detector and the laser source, and the angle of the beam axis relative to the detector, are known properties set by the user during setup. Manufacturers of machine vision systems may also fix the source and detector into position inside a compact housing.

The laser dot is diffusely reflected by the target surface, and some of the reflected rays are focussed by a collection lens in front of the camera. The position of the reflected laser dot incident on the detector is then measured. By tracing the path of the beam from the detector, through the centre of the collection lens, and to the target surface, the remaining sides and angles that form the triangle can then be calculated.

The complete size and geometry of the triangle determines the distance to the laser dot from its source and, hence, the depth of the target surface at that point. A variation in depth as the target surface is scanned changes the position of the laser dot on the detector. As a result, software uses position data supplied by the detector to measure movement or generate a depth map.
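For the simplest geometry – the laser beam perpendicular to the baseline joining the source and the lens centre – the depth follows directly from similar triangles. The sketch below illustrates that relation; the function name, parameters, and the perpendicular-beam assumption are illustrative, not taken from any particular system.

```python
def depth_from_spot_position(x_detector, baseline, focal_length):
    """Depth of the laser dot for a simple triangulation geometry.

    Assumes the laser beam is perpendicular to the baseline between
    the source and the lens centre. x_detector is the spot's offset
    from the optical axis on the detector, in the same units as
    baseline and focal_length. Illustrative sketch only.
    """
    # Similar triangles: x_detector / focal_length = baseline / depth
    return focal_length * baseline / x_detector
```

For example, with a 50 mm baseline, a 20 mm focal length, and a 2 mm spot offset on the detector, the dot lies 500 mm from the lens plane; if the offset doubles, the depth halves.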

Figure: triangulation with a single laser dot


It can be time consuming to perform triangulation on large surfaces or a collection of objects using a laser dot because data is collected at only a single point at a time. Unlike linear detectors, CMOS/CCD cameras can simultaneously capture light reflected from more than one point in their field of view. As a result, a laser line or structured light can also be used for triangulation. Such methods are typically used to measure features over a large surface area more quickly and efficiently than a laser dot.

When incident on the target surface, a laser line appears distorted (e.g. curved or ragged) in the camera’s perspective if there are variations in depth along the line. These variations generate pixel shifts in the camera which are used by software to calculate the distance to every point along the line. The line can also be swept across the entire field of view of the camera to reconstruct three-dimensional objects and surfaces in a digital environment – a process known as “3D mapping” or “object reconstruction”.
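The pixel-shift idea can be seen with a simple forward model: given a depth for each image column, compute where the line lands on the sensor. A step in depth then appears as a jump in the imaged line, which is exactly what the software inverts to recover distance. The geometry and names below are illustrative assumptions (the same perpendicular-beam setup as the single-dot case), not a specific product's model.

```python
import numpy as np

def line_position_px(depths, baseline, focal_length, pixel_pitch):
    """Sensor offset (in pixels) at which the laser line appears,
    one value per image column, for a perpendicular-beam geometry.

    depths: depth of the target surface at each column (e.g. in mm).
    Nearer surfaces map to larger offsets, so a step in depth
    produces a visible discontinuity in the imaged line.
    """
    z = np.asarray(depths, dtype=float)
    # Same similar-triangles relation as the single dot, applied
    # column by column: offset_on_sensor = f * b / z
    return focal_length * baseline / (z * pixel_pitch)
```

With a 50 mm baseline, 20 mm focal length, and 10 µm pixels, columns at 500 mm depth image at a 200-pixel offset while columns at 250 mm image at 400 pixels, so a 250 mm step shows up as a 200-pixel jump in the line.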

A conveyor belt, motorised stage, laser scanning mirrors (e.g. a galvanometer scanner), or another means of scanning a surface or moving an object through the beam may not be available to the user. In such applications, structured light may be used to obtain a 3D map without the need for scanning.

Laser light can be shaped into a two-dimensional geometric pattern (“structured light”), typically using a diffractive optical element (DOE). The most common patterns include dot matrices, line grids, multiple parallel lines, and circles; the pattern is chosen to balance resolution against measurement speed for the object at hand. Typically, the structured light pattern is projected across the entire target surface at once. Upon collection of the reflected light, an algorithm in the software creates a 3D point cloud by comparing the reflection with that of a reference surface, and from this comparison infers the profile of the measurement surface.
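One way to picture the reference comparison is as a disparity calculation: each pattern dot shifts, relative to where it fell on a flat reference plane, by an amount that encodes depth. The sketch below assumes the same perpendicular-beam geometry as before, with the shift along one image axis; the names and the simplified depth formula are illustrative assumptions, not a specific vendor's algorithm.

```python
import numpy as np

def depths_from_pattern(disp_px, z_ref, baseline, focal_length, pixel_pitch):
    """Depth at each matched pattern dot from its pixel shift
    relative to a flat reference plane at distance z_ref.

    For the perpendicular-beam geometry, a dot at depth z lands at
    sensor offset f*b/z, so the measured disparity is
        d = f*b*(1/z - 1/z_ref)  =>  1/z = 1/z_ref + d/(f*b).
    All parameter names are illustrative.
    """
    d = np.asarray(disp_px, dtype=float) * pixel_pitch  # shift on sensor
    return 1.0 / (1.0 / z_ref + d / (focal_length * baseline))
```

A dot with zero disparity lies on the reference plane; positive disparity means the dot is nearer than the reference, negative means farther.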

Figure: laser-based 3D surface mapping


Stereo vision uses two or more cameras and a structured light pattern to perform triangulation. This technique is typically used when an algorithm to create 3D point clouds using a single camera is unavailable. Instead, two or more cameras simultaneously capture an image of the structured light from different angles. The resulting images of the same surface are then compared during digital image processing. An accurate representation of the measurement surface is then generated by matching corresponding features of the structured light from different perspectives.
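After the camera images are rectified, matched features differ only by a horizontal disparity, and the standard pinhole stereo relation z = f·b/d recovers depth. The sketch below applies that relation to already-matched features; the matching step itself (finding the same structured-light dot in both images) is assumed done, and the parameter names are illustrative.

```python
def stereo_depths(disparities_px, baseline, focal_length, pixel_pitch):
    """Depths from feature disparities between two rectified cameras.

    disparities_px: horizontal pixel offsets of the same structured-
    light feature between the left and right images. Applies the
    standard stereo relation z = focal_length * baseline / disparity.
    """
    return [focal_length * baseline / (d * pixel_pitch)
            for d in disparities_px]
```

Larger disparities correspond to nearer surface points, so the spread of disparities across the matched pattern features directly traces the relief of the measurement surface.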