Imaging technologies have taken center stage as the main drivers of many advanced applications in automation. It is no longer a question of whether a system should utilize vision and imaging. Rather, these technologies are frequently required to achieve success. Machine vision also plays a crucial role in the broader automation landscape by advancing productivity relative to concepts in Industry 4.0, such as AR/VR, IIoT, robotic guidance, and big data analytics.
The machine vision component market is booming, evidence of continued and growing demand. Part of the growth can be attributed to advances in existing technologies as well as the introduction of new components, both of which have broadened capabilities for a diverse set of applications. While many general areas in vision and imaging are being impacted, here are a few categories to keep an eye on, along with some notable emerging components.
3D Imaging Continues to Advance
While not an emerging technology, 3D imaging has become a mature and robust part of the machine vision market, featuring both new and updated components and systems for critical advanced automation tasks, including metrology, inspection, and guidance. Use cases are expanding as the reliability, precision, and ease of use of this type of imaging continue to advance.
A key type of imaging system in this category is the scanning laser profilometer (3D profiler). This device uses laser line triangulation to acquire and create a high-precision profile of the surface of a part, usually with the sensor or part in motion. While many companies offer competing products, one new implementation of this type of imaging comes from Automation Technology GmbH (www.automationtechnology.de). Its MCS series of modular sensors allows user configuration of the physical layout of the camera and laser line generators. This unique arrangement provides added flexibility in implementation.
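The geometry behind laser line triangulation can be sketched in a few lines: as the surface height changes, the laser line shifts laterally on the sensor, and the shift maps back to height through the laser angle. A minimal illustration follows; the parameter values and the simplified model (camera looking straight down, laser tilted) are illustrative assumptions, not specific to any product mentioned here.

```python
import math

def height_from_offset(pixel_offset, pixel_size_mm, magnification, laser_angle_deg):
    """Convert the laser line's lateral shift on the sensor into a surface
    height change, using a simplified triangulation model: the camera views
    the part straight down and the laser sheet is tilted by laser_angle_deg."""
    # Shift of the line in object space (mm), scaled by the optical magnification
    shift_mm = pixel_offset * pixel_size_mm / magnification
    # Project the lateral shift onto the height axis via the laser angle
    return shift_mm / math.tan(math.radians(laser_angle_deg))

# Example: a 10-pixel shift with 5 µm pixels, 0.1x magnification, 45° laser
# angle corresponds to a 0.5 mm object-space shift, i.e. 0.5 mm of height.
print(height_from_offset(10, 0.005, 0.1, 45))
```

In a real profiler this calculation is repeated for every column of the image to produce one cross-sectional profile, and motion of the part or sensor stacks profiles into a full 3D surface.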
An advance in existing technology from Cognex (www.cognex.com), the In-Sight 3D-L4000 features new speckle-free blue laser scanning and a broad range of 3D analysis and measurement tools, all implemented within the familiar In-Sight spreadsheet environment. A completely different emerging entry in this category is the Saccade Vision MD 3D imaging system (www.SaccadeVision.com). This device does not require part or sensor motion and can automatically scan a field of view from multiple directions and with multiple variable resolutions within a single image. The new Flash sensor from Teledyne e2v (https://imaging.teledyne-e2v.com) features higher-speed, specialized capabilities specifically targeted for advanced laser scanning systems.
Beyond 3D profiling, many machine vision components acquire a full-frame 3D point cloud. This type of imaging is driving emerging use cases, notably for 3D robotic guidance applications like flexible random part handling and bin picking. Two recent offerings come from IDS (www.ids-imaging.us/ensenso-stereo-3d-camera.html) and Zivid (www.zivid.com); both have introduced ultra-compact and lightweight structured light imaging systems designed for end-of-arm robotic mounting.
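A full-frame point cloud is typically produced by back-projecting a depth image through the camera's intrinsic parameters. The standard pinhole-model conversion can be sketched as follows; the intrinsics used in the comments are placeholders, not values from any camera named above.

```python
def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map (row-major list of rows, depth in metres)
    into (X, Y, Z) points using the pinhole camera model:
    X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z > 0:  # skip invalid (zero-depth) pixels
                points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points

# Toy 2x2 depth map with placeholder intrinsics (fx = fy = 1, principal
# point at the image center); two pixels carry valid depth.
cloud = depth_to_points([[0.0, 1.0], [2.0, 0.0]], 1.0, 1.0, 0.5, 0.5)
```

Robotic guidance applications such as bin picking then operate on this point cloud, typically segmenting it and matching part models against it to compute grasp poses.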
Emerging 3D components from sensor manufacturers also are driving capability in time-of-flight (ToF) imaging. Both Teledyne e2v (https://imaging.teledyne-e2v.com) and Sony (https://www.sony-depthsensing.com) have introduced general-purpose ToF sensors for integration into machine vision cameras. Camera manufacturers are also leveraging these sensors in 3D industrial cameras, such as the updated Helios 2 ToF camera from Lucid Vision Labs (www.thinklucid.com).
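The principle behind ToF imaging reduces to simple arithmetic: light travels to the target and back, so distance is half the round trip. A minimal sketch of both common variants, pulsed and continuous-wave (phase-shift), is below; the modulation frequency in the example is a typical illustrative value, not one taken from the sensors mentioned.

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def pulsed_tof_distance(round_trip_s):
    """Pulsed ToF: distance is half the measured round-trip time times c."""
    return C * round_trip_s / 2

def cw_tof_distance(phase_rad, mod_freq_hz):
    """Continuous-wave ToF: the phase shift of the returned modulated signal
    encodes distance. The measurement is ambiguous beyond c / (2 * f_mod)."""
    return C * phase_rad / (4 * math.pi * mod_freq_hz)

# Example: with 10 MHz modulation, a half-cycle (pi) phase shift corresponds
# to half of the ~15 m unambiguous range.
d = cw_tof_distance(math.pi, 10e6)
```

Each pixel of a ToF sensor performs this measurement in parallel, which is why these sensors deliver a full depth frame without scanning.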
Camera and Interface Improvements
Demands for higher-resolution imaging and increased process throughput drive the need for advanced and high-speed machine vision camera components. Supporting high frame rates with large-data images further requires high-speed interfacing between the camera and processor. High resolution and frame rates are increasingly available in imaging sensors for machine vision, driving new camera offerings.
Emergent Vision Technologies (www.emergentvisiontec.com) utilizes a GPixel 103MPixel CMOS sensor in its new Zenith grayscale/color camera. To optimize the sensor’s available frame rate, the Zenith uses a 100GigE interface. This emerging technology in machine vision provides 100 times the speed of basic GigE connections.
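Back-of-envelope arithmetic shows why a 100GigE interface matters at this resolution: a 103 MPixel frame at 8 bits per pixel is roughly 0.8 gigabits, so a 1GigE link could move only about one frame per second. The sketch below is illustrative only; the 90% usable-bandwidth factor is an assumption for protocol overhead, and the camera's actual bit depth and packet efficiency determine the real figure.

```python
def max_frame_rate(pixel_count, bits_per_pixel, link_gbps, efficiency=0.9):
    """Upper bound on sustainable frame rate over a camera interface,
    assuming a fixed fraction of link bandwidth is usable for pixel data
    (the 0.9 default is an assumption, not a measured value)."""
    bits_per_frame = pixel_count * bits_per_pixel
    return link_gbps * 1e9 * efficiency / bits_per_frame

# A 103 MPixel, 8-bit sensor over 100GigE supports on the order of 100 fps;
# over basic 1GigE the same frame would be limited to roughly 1 fps.
fps_100g = max_frame_rate(103_000_000, 8, 100)
fps_1g = max_frame_rate(103_000_000, 8, 1)
```

The same calculation applies to any interface in the next paragraph: the standard's line rate, times its efficiency, divided by frame size, bounds throughput.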
Other interfaces used in machine vision, such as CoaXPress (CXP) and Camera Link HS (CLHS), have evolving standards for transfer rates that also target higher-speed cameras. CXP frame grabbers that support CXP-over-Fiber include the Euresys (www.euresys.com) QSFP+ board. CLHS, meanwhile, is already capable of 100G over a 4x25G connection, with work currently in process on a standard solution capable of 50G.
Lens technologies continue to advance to keep up with the demanding imaging requirements of evolving automation applications. Important features include expanded image format capabilities that support the larger physical sizes of new high-resolution sensors, such as the 1.4" format megapixel MPT series from Computar (www.computar.com); optics that provide high-quality images when using both visible and nonvisible illumination wavelengths such as shortwave infrared, as in the Computar ViSWIR series and Kowa’s (www.kowa-lenses.com) VIS-SW lenses; and embedded motorized or liquid lens focus control, available, for example, in Edmund Optics’ (www.edmundoptics.com) TECHSPEC LT series and Computar’s LensConnect lenses.
Embedded Systems for Deep Learning
As machine vision systems leveraging deep learning (DL) techniques continue to show promise in several different types of inspection applications, a wide range of components and software for implementing DL inspection has emerged. Some of the most recent are cameras and computing systems with onboard (or embedded) processing for deep learning tasks.
The NEON-2000-JNX smart camera from ADLINK Technology (www.adlinktech.com) has a GPU-based system with additional FPGA support combined with software to perform edge AI. The unique Deepview camera from Deepview AI (www.deepviewai.com) is a self-contained, server-level computing system with imaging that can execute both training and inference for deep learning within a smart camera format. Pleora Technologies (www.pleora.com) offers a computing platform approach intended to facilitate development of AI in machine vision applications.
These examples are only a small subset of emerging technologies that are helping to shape the current landscape of machine vision in automation. The future is very bright, and we can expect to see continued growth in the machine vision marketplace.