Most of today’s autonomous systems operate in tightly controlled environments such as mass-production factories, where unforeseen tolerances are eliminated by an (over-)engineered approach. Autonomous systems have had far less impact in small-series production, where the tolerances of components, tooling, and processes exceed the required end-product accuracies. This limitation stems from the limited geometrical awareness of these systems. It has long been recognized that sensor integration is fundamental to widening the application domain of autonomous systems, because re-engineering every situation is not cost-effective. The obvious way forward is to add sensors and intelligence. Some applications can be handled with a look-before-move approach, while others have dynamic requirements so demanding that they call for looking while moving. When control is closed around this latter method, it is called Vision-in-the-Loop. Vision-in-the-Loop increases accuracy by actively suppressing disturbances and by generating adaptive motion trajectories for the control loop.
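The looking-while-moving idea can be illustrated with a minimal simulation: every camera frame yields a measured position error, and a simple integral-action update drives a correction against a sinusoidal disturbance. This is a hypothetical sketch, not TNO's implementation; the gain and disturbance values are illustrative only.

```python
import math

FRAME_RATE = 5000.0    # camera frames per second (illustrative)
DT = 1.0 / FRAME_RATE  # control period in seconds

def residual_error(gain, n_frames=5000, dist_freq=150.0, dist_amp=1.0):
    """Simulate one vision-in-the-loop axis and return the largest
    residual error observed in the second half of the run."""
    correction = 0.0
    worst = 0.0
    for k in range(n_frames):
        t = k * DT
        disturbance = dist_amp * math.sin(2 * math.pi * dist_freq * t)
        error = disturbance - correction   # what the camera measures
        correction += gain * error         # integral-action update
        if k > n_frames // 2:              # ignore the start-up transient
            worst = max(worst, abs(error))
    return worst
```

Running `residual_error(0.8)` versus the uncontrolled case `residual_error(0.0)` shows the loop shrinking a 150 Hz disturbance to a few times below its original amplitude, which is the essence of active disturbance suppression.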
The common challenges with Vision-in-the-Loop systems stem from high data rates combined with low-latency requirements. Together with partners in the display industry, the food-processing industry, big science, and safety & security, TNO is solving this by combining optics and algorithms with high-performance computing and hard real-time operating systems in one single integrated development and implementation approach.
A typical example, shown in the image: increased accuracy of position control during display production, where deformations exceed the OLED pixel grid. A high-speed camera observes the OLED surface. With 5,000 images/s and a signal latency below 500 µs, disturbance rejection at frequencies as high as 150 Hz can be realized.
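A back-of-envelope check makes clear why the latency figure matters, assuming the signal latency acts as a pure transport delay: a delay of T seconds adds a phase lag of 360·f·T degrees at frequency f, which eats into the phase margin of the control loop.

```python
def phase_lag_deg(freq_hz, delay_s):
    """Phase lag in degrees that a pure time delay adds at a given frequency."""
    return 360.0 * freq_hz * delay_s

# With the numbers from the display example: a 500 microsecond delay
# at the 150 Hz rejection target costs 360 * 150 * 0.0005 = 27 degrees,
# small enough to leave usable phase margin in the loop.
lag = phase_lag_deg(150.0, 500e-6)
```

The 5,000 images/s frame rate similarly gives about 33 camera samples per 150 Hz disturbance period, comfortably above the sampling rates typically needed to control a disturbance at that frequency.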
If you face a similar challenge, please contact us.