This project, in collaboration with AVIOAERO, addresses the automatic inspection of a complex mechanical part. It is devoted to the development of Machine Vision algorithms that automate highly time-consuming and critical quality-inspection processes, verifying whether a (sub-)assembly has been equipped with all the required parts. In the plant, this task is normally performed by dedicated employees without any automatic support. The goal is an automatic solution that, solely from images, determines whether all the sub-parts have been correctly installed on the (sub-)assembly; in practice, the system detects missing parts and raises an alarm. A CAD model of the part serves as the reference for the image-based validation of its components.
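The core check described above can be sketched as a per-component appearance comparison: for each expected sub-part, the CAD model gives the image location where it should appear, and the observed region is compared against a reference appearance. The sketch below is a minimal illustration of this idea, assuming grayscale images and using zero-mean normalized cross-correlation as the similarity score; the function names and the fixed-ROI setup are illustrative assumptions, not the project's actual implementation.

```python
import numpy as np

def ncc(patch, template):
    """Zero-mean normalized cross-correlation between two equally sized patches."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p ** 2).sum() * (t ** 2).sum())
    return float((p * t).sum() / denom) if denom > 0 else 0.0

def check_parts(image, reference_rois, threshold=0.7):
    """For each expected component, compare the image region at its
    CAD-derived location against its reference appearance; return the
    names of components whose similarity falls below the threshold."""
    missing = []
    for name, (y, x, template) in reference_rois.items():
        h, w = template.shape
        region = image[y:y + h, x:x + w]
        if ncc(region, template) < threshold:
            missing.append(name)
    return missing
```

In a real deployment the ROIs would come from registering the CAD model to the image rather than from fixed pixel coordinates, and a learned detector would likely replace the raw correlation score.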
Automatic visual inspection of object surfaces
This task addresses AVIOAERO's strict requirements on the visual integrity of parts and components, including purely aesthetic ones. In particular, damage such as scratches and stains, even when it does not affect the functioning of a part, can cause rejection by the customer and must be detected and eliminated. At present, visual inspection is carried out by a human operator, and an automatic solution is needed for a variety of parts produced by AVIOAERO. For some parts this task has already been solved with existing commercial solutions, but the high complexity of many other elements manufactured by AVIOAERO places them beyond the reach of currently available commercial systems; hence the need for a research project targeting a set of the most critical parts selected by AVIOAERO.
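One simple baseline for this kind of surface inspection is reference comparison: the inspected surface image is subtracted from a defect-free reference of the same part under the same viewpoint and illumination, and large deviations are flagged as candidate scratches or stains. The sketch below illustrates only that baseline idea, under those (strong) alignment assumptions; the thresholds and function names are illustrative, not the system actually developed.

```python
import numpy as np

def detect_defects(image, reference, diff_thresh=0.2, min_area=5):
    """Flag candidate surface defects as pixels where the inspected image
    deviates strongly from a defect-free, aligned reference image.
    Returns the binary defect mask and a reject/accept decision."""
    diff = np.abs(image.astype(float) - reference.astype(float))
    mask = diff > diff_thresh
    # Reject the part only if the deviating area is large enough to be
    # a real scratch/stain rather than pixel noise.
    return mask, bool(mask.sum() >= min_area)
```

On complex geometries this pixel-wise scheme breaks down, which is precisely why the commercially solvable cases differ from the critical ones targeted by the research project.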
Automatic detection and localization of machining swarf in engine oil pipes
This project, also in collaboration with AVIOAERO, concerns the visual inspection of the oil pipes inside an aerospace mechanical part using a camera sensor. It delivers a complete system for automatic visual inspection, capable of detecting, recognizing, and removing any debris left in the pipes by previous production processes. Inspection inside the very thin pipes (down to 2.5 mm) is performed with a digital borescope camera handled by a robotic arm. The computer vision module uses a deep learning framework to analyze the scene and locate debris inside the pipe. The algorithm also recognizes and classifies different types of debris, since each type must be removed with a different special tool. The system also provides a simple GUI to support the human operator in this highly challenging task.
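The paragraph above implies a detect → classify → route-to-tool pipeline. The sketch below illustrates just the final routing step under invented assumptions: the debris class names, confidence threshold, and tool mapping are all hypothetical placeholders, since the source does not specify them.

```python
# Hypothetical debris classes and removal tools, for illustration only.
DEBRIS_TOOLS = {
    "metal_chip": "magnetic_probe",
    "plastic_shaving": "suction_nozzle",
    "dust_cluster": "air_blast",
}

def route_detection(class_scores, min_confidence=0.5):
    """Pick the most likely debris class from (hypothetical) classifier
    scores and return the removal tool assigned to it, or None when the
    detection is too uncertain to act on automatically."""
    label = max(class_scores, key=class_scores.get)
    if class_scores[label] < min_confidence:
        return None  # leave ambiguous detections to the human operator
    return DEBRIS_TOOLS[label]
```

Returning `None` on low confidence reflects the role of the GUI in the described system: uncertain cases are escalated to the operator instead of being handled blindly.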
Automatic 6D Pose Estimation
This project, in collaboration with OMRON, develops a method for 6D pose estimation of complex texture-less objects from a single RGB image. This class of objects is common in any environment but still challenging to deal with, because the distribution of surface brightness makes it difficult to compute interest points or appearance-based descriptors. The novel part-based method uses an efficient template matching approach in which each template independently encodes the similarity function using a Forest trained over the templates. Accuracy is further improved by using a cascade of the learned forests. These template forests, together with the simplicity of the computed image features, allow a quick estimate of the pose, achieving real-time performance. Performance is demonstrated on both synthetic and real images with known ground truth.
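The speed of the approach comes from its cascade structure: cheap scoring stages rapidly discard most template (and hence pose) hypotheses, so the expensive evaluation touches only a few survivors. The sketch below shows that coarse-to-fine filtering pattern only, with a plain dot-product similarity standing in for the learned forest response; it is a structural illustration, not the published method.

```python
import numpy as np

def cascade_match(feature, templates, keep=(4, 1)):
    """Coarse-to-fine cascade over template hypotheses: each stage scores
    the surviving templates against the image feature (dot-product
    similarity here as a stand-in for the forest response) and keeps only
    the top-scoring ones. The final survivor indexes the 6D pose
    hypothesis attached to that template."""
    candidates = list(range(len(templates)))
    for k in keep:
        candidates = sorted(
            candidates, key=lambda i: -float(feature @ templates[i])
        )[:k]
    return candidates[0]
```

Because later stages only ever see the few templates that passed earlier ones, total cost stays near-constant even for large template sets, which is what enables real-time pose estimation.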
Marker based 3D Registration
We performed automatic 3D registration using markers placed on objects with complex geometry, observed by a sensor network framing the entire scene. The network can be composed of several sensors, such as cameras (with different resolutions and wavelength sensitivities), lasers, 3D stereo cameras, and others. Registration can be visualized with a software tool showing a 3D model of the scene. In the figure, the visibility map between the camera network and the marker boards is displayed: cameras are depicted as circles, marker boards as squares, and lines linking the nodes represent the visibility of a board from a given camera. The tool can also show a virtual representation of the object after multi-board bundle adjustment (BA): the green cameras represent the initial positions of the cameras, and the yellow cameras are the new positions found by BA. The squares are the models of the marker boards, and the red lines are the 3D rays passing through the optical center of camera 1 and the corners of the markers.
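The visibility map described above can be computed geometrically: a board is linked to a camera when its corners project in front of that camera and inside its image bounds. The sketch below shows that test for a standard pinhole model, ignoring lens distortion and occlusion; the function names and the all-corners-visible criterion are simplifying assumptions for illustration.

```python
import numpy as np

def project(K, R, t, X):
    """Project 3D world points X (N x 3) into a pinhole camera with
    intrinsics K and pose (R, t). Returns pixel coordinates (N x 2)
    and the points' depths in the camera frame (N,)."""
    Xc = (R @ X.T).T + t            # world frame -> camera frame
    uvw = (K @ Xc.T).T              # camera frame -> homogeneous pixels
    return uvw[:, :2] / uvw[:, 2:3], Xc[:, 2]

def board_visible(K, R, t, corners, width, height):
    """A marker board counts as visible to a camera if all its corners
    project in front of the camera and inside the image bounds
    (occlusion and lens distortion are ignored in this sketch)."""
    uv, depth = project(K, R, t, corners)
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < width) & \
             (uv[:, 1] >= 0) & (uv[:, 1] < height)
    return bool(np.all(inside & (depth > 0)))
```

Running this test for every camera/board pair yields exactly the bipartite graph shown in the figure: circles (cameras) linked to squares (boards) they can see, which in turn determines which corner observations enter the multi-board bundle adjustment.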
- M. San Biagio, C. Beltran-Gonzalez, S. Giunta, A. Del Bue, V. Murino
"Automatic inspection of aeronautics components"
Machine Vision and Applications, 2017
- E. Muñoz, Y. Konishi, C. Beltran, V. Murino and A. Del Bue
"Fast 6D Pose from a Single RGB Image using Cascaded Forests Templates"
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2016
- E. Muñoz, Y. Konishi, V. Murino, A. Del Bue
"Fast 6D Pose Estimation for Texture-less Objects from a single RGB image"
International Conference on Robotics and Automation (ICRA), 2016