At ICALEO 2021 Matthew Dale learned of the numerous ways that AI can optimise laser-based manufacturing
The increasing presence of artificial intelligence (AI) in laser materials processing was brought to the foreground of this year’s ICALEO conference, with the topic dominating both the panel discussion and the two opening plenaries.
It was made clear at the virtual event that AI is one of the key tools for automating complex tasks such as sensor parameterisation, as well as unlocking the ‘dream’ of closed-loop laser materials processing – where laser machines adjust their parameters on the fly using process monitoring data to achieve optimal results at the workpiece.
Automating expertise
Before process monitoring sensors can acquire meaningful data, they must first be optimised to obtain clear signals from the workpiece. This was highlighted in the opening plenary by Julia Hartung, who is currently pursuing a doctorate at the Karlsruhe Institute of Technology in cooperation with Trumpf Laser. Together with her colleagues, she has developed an evolutionary AI model capable of optimising optical coherence tomography (OCT) sensor parameterisation for assessing the quality of welding processes.
‘Many laser applications are supported or monitored by complex sensor technologies,’ she explained. ‘These sensors often have to be adjusted by lots of different parameters depending on the application, materials and components being processed. First, the employees need to have expert knowledge and experience to be able to handle the process sensors. But even then, it’s often difficult to find the optimal parameter set for certain components in a short time – with production processes usually being time critical. At this point AI can be used to parameterise the sensors optimally.’
With the example of using OCT to monitor the welding of copper hairpins in the production of electric motors, Hartung described how the OCT sensor’s data acquisition is regulated by 11 separate parameters, including exposure time, background threshold, auto crop and scaling.
‘The parameters are partly interdependent and cannot be defined separately, while also having different parameter spaces, so in total there are too many combinations to use trial and error to optimise them all,’ she said. ‘Our idea is that the user only has to press one button, then a hidden evolutionary algorithm will find the optimal parameters matched to the workpiece and adjust them. Our goal is to obtain a clear OCT sensor signal to provide a meaningful height profile of the component that is free of noise and artifacts.’
AI can be used to automatically optimise sensor equipment to monitor applications such as welding. (Image: Shutterstock/Pixel B)
Through an AI-based enhancement, the evolutionary algorithm developed by Hartung and her colleagues is able to perform parameter determination better and faster as it increasingly learns about a type of workpiece – reducing the time for an optimisation run from nearly a minute for completely new workpiece types, to less than 10 seconds for already known or similar parts.
Once the algorithm is integrated into a system, an operator would simply have to enter the type of workpiece, push a button, and the OCT sensor parameters would be optimised to capture a clear signal. This data could then be used, for example, to determine the workpiece position before welding, or to assess quality after welding.
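For readers curious what such a one-button optimisation might look like under the hood, the following is a minimal Python sketch of an evolutionary search over a handful of sensor parameters. It is not the Trumpf/KIT algorithm: the parameter names, ranges and the quality score are placeholders, and a real system would score each candidate setting by acquiring and rating an actual OCT height profile.

```python
import random

# Hypothetical parameter space: the names and ranges are illustrative only,
# not the actual 11 OCT parameters described in the talk.
PARAM_SPACE = {
    "exposure_time_us": (10.0, 500.0),
    "background_threshold": (0.0, 255.0),
    "scaling": (0.5, 2.0),
}

def acquire_and_score(params):
    """Stand-in for acquiring an OCT height profile with these settings and
    rating its quality (higher = cleaner signal). Purely synthetic so the
    sketch runs end to end; a real system would drive the sensor and score
    the profile for noise and artifacts."""
    return -sum(((params[k] - (lo + hi) / 2) / (hi - lo)) ** 2
                for k, (lo, hi) in PARAM_SPACE.items())

def random_individual():
    return {k: random.uniform(lo, hi) for k, (lo, hi) in PARAM_SPACE.items()}

def mutate(parent, rate=0.3):
    child = dict(parent)
    for k, (lo, hi) in PARAM_SPACE.items():
        if random.random() < rate:
            child[k] = min(hi, max(lo, child[k] + random.gauss(0, 0.1 * (hi - lo))))
    return child

def optimise(seed=None, population=8, generations=20):
    # Warm start: seeding with the settings of an already-known, similar
    # workpiece shortens the search, mirroring the drop from roughly a minute
    # to under 10 seconds reported for known parts.
    pop = ([mutate(seed) for _ in range(population)] if seed
           else [random_individual() for _ in range(population)])
    best = max(pop, key=acquire_and_score)
    for _ in range(generations):
        pop = [mutate(best) for _ in range(population)] + [best]
        best = max(pop, key=acquire_and_score)
    return best

print(optimise())
```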
Hartung and her colleagues have also developed an evolutionary AI model for performing image-based quality assessment of overlap welding of steel sheets using data from an on-axis camera. This image-based approach was developed for in-process monitoring, where spatter detection can be performed directly in the process. Based on the occurrence of spatter, a conclusion can be drawn about the quality of the weld seam.
She described how only 251 images were required to train the artificial neural network. This training dataset consisted of 74 ‘good weld’ images and 177 ‘bad weld’ images. Bad welds are, for example, those where there is a gap between the sheets, where the laser power is reduced, or where defocusing of the scanner optics occurs. In addition, spatter can be seen in many of the images. Such training apparently takes around one to two hours; however, Hartung later hinted in a Q&A that in a similar use case, as few as 20 images could be used to train the model on a new application.
To get by with so little training data, the model was built on a small network architecture and is based on semantic segmentation – a pixel-precise class prediction. During training this has the advantage that every pixel can be treated as a training instance, rather than only the individual objects in the images (for example, the entire weld seam), as in object detection. In addition, the training pipeline leans heavily on data augmentation: translation, rotation, scaling and illumination changes were applied to extend the dataset. (More details on how training is possible with such limited data were not revealed at the time.)
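As an illustration of the kind of augmentation Hartung describes, the snippet below sketches a torchvision pipeline applying translation, rotation, scaling and illumination changes on the fly. The transformation ranges are assumptions; the actual Trumpf pipeline and its parameters were not disclosed.

```python
from torchvision import transforms

# Illustrative augmentation covering the transformations named in the talk
# (translation, rotation, scaling, illumination); the ranges are guesses.
augment = transforms.Compose([
    transforms.RandomAffine(
        degrees=15,             # rotation
        translate=(0.1, 0.1),   # translation, as a fraction of image size
        scale=(0.9, 1.1),       # scaling
    ),
    transforms.ColorJitter(brightness=0.3, contrast=0.2),  # illumination
    transforms.ToTensor(),
])

# Applied during training, each of the 251 labelled images yields many
# slightly different samples per epoch, stretching a small dataset further.
```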
While the newly developed AI models are not yet commercially available, Hartung noted that Trumpf is beginning to introduce them to industry partners in various projects.
Closing the loop
In order to use captured images to close the loop in materials processing, the process monitoring data must first be interpreted correctly – a task that has previously proved challenging, explained Professor Carlo Holly, head of the Chair for Technology of Optical Systems (TOS) at RWTH Aachen University, in his opening plenary.
‘What we’ve been doing for a while is collecting all this data, but we haven’t been able to make sense of it,’ he confirmed. ‘So this is something where AI can help a lot. By reading all these parameters we can identify how they each influence the workpiece, and then of course our dream is to close the loop finally. If we can reverse engineer it all and say “okay, this part has a defect because the laser fluctuated or it was too hot or too cold at this process stage”, then with enough computing power this can be done on the fly. You can then take the necessary countermeasures and build a self-optimising machine.’
Researchers at Fraunhofer ILT, in collaboration with colleagues from RWTH Aachen, have begun exploring this approach for laser powder bed fusion (LPBF) to analyse and optimise surface quality in real time.
AI can be used to analyse and optimise surface quality in closed-loop additive manufacturing. (Image: Shutterstock/Pixel B)
‘In LPBF the part quality is very crucial,’ Holly remarked. ‘Assessing part quality involves examining the created surface topography to determine the roughness and whether there are defects. Usually you take out the finished parts and put them under a white light interferometer to examine the surface topography. However, this is impractical for inline processing as white light interferometers aren’t present during the build. What you instead need to do is rely on optical images taken during the process.’

The research partners have therefore developed a convolutional neural network (CNN) capable of predicting local surface roughness based on optical images of LPBF processes. ‘It gives us super-fast direct feedback about the quality of our process results,’ explained Holly. ‘Now we can use this as an input for a reinforcement learning block and actually train the system how to improve this local surface roughness or decrease defects.’
Reinforcement learning works by having the environment report back whether a result is good or bad. Based on these negative or positive rewards, the network then adjusts its output to achieve the highest-value result.
‘By doing this the CNN is able to adjust the laser power and scan velocity of the LPBF system, and then every time a new layer is built, the surface roughness is determined from optical images and fed back into the reinforcement learning system,’ said Holly. ‘The system then evaluates how its parameter adjustments have affected the result, and then uses this as a benchmark for the next layer, where it can optimise the parameters further to improve the overall process.’
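The sketch below illustrates the shape of such a per-layer feedback loop. It is a deliberately simplified stand-in rather than the Fraunhofer ILT/RWTH implementation: the roughness measurement fakes a process response so the example runs, and the simple keep-the-better-setting update stands in for a full reinforcement learning policy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical optimum, used only to fake a process response; in reality the
# roughness would come from the trained CNN applied to an optical image of
# the layer just built.
OPTIMAL_POWER_W, OPTIMAL_SPEED_MM_S = 230.0, 900.0

def measured_roughness(power_w, speed_mm_s):
    """Stand-in for: build one LPBF layer, image it, and let the CNN
    estimate the local surface roughness (in microns)."""
    return (3.0 + 0.002 * (power_w - OPTIMAL_POWER_W) ** 2
                + 0.0001 * (speed_mm_s - OPTIMAL_SPEED_MM_S) ** 2
                + rng.normal(0, 0.2))

# Per-layer feedback: try a perturbed parameter set and keep it if the layer
# came out smoother (reward = negative roughness). A real reinforcement
# learning agent would learn a policy across many builds; this hill-climbing
# loop only illustrates using the roughness estimate as the feedback signal.
power, speed = 200.0, 800.0
best = measured_roughness(power, speed)
for layer in range(50):
    trial_p = power + rng.normal(0, 5.0)
    trial_s = speed + rng.normal(0, 20.0)
    roughness = measured_roughness(trial_p, trial_s)
    if roughness < best:
        power, speed, best = trial_p, trial_s, roughness

print(f"final roughness estimate: {best:.2f} um at {power:.0f} W, {speed:.0f} mm/s")
```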
By using this technique, the developed CNN was able to improve the surface roughness of the layers of a build from an initial 10 ± 5μm down to 3.3 ± 0.28μm by the end of the build. ‘This is a great result that shows how you can actually optimise your process inline using a reinforcement learning model,’ Holly remarked. He added that a future stage of development would be to expand this self-optimising capability from one machine to a whole facility of machines. ‘Then what you can do is say: “Okay, if this machine learned how to optimise a certain process and get all this data, then it can tell the other machines how to optimise their processes as well.” And with that you enter the world of business optimisation using digital shadows and big data.’
Additional examples
Holly presented another example of digital solutions optimising laser processing, showing how they can also be applied to a relatively new laser application: multi-beam materials processing using ultrafast lasers. Here, an incoming high-power beam with more than 1kW of average power is split into 64 individual beams in an 8 x 8 dot matrix. These beams can then be moved over a surface and individually switched on and off to produce arbitrary structures.
He explained that the data of the process – scanner position, laser power, temperature information from cameras, etc – can all be fed into a control system that then closes the loop to the active optical system that switches the individual beams on and off.
‘We rely on fast real-time processing because when moving this spot pattern, we actually have to know exactly where each individual spot is and when to switch it on and off to actually create the desired structures,’ said Holly. ‘If you sum it up, the data acquired during this process is about 50 gigabytes per hour. So this is why we need really fast hardware to process it in real time and actually close this loop here.’
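For a sense of scale, the quoted 50 gigabytes per hour corresponds to a sustained rate of roughly 14MB/s that the control hardware must ingest and act on continuously – a quick back-of-the-envelope check:

```python
# Sustained bandwidth implied by the ~50GB/hour figure quoted by Holly.
bytes_per_hour = 50e9
rate_mb_s = bytes_per_hour / 3600 / 1e6
print(f"~{rate_mb_s:.1f} MB/s sustained")   # ~13.9 MB/s
```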
Multi-beam modules can be used to split the high-power beam of an ultrafast laser into a matrix of lower-power beams. (Image: Fraunhofer ILT).
He remarked that AI is especially useful for these types of applications, where very large datasets with unknown or highly complex correlations between parameters need to be handled: ‘If you have a certain parameter set and then you have a certain process result, it’s often not that easy to correlate them all in terms of defects. For example, depending on your laser power or process speed, you might have a certain defect in your part, and it’s just not easy to correlate this upfront.’
Fraunhofer ILT and RWTH Aachen University have also been exploring AI for laser welding, having developed their own model capable of identifying defects in weld images. To create the model, over 17,000 weld images from 10-15 welding trials were initially taken using a near-infrared camera and then classified by humans using metallographic characterisation, with five defect categories annotated. These were then fed into a deep neural network, which, once trained, was able to identify whether a newly presented image contained a weld defect.
‘The resultant model gives a really good correlation,’ confirmed Holly. ‘Running it through real process data offers a great prediction of whether there is a weld defect, a lack of penetration or no weld at all, etc. So this is a really nice model which can now be used for real-time processing. Of course you need to have fast hardware, like GPUs and FPGA hardware, to really do it online and in real time, but this is a great demonstration of how AI can be used in process analysis.’
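As a rough indication of what such a defect classifier can look like, the following is a minimal PyTorch sketch. The architecture, layer sizes and the assumption of six output classes (the five annotated defect categories plus a defect-free class) are illustrative guesses; the actual Fraunhofer ILT/RWTH Aachen network was not described in detail.

```python
import torch
from torch import nn

NUM_CLASSES = 6  # assumed: five annotated defect categories plus 'no defect'

# Small illustrative CNN for single-channel near-infrared weld images.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, NUM_CLASSES),
)

loss_fn = nn.CrossEntropyLoss()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)

def training_step(nir_images, labels):
    """One supervised step on a batch of labelled weld images
    (nir_images: float tensor of shape [batch, 1, H, W])."""
    optimiser.zero_grad()
    loss = loss_fn(model(nir_images), labels)
    loss.backward()
    optimiser.step()
    return loss.item()
```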
Hype, or the next revolution?
Hartung and Holly were brought together with Professor Volker Sorger of George Washington University in a panel discussion moderated by Professor William Steen – a pioneer who is commonly referred to as ‘the father of laser materials processing’ in the industrial laser community.
It was rather fitting to see Steen, who founded the world’s first university-based research group in laser material processing in 1968, learning about the many ways that such cutting-edge technology as AI is influencing an industry that he himself helped build.
Having seen the rise of numerous technologies over his extensive career, one of the main points Steen queried the panel about was whether they thought AI was simply hype, or if it could indeed be ‘the next revolution’.
AI was the focal point of the panel discussion at this year’s ICALEO. (Image: The Laser Institute)
‘The community is rather excited and I would add, rightly so,’ began Sorger. ‘The reason being, that automation has forever driven human development and technologies, and AI is just the next (expected) level of automation.’ He highlighted that while with any hype, there needs to be an air of caution – especially from the perspective of scientists and engineers – from a developmental perspective, having a bit of excitement can be fruitful. ‘Because without such an initial momentum, development might be slow or possibly missed for decades. Case in point, neural networks were developed in the 1950s already, yet a long 50-year AI-winter followed, which was just resurrected over the last 1-2 decades accelerated by big-data, special-purpose computing hardware (e.g. GPUs), and machine learning algorithms (e.g. deep learning),’ he said.
Holly weighed in that there is definitely a future for AI and that he believes that we are indeed in the next revolution. ‘I think every field of technology, and every application in these fields where AI can be applied to, is at a different point in the hype chart,’ he said. ‘For us, speaking from the perspective of building lasers and optical systems for materials processing, I think we are very much at a point of excitement. We are at the point where we are trying to determine “okay, where does AI really help and where do we really benefit from it?” And I think if we do identify benefits, it’s extreme benefits that we’re seeing. However, for certain applications, we are seeing that classical solutions are still the most optimal – AI doesn’t need to be applied to everything.’
Hartung agreed: ‘I think AI is very much in a hype stage at the moment, and so there’s a tendency to want to use it for everything, but there are indeed many applications where we don’t need it, where there are classical algorithms that work just fine. So we should look at AI as a toolbox with different instruments, and we have to identify when to use it and when not. I do believe that AI has a future, but it is also in a hype stage at the moment.’
Sorger highlighted the need for caution surrounding the choice of data used to train the neural networks that are performing AI and machine learning tasks. As he explained, every new technology offers positive aspects for society, but it might also be misused, and in this case ‘mis-trained’. ‘If we train a system on certain data sets that happen to favour a particular outcome, then we create a biased AI decision-making system. Thus, we must be vigilant with respect to the data that we feed into the neural network during the training step,’ he confirmed. ‘Nonetheless, AI could be used to free humanity from repetitive tasks or those with relatively straightforward decision-making. Interestingly, the amount of trust that we bestow on automated systems to make decisions will depend on our oversight and validation of AI training data sets. Not surprisingly, these datasets are often trade secrets.’
‘That point is very important,’ remarked Holly. ‘Do you just listen to the machine and then still make the decision as a human, or do you just let it all run by itself? I’m pretty optimistic that AI will bring continuous benefits to the field of laser processing, especially in quality control and in closing the loop.’
Hartung also sees big opportunities in AI, however she believes it’s important to increase the transparency of the technology and its algorithms to help encourage its adoption. ‘It’s important to explain the algorithm a bit more, which is known as “explainable AI”,’ she said. ‘Currently it’s a big black box – and a lot of people don’t like black boxes – and we can’t say with 100 per cent certainty if the AI is generating the right result or not. For example, maybe the training data isn’t good enough, or perhaps we missed something out during the training process. So I think while AI will be a good toolbox to use in the future, explainability will also be very important.’
Generative design
Explainability could become increasingly important as AI is used not only to optimise manufacturing processes, but also to optimise the design of the chips the AI itself is running on.
Holly pointed out that big chip manufacturers have announced that they’re now using reinforcement learning approaches to actually design entire chips and lay out where the transistors are. ‘And the thing is, they can actually do this much better than a human could possibly achieve,’ he said. ‘For me, that’s one of the most exciting fields, where you actually create something new that humans are not able to because it’s just too complex. This is what I call generative forward-thinking design, and it can even be applied to optics. We are currently working on AI, especially reinforcement learning solutions, to do the optics design. This means that instead of relying on years of experience in what kind of curvatures you should give a certain lens in your system, we try to teach an AI to do it.’
This type of generative design is also being used at George Washington University in the design, prototyping and testing of AI/machine learning special-purpose computing accelerators. ‘These are compute systems like GPUs that optimise tensor operations such as matrix-matrix multiplications or convolutions, which make up the vast majority (over 90 per cent) of all machine learning effort,’ said Sorger. ‘Using electronic-photonic mixed-signal ASICs (chips) we can speed up neural network processors and make them more energy efficient. Interestingly, such systems also allow for self-optimising machines; that is, the AI accelerator can optimise everything from image processing to data analysis, but also augment and optimise hardware systems, or even itself, known as self-learning. That way an electronic-photonic AI ASIC processor can optimise the very electronic and optical components it is made of. Key for efficient and high-performance AI systems is dense component integration on semiconductor chip technology, including electronic (CMOS) and optoelectronic (photonic integrated circuits) components.’
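To make the ‘over 90 per cent of machine learning effort’ point concrete, the toy example below shows that a single dense layer’s forward pass over a batch is nothing more than one matrix-matrix multiplication – exactly the operation such accelerators are built to execute quickly and efficiently. The layer sizes are arbitrary.

```python
import numpy as np

# A dense layer's forward pass over a batch is one matrix-matrix multiply:
# the core operation any AI accelerator, electronic or photonic, must speed up.
batch, in_features, out_features = 256, 1024, 1024
x = np.random.randn(batch, in_features).astype(np.float32)         # activations
w = np.random.randn(in_features, out_features).astype(np.float32)  # weights

y = x @ w                                        # the matrix-matrix multiply
flops = 2 * batch * in_features * out_features   # multiply-accumulate count
print(f"{flops / 1e9:.2f} GFLOP for this single layer")  # ~0.54 GFLOP
```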