Towards in-situ flaw detection in LPBF through layerwise imagery and machine learning

Figure 1: Complex two-phase copper vapour chambers, for high-power space applications, built using laser-powder bed fusion.

Brett Diehl, of the Penn State Applied Research Laboratory, puts neural networks to work in identifying voids in additive manufacturing

Laser Powder Bed Fusion (LPBF) additive manufacturing is sought after for its ability to produce components of nearly arbitrary geometry. 

The intricate copper vapour chambers shown in figure 1 are a good example of this capability, which has earned LPBF applications in the aerospace, defence and biomedical fields. In these high-cost applications, the prevention of defects is extremely important. 

Much work has been done to improve the quality of components produced by LPBF additive manufacturing – also known as direct laser melting, direct metal printing and selective laser melting – and it has been successfully shown that process mapping and build planning can prevent the majority of defect formation. However, it is not possible to pick a parameter set that produces no defects. For example, one proposed defect mechanism is the interaction of spatter particles with the melt pool, a possible occurrence of which is shown in figure 2.

Steps can be taken to reduce spatter and remove it from the build plate, but it is not possible to eliminate spatter entirely; more importantly, it is not possible to predict, with certainty, where spatter particles will form and land. The process is inherently stochastic, so monitoring it in real time is essential to detect and prevent these defects. In the near term, process monitoring may be able to predict where voids will form, enabling components to be qualified rapidly and cheaply. In the long term, corrective actions may be taken.

Figure 2: Cross-section of a fusion flaw, hypothesised to be caused by a spatter particle perturbing the melt pool2.

CNNs for data correlation

Recently I have been focusing on using convolutional neural networks (CNNs) to correlate points in images of the build plate taken after each layer (shown in figure 3) to points in computed tomography (CT) data where defects exist in the built component. This work was conducted with my colleagues Zackary Snow, Abdalla Nassar, and Edward Reutzel at Penn State Applied Research Laboratory (ARL). It was presented at ICALEO 2020 and has been submitted for publication in Additive Manufacturing.

During a single build, hundreds of gigabytes of sensor data can be produced. It is not practical to manually evaluate and correlate this data with defect locations (obtained post-build via CT); there is simply too much data and possible interactions are complicated and not obvious. For example, what a spatter particle (or other defect-inducing object) looks like may vary as a function of build-plate location, because the lighting conditions in the machine are not homogeneous, so those signals are not simple to detect. That is where the need for machine learning arises.

Figure 3: Representative layerwise images at each of the three lighting conditions (columns) before laser processing (top row) and after (bottom row).

Neural networks (NNs) are one of the most well-known machine learning architectures. Essentially, they map an input to an output in a way that is nonlinear and extremely flexible; they are relatively good at performing regression or classification, but they are not specialised for any particular type of input data. Previous work by Gobert, Reutzel, Petrich, Nassar and Phoha at Penn State ARL1 has shown that NNs can be used to detect voids in layerwise images of the LPBF build. CNNs, on the other hand, are a machine learning architecture specialised to process image inputs. They apply sliding convolutional filters to the input data, which allows them to detect spatial dependencies more readily. During training, CNNs learn to tune these filters to be most effective at separating the two classes of data. 
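
The sliding-filter idea can be sketched in a few lines. The toy example below (illustrative only; not the study's code) slides a single hand-set 2×2 filter over a small image containing a bright blob, showing how the filter responds to the feature wherever it sits on the image – in a real CNN the filter weights are learned during training rather than set by hand:

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide a filter over an image and return the response map (valid padding)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Filter response = elementwise product of the filter with the
            # image patch under it, summed.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A bright spot (a spatter-like blob, say) on a dark background.
image = np.zeros((8, 8))
image[3:5, 3:5] = 1.0

# A hand-set 2x2 "blob detector"; a CNN would learn such weights instead.
kernel = np.ones((2, 2))

response = convolve2d(image, kernel)
# The response peaks wherever the filter fully overlaps the blob --
# the same filter finds the feature regardless of its position, which is
# why CNNs suit finding spatter anywhere on the build plate.
print(response.max())  # 4.0
print(tuple(int(i) for i in np.unravel_index(response.argmax(), response.shape)))
```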

A challenge in using machine learning is generalisability. If a classifier can detect defects in one build, what does that mean for the classifier operating on another build in the same machine, in another machine, or under different lighting conditions? The sliding filters that CNNs optimise to detect voids contain information about what voids look like. In theory, these learned filters should make CNNs better at generalising what defect precursors look like than NNs, which learn only a single transformation from input data to output prediction.

Testing the hypothesis

This hypothesis was tested by training NNs and CNNs on build data from various regions of the build plate. Because the lighting conditions varied over the build plate, a classifier trained on data from one region might not be able to operate accurately on data from other regions. When CNNs were trained on the same quantity of data, but spread over a larger region, their performance increased when tested on a similar, but separate, build. The experiment was repeated with NNs. The NNs consistently underperformed compared with the CNNs and did not improve when trained with data from various regions of a build. This demonstrates that CNNs are superior to NNs when trying to utilise a diverse dataset to make robust predictions. 
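The core of this experimental design – same quantity of training data, drawn either from one region or spread across the plate – can be sketched in miniature. The coordinates, plate size and region logic below are hypothetical, not the study's actual setup:

```python
import numpy as np

def region_of(x, y, grid=2, plate=250.0):
    """Map a build-plate coordinate (mm) to one of grid*grid region ids."""
    return int(x // (plate / grid)) * grid + int(y // (plate / grid))

# Hypothetical patch locations: a 40x40 grid over a 250 mm build plate.
coords = [(x, y) for x in np.linspace(5, 245, 40) for y in np.linspace(5, 245, 40)]
regions = np.array([region_of(x, y) for x, y in coords])

# "Single-region" training set: all patches drawn from one corner of the plate.
single = np.flatnonzero(regions == 0)[:200]

# "Spread" training set: the same number of patches, sampled evenly from all
# four regions, so the classifier sees every lighting condition.
spread = np.concatenate([np.flatnonzero(regions == r)[:50] for r in range(4)])

print(len(single), len(spread))                    # 200 200
print(sorted({int(r) for r in regions[spread]}))   # [0, 1, 2, 3]
```

Training one classifier on `single` and another on `spread`, then testing both on a separate build, isolates the effect of regional diversity from the effect of data volume.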

Figure 4: Layerwise imagery corresponding to a location with a fusion void. Note that the spatter object is not visible in all lighting conditions.

Regions that the classifier marked as containing defects contained spatter particles, as shown in figure 4, significantly more often than other regions. This supports the hypothesis that spatter particles can disrupt the melting process and cause fusion voids, as shown in figure 5. The technique had a recall rate of 78 per cent on the build it was tested on (82 per cent for voids larger than 200μm); that is, it detected 78 per cent of all the voids present, and 82 per cent of those larger than 200μm. In theory, a system could be designed in which a laser re-melts these regions after each layer. Such a setup could fix the majority of fusion voids. This may be critical in fatigue-sensitive applications, as large and irregular fusion voids likely have a strong influence on the fatigue life of additively manufactured components.
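Recall is simply the fraction of actual voids that the classifier flagged. A minimal sketch, with illustrative counts chosen only to reproduce the reported rates (the study's raw counts are not given here):

```python
def recall(detected, actual):
    """Fraction of actual voids that were detected: true positives / all positives."""
    return detected / actual if actual else 0.0

# Hypothetical counts for one test build.
total_voids, detected_voids = 100, 78
large_voids, detected_large = 50, 41   # voids larger than 200 um

print(f"overall recall: {recall(detected_voids, total_voids):.0%}")    # 78%
print(f"recall (>200 um): {recall(detected_large, large_voids):.0%}")  # 82%
```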

Figure 5: CT scan of a fusion flaw, found in the same location as the layerwise images in figure 4.

Looking forward

The next steps are parallel lines of development: algorithm improvement and implementation. While there is room for improvement, the algorithm can currently detect approximately 80 per cent of voids as they form during the build. Researchers at Penn State ARL are currently working with commercial system providers to implement robust strategies for the re-melting and repair of fusion voids over selected areas between layers. Further areas of improvement have also been identified that could reduce the algorithm's false-detection rate and improve its accuracy. These developments are expected to lead to significant improvements in the quality and performance of additively manufactured components.

Brett Diehl is a graduate assistant at the Penn State Applied Research Laboratory.

References

  1. C. Gobert, E. Reutzel, J. Petrich, A. Nassar, and S. Phoha, ‘Application of supervised machine learning for defect detection during metallic powder bed fusion additive manufacturing using high resolution imaging.’ Additive Manufacturing, vol. 21, pp. 517–528, May 2018, doi: 10.1016/j.addma.2018.04.005.
  2. A. Nassar, M. Gundermann, E. Reutzel, et al., ‘Formation processes for large ejecta and interactions with melt pool formation in powder bed fusion additive manufacturing.’ Scientific Reports, vol. 9, 5038, 2019, doi: 10.1038/s41598-019-41415-7.
