
1 Introduction

In today’s highly competitive industry, automotive and other large industrial manufacturers strive to bring their products to perfection, both functionally and cosmetically. Painted (or unpainted) injection moulded parts are used extensively in everyday goods, so it is necessary to inspect them for cosmetic defects before putting them into service [11]. Machine vision is well on its way to becoming a standard tool in quality inspection. Its use brings many advantages, the most important being high reliability and repeatability, savings in production time, and the minimization of errors caused by humans.

Since the end users of injection moulded parts are humans, the “defects” defined on these parts all correspond to human vision. It is therefore important for a machine vision system to mimic the human perception of the surface when detecting defects. This point was addressed in previous work, for instance in [3], where a machine vision system is used to develop a model parameter that simulates the human perception of sink marks (local disturbances in the curvature of the surface [6]), which occur frequently in injection moulded parts. A morphological method is proposed in [2] to detect surface properties such as scratches. In [4] a model based on a skewed Gauss function is developed in order to quantitatively evaluate visually perceptible sink marks on the surface of injection moulded parts. Phase Measuring Deflectometry was used in [6, 7] to detect sink marks.

Although the combination of machine learning and machine vision has shown outstanding performance in the detection and evaluation of cosmetic defects and can inspect parts with virtually zero error, in some cases it still lacks the experience and subjective judgement of a skilled operator. Frequently encountered defects on the surface of a painted injection moulded part that can be detected without difficulty are paint dots, scratches, and paint flows. The orange peel effect also occurs frequently and can cover a relatively large area of the surface. It can appear with several degrees of intensity, which makes it difficult for machine learning algorithms to distinguish a defective orange peel surface from a non-defective one.

The orange peel effect can be described as a series of peaks and valleys on the surface whose combination creates a look similar to that of the skin of an orange. An example of a surface with orange peel effect can be seen in Fig. 1. Unless it is created intentionally, it is one of the most important defects in the paint and polishing industries. It results from improper application of spray paint on the surface and causes distortions in the light reflected into the surroundings, as mentioned by Konieczny [5]. According to Konieczny [5], and Sone and Watanabe [12], this effect becomes more significant on dark-painted high-gloss surfaces.

In the literature there is work dealing with the prediction and evaluation of the orange peel effect. To predict the visual appearance of a painted steel surface, Scheers et al. [10] proposed a method that correlates the waviness of painted steel surfaces with surface roughness parameters. Interferometric microscopy was used in [9] to measure and characterize orange peel defects in the nanometre range on polished metallic surfaces. In [8], Miranda-Medina et al. proposed a correlation between surface roughness parameters and the degree of the orange peel effect (determined by visual inspection for each surface) on highly polished steel surfaces; the aim is to find surface roughness parameters that can be used to distinguish between different orange peel grades, and phase shifting interferometry was used to measure the surface topography. Sone and Watanabe [12] used a spectral camera and a suitable illumination to project light patterns on the surface; a frequency analysis of the projected pattern and human visual inspection results were then used to find a correlation between the measurements and the subjective evaluation. A commercial device [1] is able to characterize the surface by the magnitude of the distortions. It simulates the resolution of the human eye at different distances: the signal received from the subject surface is processed by a mathematical filter, and the magnitudes of wavelengths in various ranges are used to determine the degree of the orange peel.

The aim of this work is to evaluate and classify the waviness caused by the orange peel effect on a painted surface. The selected surface belongs to a part used in the automotive industry. It is black-painted and high-gloss, which makes it an interesting test case for surface inspection.

In the rest of the paper, first the experimental setup and the evaluation algorithm are explained. Then the feasibility of such an automatic orange peel evaluation framework is addressed by looking for a correlation between the results of the machine vision measurements and the subjective evaluation by experts. This is done by classifying 30 surfaces with different orange peel degrees with the developed algorithm and comparing the results with the classifications made by visual testing experts.

In addition to eliminating the human errors that stem from the qualitative nature of visual inspection, this evaluation method has the following advantages compared to other machine vision approaches.

  • Compared to the commercial product mentioned above [1], the advantage of this method is that it is non-contact, eliminating the risk of damaging the surface.

  • The evaluation process is computationally inexpensive, and therefore it is fast.

  • This approach directly utilizes the dynamics of the anomaly itself (waviness), and since there are no learning processes involved, it does not require any prior training phase, or large amounts of labelled data.

  • The algorithm is flexible. By changing its parameters it can be applied to other surfaces or other inspection environments, for instance to surfaces with shorter or longer orange peel wavelengths, or to different illumination methods.

Fig. 1. A surface with orange peel effect. Other defects (e.g., scratches, dots) are also visible.

2 Experimental Setup

In the experiments, samples of 3D-shaped black-painted high-gloss parts were inspected for the orange peel effect. The setup consists of a camera, a white LED-bar, and a robotic manipulator that handles the part in front of the camera. The camera has a resolution of 1.3 MP, and its image acquisition rate can reach up to 168 frames per second. The lens has a fixed focal length of 25 mm. The sample manipulator is a 6-axis industrial robotic arm which can be pre-programmed to follow a demanded path; its maximum speed during automatic operation is 2 m/s. A sketch of the setup can be seen in Fig. 2.

During the experiments the aperture is set to maximum and the focus is adjusted to the surface. Setting the aperture to the maximum has the following benefits.

  • The maximum aperture makes it possible to set a low exposure time for the camera, which in turn allows the frame rate to be increased. This helps to reduce the inspection time of the whole surface, a desirable quality for an in-production inspection process.

  • Another effect of the maximum aperture is a smaller depth of field, which results in a smaller area of the surface being in focus. This small sharp region (of which the ROI is a part) can be a challenge when the inspection time is to be minimized: the smaller the sharp region, the longer it takes to inspect the whole surface of the part. This brings the necessity of handling the part with a high-speed robot.

Fig. 2. A sketch of the experimental setup showing the configuration of the camera and the LED-bar, and the robotic manipulator handling the part.

The experimental setup is constructed in such a way that, in addition to the orange peel effect, other defect types can also be detected with little additional effort. In this configuration the orange peel effect is visible at the edge of the reflection of the LED-bar on the surface. As seen in Fig. 1, it is possible to detect other anomalies such as scratches and dots at the centre of the ROI, while the orange peel effect can be seen at the edge.

3 Evaluation Algorithm

A decisive parameter for classifying the orange peel effect comes from the nature of the anomaly itself: the wavelengths of the wavy structure on the surface. By illuminating the subject surface from the correct angle these waves become clearly visible.

In the image containing the region of interest, each pixel column contains the waviness information caused by the orange peel effect. Therefore, the idea is to analyse each pixel column separately and calculate an “orange peel intensity score”. Finally, the scores are combined to give a final result for the whole ROI. The steps taken during the analysis of a single pixel column are illustrated in Fig. 3.

  1. After acquisition of the image, a ROI is selected.

  2. A pixel value matrix is created from the ROI, containing values between 0 and 255.

  3. A frequency analysis is performed for every column of the matrix (the analysis includes the following steps: filtering out the low frequencies, obtaining the frequency spectrum by Fast Fourier Transformation, and picking out the dominant frequency).

  4. Calculation of the intensity score of the orange peel (the higher the value, the more defective the anomaly is).

  5. Steps 3 and 4 are performed for every pixel column, and an average of the intensity scores of all columns is taken for the evaluation of the orange peel effect.

Fig. 3. Calculation procedure of the orange peel intensity score of a single pixel column: after extraction of the pixel array, it goes through a frequency analysis to obtain the peak frequency; based on the peak frequency and its amplitude the orange peel intensity score is then calculated.

The ROI is selected manually from the captured greyscale image, and the pixel values (between 0 and 255) of each pixel column are extracted by an implemented Python script.
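
As an illustration, a minimal sketch of this step is given below, assuming the image is available as an 8-bit greyscale array (loaded here with OpenCV) and the ROI corners are known from the manual selection; the file name and the ROI indices are placeholders, not values from the experiments.

```python
import cv2
import numpy as np

# Load the captured greyscale image (8-bit, pixel values between 0 and 255).
# The file name is a placeholder used only for illustration.
image = cv2.imread("captured_surface.png", cv2.IMREAD_GRAYSCALE)

# Hypothetical ROI corners obtained from the manual selection (row/column indices).
top, bottom, left, right = 100, 612, 200, 968
roi = image[top:bottom, left:right].astype(np.float64)

# Each pixel column of the ROI carries the waviness information;
# column i is simply roi[:, i].
n_rows, n_cols = roi.shape
first_column = roi[:, 0]
```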

In order to find the frequencies corresponding to the orange peel, high-pass filtering must be applied first. Cutting off the very low frequencies eliminates the interference caused by bright-to-dark transitions in the image, which do not correspond to the orange peel effect. There may also be other shape errors on the surface which are not related to the orange peel effect; these can occur due to a failure in the painting process or a fault in the injection moulding process.

Applying the high-pass filter masks out from the pixel array all features with a wavelength larger than 1.5 mm. This length is shown in Fig. 4. It must be noted that the algorithm is designed to evaluate the orange peel effect on this specific surface. Depending on the needs of the manufacturer or the specifications of the production process, the wavelengths making up the orange peel effect can differ from part to part, so another surface with a different painting and production process will require another cut-off frequency. This cut-off frequency is to be determined by the needs of the manufacturer and the application area of the subject surface. For example, a surface used far from the human field of view will have different cut-off frequency requirements than a surface used very close to it.
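
The wavelength cut-off can be turned into a frequency cut-off once the physical sampling of the surface by the pixels is known. The sketch below shows one possible derivation; the pixel pitch (millimetres of surface per pixel) is a hypothetical value assumed for illustration, while the 1.5 mm threshold is the value used for this surface.

```python
# Hypothetical pixel pitch: millimetres of surface covered by one pixel along a
# column (an assumption; it depends on the optics and the working distance).
pixel_pitch_mm = 0.018
cutoff_wavelength_mm = 1.5          # features longer than this are masked out

# Spatial cut-off frequency in cycles per millimetre ...
cutoff_freq_per_mm = 1.0 / cutoff_wavelength_mm
# ... and in cycles per pixel, the unit used by np.fft.rfftfreq in the next sketch.
cutoff_freq_per_px = cutoff_freq_per_mm * pixel_pitch_mm
```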

Fig. 4. The distance on the part over which any orange peel anomaly is ignored.

After cutting off the lower frequencies, the dominant frequency corresponding to the waviness of the surface is found. For this purpose a Fast Fourier Transform (FFT) is used. The dominant frequency and its amplitude are the two parameters from which the final intensity score of the orange peel anomaly is calculated. The higher the frequency, the more “defective” the orange peel looks. In addition, the amplitude of the dominant frequency represents a “weight” that amplifies the effect of this frequency on the perceived surface structure. Therefore, for each column in which a dominant frequency is found, the frequency is multiplied by its amplitude, as shown in Eq. 1,

$$\begin{aligned} I_i = A \times f \end{aligned}$$
(1)

where \(I_i\) is the orange peel intensity score of the \(i^{\text {th}}\) column, f is the dominant frequency of the pixel column, and A is the amplitude of the dominant frequency.
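
A minimal sketch of the per-column computation is given below. It realizes the high-pass step as a mask on the FFT spectrum, which is one possible implementation choice, and assumes the cut-off frequency from the previous sketch; function and variable names are chosen for illustration only.

```python
import numpy as np

def column_intensity_score(column, cutoff_freq_per_px):
    """Orange peel intensity score I_i = A * f of one pixel column (Eq. 1)."""
    column = column - np.mean(column)        # remove the constant (DC) component
    amplitudes = np.abs(np.fft.rfft(column))
    freqs = np.fft.rfftfreq(column.size)     # frequencies in cycles per pixel

    # High-pass step: discard everything below the cut-off frequency.
    amplitudes[freqs < cutoff_freq_per_px] = 0.0

    if not np.any(amplitudes > 0.0):
        return 0.0                           # no dominant frequency found
    k = np.argmax(amplitudes)                # index of the dominant frequency
    return amplitudes[k] * freqs[k]          # A * f
```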

In a final step, these values are summed over the whole image and divided by the number of columns to obtain a normalized value. This makes the evaluation score independent of the size of the ROI. Equation 2 shows the formula used to calculate the orange peel intensity score of the whole ROI,

$$\begin{aligned} I = \frac{\sum _{i=1}^{n}A_i \times f_i}{n} \end{aligned}$$
(2)

where \(f_i\) is the dominant frequency of the \(i^{\text {th}}\) column, \(A_i\) is the amplitude of the dominant frequency, and n is the number of columns.
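
Combining the per-column scores into the ROI score of Eq. 2 then amounts to a simple average, sketched below using the helper from the previous listing.

```python
def roi_intensity_score(roi, cutoff_freq_per_px):
    """Orange peel intensity score I of the whole ROI (Eq. 2)."""
    scores = [column_intensity_score(roi[:, i], cutoff_freq_per_px)
              for i in range(roi.shape[1])]
    return sum(scores) / len(scores)         # normalize by the number of columns n

# Example use with the ROI and cut-off from the previous sketches:
# I = roi_intensity_score(roi, cutoff_freq_per_px)
```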

4 Results

In order to assess how well the calculated orange peel intensity score matches human perception, the following approach was taken: 30 images of the same surface area of different parts with different degrees of orange peel were selected. Each image corresponds to an area of approximately \(14\,\mathrm{mm} \times 23\,\mathrm{mm}\) on the surface of the part. First, an orange peel intensity score was calculated for each of these images using the proposed algorithm. Then 5 experts were asked to evaluate the same images and classify them into 3 predefined levels of orange peel intensity: “weak or no orange peel”, “medium orange peel”, and “strong orange peel”. Examples of weak, medium, and strong orange peel can be seen in Fig. 5.

Fig. 5. Examples of different degrees of orange peel on the black-painted high-gloss injection moulded part. The left image corresponds to “weak orange peel”, the middle image is an example of “medium orange peel”, and the right image shows “strong orange peel”.

In order to compare the calculated scores with the evaluations of the experts, the average of the experts’ scores was taken. The results of the two evaluation approaches are shown in Fig. 6. The images are numbered from 1 to 30. The scores of the algorithm are shown in red, while the averages of the scores given by the experts are shown in blue. The scale on the right-hand side of the figure corresponds to the human perception results, and the scale on the left-hand side corresponds to the score calculated by the algorithm.
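
One straightforward way to put the categorical expert labels on a numeric scale for such an averaged comparison is sketched below; the mapping of the three classes to the values 1 to 3 is an assumption made for illustration, not a detail stated in the experiments.

```python
# Hypothetical numeric mapping of the three predefined classes (an assumption
# made for illustration; the paper does not specify the numeric scale).
LABELS = {
    "weak or no orange peel": 1,
    "medium orange peel": 2,
    "strong orange peel": 3,
}

def average_expert_score(expert_labels):
    """Average the categorical evaluations given by the experts for one image."""
    return sum(LABELS[label] for label in expert_labels) / len(expert_labels)

# Example: three experts say "medium", two say "strong" -> score 2.4.
avg = average_expert_score(["medium orange peel"] * 3 + ["strong orange peel"] * 2)
```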

After comparing the results, it can be said that in the cases of “weak” and “strong orange peel” there is a strong consistency between the algorithm and the evaluation of the experts. For the five images having the lowest amount of orange peel according to the algorithm (images 1 to 5 in Fig. 6), there is a 100% correspondence between the two evaluation methods; all of the experts have labelled these images as “weak or no orange peel”. The surface, which according to the algorithm has the strongest orange peel (image 30 in Fig. 6), has been evaluated to have “strong orange peel” by all experts.

Fig. 6. Results of the two approaches to evaluate the orange peel appearance in 30 images (Color figure online).

For the “medium orange peel” images, the results of the algorithm show deviations from the evaluations of the volunteers. Among these images, image 10 and image 26 are noticeable outliers and are analysed further. According to the algorithm, image 9 and image 10 have similar orange peel degrees; however, according to the experts’ estimation, image 9 has a stronger orange peel. These two images can be seen in Fig. 7.

Fig. 7. Image 9 (left) and image 10 (right). According to the algorithm the two images have a similar orange peel, while according to the volunteers image 9 has a stronger orange peel.

The algorithm has calculated images 25 and 26 to have similar orange peel degrees, while according to the experts the orange peel effect in image 26 is stronger. The two exemplary images can be seen in Fig. 8.

Fig. 8. Image 25 (left) and image 26 (right). According to the algorithm the two images have a similar orange peel, while according to the volunteers image 26 has a stronger orange peel.

Another important difference between the results of the two evaluation approaches can be noticed in images 6, 7, and 8. The scores of these images calculated by the algorithm are close to that of image 5, which is evaluated by the experts to have “weak or no orange peel”. On the other hand, the experts’ evaluation of images 6, 7, and 8 placed them in the “medium orange peel” range. Looking at the results in Fig. 6, it can be seen that image 13 has also been evaluated by the experts to have exactly the same orange peel level as images 6, 7, and 8; however, according to the algorithm, it has a higher orange peel score. Images 5, 6, and 13 are shown in Fig. 9. Of images 6, 7, and 8, only image 6 is picked for the comparison with images 5 and 13, since all three of them received similar scores from both evaluation approaches.

Fig. 9. From left: images 5, 6, and 13. According to the algorithm, image 6 has a similar degree of orange peel as image 5, while according to the experts its orange peel degree is substantially higher and similar to that of image 13.

The deviations of the quantitative evaluation from the experts’ evaluations are an expected result, particularly when the intensity of the orange peel on the surface lies in the “medium” range. In cases where it becomes difficult to distinguish a defective surface from a non-defective one, it is an advantage to use the presented quantitative evaluation method and thus avoid relying on the subjective nature of human evaluation. Evaluation by a human is especially prone to errors when the assessor performs the same task repeatedly over a long period of time.

In order to further demonstrate the robustness and flexibility of the algorithm, it is planned to perform more experiments with different surface characteristics and to compare the results with expert opinions.

5 Conclusion

In this paper a new method for the quantitative evaluation and classification of the orange peel effect on painted injection moulded parts was presented. Images of the surface of a black-painted high-gloss injection moulded part were captured with a monochrome camera while the part was being handled by an industrial robot, and image processing techniques were then used to look for waviness on the surface. Next, the orange peel intensity scores calculated by the algorithm for 30 images with different degrees of orange peel were compared with the evaluations of the experts. Although the results of the two approaches were similar for “weak” and “strong orange peel” appearances, there were differences when the orange peel effect was in the “medium” range. The reason for this is assumed to lie in the nature of human perception, which is based on subjective evaluation and can therefore differ from the quantitative evaluation.

In order to further develop this work, it is planned to use quantitative correlation methods to compare the results of the algorithm even more precisely with the expert evaluations. In addition, a method will be implemented to automatically select the regions of interest from the image before evaluation, eliminating the need to select the ROI manually; machine learning methods can be used for this purpose. This will make it possible to apply the evaluation method for in-line inspection. The experimental setup can also be used to detect and evaluate defects other than orange peel; a larger framework that utilizes machine learning methods to inspect the surface for other anomalies is under development for this purpose. As a next step, the method proposed here can be integrated into this framework so that a complete in-line inspection system can be constructed.