Exploring Deep Learning for Dig-Limit Optimization in Open-Pit Mines

  • Original Paper
  • Published in Natural Resources Research

Abstract

This paper explored the use of a convolutional neural network (CNN) to assess the clustering of selective mining units (SMUs) generated by a genetic algorithm (GA). The purpose of the GA was to optimize a set of dig limits on a mine bench. Dig limits are boundaries that separate SMUs into ore, waste, and other processing categories so that they can be extracted feasibly and profitably by existing mining infrastructure. This was achieved by incorporating the cluster design and its associated decision variables into existing mining optimization frameworks, which enhances process efficiency and thereby increases profitability. The catch, however, is that such computation is costly in terms of both money and time. This paper addressed this cost issue by applying deep learning, specifically using statistical learning algorithms to assess the cluster quality of the GA-computed dig limits, because the current assessment methodology consumes up to 70 percent of the total computation time. Short-term mine planning applications need to be directly usable by mine operators' personnel and must be generated quickly to be useful in dynamic mining environments, so reducing this assessment time saves both time and cost. A case study was conducted on a bench with multiple destinations and 420 SMUs to test whether a CNN can predict clustering quality and to identify the best CNN architecture for the task. The case study showed that a CNN maintains prediction accuracy and can speed up computation by 3900%. A minimal code sketch of the surrogate idea is given below.
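The following is a minimal sketch, not the authors' implementation, of the idea described in the abstract: a small Keras CNN acting as a fast surrogate that scores the cluster quality of a candidate dig-limit layout, so a GA can evaluate candidates without the expensive exact assessment. The 20 x 21 grid (420 SMUs), the three-destination one-hot encoding, the architecture, and the synthetic training data are all illustrative assumptions.

```python
# Surrogate cluster-quality model (illustrative sketch, assumptions noted above).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

ROWS, COLS, DESTINATIONS = 20, 21, 3  # 420 SMUs, 3 destinations (assumed)

def build_surrogate_cnn():
    """Small CNN regressor mapping a one-hot dig-limit layout to a quality score."""
    model = models.Sequential([
        layers.Input(shape=(ROWS, COLS, DESTINATIONS)),
        layers.Conv2D(32, 3, padding="same", activation="relu"),
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(1),  # predicted cluster-quality score
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

if __name__ == "__main__":
    # Synthetic stand-in data: random destination layouts and placeholder scores.
    layouts = np.random.randint(0, DESTINATIONS, size=(256, ROWS, COLS))
    x = tf.one_hot(layouts, DESTINATIONS).numpy()   # shape (256, 20, 21, 3)
    y = np.random.rand(256).astype("float32")       # placeholder quality labels
    model = build_surrogate_cnn()
    model.fit(x, y, epochs=2, batch_size=32, verbose=0)
    # Inside a GA loop, model.predict would score candidate dig limits far
    # faster than repeating the exact clustering assessment for each one.
    print(model.predict(x[:4], verbose=0).ravel())
```

In practice, such a surrogate would be trained on layouts whose quality has been computed exactly, then used to pre-screen GA candidates; only the exact assessment step is replaced, not the GA itself.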

Acknowledgments

This work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) (Fund Number: 236482). The authors are grateful for this support.

Author information

Corresponding author

Correspondence to Mustafa Kumral.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Williams, J., Singh, J., Kumral, M. et al. Exploring Deep Learning for Dig-Limit Optimization in Open-Pit Mines. Nat Resour Res 30, 2085–2101 (2021). https://doi.org/10.1007/s11053-021-09864-y

  • Received:

  • Accepted:

  • Published:

  • Issue Date:

  • DOI: https://doi.org/10.1007/s11053-021-09864-y
