Evaluating supervised learning approaches for spatial-domain multi-focus image fusion
Keywords:
Multi-focus image fusion, image processing, supervised learning, machine learning
License
Copyright 2017 DYNA

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International license.
The author(s) of an article accepted for publication in any of the journals edited by the Facultad de Minas transfer all economic rights to the Universidad Nacional de Colombia free of charge. These include the rights to edit, publish, reproduce, and distribute the article in both print and digital media, and to include it in international indexes and/or databases. The publisher is likewise authorized to use the images, tables, and/or any graphic material presented in the article in the design of covers or posters for the journal.
