Published

2019-04-01

Point cloud saliency detection via local sparse coding

DOI:

https://doi.org/10.15446/dyna.v86n209.75958

Keywords:

point clouds, sparse coding, saliency, minimum description length

Authors

Esmeide Alberto Leal Narvaez, German Sanchez Torres, John William Branch Bedoya

The human visual system (HVS) can process large quantities of visual information almost instantly. Visual saliency perception is the process of locating and identifying regions that are highly salient from a visual standpoint. Mesh saliency detection has been studied extensively in recent years, but few studies have focused on saliency detection for 3D point clouds. Estimating visual saliency is important for computer graphics tasks such as simplification, segmentation, shape matching, and resizing. In this paper, we present a method for detecting saliency directly on unorganized point clouds. First, our method computes a set of overlapping neighborhoods and estimates a descriptor vector for each point within each neighborhood. The descriptor vectors are then used as a natural dictionary for a sparse coding process. Finally, we estimate a saliency map of the point neighborhoods based on the Minimum Description Length (MDL) principle. Experimental results show that the proposed method matches, and in some cases improves on, results reported in the literature. It captures the geometry of the point clouds without using any topological information and achieves acceptable performance. The effectiveness and robustness of our approach are demonstrated through comparisons with previous studies.
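The pipeline sketched in the abstract (overlapping neighborhoods, per-point descriptors used as a "natural" dictionary, sparse coding, and an MDL-flavored saliency score) could look roughly like the following. This is a minimal illustration under my own assumptions, not the authors' implementation: the k-NN neighborhood size, the covariance-triangle descriptor, the plain orthogonal-matching-pursuit coder, and the atoms-plus-residual cost are all simplifying choices made here for clarity.

```python
# Hedged sketch of a point-cloud saliency pipeline: k-NN neighborhoods,
# local covariance descriptors, sparse coding against the other descriptors
# (a "natural" dictionary), and an MDL-style description-length score.
import numpy as np

def knn_neighborhoods(points, k=8):
    """Indices of the k nearest neighbors of every point (self included)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    return np.argsort(d, axis=1)[:, :k]

def covariance_descriptor(patch):
    """Upper triangle of the local 3x3 covariance: a 6-dim shape signature."""
    c = np.cov(patch.T)
    v = c[np.triu_indices(3)]
    return v / (np.linalg.norm(v) + 1e-12)

def omp(D, x, max_atoms=3, tol=1e-6):
    """Plain orthogonal matching pursuit; returns a sparse coefficient vector."""
    r, idx = x.copy(), []
    coef = np.zeros(D.shape[1])
    for _ in range(max_atoms):
        j = int(np.argmax(np.abs(D.T @ r)))   # most correlated atom
        if j in idx:
            break
        idx.append(j)
        sol, *_ = np.linalg.lstsq(D[:, idx], x, rcond=None)
        r = x - D[:, idx] @ sol               # residual after refitting
        if np.linalg.norm(r) < tol:
            break
    if idx:
        coef[idx] = sol
    return coef

def mdl_saliency(points, k=8, max_atoms=3):
    """MDL-flavored score: atoms used plus residual bits; higher = more salient."""
    nbrs = knn_neighborhoods(points, k)
    desc = np.array([covariance_descriptor(points[n]) for n in nbrs])
    D = desc.T                                 # descriptors as dictionary atoms
    scores = []
    for i, x in enumerate(desc):
        Di = np.delete(D, i, axis=1)           # code each point with the others
        c = omp(Di, x, max_atoms)
        resid = np.linalg.norm(x - Di @ c)
        scores.append(np.count_nonzero(c) + np.log2(1.0 + resid))
    return np.array(scores)

rng = np.random.default_rng(0)
flat = np.c_[rng.uniform(0, 1, (40, 2)), np.zeros(40)]   # planar region
spike = np.array([[0.5, 0.5, 0.8]])                      # geometric outlier
pts = np.vstack([flat, spike])
s = mdl_saliency(pts)
print(int(s.argmax()))  # under this sketch, the spike should rank most salient
```

The intuition matches the MDL principle cited in the paper: neighborhoods that are cheap to encode with a few dictionary atoms are redundant, while neighborhoods that need many atoms or leave a large residual carry more "description length" and are flagged as salient.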

References

Jia S., Zhang C., Li X., and Zhou Y., Mesh resizing based on hierarchical saliency detection, Graph. Models, 76 (5), pp. 355–362, Sep. 2014.

Wolfe J. M., Guided Search 2.0 A revised model of visual search, Psychon. Bull. Rev., 1(2), pp. 202–238, Jun. 1994.

Koch C., and Poggio T., Predicting the visual world: silence is golden, Nat. Neurosci., 2 (1), pp. 9–10, Jan. 1999.

Somasundaram G., Cherian A., Morellas V., and Papanikolopoulos N., Action recognition using global spatio-temporal features derived from sparse representations, Comput. Vis. Image Underst., 123(1), pp. 1–13, Jun. 2014.

Kalboussi R., Abdellaoui M., and Douik A., A spatiotemporal model for video saliency detection, in 2016 International Image Processing, Applications and Systems (IPAS), 2016, pp. 1–6.

Kim Y., Varshney A., Jacobs D. W., and Guimbretière F., Mesh Saliency and Human Eye Fixations, ACM Trans Appl Percept, 7 (2), pp. 1–13, Feb. 2010.

Lau M., Dev K., Shi W., Dorsey J., and Rushmeier H., Tactile Mesh Saliency, ACM Trans Graph, 35(4), pp. 1–11, Jul. 2016.

Limper M., Kuijper A., and Fellner D. W., Mesh Saliency Analysis via Local Curvature Entropy, in Proceedings of the 37th Annual Conference of the European Association for Computer Graphics: Short Papers, Goslar, Germany, 2016, pp. 13–16.

Wang S., Li N., Li S., Luo Z., Su Z., and Qin H., Multi-scale mesh saliency based on low-rank and sparse analysis in shape feature space, Comput. Aided Geom. Des., 35(36), pp. 206–214, May 2015.

Liu X., Tao P., Cao J., Chen H., and Zou C., Mesh saliency detection via double absorbing Markov chain in feature space, Vis. Comput., 32(9), pp. 1121–1132, Sep. 2016.

Tao P., Cao J., Li S., Liu X., and Liu L., Mesh saliency via ranking unsalient patches in a descriptor space, Comput. Graph., 46, pp. 264–274, Feb. 2015.

Song R., Liu Y., Martin R., and Rosin P. L., Mesh Saliency via Spectral Processing, ACM Trans Graph, 33(1), pp. 6:1–6:17, Feb. 2014.

Wu J., Shen X., Zhu W., and Liu L., Mesh saliency with global rarity, Graph. Models, 75(5), pp. 255–264, Sep. 2013.

Nouri A., Charrier C., and Lézoray O., Multi-scale mesh saliency with local adaptive patches for viewpoint selection, Signal Process. Image Commun., 38, pp. 151–166, Oct. 2015.

Liu X., Ma L., and Liu L., P2: a robust and rotationally invariant shape descriptor with applications to mesh saliency, Appl. Math.- J. Chin. Univ., 31(1), pp. 53–67, Mar. 2016.

Zhao Y. et al., Region-based saliency estimation for 3D shape analysis and understanding, Neurocomputing, 197, pp. 1–13, Jul. 2016.

Jeong S. W., and Sim J. Y., Saliency Detection for 3D Surface Geometry Using Semi-regular Meshes, IEEE Trans. Multimed., 19(12), pp. 2692–2705, Dec. 2017.

Song R., Liu Y., Martin R., and Echavarria K., Local-to-global mesh saliency, Vis. Comput., 34(3), pp. 323–336, Nov. 2016.

Guo Y., Wang F., and Xin J., Point-wise saliency detection on 3D point clouds via covariance descriptors, Vis. Comput., pp. 1–14, Jun. 2017.

Tasse F. P., Kosinka J., and Dodgson N., Cluster-Based Point Set Saliency, in 2015 IEEE International Conference on Computer Vision (ICCV), 2015, pp. 163–171.

Shtrom E., Leifman G., and Tal A., Saliency Detection in Large Point Sets, in 2013 IEEE International Conference on Computer Vision, 2013, pp. 3591–3598.

Akman O., and Jonker P., Computing Saliency Map from Spatial Information in Point Cloud Data, in Advanced Concepts for Intelligent Vision Systems, 2010, pp. 290–299.

Yu H., Wang R., Chen J., Liu L., and Wan W., Saliency computation and simplification of point cloud data, 2012, pp. 1350–1353.

An G., Watanabe T., and Kakimoto M., Mesh Simplification Using Hybrid Saliency, in 2016 International Conference on Cyberworlds (CW), 2016, pp. 231–234.

Dutta S., Banerjee S., Biswas P. K., and Bhowmick P., Mesh Denoising Using Multi-scale Curvature-Based Saliency, in Computer Vision - ACCV 2014 Workshops, 2014, pp. 507–516.

Jiao X., Wu T., and Qin X., Mesh segmentation by combining mesh saliency with spectral clustering, J. Comput. Appl. Math., 329(1), pp. 134–146, Feb. 2018.

Tasse F. P., Kosinka J., and Dodgson N., How Well Do Saliency-based Features Perform for Shape Retrieval?, Comput Graph, 59(C), pp. 57–67, Oct. 2016.

Gal R., and Cohen-Or D., Salient Geometric Features for Partial Shape Matching and Similarity, ACM Trans Graph, 25(1), pp. 130–150, Jan. 2006.

Wang W. et al., Saliency-Preserving Slicing Optimization for Effective 3D Printing, Comput. Graph. Forum, 34(6), pp. 148–160, Sep. 2015.

Li Y., Zhou Y., Xu L., Yang X., and Yang J., Incremental sparse saliency detection, in 2009 16th IEEE International Conference on Image Processing (ICIP), 2009, pp. 3093–3096.

Ramirez I. and Sapiro G., An MDL Framework for Sparse Coding and Dictionary Learning, IEEE Trans. Signal Process., 60(6), pp. 2913–2927, Jun. 2012.

Lee C. H., Varshney A., and Jacobs D. W., Mesh Saliency, in ACM SIGGRAPH 2005 Papers, New York, NY, USA, 2005, pp. 659–666.

Leifman G., Shtrom E., and Tal A., Surface regions of interest for viewpoint selection, in 2012 IEEE Conference on Computer Vision and Pattern Recognition, 2012, pp. 414–421.

Itti L., Koch C., and Niebur E., A model of saliency-based visual attention for rapid scene analysis, IEEE Trans. Pattern Anal. Mach. Intell., 20(11), pp. 1254–1259, Nov. 1998.

Itti L. and Koch C., Computational modelling of visual attention, Nat. Rev. Neurosci., 2(3), pp. 194–203, Mar. 2001.

Olshausen B. A., and Field D. J., Sparse coding with an overcomplete basis set: A strategy employed by V1?, Vision Res., 37(23), pp. 3311–3325, Dec. 1997.

Bao C., Ji H., Quan Y., and Shen Z., Dictionary Learning for Sparse Coding: Algorithms and Convergence Analysis, IEEE Trans. Pattern Anal. Mach. Intell., 38(7), pp. 1356–1369, Jul. 2016.

Rissanen J., Modeling by shortest data description, Automatica, 14(5), pp. 465–471, Sep. 1978.

Bruce N. D. B., and Tsotsos J. K., Saliency Based on Information Maximization, in Proceedings of the 18th International Conference on Neural Information Processing Systems, Cambridge, MA, USA, 2005, pp. 155–162.

Borji A. and Itti L., Exploiting local and global patch rarities for saliency detection, in 2012 IEEE Conference on Computer Vision and Pattern Recognition, 2012, pp. 478–485.

Chen X., Saparov A., Pang B., and Funkhouser T., Schelling Points on 3D Surface Meshes, ACM Trans Graph, 31(4), pp. 29:1–29:12, Jul. 2012.

Mitra N. J. and Nguyen A., Estimating Surface Normals in Noisy Point Cloud Data, in Proceedings of the Nineteenth Annual Symposium on Computational Geometry, New York, NY, USA, 2003, pp. 322–328.

Tasse F. P., Kosinka J., and Dodgson N. A., Quantitative Analysis of Saliency Models, in SIGGRAPH ASIA 2016 Technical Briefs, New York, NY, USA, 2016, pp. 19:1–19:4.

How to cite

IEEE

[1]
E. A. Leal Narvaez, G. Sanchez Torres, and J. W. Branch Bedoya, "Point cloud saliency detection via local sparse coding", DYNA, vol. 86, no. 209, pp. 238–247, Apr. 2019.

ACM

[1]
Leal Narvaez, E.A., Sanchez Torres, G. and Branch Bedoya, J.W. 2019. Point cloud saliency detection via local sparse coding. DYNA. 86, 209 (Apr. 2019), 238–247. DOI: https://doi.org/10.15446/dyna.v86n209.75958.

ACS

(1)
Leal Narvaez, E. A.; Sanchez Torres, G.; Branch Bedoya, J. W. Point cloud saliency detection via local sparse coding. DYNA 2019, 86, 238-247.

APA

Leal Narvaez, E. A., Sanchez Torres, G. & Branch Bedoya, J. W. (2019). Point cloud saliency detection via local sparse coding. DYNA, 86(209), 238–247. https://doi.org/10.15446/dyna.v86n209.75958

ABNT

LEAL NARVAEZ, E. A.; SANCHEZ TORRES, G.; BRANCH BEDOYA, J. W. Point cloud saliency detection via local sparse coding. DYNA, [S. l.], v. 86, n. 209, p. 238–247, 2019. DOI: 10.15446/dyna.v86n209.75958. Available at: https://revistas.unal.edu.co/index.php/dyna/article/view/75958. Accessed: 22 Mar. 2026.

Chicago

Leal Narvaez, Esmeide Alberto, German Sanchez Torres, and John William Branch Bedoya. 2019. "Point cloud saliency detection via local sparse coding". DYNA 86 (209): 238–47. https://doi.org/10.15446/dyna.v86n209.75958.

Harvard

Leal Narvaez, E. A., Sanchez Torres, G. and Branch Bedoya, J. W. (2019) "Point cloud saliency detection via local sparse coding", DYNA, 86(209), pp. 238–247. doi: 10.15446/dyna.v86n209.75958.

MLA

Leal Narvaez, E. A., G. Sanchez Torres, and J. W. Branch Bedoya. "Point cloud saliency detection via local sparse coding". DYNA, vol. 86, no. 209, Apr. 2019, pp. 238–47, doi:10.15446/dyna.v86n209.75958.

Turabian

Leal Narvaez, Esmeide Alberto, German Sanchez Torres, and John William Branch Bedoya. "Point cloud saliency detection via local sparse coding". DYNA 86, no. 209 (April 1, 2019): 238–247. Accessed March 22, 2026. https://revistas.unal.edu.co/index.php/dyna/article/view/75958.

Vancouver

1.
Leal Narvaez EA, Sanchez Torres G, Branch Bedoya JW. Point cloud saliency detection via local sparse coding. DYNA [Internet]. 2019 Apr 1 [cited 2026 Mar 22];86(209):238-47. Available from: https://revistas.unal.edu.co/index.php/dyna/article/view/75958

CrossRef Cited-by

CrossRef citations: 4

1. Li-na Zhang, Shi-yao Wang, Jun Zhou, Jian Liu, Chun-gang Zhu. (2020). 3D grasp saliency analysis via deep shape correspondence. Computer Aided Geometric Design, 81, p.101901. https://doi.org/10.1016/j.cagd.2020.101901.

2. Weisi Lin, Sanghoon Lee. (2022). Visual Saliency and Quality Evaluation for 3D Point Clouds and Meshes: An Overview. APSIPA Transactions on Signal and Information Processing, 11(1), p.1. https://doi.org/10.1561/116.00000125.

3. Reuma Arav, Dennis Wittich, Franz Rottensteiner. (2025). Evaluating saliency scores in point clouds of natural environments by learning surface anomalies. ISPRS Journal of Photogrammetry and Remote Sensing, 224, p.235. https://doi.org/10.1016/j.isprsjprs.2025.03.022.

4. Esmeide Leal, German Sanchez-Torres, John W. Branch-Bedoya, Francisco Abad, Nallig Leal. (2021). A Saliency-Based Sparse Representation Method for Point Cloud Simplification. Sensors, 21(13), p.4279. https://doi.org/10.3390/s21134279.

Abstract page views

777

Downloads

Download data is not yet available.