Weed recognition by SVM texture feature
classification in outdoor vegetable crop images
Reconocimiento de maleza por características de textura usando
SVM en imágenes exteriores de cultivos de hortalizas
Camilo Pulido 1, Leonardo Solaque 2 , and Nelson Velasco 3
1Mechatronics Engineer, Universidad Militar Nueva Granada, Colombia - UMNG. Research Assistant, UMNG, Colombia. E-mail: camilopulidorojas@gmail.com.
2Electronics Engineer, Universidad de Antioquia, Colombia. Docteur de l'INSA, Spécialité Systèmes Automatiques, Institut National des Sciences Appliquées, France. Full Professor, UMNG, Colombia. E-mail: leonardo.solaque@unimilitar.edu.co.
3Mechatronics Engineer, Universidad Militar Nueva Granada - UMNG, Colombia. Master of Science, Universidad Nacional, Colombia. Assistant Professor, UMNG, Colombia. E-mail: nelson.velasco@unimilitar.edu.co.
How to cite: Pulido, C., Solaque, L., and Velasco, N. (2017). Weed recognition by SVM texture feature classification in outdoor vegetable crop images. Ingeniería e Investigación, 37(1), 68-74.
DOI: 10.15446/ing.investig.v37n1.54703.
ABSTRACT
This paper presents a classification system for weeds and vegetables from outdoor crop images. The classifier is based on Support Vector Machine (SVM) with its extension to the nonlinear case, using the Radial Basis Function (RBF) and optimizing its scale parameter σ to smooth the decision boundary. The feature space is the result of Principal Component Analysis (PCA) for 10 texture measurements calculated from Gray Level Co-occurrence Matrices (GLCM). The results indicate that classifier performance is above 90%, validated with specificity, sensitivity and precision calculations.
Keywords: Weed recognition, support vectors, co-occurrence matrix, PCA.
RESUMEN
El presente trabajo muestra un sistema de clasificación de maleza y hortalizas a partir de imágenes exteriores de cultivos. El clasificador está basado en la teoría de las máquinas de vectores de soporte (Support Vector Machine o SVM) con su extensión para el caso no lineal, haciendo uso de la función de base radial (RBF) y optimizando su parámetro de escala σ para suavizar la región de decisión. El espacio de características es el resultado del análisis por componentes principales (PCA) de 10 medidas de textura calculadas a partir de matrices de co-ocurrencia en niveles de gris (GLCM). Los resultados indican un rendimiento del clasificador por encima del 90%, calculando los índices de especificidad, sensibilidad y precisión.
Palabras clave: Reconocimiento de maleza, vectores de soporte, matrices de co-ocurrencia, PCA.
Received: December 15th 2015
Accepted: February 7th 2017
Introduction
Weeds are plants that compete with the desired commercial crop and lower its productivity: they block irrigation canals and compete for water, nutrients, space, and light, so crop quality and yield decrease. A robotic application that can discriminate between weeds and crops from images is a cost-effective alternative that enables selective treatment, optimizing resources and preserving the environment by identifying and removing only the weed plants mixed with the vegetables. This approach can be implemented by using image processing to select undesired plants and performing autonomous mechanical eradication from a mobile platform moving through the crop, without affecting the other plants with chemical products.
Recent developments in the field of machine vision have led to renewed interest in weed recognition systems based on it. There are three main approaches for weed detection: colour, shape, and texture analysis. Regarding colour and shape features, previous research suggests a criterion for segmenting plants based on a vegetation index that emphasizes the "green" component of the source image. Two such indices are the Excess Green Index (Woebbecke et al., 1995; Muangkasem et al., 2010) and the Normalized Difference Vegetation Index, used for weed classification considering colour and shape (Pérez et al., 2000) and to quantify and map vegetative cover (Wiles, 2011). An advantage of these indices is that they can perform well under different sunlight and background conditions. Colour features can be complemented with shape features that describe plant geometry: weeds can be identified using area, perimeter, convexity, and longest-chord calculations (Shinde & Shukla, 2014).
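The Excess Green Index cited above is a simple per-pixel computation. As an illustrative sketch (the paper itself used MATLAB; this numpy version and the function name `excess_green` are this editor's assumptions), the index is computed on chromaticity-normalized channels:

```python
import numpy as np

def excess_green(rgb):
    """Excess Green Index (Woebbecke et al., 1995): ExG = 2g - r - b,
    computed on chromaticity-normalized channels g = G/(R+G+B), etc."""
    rgb = rgb.astype(float)
    total = rgb.sum(axis=2)
    total[total == 0] = 1.0  # avoid division by zero on black pixels
    r, g, b = (rgb[..., k] / total for k in range(3))
    return 2 * g - r - b

# A pure-green pixel scores high; a soil-like brown pixel scores low.
img = np.array([[[0, 255, 0], [140, 90, 60]]], dtype=np.uint8)
exg = excess_green(img)
```

Thresholding ExG then separates vegetation from soil background before any shape or texture analysis.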
Other studies have addressed weed classification with texture features, using texture measures calculated from the Gray Level Co-occurrence Matrix (GLCM) (Haralick et al., 1973; Burks, 1997), which preserves spatial information. Such texture descriptors have been used to train neural networks to identify plant species or weeds; among the most relevant articles, Huang (2007) and Kavdır (2004) report classification accuracy around 90%. Wu & Wen (2009) proposed a support vector machine classifier to identify weeds in corn fields during early crop stages, using the gray-level co-occurrence matrix and statistical histogram properties to extract texture features, with accuracy greater than 92%. Similarly, Ahmed et al. (2012) evaluated fourteen colour, size, and moment-invariant features to find an optimal combination providing the highest classification rate, achieving above 97% accuracy.
Based on the studies described above, there is potential for using texture features to discriminate weeds and vegetables in classifier design, and thus to carry the approach into agricultural robotics applications. The main objective of the research presented in this paper is to develop a weed identification system that extracts GLCM texture features and reduces their dimensionality with Principal Component Analysis (PCA), obtaining suitable patterns in a 2D feature space for training a support vector machine classifier on outdoor, unfiltered RGB images. The data processing system consists of a 3,30 GHz processor and 8 GB RAM running MATLAB 2015b.
This paper is divided into five parts. The first part is the Introduction; Section 2 describes texture feature calculation and dimensionality reduction. In Section 3, Support Vector Machine training is explained. Results and discussion are presented in Section 4, and Section 5 covers conclusions and future work. Finally, acknowledgements are presented.
Texture Feature Processing using PCA
Texture features quantify, in various ways, grey-level differences. Computing texture measures over an entire image makes visible the areas where these changes occur. The texture feature extraction process starts from a database of outdoor vegetable crop images for SVM training, with images labeled according to the weed and vegetable classes. The Grey Level Co-occurrence Matrix (GLCM) method is then applied to each observation in the training set, and each computed matrix serves as the basis for 10 statistical texture measures. Finally, Principal Component Analysis (PCA) is used to represent the original data in a new basis in which most of the variance is preserved on each axis; as a result, dimensionality reduction yields a 2D feature space. This procedure is described below.
GLCM and texture feature extraction
The GLCM method is a practical way to organize and tabulate the changes in brightness for different combinations of pixels while preserving spatial information, yielding first- and second-order texture measures, that is, statistics computed with or without considering the relationship between neighboring pixels. The mathematical definition of the co-occurrence matrix is shown in Equation (1).

C_{\Delta x,\Delta y}(i,j)=\sum_{x=1}^{n}\sum_{y=1}^{m}\begin{cases}1, & \text{if } I(x,y)=i \text{ and } I(x+\Delta x,\, y+\Delta y)=j\\ 0, & \text{otherwise}\end{cases} \quad (1)

Where C is a co-occurrence matrix defined over an image I of size m x n, parameterized with steps ∆x and ∆y. For texture calculations this matrix must be made symmetric and normalized, according to Haralick et al. (1973). The present paper works with ten texture measures computed from the co-occurrence matrix. These features were selected to cover three groups: contrast, order, and descriptive statistics. They quantify similarity or local variance in the image, deviation of the gray levels, co-occurrence frequency of pixels, and the uniformity and homogeneity of the evaluated image region. The mathematical expressions for the texture features are shown in Equations (2) – (11).
Autocorrelation: \sum_{i}\sum_{j} (i\,j)\, p(i,j) \quad (2)
Contrast: \sum_{i}\sum_{j} (i-j)^2\, p(i,j) \quad (3)
Correlation: \sum_{i}\sum_{j} \frac{(i-\mu_x)(j-\mu_y)\, p(i,j)}{\sigma_x \sigma_y} \quad (4)
Energy: \sum_{i}\sum_{j} p(i,j)^2 \quad (5)
Dissimilarity: \sum_{i}\sum_{j} |i-j|\, p(i,j) \quad (6)
Entropy: -\sum_{i}\sum_{j} p(i,j)\log p(i,j) \quad (7)
Homogeneity: \sum_{i}\sum_{j} \frac{p(i,j)}{1+(i-j)^2} \quad (8)
Variance: \sum_{i}\sum_{j} (i-\mu)^2\, p(i,j) \quad (9)
Difference Variance: \sum_{n=0}^{N_g-1} (n-\mu_{x-y})^2\, p_{x-y}(n), \text{ with } p_{x-y}(n)=\sum_{|i-j|=n} p(i,j) \quad (10)
Cluster Shade: \sum_{i}\sum_{j} (i+j-\mu_x-\mu_y)^3\, p(i,j) \quad (11)
Where Nx and Ny are the numbers of columns and rows, respectively, of the rectangular image quantized into Ng gray levels, p(i,j) is the (i,j)th entry of the normalized GLCM, and μx, μy, σx, σy are the means and standard deviations of its marginal distributions.
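The GLCM of Equation (1) and a few of the measures above can be sketched in a few lines. The paper's pipeline ran in MATLAB; this numpy version (function names `glcm` and `texture_measures` are this editor's assumptions) computes the symmetrized, normalized matrix and three of the ten descriptors:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Co-occurrence counts C(i, j) over pixel pairs (x, y) and
    (x + dx, y + dy), per Equation (1), then symmetrized and
    normalized as in Haralick et al. (1973)."""
    C = np.zeros((levels, levels))
    rows, cols = img.shape
    for y in range(rows - dy):
        for x in range(cols - dx):
            C[img[y, x], img[y + dy, x + dx]] += 1
    C = C + C.T          # make symmetric
    return C / C.sum()   # normalize to joint probabilities p(i, j)

def texture_measures(p):
    """Three of the ten descriptors: contrast (3), energy (5), entropy (7)."""
    i, j = np.indices(p.shape)
    contrast = np.sum((i - j) ** 2 * p)
    energy = np.sum(p ** 2)
    nz = p[p > 0]                      # skip zero entries: 0 log 0 := 0
    entropy = -np.sum(nz * np.log(nz))
    return contrast, energy, entropy

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
p = glcm(img, levels=4)
contrast, energy, entropy = texture_measures(p)
```

A perfectly uniform window gives contrast 0, energy 1, and entropy 0; textured windows spread probability mass off the diagonal, which is what separates weed foliage from smoother vegetable leaves.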
The image database was built from 250 photos captured by a person moving along the vegetable crops with an 8 MP camera held perpendicular to the crop lines. Illumination disturbances were avoided in a non-automated manner by controlling the lens aperture to reduce light input and preserve the real colour of the plants. Although this process was manual, it is a first approach to testing the present weed classification based on texture descriptors, with further work planned under light-controlled conditions using a camera obscura. The photos used in this paper include spinach and chard crops of "Horticulture Technology", an academic program of the Universidad Militar Nueva Granada Campus in Cajicá, Colombia. The images were labeled manually for a binary classifier with weed and plant classes, based on the random growth of weeds and the expertise of the crop manager.
With the dataset acquired, observations of 100 x 100 pixels were built by dividing the original images into a grid. Some of the images used for feature extraction are shown in Figure 1.
Figure 1. Some images used for feature extraction. Left: Vegetable class. Right: Weed class.
It is important to highlight that the observation size was chosen in view of the weed size: if the area were smaller, the effect would be a kind of zoom-in, and the small leaves of the weed could fill the entire window.
Then, the texture measurements described above are calculated from the GLCM of each 100 x 100 pixel observation. The results were stored and tabulated in a matrix of 250 rows (observations) and 10 columns (variables, i.e. texture descriptors).
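The grid division described above is a simple windowing step. A minimal sketch, assuming non-overlapping windows and discarding incomplete border windows (the function name `observations` is this editor's choice; the paper does not specify border handling):

```python
import numpy as np

def observations(gray, win=100):
    """Divide a grayscale image into non-overlapping win x win windows
    (the observations), discarding incomplete border windows."""
    rows, cols = gray.shape
    return [gray[r:r + win, c:c + win]
            for r in range(0, rows - win + 1, win)
            for c in range(0, cols - win + 1, win)]

# e.g. a 300 x 400 image yields a 3 x 4 grid of 12 observations; computing
# the 10 texture descriptors per window then gives an (n_obs x 10) matrix.
gray = np.random.randint(0, 8, size=(300, 400))
obs = observations(gray)
```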
PCA and dimensionality reduction
Principal Component Analysis (PCA) is a multivariate method used to find patterns in data, establishing relationships among observed variables to detect trends, groups, deviations, and outliers. The objective of the analysis is to represent the data in terms of a matrix Y that contains the greatest variance information in the directions of the eigenvectors (see Equation (12)), starting from a matrix X with n columns for samples and m rows for variables (Reddy et al., 2012).
Y = \alpha X \quad (12)
The principal components α result from the eigenvectors of the normalized covariance matrix of X, which are orthogonal to each other and sorted in descending order with respect to their eigenvalues (the proportion of total dataset variance). The eigenvalues therefore indicate the proportion of variance and the importance (length) of each principal component axis. Note that PCA should be calculated on normalized data, with the mean subtracted and each variable divided by its standard deviation, to avoid large variance values caused by the measuring range and units of the extracted texture features.
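The standardize-then-eigendecompose procedure of Equation (12) can be sketched as follows (a numpy stand-in for the paper's MATLAB computation; the synthetic data and the function name `pca` are this editor's assumptions):

```python
import numpy as np

def pca(X):
    """PCA on a (samples x variables) matrix: standardize each variable,
    eigendecompose the covariance matrix, and sort components by
    descending eigenvalue (explained variance)."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)   # zero mean, unit variance
    cov = np.cov(Z, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)     # symmetric matrix -> eigh
    order = np.argsort(eigvals)[::-1]          # descending variance
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    Y = Z @ eigvecs                            # scores in the new basis
    cum_var = np.cumsum(eigvals) / eigvals.sum()
    return Y, eigvals, cum_var

rng = np.random.default_rng(0)
X = rng.normal(size=(250, 10))                 # 250 observations x 10 features
X[:, 1] = 2 * X[:, 0] + 0.1 * X[:, 1]          # two correlated descriptors
Y, eigvals, cum_var = pca(X)
```

Keeping only the first two columns of Y is the 2D reduction used for the classifier; `cum_var` corresponds to the cumulative variances reported in Table 1.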
For the purpose of increasing the reliability of weed classification, dimensionality reduction is carried out while preserving the greatest variance in the data. Thus, the eigenchannels derived from the texture descriptors that contain the majority of the input variance are those that best describe the features resulting from the linear combination of all the texture statistics calculated for a particular image. The PCA calculation is performed on the training set, and the cumulative variances are shown in Table 1.
Table 1. Principal components cumulative variance (%)
PC1 | PC2 | PC3 | PC4 | PC5 | PC6 | PC7 | PC8 | PC9 | PC10
56,90 | 80,96 | 93,20 | 96,48 | 99,56 | 99,89 | 99,99 | 99,99 | 100 | 100
Moreover, another way of analyzing principal component relevance for dimensionality reduction is through a scree plot, which represents the eigenvalues versus the number of components (see Figure 2).
Figure 2. Scree plot. Eigenvalues vs Number of principal components.
The descriptors must contain as much information about the classes as possible in order to discriminate them; this corresponds to the directions with the greatest variance (eigenvalues) in the data. With this in mind, the first two components retain 80,96% of the data variance. To verify a clear difference between the weed and vegetable classes in the 2D principal component space, the training set is transformed using the obtained eigenvectors and graphed in Figure 3.
Figure 3. Mapped features in 2D Principal Component space. Red: Weed-Green: Vegetables.
These results show clearly differentiable feature groups for the two classes, and form the basis for the classifier design.
Support Vector Machine classification
Support Vector Machine (SVM) is a discriminative classifier formally defined by a separating hyperplane. Support vectors are the training examples closest to the separating hyperplane, and the aim of SVM is to orient this hyperplane so that it lies as far as possible (the margin) from the closest members of both classes. This separating hyperplane works as the decision surface and is described by wx + b = 0, where w is normal to the hyperplane and b/||w|| is the perpendicular distance from the hyperplane to the origin. Figure 4 shows a decision boundary example discriminating two classes A (circles) and B (triangles), corresponding to vectors xi of the training set with class labels yi of +1 and -1, respectively. The hyperplane is equidistant from H1 and H2, and the distance d1 + d2 is the quantity known as the margin.
Figure 4. Hyperplane through two linearly separable classes.
Then, the SVM approach is based on selecting the variables w and b so that the training data are described by Equations (13) and (14).
x_i \cdot w + b \geq +1 \quad \text{for } y_i = +1 \quad (13)
x_i \cdot w + b \leq -1 \quad \text{for } y_i = -1 \quad (14)
In order to orient the hyperplane as far as possible from the support vectors, it is necessary to maximize the margin value 1/||w||. This implies:
\min_{w,\,b} \; \tfrac{1}{2}\|w\|^2 \quad (15)
\text{subject to } \; y_i (x_i \cdot w + b) - 1 \geq 0 \;\; \forall i \quad (16)
This is a convex quadratic optimization problem, which can be expressed as a dual problem with Lagrange multipliers.
L_P = \tfrac{1}{2}\|w\|^2 - \sum_{i=1}^{l} \lambda_i \left[ y_i (x_i \cdot w + b) - 1 \right] \quad (17)
Where Λ=(λ1,…, λl) is the vector of non-negative Lagrange multipliers corresponding to the constraints in Equation (16). Therefore, the dual problem is:
\max_{\Lambda} \; \sum_{i=1}^{l} \lambda_i - \tfrac{1}{2} \sum_{i=1}^{l}\sum_{j=1}^{l} \lambda_i \lambda_j y_i y_j (x_i \cdot x_j) \quad (18)
\text{subject to } \; \lambda_i \geq 0, \quad \sum_{i=1}^{l} \lambda_i y_i = 0 \quad (19)
The optimal solution λ* defines a discriminant function for classifying new points in feature space. Equations (20) and (21) show how this function is built, where b* is a threshold value (Osuna et al., 1997).
w^* = \sum_{i=1}^{l} \lambda_i^* y_i x_i \quad (20)
f(x) = \operatorname{sign}(w^* \cdot x + b^*) \quad (21)
In most cases, classification problems are nonlinear in feature space; SVM theory can then be extended by projecting the input data into a higher-dimensional space in which a separating hyperplane can be built. This is achieved with a kernel function, given a suitable mapping x −> ø(x).
K(x_i, x_j) = \phi(x_i) \cdot \phi(x_j) \quad (22)
Up to this point, this section has explained SVM theory; the methodology used for training is now set out. Figure 5 shows the highlighted support vectors (black circles) for the training set resulting from dimensionality reduction of the texture features using PCA.
Figure 5. Support vectors. Red: Weed-Green: Vegetables-Black circles: Support vectors.
The radial basis function (RBF) is used as the kernel to solve the weed classification problem, due to the nonlinear distribution of the texture features in principal component space. The mathematical definition of the RBF is given in Equation (23).
K(x_i, x_j) = \exp\!\left( -\frac{\|x_i - x_j\|^2}{2\sigma^2} \right) \quad (23)
Figure 6 shows the contour of the decision boundary corresponding to the RBF kernel with σ = 1 on the training set.
Figure 6. Support Vector Machine classifier. Initial decision boundary. Red: Weed-Green: Vegetables-Black line: Decision boundary.
To increase the reliability of weed discrimination, the support vector machine trained above is optimized. For this purpose, the training data is partitioned into 10 sets; the 10-fold cross-validation loss is then expressed as a function of σ and minimized with the simplex search method of Lagarias et al. (1998). The resulting value of σ is 0,9614, and the smoothed decision boundary is shown in Figure 7.
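The σ optimization above can be sketched with off-the-shelf tools: scikit-learn's RBF SVM (whose `gamma` corresponds to 1/(2σ²)) and SciPy's Nelder-Mead simplex, which implements the Lagarias et al. (1998) method. The synthetic 2D data below stands in for the paper's PCA features, which are not reproduced here; everything else in this sketch is this editor's assumption, not the authors' code:

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for the 2D PCA features of the two classes.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1, 0.6, (60, 2)), rng.normal(1, 0.6, (60, 2))])
y = np.array([0] * 60 + [1] * 60)

def cv_loss(log_sigma):
    """10-fold cross-validation loss as a function of the RBF scale sigma;
    optimizing log(sigma) keeps the search unconstrained and positive."""
    sigma = np.exp(log_sigma[0])
    clf = SVC(kernel='rbf', gamma=1.0 / (2.0 * sigma ** 2))
    return 1.0 - cross_val_score(clf, X, y, cv=10).mean()

# Simplex (Nelder-Mead) search of Lagarias et al. (1998) over sigma.
res = minimize(cv_loss, x0=[0.0], method='Nelder-Mead')
sigma_opt = float(np.exp(res.x[0]))
```

Smaller σ values produce a tighter, more wiggly boundary; the cross-validated optimum smooths it, as in Figure 7.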
Figure 7. Support Vector Machine classifier. Smoothed decision boundary. Red: Weed-Green: Vegetables-Black line: Decision boundary.
Results
The classification algorithm was tested with a set of images containing weed and vegetable observations. As in the training set, each observation has a size of 100 x 100 pixels and was taken perpendicular to the crop lines, avoiding illumination disturbances. The experiments were carried out on two validation sets of 70 and 320 samples; the samples were labeled and stored by class in a column vector for calculating the classification performance indices described in Equations (24), (25), (26), and (27).
SN = \frac{TP}{TP + FN} \times 100\% \quad (24)
SP = \frac{TN}{TN + FP} \times 100\% \quad (25)
PPV = \frac{TP}{TP + FP} \times 100\% \quad (26)
NPV = \frac{TN}{TN + FN} \times 100\% \quad (27)
Where True Positive (TP) is the number of plants correctly detected as weed, True Negative (TN) is the number of plants correctly detected as crop, False Positive (FP) is the number of crop plants detected as weed, and False Negative (FN) is the number of weed plants detected as crop.
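Equations (24)–(27) reduce to four ratios of these counts. A minimal helper (the function name `performance_indices` is this editor's choice), checked here against the counts of the paper's first validation set (Table 2):

```python
def performance_indices(tp, tn, fp, fn):
    """Sensitivity, specificity, positive and negative predictive value
    (Equations 24-27), expressed in percent."""
    sn = 100.0 * tp / (tp + fn)    # Eq. (24): true-weed detection rate
    sp = 100.0 * tn / (tn + fp)    # Eq. (25): true-crop detection rate
    ppv = 100.0 * tp / (tp + fp)   # Eq. (26): precision of weed calls
    npv = 100.0 * tn / (tn + fn)   # Eq. (27): precision of crop calls
    return sn, sp, ppv, npv

# Counts from the 70-image validation set (Table 2).
sn, sp, ppv, npv = performance_indices(tp=34, tn=33, fp=2, fn=1)
```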
The first test was performed with a set of 70 images: 35 weed observations and 35 vegetable observations. The results are shown in Figure 8, which displays the principal component (pattern) space with the decision boundary; the asterisks indicate weed (magenta) and vegetable (cyan) classifications.
Figure 8. Support Vector Machine classification. Magenta: Weed classification-Cyan: Vegetable classification.
The classifier performance is tabulated in Table 2 with the indices calculation described above.
Table 2. Performance Indices results of 70 images validation set
TP | TN | FP | FN | SN (%) | SP (%) | PPV (%) | NPV (%)
34 | 33 | 2 | 1 | 97,143 | 94,286 | 94,444 | 97,059
Another test was performed with a set of 320 images, half corresponding to weed and the remaining half to vegetable samples. Table 3 presents the achieved performance indices.
Table 3. Performance Indices results of 320 images validation set
TP | TN | FP | FN | SN (%) | SP (%) | PPV (%) | NPV (%)
145 | 143 | 17 | 15 | 90,625 | 89,375 | 89,506 | 90,506
Conclusions
The present study was designed to extract relevant features as patterns from texture measures and use them to classify weeds and vegetables. The approach used outdoor images without preprocessing and was validated by the results of the principal component analysis and the clear difference between classes in feature space (see Figure 3); additional algorithms to handle outdoor conditions are therefore unnecessary, and the computational cost is also reduced. The statistical measures of classifier performance indicate the following: sensitivity and specificity values above 90% represent a high percentage of correct classification of weeds and vegetables according to their true condition, while the positive and negative predictive values describe high accuracy, indicating the probability that a new sample really belongs to the class assigned to it. These results suggest that the developed classification system performs well and can be applied to selective weed treatment or to applications requiring continuous monitoring to minimize resource consumption in agricultural production. Future work focuses on transferring the coordinates of identified weeds in the crop scene to a mechanical structure, as set-points to reach and pull out weed plants, on a module that will be pulled by a tractor.
Acknowledgements
This work is supported by the project INV ING-1758, "Sistema de detección de malezas, suelo compacto, suelo seco, plagas y piedras en los campos de cultivo Fase 1", funded by the Research Vice-Rectory of the Universidad Militar Nueva Granada in Bogotá, Colombia.
References
Ahmed, F., Al-Mamun, H., Bari, A., Hossain, E., & Kwan, P. (2012). Classification of crops and weeds from digital images: A support vector machine approach. Crop Protection, 40, 98-104.
Burks, T. (1997). Color image texture analysis and neural network classification of weed species.
Cho, S., Lee, D., & Jeong, J. (2002). AE—Automation and Emerging Technologies: Weed–plant Discrimination by Machine Vision and Artificial Neural Network. Biosystems Engineering, 83(3), 275-280.
Haralick, R. M., & Shanmugam, K. (1973). Textural features for image classification. IEEE Transactions on systems, man, and cybernetics, 3(6), 610-621.
Huang, K.-Y. (2007). Application of artificial neural network for detecting Phalaenopsis seedling diseases using color and texture features. Computers and Electronics in Agriculture, 57, 3-11.
Kavdır, İ. (2004). Discrimination of sunflower, weed and soil by artificial neural networks. Computers and Electronics in Agriculture, 44(2), 153-160.
Lagarias, J. C., Reeds, J. A., Wright, M. H., & Wright, P. E. (1998). Convergence properties of the Nelder--Mead simplex method in low dimensions. SIAM Journal on optimization, 9(1), 112-147.
Muangkasem, A., Thainimit, S., Keinprasit, R., Isshiki, T., & Tangwongkit, R. (2010). Weed detection over between-row of sugarcane fields using machine vision with shadow robustness technique for variable rate herbicide applicator. Energy Research Journal, 1(2), 141-145.
Osuna, E., Freund, R., & Girosi, F. (1997). Support vector machines: Training and applications.
Pérez, A. J., López, F., Benlloch, J. V., & Christensen, S. (2000). Colour and shape analysis techniques for weed detection in cereal fields. Computers and electronics in agriculture, 25(3), 197-212.
Reddy, T. A., Devi, K. R., & Gangashetty, S. V. (2012, March). Nonlinear principal component analysis for seismic data compression. In Recent Advances in Information Technology (RAIT), 2012 1st International Conference on (pp. 927-932). IEEE.
Shinde, A. K., & Shukla, M. Y. (2014). Crop detection by machine vision for weed management. International Journal of Advances in Engineering & Technology, 7(3), 818.
Wiles, L. J. (2011). Software to quantify and map vegetative cover in fallow fields for weed management decisions. Computers and electronics in agriculture, 78(1), 106-115.
Woebbecke, D. M., Meyer, G. E., Von Bargen, K., & Mortensen, D. A. (1993, May). Plant species identification, size, and enumeration using machine vision techniques on near-binary images. In Applications in Optical Science and Engineering (pp. 208-219). International Society for Optics and Photonics.
Woebbecke, D. M., Meyer, G. E., Von Bargen, K., & Mortensen, D. A. (1995). Color indices for weed identification under various soil, residue, and lighting conditions. Transactions of the ASAE-American Society of Agricultural Engineers, 38(1), 259-270.
Wu, L., & Wen, Y. (2009). Weed/corn seedling recognition by support vector machine using texture features. African Journal of Agricultural Research, 4(9), 840-846.
CrossRef Cited-by
1. Hui Zhang, Zhi Wang, Yufeng Guo, Ye Ma, Wenkai Cao, Dexin Chen, Shangbin Yang, Rui Gao. (2022). Weed Detection in Peanut Fields Based on Machine Vision. Agriculture, 12(10), p.1541. https://doi.org/10.3390/agriculture12101541.
2. Jeng Hong Eng, Azali Saudi, Jumat Sulaiman. (2018). Performance Analysis of the Explicit Decoupled Group Iteration via Five-Point Rotated Laplacian Operator in Solving Poisson Image Blending Problem. Indian Journal of Science and Technology, 11(12), p.1. https://doi.org/10.17485/ijst/2018/v11i12/120852.
3. Wen Zhang, Zhonghua Miao, Nan Li, Chuangxin He, Teng Sun. (2022). Review of Current Robotic Approaches for Precision Weed Management. Current Robotics Reports, 3(3), p.139. https://doi.org/10.1007/s43154-022-00086-5.
4. Tanzeel U. Rehman, Qamar U. Zaman, Young K. Chang, Arnold W. Schumann, Kenneth W. Corscadden, Travis J. Esau. (2018). Optimising the parameters influencing performance and weed (goldenrod) identification accuracy of colour co-occurrence matrices. Biosystems Engineering, 170, p.85. https://doi.org/10.1016/j.biosystemseng.2018.04.002.
5. Luis Guilherme Ribeiro Martins, Maria Teresinha Arns Steiner, Volmir Eugênio Wilhelm, Pedro José Steiner Neto, Bruno Samways Dos Santos. (2018). Paraná’s Credit Unions: an analysis of their efficiency and productivity change. Ingeniería e Investigación, 38(3), p.59. https://doi.org/10.15446/ing.investig.v38n3.68892.
6. Wenan Yuan, Nuwan Kumara Wijewardane, Shawn Jenkins, Geng Bai, Yufeng Ge, George L. Graef. (2019). Early Prediction of Soybean Traits through Color and Texture Features of Canopy RGB Imagery. Scientific Reports, 9(1) https://doi.org/10.1038/s41598-019-50480-x.
7. Feng Xiao, Haibin Wang, Yaoxiang Li, Ying Cao, Xiaomeng Lv, Guangfei Xu. (2023). Object Detection and Recognition Techniques Based on Digital Image Processing and Traditional Machine Learning for Fruit and Vegetable Harvesting Robots: An Overview and Review. Agronomy, 13(3), p.639. https://doi.org/10.3390/agronomy13030639.
License
Copyright 2017 Camilo Pulido Rojas, Leonardo Solaque Guzmán, Nelson Velasco Toledo

This work is licensed under a Creative Commons Attribution 4.0 International License.
The authors or copyright holders of each article grant the journal Ingeniería e Investigación of the Universidad Nacional de Colombia a non-exclusive, limited, and free authorization over the article which, once evaluated and approved, is submitted for subsequent publication under the following conditions:
1. The corrected version is submitted in accordance with the reviewers' suggestions, and it is stated that the article is an unpublished document over which the authors hold the rights being authorized; the authors assume full responsibility for the content of their work before the journal Ingeniería e Investigación, the Universidad Nacional de Colombia, and third parties.
2. The authorization granted to the journal takes effect from the date on which the article is included in the corresponding volume and issue of Ingeniería e Investigación in the Open Journal Systems and on the journal's main page (https://revistas.unal.edu.co/index.php/ingeinv), as well as in the various databases and indexes in which the journal is indexed.
3. The authors authorize the journal Ingeniería e Investigación of the Universidad Nacional de Colombia to publish the document in whatever format is required (print, digital, electronic, or any other format known or yet to be known) and authorize the journal to include the work in the indexes and search engines it deems necessary to promote its dissemination.
4. The authors accept that this authorization is granted free of charge; they therefore waive any remuneration for the publication, distribution, public communication, and any other use made under the terms of this authorization.