Brújula Home

Institutional repository of the Universidad Loyola



Novel Fusion Technique for High-Performance Automated Crop Edge Detection in Smart Agriculture

Author:
Martínez, Fátima Belén; Romaine, James Brian; Johnson, Princy; Cardona Ruiz, Adrián; Millán Gata, Pablo
URI:
https://hdl.handle.net/20.500.12412/7146
ISSN:
2169-3536
DOI:
10.1109/ACCESS.2025.3536701
Date:
2025
Keyword(s):
Computer vision; Object detection; YOLOv8 segmentation; YOLOv10; Superpixel; K-means; Threshold; Detectron2; Smart agriculture

Abstract:

Optimising vegetable production systems is crucial for maintaining and enhancing agricultural productivity, particularly for crops like lettuce. Separating the crop from the background poses a significant challenge when using automated tools. To address this, a novel technique has been developed to automatically detect the vegetative area of lettuces, optimising time and eliminating subjectivity during crop inspections. The proposed deep learning model integrates the YOLOv10 object detector, the K-means classifier, and a segmentation method known as superpixel. This combination enables lettuce area identification using bounding box labels instead of contour labels during training, improving efficiency compared to other methods such as YOLOv8 and Detectron2. Additionally, the combination of the YKMS method with YOLOv8 (YKMSV8) is evaluated, where YKMS serves as a label assistant. These methods are also used as benchmarks against which the proposed approach is compared. For the training of each method, a custom database has been created using a low-cost, low-power custom IoT node deployed on a real farm to provide the most accurate data. Throughout the comparison, a custom metric is used to evaluate performance in both training and inference, balancing computational cost and area error to make it applicable in agriculture. The performance metric weights a computational cost factor and an accuracy factor at 65% and 35%, respectively, ensuring applicability to autonomous agricultural devices. Computational cost is prioritised to preserve battery life during extended campaigns. The results of the custom metric during inference indicated that the YKMSV8 method achieved the highest performance, followed by Detectron2, YOLOv8, and, lastly, YKMS. Regarding area error, YOLOv8 exhibited the lowest mean error, followed by Detectron2, while YKMSV8 and YKMS produced similar values. In terms of inference time, YKMSV8 was the most computationally efficient, followed by YOLOv8, YKMS, and, finally, Detectron2.

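The abstract states only the weights of the custom performance metric (65% for the computational cost factor, 35% for the accuracy factor), not its exact form. A minimal sketch of such a weighted combination, assuming both factors are normalised to [0, 1] with higher being better; the function name and normalisation convention are hypothetical, not taken from the paper:

```python
# Hypothetical sketch of the weighted performance metric described in the
# abstract; only the 65%/35% weights are given there, the rest is assumed.
def performance_metric(cost_factor: float, accuracy_factor: float,
                       w_cost: float = 0.65, w_acc: float = 0.35) -> float:
    """Combine a computational-cost factor and an accuracy factor
    (both assumed normalised to [0, 1], higher is better) into one score."""
    if not (0.0 <= cost_factor <= 1.0 and 0.0 <= accuracy_factor <= 1.0):
        raise ValueError("factors are expected to be normalised to [0, 1]")
    return w_cost * cost_factor + w_acc * accuracy_factor

# Example: a fast but less accurate method can still score higher,
# because computational cost dominates the weighting.
fast_method = performance_metric(cost_factor=0.9, accuracy_factor=0.6)   # 0.795
slow_method = performance_metric(cost_factor=0.4, accuracy_factor=0.95)  # 0.5925
```

The cost-heavy weighting reflects the abstract's point that battery life on the autonomous IoT node matters more than marginal accuracy gains during extended campaigns.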

Collections
  • Artículos
Files in this item
Novel_Fusion_Technique_for_High-Performance_Automated_Crop_Edge_Detection_in_Smart_Agriculture.pdf (4.675Mb)


The content of the Repository is protected with a Creative Commons license:

Attribution-NonCommercial-NoDerivatives 4.0 International
