Computational Science Hub (CSH)
Permanent URI for this collection: https://hohpublica.uni-hohenheim.de/handle/123456789/16924
Browsing Computational Science Hub (CSH) by Subject "Deep learning"
Now showing 1 - 2 of 2
Publication
Assessing the capability of YOLO- and transformer-based object detectors for real-time weed detection (2025)
Allmendinger, Alicia; Saltık, Ahmet Oğuz; Peteinatos, Gerassimos G.; Stein, Anthony; Gerhards, Roland

Spot spraying represents an efficient and sustainable method for reducing herbicide use in agriculture. Reliable differentiation between crops and weeds, including species-level classification, is essential for real-time application. This study compares state-of-the-art object detection models (YOLOv8, YOLOv9, YOLOv10, and RT-DETR) using 5611 images of 16 plant species. Two datasets were created: dataset 1, in which all 16 species were trained individually, and dataset 2, in which weeds were grouped into monocotyledonous weeds, dicotyledonous weeds, and three chosen crops. Results indicate that all models perform similarly, but YOLOv9s and YOLOv9e exhibit strong recall (66.58% and 72.36%), mAP50 (73.52% and 79.86%), and mAP50-95 (43.82% and 47.00%) in dataset 2. RT-DETR-l excels in precision, reaching 82.44% (dataset 1) and 81.46% (dataset 2), making it ideal for minimizing false positives. In dataset 2, YOLOv9c attains a precision of 84.76% for dicots and a recall of 78.22% for Zea mays L. Inference times highlight the smaller YOLO models (YOLOv8n, YOLOv9t, and YOLOv10n) as the fastest, reaching 7.64 ms (dataset 1) on an NVIDIA GeForce RTX 4090 GPU, with CPU inference times increasing significantly. These findings emphasize the trade-off between model size, accuracy, and hardware suitability for real-time agricultural applications.

Publication
DeepCob: precise and high-throughput analysis of maize cob geometry using deep learning with an application in genebank phenomics (2021)
Kienbaum, Lydia; Correa Abondano, Miguel; Blas, Raul; Schmid, Karl

Background: Maize cobs are an important component of crop yield that exhibit a high diversity in size, shape and color in native landraces and modern varieties.
Various phenotyping approaches have been developed to measure maize cob parameters in a high-throughput fashion. More recently, deep learning methods such as convolutional neural networks (CNNs) have become available and were shown to be highly useful for high-throughput plant phenotyping. We aimed to compare classical image segmentation with deep learning methods for maize cob image segmentation and phenotyping, using a large image dataset of native maize landrace diversity from Peru.

Results: A comparison of three image analysis methods showed that a Mask R-CNN trained on a diverse set of maize cob images clearly outperformed classical image analysis using the Felzenszwalb-Huttenlocher algorithm and a window-based CNN, owing to its robustness to image quality and its object segmentation accuracy (r = 0.99). We integrated Mask R-CNN into a high-throughput pipeline to segment both maize cobs and rulers in images and to perform an automated quantitative analysis of eight phenotypic traits, including diameter, length, ellipticity, asymmetry, aspect ratio, and average values of the red, green and blue color channels for cob color. Statistical analysis identified key training parameters for efficient iterative model updating. We also show that as few as 10-20 images are sufficient to update the initial Mask R-CNN model to process new types of cob images. To demonstrate an application of the pipeline, we analyzed phenotypic variation in 19,867 maize cobs extracted from 3449 images of 2484 accessions from the maize genebank of Peru, identifying phenotypically homogeneous and heterogeneous genebank accessions using multivariate clustering.

Conclusions: The single Mask R-CNN model and the associated analysis pipeline are widely applicable tools for maize cob phenotyping in contexts such as genebank phenomics or plant breeding.
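As an illustration of the trait-extraction step described in the DeepCob abstract, the sketch below computes a subset of the eight traits (length, diameter, aspect ratio, and mean RGB color) from a binary segmentation mask with NumPy. The function name, trait definitions, and the pixels-per-centimeter calibration parameter are illustrative assumptions, not the paper's actual pipeline, which derives the scale from a segmented ruler.

```python
import numpy as np

def cob_traits(mask: np.ndarray, image: np.ndarray, px_per_cm: float) -> dict:
    """Compute illustrative cob traits from a binary mask (H x W, bool)
    and the corresponding RGB image (H x W x 3).
    NOTE: hypothetical helper; trait formulas are simplified sketches."""
    ys, xs = np.nonzero(mask)
    length_px = ys.max() - ys.min() + 1    # extent along the cob axis (assumed vertical)
    diameter_px = xs.max() - xs.min() + 1  # widest horizontal extent
    mean_rgb = image[mask].mean(axis=0)    # average R, G, B inside the mask
    return {
        "length_cm": length_px / px_per_cm,
        "diameter_cm": diameter_px / px_per_cm,
        "aspect_ratio": length_px / diameter_px,
        "mean_rgb": tuple(mean_rgb),
    }

# Toy example: a 40 x 20 px rectangular "cob" in a 60 x 60 px image
mask = np.zeros((60, 60), dtype=bool)
mask[10:50, 20:40] = True
image = np.zeros((60, 60, 3), dtype=np.uint8)
image[mask] = (180, 140, 60)  # uniform brownish color inside the mask
traits = cob_traits(mask, image, px_per_cm=10.0)
# length 40 px -> 4.0 cm, diameter 20 px -> 2.0 cm, aspect ratio 2.0
```

In the actual pipeline, the mask would come from the Mask R-CNN segmentation and the scale from the co-segmented ruler; here both are hard-coded to keep the example self-contained.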