Computational Science Hub (CSH)
Permanent URI for this collection: https://hohpublica.uni-hohenheim.de/handle/123456789/16924
Browsing Computational Science Hub (CSH) by Sustainable Development Goals "2"
Now showing 1 - 4 of 4
Publication
Assessing the capability of YOLO- and transformer-based object detectors for real-time weed detection (2025)
Allmendinger, Alicia; Saltık, Ahmet Oğuz; Peteinatos, Gerassimos G.; Stein, Anthony; Gerhards, Roland

Spot spraying is an efficient and sustainable method for reducing herbicide use in agriculture. Reliable differentiation between crops and weeds, including species-level classification, is essential for real-time application. This study compares state-of-the-art object detection models (YOLOv8, YOLOv9, YOLOv10, and RT-DETR) using 5611 images of 16 plant species. Two datasets were created: dataset 1 treats all 16 species as individual classes, while dataset 2 groups the weeds into monocotyledonous weeds, dicotyledonous weeds, and three selected crops. Results indicate that all models perform similarly, but YOLOv9s and YOLOv9e exhibit strong recall (66.58 % and 72.36 %), mAP50 (73.52 % and 79.86 %), and mAP50-95 (43.82 % and 47.00 %) on dataset 2. RT-DETR-l excels in precision, reaching 82.44 % (dataset 1) and 81.46 % (dataset 2), making it well suited for minimizing false positives. On dataset 2, YOLOv9c attains 84.76 % precision for dicotyledonous weeds and 78.22 % recall for Zea mays L. Inference times show that the smaller YOLO models (YOLOv8n, YOLOv9t, and YOLOv10n) are the fastest, reaching 7.64 ms (dataset 1) on an NVIDIA GeForce RTX 4090 GPU, while CPU inference times increase significantly. These findings emphasize the trade-off between model size, accuracy, and hardware suitability for real-time agricultural applications.
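As an illustration of the kind of real-time inference benchmarked in this study, the sketch below runs a YOLO and an RT-DETR detector from the ultralytics package on a single field image and reports per-image inference time. The weight files and the image path are placeholders, not the authors' trained models or data.

```python
# Minimal sketch of real-time detector inference with the ultralytics package.
# Weight files and image path are placeholders, not the study's trained models.
from ultralytics import YOLO, RTDETR

models = {
    "YOLOv8n": YOLO("yolov8n.pt"),       # small, fast YOLO variant
    "YOLOv9e": YOLO("yolov9e.pt"),       # larger YOLO variant
    "RT-DETR-l": RTDETR("rtdetr-l.pt"),  # transformer-based detector
}

for name, model in models.items():
    # predict() returns one Results object per image; conf filters low-confidence boxes
    results = model.predict("field_image.jpg", conf=0.25, device=0)  # device=0 -> first GPU
    res = results[0]
    # res.speed reports preprocessing, inference, and postprocessing times in milliseconds
    print(f"{name}: {len(res.boxes)} detections, "
          f"inference {res.speed['inference']:.2f} ms")
    for box in res.boxes:
        cls_name = res.names[int(box.cls)]  # predicted class, e.g. a crop or weed group
        print(f"  {cls_name}: confidence {float(box.conf):.2f}")
```

On a GPU such an inference loop runs in the millisecond range per image, which is the regime the paper's 7.64 ms figure refers to; on a CPU the same call is substantially slower.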
Publication
DeepCob: precise and high-throughput analysis of maize cob geometry using deep learning with an application in genebank phenomics (2021)
Kienbaum, Lydia; Correa Abondano, Miguel; Blas, Raul; Schmid, Karl

Background: Maize cobs are an important component of crop yield and exhibit high diversity in size, shape, and color in native landraces and modern varieties. Various phenotyping approaches have been developed to measure maize cob parameters in a high-throughput fashion. More recently, deep learning methods such as convolutional neural networks (CNNs) became available and have proven highly useful for high-throughput plant phenotyping. We aimed to compare classical image segmentation with deep learning methods for maize cob image segmentation and phenotyping, using a large image dataset of native maize landrace diversity from Peru.

Results: A comparison of three image analysis methods showed that a Mask R-CNN trained on a diverse set of maize cob images was clearly superior to classical image analysis with the Felzenszwalb-Huttenlocher algorithm and to a window-based CNN, owing to its robustness to image quality and its object segmentation accuracy (r = 0.99). We integrated Mask R-CNN into a high-throughput pipeline that segments both maize cobs and rulers in images and performs an automated quantitative analysis of eight phenotypic traits, including diameter, length, ellipticity, asymmetry, aspect ratio, and the average values of the red, green, and blue color channels for cob color. Statistical analysis identified key training parameters for efficient iterative model updating, and we show that as few as 10–20 images are sufficient to update the initial Mask R-CNN model for new types of cob images. To demonstrate an application of the pipeline, we analyzed phenotypic variation in 19,867 maize cobs extracted from 3449 images of 2484 accessions from the maize genebank of Peru and identified phenotypically homogeneous and heterogeneous genebank accessions using multivariate clustering.

Conclusions: A single Mask R-CNN model and the associated analysis pipeline are widely applicable tools for maize cob phenotyping in contexts such as genebank phenomics or plant breeding.
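A minimal sketch of the underlying idea, turning instance masks into per-cob traits such as length, diameter, and mean color, is shown below. It uses torchvision's generic pretrained Mask R-CNN rather than the DeepCob model, reports traits in pixels instead of the ruler-calibrated units of the published pipeline, and the image path is a placeholder.

```python
# Sketch: derive simple cob traits from Mask R-CNN instance masks.
# Uses torchvision's generic pretrained Mask R-CNN, not the DeepCob model itself.
import numpy as np
import torch
import torchvision
from PIL import Image

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = Image.open("cob_image.jpg").convert("RGB")  # placeholder path
tensor = torchvision.transforms.functional.to_tensor(image)

with torch.no_grad():
    pred = model([tensor])[0]  # dict with 'boxes', 'labels', 'scores', 'masks'

rgb = np.asarray(image)
for mask, score in zip(pred["masks"], pred["scores"]):
    if score < 0.8:
        continue
    binary = mask[0].numpy() > 0.5        # boolean pixel mask for one instance
    ys, xs = np.nonzero(binary)
    length_px = ys.max() - ys.min()       # vertical extent as a proxy for cob length
    diameter_px = xs.max() - xs.min()     # horizontal extent as a proxy for diameter
    mean_rgb = rgb[binary].mean(axis=0)   # average color inside the mask
    print(f"length={length_px}px diameter={diameter_px}px mean RGB={mean_rgb.round(1)}")
```

In the published pipeline the detected ruler provides the pixel-to-millimeter scale, so the pixel extents above would be converted to physical cob dimensions before downstream clustering.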
Publication
Food informatics - Review of the current state-of-the-art, revised definition, and classification into the research landscape (2021)
Krupitzer, Christian; Stein, Anthony

Background: The growing human population, changing food consumption behavior, and the increasing awareness of food sustainability lead to new challenges for food production. Advances in Internet of Things (IoT) and Artificial Intelligence (AI) technology, including machine learning and data analytics, might help to address these challenges.

Scope and Approach: Several research perspectives, among them Precision Agriculture, Industrial IoT, Internet of Food, and Smart Health, already provide new opportunities through digitalization. In this paper, we review the current state of the art of these concepts. A further concept is Food Informatics, which so far has mostly been understood as a mainly data-driven approach to supporting food production. In this review paper, we propose and discuss a new perspective on Food Informatics as a supportive discipline that subsumes the incorporation of information technology, mainly IoT and AI, to support the variety of aspects related to the food production process, and we delineate it from other existing research streams in the domain.

Key Findings and Conclusions: Many of the concepts related to digitalization in food science overlap, and Food Informatics is only vaguely defined. In this paper, we provide a clear definition of Food Informatics and delineate it from related concepts. We corroborate our new perspective on Food Informatics with several case studies showing how it can support food production as well as the intermediate steps up to consumption, and we describe its integration with related concepts.

Publication
Integrating sensor data, laboratory analysis, and computer vision in machine learning-driven E-Nose systems for predicting tomato shelf life (2025)
Senge, Julia Marie; Kaltenecker, Florian; Krupitzer, Christian

Assessing the quality of fresh produce is essential to ensure a safe and satisfactory product. Methods to monitor the quality of fresh produce exist; however, they are often expensive, time-consuming, and sometimes require destroying the sample. Electronic Nose (E-Nose) technology has been established to track the ripeness, spoilage, and quality of fresh produce. Our study developed a freshness monitoring system for tomatoes that combines E-Nose technology with storage condition monitoring, color analysis, and weight-loss tracking. Different post-purchase scenarios were investigated, focusing on the influence of temperature and mechanical damage on shelf life. A Support Vector Classifier (SVC) and k-Nearest Neighbors (kNN) were applied to classify storage scenarios and storage days, while Support Vector Regression (SVR) and kNN regression were used to predict storage days. Using a data fusion approach with Linear Discriminant Analysis (LDA), the SVC achieved an accuracy of 72.91 % in predicting storage days and 86.73 % in distinguishing between storage scenarios. The kNN regression yielded the best results, with a Mean Absolute Error (MAE) of 0.841 days and a coefficient of determination of 0.867. These results highlight the method's potential to predict storage scenarios and storage days, providing insight into a product's remaining shelf life.
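The fusion-and-prediction step described in this abstract could look roughly like the scikit-learn sketch below: sensor, color, and weight-loss features are concatenated, projected with LDA, and then fed to an SVC for scenario classification and a kNN regressor for storage-day prediction. All feature arrays are synthetic placeholders, not the study's measurements, and the exact preprocessing and model settings are assumptions.

```python
# Sketch of the data-fusion idea: concatenate feature blocks, reduce with LDA,
# then classify storage scenarios (SVC) and regress storage days (kNN).
# All arrays below are synthetic placeholders, not the study's measurements.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, mean_absolute_error

rng = np.random.default_rng(0)
n = 300
e_nose = rng.normal(size=(n, 8))       # gas-sensor channels
color = rng.normal(size=(n, 3))        # mean RGB from computer vision
weight_loss = rng.normal(size=(n, 1))  # relative weight loss
X = np.hstack([e_nose, color, weight_loss])        # simple feature-level fusion
scenario = rng.integers(0, 3, size=n)              # storage scenario labels
days = rng.integers(0, 10, size=n).astype(float)   # storage days

X_tr, X_te, sc_tr, sc_te, d_tr, d_te = train_test_split(
    X, scenario, days, test_size=0.3, random_state=0)

# LDA projects the fused features onto a low-dimensional, class-discriminative space.
lda = LinearDiscriminantAnalysis(n_components=2).fit(X_tr, sc_tr)
Z_tr, Z_te = lda.transform(X_tr), lda.transform(X_te)

svc = SVC(kernel="rbf").fit(Z_tr, sc_tr)
knn = KNeighborsRegressor(n_neighbors=5).fit(Z_tr, d_tr)

print("scenario accuracy:", accuracy_score(sc_te, svc.predict(Z_te)))
print("storage-day MAE:", mean_absolute_error(d_te, knn.predict(Z_te)))
```

With the synthetic inputs above the scores are meaningless; the point is only the flow of fused features through LDA into the two predictors, mirroring the accuracy and MAE metrics reported in the abstract.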