Browsing by Subject "Reinforcement learning"

Now showing 1 - 3 of 3
    Publication
    AI-assisted tractor control for secondary tillage
    (2025) Boysen, Jonas; Bökle, Sebastian; Stein, Anthony
    Modern agricultural machinery requires skilled operators to configure complex machines optimally, while autonomous machines without operators must optimize their configuration themselves to achieve optimal performance. During secondary tillage, multiple performance measures need to be monitored and optimized: seedbed quality, area output, and fuel consumption. Seedbed quality can be measured with the soil surface roughness coefficient, which can be computed from 3D cameras attached to the machine. For our work, such cameras are mounted at the front and rear of a Claas Arion 660 tractor with an attached power-harrow seeding combination. The soil-machine response model of our prior work is used to model the soil-machine interaction, both for training a reinforcement learning agent and for applying a decision-time planning agent that assists in controlling the working speed of the machine. The control agents are tested in real-world field trials and compared to good professional practice. The decision-time planning agent achieves results comparable to a gold standard, with significantly higher area output (29.1%) and more efficient fuel consumption (8.4%) than a baseline, while the reinforcement learning agent performed worse during the field trials. Seedbed quality and field emergence show no significant differences between the variants. Further analysis shows that model training and selection for the reinforcement learning agent could have led to performance loss; models that perform better in simulation were trained after the field trials. Furthermore, we analyze the models when tested under the field-trial conditions (out-of-distribution), which differ from the field conditions during training data collection. The out-of-distribution testing reduces the performance of the decision-time planning agent in terms of rRMSE and, to some extent, the reward of the reinforcement learning agent compared to in-distribution testing.
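To make the control idea concrete, here is a minimal, hypothetical sketch of one decision-time planning step: a learned soil-machine response model is queried for each candidate working speed, and the speed with the best trade-off between area output and fuel use that still satisfies a seedbed-quality constraint is chosen. The `response_model` stand-in, the speed grid, the constraint threshold, and all coefficients are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Hypothetical stand-in for the learned soil-machine response model from the
# authors' prior work. Given a field state and a candidate working speed, it
# predicts soil surface roughness, fuel rate, and area output. The functional
# forms and coefficients below are invented for illustration only.
def response_model(state: np.ndarray, speed_kmh: float) -> dict:
    roughness = 0.5 + 0.05 * speed_kmh + 0.1 * state.mean()  # seedbed gets rougher at speed
    fuel_rate = 8.0 + 0.9 * speed_kmh                        # l/h, grows with speed
    area_output = 0.3 * speed_kmh                            # ha/h for a fixed working width
    return {"roughness": roughness, "fuel": fuel_rate, "area": area_output}

def plan_speed(state: np.ndarray,
               candidates=np.linspace(4.0, 12.0, 17),        # km/h grid (assumed range)
               max_roughness: float = 1.2,                   # seedbed-quality constraint
               w_area: float = 1.0, w_fuel: float = 0.3) -> float:
    """Decision-time planning: score every candidate speed with the model and
    return the best speed that still satisfies the seedbed constraint."""
    best_speed, best_score = candidates[0], -np.inf
    for v in candidates:
        pred = response_model(state, v)
        if pred["roughness"] > max_roughness:                # reject speeds that ruin the seedbed
            continue
        score = w_area * pred["area"] - w_fuel * pred["fuel"]
        if score > best_score:
            best_speed, best_score = v, score
    return best_speed

# Example: plan from a dummy 3D-camera feature vector.
print(plan_speed(np.array([0.2, 0.4, 0.3])))
```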
    Publication
    Price discrimination with inequity-averse consumers: a reinforcement learning approach
    (2021) Buchali, Katrin
    With the advent of big data, unique opportunities arise for data collection and analysis and thus for personalized pricing. We simulate a self-learning algorithm setting personalized prices based on additional information about consumer sensitivities in order to analyze market outcomes for consumers who have a preference for fair, equitable outcomes. For this purpose, we compare a situation that does not consider fairness to a situation in which we allow for inequity-averse consumers. We show that the algorithm learns to charge different, revenue-maximizing prices and simultaneously increase fairness in terms of a more homogeneous distribution of prices.
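As a rough illustration of the setup described above, the following sketch lets a stateless Q-learning (multi-armed bandit) agent set personalized prices for two consumer types whose purchase probability includes a Fehr-Schmidt-style inequity-aversion penalty. The demand function, the two sensitivity values, and all learning parameters are illustrative assumptions, not the paper's simulation.

```python
import numpy as np

rng = np.random.default_rng(0)
prices = np.linspace(1.0, 10.0, 10)        # discrete price grid, one price per consumer type

def purchase_prob(p_own: float, p_other: float, sensitivity: float,
                  inequity_aversion: float = 0.5) -> float:
    """Purchase probability falls with the consumer's own price and, for an
    inequity-averse consumer, with paying more than the other group
    (a Fehr-Schmidt-style disadvantage term; all parameters are illustrative)."""
    disadvantage = max(p_own - p_other, 0.0)
    utility = 10.0 - sensitivity * p_own - inequity_aversion * disadvantage
    return 1.0 / (1.0 + np.exp(5.0 - utility))

# Stateless Q-learning (bandit) over joint actions: one Q-value per pair of
# personalized prices for the two consumer types.
Q = np.zeros((len(prices), len(prices)))
alpha, epsilon = 0.1, 0.1
for t in range(50_000):
    if rng.random() < epsilon:                         # explore a random price pair
        a, b = rng.integers(len(prices)), rng.integers(len(prices))
    else:                                              # exploit the best pair so far
        a, b = np.unravel_index(Q.argmax(), Q.shape)
    pa, pb = prices[a], prices[b]
    revenue = pa * purchase_prob(pa, pb, sensitivity=0.6) \
            + pb * purchase_prob(pb, pa, sensitivity=1.2)
    Q[a, b] += alpha * (revenue - Q[a, b])             # incremental average update

a, b = np.unravel_index(Q.argmax(), Q.shape)
print(f"learned prices: low-sensitivity type {prices[a]:.1f}, high-sensitivity type {prices[b]:.1f}")
```

The inequity-aversion term makes a large gap between the two personalized prices costly in lost demand, which is one way the homogenizing effect described in the abstract can emerge.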
    Publication
    Strategic choice of price-setting algorithms
    (2023) Schwalbe, Ulrich; Muijs, Matthias; Grüb, Jens; Buchali, Katrin
    Recent experimental simulations have shown that autonomous pricing algorithms are able to learn collusive behavior and thus charge supra-competitive prices without being explicitly programmed to do so. These simulations assume, however, that both firms employ the identical price-setting algorithm based on Q-learning. Thus, the question arises whether the underlying assumption that both firms employ a Q-learning algorithm can be supported as an equilibrium in a game where firms can choose between different pricing rules. Our simulations show that when both firms use a learning algorithm, the outcome is not an equilibrium when alternative price-setting rules are available. In fact, simpler price-setting rules, such as meeting-competition clauses, yield higher payoffs than Q-learning algorithms.
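As a toy illustration of one such alternative rule, the sketch below pits a tabular Q-learning firm against a rival that follows a meeting-competition clause, i.e. it always matches the learner's last price. The logit demand split, price grid, and learning parameters are invented assumptions, not the paper's simulation setup.

```python
import numpy as np

rng = np.random.default_rng(1)
prices = np.linspace(1.0, 2.0, 11)                    # discrete price grid

def firm1_profit(p1: float, p2: float) -> float:
    """Stylized duopoly with a logit demand split (invented for illustration)."""
    share1 = np.exp(-4.0 * p1) / (np.exp(-4.0 * p1) + np.exp(-4.0 * p2))
    return p1 * share1

# Firm 1 is a tabular Q-learner; its state is firm 2's current price index.
# Firm 2 follows a meeting-competition clause: each period it simply matches
# the price firm 1 charged in the previous period.
Q = np.zeros((len(prices), len(prices)))
alpha, gamma, epsilon = 0.1, 0.9, 0.1
state = rng.integers(len(prices))                     # firm 2's initial price index
for t in range(100_000):
    a = rng.integers(len(prices)) if rng.random() < epsilon else int(Q[state].argmax())
    reward = firm1_profit(prices[a], prices[state])   # profit against the matcher's price
    next_state = a                                    # firm 2 matches firm 1's price next period
    Q[state, a] += alpha * (reward + gamma * Q[next_state].max() - Q[state, a])
    state = next_state

print(f"against the matching rule, firm 1 settles near price {prices[int(Q[state].argmax())]:.2f}")
```

Because any undercut is matched in the next period, undercutting pays for only one round, which is the intuition for why a simple matching rule can earn at least as much as a learning algorithm.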
