Tech Briefs Series: Exploring Bias-Aware Innovations
The STELAR project recognised the importance of making complex technological advancements accessible to a broader audience. To achieve this, we decided to present our findings and innovations through a series of Tech Briefs. This approach allows us to break down intricate concepts into manageable, easy-to-understand summaries, helping professionals and researchers stay informed without feeling overwhelmed by the sheer volume of information.
In our previous Tech Briefs, we explored innovative approaches to data profiling and time series imputation, offering concise insights into the latest techniques and methodologies. In this post, we continue the series with two new briefs on bias-aware data augmentation for food hazards and crop classification.
University of the Bundeswehr Munich: Tackling Bias in Data for Smarter AI Solutions
The two Tech Briefs in this series were written by the University of the Bundeswehr Munich (UniBwM), which works on AI-ready data research in the STELAR project. Its focus includes bias detection and mitigation, enhancing explainability, and exploring synthetic data generation. In collaboration with the pilot partners Agroknow, Vista and ABACO, UniBwM aims to deepen the understanding of data and label scarcity, as well as the sources of bias in these fields.
These two Tech Briefs highlight UniBwM's work on addressing these challenges and showcase its novel approaches to bias-aware data augmentation and crop classification.
Tech Brief #3: Bias-aware Data Augmentation for Mitigating Bias in Food-Hazard Identification
This Tech Brief, authored by Vivek Kumar, Senior Researcher at the University of the Bundeswehr Munich, examines how data biases impact food hazard identification and introduces a method to mitigate these challenges.
Food hazard identification relies on large datasets, but if these datasets contain biases, such as the over- or under-representation of certain hazards, the resulting AI models may produce skewed or unreliable predictions. This brief introduces a bias-aware data augmentation approach, which strategically generates synthetic data to balance underrepresented patterns. By enriching the dataset in this way, the method improves the fairness and accuracy of food safety assessments.
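To make the core idea of class-balancing augmentation concrete, the minimal Python sketch below oversamples under-represented hazard categories in a toy incident-report dataset. It illustrates the general principle only, not the specific method described in the brief: the `augment_minority_classes` helper, the token-shuffling perturbation and the example categories are assumptions made for demonstration, and a real pipeline would generate synthetic reports with more sophisticated techniques.

```python
import random
from collections import Counter

def augment_minority_classes(texts, labels, target_ratio=1.0, seed=42):
    """Oversample under-represented hazard categories so every class
    reaches roughly `target_ratio` times the size of the largest class.

    Minimal illustration of class-balancing augmentation: the crude
    token-shuffling perturbation is a placeholder for proper synthetic
    data generation (e.g. paraphrasing or generative models).
    """
    rng = random.Random(seed)
    counts = Counter(labels)
    max_count = max(counts.values())

    aug_texts, aug_labels = list(texts), list(labels)
    for label, count in counts.items():
        needed = int(target_ratio * max_count) - count
        if needed <= 0:
            continue
        # Pool of existing reports for this hazard category.
        pool = [t for t, l in zip(texts, labels) if l == label]
        for _ in range(needed):
            # Label-preserving perturbation: shuffle the tokens of a
            # randomly picked report to add surface variety.
            tokens = rng.choice(pool).split()
            rng.shuffle(tokens)
            aug_texts.append(" ".join(tokens))
            aug_labels.append(label)
    return aug_texts, aug_labels

# Toy usage: biological hazards dominate, allergen reports are rare.
texts = ["salmonella found in chicken"] * 8 + ["undeclared allergen in snack"] * 2
labels = ["biological"] * 8 + ["allergen"] * 2
balanced_texts, balanced_labels = augment_minority_classes(texts, labels)
print(Counter(balanced_labels))  # both categories now have 8 examples
```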
The approach was tested on real-world food hazard datasets, demonstrating its potential to enhance AI-driven risk assessments. The findings underscore the importance of integrating bias mitigation techniques to ensure more reliable and equitable food safety monitoring.
Tech Brief #4: Bias-aware Crop Classification
Chethan Krishnamurthy Ramanaik, author of the second brief, explains the challenge of bias in crop classification and introduces a method to overcome it. Crop classification systems, which rely on satellite and sensor data, can be affected by biases such as the underrepresentation of certain crop types or imbalanced data across regions. These biases can lead to inaccurate predictions and unreliable agricultural monitoring.
The brief presents a spatio-temporal bias-aware approach for semantic segmentation, using deep learning models and ensemble strategies to tackle issues like class imbalance, cloud coverage and seasonal changes. The approach has shown promising results, particularly in winter crop classification, with efforts now underway to extend predictions to other seasons.
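As a rough illustration of how class imbalance can be handled in pixel-wise crop classification (a standard technique, not necessarily the ensemble strategy used in the brief), the sketch below weights a cross-entropy segmentation loss by inverse class frequency in PyTorch. The `inverse_frequency_weights` helper and the toy label map are hypothetical examples.

```python
import torch
import torch.nn as nn

def inverse_frequency_weights(label_map, num_classes, eps=1e-6):
    """Compute per-class weights from pixel frequencies in a label map,
    giving rare crop classes a larger contribution to the loss."""
    counts = torch.bincount(label_map.flatten(), minlength=num_classes).float()
    freqs = counts / counts.sum()
    weights = 1.0 / (freqs + eps)
    # Rescale so the weights average to 1 across classes.
    return weights / weights.sum() * num_classes

# Toy example: 4 crop classes in a 64x64 label map dominated by class 0.
num_classes = 4
labels = torch.zeros(64, 64, dtype=torch.long)
labels[:8, :8] = 1    # small patch of a rare winter crop
labels[10:12, :] = 2  # two rows of a second crop
labels[20, :] = 3     # one row of a third crop

weights = inverse_frequency_weights(labels, num_classes)
criterion = nn.CrossEntropyLoss(weight=weights)

# Dummy per-pixel logits from a segmentation model: (batch, classes, H, W).
logits = torch.randn(1, num_classes, 64, 64)
loss = criterion(logits, labels.unsqueeze(0))
print(weights, loss.item())
```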