
Introduction
Artificial intelligence (AI) now supports personalised learning, automated assessment and administrative decision-making in schools and universities. Yet these capabilities raise concerns about fairness, transparency and student privacy. The European Union addressed these concerns through the EU Artificial Intelligence Act, adopted in 2024. The Act classifies AI systems used in education as “high-risk,” bans emotion-inference systems outright and demands stronger governance for all other tools. This paper summarises a recent manuscript that analyses how the Act reshapes ethical expectations for AI in education.
Literature Review
Before the Act, international bodies such as UNESCO and the OECD issued voluntary guidelines that promoted transparency, accountability and bias reduction in educational AI. Researchers warned that facial-expression recognition, automated grading and predictive analytics often reproduced racial, gender and socioeconomic biases embedded in training data. The literature also highlighted the privacy risks of collecting biometric and behavioural signals from students. Emotion recognition drew special scrutiny because cultural differences and modelling inaccuracies produce unreliable results that can harm marginalised groups. These studies created a consensus that stronger, enforceable safeguards were needed to protect learners’ rights.
Subsequent scholarship explored how existing European data-protection law, particularly the General Data Protection Regulation (GDPR), offered only partial coverage. While GDPR grants data-subject rights and requires lawful processing, it does not specify technical robustness, dataset representativeness or continuous post-market monitoring. Authors such as Holmes et al. argued that a separate AI-specific instrument was necessary to target algorithmic opacity and systemic bias, especially in learning analytics. This gap in normative guidance set the stage for the EU AI Act’s mandatory risk-management and documentation provisions.
Methodology
The manuscript employed thematic analysis with deductive coding to compare pre-Act ethical principles with the legally binding requirements introduced by the EU AI Act. Relevant sections of the Act, in particular Articles 14 and 31 together with Recital 81, were mapped against earlier UNESCO and OECD guidelines. In addition, the authors catalogued common emotion-inference techniques (facial recognition, voice analysis, biometrics and text sentiment) to assess which of them fall under the Act’s “unacceptable risk” category. Findings were organised by risk level and by ethical dimension: transparency, accountability, fairness and governance.
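As a rough illustration of this categorisation step (our sketch, not code from the manuscript), the snippet below records each catalogued emotion-inference technique together with the risk category the analysis assigns to it; the identifiers and category labels are illustrative assumptions.

    # Illustrative sketch: catalogued emotion-inference techniques and the
    # risk category the analysis assigns to each under the EU AI Act.
    # Identifiers and labels are assumptions made for illustration only.
    EMOTION_INFERENCE_TECHNIQUES = {
        "facial_expression_recognition": "unacceptable",  # banned in EU educational settings
        "voice_emotion_analysis": "unacceptable",         # banned in EU educational settings
        "biometric_emotion_signals": "unacceptable",      # banned in EU educational settings
        "text_sentiment_analysis": "ambiguous",           # limited-risk unless marketed as emotion detection
    }

    ETHICAL_DIMENSIONS = ("transparency", "accountability", "fairness", "governance")

    def banned_techniques() -> list[str]:
        # Techniques the analysis places in the Act's "unacceptable risk" category.
        return [technique for technique, risk in EMOTION_INFERENCE_TECHNIQUES.items()
                if risk == "unacceptable"]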
Results
The comparison revealed four decisive changes:
- From voluntary to compulsory transparency. Educational AI providers must now disclose data sources, model logic and decision outputs. Institutions face legal penalties for non-compliance.
- Stronger accountability. Schools must implement quality-management systems, perform risk assessments and maintain human oversight for every high-risk AI system.
- Enforceable fairness. The Act obliges developers to use representative datasets and prohibits models that discriminate by race, gender or socioeconomic status.
- Prohibition of emotion detection. Any system designed to infer student emotions from facial, vocal or biometric cues is classified as “unacceptable risk” and is banned in EU educational settings.
Together these measures convert earlier ethical aspirations into binding obligations, backed by fines and market withdrawal.
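To make these obligations concrete, here is a minimal illustrative checklist (our sketch, not part of the manuscript) for a hypothetical high-risk educational AI system; the field names are assumptions chosen for readability, not terms defined in the Act.

    # Illustrative compliance sketch reflecting the four changes listed above.
    # Hypothetical field names; not legal guidance.
    from dataclasses import dataclass

    @dataclass
    class HighRiskSystemChecklist:
        data_sources_disclosed: bool        # compulsory transparency
        model_logic_documented: bool        # compulsory transparency
        quality_management_in_place: bool   # stronger accountability
        risk_assessment_performed: bool     # stronger accountability
        human_oversight_assigned: bool      # stronger accountability
        dataset_representative: bool        # enforceable fairness
        uses_emotion_inference: bool        # prohibited outright

        def compliant(self) -> bool:
            # Emotion inference is "unacceptable risk": banned regardless of other safeguards.
            if self.uses_emotion_inference:
                return False
            return all([
                self.data_sources_disclosed,
                self.model_logic_documented,
                self.quality_management_in_place,
                self.risk_assessment_performed,
                self.human_oversight_assigned,
                self.dataset_representative,
            ])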
Discussion
The findings indicate that the EU AI Act establishes education as a special-protection domain, similar to health care or critical infrastructure. Mandatory transparency and post-market monitoring reduce “black-box” decision-making and give students avenues to contest harmful outcomes. However, implementation challenges remain. Smaller institutions may lack resources to perform continuous risk audits, and suppliers outside Europe must still comply if their systems reach EU learners. Ambiguity also persists around text-based sentiment analysis: because it operates on voluntary student input, it may be treated as limited risk, yet if it is marketed as emotion detection it falls under the ban. Developers therefore need clear documentation and strict purpose limitation to avoid regulatory breaches. Finally, the prohibition on emotion-inference technologies may slow some forms of adaptive learning, but it encourages innovation toward privacy-preserving analytics that respect fundamental rights.
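As a rough sketch of this purpose-limitation reasoning (ours, not the authors'; the function name and outcome labels are illustrative assumptions), the classification of a text-sentiment feature could be expressed as follows.

    # Illustrative sketch of the purpose-limitation reasoning discussed above.
    # Outcome labels are assumptions for illustration, not legal advice.
    def classify_text_sentiment_feature(marketed_as_emotion_detection: bool,
                                        input_is_voluntary_text: bool) -> str:
        if marketed_as_emotion_detection:
            return "unacceptable"                   # falls under the ban on emotion inference
        if input_is_voluntary_text:
            return "limited"                        # transparency duties, but not banned
        return "needs_case_by_case_assessment"      # the ambiguity the discussion highlights

    print(classify_text_sentiment_feature(False, True))  # -> "limited"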
Conclusion
The EU AI Act transforms ethical guidelines into a binding framework that prioritises student rights in AI-enabled education. By outlawing emotion detection and imposing strict requirements for transparency, accountability and fairness, the Act signals a shift from “trust me” to “show me and prove it.” Future empirical research should track how institutions implement these rules, whether they reduce bias in practice and how they influence AI innovation worldwide. Policymakers elsewhere can draw on these lessons to craft regulations that balance educational benefit with ethical responsibility.
Full manuscript:
M. Saarela, S. Gunaserka and A. Karimov (2025): The EU AI Act: Implications for Ethical AI in Education. In: Local Solutions for Global Challenges: 20th International Conference on Design Science Research in Information Systems and Technology (DESRIST 2025), Lecture Notes in Computer Science, Springer, pp. 36–50. (LINK)






