Alongside, and fuelled by, rapid advances in artificial intelligence, recent years have witnessed the rise of technologies that appear to cross what was once considered the final frontier: the datafication of the human mind. Emotion technology and neurotechnology, collectively referred to as Mind Datafying Technologies (MDTs) in my book A Datafied Mind: Untangling EU Regulation of Emotion Technology and Neurotechnology (Cambridge University Press, 2025), are developing at remarkable speed.
The emergence of MDTs has introduced challenges that are unprecedented for both society and legal frameworks, requiring policymakers and scholars to move beyond established modes of thinking and adopt fresh perspectives. This imperative shaped a pivotal decision early in my research: to treat the governance of neurotechnology and emotion technology as interconnected challenges. While traditional approaches have often conceptualised neurotechnology as an isolated phenomenon, A Datafied Mind advances a more holistic perspective, arguing for an integrated, nuanced, and multilayered regulatory framework.
A second deliberate choice in my research was to focus on secondary law. This decision reflects both the procedural and political challenges of adopting new primary legislation and the potential of secondary law to serve as a responsive avenue for regulation. Accordingly, the book critically examines existing regulatory gaps and limitations and explores potential policy pathways to strengthen the framework. It also demonstrates how secondary law can provide agile and adaptable tools to address many of the risks posed by MDTs, especially when these technologies extend beyond the narrowly defined medical field. This approach complements ongoing debates on international regulatory instruments and facilitates an analysis of secondary law’s role within a multilevel governance framework.
Two legal instruments are central in this context: the General Data Protection Regulation (GDPR) and the Artificial Intelligence Act (AI Act). Under the GDPR, MDTs are most commonly associated with the category of biometric data. However, accurately classifying data processed by MDTs under the correct biometric category is highly complex, leaving substantial amounts of data without enhanced protection. This gap is particularly pronounced for data processed by text-based MDTs. Given the widespread deployment of large language models, this regulatory blind spot is increasingly concerning. A Datafied Mind argues for a conceptual shift: away from focusing solely on the technology or the biophysical parameters used to datafy people’s inner state of mind and towards safeguarding the information that truly warrants protection. To this end, the book proposes the introduction of “mind data” as a sui generis special category of personal data under the GDPR.
Although the GDPR establishes foundational protections for personal data, its provisions were not designed to address the specific challenges posed by AI-driven technologies. In response, the EU has adopted the AI Act to regulate such systems directly. The AI Act is the first legislative instrument to specifically address Emotion Recognition Systems (ERS), establishing a multilayered regulatory approach. Viewed in its broader context, the AI Act’s treatment of ERS reflects a nuanced legislative effort to enable innovation and technological development in a nascent field while simultaneously imposing initial regulatory guardrails. Nonetheless, A Datafied Mind argues that the AI Act cannot be regarded as a definitive regulatory framework for MDTs. It should instead be treated as a general baseline, to be supplemented by further legal measures, including sector-specific restrictions or prohibitions.
Following a general analysis of these two regulatory pillars, the book turns to four concrete use cases: mental health and well-being, commercial advertising, political advertising, and employment monitoring. It examines how sector-specific legislation complements the general framework, drawing on both established instruments and more recent ones. The analysis shows that, through strategic adaptation and effective deployment of existing legal instruments, the regulatory framework governing MDTs could be significantly strengthened. In some areas, more stringent substantive rules are urgently required; in others, the principal challenge lies in ensuring effective compliance and enforcement.
Ultimately, A Datafied Mind seeks to inform evolving debates on the governance of emotion technology and neurotechnology, promoting responsible innovation and the development of regulatory frameworks capable of addressing the risks posed by these transformative technologies.