21 January 2025
Advances in neurotechnology (NT) have driven the growing collection and processing of neurodata – data related to the structure and functioning of the human brain – across various societal domains. The development and use of NTs depend heavily on neurodata, both to ensure the functionality of devices such as neurofeedback monitors and brain-computer interfaces, and to enhance their performance by refining the AI algorithms integrated into them.
In our new Research Brief, ‘Neurodata: Navigating GDPR and AI Act Compliance in the Context of Neurotechnology’, Timo Istace highlights the delicate balance needed between leveraging neurodata for progress and protecting individual rights. He notes that ‘the indispensability of neurodata to fuelling progress in the sector needs to be balanced against the risks to individual users. Neurodata is a highly sensitive and personal form of data, akin to genetic data. Its combined features – including its informational richness (extending to cognitive processes), predictive potential, and risk of involuntary disclosure – warrant significant scrutiny to preserve individuals’ privacy, particularly mental privacy.’
Timo further explains, ‘Data protection regulations are crucial in addressing these concerns. The sensitive nature of neurodata raises questions around whether current regulatory frameworks offer adequate protection against incursions on mental privacy and the safeguarding of neurodata. While no supranational regulation specifically addresses neurodata, regional instruments like the EU’s General Data Protection Regulation (2018) provide a framework for assessing protection measures.’
This paper evaluates the effectiveness of the GDPR in mitigating risks associated with the distinctive nature of neurodata, with the goal of safeguarding neuroprivacy and mental privacy in the context of emerging NTs. It analyzes the scope and applicability of the GDPR, examines the challenges of ensuring robust protection during the collection, processing, storage, and transfer of neurodata, and considers how the recent EU AI Act might complement or reinforce GDPR safeguards.