Regulatory Models for Algorithmic Assessment: Robust Delegation or Kicking the Can?

UCL Laws

A UCL Laws lecture recording from 25 April 2024.
Speakers: Prof. Margot Kaminski (University of Colorado Law School), Associate Prof. Michael Veale (UCL Laws) and Assistant Prof. Jennifer Cobbe (University of Cambridge).
Chair: Andrew Strait (Ada Lovelace Institute)
Recent years have seen a surge in regulation targeting algorithmic systems, including online platforms (Online Safety Act [UK], Digital Services Act [EU]), artificial intelligence (AI Act [EU], AI Executive Order [US]), and the application and extension of existing frameworks, such as data protection, to algorithmic challenges (UK and EU GDPR, California Consumer Privacy Act and Draft Automated Decisionmaking Technology Regulations [US]). Much of the time, these instruments require regulated actors to undertake or outsource some form of assessment, such as a risk assessment, impact assessment or conformity assessment, to ensure that the systems being deployed have the desired characteristics.
At first glance, all these assessments look like the same regulatory mode - but are they? What are policymakers and regulators actually doing when they outsource the analysis of such systems to regulated actors or audit ecosystems, and under what conditions might this produce good regulatory results? Is the AI Act's conformity assessment really the same kind of beast as the Digital Services Act's or Online Safety Act's risk assessments, or the GDPR's data protection impact assessment? Or is this just kicking value-laden questions, such as fairness, transparency, representativeness and speech norms, down the road to other actors because legislators don't want to decide them?
In this discussion, three scholars of these systems will compare and contrast different regulatory regimes concerning AI, with a focus on how actors within them can understand the systems around them. Does outsourcing the analysis of how AI systems work make sense? Is the task given to actors with the position and analytic capacity to carry it out, or might it lead to regulatory arbitrage or even regulatory failure?
