Regulatory Models for Algorithmic Assessment: Robust Delegation or Kicking The Can?

UCL Laws

A UCL Laws lecture recording from 25 April 2024.
Speakers: Prof. Margot Kaminski (University of Colorado Law School), Associate Prof. Michael Veale (UCL Laws) and Assistant Prof. Jennifer Cobbe (University of Cambridge).
Chair: Andrew Strait (Ada Lovelace Institute)
Recent years have seen a surge in regulation targeting algorithmic systems, including online platforms (Online Safety Act [UK], Digital Services Act [EU]), artificial intelligence (AI Act [EU], AI Executive Order [US]), and the application and extension of existing frameworks, such as data protection, to algorithmic challenges (UK and EU GDPR, California Consumer Privacy Act and Draft Automated Decisionmaking Technology Regulations [US]). Much of the time, these instruments require regulated actors to undertake or outsource some form of assessment, such as a risk assessment, impact assessment or conformity assessment, to ensure that the systems being deployed have desired characteristics.
At first glance, all these assessments look like the same regulatory mode, but are they? What are policymakers and regulators actually doing when they outsource the analysis of such systems to regulated actors or audit ecosystems, and under what conditions might this produce good regulatory results? Is the AI Act's conformity assessment really the same kind of beast as the Digital Services Act's or Online Safety Act's risk assessments, or the GDPR's data protection impact assessment? Or is this just kicking the can on value-laden issues, such as fairness, transparency, representativeness or speech norms, down to other actors because legislators do not want to decide them?
In this discussion, three scholars of these regimes will compare and contrast different regulatory approaches to AI, with a focus on how actors within them can understand the systems around them. Does outsourcing the analysis of how AI systems work make sense? Is that analysis given to actors with the position and analytic capacity to carry it out, or might it lead to regulatory arbitrage or even regulatory failure?
