Overview
ANNOUNCEMENT: This seminar series was postponed in solidarity with the #UCUstrikes. The remaining two seminars were rescheduled for Trinity Term 2023.
These hybrid seminars were free, and all were welcome.
---
Hybrid: 64 Banbury Road, Oxford & Online (Zoom)
---
Professor Shreya Atrey, Bonavero Institute of Human Rights, University of Oxford
The seminar presents a general account of xenophobic discrimination in international law. It shows that the dominant grounds-based approach to addressing xenophobic discrimination as (i) racial discrimination and (ii) discrimination based on nationality or citizenship fails to capture what is wrong about xenophobic discrimination. Likewise, the suggestion to address xenophobic discrimination via a dedicated ground such as foreignness may also fail, given the unique character of foreignness as a category that is itself constructed by other grounds. Instead, xenophobic discrimination can be understood as a sui generis category of discrimination which is not necessarily based on a particular ground, but which leads to the particular harm of making people appear as foreigners or outsiders to the political community of a nation-state. The articles discussed thus propose a shift away from a grounds-based approach and towards a harm-based approach to discrimination in international law.
—
Professor Laia Becares, King's College London
Ethnic inequalities in health are entrenched and persistent in the UK. This seminar explores the role of racism, experienced over the life course, in structuring ethnic inequalities in health in later life. Anchored around key tenets of life course theory, this presentation will discuss findings from recent and upcoming publications that centre racism as the root cause of ethnic inequalities, exploring life course mechanisms that pattern stark ethnic inequities in later life.
—
Professor Brent Mittelstadt, Oxford Internet Institute, University of Oxford
In recent years fairness in machine learning (ML) has emerged as a highly active area of research and development. Most approaches define fairness in simple terms, where fairness means reducing gaps in performance or outcomes between demographic groups while preserving as much of the accuracy of the original system as possible. This oversimplification of equality through fairness measures is troubling. Many current fairness measures suffer from both fairness and performance degradation, or "levelling down," where fairness is achieved by making every group worse off or by bringing better-performing groups down to the level of the worst off. When fairness can only be achieved by making everyone worse off in material or relational terms, through injuries of stigma, loss of solidarity, unequal concern, and missed opportunities for substantive equality, something would appear to have gone wrong in translating the vague concept of 'fairness' into practice. In this talk, I will examine the causes and prevalence of levelling down across fairML and explore possible justifications and criticisms based on philosophical and legal theories of equality, distributive justice, and equality law jurisprudence. FairML does not currently engage in the type of measurement, reporting, or analysis necessary to justify levelling down in practice. I will propose a first step towards substantive equality in fairML: "levelling up" systems by design through the enforcement of minimum acceptable harm thresholds, or "minimum rate constraints," as fairness constraints. I will likewise propose an alternative harms-based framework to counter the oversimplified egalitarian framing currently dominant in the field, pushing future discussion towards opportunities for substantive equality and away from strict egalitarianism by default.
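As a rough illustration of the contrast the talk draws, the sketch below shows (in Python) how a "minimum rate constraint" check differs from gap-closing that levels down. This is not code from the talk: the threshold value, group labels, and function names are hypothetical, and it is only a minimal sketch of the two ideas.

```python
# Minimal sketch: "levelling down" vs a minimum-rate ("levelling up") check.
# All names and numbers here are illustrative assumptions, not from the talk.
from typing import Dict


def groups_below_minimum(group_accuracy: Dict[str, float],
                         min_acceptable: float = 0.85) -> Dict[str, float]:
    """Return groups whose performance falls below the minimum acceptable rate."""
    return {g: acc for g, acc in group_accuracy.items() if acc < min_acceptable}


def is_levelling_down(before: Dict[str, float], after: Dict[str, float]) -> bool:
    """A 'fair' adjustment levels down if no group improves and at least one gets worse."""
    no_group_improves = all(after[g] <= before[g] for g in before)
    some_group_worse = any(after[g] < before[g] for g in before)
    return no_group_improves and some_group_worse


if __name__ == "__main__":
    before = {"group_a": 0.92, "group_b": 0.80}
    # Closing the gap by dragging the better-performing group down:
    after = {"group_a": 0.80, "group_b": 0.80}

    print("Levelling down?", is_levelling_down(before, after))      # True
    print("Below minimum rate:", groups_below_minimum(after, 0.85))  # both groups flagged
```

The point of the sketch is that a pure gap-minimising notion of fairness accepts the second outcome (equal but worse for everyone), whereas a minimum rate constraint rejects it and pushes the system designer to raise the worst-off group's performance instead.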