The objective of the Open Knowledge Justice Programme is to ensure public impact algorithms cause no harm.
About public impact algorithms
Public impact algorithms have four key features:
- They involve automated decision-making
- They use AI and algorithms
- They are deployed by governments and corporate entities
- They have the potential to cause serious negative impacts on individuals.
What kinds of serious negative impacts concern us?
Serious negative impacts may arise because the decision or data concerns fundamental rights, such as a person's liberty, safety or welfare entitlements. Public impact algorithms have been shown to embed mass surveillance, biased processes and racist outcomes into public policy, public service delivery and commercial products alike.
How we work
To ensure public impact algorithms cause no harm, the Open Knowledge Justice Programme aims to make their deployment accountable.
We do this in four ways:
We equip legal professionals, including campaigners and activists, with the know-how and skills they need to challenge the effects of public impact algorithms in their practice.
In highly accessible sessions, lasting from 90 minutes to two days, participants gain a confident working knowledge of how these technologies operate and of how to use existing law and ethics to hold deployers to account.
We work with the media to educate specialists and the general public about public impact algorithms and their potential impacts.
We especially target the legal profession (working inside and outside organisations that deploy public impact algorithms) to influence decision makers, to ensure that when public impact algorithms are deployed, they do no harm. Our most recent work focuses on pushing for open access to the mechanics of these technologies, so they can be properly audited.
Algorithms and AI are a new area of law, and the case law is thin. We aim to change this by bringing cases to court to achieve useful precedent for others. In this way we use legal action as a strategy for change. This work is supported by the Digital Freedom Fund.
Find out more
You can follow @OKFJP on Twitter to get updates from the team.
In the press
- Economist podcast: Tracking and tracing Covid-19 - what are the promises, limitations and risks?
- Guardian article: Exams that use facial recognition may be 'fair' - but they're also intrusive
- Legal Cheek: BPTC student raises bias and privacy concerns over exam proctoring
- Legal Futures: Union offers to back students who want to sue BSB
- The Privacy Collective: The future of tech should be fair, transparent, accountable, says Meg Foulkes from the Open Knowledge Foundation
Meg Foulkes is the director of the Open Knowledge Justice Programme and has been working at Open Knowledge Foundation since 2012. She previously worked as a Legal Adviser at the Refugee Legal Centre, representing detained asylum seekers at Oakington IRC. She is currently juggling her day job at OKF with studying to become a barrister, qualifying in July 2021.
Cédric Lombion is Open Knowledge Foundation's Data & Innovation lead and has been supporting the organisation's data literacy efforts across the world since 2015. A political science and public communications graduate, and one of the foremost experts on data and technology literacy, Cédric is regularly invited by international organisations such as the OECD, Transparency International, the UNDP and the World Bank to speak on the topic or to facilitate trainings.
Who we work with
We are proud to have worked with 5 Essex Court, Bingham Centre for Rule of Law, Deighton Pierce Glynn, Digital Freedom Fund, Garden Court, The Privacy Collective, Public Law Project, Scottish Police Authority & Police Scotland and the Ada Lovelace Institute.