The Open Knowledge Justice Programme is working to ensure Public Impact Algorithms cause no harm.
We do this by increasing the accountability of Public Impact Algorithms through training, advocacy and strategic litigation.
For the most up-to-date information on The Justice Programme:
- visit our website www.thejusticeprogramme.org
- follow us on Twitter and
- sign up for our mailing list.
About Public Impact Algorithms
Public impact algorithms have four key features:
- they involve automated decision-making
- using AI and algorithms
- by governments and corporate entities, and
- they have the potential to cause serious negative impacts on individuals.
What kinds of serious negative impacts interest us?
Serious negative impacts may arise because the decision or data concerns fundamental rights, such as a person’s liberty, safety or welfare entitlements. Public impact algorithms have been shown to embed mass surveillance, biased processes and racist outcomes into public policy, public service delivery and commercial products.
How we work
To ensure public impact algorithms cause no harm, the Open Knowledge Justice Programme aims to make their deployment accountable.
We do this through training, advocacy and strategic litigation.
We equip lawyers, campaigners and activists with the knowledge and skills they need to challenge Public Impact Algorithms in their work.
Learn how to get trained by us here.
We believe that people need to be aware of the risks of harm that can result from using AI and algorithms to make important decisions about our lives.
We advocate in the media and in Parliament for this issue to be properly examined, and for real positive change to be made.
Learn about our campaign to Open Up DPIAs here.
We use strategic litigation as part of our toolkit to ensure Public Impact Algorithms do no harm.
Working with specialist lawyers, we challenge the misuse of algorithmic technologies by taking legal action, aiming to set legal precedents that other change-makers can build on to push for justice in this field.
Find out how we are using the law to challenge unfair algorithms, in work funded by the Digital Freedom Fund.