Ethics & Algorithms Toolkit
A risk management framework for governments (and other people too!)
The question isn’t whether you should, but when will you start?
Government leaders and staff who leverage algorithms are facing increasing pressure from the public, the media, and academic institutions to be more transparent and accountable about their use. Every day, stories come out describing the unintended or undesirable consequences of algorithms. Governments have not had the tools they need to understand and manage this new class of risk.
GovEx, the City and County of San Francisco, Harvard DataSmart, and Data Community DC have collaborated on a practical toolkit that helps cities understand the implications of using an algorithm, clearly articulate the potential risks, and identify ways to mitigate them.
We developed this because:
- We saw a gap. There are many calls to arms and lots of policy papers, one of which was a DataSF research paper, but nothing practitioner-facing with a repeatable, manageable process.
- We wanted an approach governments are already familiar with: risk management. By identifying and quantifying levels of risk, we can recommend specific mitigations.
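The risk-management framing above can be sketched in a few lines of code. This is a minimal illustration only: the 1–3 scoring scales, thresholds, and mitigation mappings below are hypothetical examples, not the toolkit's actual assessment criteria.

```python
# Hypothetical sketch of a risk-matrix approach: score likelihood and
# impact, combine them into a risk level, and map that level to a
# suggested mitigation posture. Scales and mitigations are illustrative.

def risk_level(likelihood: int, impact: int) -> str:
    """Combine likelihood (1-3) and impact (1-3) into a risk level."""
    score = likelihood * impact
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

# Example mapping from risk level to a mitigation posture (hypothetical).
MITIGATIONS = {
    "high": "pause deployment; require external review and ongoing audits",
    "medium": "document assumptions; add human review of algorithm outputs",
    "low": "monitor periodically and revisit after major data changes",
}

def assess(name: str, likelihood: int, impact: int) -> str:
    """Return a one-line risk assessment with a suggested mitigation."""
    level = risk_level(likelihood, impact)
    return f"{name}: {level} risk -> {MITIGATIONS[level]}"

print(assess("benefits eligibility model", likelihood=3, impact=3))
```

The value of this kind of structure is less the arithmetic than the conversation it forces: a team must agree on likelihood and impact scores before a mitigation is chosen.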
Our goals for the toolkit are to:
- Elicit conversation.
- Encourage risk evaluation as a team.
- Catalyze proactive mitigation strategy planning.
We also hold a few underlying beliefs:
- Algorithm use in government is inevitable.
- Data collection is typically a separate effort with different intentions from the analysis and use of it.
- All data has bias.
- All algorithms have bias.
- All people have bias. (Thanks #D4GX!)
Advisory Board Member, Data Community DC
Former Chief Data Officer, City and County of San Francisco
Director of Data Practices, Center for Government Excellence @ Johns Hopkins University
To use this toolkit, we assume you:
- Have some knowledge of data science concepts or experience with algorithms
- Largely understand your data
Overview and Introduction
Part 1: Assess Algorithm Risk
Worksheet for Part 1
Part 2: Manage Algorithm Risk
Got feedback for us?
We are grateful for the media stories covering the toolkit. If you’d like to write an article or otherwise help spread the word, please contact the Center for Government Excellence.
- The promise and peril of algorithms in local government, Bloomberg Cities, 20 November 2018
- Algorithms Are Fraught with Bias. Is There a Fix?, Brink News, 19 November 2018
- Data Points Podcast Episode 57: Ethics and Algorithms, GovEx Datapoints Podcast, 24 September 2018
- Applying Ethical Principles to Technologies… Finally!, GovEx Blog, 20 September 2018
- 7 things we (and 600 visitors) learned from CityLab Detroit, Detroit Free Press, 30 October 2018
- Toolkit Targets Bias in Government Algorithms, Techwire, 25 September 2018
- Anti-Bias Toolkit Offers Government a Closer Look at Automated Decision-Making, GovTech, 24 September 2018
- Workshops Tackled Big, Real-World Problems at Data for Good Exchange 2018, Tech @ Bloomberg, 21 September 2018
- New toolkit helps governments vet ‘black box’ algorithms for bias, StateScoop, 20 September 2018
- The toolkit that protects citizens against bias, Smart Cities World, 19 September 2018
- Making Algorithms Less Biased, NextGov, 18 September 2018
- Algorithm toolkit aims to help cities reduce bias from automation, Smart Cities Dive, 18 September 2018
- Is that algorithm safe to use?, GCN, 17 September 2018
- Making Algorithms Less Biased, RouteFifty, 17 September 2018
The contents of this site and the Ethics & Algorithms Toolkit are licensed under a Creative Commons Attribution 4.0 International License.
The site code is licensed under an MIT license.