Ethics & Algorithms Toolkit
A risk management framework for governments (and other people too!)
The question isn’t whether you should start, but when.
Government leaders and staff who leverage algorithms are facing increasing pressure from the public, the media, and academic institutions to be more transparent and accountable about their use. Every day, stories come out describing the unintended or undesirable consequences of algorithms. Governments have not had the tools they need to understand and manage this new class of risk.
GovEx, the City and County of San Francisco, Harvard DataSmart, and Data Community DC have collaborated on a practical toolkit that helps cities understand the implications of using an algorithm, clearly articulate the potential risks, and identify ways to mitigate them.
We developed this because:
- We saw a gap. There are many calls to arms and lots of policy papers, one of which was a DataSF research paper, but nothing practitioner-facing with a repeatable, manageable process.
- We wanted an approach governments are already familiar with: risk management. By identifying and quantifying levels of risk, we can recommend specific mitigations.
Our goals for the toolkit are to:
- Elicit conversation.
- Encourage risk evaluation as a team.
- Catalyze proactive mitigation strategy planning.
The toolkit also reflects a few core beliefs:
- Algorithm use in government is inevitable.
- Data collection is typically a separate effort with different intentions from the analysis and use of it.
- All data has bias.
- All algorithms have bias.
- All people have bias. (Thanks #D4GX!)
Advisory Board Member, Data Community DC
Former Chief Data Officer, City and County of San Francisco
Director of Data Practices, Center for Government Excellence @ Johns Hopkins University
To use this toolkit, we assume you:
- Have some knowledge of data science concepts or experience with algorithms
- Largely understand your data
The toolkit comprises five documents:
Overview and Introduction
The overview section of the toolkit provides level-setting background information that will be useful as you work through the subsequent sections. We have outlined a few real-life scenarios where the toolkit might be applied, provided definitions, and more. For example, the overview helps you understand the various types of machine learning, such as supervised learning, unsupervised learning, and so on.
Part 1: Assess Algorithm Risk
In Part 1 of the toolkit, six major steps (or questions) help you and your stakeholders characterize an algorithm. Many of these steps have multiple components, and each includes clear instructions on how to summarize those components in order to complete the step.
Since this document can be difficult to navigate, we have developed a worksheet for Part 1, designed to help you track your responses to the individual steps and how they combine into overall risk values. Although answering a series of questions sounds simple, you will almost certainly need additional people to help, whether they are stakeholders, data analysts, information technology professionals, or representatives from a vendor you are working with. Don’t expect to complete this part of the toolkit in just a few hours; some of the steps will evoke considerable discussion.
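To make the idea of rolling step-by-step responses up into an overall risk value concrete, here is a minimal sketch in Python. The step names, the low/medium/high scale, and the "worst rating wins" convention are illustrative assumptions only; the Part 1 worksheet defines its own steps and combination rules.

```python
# Hypothetical worksheet tally: each step gets a low/medium/high rating,
# and the overall risk is the most severe rating across all steps.
# These names and the max() convention are assumptions for illustration,
# not the toolkit's actual scoring rules.

RISK_LEVELS = {"low": 1, "medium": 2, "high": 3}

def overall_risk(step_ratings):
    """Roll individual step ratings up to the most severe level."""
    worst = max(RISK_LEVELS[rating] for rating in step_ratings.values())
    return next(name for name, score in RISK_LEVELS.items() if score == worst)

# Example ratings for three hypothetical assessment steps.
ratings = {
    "impact": "high",
    "appropriate_data": "medium",
    "accountability": "low",
}
print(overall_risk(ratings))  # prints "high"
```

A "worst rating wins" rollup is deliberately conservative: a single high-risk step keeps the overall assessment high, which is usually the safer default for public-sector decisions.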
Part 2: Manage Algorithm Risk
Knowing how concerned you should be about various aspects of your algorithm is really only half the battle. There may be a few cases where the risks are too severe to proceed, but there are often ways to mitigate them. Using Part 2 of the toolkit, you identify specific techniques to address the considerations you identified in Part 1.
The results of Part 2 will be highly customized and specific to the factors you evaluated in Part 1. Some of the recommendations can introduce significant burdens that are more appropriately addressed within large-scale programs, such as those that support the social safety net. It is not unusual to need executive and political support to be successful.
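The pairing of identified risk factors with candidate mitigations can be sketched as a simple lookup. Everything below is an assumption for illustration: the factor names, the mitigation text, and the fallback are invented here, not drawn from the toolkit's actual Part 2 content.

```python
# Purely illustrative mapping from hypothetical risk factors (as they
# might be identified in Part 1) to candidate mitigations (in the spirit
# of Part 2). The keys and mitigation strings are assumptions, not the
# toolkit's actual recommendations.

MITIGATIONS = {
    "biased_training_data": [
        "audit the data for representativeness",
        "document known gaps and the original collection context",
    ],
    "opaque_vendor_model": [
        "require explainability or documentation in the contract",
        "plan an independent review of the model's outputs",
    ],
}

def plan_for(factors):
    """Collect candidate mitigations for each identified factor,
    falling back to expert review for anything unrecognized."""
    return {
        factor: MITIGATIONS.get(factor, ["escalate for expert review"])
        for factor in factors
    }

plan = plan_for(["biased_training_data", "opaque_vendor_model"])
```

In practice the mapping is a conversation rather than a lookup table, but recording decisions in a structure like this keeps the mitigation plan reviewable alongside the Part 1 worksheet.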
Appendices
Although the appendices aren’t specifically required reading in order to use the toolkit, they provide plenty of additional context and depth. The first appendix contains a list of in-depth questions to help you understand your data in more detail. The second provides additional background on bias and how easily it can arise.
Got feedback for us?
We are grateful for the media stories covering the toolkit. If you’d like to write an article or otherwise help spread the word, please contact the Center for Government Excellence.
- The promise and peril of algorithms in local government, Bloomberg Cities, 20 November 2018
- Algorithms Are Fraught with Bias. Is There a Fix?, Brink News, 19 November 2018
- 7 things we (and 600 visitors) learned from CityLab Detroit, Detroit Free Press, 30 October 2018
- Toolkit Targets Bias in Government Algorithms, Techwire, 25 September 2018
- Data Points Podcast Episode 57: Ethics and Algorithms, GovEx Datapoints Podcast, 24 September 2018
- Anti-Bias Toolkit Offers Government a Closer Look at Automated Decision-Making, GovTech, 24 September 2018
- Workshops Tackled Big, Real-World Problems at Data for Good Exchange 2018, Tech @ Bloomberg, 21 September 2018
- Applying Ethical Principles to Technologies… Finally!, GovEx Blog, 20 September 2018
- New toolkit helps governments vet ‘black box’ algorithms for bias, StateScoop, 20 September 2018
- The toolkit that protects citizens against bias, Smart Cities World, 19 September 2018
- Making Algorithms Less Biased, NextGov, 18 September 2018
- Algorithm toolkit aims to help cities reduce bias from automation, Smart Cities Dive, 18 September 2018
- Is that algorithm safe to use?, GCN, 17 September 2018
- Making Algorithms Less Biased, RouteFifty, 17 September 2018
The contents of this site and the Ethics & Algorithms Toolkit are licensed under a Creative Commons Attribution 4.0 International License.
The site code is licensed under an MIT license.