Digital Identity: No Empowerment without Privacy
The digital revolution has permeated almost every aspect of modern life, from commerce to connection, from financial transactions to friendship, and even language :-). Underpinning all of these is identity: knowing (or not knowing) the underlying person or entity with whom one is transacting or simply conversing.
Centuries ago, communities established one’s identity in local settings among small groups of people, all known to each other. Now, against the backdrop of a modern, global economy and digital networks of thousands, if not millions, of people, that task is much more complicated. As a result, technology is not only increasing the demand for identification—it’s changing the way in which we present ourselves, how others identify us, how we participate in global markets, and how we access government services.
From opening a bank account to getting a mobile phone, from renting an apartment to voting in elections, from accessing health services to receiving a state cash transfer benefit, from enrolling in school to driving for companies like Uber, one must provide some level of identification to participate. Yet the World Bank estimates that 1.5 billion people lack verifiable identity and thus risk being excluded.
Goal 16.9 of the UN Sustainable Development Goals addresses this gap and calls on governments and other actors to provide identity for all. Countries and companies are making concerted efforts to this end. In recent years, Peru, India, and Pakistan, despite scattered populations and challenging social conditions, have issued IDs to nearly all of their most remote and marginalized residents. Companies like Lenddo and Veridu are employing social media data to help identify and verify individuals, while others are piloting the use of blockchains and distributed ledgers for the same purpose. Gov.UK Verify takes a hybrid approach, certifying a roster of private-sector partners who can verify an individual and allow her to access public services (usually on the basis of an underlying government identity document that has already been issued).
We are excited about the tremendous promise that digital identity (eID) holds for empowering individuals in myriad ways. However, we believe one important aspect of this equation risks being overlooked: privacy. Privacy turns on user awareness, choice, and control with regard to personal information. In other words, how do we ensure that legal systems, corporate interests, and the technological design of state and non-state digital identity systems protect an individual’s right to privacy? How do we ensure they make users aware of what personal information is being shared and why? How do we ensure they facilitate choice, consent, and control over that information and the decisions that flow from it? We would argue that these elements are essential to building consumer trust in identity systems, what we are calling the “trust architecture.” Otherwise, we risk disempowering the very individuals these systems are meant to serve.
To examine these privacy issues more fully, we commissioned Consult Hyperion (CHYP) to investigate. They delved into privacy from the perspective of one’s digital identity, exploring issues such as privacy of personal information (how do countries regulate personal data? how do states view corporate data collection versus government surveillance?), digital identity itself (what are an eID’s component parts, and how protected or “private” is it in practice?), and technology design (what are common eID technical architectures, and which are privacy-enhancing?). They evaluated digital identity models in state and non-state applications, from Aadhaar to Facebook to identity applications built on Bitcoin. Finally, they suggest rules and design decisions that governments and companies can implement to protect individuals.
We encourage participants in the identity ecosystem to read Consult Hyperion’s paper. Although it is an independent investigation and does not necessarily represent the views of Omidyar Network, we find much in it to recommend. We believe it offers an important perspective from which many, including ourselves, can benefit, helping to ensure that we create technological systems that protect instead of monitor, and empower instead of control. Several findings resonated with us:
- CHYP points out that the EU Data Protection Directive (DPD), the source of data laws enshrining both the right to privacy and the right to protection of personal data, has led other countries to “follow the leader”: 33 non-European countries have legislated data protection frameworks that substantially incorporate the DPD’s higher protections rather than other, laxer guidelines (pg. 19). This is not entirely altruistic; it is driven in part by restrictions preventing the transfer of data outside the EU to countries without “adequate” protection (important to countries like the Philippines that are data-processing outsourcing destinations). So while legal regimes differ from country to country, the EU framework is likely to spread.
- Concurrent with this rise in data protection are two seemingly contradictory realities. First, despite the increase in privacy and personal data protection laws, most countries are not adequately equipped to enforce them or to offer remedy for privacy breaches (pg. 20). Second, while private-sector data collection is being reined in, state power to conduct surveillance is expanding: since the Snowden revelations, 10 countries have begun or completed processes to modernize their laws to expand surveillance (pg. 21-24). This confusing legal reality suggests that the key work for those designing empowering ID systems lies not only in the legal framework but also at the level of technical design, building in the user permissions and controls needed so that individuals are aware of requests to use their data and can give informed consent. In this respect, we find Aadhaar’s technical architecture in India encouraging, as it allows for high levels of user control and consent.
- Finally, we found useful the articulation of seven digital identity technical architectures, the ways each enhances or impinges upon a user’s trust and privacy, and the options for mitigating privacy risks. Consult Hyperion examines these systems in depth: Facebook, Google, and other “monolithic internet identity providers”; state-issued eID cards; and “no IDP” models such as Bitcoin or Blockchain ID applications, which are effectively decentralized, unregistered platforms. All models raise concerns, those of the monolithic internet identity providers no less than the state-issued ID models. The most important question is therefore not just which model a company or government employs, but how it does so. To that end, Consult Hyperion provides a list of privacy principles to guide system development.
There is much to be done to fulfill the empowerment promise of universal digital ID. One critical step is ensuring that it is accompanied by respect for privacy, user choice, and control. We think Consult Hyperion’s paper is valuable reading for all who are designing and implementing state-issued ID systems.