By Martin Tisné

Upholding our data & digital rights

The struggle to uphold our data and digital rights has been both turbulent and exciting over the past few years. We have seen data privacy come of age as a policy issue, popular sentiment towards the tech platforms shift, specific harms such as disinformation, hate speech, and manipulation become better understood even as they grow more prevalent, and crackdowns on civic space by authoritarian regimes threaten freedom both on and offline. In response, Luminate has continued to evolve its work in this sector to support the courageous activists who lead it, while staying true to our commitments to openness, accountability, and participation.

Data and technology are not neutral impressions of reality; they are reflections of power: the power of those who collect the data and those who design the technology. This power shapes the substance of the news we consume and the way it is parceled out, the information to which we are privy or from which we are excluded, the norms that have been set around data collection, usage, and privacy, the terms of public discourse, the basic health of democracies, and the structure of competition and markets. In a world where data and technology are so pervasive, it is incumbent upon us to ensure that the public has a voice in how they are used and how they affect people's lives and societies.

In 2014, following the Snowden revelations, we expanded our work from a focus on open data to a dual focus on open data and privacy. Two years later, in the run-up to the Brexit referendum and the US presidential election, we broadened our strategy to include public scrutiny of automated decision-making systems. Our current strategy takes account of these changes by focusing explicitly on data rights and digital rights. We work to uphold data rights by focusing on three complementary areas: open data, privacy, and artificial intelligence. Our digital rights work focuses on freedom of expression issues such as hate speech, online surveillance of journalists, and net neutrality. Our overall goal is to support people and communities to exercise their human rights in the design and use of data and technology.

What have we learned?


We underestimated the collective harm that data can have on societies. For example, the societal impact and harm caused by the Cambridge Analytica breach goes beyond the sum total of individual privacies breached. The framing of our work was historically overweighted towards individual rights; we now see a pressing need to explore the collective and community impacts of data collection and use (e.g. by harvesting my data, a company or government makes decisions that affect many others beyond me).

What’s next?

There are three working hypotheses that we will focus on developing and building out over the next few years.

First: if people have a voice in how data and technology are created, shared, and used, the design and use of data and digital technologies will be more outcome-driven and affect people’s lives in more positive ways. What would success look like? Civil society, the media, and policy makers have a better understanding of the current and potential harms of data and technology. The norms and standards that define the use of data in our societies respect and strengthen human rights and are designed for and by the people. New and diverse voices enter the debate on data rights. What are we worried about? That people might exercise their voice, but institutions won’t respond, and those once-energised people will become less likely to engage in the future. Or simply that people do not exercise their voice because they do not believe that institutions will respond.

A great example of this in practice is the work of our grantee Upturn with a coalition including the American Civil Liberties Union, the National Fair Housing Alliance, and other organisations, which led to Facebook committing to no longer allow landlords, employers, creditors, and similar advertisers to target or intentionally exclude people based on gender, zip code, and other sensitive categories. Whilst challenges remain with regard to discrimination in digital marketing, this decision was a major win for the groups that have coordinated around the issue over the past few years, from drawing attention to the harms resulting from Facebook’s system to building a coalition of diverse partners to challenge the platform’s practices through targeted litigation and advocacy. It is also a first step in addressing explicit discrimination by advertisers on the platform.

Second: if the power held by those who control data and technology is better understood, acknowledged, and held to account to serve the public good, policy makers will be more willing and able to make meaningful regulatory change. What would success look like? Growing demand for credible policy alternatives; governments adopting progressive policy proposals drafted by our grantees; and the establishment of lasting legal precedent through litigation that we support. What are we worried about? That policies enacted in one country or region are not followed elsewhere; that companies ignore regulation, pay the fines, and go on with business as usual; and that the regulatory threat to the companies is so large that they deploy resources which dwarf ours a thousand to one, crowding out any chance for us to reach policy makers and politicians.

The impact of sustained strategic litigation is well demonstrated by the recent historic win of our grantee Privacy International (PI). After a five-year battle with the UK government, the UK Supreme Court ruled in PI’s favour, finding that decisions of the UK Investigatory Powers Tribunal should be subject to judicial review in the High Court. The case stemmed from an initial challenge in which PI alleged that hacking by UK security and intelligence services, as revealed in the Snowden disclosures, violated the European Convention on Human Rights. The full history of the case can be found here.

Third: if we strengthen the institutions that underpin our societies’ data and digital infrastructure, those institutions will be more responsive to people and communities, and better heard by policy makers, politicians, and companies. Data and digital rights are human rights. As with other human rights, we need to build the infrastructure to sustain, safeguard, and promote them. What would success look like? The new data rights infrastructure should include data rights boards; data cooperatives (which would enable collective action and advocate on behalf of users); ethical data-certification schemes; specialised data-rights civil society organisations, litigators, and auditors; data trustees who act as fiduciaries for members of the general public; and more. What are we worried about? That existing institutions are neither resilient nor representative of people’s interests; and that those new institutions fail to scale sustainably and are captured by elite interests.

The Initiative for Digital Rights in Latin America (Indela) exemplifies our commitment to building strong institutional infrastructure in the regions where we work. Indela was launched to fund projects in Latin America focussed on freedom of expression, privacy, and access to knowledge that aim to advance digital rights through policy and advocacy, public campaigns, applied research, and litigation. The fund responds to the need to build and strengthen the capacity of Latin American organisations working to transform how digital rights are understood and advanced in challenging environments characterised by the rise of disinformation on social networks, governments increasingly deploying invasive technologies for surveillance, and platforms and telecommunications companies continuing to undermine the data rights of users online.

Regional priorities

Our work in Data & Digital Rights is both regional and global in nature, with regional priorities in Western Europe and Latin America. In Western Europe, we seek to influence EU regulation through a focus on the two largest EU countries, France and Germany, as well as on the United Kingdom, given the richness of its data & digital rights ecosystem and its role as a bridge between US and EU approaches to data regulation. In Latin America, we will invest in key players that are shaping regulation in priority countries and support impact-driven open data organisations with regional reach. Our vision is to build a Latin American voice able to respond to the global challenges affecting human rights in the digital era.

Our Data & Digital Rights strategy has three objectives: to understand the actual and potential social, economic, and political impacts of technology; to develop the policy responses to deal with those impacts; and to develop the infrastructure to put those policy ideas into practice. Our strategy has evolved over the past few years and will continue to do so to keep pace with, and where possible stay ahead of, the challenges we are seeking to tackle. We look forward to feedback from our grantees, partners, co-funders, and all organisations and individuals active and interested in this space, which is so vital for the health of our societies.