2018-19 Atlantic Fellow for Social and Economic Equity
Even a decade ago, it was impossible to imagine that some of the poorest people in the global South would own mobile phones and use the internet. But by 2018, more than half of the world's population were using the internet and there were more mobile phones than human beings on Earth.
This signals something quite remarkable about the changing nature of our world. The PwC 2017 Global Artificial Intelligence Survey estimated that by 2030, artificial intelligence would be contributing $15.7 trillion to the global economy. Hence, it’s not surprising that huge investments are being channeled into the development of AI applications targeting millions of people living in the global South. In China alone, the number of AI patents granted grew by 190% over a five-year period.
Despite all the hype, however, scant attention is paid to understanding the digital lives of the poor, which are markedly different from the lives of the people in Silicon Valley and elsewhere who design and control these emerging AI technologies.
Digital inclusion works like a double-edged sword against the poor. Before they enter the brave new world of digital identities and services, poor people are essentially invisible, and miss out on the many potential opportunities that technology can bring. However, once they are connected, the poor risk becoming hyper-visible: subject to discrimination, biases and surveillance, without any protection.
Today, it is no longer becoming connected that is the primary challenge for people living in poverty in the global South and elsewhere. There is now a wide range of entry points to the digital world: a government-issued biometric ID card, a simple feature phone, or access to a basic internet connection. Thanks to massive efforts by global tech giants, governments, UN agencies, and the international development community, billions of poor people today are using mobile phones and accessing the internet in a way that, for many of them, would not have been possible by their individual efforts alone.
Instead, it is what happens when poor people move from digital exclusion to digital inclusion that can be most problematic. A small but growing number of academics and researchers have been voicing concerns about the risks of being poor in the digital age. As Virginia Eubanks argues in her book “Automating Inequality: How High-Tech Tools Profile, Police and Punish the Poor”, algorithms and automated decision systems give rise to many new forms of inequalities, while exacerbating the age-old ones. In the United States, for example, these new kinds of digital discrimination can mean that people of colour face an increased likelihood of going to prison, having their children taken away or being denied public housing.
Bangladesh provides a sobering, if much less well-known, example of the risks of digital hyper-visibility. In November 2018, Rohingya refugees went on strike in protest at the dangers posed by the collection of their biometric data. The smart ID cards issued to them by the Bangladesh authorities and the UN’s refugee agency, UNHCR, left the Rohingya in fear of being forced to return to Myanmar. It’s understandable that people who survived a genocide would be wary of trusting governments with their personal data. Crucial questions, such as how the data will be used, who will own it, and what protections will exist in cases of data abuse or other negative consequences in future, remain unanswered.
Algorithms and automated systems are often characterised as “neutral” tools that treat individuals fairly regardless of their gender, race, socio-economic status, or any other factors that could lead to discrimination. However, in practice, more often than not, they fail to remain neutral. These systems are designed and developed by humans and trained using data generated by humans, and incorporate the biases of their designers, implementers and owners, and of the larger culture in which they are created. What’s more, when it comes to poor or vulnerable people’s data, the minimum standards of privacy, consent and rights are often ignored. The biggest challenge for the poor is that they cannot afford to forgo participation in the digital ecosystem, as that would mean losing current benefits or missing out on future opportunities. However, just like everyone else, they don’t want to end up hyper-visible – digitally naked – in front of governments, law enforcement agencies and commercial entities, either.
Six years ago, Facebook CEO Mark Zuckerberg claimed that global connectivity could be a solution to inequality. Today, his optimism about a future in which technology will bring opportunities to the poorest, and lift them out of poverty, is widely shared – but perhaps too widely. Techno-optimists like to ask, “How can we make the poor ready for the digital world?” But perhaps we should approach the challenge from the perspective of the poor, and ask ourselves instead, “Is the digital world, with all of its shiny gadgets and powerful algorithms, ready for the poor?”
The short answer to that question is: not yet. It’s high time we viewed the challenges of our technological age from the perspective of the most deprived and vulnerable people. Rather than pushing the poor to adapt to new digital solutions, the digital solutions should adapt to their realities. After all, it is not the technologies, but the human intention behind the technologies, that holds the key to reducing inequalities.
The views expressed in this post are those of the author and do not necessarily reflect the position of the Atlantic Fellows for Social and Economic Equity programme, the International Inequalities Institute, or the London School of Economics and Political Science.