Does the digital rights sector help or hinder anti-colonial resistance in the age of “AI”?
By Dhaksh Raj Sooriya
Dhaksh Raj Sooriya is a queer, trans, multidisciplinary Tamil creative and surveillance technology expert born in Illankai/Sri Lanka. Their family fled genocidal violence that devastated Sri Lanka. Dhaksh is engaged in rigorous research and creative enquiry for their PhD project, titled Technologies of Survival, Resistance and Liberation under Authoritarian Surveillance: A Study of Sri Lanka and its Diaspora, in the Department of Critical Indigenous Studies at Macquarie University.
DISILLUSIONMENTS WITH DIGITAL RIGHTS
Fifteen years ago, I left the engineering profession, no longer willing to operate within its patriarchal, rigid and (quite frankly) boring confines. For years, I cheekily described myself as a “recovering civil engineer”. I wanted to be creative and work on urgent social justice issues, like border policing, climate justice and youth unemployment; not be another cog in a corporate machine building rainfall/runoff models. “I can't do this anymore; I like people more than spreadsheets,” I declared to anyone who would listen.
And now, over a decade later, I’m on the verge of breaking up with yet another profession. I am now considering becoming a “recovering digital rights expert”. Admittedly, it is strange timing, just as hordes of people are flocking to claim expertise in “artificial intelligence”[1]. The societal challenges connected to digital technologies are multiplying before our eyes: we are being surveilled more than ever in our homes, workplaces, schools, universities, and in the streets. Data extraction and automated decision-making are being used to detain and deport migrants, and to limit access to employment, credit, social benefits, housing and more. AI chatbots are encroaching on human connection (as therapists, friends, romantic lovers and sexual partners), and Musk’s Grok is being used to undress women and children on X without their consent. Livestreamed genocides in Palestine, Xinjiang and elsewhere are being fuelled by all-encompassing surveillance systems and automated weapons, while communities protest land- and water-hungry data centre developments. Internet blackouts fuel violence and genocide in Iran, India, Sudan and the world over. Our political opinions, purchasing behaviours and life decisions are increasingly algorithmically driven.
So why the potential break up at this crucial and urgent moment?
SLEEPING WITH THE ENEMY
Whilst the digital rights sector (which includes community organisations, charities, international NGOs, human rights institutions, policy think tanks, journalists, academics and more) increasingly uses terms such as ‘empire’ and ‘colonialism’ to describe the impact of Big Tech and AI, the sector is in bed with colonial forces.
Major global technology and human rights convenings such as RightsCon are shamelessly funded by Big Tech (2024 funders included Meta, Google (Alphabet), Microsoft, and more). AI and cloud systems provided by Google (under Project Nimbus) and Microsoft (through its Azure cloud technology) have been deployed in service of Israel’s colonial occupation of Gaza and the West Bank. Google is also finalising a $32 billion acquisition of the Israeli cybersecurity company Wiz.
Despite the extensive work of Black and Indigenous scholars such as Simone Browne and Elia Zureik describing how digital surveillance (and AI) has its roots in trans-Atlantic slavery and European colonialism, books such as The Age of Surveillance Capitalism (2019) by Shoshana Zuboff (which invisibilize colonialism and slavery entirely) have become seminal texts in the field. Not to mention scholars who persistently proliferate ‘new’ theoretical frameworks on tech and colonialism (e.g. Data Colonialism, Digital Neo-colonialism, Digital Colonialism, Technofeudalism, and most recently Technocolonialism). These frameworks are distinct but parallel in the way that they largely sidestep long-standing intellectual traditions essential to understanding the material conditions of today (including Anticolonial Studies, Critical Indigenous Studies, Critical Urban Geography and many more), as well as theory and practice garnered through Indigenous, Black, Queer, Trans, Anti-caste, Feminist, Marxist, Anarchist, and other social movement struggles. And it's not just that these scholars are participating in colonial ‘land grabs’ for ideological dominance. It’s that they erase the collective and continuing histories of resistance (and the intellectual rigour that comes from these lineages).
The widely read Business and Human Rights (BHR) careers newsletter openly platforms roles in Israel’s AI industry. The most recent issue included a role at Cornell Tech in New York, the result of a collaboration between Cornell and Technion (an Israeli university), which involves working together to produce genocidal technology.
Much of the digital rights sector is funded by philanthropic foundations that actively suppress the movement for a free Palestine, and/or by the multi-billion-dollar Effective Altruism movement (which has shifted its focus from global poverty to AI risk in recent years). As Timnit Gebru and Émile P. Torres explain, Effective Altruism belongs to a group of ideologies that fall under the banner of TESCREAL (Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism[2], and Longtermism)[3], which they argue is rooted in the Anglo-American eugenics tradition of the twentieth century. This interconnected and overlapping set of ideologies has underpinned the unquestioned pursuit of building an all-knowing system of artificial general intelligence (AGI), as well as Musk’s rationale for colonizing other planets.
Over the years, I’ve participated in endless international coalitions, tech policy forums and events on digital ID systems, tech-enabled border policing, online disinformation and hate speech, children’s data, electoral disinformation and more. Given how funding and corporate capture by tech giants shape agendas, it’s no surprise that we are stuck with framings such as ‘responsible tech’, ‘AI ethics’, ‘tech safety’ and of course ‘digital rights’ (conceptualised as a subset of human rights) that leave no room for explicit analyses of colonialism, Brahminical heteropatriarchy, US and other imperialisms, or neoliberal conceptions of ‘human rights’.[4]
SO, WHERE TO FROM HERE?
What if we built from a shared understanding that projects of colonialism, slavery and caste supremacy[5] are not historical, but instead ongoing formations within a capitalist system that is constantly reforming itself through time? This would require understanding that surveillance and borders have long operated as modes of governance that sort and structure the world according to colonial, imperial, casteist and capitalist interests. Long before the creation of digital technologies, certain people and groups were marked out for punishment, discipline, displacement or death. Certain land was marked out for extraction based upon the techno-scientific, profit-maximising gaze.
Colonial administrations (working with those atop caste and class hierarchies) surveyed, collected, chronicled and counted information on bodies and acquired territories for the purposes of control, shaping the very concept of information itself. These military, ethnographic, cartographic, and economic visions of the world depended on stores of transferrable information: maps, archives, libraries and compendiums. The foundational tools of surveillance, such as fingerprinting, census taking, map-making and profiling, were refined through colonialism and trans-Atlantic slavery. This means that the digital databases of today (which form the basis of automated decision-making and predictive models, including LLMs) are built upon a lineage of knowledge production that is far from neutral. We see this in the founding story of Google (Alphabet), and how Brin and Page’s creation of the search function (within very large datasets) was deeply tied to counter-terrorism, military and homeland security efforts driven by US intelligence institutions. What would it look like to reckon with this history?
Taking this a step further, how do we grapple with the fact that AI is premised on the flawed notion that historical data is an accurate basis for predicting the future? During my engineering days, I realized historical rainfall data would not help us predict the more extreme rainfall events under a changing climate. Similarly, numerous studies have shown that past human behaviour is not a sound basis for predicting future behaviour. The book AI Snake Oil comprehensively details the flaws and ineffectiveness of many AI systems and predictive technologies. Its insights are shocking, particularly given the widespread acceptance of the inevitability of AI-driven futures. Rasheedah Phillips points out in her new book, Dismantling the Master’s Clock, that predicting the future from the past makes little sense, particularly outside Western notions of a linear progression of time. She brilliantly argues that linear time is at odds with quantum physics, as well as the dynamism and fluidity of Black existence. What would the tech justice space look like if it allowed for well-resourced experimentation with alternative technologies outside the paradigms of automation, prediction and colonial time?
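To make the rainfall point concrete, here is a minimal, purely illustrative Python sketch; all distributions and parameters are made up for illustration, not drawn from any real rainfall record or from a source cited in this essay. It shows how a “1-in-100-year” design threshold estimated from historical data alone fails once the underlying distribution shifts:

```python
# Illustrative only: hypothetical parameters, no real rainfall data.
import numpy as np

rng = np.random.default_rng(0)

# "Historical" annual-maximum rainfall (mm), drawn from a stationary
# distribution -- the assumption baked into classical rainfall/runoff models.
historical = rng.gumbel(loc=100, scale=20, size=100)

# A "1-in-100-year" design threshold estimated from the past alone.
design_threshold = np.quantile(historical, 0.99)

# A hypothetical shifted future: wetter mean, fatter tail.
future = rng.gumbel(loc=115, scale=30, size=100_000)

# How often the supposedly 1% event actually occurs under the new climate.
exceedance = (future > design_threshold).mean()
print(f"Design threshold fit on history: {design_threshold:.0f} mm")
print(f"Intended exceedance rate: 1.0%")
print(f"Exceedance under the shifted distribution: {exceedance:.1%}")
```

Under the shifted distribution, the supposedly once-in-a-century event occurs several times more often than the design intended. The same logic undercuts any predictive system, including those trained on past human behaviour, whenever the world it models refuses to stay still.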
Further to this, we desperately need more ambitious technology policy that is cognisant of its limitations[6], and that works in concert with other strategies. All over the world people are imagining and executing so many powerful and creative interventions beyond mainstream technology policy:
building counter-institutions and networks (e.g. platform cooperatives)
preserving Indigenous language through tech (e.g. Te Hiku Media)
building alternative tech (Community Tech NY, Não Binário)
worker organising (e.g. Data Workers’ Inquiry, No Tech for Apartheid)
community organizing (e.g. Palestine Action hunger strike in protest of Elbit Systems contract)
resisting land grabs for large-scale infrastructure projects (e.g. global data centre resistance efforts, fights against the increasingly digitised Cop Cities, resistance against Neom and other smart city projects)
rethinking data in line with Indigenous sovereignty (e.g. Principles of Māori Data Sovereignty)
in-depth analysis and research (e.g. Criminal Justice and Police Accountability Project, Equality Labs, 7amleh, The Border Chronicle, Algorithmic settler colonialism, Indigenous futures)
manifestos (e.g. Glitch Feminism, Imagination: A Manifesto)
archival work (e.g. Sudan Digital Archive, Palestine in the Cloud)
art that’s not AI-generated (e.g. Indigenous futurist Dr Ambelin Kwaymullina, Adivasi futurist Subash Thebe Limbu)
alternative convenings (e.g. Time Zone Protocols based on Afrofuturism and Black Quantum Futurism)
What if initiatives like these were placed at the centre of our resistance and world-building efforts? What if these seeds formed the basis for growing more liberatory tech futures (and histories)? What if we also contributed more to existing movements (such as fights for Indigenous land, housing justice and food security, and against gentrification)? Then perhaps we could move away from AI as a separate policy area, and instead embed AI considerations within all legislation related to the built environment, urban planning and design, health and safety, engineering, public health, climate policy, and more.
As a trans, Tamil person and migrant, it is increasingly challenging to remain in a sector that wants to pursue slow reforms in the face of persistent state and corporate violence, colludes with colonisers, and ultimately contributes to the automation of dispossession. As Omar Woodard says (in Phillips’ book), there is “no time for the long road to justice”. This is urgent, and now is a critical moment to take action.
[1] This is intentionally in scare quotes because it is an ill-defined term. AI has long functioned more as a marketing term from the tech industry than a descriptive one, given the vast range of technologies it covers: chatbots, recommendation algorithms, content moderation algorithms on social media, facial recognition and other forms of predictive policing, and even text prediction and autocorrection. Since the advent of generative artificial intelligence (GenAI), which uses models to generate text, images, videos and more, the term AI has taken off. It is not only difficult to define, but also makes us more susceptible to the anthropomorphization of technology and feeds the narrative that we need to develop systems that outperform human intelligence.
[2] Effective altruism is defined by the Centre for Effective Altruism as “an intellectual project, using evidence and reason to figure out how to benefit others as much as possible, and taking action on that basis.” Much has shifted since the movement was established in the 2000s based on the work of philosophers Peter Singer, Toby Ord and Will MacAskill. Since then, organizations have sprung up to help Silicon Valley billionaires (e.g. Elon Musk, Peter Thiel, and Sam Bankman-Fried) find the most cost-effective ways to ‘improve the world’.
[3] TESCREAL is a neologism proposed by computer scientist Timnit Gebru and philosopher Émile P. Torres. An acronym, it stands for Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism. Gebru and Torres argue that these ideologies should be treated as an "interconnected and overlapping" group with shared origins.
[4] The colonial history of human rights frameworks and institutions has been the preoccupation of several scholars, such as Irene Watson, Upendra Baxi and Jessica Whyte. Jessica Whyte’s (2019) book, The Morals of the Market, details how neoliberal thinkers co-opted human rights by tying them fundamentally to the free market, and using them as a weapon against anticolonial projects all over the world. There are numerous case studies of parts of the world that have been abandoned by international law and ravaged by the World Bank, IMF, UN agencies, and international NGOs in the name of development (like Sri Lanka/Illankai, where I was born).
[5] Caste is a system that consists of graded levels of alleged purity and places people within a certain hierarchy, leaving those in the lowest tier, called “Dalit” or “untouchable”, subject to abuse, attacks, and systemic social exclusion. It has existed for over 3,000 years and predates colonialism. Caste oppression is not exclusive to Hinduism or South Asia and is found in South America, Asia, and Africa.
[6] Well-documented limitations of technology policy include the capture of policy processes and governments by corporations and wealthy individuals, its inability to keep pace with new tech development, the ineffectiveness of self-regulation and co-regulation, the inability to create and enforce regulation within a hostile state, and the challenge of enforcing regulations on multinational corporations across jurisdictions.