Facial Recognition at Borders and the Racialised Traveller

Facial recognition has become the face of Britain’s border modernisation — efficient, invisible, and deeply political. This long-form investigation examines how biometric border control systems reproduce racial bias and reshape the experience of the racialised traveller.

Across Britain’s airports and ports, automated gates have quietly become the new face of national security. The government presents them as efficient, impartial and convenient — an emblem of post-Brexit modernisation. Yet behind every quick passage through an e-gate lies a deeper question of who is recognised, who is delayed and who becomes a data point in a growing experiment in biometric border control.

Facial recognition technologies now define how the United Kingdom regulates human movement. Installed at Heathrow, Gatwick, and the Eurostar terminals, these systems promise seamless travel while feeding a surveillance culture that extends far beyond migration control. Their accuracy, transparency and racial neutrality remain contested. Scholars and human-rights groups warn that algorithmic bias can reproduce the same hierarchies of suspicion that once operated through manual screening.

[Image: a futuristic portrait of a woman with laser scanning lines across her face, representing facial recognition]

The UK’s embrace of biometric travel surveillance reflects a global trend but carries its own cultural and political weight. In a country negotiating its identity after Brexit, the border has become both a literal and symbolic frontier where race, technology and national belonging intersect. The racialised traveller, once profiled by human officers, is now scanned, sorted and sometimes misrecognised by code.

What emerges is a new configuration of mobility: one mediated by machines that inherit the prejudices of the societies that build them. From the design of facial datasets to the narratives of security that justify their expansion, the digital border mirrors old patterns of inequality under the guise of innovation.

Britain’s Biometric Turn

The United Kingdom began experimenting with facial recognition at borders more than a decade ago, but the technology’s expansion accelerated after the pandemic. The Home Office framed automation as both a health measure and an economic necessity. By 2023, the majority of travellers entering through major airports had passed at least once through an e-gate equipped with biometric matching software. Each interaction added data to a national system increasingly integrated with immigration, policing and counter-terrorism databases.
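
To make the mechanics concrete, the sketch below shows a generic 1:1 verification step of the kind an e-gate performs, comparing an embedding of the live camera capture with one derived from the passport-chip photo. It assumes a cosine-similarity model and a fixed global threshold; the embedding vectors and the 0.6 cut-off are illustrative placeholders, since vendors’ actual models and calibrations are proprietary.

```python
# A simplified sketch of the 1:1 verification an e-gate performs:
# comparing a live camera capture against the photo stored on the
# passport chip. The embeddings and the 0.6 threshold are illustrative
# placeholders, not any vendor's actual model or setting.
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def egate_decision(live_embedding, chip_embedding, threshold=0.6):
    """Accept the traveller if similarity clears a fixed global threshold.

    A single threshold applied to every face is where demographic
    disparity surfaces: if a model produces systematically lower scores
    for some groups, those groups are rejected more often at the same
    setting.
    """
    return cosine_similarity(live_embedding, chip_embedding) >= threshold

live = [0.12, 0.87, 0.45]  # illustrative embedding of the camera capture
chip = [0.10, 0.90, 0.40]  # illustrative embedding of the chip photo
print(egate_decision(live, chip))  # True: similarity is well above 0.6
```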

While the policy language emphasised convenience, the operational goal was control. Officials argued that faster processing would free human officers to focus on “high-risk” travellers, a category left deliberately vague. In practice, risk is often mapped onto racialised bodies and non-Western passports. The automation of suspicion did not eliminate bias; it concealed it within lines of code.

Independent reviews, including the Ada Lovelace Institute’s work on biometric data governance, found persistent demographic disparities in match rates. People of African, South Asian and Middle Eastern descent were disproportionately flagged for manual verification. These outcomes mirrored broader findings in algorithmic bias research and challenged claims that technology could neutralise human prejudice.
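
The kind of disparity such reviews measure can be expressed in a few lines. The sketch below computes the false non-match rate per demographic group from a verification log; the field names, group labels and figures are hypothetical, not the Home Office’s schema or any report’s data.

```python
# A minimal sketch of the kind of demographic disparity audit described
# above. Field names, group labels and data are hypothetical, not the
# Home Office's actual schema or figures.
from collections import defaultdict

def false_non_match_rates(verification_log):
    """False non-match rate (FNMR) per demographic group.

    Every record represents a genuine traveller presenting their own
    passport, so any failure to match is an error: a wrongly rejected
    genuine face, typically resolved by manual verification.
    """
    attempts = defaultdict(int)
    rejections = defaultdict(int)
    for record in verification_log:
        attempts[record["group"]] += 1
        if not record["matched"]:
            rejections[record["group"]] += 1
    return {group: rejections[group] / attempts[group] for group in attempts}

# Illustrative log of four genuine verification attempts.
log = [
    {"group": "A", "matched": True},
    {"group": "A", "matched": True},
    {"group": "B", "matched": False},
    {"group": "B", "matched": True},
]
print(false_non_match_rates(log))  # {'A': 0.0, 'B': 0.5}
```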

Civil-liberties groups such as Liberty and Big Brother Watch have since urged Parliament to legislate for stricter oversight. They argue that border surveillance has outpaced public debate, expanding under emergency-style powers introduced after 2016. The lack of transparency about how facial data is stored, shared or deleted leaves travellers with little recourse in cases of misidentification.

Beyond legality lies culture. The spectacle of effortless movement through gleaming e-gates signals progress, yet the very architecture of that progress depends on differential trust. The racialised traveller, aware of being algorithmically evaluated, experiences the border not as a gateway but as a mirror — one reflecting both state anxieties and technological blind spots.

The British fascination with technological fixes has long been intertwined with colonial legacies of classification and measurement. From anthropometry to biometrics, systems designed to identify have also served to differentiate. The current border apparatus extends that logic into the digital realm, turning faces into tokens of authenticity and deviation.

Racialised Bodies in Transit

At Heathrow or Manchester, the choreography of border control feels routine: travellers queue, scan, and walk through gates. Yet this apparent normality conceals a hierarchy of experience. Studies by the Migration Observatory at Oxford (2024) note that people of colour are more likely to face secondary checks even after e-gate clearance. The machine’s “neutral” verification coexists with residual human suspicion, forming a hybrid system where automation and bias reinforce each other.

In interviews with passengers conducted by Liberty in 2024, respondents of African and South Asian descent described heightened anxiety at e-gates. They feared misrecognition not because of the machine’s reputation but because of the pattern: those who resemble them often trigger alarms. When an algorithm fails to match a face, the traveller becomes a case study in deviance. The device itself is silent, but the narrative of error echoes loudly.

The psychological toll of constant verification is difficult to quantify. For frequent travellers from racialised backgrounds, every border crossing becomes a rehearsal in self-presentation: posture adjusted, expression neutral, gaze steady. What might be an efficient process for some becomes an exercise in performance for others. This “algorithmic self-discipline”, as cultural theorist Ruha Benjamin terms it, reflects how surveillance reconfigures subjectivity.

In Britain’s airports, where multiculturalism is both celebrated and contested, the digital border produces paradoxical scenes. Passengers queue beneath posters declaring “Welcome to the UK”, yet their welcome is mediated by a lens that struggles to read certain faces. The rhetoric of openness collides with the technical limitations of inclusion. When the system cannot interpret a face, it questions the person’s belonging to the national dataset itself.

Such moments resonate with the country’s broader debates about identity after Brexit. The reassertion of borders was sold as an act of sovereignty, but the practical result has been the embedding of biometric surveillance into everyday life. The racialised traveller thus encounters Britain’s anxieties not only in policy but in pixels.

Campaigners argue that the issue extends beyond accuracy to the political economy of data. Each scan feeds a database that may later inform policing, immigration enforcement or commercial analytics. For individuals already subject to disproportionate scrutiny, the loss of control over their digital likeness deepens existing asymmetries of power.

The state’s insistence on efficiency often marginalises these concerns. Official briefings describe “temporary discomfort” as acceptable collateral for national security. Yet what appears as a minor technical glitch for policy-makers translates into repeated humiliation for travellers. The border’s impartial façade becomes another site where Britain’s unresolved relationship with race surfaces through technology.

Artistic responses have begun to document these experiences. Photographer Aiysha Malik’s 2025 series Unrecognised captures travellers whose faces were misread by border systems, their images blurred into abstraction. The work reframes error as evidence, suggesting that invisibility itself can be testimony.

Global Parallels and the British Context

Although the United Kingdom situates its border strategy within national security discourse, the technologies it deploys are part of a global industry. Japan’s Narita Airport introduced full biometric boarding in 2023, while Singapore’s Changi Airport expanded live facial recognition to all terminals in 2024. British officials cite these examples as proof of innovation, yet the contexts differ sharply. In the UK, historical entanglements of race and migration render surveillance culturally charged.

The comparison with Japan highlights how cultural homogeneity shapes acceptance. Japanese media frame biometric gates as symbols of efficiency and hospitality. In Britain, where post-colonial migration defines much of the population, the same systems evoke colonial memories of classification. The technology travels, but its meanings mutate with local histories.

Singapore’s experience also offers a cautionary tale. Reports from Privacy International (2024) reveal that travellers of South Asian descent faced higher rates of false negatives at Changi’s e-gates, prompting manual checks. These incidents parallel British findings and suggest that algorithmic bias is embedded in global supply chains of biometric software. When the UK imports or licenses such systems, it inherits their structural inequities.

The British government’s collaboration with multinational vendors complicates accountability. Contracts with companies like IDEMIA and Thales are often shielded by commercial confidentiality. Without public access to testing data, it is impossible to verify claims of “bias-free” performance. The Metropolitan Police faced similar criticism after deploying live facial recognition at events in London — a reminder that surveillance debates inside the country cannot be separated from those at its borders.

Parliamentary scrutiny remains limited. The Biometrics and Surveillance Camera Commissioner’s office, though vocal, lacks enforcement power. A 2025 Commons inquiry acknowledged “concerning opacity” in Home Office procurement but stopped short of recommending suspension. The state thus continues to build its biometric border architecture with minimal democratic oversight.

From a cultural standpoint, Britain’s biometric turn reveals an enduring faith in technology as moral alibi. Efficiency becomes virtue; critique becomes obstruction. This framing allows the government to present automation as progress, deflecting questions about whose freedom it enhances and whose it constrains.

Public opinion, shaped by media coverage of migration crises, tends to conflate security with surveillance. Polls in 2024 showed majority support for automated borders, especially among demographics less likely to be targeted by misrecognition. The debate over fairness thus reproduces the very inequalities it seeks to address: those most affected have the least influence over policy.

Artists and academics have started to intervene in this narrative. Exhibitions such as Digital Borderscapes at the Victoria and Albert Museum (2024) contextualise biometric technologies within Britain’s colonial visual culture. By tracing a lineage from 19th-century anthropometry to 21st-century algorithms, they reveal continuity where officials claim rupture. The result is a public conversation that treats surveillance not merely as governance but as cultural expression.

Cultural Representation and Resistance

Britain’s cultural industries have begun to interrogate the politics of biometric borders. Filmmakers, writers, and visual artists increasingly depict travel through the prism of surveillance. In contemporary cinema, airports and checkpoints no longer represent adventure but algorithmic control. The documentary Coded Bias (dir. Shalini Kantayya, 2020) remains a foundational reference, tracing how facial-recognition systems encode racial and gender bias — a global concern mirrored in Britain’s border infrastructure.

Visual art continues this critique. Works by artists such as Trevor Paglen and Hito Steyerl examine how watching has become both a technical process and a political act. Paglen’s Watching the Watchers (ongoing project, exhibited internationally) exposes the human labour and military networks behind automated vision, while Steyerl’s How Not to Be Seen interrogates the aesthetics of disappearance in a data-saturated world. Exhibitions like Exposed: Voyeurism, Surveillance and the Camera at Tate Modern and, more recently, I Am Not a Robot at the Science Gallery London (2023–24) situate biometric technologies within the history of photographic control.

Writers have also responded. Nadifa Mohamed’s The Fortune Men (2021) and Bernardine Evaristo’s Manifesto (2021) both address racialised belonging and institutional visibility, themes that resonate with Britain’s algorithmic borders. Although not directly about technology, their narratives of recognition and exclusion help readers grasp how digital systems can reproduce older hierarchies of perception.

Activism parallels these cultural interventions. UK organisations such as Big Brother Watch, Liberty, and the Open Rights Group document cases of misidentification and campaign for stricter biometric regulation. Their reports connect algorithmic bias to broader struggles for racial justice and data rights. In 2024, Big Brother Watch’s briefing Automated Borders and Civil Liberties called for independent audits of all e-gate technologies, citing disproportionate error rates among travellers of colour.

Academia and community groups increasingly collaborate. The Sociodigital Futures research centre at the University of Bristol runs workshops on digitised borders, translating technical debates into public dialogue. These initiatives follow the participatory approach championed by the Ada Lovelace Institute, which argues that those most affected by automation should have a voice in its governance.

Cultural resistance also emerges in performance and music. The theatre collective LUNG staged Who Runs the World? (2024), exploring data, gender, and power through verbatim testimonies, while sound artist Emeka Ogboh’s installation No Condition Is Permanent (2023) layered airport announcements and migrant voices to evoke the sonic politics of mobility. Such works translate the abstraction of surveillance into sensory experience.

Britain’s art scene, long attuned to political tension, thus becomes a site of counter-vision. By reframing facial recognition as a cultural narrative rather than neutral technology, artists and activists invite audiences to ask who defines safety and belonging in the digital age.

Ultimately, cultural resistance offers what policy rarely delivers: empathy. It rehumanises travellers reduced to data points and challenges a system that too often mistakes visibility for understanding.

Policy, Ethics and Public Accountability

The United Kingdom’s legal framework for biometric border control remains patchy. The Data Protection Act 2018 and the UK GDPR outline principles of necessity and proportionality, yet border operations fall under national-security exemptions. This loophole allows authorities to collect and store facial templates without explicit consent. Civil-rights organisations argue that such exceptions transform extraordinary powers into routine practice.

Parliamentary debate has been tentative. In 2025, the Home Affairs Committee acknowledged “serious governance gaps” but postponed reform, citing the need for further technical study. Meanwhile, new trials at Heathrow and Gatwick — led by British Airways and other airlines — expand contactless boarding using facial matching from check-in to gate. Officials describe this as “frictionless travel”; critics see the steady normalisation of surveillance.
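
Technically, check-in-to-gate schemes imply a shift from 1:1 verification to 1:N identification: the gate no longer checks one face against one passport but searches a gallery of everyone enrolled that day. The sketch below illustrates that generic architecture, assuming unit-normalised embeddings and an illustrative threshold; it is not British Airways’ or any vendor’s actual design.

```python
# A sketch of the 1:N identification step a check-in-to-gate scheme
# implies: the gate searches the day's enrolled gallery for the best
# match rather than verifying one face against one passport. Generic
# architecture only; not British Airways' or any vendor's design.
def dot(a, b):
    """Similarity as a dot product, assuming unit-normalised embeddings."""
    return sum(x * y for x, y in zip(a, b))

def identify(probe_embedding, gallery, threshold=0.7):
    """Return the enrolled passenger whose embedding best matches the probe.

    `gallery` maps passenger IDs to embeddings captured at check-in.
    A 1:N search multiplies opportunities for a false match as the
    gallery grows, which is why identification is usually run at a
    stricter threshold than 1:1 verification.
    """
    best_id, best_score = None, threshold
    for passenger_id, embedding in gallery.items():
        score = dot(probe_embedding, embedding)
        if score > best_score:
            best_id, best_score = passenger_id, score
    return best_id  # None means no enrolled face cleared the threshold

gallery = {"P1": [1.0, 0.0], "P2": [0.0, 1.0]}  # two enrolled passengers
print(identify([0.9, 0.436], gallery))  # 'P1': score 0.9 beats the 0.7 bar
```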

Ethical analysis highlights the imbalance of knowledge between the state and the subject. Travellers seldom know how long their biometric data is stored or which agencies can access it. The Information Commissioner’s Office has urged mandatory “algorithmic impact assessments”, akin to environmental reviews, yet these remain voluntary. Without binding transparency, oversight depends on goodwill rather than law.

Accountability is further diluted when private vendors operate public systems. Companies such as Thales and IDEMIA provide hardware and algorithms but claim commercial secrecy over their models. Independent audits are rare. The result is governance through opacity, where accuracy is promised but rarely evidenced.

Ethicists propose corrective measures: demographic testing before deployment, public registers of algorithmic performance, and fixed data-retention limits. Implementation lags behind innovation. Britain continues to expand its biometric infrastructure faster than it defines citizens’ rights within it.
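
Of the measures ethicists propose, a fixed retention limit is the most straightforward to express in code. The sketch below assumes a hypothetical 30-day window and a simple template schema; current UK policy specifies no such figure.

```python
# A minimal sketch of one safeguard named above: a fixed data-retention
# limit for biometric templates. The 30-day window and record schema are
# hypothetical assumptions; current UK policy sets no such figure.
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 30  # illustrative limit, not an actual legal requirement

def purge_expired_templates(templates, now=None):
    """Keep only templates still inside the retention window.

    Each template is a dict with a 'captured_at' UTC timestamp; anything
    older than RETENTION_DAYS is dropped outright rather than archived.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=RETENTION_DAYS)
    return [t for t in templates if t["captured_at"] >= cutoff]

now = datetime.now(timezone.utc)
stale = {"captured_at": now - timedelta(days=90)}
fresh = {"captured_at": now - timedelta(days=3)}
print(len(purge_expired_templates([stale, fresh], now)))  # 1: stale dropped
```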

Media coverage reinforces this imbalance. Sensational stories of “queue-busting tech” overshadow nuanced discussion of racial bias or civil liberties. Slow-journalism platforms like Rock & Art play a vital role in countering this simplification, situating border technologies within broader histories of empire, migration, and representation.

Internationally, Britain’s data-sharing agreements with the EU, the United States, and Australia enable cross-border transfers of biometric information. Without harmonised safeguards, errors can cascade across jurisdictions, leaving travellers with no clear route to correction.

At stake is not only privacy but the definition of citizenship itself. In a world where recognition determines access, fairness cannot be outsourced to software. Accountability requires political courage as well as technical transparency.

The Future of Mobility

The convergence of artificial intelligence, border management and commercial travel will intensify the struggle between convenience and control. The Home Office envisions “seamless corridors” in which passports disappear, replaced by live facial verification from arrival to departure. To policy-makers, this suggests modernisation; to critical observers, it signals the conversion of movement into a data transaction.

Technologists predict that combining facial, iris and gait recognition will enhance accuracy. Yet such precision may heighten vulnerability. For travellers whose identities defy neat categorisation — dual nationals, migrants, refugees — total accuracy can translate into total exposure.
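
The simplest form such a combination could take is score-level fusion, a weighted average of per-modality match scores. The sketch below uses illustrative weights and assumes scores normalised to the range 0 to 1; no deployed configuration is implied, and the handling of missing modalities is itself a design choice.

```python
# A sketch of score-level fusion, the simplest way a face-iris-gait
# combination could work: a weighted average of per-modality match
# scores. Weights and normalisation are illustrative assumptions,
# not a deployed configuration.
def fused_score(scores, weights=None):
    """Weighted average of per-modality similarity scores in [0, 1].

    `scores` maps modality name to a normalised score, e.g.
    {"face": 0.91, "iris": 0.78, "gait": 0.40}. Missing modalities
    simply drop out of the average, which is itself a policy choice:
    whose iris or gait fails to capture cleanly is not demographically
    neutral.
    """
    weights = weights or {"face": 0.5, "iris": 0.3, "gait": 0.2}
    total = sum(weights[m] for m in scores)
    return sum(weights[m] * s for m, s in scores.items()) / total

print(fused_score({"face": 0.91, "iris": 0.78}))  # ≈ 0.861, gait absent
```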

Sociologists warn that perpetual verification corrodes social trust. When every face must be authenticated, trust ceases to be relational and becomes computational. This is what surveillance scholar David Lyon calls the “post-privacy subject” — a person who internalises monitoring as routine. Airports, once gateways of imagination, risk becoming laboratories of obedience.

Environmental and geopolitical pressures will also shape the next decade. Climate displacement and political instability will push more people into movement even as borders harden. Britain’s proposed “Digital Asylum Pilot” mirrors similar EU projects that enrol refugees’ biometrics at entry, raising ethical alarms among humanitarian groups such as Refugee Council UK.

Alternative visions persist. European coalitions advocate “human-centred borders” built on consent, minimal data collection and independent oversight. The Ada Lovelace Institute and the Centre for Data Ethics and Innovation both recommend participatory design and open-source auditing to ensure algorithmic fairness. These frameworks remain aspirational but point towards an ethics of recognition rather than control.

Culturally, the future of mobility will hinge on narrative. Whether societies imagine travel as danger or dialogue determines how technology is built. The racialised traveller stands at this crossroads, symbolising both vulnerability and resilience.

For Britain, the challenge is to reconcile technological ambition with historical accountability. A nation that once mapped the world now maps faces; the moral question is whether it can recognise the humanity behind them.

The Border that Sees and Mis-Sees

Facial recognition at Britain’s borders embodies the paradox of a country that celebrates innovation while inheriting inequality. Automation promises efficiency but often encodes the prejudices it claims to erase. The racialised traveller experiences this contradiction daily: welcomed rhetorically, scrutinised digitally.

Borders have always been theatres of power; technology merely changes the scenery. E-gates and cameras project neutrality while perpetuating selection. Recognition has become both literal and political — a measure of worth inscribed in code.

To move forward, Britain must treat border technology as a matter of democratic choice, not inevitable progress. Transparent testing, independent oversight and inclusive design are essential but insufficient without cultural change. Recognition should mean empathy, not exposure.

Across Britain’s galleries and classrooms, artists and scholars already imagine alternative futures: ones where movement is relational, identity fluid, and technology accountable. Their work reminds us that the question is not only how machines see, but what they fail to understand.

In the end, the contest over biometric borders is a contest over belonging. How Britain resolves it will reveal not only how it guards its frontiers, but how it recognises itself.

References:

Achiume, E. T. (2021). Digital racial borders. AJIL Unbound, 115, 130–135. https://doi.org/10.1017/aju.2021.15

Ada Lovelace Institute. (2023). Ryder Review: The independent review of biometric data governance in the UK. London: Ada Lovelace Institute. https://www.adalovelaceinstitute.org

Amnesty International. (2024). Racial bias in facial recognition algorithms. https://amnesty.ca/features/racial-bias-in-facial-recognition-algorithms

Big Brother Watch. (2024). Automated borders and civil liberties in the UK: Policy briefing. London: Big Brother Watch. https://bigbrotherwatch.org.uk

Centre for Data Ethics and Innovation. (2023). Responsible innovation in biometric technologies: Policy paper. London: CDEI. https://www.gov.uk/government/organisations/centre-for-data-ethics-and-innovation

Data Protection Act 2018, c. 12 (UK). https://www.legislation.gov.uk/ukpga/2018/12

De Oliveira, A. M., Dias, J. A., & da Silva, E. M. (2025). Influence of racial bias in the use of facial recognition applied to access control. Research, Society and Development, 14(3), 1–15. https://doi.org/10.33448/rsd-v14i3.48186

Evaristo, B. (2021). Manifesto: On never giving up. London: Hamish Hamilton.

Information Commissioner’s Office. (2023). Guidance on biometric data and algorithmic transparency. Wilmslow: ICO. https://ico.org.uk

Liberty. (2024). Briefing on facial recognition and border security powers in the UK. London: Liberty. https://www.libertyhumanrights.org.uk

Lyon, D. (2022). Pandemic surveillance. Cambridge: Polity Press.

Migration Observatory, University of Oxford. (2024). Automation and discrimination in UK border management. Oxford: University of Oxford. https://migrationobservatory.ox.ac.uk

Paglen, T. (2019). Watching the watchers. Exhibition and ongoing visual research project. https://paglen.studio

Privacy International. (2024). Biometrics at airports: Lessons from Singapore and beyond. London: Privacy International. https://privacyinternational.org

Refugee Council UK. (2024). Digital asylum systems and data justice: Policy response. London: Refugee Council. https://www.refugeecouncil.org.uk

Science Gallery London. (2023). I am not a robot [Exhibition catalogue]. London: King’s College London.

Steyerl, H. (2013). How not to be seen: A fucking didactic educational .MOV file [Video installation]. Exhibited internationally, 2013–present.

UK Parliament Home Affairs Committee. (2025). Biometric data and border technologies: Governance inquiry. London: House of Commons.

University of Bristol Sociodigital Futures Research Centre. (2024). Digitised borders: Public engagement workshops report. Bristol: University of Bristol. https://www.bristol.ac.uk/sociodigital-futures

Yücer, S., et al. (2024). Racial bias within face recognition: A survey. ACM Computing Surveys, 57(8), 1–34. https://doi.org/10.1145/3705295




Sarah Beth Andrews (Editor)

A firm believer in the power of independent media, Sarah Beth curates content that amplifies marginalised voices, challenges dominant narratives, and explores the ever-evolving intersections of art, politics, and identity. Whether she’s editing a deep-dive on feminist film, commissioning a piece on underground music movements, or shaping critical essays on social justice, her editorial vision is always driven by integrity, curiosity, and a commitment to meaningful discourse.

When she’s not refining stories, she’s likely attending art-house screenings, buried in an obscure philosophy book, or exploring independent bookshops in search of the next radical text.

Nina Simmons (Author)

Nina Simmons is a UK-based writer and critic with a passion for rock music, cultural commentary, and the language of rebellion. Her work captures the attitude, poetry, and philosophy of music, from classic anthems to underground scenes.
