US Customs and Border Protection (CBP) is quietly using drones and artificial intelligence beyond the border to surveil migrants before they even reach the US-Mexico border. CBP’s massive annual budget increases, its continued contracts with the private sector, and broad bipartisan support among policymakers for more technology along the border should all raise concern. But what is the actual logic behind using unmanned aerial vehicles (UAVs, or drones) for migration purposes?
Perhaps the primary motivation for deploying drones is the securitization of immigration, which continues to be framed as a threat to national security. This raises the question: is the US developing drone and surveillance practices through immigration enforcement for possible future warfare?
Warfare is increasingly becoming a digital game, one that is less risky for operatives while simultaneously aiming to produce more efficient results. The practices and policies of the drone program carried out by CBP seem to mirror this same discourse.
The drone program operated by CBP under DHS is a means of cutting dependency on agents in the field by deploying military-grade drones from remote stations within the US. The program was launched by CBP in 2004 to detect suspicious activity and surveil anyone who might seem a threat. However, the more complex ways in which CBP operates its drone and artificial intelligence program grant the agency and DHS excessive force and control both along and beyond the border.
CBP uses a combination of larger, military-style drones and smaller hand-launched devices from a range of manufacturers; the most alarming is perhaps General Atomics’ notorious Predator B aircraft. Its predecessor, the Predator A, was used extensively by the US military, beginning its service as a surveillance tool in the Balkans intervention in the 1990s before evolving into an armed flying weapon.
While CBP uses other technologies for prolonged surveillance, drones are used specifically for targeted investigations. In an attempt to avoid detection, many migrants are opting for much riskier routes through higher mountains and vast deserts. Although CBP’s artificial intelligence algorithms and drones could be used to help migrants in dangerous situations, in practice they appear only to be linked to the increased number of migrant deaths along these riskier paths.
A report found that CBP drones are not used exclusively for border patrol and are often lent to other agencies. Flight-hour reports between 2011 and 2014 indicate that drones were frequently flown over restricted and foreign airspace. Drone surveillance has not only allowed CBP to extend its capabilities beyond the physical border but has also created an environment in which law enforcement can spy on people from above, both at and beyond the border.
Added to this are the multi-layered interests of private technology, security, and arms companies. Firms such as Palantir and Lockheed Martin are not only making massive profits from the government; because these new technologies become classified operations, many of their products go unchecked by auditing committees. Such collaboration is concerning in that few of the implications of using new and untested technologies are ever addressed.
A recent report found that the US government currently holds over 29 active contracts with Palantir alone, worth approximately $1.5 billion. Palantir has also taken on Project Maven, a controversial Pentagon program to build an AI-powered surveillance platform for unmanned aerial vehicles, despite major ethical concerns. While the figure the firm receives from Project Maven is unconfirmed, Peterson contends that the project is vital to the US government, ‘[and is as] important as America’s race to develop a nuclear weapon during World War II’.
The US government continues to advance the CBP drone program despite abundant and clear ethical and human rights concerns over the current use of such technologies. The collection of data on migrants before they have even arrived at the border is highly concerning, as the Inspector General’s report concluded there are currently insufficient safeguards in place to protect this highly sensitive data.
Algorithms continue to be heavily criticized for their mistakes and biases, yet DHS continues to use such data in processing applications. Armed with biased and potentially inaccurate data, an immigration interview can easily be turned into an interrogation. The lack of legal rights afforded to migrants under such policies makes them highly vulnerable subjects of an exploitative program. The continued push towards automated decision-making, however, points to the larger systemic problem that is the current US immigration system.
About the author: Erin Harris is currently an MSc candidate in International Development.
COMPAS, School of Anthropology, University of Oxford, 58 Banbury Road, Oxford, OX2 6QS