Predictive Policing at the Border: How Algorithmic Immigration Enforcement Targets Latino Communities

A license plate reader stands along the side of a road, Wednesday, Oct. 15, 2025, in Stockdale, Texas. (AP Photo/David Goldman).

ICE’s newest surveillance platform gives agents near real-time visibility into undocumented immigrants’ movements and prioritizes them for deportation based on algorithmic assessments. The $30 million system, built by Palantir Technologies, pulls data from passport records, Social Security files, Internal Revenue Service (IRS) records, and license plate readers to decide who gets targeted for removal. The platform is designed to streamline mass deportations by automatically flagging individuals. Yet despite processing sensitive data on countless immigrants, these algorithmic systems operate entirely beyond constitutional review.

These technologies mark a new frontier in immigration enforcement. While ICE has long relied on officers’ discretion to identify and detain immigrants, agencies now deploy predictive analytics, risk-scoring algorithms, and biometric surveillance to make those decisions at scale. From Border Patrol’s license plate recognition networks that flag “suspicious” travel patterns to U.S. Customs and Border Protection’s (CBP) Automated Targeting System (ATS), which assigns terrorism risk scores to travelers, algorithmic pattern-matching now determines who gets stopped, searched, and detained. Yet these systems operate in immigration law’s constitutional blind spot. Through doctrines like plenary power and the entry fiction, courts routinely decline to review discriminatory enforcement practices that would be unconstitutional in any other context. As algorithmic immigration enforcement expands and disproportionately impacts Latino communities, courts can no longer allow these doctrines to shield such systems from equal protection review.

The numbers reflect these trends. While Latinos comprise roughly 45% of all immigrants in the United States, they accounted for 90% of all ICE arrests in the first six months of 2025, roughly double what their share of the immigrant population would predict. Of ICE’s top five countries of origin for arrests, all are Latin American: Mexico leads with 69,364 arrests (36% of the total), followed by Guatemala (36,104), Honduras (27,978), Ecuador (22,936), and Colombia (20,123). This disparity reveals that Palantir’s systems do not neutrally enforce immigration law; they amplify existing patterns of discrimination at unprecedented scale.

Consider a case from Border Patrol’s license plate reader network, which flagged Lorenzo Gutierrez Lugo, a truck driver who transported furniture for Mexican families across the southern United States. The algorithm deemed his route suspicious based on travel patterns common among Latino immigrants: frequent trips to the border, service to immigrant communities, and cash payments, a traditional practice in many Latino communities. Despite finding no contraband, authorities arrested him on money laundering charges and seized thousands of dollars in cash belonging to his Latino customers. Prosecutors eventually dropped all charges, but the ordeal still left his employer with $20,000 in legal fees.

These technological tools embed racial and cultural profiling into code. Palantir's ImmigrationOS prioritizes deportations by pulling data from international passport records, tracking remittances sent to family abroad, and flagging travel patterns near border regions. ICE markets these tools as “targeting and enforcement prioritization,” but in practice they function as automated discrimination systems that transform routine aspects of Latino life into grounds for surveillance, detention, and removal.

Immigration law falls within a constitutional grey zone. Under the plenary power doctrine, Congress and the executive branch possess near-absolute authority over immigration, with courts offering only minimal review. The Supreme Court has repeatedly held that immigration decisions involve “political questions” best left to the elected branches, particularly when they implicate foreign affairs or national security. This judicial deference extends even to people physically present in the United States through the “entry fiction,” under which immigrants are treated as perpetually seeking entry and thus entitled to fewer constitutional protections than United States citizens or lawful permanent residents.

These 19th-century doctrines now shield 21st-century algorithmic systems from scrutiny. When Kilmar Abrego Garcia, a Salvadoran man with a 2019 court order protecting him from deportation, was wrongfully deported to El Salvador's infamous CECOT mega-prison, ICE's databases had labeled him a gang member based on unverified police reports and hearsay from a confidential informant. Despite the clear error, courts initially struggled to intervene because immigration authorities claimed the determination fell within their unreviewable discretion. Only after months of litigation and a Supreme Court ruling did Abrego Garcia return to the United States, and even then, authorities continue attempting to deport him.

This accountability gap means government algorithmic systems can flag individuals for deportation using opaque criteria, pull data from error-prone government databases, and prioritize enforcement based on country of origin, all without the equal protection review that would apply to similar AI systems in criminal law, employment, or housing.

The human cost of automating immigration enforcement falls overwhelmingly on Latino communities. Yet the impact extends beyond those detained. The Pew Research Center reports that 47% of Latinos, including naturalized citizens and legal residents, now worry about deportation for themselves or family members. One-third report stress, anxiety, and sleep disruption. In Chicago, real estate agents note Latino clients canceling home purchases out of fear. Families restructure their entire lives around the threat of enforcement, yet the systems driving these deportations operate with virtually no judicial oversight.

Immigration’s constitutional exception made sense in an era of human discretion, when individual officers had to make judgment calls at the border. Today, however, algorithmic systems have transformed enforcement into something fundamentally different. Palantir's ImmigrationOS processes millions of records to identify patterns, assign risk scores, and flag entire categories of people for removal. When these systems disproportionately target Latino communities by pulling data on travel to Mexico, flagging cash transactions common in immigrant neighborhoods, and prioritizing deportations of people from Latin American countries, they function as automated discrimination at scale.
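To see how such a system can discriminate without any field ever recording ethnicity, consider a deliberately simplified sketch. The feature names, weights, and threshold below are hypothetical illustrations, not Palantir’s actual ImmigrationOS logic; they exist only to show how facially neutral proxies like border-region travel, cash transactions, and remittances can mechanically concentrate enforcement flags on one community.

```python
# Purely illustrative toy model of a rule-based "enforcement priority" score.
# NOT Palantir's ImmigrationOS logic: all feature names, weights, and the
# threshold below are hypothetical, chosen only to show how facially neutral
# proxy features can concentrate flags on Latino communities.

from dataclasses import dataclass

@dataclass
class Record:
    label: str
    border_region_trips: int   # trips near the southern border in the past year
    cash_transactions: int     # large cash payments observed
    remittances_abroad: int    # remittances sent to family abroad
    country_of_origin: str

# Hypothetical weights: each feature correlates with ordinary life in many
# Latino immigrant communities, so the score inherits that correlation.
WEIGHTS = {
    "border_region_trips": 2.0,
    "cash_transactions": 1.5,
    "remittances_abroad": 1.0,
}
PRIORITY_COUNTRIES = {"Mexico", "Guatemala", "Honduras", "Ecuador", "Colombia"}
FLAG_THRESHOLD = 10.0

def priority_score(r: Record) -> float:
    """Weighted sum of proxy features plus a country-of-origin bump."""
    score = (
        WEIGHTS["border_region_trips"] * r.border_region_trips
        + WEIGHTS["cash_transactions"] * r.cash_transactions
        + WEIGHTS["remittances_abroad"] * r.remittances_abroad
    )
    if r.country_of_origin in PRIORITY_COUNTRIES:
        score += 5.0
    return score

if __name__ == "__main__":
    people = [
        Record("truck driver serving border towns", 12, 8, 4, "Mexico"),
        Record("suburban commuter", 0, 1, 0, "Canada"),
    ]
    for p in people:
        s = priority_score(p)
        verdict = "FLAGGED" if s >= FLAG_THRESHOLD else "not flagged"
        print(f"{p.label}: score={s:.1f} -> {verdict}")
```

Run against these two hypothetical records, the score flags the border-town truck driver and ignores the suburban commuter, even though no field records race or ethnicity; the disparity is built into the choice of proxies and weights.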

Courts can no longer treat automated systems of immigration enforcement as just another exercise of executive discretion. These systems demand the same scrutiny that applies to AI systems used in criminal justice, employment, and housing. The outcome of continued judicial deference is a two-tiered constitutional order: one where algorithmic discrimination is illegal in every context except immigration, leaving Latino communities subject to surveillance and removal based on data-driven profiling that would be unconstitutional anywhere else. As predictive technologies continue to reshape immigration enforcement, equal protection must apply, or the Constitution's promise of equal justice becomes meaningless for millions of Americans and immigrants alike.


Isabella Pazmino-Schell, CC’28, is a Staff Writer majoring in Human Rights and Film & Media. Her academic interests include immigration law, constitutional protections, and the way that legal narratives shape public understanding of justice.
