Cities that were at the forefront of limiting their own participation in aggressive federal immigration enforcement are now expanding the scope of their work: protecting their residents from data collection and surveillance, too.
Wilmer Catalan-Ramirez has been awaiting deportation at the McHenry County Jail since seven Immigration and Customs Enforcement (ICE) agents entered his home in March and forcibly arrested him, fracturing one of his arms. Catalan-Ramirez, a father of U.S. citizen children, was a high priority for arrest because he was in the Chicago police’s gang database. But according to the lawsuit recently filed on his behalf, he was never in a gang.
Many elements of the case remain in dispute, and Catalan-Ramirez had been deported once before. But there’s one factor that’s clear-cut: ICE relied on the city for information crucial to his detention—even in a city like Chicago considered a “sanctuary” because of its restriction on local resources devoted to federal immigration enforcement.
As the federal government increasingly relies on data from localities, some cities are developing protective policies that broaden the definition of “sanctuary.” While the term has long meant withholding some local cooperation for immigration enforcement, it is now starting to mean withholding some other data, too, to protect vulnerable communities—citizen and non-citizen—from the ever-growing surveillance dragnet. These policies seek to create what some advocates are calling “digital sanctuaries.” They aim to better define what data is gathered, by what means, and how it is used.
“Now cities are going to have to ask: What should we close? What should we segment? What should we purge? What should we put a time-limit on?” says Greta Byrum, director of the Resilient Communities program at New America. “If their primary goal is to protect the rights of their residents, they may have to make different choices about what they do with data.”
Surveillance tech: “beta-tested” on the vulnerable
With the rapid advancement of biometric technology and a shift in immigration politics over the last few decades, the type and extent of information required of people seeking entry to the U.S. has changed dramatically.
“A fairly straightforward examination of a person has become something of a different nature,” says Alvaro Bedoya, executive director of the Center on Privacy & Technology at Georgetown Law. “If you are an immigrant seeking a visa to enter this country permanently or on a temporary basis, you basically submit your body to be measured and tracked.”
When such a person enters the U.S., their photographs are enrolled in a Department of Homeland Security (DHS) repository and their fingerprints are scanned against the FBI database. To track their exit and entry even better, the White House is now encouraging iris recognition pilots at the border and planning to roll out facial recognition at U.S. airports. “This is not how things have usually been historically,” Bedoya says. “Previously, your body wasn’t tracked unless you were arrested by the police.”
In his January executive order, for example, Trump announced that non-citizens would not benefit from a federal law that prevents agencies from sharing information with one another. Advocates worry that the move will allow data collected during the Obama era through programs like Deferred Action for Childhood Arrivals (DACA) to be used to deport immigrants. The administration has also issued a new rule allowing DHS to collect online information about immigrants, including their social media handles, connections, and search results.
We already know ICE is going to great lengths to ramp up deportations, targeting anyone who comes up on its radar in ways that even longtime members of the agency find concerning. “You’re seeing signs that the gloves have come off,” Bedoya says.
The data cities provide is crucial to this tracking process, and it is often willingly shared. The best-known pipelines of data are created through formal arrangements some police departments have with the FBI and ICE, which deputize local officers to do enforcement work. But even the many cities that have blocked some of these arrangements by dubbing themselves “sanctuary cities” are funneling significant amounts of information to the federal government.
Among the sources ICE can pull from are local law enforcement repositories, DMV and benefits records, license plate reader data, FBI criminal databases, and student visa records. They also have access to the National Security Entry-Exit Registration System (NSEERS) database—the controversial “Muslim registry” of the Bush era; information from it was never purged. Local police have also increasingly used new technologies like stingrays and social media analytical tools—often disproportionately in minority neighborhoods or against protesters. A 2016 Georgetown Law analysis revealed that across the country, more and more departments are using biased facial recognition software, subjecting residents in even non-criminal databases to a “virtual line-up.” Of course, many in the law enforcement community—including in liberal cities—insist that expanded surveillance is crucial to preventing crime and terrorist activity.
But privacy advocates argue that the rules surrounding these technologies are lax, and their impact on vulnerable populations is not very clear. Plus, taxpayers often don’t know what’s being used and how. It’s this secrecy that activists are trying to disrupt at the local level—not just for the benefit of immigrants and other communities of color, but for every resident.
“What we've seen throughout history is that the U.S. government will basically do its beta testing, if you will, on these hyper vulnerable communities,” says Christina Sinha, who leads the National Security and Civil Rights Program at Asian-Americans Advancing Justice-Asian Law Caucus, a legal and civil rights organization based in San Francisco. “It will roll out a massive invasion of civil rights on a more vulnerable population and then extend it out further and further as it becomes more normalized.”
Local “digital sanctuary” laws
Cities, of course, cannot stop the federal government from carrying out its goals in the ways it sees fit. But they can determine the scope of how their own resources are used for that purpose. That’s the principle underlying the range of policies, loosely called “sanctuary policies,” that seek to limit the involvement of local police in immigration enforcement. Attorney General Jeff Sessions has repeatedly tried to punish these cities, saying that such policies contribute to crime. So far, though, these cities are winning in the courts.
So how does a city become a digital sanctuary? A helpful answer comes in the form of a guide to responsible municipal data policy in the Trump era, released by The Sunlight Foundation. In short, the ten recommendations ask cities to limit the sensitive information they collect, protect the data already in their possession, and make their collection practices more transparent. Below are some local laws and proposals built on these principles:
Limiting cooperation on federal terror agreements
San Francisco’s decision to end its police department’s participation in the FBI’s Joint Terrorism Task Force (JTTF) has garnered criticism from former FBI officials, who say it’s going to slacken anti-terrorism efforts. “It’s cutting off your left hand to spite your right hand,” James McJunkin, who led the second-largest Joint Terrorism Task Force, in Washington, D.C., told The Washington Post. “It makes no sense at all.”
But even before the city ended it, advocates had pushed through an ordinance that asked the police department to comply with three conditions if it intended to continue its JTTF agreement. Among them: the actions of the city’s police officers needed to comply with local law. “If you're wearing a San Francisco Police Department badge, you have to follow the same laws that everybody else wearing that badge has to follow,” Sinha said. “It doesn't matter that you're sitting in a different building.”
Eliminating gang databases
Even in so-called sanctuaries on immigration policies like Chicago and Los Angeles, police share an array of data with ICE—including gang databases that have been criticized as broad, inaccurate, and racially biased. Like Catalan-Ramirez, who is awaiting deportation after an alleged gang designation, many others have complained that they’ve been listed because of what they look like and where they live—not because they’re actually gang members. A state audit found that 42 entries in California’s gang database were less than one year old at the time their names were included. And an investigation by The Oregonian revealed that over 80 percent of the gang list entries in Portland were minorities.
In 2017, however, Portland announced that it was doing away with these gang designations and intended to purge the data. The database had resulted in “unintended consequences,” the police said.
Prohibiting discriminatory registries
After Pearl Harbor, Census Bureau block-level data helped enable the displacement and internment of Japanese Americans. And following 9/11, then-President Bush created NSEERS, the “Muslim registry,” which President Trump has talked about reinstating.
To ensure that locally collected data does not bolster federal efforts such as these, San Francisco passed an ordinance in March 2017 prohibiting municipal resources or personnel from being used for any registry based on race, religion, or national origin. It also allows residents to sue if they feel the city has violated the policy. Spokane and Chicago have also passed similar legislation this year, and the Colorado state legislature is considering its own version.
These provisions partly serve a symbolic purpose, given that the Trump administration hasn’t actually taken steps to put a registry into action. They’re moral proclamations. But they also serve as preemptive defense—a way of opting out in case the federal government does go ahead with a registry and asks local governments to help.
Purging municipal ID data
Many cities have implemented municipal ID laws so that various populations without documents—including immigrants, the homeless, and the elderly—could have official identification that allows them to open bank accounts, access libraries, and ride public transit.
New York City’s program, called IDNYC, came with a provision that allowed the data collected through it to be purged after two years. “In this case, we were mindful that we were designing a program for individuals who do have a [particular privacy and security] policy and need,” said Bitta Mostofi, acting commissioner of the New York City Mayor’s Office of Immigrant Affairs. When concerns arose about IDNYC data being misused after Trump took office, the city considered flushing the information. In a preliminary ruling, a Staten Island judge ruled that it had the right to do so.
Public vetting of surveillance tools
In 2013, residents learned of Oakland’s citywide surveillance project that aggregated data from a variety of sources, including license plate readers and surveillance cameras. The real purpose of this initiative, leaked emails later showed, was to keep an eye on protesters.
The furor that followed the revelation led to the creation of a privacy commission that has drafted a surveillance ordinance, based on an American Civil Liberties Union guide. The legislation requires public vetting of any new surveillance technology acquired by the city, including a public discussion, assessment of impact, and annual report. The legislation has spread to other counties in the Bay Area and enjoys bipartisan appeal, says Brian Hofer, who chairs the privacy advisory commission.
“It's just good governance. These were decisions made by law enforcement—unilaterally—in the dark; no oversight,” he said. “Some of this equipment is garbage, it's just snake oil…a waste of taxpayer money.”
Regulating facial recognition
Georgetown Law’s 2016 report found that 117 million American adults are in the facial recognition networks used by law enforcement across the U.S. Police departments are testing this technology on DMV photos and on surveillance camera footage in real time. Via the authors of the report:
Of the 52 agencies that we found to use (or have used) face recognition, we found only one, the Ohio Bureau of Criminal Investigation, whose face recognition use policy expressly prohibits its officers from using face recognition to track individuals engaging in political, religious, or other protected free speech.
To remedy that, Georgetown Law proposes a model state law and police department policy defining the conditions under which this technology can be used. It includes obtaining consent and makes “reasonable suspicion” a prerequisite for running a search.
Protecting internet privacy
In 2017, Congress voted to do away with Obama-era internet privacy regulations, allowing service providers like Comcast or Verizon to store search histories without consent. That means these private entities have extremely detailed portraits of their customers’ lives—their health issues, financial situations, and potentially, immigration status. “It's as if your most private papers are now just out on the market for anybody to buy or sell,” said Byrum, from New America. According to advocates, this move has opened up immigrants and poor people of color to various negative outcomes—predatory advertising and political maneuvering, for example, or even immigration enforcement.
Several states have introduced legislation addressing these concerns. Cities like New York and Seattle are also trying to do their part by educating vulnerable communities about the risks and by specifying privacy rules in cases where they’re working with private internet service providers in the public realm. “The broader point here is that the Trump administration, when it comes to immigration enforcement, has been relying upon private companies to build the tools that they need,” said Renderos, from the Center for Media Justice. “We need to be careful about what private companies are allowed and not allowed to do.”
With the advent of new technology that promises to make cities smarter and local governments more efficient, these types of laws are meant to ensure that vulnerable communities do not become the spoils of this progress—but are instead protected and included in the digital urban realm.
“Technology is very much discussed in the context of solutions…introduced as this silver bullet,” said Renderos. “But without civil rights protection, the bias and racism that exist within the world tend to be baked into the technology that we use.”
Source: CityLab
This article is culled from daily press coverage from around the world. It is posted on the Urban Gateway by way of keeping all users informed about matters of interest. The opinion expressed in this article is that of the author and in no way reflects the opinion of UN-Habitat.