Are city governments prepared to address the potential impacts and very real risks of technology with surveillance implications? Few municipalities have meaningful policies in place to address them.
In this, our third and final episode of City Surveillance Watch, we assess the current state of government policy and law for surveillance tech (hint: there isn’t much), and we offer practical tools and advice. Find out how municipalities that are ahead of the curve are crafting surveillance tech policy, and get guidance for how cities – even those with few resources – can upgrade traditional tech procurement processes to ensure a solid footing to contend with rapidly evolving tech use and its potential impact on communities.
Find the entire City Surveillance Watch series on Apple Podcasts, Spotify, Stitcher or wherever you listen to podcasts.
TRANSCRIPT:
Sources featured in this episode, in order of appearance:
- Jay H. Banks, New Orleans City Councilmember (District B)
- Brian Hofer, chairman, City of Oakland (CA) Privacy Advisory Commission
- Kelsey Finch, senior counsel, Future of Privacy Forum
- Sen. Jeff Merkley, D-OR
- Deirdre Mulligan, professor, UC Berkeley School of Information
- Capt. Tony Jones, City of Oakland (CA) Police Department
- Ross Bourgeois, New Orleans Real Time Crime Center administrator
- Jameson Spivack, policy associate, Center on Privacy & Technology at Georgetown Law
- Ginger Armbruster, chief privacy officer, City of Seattle
- Hector Dominguez, open data coordinator, Smart City PDX
JAY H. BANKS
I want this as tight as beeswax, because we know that it is extremely detrimental to families and individuals when this stuff goes off the rails.
KATE KAYE
New Orleans City Councilmember Jay H. Banks and his colleagues were in the midst of a remote city council meeting in December, over Zoom. Banks, sharply dressed in a light grey suit, joined from a room adorned with photos, paintings and framed certificates. Another member Zoomed in from his car.
The topic at hand? A ban on police department use of select forms of surveillance technology. Banks and other city councilmembers discussed a possible city ordinance that would block use of facial recognition, cellphone-tracking stingrays, predictive policing software and biometric and characteristic tracking tech.
BANKS
We’ve got a timebomb ticking throughout this country with people being picked up for stuff that they didn’t do, and New Orleans has the opportunity now to try to put bumpers on that.
KAYE
There was back-and-forth between councilmembers and the New Orleans Superintendent of Police during the meeting. Then, after months of deferment on a vote on the ordinance, the New Orleans City Council – including Banks – voted to pass it.
NOLA’s surveillance ordinance, passed December 17th, came on the heels of another surveillance ordinance passed the previous month, more than a thousand miles away in San Diego. That one was more comprehensive and also established a privacy advisory board. But could these recent surveillance tech laws spark a growing surveillance tech legislation trend?
Brian Hofer, chair of Oakland’s Privacy Advisory Commission – the body that inspired San Diego’s new privacy board – says yes, and he thinks San Diego’s politics have something to do with it.
Though its political demographics have shifted, San Diego is home to several military bases, and was a Republican stronghold for decades in presidential races until 2008. And though the current mayor is a Democrat, the city has elected Republicans to that office as recently as 2016.
So, Hofer says San Diego’s decision to establish policies for acquisition and use of surveillance tech could have a ripple effect by convincing agencies that surveillance tech oversight isn’t just for liberal governments.
Here’s Hofer.
BRIAN HOFER
It proves Oakland is not an anomaly. You know, I think a lot of our wins in Oakland, it’s like, well you’re in Oakland, you’re supposed to do that. You guys are the far lefty radicals – you’re supposed to do these things. So it kind of gets downplayed a little bit. San Diego’s the eighth-largest city in the country, a military border city, and it’s a game-changer.
You know, when it’s just an Oakland thing, it’s a hard sell. Now that San Diego’s on board, it’s a much easier proposal.
KAYE
While some police or other city agencies have piecemeal privacy and usage policies for individual types of tech, few municipalities in the U.S. or across the globe have robust policies addressing citywide use of surveillance technologies.
Most don’t have any sort of overarching privacy policies either.
I’m Kate Kaye. In our first two episodes of City Surveillance Watch, a limited podcast series from Smart Cities Dive, we discussed the issues and risks surrounding surveillance tech, and dug into stories from across the country that illuminate those issues – including the expanding use of a connected camera network in New Orleans – which, by the way, is not affected by the city’s new surveillance tech ordinance.
In this, the final episode of City Surveillance Watch, we’re getting practical. We’re talkin’ policy and providing real guidance cities can use to address surveillance tech and data use in meaningful ways.
[Sound - Intro theme]
KELSEY FINCH
While we do see a handful of cities really paving the way and getting mature on these issues, setting a really positive example, I think it’s still the case that most cities in the U.S. aren’t quite there, and particularly in places that are less urban or less wealthy. So, hopefully they’re getting examples to look to, but it does take a lot of time and resources.
KAYE
Kelsey Finch has her ear to the ground on city data privacy and surveillance policy development. She's senior counsel at the Future of Privacy Forum, where she helps coordinate the nonprofit group’s Civic Data Privacy Leaders Network – a peer network for local government staff and their partners to pool knowledge and resources for developing fair and transparent data policies and practices.
FINCH
We are still seeing that it is more the exception than the norm to have these sort of robust surveillance approaches.
KAYE
So, before we delve into things like how cities evaluate the efficacy of technology, policy for pandemic tech, policy for privately-funded surveillance tech use and practical steps for city surveillance policy, let’s get the lay of the legal land.
What’s out there? Well, at least 18 U.S. jurisdictions have adopted surveillance technology policies, many of which use model legislation drafted by the ACLU as their foundation.
The ACLU’s model legislation — it’s called CCOPS, short for Community Control Over Police Surveillance — it’s focused on law enforcement use of surveillance tech. It calls for city councils to have approval rights over surveillance tech use and for cities to conduct surveillance impact reviews.
Today, in most places, city councils don’t always have control over surveillance tech purchases by police. The ACLU’s model also requires cities to publish annual surveillance tech reports and create community engagement programs when considering surveillance tech use.
Eighteen municipalities with surveillance tech policies, in a country with 19,000 local municipalities? It’s not a lot. When Smart Cities Dive asked our Twitter followers whether their municipalities have a specific policy related to the evaluation and use of surveillance tech, just 15% said yes, 75% said no and 10% said they were currently working on it.
And sure, I mean, it was a small sample, not exactly scientific — but the poll results seemed to reflect the fact that municipal governments lack policy for these technologies despite their growing interest in using them.
Some U.S. cities, of course, have banned facial recognition use specifically. The nonprofit Fight for the Future lists 13 cities, many in California and Massachusetts, that have done so.
And surveillance tech policy isn’t much more robust or comprehensive at the state and federal level either. As with cities, the state surveillance laws that do exist focus largely on police use of facial recognition. California, New Hampshire and Oregon prohibit the use of face recognition in connection with police body-worn cameras.
Vermont in October passed a law banning police use of facial recognition. Illinois, Texas and Washington have laws limiting commercial collection and sales of biometric data – that’s the type of information gathered by facial recognition or tech that identifies people by, say, the gait of their walk or through iris recognition.
Meanwhile, California’s privacy law gives people some rights over the use and sale of their biometric data.
There are some statewide license plate reader-related laws, too. According to the National Conference of State Legislatures, at least 16 states have statutes that expressly address the use of LPRs and LPR data retention.
And several bills addressing surveillance tech are floating around state legislatures, yet to pass.
Ok, what about the federal level? Although existing federal privacy laws such as the Health Insurance Portability and Accountability Act (HIPAA) and the Children's Online Privacy Protection Act (COPPA) do address biometrics as a form of personal information, there isn’t much on the books directly addressing the use of surveillance tech.
The U.S. Department of Justice does have privacy and data handling requirements for stingrays – those devices that mimic cellphone towers. Federal law enforcement agencies as well as state and local agencies working with the federal government are required to obtain a search warrant to use them.
And while the Federal Aviation Administration has lots of rules for drone use, it hasn’t done much to address data use or privacy in relation to them. In fact, groups including the Electronic Privacy Information Center have criticized the FAA for its failure to take action to limit the use of drones for surveillance or establish rules that consider the privacy and civil liberties implications of drone use by law enforcement and private drone operators.
There aren’t any laws on the books at the federal level addressing license plate readers or facial recognition either. This is despite the fact that the Federal Bureau of Investigation and Immigration and Customs Enforcement are among the federal law enforcement agencies that have used LPRs and facial recognition, or data collected by these technologies.
But federal laws could be coming. Heightened concerns about a surveillance state and awareness of the disparate impacts that facial recognition inaccuracies can have on people of color have prompted a handful of bills in Congress curbing use of facial recognition and biometric tech.
One lawmaker who has been especially active in introducing or sponsoring federal bills that would curb surveillance tech use is Senator Jeff Merkley. The Oregon Democrat co-sponsored three bills in 2020 that would restrict the use of facial recognition and other biometric tech.
I happened to talk with Senator Merkley in July 2020 for Banned in PDX, a podcast series I produced about Portland’s facial recognition ban.
Merkley told me then that he wants to limit the use of facial recognition and other biometric technologies because he’s concerned they invade data privacy and enable racial discrimination. But there are other risks with even broader implications, Merkley told me at the time.
SEN. JEFF MERKLEY
We need to look at several of these emerging technologies. I have been most focused on facial recognition because it is here and now in ways that are very concerning, but then also just — when is it appropriate to have such a database at all to begin with? Because if you have that database, are you essentially, even if it’s in the public realm alone, are you setting up a surveillance state where Americans are tracked everywhere they go?
KAYE
Merkley says there’s bipartisan support for federal laws limiting use of facial recognition and biometric tech, both by public and private entities.
MERKLEY
But I can tell you there are also a lot of Democrats and Republicans that share a concern about an all-powerful federal government tracking us everywhere we go. So there is bipartisan support, not just on the criminal justice side, but also certainly on the side of how much power should the federal government have in its creating this surveillance state. And you can have that happen through private companies as well. So, it’s this concern about this Big Brother surveillance state both through private side and the public side.
KAYE
Still, passage of federal legislation addressing surveillance technology may not come soon. After all, the U.S. has yet to pass any comprehensive law addressing personal data privacy, despite years of efforts by lawmakers to do so. Plus – there’s the pandemic to contend with.
As governments at every level have grappled with the COVID-19 pandemic, they’ve encountered a whole new set of technologies and policy issues to navigate.
Kelsey Finch says the pandemic has been a proving ground for the value of a proactive policy approach. She says most municipalities in the U.S. have been caught off guard, attempting to evaluate emerging tech developed in response to the pandemic — such as contact tracing apps or body temperature scanners or face mask detection systems — without foundational data privacy, data management or surveillance tech policies in place.
Here’s Finch.
FINCH
Having these sorts of more formal systems are actually really helpful to enable cities to address situations like the pandemic. So, I think we’ve seen throughout the pandemic that, you know, countries that had clear privacy regulatory structures in place for data governance and data protection were able to move much more quickly to develop things like tracking dashboards and exposure notification apps than we were in the U.S., where we don’t have those clear rules for the road.
And I know that it was especially chaotic in March or April of this year as our cities’ leaders were being just inundated with unproven technology "solutions" – and solutions in air-quotes there – and those that had transparent and trustworthy review systems that they were familiar with and that they used before were able to make much more informed choices than those who were navigating these really significant trust and privacy issues on the fly.
So I think that, you know, it does take some time and effort to be proactive about these things, but they pay off pretty quickly in situations where you want to be able to use data and you need public trust to be able to achieve your mission. The absence of these sorts of review processes can actually hold you back.
KAYE
Finch had to look far afield to point to a place that has gotten pandemic tech policy right: New Zealand. She says the country employed existing data privacy and transparency policies to take a principled approach to tech evaluation and establish what she calls a “trust infrastructure.”
FINCH
My shining example is actually New Zealand, which I think has done one of the more incredible responses to the pandemic that we’ve seen globally.
You know, they are a really great example of the way in which privacy and surveillance infrastructure is also trust infrastructure. And I know that one of the things that New Zealand, and in fact the folks who are the privacy leads at the City of Wellington and other places within New Zealand, they really credit their ability to eradicate the disease to the strong sense of public trust in their government, and that trust particularly that their government will use the tools and the data available in a responsible and transparent and accountable manner.
You know, so, I think ... it’s not surprising that they’ve been publishing an updated, very detailed privacy impact assessment on their national contact tracing applications since May that’s been updated four or five times as the technology grows and evolves, and it’s always made available to the public in a way that really does help people understand what the tradeoffs and choices are in the technology systems, what the particular risks are and how they’ve been mitigated, and so that people can make their own informed choices and don’t have to just, you know, believe in the best intentions of their government. They can be able to have, can audit that, can prove that and can check that for themselves.
KAYE
Public health and tech evaluations aside, the pandemic has exacerbated another overwhelming issue many municipalities were already struggling with before COVID-19: finances.
As discussed earlier in this series, cost is an important factor when cities evaluate tech. Though they may not always track the effectiveness of technologies through quantitative data analysis, they certainly look at the bill.
But there’s something that could reduce the cost of tech for cities: data. In fact, data could become like a currency fueling a new barter system for surveillance tech, says Oakland’s Brian Hofer.
HOFER
And obviously the longer the pandemic goes on, our tax base is more and more impacted. I expect in 2021 we’re going to see a flurry of solicitation from vendors that will front all the upfront cost. I think they’ll give us this stuff for free in exchange for our data. I think that’s the proposal that’s coming and as we alluded to earlier, I think here in the Bay Area where, like a lot of people we fall for the next new shiny gadget, I think they’re going to come out and say, hey, we’ll give you all the shiny gadgets you want for free; we just want to go monetize your data.
KAYE
As we see online every day, where companies collect consumer data in exchange for free content and services, data holds immense value. The same goes for firms selling tech to city governments. Tech firms might use it to feed and train and improve their algorithms. Or they might create entirely new revenue streams using aggregated data generated by people within city environments to produce analysis and reports sold to real estate developers, insurance firms or even advertisers.
Already, Oakland’s contract with one surveillance tech firm, ShotSpotter, gives the company ownership of the data. If we dig into that a little bit, we can understand how cities can benefit from having policies in place for data collection and use by tech vendors, and from requiring regular reviews to evaluate tech, its costs and its effectiveness.
So, ShotSpotter makes sensors and cloud-based subscription software that detect the sound of gunfire. The company locates where shots are heard, then alerts law enforcement agencies and security personnel in real time.
So, just what type of data is it collecting from Oakland and the more than 100 other U.S. cities that use ShotSpotter? Data generated by the system includes precise lat-long locations and corresponding addresses where shots are detected, along with information like the number of rounds fired and the type of gunfire.
This can be some very revealing information. Over time it can show patterns of gunfire near a particular home address or in a specific neighborhood, on what days of the week or at what times it tends to happen, and whether or not police responded to an incident.
ShotSpotter declined to make anyone available for an interview. The company’s 2018 marketing materials say – quote – “This information is key to better protecting officers by providing them with increased tactical awareness. It also enables law enforcement agencies to better connect with their communities and bolsters their mission to protect and serve.”
As with many tech firms, monetizing data is at the heart of ShotSpotter’s growth plans. ShotSpotter CEO Ralph Clark said as much during the firm’s third quarter 2020 earnings conference call. Here's a bit of audio from that call.
[Sound clip: Ralph Clark speaking in a ShotSpotter earnings call]
I would say that all our franchise is really about taking data and translating that data into actionable processes. ... Even in the face of a pandemic and defund the police, etc. The police departments really do rely on us and trust us. So, we’re in a great pole position we believe to bring kind of new capabilities to bear.
KAYE
Sometimes cities don’t recognize the value of data they’re handing over to tech vendors, says Deirdre Mulligan, professor at the School of Information at UC Berkeley.
DEIRDRE MULLIGAN
And the vendors are often not all that forthcoming about what data their systems collect, and sometimes the people who come in to sell technology to cities, they are not – they’re the salespeople. They’re not the engineers, they’re not the, you know, the data warehouse people, they’re not the data analytics people, so they actually don’t always have a good idea of what information the system is actually collecting either explicitly or kind of, like, passively.
So it becomes a really important opportunity for the city to learn what sort of information is being captured and also, I think, to get an understanding about what it is that they’re giving, right, often to these vendors. Because what’ll happen is a new infrastructure will be put in place in the city, and all the city will end up getting back is reports. They actually don’t get the data about movement patterns in their own city or something, and so in some ways they end up kind of becoming hostage to companies that now know more about the urban infrastructure than the city may because they have the raw data — or the data — and the city is just left with a set of reports.
KAYE
ShotSpotter clients pay per square mile covered by its sensors. Company spokespeople told me the company does not disclose its fees, but Clark mentioned during that earnings call that the company recently raised prices, in some cases from $65,000 to $70,000 per square mile, per year.
Hofer, in Oakland, says he recognizes some benefits of ShotSpotter such as getting police to crime scenes, identifying witnesses or rushing people to the hospital faster. But he isn’t so sure it’s worth the money.
Still, because Oakland’s surveillance ordinance calls for annual reviews of technologies like ShotSpotter, the city actually has an opportunity when the review comes up later this year to evaluate the data ownership agreement, as well as the system’s cost-effectiveness.
Here’s Hofer.
HOFER
The ShotSpotter contract, they own the data. That’s something we’re going to push on when that contract comes back up for renewal. It is still totally unproven as to the efficacy, and ShotSpotter is not cheap. We spend millions of dollars in Oakland, which is a lot of money in Oakland, for this technology.
The annual report says, you’ve got to justify this continued use. To me that’s the real issue, is, does this technology work? We’re going to have to start saying no at some point to some of this technology that’s just not worth the money.
KAYE
The Oakland Police Department will include some measurement data about ShotSpotter in its upcoming report to the privacy commission. That’s according to Tony Jones, the captain of the OPD’s ceasefire division. He points to an example in which he says officers responded to 96 shooting victims despite the fact there were no 911 calls made related to those incidents.
Here’s Captain Jones.
TONY JONES
The fact that police can get to people and get them help – almost 100 people – when no one even knew that they had been shot, except the person that did it obviously, that’s the strongest data point going that justifies why we need something like that.
KAYE
Jones told me sometimes the impact of the technology is tough to gauge. Say, for instance, ShotSpotter data shows that gunfire just occurred in an area known for conflict among specific groups. In the past, that sort of information has prompted the department to deploy officers to strongholds of rival groups in the hopes of preventing retaliatory attacks. It’s not exactly something easily quantified. Or sometimes the technology has been used in place of traditional police interactions that have been considered discriminatory.
Here’s Jones.
JONES
If we have technologies that can take us right to where these gunshots occurred at, then our cops don’t have to drive up to a corner full of teenagers, who in many instances will be minority teenagers because the gunfire is concentrated in those areas, and asking them, “Hey, have you heard any shots?” And those kind of contacts can lead to things we just don’t want, right? They can be viewed as accusatory. It’s not — we view them as negative contacts. And so to avoid any potential profiling, or officers are just stopping people because they’re standing, you know, a hundred yards away from where some shots could have been, it takes the guess work out of locating these scenes where gunfire is being found.
KAYE
In many cases, it’s law enforcement that employs surveillance tech, and often cities give them control over tech procurement and review. So, what are some other ways law enforcement agencies measure the value of these technologies?
Let’s take a look.
Police departments are strapped for cash. Budgets have been cut amid the pandemic’s recession, and in some cities, as a result of pressure to defund law enforcement.
While technologies like ShotSpotter are relatively expensive, as I’ve mentioned in earlier episodes, a lot of surveillance tech is decreasing in cost as it moves into the realm of subscription-based Software-as-a-Service with data stored in the cloud.
Time is money, they say. And, sometimes these technologies are marketed and perceived as time-savers, which matters in cities struggling through a budget crisis. Proponents of facial recognition systems, for example, they often point to the hours and hours of time police officers save using them. Time that would have been spent paging through mugshot photos to identify suspects.
So, police departments sometimes use time savings as a metric for assessing the value of surveillance tech investments. If technology can help get to a suspect faster, that’s a big selling point for them. Remember that Kansas City Police Department video mentioned in the second episode of City Surveillance Watch, and how it trumpeted the use of surveillance camera footage and license plate readers to identify a murder suspect in a matter of hours?
[Sound: KCPD video clip]
Plus, police labor — it costs money. With cities having to pay shrinking police forces overtime, that’s an important factor.
When I spoke with Ross Bourgeois, the administrator of the New Orleans Real Time Crime Center, for our second episode, he told me the multi-million-dollar program – the hundreds of connected cameras feeding data into it, the monitoring center’s computer equipment, its analytics and video management software, its data storage – it has saved law enforcement thousands of hours of time.
Here’s Bourgeois.
ROSS BOURGEOIS
If not for the Real Time Crime Center, on a shooting investigation a detective would have to go out and canvass the neighborhood and knock on people’s doors and ask them to provide video footage if they had it, or go to businesses and ask for that video footage – potentially you have to make several trips because at a business, the person who’s there on the weekends might not have access to the footage, you have to go back.
So, if you – for every time we provide relevant evidentiary video, we conservatively estimate that we've saved our law enforcement partners about an hour and a half. So, every time we’ve provided video to our law enforcement or public safety partners, we’ve been able to save an hour and a half of time. And when you multiply that out by the number of cases that we handle in any given month, we’re well over 10,000 cases – or 10,000 hours.
KAYE
While some police departments might work with outside organizations, such as the Urban Institute, to evaluate the cost-effectiveness of surveillance program investments, quantifying the impact of law enforcement tech procurement through cost-benefit analysis is not a standard practice, say experts.
Brian Hofer offered a little anecdote to illustrate this. He said when he went to an event at NYU with surveillance tech vendors and police chiefs in attendance, he told them that municipalities in California are forcing tech vendors and law enforcement to demonstrate efficacy through cost-benefit analysis.
He says some police were shocked. But before he tells that story, let’s take a quick break.
[AD: Sign up for Smart Cities Dive's daily newsletter]
Here’s Hofer.
HOFER
All these chiefs were talking about how they never had to do a cost-benefit analysis and nobody ever asked them, and so I spoke up and I said, "Well that’s why in California we’re mandating it" — and the whole room just went cold on me [laughs]. It’s like, I’m never getting invited back, they kind of started beating up on me a little bit. And I’m like, you guys never did it because you never had to justify yourselves. And I said, "now we’re forcing you to demonstrate efficacy."
Some of this stuff is not the big civil liberties concern that, you know, some of the conspiracy theory guys make it out to be. It just doesn’t work, you know. It just doesn’t work. There’s a lot of marketing hype out there and we fall for it because I’d much rather go buy, you know, software-as-a-subscription for my $200 a month than have to invest, you know, half-a-million-dollars from the time we recruit the cadet, get him through the academy which is very expensive and then hire and retain him at the benefits that we have to pay in the very expensive Bay Area.
It’s a ton of money to try to train humans and hire them, trying to get community resource officers, community policing. Just asking police to walk a beat again, and get out of their vehicles, actually reestablish relationships is a big problem. So it’s just like, you know, so the easy route, let’s just go buy this piece of technology; they said it’s going to solve all our problems. It’s just not happening.
KAYE
The promise of tech that helps police do more with less is appealing to city government, says Jameson Spivack. He’s a policy associate with the Center on Privacy & Technology at Georgetown Law.
JAMESON SPIVACK
Regardless of how much money law enforcement is getting, their budgets have shrunk since the recession. So if you combine shrinking budgets and increased responsibilities, they’re going to rely on these kinds of technologies.
So you know, I think that now that more people are having the conversation about the effects on civil liberties, on civil rights, on privacy, on equity and racial justice — now that these conversations are happening more, I think that there is more pressure on them to consider these things, whether that’s because of a law that’s being passed or policies that the departments have written up, or there’s just informal pressure. I think they’re considering it more, but absolutely, being able to do more with less is a huge, is a huge driver of whether they take up these tools.
KAYE
So, you might say there’s some irony here. Law enforcement budgets are decreasing, sometimes in response to pressures to defund the police in the hopes of ending police violence against BIPOC communities. But to fill the gaps and save money, city governments and law enforcement are enticed by promises of time savings and improved public safety enabled through surveillance tech, which could have disparate impacts on the very people the police defund movement aims to help.
The pressure to ensure safe environments and create more efficiently run localities – all as government coffers run dry – means cities might look elsewhere to fund surveillance tech: federal grants, police foundations, public-private partnerships and other private money. (There were a lot of examples of that in episode two, if you haven’t listened to it.)
But as police and governments seek funding from outside government budgets, the democratic processes that once determined tech and data decisions slip away. Government officials and the people who elect them have less oversight of how these technologies are deployed, who can access the information these technologies produce and how that information is managed.
Here’s Deirdre Mulligan.
MULLIGAN
It would be a poor result if what happened is basically all the infrastructure got built up in the private sector as a way to kind of route around some of the privacy and equity concerns that cities are trying to address. So, I think it’s important to have – to make sure that the breadth of review covers the concerns. So the concerns have to follow the data.
KAYE
Tech firms doing business with governments often require strict non-disclosure agreements (NDAs). They prevent cities from revealing all sorts of details about how tech was built, how it works, data ownership and more. Plus, some cities with less tech-savvy staff don’t bother to inspect these details to begin with.
If you want an idea of how NDAs can have a real impact, just file a public records request with your hometown. Ask to see the technical specification documents associated with a piece of software the government uses. Or better yet, ask to see the algorithm for a piece of automated tech, and see what they tell you. A lot of times, if you get any information at all, those NDAs mean it’s likely to be heavily redacted.
When Mailyn Fidler, a research affiliate at the Berkman Klein Center for Internet and Society at Harvard Law School, evaluated 14 city surveillance tech policies, she found that fewer than half included any provisions addressing the use of NDAs. She suggested that failing to even address NDAs is troubling, because the agreements can be used to shield technologies from scrutiny.
Overall, despite the risk of local governments losing control over tech and data use, policymakers and legal experts interviewed for this series suggest municipal governments are far from figuring out how to address policy for public-private surveillance tech partnerships or other private surveillance tech affecting the public.
Just take a look at the facial recognition bans that have been established in cities across the country. A reason only one of them limits use in privately-owned places? Regulating private entities can be a challenge for municipalities.
To provide a solid legal framework for its ordinance banning the use of facial recognition in privately-owned places accessible to the public, the drafters in Portland, OR had to look to federal and state civil rights and anti-discrimination law. The Americans with Disabilities Act and Oregon state law prohibiting discrimination in places of public accommodation provided some legal foundation to work from.
On top of that, drafters actually created a new digital justice designation in Portland’s city code to connect the city’s commitment to non-discrimination to its decision to ban facial recognition.
The code says that, “Face Recognition Technologies have been shown to falsely identify women and People of Color on a routine basis” and, “Portland’s commitment to equity means that we prioritize the safety and well-being of communities of color and other marginalized and vulnerable community members.”
The logical conclusion embedded in that digital justice concept: Don’t use tech found to discriminate against communities of color or other marginalized and vulnerable groups.
But despite efforts to ensure facial recognition cannot be used anywhere within Portland, there are limits: while a place of business accessible to the public, like a convenience store or an office building lobby, is covered, the city is not able to outlaw facial recognition use inside a private workplace such as a private office or, say, a factory or shipping distribution warehouse.
With all the legal hoops to jump through, maybe it’s not surprising it took Portland more than a year to develop and pass its facial recognition ban.
Navigating tech policy for private entities is a tricky balance, according to Brian Hofer. He says some California jurisdictions have surveillance policies in place that apply if police want to access privately-owned Amazon Ring camera data, for instance. But when surveillance tech is contained entirely within a private setting – say, a homeowners association – he hasn’t found a legal mechanism to regulate it without being too heavy-handed.
Here’s Hofer.
HOFER
And it is extremely problematic for good reasons to regulate private-party behavior.
One idea I’ve been exploring, and so I can’t give you a conclusion or a great solution right now, but can we regulate private-party behavior, without again being too draconian, via licensing – permitting? You know, maybe you have to get a business permit. If you’re going to use a certain type of surveillance equipment, there will be conditions for use. That way, you know, you’re already getting a business permit when you open a business. An HOA, these other guys are getting licenses, so maybe we can regulate it that way without causing other unintended consequences.
KAYE
Finch has some practical advice for cities interested in developing meaningful tech and data use policies of their own. She and others say cities can start small with few resources and build up to more comprehensive programs.
The first step she recommends? Establish a set of privacy principles.
FINCH
The first step that I always recommend is to start with privacy principles. I think it’s a really great opportunity to bring everybody in your city together who’s got an idea or who might have strong feelings about where you are and where you’re going and about what the vision and the values for the city are.
KAYE
Finch says she’s counted 14 cities in the U.S. that have established privacy principles. Others that haven’t established their own have signed on to data privacy principles through group efforts, such as the New York City-led principles for smart city IoT technologies.
More than a year before Portland passed its facial recognition ban, city council members there unanimously passed a privacy resolution. Not only did those principles lay the groundwork for the ban, they formed the foundation of the city’s future work around tech and data policy, which eventually could include surveillance or biometrics tech policy.
And to be perfectly clear, we’re not talking about privacy policies buried in fine print on a city’s website. These sorts of resolutions and principles tend to be broad-reaching, covering how cities approach their use of data and its impacts from a privacy, security and equity perspective.
Portland’s privacy principles, for example, focus on key themes: equity, transparency, accountability, non-discrimination.
As another example, when Minneapolis established its data privacy principles, the city addressed data accuracy, allowing people to correct inaccurate data about themselves. Minneapolis also said it would encourage its tech vendors and service partners to “protect data on individuals and uphold the spirit of these principles.”
Finch…
FINCH
It doesn’t have to be really detailed. These are high-level, you know, statements that your city can live up to now and in the future so you don’t have to worry about revisiting them every time the technology changes.
KAYE
Once privacy principles are in place, Finch advises that cities move on toward the next step: surveillance impact reviews.
The City of Seattle took that multi-step approach. First it passed privacy principles in 2015. Now, the city is in the midst of conducting detailed surveillance impact reviews to assess the list of surveillance tech mentioned in the first episode – things used by its transportation, lighting, fire and police departments, such as license plate readers for parking enforcement and police helicopters used to locate crime suspects.
Seattle’s surveillance impact reviews, or SIRs for short, require agencies to respond to several questions about how they use a specific technology, how data is collected and who can access it.
They also must answer questions about the impacts of the tech on racial equity. For instance, how do decisions around data storage and retention have the potential for disparate impact on historically targeted communities? And, what is the department doing to mitigate those risks?
Ginger Armbruster, chief privacy officer of the City of Seattle, says her team manages the SIR process and works closely with city agencies.
GINGER ARMBRUSTER
Our job in the privacy office has been to work with departments, and say, OK, we have technology X. We will work with you to make sure the responses are accurate and complete. Wherever you have policy, reference your policy, and then we’ll start working on public engagement. So, we project-manage the SIR, but it’s written by the department, they’re the ones with the knowledge, the subject matter expertise.
KAYE
And it can be a long slog. Armbruster’s team helps department staff complete each SIR draft, which is made available for public comment. It goes through a fiscal analysis. Next, an external working group takes a crack at it, offering comments. There’s back-and-forth between that group and the city’s chief technology officer, says Armbruster.
And then…
ARMBRUSTER
And then we bundle it all together and we send it off to council. It takes somewhere in the eight- to nine-month frame from the time we start to the time we’re done.
KAYE
So, yeah – Seattle’s is a lengthy process. Following public engagement, once the SIR is complete, use of the technology is put up to a full Seattle City Council vote.
Both Seattle and Oakland also require agencies to conduct annual reviews of surveillance tech.
Here’s Brian Hofer from Oakland.
HOFER
Every single year for every approved technology, staff has to come back with an annual report and demonstrate how they use it. So we can look at disparate impact or policy amendments that need to get made to either better protect civil liberties or maybe to achieve public safety goals.
KAYE
UC Berkeley’s Deirdre Mulligan suggests that surveillance impact reviews can help cities ensure data generated by technologies doesn’t unintentionally creep from one agency to another.
MULLIGAN
Often when a city department is looking at a task, they don’t have a threat model in mind. They have a use case in mind, right? And the privacy impact assessments or the surveillance impact assessments, often what they bring in is a little bit of a threat model. They say, oh, we understand that you want to use it for this purpose. Let’s make sure that the data you’re collecting is really useful for that purpose if we think it’s legitimate and doesn’t have any other, kind of, potential either negative effects even within your use case, right? Because a lot of the ordinances also think about disparate impact sort of issues.
But more importantly that it doesn’t lead to some unintended uses just because you haven’t thought about somebody, whether it’s a private party or another government agency, wanting to piggyback on your system in a way that might ultimately end up undermining public confidence in your system.
KAYE
So, can municipalities with limited resources build the sorts of comprehensive surveillance tech policy programs that cities like Oakland and Seattle have in place? The people who do this work say they can — and there are lots of resources out there to help them.
Cities don’t have to reinvent the wheel. Kelsey Finch from Future of Privacy Forum says municipalities can look to existing models for privacy principles and privacy and surveillance impact assessments.
There’s that ACLU model legislation mentioned earlier, for one thing. But Finch says cities also might look to federal agency models for privacy impact assessments under the Privacy Act – or even look to the private sector, where corporations have been devising policies in response to Europe’s General Data Protection Regulation and California’s recently updated privacy law.
Finch…
FINCH
And it might start small with one person or half of a full-time person to get you started on these issues but then it can grow pretty rapidly as cities recognize how essential tackling these questions are in this new, connected, digital world and economy that we live in. So, start with principles, start to incorporate impact assessments and then build towards a more comprehensive privacy program as your time and resources allow. Those are my, sort of, three-step program.
KAYE
Municipalities might also seek assistance from local academia, says Finch.
FINCH
If you’re looking for other places to get resources, we’ve seen a lot of success of cities partnering with local academic institutions actually. So, universities who might have graduate students, or law programs or public policy graduate systems who can provide a lot of the advice and expertise, or will have a semester or a year to run down some of these more complicated issues and help get you set up for success.
KAYE
The City of Portland has Smart City PDX – a group of three people who work with other agency staff, including people from the city’s Office of Equity and Human Rights and city attorneys, to develop tech and data use policies.
For various projects, the group has brought in outside experts from local academia and community groups and sought guidance from existing programs in cities such as Seattle. And now, as Portland develops future surveillance tech policy, they hope to include other local jurisdictions through the city’s office of government relations.
Hector Dominguez is open data coordinator at Smart City PDX. He calls it crowdsourcing.
HECTOR DOMINGUEZ
We are trying to literally crowdsource a lot of that effort. So, our team, our smart cities team is relatively small, you know – we are only three people in our smart cities team. We cannot do everything. We help others to do something, and we are learning from others as well.
Something similar we are trying now and building up with other jurisdictions for instance, we consider partners in all this. We are doing that through our office of community, our office of government relations. Right now, that work, we are still developing the strategy of how to reach out to different partners.
KAYE
But with government agencies knee-deep in COVID-19 response, Dominguez says it hasn’t been easy to get other local and regional governments to devote much attention to surveillance tech policy issues.
DOMINGUEZ
It’s very clear to us that we are not getting the attention that we would be getting, because of COVID-19. Everybody has different priorities right now. This is something extra. And looks great on paper but in practice, yeah, they are not ready. But at least we are trying to coordinate that with our office of government relations so we can do that in a better, more structured way.
KAYE
No matter what approach cities take – whether with paid or volunteer staff – experts say it’s important for cities to make tech policy discussions public and give the public meaningful input into the policy writing.
As a journalist observing the world of city surveillance tech use and policy, I will say there are people immersed in this subject matter who are open to collaboration and want to work with others to share resources and knowledge.
Finch, for example, invites people working on municipal tech and data policy to check out the Civic Data Privacy Leaders Network, the peer network operated by the Future of Privacy Forum that she helps oversee.
It’s clear that traditional city government tech procurement policies simply do not provide a solid enough footing for cities to contend with rapidly evolving tech use and its potential impact on their communities.
Now is the time to start constructing a policy foundation to replace legacy practices, says Finch.
FINCH
In the meantime the lights need to stay on and the trash trucks need to be delivered and it’s difficult to achieve all these things at once, but it does seem as we [in] this increasingly digitized connected world, that it’s ever more important to wrap our arms around it now before it gets even more connected and even greater potential for surveillance grade.
Ideally this is not something that you can throw just a little bit of attention to and then forget about it. It’s something that’s going to continue to need cultivation and resources to be sustainable, but I think it really will pay off quite quickly.
KAYE
Thank you for listening to City Surveillance Watch, a limited podcast series from Smart Cities Dive.
City Surveillance Watch was reported and produced by yours truly, Kate Kaye. This was our final episode.
I’d like to give a special thanks to Renard Bridgewater from New Orleans, aka Slangston Hughes, who provided a couple of his tracks we used in previous episodes.
If you haven’t heard it, our first episode provides an overview of surveillance tech issues and unintended consequences, and pokes at the gray areas that lie between smart city tech and excessive surveillance. Our second episode takes listeners on a journey across the country, exploring how private funding is driving surveillance tech proliferation and how surveillance tech use affects real people in places like New Orleans, Detroit, Eugene, OR and even a small community in Tennessee.
There’s a lot packed into City Surveillance Watch. The idea is to learn from it. That’s why we’ve made it easy to go back and revisit the information presented in this series. You’ll find complete transcripts of all three episodes online at SmartCitiesDive.com. And there’s an extensive archive of links to articles and other resources referenced throughout the series.
Thanks again for listening.