Artificial intelligence stole the show at this year’s Smart City Expo USA, held in New York City last week. “You can't swing a dead cat without bumping into someone talking about AI,” Justin Tormey, a director at the tech-focused consulting firm Slalom, said during one panel.
Here are some key takeaways from conference speakers about what city leaders should know about the AI landscape and how to prepare for what’s next.
When to move fast on AI — and when to exercise caution
What are leaders actually referring to when they talk about AI? There’s a difference between machine learning and generative AI, said Jerele Neeld, interim chief information officer for Chattanooga, Tennessee. Put simply, machine learning uses algorithms to learn from data and make informed predictions or decisions; generative AI is a type of artificial intelligence that can create new content, from text to images.
Many machine-learning applications have already been tested in cities, Neeld said, and cities should move as fast as they can to scale up those use cases.
Applications of generative AI are still in earlier stages of development, however — and cities need to plan now for how they will move forward, said Stephen Caines, chief innovation officer for the city of San Jose, California. “AI is going to work its way into your city at some point,” whether it’s through a vendor or an intern, he said, advising cities to figure out the rules of the road now.
One way to take the initiative is to identify a handful of initial use cases to test AI applications, said Alex Foard, executive director of research and collaboration at the New York City Office of Technology & Innovation.
Making AI a priority
Despite the hype AI has sparked, planning for it is still just one of the many priorities city leaders juggle. That poses a major near-term challenge in scaling up AI use, said Ernie Fernandez, Microsoft’s vice president for state and local government. Fernandez recalled a recent conversation with a “young, really progressive mayor” that shocked him: The mayor told him that he hadn’t had the chance to think about AI because he was so busy.
“The technologists are going to be deep into this, but the leaders, they're focused on their top-level issues,” Fernandez said. “We're going to have to work on bringing them along.” Early AI use cases governments are exploring include 311 assistance, permitting, policy development, language translation and briefing documents, he said.
There are some action items the public sector needs to focus on “like, yesterday,” said John Paul Farmer, president of WeLink Cities, a broadband technology and service provider. First, “get your data house in order,” he said. “There can be all kinds of amazing tools out there. If your data is a mess, you're not going to get the outcomes that you expect.”
Governments must intentionally upskill their workforce to use AI, rather than leave it to individuals to figure it out in their free time, he said. “Every government agency should be finding time, making it a part of the job to learn how to use these AI tools to do that job better, serve people better.”
Keep ethics and transparency top of mind
AI promises to make cities more efficient, said Little Rock, Arkansas, Mayor Frank Scott, Jr., who described how AI technologies helped cut his community’s recovery time after a tornado from a year to about six months. But he urged cities to be wary of unintended consequences.
“While it may be the new, hip, fad thing to do, we at cities really have to focus on the cost — not only from a financial standpoint, but also from a community and political standpoint,” Scott said. He advised cities to be very upfront with residents about how they are using AI.
Amen Ra Mashariki, the Bezos Earth Fund’s director of AI and data strategies, called for more precise, useful policies and guidelines to ensure AI is used ethically. “Right now, people are putting out policies just to make themselves look like they're thinking about it,” he said. “But it doesn't seem like it's implementable in any real way.”
WeLink’s Farmer said every government should develop its own artificial intelligence strategy that centers people’s “digital rights,” including privacy, security, equity, accountability and transparency. That might require difficult conversations within the community and city council.
“Sometimes maximizing privacy actually means less equity or less security,” Farmer said. “Having those hard conversations is how you actually come to understand what the priorities are for your community and make sure that you're using AI responsibly in the context of what that community wants.”