Artificial intelligence (AI) could reshape the United States economy and the nature of employment, but the technology must be kept free of inherent bias, and the government should take a leading role in promoting and regulating it.
Those were some of the key takeaways from The AI Agenda, a one-day event hosted by The Economist in Washington, DC, which brought together public and private sector leaders to discuss the future of AI and how it will affect public policy and education. And while the technology is sure to be disruptive and touch many areas of life, including in cities, speakers said the key will be managing that change.
"I think the simple answer to how AI will evolve and will effect lots of changes in our society is: the evolution will be incremental and it will be bumpy,” Daniel Weitzner, founding director of the MIT Internet Policy Research Initiative and a principal research scientist at MIT’s Computer Science & Artificial Intelligence Laboratory, said during a panel discussion. “It's going to happen in different places at different times and it's going to happen on different scales."
Labor and education effects
Various organizations have said AI will have a tremendous effect on the U.S. economy and on cities, causing a shift in the job market. In a report last year, the National League of Cities (NLC) warned municipalities should take “strategic steps” to prepare their economies and their workforces for increased automation of jobs in the coming decades, and speakers said that process will start even earlier with a shift in education and a renewed emphasis on lifelong learning.
Weitzner said that while there have been many gloomy predictions of AI quickly taking over a vast array of jobs, the reality will be a lot slower and will not see the likes of truck drivers, bank clerks and taxi drivers losing their jobs in one fell swoop. NLC’s own report noted there is disagreement over the impact of AI and automation on jobs, citing one study that says 47% of American jobs will become automated and another that says only 9% will be affected.
Rather than jobs simply disappearing, Weitzner said change is coming for at least some workers, including those whose jobs rely on driving a traditional car. That echoes concerns from the likes of U.S. Sen. Gary Peters, D-MI, who said last year that the advent and growth of autonomous vehicles (AVs) could spell doom for the labor market. Weitzner said any changes will be complex.
"As best as one can predict, the changes are going to be much more complicated,” Weitzner said. “We're not getting rid of taxi drivers, we're changing them a little. We're not getting rid of truck drivers, we actually can't hire enough truck drivers apparently, we're changing the way those jobs work."
Instead, he said, it will be key to manage that change and assess how jobs that do become automated should look. "I think most importantly, how do we understand that change better and make sure that the change is happening in a way that's consistent with the values we have as societies?" Weitzner asked.
As part of that preparation, there were calls to ramp up educational efforts, both to ensure students have the skills they need to adapt to the changing market and to help adults learn about new and emerging technologies. "We need to foster an environment where we're used to the idea of lifelong learning,” Lynne Parker, assistant director for AI at the White House Office of Science and Technology Policy, said during a panel discussion.
"The future of work depends on the skills the next generation are learning right now and need to learn."
France Cordova
Director, National Science Foundation
And it needs to start early, with the likes of MIT creating its College of Computing last year "to address the global opportunities and challenges presented by the ubiquity of computing." Computing education was previously less of a priority in many academic areas, but Weitzner said it has become so necessary that students need to be prepared to use it regardless of what they study.
"It looks from the higher education perspective that computing is like math now,” he said. “Everyone needs some math, and soon everyone will need some computing skills."
"The future of work depends on the skills the next generation are learning right now and need to learn," National Science Foundation (NSF) director France Cordova agreed.
Once students are out in the working world, speakers said they need to keep up that spirit of learning, especially with AI and other emerging technologies like blockchain developing rapidly. As understanding and uses of those technologies evolve, workers must be prepared to take advantage, particularly as they take root in new kinds of jobs.
"The jobs of the future have yet to be created,” Michael Kratsios, deputy assistant to the President and deputy U.S. chief technology officer, said in an on-stage interview.
Bias
While technology like AI becomes more important every day, speakers said it must also be kept honest to ensure that any biases are rooted out.
That concern comes on the heels of a study from the Georgia Institute of Technology that found some technology used in AVs has trouble detecting pedestrians with darker skin. Meanwhile, facial recognition technology like Amazon’s Rekognition has come under fire for its potential impacts on communities of color.
To combat this, Nuala O'Connor, president and CEO of the Center for Democracy and Technology, said in an on-stage interview that companies must be transparent about how they collect and use their data, especially if it is put to work in areas like criminal justice, which has historically had a disproportionately negative impact on minorities.
“The intent obviously is to eradicate racial bias, to eradicate disparate outcomes in sentencing and the like,” O’Connor said. “The worry is that if you feed in bad data or you feed in biased values, you're going to get the same outcomes. We really have to stress test and be transparent about the inputs and the outputs of these programs."
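One simple form of such a stress test is to compare a model's error rates across demographic groups. The sketch below is a purely hypothetical illustration using synthetic data, not an example presented at the event; it computes false positive rates by group, where a large gap would be one signal that biased inputs are producing disparate outputs.

```python
from collections import defaultdict

# Synthetic, illustrative records: (group, model_flagged_high_risk, actually_reoffended).
# These values are invented for demonstration only.
records = [
    ("group_a", True, False), ("group_a", False, False), ("group_a", True, True),
    ("group_b", True, False), ("group_b", True, False), ("group_b", False, True),
]

false_positives = defaultdict(int)  # flagged as high risk but did not reoffend
negatives = defaultdict(int)        # everyone who did not reoffend

for group, flagged, reoffended in records:
    if not reoffended:
        negatives[group] += 1
        if flagged:
            false_positives[group] += 1

# Report the false positive rate per group; a wide gap suggests disparate outcomes.
for group in sorted(negatives):
    rate = false_positives[group] / negatives[group]
    print(f"{group}: false positive rate = {rate:.2f}")
```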
Part of that stress testing means ensuring test cases are diverse, too. O’Connor cited the example of a virtual reality (VR) headset that made it to market without having been tested on any women, and which gave female users headaches because women's eyes focus differently than men's. "If your teams are not diverse, your products are not going to work for diverse audiences. Full stop," she said.
Some technology companies claim higher ideals, such as being focused on bringing people together and connecting them, but have been accused of only paying lip service to diversity. O’Connor said the private sector must be truthful with itself and its customers about what it is doing; otherwise, the results could be painful.
"We are not citizens of the world. You have an inherent bias in your creation,” she said. “Own it. Be honest about who you are and what you're trying to create, because frankly people will figure it out."
The government response
For governments looking to both regulate and capitalize on this new technology, speakers called on them to have a coherent strategy, including at the national level.
Last month, President Donald Trump signed an executive order to launch the American Artificial Intelligence Initiative, which orders federal agencies to prioritize and provide resources for U.S.-based AI technology research and development. It focuses on investing in AI R&D; setting governance standards; building an AI workforce; and promoting international engagement while protecting American interests and values.
As part of that, Kratsios announced the launch of the government website AI.Gov, which he said is part of the executive order and is designed to bring together federal resources, remove obstacles to innovation and coordinate spending and research, all while protecting American workers.
He said it will serve as a "hub for all the AI projects being done across the agencies," which includes the likes of the U.S. Department of Transportation (USDOT), National Oceanic and Atmospheric Administration (NOAA) and NSF, and also help different agencies and industries think more about how AI can be used.
“It really speaks to and highlights the whole government approach that we're taking to ensuring American leadership in this very important field," Kratsios said during an on-stage interview.
Meanwhile, Kratsios noted that Trump’s budget proposal, which has come under fire in recent days from advocates for public transportation as well as other sectors, keeps funding high for AI and quantum computing, a decision he said is intended to "turbocharge that creative, innovative research and development ecosystem." And as part of his AI strategy, Trump has called for federal grant funding for AI initiatives to be prioritized.
But O’Connor said while the Trump administration has the right idea by having a national strategy for AI, more must be done. She said that government should do more to partner with the private sector and could even raise standards by only contracting with companies that are the most committed to privacy in data management.
And, she said, a federal privacy law would help the U.S. regain leadership on AI, as both consumers and businesses currently struggle with an inconsistent patchwork of state laws. Privacy legislation is under discussion in Congress, and O’Connor said passing a law would help alleviate what she called a “compliance headache for small and medium-sized enterprises."
One other way the government can help is to simplify the language around new technology, ensuring that public discourse is better informed and not drowned in jargon. And while she acknowledged that may be easier said than done, it could be another way to engage citizens who might otherwise feel removed from and threatened by innovation. "The tone from the top matters," she said.