Dive Brief:
- The European Commission released a report with 33 recommendations to guide trustworthy, human-centric use of artificial intelligence (AI) in Europe with a focus on sustainability, growth, competitiveness and inclusion.
- Some of the policy and investment recommendations include:
  - Ban AI-enabled mass scoring of individuals, and set clear, strict rules for surveillance conducted for national security purposes
  - Fund and facilitate the development of AI tools to detect bias and prejudice in governmental decision making
  - Develop auditing mechanisms for AI systems
  - Address gender bias in algorithmic decision making
  - Refrain from establishing legal personality for AI systems or robots
  - Consider certifications for AI systems
- Next steps include launching analyses of AI ecosystems to determine which of the report's recommendations should be acted on right away. A pilot program will also begin in the second half of this year for a related report released in April, titled "Ethics guidelines for trustworthy AI."
Dive Insight:
Some of the recommendations focus on getting all EU member states on board and establishing central oversight through the European Commission to ensure collaboration and consistency. The recommendations primarily deal with identifying funding for AI development and encouraging regulations that minimize risks to people and society.
AI can enhance economies when used correctly, according to the report, which recommends creating an environment friendly to AI developers and investors. The report stresses that action should begin immediately so the EU can harness the opportunities AI offers and ensure member states remain globally competitive.
But the report also notes AI could be used for nefarious purposes and create unintended consequences. A number of the recommendations deal with eliminating bias. As in many other tech sectors, women and racially diverse groups are underrepresented in the AI industry, prompting New York University's AI Now Institute to declare a "diversity crisis" within the field.
Technological bias arises when a product or system has unintended consequences, benefiting only a narrow audience and potentially harming others. For example, a March study from Georgia Tech found that some autonomous vehicle (AV) technologies have difficulty detecting pedestrians with dark skin. Without correction, that could result in AVs colliding with people who have darker skin while detecting and avoiding those with lighter skin.
The report also suggests banning AI systems used for "mass scoring of individuals" and implementing strict rules about surveillance. A number of U.S. cities are currently grappling with this as well. San Francisco's Board of Supervisors approved an ordinance in May to ban the city from using facial recognition technology and to require advance approval for departments that want to procure other forms of surveillance technology.
Oakland, CA is among the other cities considering a similar ban. But Orlando, FL is one of the cities running facial recognition pilot programs; it's believed the police department could use the technology to fight crime and track down criminals.
Although the European Commission's report only offers recommendations and stops short of introducing regulations, this could be the first step toward unified AI governance in the EU.
As U.S. cities also examine how to deal with AI and protect citizens, the federal government could emulate the EU model to provide greater unity and clarity for AI regulations. However, as noted in the European Commission report, AI is still in its nascent stages, and regulating a rapidly evolving field presents many challenges.