With any new technological development, questions about its impacts, both positive and negative, arise over time. Artificial intelligence is no exception, and government regulation of the technology is already a highly relevant topic of discussion; in particular, it was a hot topic at the recent World Economic Forum. Artificial intelligence is still in its early stages, but tech leaders and government officials agree that regulation is essential and that it is best not to wait until severe negative impacts reveal themselves. Instead, conversations on the topic will turn to action sooner rather than later, in order to prevent the negative effects of AI from wreaking havoc on industry.
The World Economic Forum annual meeting took place in January in Davos, Switzerland, and artificial intelligence was a hot discussion topic, according to CIO Dive. Microsoft President Brad Smith was one of the more vocal leaders at the forum, calling on governments to take immediate steps to regulate the technology. He warns against waiting for the technology to fully develop and argues instead for setting ethical standards before we begin to see the inevitable negative consequences. However, he opposes a complete ban on the technology, stating that the benefits outweigh the consequences. “I’m really reluctant to say ‘let’s stop people from using technology in a way that will reunite families when it can help them do it,’” Smith says. Some argue that AI should be regulated only within government agencies. Others say that the actual functions of AI should not be regulated, but rather the practical applications of those functions. In other words, we should not institute rules and regulations that hinder scientific advancement, only ones that govern the ways the technology can be used.
Google CEO Sundar Pichai added to the conversation, agreeing that regulation is necessary, although he did not specify the type or extent of that regulation. He notes the importance of “international alignment” in determining how the technology will be regulated. The EU, for example, tends to take a more aggressive approach to regulation, while other government bodies are more laid-back. According to Pichai, finding common ground on standards and regulations will be a key challenge.
Ginni Rometty, CEO of IBM, led a panel at the World Economic Forum aimed at preventing bias in AI. In preparation for the panel, IBM issued policy proposals that seek a compromise between the loose guidelines industry leaders would prefer and the strict laws and regulations governments would likely produce. In her panel, alongside White House aide Chris Liddell, OECD Secretary-General Jose Angel Gurria, and Siemens AG CEO Joe Kaeser, Rometty urges companies to work closely with governments to establish standards that will prevent discrimination and bias in technology that uses facial recognition, historical data, or any other element that may carry a bias. IBM also recommends that companies appoint an “AI ethics official” to assess and communicate the impact of certain AI systems on the individuals affected. In addition, IBM has been working with the Trump administration since last summer to solidify guidelines on federal agencies’ use of AI technology.
The discussions of AI and all of its complications at the World Economic Forum are just the beginning. Just as the invention of broadcast television led to regulations and censorship after initial opposition to government involvement, the development of AI technology will result in a consensus among government bodies, tech companies, and consumers to protect all parties involved. The cooperation of tech leaders in discussing these issues and policy proposals is a step in the right direction, but it is also an indicator that prompt action must be taken, as these leaders would normally oppose such regulation if the need for it weren’t so urgent. As conversations continue, it will become clearer what the future of AI looks like, what it can and cannot be used for, and how it will affect consumers.