Silicon Valley, Artificial Intelligence, and Vestager's Nonchalance in Battling Tech Giants... Again.
The European Union has been considering new legally binding requirements for developers of artificial intelligence to ensure the technology is developed and used ethically. According to a draft of a white paper on artificial intelligence obtained by Bloomberg, the EU is set to propose that the new rules apply to “high-risk sectors,” such as healthcare and transport, and to suggest that the trading bloc update its safety and liability laws. This is part of the EU’s wider effort to catch up with countries like the US and China on advancements in AI while simultaneously promoting the European values of user privacy and data protection. In the draft of the white paper, the EU defines high-risk applications as “applications of artificial intelligence which can produce legal effects for the individual or the legal entity or pose risk of injury, death or significant material damage for the individual or the legal entity.” EU officials strongly believe that harmonising rules on AI across the region will boost development, contesting the view that stringent protection laws could hinder innovation. The first draft of the new AI policy is to be announced in two days’ time, on Wednesday the 19th of February 2020, along with broader recommendations outlining the bloc’s digital strategy for the coming years.
The EU’s commitment to protecting user data is well established and evident in the cases the Commission has taken on in the last few years. EU privacy watchdogs have been keeping a close eye on tech giants like Facebook, Google, Apple, and, most recently, Amazon, after revelations that Amazon workers monitored people’s conversations with their Alexa digital assistants. Tine Larsen, head of the data protection authority in Luxembourg, said that EU regulators are now working on a common approach to policing the technology. “Because it’s a question of principle, the members of the EDPB should work out a common position in line with the consistency mechanism to apply data protection rules in a harmonised way for this type of treatment,” she said, referring to a panel of regulators from across the 28-nation EU. In August 2019, Apple announced changes to Siri in response to customer concerns about its quality-evaluation process. In the same vein, in September 2019, Google said it would add new security protections to the way its workers listen to audio snippets, a practice meant to help improve the product’s quality.
European Commission President Ursula von der Leyen has pledged that her team would present a new legislative approach on artificial intelligence within the first 100 days of her mandate, which started on the 1st of December 2019. She has handed coordination of the task to Margrethe Vestager, the EU’s digital chief and its top, “most aggressive” competition regulator. One of the reports outlined a set of seven key requirements that AI systems should meet to be deemed trustworthy – these include human oversight, respect for privacy, traceability, and the avoidance of unfair bias in decisions taken by AI systems.
EU rules often echo across the globe, as companies refrain from building software or hardware that would be barred from such a large, developed market. The tightening of EU data protection and tech regulation has thus instigated tougher action around the world, and Silicon Valley has made its way to Brussels upon hearing of the bloc’s plans to regulate AI. Sundar Pichai, chief executive officer of Google and its parent Alphabet Inc., spoke at a Bruegel event in Brussels in January 2020 and agreed that AI needs to be regulated, but urged the US and EU to coordinate their regulatory approaches. He emphasised the promise of successful AI systems, touting a Google Health algorithm that can spot breast cancer more accurately than doctors, and cautiously encouraged plans for rules taking a “proportionate approach, balancing potential harms with social opportunities.” Today, Facebook boss Mark Zuckerberg is in Brussels to meet with Vestager to discuss new rules and regulations for the internet – just two days before the bloc unveils its plans to legislate artificial intelligence. It’s unlikely that Zuckerberg’s appearance will prove pivotal to the outcome of the first draft, as Vestager has made clear that she “will do [her] best to avoid unintended consequences, but, obviously, there will be intended consequences.” Facebook declined to comment.
The debate over the policies is expected to last through 2020, but the EU’s ultimate goal is to move away from the American-led view of tech, which has left Silicon Valley companies largely uninhibited, free to grow without much scrutiny.
Sources -
https://www.bloomberg.com/news/articles/2020-01-16/europe-mulls-new-tougher-rules-for-artificial-intelligence
https://www.bloomberg.com/news/articles/2020-02-17/brussels-edition-zuckerberg-s-eu-networking?srnd=fixed-income
https://www.bloomberg.com/news/articles/2020-01-17/amazon-s-snooping-on-alexa-chats-spurs-eu-wide-privacy-response
https://www.nytimes.com/2020/02/16/technology/europe-new-AI-tech-regulations.html