Google AI research lab calls for ‘responsible’ approach

Toronto –
The chief business officer of Google’s artificial intelligence lab said the world is having a “eureka moment” around AI, but that the technology must be handled responsibly.
The explosion of interest in AI stems from recent technological advances that allow people to interact with it in conversational language, where previously the technology was largely the domain of programmers, said Colin Murdoch of Google’s DeepMind.
“When my mom and dad were able to do this, it suddenly became so much more accessible,” he said in an interview with The Canadian Press.
“Anyone can do it.”
The surge in people and companies experimenting with AI was sparked by last year’s release of ChatGPT, a generative AI chatbot capable of human-like conversations and tasks, developed by San Francisco-based OpenAI.
The release kicked off an AI race among top tech companies, including Google, which launched its rival chatbot Bard, and put more attention on DeepMind, which is headquartered in the U.K. but also has offices in Montreal and Toronto.
Today, companies in sectors from health care to oil and gas to technology are touting their use of, and plans for, AI.
But Murdoch said that as AI becomes ubiquitous, it requires a cautious approach and thoughtful consideration of the risks it poses.
“The way we think about this is boldness and responsibility, because it’s a balance,” he said.
“What we want to make sure is that we are doing this in a way that allows society to benefit from the incredible potential of this technology. It holds exceptional promise, so we must pioneer it responsibly.”
But what does responsible AI look like?
At Google, that first means welcoming criticism at every stage of the AI development process.
Murdoch said the company relies on internal and external review boards from the day an idea is born until it is released to the public.
“We’re making sure we have good oversight of our work. For example, we have ethicists alongside policy experts and machine learning experts,” he said.
“They pressure test the work from start to finish to identify how to maximize its benefits and to address any potential risks.”
Sometimes staff are encouraged to consult even more outside experts about a technology’s impact. For example, while building AlphaFold, the company consulted 30 people, ranging from biologists to biosecurity experts to farmers.
AlphaFold can predict 3D models of protein structures. Murdoch said the technology has mapped the 200 million proteins known to science, allowing protein structures to be determined in minutes, even seconds, rather than years. In the process, he believes, it has saved a billion years of research time.
It is being used by researchers at the University of Toronto to identify drug targets for liver cancer.
Murdoch said that in addition to ensuring products undergo external review, responsible AI also takes bias into account. Many say bias arises in AI because of a lack of diversity among the people who build and train it.
“It is very important to make sure that the practitioners building and deploying AI are somehow reflective of the broader society,” he said.
Education and community engagement help increase industry transparency and address bias, while also allowing small startups with fewer resources to learn from powerhouses like Google.
Murdoch made the remarks during a visit to Toronto from the U.K. for the four-day Collision technology conference, where he spoke Wednesday about how he feels AI is changing the world.
Later in the day, AI pioneer Geoffrey Hinton, who left Google in May so he could speak more freely about the dangers of AI, took the stage to discuss the giant leaps the technology has made in the past year, leaps even he did not predict would come so soon.
Hinton has been vocal about his concerns around AI for months, and on Wednesday outlined six harms the technology poses: bias and discrimination, unemployment, echo chambers, fake news, battle robots and existential risk.
He said the technology could greatly help humanity tackle climate change and health care, but warned it could also disrupt careers and even raise safety concerns.
For example, because AI is becoming so capable of completing tasks essential to non-trades-based work, he suggested that the child of Nick Thompson, the Atlantic CEO interviewing him on stage, pursue plumbing rather than media.
On an existential level, Hinton said he fears the Pentagon is building robots for war, and that an international treaty will be needed to stop it.
“I think it’s important that people understand this is not just science fiction, it’s not just fear-mongering,” he said.
“This is a real risk that we need to think about, and we need to figure out in advance how to deal with it.”
As for Murdoch, he said the world shouldn’t focus on any single risk posed by AI but should instead take a ‘holistic’ approach, and he wants people to remember that we are still in the early days of using and integrating the technology.
“We are still in the early stages, but with each step it will become more powerful and more capable.”
This report by The Canadian Press was first published June 29, 2023.