Society isn’t prepared for the rapid advancement of AI.
Sundar Pichai, the CEO of Google and Alphabet, warned that society must prepare for technologies like the ones his company has already launched, and that “every product of every company” will be affected by the rapid development of AI.
Warning of AI’s effects, Pichai said the problem of fake news, disinformation, and fabricated imagery will become “much bigger” and that “it could cause harm.”
Last month, Google released Bard, an AI chatbot, to the public as an experimental product. The move followed Microsoft’s announcement in January that OpenAI’s GPT technology would be incorporated into its Bing search engine, a decision that drew attention after ChatGPT launched in 2022.
But in recent weeks, critics and the general public alike have grown concerned about the pace of this advancement. In March, Elon Musk, Steve Wozniak, and a large number of researchers called for an immediate pause on training “experiments” with large language models “more powerful than GPT-4,” OpenAI’s flagship LLM. The open letter has since gathered more than 25,000 signatures.
Recommendations for Regulating AI
Google has released a document offering “recommendations for regulating AI,” but Pichai said society must quickly adapt, with regulation that includes laws to punish abuse, international agreements to make AI safe for everyone, and policies that “align with human values, including morality.”
Pichai stated, “A company should not decide,” adding that, in his view, social scientists, ethicists, philosophers, and others should be involved in creating this technology alongside engineers.
Asked whether society is prepared for AI technology like Bard, Pichai said, “On one hand, I feel no, because the pace at which we can think and adapt as societal institutions, compared to the pace at which the technology is evolving, there seems to be a mismatch.”
He added, however, that he remains optimistic because, in contrast to earlier technologies, “the number of people who have started worrying about the implications” did so at an early stage.
Pichai also acknowledged that Bard is prone to hallucinations. Scott Pelley, who conducted the interview, noted that when he asked Bard about inflation, it instantly recommended five books that, he later discovered, do not actually exist.
Pelley also expressed concern when Pichai acknowledged that chatbots operate as “a black box,” in which “you don’t fully understand” why or how they generate particular responses.