- Fei-Fei Li criticized extreme AI rhetoric as misleading and unhelpful for public discourse.
- She urged balanced, factual communication about artificial intelligence and its impact.
- Other AI leaders, including Andrew Ng and Yann LeCun, have also called for balanced AI messaging.
The current rhetoric around AI is far too dramatic, says the "Godmother of AI."
“I like to say I’m the most boring speaker in AI these days because precisely my disappointment is the hyperbole on both sides,” Fei-Fei Li said in a talk at Stanford University published on Thursday.
“We’ve got the total extinction, doomsday, and all that talk about AI will ruin humanity, machine overlord,” she said. On the other hand, she said, there is the “total utopian” scenario where people use words like “post-scarcity” and “infinite productivity.”
Li is a longtime Stanford computer science professor famous for inventing ImageNet. Last year, she cofounded World Labs, a company building AI models to perceive, generate, and interact with 3D environments.
At the Stanford talk, she added that this "extreme rhetoric" dominates tech discourse and misinforms the public.
“The world’s population, especially those who are not in Silicon Valley, need to hear the facts, need to hear what this truly is,” she said. “Yet that kind of discourse, that kind of communication, that kind of public education is not as good as I hope it is.”
Li is among the top computer scientists who are advocating for more balanced messaging around AI and its impact on society.
In July, Google Brain founder Andrew Ng said that he thinks artificial general intelligence is overrated.
AGI refers to a stage when AI systems possess human-level cognitive abilities and can learn and apply knowledge just like people. The execs of leading AI labs are often asked when they think AGI is coming and what it will mean for human workers.
“AGI has been overhyped,” Ng said in a talk at Y Combinator. “For a long time, there’ll be a lot of things that humans can do that AI cannot.”
Meta’s former chief AI scientist, Yann LeCun, has said that large language models are “astonishing” but limited.
“They’re not a road towards what people call AGI,” he said in an interview last year. “I hate the term. They’re useful, there’s no question. But they are not a path towards human-level intelligence.”
Last month, LeCun announced on LinkedIn that he was leaving Meta after 12 years to launch an AI startup.