
In a noteworthy address during a graduation ceremony at the University of Toronto, Ilya Sutskever, a co-founder of OpenAI and a leading scientist in artificial intelligence, painted a vivid picture of a future in which machines can match every human capability.
Sutskever, who now leads a new venture dedicated to building "safe superintelligence," did not hesitate to affirm that this transformation is inevitable and that its impact will extend to every aspect of our lives.
During his speech, delivered upon receiving an honorary doctorate, Sutskever noted that the question is not "if" but "when" AI will reach a stage where it can perform every human task.
He explained his logic, stating: "We have a brain, the brain is a biological computer, so why shouldn't a digital computer, a digital brain, be able to do the same things? That, in a single sentence, is the summary of why AI will be able to do all those things."
While acknowledging that current AI models, despite their power in some areas, are still "quite lacking" and require significant development in others, Sutskever believes progress is continuing at a pace that is far from slow. In this context, he estimated that reaching what he described as superintelligence could take "three, five, maybe ten years."
According to Sutskever, this future raises fundamental questions about the role of humans in a world where machines can do everything.
Among the potential outcomes he hinted at were an accelerated pace of scientific discovery, unprecedented economic growth, and widespread automation, which would lead to a phase where "the rate of progress becomes very, very fast for at least some time."
Sutskever also made a point to offer advice to the graduates, urging them to "accept reality as it is, try not to regret the past, and strive to improve the current situation."
This advice carries particular weight given his personal journey, including his part in the brief removal of Sam Altman as OpenAI's CEO in late 2023, a move he later said he regretted.
In discussing the challenges, Sutskever described the one posed by AI as "truly unprecedented and extreme," even calling it "humanity's greatest challenge ever."
He added that overcoming it would also bring the greatest rewards. He concluded with a fact he sees as firmly established: "Whether you like it or not, your life is going to be very, very substantially affected by AI."
It is worth noting that Sutskever left OpenAI in May 2024 to found his own company, Safe Superintelligence (SSI).
As its name suggests, the company focuses on safely developing superintelligent AI, and it has already attracted significant funding despite being newly founded and having yet to release a product.
The future Sutskever paints remains a subject for deep reflection and wide-ranging debate, but his message was clear: change is coming, and artificial intelligence will be at the heart of this transformation.