
Tech Tonic | Where’s Ilya? He aims for AI smarter than humans, but not dangerous

Jun 23, 2024 07:00 AM IST

Sutskever’s start-up, Safe Superintelligence, aims to create what the name suggests – a super-intelligent machine which surpasses anything we’ve seen thus far

"Where’s Ilya?" a question that's kept many of us awake at night, recently. In case you’d also been wondering where former OpenAI’s co-founder and former chief scientist was all this while (and that was a valid question), considering he’d been off the radar since allegedly leading what turned out to a series of disastrous events, in an attempt to oust CEO Sam Altman from the company, we now have an answer.

Ilya Sutskever speaks during a talk at Tel Aviv University in Tel Aviv, Israel. (Reuters)

Altman is very much back in charge, OpenAI’s partnership with Microsoft is still going strong, and the company now has a big role to play in Apple’s AI chapter, which will well and truly arrive later this year. Sutskever, who left OpenAI earlier this summer, has been working on something I’ll let him describe himself – "one goal and one product: a safe superintelligence."

Sutskever’s start-up, Safe Superintelligence, aims to create what the name suggests – a super-intelligent machine that surpasses anything we’ve seen thus far with artificial intelligence (AI) and is smarter than humans, but isn’t dangerous as a result of its capabilities. His co-founders are Daniel Gross, who has worked at Apple, and Daniel Levy, who worked with Sutskever at OpenAI.

What Sutskever’s team achieved at OpenAI is nothing short of transformative. OpenAI’s ChatGPT chatbot set off a chain of events that has seen AI find its way into our lives – as a talking chatbot and a writing assistant that generates code, drafts email replies for us, takes notes of meetings, and generates an image or a music track exactly how we like it. It is only logical, then, that I wouldn’t bet against Sutskever or Safe Superintelligence achieving what they’ve set out to achieve. Quite how the contours shape up, we’ll only know in due course. He has been one of the few vocal voices to consistently express concern about the potential of artificial intelligence becoming dangerous.

A good point, then, to touch upon artificial superintelligence. If you think AI as we interface with it across devices, apps and platforms is smart, the idea behind the next evolution is to make these models smarter than humans – smarter, that is, than even the most intelligent and gifted among us. In a paper published last year, IBM tried to answer the question of what artificial superintelligence actually is.

“Artificial superintelligence (ASI) is a hypothetical software-based artificial intelligence (AI) system with an intellectual scope beyond human intelligence. At the most fundamental level, this superintelligent AI has cutting-edge cognitive functions and highly developed thinking skills more advanced than any human,” the IBM paper titled 'What is artificial superintelligence?' says.

ASI may be hypothetical for now (as I mentioned earlier, its contours aren’t clear yet), but it must be contrasted with where we are today – Artificial Narrow Intelligence (ANI), also called weak AI or narrow AI. You have heard a lot about AI trying to understand context and hold conversations; it is still learning the basics of human interaction.

What would ASI need? Data sets and language models much bigger than the ones powering commercially available AI tools today. Multisensory capabilities to handle and process multiple forms of data input at a rapid pace, and neural networks that mimic how a human brain works. And, of course, the conversational and contextual skills still evolving in the likes of Microsoft's Copilot, OpenAI's ChatGPT, Google Gemini and the upcoming Apple Intelligence with a smarter Siri. It will all bear fruit as the next chapter of artificial intelligence is written. I realise this is an oversimplification, but you get the idea.

What are the risks that Sutskever, and others who have been warning us about AI, are referring to? These machines could, at some point, bypass human control, become self-aware and make their own moves. “Its superior cognitive abilities could allow it to manipulate systems or even gain control of advanced weapons,” the IBM paper notes, outlining the possible worst-case scenario.

Safe Superintelligence says its focus is singular. “We have started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence,” the introductory note says. But what really makes one AI system safer than another? What are the guardrails, and the metrics for identifying possible red flags? Your guess is as good as mine. We’ll let the scientists tell us when the time is right.

Vishal Mathur is the technology editor for the Hindustan Times. Tech Tonic is a weekly column that looks at the impact of personal technology on the way we live, and vice-versa. The views expressed are personal.
