Dr Geoffrey Hinton, widely dubbed the “godfather” of AI, has confirmed in an interview with The New York Times that he quit his job at Google in order to speak openly about the dangers of the technology he helped create.
Hinton’s pioneering work on neural networks – for which he won the 2018 Turing Award along with two fellow academics – laid the foundation for the current wave of advances in generative AI.
The lifelong academic and computer scientist joined Google in 2013 after the tech giant spent $44 million to acquire a company founded by Hinton and two of his students, Ilya Sutskever (now Chief Scientist at OpenAI) and Alex Krizhevsky. Their neural network research eventually paved the way for systems such as ChatGPT and Google Bard.
But Hinton has some regrets about his life’s work, he told the NYT. “I console myself with the usual excuse: If I hadn’t done it, someone else would have,” he said. He decided to leave Google so he could speak freely about the dangers of AI without having to weigh how his warnings might affect the company itself.
In today’s NYT, Cade Metz hints that I left Google to criticize Google. Actually, I left to talk about the dangers of AI without thinking about how it affects Google. Google has acted very responsibly.
— Geoffrey Hinton (@geoffreyhinton) May 1, 2023
According to the interview, Hinton’s concerns were prompted by Microsoft’s integration of ChatGPT into its Bing search engine, which he fears will push tech giants into a potentially unstoppable competition. This could lead to a flood of fake photos, videos, and text, to the point where the average person can no longer “tell what’s true.”
But misinformation aside, Hinton also raised concerns about the potential for AI to eliminate jobs and even write and run its own code, given its apparent ability to become smarter than humans much sooner than expected.
Hinton believes that the more companies improve artificial intelligence without oversight, the more dangerous it becomes. “Look at how it was five years ago and how it is now. Take the difference and propagate it forwards. That’s scary.”
The need to control AI development
Geoffrey Hinton isn’t the only one expressing concerns about the rapid and uncontrolled development of AI.
In late March, more than 2,000 industry professionals and executives signed an open letter demanding a six-month pause on training systems more powerful than GPT-4, the successor to the model behind ChatGPT.
The signatories – including DeepMind researchers, computer scientist Yoshua Bengio, and Elon Musk – stressed the need for regulatory guidance and warned that “powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”
Across the Atlantic, the rise of ChatGPT has spurred efforts by EU and national authorities to regulate the development of AI effectively without stifling innovation.
Individual member states are also scrutinising how advanced models operate. Spain, France, and Italy, for example, have launched investigations into ChatGPT over privacy concerns – with Italy becoming the first western country to restrict the service after imposing a temporary ban.
The union as a whole is also moving closer to passing the anticipated AI Act – the world’s first comprehensive AI law by a major regulator. Last week, members of the European Parliament agreed to take the bill to the next phase, known as the trilogue, where lawmakers and member states will work out its final details.
The bloc is likely to agree on the law later this year, according to Margrethe Vestager, the EU’s chief tech regulator, and companies would do well to start considering its implications now.
“With these groundbreaking rules, the EU is leading the development of new global norms to ensure AI can be trusted. By setting standards, we can pave the way for ethical technology worldwide and ensure the EU stays competitive along that path,” Vestager said when the bill was first announced.
Unless regulatory efforts in Europe and beyond are accelerated, we risk repeating the Oppenheimer-era approach that Hinton is now sounding the alarm about: “When you see something that is technically sweet, you go ahead and do it, and you argue about what to do about it only after you have had your technical success.”