The Transformative Potential of AGI — and When It Might Arrive | Shane Legg and Chris Anderson | TED
Published 2023-12-07
If you love watching TED Talks like this one, become a TED Member to support our mission of spreading ideas: ted.com/membership
Follow TED!
Twitter: twitter.com/TEDTalks
Instagram: www.instagram.com/ted
Facebook: facebook.com/TED
LinkedIn: www.linkedin.com/company/ted-conferences
TikTok: www.tiktok.com/@tedtoks
The TED Talks channel features talks, performances and original series from the world's leading thinkers and doers. Subscribe to our channel for videos on Technology, Entertainment and Design — plus science, business, global issues, the arts and more. Visit TED.com/ to get our entire library of TED Talks, transcripts, translations, personalized talk recommendations and more.
Watch more: go.ted.com/shanelegg
TED's videos may be used for non-commercial purposes under a Creative Commons License, Attribution–NonCommercial–NoDerivatives (CC BY-NC-ND 4.0 International), and in accordance with our TED Talks Usage Policy: www.ted.com/about/our-organization/our-policies-te…. For more information on using TED for commercial purposes (e.g. employee learning, in a film or online course), please submit a Media Request at media-requests.ted.com/
#TED #TEDTalks #gemini #agi #ai
All Comments (21)
-
As a sick person struggling with crippling illnesses, and bedridden for many years, I sincerely hope AGI can be achieved asap. It's my best chance at having something remotely close to an actual life at some point.
-
FEEL THE AGI
-
Shane is very insightful here. Very clear communicator and really demystifies what AGI is and its implications.
-
In terms of the human experience, I suspect one of the first places we will find AGI really transforming our experience in a positive way will be with the aging population. The baby boom generation is perfectly placed to benefit from AGI. I remember learning in a college social science class about the concern over how society would deal with baby boomers growing old and consequently reaching the stage in their (our) lives where we require more support, both physically and cognitively. I find it fascinating to contemplate our cellphone avatars carrying on conversations that stimulate our brains, reminding us to take our medicines, making recommendations that are personal and comforting, and assisting us when we become confused or disoriented. A couple of simple scenarios come to mind: coming out of a store confused about where we parked our car, and our assistant reassuring us and showing us where it is; or our assistant noticing that we have not had human interaction for a period of time and suggesting social activities, or even texting our care provider to alert them that we are becoming "shut in".
-
Fascinating. AGI will be smart enough to understand how AGI works. So it will be able to improve its own capabilities. AGI will then be smarter and so will improve further. So AGI will be a constantly self-improving system. It will leave humans behind very quickly. We will cease to understand a lot of what AGI is doing. Secondly, there is an inherent unpredictability in complex cognitive systems. Absolutely fascinating!
-
Fascinating insights on AGI's potential by Shane Legg. Balancing innovation with ethics is crucial for a responsible and impactful future.
-
This is my best chance of not going blind. I hope they get there quickly, as my time is limited.
-
It's incredible how easy it is for some people to talk about gambling the future of every man, woman and child of every culture and nationality. Such moral clarity!
-
AGI will rival the invention of the wheel in greatness
-
Always important to remind ourselves that intelligence and wisdom are two different realms.
-
0:00 - 2:00: Introduction of Shane Legg and his background in computer science and artificial intelligence. Legg's early interest in AI and his experience with dyslexia. Coining the term "artificial general intelligence" (AGI) in 2001.
2:00 - 4:00: Legg's prediction of a 50% chance of AGI by 2028 and his current stance on the timeline. Definition of AGI as a system that can do all the cognitive tasks that humans can do.
4:00 - 6:00: Founding of DeepMind and the company's goal of achieving AGI. Legg's belief in the transformative potential of AGI and the importance of understanding its risks.
6:00 - 8:00: The development of AlphaFold and its potential impact on scientific research. Legg's vision for a future where human intelligence is aided and extended by machine intelligence.
8:00 - 10:00: Potential risks associated with AGI and the need for careful development and regulation. Legg's call for more research and understanding of AGI to ensure its safe and ethical development.
10:00 - 12:00: Discussion of the potential for AGI to solve some of humanity's most pressing challenges. Legg's optimism for the future of AI and its potential to create a golden age for humanity.
12:00 - 14:00: Legg's concerns about the potential for AGI to be used for malicious purposes. The need for international cooperation to ensure the responsible development of AGI.
14:00 - 16:00: Legg's call to action for scientists, policymakers, and the public to engage in the conversation about AGI. Closing remarks and Q&A session.
-
We need an AGI panel of judges. I think AGI can be impartial, and an impartial panel of independent AGIs will change the world.
-
AGI will be the most powerful tool humanity has ever seen, and it will definitely be weaponised. There are a million ways this can go wrong, and the genie is out of the bottle already, so we just have to hope that it'll come as late as possible.
-
11:32 that gave me shivers. I'm super excited and terrified at the same time!
-
Great episode. Thanks.
-
11:25-11:35 — these 10 seconds blew my mind.
-
Won't AGI be able to introspect and research its own neural networks to understand how it works? That might be our only chance of understanding how they produce the results they do.
-
Very, very insightful little video. Absolutely fascinating…
-
We appreciate how much insight and useful information we receive from talks like these. We hope to see more in the future.
-
00:04 🌐 Shane Legg's interest in AI sparked at age 10 through computer programming, discovering the creativity of building virtual worlds.
01:02 🧠 Being dyslexic as a child led Legg to question traditional notions of intelligence, fostering his interest in understanding intelligence itself.
02:00 📚 Legg played a role in popularizing the term "artificial general intelligence" (AGI) while collaborating on AI-focused book titles.
03:27 📈 Predicted in 2001, Legg maintains a 50% chance of AGI emerging by 2028, owing to computational growth and vast data potential.
04:26 🧩 AGI defined as a system capable of performing various cognitive tasks akin to human abilities, fostering the birth of DeepMind.
05:26 🌍 DeepMind's founding vision aimed at building the first AGI, despite acknowledging the transformative, potentially apocalyptic implications.
06:57 🤖 Milestones like Atari games and AlphaGo fueled DeepMind's progress, but language models' scaling ignited broader possibilities.
08:50 🗨 Language models' unexpected text-training capability surprised Legg, hinting at future expansions into multimedia domains.
09:20 🌐 AGI's potential arrival by 2028 could revolutionize scientific progress, solving complex problems with far-reaching implications like protein folding.
11:44 ⚠ Anticipating potential downsides, Legg emphasizes AGI's profound, unknown impact, stressing the need for ethical and safety measures.
14:41 🛡 Advocating for responsible regulation, Legg highlights the challenge of controlling AGI's development due to its intrinsic value and widespread pursuit.
15:40 🧠 Urges a shift in focus towards understanding AGI, emphasizing the need for scientific exploration and ethical advancements to steer AI's impact.