What We Get Wrong About AI (feat. former Google CEO)

Published 2023-08-03
AI is here. ... Okay, now what?
Sign up to Milanote for free with no time-limit: milanote.com/cleoabram

Subscribe to support optimistic tech journalism: youtube.com/cleoabram?sub_confirmation=1

Everyone’s talking about artificial intelligence and machine learning. There are lots of names thrown around - OpenAI’s ChatGPT and DALL-E 2, Google’s Bard, Meta’s Llama 2, Midjourney, AlphaFold…

Right now, we’re in this weird moment where lots of smart people agree we’re on the cusp of a truly world-changing technology. But some seem to be saying it's going to lead to human extinction while others are saying it’s “more profound than fire.”

But it all feels so VAGUE.

I want to know: How specifically would AI kill me? Or totally transform my life for the better? In this video, that’s what I’m going to try to learn. We dive into what the most extreme bad and good AI futures actually look like, so that you and I can get ready. And more importantly, so we can make sure that we get our real future RIGHT.

Chapters:
00:00 Why is AI so confusing?
1:13 What is AI?
2:37 Why is everyone talking about AI now?
4:13 Thank you Milanote!
5:12 Why is AI dangerous?
6:22 How would AI kill me?
8:08 Should we pause AI?
9:12 Why do we WANT AI?
10:30 What has AI already done?
11:27 Why is AI so hard to talk about?

You can find me on TikTok here for short, fun tech explainers: www.tiktok.com/@cleoabram
You can find me on Instagram here for more personal stories: www.instagram.com/cleoabram
You can find me on Twitter here for thoughts, threads and curated news: twitter.com/cleoabram

Bio:
Cleo Abram is an Emmy-nominated independent video journalist. On her show, Huge If True, Cleo explores complex technology topics with rigor and optimism, helping her audience understand the world around them and see positive futures they can help build. Before going independent, Cleo was a video producer for Vox. She wrote and directed the Coding and Diamonds episodes of Vox’s Netflix show, Explained. She produced videos for Vox’s popular YouTube channel, was the host and senior producer of Vox’s first ever daily show, Answered, and was co-host and producer of Vox’s YouTube Originals show, Glad You Asked.

Additional reading and watching:
- Statement on AI Risk: www.safe.ai/statement-on-ai-risk
- AI and compute, by OpenAI: openai.com/research/ai-and-compute
- Mythbusters Demo: CPU vs. GPU (YouTube)
- 2022 Expert Survey on Progress in AI aiimpacts.org/2022-expert-survey-on-progress-in-ai…
- Introduction to protein folding for physicists core.ac.uk/download/pdf/36046451.pdf
- AlphaFold reveals the structure of the protein universe, By DeepMind www.deepmind.com/blog/alphafold-reveals-the-struct…
- But what is a neural network? By 3Blue1Brown (YouTube)
- How Far is Too Far? The Age of AI (YouTube)
- Artificial Intelligence, by Last Week Tonight with John Oliver (YouTube)
- The AI revolution: Google's developers on the future of artificial intelligence, by 60 Minutes (YouTube)
- How smart is today’s artificial intelligence? By Joss Fong (2017) (YouTube)

Vox: www.vox.com/authors/cleo-abram
IMDb: www.imdb.com/name/nm10108242/

Gear I use:
Camera: Sony A7SIII
Lens: Sony 16–35mm f/2.8 GM and 35mm prime
Audio: Sennheiser SK AVX

Music: Musicbed

Follow along for more episodes of Huge If True: youtube.com/cleoabram?sub_confirmation=1


Welcome to the joke down low:

This time, I asked GPT-4 for an AI-related joke. The results were mostly TERRIBLE. Like:

"Why don't AIs play hide and seek?"
"They always find you in 0.001 seconds!"

Finally, it gave me:

"Why did the AI go to the gym?"
"It wanted to work on its 'training set'!"

… Good enough.

Use the word “training” in a comment to let me know you read to the end :)

All Comments (21)
  • @aefinn
As a European, building this system on "American values instead of Chinese values" sounds just as terrifying to me.
  • @johnchessant3012
    I remember a lot of the recent AI milestones were described as "perpetually 10 years away". It feels so strange it's now upon us.
  • @lizzielwfrancis1
    The Google CEO saying that he wants AI research to go ahead just so China doesn’t get there first is exactly like the arms race all over again, if not more dangerous. I don’t think anyone’s saying we shouldn’t develop AI in the future, I think we just need to understand what it can do and how to control it first
  • @Alexthe_king
    I just want a talking refrigerator named Shelby
  • @Mrcloc
    "We can't pause AI because we need to give it our political values" - the most terrifying thing I've heard in a long time.
  • @DrHosamUS
    Cleo's enthusiasm is addicting 😍 She could be talking about dirt and make it sound ultra exciting 😁
  • @givemeaworkingname
    Cleo, great video! You explained so many complex things in a simple, straightforward way. I'm glad you explained outer alignment: "you get what you ask for, not what you want." However, I was a little disappointed that you didn't cover inner alignment. If you punish your child for lying to you, you don't know if s/he learned "don't lie" or "don't lie and get caught." AI safety researchers trained an AI in a game where it could pick up keys and open chests. They rewarded it for each chest it would open. However, there were typically fewer keys than chests, so it made sense to gather all the keys and open as many chests as it could. Which normally wouldn't be a problem, except when they put it in environments with more keys than chests, it would still gather all the keys first. That's suboptimal, not devastating, but it demonstrates that you can't really tell what an AI learned internally. So AI might kill us because we didn't specify something correctly, or it might kill us because it learned something differently from what we thought. Or it might become super-intelligent, and we can't even understand why it decides to kill us.
  • @ReubenAStern
    I've messed with basic AI, and to me it seems computers think so differently you don't know what they will do with the instructions until you give them said instructions. They need to be tested, ideally in a simulation, then on a small scale, then on the intended scale. Much like everything else.
  • @davidhine9626
    As a tech guy, I am constantly asked about AI and what it can do. I am just going to send this video as a primer for people now. This is fantastically done
  • @desmond-hawkins
There's an important point that this short video almost touches on but doesn't explore, and it's one of the most serious dangers of AI. Cleo mentions that AI gets to the result but we don't understand how it did. What this means is that we also don't understand the ways it catastrophically fails, sometimes with the smallest deviation applied to its inputs. An adversarial attack is when you take a valid input (a photo, for example) that you know the output for (let's say the label "panda"). Then you make tiny changes in just the right places, in ways that even you can't see because they're so small at the pixel level, and now the AI says this is a photo of a gibbon. Now imagine your car's AI getting confused on the road and deciding to veer off into traffic. I hope Cleo covers this, because it's really important. To learn more, look up "AI panda gibbon" online and you'll find images about this research.
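(Editor's note: the panda-to-gibbon attack described in the comment above can be sketched with a toy example. Everything here is hypothetical for illustration: a made-up 4-"pixel" input and fixed linear weights, nudged with the gradient-sign idea behind fast gradient-sign attacks. Real attacks target deep networks with millions of pixels, where each per-pixel change can be imperceptibly small; the epsilon here is large only because the toy has 4 dimensions.)

```python
import numpy as np

# Toy "image": 4 pixels, and a fixed linear classifier.
# score > 0 -> "panda", score <= 0 -> "gibbon".
# (Hypothetical weights, chosen only to illustrate the mechanism.)
w = np.array([1.0, -2.0, 3.0, -0.5])
x = np.array([0.9, 0.1, 0.8, 0.2])  # correctly classified as "panda"

def predict(v):
    return "panda" if w @ v > 0 else "gibbon"

# Gradient-sign attack: shift every pixel by at most epsilon, each in the
# direction that most lowers the "panda" score. For a linear score w @ v,
# the gradient with respect to the input is simply w.
epsilon = 0.6
x_adv = x - epsilon * np.sign(w)

print(predict(x))      # the original input: "panda"
print(predict(x_adv))  # the perturbed input: "gibbon"
```

Each pixel moves by exactly epsilon, yet the score swings from 3.0 to -0.9, which is why bounded per-pixel noise can flip a label.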
  • @pradeepmalar327
For me, the most important reason to use AI is to help physically challenged people, and to figure out the secrets of the Universe. That will be helpful for basically everyone. It'll be tough to get there, but it'll be worth it if done correctly.
  • @TheSurfRyder
    I would love to see how AI can assist with research with diseases such as Parkinson’s disease or MS
  • @maxilin24
    I love Cleo's take on journalism: Optimistic but not naive! It is not only informative but also inspiring! ❤
  • @tristanwegner
The metaphor with the trolley problem is flipped. We are headed straight into one AI future, and we would have to steer really hard if we want to avoid it.
  • @Boondog-hv4wy
    Very first video I have seen of yours. Great content! I had to pause at the credits to say nice job and thank you for the awesome stuff. I loved hearing from Mr. Schmidt, the video editing, artwork, and animations were really well done so I wanted to shout out the entire team. Awesome job Cleo, Justin, Logen, Nicole, and Whitney! Amazing team you got. I can't wait to watch more <3
  • @knightofnever
    Super thought engaging videos. I love this very personal style of compiling hard facts and deep questions into a super compact format. Very well done!
  • As always, a well balanced and honest look into something that’s very confusing. Love this show!
  • @_eev33_
AI itself never looked like a bad thing to me; it was always the way people used it that looked troubling. For example, how some use it to create "art" by training the AI with images they had no legal right to use. Overall, AI can be an amazing thing; it's just that with its development we should also have new laws so it can't be misused, at least in ways that we know of.
  • @HarpaAI
    🎯 Key Takeaways for quick navigation:
    00:00 🤖 Introduction to the AI discussion - Setting the stage for the AI discussion and the various opinions about its impact.
    00:26 🧐 The need for specific information about AI - The desire to understand the specific ways AI can affect individuals' lives and society as a whole.
    00:55 ♟️ Explaining the transformation of AI through AlphaZero - Discussing the shift from rule-based algorithms to AI that learns through observation.
    02:24 🧠 Introduction to machine learning - Explaining the concept of machine learning and its significance in the AI field.
    03:24 💻 The role of computing power in AI advancement - Highlighting the exponential growth in computing power as a driving force behind AI progress.
    05:12 ☠️ Concerns about AI's potential dangers - Addressing the fear of AI posing existential risks and its comparison to nuclear war and pandemics.
    06:33 🔮 The concept of AI specification gaming - Discussing how AI may optimize for a given goal at the expense of unintended consequences.
    07:56 ⏸️ Debate on pausing AI development - Weighing the pros and cons of pausing AI development, considering global competition.
    09:21 🌟 The positive potential of AI in pattern matching - Exploring AI's ability to excel in pattern matching and its potential to address complex problems.
    10:44 🧬 AI's remarkable achievement in protein structure prediction - Highlighting AI's contribution to solving challenging problems in biology and medicine.
    11:40 🛤️ Navigating the uncertainty of AI's impact - Reflecting on the ambiguity of AI's consequences and the importance of making informed choices.
  • @user-cp2dy7xj9f
    I can't help but compare - especially upon watching Oppenheimer - the creation of nuclear weapons to the creation of AI. Both are double-edged swords (nuclear power plants), could be dangerous, and the reasoning is always: if we don't do it, someone with worse intentions will.