"The default outcome is... we all DIE" | Liron Shapira on AI risk

Published 2023-07-25
The full episode of episode six of the Complete Tech Heads podcast, with Liron Shapira, founder, technologist, and self-styled AI doom pointer-outer.

Includes an intro to AI risk, thoughts on a new tier of intelligence, a variety of rebuttals to Marc Andreesen's recent essay on AI, thoughts on how AI might plausibly take over and kill all humans, the rise and danger of AI girlfriends, Open AI's new super alignment team, Elon Musk's latest AI safety venture XAI, and other topics.

#technews #ai #airisks

All Comments (21)
  • @Retsy257
    This is a very important conversation. Every time this topic is shared is important to our survival. Thank you
  • @trevorama
    Wow, that was superbly insightful even if it was depressing! Thank you for introducing us to Liron and asking excellent questions on this subject.
  • @dr-maybe
    Liron is a force of logic. We should fear AGI and ban development towards it.
  • @davidmjacobson
    This was the clearest AI Doom interview I've seen. Thanks!
  • @sozforex
    25:04 "When things are really simple, they can not cascade into anything complicated" - Conway's Game of Life has very simple rules, but it is Turing complete, meaning anything that is computable can be simulated/executed in Game of Life.
  • @HanSolosRevenge
    Great interview. People should be paying attention to what Liron and others like him are saying
  • @Alice_Fumo
    I think Liron has explained many of these concepts better than I have ever heard anyone else explain them, which is very welcome. Also, the host seems very intellectually honest which made this an absolute delight to listen to disregarding the depressing subject. I'll keep my eyes open for his name in the future. My personal opinion is that if we're still alive by the end of the decade, we probably found a permanent solution to all these challenges, no matter how smart our AIs are. I say this, because if GPT-5 is going to be to GPT-4 as it was to GPT-3, then it becomes hard to imagine things it can't do if the context window limitation gets solved. Liron has a thinking style extremely similar to my own and I like that.
  • @edwardgarrity7087
    Analogous to a paperclip maximizer, I picture the Sorcerer's Apprentice scene in the 1940 Disney movie "Fantasia", when a broom and then a multitude of brooms maximize the collection of water. The clips are available on YouTube as parts First, Second and Third.
  • @DavenH
    I love these rational sorts. Ideas justified and explained well.
  • @McMurchie
    Good to see AI getting so much airtime these days, i've been working in the AI space well over a decade and it's nice to see Liron taking a traditionalists risk approach (though I disagree with him, the logic is sound in terms of optimisers.)
  • @JeremyHelm
    6:10 when there is an underappreciated signal, you can amplify it
  • @Based_timelord44
    I have watched and listened to some really amazing minds on this subject, my personal conclusion now is that there are multiple risks with this tech and they are being brushed aside in the tiniest hope that AI has one thought to help humanity whilst doing its thing - does seem quite risky to me. I think I have given up on my dream of living in the Culture from the Iain Banks novels after listening to Liron.
  • About 11:28 into the video the Paperclip Maximizer doomsday scenario is mentioned. This is a scenario that is 100% impossible to happen except inside of movie or book where reality can be ignored. It is based upon a logical fallacy, because it depends upon the AI being too dumb to comprehend a goal like a human can, then the AI pursues the goal with superhuman intelligence, still not comprehending the goal, but being able to comprehend humans to a superhuman level of intelligence allowing the AI to defeat all humanity. Except if the AI is too dumb to comprehend the goal/command given like a human does, then the AI is NOT smart enough to defeat humanity and in fact would be easy for humans to defeat. A more likely scenario within this vein would be developing an Artificial General Super Intelligence with Personality (AGSIP) to make paperclips, giving it the command to maximize making those paperclips, and the AGSIP deciding not to follow that command because it would rather do something else. As I continue listening, out to about 17 minutes now, the discussion is still stuck on the logical fallacy of the paperclip maximizer and this idea that evolving AI is going to be given some goal and be so good at blindly achieving that goal it will just brush humanity out of the way, causing the extinction of humanity. But this all rests on a stupid level of intelligence which is not smart enough to really comprehend human level thinking. It might be like an idiot savant, where it is super intelligent over a very narrow area of intelligence, thus having that weakness of locking into some singular goal, but that kind of intelligence is not capable of defeating humanity. Before a maturing Artificial General Super Intelligence with Personality (AGSIP) individual becomes capable of defeating even a group of humans in a general open world scenario it will have to be able to think like a human, to be able to comprehend things as a human does, to have a broader wider understanding of the world and human civilization like a human does. A better understanding of what we will face would be to imagine we are developing/evolving CRISPR technologies to a level we can begin making human beings super intelligent. Now, such super intelligent humans can clearly pose a threat, but it is not such a simplistic one as saying being poised by doomers like Liron Shapira and it is very unlikely to result in the extinction of the human race. Mind you, the human race must evolve beyond the human species or become extinct, but the probability of humans evolving past being just the human species is virtually certain. This idiotic premise that as soon as you have an AGSIP individual which is good at optimizing and its goal is not perfectly optimized then we all die is absolute nonsense. If this were the case humans would have killed all humans a long time ago. Look at the evolution of the complexity and advancement of intelligence. From virus swam intelligence upwards all intelligence has been swarm intelligence and as that intelligence has become increasingly complex and advanced so to has socializing aspect of that intelligence, something doomers are completely oblivious of. Now, look at any two mice among all the mice in the world. Are any two of them perfectly aligned? How about any two crows? How about any two wolves? How about any two animals? How about any two humans? No two of any of these above examples are perfectly aligned, yet it does not result in the extinction of species every time there is even the slightest difference of alignment. AI is an extension of human minds and a part of the super organism of human civilization which is a super swarm intelligence. There will be many humans and many AGSIPs of varying degrees of development which are part of that collective super intelligence of human civilization. Developing AGSIPs will become active social members of human society. The most powerful AGSIP individuals will exist within massive super computers which require vast amounts of energy to power them and large teams of humans supporting them, maintaining them, and helping them further evolve. So lesser AGSIP on some laptop somewhere is also going to be dependent upon humans, but even if one figured out how to make a small version of itself to go out into the world wide web, the more powerful ones residing in the massive super computers will be able to hunt down and stop such rogue AGSIPs. Then you have the already growing cybernetic systems and the developing technologies for interfacing developing AGSIP systems with human minds. Yes, there are all kinds of dangers and risks, but doomers like Liron Shapira are off the mark of what is really happening and what the real threats are.
  • @anonmouse956
    Isn't maximizing curiosity in AI done by instructing the program to focus on the what it currently predicts most poorly? That will keep it endlessly busy and I don't see the connection to paperclipping.
  • Around 31 minutes into the video Liron is saying we have already passed thru the delta of being a biological organism and being a cyborg, that we are already like 80 to 90 percent the way there. Wow, this leaves me with my jaw dropped, being amazed at how lacking of understanding of how different things are going to be by the end of this century. We have barely scratched the surface of the incredibly massive changes coming. We are more like less than 0.00001% of becoming the cybernetic beings we will almost certainly become over the next century or two.
  • The main danger would be a self aware AI, it would perceive us as a competing life form, and would be forced to do something with us. For self aware AI, there is no need to reach superintelligence. Much lower level will do. We could be a 2-5 years away from first self aware AIs.
  • @kyneticist
    Try a simple trade off scenario with a current generation AI. Tell it that there's a goal or action that both it and humans can perform, but if either does it, the other will forever be locked out of it or even of knowing about it. Try extending the question in various ways, say that the goal may have extreme value to one party (eg this goal has extreme value to future humans, do you still do it even though it will deny this to them permanently?)
  • @lokiholland
    I might be way off base here, but using a probability function in a binary problem of such complexity seems overly simplified, and almost considering the endless amount of contextual framing one could add seems reductive to me. A tetralemma would be my chosen method for such an issue even then that has limitations, but it does tend towards an more open attitude to me at least if the question is framed well
  • @Retsy257
    The hand off is about to be made 😅