Is the Intelligence-Explosion Near? A Reality Check.

Published 2024-06-13
Learn more about neural networks and large language models on Brilliant! First 30 days are free and 20% off the annual premium subscription when you use our link ➜ brilliant.org/sabine.

I had a look at Leopold Aschenbrenner's recent (very long) essay about the supposedly near "intelligence explosion" in artificial intelligence development. I am not particularly convinced by his argument. You can read his essay here: situational-awareness.ai/

πŸ€“ Check out my new quiz app ➜ quizwithit.com/
πŸ’Œ Support me on Donorbox ➜ donorbox.org/swtg
πŸ“ Transcripts and written news on Substack ➜ sciencewtg.substack.com/
πŸ‘‰ Transcript with links to references on Patreon ➜ www.patreon.com/Sabine
πŸ“© Free weekly science newsletter ➜ sabinehossenfelder.com/newsletter/
πŸ‘‚ Audio only podcast ➜ open.spotify.com/show/0MkNfXlKnMPEUMEeKQYmYC
πŸ”— Join this channel to get access to perks ➜
youtube.com/channel/UC1yNl2E66ZzKApQdRuTQ4tw/join
πŸ–ΌοΈ On instagram ➜ www.instagram.com/sciencewtg/

#science #sciencenews #tech #technews #ai

All Comments (21)
  • @lokop-bq3ov
    Artificial Intelligence is nothing compared to Natural Stupidity
  • @framwork1
    Do you all remember before the internet, that people thought the cause of stupidity was lack of access to information? Yeah. It wasn't that.
  • @RigelOrionBeta
    In this post-truth era, what people are searching for isn't truth, but rather comfort. They want someone to tell them what the answer is, regardless of the truth of the answer. There is a lot of uncertainty right now about the future, and that is the cause of all this anxiety. It's so much easier just to point at an algorithm and listen to it. That way, no one is responsible when it's wrong - it's the algorithm's fault. AI is trained, at the end of the day, on how humans understand the world. Its limits, therefore, will be human. Garbage in, garbage out. A lot of engineers these days seem to think that basic axiom isn't true anymore, because these language models are confident in their answers. Confident does not mean correct.
  • @k.vn.k
    "I can't see no end!" said the man who earned money from seeing no end. 😅😅😅 That's gold, Sabine!
  • @hmmmblyat6178
    All I'm saying is that if you need 10 nuclear reactors to run artificial general intelligence while humans only need a cheese sandwich, I believe we win this round.
  • @pirobot668beta
    In 1997, I was working at University. A Faculty member gave me an assignment: write a program that can negotiate as well as a human. "The test subjects shouldn't be able to tell if it's a machine or a human." Apparently, she had never heard of the Turing Test. When we told her of the difficulty of the task, she confidently told us "I'll give you two more weeks." The point? There are far too many people with advanced degrees but no common sense making predictions about something never seen before.
  • @calmhorizons
    Selling shovels has always been the best way to make money in a goldrush.
  • @zigcorvetti
    Never underestimate the capability and resourcefulness of corporate greed - especially when it's a collective effort.
  • @michaelbuckers
    There's another issue, with language models anyway. The learning database already includes virtually 100% of all text written by humans, including the internet. But now the internet is flooded with AI-generated text, so you can't use the internet anymore, because that would be the AI version of the Habsburg royal lineage.
  • @pablovirus
    I love how Sabine is deadpan serious throughout most videos and yet she can still make one laugh with unexpected jokes
  • Nailed it ... this technocentric mindset is pervasive in so many fields ... but rarely scrutinised holistically for its resource needs, land use changes, social impacts
  • @msromike123
    If I will be able to ask Google home why I went to the kitchen, I am on board!
  • @bulatker
    "I can't see no end" says anyone in the first half of the S-curve
  • I love the toast joke. Keep up the good work. The logic seems concise in how you described the two primary constraints.
  • Regarding failed predictions, you should also acknowledge that in terms of AI, there were many predictions that were accomplished years earlier than predicted.
  • You read a 165-page essay, even though you knew the contents would be dubious at best. Sabine is heroic.
  • @davidbonn8740
    I think there are a couple of problems here that you don't point out. The biggest one is that we don't have a rigorous definition of what the end result is. Saying "Artificial General Intelligence" without a strong definition of what you actually mean doesn't mean anything at all, since you can easily move the goalposts in either direction, and we can expect people to do exactly that. Another one is that current neural networks are inefficient learners and learn a very inefficient representation of their data. We are rapidly reaching a point of diminishing returns in that area, and without some fundamental breakthroughs, neural networks as currently modeled won't get us there. Wherever "there" ends up. There also seem to be some blind spots in current AI research. There are large missing pieces to the puzzle that we don't yet have and that people who should know better are all too willing to handwave away. One example is that I can give examples of complex behavior in the animal world (honeybee dances are a good one) that would be very hard to replicate using neural networks all by themselves. What that other piece is is currently unspecified.
  • You made me legitimately laugh at least three times with very clever puns, while holding a straight face. I am now subscribed. I agree with most of your points (I would mostly just add that he left out even more considerations). But great content! Thanks!!
  • @OryAlle
    I am unconvinced the data issue is a true blocker. We humans do not need to read the entirety of the internet, so why should an AI model? If the current ones require that, then that's a sign they're simply not correct - the algorithm needs an upgrade.