Generative AI Has Peaked? | Prime Reacts

Published 2024-05-23
Recorded live on twitch, GET IN

Reviewed Video
   • Has Generative AI Already Peaked? - C...  
By: Computerphile (@computerphile)

My Stream
twitch.tv/ThePrimeagen

Best Way To Support Me
Become a backend engineer. It's my favorite site
boot.dev/?promo=PRIMEYT

This is also the best way to support me: support yourself by becoming a better backend engineer.

MY MAIN YT CHANNEL: well-edited engineering videos
youtube.com/ThePrimeagen

Discord
discord.gg/ThePrimeagen


Have something for me to read or react to?: www.reddit.com/r/ThePrimeagenReact/

Kinesis Advantage 360: bit.ly/Prime-Kinesis

Hey, I am sponsored by Turso, an edge database. I think they are pretty neat. Give them a try for free, and if you want you can get a decent amount off (the free tier is the best; better than PlanetScale or any other).
turso.tech/deeznuts

All comments (21)
  • @JGComments
    Devs: "Solve this problem." AI: "10 million examples, please."
  • @drditup
    If only all Windows users would start taking pictures of everything they do so the AI algorithms can get more data. Maybe like a screenshot every few seconds. I think I recall something like that
  • @MrSenserus
    The computerphile guys are my uni lecturers atm and for the coming year. It's pretty cool to see this.
  • @NunTheLass
    Isn't it crazy that you hear more people worry about AI polluting the internet for training future AIs than about the fact that it is polluting the internet for, you know, YOU AND ME?
  • @amesasw
    One major problem: if I ask a person how to do something that none of us knows the solution to, they may be able to theorize a solution, but they will often tell you they are guessing and not 100% sure about some parts of it. ChatGPT can't really theorize for me, or tell me that it is not sure of an answer but is proposing a solution based on its understanding or internal model.
  • @granyte
    "steer me into my own bad ideas at an incredible speed" LMAO this is perfect, it's exactly what it does when it even works at all. I don't know if my skills have improved that much since GPT-4 came out or what, but it feels like Copilot and ChatGPT have become way dumber since launch.
  • @Afro__Joe
    AI is becoming like ice cream to me, good every once in a while, but I get sick of too much of it. With Samsung trying to shove it into everything in my phone, MS trying to shove it into everything PC-related, Google pushing it at every turn, and so on... ugh.
  • @MrKlarthums
    There's plenty of software that has simultaneously improved while having an entirely degraded user experience. If companies feel that it makes them more money, they'll absolutely do it. LLMs will probably be part of that.
  • It can do anything except tell you that it doesn’t know how to do something.
  • Modern image generators can do surprisingly well even with somewhat weird prompts such as "Minotaur centaur with unicorn horn in the head, steampunk style, award winning photograph" or "Minotaur centaur with unicorn horn in the head, transformers style, arc reactor, award winning photograph". Even "A transformers robot that looks like minotaur centaur, award winning photograph, dramatic lighting" outputs acceptable results. However, ask one for "a photograph of Boeing 737 MAX with cockpit windows replaced with cameras" and it will totally fail. The latter case has far fewer possible implementations, and this exactness makes it fail.
  • @SL3DApps
    It’s crazy how OpenAI’s only way to stay relevant in this market vs big tech such as Google and MS is to sell the hype that AI will not peak in the near future. Yet they are the company everyone is relying on to say whether AI has or has not peaked… why would they ever admit to anything that could damage their own company?
  • The young ones may not remember the VR craze of the late 90s and early 00s, but us oldkins do. AI feels like that to me.
  • @GigaFro
    Just last year, I was sitting in a makeshift tent in an office in downtown San Francisco, attending a Gen AI meetup. The event was a mix of investors and developers, each taking turns to share their projections on the future progress of AI. Most of the answers were filled with exponential optimism, and I found myself dumbfounded by the sheer enthusiasm. When it was my turn, I projected that we were peaking in terms of model performance, and I was certain I was about to be ostracized for my view. That day I learned that as soon as hype enters the room, critical thinking goes out the window - even for the most intelligent minds.
  • @bwhit7919
    Most people misunderstand when they hear AI follows a “power law”. If you read OpenAI’s paper on scaling laws, you need a 10x increase in both compute and data to get roughly a 0.3 reduction in the loss. In other words, you need exponentially more data to keep making the models better; it’s not that the models are getting exponentially better.
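    The point in that comment can be sketched numerically. This is an illustrative power-law loss curve, not the paper's fitted constants; `loss` and the `alpha` value here are hypothetical stand-ins chosen just to show the shape of the curve:

    ```python
    def loss(compute: float, c_c: float = 1.0, alpha: float = 0.05) -> float:
        """Power-law loss curve: L(C) = (C_c / C) ** alpha.

        alpha and c_c are made-up illustrative constants; the real
        fitted values come from the scaling-laws paper.
        """
        return (c_c / compute) ** alpha

    # Each 10x in compute shrinks the loss by the same constant
    # *factor* (10 ** -alpha), so the absolute improvement per 10x
    # keeps getting smaller -- diminishing returns, not exponential gains.
    for exp in range(1, 5):
        c = 10.0 ** exp
        print(f"compute 10^{exp}: loss = {loss(c):.3f}")
    ```

    Running the sketch shows the gap between successive 10x steps shrinking each time, which is the "exponentially more data for the same improvement" observation in the comment above.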
  • We didn't even go "from nothing to something" - current LLMs are just a marked spike/breakthrough in the capability of machine learning that's been around for ages. I think we'll still see huge improvement in the technology that enabled that breakthrough, but I doubt there will be a "next level" - one that's not just a tech company branding a new product as such - for a good few years.
  • @TomNook.
    I hate how AI has been forced into everything, just like crypto and NFTs a couple of years ago
  • @hamm8934
    Read up on the “frame problem” and the “grounding problem”. This is old news and has been known for decades. Engineers and venture capital just don't care because it's not in their interest. Edit: also Wittgenstein's work on family resemblance and language games. Edit2: I should clarify that I am referring to the epistemological interpretation of the frame problem, not the classical AI interpretation. That is, the concern of an infinite regress arising from an inability to explicitly account for non-effects when defining a problem space; this is specifically at the level of computation, not representation. For example, if an agent is told "the spoon is next to the plate", how are all of the other non-effects, like a table, placemat, chair, room, etc., successfully transmitted and understood, while irrelevant inaccuracies like a swimming pool, cows, cars, etc. are omitted and not included in the transmission of information? Fodor, Dennett, McDermott, and Dreyfus have plenty of canonical citations and works articulating this problem.
  • @tonym4953
    8:20 OpenAI is doing the same thing with the consumer version of ChatGPT. They are essentially charging users to train their model. Genius and very cheeky!
  • @snarkyboojum
    The main issue is that the people responsible for the fundamental approaches used in deep learning today have never wrestled with the problem of induction. They need to read the classic treatment by Hume and then follow up with Popper. Humans don't use induction to reason about the world. It's shocking to me that otherwise highly educated people have never read basic epistemology. Narrow education is to blame, really.