The $1 trillion dollar question for AGI | Aravind Srinivas and Lex Fridman
20,143
Published 2024-06-22
Please support this podcast by checking out our sponsors:
- Cloaked: cloaked.com/lex and use code LexPod to get 25% off
- ShipStation: shipstation.com/lex and use code LEX to get 60-day free trial
- NetSuite: netsuite.com/lex to get free product tour
- LMNT: drinkLMNT.com/lex to get free sample pack
- Shopify: shopify.com/lex to get $1 per month trial
- BetterHelp: betterhelp.com/lex to get 10% off
GUEST BIO:
Aravind Srinivas is CEO of Perplexity, a company that aims to revolutionize how we humans find answers to questions on the Internet.
PODCAST INFO:
Podcast website: lexfridman.com/podcast
Apple Podcasts: apple.co/2lwqZIr
Spotify: spoti.fi/2nEwCF8
RSS: lexfridman.com/feed/podcast/
Full episodes playlist: • Lex Fridman Podcast
Clips playlist: • Lex Fridman Podcast Clips
SOCIAL:
- Twitter: twitter.com/lexfridman
- LinkedIn: www.linkedin.com/in/lexfridman
- Facebook: www.facebook.com/lexfridman
- Instagram: www.instagram.com/lexfridman
- Medium: medium.com/@lexfridman
- Reddit: reddit.com/r/lexfridman
- Support on Patreon: www.patreon.com/lexfridman
All Comments (19)
-
Full podcast episode: https://www.youtube.com/watch?v=e-gwvmhyU7A Lex Fridman podcast channel: youtube.com/lexfridman Guest bio: Aravind Srinivas is CEO of Perplexity, a company that aims to revolutionize how we humans find answers to questions on the Internet.
-
This guy is smart and balanced; that's rare.
-
Creating AGI that can create new knowledge and make new decisions is the exact thing that can make it dangerous.
-
You won't have transformationally capable AIs until they get AIs to learn in more ways than just language. A human learns language after it has already learned a lot from looking around, mimicking its mother's actions, and figuring out how to walk without searching websites for written content about walking. Language is a way to record insight; it's rarely the source of insight. Sometimes learning a new language, and then thinking through the lens of that new language after a lifetime of thinking through the lens of another, brings insight. But that's a deeper thing than studying recorded text. Some of these companies need to find a way to get these AIs to watch the world like a human baby: find a way for the AI to absorb what it "sees" around itself and replicate it, like walking. Iterate, discover they're doing it wrong, then try again, over and over.
-
I think ultimately it needs to be embedded in a design, formula, code, or physical object and then iterated upon. Ultimately that feedback with reality is the key part. A lot of solutions seem like a multi-stage lock with different depth-search probabilities; once it all comes together, it unlocks the lock, and then the optimization step occurs. A solution is an artifact.
-
If the range is finite but the granularity within that range is analog and quasi-infinite, which questions make you learn to learn what you do not know that you do not know?
-
It seems that nobody can talk about tech anymore without, in the same breath, mentioning cap table, growth potential, market cap, valuation, salary, top talent, equity, solvency, headhunting: pump pump, money money money. It used to be that way with biotech. They struggled with that problem for a while. Remember when they were the belle of the ball? You are now just as likely to get a Wall Street bro as a pioneering AI researcher on these podcasts. Many times one person can wear both masks.
-
AI cannot do tetration on the hardware side!
-
As someone on the right, I have been very interested to find out where COVID started. It was the people on the left I know who didn't want to know. If an AI told me something that makes sense, I'm willing to believe it.
-
✨ Summary:
- Regulation should focus on AI application and access, not model weights.
- Future breakthroughs in AI are likely incremental rather than a single transformative moment.
- Higher intelligence in AI would involve algorithmic truth and logical deductions.
- A significant milestone for AI would be providing new, impactful insights on complex issues, comparable to human PhD-level research.
- Real-world applications include generating new scientific knowledge, like advancements in physics or medicine.
- AI's ability to generate new truths could transform fields, but humans might quickly take these advances for granted.
- Concerns about AI insights being politicized or dismissed due to biases.
- Achieving AI that provides novel perspectives requires avoiding hard-coded curiosity and ensuring diverse worldviews in AI systems.
- A future AI milestone would be an AI serving as an advanced assistant to top thinkers, providing transformative insights.
-
DARPA probably has classified AI tech decades ahead of anything the tech bros are talking about. Seriously. National security and all that.
-
ChatGPT is better than AGI
-
People with lots of money always get the most advanced technology first. AI is no different. The problem is what those with a lot of money will do with AI. Do people with huge amounts of money today use their resources to help everyone? No. Will that change with AI? No. That's the problem.
-
God, this guy's so monotonic...
-
Contradiction? Give AI the 10,000 failed coffee-stain experiments: each computer's physical innards, each server's physical innards, the environment, etc. How can 10,000 academic attempts fail? One failed team humbled me and did the work on paper instead of on the device. They succeeded, and I bet they don't know why! If I'm right, in the imaginary index it's various signals, an accumulation, like charging a battery, but it can be glass, oil... Then boom, too much. Now it's back in balance. A Faraday cage makes things worse.
-
We already have AGI
-
gpt 4 is agi