No Priors: Artificial Intelligence | Machine Learning | Technology | Startups

By Conviction and Pod People


Category: Technology



At this moment of inflection in technology, co-hosts Elad Gil and Sarah Guo talk to the world's leading AI engineers, researchers, and founders about the biggest questions: How far away is AGI? What markets are at risk of disruption? How will commerce, culture, and society change? What's happening at the state of the art in research? "No Priors" is your guide to the AI revolution. Email feedback to

Sarah Guo is a startup investor and the founder of Conviction, an investment firm purpose-built to serve intelligent software, or "Software 3.0," companies. She spent nearly a decade incubating and investing at the venture firm Greylock Partners.

Elad Gil is a serial entrepreneur and startup investor. He was a co-founder of Color Health and Mixer Labs (which was acquired by Twitter). He has invested in over 40 companies now worth $1B or more each, and is also the author of the High Growth Handbook.

Episode Date
How do we go from search engines to answer engines? With Perplexity AI’s Aravind Srinivas and Denis Yarats
With advances in machine learning, the way we search for information online will never be the same. This week on the No Priors podcast, we dive into a startup that aims to be the most trustworthy place to search for information online. Perplexity is a search engine that provides answers to questions in a conversational way, and it hints at what the future of search might look like.

Aravind Srinivas is a co-founder and the CEO of Perplexity. He is a former research scientist at OpenAI and completed his PhD in computer science at the University of California, Berkeley. Denis Yarats is a co-founder and Perplexity's CTO. He has a background in machine learning, having worked as a research scientist at Facebook AI Research and a machine learning engineer at Quora.

No Priors is now on YouTube! Subscribe to the channel on YouTube and like this episode.

Show Links:
Aravind Srinivas on Google Scholar
Denis Yarats on Google Scholar
Perplexity AI
Perplexity AI Discord
AI Chatbots Are Coming to Search Engines. Can You Trust Them? - Scientific American

Sign up for new podcasts every week. Email feedback to
Follow us on Twitter: @Saranormous | @EladGil | @AravSrinivas | @denisyarats

Show Notes:
[1:46] - How Perplexity AI iterates quickly and how the company has changed over time
[5:46] - Approach to hiring and building a fast-paced team
[10:43] - Why you don't need an AI pedigree to transition into AI work or research
[14:01] - Challenges when transitioning from AI research to running a company as CEO and CTO
[16:50] - Why Perplexity only shows answers it can cite
[19:33] - How Perplexity approaches reinforcement learning
[20:49] - Trustworthiness, and whether an answer engine needs a personality
[23:05] - Why answer engines will become their own market segment
[26:38] - Implications of "the era of fewer clicks" for publishers and advertisers
[30:20] - Monetization strategy
[33:20] - Advice for those deciding between academia and startups
Mar 23, 2023
What is the future of search? With Neeva’s Sridhar Ramaswamy
For the first time in decades, web search might be at risk of disruption. Bing has allied with OpenAI to integrate LLMs. Google has committed to launching new products. New startups are emerging. Sridhar Ramaswamy co-founded the challenger AI-powered, private search platform Neeva in 2019. He is a 16-year Google veteran who most recently led the internet's most profitable business as the SVP in charge of Google Ads, Commerce, and Privacy. Sridhar, Elad, and Sarah talk about the challenge of building search, how LLMs have changed the landscape, and how chatbots and "answer services" will affect web publishers.

No Priors is now on YouTube! Subscribe to the channel on YouTube and like this episode.

Show Links:
LinkedIn
Neeva Search
Neeva Gist
Poe by Quora

Sign up for new podcasts every week. Email feedback to
Follow us on Twitter: @Saranormous | @EladGil | @RamaswmySridhar

Show Notes:
[1:32] - Why Sridhar started a private search engine after leaving Google
[11:11] - Information retrieval problems, mapping search queries, and LLMs
[15:25] - Google's and Bing's approaches to search with LLMs
[19:06] - Scale challenges when building a search engine startup
[22:26] - Distribution challenges and why they released Neeva Gist
[24:11] - Why Neeva is a privacy-centric subscription service
[28:25] - The relationship between search and publishers/content creators
[30:16] - Sridhar's predictions on how AI will disrupt current ecosystems
Mar 16, 2023
What is the role of academia in modern AI research? With Stanford Professor Dr. Percy Liang
When AI research is evolving at warp speed and demands significant capital and compute power, what is the role of academia? Dr. Percy Liang, Stanford computer science professor and director of the Stanford Center for Research on Foundation Models, talks about training costs, distributed infrastructure, model evaluation, alignment, and societal impact. Sarah Guo and Elad Gil join Percy at his office to discuss the evolution of research in NLP, why AI developers should aim for superhuman levels of performance, the goals of the Center for Research on Foundation Models, and Together, a decentralized cloud for artificial intelligence.

No Priors is now on YouTube! Subscribe to the channel on YouTube and like this episode.

Show Links:
See Percy's research on Google Scholar
See Percy's bio on Stanford's website
Percy on Stanford's blog: What to Expect in 2023 in AI
Together, a decentralized cloud for artificial intelligence
Foundation AI models GPT-3 and DALL-E need release standards - Protocol
The Time Is Now to Develop Community Norms for the Release of Foundation Models - Stanford

Sign up for new podcasts every week. Email feedback to
Follow us on Twitter: @Saranormous | @EladGil | @PercyLiang

Show Notes:
[1:44] - How Percy got into machine learning research and started the Center for Research on Foundation Models at Stanford
[7:23] - The role of academia and academia's competitive advantages
[13:30] - Research on natural language processing and computational semantics
[27:20] - Smaller-scale architectures that are competitive with transformers
[35:08] - HELM (Holistic Evaluation of Language Models), a project whose goal is to evaluate language models
[42:13] - Together, a decentralized cloud for artificial intelligence
Mar 09, 2023
How AI can make drug discovery fail less, with Daphne Koller from Insitro
Life-saving therapeutics continue to grow more costly to discover. At the same time, recent advances in using machine learning for the life sciences and medicine are extraordinary. Are we on the verge of a paradigm shift in biotech? This week, a pioneer in AI, Daphne Koller, joins Sarah Guo and Elad Gil on the podcast to help us explore that question. Daphne is the CEO and founder of Insitro, a company that applies machine learning to pharma discovery and development, specifically by leveraging "induced pluripotent stem cells." We explain Insitro's approach, why they're focused on generating their own data, why you can't cure schizophrenia in mice, and how to design a culture that supports both research and engineering. Daphne was previously a computer science professor at Stanford, and co-founder and co-CEO of the edtech company Coursera.

Show Links:
Insitro - About
Video: AWS re:Invent 2019 – Daphne Koller of insitro Talks About Using AWS to Transform Drug Development

Sign up for new podcasts every week. Email feedback to
Follow us on Twitter: @Saranormous | @EladGil | @DaphneKoller

Show Notes:
[1:49] - How Daphne combined her biology and tech interests and ran a bifurcated lab at Stanford
[4:34] - Why Daphne resigned an endowed chair at Stanford to build Coursera
[14:14] - How Insitro approaches target identification problems and training data
[18:33] - What pluripotent stem cells are and how Insitro identifies individual neurons
[24:08] - How Insitro operates as an engine for drug discovery and partners to create the drugs themselves
[26:48] - The role of regulations, clinical trials, and disease progression in drug delivery
[33:19] - Building a team and workplace culture that can bridge both the biological and computer sciences
[39:50] - What Daphne is paying attention to in the so-called golden age of machine learning
[43:12] - Advice for leading a startup in edtech and healthtech
Mar 02, 2023
Why the Future of Machine Learning is Open Source with Hugging Face’s Clem Delangue
After starting as a talking-emoji companion, Hugging Face is now an organizing force in the open source AI research ecosystem. Its models are used by companies such as Apple, Salesforce, and Microsoft, and it's working to become the GitHub of ML. This week on the podcast, Sarah Guo and Elad Gil talk to Clem Delangue, co-founder and CEO of Hugging Face. Clem shares how they shifted away from their original product, why every employee at Hugging Face is responsible for community-building, the modalities he's most interested in, and what role open source has in the AI race.

Show Links:
Hugging Face website
The $2 Billion Emoji: Hugging Face Wants To Be Launchpad For A Machine Learning Revolution - Forbes

Sign up for new podcasts every week. Email feedback to
Follow us on Twitter: @Saranormous | @EladGil | @ClementDelangue

Show Notes:
[1:53] - How Clem first became interested in ML, being shouted at by eBay sellers, and foretelling the end of barcode scanning
[3:34] - Early iterations of Hugging Face, trying to make a less boring AI Tamagotchi, and switching direction toward open source tools
[5:36] - Advice for founders considering a change in direction; 30%+ experimentation
[7:39] - First users, ML Twitter, and the approach to community
[10:47] - Enterprise ML maturity and days to production
[12:54] - Open source vs. proprietary models
[15:56] - Main model tasks, architectures, and sizes
[19:12] - Decentralized infrastructure and data opt-out
[24:16] - Hugging Face's business model and GitHub
[28:09] - What Clem is excited about in AI
Feb 23, 2023
Founder Stories: What’s behind the largest commercial autonomous system on earth? With Zipline’s Keller Rinaudo Cliffton
This is a special bonus episode from our Founder Stories series, where entrepreneurs share the story of their startup journey. A delivery by Zipline is the closest thing we have to teleportation. It sounds like science fiction, but Zipline delivers life-saving medical supplies such as blood and vaccines to hospitals, doctors, and people in need around the world with the world's largest autonomous drone network. This week on the podcast, Sarah Guo talks to Keller Rinaudo Cliffton, the co-founder and CEO of Zipline, about building a full-stack business that spans software, hardware, and operations; how a culture of ruthless engineering practicality enabled them to do unlikely things; the state of autopilot in aircraft; their acoustic AI detect-and-avoid system; and why founders should build for users beyond the "golden billion."

Show Links:
Zipline's website
Video: Drone Delivery Start-Up Zipline Beats Amazon, UPS And FedEx To The Punch | CNBC
Keller Rinaudo: How we're using drones to deliver blood and save lives | TED Talk
Meet Romotive: An Ambitious Startup That Blew Our Minds

Sign up for new podcasts every week. Email feedback to
Follow us on Twitter: @Saranormous | @EladGil | @KellerRinaudo

Show Notes:
[2:07] - Keller's earlier projects and the early inspiration for Zipline and transforming logistics
[7:40] - Why Zipline focused on healthcare logistics, and Zipline's early near-death experiences as a company
[15:32] - How Zipline iterated on the hardware while being ruthlessly practical about getting products into customers' hands
[21:52] - The difference between AI and autopilot
[25:51] - How Zipline developed its acoustic-based detect-and-avoid system
[31:30] - Zipline's partnership with Rwanda's public health system
[34:25] - Challenges in the business model
Feb 20, 2023
How can we make sure that everyone has access to AI? Can small models outperform large models? With Stability AI’s Emad Mostaque
AI-generated images have been everywhere over the past year, but one company has fueled an explosive developer ecosystem around large image models: Stability AI. Stability builds open AI tools with a mission to improve humanity. Stability AI is best known for Stable Diffusion, the AI model that generates images from a user's natural language prompt. But the company is also working to advance models in natural language, voice, video, and biology. This week on the podcast, Emad Mostaque joins Sarah Guo and Elad Gil to talk about how this barely one-year-old, London-based company has changed the AI landscape, scaling laws, progress in different modalities, frameworks for AI safety, and why the future of AI is open.

Show Links:
Stability AI
Stable Diffusion V2 on Hugging Face

Sign up for new podcasts every week. Email feedback to
Follow us on Twitter: @Saranormous | @EladGil | @EMostaque

Show Notes:
[2:00] - Emad's background as one of the largest investors in video games and artificial intelligence
[7:24] - Open-source efforts in AI
[13:09] - Stability AI as the only independent multimodal AI company in the world
[15:28] - Computational biology, medical information, and medical models
[23:29] - Pace of adoption
[26:31] - AGI versus intelligence augmentation
[31:38] - Stability AI's business model
[37:44] - AI safety
Feb 16, 2023
What does AI-powered content creation look like? with Runway ML’s Cristobal Valenzuela
For a long time, AI-generated images and video felt like a fun toy: cool, but not something that would bring value to professional content creators. Now we are at the exciting moment where machine learning tools have the power to unlock more creative ideas. This week on the podcast, Sarah Guo and Elad Gil talk to Cristobal Valenzuela, a technologist, artist, and software developer. He's also the CEO and co-founder of Runway, a web-based tool that allows creatives to use machine learning to generate and edit video. You've probably already seen Runway's work in action on The Late Show with Stephen Colbert and in the feature film Everything Everywhere All at Once.

Show Links:
Watch Cris Valenzuela's 2018 thesis presentation at New York University's ITP program.
Read how Runway is used on The Late Show and in Everything Everywhere All at Once on the Runway blog.

Sign up for new podcasts every week. Email feedback to
Follow us on Twitter: @Saranormous | @EladGil | @c_valenzuelab

Show Notes:
[1:50] - Cris's background and how he doesn't see barriers between art and machine learning
[6:46] - How Runway works as a tool
[8:36] - The origins and early iterations of Runway
[12:22] - Product sequencing and roadmapping in a fast-growing space
[15:43] - Runway as an applied research company
[19:10] - Common pitfalls for founders to avoid
[22:35] - How Runway structures teams for effective collaboration
[24:22] - Learnings from how Runway built its Green Screen product
[28:01] - Building a long-term, sustainable business
[32:34] - Finding product-market fit
[36:34] - The influence of AI tools on art as an artistic movement
Feb 09, 2023
This is “No Priors”
AI is transforming our future, but what does that really mean? In ten years, will humans be forced to please our AGI overlords, or will we have unlocked unlimited capacity for human potential? Questions like these are why Sarah Guo and Elad Gil started this new podcast, named No Priors. In each episode, Sarah and Elad talk with the leading engineers, researchers, and founders in AI, across the stack. We'll talk about the technical state of the art, how that impacts business, and get them to predict what's next. Follow the podcast wherever you listen so you never miss an episode. We'll see you next week with a new episode. Email feedback to
Feb 02, 2023
The bot Cicero can collaborate, scheme, and build trust with humans. What does this mean for the next frontier of AI? With Noam Brown, Research Scientist at Meta
AI can beat top players in chess, poker, and, now, Diplomacy. In November 2022, a bot named Cicero demonstrated mastery in this game, which requires natural language negotiation and cooperation with humans. In short, Cicero can lie, scheme, build trust, pass as human, and ally with humans. So what does that mean for the future of AGI? This week's guest is research scientist Noam Brown. He co-created Cicero on the Meta Fundamental AI Research team and is considered one of the smartest engineers and researchers working in AI today. Co-hosts Sarah Guo and Elad Gil talk to Noam about why all research should be high-risk, high-reward; the timeline until we have AGI agents negotiating with humans; why scaling isn't the only path to breakthroughs in AI; and whether the Turing Test is still relevant.

Show Links:
More about Noam Brown
Read the research article about Cicero (Diplomacy) published in Science.
Read the research article about Libratus (heads-up poker) published in Science.
Read the research article about Pluribus (multiplayer poker) published in Science.
Watch the AlphaGo documentary.
Read "How Smart Are the Robots Getting?" by New York Times reporter Cade Metz

Sign up for new podcasts every week. Email feedback to
Follow us on Twitter: @Saranormous | @EladGil | @Polynoamial

Show Notes:
[1:43] - What sparked Noam's interest in researching AI that could defeat games
[6:00] - How AlexNet and AlphaGo changed the landscape of AI research
[8:09] - Why Noam chose Diplomacy as the next game to work on after poker
[9:51] - What Diplomacy is and why the game was so challenging for an AI bot
[14:50] - Algorithmic breakthroughs and the significance of AI bots that win in No-Limit Texas Hold'em poker
[23:29] - The Nash equilibrium and optimal play in poker
[24:53] - How Cicero interacted with humans
[27:58] - The relevance and usefulness of the Turing Test
[31:05] - The data set used to train Cicero
[31:54] - Bottlenecks for AI researchers and challenges with scaling
[40:10] - The next frontier in researching games for AI
[42:55] - Domains that humans will still dominate, and applications for AI bots in the real world
[48:13] - Reasoning challenges with AI
Feb 02, 2023