80,000 Hours Podcast with Rob Wiblin

By The 80000 Hours team

Description

A show about the world's most pressing problems and how you can use your career to solve them. Subscribe by searching for '80,000 Hours' wherever you get podcasts. Produced by Keiran Harris. Hosted by Rob Wiblin, Director of Research at 80,000 Hours.

#35 - Tara Mac Aulay on the audacity to fix the world without asking permission
01:22:46
<em>"You don't need permission. You don't need to be allowed to do something that's not in your job description. If you think that it's gonna make your company or your organization more successful and more efficient, you can often just go and do it."</em><br><br> How broken is the world? How inefficient is a typical organisation? Looking at Tara Mac Aulay’s life, the answer seems to be ‘very’.<br><br> At 15 she took her first job - an entry-level position at a chain restaurant. Rather than accept her place, Tara took it on herself to massively improve the store’s shambolic staff scheduling and inventory management. After cutting staff costs 30% she was quickly promoted, and at 16 sent in to overhaul dozens of failing stores in a final effort to save them from closure.<br><br> That’s just the first in a startling series of personal stories that take us to a hospital drug dispensary where pharmacists are wasting a third of their time, a chemotherapy ward in Bhutan that’s killing its patients rather than saving lives, and eventually the Centre for Effective Altruism, where Tara becomes CEO and leads it through start-up accelerator Y Combinator.<br><br> In this episode Tara shows how the ability to do practical things, avoid major screw-ups, and design systems that scale, is both rare and precious. <br><br> <a href="https://80000hours.org/podcast/episodes/tara-mac-aulay-operations-mindset/"><b>Full transcript, key quotes and links to learn more.</b></a><br><br> People with an operations mindset spot failures others can't see and fix them before they bring an organisation down. This kind of resourcefulness can transform the world by making possible critical projects that would otherwise fall flat on their face.<br><br> But as Tara's experience shows they need to figure out what actually motivates the authorities who often try to block their reforms.<br><br> We explore how people with this skillset can do as much good as possible, what 80,000 Hours got wrong in our article <a href="https://80000hours.org/articles/operations-management/">'Why operations management is one of the biggest bottlenecks in effective altruism’</a>, as well as:<br><br> * Tara’s biggest mistakes and how to deal with the delicate politics of organizational reform.<br> * How a student can save a hospital millions with a simple spreadsheet model.<br> * The sociology of Bhutan and how medicine in the developing world often makes things worse rather than better.<br> * What most people misunderstand about operations, and how to tell if you have what it takes.<br> * And finally, operations jobs people should consider applying for, such as those open now at the <a href="https://www.centreforeffectivealtruism.org/careers/">Centre for Effective Altruism</a>.<br><br> <b>The 80,000 Hours podcast is produced by Keiran Harris.</b><br><br> <em>Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type '80,000 Hours' into your podcasting app.</em><img src="http://feeds.feedburner.com/~r/80000HoursPodcast/~4/OLehnNHk9lk" height="1" width="1" alt=""/>
Jun 21, 2018
Rob Wiblin on the art/science of a high impact career
01:31:34
Today's episode is a cross-post of an interview I did with The Jolly Swagmen Podcast which came out this week. I recommend regular listeners skip to 24 minutes in to avoid hearing things they already know. Later in the episode I talk about my contrarian views, utilitarianism, how 80,000 Hours has changed and will change in the future, where I think EA is performing worst, how to use social media most effectively, and whether or not effective altruism is any sacrifice.<br><br> <a href="https://thejollyswagmen.com/new-blog/robwiblin">Blog post of the episode to share, including a list of topics and links to learn more.</a><br><br> "Most people want to help others with their career, but what’s the best way to do that? Become a doctor? A politician? Work at a non-profit? How can any of us figure out the best way to use our skills to improve the world?<br><br> Rob Wiblin is the Director of Research at 80,000 Hours, an organisation founded in Oxford in 2011, which aims to answer just this question and help talented people find their highest-impact career path. He hosts a popular podcast on ‘the world’s most pressing problems and how you can use your career to solve them’.<br><br> After seven years of research, the 80,000 Hours team recommends against becoming a teacher, or a doctor, or working at most non-profits. And they claim their research shows some common careers do 10 or 100x as much good as others.<br><br> 80,000 Hours was one of the organisations that kicked off the effective altruism movement, was a Y Combinator-backed non-profit, and has already shifted over 80 million career hours through its advice.<br><br> Joe caught up with Rob in Berkeley, California, to discuss how 80,000 Hours assesses which of the world’s problems are most pressing, how you can build career capital and succeed in any role, and why you could easily save more lives than a doctor - if you think carefully about your impact."
Jun 08, 2018
#34 - We use the worst voting system that exists. Here's how Aaron Hamlin is going to fix it.
02:18:49
In 1991 Edwin Edwards won the Louisiana gubernatorial election. In 2001, he was found guilty of racketeering and received a 10-year invitation to Federal prison. The strange thing about that election? By 1991 Edwards was already notorious for his corruption. Actually, that’s not it.<br><br> The truly strange thing is that Edwards was clearly the <em>good guy</em> in the race. How is that possible?<br><br> His opponent was former Ku Klux Klan Grand Wizard David Duke.<br><br> How could Louisiana end up having to choose between a criminal and a Nazi sympathiser?<br><br> It’s not like they lacked other options: the state’s moderate incumbent governor Buddy Roemer ran for re-election. Polling showed that Roemer was massively preferred to both the career criminal and the career bigot, and would easily win a head-to-head election against either.<br><br> Unfortunately, in Louisiana every candidate from every party competes in the first round - a so-called ‘jungle primary’ - and the top two then go on to a second. Vote splitting squeezed out the middle, and meant that Roemer was eliminated in the first round.<br><br> Louisiana voters were left with only terrible options, in a run-off election mostly remembered for the proliferation of bumper stickers reading <em>“Vote for the Crook. It’s Important.”</em><br><br> We could look at this as a cultural problem, exposing widespread enthusiasm for bribery and racism that will take generations to overcome. But according to Aaron Hamlin, Executive Director of The Center for Election Science (CES), there’s a simple way to make sure we never have to elect someone hated by more than half the electorate: change how we vote.<br><br> He advocates an alternative voting method called <em>approval voting</em>, in which you can vote for as many candidates as you want, not just one. That means that you can always support your honest favorite candidate, even when an election seems like a choice between the lesser of two evils.<br><br> <a href="https://80000hours.org/podcast/episodes/aaron-hamlin-voting-reform/"><b>Full transcript, links to learn more, and summary of key points.</b></a><br><br> <a href="https://www.eventbrite.com/o/the-center-for-election-science-14642817210"><b>If you'd like to meet Aaron he's doing events for CES in San Francisco, DC, Philadelphia, New York and Brooklyn over the next two weeks - RSVP here.</b></a><br><br> While it might not seem sexy, this single change could transform politics. <a href="https://en.wikipedia.org/wiki/Approval_voting">Approval voting</a> is adored by voting researchers, who regard it as the best simple voting system available.<br><br> Which do they regard as unquestionably the worst? <em>First-past-the-post</em> - precisely the disastrous system used and exported around the world by the US and UK.<br><br> Aaron has a practical plan to spread approval voting across the US using ballot initiatives - and it just might be our best shot at making politics a bit less unreasonable.<br><br> <a href="https://electology.org/what-we-do">The Center for Election Science</a> is a U.S. non-profit which aims to fix broken government by helping the world adopt smarter election systems.
They recently received <a href="https://www.openphilanthropy.org/giving/grants/the-center-for-election-science-general-support">a $600,000 grant</a> from the Open Philanthropy Project to scale up their efforts.<br><br> <b>Get this episode now by subscribing to our podcast on the world’s most pressing problems and how to solve them: type <em>80,000 Hours</em> into your podcasting app. Or check out the transcript below.</b>
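
The mechanics are easy to demonstrate in code. Here is a minimal sketch contrasting Louisiana's jungle-primary runoff with approval voting; the ballot counts and approval sets are invented for illustration, not 1991 polling data.

```python
# Illustrative only: invented ballots, not real 1991 polling data.
from collections import Counter

# Each ballot: (ranking best-to-worst, set of candidates the voter approves of).
ballots = (
      [(("Edwards", "Roemer", "Duke"), {"Edwards", "Roemer"})] * 34  # Edwards base
    + [(("Duke", "Roemer", "Edwards"), {"Duke", "Roemer"})] * 32     # Duke base
    + [(("Roemer", "Edwards", "Duke"), {"Roemer", "Edwards"})] * 18  # moderates
    + [(("Roemer", "Duke", "Edwards"), {"Roemer", "Duke"})] * 12     # anti-Edwards moderates
)

# Jungle primary: everyone votes for their first choice; the top two advance.
first_round = Counter(ranking[0] for ranking, _ in ballots)
top_two = [candidate for candidate, _ in first_round.most_common(2)]

# Runoff: each voter backs whichever finalist sits higher in their ranking.
runoff = Counter(min(top_two, key=ranking.index) for ranking, _ in ballots)

# Approval voting: one ballot can support several candidates at once.
approvals = Counter(c for _, approved in ballots for c in approved)

print("First round:", dict(first_round))                   # Roemer squeezed out
print("Runoff winner:", runoff.most_common(1)[0][0])       # 'the Crook' wins
print("Approval winner:", approvals.most_common(1)[0][0])  # broadly acceptable Roemer
```

The same ballots produce opposite winners: vote splitting eliminates the consensus candidate under the runoff system, while approval voting lets the overlap in support show.
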
Jun 01, 2018
#33 - Dr Anders Sandberg on what if we ended ageing, solar flares & the annual risk of nuclear war
01:25:06
Joseph Stalin had a life-extension program dedicated to making himself immortal. What if he had succeeded? <br><br> According to our last guest, <a href="https://80000hours.org/podcast/episodes/bryan-caplan-case-for-and-against-education/?utm_campaign=podcast__anders-sandberg-2&utm_source=80000+Hours+Podcast&utm_medium=podcast">Bryan Caplan</a>, there’s an 80% chance that Stalin would still be ruling Russia today. Today’s guest disagrees.<br><br> Like Stalin, he has eyes for his own immortality - including an insurance plan that will cover the cost of cryogenically freezing himself after he dies - and thinks the technology to achieve it might be around the corner. <br><br> Fortunately for humanity though, that guest is probably one of the nicest people on the planet: Dr Anders Sandberg of Oxford University. <br><br> <a href="https://80000hours.org/podcast/episodes/anders-sandberg-extending-life/?utm_campaign=podcast__anders-sandberg-2&utm_source=80000+Hours+Podcast&utm_medium=podcast"><b>Full transcript of the conversation, summary, and links to learn more.</b></a><br><br> The potential availability of technology to delay or even stop ageing means this disagreement matters, so he has been trying to model what would really happen if both the very best and the very worst people in the world could live forever - among many other questions. <br><br> Anders, who studies low-probability high-stakes risks and the impact of technological change at the Future of Humanity Institute, is the first guest to appear twice on the 80,000 Hours Podcast and might just be the most interesting academic at Oxford. <br><br> His research interests include more or less everything, and bucking the academic trend towards intense specialization has earned him a devoted fan base. <br><br> <b>Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type '80,000 Hours' into your podcasting app.</b> <br><br> Last time we asked him <a href="https://80000hours.org/podcast/episodes/anders-sandberg-fermi-paradox/?utm_campaign=podcast__anders-sandberg-2&utm_source=80000+Hours+Podcast&utm_medium=podcast">why we don’t see aliens, and how to most efficiently colonise the universe</a>. In today’s episode we ask about Anders’ other recent papers, including:<br><br> * Is it worth the money to freeze your body after death in the hope of future revival, like Anders has done? <br> * How much is our perception of the risk of nuclear war biased by the fact that we wouldn’t be alive to think about it had one happened?<br> * If biomedical research lets us slow down ageing, would culture stagnate under the crushing weight of centenarians?<br> * What long-shot drugs can people take in their 70s to stave off death?<br> * Can science extend human (waking) life by cutting our need to sleep?<br> * How bad would it be if a solar flare took down the electricity grid? Could it happen?<br> * If you’re a scientist and you discover something exciting but dangerous, when should you keep it a secret and when should you share it?<br> * Will lifelike robots make us more inclined to dehumanise one another? <br><br> <em>The 80,000 Hours Podcast is produced by Keiran Harris.</em>
May 29, 2018
#32 - Bryan Caplan on whether his Case Against Education holds up, totalitarianism, & open borders
02:25:33
Bryan Caplan’s claim in *The Case Against Education* is striking: education doesn’t teach people much, we use little of what we learn, and college is mostly about trying to seem smarter than other people - so the government should slash education funding.<br><br> It’s a dismaying - almost profane - idea, and one people are inclined to dismiss out of hand. But having read the book, I have to admit that Bryan can point to a surprising amount of evidence in his favour.<br><br> After all, imagine this dilemma: you can have either a Princeton education without a diploma, or a Princeton diploma without an education. Which is the bigger benefit of college - learning or convincing people you’re smart? It’s not so easy to say.<br><br> For this interview, I searched for the best counterarguments I could find and challenged Bryan on what seem like his weakest or most controversial claims.<br><br> Wouldn’t defunding education be especially bad for capable but low income students? If you reduced funding for education, wouldn’t that just lower prices, and not actually change the number of years people study? Is it really true that students who drop out in their final year of college earn about the same as people who never go to college at all?<br><br> What about studies that show that extra years of education boost IQ scores? And surely the early years of primary school, when you learn reading and arithmetic, *are* useful even if college isn’t.<br><br> I then get his advice on who should study, what they should study, and where they should study, if he’s right that college is mostly about separating yourself from the pack. <br><br> <a href="https://80000hours.org/podcast/episodes/bryan-caplan-case-for-and-against-education/?utm_campaign=podcast__bryan-caplan&utm_source=80000+Hours+Podcast&utm_medium=podcast"><b>Full transcript, links to learn more, and summary of key points.</b></a><br><br> We then venture into some of Bryan’s other unorthodox views - like that immigration restrictions are a human rights violation, or that we should worry about the risk of global totalitarianism.<br><br> Bryan is a Professor of Economics at George Mason University, and a blogger at *EconLog*. He is also the author of *Selfish Reasons to Have More Kids: Why Being a Great Parent is Less Work and More Fun Than You Think*, and *The Myth of the Rational Voter: Why Democracies Choose Bad Policies*.<br><br> <b>Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type *80,000 Hours* into your podcasting app.</b><br><br> In this lengthy interview, Rob and Bryan cover:<br><br> * How worried should we be about China’s new citizen ranking system as a means of authoritarian rule?<br> * How will advances in surveillance technology impact a government’s ability to rule absolutely?<br> * Does more global coordination make us safer, or more at risk? <br> * Should the push for open borders be a major cause area for effective altruism? <br> * Are immigration restrictions a human rights violation?<br> * Why aren’t libertarian-minded people more focused on modern slavery?<br> * Should altruists work on criminal justice reform or reducing land use regulations?<br> * What’s the greatest art form: opera, or Nicki Minaj? 
<br> * What are the main implications of Bryan’s thesis for society?<br> * Is elementary school more valuable than university?<br> * What does Bryan think are the best arguments against his view?<br> * Is it possible that we wouldn’t want education to accurately predict workforce value?<br> * Do years of education affect political affiliation?<br> * How do people really improve themselves and their circumstances?<br> * Who should and who shouldn’t do a masters or PhD?<br> * The value of teaching foreign languages in school<br> * Are there some skills people can develop that have wide applicability?<br><br> <em>The 80,000 Hours podcast is produced by Keiran Harris.</em>
May 22, 2018
#31 - Prof Dafoe on defusing the political & economic risks posed by existing AI capabilities
00:48:07
The debate around the impacts of artificial intelligence often centres on ‘superintelligence’ - a general intellect that is much smarter than the best humans, in practically every field.<br><br> But according to Allan Dafoe - Assistant Professor of Political Science at Yale University - even if we stopped at today's AI technology and simply collected more data, built more sensors, and added more computing capacity, extreme systemic risks could emerge, including:<br><br> * Mass labor displacement, unemployment, and inequality;<br> * The rise of a more oligopolistic global market structure, potentially moving us away from our liberal economic world order;<br> * Imagery intelligence and other mechanisms for revealing most of the ballistic missile-carrying submarines that countries rely on to be able to respond to nuclear attack;<br> * Ubiquitous sensors and algorithms that can identify individuals through face recognition, leading to universal surveillance;<br> * Autonomous weapons with an independent chain of command, making it easier for authoritarian regimes to violently suppress their citizens.<br><br> Allan is Co-Director of the Governance of AI Program at the Future of Humanity Institute within Oxford University. His goal has been to understand the causes of world peace and stability, which in the past has meant studying why war has declined, the role of reputation and honor as drivers of war, and the motivations behind provocation in crisis escalation.<br><br> <a href="https://80000hours.org/podcast/episodes/allan-dafoe-politics-of-ai/?utm_campaign=podcast__allan-dafoe&utm_source=80000+Hours+Podcast&utm_medium=podcast"><b>Full transcript, links to learn more, and summary of key points.</b></a><br><br> His current focus is helping humanity safely navigate the invention of advanced artificial intelligence.<br><br> I ask Allan:<br><br> * What are the distinctive characteristics of artificial intelligence from a political or international governance point of view?<br> * Is Allan’s work just a continuation of previous research on transformative technologies, like nuclear weapons?<br> * How can AI be well-governed?<br> * How should we think about the idea of arms races between companies or countries?<br> * What would you say to people skeptical about the importance of this topic?<br> * How urgently do we need to figure out solutions to these problems? When can we expect artificial intelligence to be dramatically better than today?<br> * What are the most urgent questions to deal with in this field?<br> * What can people do if they want to get into the field?<br> * Is there anything unusual that people can look for in themselves to tell if they're a good fit to do this kind of research?<br><br> *The 80,000 Hours podcast is produced by Keiran Harris.*
May 18, 2018
#30 - Dr Eva Vivalt on how little social science findings generalize from one study to another
02:01:45
If we have a study on the impact of a social program in a particular place and time, how confident can we be that we’ll get a similar result if we study the same program again somewhere else?<br /><br /> Dr Eva Vivalt is a lecturer in the Research School of Economics at the Australian National University. She compiled a huge database of impact evaluations in global development - including 15,024 estimates from 635 papers across 20 types of intervention - to help answer this question.<br><br> Her finding: not confident at all.<br><br> The typical study result differs from the average effect found in similar studies so far by almost 100%. That is to say, if all existing studies of a particular education program find that it improves test scores by 10 points, the next result is as likely to be negative or greater than 20 points as it is to be between 0 and 20 points.<br><br> She also observed that results from smaller studies done with an NGO - often pilot studies - were more likely to look promising. But when governments tried to implement scaled-up versions of those programs, their performance would drop considerably. <br /><br /> For researchers hoping to figure out what works and then take those programs global, these failures of generalizability and ‘external validity’ should be disconcerting.<br /><br /> Is ‘evidence-based development’ writing a cheque its methodology can’t cash? Should this make us invest less in empirical research, or more to get actually reliable results?<br /><br /> Or as some critics say, is interest in impact evaluation distracting us from more important issues, like national or macroeconomic reforms that can’t be easily trialled?<br /><br /> We discuss this as well as Eva’s other research, including Y Combinator’s basic income study, where she is a principal investigator.<br /><br /> <a href="https://80000hours.org/podcast/episodes/eva-vivalt-social-science-generalizability/?utm_campaign=podcast__eva-vivalt&utm_source=80000+Hours+Podcast&utm_medium=podcast" rel="nofollow" target="_blank">Full transcript, links to related papers, and highlights from the conversation.</a><br /><br /> <b>Links mentioned at the start of the show:</b><br /> * <a href="https://80000hours.org/job-board/" rel="nofollow" target="_blank">80,000 Hours Job Board</a><br /> * <a href="https://www.surveymonkey.com/r/BX56XKV" rel="nofollow" target="_blank">2018 Effective Altruism Survey</a><br /><br /> <b>Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type '80,000 Hours' into your podcasting app.</b><br /><br /> Questions include:<br /><br /> * What is the YC basic income study looking at, and what motivates it?<br /> * How do we get people to accept clean meat?<br /> * How much can we generalize from impact evaluations?<br /> * How much can we generalize from studies in development economics?<br /> * Should we be running more or fewer studies?<br /> * Do most social programs work or not?<br /> * The academic incentives around data aggregation<br /> * How much can impact evaluations inform policy decisions?<br /> * How often do people change their minds?<br /> * Do policy makers update too much or too little in the real world?<br /> * How good or bad are the predictions of experts?
How does that change when looking at individuals versus the average of a group?<br /> * How often should we believe positive results?<br /> * What’s the state of development economics?<br /> * Eva’s thoughts on our article on social interventions<br /> * How much can we really learn from being empirical?<br /> * How much should we really value RCTs?<br /> * Is an Economics PhD overrated or underrated?<br /><br /> *The 80,000 Hours podcast is produced by Keiran Harris.*
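
To make that headline statistic concrete, one way to measure generalizability is to ask how far each new study's estimate falls from the average of the studies that came before it. Here is a toy sketch of that calculation, using invented effect sizes rather than anything from Vivalt's database:

```python
# Toy illustration of 'typical deviation from the prior mean'.
# Invented numbers, not data from Vivalt's AidGrade database.
import statistics

# Effect estimates (test-score points) from successive studies of one program.
estimates = [10.0, -2.0, 22.0, 8.0, 15.0, 1.0]

deviations = []
for i in range(1, len(estimates)):
    prior_mean = statistics.mean(estimates[:i])  # meta-analytic mean so far
    if prior_mean != 0:
        deviations.append(abs(estimates[i] - prior_mean) / abs(prior_mean))

print(f"Median relative deviation: {statistics.median(deviations):.0%}")
# A value near 100% means the next study moves the estimate roughly as much
# as everything that came before - the pattern Vivalt found across interventions.
```
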
May 15, 2018
#29 - Dr Anders Sandberg on 3 new resolutions for the Fermi paradox & how to colonise the universe
01:21:26
Part 2 out now: <b>#33 - Dr Anders Sandberg on what if we ended ageing, solar flares & the annual risk of nuclear war</b><br><br> The universe is so vast, yet we don’t see any alien civilizations. If they exist, where are they? Oxford University’s Anders Sandberg has an original answer: they’re ‘sleeping’, and for a very compelling reason.<br><br> Because of the thermodynamics of computation, the colder it gets, the more computations you can do. The universe is getting exponentially colder as it expands, and as the universe cools, one Joule of energy becomes worth more and more. If they wait long enough, this can become a 10,000,000,000,000,000,000,000,000,000,000x gain. So, if a civilization wanted to maximize its ability to perform computations – its best option might be to lie in wait for trillions of years.<br><br> Why would a civilization want to maximise the number of computations they can do? Because conscious minds are probably generated by computation, so doing twice as many computations is like living twice as long, in subjective time. Waiting will allow them to generate vastly more science, art, pleasure, or almost anything else they are likely to care about.<br><br> <a href="https://80000hours.org/podcast/episodes/anders-sandberg-fermi-paradox/">Full transcript, related links, and key quotes.</a><br><br> But there’s no point waking up to find another civilization has taken over and used up the universe’s energy. So they’ll need some sort of monitoring to protect their resources from potential competitors like us.<br><br> It’s plausible that this civilization would want to keep the universe’s matter concentrated, so that each part would be in reach of the other parts, even after the universe’s expansion. But that would mean changing the trajectory of galaxies during this dormant period. That we don’t see anything like that makes it more likely that these aliens have local outposts throughout the universe, and we wouldn’t notice them until we broke their rules. But breaking their rules might be our last action as a species.<br><br> This ‘aestivation hypothesis’ is the invention of Dr Sandberg, a Senior Research Fellow at the Future of Humanity Institute at Oxford University, where he looks at low-probability, high-impact risks, predicting the capabilities of future technologies and very long-range futures for humanity.<br><br> In this incredibly fun conversation we cover this and other possible explanations of the Fermi paradox, as well as questions like:<br><br> * Should we want optimists or pessimists working on our most important problems?<br> * How should we reason about low probability, high impact risks?<br> * Would a galactic civilization want to stop the stars from burning?<br> * What would be the best strategy for exploring and colonising the universe?<br> * How can you stay coordinated when you’re spread across different galaxies?<br> * What should humanity decide to do with its future?<br><br> <b>The 80,000 Hours podcast is produced by Keiran Harris.</b>
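
The physics behind that enormous multiplier is Landauer's bound: erasing one bit of information costs at least kT·ln 2 of energy, so the number of irreversible computations a Joule can buy scales as 1/T. A rough sketch of the arithmetic follows; the far-future temperature is an illustrative assumption chosen to reproduce the quoted factor, not a figure from the paper:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def bits_per_joule(temperature_kelvin: float) -> float:
    """Maximum bit erasures one Joule can pay for at Landauer's limit."""
    return 1.0 / (K_B * temperature_kelvin * math.log(2))

t_now = 2.7         # today's cosmic background temperature, K
t_future = 2.7e-31  # illustrative far-future temperature after expansion, K

gain = bits_per_joule(t_future) / bits_per_joule(t_now)
print(f"Bit erasures per Joule today: {bits_per_joule(t_now):.2e}")
print(f"Gain from waiting:            {gain:.0e}x")  # ~1e31 with these numbers
```
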
May 08, 2018
#28 - Dr Cotton-Barratt on why scientists should need insurance, PhD strategy & fast AI progress
01:03:15
A researcher is working on creating a new virus – one more dangerous than any that exist naturally. They believe they’re being as careful as possible. After all, if things go wrong, their own life and that of their colleagues will be in danger. But if an accident is capable of triggering a global pandemic – hundreds of millions of lives might be at risk. How much additional care will the researcher actually take in the face of such a staggering death toll?<br><br> In a new paper Dr Owen Cotton-Barratt, a Research Fellow at Oxford University’s Future of Humanity Institute, argues it’s impossible to expect them to make the correct adjustments. If they have an accident that kills 5 people – they’ll feel extremely bad. If they have an accident that kills 500 million people, they’ll feel even worse – but there’s no way for them to feel 100 million times worse. The brain simply doesn’t work that way.<br><br> So, rather than relying on individual judgement, we could create a system that would lead to better outcomes: research liability insurance. <br><br> <a href="https://80000hours.org/podcast/episodes/owen-cotton-barratt-regulating-risky-research/?utm_campaign=podcast__owen-cb&utm_source=80000+Hours+Podcast&utm_medium=podcast">Links to learn more, summary and full transcript.</a><br><br> Once an insurer assesses how much damage a particular project is expected to cause and with what likelihood – in order to proceed, the researcher would need to take out insurance against the predicted risk. In return, the insurer promises that they’ll pay out – potentially tens of billions of dollars – if things go really badly.<br><br> This would force researchers to think very carefully about the costs and benefits of their work – and incentivize the insurer to demand safety standards on a level that individual researchers can’t be expected to impose themselves.<br><br> <b>Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type '80,000 Hours' into your podcasting app.</b><br><br> Owen is <a href="http://www.fhi.ox.ac.uk/rsp/">currently hiring</a> for a selective, two-year research scholars programme at Oxford.<br><br> In this wide-ranging conversation Owen and I also discuss:<br><br> * Are academics wrong to value personal interest in a topic over its importance?<br> * What fraction of research has very large potential negative consequences?<br> * Why do we have such different reactions to situations where the risks are known and unknown?<br> * The downsides of waiting for tenure to do the work you think is most important.<br> * What are the benefits of specifying a vague problem like ‘make AI safe’ more clearly?<br> * How should people balance the trade-offs between having a successful career and doing the most important work?<br> * Are there any blind alleys we’ve gone down when thinking about AI safety?<br> * Why did Owen give to an organisation whose research agenda he is skeptical of?<br><br> *The 80,000 Hours podcast is produced by Keiran Harris.*
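
The actuarial logic is simple even when the stakes are extreme: a fair premium is probability times damage, which puts otherwise-invisible tail risk directly on the researcher's budget. A minimal sketch with hypothetical numbers, not estimates from Owen's paper:

```python
# Hypothetical numbers for illustration - not estimates from the paper.
def fair_premium(p_accident: float, damages: float) -> float:
    """Actuarially fair annual premium: the insurer's expected payout."""
    return p_accident * damages

# A project with a one-in-a-million annual chance of $1 trillion in damages:
print(f"Premium: ${fair_premium(1e-6, 1e12):,.0f}")  # $1,000,000 per year

# Halving the accident risk halves the premium, so safety measures that cost
# less than the saving now pay for themselves.
print(f"With better containment: ${fair_premium(5e-7, 1e12):,.0f}")
```
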
Apr 27, 2018
#27 - Dr Tom Inglesby on careers and policies that reduce global catastrophic biological risks
02:17:00
How about this for a movie idea: a main character has to prevent a new contagious strain of Ebola spreading around the world. She’s the best of the best. So good, in fact, that her work on early detection systems contains the strain at its source. Ten minutes into the movie, we see the results of her work – nothing happens. Life goes on as usual. She continues to be amazingly competent, and nothing continues to go wrong. Fade to black. Roll credits.<br><br> If your job is to prevent catastrophes, success is when nobody has to pay attention to you. But without regular disasters to remind authorities why they hired you in the first place, they can’t tell if you’re actually achieving anything. And when budgets come under pressure you may find that success condemns you to the chopping block. <br><br> Dr Tom Inglesby, Director of the Center for Health Security at the Johns Hopkins Bloomberg School of Public Health, worries this may be about to happen to the scientists working on the ‘Global Health Security Agenda’.<br><br> In 2014 Ebola showed the world why we have to detect and contain new diseases before they spread, and that when it comes to contagious diseases the nations of the world sink or swim together. Fifty countries decided to work together to make sure all their health systems were up to the challenge. Back then Congress provided 5 years’ funding to help some of the world’s poorest countries build the basic health security infrastructure necessary to control pathogens before they could reach the US.<br><br> <a href="https://80000hours.org/podcast/episodes/tom-inglesby-health-security/?utm_campaign=podcast__tom-inglesby&utm_source=80000+Hours+Podcast&utm_medium=podcast">Links to learn more, job opportunities, and full transcript.</a><br><br> But with Ebola fading from public memory and no recent tragedies to terrify us, Congress may not renew that funding and the project could fall apart. (Learn more about how you can help: http://www.nti.org/analysis/articles/protect-us-investments-global-health-security/ )<br><br> But there are positive signs as well - the center Inglesby leads recently received a $16 million grant from the Open Philanthropy Project to further their work preventing global catastrophes. It also runs the <a href="http://www.centerforhealthsecurity.org/our-work/emergingbioleaders/">Emerging Leaders in Biosecurity Fellowship</a> to train the next generation of biosecurity experts for the US government. And Inglesby regularly testifies to Congress on the threats we all face and how to address them.<br><br> In this in-depth interview we try to provide concrete guidance for listeners who want to pursue a career in health security. Some of the topics we cover include:<br><br> * Should more people in medicine work on security?<br> * What are the top jobs for people who want to improve health security and how do they work towards getting them?<br> * What people can do to protect funding for the Global Health Security Agenda.<br> * Should we be more concerned about natural or human-caused pandemics? Which is more neglected?<br> * Should we be allocating more attention and resources to global catastrophic risk scenarios?<br> * Why are senior figures reluctant to prioritize one project or area at the expense of another?
<br> * What does Tom think about the idea that in the medium term, human-caused pandemics will pose a far greater risk than natural pandemics, and so we should focus on specific counter-measures?<br> * Are the main risks and solutions already understood, so that it’s just a matter of implementation? Or is the principal task to identify and understand them?<br> * How is the current US government performing in these areas?<br> * Which agencies are empowered to think about low probability high magnitude events?<br> And more...<br><br> *The 80,000 Hours podcast is produced by Keiran Harris.*
Apr 18, 2018
#26 - Marie Gibbons on how exactly clean meat is made & what's needed to get it in every supermarket
01:44:31
First, decide on the type of animal. Next, pick the cell type. Then take a small, painless biopsy, and put the cells in a solution that makes them feel like they’re still in the body. Once the cells are in this comfortable state, they'll proliferate. One cell becomes two, two becomes four, four becomes eight, and so on. Continue until you have enough cells to make a burger, a nugget, a sausage, or a piece of bacon, then concentrate them until they bind into solid meat.<br><br> It's all surprisingly straightforward in principle, according to Marie Gibbons, a research fellow with The Good Food Institute, who has been researching how to improve this process at Harvard Medical School. We might even see clean meat sold commercially within a year.<br><br> The real technical challenge is developing large bioreactors and cheap solutions so that we can make huge volumes and drive down costs.<br><br> This interview covers the science and technology involved at each stage of clean meat production, the challenges and opportunities that face cutting-edge researchers like Marie, and how you could become one of them.<br><br> <a href="https://80000hours.org/podcast/episodes/marie-gibbons-clean-meat/?utm_campaign=podcast__marie-gibbons&utm_source=80000+Hours+Podcast&utm_medium=podcast">Full transcript, key points, and links to learn more.</a> <br><br> Marie’s research focuses on turkey cells. But as she explains, with clean meat the possibilities extend well beyond those of traditional meat. Chicken, cow, pig, but also panda - and even dinosaurs - could be on the menus of the future.<br><br> Today’s episode is hosted by Natalie Cargill, a barrister in London with a background in animal advocacy. Natalie and Marie also discuss:<br><br> * Why Marie switched from being a vet to developing clean meat<br> * For people who want to dedicate themselves to animal welfare, how does working in clean meat fare compared to other career options? How can people get jobs in the area?<br> * How did this become an established field?<br> * How important is the choice of animal species and cell type in this process?<br> * What are the biggest problems with current production methods?<br> * Is this kind of research best done in an academic setting, a commercial setting, or a balance between the two?<br> * How easy will it be to get consumer acceptance?<br> * How valuable would extra funding be for cellular agriculture?<br> * Can we use genetic modification to speed up the process?<br> * Is it reasonable to be sceptical of the possibility of clean meat becoming financially competitive with traditional meat any time in the near future?<br><br> *The 80,000 Hours podcast is produced by Keiran Harris.*<br><br> <b>If you subscribe to our podcast, you can listen at leisure on your phone, speed up the conversation if you like, and get notified about future episodes. You can do so by searching ‘80,000 Hours’ wherever you get your podcasts.</b>
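
The 'one becomes two' arithmetic compounds fast. As a back-of-the-envelope illustration, assuming roughly a billion cells per gram of tissue (a figure that varies by cell type), here is how few doublings separate a single cell from a burger:

```python
import math

CELLS_PER_GRAM = 1e9   # rough illustrative figure; varies by tissue type
burger_grams = 113     # quarter-pound patty

cells_needed = CELLS_PER_GRAM * burger_grams
doublings = math.ceil(math.log2(cells_needed))
print(f"Cells needed: {cells_needed:.1e}")
print(f"Doublings from a single cell: {doublings}")
# With an assumed ~24h doubling time that's roughly 37 days of proliferation -
# which is why cheap growth media and big bioreactors are the bottleneck.
```
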
Apr 10, 2018
#25 - Prof Robin Hanson on why we have to lie to ourselves about why we do what we do
02:39:42
On February 2, 1685, England’s King Charles II was struck by a sudden illness. Fortunately his physicians were the best of the best. To reassure the public, they kept them abreast of the King’s treatment regimen. King Charles was made to swallow a toxic metal; had blistering agents applied to his scalp; had pigeon droppings attached to his feet; was prodded with a red-hot poker; given forty drops of ooze from “the skull of a man that was never buried”; and, finally, had crushed stones from the intestines of an East Indian goat forced down his throat. Sadly, despite these heroic efforts, he passed away the following week. <br><br> Why did the doctors go this far?<br><br> Prof Robin Hanson, Associate Professor of Economics at George Mason University, suspects that on top of any medical beliefs they also had a hidden motive: it needed to be clear, to the king and the public, that the physicians cared enormously about saving His Royal Majesty. Only by going ‘all out’ would they be protected against accusations of negligence should the King die. <br><br> <a href="https://80000hours.org/2018/03/robin-hanson-on-lying-to-ourselves/?utm_campaign=podcast__robin-hanson&utm_source=80000+Hours+Podcast&utm_medium=podcast">Full transcript, summary, and links to articles discussed in the show.</a><br><br> If you believe Hanson, the same desire to be seen to care about our family and friends explains much of what’s perverse about our medical system today.<br><br> And not just medicine - Robin thinks we’re mostly kidding ourselves when we say our charities exist to help others, our schools exist to educate students and our politics are about choosing wise policies. <br><br> So important are hidden motives for navigating our social world that we have to deny them to ourselves, lest we accidentally reveal them to others.<br><br> Robin is a polymath economist who has come up with surprising and novel insights in a range of fields including psychology, politics and futurology. In this extensive episode we discuss his latest book with Kevin Simler, *The Elephant in the Brain: Hidden Motives in Everyday Life*, but also:<br><br> * What was it like being part of a competitor group to the ‘World Wide Web’, and being beaten to the post?<br /> * If people aren’t going to school to learn, what’s education all about?<br /> * What split-brain patients tell us about our ability to justify anything<br /> * The hidden motivations that shape religions<br /> * Why we choose the friends we do<br /> * Why is our attitude to medicine mysterious?<br /> * What would it look like if people were focused on doing as much good as possible? <br /> * Are we better off donating now, when we’re older, or even waiting until well after our deaths?<br /> * How much of the behavior of ‘effective altruists’ can we assume is genuinely motivated by wanting to do as much good as possible?<br /> * What does Robin mean when he refers to effective altruism as a youth movement? Is that a good or bad thing?<br /> * And much more...
Mar 28, 2018
#24 - Stefan Schubert on why it’s a bad idea to break the rules, even if it’s for a good cause
00:55:08
How honest should we be? How helpful? How friendly? If our society claims to value honesty, for instance, but in reality accepts an awful lot of lying – should we go along with those lax standards? Or, should we attempt to set a new norm for ourselves?<br><br> Dr Stefan Schubert, a researcher at the Social Behaviour and Ethics Lab at Oxford University, has been modelling this in the context of the effective altruism community. He thinks people trying to improve the world should hold themselves to very high standards of integrity, because their minor sins can impose major costs on the thousands of others who share their goals.<br><br> <a href="https://80000hours.org/2018/03/stefan-schubert-considering-considerateness/?utm_campaign=podcast__stefan-schubert&utm_source=80000+Hours+Podcast&utm_medium=podcast"><b>Summary, related links and full transcript.</b></a><br><br> In addition, when a norm is uniquely important to our situation, we should be willing to question society and come up with something different and hopefully better.<br><br> But in other cases, we can be better off sticking with whatever our culture expects, to save time, avoid mistakes, and ensure others can predict our behaviour.<br><br> In this interview Stefan offers a range of views on the projects and culture that make up ‘effective altruism’ - including where it’s going right and where it’s going wrong.<br><br> Stefan did his PhD in formal epistemology, before moving on to a postdoc in political rationality at the London School of Economics, while working on advocacy projects to improve truthfulness among politicians. At the time the interview was recorded Stefan was a researcher at the Centre for Effective Altruism in Oxford. <br><br> We discuss:<br><br> * Should we trust our own judgement more than others’?<br> * How hard is it to improve political discourse?<br> * What should we make of well-respected academics writing articles that seem to be completely misinformed?<br> * How is effective altruism (EA) changing? What might it be doing wrong?<br> * How has Stefan’s view of EA changed?<br> * Should EA get more involved in politics, or steer clear of it? Would it be a bad idea for a talented graduate to get involved in party politics?<br> * How much should we cooperate with those with whom we have disagreements?<br> * What good reasons are there to be inconsiderate?<br> * Should effective altruism potentially focus on a narrower range of problems?<br><br> *The 80,000 Hours podcast is produced by Keiran Harris.*<br><br> <b>If you subscribe to our podcast, you can listen at leisure on your phone, speed up the conversation if you like, and get notified about future episodes. You can do so by searching ‘80,000 Hours’ wherever you get your podcasts.</b>
Mar 20, 2018
#23 - How to actually become an AI alignment researcher, according to Dr Jan Leike
00:45:29
Want to help steer the 21st century’s most transformative technology? First complete an undergrad degree in computer science and mathematics. Prioritize harder courses over easier ones. Publish at least one paper before you apply for a PhD. Find a supervisor who’ll have a lot of time for you. Go to the top conferences and meet your future colleagues. And finally, get yourself hired.<br /><br /> That’s Dr Jan Leike’s advice on how to join him as a Research Scientist at DeepMind, the world’s leading AI team.<br /><br /> Jan is also a Research Associate at the Future of Humanity Institute at the University of Oxford, and his research aims to make machine learning robustly beneficial. His current focus is getting AI systems to learn good ‘objective functions’ in cases where we can’t easily specify the outcome we actually want.<br /><br /> <a href="https://80000hours.org/2018/03/jan-leike-ml-alignment/?utm_campaign=podcast__jan-leike&utm_source=80000+Hours+Podcast&utm_medium=podcast" rel="nofollow" target="_blank"><b>Full transcript, summary and links to learn more.</b></a><br><br> How might you know you’re a good fit for research? <br /><br /> Jan says to check whether you get obsessed with puzzles and problems, and find yourself mulling over questions that nobody knows the answer to. To do research in a team you also have to be good at clearly and concisely explaining your new ideas to other people.<br /><br /> We also discuss:<br /><br /> * Where Jan's views differ from those expressed by <a href="https://80000hours.org/2017/07/podcast-the-world-needs-ai-researchers-heres-how-to-become-one/">Dario Amodei in episode 3</a><br> * Why is AGI safety one of the world’s most pressing problems? <br /> * Common misconceptions about AI<br /> * What are some of the specific things DeepMind is researching?<br /> * The ways in which today’s AI systems can fail<br> * What are the best techniques available today for teaching an AI the right objective function?<br> * What’s it like to have some of the world’s greatest minds as coworkers?<br /> * Who should do empirical research and who should do theoretical research<br /> * What’s the DeepMind application process like?<br /> * The importance of researchers being comfortable with the unknown.<br /><br /> *The 80,000 Hours podcast is produced by Keiran Harris.*
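
One concrete technique in this line of work is to learn a reward function from human comparisons of behaviour rather than specifying it by hand. The following is a minimal Bradley-Terry-style sketch on synthetic data - an illustration of the idea, not DeepMind's actual implementation:

```python
# Minimal sketch of learning a reward model from pairwise preferences.
# Synthetic data; illustrative only, not DeepMind's implementation.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])          # hidden 'human' reward weights

# Each behaviour is summarised by 3 features; a human compares pairs.
A = rng.normal(size=(500, 3))
B = rng.normal(size=(500, 3))
prefers_a = (A @ true_w > B @ true_w).astype(float)

w = np.zeros(3)
for _ in range(2000):                        # gradient ascent on log-likelihood
    p = 1 / (1 + np.exp(-(A - B) @ w))       # P(A preferred) under Bradley-Terry
    w += 0.05 * (A - B).T @ (prefers_a - p) / len(A)

print("Recovered reward weights:", np.round(w, 2))
# The direction matches true_w up to scale - enough to rank behaviours correctly.
```
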
Mar 16, 2018
#22 - Dr Leah Utyasheva on the non-profit that figured out how to massively cut suicide rates
01:08:13
How people kill themselves varies enormously depending on which means are most easily available. In the United States, suicide by firearm stands out. In Hong Kong, where most people live in high rise buildings, jumping from a height is more common. And in some countries in Asia and Africa with many poor agricultural communities, the leading means is drinking pesticide.<br><br> There’s a good chance you’ve never heard of this issue before. And yet, of the 800,000 people who kill themselves globally each year, 20% die from pesticide self-poisoning.<br><br> <a href="https://80000hours.org/2018/03/leah-utyasheva-pesticide-suicide-prevention/?utm_campaign=podcast__leah-utyasheva&utm_source=80000+Hours+Podcast&utm_medium=podcast"><b>Full transcript, summary and links to articles discussed in today's show.</b></a><br><br> Research suggests most people who try to kill themselves with pesticides reflect on the decision for less than 30 minutes, and that less than 10% of those who don't die the first time around will try again.<br><br> Unfortunately, the fatality rate from pesticide ingestion is 40% to 70%.<br><br> Having such dangerous chemicals near people's homes is therefore an enormous public health issue, not only for the direct victims but also the partners and children they leave behind.<br><br> Fortunately researchers like Dr Leah Utyasheva have figured out a very cheap way to massively reduce pesticide suicide rates.<br><br> In this episode, Leah and I discuss:<br><br> * How do you prevent pesticide suicide and what’s the evidence it works?<br> * How do you know that most people attempting suicide don’t want to die?<br> * What types of events are causing people to have the crises that lead to attempted suicide?<br> * How much money does it cost to save a life in this way?<br> * How do you estimate the probability of getting law reform passed in a particular country?<br> * Have you generally found politicians to be sympathetic to the idea of banning these pesticides? What are their greatest reservations?<br> * How pursuing policy change compares to helping person-by-person<br> * The importance of working with locals in places like India and Nepal, rather than coming in exclusively as outsiders<br> * What are the benefits of starting your own non-profit versus joining an existing org and persuading them of the merits of the cause?<br> * Would Leah in general recommend starting a new charity? Is it more exciting than it is scary?<br> * Is it important to have an academic leading this kind of work?<br> * How did The Centre for Pesticide Suicide Prevention get seed funding?<br> * How does the value of saving a life from suicide compare to saving someone from malaria?<br> * Leah’s political campaigning for the rights of vulnerable groups in Eastern Europe <br> * What are the biggest downsides of human rights work?
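
The figures above are enough for a back-of-the-envelope sense of the stakes, treating them as rough point estimates:

```python
# Rough arithmetic using the figures quoted above, treated as point estimates.
global_suicides = 800_000        # deaths per year worldwide
pesticide_share = 0.20           # share from pesticide self-poisoning
fatality_rate = 0.55             # midpoint of the 40-70% range

pesticide_deaths = global_suicides * pesticide_share   # 160,000 deaths/year
attempts = pesticide_deaths / fatality_rate            # implied attempts/year

print(f"Pesticide suicide deaths per year: {pesticide_deaths:,.0f}")
print(f"Implied attempts per year:         {attempts:,.0f}")
# Since under 10% of survivors try again, replacing a highly lethal means with
# far less lethal ones should avert most of these deaths - the logic behind
# means restriction.
```
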
Mar 07, 2018
#21 - Holden Karnofsky on times philanthropy transformed the world & Open Phil’s plan to do the same
02:35:36
The Green Revolution averted mass famine during the 20th century. The contraceptive pill gave women unprecedented freedom in planning their own lives. Both are widely recognised as scientific breakthroughs that transformed the world. But few know that those breakthroughs only happened when they did because of a philanthropist willing to take a risky bet on a new idea.<br><br> Today’s guest, Holden Karnofsky, has been looking for philanthropy’s biggest success stories because he’s Executive Director of the Open Philanthropy Project, which gives away over $100 million per year - and he’s hungry for big wins.<br><br> <a href="https://80000hours.org/2018/02/holden-karnofsky-open-philanthropy/?utm_campaign=podcast__holden-karnofsky&utm_source=80000+Hours+Podcast&utm_medium=podcast"><b>Full transcript, related links, job opportunities and summary of the interview.</b></a><br><br> In the 1940s, poverty reduction overseas was not a big priority for many. But the Rockefeller Foundation decided to fund agricultural scientists to breed much better crops for the developing world - thereby massively increasing their food production.<br><br> In the 1950s, society was a long way from demanding effective birth control. Activist Margaret Sanger had the idea for the pill, and endocrinologist Gregory Pincus the research team – but they couldn’t proceed without a $40,000 research check from biologist and women’s rights activist Katherine McCormick.<br><br> In both cases, it was philanthropists rather than governments that led the way.<br><br> The reason, according to Holden, is that while governments have enormous resources, they’re constrained by only being able to fund reasonably sure bets. Philanthropists can transform the world by filling the gaps government leaves - but to seize that opportunity they have to hire outstanding researchers, think long-term and be willing to fail most of the time. <br><br> Holden knows more about this type of giving than almost anyone. As founder of GiveWell and then the Open Philanthropy Project, he has been working feverishly since 2007 to find outstanding giving opportunities. This practical experience has made him one of the most influential figures in the development of the school of thought that has come to be known as effective altruism.<br><br> We’ve recorded this episode now because <a href="https://www.openphilanthropy.org/get-involved/jobs">the Open Philanthropy Project is hiring</a> for a large number of positions, which we think would allow the right person to have a very large positive influence on the world. They’re looking for a large number of entry-level researchers to train up, 3 specialist researchers working on potential risks from advanced artificial intelligence, as well as a Director of Operations, Operations Associate and General Counsel.<br><br> But the conversation goes well beyond specifics about these jobs.
We also discuss:<br><br> * How did they pick the problems they focus on, and how will they change over time?<br> * What would Holden do differently if he were starting Open Phil again today?<br> * What can we learn from the history of philanthropy?<br> * What makes a good Program Officer?<br> * The importance of not letting hype get ahead of the science in an emerging field.<br> * The importance of honest feedback for philanthropists, and the difficulty getting it.<br> * How do they decide what’s above the bar to fund, and when it’s better to hold onto the money?<br> * How philanthropic funding can most influence politics.<br> * What Holden would say to a new billionaire who wanted to give away most of their wealth.<br> * Why Open Phil is building a research field around the safe development of artificial intelligence.<br> * Why they invested in OpenAI.<br> * Academia’s faulty approach to answering practical questions.<br> * What potential utopias do people most want, according to opinion polls? <br><br> Keiran Harris helped produce today’s episode.
Feb 27, 2018
#20 - Bruce Friedrich on inventing outstanding meat substitutes to end speciesism & factory farming
01:18:00
Before the US Civil War, it was easier for the North to morally oppose slavery. Why? Because unlike the South they weren’t profiting much from its existence. The fight for abolition was partly won because many no longer saw themselves as having a selfish stake in its continuation.<br><br> Bruce Friedrich, executive director of The Good Food Institute (GFI), thinks the same may be true in the fight against speciesism. 98% of people currently eat meat. But if eating meat stops being part of most people’s daily lives - it should be a lot easier to convince them that farming practices are just as cruel as they look, and that the suffering of these animals really matters.<br><br> <a href="https://80000hours.org/2018/02/bruce-friedrich-good-food-institute/?utm_campaign=podcast__bruce-friedrich&utm_source=80000+Hours+Podcast&utm_medium=podcast"><b>Full transcript, related links, job opportunities and summary of the interview.</b></a><br><br> That’s why GFI is “working with scientists, investors, and entrepreneurs” to create plant-based meat, dairy and eggs as well as clean meat alternatives to animal products. In 2016, Animal Charity Evaluators named GFI one of its recommended charities.<br><br> In this interview I’m joined by my colleague Natalie Cargill, and we ask Bruce about:<br><br> * What’s the best meat replacement product out there right now?<br> * How effective is meat substitute research for people who want to reduce animal suffering as much as possible?<br> * When will we get our hands on clean meat? And why does Bruce call it clean meat, rather than in vitro meat or cultured meat?<br> * What are the challenges of producing something structurally identical to meat?<br> * Can clean meat be healthier than conventional meat?<br> * Do plant-based alternatives have a better shot at success than clean meat?<br> * Is there a concern that, even if the product is perfect, people still won’t eat it? Why might that happen?<br> * What’s it like being a vegan in a family made up largely of hunters and meat-eaters?<br> * What kind of pushback should be expected from the meat industry?<br><br> Keiran Harris helped produce today’s episode.
Feb 19, 2018
#19 - Samantha Pitts-Kiefer on working next to the White House trying to prevent nuclear war
01:04:29
Rogue elements within a state’s security forces enrich dozens of kilograms of uranium. It’s then assembled into a crude nuclear bomb. The bomb is transported on a civilian aircraft to Washington D.C., and loaded onto a delivery truck. The truck is driven by an American citizen to a point midway between the White House and the Capitol Building. The driver casually steps out of the vehicle, and detonates the weapon. There are more than 80,000 instant deaths. There are also at least 100,000 seriously wounded, with nowhere left to treat them. <br><br> <a href="https://80000hours.org/2018/02/samantha-pk-nuclear-security/?utm_campaign=podcast__samantha-pk&utm_source=80000+Hours+Podcast&utm_medium=podcast"><b>Full blog post about this episode, including a transcript, summary and links to resources mentioned in the show</b></a><br><br> It’s likely that one of those immediately killed would be Samantha Pitts-Kiefer, who works only one block away from the White House.<br><br> Samantha serves as Senior Director of The Global Nuclear Policy Program at the Nuclear Threat Initiative, and warns that the chances of a nuclear terrorist attack are alarmingly high. Terrorist groups have expressed a desire for nuclear weapons, and the material required to build those weapons is scattered throughout the world at a diverse range of sites – some of which lack the necessary security. <br><br> When you combine the massive death toll with the accompanying social panic and economic disruption – the consequences of a nuclear 9/11 are almost unthinkable. And yet, Samantha reminds us – we must confront the possibility.<br><br> Clearly, this is far from the only nuclear nightmare. We also discuss:<br><br> * In the case of nuclear war, what fraction of the world's population would die?<br> * What is the biggest nuclear threat?<br> * How concerned should we be about North Korea?<br> * How often has the world experienced nuclear near misses?<br> * How might a conflict between India and Pakistan escalate to the nuclear level?<br> * How quickly must a president make a decision in the event of a suspected first strike?<br> * Are global sources of nuclear material safely secured?<br> * What role does cyber security have in preventing nuclear disasters?<br> * How can we improve relations between nuclear armed states?<br> * What do you think about the campaign for complete nuclear disarmament?<br> * If you could tell the US government to do three things, what are the key priorities today? <br> * Is it practical to get members of congress to pay attention to nuclear risks?<br> * Could modernisation of nuclear weapons actually make the world safer?
Feb 14, 2018
#18 - Ofir Reich on using data science to end poverty & the spurious action-inaction distinction
01:18:59
Ofir Reich started out doing math in the military, before spending 8 years in tech startups - but then made a sharp turn to become a data scientist focussed on helping the global poor.<br><br> At UC Berkeley’s Center for Effective Global Action he helps prevent tax evasion by identifying fake companies in India, enable Afghanistan to pay its teachers electronically, and raise yields for Ethiopian farmers by messaging them when local conditions make it ideal to apply fertiliser. Or at least that’s the hope - he’s also working on ways to test whether those interventions actually work. <br><br> <a href="https://80000hours.org/2018/01/ofir-reich-data-science/?utm_campaign=podcast__ofir-reich&utm_source=80000+Hours+Podcast&utm_medium=podcast"><b>Full post about this episode, including a transcript and relevant links to learn more.</b></a><br><br> Why dedicate his life to helping the global poor?<br><br> Ofir sees little moral difference between harming people and failing to help them. After all, if you had to press a button to keep all of your money from going to charity, and you pressed that button, would that be an action, or an inaction? Is there even an answer?<br><br> After reflecting on cases like this, he decided that to not engage with a problem is an active choice, one whose consequences he is just as morally responsible for as if he were directly involved. On top of his life philosophy we also discuss: <br><br> * The benefits of working in a top academic environment<br> * How best to start a career in global development<br> * Are RCTs worth the money? Should we focus on big picture policy change instead? Or more economic theory?<br> * How the delivery standards of nonprofits compare to top universities<br> * Why he doesn’t enjoy living in the San Francisco bay area<br> * How can we fix the problem of most published research being false?<br> * How good a career path is data science?<br> * How important is experience in development versus technical skills?<br> * How he learned much of what he needed to know in the army<br> * How concerned should effective altruists be about burnout?<br><br> Keiran Harris helped produce today’s episode.<img src="http://feeds.feedburner.com/~r/80000HoursPodcast/~4/B4uobkk3aM0" height="1" width="1" alt=""/>
Jan 31, 2018
#17 - Prof Will MacAskill on moral uncertainty, utilitarianism & how to avoid being a moral monster
01:52:13
Immanuel Kant is a profoundly influential figure in modern philosophy, and was one of the earliest proponents of universal democracy and international cooperation. He <a href="https://80000hours.org/2018/01/will-macaskill-moral-philosophy/?utm_campaign=podcast__will-macaskill&utm_source=80000+Hours+Podcast&utm_medium=podcast">also thought</a> that women have no place in civil society, that it was okay to kill illegitimate children, and that there was a ranking in the moral worth of different races.<br><br> Throughout history we’ve consistently believed, as common sense, truly horrifying things by today’s standards. According to University of Oxford Professor Will MacAskill, it’s extremely likely that we’re in the same boat today. If we accept that we’re probably making major moral errors, how should we proceed?<br><br> <a href="https://80000hours.org/2018/01/will-macaskill-moral-philosophy/?utm_campaign=podcast__will-macaskill&utm_source=80000+Hours+Podcast&utm_medium=podcast">Full transcript, key points and links to articles and career guides discussed in the show.</a><br><br> If our morality is tied to common sense intuitions, we’re probably just preserving these biases and moral errors. Instead we need to develop a moral view that criticises common sense intuitions, and gives us a chance to move beyond them. And if humanity is going to spread to the stars it could be worth dedicating hundreds or thousands of years to moral reflection, lest we spread our errors far and wide.<br><br> Will is an Associate Professor in Philosophy at Oxford University, author of Doing Good Better, and one of the co-founders of the effective altruism community. In this interview we discuss a wide range of topics:<br><br> * How would we go about a ‘long reflection’ to fix our moral errors?<br> * Will’s forthcoming book on how to reason and act if you don't know which moral theory is correct. What are the practical implications of so-called ‘moral uncertainty’?<br> * If we basically solve existential risks, what does humanity do next?<br> * What are some of Will’s most unusual philosophical positions?<br> * What are the best arguments for and against utilitarianism?<br> * Given disagreements among philosophers, how much should we believe the findings of philosophy as a field?<br> * What are some of the biases we should be aware of within academia?<br> * What are some of the downsides of becoming a professor?<br> * What are the merits of becoming a philosopher?<br> * How does the media image of EA differ from the actual goals of the community?<br> * What kinds of things would you like to see the EA community do differently?<br> * How much should we explore potentially controversial ideas?<br> * How focused should we be on diversity?<br> * What are the best arguments against effective altruism?<br><br> <b>Get free, one-on-one career advice</b><br><br> We’ve helped hundreds of people compare their options, get introductions, and find high impact jobs. If you want to work on global priorities research or other important questions in academia, <a href="https://80000hours.org/coaching/?int_campaign=podcast__will-macaskill">find out if our coaching can help you.</a><img src="http://feeds.feedburner.com/~r/80000HoursPodcast/~4/92Cn7gAvZIE" height="1" width="1" alt=""/>
Jan 19, 2018
#16 - Dr Hutchinson on global priorities research & shaping the ideas of intellectuals
00:55:01
In the 40s and 50s neoliberalism was a fringe movement within economics. But by the 80s it had become a dominant school of thought in public policy, and achieved major policy changes across the English speaking world. How did this happen?<br><br> In part because its leaders invested heavily in training academics to study and develop their ideas. Whether you think neoliberalism was good or bad, its history demonstrates the impact building a strong intellectual base within universities can have.<br><br> Michelle Hutchinson is working to get a different set of ideas a hearing in academia by setting up the Global Priorities Institute (GPI) at Oxford University. The Institute, which is <a href="https://globalprioritiesinstitute.org/opportunities/">currently hiring for three roles</a>, aims to bring together outstanding philosophers and economists to research how to most improve the world. The hope is that it will spark widespread academic engagement with effective altruist thinking, which will hone the ideas and help them gradually percolate into society more broadly.<br><br> <a href="https://80000hours.org/2017/12/michelle-hutchinson-global-priorities/?utm_campaign=podcast__michelle-hutchinson&utm_source=80000+Hours+Podcast&utm_medium=podcast">Link to the full blog post about this episode including transcript and links to learn more</a><br><br> Its <a href="https://globalprioritiesinstitute.org/wp-content/uploads/2017/12/gpi-research-agenda.pdf">research agenda</a> includes questions like:<br><br> * How do we compare the good done by focussing on really different types of causes?<br> * How does saving lives actually affect the world relative to other things we could do?<br> * What are the biggest wins governments should be focussed on getting?<br><br> Before moving to GPI, Michelle was the Executive Director of Giving What We Can and a founding figure of the effective altruism movement. She has a PhD in Applied Ethics from Oxford on prioritization and global health.<br><br> We discuss:<br><br> * What is global priorities research and why does it matter?<br> * How is effective altruism seen in academia? Is it important to convince academics of the value of your work, or is it OK to ignore them?<br> * Operating inside a university is quite expensive, so is it even worth doing? Who can pay for this kind of thing?<br> * How hard is it to do something innovative inside a university? How serious are the administrative and other barriers?<br> * Is it harder to fundraise for a new institute, or hire the right people?<br> * Have other social movements benefitted from having a prominent academic arm?<br> * How can people prepare themselves to get research roles at a place like GPI?<br> * Many people want to have roles doing this kind of research. How many are actually cut out for it? What should those who aren’t do instead?<br> * What are the odds of the Institute’s work having an effect on the real world?<br><br> <b>Get free, one-on-one career advice</b><br><br> We’ve helped hundreds of people compare their options, get introductions, and find high impact jobs. If you want to work on global priorities research or other important questions in academia, <a href="https://80000hours.org/coaching/?utm_campaign=podcast__michelle-hutchinson&utm_source=80000+Hours+Podcast&utm_medium=podcast">find out if our coaching can help you.</a><img src="http://feeds.feedburner.com/~r/80000HoursPodcast/~4/0Bj2gKduj4w" height="1" width="1" alt=""/>
Dec 22, 2017
#15 - Prof Tetlock on how chimps beat Berkeley undergrads and when it’s wise to defer to the wise
01:23:59
Prof Philip Tetlock is a social science legend. Over forty years he has researched whose predictions we can trust, whose we can’t and why - and developed methods that allow all of us to be better at predicting the future.<br><br> After the Iraq WMDs fiasco, the US intelligence services hired him to figure out how to ensure they’d never screw up that badly again. The result of that work – <em>Superforecasting</em> – was a media sensation in 2015.<br><br> <a href="https://80000hours.org/2017/11/prof-tetlock-predicting-the-future/?utm_campaign=podcast__phil-tetlock&utm_source=80000+Hours+Podcast&utm_medium=podcast"><b>Full transcript, brief summary, apply for coaching and links to learn more.</b></a><br><br> It described Tetlock’s Good Judgement Project, which found forecasting methods so accurate they beat everyone else in open competition, including thousands of people in the intelligence services with access to classified information.<br><br> Today he’s working to develop the best forecasting process ever, by combining top human and machine intelligence in the <a href="https://www.hybridforecasting.com/">Hybrid Forecasting Competition</a>, which you can sign up for and participate in.<br><br> We start by describing his key findings, and then push to the edge of what is known about how to foresee the unforeseeable:<br><br> * Should people who want to be right just adopt the views of experts rather than apply their own judgement?<br> * Why are Berkeley undergrads worse forecasters than dart-throwing chimps?<br> * Should I keep my political views secret, so it will be easier to change them later?<br> * How can listeners contribute to his latest cutting-edge research?<br> * What do we know about our accuracy at predicting low-probability high-impact disasters?<br> * Does his research provide an intellectual basis for populist political movements?<br> * Was the Iraq War caused by bad politics, or bad intelligence methods?<br> * What can we learn about forecasting from the 2016 election?<br> * Can experience help people avoid overconfidence and underconfidence?<br> * When does an AI easily beat human judgement?<br> * Could more accurate forecasting methods make the world more dangerous?<br> * How much does demographic diversity line up with cognitive diversity?<br> * What are the odds we’ll go to war with China?<br> * Should we let prediction tournaments run most of the government?<br><br> Listen to it. <b>Get free, one-on-one career advice.</b> Want to work on important social science research like Tetlock’s? We’ve helped hundreds of people compare their options and get introductions. <a href="https://80000hours.org/coaching/?utm_campaign=podcast__phil-tetlock&utm_source=80000+Hours+Podcast&utm_medium=podcast">Find out if our coaching can help you.</a><img src="http://feeds.feedburner.com/~r/80000HoursPodcast/~4/nPFrpLrk6A4" height="1" width="1" alt=""/>
Nov 20, 2017
#14 - Sharon Nunez & Jose Valle on going undercover to expose animal abuse
01:25:57
What if you knew that ducks were being killed with pitchforks? Rabbits dumped alive into containers? Or pigs being strangled with forklifts? Would you be willing to go undercover to expose the crime?<br><br> That’s a real question that confronts volunteers at Animal Equality (AE). In this episode we speak to Sharon Nunez and Jose Valle, who founded AE in 2006 and then grew it into a multi-million dollar international animal rights organisation. They’ve been chosen as one of the most effective animal protection orgs in the world by Animal Charity Evaluators for the last 3 consecutive years.<br><br> <a href="https://80000hours.org/2017/11/animal-equality-exposing-cruelty/?utm_campaign=podcast__sharon-nunez&utm_source=80000+Hours+Podcast&utm_medium=podcast">Blog post about the episode, including links and full transcript.</a><br><br> <a href="https://80000hours.org/2017/09/lewis-bollard-end-factory-farming/">A related previous episode, strongly recommended: Lewis Bollard on how to end factory farming as soon as possible.</a><br><br> In addition to undercover investigations, AE has designed a 3D virtual-reality farm experience called iAnimal360. People get to experience being trapped in a cage – in a room designed to kill them – and can’t just look away. How big an impact is this having on users?<br><br> Sharon Nunez and Jose Valle also tackle:<br><br> * How do they track their goals and metrics week to week?<br> * How much does an undercover investigation cost?<br> * Why don’t people donate more to factory farmed animals, given that they’re the vast majority of animals harmed directly by humans?<br> * How risky is it to attempt to build a career in animal advocacy?<br> * What led to a change in their focus from bullfighting in Spain to animal farming?<br> * How does working with governments or corporate campaigns compare with early strategies like creating new vegans/vegetarians?<br> * Has their very rapid growth been difficult to handle?<br> * What should our listeners study or do if they want to work in this area?<br> * How can we get across the message that horrific cases are a feature - not a bug - of factory farming?<br> * Do the owners or workers of factory farms ever express shame at what they do?<img src="http://feeds.feedburner.com/~r/80000HoursPodcast/~4/PHjSRbGeQNs" height="1" width="1" alt=""/>
Nov 13, 2017
#13 - Claire Walsh on testing which policies work & how to get governments to listen to the results
00:52:27
In both rich and poor countries, government policy is often based on no evidence at all and many programs don’t work. This has particularly harsh effects on the global poor - in some countries governments only spend $100 on each citizen a year so they can’t afford to waste a single dollar.<br><br> Enter MIT’s Poverty Action Lab (J-PAL). Since 2003 they’ve conducted experiments to figure out what policies actually help recipients, and then tried to get them implemented by governments and non-profits.<br><br> Claire Walsh leads J-PAL’s Government Partnership Initiative, which works to evaluate policies and programs in collaboration with developing world governments, scale policies that have been shown to work, and generally promote a culture of evidence-based policymaking.<br><br> <a href="https://80000hours.org/2017/10/claire-walsh-evidence-in-development/?utm_campaign=podcast__claire-walsh&utm_source=80000+Hours+Podcast&utm_medium=podcast">Summary, links to career opportunities and topics discussed in the show.</a><br><br> We discussed (her views only, not J-PAL’s):<br><br> * How can they get evidence backed policies adopted? Do politicians in the developing world even care whether their programs actually work? Is the norm evidence-based policy, or policy-based evidence?<br> * Is evidence-based policy an evidence-based strategy itself?<br> * Which policies does she think would have a particularly large impact on human welfare relative to their cost?<br> * How did she come to lead one of J-PAL’s departments at 29?<br> * How do you evaluate the effectiveness of energy and environment programs (Walsh’s area of expertise), and what are the standout approaches in that area?<br> * 80,000 Hours has warned people about the downsides of starting your career in a non-profit. Walsh started her career in a non-profit and has thrived, so are we making a mistake?<br> * Other than J-PAL, what are the best places to work in development? What are the best subjects to study? Where can you go network to break into the sector?<br> * Is living in poverty as bad as we think?<br><br> And plenty of other things besides.<br><br> We haven’t run an RCT to test whether this episode will actually help your career, but I suggest you listen anyway. Trust my intuition on this one.<img src="http://feeds.feedburner.com/~r/80000HoursPodcast/~4/j1qD0WLkpzg" height="1" width="1" alt=""/>
Oct 31, 2017
#12 - Dr Cameron works to stop you dying in a pandemic. Here’s what keeps her up at night.
01:45:16
<em>“When you're in the middle of a crisis and you have to ask for money, you're already too late.”</em><br><br> That’s Dr Beth Cameron, who leads <em>Global Biological Policy and Programs</em> at the Nuclear Threat Initiative.<br><br> Beth should know. She has years of experience preparing for and fighting the diseases of our nightmares, on the <em>White House Ebola Taskforce</em>, in the <em>National Security Council</em> staff, and as the Assistant Secretary of Defense for <em>Nuclear, Chemical and Biological Defense Programs</em>.<br><br> <a href="https://80000hours.org/2017/10/beth-cameron-pandemic-preparedness/?utm_campaign=podcast__beth-cameron&utm_source=80000+Hours+Podcast&utm_medium=podcast">Summary, list of career opportunities, extra links to learn more and coaching application.</a><br><br> Unfortunately, the countries of the world aren’t prepared for a crisis - and like children crowded into daycare, there’s a good chance something will make us all sick at once.<br><br> During past pandemics countries have dragged their feet over who will pay to contain them, or struggled to move people and supplies where they needed to be. At the same time advanced biotechnology threatens to make it possible for terrorists to bring back smallpox - or create something even worse.<br><br> In this interview we look at the current state of play in disease control, what needs to change, and how you can build the career capital necessary to make those changes yourself. That includes:<br><br> * What and where to study, and where to begin a career in pandemic preparedness. Below you’ll find a lengthy list of people and places mentioned in the interview, and others we’ve had recommended to us.<br> * How the Nuclear Threat Initiative, with just 50 people, collaborates with governments around the world to reduce the risk of nuclear or biological catastrophes, and whether they might want to hire you.<br> * The best strategy for containing pandemics.<br> * Why we lurch from panic, to neglect, to panic again when it comes to protecting ourselves from contagious diseases.<br> * Current reform efforts within the World Health Organisation, and attempts to prepare partial vaccines ahead of time.<br> * Which global health security groups most impress Beth, and what they’re doing.<br> * What new technologies could be invented to make us safer.<br> * Whether it’s possible to help solve the problem through mass advocacy.<br> * Much more besides.<br><br> <b><a href="https://80000hours.org/coaching/?int_campaign=podcast__beth-cameron" title="" class="btn btn-primary">Get free, one-on-one career advice to improve biosecurity</a></b><br><br> Considering a relevant grad program like a biology PhD, medicine, or security studies? Able to apply for a relevant job already? We’ve helped dozens of people plan their careers to work on pandemic preparedness and put them in touch with mentors. <b>If you want to work on the problem discussed in this episode, you should apply for coaching:</b><br><br> <a href="https://80000hours.org/coaching/?utm_campaign=podcast__beth-cameron&utm_source=80000+Hours+Podcast&utm_medium=podcast" title="" class="btn btn-primary">Read more</a><img src="http://feeds.feedburner.com/~r/80000HoursPodcast/~4/MLAt0MjtGlg" height="1" width="1" alt=""/>
Oct 25, 2017
#11 - Dr Spencer Greenberg on speeding up social science 10-fold & why plenty of startups cause harm
01:29:17
Do most meat eaters think it’s wrong to hurt animals? Do Americans think climate change is likely to cause human extinction? What is the best, state-of-the-art therapy for depression? How can we make academics more intellectually honest, so we can actually trust their findings? How can we speed up social science research ten-fold? Do most startups improve the world, or make it worse?<br><br> If you’re interested in these questions, this interview is for you.<br><br> <a href="https://80000hours.org/2017/10/spencer-greenberg-social-science/?utm_campaign=podcast__spencer-greenberg&utm_source=80000+Hours+Podcast&utm_medium=podcast">Click for a full transcript, links discussed in the show, etc.</a><br><br> A scientist, entrepreneur, writer and mathematician, Spencer Greenberg is constantly working to create tools to speed up and improve research and critical thinking. These include:<br><br> * Rapid public opinion surveys to find out what most people actually think about animal consciousness, farm animal welfare, the impact of developing world charities and the likelihood of extinction by various different means;<br> * Tools to enable social science research to be run en masse very cheaply;<br> * ClearerThinking.org, a highly popular site for improving people’s judgement and decision-making;<br> * Ways to transform data analysis methods to ensure that papers only show true findings;<br> * Innovative research methods;<br> * Ways to decide which research projects are actually worth pursuing.<br><br> In this interview, Spencer discusses all of these and more. If you don’t feel like listening, that just shows that you have poor judgement and need to benefit from his wisdom even more!<br><br> <b><a href="https://80000hours.org/coaching/?int_campaign=podcast__spencer-greenberg" title="" class="btn btn-primary">Get free, one-on-one career advice</a></b><br><br> We’ve helped hundreds of people compare their options, get introductions, and find high impact jobs. <b>If you want to work on any of the problems discussed in this episode, find out if our coaching can help you.</b><br><br><img src="http://feeds.feedburner.com/~r/80000HoursPodcast/~4/VxGWOMIV0Xo" height="1" width="1" alt=""/>
Oct 17, 2017
#10 - Dr Nick Beckstead on how to spend billions of dollars preventing human extinction
01:51:48
What if you were in a position to give away billions of dollars to improve the world? What would you do with it? This is the problem facing Program Officers at the Open Philanthropy Project - people like Dr Nick Beckstead.<br><br> Following a PhD in philosophy, Nick works to figure out where money can do the most good. He’s been involved in major grants in a wide range of areas, including ending factory farming through technological innovation, safeguarding the world from advances in biotechnology and artificial intelligence, and spreading rational compassion.<br><br> <a href="https://80000hours.org/2017/10/nick-beckstead-giving-billions/?utm_campaign=podcast__nick-beckstead&utm_source=80000+Hours+Podcast&utm_medium=podcast">Full transcript, coaching application form, overview of the conversation, and links to resources discussed in the episode:</a><br><br> This episode is a tour through some of the toughest questions ‘effective altruists’ face when figuring out how to best improve the world, including:<br><br> * Should we mostly try to help people currently alive, or future generations? Nick studied this question for years in his PhD thesis, <a href="https://docs.google.com/viewer?a=v&pid=sites&srcid=ZGVmYXVsdGRvbWFpbnxuYmVja3N0ZWFkfGd4OjExNDBjZTcwNjMxMzRmZGE">On the Overwhelming Importance of Shaping the Far Future</a>. (The first 31 minutes is a snappier version of <a href="https://80000hours.org/articles/why-the-long-run-future-matters-more-than-anything-else-and-what-we-should-do-about-it/">my conversation with Toby Ord</a>.)<br> * Is clean meat (aka *in vitro* meat) technologically feasible any time soon, or should we be looking for plant-based alternatives?<br> * What are the greatest risks to human civilisation?<br> * To stop malaria is it more cost-effective to use technology to <a href="https://www.openphilanthropy.org/focus/scientific-research/miscellaneous/target-malaria-general-support">eliminate mosquitos</a> than to distribute bed nets?<br> * Should people who want to improve the future work for changes that will be very useful in a specific scenario, or just generally try to improve how well humanity makes decisions?<br> * What specific jobs should our listeners take in order for Nick to be able to spend more money in useful ways to improve the world?<br> * Should we expect the future to be better if the economy grows more quickly - or more slowly?<br><br> <b>Get free, one-on-one career advice</b><br><br> We’ve helped dozens of people compare their options, get introductions, and find jobs important for the long-run future. <a href="https://80000hours.org/coaching/?int_campaign=nick-beckstead-podcast" title="" class="btn btn-primary">If you want to work on any of the problems discussed in this episode, find out if our coaching can help you.</a><br><br><img src="http://feeds.feedburner.com/~r/80000HoursPodcast/~4/-8KE01tJ55I" height="1" width="1" alt=""/>
Oct 11, 2017
#9 - Christine Peterson on how insecure computers could lead to global disaster, and how to fix it
01:45:10
Take a trip to Silicon Valley in the 70s and 80s, when going to space sounded like a good way to get around environmental limits, people started cryogenically freezing themselves, and nanotechnology looked like it might revolutionise industry – or turn us all into grey goo.<br><br> <a href="https://80000hours.org/2017/10/christine-peterson-computer-security/?utm_campaign=podcast__christine-peterson&utm_source=80000+Hours+Podcast&utm_medium=podcast">Full transcript, coaching application form, overview of the conversation, and extra resources to learn more:</a><br><br> In this episode of the 80,000 Hours Podcast Christine Peterson takes us back to her youth in the Bay Area, the ideas she encountered there, and what the dreamers she met did as they grew up.<br><br> Today Christine helps run the Foresight Institute, which fills a gap left by for-profit technology companies – predicting how new revolutionary technologies could go wrong, and ensuring we steer clear of the downsides.<br><br> We dive into:<br><br> * Whether the poor security of computer systems poses a catastrophic risk for the world. Could all our essential services be taken down at once? And if so, what can be done about it?<br> * Can technology ‘move fast and break things’ without eventually breaking the world? Would it be better for technology to advance more quickly, or more slowly?<br> * How Christine came up with the term ‘open source software’ (and why someone else had to propose it).<br> * Will AIs designed for wide-scale automated hacking make computers more or less secure?<br> * Would it be good to radically extend human lifespan? Is it sensible to cryogenically freeze yourself in the hope of being resurrected in the future?<br> * Could atomically precise manufacturing (nanotechnology) really work? Why was it initially so controversial and why did people stop worrying about it?<br> * Should people who try to do good in their careers work long hours and take low salaries? Or should they take care of themselves first of all?<br> * How she thinks the effective altruism community resembles the scene she was involved with when she was young, and where it might be going wrong.<br><br> <b>Get free, one-on-one career advice</b><br><br> We’ve helped dozens of people compare their options, get introductions, and find jobs important for the long-run future. <a href="https://80000hours.org/coaching/?int_campaign=christine-peterson-podcast" title="" class="btn btn-primary">If you want to work on any of the problems discussed in this episode, find out if our coaching can help you.</a><br><br><img src="http://feeds.feedburner.com/~r/80000HoursPodcast/~4/N-3h8JyafzM" height="1" width="1" alt=""/>
Oct 04, 2017
#8 - Lewis Bollard on how to end factory farming in our lifetimes
03:16:55
Every year tens of billions of animals are raised in terrible conditions in factory farms before being killed for human consumption. Over the last two years Lewis Bollard – Project Officer for Farm Animal Welfare at the Open Philanthropy Project – has conducted extensive research into the best ways to eliminate animal suffering in farms as soon as possible. This has resulted in $30 million in grants to farm animal advocacy.<br><br> <a href="https://80000hours.org/2017/09/lewis-bollard-end-factory-farming/?utm_campaign=podcast__lewis-bollard&utm_source=80000+Hours+Podcast&utm_medium=podcast">Full transcript, coaching application form, overview of the conversation, and extra resources to learn more:</a><br><br> We covered almost every approach being taken, which ones work, and how individuals can best contribute through their careers.<br><br> We also had time to venture into a wide range of issues that are less often discussed, including:<br><br> * Why Lewis thinks insect farming would be worse than the status quo, and whether we should look for ‘humane’ insecticides;<br> * How young people can set themselves up to contribute to scientific research into meat alternatives;<br> * How genetic manipulation of chickens has caused them to suffer much more than their ancestors, but could also be used to make them better off;<br> * Why Lewis is skeptical of vegan advocacy;<br> * Why he doubts that much can be done to tackle factory farming through legal advocacy or electoral politics;<br> * Which species of farm animal is best to focus on first;<br> * Whether fish and crustaceans are conscious, and if so what can be done for them;<br> * Many other issues listed below in the <i>Overview of the discussion</i>.<br><br> <b>Get free, one-on-one career advice</b><br><br> We’ve helped dozens of people compare their options, get introductions, and find jobs important for the long-run future. <a href="https://80000hours.org/coaching/?int_campaign=lewis-bollard-podcast" title="" class="btn btn-primary">If you want to work on any of the problems discussed in this episode, find out if our coaching can help you.</a><br><br> <b>Overview of the discussion</b><br><br> **2m40s** What originally drew you to dedicate your career to helping animals and why did Open Philanthropy end up focusing on it?<br> **5m40s** Do you have any concrete way of assessing the severity of animal suffering? <br> **7m10s** Do you think the environmental gains are large compared to those that we might hope to get from animal welfare improvement?<br> **7m55s** What grants have you made at Open Phil? How did you go about deciding which groups to fund and which ones not to fund?<br> **9m50s** Why does Open Phil focus on chickens and fish? Is this the right call?<br> <a href="https://80000hours.org/2017/09/lewis-bollard-end-factory-farming?utm_campaign=podcast__lewis-bollard&utm_source=80000+Hours+Podcast&utm_medium=podcast">More...</a><img src="http://feeds.feedburner.com/~r/80000HoursPodcast/~4/KbBkTNLFOWA" height="1" width="1" alt=""/>
Sep 27, 2017
#7 - Julia Galef on making humanity more rational, what EA does wrong, and why Twitter isn’t all bad
01:14:17
The scientific revolution in the 16th century was one of the biggest societal shifts in human history, driven by the discovery of new and better methods of figuring out who was right and who was wrong. <br><br> Julia Galef - a well-known writer and researcher focused on improving human judgment, especially about high stakes questions - believes that if we could again develop new techniques to predict the future, resolve disagreements and make sound decisions together, it could dramatically improve the world across the board. We brought her in to talk about her ideas.<br><br> <b>This interview complements a <a href="https://80000hours.org/problem-profiles/improving-institutional-decision-making/?utm_campaign=podcast__julia-galef&utm_source=80000+Hours+Podcast&utm_medium=podcast">new detailed review</a> of whether and how to follow Julia’s career path.</b> <a href="https://80000hours.org/2017/09/is-it-time-for-a-new-scientific-revolution-julia-galef-on-how-to-make-humans-smarter/?utm_campaign=podcast__julia-galef&utm_source=80000+Hours+Podcast&utm_medium=podcast">Apply for personalised coaching, see what questions are asked when, and read extra resources to learn more.</a><br><br> Julia has hosted the Rationally Speaking podcast since 2010, co-founded the Center for Applied Rationality in 2012, and is currently working for the Open Philanthropy Project on an investigation of expert disagreements.<br><br> In our conversation we ended up speaking about a wide range of topics, including:<br><br> * Her research on how people can have productive intellectual disagreements.<br> * Why she once planned to become an urban designer.<br> * Why she doubts people are more rational than 200 years ago.<br> * What makes her a fan of Twitter (while I think it’s dystopian).<br> * Whether people should write more books.<br> * Whether it’s a good idea to run a podcast, and how she grew her audience.<br> * Why saying you don’t believe X often won’t convince people you don’t.<br> * Why she started a PhD in economics but then stopped.<br> * Whether she would recommend an unconventional career like her own.<br> * Whether the incentives in the intelligence community actually support sound thinking.<br> * Whether big institutions will actually pick up new tools for improving decision-making if they are developed.<br> * How to start out pursuing a career in which you enhance human judgement and foresight.<br><br> <b>Get free, one-on-one career advice to help you improve judgement and decision-making</b><br><br> We’ve helped dozens of people compare their options, get introductions, and find jobs important for the long-run future. <b>If you want to work on any of the problems discussed in this episode, find out if our coaching can help you:</b><br><br> <a href="https://80000hours.org/coaching/?utm_campaign=podcast__julia-galef&utm_source=80000+Hours+Podcast&utm_medium=podcast" title="" class="btn btn-primary">APPLY FOR COACHING</a><br><br> <b>Overview of the conversation</b><br><br> **1m30s** So what projects are you working on at the moment?<br> **3m50s** How are you working on the problem of expert disagreement?<br> **6m0s** Is this the same method as the double crux process that was developed at the Center for Applied Rationality?<br> **10m** Why did the Open Philanthropy Project decide this was a very valuable project to fund?<br> **13m** Is the double crux process actually that effective?<br> **14m50s** Is Facebook dangerous?<br> **17m** What makes for a good life? 
Can you be mistaken about having a good life?<br> **19m** Should more people write books?<br> <a href="https://80000hours.org/2017/09/is-it-time-for-a-new-scientific-revolution-julia-galef-on-how-to-make-humans-smarter/?utm_campaign=podcast__julia-galef&utm_source=80000+Hours+Podcast&utm_medium=podcast">Read more...</a><img src="http://feeds.feedburner.com/~r/80000HoursPodcast/~4/bgI3vVvSUDM" height="1" width="1" alt=""/>
Sep 13, 2017
#6 - Dr Toby Ord on why the long-term future matters more than anything else & what to do about it
02:14:02
Of all the people whose well-being we should care about, only a small fraction are alive today. The rest are members of future generations who are yet to exist. Whether they’ll be born into a world that is flourishing or disintegrating – and indeed, whether they will ever be born at all – is in large part up to us. As such, the welfare of future generations should be our number one moral concern.<br><br> This conclusion holds true regardless of whether your moral framework is based on common sense, consequences, rules of ethical conduct, cooperating with others, virtuousness, keeping options open – or just a sense of wonder about the universe we find ourselves in.<br><br> That’s the view of Dr Toby Ord, a philosophy Fellow at the University of Oxford and co-founder of the effective altruism community. In this episode of the 80,000 Hours podcast Dr Ord makes the case that aiming for a positive long-term future is likely the best way to improve the world.<br><br> <a href="https://80000hours.org/articles/why-the-long-run-future-matters-more-than-anything-else-and-what-we-should-do-about-it/?utm_campaign=podcast__toby-ord&utm_source=80000+Hours+Podcast&utm_medium=podcast">Apply for personalised coaching, see what questions are asked when, and read extra resources to learn more.</a><br><br> We then discuss common objections to long-termism, such as the idea that benefits to future generations are less valuable than those to people alive now, or that we can’t meaningfully benefit future generations beyond taking the usual steps to improve the present.<br><br> Later the conversation turns to how individuals can and have changed the course of history, what could go wrong and why, and whether plans to colonise Mars would actually put humanity in a safer position than it is today.<br><br> This episode goes deep into the most distinctive features of our advice. It’s likely the most in-depth discussion of how 80,000 Hours and the effective altruism community think about the long-term future – and why we so often give it top priority. <br><br> It’s best to subscribe, so you can listen at leisure on your phone, speed up the conversation if you like, and get notified about future episodes. You can do so by searching ‘80,000 Hours’ wherever you get your podcasts.<br><br> <b>Want to help ensure humanity has a positive future instead of destroying itself? We want to help.</b><br><br> We’ve helped 100s of people compare their options, get introductions, and find jobs important for the long-run future. <b>If you want to work on any of the problems discussed in this episode, such as artificial intelligence or biosecurity, <a href="https://80000hours.org/coaching/?utm_campaign=podcast__toby-ord&utm_source=80000+Hours+Podcast&utm_medium=podcast" title="" class="btn btn-primary">find out if our coaching can help you.</a></b><br><br> <b>Overview of the discussion</b><br><br> <b>3m30s</b> - Why is the long-term future of humanity such a big deal, and perhaps the most important issue for us to be thinking about?<br> <b>9m05s</b> - Five arguments that future generations matter<br> <b>21m50s</b> - How bad would it be if humanity went extinct or civilization collapsed? 
<br> <b>26m40s</b> - Why do people start saying such strange things when this topic comes up?<br> <b>30m30s</b> - Are there any other reasons to prioritize thinking about the long-term future of humanity that you wanted to raise before we move to objections?<br> <b>36m10s</b> - What is this school of thought called?<br> <a href=" https://80000hours.org/articles/why-the-long-run-future-matters-more-than-anything-else-and-what-we-should-do-about-it/?utm_campaign=podcast__toby-ord&utm_source=80000+Hours+Podcast&utm_medium=podcast">Read more...</a><br><br><img src="http://feeds.feedburner.com/~r/80000HoursPodcast/~4/nBll-tCQHdk" height="1" width="1" alt=""/>
Sep 06, 2017
#5 - Alex Gordon-Brown on how to donate millions in your 20s working in quantitative trading
01:45:34
Quantitative financial trading is one of the highest paying parts of the world’s highest paying industry. 25 to 30 year olds with outstanding maths skills can earn millions a year in an obscure set of ‘quant trading’ firms, where they program computers with predefined algorithms to allow them to trade very quickly and effectively.<br><br> <b>Update: we're headhunting people for quant trading roles</b><br><br> Want to be kept up to date about particularly promising roles we're aware of for earning to give in quantitative finance? <a href="https://80000hours.typeform.com/to/JiDHyM?int_campaign=blog-post__agb-interview" title="" class="btn btn-primary"><b>Get notified by letting us know here.</b></a><br><br> This makes it an attractive place to work for people who want to ‘earn to give’, and we know several people who are able to donate over a million dollars a year to effective charities by working in quant trading. Who are these people? What is the job like? And is there a risk that their work harms the world in other ways? <br><br> <a href="https://80000hours.org/2017/08/the-life-of-a-quant-trader-how-to-earn-and-donate-millions-within-a-few-years/?utm_campaign=podcast__agb&utm_source=80000+Hours+Podcast&utm_medium=podcast">Apply for personalised coaching, see what questions are asked when, and read extra resources to learn more.</a><br><br> I spoke at length with Alexander Gordon-Brown, who has worked as a quant trader in London for the last three and a half years and donated hundreds of thousands of pounds. We covered:<br><br> * What quant traders do and how much they earn.<br> * Whether their work is beneficial or harmful for the world.<br> * How to figure out if you’re a good personal fit for quant trading, and if so how to break into the industry.<br> * Whether he enjoys the work and finds it motivating, and what other careers he considered.<br> * What variety of positions are on offer, and what the culture is like in different firms.<br> * How he decides where to donate, and whether he has persuaded his colleagues to join him.<br><br> <b>Want to earn to give for effective charities in quantitative trading? We want to help.</b><br><br> We’ve helped dozens of people plan their earning to give careers, and put them in touch with mentors. If you want to work in quant trading, apply for our free coaching service.<br><br> <b><a href="https://80000hours.org/coaching/?utm_campaign=podcast__agb&utm_source=80000+Hours+Podcast&utm_medium=podcast" title="" class="btn btn-primary">APPLY FOR COACHING</a></b><br><br> <b>What questions are asked when?</b><br><br> 1m30s - What is quant trading and how much do they earn?<br> 4m45s - How do quant trading firms manage the risks they face and avoid bankruptcy?<br> 7m05s - Do other traders also donate to charity and has Alex convinced them?<br> 9m45s - How do they track the performance of each trader?<br> 13m00s - What does the daily schedule of a quant trader look like? What do you do in the morning, afternoon, etc?<br> <a href="https://80000hours.org/2017/08/the-life-of-a-quant-trader-how-to-earn-and-donate-millions-within-a-few-years/?utm_campaign=podcast__agb&utm_source=80000+Hours+Podcast&utm_medium=podcast">More...</a><img src="http://feeds.feedburner.com/~r/80000HoursPodcast/~4/e2hi_qiK3I4" height="1" width="1" alt=""/>
Aug 28, 2017
#4 - Howie Lempel on pandemics that kill hundreds of millions and how to stop them
02:35:45
What disaster is most likely to kill more than 10 million human beings in the next 20 years? Terrorism? Famine? An asteroid?<br><br> Actually it’s probably a pandemic: a deadly new disease that spreads out of control. We’ve recently seen the risks with Ebola and swine flu, but they pale in comparison to the Spanish flu, which killed 3% of the world’s population between 1918 and 1920. A pandemic of that scale today would kill 200 million.<br><br> In this in-depth interview I speak to Howie Lempel, who spent years studying pandemic preparedness for the Open Philanthropy Project. We spend the first 20 minutes covering his work at the foundation, then discuss how bad the pandemic problem is, why it’s probably getting worse, and what can be done about it.<br><br> <a href="https://80000hours.org/2017/08/podcast-we-are-not-worried-enough-about-the-next-pandemic/?utm_campaign=podcast__howie-lempel&utm_source=80000+Hours+Podcast&utm_medium=podcast">Full transcript, apply for personalised coaching to help you work on pandemic preparedness, see what questions are asked when, and read extra resources to learn more.</a><br><br> In the second half we go through where you personally could study and work to tackle one of the worst threats facing humanity.<br><br> <em>Want to help ensure we have no severe pandemics in the 21st century? We want to help.</em><br><br> We’ve helped dozens of people formulate their plans, and put them in touch with academic mentors. If you want to work on pandemic preparedness, apply for our free coaching service.<br><br> <a href="https://80000hours.org/coaching/?utm_campaign=podcast__howie-lempel&utm_source=80000+Hours+Podcast&utm_medium=podcast"><b>APPLY FOR COACHING</b></a><br><br> <b>2m</b> - What does the Open Philanthropy Project do? What’s it like to work there?<br> <b>16m27s</b> - What grants did OpenPhil make in pandemic preparedness? Did they work out?<br> <b>22m56s</b> - Why is pandemic preparedness such an important thing to work on?<br> <b>31m23s</b> - How many people could die in a global pandemic? Is Contagion a realistic movie?<br> <b>37m05s</b> - Why the risk is getting worse due to scientific discoveries<br> <b>40m10s</b> - How would dangerous pathogens get released?<br> <b>45m27s</b> - Would society collapse if a billion people die in a pandemic?<br> <b>49m25s</b> - The plague, Spanish flu, smallpox, and other historical pandemics<br> <b>58m30s</b> - How are risks affected by sloppy research security or the existence of factory farming?<br> <b>1h7m30s</b> - What's already being done? Why institutions for dealing with pandemics are really insufficient.<br> <b>1h14m30s</b> - What the World Health Organisation should do but can’t.<br> <b>1h21m51s</b> - What charities do about pandemics and why they aren’t able to fix things<br> <b>1h25m50s</b> - How long would it take to make vaccines?<br> <b>1h30m40s</b> - What does the US government do to protect Americans? 
It’s a mess.<br> <b>1h37m20s</b> - What kind of people do you know work on this problem and what are they doing?<br> <b>1h46m30s</b> - Are there things that we ought to be banning or technologies that we should be trying not to develop because we're just better off not having them?<br> <b>1h49m35s</b> - What kind of reforms are needed at the international level?<br> <b>1h54m40s</b> - Where should people who want to tackle this problem go to work?<br> <b>1h59m50s</b> - Are there any technologies we need to urgently develop?<br> <b>2h04m20s</b> - What about trying to stop humans from having contact with wild animals?<br> <b>2h08m5s</b> - What should people study if they're young and choosing their major; what should they do a PhD in? Where should they study, and with who?<br> <a href="https://80000hours.org/2017/08/podcast-we-are-not-worried-enough-about-the-next-pandemic/?utm_campaign=podcast__howie-lempel&utm_source=80000+Hours+Podcast&utm_medium=podcast">More...</a><img src="http://feeds.feedburner.com/~r/80000HoursPodcast/~4/TSxwAbxiduM" height="1" width="1" alt=""/>
Aug 23, 2017
#3 - Dr Dario Amodei on OpenAI and how AI will change the world for good and ill
01:38:21
Just two years ago OpenAI didn’t exist. It’s now among the most elite groups of machine learning researchers. They’re trying to make an AI that’s smarter than humans and have $1b at their disposal.<br><br> Even stranger for a Silicon Valley start-up, it’s not a business, but rather a non-profit founded by Elon Musk and Sam Altman among others, to ensure the benefits of AI are distributed broadly to all of society. <br><br> I did a long interview with one of its first machine learning researchers, Dr Dario Amodei, to learn about:<br><br> * OpenAI’s latest plans and research progress.<br> * His paper *Concrete Problems in AI Safety*, which outlines five specific ways machine learning algorithms can do dangerous things their designers don’t intend - something OpenAI has to work to avoid.<br> * How listeners can best go about pursuing a career in machine learning and AI development themselves.<br><br> <a href="https://80000hours.org/2017/07/podcast-the-world-needs-ai-researchers-heres-how-to-become-one/?utm_campaign=podcast__amodei&utm_source=80000+Hours+Podcast&utm_medium=podcast">Full transcript, apply for personalised coaching to work on AI safety, see what questions are asked when, and read extra resources to learn more.</a><br><br> 1m33s - What OpenAI is doing, Dario’s research and why AI is important <br> 13m - Why OpenAI scaled back its Universe project <br> 15m50s - Why AI could be dangerous <br> 24m20s - Would smarter than human AI solve most of the world’s problems? <br> 29m - Paper on five concrete problems in AI safety <br> 43m48s - Has OpenAI made progress? <br> 49m30s - What this back flipping noodle can teach you about AI safety <br> 55m30s - How someone can pursue a career in AI safety and get a job at OpenAI <br> 1h02m30s - Where and what should people study? <br> 1h4m15s - What other paradigms for AI are there? <br> 1h7m55s - How do you go from studying to getting a job? What places are there to work? <br> 1h13m30s - If there's a 17-year-old listening here what should they start reading first? <br> 1h19m - Is this a good way to develop your broader career options? Is it a safe move? <br> 1h21m10s - What if you’re older and haven’t studied machine learning? How do you break in? <br> 1h24m - What about doing this work in academia? <br> 1h26m50s - Is the work frustrating because solutions may not exist? <br> 1h31m35s - How do we prevent a dangerous arms race? <br> 1h36m30s - Final remarks on how to get into doing useful work in machine learning<br><img src="http://feeds.feedburner.com/~r/80000HoursPodcast/~4/zcU_WzWmuns" height="1" width="1" alt=""/>
Jul 21, 2017
#2 - Prof David Spiegelhalter on risk, stats and improving understanding of science
00:33:43
Recorded in 2015 by Robert Wiblin with colleague Jess Whittlestone at the Centre for Effective Altruism, and recovered from the dusty 80,000 Hours archives.<br><br> David Spiegelhalter is a statistician at the University of Cambridge and something of an academic celebrity in the UK.<br><br> Part of his role is to improve the public understanding of risk - especially everyday risks we face like getting cancer or dying in a car crash. As a result he’s regularly in the media explaining numbers in the news, trying to help both ordinary people and politicians focus on the important risks we face, and avoid being distracted by flashy risks that don’t actually have much impact.<br><br> <a href="https://80000hours.org/2017/06/podcast-prof-david-spiegelhalter-on-risk-statistics-and-improving-the-public-understanding-of-science/?utm_campaign=podcast__spiegelhalter&utm_source=80000+Hours+Podcast&utm_medium=podcast">Summary, full transcript and extra links to learn more.</a><br><br> To help make sense of the uncertainties we face in life he has had to invent concepts like the microlife, or a 30-minute change in life expectancy. (<a href="https://en.wikipedia.org/wiki/Microlife">https://en.wikipedia.org/wiki/Microlife</a>)<br><br> We wanted to learn whether he thought a lifetime of work communicating science had actually had much impact on the world, and what advice he might have for people planning their careers today.<img src="http://feeds.feedburner.com/~r/80000HoursPodcast/~4/I8UXpzQ0ZII" height="1" width="1" alt=""/>
Jun 21, 2017
#1 - Miles Brundage on the world's desperate need for AI strategists and policy experts
00:55:15
Robert Wiblin, Director of Research at 80,000 Hours speaks with Miles Brundage, research fellow at the University of Oxford's Future of Humanity Institute. Miles studies the social implications surrounding the development of new technologies and has a particular interest in artificial general intelligence, that is, an AI system that could do most or all of the tasks humans could do.<br><br> This interview complements <a href="https://80000hours.org/problem-profiles/positively-shaping-artificial-intelligence/?utm_campaign=podcast__miles_brundage&utm_source=80000+Hours+Podcast&utm_medium=podcast">our profile of the importance of positively shaping artificial intelligence</a> and <a href="https://80000hours.org/articles/ai-policy-guide/?utm_campaign=podcast__miles_brundage&utm_source=80000+Hours+Podcast&utm_medium=podcast">our guide to careers in AI policy and strategy</a><br><br> <a href="https://80000hours.org/2017/06/the-world-desperately-needs-ai-strategists-heres-how-to-become-one/?utm_campaign=podcast__miles_brundage&utm_source=80000+Hours+Podcast&utm_medium=podcast">Full transcript, apply for personalised coaching to work on AI strategy, see what questions are asked when, and read extra resources to learn more. </a><br><br><img src="http://feeds.feedburner.com/~r/80000HoursPodcast/~4/cuhkIKKtM48" height="1" width="1" alt=""/>
Jun 05, 2017
[Archive] Maria Gutierrez and Robert Wiblin on doing good through art (May 2016)
00:23:12
Note that this interview was recorded before we were running a professional podcast. Summary and discussion: <a href="https://80000hours.org/2016/06/interview-with-maria-gutierrez-about-doing-good-through-art/?utm_campaign=podcast__maria_gutierrez&utm_source=80000+Hours+Podcast&utm_medium=podcast">https://80000hours.org/2016/06/interview-with-maria-gutierrez-about-doing-good-through-art/</a><img src="http://feeds.feedburner.com/~r/80000HoursPodcast/~4/uw-lBow1oiI" height="1" width="1" alt=""/>
Jun 02, 2016
[Archive] Dillon Bowen and Roman Duda, on why to do an economics PhD (Jan 2016)
00:26:49
Note that this interview was recorded before we were running a professional podcast. Summary and discussion: <a href="https://80000hours.org/2016/02/plan-change-story-interview-with-dillon-bowen-leader-of-effective-altruism-at-tufts-university/?utm_campaign=podcast__dillon_bowen&utm_source=80000+Hours+Podcast&utm_medium=podcast">https://80000hours.org/2016/02/plan-change-story-interview-with-dillon-bowen-leader-of-effective-altruism-at-tufts-university/</a><img src="http://feeds.feedburner.com/~r/80000HoursPodcast/~4/m6uVcEn2Jb0" height="1" width="1" alt=""/>
Feb 03, 2016
[Archive] Ben West and Ben Todd on donating most of your income from entrepreneurship (Dec 2015)
00:47:35
Note that this interview was recorded before we were running a professional podcast. Summary and discussion here: <a href="https://80000hours.org/2015/12/interview-with-ben-who-raised-eight-figures-for-charity-through-tech-entrepreneurship/?utm_campaign=podcast__ben_west&utm_source=80000+Hours+Podcast&utm_medium=podcast">https://80000hours.org/2015/12/interview-with-ben-who-raised-eight-figures-for-charity-through-tech-entrepreneurship/</a><img src="http://feeds.feedburner.com/~r/80000HoursPodcast/~4/ktagjOdMcg0" height="1" width="1" alt=""/>
Dec 24, 2015
[Archive] Matt Clifford and Ben Todd on doing good by being a startup founder (Dec 2015)
00:43:58
Note that this interview was recorded before we were running a professional podcast. Summary and discussion here: <a href="https://80000hours.org/2015/12/podcast-with-founder-of-entrepreneur-first-about-being-a-startup-founder/?utm_campaign=podcast__matt_clifford&utm_source=80000+Hours+Podcast&utm_medium=podcast">https://80000hours.org/2015/12/podcast-with-founder-of-entrepreneur-first-about-being-a-startup-founder/</a><img src="http://feeds.feedburner.com/~r/80000HoursPodcast/~4/K4x5IwBFzQo" height="1" width="1" alt=""/>
Dec 21, 2015