Moderated Content

By evelyn douek



Category: Technology



Description

Moderated Content from Stanford Law School is podcast content about content moderation, moderated by assistant professor Evelyn Douek. The community standards of this podcast prohibit anything except the wonkiest conversations about the regulation—both public and private—of what you see, hear and do online.

Episodes
New York Attorney General v. Blogging Law Professor re: Online Hate Speech
00:52:44
In the wake of the Buffalo shooting in May, New York passed a law imposing certain obligations on social media networks regarding "hateful conduct" on their services. It went into effect at the start of December and Eugene Volokh, a professor at UCLA Law who runs a legal blog, is challenging the law as unconstitutional. Evelyn sits down with Eugene and Genevieve Lakier from UChicago Law to discuss.
Dec 09, 2022
MC Weekly Update 12/5: THE MODERATED CONTENT FILES
00:31:59

Stanford’s Evelyn Douek and Alex Stamos weigh in on the latest online trust and safety news and developments:

  • Using a powerful AI language model developed by OpenAI, the new ChatGPT tool lets anyone generate short passages of text that are difficult to distinguish from human writing and that draw upon vast amounts of publicly available information. - Janus Rose/ Vice, Ina Fried/ Axios
    • More: ChatGPT has fun and informative uses, from generating recipes to writing funny movie scripts, but it is also easy to abuse, whether to spread misinformation or to solicit tips for getting away with crimes.
  • TikTok and Bumble are adopting a tool developed by Meta with international charity SWGfL’s Revenge Porn Helpline. The tool hashes submitted content so that participating platforms can identify and block non-consensual intimate media; a minimal sketch of the hash-matching flow appears after this list. - Olivia Solon/ Bloomberg News
    • More: Victims must weigh whether the risk of a single human reviewer seeing their intimate image outweighs the harm of its spread across the social media and dating services using the technology. - @oliviasolon
  • Rumble and the Volokh Conspiracy, a blog run by UCLA law professor Eugene Volokh, are challenging New York’s online “hateful conduct” law in a federal lawsuit, claiming it violates First Amendment free expression protections. - Chris Dolmetsch/ Bloomberg News
  • The “Twitter Files” were released in a staggered thread of more than 40 tweets on Friday evening. The thread includes screenshots of Twitter staff’s internal communications and external email correspondence but reveals no smoking gun. Instead, it is most likely to reinforce existing beliefs about the decision to suppress the Hunter Biden laptop story and related content. - Cat Zakrzewski, Faiz Siddiqui/ The Washington Post
  • Twitter CEO Elon Musk disputed news reports on research by advocacy and civil rights groups that found slurs had become more prevalent on the platform. Musk claimed the data actually shows a decrease in the reach of hate speech since his acquisition and said the Twitter safety team will publish weekly reports on the data going forward. - Mohar Chatterjee/ Politico
    • More: As University of California, Berkeley researcher Jonathan Stray points out, both sides can claim they are right depending on the data and measurement of success. More transparency and collaboration could move these efforts in the right direction. That seems unlikely for now, but could be required under the EU’s new digital regulations.
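
For the technically curious, here is a minimal, hypothetical sketch in Python of the hash-matching flow described in the TikTok and Bumble item above: a fingerprint of the image is computed on the victim’s device, only that fingerprint is submitted to a shared database, and participating platforms compare fingerprints of uploaded media against it. The function names and the in-memory set standing in for the shared database are invented for illustration, and the exact SHA-256 hash is a deliberate simplification.

    import hashlib

    # Shared hash list (hypothetical stand-in for the cross-platform database).
    submitted_hashes: set[str] = set()

    def fingerprint(image_bytes: bytes) -> str:
        """Compute a fingerprint of the image locally; only this value leaves the device."""
        return hashlib.sha256(image_bytes).hexdigest()

    def submit_case(image_bytes: bytes) -> None:
        """The victim submits the hash, never the image itself, to the shared database."""
        submitted_hashes.add(fingerprint(image_bytes))

    def should_block(uploaded_bytes: bytes) -> bool:
        """A participating platform checks an upload against the shared hash list."""
        return fingerprint(uploaded_bytes) in submitted_hashes

Because an exact hash only matches byte-identical copies, production systems of this kind reportedly rely on perceptual hashes compared within a similarity threshold, so re-encoded or lightly edited copies can still be caught.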

Join the conversation and connect with Evelyn and Alex on Twitter at @evelyndouek and @alexstamos.

Moderated Content is produced in partnership by Stanford Law School and the Cyber Policy Center. Special thanks to John Perrino for research and editorial assistance.

Like what you heard? Don’t forget to subscribe and share the podcast with friends!

Dec 05, 2022
MC Weekly Update 11/28: Alex the Demon Overlord
00:25:39

Stanford’s Evelyn Douek and Alex Stamos weigh in on the latest online trust and safety news and developments:

  • As protestors against China’s zero-Covid policy fill the streets, images of them fill the internet and China’s censors are struggling to contain them. – Liza Lin, Karen Hao / Wall Street Journal
    • This is partly because the Chinese people have had years of practice at evading censors and know a trick or two. – Paul Mozur / Twitter
  • So China is trying to bury that content with its own spam about escorts, porn and gambling. – Jon Porter / The Verge
  • Elon doesn’t seem too concerned though. He’s too busy picking a fight with Apple. – Elon Musk / Twitter
    • And maybe drawing up plans for his own phone if Apple kicks Twitter out of the app store? – Elon Musk / Twitter
  • Meta published its quarterly adversarial threat report this week, which included information about accounts it took down conducting information operations that had links to the US government. – Meta
  • Alex gives Evelyn an apparently now-weekly update on Stanford football news.

Join the conversation and connect with Evelyn and Alex on Twitter at @evelyndouek and @alexstamos.

Moderated Content is produced in partnership by Stanford Law School and the Cyber Policy Center. Special thanks to John Perrino for research and editorial assistance.

Like what you heard? Don’t forget to subscribe and share the podcast with friends!

Nov 29, 2022
MC Weekly Update 11/21: Bot Populi, Bot Dei
00:28:25

Stanford’s Evelyn Douek and Alex Stamos weigh in on the latest online trust and safety news and developments:

  • How else would Elon Musk decide to reinstate former President Donald Trump’s account than a Twitter poll? Okay, well maybe the content moderation council he proposed to deal with reinstatement decisions. - Faiz Siddiqui, Drew Harwell, Isaac Arnsdorf / The Washington Post
  • Musk’s mind is also made up on conspiracy theorist Alex Jones, whose account will not be reinstated on the platform. - Brian Fung/ CNN
  • Former Twitter trust and safety lead Yoel Roth penned a New York Times opinion piece on why he left Twitter and the influence that app store operators have on content moderation. - Yoel Roth/ The New York Times (commentary)
  • The EU might just scare Musk straight. After the Financial Times reported the headline “Elon Musk’s Twitter on ‘collision course’ with EU regulators,” European Commission Executive Vice President Margrethe Vestager responded that “We are never on a collision course with anyone because we consider ourselves a mountain.” - Javier Espinoza/ Financial Times, Silvia Amaro/ CNBC
  • Mastodon might not be the paradise we hoped we could toot freely and safely in. Content moderation is hard, and there’s less control or quality assurance in a federated model, as Block Party CEO Tracy Chou knew all too well even before she had a post blocked and began facing torrents of harassment. - @triketora, @mmasnick
  • A Mastodon server administrator is deciding who is a journalist while other server operators block those verified journalists from being seen on their “instances.” - Mathew Ingram/ Columbia Journalism Review
  • Meta “has fired or disciplined more than two dozen employees and contractors over the last year whom it accused of improperly taking over user accounts, in some cases allegedly for bribes.” - Kirsten Grind, Robert McMillan/ The Wall Street Journal
  • FBI Director Chris Wray testified that TikTok poses a national security challenge for the United States because the Chinese government may be able to access extensive data collected by the app or even use recommendation algorithms to push the country’s influence operations on users. - Chris Strohm, Daniel Flatley/ Bloomberg News, David Shepardson/ Reuters, Suzanne Smalley/ CyberScoop
  • Sport ball is happening in Qatar “without controversy,” and Meta is using the moment to highlight its recently introduced anti-harassment features on Instagram to block or limit offensive messages aimed at players and encourage fans to think twice before sending potentially abusive content. - Jess Weatherbed/ The Verge, Meta

Join the conversation and connect with Evelyn and Alex on Twitter at @evelyndouek and @alexstamos.

Moderated Content is produced in partnership by Stanford Law School and the Cyber Policy Center. Special thanks to John Perrino for research and editorial assistance.

Like what you heard? Don’t forget to subscribe and share the podcast with friends!

Nov 21, 2022
“Elon puts rockets into space, he's not afraid of the FTC”
00:51:24
Come for the discussion of whether Musk is going to find himself in hot water with the FTC, stay for the discussion of privacy and data security regulation more generally. Evelyn discusses Twitter’s data security problems, and what they say about privacy regulation, with Whitney Merrill, the Data Protection Officer and Privacy Counsel at Asana and a long-time privacy lawyer whose experience includes serving as an attorney at the FTC, and Riana Pfefferkorn, a Research Scholar at the Stanford Internet Observatory.
Nov 17, 2022
MC Weekly Update 11/14: Elections and Elon, again
00:33:57

Stanford’s Evelyn Douek and Alex Stamos weigh in on the latest online trust and safety news and developments:

Join the conversation and connect with Evelyn and Alex on Twitter at @evelyndouek and @alexstamos.

Moderated Content is produced in partnership by Stanford Law School and the Cyber Policy Center. Special thanks to John Perrino for research and editorial assistance.

Like what you heard? Don’t forget to subscribe and share the podcast with friends!

Nov 15, 2022
MC Weekly News Roundup 11/7: The Elon Musk JD Program
00:32:55

Stanford’s Evelyn Douek and Alex Stamos weigh in on the latest online trust and safety news and developments:

  • Elon Musk announced that Twitter will start charging $8 for users to keep or gain blue check marks on the platform, changing the meaning of the symbol to indicate subscribers to the “Twitter Blue” service. The company then delayed launch until after the midterms. - Ines Kagubare/ The Hill, @elonmusk
  • Blue-chip companies including General Mills, Pfizer, and Volkswagen have all paused advertising on Twitter over concerns that Musk will limit content moderation on the platform. - Suzanne Vranica, Patience Haggin/ The Wall Street Journal
  • After Musk single-handedly hosted a call with civil society and advocacy organizations on Twitter’s behalf, many of the participants were among the more than 60 advocacy and civil society organizations that called for an ad boycott of the platform. - Rebecca Klar/ The Hill, Rebecca Kern, Mark Scott/ Politico
  • Elon Musk responded to a right-wing influencer’s tweet suggesting he “has tortious interference claims” against activist groups involved in the ad boycott campaign. (spoiler: he doesn’t) - @elonmusk, Mark Frauenfelder/ Boing Boing
  • People are leaving Twitter and fleeing to… Mastodon? - Rachel Metz/ CNN
  • Rumble has suspended services in France, blaming government rules banning Russian state media and government accounts. - @rumblevideo
  • Rumble is building its own cloud services, a move similar to Parler’s, though serving highly trafficked video content will require operating at a much larger scale. - Kaitlyn Tiffany/ The Atlantic, Taylor Hatmaker/ TechCrunch

  • “The Intercept had a big story this week that is making the rounds, suggesting that ‘leaked’ documents prove the DHS has been coordinating with tech companies to suppress information. The story has been immediately picked up by the usual suspects, claiming it reveals the ‘smoking gun’ of how the Biden administration was abusing government power to censor them on social media.” - Mike Masnick/ Techdirt
    • More: “The only problem? It shows nothing of the sort.”
  • The Election Integrity Partnership published a blog on rumors and false and misleading narratives to expect on and after Election Day. - Election Integrity Partnership

  • India is amending an IT law that regulates social media content moderation, adding a panel of three government-appointed members to review user grievances against social media platforms. - Manish Singh, Jagmeet Singh/ TechCrunch, Scroll
  • A revised Online Safety Bill is expected to head back to the UK House of Commons later this month with amendments that limit the government from forcing platforms to take action on “harmful but lawful” content. - Dev Kundaliya/ Computing, Chloe Chaplain/ i newspaper

Join the conversation and connect with Evelyn and Alex on Twitter at @evelyndouek and @alexstamos.

Moderated Content is produced in partnership by Stanford Law School and the Cyber Policy Center. Special thanks to John Perrino for research and editorial assistance.

Like what you heard? Don’t forget to subscribe and share the podcast with friends!

Nov 07, 2022
MC Weekly News Roundup Halloween Edition
00:27:11

SHOW NOTES

Stanford’s Evelyn Douek and Alex Stamos weigh in on the latest online trust and safety news and developments:

  • Elon Musk has been busy since officially acquiring Twitter.
    • He tweeted that the company will form “a content moderation council with widely diverse viewpoints.” That sparked comparisons to Meta’s Oversight Board, while others noted that Twitter already has a Trust and Safety Council and wondered whether Musk was aware of it.
    • He also said no major decisions will be made about reinstating accounts or changing content rules until that body comes together and reiterated in a quote tweet that no changes have been made to Twitter’s content moderation policies, likely in response to a reported rise in specific hate speech terms on the platform. - Emma Roth/ The Verge
  • Indian authorities conducted searches at The Wire newsroom and the homes of four editors after a complaint was filed by the ruling party official at the center of reporting that was retracted by the news publication. - Scroll
  • The Election Integrity Partnership published an analysis of social media platform policies finding that many election rules are vague and lack transparency for how they are enforced. - Election Integrity Partnership
  • Elon Musk tweeted and then deleted a link to a conspiracy theory about the Paul Pelosi attack in reply to a tweet from Hillary Clinton. - Gina Martinez/ CBS News, Kurtis Lee/ The New York Times, Elizabeth Dwoskin, Faiz Siddiqui/ The Washington Post
  • Meta was fined nearly $25 million by Washington state for violating campaign finance disclosure laws and ordered to pay the state’s legal fees. - Associated Press, Rebecca Falconer/ Axios, Eli Sanders
  • The Digital Services Act (DSA) was published in the Official Journal of the European Union. The publication provides the final text of the DSA and begins the countdown for the DSA to enter into force and its application for large and then all covered platforms and search engines. - Luca Bertuzzi/ Euractiv

Join the conversation and connect with Evelyn and Alex on Twitter at @evelyndouek and @alexstamos.

Moderated Content is produced in partnership by Stanford Law School and the Cyber Policy Center. Special thanks to John Perrino for research and editorial assistance.

Like what you heard? Don’t forget to subscribe and share the podcast with friends!

Oct 31, 2022
Musk Flips the Bird
00:34:49
Evelyn and Alex talk about, what else, Musk’s acquisition of Twitter. He says he’s freed the bird, but there’s a whole bunch of restraints he clearly hasn’t thought about. He’s got some not-so-fun meetings and phone calls coming up.
Oct 29, 2022
Content Moderation in the Stack
01:04:25
When we talk about content moderation, we often focus on companies at the application layer of the internet, like the Facebooks and Twitters of the world. But there are a whole bunch of other companies in the internet stack that have the power to knock things offline. So what is similar or different about content moderation when it moves into the infrastructure layers of the internet? Evelyn spoke with Alissa Starzak, the Vice President and Global Head of Public Policy at Cloudflare, and Emma Llanso, the Director of CDT’s Free Expression Project, to explore this increasingly pressing question.
Oct 27, 2022
MC Weekly News Roundup 10/24: Fun Facts about Railroads
00:26:40

SHOW NOTES

Stanford’s Evelyn Douek and Alex Stamos weigh in on the latest online trust and safety news and developments:

  • The Wire retracted recent coverage of Meta and will conduct an internal review of past coverage by staff involved with the reporting. - The Wire
  • French police are investigating severed fiber-optic cables that disrupted internet and phone services in the Marseille area. Alex urges caution before jumping to any conclusions. - John Leicester/ Associated Press
  • Turkey's parliament voted to adopt a law that could send social media users to jail for up to three years for spreading false information to "create fear and disturb public order" despite free speech and media freedom concerns. - Reuters
  • Brazilian authorities granted the country’s elections chief, who also sits on the supreme court, the power to order online platforms to remove content. - Jack Nicas/ The New York Times
  • Kiwi Farms had come back online at its original URL over the last month, but the site is down again. - Ellie Hall/ BuzzFeed
  • The Republican National Committee sued Google over alleged spam filtering bias. It still has not enrolled in a new pilot program Google created with FEC approval to address those concerns. - Sara Fischer, Ashley Gold/ Axios
  • Elon may very well buy Twitter — could an alternative platform pop up? - Perry Bacon Jr./ The Washington Post (commentary)

Join the conversation and connect with Evelyn and Alex on Twitter at @evelyndouek and @alexstamos.

Moderated Content is produced in partnership by Stanford Law School and the Cyber Policy Center. Special thanks to John Perrino for research and editorial assistance.

Like what you heard? Don’t forget to subscribe and share the podcast with friends!

Oct 25, 2022
MC's Weekly Update: Down to The Wire v. Meta in India
00:30:34

SHOW NOTES

Stanford’s Evelyn Douek and Alex Stamos weigh in on the latest online trust and safety news and developments:

  • An article with bombshell allegations against Meta, the parent company of Instagram and Facebook, appears to be based on forgeries, but the news outlet continues to stand by the reporting and now claims a technical expert at the publication was hacked. - Aditi Agrawal/ newslaundry, OpIndia
    • More: Last week, an article was published by The Wire, a nonprofit Indian digital news organization, claiming an internal Instagram report revealed an official in charge of social media for India’s ruling party, the BJP, had special privileges to report pieces of content to Instagram and have them taken down automatically.
      • Meta spokesperson Andy Stone denied the report, saying that was not how the XCheck program worked and that “the underlying documentation appears to be fabricated.”
    • But Wait, There’s More: The next day, The Wire published a new article claiming to have an email in which Meta’s Stone asked employees how the document leaked.
      • Meta CISO Guy Rosen denied the allegations and explained how he determined the evidence and email were forgeries.
    • Then: This weekend, The Wire released another story standing by their reporting with evidence that the internal email and report URL were real. The story included a video explanation of their technical analysis.
    • We’re Still Not Done: Meta released an updated blog post debunking the purported internal system shown in The Wire’s video as an external account created after the story was reported.
      • The Wire responded in a statement saying Meta keeps denying their reporting in an effort to get them to publish more information that would reveal their sources, but they “are not prepared to play this game any further.” The statement was later edited to delete the description of a “personal” relationship with a source.
    • Got All That? Here’s Some Context: India is pushing ahead with legislation that would create a government-appointed panel to review user complaints about social media content moderation decisions. - Megha Mandavia/ The Wall Street Journal
  • Ye, the artist formerly known as Kanye West, has reached an agreement to buy the conservative social media platform Parler. The move marks a growing trend of billionaires buying social media companies when their posts are moderated. - Ryan Browne/ CNBC, Marlene Lenthang/ NBC News, Bobby Allyn/ NPR, Kelly Hooper/ Politico
  • The Katmai National Park and Preserve’s Fat Bear Week bracket voting tournament was marred by an attempt to artificially inflate votes for Bear 435 Holly over Bear 747 in the semifinal round. Luckily, the organizers caught the fishy business and preserved the sanctity of the tournament, which drew a record of more than one million total votes. - Miles Klee/ Rolling Stone

Join the conversation and connect with Evelyn and Alex on Twitter at @evelyndouek and @alexstamos.

Moderated Content is produced in partnership by Stanford Law School and the Cyber Policy Center. Special thanks to John Perrino for research and editorial assistance.

Like what you heard? Don’t forget to subscribe and share the podcast with friends!

Oct 17, 2022
The Supreme Court Takes up Section 230
00:53:17
Earlier this month, the Supreme Court granted cert in two cases concerning the scope of platform liability for content on their services: Gonzalez v. Google, about whether platforms lose section 230 immunity when they recommend content to users, and Twitter v. Taamneh, about whether platforms can be found to have aided and abetted terrorism if they are found to have been insufficiently aggressive in removing terrorist content from their sites. The cert grants were a surprise, and the cases are complicated. Evelyn sat down with Daphne Keller, the podcast’s Supreme Court Correspondent, to dig into the details.
Oct 13, 2022
MC’s Weekly Update: Everyone’s Interested in Content Moderation
00:24:30

Stanford’s Evelyn Douek and Alex Stamos weigh in on the latest online trust and safety news and developments:

  • The Supreme Court agreed to hear two cases that could determine the scope of liability for websites and social media platforms that host and promote user content. - Rebecca Kern/ Politico, Rachel Lerman/ The Washington Post, David Ingram/ NBC News
    • Gonzalez v. Google is the case getting the most attention because somehow the words “Section 230” have become clickbait — quite an achievement for a random provision of federal law. The question in Gonzalez is whether platforms lose Section 230 protections for content that they promote. The family of a victim of the 2015 Paris terrorist attacks brought the suit.
    • But we also really want to highlight Twitter Inc. v. Taamneh, which is about whether platforms can be found to have “aided and abetted” terrorism by having terrorist content on their services. The case was brought by the family of a victim of a 2017 terrorist attack in Istanbul which claims Twitter, Google, and Facebook aided and abetted terrorism by allowing the Islamic State on their platforms in violation of the Anti-Terrorism Act.
  • Meta took down influence operations linked to China and Russia. The Chinese campaign was the first to target U.S. politics ahead of the midterms, but was clearly fake and had low engagement. The larger Russian network replicated media organizations to spread pro-Kremlin narratives about the war in Ukraine. - Steven Lee Myers/ The New York Times, Donie O'Sullivan/ CNN, Ben Nimmo/ Meta, Nika Aleksejeva, Roman Osadchuk, Sopo Gelava, Jean Le Roux, Mattia Caniglia, Daniel Suárez Pérez, Alyssa Kann/ DFRLab
  • Spotify announced it is acquiring content moderation company Kinzen, bringing expertise and proprietary tools in house to improve trust and safety. - Sarah Perez/ TechCrunch
  • PayPal is facing blowback after proposing rules that would have allowed it to fine users $2,500 for promoting misinformation — which the online payment service has since called an error. - Cristiano Lima/ The Washington Post
  • California passed a “cyberflashing” law that allows recipients of unwanted sexual imagery to take legal action against the sender for up to $30,000 in civil damages. California is the third state to pass a law providing legal recourse for this form of sexual harassment and abuse. - Cristiano Lima/ The Washington Post
    • More: The dating app Bumble played a significant role pushing for the new law. The app requires women to send the first messages to matches in an attempt to create a better dating experience.
    • Context: The new law may be a sign of a trend across state legislatures, which are increasingly passing measures against online harms and abuse. A Bumble executive told The Washington Post that the company plans to push for similar legislation in Maryland, New York and D.C.

Join the conversation and connect with Evelyn and Alex on Twitter at @evelyndouek and @alexstamos.

Moderated Content is produced in partnership by Stanford Law School and the Cyber Policy Center. Special thanks to John Perrino for research and editorial assistance.

Like what you heard? Don’t forget to subscribe and share the podcast with friends!

Oct 11, 2022
Texas vs. Platforms … vs. The First Amendment
01:03:59
Last week the Fifth Circuit upheld a Texas social media law that, among other things, prevents platforms from discriminating against users based on their viewpoint. The leading opinion declared that a bunch of things we thought we knew about how the First Amendment and content moderation work are wrong. Next stop: the Supreme Court. evelyn talks with Daphne Keller, director of the Program on Platform Regulation at Stanford's Cyber Policy Center, and Genevieve Lakier, Professor of Law and the Herbert and Marjorie Fried Teaching Scholar at the University of Chicago, about what the ruling said and what it means—to the extent that’s decipherable.
Sep 22, 2022