Emoji as a Predictor
21:25
https://traffic.libsyn.com/secure/dataskeptic/emoji-as-a-predictor.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/4/9/b/c/49bc44c7f6c7ef95bafc7308ab683e82/Physically_Distributed_6.jpg
57912689118
4f648ef2-f0ac-4180-8e73-bdc7acc485d0
Emojis are arguably one of the most effective ways to express emotions when texting. In today’s episode, Xuan Lu shares her research on the use of emojis by developers. She explains how the study of emojis can track the emotions of remote workers and predict future behavior. Listen to find out more!
|
May 23, 2022 |
Polarizing Trends in the Gig Economy
46:17
https://traffic.libsyn.com/secure/dataskeptic/polarizing-trends-in-the-gig-economy.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/3/3/e/6/33e66133118d484d27a2322813b393ee/Physically_Distributed_5.jpg
57521590895
63ab8f67-9681-427e-a81f-64690e8a615d
On the show today, Fabian Braesemann, a research fellow at the University of Oxford, joins us to discuss his study analyzing the gig economy. He revealed the trends he discovered since remote work became mainstream, the factors causing spatial polarization and some downsides of the gig economy. Listen to learn what he found.
|
May 16, 2022 |
Remote Learning in Applied Engineering
25:15
https://traffic.libsyn.com/secure/dataskeptic/remote-learning-in-applied-engineering.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/4/6/c/f/46cfe3e1149598515f2e77a3093c12a1/Physically_Distributed_4.jpg
57521251776
4a9f3bd6-f25e-4023-a899-94068924deb2
On the show today, we interview Mouhamed Abdulla, a professor of Electrical Engineering at Sheridan Institute of Technology. Mouhamed joins us to discuss his study on remote teaching and learning in applied engineering. He discusses how he embraced the new approach after the pandemic, the challenges he faced and how he tackled them. Listen to find out more. Click here for additional show notes on our website! Thanks to our sponsor! https://neptune.ai/ Log, store, query, display, organize, and compare all your model metadata in a single place
|
May 12, 2022 |
Remote Productivity
29:48
https://traffic.libsyn.com/secure/dataskeptic/remote-productivity.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/f/b/2/5/fb252556ca1446efa04421dee9605cbd/Physically_Distributed_3.jpg
57218896368
1f5180c8-d360-4df6-af1f-16a7bbbf1bcd
It is difficult to estimate the effects of remote work across the board. Darja Šmite, who speaks with us today, is a professor of Software Engineering at the Blekinge Institute of Technology. In her recently published paper, she analyzed data on several companies' activities before and after remote working became prevalent. She discusses the results, the reasons behind them, and some subtle drawbacks of remote working. Check it out! Click here for additional show notes on our website!
|
May 09, 2022 |
Does Remote Learning Work?
48:10
https://traffic.libsyn.com/secure/dataskeptic/does-remote-learning-work.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/5/6/6/e/566e2254277a84cd88c4a68c3ddbc4f2/Physically_Distributed.jpg
56496178266
19a759cb-dd12-4292-8477-1f7bbbbbf3af
We explore this complex question in two interviews today. First, Kasey Wagoner describes 3 approaches to remote lab sessions and an analysis of which was the most instrumental to students. Second, Tahiya Chowdhury shares insights about the specific features of video-conferencing platforms that are lacking in comparison to in-person learning. Click here for additional show notes on our website! Thanks to our sponsor! ClearML is an open-source MLOps solution users love to customize, helping you easily Track, Orchestrate, and Automate ML workflows at scale.
|
May 01, 2022 |
Covid-19 Impact on Bicycle Usage
31:12
https://traffic.libsyn.com/secure/dataskeptic/covid-19-impact-on-bicycle-usage.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/1/b/3/c/1b3c19b100012b6ae5bbc093207a2619/Physically_Distributed_2.jpg
55818722277
fbc58a39-200f-490b-9278-bc7f5131e218
In this episode, we speak with Abdullah Kurkcu, a Lead Traffic Modeler. Abdullah joins us to discuss his recent study on the effect of COVID-19 on bicycle usage in the US. He walks us through the data gathering process, data preprocessing, feature engineering, and model building. Abdullah also shares his results and key takeaways from the study. Listen to find out more. Click here for additional show notes on our website. Thanks to our sponsor! Astrato is a modern BI and analytics platform built for the Snowflake Data Cloud. A next-generation live query data visualization and analytics solution, empowering everyone to make live data decisions.
|
Apr 25, 2022 |
Learning Digital Fabrication Remotely
33:31
https://traffic.libsyn.com/secure/dataskeptic/learning-digital-fabrication-remotely.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/7/e/0/07e04fd9ff95c94be5bbc093207a2619/Jennifer_Jacobs__Nadya_Peek.png
55505296928
3c131d65-e6ab-41d0-8565-ee3c2ea2fcf9
Today, we are joined by Jennifer Jacobs and Nadya Peek, who discuss their experience teaching remote classes for a course that is largely hands-on. The discussion focuses on digital fabrication, why it is important, the prospects for the future, the challenges of remote lectures, and everything in between. Click here for additional show notes on our website! Thanks to our sponsor! https://neptune.ai/ Log, store, query, display, organize, and compare all your model metadata in a single place
|
Apr 22, 2022 |
Remote Software Development
37:33
https://traffic.libsyn.com/secure/dataskeptic/remote-software-development.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/1/4/4/e/144e32c7bc89a6f0bafc7308ab683e82/denae-ford.png
55087072823
4721d0b5-adfb-42b3-9fa4-4fca5522a1f0
Today, we are joined by Denae Ford, a Senior Researcher at Microsoft Research and an Affiliate Assistant Professor at the University of Washington. Denae discusses her research on remote work and its cumulative impact on workers, focusing on how COVID-19 has affected the working practices of software engineers and the emerging challenges it brings. Click here to access additional show notes on our website! Thanks to our sponsor!
|
Apr 18, 2022 |
Quantum K-Means
39:52
https://traffic.libsyn.com/secure/dataskeptic/quantum-k-means.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/6/0/b/7/60b79a618a32f222e55e3c100dce7605/jonas_Landman_.jpg
54309974901
c146a421-8478-4fd8-997c-82579ae8bcad
In this episode, we interview Jonas Landman, a postdoctoral researcher at the University of Edinburgh. Jonas discusses his study of quantum learning, in which he recreated the conventional k-means clustering algorithm and the spectral clustering algorithm using quantum computing. Click here to access additional show notes on our website!
|
Apr 11, 2022 |
K-Means in Practice
30:41
https://traffic.libsyn.com/secure/dataskeptic/k-means-in-practice.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/9/e/6/d/9e6db8e7660d917888c4a68c3ddbc4f2/Mujtaba_Anwer.jpg
53442429533
8135f8ed-a257-4a3d-9565-12f151a3756d
K-means is widely used in real-life business problems. In this episode, Mujtaba Anwer, a researcher and Data Scientist, walks us through some use cases of k-means. He also speaks extensively on how to prepare your data for clustering, find the best number of clusters to use, and turn the ‘abstract’ result into real business value. Listen to learn. Click here to access additional show notes on our website! Thanks to our sponsor! ClearML is an open-source MLOps solution users love to customize, helping you easily Track, Orchestrate, and Automate ML workflows at scale.
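For listeners who want to try the "how many clusters?" step themselves, below is a minimal sketch (our own illustration, not code from the episode) that picks k by maximizing the silhouette score with scikit-learn:

```python
# A minimal sketch: choose k by maximizing the silhouette score.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=300, centers=4, random_state=0)  # toy data

scores = {}
for k in range(2, 9):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = silhouette_score(X, labels)  # higher = better-separated clusters

best_k = max(scores, key=scores.get)
print(best_k, round(scores[best_k], 3))  # expect k near the 4 generating blobs
```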
|
Apr 04, 2022 |
Fair Hierarchical Clustering
34:26
https://traffic.libsyn.com/secure/dataskeptic/fair-hierarchical-clustering.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/4/9/c/a/49ca4bbe9814533b88c4a68c3ddbc4f2/Anshuman_Chhabra.jpg
52803868461
8eae9047-9691-4af5-965c-754e8ac79b93
Building a fair machine learning model has become a critical consideration in today’s world. In this episode, we speak with Anshuman Chhabra, a Ph.D. candidate in Computer Networks. Chhabra joins us to discuss his research on building fair machine learning models and why it is important. Find out how he modeled the problem and the results he found. Click here to access additional show notes on our website! Thanks to our sponsor! https://astrato.io Astrato is a modern BI and analytics platform built for the Snowflake Data Cloud. A next-generation live query data visualization and analytics solution, empowering everyone to make live data decisions.
|
Mar 28, 2022 |
Matrix Factorization For k-Means
30:07
https://traffic.libsyn.com/secure/dataskeptic/matrix-factorization-for-k-means.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/6/0/6/5/60651190f9e4b5a8e55e3c100dce7605/Sibylle_Hess_cover.jpg
52072792104
721ad75a-5240-4d4c-a273-762ab4675e5c
Many people know k-means clustering as a powerful clustering technique, but not all listeners will be as familiar with spectral clustering. In today’s episode, Sibylle Hess from the Data Mining group at TU Eindhoven joins us to discuss her work around spectral clustering and how its results could potentially drive a major shift away from conventional neural networks. Listen to learn about her findings. Visit our website for additional show notes. Thanks to our sponsor, Weights & Biases
|
Mar 21, 2022 |
Breathing K-Means
42:55
https://traffic.libsyn.com/secure/dataskeptic/breathing-k-means.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/9/b/2/7/9b2726612e930e64e5bbc093207a2619/Bernd_Fritzke_.jpg
51264266710
0b9d687e-9eef-4a69-8ea3-9bbc85d725a1
In this episode, we speak with Bernd Fritzke, a seasoned financial expert and Data Science researcher, about his recent research: the breathing k-means algorithm. Bernd discusses the perks of the algorithm and what makes it stand out from other k-means variants. He covers its working principle in depth, along with the subtle but impactful features that enable it to produce top-notch results with low computational resources. Listen to learn about this algorithm.
|
Mar 14, 2022 |
Power K-Means
32:38
https://traffic.libsyn.com/secure/dataskeptic/power-k-means.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/8/a/4/5/8a459f127fd322bc27a2322813b393ee/jason_xu.png
50464804034
d918b45d-2262-471e-84a3-2afc306b4511
In today’s episode, Jason Xu, an Assistant Professor of Statistical Science at Duke University, talks about his research on power k-means. Power k-means is a newly developed algorithm by Jason and his team that aims to solve the problem of local minima in classical k-means without demanding heavy computational resources. Listen to find out the outcome of Jason's study. Click here to access additional show notes on our website! Thanks to our Sponsors: ClearML is an open-source MLOps solution users love to customize, helping you easily Track, Orchestrate, and Automate ML workflows at scale. https://clear.ml Springboard offers end-to-end online data career programs that encompass data science, data analytics, data engineering, and machine learning engineering.
|
Mar 07, 2022 |
Explainable K-Means
25:53
https://traffic.libsyn.com/secure/dataskeptic/explainable-k-means.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/a/0/8/9/a089160d66166dcb88c4a68c3ddbc4f2/Lucas_Murtinho.png
50043878013
9782813d-d7c0-4fea-a52a-8cf70d501356
In this episode, Kyle interviews Lucas Murtinho about the paper "Shallow decision trees for explainable k-means clustering," on the use of decision trees to help explain clustering partitions. Thanks to our Sponsors: ClearML is an open-source MLOps solution users love to customize, helping you easily Track, Orchestrate, and Automate ML workflows at scale.
|
Mar 03, 2022 |
Customer Clustering
22:03
https://traffic.libsyn.com/secure/dataskeptic/customer-clustering.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/9/6/0/3/960329246896d62fa04421dee9605cbd/Ehsan_Barkhordar.png
49710251893
f6cdf71a-baeb-4ba9-9591-d25469418895
Have you ever wondered how you can use clustering to extract meaningful insight from single-feature time-series data? In today’s episode, Ehsan speaks about his recent research on actionable feature extraction using clustering techniques. Want to find out more? Listen to discover the methodologies he used for his research and the corresponding results. Visit our website for extended show notes! https://clear.ml/ ClearML is an open-source MLOps solution users love to customize, helping you easily Track, Orchestrate, and Automate ML workflows at scale.
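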
|
Feb 28, 2022 |
k-means Image Segmentation
23:01
https://traffic.libsyn.com/secure/dataskeptic/k-means-image-segmentation.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/4/b/4/e/4b4ed08cfae86366e55e3c100dce7605/yoshi_episode_2.png
48963811269
aabcc384-d247-4bbf-979b-4ff0550e2932
Linh Da joins us to explore how image segmentation can be done using k-means clustering. Image segmentation involves dividing an image into a distinct set of segments. One approach is to do this purely on color, in which case k-means clustering is a good option. Thanks to our sponsor, Nomad Data. [Image: k-means segmentation results for the same image with values of 2, 4, 6, and 8 for k.]
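As a rough illustration of the color-only approach (our own sketch, not code from the episode), the following uses scikit-learn's KMeans to segment a bundled sample image by pixel color:

```python
# A minimal sketch of color-based image segmentation with k-means.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_sample_image

image = load_sample_image("china.jpg")       # (H, W, 3) uint8 array
pixels = image.reshape(-1, 3).astype(float)  # one row per pixel, RGB features

k = 4  # try 2, 4, 6, and 8, as in the episode's example
kmeans = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)

# Replace each pixel with its cluster's centroid color to visualize segments.
segmented = kmeans.cluster_centers_[kmeans.labels_]
segmented = segmented.reshape(image.shape).astype(np.uint8)
```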
|
Feb 22, 2022 |
Tracking Elephant Clusters
26:27
https://traffic.libsyn.com/secure/dataskeptic/tracking-elephant-clusters.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/c/1/d/9/c1d99c99fb6593c4e55e3c100dce7605/gregory_glatzer_copy.png
48633847685
27921b40-c0d9-4d00-8651-5e3ad7bf8034
In today’s episode, Gregory Glatzer explains his machine learning project involving the prediction of elephant movement and settlement, in a bid to limit the activities of poachers. He used two machine learning algorithms, DBSCAN and k-means clustering, at different stages of the project. Listen to learn why these two techniques were useful and what conclusions could be drawn. Click here to see additional show notes on our website! Thanks to our sponsor, Astrato
|
Feb 18, 2022 |
k-means clustering
24:22
https://traffic.libsyn.com/secure/dataskeptic/k-means-clustering.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
48145141324
eccbc657-0bb0-4cdb-a7b4-761a206bea27
Welcome to our new season, Data Skeptic: k-means clustering. Each week will feature an interview or discussion related to this classic algorithm, its use cases, and analysis. This episode is an overview of the topic presented in several segments.
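As a companion to the overview, here is a minimal numpy sketch of Lloyd's algorithm, the classic alternating procedure behind k-means (an illustration of ours; it assumes no cluster empties out, which holds for this toy data):

```python
# Plain Lloyd's algorithm: alternate assignment and centroid-update steps.
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]  # init from data
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid.
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(axis=2), axis=1)
        # Update step: each centroid moves to the mean of its assigned points.
        new_centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, labels

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(3, 1, (50, 2)), rng.normal(-3, 1, (50, 2))])
centers, labels = kmeans(X, k=2)
print(centers)  # should land near (3, 3) and (-3, -3)
```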
|
Feb 14, 2022 |
Snowflake Essentials
46:43
https://traffic.libsyn.com/secure/dataskeptic/snowflake-essentials.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/9/6/8/5/968570beb8ac503c16c3140a3186d450/frank_bell.jpg
47352146374
9cfca113-b85c-45e3-93ef-6cc56093fdd0
Frank Bell, Snowflake Data Superhero and SnowPro, joins us today to talk about his book “Snowflake Essentials: Getting Started with Big Data in the Cloud.” Thanks to our Sponsors: - Find Better Data Faster with Nomad Data. Visit nomad-data.com
- Visit Springboard and use promo code DATASKEPTIC to receive a $750 discount
|
Feb 07, 2022 |
Explainable Climate Science
34:50
https://traffic.libsyn.com/secure/dataskeptic/explainable-climate-science.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/4/7/1/047139801cd85dac88c4a68c3ddbc4f2/labe.jpg
46707461216
1c05f3f0-f923-4ca8-87c8-239fc5734c28
|
Jan 31, 2022 |
Energy Forecasting Pipelines
43:21
https://traffic.libsyn.com/secure/dataskeptic/energy-forecasting-pipelines.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/4/0/5/0405fca9c43a75adbafc7308ab683e82/erin_boyle.jpg
46104743586
1666c574-03eb-4334-8bdf-ae01089267b1
Erin Boyle, the Head of Data Science at Myst AI, joins us today to talk about her work with Myst AI, a time series forecasting platform and service with the objective for positively impacting sustainability. https://docs.myst.ai/docs Visit Weights and Biases at wandb.me/dataskeptic Find Better Data Faster with Nomad Data. Visit nomad-data.com
|
Jan 24, 2022 |
Matrix Profiles in Stumpy
39:09
https://traffic.libsyn.com/secure/dataskeptic/matrix-profiles-in-stumpy.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/4/7/5/04759934d13edcb2d959afa2a1bf1c87/Sean_Law.jpg
45314512314
3c67313d-1eb3-4773-93d2-bbb1d23c68d6
Sean Law, Principal Data Scientist, R&D at a Fortune 500 company, comes on to talk about his creation of the STUMPY Python library. Sponsored by Hello Fresh and mParticle: Go to Hellofresh.com/dataskeptic16 for up to 16 free meals AND 3 free gifts! Visit mparticle.com to learn how teams at Postmates, NBCUniversal, Spotify, and Airbnb use mParticle’s customer data infrastructure to accelerate their customer data strategies.
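For listeners who want a feel for what a matrix profile gives you, here is a minimal STUMPY sketch (our toy example, not from the episode): the profile's minimum marks a motif (a repeated pattern) and its maximum marks a discord (an anomaly).

```python
# A minimal sketch of computing a matrix profile with STUMPY.
import numpy as np
import stumpy

T = np.cumsum(np.random.default_rng(0).normal(size=1000))  # toy random-walk series
m = 50                                                     # subsequence length

mp = stumpy.stump(T, m)
# Column 0 is the matrix profile: each entry is the z-normalized distance
# from a subsequence to its nearest neighbor elsewhere in the series.
motif_idx = int(np.argmin(mp[:, 0]))    # most repeated pattern starts here
discord_idx = int(np.argmax(mp[:, 0]))  # most anomalous pattern starts here
print(motif_idx, discord_idx)
```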
|
Jan 17, 2022 |
The Great Australian Prediction Project
25:19
https://traffic.libsyn.com/secure/dataskeptic/the-great-australian-prediction-project.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/f/a/a/5/faa548886759c5f5e55e3c100dce7605/Richard_Saunders.jpg
44928312676
3253aaa9-9ab2-4993-8685-4daddc85f73d
Data scientists and psychics have at least one major thing in common. Both professions attempt to predict the future. In the case of a data scientist, this is done using algorithms, data, and often comes with some measure of quality such as a confidence interval or estimated accuracy. In contrast, psychics rely on their intuition or an appeal to the supernatural as the source for their predictions. Still, in the interest of empirical evidence, the quality of predictions made by psychics can be put to the test. The Great Australian Psychic Prediction Project seeks to do exactly that. It's the longest known project tracking annual predictions made by psychics, and the accuracy of those predictions in hindsight. Richard Saunders, host of The Skeptic Zone Podcast, joins us to share the results of this decades-long study. Read the full report: https://www.skeptics.com.au/2021/12/09/psychic-project-full-results-released/ And follow The Skeptic Zone: https://www.skepticzone.tv/
|
Jan 14, 2022 |
Water Demand Forecasting
26:00
https://traffic.libsyn.com/secure/dataskeptic/water-demand-forecasting.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/f/9/8/6/f9869e8f469ea06ba04421dee9605cbd/Georgia_Papacharalampous.jpg
44520898769
536e04c8-9d44-4410-bf46-c21c1b638df0
Georgia Papacharalampous, Researcher at the National Technical University of Athens, joins us today to talk about her work “Probabilistic water demand forecasting using quantile regression algorithms.” Visit Springboard and use promo code DATASKEPTIC to receive a $750 discount 
|
Jan 10, 2022 |
Open Telemetry
36:18
https://traffic.libsyn.com/secure/dataskeptic/open-telemetry.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/5/9/e/1/59e113334d8b43a55f2e77a3093c12a1/John_Watson.jpg
43685891212
0afca517-c107-435c-b6e2-47dd203b4417
John Watson, Principal Software Engineer at Splunk, joins us today to talk about Splunk and OpenTelemetry.
|
Jan 03, 2022 |
Fashion Predictions
34:42
https://traffic.libsyn.com/secure/dataskeptic/fashion-predictions.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/7/5/5/e/755e160438427257a04421dee9605cbd/Yusan_Lin.jpg
42949533093
154cb9c1-a602-40f8-bfce-7cd3ace54bbc
Yusan Lin, a Research Scientist at Visa Research, comes on today to talk about her work "Predicting Next-Season Designs on High Fashion Runway."
|
Dec 27, 2021 |
Time Series Mini Episodes
36:53
https://traffic.libsyn.com/secure/dataskeptic/time-series-mini-episodes.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
42722924385
3bb355f0-e829-4cbc-aaf1-106e69f999f2
Time series topics on Data Skeptic predate our current season. This holiday special collects three popular mini-episodes from the archive that discuss time series topics with a few new comments from Kyle.
|
Dec 25, 2021 |
Forecasting Motor Vehicle Collision
39:12
https://traffic.libsyn.com/secure/dataskeptic/forecasting-motor-vehicle-collision-rates.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/7/0/3/e/703e5d44ce758848d959afa2a1bf1c87/Darren_Shannon.jpg
42222021953
94acd2f8-43f2-4711-8547-5e7b292326d3
Dr. Darren Shannon, a Lecturer in Quantitative Finance in the Department of Accounting and Finance, University of Limerick, joins us today to talk about his work "Extending the Heston Model to Forecast Motor Vehicle Collision Rates."
|
Dec 20, 2021 |
Deep Learning for Road Traffic Forecasting
31:57
https://traffic.libsyn.com/secure/dataskeptic/deep-learning-for-road-traffic-forecasting.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/2/f/0/f/2f0f22b443594c53d959afa2a1bf1c87/Eric_L._Manibardo.jpg
41509456376
41f50ce2-bc02-41b5-adf4-196ff081ed4d
Eric Manibardo, PhD Student at the University of the Basque Country in Spain, comes on today to share his work, "Deep Learning for Road Traffic Forecasting: Does it Make a Difference?"
|
Dec 13, 2021 |
Bike Share Demand Forecasting
40:41
https://traffic.libsyn.com/secure/dataskeptic/bike-share-demand-forecasting.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/f/e/c/b/fecbfa1f0aa5766ce5bbc093207a2619/_Daniele_Gammelli.jpg
40764503821
7a76204c-15a2-488f-8a0a-35cafc95c8f7
Daniele Gammelli, PhD Student in Machine Learning at Technical University of Denmark and visiting PhD Student at Stanford University, joins us today to talk about his work "Predictive and Prescriptive Performance of Bike-Sharing Demand Forecasts for Inventory Management."
|
Dec 06, 2021 |
Forecasting in Supply Chain
36:05
https://traffic.libsyn.com/secure/dataskeptic/forecasting-in-supply-chain.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/2/b/a/02bada42f2f1ef0f16c3140a3186d450/Mahdi_Abolghasemi_.jpg
40014060412
6a571c83-26db-41c6-affd-9aa31ddd88ad
Mahdi Abolghasemi, Lecturer at Monash University, joins us today to talk about his work "Demand forecasting in supply chain: The impact of demand volatility in the presence of promotion."
|
Nov 29, 2021 |
Black Friday
44:55
https://traffic.libsyn.com/secure/dataskeptic/black-friday.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
39723570213
3f5aba4c-bb70-4c33-9ecb-cc72d3fa43db
The retail holiday “Black Friday” occurs the day after Thanksgiving in the United States. It’s dubbed this because many retail companies spend the first ten months of the year running at a loss (in the red) before finally earning as much as 80% of their revenue in the last two months of the year. This episode features four interviews with guests bringing unique data-driven perspectives on the topic of analyzing this seeming outlier in a time series dataset.
|
Nov 26, 2021 |
Aligning Time Series on Incomparable Spaces
33:45
https://traffic.libsyn.com/secure/dataskeptic/aligning-time-series-on-incomparable-spaces.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/5/5/5/0/55505da040b4a76f16c3140a3186d450/Alex_Terenin.jpg
39284413550
79a5ec5c-a8cf-4945-80ae-05e8ca4e7b56
Alex Terenin, Postdoctoral Research Associate at the University of Cambridge, joins us today to talk about his work "Aligning Time Series on Incomparable Spaces."
|
Nov 22, 2021 |
Comparing Time Series with HCTSA
42:51
https://traffic.libsyn.com/secure/dataskeptic/comparing-time-series-with-hctsa.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/7/b/6/2/7b62338df938c3baa04421dee9605cbd/Ben_Fulcher__.jpg
38563500520
46625701-2f42-4cad-bda9-d8b381f28e72
Today we are joined again by Ben Fulcher, leader of the Dynamics and Neural Systems Group at the University of Sydney in Australia, to talk about hctsa, a software package for running highly comparative time-series analysis.
|
Nov 15, 2021 |
Change Point Detection Algorithms
30:49
https://traffic.libsyn.com/secure/dataskeptic/change-point-detection-algorithms.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/8/2/7/9/827978e1935d1f2e88c4a68c3ddbc4f2/Gerrit_van_den_Burg.jpg
37816409254
0f996b11-445d-4fe9-bfb8-18e6e6da172b
Gerrit van den Burg, Postdoctoral Researcher at The Alan Turing Institute, joins us today to discuss his work "An Evaluation of Change Point Detection Algorithms."
|
Nov 08, 2021 |
Time Series for Good
37:42
https://traffic.libsyn.com/secure/dataskeptic/time-series-for-good.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/8/1/3/f/813f6e6094c904aed959afa2a1bf1c87/Bahman_Rostami-Tabar.jpg
36969164967
8b6d0370-6581-4b1a-8935-a0b3ff66511d
Bahman Rostami-Tabar, Senior Lecturer in Management Science at Cardiff University, joins us today to talk about his work "Forecasting and its Beneficiaries."
|
Nov 01, 2021 |
Long Term Time Series Forecasting
37:45
https://traffic.libsyn.com/secure/dataskeptic/long-term-time-series-forecasting.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/6/c/8/a/6c8ae6edddcb38ba88c4a68c3ddbc4f2/alex_mallen__Henning_Lange.jpg
36138598157
f4be7a67-9c5c-43d4-9d8a-bac0ff53d4f5
Alex Mallen, Computer Science student at the University of Washington, and Henning Lange, a Postdoctoral Scholar in Applied Math at the University of Washington, join us today to share their work "Deep Probabilistic Koopman: Long-term Time-Series Forecasting Under Periodic Uncertainties."
|
Oct 25, 2021 |
Fast and Frugal Time Series Forecasting
37:30
https://traffic.libsyn.com/secure/dataskeptic/fast-and-frugal-time-series-forecasting.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/2/a/d/2/2ad2e973dd5fe6a4bafc7308ab683e82/Fotios_Petropoulos.jpg
35322082763
02e0a78c-1056-4bc2-bb6f-2f0461b70236
Fotios Petropoulos, Professor of Management Science at the University of Bath in The U.K., joins us today to talk about his work "Fast and Frugal Time Series Forecasting."
|
Oct 17, 2021 |
Causal Inference in Educational Systems
41:28
https://traffic.libsyn.com/secure/dataskeptic/causal-inference-in-educational-systems.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/7/7/c/5/77c5dad8cd086772d959afa2a1bf1c87/MANIE_TADAYON.jpg
34517478800
4858f0b8-7955-45e6-bf3c-a2009d97ca16
Manie Tadayon, a PhD graduate from the ECE department at University of California, Los Angeles, joins us today to talk about his work “Comparative Analysis of the Hidden Markov Model and LSTM: A Simulative Approach.”
|
Oct 11, 2021 |
Boosted Embeddings for Time Series
28:59
https://traffic.libsyn.com/secure/dataskeptic/boosted-embeddings-for-time-series.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/e/1/b/f/e1bfbcd73db8c03be5bbc093207a2619/Sankeerth_Rao_Karingula.jpg
33758969718
9b5cf0e3-39f7-4fba-bc8b-2babe74bc651
Sankeerth Rao Karingula, ML Researcher at Palo Alto Networks, joins us today to talk about his work “Boosted Embeddings for Time Series Forecasting.” Works Mentioned: Boosted Embeddings for Time Series Forecasting by Sankeerth Rao Karingula, Nandini Ramanan, Rasool Tahmasbi, Mehrnaz Amjadi, Deokwoo Jung, Ricky Si, Charanraj Thimmisetty, Luisa Polania Cabrera, Marjorie Sayer, and Claudionor Nunes Coelho Jr. https://www.linkedin.com/in/sankeerthrao/ https://twitter.com/sankeerthrao3 https://lod2021.icas.cc/
|
Oct 04, 2021 |
Change Point Detection in Continuous Integration Systems
33:37
https://traffic.libsyn.com/secure/dataskeptic/change-point-detection-in-continuous-integration-systems.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/6/8/b/f/68bf4c094045424de55e3c100dce7605/david_daly.jpg
32936772741
d624e548-1210-402f-9f1d-0edecfb401b9
|
Sep 27, 2021 |
Applying k-Nearest Neighbors to Time Series
24:09
https://traffic.libsyn.com/secure/dataskeptic/applying-k-nearest-neighbors-to-time-series.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/d/a/e/e/daeee8eeea8c9e29bafc7308ab683e82/Samya_Tajmouati.jpg
32226265585
ffbda165-35bd-43d8-a716-264d45596e23
Samya Tajmouati, a PhD student in Data Science at the University of Science of Kenitra, Morocco, joins us today to discuss her work Applying K-Nearest Neighbors to Time Series Forecasting: Two New Approaches.
|
Sep 20, 2021 |
Ultra Long Time Series
28:13
https://traffic.libsyn.com/secure/dataskeptic/ultra-long-time-series.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/d/2/9/4/d29493f7b26be1f916c3140a3186d450/Feng_Li.jpg
31576242074
a56cb2e4-764d-4d00-a09b-43e1d71e46b7
Dr. Feng Li, (@f3ngli) is an Associate Professor of Statistics in the School of Statistics and Mathematics at Central University of Finance and Economics in Beijing, China. He joins us today to discuss his work Distributed ARIMA Models for Ultra-long Time Series.
|
Sep 13, 2021 |
MiniRocket
25:31
https://traffic.libsyn.com/secure/dataskeptic/minirocket.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/c/4/0/0/c40080e8b200596dbafc7308ab683e82/angus_dempster.jpg
30817741975
c5b56597-2302-4ec4-bf7c-26977f6adb6e
Angus Dempster, PhD Student at Monash University in Australia, comes on today to talk about MINIROCKET: A Very Fast (Almost) Deterministic Transform for Time Series Classification, a fast deterministic transform for time series classification. MINIROCKET reformulates ROCKET, gaining a 75x improvement on larger datasets with essentially the same performance. In this episode, we talk about the insights that realized this speedup as well as use cases.
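As a rough usage sketch (ours, not from the episode; MiniRocket ships in several libraries, and API details may vary by sktime version), the transform is typically paired with a simple linear classifier on the generated features:

```python
# A minimal sketch of MiniRocket features + a ridge classifier, using sktime.
import numpy as np
from sklearn.linear_model import RidgeClassifierCV
from sktime.transformations.panel.rocket import MiniRocket

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 1, 100))  # 60 univariate series of length 100 (toy data)
y = np.repeat([0, 1], 30)

trf = MiniRocket()            # thousands of fixed, (almost) deterministic kernels
Xf = trf.fit_transform(X)     # fast feature extraction
clf = RidgeClassifierCV(alphas=np.logspace(-3, 3, 10)).fit(Xf, y)
```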
|
Sep 06, 2021 |
ARIMA is not Sufficient
22:35
https://traffic.libsyn.com/secure/dataskeptic/arima-is-not-sufficient.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/d/2/9/e/d29e60e5501ef2b4bafc7308ab683e82/Chongshou_Li_copy.jpg
30067505315
dcf86747-5f14-4acd-aea2-d117c37e34b3
Chongshou Li, Associate Professor at Southwest Jiaotong University in China, joins us today to talk about his work Why are the ARIMA and SARIMA not Sufficient.
|
Aug 30, 2021 |
Comp Engine
36:04
https://traffic.libsyn.com/secure/dataskeptic/comp-engine.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/7/f/8/e/7f8ed59d28f5b793d959afa2a1bf1c87/Ben_Fulcher__.jpg
29364944766
9153fc96-80c3-4088-9a4a-b870a0fb218e
Ben Fulcher, Senior Lecturer at the School of Physics at the University of Sydney in Australia, comes on today to talk about his project Comp Engine. Follow Ben on Twitter: @bendfulcher. For posts about time series analysis: @comptimeseries. comp-engine.org
|
Aug 23, 2021 |
Detecting Ransomware
31:23
https://traffic.libsyn.com/secure/dataskeptic/detecting-ransomware.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/f/2/d/5/f2d5df13b2db18d516c3140a3186d450/Nitin_Pundir.jpg
28634771003
ccb998a8-79b3-4cc6-8d10-ebd51593eab4
|
Aug 16, 2021 |
GANs in Finance
23:08
https://traffic.libsyn.com/secure/dataskeptic/gans-in-finance.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/b/8/c/f/b8cfa279921c48ff27a2322813b393ee/florian_eckereli.jpg
27977050102
f2f6023a-74a5-4100-b989-65e225441542
Florian Eckerli, a recent graduate of Zurich University of Applied Sciences, comes on the show today to discuss his work Generative Adversarial Networks in Finance: An Overview.
|
Aug 09, 2021 |
Predicting Urban Land Use
27:06
https://traffic.libsyn.com/secure/dataskeptic/predicting-urban-land-use.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/f/f/a/1/ffa1cab5171dd656bafc7308ab683e82/Daniel_Omeiza_.jpg
27384407737
9ba1d23d-9390-44e0-a466-2e63c99d706c
Today on the show we have Daniel Omeiza, a doctoral student in the computer science department of the University of Oxford, who joins us to talk about his work Efficient Machine Learning for Large-Scale Urban Land-Use Forecasting in Sub-Saharan Africa.
|
Aug 02, 2021 |
Opportunities for Skillful Weather Prediction
34:13
https://traffic.libsyn.com/secure/dataskeptic/opportunities-for-skillful-weather-prediction.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/8/c/3/9/8c39696a126b49f0e5bbc093207a2619/elizabeth_libby_barnes.jpg
26792496323
d99e53b2-9028-4683-a0c1-a7008705402b
Today on the show we have Elizabeth Barnes, Associate Professor in the department of Atmospheric Science at Colorado State University, who joins us to talk about her work Identifying Opportunities for Skillful Weather Prediction with Interpretable Neural Networks. Find more from the Barnes Research Group on their site. Weather is notoriously difficult to predict. Complex systems are demanding of computational power. Further, the chaotic nature of, well, nature, makes accurate forecasting especially difficult the longer into the future one wants to look. Yet all is not lost! In this interview, we explore the use of machine learning to help identify certain conditions under which the weather system has entered an unusually predictable position in its normally chaotic state space.
|
Jul 26, 2021 |
Predicting Stock Prices
34:17
https://traffic.libsyn.com/secure/dataskeptic/predicting-stock-prices.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/b/8/4/2/b8420684cf8ba3daa04421dee9605cbd/Andrea_Fronzetti_Colladon_.jpg
26259376100
7170ceb7-9ad0-4745-a57b-f6b0c3a042be
Today on the show we have Andrea Fronzetti Colladon (@iandreafc), currently working at the University of Perugia and inventor of the Semantic Brand Score, who joins us to talk about his work studying human communication and social interaction. We discuss the paper Look Inside. Predicting Stock Prices by Analyzing an Enterprise Intranet Social Network and Using Word Co-Occurrence Networks.
|
Jul 19, 2021 |
N-Beats
34:15
https://traffic.libsyn.com/secure/dataskeptic/nbeats.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/7/a/3/c/7a3ce0474ce31d5a88c4a68c3ddbc4f2/boris_oreshkin.jpg
26230601949
1ea17766-4fc0-475a-b2f1-5e5ff2096aab
Today on the show we have Boris Oreshkin (@boreshkin), a Senior Research Scientist at Unity Technologies, who joins us to talk about his work N-BEATS: Neural Basis Expansion Analysis for Interpretable Time Series Forecasting. Works Mentioned: N-BEATS: Neural Basis Expansion Analysis for Interpretable Time Series Forecasting by Boris N. Oreshkin, Dmitri Carpov, Nicolas Chapados, and Yoshua Bengio. https://arxiv.org/abs/1905.10437
|
Jul 12, 2021 |
Translation Automation
36:07
https://traffic.libsyn.com/secure/dataskeptic/translation-automation.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/f/9/9/a/f99a6acff1ff314388c4a68c3ddbc4f2/carl_stimson.jpg
25523102549
5f1d5951-e943-4d37-9552-16d027c8b825
Today we are back with another episode discussing AI in the workplace. AI has facilitated, is facilitating, and will continue to facilitate the automation of work done by humans. Sometimes this may be an entire role. Other times it may automate a particular part of a role, scaling a worker's effectiveness. Carl Stimson, a freelance Japanese-to-English translator, comes on the show to talk about his work in translation and his perspective on how AI will change translation in the future.
|
Jul 06, 2021 |
Time Series at the Beach
23:01
https://traffic.libsyn.com/secure/dataskeptic/time-series-at-the-beach.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/e/f/3/d/ef3db8a167818f34d959afa2a1bf1c87/shane_ross_TS.jpg
24932976013
ac8deaef-10de-4ac5-966c-3b6d0b19ae39
Shane Ross, Professor of Aerospace and Ocean Engineering at Virginia Tech University, comes on today to talk about his work “Beach-level 24-hour forecasts of Florida red tide-induced respiratory irritation.”
|
Jun 28, 2021 |
Automatic Identification of Outlier Galaxy Images
36:19
https://traffic.libsyn.com/secure/dataskeptic/automatic-identification-of-outlier-galaxy-images.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/2/b/4/f/2b4f1fd78f595dfda04421dee9605cbd/Lior_Shamir.jpg
24374416399
8d1dde9f-b60a-4d1a-8e16-e8e06dc0b720
Lior Shamir, Associate Professor of Computer Science at Kansas State University, joins us today to talk about the recent paper Automatic Identification of Outliers in Hubble Space Telescope Galaxy Images. Follow Lior on Twitter @shamir_lior
|
Jun 21, 2021 |
Do We Need Deep Learning in Time Series
29:19
https://traffic.libsyn.com/secure/dataskeptic/do-we-need-deep-learning-in-time-series.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/7/6/a/0/76a01ead71ef8631e5bbc093207a2619/Shereen_Elsayed___Daniela_Thyssens_Daniela_Thyssens__.jpg
23838797267
de6588fe-bc89-4f87-8a33-6212c6673d56
Shereen Elsayed and Daniela Thyssens, both PhD students at Hildesheim University in Germany, come on today to talk about the work “Do We Really Need Deep Learning Models for Time Series Forecasting?”
|
Jun 16, 2021 |
Detecting Drift
27:19
https://traffic.libsyn.com/secure/dataskeptic/detecting-drift.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/1/5/9/e/159e6a5a0494fbac27a2322813b393ee/samuel_ackerman_copy.jpg
23263906638
0677c67b-f5ee-4056-b8e0-4189924b6c0e
Sam Ackerman, Research Data Scientist at IBM Research Labs in Haifa, Israel, joins us today to talk about his work Detection of Data Drift and Outliers Affecting Machine Learning Model Performance Over Time.
|
Jun 11, 2021 |
Darts Library for Time Series
25:12
https://traffic.libsyn.com/secure/dataskeptic/darts-library-for-time-series.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/6/e/7/4/6e744e3125447e3988c4a68c3ddbc4f2/Julien_Herzen.jpg
22718601128
48b7c0f6-1d20-4b48-a606-190d314060f1
Julien Herzen, PhD graduate from EPFL in Switzerland, comes on today to talk about his work with Unit 8 and the development of the Python Library: Darts.
|
May 31, 2021 |
Forecasting Principles and Practice
31:40
https://traffic.libsyn.com/secure/dataskeptic/forecasting-principles-and-practice.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/5/4/1/e/541ec0dea033239e5f2e77a3093c12a1/rob_hyndman.jpg
22186567385
deaf03c4-00ee-4e57-a6a5-b5abb446f4fe
Welcome to Time Series! Today’s episode is an interview with Rob Hyndman, Professor of Statistics at Monash University in Australia, and author of Forecasting: Principles and Practice.
|
May 24, 2021 |
Prerequisites for Time Series
08:41
https://traffic.libsyn.com/secure/dataskeptic/prequisites-for-time-series.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
22001487378
b117c763-6d6d-4c21-a913-7b4bae951578
Today's experimental episode uses sound to describe some basic ideas from time series. This episode includes lag, seasonality, trend, noise, heteroskedasticity, decomposition, smoothing, feature engineering, and deep learning.
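For listeners who prefer code to sound, here is a minimal sketch (our own example, not from the episode) that builds a series from trend, seasonality, and noise, then recovers the components with a standard decomposition:

```python
# A minimal sketch of time series decomposition: trend + seasonality + noise.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

t = np.arange(365)
series = pd.Series(
    0.05 * t                                          # trend
    + 2.0 * np.sin(2 * np.pi * t / 7)                 # weekly seasonality
    + np.random.default_rng(0).normal(0, 0.5, 365),   # noise
    index=pd.date_range("2021-01-01", periods=365),
)

parts = seasonal_decompose(series, model="additive", period=7)
print(parts.trend.dropna().head())
print(parts.seasonal.head())
```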
|
May 21, 2021 |
Orders of Magnitude
33:13
https://traffic.libsyn.com/secure/dataskeptic/oom.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/e/1/9/0/e1903ba323465c3d16c3140a3186d450/kyle_polich_copy.jpg
21020927082
d412b683-f6e7-4a54-86c2-903ff826aacc
Today’s show in two parts. First, Linhda joins us to review the episodes from Data Skeptic: Pilot Season and give her feedback on each of the topics. Second, we introduce our new segment “Orders of Magnitude”. It’s a statistical game show in which participants must identify the true statistic hidden in a list of statistics which are off by at least an order of magnitude. Claudia and Vanessa join as our first contestants. Below are the sources of our questions: Heights, Bird Statistics, and Amounts of Data. Our statistics come from this post
|
May 07, 2021 |
They're Coming for Our Jobs
43:42
https://traffic.libsyn.com/secure/dataskeptic/theyre-coming-for-our-jobs.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/4/0/7/f/407ff35a6de860ed40be95ea3302a6a1/CELESTIA_WARD.jpg
20727289660
97cad4cd-7bc9-494f-a896-3f1f1853266f
AI has facilitated, is facilitating, and will continue to facilitate the automation of work done by humans. Sometimes this may be an entire role. Other times it may automate a particular part of a role, scaling a worker's effectiveness. Unless progress in AI inexplicably halts, the tasks done by humans vs. machines will continue to evolve. Today’s episode is a speculative conversation about what the future may hold. Co-host of the Squaring the Strange podcast, caricature artist, and academic editor Celestia Ward joins us today! Kyle and Celestia discuss whether or not her jobs as a caricature artist or as an academic editor are under threat from AI automation.
|
May 03, 2021 |
Pandemic Machine Learning Pitfalls
40:17
https://traffic.libsyn.com/secure/dataskeptic/pandemic-machine-learning-pitfalls.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/d/c/5/3/dc5322e12db84c1be55e3c100dce7605/derek_driggs.jpg
20193014390
6a14158d-9792-4e82-80b3-ce930f3481dd
Today on the show we have Derek Driggs, a PhD student at the University of Cambridge, who comes on to discuss the work Common Pitfalls and Recommendations for Using Machine Learning to Detect and Prognosticate for COVID-19 Using Chest Radiographs and CT Scans. Help us vote for the next theme of Data Skeptic! Vote here: https://dataskeptic.com/vote
|
Apr 26, 2021 |
Flesch Kincaid Readability Tests
20:25
https://traffic.libsyn.com/secure/dataskeptic/flesch-kincaid-readability-tests.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
19674504899
c55894f9-78bb-4cdc-845b-813a8fc3d5d4
Given a document in English, how can you estimate the ease with which someone will find they can read it? Does it require a college level of reading comprehension, or is it something a much younger student could read and understand? While these questions are useful to ask, they don't admit a simple answer. One option is to use one of the two (essentially identical) Flesch-Kincaid readability tests. These are simple calculations which provide you with a rough estimate of the reading ease. In this episode, Kyle shares his thoughts on this tool and when it could be appropriate to use as part of your feature engineering pipeline towards a machine learning objective. For empirical validation of these metrics, Kyle compares English-language Wikipedia pages with "Simple English" Wikipedia pages; the analysis he describes yields an intuitively pleasing histogram summarizing the distribution of Flesch reading ease scores for 1000 pages examined from both Wikipedias.
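A minimal sketch of the calculation (ours, not from the episode; the syllable counter is a crude vowel-group heuristic rather than a dictionary lookup):

```python
# Flesch reading ease: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)
import re

def count_syllables(word: str) -> int:
    # Approximate syllables as runs of vowels; discount a common silent 'e'.
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def flesch_reading_ease(text: str) -> float:
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

print(flesch_reading_ease("The cat sat on the mat. It was happy."))  # high = easy
```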
|
Apr 19, 2021 |
Fairness Aware Outlier Detection
39:33
https://traffic.libsyn.com/secure/dataskeptic/fairness-aware-outlier-detection.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/5/2/b/4/52b4902c7fe8e66f16c3140a3186d450/Shubhranshu_Shekhar.jpg
18882326098
aab7ac26-1e1c-45b5-84d7-a3366a996539
Today on the show we have Shubhranshu Shekhar, a PhD student at Carnegie Mellon University, who joins us to talk about his work, FAIROD: Fairness-aware Outlier Detection.
|
Apr 09, 2021 |
Life May be Rare
43:13
https://traffic.libsyn.com/secure/dataskeptic/life-may-be-rare.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/d/a/7/6/da769eff04b549db27a2322813b393ee/ANDERS_SANDBERG.jpg
18531758162
62fb4485-caa1-4f8c-9240-aae43c3a05ef
|
Apr 05, 2021 |
Social Networks
49:51
https://traffic.libsyn.com/secure/dataskeptic/social-networks.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/7/7/4/a/774accdc3355cdf15f2e77a3093c12a1/mayank_kejirwal.jpg
17931847141
803ceba1-3442-4fb7-9803-2a1546d6b6f0
Mayank Kejriwal, Research Professor at the University of Southern California and Researcher at the Information Sciences Institute, joins us today to discuss his work and his new book Knowledge Graphs: Fundamentals, Techniques, and Applications by Mayank Kejriwal, Craig A. Knoblock, and Pedro Szekely.
|
Mar 29, 2021 |
The QAnon Conspiracy
43:54
https://traffic.libsyn.com/secure/dataskeptic/the-qanon-conspiracy.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
17328025632
f4ccae62-6426-4bfc-a5b9-9518c6775b79
QAnon is a conspiracy theory born in the underbelly of the internet. While easy to disprove, these cryptic ideas captured the minds of many people and (in part) paved the way to the 2021 storming of the US Capitol. This is a contemporary conspiracy which came into existence and grew in a very digital way. This makes it possible for researchers to study this phenomenon in a way not accessible in previous conspiracy theories of similar popularity. This episode is not so much a debunking of this debunked theory, but rather an exploration of the metadata and origins of this conspiracy. This episode is also the first in our 2021 Pilot Season, in which we are going to test out a few formats for Data Skeptic to see what our next season should be. In a few weeks, we're going to ask everyone to vote for their favorite theme for our next season.
|
Mar 22, 2021 |
Benchmarking Vision on Edge vs Cloud
47:53
https://traffic.libsyn.com/secure/dataskeptic/benchmarking-vision-on-edge-vs-cloud.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/f/4/9/8/f498c72e991ae52716c3140a3186d450/karthick_shankar.jpg
16807515966
03bef7c4-6f56-4dfd-ad3c-db98b6401764
|
Mar 15, 2021 |
Goodhart's Law in Reinforcement Learning
37:11
https://traffic.libsyn.com/secure/dataskeptic/goodharts-law-in-reinforcement-learning.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/f/b/b/4/fbb439ad79dfb105/hal-ashton.jpg
16349344322
e5a8bd9a-de77-4441-a1a5-f98b8785ffa0
Hal Ashton, a PhD student at University College London, joins us today to discuss a recent work Causal Campbell-Goodhart’s Law and Reinforcement Learning. "Only buy honey from a local producer." - Hal Ashton Works Mentioned: “Causal Campbell-Goodhart’s Law and Reinforcement Learning” by Hal Ashton; “The Book of Why” by Judea Pearl
Thanks to our sponsor! When your business is ready to make that next hire, find the right person with LinkedIn Jobs. Just visit LinkedIn.com/DATASKEPTIC to post a job for free! Terms and conditions apply
|
Mar 05, 2021 |
Video Anomaly Detection
24:06
https://traffic.libsyn.com/secure/dataskeptic/video-anomaly-detection.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/3/4/8/03483be7f0753bb1/yuqi_ouyang.jpg
16143652141
538ea61f-9e29-4202-96bc-e84d000f761a
Yuqi Ouyang, in his second year of PhD study at the University of Warwick in England, joins us today to discuss his work “Video Anomaly Detection by Estimating Likelihood of Representations.” Works Mentioned: “Video Anomaly Detection by Estimating Likelihood of Representations” by Yuqi Ouyang and Victor Sanchez: https://arxiv.org/abs/2012.01468
|
Mar 01, 2021 |
Fault Tolerant Distributed Gradient Descent
36:06
https://traffic.libsyn.com/secure/dataskeptic/fault-tolerant-distributed-gradient-descent.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/2/4/6/7/2467ab54a576f0b1/nirupam_gupta.jpg
15754305019
7a150b5c-7cb2-4a63-bde7-4872cec2afbe
|
Feb 22, 2021 |
Decentralized Information Gathering
32:57
https://traffic.libsyn.com/secure/dataskeptic/decentralized-information-gathering.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/1/d/1/0/1d10302c919c0931/mikko.jpg
15212942687
dcb9dfb0-92af-4a2e-9447-2c1433e798ab
|
Feb 15, 2021 |
Leaderless Consensus
27:25
https://traffic.libsyn.com/secure/dataskeptic/leaderless-consensus.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/2/f/f/e/2ffe9a050ab9a048/balaji_arun.jpg
14457559574
2fd35973-3742-4722-b241-c2dfa8d64475
|
Feb 05, 2021 |
Automatic Summarization
27:57
https://traffic.libsyn.com/secure/dataskeptic/automatic-summarization.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/5/d/5/5/5d55e304702b4764/maartje_ter_Hoeve.jpg
13933313651
c997b378-b488-4985-a077-e7d08c6865e2
|
Jan 29, 2021 |
Gerrymandering
34:09
https://traffic.libsyn.com/secure/dataskeptic/gerrymandering.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/7/1/9/b/719b8985771a71e2/Brian_Brubach.jpg
13401691121
04b632af-1937-4743-8e81-7bbe33f4b148
|
Jan 22, 2021 |
Even Cooperative Chess is Hard
23:09
https://traffic.libsyn.com/secure/dataskeptic/even-cooperative-chess-is-hard.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/8/a/a/b/8aab88e5644b9087/josh_brunner.jpg
12872408991
284c0a71-64ed-4851-afbf-6b6578ee4fdc
Aside from victory questions like “can black force a checkmate on white in 5 moves?” many novel questions can be asked about a game of chess. Some questions are trivial (e.g. “How many pieces does white have?") while more computationally challenging questions can contribute interesting results in computational complexity theory. In this episode, Josh Brunner, Master's student in Theoretical Computer Science at MIT, joins us to discuss his recent paper Complexity of Retrograde and Helpmate Chess Problems: Even Cooperative Chess is Hard. Works Mentioned Complexity of Retrograde and Helpmate Chess Problems: Even Cooperative Chess is Hard by Josh Brunner, Erik D. Demaine, Dylan Hendrickson, and Julian Wellman 1x1 Rush Hour With Fixed Blocks is PSPACE Complete by Josh Brunner, Lily Chung, Erik D. Demaine, Dylan Hendrickson, Adam Hesterberg, Adam Suhl, Avi Zeff
|
Jan 15, 2021 |
Consecutive Votes in Paxos
30:11
https://traffic.libsyn.com/secure/dataskeptic/consecutive-votes-in-paxos.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/e/8/f/b/e8fb9e4672397ed2/eli_goldweber.jpg
12564655453
12ae85f7-4659-4fe6-9769-32cd4d084b91
|
Jan 11, 2021 |
Visual Illusions Deceiving Neural Networks
33:43
https://traffic.libsyn.com/secure/dataskeptic/visual-illusions-deceiving-neural-networks.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11825956686
87ef4963-4fed-4aa2-b2af-57dec6fdc138
|
Jan 01, 2021 |
Earthquake Detection with Crowd-sourced Data
29:27
https://traffic.libsyn.com/secure/dataskeptic/earthquake-detection-with-crowd-sourced-data.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11313362831
45df8a23-7c03-4c99-9ca2-15241aa7a48e
|
Dec 25, 2020 |
Byzantine Fault Tolerant Consensus
35:33
https://traffic.libsyn.com/secure/dataskeptic/byzantine-fault-tolerant-consensus.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/3/4/6/f/346fc6686942db82/Ted_yin.jpg
11100994854
ff224fc3-f172-4d8e-8e69-b725364278e8
Byzantine fault tolerance (BFT) is a desirable property in a distributed computing environment. BFT means the system can survive the loss of nodes and nodes becoming unreliable. There are many different protocols for achieving BFT, though not all options can scale to large network sizes. Ted Yin joins us to explain BFT, survey the wide variety of protocols, and share details about HotStuff.
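A small sketch of the classical sizing rule that underlies many of these protocols, including HotStuff (a standard result we add here for illustration, not a detail from the episode):

```python
# Classical BFT sizing: tolerating f Byzantine nodes needs n >= 3f + 1 replicas.
def bft_sizes(f):
    n = 3 * f + 1        # minimum replicas to tolerate f faulty/Byzantine nodes
    quorum = 2 * f + 1   # any two quorums this large overlap in an honest node
    return n, quorum

for f in (1, 2, 3):
    n, q = bft_sizes(f)
    print(f"f={f}: n={n}, quorum={q}")
```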
|
Dec 22, 2020 |
Alpha Fold
23:14
https://traffic.libsyn.com/secure/dataskeptic/alpha-fold.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/d/7/e/1/d7e15f0bbda0a880/Kyle_Polich.jpg
10368980610
afb2af57-0a9e-42d8-a0c3-693c0f9667c4
Kyle shared some initial reactions to the announcement about AlphaFold 2's celebrated performance in the CASP14 protein structure prediction competition. By many accounts, this exciting result means protein folding is now a solved problem.
Thanks to our sponsors! - Brilliant is a great last-minute gift idea! Give access to 60+ interactive courses including Quantum Computing and Group Theory. There's something for everyone at Brilliant. They have award-winning courses, taught by teachers, researchers and professionals from MIT, Caltech, Duke, Microsoft, Google and many more. Check them out at brilliant.org/dataskeptic to take advantage of 20% off a Premium membership.
- Betterhelp is an online professional counseling platform. Start communicating with a licensed professional in under 24 hours! It's safe, private and convenient. From online messages to phone and video calls, there is something for everyone. Get 10% off your first month at betterhelp.com/dataskeptic
|
Dec 11, 2020 |
Arrow's Impossibility Theorem
26:19
https://traffic.libsyn.com/secure/dataskeptic/arrows-impossibility-theorem.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/8/f/c/c/8fcc8dd4941061be/Kyle_Polich.jpg
9905657582
d1a1eaf8-cb11-40d9-b25e-7fb123cfbafd
Above all, everyone wants voting to be fair. What does fair mean and how can we measure it? Kenneth Arrow posited a simple set of conditions that one would certainly desire in a voting system. For example, unanimity - if everyone picks candidate A, then A should win! Yet surprisingly, under a few basic assumptions, this theorem demonstrates that no voting system exists which can satisfy all the criteria. This episode is a discussion about the structure of the proof and some of its implications. Thank you to our sponsors! Better Help is much more affordable than traditional offline counseling, and financial aid is available! Get started in less than 24 hours. Data Skeptic listeners get 10% off your first month when you visit: betterhelp.com/dataskeptic Let Springboard School of Data jumpstart your data career! With 100% online and remote schooling, supported by a vast network of professional mentors with a tuition-back guarantee, you can't go wrong. Up to twenty $500 scholarships will be awarded to Data Skeptic listeners. Check them out at springboard.com/dataskeptic and enroll using code: DATASK
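A small worked example (ours, added for illustration) of the classic Condorcet cycle that motivates the theorem: with just three voters, pairwise majority preference can fail to be transitive, leaving no stable winner.

```python
# Three voters whose rankings produce a majority-preference cycle.
from itertools import combinations

ballots = [("A", "B", "C"), ("B", "C", "A"), ("C", "A", "B")]  # best-to-worst

def prefers(ballot, x, y):
    return ballot.index(x) < ballot.index(y)

for x, y in combinations("ABC", 2):
    wins = sum(prefers(b, x, y) for b in ballots)
    print(f"{x} beats {y}" if wins > len(ballots) / 2 else f"{y} beats {x}")
# Output: A beats B, C beats A, B beats C -- a cycle with no stable winner.
```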
|
Dec 04, 2020 |
Face Mask Sentiment Analysis
41:11
https://traffic.libsyn.com/secure/dataskeptic/face-mask-sentiment-analysis.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
9478120877
bfdc9c5d-567d-4917-8923-dedfb73f4e94
As the COVID-19 pandemic continues, the public (or at least those with Twitter accounts) are sharing their personal opinions about mask-wearing via Twitter. What does this data tell us about public opinion? How does it vary by demographic? What, if anything, can make people change their minds? Today we speak to Neil Yeung and Jonathan Lai, undergraduate students in the Department of Computer Science at the University of Rochester, and Professor of Computer Science Jiebo Luo to discuss their recent paper, Face Off: Polarized Public Opinions on Personal Face Mask Usage during the COVID-19 Pandemic. Works Mentioned https://arxiv.org/abs/2011.00336 Emails: Neil Yeung nyeung@u.rochester.edu Jonathan Lai jlai11@u.rochester.edu Jiebo Luo jluo@cs.rochester.edu Thanks to our sponsors! - Springboard School of Data offers a comprehensive career program encompassing data science, analytics, engineering, and Machine Learning. All courses are online and tailored to fit the lifestyle of working professionals. Up to 20 Data Skeptic listeners will receive $500 scholarships. Apply today at springboard.com/dataskeptic
- Check out Brilliant's group theory course to learn about object-oriented design! Brilliant is great for learning something new or to get an easy-to-look-at review of something you already know. Check them out a Brilliant.org/dataskeptic to get 20% off of a year of Brilliant Premium!
|
Nov 27, 2020 |
Counting Briberies in Elections
37:55
https://traffic.libsyn.com/secure/dataskeptic/counting-briberies-in-elections.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/1/0/7/7/10775763ba224bb9/niclas_boehmer.jpg
9101060607
3e309f80-01ef-44e3-8571-79a498398a1a
Niclas Boehmer, second year PhD student at Berlin Institute of Technology, comes on today to discuss the computational complexity of bribery in elections through the paper “On the Robustness of Winners: Counting Briberies in Elections.” Links Mentioned: https://www.akt.tu-berlin.de/menue/team/boehmer_niclas/ Works Mentioned: “On the Robustness of Winners: Counting Briberies in Elections” by Niclas Boehmer, Robert Bredereck, Piotr Faliszewski, and Rolf Niedermeier. Thanks to our sponsors: Springboard School of Data: Springboard is a comprehensive end-to-end online data career program. Create a portfolio of projects to spring your career into action. Learn more about how you can be one of twenty $500 scholarship recipients at springboard.com/dataskeptic. This opportunity is exclusive to Data Skeptic listeners. (Enroll with code: DATASK) Nord VPN: Protect your home internet connection with unlimited bandwidth. Data Skeptic Listeners-- take advantage of their Black Friday offer: purchase a 2-year plan, get 4 additional months free. nordvpn.com/dataskeptic (Use coupon code DATASKEPTIC)
|
Nov 20, 2020 |
Sybil Attacks on Federated Learning
31:32
https://traffic.libsyn.com/secure/dataskeptic/sybil-attacks-on-federated-learning.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/1/6/0/0/16007d2714e96605/clement_fung.jpg
8836046974
1350ddda-967d-480c-a877-1daa67b76f9a
Clement Fung, a Societal Computing PhD student at Carnegie Mellon University, discusses his research in security of machine learning systems and a defense against targeted sybil-based poisoning called FoolsGold. Works Mentioned: The Limitations of Federated Learning in Sybil Settings Twitter: @clemfung Website: https://clementfung.github.io/ Thanks to our sponsors: Brilliant - Online learning platform. Check out Geometry Fundamentals! Visit Brilliant.org/dataskeptic for 20% off Brilliant Premium! BetterHelp - Convenient, professional, and affordable online counseling. Take 10% off your first month at betterhelp.com/dataskeptic
|
Nov 13, 2020 |
Differential Privacy at the US Census
29:43
https://traffic.libsyn.com/secure/dataskeptic/differential-privacy-at-the-us-census.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/e/0/d/1/e0d1cb293d9bc826/simson_garfinkel.jpg
8621226341
68dfca13-8334-4d7b-9fc7-7d64104e7742
Simson Garfinkel, Senior Computer Scientist for Confidentiality and Data Access at the US Census Bureau, discusses his work modernizing the Census Bureau disclosure avoidance system from private to public disclosure avoidance techniques using differential privacy. Some of the discussion revolves around the topics in the paper Randomness Concerns When Deploying Differential Privacy. WORKS MENTIONED: Check out: https://simson.net/page/Differential_privacy Thank you to our sponsor, BetterHelp. Professional and confidential in-app counseling for everyone. Save 10% on your first month of services with www.betterhelp.com/dataskeptic
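For intuition, here is a minimal sketch of the Laplace mechanism, the textbook building block of differential privacy. It is illustrative only and is not the Census Bureau's actual disclosure avoidance system.

    import numpy as np

    def laplace_count(true_count, epsilon):
        # Adding or removing one person changes a count by at most 1
        # (sensitivity = 1), so Laplace noise with scale 1/epsilon gives
        # epsilon-differential privacy for the classic Laplace mechanism.
        return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

    # Example: a block-level population count of 42, released with epsilon = 0.5
    print(laplace_count(42, epsilon=0.5))

Smaller epsilon means more noise and stronger privacy; the art is choosing epsilon so the released statistics remain useful.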
|
Nov 06, 2020 |
Distributed Consensus
27:44
https://traffic.libsyn.com/secure/dataskeptic/distributed-consensus.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/f/6/9/4/f6943e72a65057a8/heidi_howard.jpg
8423527063
6a01e1fe-a827-4e21-aafe-478cdfadcfa0
|
Oct 30, 2020 |
ACID Compliance
23:47
https://traffic.libsyn.com/secure/dataskeptic/acid-compliance.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/3/e/0/0/3e0084cb689aa8e7/option1.jpg
8225317017
614d4c7a-7636-4f02-bb6e-d8f3ffb866d8
Linhda joins Kyle today to talk through ACID compliance (atomicity, consistency, isolation, and durability), the four properties that help ensure a database's transactions are processed reliably. Kyle uses examples such as Google Sheets, bank transactions, and even the game Rummikub. Thanks to this week's sponsors: - Monday.com - Their Apps Challenge is underway and available at monday.com/dataskeptic
- Brilliant - Check out their Quantum Computing course; I highly recommend it! Other interesting topics I’ve seen are Neural Networks and Logic. Check them out at Brilliant.org/dataskeptic
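As a concrete illustration of the A in ACID, here is a minimal sketch using Python's built-in sqlite3 module. The bank-transfer scenario is hypothetical, not one from the episode: either both legs of the transfer commit, or neither does.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
    conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                     [("alice", 100), ("bob", 50)])
    conn.commit()

    try:
        with conn:  # opens a transaction; commits on success, rolls back on error
            conn.execute("UPDATE accounts SET balance = balance - 70 "
                         "WHERE name = 'alice'")
            # Simulate a crash between the two legs of the transfer.
            raise RuntimeError("power failure")
    except RuntimeError:
        pass

    print(dict(conn.execute("SELECT name, balance FROM accounts")))
    # {'alice': 100, 'bob': 50} -- the partial debit was rolled back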
|
Oct 23, 2020 |
National Popular Vote Interstate Compact
30:36
https://traffic.libsyn.com/secure/dataskeptic/national-popular-vote-interstate-compact.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/3/7/f/6/37f6e78bfa7c9562/patrick_rosenstiel.jpg
8022129114
8770c5ee-e07c-4d05-9e16-e03e17dedb4b
|
Oct 16, 2020 |
Defending the p-value
30:05
https://traffic.libsyn.com/secure/dataskeptic/defending-the-p-value.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/7/c/c/2/7cc2089889593250/yudi_pawitan.jpg
7905792174
b8ca04e9-eee0-4b74-9dbf-5e292d33a4cd
|
Oct 12, 2020 |
Retraction Watch
32:04
https://traffic.libsyn.com/secure/dataskeptic/retraction-watch.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/a/f/3/3/af339fff6bb3ed88/ivan_oransky.jpg
7722390734
24b460dc-ae1f-4f6a-8879-0e4cd730bcb5
|
Oct 05, 2020 |
Crowdsourced Expertise
27:50
https://traffic.libsyn.com/secure/dataskeptic/crowdsourced-expertise.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/6/0/4/3/604377f10868e6fa/derek_lim.jpg
7335830418
0215091f-3878-4b50-b239-2b6b03df1d5f
|
Sep 21, 2020 |
The Spread of Misinformation Online
35:35
https://traffic.libsyn.com/secure/dataskeptic/the-spread-of-misinformation-online.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/f/2/6/c/f26c92d0bca40e90/eil_johnson.jpg
7133325890
3cc2ddc1-f3aa-49c2-8cd3-0fd242550afb
|
Sep 14, 2020 |
Consensus Voting
22:57
https://traffic.libsyn.com/secure/dataskeptic/consensus-voting.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/1/2/c/5/12c56a6ad6264bf0/mash.jpg
6927533285
3c92e066-c85b-4885-9cd0-68721b92e0a7
|
Sep 07, 2020 |
Voting Mechanisms
27:28
https://traffic.libsyn.com/secure/dataskeptic/voting-mechanisms.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/4/0/4/e/404eb6d5f5729a9d/stevenheilman.jpg
6711749055
d37e33c0-2451-42cf-aa84-6b3901a0ef08
|
Aug 31, 2020 |
False Consensus
33:06
https://traffic.libsyn.com/secure/dataskeptic/false-concensus.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/d/4/1/c/d41c155aaaa4fe50/samiyousif2.jpg
6556499415
03352654-1d1a-4e65-902d-a4cee472537b
|
Aug 24, 2020 |
Fraud Detection in Real Time
38:24
https://traffic.libsyn.com/secure/dataskeptic/fraud-detection-in-real-time.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
6439091043
31875b19-223f-46d0-aec9-9466976f64b8
In this solo episode, Kyle overviews the field of fraud detection with eCommerce as a use case. He discusses some of the techniques and system architectures used by companies to fight fraud with a focus on why these things need to be approached from a real-time perspective.
|
Aug 18, 2020 |
Listener Survey Review
23:12
https://traffic.libsyn.com/secure/dataskeptic/listener-survey-review.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/9/3/a/4/93a48bda556a3bfb/kp-lt.jpg
6253965937
623c8657-8029-4627-86fb-b3e09590a739
In this episode, Kyle and Linhda review the results of our recent survey. Hear all about the demographic details and how we interpret these results.
|
Aug 11, 2020 |
Human Computer Interaction and Online Privacy
32:38
https://traffic.libsyn.com/secure/dataskeptic/human-computer-interaction-and-online-privacy.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/f/f/1/8/ff181faa490d1c3a/moses_namara.jpg
5879976112
cd43a1f6-db73-4ed2-b8ea-008170cc56c2
Moses Namara from the HATLab joins us to discuss his research at the intersection of privacy and human-computer interaction.
|
Jul 27, 2020 |
Authorship Attribution of Lennon McCartney Songs
33:10
https://traffic.libsyn.com/secure/dataskeptic/authorship-attribution-of-lennon-mccartney-songs.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/c/4/c/b/c4cb76afc4e923e8/mark_glickman.jpg
5698659232
b16c5214-0965-41cc-91da-2d5f2834f057
|
Jul 20, 2020 |
GANs Can Be Interpretable
26:39
https://traffic.libsyn.com/secure/dataskeptic/gans-can-be-interpretable.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/d/2/b/5/d2b519db93d5f90e/ERIK_H.jpg
5437046392
558a4925-058e-4f70-a5a8-53d676e51c2d
|
Jul 11, 2020 |
Sentiment Preserving Fake Reviews
28:39
https://traffic.libsyn.com/secure/dataskeptic/sentiment-preserving-fake-reviews.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/e/8/d/6/e8d6701cf7d931bf/David_I._Adelani.jpg
5304765470
f675b14b-0947-4ae7-be51-8ad7f9bf88aa
|
Jul 06, 2020 |
Interpretability Practitioners
32:07
https://traffic.libsyn.com/secure/dataskeptic/interpretability-practitioners.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/c/7/1/2/c712292b2c694631/rayhong.jpg
5025600046
251f4283-610f-4972-8cda-f365134251fb
|
Jun 26, 2020 |
Facial Recognition Auditing
47:30
https://traffic.libsyn.com/secure/dataskeptic/facial-recognition-auditing.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/1/3/a/c/13ac68a531c7d8d0/deb_raji.jpg
4853408465
7813ecad-a9be-4567-8a7c-5ac59d74c27c
|
Jun 19, 2020 |
Robust Fit to Nature
38:16
https://traffic.libsyn.com/secure/dataskeptic/robust-fit-to-nature.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/5/0/0/6/5006316ec4c1c258/uri_hasson.jpg
4695270557
a093de54-b73f-44f5-a25c-54f177344cb6
|
Jun 12, 2020 |
Black Boxes Are Not Required
32:29
https://traffic.libsyn.com/secure/dataskeptic/black-boxes-are-not-required.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/d/8/a/6/d8a6052324c52c44/CynthiaCentered.jpg
4550156348
7a5de2af-aa7b-4962-a39c-40fa03ec559f
Deep neural networks are undeniably effective. They rely on such a large number of parameters that they are appropriately described as “black boxes”. While black boxes lack desirable properties like interpretability and explainability, in some cases, their accuracy makes them incredibly useful. But does achieving “usefulness” require a black box? Can we be sure an equally valid but simpler solution does not exist? Cynthia Rudin helps us answer that question. We discuss her recent paper with co-author Joanna Radin titled (spoiler warning)… Why Are We Using Black Box Models in AI When We Don’t Need To? A Lesson From An Explainable AI Competition
|
Jun 05, 2020 |
Robustness to Unforeseen Adversarial Attacks
21:43
https://traffic.libsyn.com/secure/dataskeptic/robustness-to-unforeseen-adversarial-attacks.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/8/8/2/6/8826e164497e8295/daniel_kang.jpg
4431417866
85efd001-66a6-4912-9182-e2f880c6d6e5
|
May 30, 2020 |
Estimating the Size of Language Acquisition
25:06
https://traffic.libsyn.com/secure/dataskeptic/estimating-the-size-of-language-acquisition.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/7/f/e/6/7fe60e2ee632e186/frank_mollica.jpg
4302897049
fb5b42b9-cee1-43e1-a950-35defb3e4d28
|
May 22, 2020 |
Interpretable AI in Healthcare
35:51
https://traffic.libsyn.com/secure/dataskeptic/interpretable-ai-in-healthcare.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/4/9/c/f/49cf2e6cff8b0f51/jay.jpg
4302897050
349e3c5b-cca6-4bed-a803-a5b028bf91e2
|
May 15, 2020 |
Understanding Neural Networks
34:43
https://traffic.libsyn.com/secure/dataskeptic/understanding-neural-networks.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/3/1/f/d/31fd62d0ec5a5e86/lillicrap.jpg
4302897051
4fd57bb0-8534-4233-830c-153c76338922
What does it mean to understand a neural network? That’s the question posed in this arXiv paper. Kyle speaks with Tim Lillicrap about this and several other big questions.
|
May 08, 2020 |
Self-Explaining AI
32:03
https://traffic.libsyn.com/secure/dataskeptic/self-explaining-ai.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/a/d/6/9/ad69227a1a28e515/Daniel_Elton.jpg
4207129084
9b646628-59ed-4b55-8815-f5a1cbdbc77f
Dan Elton joins us to discuss self-explaining AI. What could be better than an interpretable model? How about a model which explains itself in a conversational way, engaging in a back-and-forth with the user. We discuss the paper Self-explaining AI as an alternative to interpretable AI, which presents a framework for self-explaining AI.
|
May 02, 2020 |
Plastic Bag Bans
34:51
https://traffic.libsyn.com/secure/dataskeptic/plastic-bag-bans.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/8/1/9/1/819111fee138eba9/rebeccataylor.jpg
4081663101
0f0c5edc-b3fb-4b0c-aee1-1b9efee0c58a
|
Apr 24, 2020 |
Self Driving Cars and Pedestrians
30:44
https://traffic.libsyn.com/secure/dataskeptic/self-driving-cars-and-pedestrians.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/2/b/2/4/2b2472a6c20a60ef/ArashKalatian_PODCASTCOVER.jpg
3990577173
5353d1ba-1148-4b16-aff3-40d00dedcbb4
|
Apr 18, 2020 |
Computer Vision is Not Perfect
26:08
https://traffic.libsyn.com/secure/dataskeptic/computer-vision-is-not-perfect.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/5/9/5/d/595d7ba2bde45897/julia_evans.jpg
3860421283
4c5c9547-c829-4bbc-be1a-ddf6ca90e0d2
|
Apr 10, 2020 |
Uncertainty Representations
39:48
https://traffic.libsyn.com/secure/dataskeptic/uncertainty-representations.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/e/3/9/3/e393d17ea8a7ec24/JESSICA_HUMMLAN_PODCAST_COVER.jpg
3755249764
f162d8a5-266b-4e46-ba9f-55197ec6a309
Jessica Hullman joins us to share her expertise on data visualization and communication of data in the media. We discuss Jessica’s work on visualizing uncertainty, interviewing visualization designers on why they don't visualize uncertainty, and modeling interactions with visualizations as Bayesian updates. Homepage: http://users.eecs.northwestern.edu/~jhullman/ Lab: MU Collective
|
Apr 04, 2020 |
AlphaGo, COVID-19 Contact Tracing and New Data Set
33:41
https://traffic.libsyn.com/secure/dataskeptic/AlphaGo_COVID-19_Contact_Tracing_and_New_Data_Set.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/9/0/3/9/903987bee0d97ab6/Data_Skeptic_Journal_Club.jpg
3653937044
eb6f5d23-338b-4cfc-806d-f37dfc590bdd
Announcing Journal Club I am pleased to announce Data Skeptic is launching a new spin-off show called "Journal Club" with similar themes but a very different format from the Data Skeptic everyone is used to. In Journal Club, we will have a regular panel and occasional guest panelists to discuss interesting news items and one featured journal article every week in a roundtable discussion. Each week, I'll be joined by Lan Guo and George Kemp for a discussion of interesting data science related news articles and a featured journal or pre-print article. We hope that this podcast will give listeners an introduction to the works we cover and how people discuss these works. Our topics will often coincide with the original Data Skeptic podcast's current Interpretability theme, but we have few rules right now about what we pick. We enjoy discussing these items with each other and we hope you will too. In the coming weeks, we will start opening up the guest chair more often to bring new voices to our discussion. After that, we'll be looking for ways we can engage with our audience. Keep reading and thanks for listening! Kyle
|
Mar 28, 2020 |
Visualizing Uncertainty
32:53
https://traffic.libsyn.com/secure/dataskeptic/visualizing-uncertainty.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/c/4/d/3/c4d34aa009266e57/Lonnio_Bescanon_REDO_1.jpg
3543969445
86486820-b2b6-4e4d-8709-fe104e7ca506
|
Mar 20, 2020 |
Interpretability Tooling
42:38
https://traffic.libsyn.com/secure/dataskeptic/interpretability-tooling.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/d/f/5/4/df54e81431e867b6/Pramit_Choudhary.jpg
3447835191
e8a7da54-3ca5-4d24-beed-57c88f4e5238
Pramit Choudhary joins us to talk about the methodologies and tools used to assist with model interpretability.
|
Mar 13, 2020 |
Shapley Values
20:08
https://traffic.libsyn.com/secure/dataskeptic/shapley-values.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/d/9/0/4/d904c30e13a2bff5/DS_Logo_Stacked_-_Color_2.png
3334898918
a05ec701-0993-4931-9016-59bc30371b26
Kyle and Linhda discuss how Shapley Values might be a good tool for determining what makes the cut for a home renovation.
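For a taste of the math, here is a minimal sketch that computes exact Shapley values for a toy renovation game by averaging each feature's marginal contribution over all orderings. The features and dollar values are hypothetical.

    from itertools import permutations

    # Characteristic function: resale value added by each coalition of
    # renovations, with some synergy between kitchen and bath.
    def value(coalition):
        v = {frozenset(): 0,
             frozenset({"kitchen"}): 20,
             frozenset({"bath"}): 15,
             frozenset({"paint"}): 5,
             frozenset({"kitchen", "bath"}): 45,
             frozenset({"kitchen", "paint"}): 25,
             frozenset({"bath", "paint"}): 20,
             frozenset({"kitchen", "bath", "paint"}): 50}
        return v[frozenset(coalition)]

    players = ["kitchen", "bath", "paint"]
    shapley = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            before = value(coalition)
            coalition.add(p)
            shapley[p] += (value(coalition) - before) / len(orders)

    print(shapley)  # average marginal contribution of each renovation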
|
Mar 06, 2020 |
Anchors as Explanations
37:07
https://traffic.libsyn.com/secure/dataskeptic/anchors-as-explanations.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/e/3/3/3/e333907f90f6c88a/marco_TR.jpg
3230318908
e45c47bd-7662-4311-ac41-927818e6a1de
|
Feb 28, 2020 |
Mathematical Models of Ecological Systems
36:42
https://traffic.libsyn.com/secure/dataskeptic/mathematical-models-of-ecological-systems.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/d/b/5/9/db59c22409d8fa92/johnvandermeer.jpg
3132363983
16677155-fbbe-46a6-b44c-287bfdf26c32
|
Feb 22, 2020 |
Adversarial Explanations
36:51
https://traffic.libsyn.com/secure/dataskeptic/adversarial-explanations.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/6/6/1/7/6617950cb8377300/WALTERWOODS.jpg
3020419627
817c4325-7b7b-439e-a6bb-98ede2d41a47
|
Feb 14, 2020 |
ObjectNet
38:37
https://traffic.libsyn.com/secure/dataskeptic/objectnet.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/9/0/8/7/9087b906df159dd8/andrei_barbu.jpg
2930172964
c5f38387-ecd0-4042-a491-b54e99337b39
Andrei Barbu joins us to discuss ObjectNet - a new kind of vision dataset. In contrast to ImageNet, ObjectNet seeks to provide images that are more representative of the types of images an autonomous machine is likely to encounter in the real world. Collecting a dataset in this way required careful use of Mechanical Turk to get Turkers to provide a corpus of images that removes some of the bias found in ImageNet. http://0xab.com/
|
Feb 07, 2020 |
Visualization and Interpretability
35:49
https://traffic.libsyn.com/secure/dataskeptic/visualization-and-interpretability.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/c/0/4/7/c047eca5741f172c/enrico_bertini.jpg
2828185455
9a6ddf43-d6ff-40e9-8aaa-fa9ef3a9f404
Enrico Bertini joins us to discuss how data visualization can be used to help make machine learning more interpretable and explainable. Find out more about Enrico at http://enrico.bertini.io/. More from Enrico with co-host Moritz Stefaner on the Data Stories podcast!
|
Jan 31, 2020 |
Interpretable One Shot Learning
30:40
https://traffic.libsyn.com/secure/dataskeptic/interpretable-one-shot-learning.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/d/6/8/3/d6832f8d99416c82/suwang.jpg
2740354012
c07b082d-401a-491c-a644-60490c60866b
|
Jan 26, 2020 |
Fooling Computer Vision
25:26
https://traffic.libsyn.com/secure/dataskeptic/fooling-computer-vision.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/f/7/b/0f7bb4c3b66eadfb/weibe.jpg
2695172640
d3a15282-a7e5-428b-8153-9b9caff3a463
Wiebe van Ranst joins us to talk about a project in which specially designed printed images can fool a computer vision system, preventing it from identifying a person. Their attack targets the popular YOLO2 pre-trained image recognition model and is thus likely to be widely applicable.
|
Jan 22, 2020 |
Algorithmic Fairness
42:10
https://traffic.libsyn.com/secure/dataskeptic/algorithmic-fairness.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/3/0/f/030f44676b26cc26/aaronroth.jpg
2557740637
23f7e1d0-14e5-424d-ac02-59a5bba69852
|
Jan 14, 2020 |
Interpretability
32:43
https://traffic.libsyn.com/secure/dataskeptic/interpretability.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/6/f/9/1/6f91f99359202923/christophmolnar.jpg
2456303804
6549f3cc-6dad-4598-8e82-e7a27683c8bb
Machine learning has shown a rapid expansion into every sector and industry. With increasing reliance on models and increasing stakes for the decisions of models, questions of how models actually work are becoming increasingly important to ask. Welcome to Data Skeptic Interpretability. In this episode, Kyle interviews Christoph Molnar about his book Interpretable Machine Learning. Thanks to our sponsor, the Gartner Data & Analytics Summit going on in Grapevine, TX on March 23 – 26, 2020. Use discount code: dataskeptic. Music: Our new theme song is #5 by Big D and the Kids Table. Incidental music by Tanuki Suit Riot.
|
Jan 07, 2020 |
NLP in 2019
38:43
https://traffic.libsyn.com/secure/dataskeptic/nlp-in-2019.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
2358635275
16b95404-f209-4a66-95aa-444756833f04
|
Dec 31, 2019 |
The Limits of NLP
29:47
https://traffic.libsyn.com/secure/dataskeptic/the-limits-of-nlp.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
2255195445
7deed310-da4f-4fcd-a604-fd73366e3b5d
We are joined by Colin Raffel to discuss the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer".
|
Dec 24, 2019 |
Jumpstart Your ML Project
20:34
https://traffic.libsyn.com/secure/dataskeptic/jumpstart-your-ml-project.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
2139235973
8a6575dc-9339-4b30-b2b7-21bdd6cdac4b
Seth Juarez joins us to discuss the toolbox of options available to a data scientist to jumpstart or extend their machine learning efforts.
|
Dec 15, 2019 |
Serverless NLP Model Training
29:02
https://traffic.libsyn.com/secure/dataskeptic/serverless-nlp-model-training.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
2055479925
640f60a2-ef45-45f0-8fde-eb177945a99e
Alex Reeves joins us to discuss some of the challenges around building a serverless, scalable, generic machine learning pipeline. This is a technical deep dive on architecting solutions and a discussion of some of the design choices made.
|
Dec 10, 2019 |
Team Data Science Process
41:24
https://traffic.libsyn.com/secure/dataskeptic/the-team-data-science-process.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
1989223012
26545ff6-9a9b-48ae-819d-b2663d13740c
Buck Woody joins Kyle to share experiences from the field and the application of the Team Data Science Process - a popular six-phase workflow for doing data science.
|
Dec 03, 2019 |
Ancient Text Restoration
41:13
https://traffic.libsyn.com/secure/dataskeptic/ancient-text-restoration.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
1928137505
0ce940d1-0c25-4629-b170-6918e32dedeb
Thea Sommerschield joins us this week to discuss the development of Pythia - a machine learning model trained to assist in the reconstruction of ancient language text.
|
Dec 01, 2019 |
ML Ops
36:31
https://traffic.libsyn.com/secure/dataskeptic/ml-ops.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
1882945277
1fc6e3e4-241c-4723-835c-df9dc0fbaa3e
Kyle met up with Damian Brady at MS Ignite 2019 to discuss machine learning operations.
|
Nov 27, 2019 |
Annotator Bias
25:55
https://traffic.libsyn.com/secure/dataskeptic/annotator-bias.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
1828223097
7f984357-1ce2-496b-83b4-cbfff6b7d051
The modern deep learning approaches to natural language processing are voracious in their demands for large corpora to train on. Folk wisdom used to hold that around 100k documents were required for effective training. The availability of broadly trained, general-purpose models like BERT has made it possible to do transfer learning to achieve novel results on much smaller corpora. Thanks to these advancements, an NLP researcher might get value out of fewer examples since they can use transfer learning to get a head start and focus on learning the nuances of the language specifically relevant to the task at hand. Thus, small specialized corpora are both useful and practical to create. In this episode, Kyle speaks with Mor Geva, lead author on the recent paper Are We Modeling the Task or the Annotator? An Investigation of Annotator Bias in Natural Language Understanding Datasets, which explores some unintended consequences of the typical procedure followed for generating corpora. Source code for the paper available here: https://github.com/mega002/annotator_bias
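A hedged sketch of the transfer-learning setup described above: freeze a pre-trained BERT and train only a small classification head on a tiny corpus. The texts and labels are toy data, and the checkpoint is the standard Hugging Face bert-base-uncased, not anything from the paper.

    import torch
    from transformers import AutoModel, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("bert-base-uncased")
    bert = AutoModel.from_pretrained("bert-base-uncased")
    for p in bert.parameters():
        p.requires_grad = False  # keep the pre-trained weights fixed

    texts = ["great movie", "terrible plot", "loved it", "waste of time"]
    labels = torch.tensor([1, 0, 1, 0])  # toy sentiment labels
    head = torch.nn.Linear(bert.config.hidden_size, 2)
    opt = torch.optim.Adam(head.parameters(), lr=1e-3)

    for _ in range(20):
        batch = tok(texts, padding=True, return_tensors="pt")
        with torch.no_grad():
            cls = bert(**batch).last_hidden_state[:, 0]  # [CLS] embeddings
        loss = torch.nn.functional.cross_entropy(head(cls), labels)
        opt.zero_grad()
        loss.backward()
        opt.step()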
|
Nov 23, 2019 |
NLP for Developers
29:01
https://traffic.libsyn.com/secure/dataskeptic/nlp-for-developers.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
1772809075
d6ca854d-8032-4fec-99e1-e9cafbdd5503
While at MS Build 2019, Kyle sat down with Lance Olson from the Applied AI team about how tools like cognitive services and cognitive search enable non-data scientists to access relatively advanced NLP tools out of box, and how more advanced data scientists can focus more time on the bigger picture problems.
|
Nov 20, 2019 |
Indigenous American Language Research
22:51
https://traffic.libsyn.com/secure/dataskeptic/indigenous-american-language-research.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
1697436902
8b25f9236fa84f978c0afc8ad63daf9c
Manuel Mager joins us to discuss natural language processing for low and under-resourced languages. We discuss current work in this area and the Naki Project which aggregates research on NLP for native and indigenous languages of the American continent.
|
Nov 13, 2019 |
Talking to GPT-2
29:10
https://traffic.libsyn.com/secure/dataskeptic/talking-to-gpt2.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
1557268572
d3f28152f4c048efb425046084a105be
GPT-2 is yet another in a succession of models like ELMo and BERT which adopt a similar deep learning architecture and train an unsupervised model on a massive text corpus. As we have been covering recently, these approaches are showing tremendous promise, but how close are they to an AGI? Our guest today, Vazgen Davidyants, wondered exactly that and had conversations with a chatbot running GPT-2. We discuss his experiences as well as some novel thoughts on artificial intelligence.
|
Oct 31, 2019 |
Reproducing Deep Learning Models
22:43
https://traffic.libsyn.com/secure/dataskeptic/reproducing-deep-learning-models.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
1438210197
b6631174c0774adca25e570c11cbe213
Rajiv Shah attempted to reproduce an earthquake-predicting deep learning model. His results exposed some issues with the model. Kyle and Rajiv discuss the original paper and Rajiv's analysis.
|
Oct 23, 2019 |
What BERT is Not
27:00
https://traffic.libsyn.com/secure/dataskeptic/what-bert-is-not.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
1321317814
9143bdb924654624a47518b09a99bd4c
Allyson Ettinger joins us to discuss her work in computational linguistics, specifically in exploring some of the ways in which the popular natural language processing approach BERT has limitations.
|
Oct 14, 2019 |
SpanBERT
24:49
https://traffic.libsyn.com/secure/dataskeptic/spanbert.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
1230499537
cc2600914352494fa17199f8529bd831
|
Oct 08, 2019 |
BERT is Shallow
20:29
https://traffic.libsyn.com/secure/dataskeptic/bert-is-shallow.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
1054050812
37cc595c9c3e42f1a393490082ac7a9b
Tim Niven joins us this week to discuss his work exploring the limits of what BERT can do on certain natural language tasks such as adversarial attacks, compositional learning, and systematic learning.
|
Sep 23, 2019 |
BERT is Magic
18:01
https://traffic.libsyn.com/secure/dataskeptic/bert-is-magic.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
975303383
f75d4eb6cc4045ae93d967e4d3aed295
Kyle pontificates on how impressed he is with BERT.
|
Sep 16, 2019 |
Applied Data Science in Industry
21:51
https://traffic.libsyn.com/secure/dataskeptic/applied-data-science-in-industry.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
867688340
b4442f2001d1476a88911ad9f8ab7ca7
Kyle sits down with Jen Stirrup to inquire about her experiences helping companies deploy data science solutions in a variety of different settings.
|
Sep 06, 2019 |
Building the howto100m Video Corpus
22:38
https://traffic.libsyn.com/secure/dataskeptic/building-the-howto100m-video-corpus.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
703561062
c4ffd9a0bb664aba968b8d234ca8c306
Video annotation is an expensive and time-consuming process. As a consequence, the available video datasets are useful but small. The availability of machine transcribed explainer videos offers a unique opportunity to rapidly develop a useful, if dirty, corpus of videos that are "self-annotating", as hosts explain the actions they are taking on the screen. This episode is a discussion of the HowTo100m dataset - a project which has assembled a video corpus of 136M video clips with captions covering 23k activities. Related Links: The paper will be presented at ICCV 2019. Antoine on Github: @antoine77340. Antoine's homepage.
|
Aug 19, 2019 |
BERT
13:44
https://traffic.libsyn.com/secure/dataskeptic/bert.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
473816145
31dde7b999664cc3a266357e845aad14
Kyle provides a non-technical overview of why Bidirectional Encoder Representations from Transformers (BERT) is a powerful tool for natural language processing projects.
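To get a hands-on feel for what BERT's masked-language-model pre-training objective does, here is a minimal sketch using the Hugging Face transformers library. The library and example sentence are not from the episode, just a starting point for listeners who want to experiment.

    from transformers import pipeline

    fill = pipeline("fill-mask", model="bert-base-uncased")
    # BERT ranks candidate words for the masked position using context
    # from both the left and the right (hence "bidirectional").
    for guess in fill("The goal of this podcast is to [MASK] the news."):
        print(guess["token_str"], round(guess["score"], 3))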
|
Jul 29, 2019 |
Onnx
20:32
https://traffic.libsyn.com/secure/dataskeptic/onyx.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
426836834
e3aba9d406394c7aab68801c843d5f25
Kyle interviews Prasanth Pulavarthi about the Onnx format for deep neural networks.
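As a minimal sketch of what the format enables, here is how a small PyTorch model might be exported to ONNX so that any ONNX-compatible runtime can serve it. The model and file name are hypothetical.

    import torch

    model = torch.nn.Sequential(
        torch.nn.Linear(4, 8),
        torch.nn.ReLU(),
        torch.nn.Linear(8, 2),
    )
    dummy_input = torch.randn(1, 4)  # an example input pins down the graph's shapes
    torch.onnx.export(model, dummy_input, "tiny_model.onnx",
                      input_names=["features"], output_names=["logits"])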
|
Jul 22, 2019 |
Catastrophic Forgetting
21:27
https://traffic.libsyn.com/secure/dataskeptic/catastrophic-forgetting.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
388005372
eb00d12787114e77b1ee6b4b49690cec
Kyle and Linhda discuss some high level theory of mind and overview the machine learning concept of catastrophic forgetting.
|
Jul 15, 2019 |
Transfer Learning
29:51
https://traffic.libsyn.com/secure/dataskeptic/transfer_learning.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
359030838
e91343f5d01445919fe17ba0d2111a3a
Sebastian Ruder is a research scientist at DeepMind. In this episode, he joins us to discuss the state of the art in transfer learning and his contributions to it.
|
Jul 08, 2019 |
Facebook Bargaining Bots Invented a Language
23:08
https://traffic.libsyn.com/secure/dataskeptic/facebook-language.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
300517549
11bc11c22848453393d87e16d6051d4f
In 2017, Facebook published a paper called Deal or No Deal? End-to-End Learning for Negotiation Dialogues. In this research, the reinforcement learning agents developed a mechanism of communication (which could be called a language) that made them able to optimize their scores in the negotiation game. Many media sources reported this as if it were a first step towards Skynet taking over. In this episode, Kyle discusses bargaining agents and the actual results of this research.
|
Jun 21, 2019 |
Under Resourced Languages
16:47
https://traffic.libsyn.com/secure/dataskeptic/under-resourced-languages.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
280893945
00afb6552357488a88837795e95990da
Priyanka Biswas joins us in this episode to discuss natural language processing for languages that do not have as many resources as those that are more commonly studied, such as English. Successful NLP projects benefit from the availability of resources like large, well-annotated corpora, software libraries, and pre-trained models. For languages that researchers have not paid as much attention to, these tools are not always available.
|
Jun 15, 2019 |
Named Entity Recognition
17:12
https://traffic.libsyn.com/secure/dataskeptic/named-entity-recognition.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
257333681
36b59073d9ca485fa7ce2a71bb19e1a5
Kyle and Linh Da discuss the class of approaches called "Named Entity Recognition" or NER. NER algorithms take any string as input and return a list of "entities" - specific facts and agents in the text along with a classification of the type (e.g. person, date, place).
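A minimal NER example, assuming the spaCy library with its small English model installed (python -m spacy download en_core_web_sm); the sentence is made up for illustration.

    import spacy

    nlp = spacy.load("en_core_web_sm")
    doc = nlp("Kyle interviewed Linh Da in Los Angeles on June 8, 2019.")
    for ent in doc.ents:
        print(ent.text, ent.label_)  # e.g. PERSON, GPE, DATE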
|
Jun 08, 2019 |
The Death of a Language
20:19
https://traffic.libsyn.com/secure/dataskeptic/the-death-of-a-language.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
228315428
7e0bedaac10845b29193613aac8ba87b
USC students from the CAIS++ student organization have created a variety of novel projects under the mission statement of "artificial intelligence for social good". In this episode, Kyle interviews Zane and Leena about the Endangered Languages Project.
|
Jun 01, 2019 |
Neural Turing Machines
25:27
https://traffic.libsyn.com/secure/dataskeptic/neuro-turing-machines.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
200135792
532167c709c14b8ba15be243c48e23ea
Kyle and Linh Da discuss the concepts behind the neural Turing machine.
|
May 25, 2019 |
Data Infrastructure in the Cloud
30:05
https://traffic.libsyn.com/secure/dataskeptic/data-infrastructure-in-the-cloud.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
153419384
599a9361089c432bacf39e3f3f2f0298
Kyle chats with Rohan Kumar about hyperscale, data at the edge, and a variety of other trends in data engineering in the cloud.
|
May 18, 2019 |
NCAA Predictions on Spark
23:53
https://traffic.libsyn.com/secure/dataskeptic/ncaa-predictions-on-spark.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
90968246
34cce9347f624443b82dbc261d33a695
In this episode, Kyle interviews Laura Edell at MS Build 2019. The conversation covers a number of topics, notably her NCAA Final 4 prediction model.
|
May 11, 2019 |
The Transformer
15:23
https://traffic.libsyn.com/secure/dataskeptic/transformer.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
44185513
68595f430c3240dfa3a38019bc6a3ffb
Kyle and Linhda discuss attention and the transformer - an encoder/decoder architecture that extends the basic ideas of vector embeddings like word2vec into a more contextual use case.
|
May 03, 2019 |
Mapping Dialects with Twitter Data
25:20
https://traffic.libsyn.com/secure/dataskeptic/mapping-dialects-with-twitter-data.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
28333777
b589437cfd944a64bbf2f0edf493d36b
When users on Twitter post with geographic tags, it creates the opportunity for a variety of interesting questions to be posed having to do with language, dialects, and location. In this episode, Kyle interviews Bruno Gonçalves about his work studying language in this way.
|
Apr 26, 2019 |
Sentiment Analysis
27:28
https://traffic.libsyn.com/secure/dataskeptic/sentiment-analysis.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137553
8b426b91ef564a9f83657ffb7606d89f
This is an interview with Ellen Loeshelle, Director of Product Management at Clarabridge. We primarily discuss sentiment analysis.
|
Apr 20, 2019 |
Attention Primer
14:51
https://traffic.libsyn.com/secure/dataskeptic/attention-part-1.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137554
4eb829e293dd454cb5e8fe42b65800c4
A gentle introduction to the very high-level idea of "attention" in machine learning, as it will play a major role in some upcoming episodes over the next few weeks.
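For readers who want the idea in code, here is a bare-bones sketch of scaled dot-product attention, the core computation behind the mechanisms covered in the coming episodes.

    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def attention(Q, K, V):
        scores = Q @ K.T / np.sqrt(K.shape[-1])  # how well each query matches each key
        weights = softmax(scores, axis=-1)       # each query attends over all keys
        return weights @ V                       # weighted mixture of the values

    # 3 tokens with 4-dimensional embeddings; self-attention uses X as Q, K, and V.
    X = np.random.randn(3, 4)
    print(attention(X, X, X).shape)  # (3, 4)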
|
Apr 13, 2019 |
Cross-lingual Short-text Matching
24:43
https://traffic.libsyn.com/secure/dataskeptic/cross-lingual.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137555
a9fcf4a476e64ba2a08e4296d316f2af
Modern messaging technology has facilitated a trend towards highly compact, short messages sent by users who can presume a great amount of context held between the communicating parties. The rules of grammar may be discarded and often visible errors are a normal part of the conversation. >>> Good mornink >>> morning Yet such short messages are also important for businesses whose users are unlikely to read a large block of text upon completing an order. Similarly, a business might want to offer assistance and effective question-answering solutions in an automated and ideally multi-lingual way. In this episode, we discuss techniques for designing solutions like that.
|
Apr 05, 2019 |
ELMo
23:49
https://traffic.libsyn.com/secure/dataskeptic/elmo.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137556
707cf41bc7cb43f8a897b6c6ac179481
ELMo (Embeddings from Language Models) introduced the idea of deep contextualized word representations. It extends previous ideas like word2vec and GloVe. The ELMo model is a neural network able to map natural language into a vector space. This vector space, out of the box, proved to be incredibly useful in a wide variety of seemingly unrelated NLP tasks like sentiment analysis and named entity recognition.
|
Mar 29, 2019 |
BLEU
42:23
https://traffic.libsyn.com/secure/dataskeptic/bleu.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137557
7afb1089a8394237b6b345e697aafba8
Bilingual evaluation understudy (or BLEU) is a metric for evaluating the quality of machine translation using human translation as examples of acceptable quality results. This metric has become a widely used standard in the research literature. But is it the perfect measure of quality of machine translation?
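A minimal sketch of computing BLEU for a single sentence pair with NLTK; the reference and candidate are toy examples, and smoothing keeps one missing higher-order n-gram from zeroing out the whole score.

    from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

    reference = [["the", "cat", "is", "on", "the", "mat"]]  # human translation(s)
    candidate = ["the", "cat", "sat", "on", "the", "mat"]   # machine output

    score = sentence_bleu(reference, candidate,
                          smoothing_function=SmoothingFunction().method1)
    print(round(score, 3))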
|
Mar 23, 2019 |
Simultaneous Translation at Baidu
24:10
https://traffic.libsyn.com/secure/dataskeptic/simultaneous-translation.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137558
264d346c178d45548ee791dd98802bbf
While at NeurIPS 2018, Kyle chatted with Liang Huang about his work with Baidu research on simultaneous translation, which was demoed at the conference.
|
Mar 15, 2019 |
Human vs Machine Transcription
32:43
https://traffic.libsyn.com/secure/dataskeptic/human-vs-machine-transcription-errors.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137559
944e2414a22b43a2a32e83c6a2a1b6e6
Machine transcription (the process of translating audio recordings of language to text) has come a long way in recent years. But how do the errors made during machine transcription compare to the errors made by a human transcriber? Find out in this episode!
|
Mar 08, 2019 |
seq2seq
21:41
https://traffic.libsyn.com/secure/dataskeptic/seq2seq.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137560
1ae9f1c1cf1d4434a6f5e4dd6d39403e
A sequence to sequence (or seq2seq) model is a neural architecture used for translation (and other tasks) which consists of an encoder and a decoder. The encoder/decoder architecture has obvious promise for machine translation, and has been successfully applied this way. Encoding an input to a small number of hidden nodes which can effectively be decoded to a matching string requires machine learning to learn an efficient representation of the essence of the strings. In addition to translation, seq2seq models have been used in a number of other NLP tasks such as summarization and image captioning.
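To make the encoder/decoder split concrete, here is a bare-bones seq2seq skeleton in PyTorch: toy sizes, no attention, and untrained, so it is a sketch of the architecture rather than a working translator.

    import torch
    import torch.nn as nn

    VOCAB, EMB, HID = 100, 32, 64

    class Encoder(nn.Module):
        def __init__(self):
            super().__init__()
            self.emb = nn.Embedding(VOCAB, EMB)
            self.rnn = nn.GRU(EMB, HID, batch_first=True)
        def forward(self, src):
            _, h = self.rnn(self.emb(src))
            return h  # the compressed "essence" of the source sequence

    class Decoder(nn.Module):
        def __init__(self):
            super().__init__()
            self.emb = nn.Embedding(VOCAB, EMB)
            self.rnn = nn.GRU(EMB, HID, batch_first=True)
            self.out = nn.Linear(HID, VOCAB)
        def forward(self, tgt, h):
            o, h = self.rnn(self.emb(tgt), h)
            return self.out(o), h  # per-step scores over the target vocabulary

    src = torch.randint(0, VOCAB, (1, 7))  # a source "sentence" of 7 tokens
    tgt = torch.randint(0, VOCAB, (1, 5))  # teacher-forced target tokens
    logits, _ = Decoder()(tgt, Encoder()(src))
    print(logits.shape)  # (1, 5, VOCAB)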
|
Mar 01, 2019 |
Text Mining in R
20:28
https://traffic.libsyn.com/secure/dataskeptic/text-mining-in-r.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137561
842948dad4414dd99847cd768da20a4f
Kyle interviews Julia Silge about her path into data science, her book Text Mining with R, and some of the ways in which she's used natural language processing in projects both personal and professional.
|
Feb 22, 2019 |
Recurrent Relational Networks
19:13
https://traffic.libsyn.com/secure/dataskeptic/recurrent-relational-networks.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137562
b93acc967af84985b61928d93be0ec85
One of the most challenging NLP tasks is natural language understanding and reasoning. How can we construct algorithms that are able to achieve human-level understanding of text and be able to answer general questions about it? This is truly an open problem, and one which the bAbI dataset has been constructed to facilitate progress on. bAbI presents a variety of different language understanding and reasoning tasks and exists as a benchmark for comparing approaches. In this episode, Kyle talks to Rasmus Berg Palm about his recent paper Recurrent Relational Networks
|
Feb 15, 2019 |
Text World and Word Embedding Lower Bounds
39:07
https://traffic.libsyn.com/secure/dataskeptic/text-world-and-word-embedding-lower-bounds.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137563
7822e3c3815c48d7b3986d9aadfe01c7
In the first half of this episode, Kyle speaks with Marc-Alexandre Côté and Wendy Tay about Text World. Text World is an engine that simulates text adventure games. Developers are encouraged to try out their reinforcement learning skills building agents that can programmatically interact with the generated text adventure games. In the second half of this episode, Kyle interviews Kevin Patel about his paper Towards Lower Bounds on Number of Dimensions for Word Embeddings. In this research, they explore an important question of how many hidden nodes to use when creating a word embedding.
|
Feb 08, 2019 |
word2vec
31:27
https://traffic.libsyn.com/secure/dataskeptic/word2vec.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137564
adda28bce9f241b9a7285aa96c191371
Word2vec is an unsupervised machine learning model which is able to capture semantic information from the text it is trained on. The model is based on neural networks. Several large organizations like Google and Facebook have trained word embeddings (the result of word2vec) on large corpora and shared them for others to use. A key algorithmic idea involved in word2vec is the continuous bag of words (CBOW) model. In this episode, Kyle uses excerpts from the 1983 cinematic masterpiece War Games, and challenges Linhda to guess a word Kyle leaves out of the transcript. This is similar to how word2vec is trained. It trains a neural network to predict a hidden word based on the words that appear before and after the missing location.
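A minimal CBOW training run with the gensim library (sg=0 selects CBOW); the toy sentences nod to the episode's War Games excerpts, and a real model would need a far larger corpus.

    from gensim.models import Word2Vec

    sentences = [["shall", "we", "play", "a", "game"],
                 ["how", "about", "a", "nice", "game", "of", "chess"],
                 ["a", "strange", "game"]]
    model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=0)
    print(model.wv.most_similar("game", topn=3))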
|
Feb 01, 2019 |
Authorship Attribution
50:37
https://traffic.libsyn.com/secure/dataskeptic/authorship-attribution.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137565
aff30e227812464d8d4cddeea520c023
In a recent paper, Leveraging Discourse Information Effectively for Authorship Attribution, authors Su Wang, Elisa Ferracane, and Raymond J. Mooney describe a deep learning methodology for predict which of a collection of authors was the author of a given document.
|
Jan 25, 2019 |
Very Large Corpora and Zipf's Law
24:11
https://traffic.libsyn.com/secure/dataskeptic/extremely-large-corpora.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137566
96547317601e4ac6b63f1cd790daf737
The earliest efforts to apply machine learning to natural language tended to convert every token (every word, more or less) into a unique feature. While techniques like stemming may have cut the number of unique tokens down, researchers always had to face a problem that was highly dimensional. The Naive Bayes algorithm was celebrated in NLP applications because of its ability to efficiently process highly dimensional data. Of course, other algorithms were applied to natural language tasks as well. While different algorithms had different strengths and weaknesses on different NLP problems, an early paper titled Scaling to Very Very Large Corpora for Natural Language Disambiguation popularized one somewhat surprising idea. For many NLP tasks, simply providing a large corpus of examples not only improved accuracy, but it also showed that asymptotically, some algorithms yielded more improvement from working on very, very large corpora. Although not explicitly about NLP, the noteworthy paper The Unreasonable Effectiveness of Data emphasizes this point further while paying homage to the classic treatise The Unreasonable Effectiveness of Mathematics in the Natural Sciences. In this episode, Kyle shares a few thoughts along these lines with Linh Da. The discussion winds up with a brief introduction to Zipf's law. When applied to natural language, Zipf's law states that the frequency of any given word in a corpus (regardless of language) will be inversely proportional to its rank in the frequency table.
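A quick way to eyeball Zipf's law on a plain-text corpus; the Project Gutenberg URL below is just one example source, and any large text can be substituted. If the law holds, rank times frequency stays roughly constant.

    from collections import Counter
    import urllib.request

    url = "https://www.gutenberg.org/files/11/11-0.txt"  # example corpus
    text = urllib.request.urlopen(url).read().decode("utf-8").lower().split()
    counts = Counter(text).most_common(10)
    top_freq = counts[0][1]
    for rank, (word, freq) in enumerate(counts, start=1):
        # rank * freq / top_freq hovers near 1 when frequency ~ 1/rank
        print(rank, word, freq, round(rank * freq / top_freq, 2))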
|
Jan 18, 2019 |
Semantic search at Github
34:57
https://traffic.libsyn.com/secure/dataskeptic/semantic-search-at-github.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137567
949a270151ff46f99a7a54d5ae749ae8
Github is many things besides source control. It's a social network, even though not everyone realizes it. It's a vast repository of code. It's a ticketing and project management system. And of course, it has search as well. In this episode, Kyle interviews Hamel Husain about his research into semantic code search.
|
Jan 11, 2019 |
Let's Talk About Natural Language Processing
36:13
https://traffic.libsyn.com/secure/dataskeptic/natural-language-processing.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137568
60005b36314c4f6990d1c42aeb400ed8
This episode reboots our podcast with the theme of Natural Language Processing for the next few months. We begin with introductions of Yoshi and Linh Da and then get into a broad discussion about natural language processing: what it is, what some of the classic problems are, and just a bit on approaches. Finishing out the show is an interview with Lucy Park about her work on the KoNLPy library for Korean NLP in Python. If you want to share your NLP project, please join our Slack channel. We're eager to see what listeners are working on! http://konlpy.org/en/latest/
|
Jan 04, 2019 |
Data Science Hiring Processes
33:05
https://traffic.libsyn.com/secure/dataskeptic/data-science-hiring-processes.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137569
60c5197507874b9f9329f9c377a87166
Kyle shares a few thoughts on mistakes observed by job applicants and also shares a few procedural insights listeners at early stages in their careers might find value in.
|
Dec 28, 2018 |
Holiday Reading - Epicac
21:21
https://traffic.libsyn.com/secure/dataskeptic/epicac.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137570
71ee7620be364b259e4ce944bd58b817
|
Dec 25, 2018 |
Drug Discovery with Machine Learning
28:59
https://traffic.libsyn.com/secure/dataskeptic/drug-discovery.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/6/0/0/6/6006034e8fb050d3/fake-news-album-400.jpg
11137571
b2f6be2f478844ad80842e4819a0806a
In today's episode, Kyle chats with Alexander Zhebrak, CTO of Insilico Medicine, Inc. Insilico self-describes as artificial intelligence for drug discovery, biomarker development, and aging research. The conversation in this episode explores the ways in which machine learning, in particular, deep learning, is contributing to the advancement of drug discovery. This happens not just through research but also through software development. Insilico works on data pipelines and tools like MOSES, a benchmarking platform to support research on machine learning for drug discovery. The MOSES platform provides a standardized benchmarking dataset, a set of open-sourced models with unified implementation, and metrics to evaluate and assess their performance.
|
Dec 21, 2018 |
Sign Language Recognition
19:46
https://traffic.libsyn.com/secure/dataskeptic/sign-language-recognition.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/2/3/d/023d02997c0cb8ad/main-logo.jpg
11137572
d0655645532d44d0b88750e149fad9ae
At the NeurIPS 2018 conference, Stradigi AI premiered a training game which helps players learn American Sign Language. This episode brings the first of many interviews conducted at NeurIPS 2018. In this episode, Kyle interviews Chief Data Scientist Carolina Bessega about the deep learning architecture used in this project. The Stradigi AI team was exhibiting a project called the American Sign Language (ASL) Alphabet Game at the recent NeurIPS 2018 conference. They also published a detailed blog post about how they built the system.
|
Dec 14, 2018 |
Data Ethics
19:51
https://traffic.libsyn.com/secure/dataskeptic/data-ethics.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/8/f/b/08fb7d9bd78d0b7d/fake-news-album-400.jpg
11137573
c0661e0227d74489acc1d836ce10a29e
This week, Kyle interviews Scott Nestler on the topic of Data Ethics. Today, no ubiquitous, formal ethical protocol exists for data science, although some have been proposed. One example is the INFORMS Ethics Guidelines. Guidelines like this are rather informal compared to other professions, like the Hippocratic Oath. Yet not every profession requires such a formal commitment. In this episode, Scott shares his perspective on a variety of ethical questions specific to data and analytics.
|
Dec 07, 2018 |
Escaping the Rabbit Hole
33:49
https://traffic.libsyn.com/secure/dataskeptic/escaping-the-rabbit-hole.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/c/d/8/1/cd81596719c5a86c/fake-news-album-400.jpg
11137574
6eb68b5fed9e4f1b8729fc3f61414725
Kyle interviews Mick West, author of Escaping the Rabbit Hole: How to Debunk Conspiracy Theories Using Facts, Logic, and Respect about the nature of conspiracy theories, the people that believe them, and how to help people escape the belief in false information. Mick is also the creator of metabunk.org. The discussion explores conspiracies like chemtrails, 9/11 conspiracy theories, JFK assassination theories, and the flat Earth theory. We live in a complex world in which no person can have a sufficient understanding of all topics. It's only natural that some percentage of people will eventually adopt fringe beliefs. In this book, Mick provides a fantastic guide to helping individuals who have fallen into a rabbit hole of pseudo-science or fake news.
|
Nov 30, 2018 |
[MINI] Theorem Provers
18:59
https://traffic.libsyn.com/secure/dataskeptic/theorem-provers.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/5/5/9/b/559b3f8530caa7d6/fake-news-album-400.jpg
11137575
943ad940be7b4fdd9fc04980896387ce
Fake news attempts to lead readers/listeners/viewers to conclusions that are not descriptions of reality. They do this most often by presenting false premises, but sometimes by presenting flawed logic. An argument is only sound and valid if the conclusions are drawn directly from all the stated premises, and if there exists a path of logical reasoning leading from those premises to the conclusion. While creating a theorem feels to most mathematicians like a creative act of discovery, some theorems have been proven using nothing more than search. All the "rules" of logic (like modus ponens) can be encoded into a computer program. That program can start from the premises, applying various combinations of rules to infer new information, and check to see if the program has inferred the desired conclusion or its negation. This does seem like a mechanical process when painted in this light. However, several challenges exist preventing any theorem prover from instantly solving all the open problems in mathematics. In this episode, we discuss a bit about what those challenges are.
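Here is a toy illustration of that mechanical process: a forward-chaining loop that repeatedly applies modus ponens to a set of facts until the goal is reached or nothing new can be inferred. The facts and rules are hypothetical, for illustration only.

    # Each rule is (set of premises, conclusion).
    rules = [({"it_rains"}, "ground_is_wet"),
             ({"ground_is_wet", "freezing"}, "ground_is_icy")]
    facts = {"it_rains", "freezing"}
    goal = "ground_is_icy"

    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)  # apply modus ponens
                changed = True

    print(goal in facts)  # True: a chain of inferences reaches the goal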
|
Nov 23, 2018 |
Automated Fact Checking
31:48
https://traffic.libsyn.com/secure/dataskeptic/automated-fact-checking.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/6/c/a/e/6caee1249b1a936c/fake-news-album-400.jpg
11137576
f8752c5e37584082a1439c6cb2288532
Fake news can be responded to with fact-checking. However, it's easier to create fake news than to fact-check it. Full Fact is the UK's independent fact-checking organization. In this episode, Kyle interviews Mevan Babakar, head of automated fact-checking at Full Fact. Our discussion talks about the process and challenges in doing fact-checking. Full Fact has been exploring ways in which machine learning can assist in automating parts of the fact-checking process. Progress in areas like this allows journalists to be more effective and rapid in responding to new information.
|
Nov 16, 2018 |
[MINI] Single Source of Truth
29:30
https://traffic.libsyn.com/secure/dataskeptic/single-source-of-truth.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/c/2/2/9/c229d08939b675e8/fake-news-album-400.jpg
11137577
6908d1329b084972ba24982b19770992
In mathematics, truth is universal. In data, truth lies in the where clause of the query. As large organizations have grown to rely on their data more significantly for decision making, a common problem is not being able to agree on what the data is. As the volume and velocity of data grow, challenges emerge in answering questions with precision. A simple question like "what was the revenue yesterday" could become mired in details. Did your query account for transactions that haven't been finalized? If I query again later, should I exclude orders that have been returned since the last query? What time zone should I use? The list goes on and on. In any large enough organization, you are also likely to find multiple copies of the same data. Independent systems might record the same information with slight variance. Sometimes systems will import data from other systems; a process which could become out of sync for several reasons. For any sufficiently large system, answering analytical questions with precision can become a non-trivial challenge. The business intelligence community aspires to provide a "single source of truth" - one canonical place where data consumers can go to get precise, reliable, and trusted answers to their analytical questions.
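A tiny illustration of "truth lies in the where clause" using pandas with hypothetical order data: three reasonable filters give three different answers to "what was revenue yesterday?"

    import pandas as pd

    orders = pd.DataFrame({
        "amount":    [100, 250, 75],
        "finalized": [True, False, True],
        "returned":  [False, False, True],
    })

    print(orders["amount"].sum())                        # 425: everything
    print(orders[orders["finalized"]]["amount"].sum())   # 175: finalized only
    print(orders[orders["finalized"] & ~orders["returned"]]["amount"].sum())
    # 100: finalized and not returned -- which one is "the" revenue?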
|
Nov 09, 2018 |
Detecting Fast Radio Bursts with Deep Learning
44:51
https://traffic.libsyn.com/secure/dataskeptic/detecting-fast-radio-bursts-with-deep-learning.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/1/2/1/1/12119536ee1189da/fake-news-album-400.jpg
11137578
e9a0d4fe15154cb0acdd62b3e6e63800
Fast radio bursts are an astrophysical phenomenon first observed in 2007. While many observations have been made, science has yet to explain the mechanism for these events. This has led some to ask: could it be a form of extra-terrestrial communication? Probably not. Kyle asks Gerry Zhang who works at the Berkeley SETI Research Center about this possibility and more importantly, about his applications of deep learning to detect fast radio bursts. Radio astronomy captures observations from space which can be converted to a waterfall chart or spectrogram. These data structures can be formatted in a visual way and also make great candidates for applying deep learning to the task of detecting the fast radio bursts.
|
Nov 02, 2018 |
Being Bayesian
24:38
https://traffic.libsyn.com/secure/dataskeptic/bayesian-redux.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/6/f/8/06f8d1ab4cfd69f9/fake-news-album-400.jpg
11137579
ebaa3f39beab4aa5b60061a96c4a8ef6
This episode explores the root concept of what it is to be Bayesian: describing knowledge of a system probabilistically, having an appropriate prior probability, knowing how to weigh new evidence, and following Bayes's rule to compute the revised distribution. We present this concept in a few different contexts but primarily focus on how our bird Yoshi sends signals about her food preferences. Like many animals, Yoshi is a complex creature whose preferences cannot easily be summarized by a straightforward utility function the way they might in a textbook reinforcement learning problem. Her preferences are sequential, conditional, and evolving. We may not always know what our bird is thinking, but we have some good indicators that give us clues.
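The mechanics of a single Bayesian update fit in a few lines; the hypotheses and likelihoods about Yoshi below are made up for illustration.

    # Prior over two hypotheses, and the likelihood of the observed
    # evidence (Yoshi takes a seed) under each hypothesis.
    prior = {"prefers_seeds": 0.5, "prefers_fruit": 0.5}
    likelihood_takes_seed = {"prefers_seeds": 0.9, "prefers_fruit": 0.3}

    # Bayes's rule: posterior is proportional to prior times likelihood.
    unnormalized = {h: prior[h] * likelihood_takes_seed[h] for h in prior}
    evidence = sum(unnormalized.values())
    posterior = {h: p / evidence for h, p in unnormalized.items()}
    print(posterior)  # {'prefers_seeds': 0.75, 'prefers_fruit': 0.25}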
|
Oct 26, 2018 |
Modeling Fake News
33:12
https://traffic.libsyn.com/secure/dataskeptic/modeling-fake-news.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/8/2/4/b/824bf9f314006fa8/fake-news-album-400.jpg
11137580
7bde97345ee5462facb7be55f6bc683b
This is our interview with Dorje Brody about his recent paper with David Meier, How to model fake news. This paper uses the tools of communication theory and a sub-topic called filtering theory to describe the mathematical basis for an information channel which can contain fake news. Thanks to our sponsor Gartner.
|
Oct 19, 2018 |
The Louvain Method for Community Detection
26:47
https://traffic.libsyn.com/secure/dataskeptic/louvain-community-detection.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/3/a/1/0/3a10d6b017686faa/fake-news-album-400.jpg
11137581
711343205bd64a45b5d335df007fcc99
Without getting into definitions, we have an intuitive sense of what a "community" is. The Louvain Method for Community Detection is one of the best known mathematical techniques designed to detect communities. This method requires typical graph data in which people are nodes and edges are their connections. It's easy to imagine this data in the context of Facebook or LinkedIn but the technique applies just as well to any other dataset like cellular phone calling records or pen-pals. The Louvain Method provides a means of measuring the strength of any proposed community based on a concept known as Modularity. Modularity is a value in the range $[-1/2, 1]$ that measures the density of links internal to a community against links external to the community. The quite palatable assumption here is that a genuine community would have members that are strongly interconnected. A community is not necessarily the same thing as a clique; it is not required that all community members know each other. Rather, we simply define a community as a graph structure where the nodes are more connected to each other than to people outside the community. It's only natural that any person in a community has many connections to people outside that community. The more a community has internal connections over external connections, the stronger that community is considered to be. The Louvain Method elegantly captures this intuitively desirable quality.
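As a quick illustrative sketch (our example, not one from the episode; it assumes networkx 2.8 or newer, which ships a Louvain implementation):

import networkx as nx
from networkx.algorithms.community import louvain_communities, modularity

G = nx.karate_club_graph()  # a classic small social network
communities = louvain_communities(G, seed=42)
print(len(communities), "communities found")
print("modularity:", modularity(G, communities))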
|
Oct 12, 2018 |
Cultural Cognition of Scientific Consensus
31:48
https://traffic.libsyn.com/secure/dataskeptic/cultural-cognition.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/9/3/3/2/93326d6d5d3dad5c/fake-news-album-400.jpg
11137582
2e9e1de2fc064fe1ada5fd930c1b1304
In this episode, our guest Dan Kahan discusses his research into how people consume and interpret science news. In an era of fake news, motivated reasoning, and alternative facts, important questions need to be asked about how people understand new information. Dan is a member of the Cultural Cognition Project at Yale University, a group of scholars interested in studying how cultural values shape public risk perceptions and related policy beliefs. In a paper titled Cultural cognition of scientific consensus, Dan and co-authors Hank Jenkins-Smith and Donald Braman discuss the "cultural cognition of risk" and establish experimentally that individuals tend to update their beliefs about scientific information through a context of their pre-existing cultural beliefs. In this way, topics such as climate change, nuclear power, and concealed-carry handgun permits often result in people interpreting the same evidence in very different ways. The findings of this and other studies tell us that on topics such as these, even when people are given proper information about a scientific consensus, individuals still interpret those results through the lens of their pre-existing cultural beliefs. The "cultural cognition of risk" refers to the tendency of individuals to form risk perceptions that are congenial to their values. The study presents both correlational and experimental evidence confirming that cultural cognition shapes individuals' beliefs about the existence of scientific consensus, and the process by which they form such beliefs, relating to climate change, the disposal of nuclear wastes, and the effect of permitting concealed possession of handguns. The implications of this dynamic for science communication and public policy-making are discussed.
|
Oct 05, 2018 |
False Discovery Rates
25:46
https://traffic.libsyn.com/secure/dataskeptic/false-discovery-rates.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/e/9/a/f/e9afcc07ff244cd8/fake-news-album-400.jpg
11137583
d1c0d9eb347e48e2864a07bc28e07395
A false discovery rate (FDR) is a methodology that can be useful when struggling with the problem of multiple comparisons. In any experiment, if the experimenter checks more than one dependent variable, then they are making multiple comparisons. Naturally, if you make enough comparisons, you will eventually find some correlation. Classically, people applied the Bonferroni Correction. In essence, this procedure dictates that you should lower your p-value threshold (raise your standard of evidence) by a specific amount depending on the number of variables you're considering. While effective, this methodology is strict about preventing false positives (type I errors). You aren't likely to find evidence for a hypothesis that is actually false using Bonferroni. However, your exuberance to avoid type I errors may have introduced some type II errors. There could be some hypotheses that are actually true, which you did not notice. This episode covers an alternative known as false discovery rates. The essence of this method is to make more specific adjustments to your expectation of what p-value is sufficient evidence.
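Here is a minimal Python sketch of the Benjamini-Hochberg procedure, the best known FDR-controlling method; the p-values below are made up for illustration:

# Benjamini-Hochberg: find the largest rank k whose sorted p-value
# clears the line k/m * q, then reject everything up to that rank.
def benjamini_hochberg(p_values, q=0.05):
    m = len(p_values)
    indexed = sorted(enumerate(p_values), key=lambda pair: pair[1])
    cutoff = 0
    for rank, (_, p) in enumerate(indexed, start=1):
        if p <= rank / m * q:
            cutoff = rank
    return {idx for idx, _ in indexed[:cutoff]}  # indices declared discoveries

p = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205]
print(sorted(benjamini_hochberg(p, q=0.05)))  # [0, 1]

For comparison, a Bonferroni threshold of 0.05/8 = 0.00625 would reject only the first of these p-values, while Benjamini-Hochberg rejects the first two.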
|
Sep 28, 2018 |
Deep Fakes
30:23
https://traffic.libsyn.com/secure/dataskeptic/deepfakes.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/a/c/a/3/aca34077e3464f08/fake-news-album-400.jpg
11137584
8ed692d61b7e49a3a587800b57fda923
Digital videos can be described as sequences of still images and associated audio. Audio is easy to fake. What about video? A video can easily be broken down into a sequence of still images replayed rapidly in sequence. In this context, videos are simply very high dimensional sequences of observations, ripe for input into a machine learning algorithm. The availability of commodity hardware, clever algorithms, and well-designed software to implement those algorithms at scale make it possible to do machine learning on video, but to what end? There are many answers, one interesting approach being the technology called "DeepFakes". The Deep of DeepFakes refers to Deep Learning, and the fake refers to the function of the software: to take a real video of a human being and digitally alter their face to match someone else's face. This software produces curiously convincing fake videos. Yet, there's something slightly off about them. Surely machine learning can be used to determine real from fake... right? Siwei Lyu and his collaborators certainly thought so and demonstrated this idea by identifying a novel, detectable feature which was commonly missing from videos produced by the DeepFakes software. In this episode, we discuss this use case for deep learning, detecting fake videos, and the threat of fake videos in the future.
|
Sep 21, 2018 |
Fake News Midterm
19:19
https://traffic.libsyn.com/secure/dataskeptic/fake-news-midterm.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/a/8/5/9/a859817729db9ac5/fake-news-album-500.jpg
11137585
ec43d43fb2464988ab8f72f6a56f0262
In this episode, Kyle reviews what we've learned so far in our series on Fake News and talks briefly about where we're going next.
|
Sep 14, 2018 |
Quality Score
18:55
https://traffic.libsyn.com/secure/dataskeptic/quality_score.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/2/6/7/3/26736f3fe27effe5/fake-news-album-400.jpg
11137586
0705f1fc3e274304a96938ab99f11bd8
Two weeks ago we discussed click through rates or CTRs and their usefulness and limits as a metric. Today, we discuss a related metric known as quality score. While that phrase has probably been used to mean dozens of different things in different contexts, our discussion focuses around the idea of quality score encountered in Search Engine Marketing (SEM). SEM is the practice of purchasing keyword targeted ads shown to customers using a search engine. Most SEM is managed via an auction mechanism - the advertiser states the price they are willing to pay, and in real time, the search engine will serve users advertisements and charge the advertiser. But how do search engines decide which ad to show and what price to charge? This is a complicated question requiring a multi-part answer to address completely. In this episode, we focus on one part of that equation, which is the quality score the search engine assigns to the ad in context. This quality score is calculated via several factors including crawling the destination page (also called the landing page) and predicting how applicable the content found there is to the ad itself.
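As a toy model (a common simplification, not any search engine's actual formula), ranking ads by bid times quality score shows how a relevant ad can win with a lower bid:

# Hypothetical ads: rank = bid * quality score.
ads = [
    {"name": "ad_a", "bid": 2.00, "quality": 4.0},
    {"name": "ad_b", "bid": 3.50, "quality": 2.0},
    {"name": "ad_c", "bid": 1.50, "quality": 6.0},
]
for ad in ads:
    ad["ad_rank"] = ad["bid"] * ad["quality"]

winner = max(ads, key=lambda ad: ad["ad_rank"])
print(winner["name"], winner["ad_rank"])  # ad_c wins despite the lowest bid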
|
Sep 07, 2018 |
The Knowledge Illusion
40:01
https://traffic.libsyn.com/secure/dataskeptic/the-knowledge-illusion.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/c/7/8/b/c78b302bab802497/fake-news-album-400.jpg
11137587
fd038ece71fa49bcaff8de17dcfdf3ed
Kyle interviews Steven Sloman, Professor in the school of Cognitive, Linguistic, and Psychological Sciences at Brown University. Steven is co-author of The Knowledge Illusion: Why We Never Think Alone and Causal Models: How People Think about the World and Its Alternatives. Steven shares his perspective and research into how people process information and what this teaches us about the existence of and belief in fake news.
|
Aug 31, 2018 |
Click Through Rates
31:45
https://traffic.libsyn.com/secure/dataskeptic/ctrs.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/8/2/9/f/829f5ea254848aa5/fake-news-album-500.jpg
11137588
fabf4476db954cec97f403328f729bc9
A Click Through Rate (CTR) is the proportion of clicks to impressions of some item of content shared online. This terminology is most commonly used in digital advertising but applies just as well to content websites might choose to feature on their homepage or in search results. A CTR is intuitively appealing as a metric for optimization. After all, if users are disinterested in some content, under normal circumstances, it's reasonable to assume they would ignore the content, rather than clicking on it. On the other hand, the best content is likely to elicit a high CTR as users signal their interest by following the hyperlink. In the advertising world, a website could charge per impression, per click, or per action. Both impression and action based pricing have asymmetrical results for the publisher and advertiser. However, paying per click (CPC based advertising) seems to strike a nice balance. For this and other numeric reasons, many digital advertising mechanisms (such as Google Adwords) use CPC as the payment mechanism. When charging per click, an advertising platform will value a high CTR when selecting which ad to show. As we learned in our episode on Goodhart's Law, once a measure is turned into a target, it ceases to be a good measure. While CTR alone does not entirely drive most online advertising algorithms, it does play an important role. Thus, advertisers are incentivized to adopt strategies that maximize CTR. On the surface, this sounds like a great idea: provide internet users what they are looking for, and be rewarded with their attention and lower advertising costs. However, one possible unintended consequence of this type of optimization is the creation of ads which are designed solely to generate clicks, regardless of whether users are happy with the page they visit after clicking a link. So, at least in part, websites that optimize for higher CTRs are going to favor content that does a good job getting viewers to click it. Getting a user to view a page is not totally synonymous with getting a user to appreciate the content of a page. The gap between the algorithmic goal and the user experience could be one of the factors that has promoted the creation of fake news.
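A minimal sketch with hypothetical numbers shows why CTR optimization favors clickbait under cost-per-click pricing: expected revenue per impression is just CTR times CPC:

# Two hypothetical ads with identical pricing but different appeal.
ads = {
    "honest_ad":    {"clicks": 50,  "impressions": 10_000, "cpc": 0.40},
    "clickbait_ad": {"clicks": 300, "impressions": 10_000, "cpc": 0.40},
}
for name, ad in ads.items():
    ctr = ad["clicks"] / ad["impressions"]
    ecpm = ctr * ad["cpc"] * 1000  # expected earnings per 1000 impressions
    print(f"{name}: CTR={ctr:.1%}, eCPM=${ecpm:.2f}")
# honest_ad: CTR=0.5%, eCPM=$2.00 vs clickbait_ad: CTR=3.0%, eCPM=$12.00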
|
Aug 24, 2018 |
Algorithmic Detection of Fake News
46:26
https://traffic.libsyn.com/secure/dataskeptic/algorithmic-detection-of-fake-news.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/e/5/1/b/e51b288b17a22176/fake-news-album-400.jpg
11137589
626d7d33cb734db583b1c57734417e86
The scale and frequency with which information can be distributed on social media makes the problem of fake news a rapidly metastasizing issue. Doing any content filtering or labeling at this scale demands an algorithmic solution. In today's episode, Kyle interviews Kai Shu and Mike Tamir about their independent work exploring the use of machine learning to detect fake news. Kai Shu and his co-authors published Fake News Detection on Social Media: A Data Mining Perspective, a research paper which both surveys the existing literature and organizes the structure of the problem in a robust way. Mike Tamir led the development of fakerfact.org, a website and Chrome/Firefox plugin which leverages machine learning to try to predict the category of a previously unseen web page, with categories like opinion, wiki, and fake news.
|
Aug 17, 2018 |
Ant Intelligence
28:17
https://traffic.libsyn.com/secure/dataskeptic/ant-intelligence.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/c/d/7/5/cd7512900964a257/DS-PodcastCover-R2.png
11137590
2f286792dfab4e5a89320c006076b24b
If you prepared a list of creatures regarded as highly intelligent, it's unlikely ants would make the cut. This is expected, as on an individual level, ants do not generally display behavior that most humans would regard as intelligence. In fact, it might even be true that most species of ants are unable to learn. Despite this, ant colonies have evolved excellent survival mechanisms through the careful orchestration of ants.
|
Aug 10, 2018 |
Human Detection of Fake News
28:27
https://traffic.libsyn.com/secure/dataskeptic/human-detection-of-fake-news.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/1/8/3/4/1834232191256ed6/fake-news-album-500.jpg
11137591
0771428607a843ee943f841b822d1f2f
With publications such as "Prior exposure increases perceived accuracy of fake news", "Lazy, not biased: Susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning", and "The science of fake news", Gordon Pennycook is asking and answering analytical questions about the nature of human intuition and fake news. Gordon appeared on Data Skeptic in 2016 to discuss people's ability to recognize pseudo-profound bullshit. This episode explores his work in fake news.
|
Aug 03, 2018 |
Spam Filtering with Naive Bayes
19:45
https://traffic.libsyn.com/secure/dataskeptic/spam-filtering.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/3/7/a/4/37a4d9fdb7373c7b/logo.jpg
11137592
059fbeda72854653954e9c0f57eb9953
Today's spam filters are advanced data driven tools. They rely on a variety of techniques to effectively and often seamlessly filter out junk email from good email. Whitelists, blacklists, traffic analysis, network analysis, and a variety of other tools are probably employed by most major players in this area. Naturally content analysis can be an especially powerful tool for detecting spam. Given the binary nature of the problem (spam or ham), it's clear that this is a great problem to use machine learning to solve. In order to apply machine learning, you first need a labeled training set. Thankfully, many standard corpora of labeled spam data are readily available. Further, if you're working for a company with a spam filtering problem, often asking users to self-moderate or flag things as spam can be an effective way to generate a large amount of labels for "free". With a labeled dataset in hand, a data scientist working on spam filtering must next do feature engineering. This should be done with consideration of the algorithm that will be used. The Naive Bayesian Classifier has been a popular choice for detecting spam because it tends to perform pretty well on high dimensional data, unlike a lot of other ML algorithms. It is also very efficient to compute, making it possible to train a per-user classifier if one wished to. While we might do some basic NLP tricks, for the most part, we can turn each word in a document (or perhaps each bigram or n-gram in a document) into a feature. The Naive part of the Naive Bayesian Classifier stems from the naive assumption that all features in one's analysis are considered to be independent. If $A$ and $B$ are known to be independent, then $P(A \cap B) = P(A) \cdot P(B)$. In other words, you just multiply the probabilities together. Shh, don't tell anyone, but this assumption is actually wrong! Certainly, if a document contains the word algorithm, it's more likely to contain the word probability than some randomly selected document. Thus, $P(\textit{probability} \mid \textit{algorithm}) \neq P(\textit{probability})$, violating the assumption. Despite this "flaw", the Naive Bayesian Classifier works remarkably well on many problems. If one employs the common approach of converting a document into bigrams (pairs of words instead of single words), then you can capture a good deal of this correlation indirectly. In the final leg of the discussion, we explore the question of whether or not a Naive Bayesian Classifier would be a good choice for detecting fake news.
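Here is a minimal sketch using scikit-learn's multinomial Naive Bayes on a tiny made-up corpus; a real spam filter would train on far more labeled data:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

emails = [
    "win a free prize now",         # spam
    "free money click here",        # spam
    "meeting agenda for tomorrow",  # ham
    "lunch with the team today",    # ham
]
labels = ["spam", "spam", "ham", "ham"]

# ngram_range=(1, 2) adds bigrams, indirectly capturing word correlations.
vectorizer = CountVectorizer(ngram_range=(1, 2))
X = vectorizer.fit_transform(emails)
model = MultinomialNB().fit(X, labels)

test = vectorizer.transform(["click here for a free prize"])
print(model.predict(test))  # ['spam']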
|
Jul 27, 2018 |
The Spread of Fake News
45:18
https://traffic.libsyn.com/secure/dataskeptic/the-spread-of-fake-news.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/b/7/7/a/b77a07c254d6c3f2/fake-news-album-400.jpg
11137593
d089f82bbe174f1b99d711b28f5c9ebb
How does fake news get spread online? It's not just a matter of manipulating search algorithms. The social platforms for sharing play a major role in the distribution of fake news. But how significant of an impact can there be? How significantly can bots influence the spread of fake news? In this episode, Kyle interviews Filippo Menczer, Professor of Computer Science and Informatics. Fil is part of the Observatory on Social Media (OSoMe, https://osome.iuni.iu.edu/tools/). OSoMe are the creators of Hoaxy, Botometer, Fakey, and other tools for studying the spread of information on social media. The interview explores these tools and the contributions bots make to the spread of fake news.
|
Jul 20, 2018 |
Fake News
38:19
https://traffic.libsyn.com/secure/dataskeptic/fake-news.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/d/a/0/8/da08ee520fd050da/fake-news-album-500.jpg
11137594
5b239f1adacc4d51b265d2bcaf572c31
This episode kicks off our new theme of "Fake News" with guests Robert Sheaffer and Brad Schwartz. Fake news is a new label for an old idea. For our purposes, we will define fake news as information created to deliberately mislead while masquerading as a legitimate, journalistic source of truth. It's become a modern topic of discussion as our cultures adapt to the fledgling mechanisms of communication introduced by online platforms. What was the earliest incident of fake news? That's a question for which we may never find a satisfying answer. While not the earliest, we present a dramatization of an early example of fake news, which leads us into a discussion with UFO Skeptic Robert Sheaffer. Following that we get into our main interview with Brad Schwartz, author of Broadcast Hysteria: Orson Welles's War of the Worlds and the Art of Fake News.
|
Jul 13, 2018 |
Dev Ops for Data Science
38:20
https://traffic.libsyn.com/secure/dataskeptic/devops-for-data-science.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137595
db35a82146574f52ab5e8650d1bb9f43
We revisit the 2018 Microsoft Build in this episode, focusing on the latest ideas in DevOps. Kyle interviews Cloud Developer Advocates Damian Brady, Paige Bailey, and Donovan Brown to talk about DevOps and data science and databases. For a data scientist, what does it even mean to “build”? Packaging and deployment are things that a data scientist doesn't normally have to consider in their day-to-day work. The process of making an AI app is usually divided into two streams of work: data scientists building machine learning models and app developers building the application for end users to consume. DevOps includes all the parties involved in getting the application deployed and maintained, each thinking about the phases that precede and follow their part of the end solution. So what does DevOps mean for data science? Why should you adopt DevOps best practices? In the first half, Paige and Damian share their views on what DevOps for data science would look like and how it can be introduced to provide continuous integration, delivery, and deployment of data science models. In the second half, Donovan and Damian talk about the DevOps life cycle of putting a database under version control and carrying out deployments through a release pipeline.
|
Jul 11, 2018 |
First Order Logic
16:51
https://traffic.libsyn.com/secure/dataskeptic/first-order-logic.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/1/f/9/4/1f94b61dd085935a/ds-couch-mini-400.png
11137596
df74ad6688b241ce8ff152f0e4b926b0
Logic is a fundamental of mathematical systems. Its roots are the values true and false, and its power is in what its rules allow you to prove. Propositional logic provides its user with variables. This episode gets into First Order Logic, an extension of propositional logic.
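As a small illustration (our example, not necessarily one from the episode), first order logic can quantify over all objects in a domain, something propositional logic cannot express:

$\forall x \, (\mathrm{Human}(x) \rightarrow \mathrm{Mortal}(x))$
$\mathrm{Human}(\mathrm{socrates})$
$\therefore \; \mathrm{Mortal}(\mathrm{socrates})$

Propositional logic could only treat "Socrates is mortal" as a single opaque variable; first order logic lets one rule apply to every object in the domain.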
|
Jul 06, 2018 |
Blind Spots in Reinforcement Learning
27:35
https://traffic.libsyn.com/secure/dataskeptic/blind-spots-in-reinforcement-learning.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/c/1/4/e/c14e751db3cb1423/DS-PodcastCover-R2.png
11137597
a8feaf4664e54744b24e9950b9da50a5
An intelligent agent trained in a simulated environment may be prone to making mistakes in the real world due to discrepancies between the training and real-world conditions. These error regions, known as "blind spots," are hard to find and can stem from various causes. In this week’s episode, Kyle is joined by Ramya Ramakrishnan, a PhD candidate at MIT, to discuss the idea of "blind spots" in reinforcement learning and approaches to discovering them.
|
Jun 29, 2018 |
Defending Against Adversarial Attacks
31:29
https://traffic.libsyn.com/secure/dataskeptic/defending-against-adversarial-attacks.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/f/1/3/0f13c6825cd46e17/DS-PodcastCover-R2.png
11137598
b4f3fa58280246e59fd16759f90905c7
In this week’s episode, our host Kyle interviews Gokula Krishnan from ETH Zurich, about his recent contributions to defenses against adversarial attacks. The discussion centers around his latest paper, titled “Defending Against Adversarial Attacks by Leveraging an Entire GAN,” and his proposed algorithm, aptly named ‘Cowboy.’
|
Jun 22, 2018 |
Transfer Learning
18:04
https://traffic.libsyn.com/secure/dataskeptic/transfer-learning.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/4/e/b/9/4eb901c25b081a96/ds-couch-mini-400.png
11137599
8fa5993bd9554b068f33566eeddb325c
On a long car ride, Linhda and Kyle record a short episode. This discussion is about transfer learning, a technique used in machine learning to leverage training from one domain to get a head start on learning in another domain. Transfer learning has some obvious appealing features. Take the example of an image recognition problem. There are now many widely available models that do general image recognition. Detecting that an image contains a "sofa" is an impressive feat. However, for a furniture company interested in more specific details, this classifier is absurdly general. Should the furniture company build a massive corpus of tagged photos, effectively starting from scratch? Or is there a way they can transfer the learnings from the general task to the specific one? A general definition of transfer learning in machine learning is the use of some or all aspects of a pre-trained model as the basis for training a new model on a specific and potentially limited dataset.
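A minimal sketch of this idea using PyTorch and a recent torchvision (the furniture categories and layer choices here are hypothetical):

import torch
import torchvision

# Start from a model pre-trained on general image recognition (ImageNet).
model = torchvision.models.resnet18(weights="IMAGENET1K_V1")

# Freeze the pre-trained layers so their general features are kept.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with one sized for the new, specific task.
num_furniture_classes = 5  # hypothetical: sofa, chair, table, bed, desk
model.fc = torch.nn.Linear(model.fc.in_features, num_furniture_classes)

# Only the new layer's parameters are trained on the small dataset.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)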
|
Jun 15, 2018 |
Medical Imaging Training Techniques
25:21
https://traffic.libsyn.com/secure/dataskeptic/medical-imaging-training-techniques.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/b/4/d/0/b4d0729fcddc423e/DS-PodcastCover-R2.png
11137600
4b89f3615aa94de8b994abee770ac011
Medical imaging is a highly effective tool used by clinicians to diagnose a wide array of diseases and injuries. However, it often requires exceptionally trained specialists such as radiologists to interpret accurately. In this episode of Data Skeptic, our host Kyle Polich is joined by Gabriel Maicas, a PhD candidate at the University of Adelaide, to discuss machine learning systems that can be used by radiologists to improve their accuracy and speed of diagnosis.
|
Jun 08, 2018 |
Kalman Filters
21:32
https://traffic.libsyn.com/secure/dataskeptic/kalman-filters.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/1/2/1/a/121a7922ea3d0075/ds-couch-mini-400.png
11137601
cde6f37bec7b4d4889f4c099c549fd90
Thanks to our sponsor Galvanize. A Kalman Filter is a technique for taking a sequence of observations about an object or variable and determining the most likely current state of that object. In this episode, we discuss it in the context of tracking our lilac crowned amazon parrot Yoshi. Kalman filters have many applications but the one of particular interest under our current theme of artificial intelligence is to efficiently update one's beliefs in light of new information. The Kalman filter is based upon the Gaussian distribution. This distribution is described by two parameters: $\mu$ (the mean) and $\sigma$ (the standard deviation). The procedure for updating these values in light of new information has a closed form. This means that it can be described with straightforward formulae and computed very efficiently. You may gain a greater appreciation for Kalman filters by considering what would happen if you could not rely on the Gaussian distribution to describe your posterior beliefs. If determining the probability distribution over the variables describing some object cannot be efficiently computed, then by definition, maintaining the most up to date posterior beliefs can be a significant challenge. Kyle will be giving a talk at SkeptiCal 2018 in Berkeley, CA on June 10.
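Here is a minimal one-dimensional Kalman filter sketch in Python; the measurement numbers are made up, but the update and predict steps are the standard closed-form Gaussian operations:

def predict(mean, var, motion, motion_var):
    # Moving adds uncertainty: means add, variances add.
    return mean + motion, var + motion_var

def update(mean, var, measurement, measurement_var):
    # Combining two Gaussians stays Gaussian; this is the closed form.
    k = var / (var + measurement_var)  # Kalman gain
    new_mean = mean + k * (measurement - mean)
    new_var = (1 - k) * var
    return new_mean, new_var

mean, var = 0.0, 1000.0            # very uncertain initial belief
for z in [4.9, 5.1, 5.0]:          # noisy position measurements
    mean, var = update(mean, var, z, measurement_var=1.0)
    mean, var = predict(mean, var, motion=0.0, motion_var=0.1)
print(mean, var)  # belief converges near 5.0 with small variance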
|
Jun 01, 2018 |
AI in Industry
43:03
https://traffic.libsyn.com/secure/dataskeptic/ms-ai.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137602
f33912be144b47f1bd453e55ffbc73c8
There's so much to discuss on the AI side, it's hard to know where to begin. Luckily, Steve Guggenheimer, Microsoft’s corporate vice president of AI Business, and Carlos Pessoa, a software engineering manager for the company’s Cloud AI Platform, talked to Kyle about announcements related to AI in industry.
|
May 25, 2018 |
AI in Games
25:58
https://traffic.libsyn.com/secure/dataskeptic/ai-in-games-master.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/a/0/8/8/a088ef320c9f8a57/DS-PodcastCover-R2.png
11137603
8834f740e25e4b2ba817dc08033273fb
Today's interview is with the authors of the textbook Artificial Intelligence and Games.
|
May 18, 2018 |
Game Theory
24:11
https://traffic.libsyn.com/secure/dataskeptic/game-theory.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/1/f/d/6/1fd6ec81ca2ca3cd/ds-couch-mini-400.png
11137604
b731ec6a1e5506d698aadb33cdd14e4e
Thanks to our sponsor The Great Courses. This week's episode is a short primer on game theory. For tickets to the free Data Skeptic meetup in Chicago on Tuesday, May 15 at the Mendoza College of Business (224 South Michigan Avenue, Suite 350), click here.
|
May 11, 2018 |
The Experimental Design of Paranormal Claims
27:32
https://traffic.libsyn.com/secure/dataskeptic/the-experimental-design-of-paranormal-claims.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/a/7/e/2/a7e277facc183533/DS-PodcastCover-R2.png
11137605
bd381acdf634764f45a16df603fd5895
In this episode of Data Skeptic, Kyle chats with Jerry Schwarz from the Independent Investigations Group (IIG)'s SF Bay Area chapter about testing claims of the paranormal. The IIG is a volunteer-based organization dedicated to investigating paranormal or extraordinary claims from a scientific viewpoint. The group, headquartered at the Center for Inquiry-Los Angeles in Hollywood, offers a $100,000 prize to anyone who can show, under proper observing conditions, evidence of any paranormal, supernatural, or occult power or event. CHICAGO Tues, May 15, 6pm. Come to our Data Skeptic meetup. CHICAGO Saturday, May 19, 10am. Kyle will be giving a talk at the Chicago AI, Data Science, and Blockchain Conference 2018.
|
May 04, 2018 |
Winograd Schema Challenge
36:57
https://traffic.libsyn.com/secure/dataskeptic/winograd_episode.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/9/a/0/d/9a0dadbeca7c8cc5/DS-PodcastCover-R2.png
11137606
d52b3133851ae7f5627178c3fac8afe8
Our guest this week, Hector Levesque, joins us to discuss an alternative way to measure a machine’s intelligence, called the Winograd Schema Challenge. The challenge was proposed as a possible alternative to the Turing test during the 2011 AAAI Spring Symposium. The challenge involves a small reading comprehension test about common sense knowledge.
|
Apr 27, 2018 |
The Imitation Game
01:00:58
https://traffic.libsyn.com/secure/dataskeptic/the-imitation-game.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/8/a/4/2/8a4283e9fdab4dd5/DS-PodcastCover-R2.png
11137607
6ea9e980ef50a2f6d12b9d3f99a24b55
This week on Data Skeptic, we begin with a skit to introduce the topic of this show: The Imitation Game. We open with a scene in the distant future. The year is 2027, and a company called Shamony is announcing their new product, Ada, the most advanced artificial intelligence agent. To prove its superiority, the lead scientist announces that it will use the Turing Test that Alan Turing proposed in 1950. During this skit, we introduce Turing’s “objections” outlined in his famous paper, “Computing Machinery and Intelligence.” Following that, we talk with improv coach Holly Laurent on the art of improvisation and Peter Clark from the Allen Institute for Artificial Intelligence about question-answering algorithms.
|
Apr 20, 2018 |
Eugene Goostman
17:15
https://traffic.libsyn.com/secure/dataskeptic/eugene-goostman.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/2/e/4/5/2e4529fad5ea8729/DS-PodcastCover-R2.png
11137608
66675e51d7526a7da5a5b294c82d3e2b
In this episode, Kyle shares his perspective on the chatbot Eugene Goostman which (some claim) "passed" the Turing Test. As a second topic Kyle also does an intro of the Winograd Schema Challenge.
|
Apr 13, 2018 |
The Theory of Formal Languages
23:44
https://traffic.libsyn.com/secure/dataskeptic/the-theory-of-formal-languages.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/e/7/e/7/e7e713ec4440d4ba/ds-couch-mini-400.png
11137609
6ca2078ef6372c38ab8ce64b06979c7f
In this episode, Kyle and Linhda discuss the theory of formal languages. Any language can (theoretically) be a formal language. The requirement is that the language can be rigorously described as a set of strings which are considered part of the language. Those strings are any combination of alphabet characters in the given language.
|
Apr 06, 2018 |
The Loebner Prize
33:21
https://traffic.libsyn.com/secure/dataskeptic/the-loebner-prize.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/b/5/c/d/b5cd8ab97ed0c63c/DS-PodcastCover-R2.png
11137610
f653672b5cae36cd46e75f3fea1cd6db
The Loebner Prize is a competition in the spirit of the Turing Test. Participants are welcome to submit conversational agent software to be judged by a panel of humans. This episode includes interviews with Charlie Maloney, a judge in the Loebner Prize, and Bruce Wilcox, a winner of the Loebner Prize.
|
Mar 30, 2018 |
Chatbots
27:05
https://traffic.libsyn.com/secure/dataskeptic/chatbots.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/7/5/d/075daebaccc96e80/DS-PodcastCover-R2.png
11137611
cada69954898d18af4db5e0dbe69bb75
In this episode, Kyle chats with Vince from iv.ai and Heather Shapiro who works on the Microsoft Bot Framework. We solicit their advice on building a good chatbot both creatively and technically. Our sponsor today is Warby Parker.
|
Mar 23, 2018 |
The Master Algorithm
46:34
https://traffic.libsyn.com/secure/dataskeptic/the-master-algorithm.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/8/7/c/f/87cf646105ad740b/DS-PodcastCover-R2.png
11137612
a1e02bb891dcbcd7074004e7e040dbe6
In this week’s episode, Kyle Polich interviews Pedro Domingos about his book, The Master Algorithm: How the quest for the ultimate learning machine will remake our world. In the book, Domingos describes what machine learning is doing for humanity, how it works and what it could do in the future. He also hints at the possibility of an ultimate learning algorithm which, given enough data, would be able to derive all knowledge: past, present, and future.
|
Mar 16, 2018 |
The No Free Lunch Theorems
27:25
https://traffic.libsyn.com/secure/dataskeptic/no-free-lunch-theorems.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/d/6/2/e/d62e1f38b348c7a2/ds-couch-mini-400.png
11137613
607bfc3cac46840ee599aa9c5cf52e7a
What's the best machine learning algorithm to use? I hear that XGBoost wins most of the Kaggle competitions that aren't won with deep learning. Should I just use XGBoost all the time? That might work out most of the time in practice, but a proof exists which tells us that there cannot be one true algorithm to rule them all.
|
Mar 09, 2018 |
ML at Sloan Kettering Cancer Center
38:34
https://traffic.libsyn.com/secure/dataskeptic/ml-at-sloan-kettering-cancer-center.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/5/3/9/a/539ae2d299add4fd/DS-PodcastCover-R2.png
11137614
53ecfef2f365d114f64f311b7232cec2
For a long time, physicians have recognized that the tools they have aren't powerful enough to treat complex diseases, like cancer. In addition to data science and models, clinicians also needed actual products — tools that physicians and researchers can draw upon to answer questions they regularly confront, such as “what clinical trials are available for this patient that I'm seeing right now?” In this episode, our host Kyle interviews guests Alex Grigorenko and Iker Huerga from Memorial Sloan Kettering Cancer Center to talk about how data and technology can be used to prevent, control and ultimately cure cancer.
|
Mar 02, 2018 |
Optimal Decision Making with POMDPs
18:40
https://traffic.libsyn.com/secure/dataskeptic/optimal-decision-making-with-pomdps.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/8/2/c/2/82c23ef60973ffe9/ds-couch-mini-400.png
11137615
6f7d91e359c7fc9aac5b198770ad219a
In a previous episode, we discussed Markov Decision Processes or MDPs, a framework for decision making and planning. This episode explores the generalization Partially Observable MDPs (POMDPs), an incredibly general framework that describes nearly every agent-based system.
|
Feb 23, 2018 |
AI Decision-Making
42:59
https://traffic.libsyn.com/secure/dataskeptic/ai-decision-making.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/a/5/8/3/a58322fffef2cbf2/DS-PodcastCover-R2.png
11137616
44f42f6e6776163ffa0ae226ae66c9eb
Making a decision is a complex task. Today's guest Dongho Kim discusses how he and his team at Prowler have been building a platform that will be accessible by way of APIs and a set of pre-made scripts for autonomous decision making based on probabilistic modeling, reinforcement learning, and game theory. The aim is for an AI system to make decisions just as well as humans can.
|
Feb 16, 2018 |
[MINI] Reinforcement Learning
23:03
https://traffic.libsyn.com/secure/dataskeptic/reinforcement-learning.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/f/b/f/f/fbffd6eb93dd69f0/ds-couch-mini-400.png
11137617
f8946083c7765a945920e2a56db49f4c
In many real world situations, a person/agent doesn't necessarily know their own objectives or the mechanics of the world they're interacting with. However, if the agent receives rewards which are correlated with both their actions and the state of the world, then reinforcement learning can be used to discover behaviors that maximize the reward earned.
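A minimal tabular Q-learning sketch (our own toy example, not from the episode) shows an agent learning from rewards alone, without knowing the world's mechanics in advance:

import random

states, actions = [0, 1], ["left", "right"]
Q = {(s, a): 0.0 for s in states for a in actions}
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def step(state, action):
    # Hidden world mechanics: "right" moves toward state 1, which pays off.
    next_state = 1 if action == "right" else 0
    reward = 1.0 if next_state == 1 else 0.0
    return next_state, reward

state = 0
for _ in range(1000):
    if random.random() < epsilon:  # occasionally explore
        action = random.choice(actions)
    else:                          # otherwise exploit current estimates
        action = max(actions, key=lambda a: Q[(state, a)])
    next_state, reward = step(state, action)
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = next_state

print(max(actions, key=lambda a: Q[(0, a)]))  # learns to prefer "right"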
|
Feb 09, 2018 |
Evolutionary Computation
24:44
https://traffic.libsyn.com/secure/dataskeptic/evolutionary-computation.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/4/b/b/5/4bb528f3358cbf69/DS-PodcastCover-R2.png
11137618
1b97492aaad0bc3ca0f32e7e8fd75488
In this week’s episode, Kyle is joined by Risto Miikkulainen, a professor of computer science and neuroscience at the University of Texas at Austin. They talk about evolutionary computation, its applications in deep learning, and how it’s inspired by biology. They also discuss some of the things Sentient Technologies is working on in stocks and finance, retail, e-commerce and web design, as well as the technology behind it: evolutionary algorithms.
|
Feb 02, 2018 |
[MINI] Markov Decision Processes
20:24
https://traffic.libsyn.com/secure/dataskeptic/markov-decision-process.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/6/2/a/c/62aca57ac4026e5f/ds-couch-mini-400.png
11137619
6471f356e05ac62de6819943e1ecc53f
Formally, an MDP is defined as the tuple $(S, A, T, R)$ containing states, actions, the transition function, and the reward function. This podcast examines each of these and presents them in the context of simple examples. Despite MDPs suffering from the curse of dimensionality, they're a useful formalism and a basic concept we will expand on in future episodes.
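A minimal value iteration sketch over a hypothetical two-state MDP, written as the tuple $(S, A, T, R)$ described above:

S = [0, 1]
A = ["stay", "go"]
# T[s][a] = list of (next_state, probability); R[s][a] = immediate reward.
T = {0: {"stay": [(0, 1.0)], "go": [(1, 0.8), (0, 0.2)]},
     1: {"stay": [(1, 1.0)], "go": [(0, 1.0)]}}
R = {0: {"stay": 0.0, "go": 0.0}, 1: {"stay": 1.0, "go": 0.0}}

gamma, V = 0.9, {s: 0.0 for s in S}
for _ in range(100):
    # Bellman backup: value of the best action from each state.
    V = {s: max(R[s][a] + gamma * sum(p * V[s2] for s2, p in T[s][a])
                for a in A)
         for s in S}
print(V)  # state 1 (where "stay" pays) is more valuable than state 0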
|
Jan 26, 2018 |
Neuroscience Frontiers
29:06
https://traffic.libsyn.com/secure/dataskeptic/neuroscience-frontiers.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/2/9/7/f/297f489c0255ce1a/DS-PodcastCover-R2.png
11137620
642bd344413becadc177b428285b7d26
Last week on Data Skeptic, we visited the Laboratory of Neuroimaging, or LONI, at USC and learned about their data-driven platform that enables scientists from all over the world to share, transform, store, manage and analyze their data to understand neurological diseases better. We talked about how neuroscientists measure the brain using data from MRI scans, and how that data is processed and analyzed to understand the brain. This week, we'll continue the second half of our two-part episode on LONI.
|
Jan 19, 2018 |
Neuroimaging and Big Data
26:37
https://traffic.libsyn.com/secure/dataskeptic/neuroimaging-and-big-data.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/5/8/9/b/589bcbd422eb9626/DS-PodcastCover-R2.png
11137621
a9b3c28fbee332dc383caecb4efd7efb
Last year, Kyle had a chance to visit the Laboratory of Neuroimaging, or LONI, at USC, and learn about how some researchers are using data science to study the function of the brain. We’re going to be covering some of their work in two episodes on Data Skeptic. In this first part of our two-part episode, we'll talk about the data collection and brain imaging and the LONI pipeline. We'll then continue our coverage in the second episode, where we'll talk more about how researchers can gain insights about the human brain and their current challenges. Next week, we’ll also talk more about what all that has to do with data science, machine learning, and artificial intelligence. Joining us in this week’s episode are members of the LONI lab, including principal investigators Dr. Arthur Toga and Dr. Meng Law, and researchers Farshid Sepherband, PhD, and Ryan Cabeen, PhD.
|
Jan 12, 2018 |
The Agent Model of Artificial Intelligence
17:21
https://traffic.libsyn.com/secure/dataskeptic/the-agent-model-of-intelligence.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/b/8/2/e/b82e6c9131f13bce/ds-couch-mini-400.png
11137622
75ac8840c3ac7437aba957629de21c14
In artificial intelligence, the term 'agent' is used to mean an autonomous, thinking entity with the ability to interact with its environment. An agent could be a person or a piece of software. In either case, we can describe aspects of the agent in a standard framework.
|
Jan 05, 2018 |
Artificial Intelligence, a Podcast Approach
33:17
https://traffic.libsyn.com/secure/dataskeptic/artificial-intelligence-a-podcast-approach.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/e/e/6/7/ee67d193b38eeaeb/DS-PodcastCover-R2.png
11137623
b54ac85f637424d387752924e339eb63
This episode kicks off the next theme on Data Skeptic: artificial intelligence. Kyle discusses what's to come for the show in 2018, why this topic is relevant, and how we intend to cover it.
|
Dec 29, 2017 |
Holiday reading 2017
12:38
https://traffic.libsyn.com/secure/dataskeptic/the-tale-of-the-omega-team.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137624
10958d67f794eb06f096302dd694bfb5
We break format from our regular programming today and bring you an excerpt from Max Tegmark's book "Life 3.0". The first chapter is a short story titled "The Tale of the Omega Team". Audio excerpted courtesy of Penguin Random House Audio from LIFE 3.0 by Max Tegmark, narrated by Rob Shapiro. You can find "Life 3.0" at your favorite bookstore and the audio edition via penguinrandomhouseaudio.com. Kyle will be giving a talk at the Monterey County SkeptiCamp 2018.
|
Dec 22, 2017 |
Complexity and Cryptography
35:53
https://traffic.libsyn.com/secure/dataskeptic/complexity-and-cryptography.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137625
e588fbb711e9ac762dc5a1b417e6db50
This week, our host Kyle Polich is joined by guest Tim Henderson from Google to talk about the computational complexity foundations of modern cryptography and the complexity issues that underlie the field. A key question that arises during the discussion is whether we should trust the security of modern cryptography.
|
Dec 15, 2017 |
Mercedes Benz Machine Learning Research
27:05
https://traffic.libsyn.com/secure/dataskeptic/mercedes-benz-machine-learning-research.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137626
ce64cddf648716933e896d10d9fe945c
This episode features an interview with Rigel Smiroldo recorded at NIPS 2017 in Long Beach California. We discuss data privacy, machine learning use cases, model deployment, and end-to-end machine learning.
|
Dec 14, 2017 |
[MINI] Parallel Algorithms
20:37
https://traffic.libsyn.com/secure/dataskeptic/parallel-algorithms.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137627
f76f852abccc089708ac971df3f61bba
When computers became commodity hardware and storage became incredibly cheap, we entered the era of so-called "big" data. Most definitions of big data will include something about not being able to process all the data on a single machine. Distributed computing is required for such large datasets. Getting an algorithm to run on data spread out over a variety of different machines introduced new challenges for designing large-scale systems. First, there are concerns about the best strategy for spreading that data over many machines in an orderly fashion. Resolving ambiguity or disagreements across sources is sometimes required. This episode discusses how such algorithms relate to the complexity class NC.
|
Dec 08, 2017 |
Quantum Computing
47:49
https://traffic.libsyn.com/secure/dataskeptic/quantum-computing.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137628
b0521827c83d7164dca141a61fabea88
In this week's episode, Scott Aaronson, a professor at the University of Texas at Austin, explains what a quantum computer is, various possible applications, the types of problems they are good at solving and much more. Kyle and Scott have a lively discussion about the capabilities and limits of quantum computers and computational complexity.
|
Dec 01, 2017 |
Azure Databricks
28:27
https://traffic.libsyn.com/secure/dataskeptic/azure-databricks.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137629
1e1711efd17f67e6cd336a1676647fee
I sat down with Ali Ghodsi, CEO and founder of Databricks, and John Chirapurath, GM for Data Platform Marketing at Microsoft, to talk about the recent announcement of Azure Databricks. When I heard about the announcement, my first thoughts were two-fold. First, the possibility of optimized integrations with existing Azure services. This would be a big benefit to heavy Azure users who also want to use Spark. Second, the benefits of active directory to control Databricks access for large enterprises. Hear Ali and JG's thoughts and comments on what makes Azure Databricks a novel offering.
|
Nov 28, 2017 |
[MINI] Exponential Time Algorithms
15:55
https://traffic.libsyn.com/secure/dataskeptic/exp-time.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137630
c258f04f2cfef659ddf3c5f039094452
In this episode we discuss the complexity class EXP-Time, which contains problems that require $O(2^{p(n)})$ time to solve. In other words, the worst case runtime is exponential in some polynomial of the input size. Problems in this class are even more difficult than problems in NP, since you can't even verify a solution in polynomial time. We mostly discuss Generalized Chess as an intuitive example of a problem in EXP-Time. Another well-known problem is determining if a given algorithm will halt in k steps. That extra condition of restricting it to k steps makes this problem distinct from Turing's original definition of the halting problem, which is known to be intractable.
|
Nov 24, 2017 |
P vs NP
38:48
https://traffic.libsyn.com/secure/dataskeptic/p-vs-np.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137631
10bf39fd4c7b729a2fa6ca9f4b6757bb
In this week's episode, host Kyle Polich interviews author Lance Fortnow about whether P will ever be equal to NP and solve all of life’s problems. Fortnow begins the discussion with the example question: Are there 100 people on Facebook who are all friends with each other? Even if you were an employee of Facebook and had access to all its data, answering this question naively would require checking more possibilities than any computer, now or in the future, could possibly do. The P/NP question asks whether there exists a more clever and faster algorithm that can answer this problem and others like it.
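A quick back-of-the-envelope check (assuming roughly 2 billion users, a hypothetical figure, and Python 3.8+ for math.comb) shows why the naive approach is hopeless:

import math

n = 2_000_000_000  # hypothetical user count
groups = math.comb(n, 100)  # number of distinct 100-person groups
print(f"about 10^{len(str(groups)) - 1} candidate groups")

The count works out to roughly $10^{772}$ candidate groups, far beyond what any conceivable computer could enumerate one by one.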
|
Nov 17, 2017 |
[MINI] Sudoku \in NP
18:29
https://traffic.libsyn.com/secure/dataskeptic/sudoku-in-np.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137632
a9bb14676f332d625ced474b2a8333e4
Algorithms with similar runtimes are said to be in the same complexity class. That runtime is measured in how many steps an algorithm takes relative to the input size. The class P contains all algorithms which run in polynomial time (basically, a nested for loop iterating over the input). NP contains algorithms which seem to require brute force. Brute force search cannot be done in polynomial time, so it seems that problems in NP are more difficult than problems in P. I say it "seems" this way because, while most people believe it to be true, it has not been proven. This is the famous P vs. NP conjecture. It will be discussed in more detail in a future episode. Given a solution to a particular problem, if it can be verified/checked in polynomial time, that problem might be in NP. If someone hands you a completed Sudoku puzzle, it's not difficult to see if they made any mistakes. The effort of developing the solution to the Sudoku game seems to be intrinsically more difficult. In fact, as far as anyone knows, in the general case of all possible examples of the game, it seems no strategy can do better on average than just random guessing. This notion of randomly guessing the solution is where the N in NP comes from: Non-deterministic. Imagine a machine with a random input already written in its memory. Given enough such machines, one of them will have the right answer. If they all ran in parallel, one of them could verify its input in polynomial time. This guess / provided input is often called a witness string. NP is an important concept for many reasons. To me, the most important reason to know about NP is a practical one. Depending on your goals or the goals of your employer, there are many challenging problems you may attempt to solve. If a problem you are trying to solve happens to be in NP, then you should consider the implications very carefully. Perhaps you'll be lucky and discover that your particular instance of the problem is easy. Sudoku is pretty easy if only 2 remaining squares need to be filled in. The traveling salesman problem is easy to solve if you live in a country where all roads form a ring with exactly one road in and out. If the problem you wish to solve is not trivial, or if you will face many instances of the problem and expect some will not be trivial, then it's unlikely you'll be able to find the exact solution. Sure, maybe you can grab a bunch of commodity servers and try to scale the heck out of your attempt. Depending on the problem you're solving, that might just work. If you can out-purchase your problem in computing power, then problems in NP will surrender to you. But if your input size ever grows, it's unlikely you'll be able to keep up. If your problem is intractable in this way, all is not lost. You might be able to find an approximate solution to your problem. Good enough is better than no solution at all, right? Most of the time, probably. However, some tremendous work has also been done studying topics like this. Are there problems which are not even approximable in polynomial time? What approximation techniques work best? Alas, those answers lie elsewhere. This episode avoids a discussion of a few key points in order to keep the material accessible. If you find this interesting, you should next familiarize yourself with the notions of NP-Complete, NP-Hard, and co-NP. These are topics we won't necessarily get to in future episodes. Michael Sipser's Introduction to the Theory of Computation is a good resource.
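The asymmetry between solving and verifying is easy to demonstrate. Here is a polynomial-time Sudoku verifier sketch in Python (it checks just 81 cells), whereas writing a general fast solver is another matter entirely:

def is_valid_sudoku(grid):
    # grid is a 9x9 list of lists of ints; every row, column, and 3x3 box
    # must be a permutation of 1..9.
    def ok(cells):
        return sorted(cells) == list(range(1, 10))
    rows = grid
    cols = [[grid[r][c] for r in range(9)] for c in range(9)]
    boxes = [[grid[r][c]
              for r in range(br, br + 3) for c in range(bc, bc + 3)]
             for br in range(0, 9, 3) for bc in range(0, 9, 3)]
    return all(ok(group) for group in rows + cols + boxes)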
|
Nov 10, 2017 |
The Computational Complexity of Machine Learning
47:31
https://traffic.libsyn.com/secure/dataskeptic/the-computational-complexity-of-machine-learning.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137633
f99aaea895a6a3daa8e7d155a23df1bd
In this episode, Professor Michael Kearns from the University of Pennsylvania joins host Kyle Polich to talk about the computational complexity of machine learning, complexity in game theory, and algorithmic fairness. Michael's doctoral thesis gave an early broad overview of computational learning theory, in which he emphasizes the mathematical study of efficient learning algorithms by machines or computational systems. When we look at machine learning algorithms, they are almost like meta-algorithms in some sense. For example, given a machine learning algorithm, it will look at some data and build some model, and it’s going to behave presumably very differently under different inputs. But does that mean we need new analytical tools? Or is a machine learning algorithm just the same thing as any deterministic algorithm, but just a little bit more tricky to figure out anything complexity-wise? In other words, is there some overlap between the good old-fashioned analysis of algorithms and the analysis of machine learning algorithms from a complexity viewpoint? And what is the difference between strategies for determining the complexity bounds on samples versus algorithms? A big area of machine learning (and of the analysis of learning algorithms in general) Michael and Kyle discuss is the topic known as complexity regularization. Complexity regularization asks: How should one measure the goodness of fit and the complexity of a given model? And how should one balance those two, and how can one execute that in a scalable, efficient way algorithmically? From this, Michael and Kyle discuss the broader picture of why one should care whether a learning algorithm is efficiently learnable, that is, learnable in polynomial time. Another interesting topic of discussion is the difference between sample complexity and computational complexity. An active area of research is how one should regularize models so that they balance complexity with goodness of fit on large training samples. As mentioned, a good resource for getting started with correlated equilibria is: https://www.cs.cornell.edu/courses/cs684/2004sp/feb20.pdf Thanks to our sponsors: Mendoza College of Business - Get your Masters of Science in Business Analytics from Notre Dame. brilliant.org - A fun, affordable, online learning tool. Check out their Computer Science Algorithms course.
|
Nov 03, 2017 |
[MINI] Turing Machines
13:54
https://traffic.libsyn.com/secure/dataskeptic/turing-machines.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137634
2492bf207d1ed5a1c06f79aef19e9634
TMs are a model of computation at the heart of algorithmic analysis. A Turing Machine has two components: an infinitely long piece of tape (memory) with re-writable squares and a read/write head which is programmed to change its state as it processes the input. This exceptionally simple mechanical computer can compute anything that is intuitively computable, thus says the Church-Turing Thesis. Attempts to make a "better" Turing Machine by adding things like additional tapes can make the programs easier to describe, but they can't make the "better" machine more capable. It won't be able to solve any problems the basic Turing Machine can't, even if it perhaps solves some of them faster. An important concept we didn't get to in this episode is that of a Universal Turing Machine. Without the prefix, a TM is a particular algorithm. A Universal TM is a machine that takes, as input, a description of a TM and an input to that machine, and subsequently simulates the inputted machine running on the given input. Turing Machines are a central idea in computer science. They are central to algorithmic analysis and the theory of computation.
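A minimal Turing Machine simulator sketch in Python; this particular machine simply inverts a binary string, and the transition table format is our own illustrative choice:

# Transitions map (state, symbol) -> (new state, symbol to write, movement).
def run_tm(transitions, tape, state="start", blank="_"):
    tape, head = list(tape), 0
    while state != "halt":
        symbol = tape[head] if head < len(tape) else blank
        state, write, move = transitions[(state, symbol)]
        if head < len(tape):
            tape[head] = write
        else:
            tape.append(write)
        head += 1 if move == "R" else -1
    return "".join(tape)

invert = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),  # halt on the first blank square
}
print(run_tm(invert, "10110"))  # -> 01001_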
|
Oct 27, 2017 |
The Complexity of Learning Neural Networks
38:51
https://traffic.libsyn.com/secure/dataskeptic/the-complexity-of-learning-neural-networks.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137635
0d15170f21eb741f59bab585d9643f23
Over the past several years, we have seen many success stories in machine learning brought about by deep learning techniques. While the practical success of deep learning has been phenomenal, the formal guarantees have been lacking. Our current theoretical understanding of the many techniques that are central to the ongoing big-data revolution is, at best, far from sufficient for rigorous analysis. In this episode of Data Skeptic, our host Kyle Polich welcomes guest John Wilmes, a mathematics post-doctoral researcher at Georgia Tech, to discuss the efficiency of neural network learning through complexity theory.
|
Oct 20, 2017 |
[MINI] Big Oh Analysis
18:44
https://traffic.libsyn.com/secure/dataskeptic/big-oh-analysis.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137636
467dbd2724d363ac6fc99e96d445a692
How long an algorithm takes to run depends on many factors including implementation details and hardware. However, the formal analysis of algorithms focuses on how they will perform in the worst case as the input size grows. We refer to an algorithm's runtime as its "O" which is a function of its input size "n". For example, O(n) represents a linear algorithm - one that takes roughly twice as long to run if you double the input size. In this episode, we discuss a few everyday examples of algorithmic analysis including sorting, searching a shuffled deck of cards, and verifying if a grocery list was successfully completed. Thanks to our sponsor Brilliant.org, who right now is featuring a related problem as their Brilliant Problem of the Week.
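A small sketch that makes the growth rates tangible by counting comparisons: linear search is O(n), while binary search on sorted data is O(log n):

def linear_search_steps(items, target):
    steps = 0
    for item in items:
        steps += 1
        if item == target:
            break
    return steps

def binary_search_steps(items, target):
    steps, lo, hi = 0, 0, len(items) - 1
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if items[mid] == target:
            break
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps

# Doubling to a thousandfold input barely moves the binary search count.
for n in [1_000, 1_000_000]:
    data = list(range(n))
    print(n, linear_search_steps(data, n - 1), binary_search_steps(data, n - 1))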
|
Oct 13, 2017 |
Data science tools and other announcements from Ignite
31:40
https://traffic.libsyn.com/secure/dataskeptic/data-science-tools-and-other-announcements-from-ignite.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137637
9ebd5faa74a2f2056a4aefa3f50f50a9
In this episode, Microsoft's Corporate Vice President for Cloud Artificial Intelligence, Joseph Sirosh, joins host Kyle Polich to share some of the Microsoft's latest and most exciting innovations in AI development platforms. Last month, Microsoft launched a set of three powerful new capabilities in Azure Machine Learning for advanced developers to exploit big data, GPUs, data wrangling and container-based model deployment. Extended show notes found here. Thanks to our sponsor Springboard. Check out Springboard's Data Science Career Track Bootcamp.
|
Oct 06, 2017 |
Generative AI for Content Creation
34:33
https://traffic.libsyn.com/secure/dataskeptic/generative-ai-for-content-creation.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137638
040d5cfbdc614c7c4c11b397b0d3b836
Last year, the film development and production company End Cue produced a short film, called Sunspring, that was entirely written by an artificial intelligence using neural networks. More specifically, it was authored by a recurrent neural network (RNN) variant called long short-term memory (LSTM). According to End Cue’s Chief Technical Officer, Deb Ray, the company has come a long way in improving the generative AI aspect of the bot. In this episode, Deb Ray joins host Kyle Polich to discuss how generative AI models are being applied in creative processes, such as screenwriting. Their discussion also explores how data science can inform development decisions, such as financing and selecting scripts, as well as optimize the content production process.
|
Sep 29, 2017 |
[MINI] One Shot Learning
17:39
https://traffic.libsyn.com/secure/dataskeptic/one-shot-learning.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/f/1/c/3/f1c315d72955975a/oneshot.png
11137639
1f0bad8223bfb5266ca6d997519c75f2
One Shot Learning is the class of machine learning procedures that focuses on learning something from a small number of examples. This is in contrast to "traditional" machine learning, which typically requires a very large training set to build a reasonable model. In this episode, Kyle presents a coded message to Linhda, who is able to recognize that many of the newly created symbols are likely to be the same symbol, despite having extremely few examples of each. Why can the human brain recognize a new symbol with relative ease while most machine learning algorithms require large training sets? We discuss some of the reasons why, along with approaches to One Shot Learning.
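In its simplest form (a toy sketch of mine, assuming a useful feature space already exists), one-shot classification can be as little as a nearest-neighbor lookup against a single stored example per class:

```python
# A toy sketch of one-shot classification: with a single example per class,
# assign a new point to the class of its nearest stored exemplar.
import numpy as np

exemplars = {                       # one "training" example per symbol/class
    "square":   np.array([1.0, 0.0]),
    "triangle": np.array([0.0, 1.0]),
}

def classify(x):
    return min(exemplars, key=lambda c: np.linalg.norm(x - exemplars[c]))

print(classify(np.array([0.9, 0.2])))  # -> square
```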
|
Sep 22, 2017 |
Recommender Systems Live from FARCON 2017
46:09
https://traffic.libsyn.com/secure/dataskeptic/recommender-systems-live-from-farcon.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137640
b3191b998a35b7fa75bc312efc1ad382
Recommender systems play an important role in providing personalized content to online users. Yet, typical data mining techniques are not well suited for the unique challenges that recommender systems face. In this episode, host Kyle Polich joins Dr. Joseph Konstan from the University of Minnesota at a live recording at FARCON 2017 in Minneapolis to discuss recommender systems and how machine learning can create better user experiences.
|
Sep 15, 2017 |
[MINI] Long Short Term Memory
15:29
https://traffic.libsyn.com/secure/dataskeptic/long-short-term-memory.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137641
4b99863b70904bf6ecbe2c9de4a0a024
Thanks to our sponsor brilliant.org/dataskeptics A Long Short Term Memory (LSTM) is a neural unit, often used in Recurrent Neural Networks (RNNs), which attempts to provide the network with the capacity to store information for longer periods of time. An LSTM unit remembers values for either long or short time periods. The key to this ability is that it uses no activation function within its recurrent components: the stored value is not iteratively modified, so the gradient does not tend to vanish when the network is trained with backpropagation through time.
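For reference, a single step of a standard LSTM cell looks like the numpy sketch below (my own illustration with toy sizes; stacking the gate parameters into one matrix is an implementation choice, not anything specified in the episode):

```python
# One LSTM cell step, following the standard gate equations.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    # W, U, b each hold stacked parameters for the i, f, o, g gates
    z = W @ x + U @ h + b
    n = h.size
    i, f, o = sigmoid(z[:n]), sigmoid(z[n:2*n]), sigmoid(z[2*n:3*n])
    g = np.tanh(z[3*n:])
    c_new = f * c + i * g          # cell state: additive update, no squashing
    h_new = o * np.tanh(c_new)     # hidden state exposed to the next layer
    return h_new, c_new

d, n = 3, 4                        # input size, hidden size (toy values)
rng = np.random.default_rng(0)
W, U, b = rng.normal(size=(4*n, d)), rng.normal(size=(4*n, n)), np.zeros(4*n)
h, c = np.zeros(n), np.zeros(n)
h, c = lstm_step(rng.normal(size=d), h, c, W, U, b)
print(h)
```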
|
Sep 08, 2017 |
Zillow Zestimate
37:11
https://traffic.libsyn.com/secure/dataskeptic/zillow-zestimate.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137642
ac223f62619832a857a950fdb304b9ae
Zillow is a leading real estate information and home-related marketplace. We interviewed Andrew Martin, a data science Research Manager at Zillow, to learn more about how Zillow uses data science and big data to make real estate predictions.
|
Sep 01, 2017 |
Cardiologist Level Arrhythmia Detection with CNNs
32:05
https://traffic.libsyn.com/secure/dataskeptic/cardiologist-level-arrhythmia-detection-with-cnns.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137643
acc0ef2d521bf713d53eeaaa88c5d604
|
Aug 25, 2017 |
[MINI] Recurrent Neural Networks
17:06
https://traffic.libsyn.com/secure/dataskeptic/recurrent-neural-networks.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137644
1fe08637eef0798e221be13fe85f3afd
RNNs are a class of deep learning models designed to capture sequential behavior. An RNN trains a set of weights which depend not just on new input but also on the previous state of the neural network. This directed cycle allows the training phase to find solutions which rely on the state at a previous time, thus giving the network a form of memory. RNNs have been used effectively in language analysis, translation, speech recognition, and many other tasks.
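The recurrence itself is one line; a minimal numpy sketch (toy sizes of my choosing) shows how the hidden state carries memory forward:

```python
# A minimal recurrent step: the new hidden state depends on both the fresh
# input and the previous hidden state, giving the network a form of memory.
import numpy as np

rng = np.random.default_rng(0)
d, n = 3, 4                                  # input size, hidden size
Wx, Wh, b = rng.normal(size=(n, d)), rng.normal(size=(n, n)), np.zeros(n)

h = np.zeros(n)
for x in rng.normal(size=(5, d)):            # a sequence of 5 inputs
    h = np.tanh(Wx @ x + Wh @ h + b)         # the directed cycle through h
print(h)
```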
|
Aug 18, 2017 |
Project Common Voice
31:14
https://traffic.libsyn.com/secure/dataskeptic/project-common-voice.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137645
bf0ee7e132e82a57d7e6da537d98a2d3
Thanks to our sponsor Springboard. In this week's episode, guest Andre Natal from Mozilla joins our host, Kyle Polich, to discuss a couple of exciting new developments in open source speech recognition systems, including Project Common Voice. In June 2017, Mozilla launched a new open source project, Common Voice, a novel complementary project to the TensorFlow-based DeepSpeech implementation. DeepSpeech is a deep learning-based voice recognition system that was designed by Baidu, which they describe in greater detail in their research paper. DeepSpeech is a speech-to-text engine, and Mozilla hopes that, in the future, they can use Common Voice data to train their DeepSpeech engine.
|
Aug 11, 2017 |
[MINI] Bayesian Belief Networks
17:03
https://traffic.libsyn.com/secure/dataskeptic/bayesian-belief-networks.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137646
fa10ba69d77370ec1dbef67d5ab87429
A Bayesian Belief Network is a directed acyclic graph composed of nodes that represent random variables and edges that imply a conditional dependence between them. It's an intuitive way of encoding your statistical knowledge about a system, and it allows belief updates to be propagated efficiently through the network when new information is added.
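As a toy illustration (my own, with the classic made-up conditional probability tables), the rain/sprinkler/wet-grass network can be queried by brute-force enumeration over the factored joint distribution:

```python
# A toy Bayesian belief network (Rain -> Sprinkler, Rain & Sprinkler -> Wet)
# queried by enumerating the factored joint distribution.
from itertools import product

P_rain = {True: 0.2, False: 0.8}                       # assumed CPT values
P_sprinkler = {True: {True: 0.01, False: 0.4},         # P(S=s | R=r) as [s][r]
               False: {True: 0.99, False: 0.6}}
P_wet = {(True, True): 0.99, (True, False): 0.9,       # P(W=True | S, R)
         (False, True): 0.8, (False, False): 0.0}

def joint(r, s, w):
    pw = P_wet[(s, r)] if w else 1 - P_wet[(s, r)]
    return P_rain[r] * P_sprinkler[s][r] * pw

# P(Rain=True | Wet=True) via enumeration
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
print(num / den)   # ~0.36 with these CPTs
```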
|
Aug 04, 2017 |
pix2code
26:59
https://traffic.libsyn.com/secure/dataskeptic/pix2code.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137647
403faa08719a675b204bf943a23608bb
In this episode, Tony Beltramelli of UIzard Technologies joins our host, Kyle Polich, to talk about the ideas behind his latest app that can transform graphic design into functioning code, as well as his previous work on spying with wearables.
|
Jul 28, 2017 |
[MINI] Conditional Independence
14:43
https://traffic.libsyn.com/secure/dataskeptic/conditional-independence.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137648
0f856c18fe7f2fa370b561182ba25413
In statistics, two random variables might depend on one another (for example, interest rates and new home purchases). We call this conditional dependence. An important related concept exists called conditional independence. This phrase describes situations in which two variables are independent of one another given some other variable. For example, the probability that a vendor will pay their bill on time could depend on many factors such as the company's market cap. Thus, a statistical analysis would reveal many relationships between observable details about the company and their propensity for paying on time. However, if you know that the company has filed for bankruptcy, then we might assume their chances of paying on time have dropped to near 0, and the result is now independent of all other factors in light of this new information. We discuss a few real world analogies to this idea in the context of some chance meetings on our recent trip to New York City.
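A toy numerical check of the definition (all numbers invented; the joint distribution is constructed so that payment behavior and market cap are independent given bankruptcy status):

```python
# Verify conditional independence in a made-up joint table: given bankruptcy
# status C, the payment outcome A is unrelated to market cap B.
import numpy as np

p_c = np.array([0.9, 0.1])                 # P(C): solvent, bankrupt
p_a_given_c = np.array([[0.8, 0.05],       # P(A=pays on time | C)
                        [0.2, 0.95]])      # P(A=late | C)
p_b_given_c = np.array([[0.6, 0.3],        # P(B=large cap | C)
                        [0.4, 0.7]])       # P(B=small cap | C)
# P(A, B, C) built so that A and B are independent given C
p = np.einsum('ac,bc,c->abc', p_a_given_c, p_b_given_c, p_c)

# check: P(A | B, C) equals P(A | C) for every b and c
p_a_given_bc = p / p.sum(axis=0, keepdims=True)
print(np.allclose(p_a_given_bc, p_a_given_c[:, None, :]))  # True
```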
|
Jul 21, 2017 |
Estimating Sheep Pain with Facial Recognition
27:05
https://traffic.libsyn.com/secure/dataskeptic/estimating-sheep-pain-with-facial-recognition.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137649
7bba1c747e16915c58a42d596ac04b92
Animals can't tell us when they're experiencing pain, so we have to rely on other cues to help treat their discomfort. But it is often difficult to tell how much an animal is suffering. The sheep, for instance, is the most inscrutable of animals. However, scientists have figured out a way to understand sheep facial expressions using artificial intelligence. On this week's episode, Dr. Marwa Mahmoud from the University of Cambridge joins us to discuss her recent study, "Estimating Sheep Pain Level Using Facial Action Unit Detection." Marwa and her colleagues at Cambridge's Computer Laboratory developed an automated system using machine learning algorithms to detect and assess when a sheep is in pain. We discuss some details of her work, how she became interested in studying sheep facial expression to measure pain, and her future goals for this project. If you're able to be in Minneapolis, MN on August 23rd or 24th, consider attending Farcon. Get your tickets today via https://farcon2017.eventbrite.com.
|
Jul 14, 2017 |
CosmosDB
33:33
https://traffic.libsyn.com/secure/dataskeptic/cosmosdb.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137650
e111f1ed79ee00a06d74d64fa1db35be
This episode collects interviews from my recent trip to Microsoft Build where I had the opportunity to speak with Dharma Shukla and Syam Nair about the recently announced CosmosDB. CosmosDB is a globally consistent, distributed datastore that supports all the popular persistent storage formats (relational, key/value pair, document database, and graph) under a single streamlined API. The system provides tunable consistency, allowing the user to make choices about how consistency trade-offs are managed under the hood, if a consumer wants to go beyond the selected defaults.
|
Jul 07, 2017 |
[MINI] The Vanishing Gradient
15:16
https://traffic.libsyn.com/secure/dataskeptic/the-vanishing-gradient.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137651
925fe3f63e00cf9f8c32180798c081d9
This episode discusses the vanishing gradient - a problem that arises when training deep neural networks in which nearly all the gradients are very close to zero by the time back-propagation has reached the first hidden layer. This makes learning virtually impossible without some clever trick or improved methodology to help earlier layers begin to learn.
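A back-of-the-envelope illustration (my own numbers): the sigmoid's derivative is at most 0.25, so backprop through many such layers multiplies many small factors, and the gradient reaching the first hidden layer all but disappears.

```python
# The sigmoid derivative is at most 0.25, so a product of one such factor per
# layer shrinks geometrically by the time backprop reaches the early layers.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
grad = 1.0
for _ in range(20):                      # 20 sigmoid layers
    z = rng.normal()                     # some pre-activation value
    grad *= sigmoid(z) * (1 - sigmoid(z))
print(grad)                              # vanishingly small, effectively zero
```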
|
Jun 30, 2017 |
Doctor AI
41:50
https://traffic.libsyn.com/secure/dataskeptic/doctor-ai.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137652
549fcfd57674d36606bcd12347bc95b4
When faced with medical issues, would you want to be seen by a human or a machine? In this episode, guest Edward Choi, co-author of the study titled Doctor AI: Predicting Clinical Events via Recurrent Neural Network shares his thoughts. Edward presents his team’s efforts in developing a temporal model that can learn from human doctors based on their collective knowledge, i.e. the large amount of Electronic Health Record (EHR) data.
|
Jun 23, 2017 |
[MINI] Activation Functions
14:11
https://traffic.libsyn.com/secure/dataskeptic/activation-functions.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137653
45d2e99a02f2fbd688a34bc7bc12b5d6
In a neural network, the output value of a neuron is almost always transformed in some way using a function. A trivial choice would be a linear transformation, which can only scale the data. However, other transformations, like a step function, allow for non-linear properties to be introduced. Activation functions can also help to standardize your data between layers. Some functions, such as the sigmoid, have the effect of "focusing" the area of interest in the data: extreme values are squashed close together, while values near the function's point of inflection change more quickly with respect to small changes in the input. Similarly, these functions can take any real number and map all of them to a finite range such as [0, 1], which can have many advantages for downstream calculation. In this episode, we overview the concept and discuss a few reasons why you might select one function versus another.
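A quick sketch (mine) of that squashing behavior for the sigmoid:

```python
# The sigmoid maps any real number into (0, 1): values near its inflection
# point (0) change quickly, while extreme inputs are squashed together.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for z in (-100.0, -2.0, -0.1, 0.1, 2.0, 100.0):
    print(f"{z:8.1f} -> {sigmoid(z):.6f}")
```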
|
Jun 16, 2017 |
MS Build 2017
27:37
https://traffic.libsyn.com/secure/dataskeptic/ms-build-recap.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137654
972d795e318a23066f80a367d9679ce7
This episode recaps the Microsoft Build Conference. Kyle recently attended and shares some thoughts on cloud, databases, cognitive services, and artificial intelligence. The episode includes interviews with Rohan Kumar and David Carmona.
|
Jun 09, 2017 |
[MINI] Max-pooling
12:33
https://traffic.libsyn.com/secure/dataskeptic/max-pooling.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137656
60990b8ea1febd022153c9efd182ffcb
Max-pooling is a procedure in a neural network which has several benefits. It performs dimensionality reduction by taking a collection of neurons and reducing them to a single value for future layers to receive as input. It can also prevent overfitting, since it takes a large set of inputs and admits only one value, making it harder to memorize the input. In this episode, we discuss the intuitive interpretation of max-pooling and why it's more common than mean-pooling or (theoretically) quartile-pooling.
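For reference, a tiny numpy sketch (mine) of 2x2 max-pooling over non-overlapping blocks:

```python
# 2x2 max-pooling: each non-overlapping block of four activations is
# reduced to its single largest value.
import numpy as np

def max_pool_2x2(a):
    h, w = a.shape
    return a[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

a = np.arange(16).reshape(4, 4)
print(max_pool_2x2(a))
# [[ 5  7]
#  [13 15]]
```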
|
Jun 02, 2017 |
Unsupervised Depth Perception
23:43
https://traffic.libsyn.com/secure/dataskeptic/unsupervised-depth-perception.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137657
85901d346f3b838faae8d4628904e8aa
|
May 26, 2017 |
[MINI] Convolutional Neural Networks
14:54
https://traffic.libsyn.com/secure/dataskeptic/cnns.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137659
9c283b75cfc6e34a33cbb816a4f992b8
CNNs are characterized by their use of a group of neurons typically referred to as a filter or kernel. In image recognition, this kernel is repeated over the entire image. In this way, CNNs may achieve the property of translational invariance - once trained to recognize certain things, changing the position of that thing in an image should not disrupt the CNN's ability to recognize it. In this episode, we discuss a few high-level details of this important architecture.
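As a bare-bones sketch (mine, not the episode's), sliding one small kernel across an image shows the weight sharing that gives CNNs their translational structure; note the same edge response appears in every row:

```python
# A kernel slid over an image (valid cross-correlation): the same small
# filter is reused at every position.
import numpy as np

def conv2d(img, kernel):
    kh, kw = kernel.shape
    out = np.empty((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

img = np.zeros((5, 5)); img[:, 2] = 1.0          # a vertical stripe
edge = np.array([[-1.0, 0.0, 1.0]] * 3)          # crude vertical-edge filter
print(conv2d(img, edge))                         # identical response per row
```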
|
May 19, 2017 |
Multi-Agent Diverse Generative Adversarial Networks
29:19
https://traffic.libsyn.com/secure/dataskeptic/mutli-agent-diverse-generative-adversarial-networks.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137661
7664bfaef46796f43570b879253ec3d9
Despite the success of GANs in imaging, one of their major drawbacks is the problem of 'mode collapse,' where the generator learns to produce samples with extremely low variety. To address this issue, today's guests Arnab Ghosh and Viveka Kulharia proposed two different extensions. The first involves tweaking the generator's objective function with a diversity-enforcing term that assesses similarities between the samples generated by different generators. The second modifies the discriminator's objective function, pushing generations corresponding to different generators towards different identifiable modes.
|
May 12, 2017 |
[MINI] Generative Adversarial Networks
09:51
https://traffic.libsyn.com/secure/dataskeptic/generative-adversarial-networks.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137664
2de876aa4366c52f5b068546be7e859c
GANs are an unsupervised learning method involving two neural networks iteratively competing. The discriminator is a typical learning system. It attempts to develop the ability to recognize members of a certain class, such as all photos which have birds in them. The generator attempts to create false examples which the discriminator incorrectly classifies. In successive training rounds, the networks examine each other and play a mini-max game of trying to harm the performance of the other. In addition to being a useful way of training networks in the absence of a large body of labeled data, there are additional benefits. The discriminator may end up learning more about edge cases than it otherwise would given only typical examples. Also, the generator's false images can be novel and interesting on their own. The concept was first introduced in the paper Generative Adversarial Networks.
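In symbols, the mini-max game described here is the value function from the original Generative Adversarial Networks paper:

```latex
\min_G \max_D \; \mathbb{E}_{x \sim p_{\mathrm{data}}}\left[\log D(x)\right]
  + \mathbb{E}_{z \sim p_z}\left[\log\left(1 - D(G(z))\right)\right]
```

The discriminator D pushes both expectations up; the generator G pushes the second one down by making its samples harder to distinguish from data.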
|
May 05, 2017 |
Opinion Polls for Presidential Elections
52:59
https://traffic.libsyn.com/secure/dataskeptic/polling.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137666
f38e634884eafdbe8dcc8104a32481af
Recently, we've seen opinion polls come under some skepticism. But is that skepticism truly justified? The recent Brexit referendum and the US 2016 Presidential Election are examples where some claim the polls "got it wrong". This episode explores this idea.
|
Apr 28, 2017 |
OpenHouse
26:17
https://traffic.libsyn.com/secure/dataskeptic/openhouse.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137668
8c14ac0c4803871ac94d1d56c86022ce
No reliable, complete database cataloging home sales data at a transaction level is available for the average person to access. For a data scientist interested in studying this data, our hands are completely tied. Opportunities like testing sociological theories, exploring economic impacts, studying market forces, or simply researching the value of an investment when buying a home are all blocked by the lack of easy access to this dataset. OpenHouse seeks to correct that by centralizing and standardizing all publicly available home sales transactional data. In this episode, we discuss the achievements of OpenHouse to date, and what plans exist for the future. I also encourage everyone to check out the project Zareen mentioned, which was her Harry Potter word2vec webapp, and Joy's project doing data visualization on Jawbone data. Guests Thanks again to @iamzareenf, @blueplastic, and @joytafty for coming on the show. Thanks to the numerous other volunteers who have helped with the project as well! Announcements and details Sponsor Thanks to our sponsor for this episode Periscope Data. The blog post demoing their maps option is on our blog titled Periscope Data Maps. To start a free trial of their dashboarding tool, visit http://periscopedata.com/skeptics Kyle recently did a youtube video exploring the Data Skeptic podcast download numbers using Periscope Data. Check it out at https://youtu.be/aglpJrMp0M4. Supplemental music is Lee Rosevere's Let's Start at the Beginning.
|
Apr 21, 2017 |
[MINI] GPU CPU
11:03
https://traffic.libsyn.com/secure/dataskeptic/gpu-cpu.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137670
0ce2074e71a17f3a74dbcafb20e2eb24
There's more than one type of computer processor. The central processing unit (CPU) is typically what one means when they say "processor". GPUs were introduced to be highly optimized for doing floating point computations in parallel. These types of operations were very useful for high end video games, but as it turns out, those same processors are extremely useful for machine learning. In this mini-episode we discuss why.
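A hedged sketch of the difference (this assumes PyTorch is installed and a CUDA GPU is present; otherwise only the CPU timing runs):

```python
# The same large matrix multiply dispatched to CPU and then to GPU.
import time
import torch

a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

t0 = time.time()
a @ b                                 # CPU matmul
cpu_s = time.time() - t0
print(f"CPU: {cpu_s:.3f}s")

if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()
    torch.cuda.synchronize()
    t0 = time.time()
    a_gpu @ b_gpu                     # GPU matmul (asynchronous kernel)
    torch.cuda.synchronize()          # wait for the kernel to finish
    print(f"GPU: {time.time() - t0:.3f}s")
```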
|
Apr 14, 2017 |
[MINI] Backpropagation
15:13
https://traffic.libsyn.com/secure/dataskeptic/backpropagation.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137673
7035e0098b901a6684777140f6f53e8d
Backpropagation is a common algorithm for training a neural network. It works by computing the gradient of the overall error with respect to each weight, and using stochastic gradient descent to iteratively fine-tune the weights of the network. In this episode, we compare this concept to finding a location on a map, marble maze games, and golf.
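A minimal sketch of the idea (mine, not from the episode): fit a one-hidden-layer network to a single example with hand-derived chain-rule gradients.

```python
# Backprop for one example: compute the gradient of the squared error with
# respect to each weight matrix, then take a small gradient-descent step.
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.normal(size=3), 1.0          # one training example
W1, W2 = rng.normal(size=(4, 3)), rng.normal(size=(1, 4))
lr = 0.1

for step in range(100):
    # forward pass
    h = np.tanh(W1 @ x)                 # hidden activations
    y_hat = (W2 @ h)[0]                 # linear output
    err = y_hat - y
    # backward pass (chain rule, layer by layer)
    dW2 = err * h[None, :]              # dE/dW2
    dh = err * W2[0]                    # gradient flowing into hidden layer
    dW1 = ((1 - h ** 2) * dh)[:, None] * x[None, :]   # tanh' = 1 - tanh^2
    W1 -= lr * dW1
    W2 -= lr * dW2

print((W2 @ np.tanh(W1 @ x))[0])        # ~1.0 after training
```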
|
Apr 07, 2017 |
Data Science at Patreon
32:23
https://traffic.libsyn.com/secure/dataskeptic/data-science-at-patreon.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137675
b2ee82d76b60dafc0885bcdce183f0f2
In this week's episode of Data Skeptic, host Kyle Polich talks with guest Maura Church, Patreon's data science manager. Patreon is a fast-growing crowdfunding platform that allows artists and creators of all kinds to build their own subscription content service. The platform allows fans to become patrons of their favorite artists - an idea similar to Renaissance times, when musicians would rely on benefactors to become their patrons so they could make more art. At Patreon, Maura's data science team strives to provide creators with insight, information, and tools, so that creators can focus on what they do best - making art. On the show, Maura talks about some of her projects with the data science team at Patreon. Topics discussed during the episode include: optical music recognition (OMR) to translate musical scores to electronic format, network analysis to understand the connection between creators and patrons, growth forecasting and modeling in a new market, and churn modeling to determine predictors of long-term support. A more detailed explanation of Patreon's A/B testing framework can be found here Other useful links to topics mentioned during the show: OMR research Patreon blog Patreon HQ blog Amanda Palmer Fran Meneses
|
Mar 31, 2017 |
[MINI] Feed Forward Neural Networks
15:58
https://traffic.libsyn.com/secure/dataskeptic/feed-forward-neural-networks.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137678
e9b58d49df0695e019b3a061049edaa3
In a feed forward neural network, neurons cannot form a cycle. In this episode, we explore how such a network would be able to represent three common logical operators: OR, AND, and XOR. The XOR operation is the interesting case. Below are the truth tables that describe each of these functions.

AND Truth Table
Input 1 | Input 2 | Output
0 | 0 | 0
0 | 1 | 0
1 | 0 | 0
1 | 1 | 1

OR Truth Table
Input 1 | Input 2 | Output
0 | 0 | 0
0 | 1 | 1
1 | 0 | 1
1 | 1 | 1

XOR Truth Table
Input 1 | Input 2 | Output
0 | 0 | 0
0 | 1 | 1
1 | 0 | 1
1 | 1 | 0

The AND and OR functions should seem very intuitive. Exclusive or (XOR) is true if and only if exactly one input is 1. Could a neural network learn these mathematical functions? Consider a perceptron: it computes a weighted sum of its inputs plus a bias term, then passes that sum through a step activation function, outputting 1 when w1*x1 + w2*x2 + b > 0 and 0 otherwise.

Can this perceptron learn the AND function? Sure. One choice that works is w1 = w2 = 1 and b = -1.5. What about OR? Yup. Let w1 = w2 = 1 and b = -0.5. An infinite number of possible solutions exist; these are simply values that hopefully seem intuitive. This is also a good example of why the bias term is important. Without it, the AND function could not be represented.

How about XOR? No. It is not possible to represent XOR with a single layer. It requires two layers. In a two-layer solution, the weights computed for the middle hidden node capture the essence of why this works: that node activates when receiving two positive inputs, thus contributing a heavy penalty to be summed by the output node. If a single input is 1, this node will not activate.

The universal approximation theorem tells us that any continuous function (on a bounded domain) can be tightly approximated using a neural network with only a single hidden layer and a finite number of neurons. With this in mind, a feed forward neural network might seem adequate for any application. However, in practice, other network architectures and the allowance of more hidden layers are empirically motivated. Other types of neural networks have less strict structural definitions, and the various ways one might relax the feed-forward constraint generate other classes of neural networks that often have interesting properties. We'll get into some of these in future mini-episodes.

Check out our recent blog post on how we're using Periscope Data cohort charts. Thanks to Periscope Data for sponsoring this episode. More about them at periscopedata.com/skeptics
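To see the two-layer solution concretely, here is a short sketch (my own weight choices; many others work) implementing XOR with step-function units, where the AND-detecting hidden node supplies the heavy penalty described above:

```python
# XOR from two layers: a hidden OR detector and a hidden AND detector
# feed one output unit; the AND unit vetoes the "both inputs on" case.
import numpy as np

step = lambda z: (z > 0).astype(float)

def xor(x1, x2):
    x = np.array([x1, x2], dtype=float)
    h_or  = step(x @ np.array([1.0, 1.0]) - 0.5)   # fires if any input is 1
    h_and = step(x @ np.array([1.0, 1.0]) - 1.5)   # fires only if both are 1
    return step(1.0 * h_or - 2.0 * h_and - 0.5)    # OR minus a penalty for AND

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, int(xor(a, b)))                    # 0, 1, 1, 0
```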
|
Mar 24, 2017 |
Reinventing Sponsored Search Auctions
41:31
https://traffic.libsyn.com/secure/dataskeptic/reinventing-sponsored-search-auctions.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137680
cbaa54731d19ed38eb8e302efe50555b
In this Data Skeptic episode, Kyle is joined by guest Ruggiero Cavallo to discuss his latest efforts to mitigate the problems presented in this new world of online advertising. Working with his collaborators, Ruggiero reconsiders the search ad allocation and pricing problems from the ground up and redesigns a search ad selling system. He discusses a mechanism that optimizes an entire page of ads globally based on efficiency-maximizing search allocation and a novel technical approach to computing prices.
|
Mar 17, 2017 |
[MINI] The Perceptron
14:46
https://traffic.libsyn.com/secure/dataskeptic/the-perceptron.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137682
b195b1c2c3480c1364a664d5fad2a8c0
Today's episode overviews the perceptron algorithm. This rather simple approach is characterized by a few particular features. It updates its weights after seeing every example, rather than as a batch. It uses a step function as an activation function. It's only appropriate for linearly separable data, and it will converge to a solution if the data meets these criteria. Being a fairly simple algorithm, it can run very efficiently. Although we don't discuss it in this episode, multi-layer perceptron networks are what makes this technique most attractive.
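A bare-bones sketch of the rule described above (my toy data; the AND function is linearly separable, so the updates converge):

```python
# The perceptron learning rule: update the weights after every example,
# using a step activation.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1])            # the (linearly separable) AND function
w, b, lr = np.zeros(2), 0.0, 0.1

for epoch in range(20):
    for xi, yi in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0   # step activation
        w += lr * (yi - pred) * xi          # per-example update, not batched
        b += lr * (yi - pred)

print(w, b)                                       # a separating line for AND
print([1 if xi @ w + b > 0 else 0 for xi in X])   # [0, 0, 0, 1]
```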
|
Mar 10, 2017 |
The Data Refuge Project
24:35
https://traffic.libsyn.com/secure/dataskeptic/data-refuge.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137684
7644939b2c49e37607bf22bb3ae67e1c
DataRefuge is a public collaborative, grassroots effort around the United States in which scientists, researchers, computer scientists, librarians and other volunteers are working to download, save, and re-upload government data. The DataRefuge Project, which is led by the UPenn Program in Environmental Humanities and the Penn Libraries group at University of Pennsylvania, aims to foster resilience in an era of anthropogenic global climate change and raise awareness of how social and political events affect transparency.
|
Mar 03, 2017 |
[MINI] Automated Feature Engineering
16:14
https://traffic.libsyn.com/secure/dataskeptic/automated-feature-engineering.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137687
40eec155114a9a95ecb2b1b0650b0f30
If a CEO wants to know the state of their business, they ask their highest-ranking executives. These executives, in turn, should know the state of the business through reports from their subordinates. This structure is roughly analogous to a process observed in deep learning, where each layer of the business reports up different types of observations, KPIs, and reports to be interpreted by the next layer of the business. In deep learning, this process can be thought of as automated feature engineering. DNNs built to recognize objects in images may learn structures that behave like edge detectors in the first hidden layer. Subsequent layers learn to compose more abstract features from lower-level outputs. This episode explores that analogy in the context of automated feature engineering. Linh Da and Kyle discuss a particular image in this episode. The image included below in the show notes is drawn from the work of Lee, Grosse, Ranganath, and Ng in their paper Convolutional Deep Belief Networks for Scalable Unsupervised Learning of Hierarchical Representations.
|
Feb 24, 2017 |
Big Data Tools and Trends
30:45
https://traffic.libsyn.com/secure/dataskeptic/big-data-tools-and-trends.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137689
8f5e0a32abc06a1c9ad5e487eba5c9a1
In this episode, I speak with Raghu Ramakrishnan, CTO for Data at Microsoft. We discuss services, tools, and developments in the big data sphere as well as the underlying needs that drove these innovations.
|
Feb 17, 2017 |
[MINI] Primer on Deep Learning
14:28
https://traffic.libsyn.com/secure/dataskeptic/deep-learning-primer.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/c/6/c/9/c6c96779218f4fb3/deep-learning.png
11137691
100ef9156253a4187e6b36e26fd9f1c9
In this episode, we talk about a high-level description of deep learning. Kyle presents a simple game (pictured below), which is more of a puzzle really, to try and give Linh Da the basic concept.    Thanks to our sponsor for this week, the Data Science Association. Please check out their upcoming Dallas conference at dallasdatascience.eventbrite.com
|
Feb 10, 2017 |
Data Provenance and Reproducibility with Pachyderm
40:11
https://traffic.libsyn.com/secure/dataskeptic/data-provenance-and-reproducibility-with-pachyderm.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137693
5e8633ce1fa910a54783bf437c92837b
Versioning isn't just for source code. Being able to track changes to data is critical for answering questions about data provenance, quality, and reproducibility. Daniel Whitenack joins me this week to talk about these concepts and share his work on Pachyderm. Pachyderm is an open source containerized data lake. During the show, Daniel mentioned the Gopher Data Science github repo as a great resource for any data scientists interested in the Go language. Although we didn't mention it, Daniel also did an interesting analysis on the 2016 world chess championship that complements our recent episode on chess well. You can find that post here. Supplemental music is Lee Rosevere's Let's Start at the Beginning. Thanks to Periscope Data for sponsoring this episode. More about them at periscopedata.com/skeptics
|
Feb 03, 2017 |
[MINI] Logistic Regression on Audio Data
20:48
https://traffic.libsyn.com/secure/dataskeptic/logistic-regression.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/e/4/1/c/e41c79a002244ac5/logistic-regression.png
11137695
c67d846cac416984857f7ec4879a52ec
Logistic Regression is a popular classification algorithm. In this episode, we discuss how it can be used to determine if an audio clip represents one of two given speakers. It assumes the log-odds of the output variable (isLinhda) are a linear combination of the available features, which are spectral bands in the discussion on this episode. Keep an eye on the dataskeptic.com blog this week as we post more details about this project. Thanks to our sponsor this week, the Data Science Association. Please check out their upcoming conference in Dallas on Saturday, February 18th, 2017 via the link below. dallasdatascience.eventbrite.com
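A hedged sketch of this kind of model (the "spectral band" features and speaker labels below are synthetic stand-ins, not the episode's dataset):

```python
# Classify which of two speakers a clip belongs to from band energies.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
speaker = rng.integers(0, 2, size=n)              # 1 = isLinhda (assumed label)
# synthetic band features whose means shift with the speaker
bands = rng.normal(size=(n, 4)) + speaker[:, None] * [0.0, 0.5, 1.0, -0.5]

model = LogisticRegression().fit(bands, speaker)
print(model.score(bands, speaker))                # training accuracy
print(model.coef_)                                # one weight per spectral band
```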
|
Jan 27, 2017 |
Studying Competition and Gender Through Chess
34:27
https://traffic.libsyn.com/secure/dataskeptic/studying-competition-and-gender-through-chess.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137697
4e3afa9993014f76e3edcece9a24e1e3
Prior work has shown that people's response to competition is in part predicted by their gender. Understanding why and when this occurs is important in areas such as labor market outcomes. A well structured study is challenging due to numerous confounding factors. Peter Backus and his colleagues have identified competitive chess as an ideal arena to study the topic. Find out why and what conclusions they reached. Our discussion centers around Gender, Competition and Performance: Evidence from Real Tournaments from Backus, Cubel, Guid, Sanchez-Pages, and Mañas. A summary of their paper can also be found here.
|
Jan 20, 2017 |
[MINI] Dropout
15:55
https://traffic.libsyn.com/secure/dataskeptic/dropout.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/8/6/9/9/86993898aae8f837/dropout.png
11137699
cb8f8998c0faa03ea1e8c351240f6739
Deep learning can be prone to overfit a given problem. This is especially frustrating given how much time and computational resources are often required to converge. One technique for fighting overfitting is to use dropout. Dropout is the method of randomly selecting some neurons in one's network to set to zero during iterations of learning. The core idea is that each particular input in a given layer is not always available and therefore not a signal that can be relied on too heavily.
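The mechanics are tiny; here is a hedged numpy sketch of (inverted) dropout:

```python
# Inverted dropout: randomly zero a fraction of activations during training
# and rescale the survivors so the expected activation is unchanged.
import numpy as np

rng = np.random.default_rng(0)

def dropout(activations, p_drop=0.5, training=True):
    if not training:
        return activations            # at test time, use every neuron
    mask = rng.random(activations.shape) >= p_drop
    return activations * mask / (1.0 - p_drop)

h = np.ones(8)
print(dropout(h))                     # roughly half the units are zeroed
```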
|
Jan 13, 2017 |
The Police Data and the Data Driven Justice Initiatives
49:17
https://traffic.libsyn.com/secure/dataskeptic/police-data-initiative-and-data-driven-justice-initiative.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137701
88e45564a5260ec29f9fa8c3547f5823
In this episode I speak with Clarence Wardell and Kelly Jin about their mutual service as part of the White House's Police Data Initiative and Data Driven Justice Initiative respectively. The Police Data Initiative was organized to use open data to increase transparency and community trust as well as to help police agencies use data for internal accountability. The PDI emerged from recommendations made by the Task Force on 21st Century Policing. The Data Driven Justice Initiative was organized to help city, county, and state governments use data-driven strategies to help low-level offenders with mental illness get directed to the right services rather than into the criminal justice system.
|
Jan 06, 2017 |
The Library Problem
35:23
https://traffic.libsyn.com/secure/dataskeptic/the-library-problem.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/5/5/e/1/55e1f6fe84672bab/DataSkeptic-Podcast-2.jpg
11137703
eed23eede4ecab3048a354ff5dde1b42
We close out 2016 with a discussion of a basic interview question which might get asked when applying for a data science job. Specifically, how a library might build a model to predict if a book will be returned late or not.
|
Dec 30, 2016 |
2016 Holiday Special
39:33
https://traffic.libsyn.com/secure/dataskeptic/2016-holiday-special.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/e/a/c/b/eacb304a43d66710/DataSkeptic-Podcast-2.jpg
11137704
24d5c0d2c4137affaf20a67bfe52b2d3
Today's episode is a reading of Isaac Asimov's Franchise. As mentioned on the show, this is just a work of fiction to be enjoyed and not in any way some obfuscated political statement. Enjoy, and happy holidays!
|
Dec 23, 2016 |
[MINI] Entropy
16:36
https://traffic.libsyn.com/secure/dataskeptic/entropy.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/1/6/d/c/16dcea2d0917874a/entropy.png
11137706
c1fc72f87f38df6a95a6421e440345e5
Classically, entropy is a measure of disorder in a system. From a statistical perspective, it is more useful to say it's a measure of the unpredictability of the system. In this episode we discuss how information reduces the entropy in deciding whether or not Yoshi the parrot will like a new chew toy. A few other everyday examples help us examine why entropy is a nice metric for constructing a decision tree.
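For the curious, the metric itself is nearly a one-liner (my sketch; the probabilities are made up):

```python
# Shannon entropy of a discrete distribution: maximal when outcomes are
# equally likely (most unpredictable), zero when one outcome is certain.
import numpy as np

def entropy(probs):
    probs = np.asarray(probs, dtype=float)
    probs = probs[probs > 0]                   # 0 * log(0) is taken as 0
    return -np.sum(probs * np.log2(probs))

print(entropy([0.5, 0.5]))    # 1.0 bit   (a coin flip: will Yoshi like it?)
print(entropy([0.9, 0.1]))    # ~0.47 bits (much more predictable)
print(entropy([1.0]))         # 0.0 bits   (no uncertainty at all)
```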
|
Dec 16, 2016 |
MS Connect Conference
42:23
https://traffic.libsyn.com/secure/dataskeptic/ms-connect-conference.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137707
cca3a442082e2750b5f58621ab9eda42
Cloud services are now ubiquitous in data science and more broadly in technology as well. This week, I speak to Mark Souza, Tobias Ternström, and Corey Sanders about various aspects of data at scale. We discuss the embedding of R into SQLServer, SQLServer on linux, open source, and a few other cloud topics.
|
Dec 09, 2016 |
Causal Impact
34:13
https://traffic.libsyn.com/secure/dataskeptic/causal-impact.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/5/c/a/a/5caa43fbfc586330/DataSkeptic-Podcast-1.jpg
11137709
ff039a20e947c197802d0b80ac2d9d3b
Today's episode is all about Causal Impact, a technique for estimating the impact of a particular event on a time series. We talk to William Martin about his research into the impact releases have on apps, and we also chat with Karen Blakemore about a project she helped us build to explore the impact of a Saturday Night Live appearance on a musician's career. Martin's work culminated in a paper Causal Impact for App Store Analysis. A shorter summary version can be found here. His company helping app developers do this sort of analysis can be found at crestweb.cs.ucl.ac.uk/appredict/.
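The core idea can be sketched in a few lines (my own toy example; real implementations such as Google's CausalImpact use Bayesian structural time series, not a straight line):

```python
# Fit a model on the pre-event period, forecast the counterfactual
# "no event" series, and read the impact as the observed-minus-forecast gap.
import numpy as np

t = np.arange(100)
event = 70
series = 10 + 0.5 * t + np.random.default_rng(0).normal(0, 1, 100)
series[event:] += 8                         # the event adds a level shift

slope, intercept = np.polyfit(t[:event], series[:event], 1)
counterfactual = intercept + slope * t
impact = series[event:] - counterfactual[event:]
print(impact.mean())                        # ~8, the injected effect
```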
|
Dec 02, 2016 |
[MINI] The Bootstrap
10:37
https://traffic.libsyn.com/secure/dataskeptic/the-bootstrap.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/f/f/4/d/ff4d067905344cae/the-bootstrap.png
11137711
fd9c484eff435ece4da87c3fc3acebe1
The Bootstrap is a method of resampling a dataset (with replacement) to estimate the uncertainty of statistics computed from it. The bootstrap is a useful statistical technique and is leveraged in Bagging (bootstrap aggregation) algorithms such as Random Forest. We discuss this technique as it relates to polling and surveys.
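A minimal sketch on synthetic data (mine, not the episode's):

```python
# The bootstrap: resample the data with replacement many times and use the
# spread of the recomputed statistic as its uncertainty.
import numpy as np

rng = np.random.default_rng(0)
sample = rng.normal(loc=50, scale=10, size=200)     # e.g., poll responses

boot_means = [rng.choice(sample, size=sample.size, replace=True).mean()
              for _ in range(5000)]
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"mean {sample.mean():.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```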
|
Nov 25, 2016 |
[MINI] Gini Coefficients
15:59
https://traffic.libsyn.com/secure/dataskeptic/gini-index.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/8/6/a/9/86a9dd3dbc9859ba/gini-coefficient.png
11137713
c013763cf30d08616f93301e96eec9b6
The Gini Coefficient (as it relates to decision trees) is one approach to determining the optimal split to introduce in a decision tree. To pick the right feature to split on, it considers the frequency of the values of that feature and how well those values correlate with the specific outcomes that you are trying to predict.
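In code, the impurity calculation looks like this (my sketch, with made-up labels):

```python
# Gini impurity of a candidate split: weight each child node's impurity by
# its share of the data; lower is a better split.
import numpy as np

def gini(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def split_gini(left, right):
    n = len(left) + len(right)
    return len(left) / n * gini(left) + len(right) / n * gini(right)

# a clean split beats a mixed one
print(split_gini([0, 0, 0], [1, 1, 1]))   # 0.0
print(split_gini([0, 1, 0], [1, 0, 1]))   # ~0.44
```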
|
Nov 18, 2016 |
Unstructured Data for Finance
33:31
https://traffic.libsyn.com/secure/dataskeptic/unstructured-data-for-finance.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137715
b9ee4cce0960b540f9fe24efe87f9102
Financial analysis techniques for studying numeric, well-structured data are very mature. While using unstructured data in finance is not necessarily a new idea, the area is still very greenfield. On this episode, Delia Rusu shares her thoughts on the potential of unstructured data and discusses her work analyzing Wikipedia to help inform financial decisions. Delia's talk at PyData Berlin can be watched on Youtube (Estimating stock price correlations using Wikipedia). The slides can be found here and all related code is available on github.
|
Nov 11, 2016 |
[MINI] AdaBoost
10:39
https://traffic.libsyn.com/secure/dataskeptic/adaboost.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137716
5e31de554c584543938b69d48d4aab9d
AdaBoost is a canonical example of the class of AnyBoost algorithms that create ensembles of weak learners. We discuss how a complex problem like predicting restaurant failure (which is surely caused by different problems in different situations) might benefit from this technique.
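The bookkeeping at the heart of AdaBoost fits in a few lines (a toy sketch with made-up predictions):

```python
# After each weak learner: compute its weighted error, give it a vote
# proportional to its accuracy, and up-weight the examples it missed so the
# next learner focuses on them.
import numpy as np

y = np.array([1, 1, -1, -1, 1])        # true labels in {-1, +1}
pred = np.array([1, -1, -1, -1, 1])    # one weak learner's predictions
w = np.full(5, 1 / 5)                  # current example weights

err = w[pred != y].sum()               # weighted error rate (0.2 here)
alpha = 0.5 * np.log((1 - err) / err)  # the learner's vote in the ensemble
w = w * np.exp(-alpha * y * pred)      # mistakes grow, correct ones shrink
w /= w.sum()
print(alpha, w)                        # the misclassified example now dominates
```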
|
Nov 04, 2016 |
Stealing Models from the Cloud
37:06
https://traffic.libsyn.com/secure/dataskeptic/stealing-models-from-the-cloud.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137728
82cf6463a8be608acb741d9bae5e764a
Platform as a service is a growing trend in data science where services like fraud analysis and face detection can be provided via APIs. Such services turn the actual model into a black box to the consumer. But can the model be reverse engineered? Florian Tramèr shares his work in this episode showing that it can. The paper Stealing Machine Learning Models via Prediction APIs is definitely worth your time to read if you enjoy this episode. Related source code can be found in https://github.com/ftramer/Steal-ML.
|
Oct 28, 2016 |
[MINI] Calculating Feature Importance
13:04
https://traffic.libsyn.com/secure/dataskeptic/feature-importance.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/2/5/8/7/25875a6fd51d9d6b/feature-importance.png
11137733
29bab2dac5a5a326f4b56ef41197d6c8
For machine learning models created with the random forest algorithm, there is no obvious diagnostic to inform you which features are more important in the output of the model. Some straightforward but useful techniques exist revolving around removing a feature and measuring the decrease in accuracy or Gini values in the leaves. We broadly discuss these techniques in this episode.
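One of the techniques alluded to, permutation importance, can be sketched quickly (my own example on synthetic data):

```python
# Permutation importance: shuffle one feature's column and measure how much
# the model's accuracy drops; bigger drops indicate more important features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=5, n_informative=2,
                           random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)
base = model.score(X, y)

rng = np.random.default_rng(0)
for j in range(X.shape[1]):
    Xp = X.copy()
    rng.shuffle(Xp[:, j])                       # break feature j's signal
    print(f"feature {j}: accuracy drop {base - model.score(Xp, y):.3f}")
```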
|
Oct 21, 2016 |
NYC Bike Share Rebalancing
29:39
https://traffic.libsyn.com/secure/dataskeptic/nyc-bike-share-rebalancing.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137735
48b90ead6ea5e176ef1d5b5cd1d02448
As cities provide bike sharing services, they must also plan for how to redistribute bicycles as they inevitably build up at more popular destination stations. In this episode, Hui Xiong talks about the solution he and his colleagues developed to rebalance bike sharing systems.
|
Oct 14, 2016 |
[MINI] Random Forest
12:43
https://traffic.libsyn.com/secure/dataskeptic/random-forest.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/3/c/c/b/3ccb1c923fb27c46/random-forest.png
11137737
ce3e55c55ad1a54b47ff794db1d5e7e0
Random forest is a popular ensemble learning algorithm which leverages bagging both for sampling and feature selection. In this episode we make an analogy to the process of running a bookstore.
|
Oct 07, 2016 |
Election Predictions
21:44
https://traffic.libsyn.com/secure/dataskeptic/asa-election-prediction-with-jo-hardin.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137740
242e1b4a1a4e933363112fdc7c5663b7
Jo Hardin joins us this week to discuss the ASA's Election Prediction Contest. This is a competition aimed at forecasting the results of the upcoming US presidential election. More details are available in Jo's blog post found here. You can find some useful R code for getting started automatically gathering data from 538 via Jo's github and official contest details are available here. During the interview we also mention Daily Kos and 538.
|
Sep 30, 2016 |
[MINI] F1 Score
09:01
https://traffic.libsyn.com/secure/dataskeptic/f1-score.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/a/d/7/1/ad715406fedd7928/f1-score.png
11137742
466cfc619c02e416ba7c3c7221ccbad3
The F1 score is a model diagnostic that combines precision and recall to provide a singular evaluation for model comparison. In this episode we discuss how it applies to selecting an interior designer.
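The computation itself is tiny (the counts below are made up for illustration):

```python
# Precision, recall, and their harmonic mean, the F1 score.
def f1(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# e.g., a shortlist with 8 good picks, 2 duds, and 4 good options missed
print(f1(tp=8, fp=2, fn=4))   # ~0.73
```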
|
Sep 23, 2016 |
Urban Congestion
35:19
https://traffic.libsyn.com/secure/dataskeptic/urban-congestion.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137744
cd0c76518959006a713b7295dc0bf22f
Urban congestion affects every person living in a city of any reasonable size. Lewis Lehe joins us in this episode to share his work on downtown congestion pricing. We explore how different pricing mechanisms affect congestion, as well as how data visualization can inform choices. You can find examples of Lewis's work at setosa.io. His paper which we discussed during the interview is Distance-dependent congestion pricing for downtown zones. On this episode, we discuss State of California data which can be found at pems.dot.ca.gov.
|
Sep 16, 2016 |
[MINI] Heteroskedasticity
08:57
https://traffic.libsyn.com/secure/dataskeptic/heteroskedasticity.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/7/6/5/3/7653f17b473c748b/heteroskedasticity.png
11137746
3e131404de03f5bfd862747ee5399225
Heteroskedasticity is a term used to describe a relationship between two variables which has unequal variance over the range. For example, the variance in the length of a cat's tail almost certainly changes (grows) with age. On the other hand, the average amount of chewing gum a person consumes probably has a consistent variance over a wide range of human heights. We also discuss some issues with the visualization shown in the tweet embedded below.
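Simulating the cat-tail example makes the pattern easy to see (my own made-up numbers):

```python
# Heteroskedasticity: the residual spread in tail length grows with age.
import numpy as np

rng = np.random.default_rng(0)
age = rng.uniform(0, 10, size=2000)
noise = rng.normal(0, 0.2 + 0.4 * age)          # noise scale grows with age
tail = 2 + 1.5 * age + noise

residual = tail - (2 + 1.5 * age)
print(residual[age < 2].std())    # small spread among young cats
print(residual[age > 8].std())    # much larger spread among old cats
```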
|
Sep 09, 2016 |
Music21
34:38
https://traffic.libsyn.com/secure/dataskeptic/music21.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137748
5241e584f5a4a4c73e07544b5a9c2877
Our guest today is Michael Cuthbert, an associate professor of music at MIT and principal investigator of the Music21 project, which we focus our discussion on today. Music21 is a python library making analysis of music accessible and fun. It supports integration with popular formats such as MIDI, MusicXML, Lilypond, and others. It's also well integrated with The Elvis Project, enabling users to import large volumes of music for easy analysis. Music21 is a great platform for musicologists and machine learning researchers alike to explore patterns and structure in music.
|
Sep 02, 2016 |
[MINI] Paxos
14:43
https://traffic.libsyn.com/secure/dataskeptic/paxos.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/5/b/e/05beaba7b0eff0d4/paxos.png
11137750
6a96b9107decc25b0a933e8b2354c7a4
Paxos is a protocol for arriving at consensus in a distributed computing system which accounts for the unreliability of the nodes. We discuss how this might be used in the real world in the event of a massive disaster.
|
Aug 26, 2016 |
Trusting Machine Learning Models with LIME
35:16
https://traffic.libsyn.com/secure/dataskeptic/trust-in-ml.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137752
0236345dfaf9fd5028bd0968faab7cd3
Machine learning models are often criticized for being black boxes. If a human cannot determine why the model arrives at the decision it made, there's good cause for skepticism. Classic inspection approaches to model interpretability are only useful for simple models, which are likely to only cover simple problems. The LIME project seeks to help us trust machine learning models. At a high level, it takes advantage of local fidelity: for a given example, a separate model trained on neighbors of that example is likely to expose which features in the local input space explain why the model arrives at its conclusion. In this episode, Marco Tulio Ribeiro joins us to discuss how LIME (Local Interpretable Model-Agnostic Explanations) can help users trust machine learning models. The accompanying paper is titled "Why Should I Trust You?": Explaining the Predictions of Any Classifier.
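The local-fidelity idea can be sketched in miniature (my own toy example; the real LIME package does much more, this is only the core intuition):

```python
# Perturb one instance, weight perturbations by proximity, and fit an
# interpretable linear surrogate to the black box in that neighborhood.
import numpy as np
from sklearn.linear_model import Ridge

black_box = lambda X: np.sin(X[:, 0]) + 0.1 * X[:, 1] ** 3   # stand-in model
x0 = np.array([1.0, 2.0])                                    # instance to explain

rng = np.random.default_rng(0)
Z = x0 + rng.normal(0, 0.3, size=(500, 2))                   # local perturbations
weights = np.exp(-np.sum((Z - x0) ** 2, axis=1) / 0.1)       # proximity kernel

surrogate = Ridge().fit(Z, black_box(Z), sample_weight=weights)
print(surrogate.coef_)    # local effects; roughly cos(1) and 0.3 * 2**2
```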
|
Aug 19, 2016 |
[MINI] ANOVA
12:55
https://traffic.libsyn.com/secure/dataskeptic/anova.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/5/a/8/1/5a811be8850ac03d/anova.png
11137753
b5e0c720f3583f0b86066f87db1a0327
Analysis of variance is a method used to evaluate differences between two or more groups. It works by breaking down the total variance of the system into the between-group variance and the within-group variance. We discuss this method in the context of wait times getting coffee at Starbucks.
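A hedged sketch using scipy (the wait-time numbers are invented):

```python
# One-way ANOVA on wait times at three coffee shop locations: the F statistic
# compares between-group variance to within-group variance.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
store_a = rng.normal(4.0, 1.0, 30)   # minutes waited, made-up data
store_b = rng.normal(4.2, 1.0, 30)
store_c = rng.normal(6.0, 1.0, 30)

f_stat, p_value = f_oneway(store_a, store_b, store_c)
print(f_stat, p_value)               # a small p suggests the means differ
```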
|
Aug 12, 2016 |
Machine Learning on Images with Noisy Human-centric Labels
23:11
https://traffic.libsyn.com/secure/dataskeptic/ishan.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137755
c3e75a6071ef04592940adcafc20d2dd
When humans describe images, they have a reporting bias, in that they report only what they consider important. Thus, in addition to considering whether something is present in an image, one should consider whether it is also relevant to the image before labeling it. Ishan Misra joins us this week to discuss his recent paper Seeing through the Human Reporting Bias: Visual Classifiers from Noisy Human-Centric Labels, which explores a novel architecture for learning to distinguish presence and relevance. This work enables web-scale datasets to be useful for training, not just well-groomed hand-labeled corpora.
|
Aug 05, 2016 |
[MINI] Survival Analysis
14:20
https://traffic.libsyn.com/secure/dataskeptic/survival-analysis.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/9/d/b/7/9db75222d76bd2c2/survival-analysis.png
11137758
c6fe80f2d715b98dc00ea40a49a3a81d
Survival analysis techniques are useful for studying the longevity of groups of elements or individuals, taking into account time considerations and right censorship. This episode explores how survival analysis can describe marriages, in particular using the (semi-parametric) Cox proportional hazards model. This episode discusses some good summaries of survey data on marriage and divorce which can be found here. The python lifelines library is a good place to get started for people that want to do some hands-on work.
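To get hands-on with the lifelines library mentioned above, a Kaplan-Meier fit is a natural first step (the durations here are invented; note this is a different estimator than the episode's Cox model, which lifelines also provides as CoxPHFitter):

```python
# Fit a Kaplan-Meier curve to durations (e.g., years of marriage), where
# event_observed marks divorces and 0 marks still-married (right-censored).
import numpy as np
from lifelines import KaplanMeierFitter

durations = np.array([5, 8, 12, 3, 20, 25, 7, 15])   # made-up years
observed = np.array([1, 1, 0, 1, 0, 0, 1, 1])        # 0 = right-censored

kmf = KaplanMeierFitter()
kmf.fit(durations, event_observed=observed)
print(kmf.survival_function_)        # estimated P(still married) over time
```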
|
Jul 29, 2016 |
Predictive Models on Random Data
36:32
https://traffic.libsyn.com/secure/dataskeptic/Predictive_Models_on_Random_Data.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137761
83964ee9babaec0c60497e27bc1b2cf7
|
Jul 22, 2016 |
[MINI] Receiver Operating Characteristic (ROC) Curve
11:10
https://traffic.libsyn.com/secure/dataskeptic/roc-auc.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/b/5/4/e/b54ed8edc55ae466/roc-curve.png
11137762
530a6339287f53af180c5b9e4d0e726b
An ROC curve is a plot that compares the trade off of true positives and false positives of a binary classifier under different thresholds. The area under the curve (AUC) is useful in determining how discriminating a model is. Together, ROC and AUC are very useful diagnostics for understanding the power of one's model and how to tune it.
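A minimal sketch with scikit-learn (the scores are invented for illustration):

```python
# Sweeping the threshold of a binary classifier traces out the ROC curve;
# the area under it (AUC) summarizes how discriminating the scores are.
import numpy as np
from sklearn.metrics import roc_curve, auc

y_true = np.array([0, 0, 0, 0, 1, 1, 1, 1])
scores = np.array([0.1, 0.3, 0.4, 0.6, 0.35, 0.7, 0.8, 0.9])  # model outputs

fpr, tpr, thresholds = roc_curve(y_true, scores)
print(auc(fpr, tpr))     # ~0.88 here; 1.0 is perfect, 0.5 is chance
```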
|
Jul 15, 2016 |
Multiple Comparisons and Conversion Optimization
30:02
https://traffic.libsyn.com/secure/dataskeptic/multiple-comparisons.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137764
a289332b29f6960b9fdc032f890bd5da
|
Jul 08, 2016 |
[MINI] Leakage
12:00
https://traffic.libsyn.com/secure/dataskeptic/leakage.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/1/9/6/01969b6ed6ca9f25/leakage.png
11137767
241d6ae22ed6d0c1f2d7e15207df0f13
If you'd like to make a good prediction, your best bet is to invent a time machine, visit the future, observe the value, and return to the past. For those without access to time travel technology, we need to avoid including information about the future in our training data when building machine learning models. Similarly, any feature whose value would not actually be available in practice at the time you'd want to use the model to make a prediction can introduce leakage into your model.
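A common concrete instance (my example, not the episode's): preprocessing fit on all rows lets test-set statistics seep into training.

```python
# Leaky vs safe preprocessing around a train/test split.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X = np.random.default_rng(0).normal(size=(100, 3))
y = np.arange(100) % 2

# leaky: the scaler sees the test rows before the split
X_scaled = StandardScaler().fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(X_scaled, y, random_state=0)

# safe: fit the scaler on training data only, then apply it to the test set
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)
```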
|
Jul 01, 2016 |
Predictive Policing
36:01
https://traffic.libsyn.com/secure/dataskeptic/predictive-policing.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137769
3379c868d172bf210121c3c4086980a1
|
Jun 24, 2016 |
[MINI] The CAP Theorem
10:32
https://traffic.libsyn.com/secure/dataskeptic/cap-theorem.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/3/c/c/1/3cc1d587abbd3427/cap-theorem.png
11137772
66034da563eb29982f0bea7025ee801b
Distributed computing cannot simultaneously guarantee consistency, availability, and partition tolerance. Most system architects need to think carefully about how they should appropriately balance the needs of their application across these competing objectives. Linh Da and Kyle discuss the CAP Theorem using the analogy of a phone tree for alerting people about a school snow day.
|
Jun 17, 2016 |
Detecting Terrorists with Facial Recognition?
33:10
https://traffic.libsyn.com/secure/dataskeptic/detecting-terrorists.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/a/f/9/9/af990ebace859a8a/DataSkeptic-Podcast-2.jpg
11137775
5f6a35bf1c296438e7778411043305ea
A startup is claiming that they can detect terrorists purely through facial recognition. In this solo episode, Kyle explores the plausibility of these claims.
|
Jun 10, 2016 |
[MINI] Goodhart's Law
10:56
https://traffic.libsyn.com/secure/dataskeptic/goodharts-law.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/c/1/1/7/c117897a5542ee5a/goodharts-law.png
11137777
af2b8f2d22bf744d99d342df7a3f67c3
Goodhart's law states that "When a measure becomes a target, it ceases to be a good measure". In this mini-episode we discuss how this affects SEO, call centers, and Scrum.
|
Jun 03, 2016 |
Data Science at eHarmony
42:43
https://traffic.libsyn.com/secure/dataskeptic/data-science-at-eharmony.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137780
589192f6a24b69c6185fa856c4d2bb79
I'm joined this week by Jon Morra, director of data science at eHarmony to discuss a variety of ways in which machine learning and data science are being applied to help connect people for successful long term relationships. Interesting open source projects mentioned in the interview include Face-parts, a web service for detecting faces and extracting a robust set of fiducial markers (features) from the image, and Aloha, a Scala based machine learning library. You can learn more about these and other interesting projects at the eHarmony github page. In the wrap up, Jon mentioned the LA Machine Learning meetup which he runs. This is a great resource for LA residents separate and complementary to datascience.la groups, so consider signing up for all of the above and I hope to see you there in the future.
|
May 27, 2016 |
[MINI] Stationarity and Differencing
13:38
https://traffic.libsyn.com/secure/dataskeptic/stationarity.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/9/f/8/b/9f8be5eb95ab0611/stationarity-and-differencing.png
11137782
6d0e59eedf0db225b7e8cf01b7aafa6b
Mystery shoppers and fruit cultivation help us discuss stationarity - a property of some time series whose statistical behavior is invariant over time in several ways. Differencing is one approach that can often convert a non-stationary process into a stationary one. If you have a stationary process, you get the benefit of many known statistical properties that enable a significant amount of inference and prediction.
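A minimal sketch, assuming a synthetic random walk (not the episode's examples): the walk itself is non-stationary, but its first differences are stationary increments.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
walk = pd.Series(np.cumsum(rng.normal(size=500)))  # non-stationary random walk
diffed = walk.diff().dropna()                      # first differencing

# The walk's variance grows over time; the differenced series' does not.
print(walk.head(250).var(), walk.tail(250).var())    # very different
print(diffed.head(250).var(), diffed.tail(250).var())  # roughly equal
```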
|
May 20, 2016 |
Feather
23:04
https://traffic.libsyn.com/secure/dataskeptic/feather.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137784
60475c847cdb7c0f56a573ee3ee62603
I'm joined by Wes McKinney (@wesmckinn) and Hadley Wickham (@hadleywickham) on this episode to discuss their joint project Feather. Feather is a file format for storing data frames along with some metadata, to help with interoperability between languages. At the time of recording, libraries are available for R and Python, making it easy for data scientists working in these languages to quickly and effectively share datasets and collaborate.
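A minimal round-trip sketch using pandas' Feather support (which requires pyarrow; note this is the modern interface, whereas the original release shipped standalone feather libraries for Python and R):

```python
import pandas as pd

df = pd.DataFrame({"x": [1, 2, 3], "label": ["a", "b", "c"]})
df.to_feather("example.feather")      # write the data frame plus metadata
same = pd.read_feather("example.feather")
print(same.equals(df))                # True: a lossless round trip
```

On the R side, the same file can be opened with the feather (now arrow) package's read_feather, which is the interoperability the format was designed for.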
|
May 13, 2016 |
[MINI] Bargaining
15:03
https://traffic.libsyn.com/secure/dataskeptic/bargaining.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/3/0/6/1/30614fb7f3dc8052/bargaining.png
11137786
5d1202462a834622db463af37a92f1b6
Bargaining is the process of two (or more) parties attempting to agree on the price for a transaction. Game theoretic approaches attempt to find two strategies from which neither party is motivated to deviate. These strategies are said to be in equilibrium with one another. The equilibria available in bargaining depend on the transaction mechanism and the information available to the parties. Discounting (how long parties are willing to wait) has a significant effect in this process. This episode discusses some of the choices Kyle and Linh Da made in deciding what offer to make on a house.
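As one worked example of how discounting shapes equilibrium (Rubinstein's alternating-offers model, not necessarily the mechanism discussed in the episode): with discount factors d1 and d2, the proposer's equilibrium share of the surplus is (1 - d2) / (1 - d1 * d2), so more patient parties capture more.

```python
def proposer_share(d1: float, d2: float) -> float:
    """Proposer's equilibrium share in Rubinstein alternating-offers bargaining."""
    return (1 - d2) / (1 - d1 * d2)

print(proposer_share(0.9, 0.9))   # symmetric patience -> ~0.526 (slight first-mover edge)
print(proposer_share(0.99, 0.5))  # patient proposer vs. impatient responder -> ~0.990
```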
|
May 06, 2016 |
deepjazz
29:53
https://traffic.libsyn.com/secure/dataskeptic/deepjazz.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137788
b18993e65f8e01a82937abbadd468888
Deepjazz is a project from Ji-Sung Kim, a computer science student at Princeton University. It is built using Theano, Keras, music21, and Evan Chow's project jazzml. Deepjazz is a computational music project that creates original jazz compositions using recurrent neural networks trained on Pat Metheny's "And Then I Knew". You can hear some of deepjazz's original compositions on SoundCloud.
|
Apr 29, 2016 |
[MINI] Auto-correlative functions and correlograms
14:58
https://traffic.libsyn.com/secure/dataskeptic/acf.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/2/c/2/f/2c2f92aa368261b5/acf.png
11137791
2e5040222c9f939b0fbfe68325cd44d6
When working with time series data, there are a number of important diagnostics one should consider to help understand more about the data. The auto-correlative function, plotted as a correlogram, helps explain how a given observation relates to recent preceding observations. A very random process (like lottery numbers) would show very low values, while temperature (our topic in this episode) correlates highly with recent days.
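A minimal sketch using statsmodels (my own example): white noise shows near-zero autocorrelation beyond lag 0, while a persistent AR(1) series, loosely analogous to daily temperature, decays slowly.

```python
import numpy as np
from statsmodels.tsa.stattools import acf

rng = np.random.default_rng(2)
noise = rng.normal(size=1000)          # "lottery numbers": no memory
ar1 = np.zeros(1000)                   # persistent process: strong memory
for t in range(1, 1000):
    ar1[t] = 0.8 * ar1[t - 1] + rng.normal()

print(acf(noise, nlags=5).round(2))    # ~[1, 0, 0, 0, 0, 0]
print(acf(ar1, nlags=5).round(2))      # ~[1, 0.8, 0.64, ...] decaying slowly
```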
|
Apr 22, 2016 |
Early Identification of Violent Criminal Gang Members
27:05
https://traffic.libsyn.com/secure/dataskeptic/predicting-violent-offenders.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137793
364b760d01c8c8f84a1967e3cf72f7f3
|
Apr 15, 2016 |
[MINI] Fractional Factorial Design
11:09
https://traffic.libsyn.com/secure/dataskeptic/Fractional_factorial_design.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/9/9/8/c/998cbc1c6bd9836a/fractional-factorial-design.png
11137796
c123ae27dcb7ff8118d704e9b5d44176
A dinner party at Data Skeptic HQ helps teach the uses of fractional factorial design for studying 2-way interactions.
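As a small sketch of the idea (a hypothetical 2^(3-1) half-fraction design, not the dinner-party experiment itself): vary two factors over a full grid and set the third by the defining relation I = ABC, halving the number of runs while preserving the ability to study 2-way interactions.

```python
from itertools import product

# Half-fraction of a 2^3 design: 4 runs instead of 8, with C aliased to A*B.
runs = [(a, b, a * b) for a, b in product([-1, 1], repeat=2)]
for a, b, c in runs:
    print(f"A={a:+d} B={b:+d} C={c:+d}")
```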
|
Apr 08, 2016 |
Machine Learning Done Wrong
25:21
https://traffic.libsyn.com/secure/dataskeptic/machine_learning_done_wrong.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137798
3ffc5e45f4af3b132d596fbf681236f1
Cheng-tao Chu (@chengtao_chu) joins us this week to discuss his perspective on common mistakes and pitfalls that are made when doing machine learning. This episode is filled with sage advice for beginners and intermediate users of machine learning, and possibly some good reminders for experts as well. Our discussion parallels his recent blog post Machine Learning Done Wrong. Cheng-tao Chu is an entrepreneur who has worked at many well-known Silicon Valley companies. His paper Map-Reduce for Machine Learning on Multicore is the basis for Apache Mahout. His most recent endeavor has just emerged from stealth, so please check out OneInterview.io.
|
Apr 01, 2016 |
Potholes
41:22
https://traffic.libsyn.com/secure/dataskeptic/potholes.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/d/9/1/5/d9157637935c5bb3/DataSkeptic-Podcast-1.jpg
11137800
9c4195de10a011b758ab70f7a74b9df7
Co-host Linh Da was in a biking accident after hitting a pothole. She sustained an injury that required stitches. This is the story of our quest to file a 311 complaint and track it through the City of Los Angeles's open data portal. My guests this episode are Chelsea Ursaner (LA City Open Data Team), Ben Berkowitz (CEO and founder of SeeClickFix), and Russ Klettke (Editor of pothole.info).
|
Mar 25, 2016 |
[MINI] The Elbow Method
15:14
https://traffic.libsyn.com/secure/dataskeptic/the-elbow-method.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/b/0/a/3/b0a3557201505296/elbow-method.png
11137802
7d525d7ab1fbb1a959655a371797ebed
Certain data mining algorithms (including k-means clustering and k-nearest neighbors) require a user-defined parameter k. A user of these algorithms must select this value, which raises the question: what is the "best" value of k that one should select to solve their problem? This mini-episode explores the appropriate value of k to use when trying to estimate the cost of a house in Los Angeles based on the closest sales in its area.
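A minimal sketch of the elbow method with scikit-learn, assuming synthetic 2-D data with three true clusters: the within-cluster sum of squares drops sharply until the true k, then flattens.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
centers = np.array([[0, 0], [5, 5], [0, 5]])
X = np.vstack([rng.normal(c, 0.5, size=(100, 2)) for c in centers])

for k in range(1, 7):
    inertia = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
    print(k, round(inertia, 1))  # the improvement flattens sharply after k=3
```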
|
Mar 18, 2016 |
Too Good to be True
35:11
https://traffic.libsyn.com/secure/dataskeptic/too_good_to_be_true.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137805
9343d1fdd0a5ab1aa4af968e55ebd237
Today on Data Skeptic, Lachlan Gunn joins us to discuss his recent paper Too Good to be True. This paper highlights a somewhat paradoxical and counterintuitive fact: unanimity is unexpected in cases where perfect measurements cannot be taken. With large enough data, some amount of error is expected. The "Too Good to be True" paper highlights three interesting examples, which we discuss in the podcast. You can also watch a lecture from Lachlan on this topic on YouTube.
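A toy calculation of the core intuition (my own, not from the paper): if each independent measurement is correct with probability p, the chance that n measurements unanimously agree on the truth, p to the power n, shrinks as n grows, so perfect agreement in a large sample is itself suspicious.

```python
p = 0.98  # per-measurement probability of a correct result
for n in (5, 10, 30, 100):
    print(n, round(p ** n, 3))  # probability of unanimous correctness falls with n
```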
|
Mar 11, 2016 |
[MINI] R-squared
13:20
https://traffic.libsyn.com/secure/dataskeptic/r-squared.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/1/1/d/a/11dadbe155a21c0e/r-squared.png
11137808
60ebb92a24989b6fa0a3e6a7af6b078f
How well does your model explain your data? R-squared is a useful statistic for answering this question. In this episode we explore how it applies to the problem of valuing a house. Aspects like the number of bedrooms go a long way in explaining why different houses have different prices. There's some amount of variance that can be explained by a model, and some amount that cannot. R-squared is the ratio of the explained variance to the total variance. It's not a measure of accuracy; it's a measure of the explanatory power of one's model.
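A minimal sketch of the definition, using made-up prices: R-squared = 1 - SS_res / SS_tot, the share of the response's variance the model explains.

```python
import numpy as np

y = np.array([300.0, 450.0, 500.0, 650.0])      # observed house prices ($k)
y_hat = np.array([320.0, 430.0, 520.0, 630.0])  # hypothetical model predictions

ss_res = np.sum((y - y_hat) ** 2)        # unexplained (residual) variation
ss_tot = np.sum((y - y.mean()) ** 2)     # total variation in prices
print(1 - ss_res / ss_tot)               # ~0.97: the model explains most of it
```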
|
Mar 04, 2016 |
Models of Mental Simulation
39:44
https://traffic.libsyn.com/secure/dataskeptic/think_again.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137810
aeca53ccb0d024e22dc581a06bed9e4a
|
Feb 26, 2016 |
[MINI] Multiple Regression
18:29
https://traffic.libsyn.com/secure/dataskeptic/multiple_regressions.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/b/0/b/1/b0b1c162d9bf2e48/multiple-regression.png
11137813
d67f19619d2610e280fd8f3ac56888f4
This episode is a discussion of multiple regression: the use of observations that are vectors of values to predict a response variable. For this episode, we consider how features of a home such as the number of bedrooms, number of bathrooms, and square footage can predict the sale price. Unlike a typical episode of Data Skeptic, these show notes are not just supporting material, but are actually featured in the episode. The site Redfin graciously allows users to download a CSV of results they are viewing. Unfortunately, they limit this extract to 500 listings, but you can still use it to try the same approach on your own using the download link shown in the figure below.
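A minimal sketch of the same approach, assuming a toy dataset in place of the Redfin extract:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Columns: bedrooms, bathrooms, square footage (hypothetical listings).
X = np.array([[2, 1, 900], [3, 2, 1400], [3, 2, 1600],
              [4, 3, 2200], [5, 3, 2800]])
y = np.array([400_000, 550_000, 600_000, 750_000, 900_000])  # sale prices

model = LinearRegression().fit(X, y)
print(model.coef_, model.intercept_)   # per-feature dollar contributions
print(model.predict([[3, 2, 1500]]))   # estimated price of a new listing
```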
|
Feb 19, 2016 |
Scientific Studies of People's Relationship to Music
42:14
https://traffic.libsyn.com/secure/dataskeptic/samuel.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137814
14e480e948e0db65c6734aac119a4a2c
|
Feb 12, 2016 |
[MINI] k-d trees
14:11
https://traffic.libsyn.com/secure/dataskeptic/kd_trees.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/d/f/3/6/df360841c3db6c82/k-d-trees.png
11137816
e2a79a2e36bab9870b8edba8b828bd5a
This episode reviews the concept of k-d trees: an efficient data structure for holding multidimensional objects. Kyle gives Linh Da a dictionary and asks her to look up words as a way of introducing the concept of binary search. We actually spend most of the episode talking about binary search before getting into k-d trees, but this is a necessary prerequisite.
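A minimal sketch using scipy's KDTree (my own example): build the tree once, then answer nearest-neighbor queries without scanning every point.

```python
import numpy as np
from scipy.spatial import KDTree

rng = np.random.default_rng(4)
points = rng.uniform(size=(10_000, 2))   # 10k random 2-D points
tree = KDTree(points)                    # one-time build cost

dist, idx = tree.query([0.5, 0.5], k=3)  # three closest points to the center
print(dist, points[idx])
```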
|
Feb 05, 2016 |
Auditing Algorithms
42:58
https://traffic.libsyn.com/secure/dataskeptic/auditing_algorithms.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137819
f2a881925c3bdca129a166896ec7028e
Algorithms are pervasive in our society and make thousands of automated decisions on our behalf every day. The possibility of digital discrimination is a very real threat, and it is very plausible for discrimination to occur accidentally (i.e., outside the intent of the system designers and programmers). Christian Sandvig joins us in this episode to talk about his work and the concept of auditing algorithms. Christian Sandvig (@niftyc) has a PhD in communications from Stanford and is currently an Associate Professor of Communication Studies and Information at the University of Michigan. His research studies the predictable and unpredictable effects that algorithms have on culture. His work exploring the topic of auditing algorithms has framed the conversation of how and why we might want to have oversight on the way algorithms affect our lives. His writing appears in numerous publications including The Social Media Collective, The Huffington Post, and Wired. One of his papers we discussed in depth on this episode was Auditing Algorithms: Research Methods for Detecting Discrimination on Internet Platforms, which is well worth a read.
|
Jan 29, 2016 |
[MINI] The Bonferroni Correction
14:29
https://traffic.libsyn.com/secure/dataskeptic/bonferroni-correction2.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/8/f/5/d/8f5df3dea572ad6d/bonferroni_correction.png
11137821
4e13dfc84b15e509fc71a6df86ace23c
Today's episode begins by asking how many left-handed employees we should expect at a company before anyone can claim discrimination against lefties. If not handedness, then eye color, hair color, favorite ska band, most recent grocery store used, or any number of other characteristics could be studied to look for deviations from the norm in a company. When multiple comparisons are made simultaneously, one must account for this, and a common method for doing so is the Bonferroni Correction. It is not, however, a surefire procedure, and this episode wraps up with a bit of skepticism about it.
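The correction itself is one line: test each of m hypotheses at alpha / m so the family-wise error rate stays at most alpha. A quick illustration of why it's needed:

```python
alpha, m = 0.05, 20
print(alpha / m)                  # 0.0025: the Bonferroni-corrected threshold

# Without correction, 20 independent true-null tests at alpha = 0.05
# produce at least one false positive with probability ~0.64.
print(1 - (1 - alpha) ** m)
```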
|
Jan 22, 2016 |
Detecting Pseudo-profound BS
37:37
https://traffic.libsyn.com/secure/dataskeptic/pseudo-profound-episode.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137823
18102dbc685d36dea2556055eea4ed76
A recent paper in the journal Judgment and Decision Making titled On the reception and detection of pseudo-profound bullshit explores empirical questions around a reader's ability to detect statements which may sound profound but are actually a collection of buzzwords that fail to contain adequate meaning or truth. These statements are definitively different from lies and nonsense, as we discuss in the episode. This paper proposes the Bullshit Receptivity scale (BSR) and empirically demonstrates that it correlates with existing metrics like the Cognitive Reflection Test, building confidence that this can be a useful, repeatable, empirical measure of a person's ability to detect pseudo-profound statements as being different from genuinely profound statements. Additionally, the correlative results provide some insight into possible root causes for why individuals might find great profundity in these statements based on other beliefs or cognitive measures. The paper's lead author Gordon Pennycook joins me to discuss this study's results. If you'd like some examples of pseudo-profound bullshit, you can randomly generate some based on Deepak Chopra's Twitter feed. To read other work from Gordon, check out his Google Scholar page and find him on Twitter via @GordonPennycook. And just for fun, if you think you've dreamed up a Data Skeptic related pseudo-profound bullshit statement, tweet it with hashtag #pseudoprofound. If I see an especially clever or humorous one, I might want to send you a free Data Skeptic sticker.
|
Jan 15, 2016 |
[MINI] Gradient Descent
14:51
https://traffic.libsyn.com/secure/dataskeptic/gradient_descent.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/5/c/b/d/5cbdb35291f50d46/gradient-descent.png
11137825
f36f572f85bbf937a7bc381b89ead322
Today's mini episode discusses the widely known optimization algorithm gradient descent in the context of hiking on a foggy hillside.
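A minimal sketch of the algorithm on f(x) = (x - 3)^2: repeatedly step downhill along the negative gradient, one small step at a time, just like descending that foggy hillside.

```python
def grad(x):
    # Derivative of f(x) = (x - 3)^2
    return 2 * (x - 3)

x, learning_rate = 0.0, 0.1
for _ in range(50):
    x -= learning_rate * grad(x)  # step opposite the slope
print(x)  # converges toward the minimum at x = 3
```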
|
Jan 08, 2016 |
Let's Kill the Word Cloud
15:03
https://traffic.libsyn.com/secure/dataskeptic/lets_kill_the_word_cloud.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/8/a/a/08aa84aadef1e9d5/DataSkeptic-Podcast-2.jpg
11137827
75537da8a555a0072d6c2fed5529b68d
This episode is a discussion of data visualization and a proposed New Year's resolution for Data Skeptic listeners. Let's kill the word cloud.
|
Jan 01, 2016 |
2015 Holiday Special
14:22
https://traffic.libsyn.com/secure/dataskeptic/2015_Holiday_Special.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/2/b/6/0/2b608d7db6b04470/DataSkeptic-Podcast-2.jpg
11137829
f8c6c4e71f269487485d6d407ba0922b
Today's episode is a reading of Isaac Asimov's The Machine that Won the War. I can't think of a story that's more appropriate for Data Skeptic.
|
Dec 25, 2015 |
Wikipedia Revision Scoring as a Service
42:56
https://traffic.libsyn.com/secure/dataskeptic/wikipedia-revision-scoring-as-a-service.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137831
19903e6141c04516eddbc8e697bb4281
In this interview with Aaron Halfaker of the Wikimedia Foundation, we discuss his research and career related to the study of Wikipedia. In his paper The Rise and Decline of an Open Collaboration Community, he highlights a trend in the declining rate of active editors on Wikipedia which began in 2007. I asked Aaron about a variety of possible hypotheses for the phenomenon, in particular, how automated quality control tools that revert edits automatically could play a role. This led Aaron and his collaborators to develop Snuggle, an optimized interface to help Wikipedians better welcome newcomers to the community. We discuss the details of these topics as well as ORES, which provides revision scoring as a service to any software developer that wants to consume the output of their machine learning based scoring. You can find Aaron on Twitter as @halfak.
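ORES exposed a plain HTTP interface, so consuming a score was a single request. A hedged sketch (the v3 endpoint shape below is my assumption based on the service's public documentation from that era; the service has since been superseded by newer Wikimedia ML infrastructure):

```python
import requests

# Assumed historical ORES v3 endpoint: score a revision with the
# "damaging" model for English Wikipedia.
url = "https://ores.wikimedia.org/v3/scores/enwiki"
resp = requests.get(url, params={"models": "damaging", "revids": "1234567"})
print(resp.json())  # per-revision probability that the edit is damaging
```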
|
Dec 18, 2015 |
[MINI] Term Frequency - Inverse Document Frequency
10:17
https://traffic.libsyn.com/secure/dataskeptic/tf_-_idf.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/5/a/b/05abf34e8a161f0e/tf-idf.png
11137832
7afffa66c10aafc7ebea01cdcacb133b
Today's topic is term frequency - inverse document frequency (tf-idf), a statistic for estimating the importance of words and phrases in a set of documents.
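A minimal sketch using scikit-learn (my own example): words common in one document but rare across the corpus get the highest weights.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["the parrot squawks", "the parrot sleeps", "the dog barks"]
vec = TfidfVectorizer()
tfidf = vec.fit_transform(docs)

# Weights for the first document: "squawks" (unique to it) scores highest,
# "the" (in every document) scores lowest.
print(dict(zip(vec.get_feature_names_out(), tfidf.toarray()[0].round(2))))
```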
|
Dec 11, 2015 |
The Hunt for Vulcan
41:31
https://traffic.libsyn.com/secure/dataskeptic/the-hunt-for-vulcan.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137835
abdf4560049259ece64cc06cf96c07d1
Early astronomers could see several of the planets with the naked eye. The invention of the telescope allowed for further understanding of our solar system. The work of Isaac Newton allowed later scientists to accurately predict Neptune, which was later observationally confirmed exactly where predicted. It seemed only natural that a similar unknown body might explain anomalies in the orbit of Mercury, and thus began the search for the hypothesized planet Vulcan. Thomas Levenson's book "The Hunt for Vulcan" is a narrative of the key scientific minds involved in the search for, and eventual refutation of, an unobserved planet between Mercury and the sun. Thomas joins me in this episode to discuss his book and the fascinating story of the quest to find this planet. During the discussion, we mention one of the contributions made by Urbain-Jean-Joseph Le Verrier: the complex calculations that enabled him to predict where to find the planet that would eventually be called Neptune. The calculus behind this work is difficult, and some of it is demonstrated in a Jupyter notebook I recently discovered from Paulo Marques titled The-Body Problem. | Thomas Levenson is a professor at MIT and head of its science writing program. He is the author of several books, including Einstein in Berlin and Newton and the Counterfeiter: The Unknown Detective Career of the World's Greatest Scientist. He has also made ten feature-length documentaries (including a two-hour Nova program on Einstein) for which he has won numerous awards. His most recent book, "The Hunt for Vulcan", explores the century-spanning quest to explain the movement of the cosmos via theory and the role the hypothesized planet Vulcan played in the story. Follow Thomas on Twitter @tomlevenson and check out his blog at https://inversesquare.wordpress.com/. Pick up your copy of The Hunt for Vulcan at your local bookstore, preferred book buying place, or at the Penguin Random House site.
|
Dec 04, 2015 |
[MINI] The Accuracy Paradox
17:04
https://traffic.libsyn.com/secure/dataskeptic/the-accuracy-paradox.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/b/2/9/8/b2989c2c2f52dc83/accuracy-paradox.png
11137837
11e28027d156b297cb5ddc84da75c767
Today's episode discusses the accuracy paradox. There are cases when one might prefer a nominally less accurate model because it offers more predictive power or better captures the underlying causal factors behind the outcome variable you are interested in. This is especially relevant in machine learning when trying to predict rare events. We discuss how the accuracy paradox might apply if you were trying to predict the likelihood that a person is a bird owner.
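A toy illustration (my own numbers): if bird owners are 1% of the population, a model that predicts "not a bird owner" for everyone is 99% accurate yet completely useless for finding bird owners.

```python
import numpy as np

rng = np.random.default_rng(5)
y_true = rng.random(10_000) < 0.01      # ~1% of people own a bird
y_pred = np.zeros(10_000, dtype=bool)   # model: always predict "no"

accuracy = (y_true == y_pred).mean()
recall = (y_true & y_pred).sum() / y_true.sum()
print(accuracy, recall)  # ~0.99 accuracy, 0.0 recall: the paradox in action
```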
|
Nov 27, 2015 |
Neuroscience from a Data Scientist's Perspective
40:18
https://traffic.libsyn.com/secure/dataskeptic/neuroscience.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137839
be662d4d6dbf8db532c060ca817fcb41
... or should this have been called data science from a neuroscientist's perspective? Either way, I'm sure you'll enjoy this discussion with Laurie Skelly. Laurie earned a PhD in Integrative Neuroscience from the Department of Psychology at the University of Chicago. In her life as a social neuroscientist, using fMRI to study the neural processes behind empathy and psychopathy, she learned the ropes of zooming in and out between the macroscopic and the microscopic -- how millions of data points come together to tell us something meaningful about human nature. She's currently at Metis Data Science, an organization that helps people learn the skills of data science to transition into industry. In this episode, we discuss fMRI technology, Laurie's research studying empathy and psychopathy, as well as the skills and tools used in common between neuroscientists and data scientists. For listeners interested in more on this subject, Laurie recommended the blogs Neuroskeptic, Neurocritic, and Neuroecology. We conclude the episode with a mention of the upcoming Metis Data Science San Francisco cohort which Laurie will be teaching. If anyone is interested in applying to participate, they can do so here.
|
Nov 20, 2015 |
[MINI] Bias Variance Tradeoff
13:35
https://traffic.libsyn.com/secure/dataskeptic/bias-variance-tradeoff.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/7/0/d/b/70db0da2d9da31a0/bias_variance_tradeoff.png
11137841
23fa13aacf0940fc4f8d7889caf4e6b9
A discussion of the expected number of cars at a stoplight frames today's discussion of the bias variance tradeoff. The central idea of this concept relates to model complexity. A very simple model will likely generalize well from training to testing data, but it will have very high bias, since its simplicity can prevent it from capturing the relationship between the covariates and the output. As a model grows more and more complex, it may capture more of the underlying data, but the risk that it overfits the training data and therefore does not generalize (has high variance) increases. The tradeoff between minimizing bias and minimizing variance is an ongoing challenge for data scientists, and an important discussion for skeptics around how much we should trust models.
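A minimal sketch, assuming noisy quadratic data (not the stoplight example): an underfit line has high bias, while a high-degree polynomial chases the noise (high variance), so both typically do worse on held-out points than the right-complexity model.

```python
import numpy as np

rng = np.random.default_rng(6)
x = np.linspace(-3, 3, 40)
y = x ** 2 + rng.normal(scale=1.0, size=x.size)  # true signal is quadratic
x_test = np.linspace(-3, 3, 41)
y_test = x_test ** 2                              # noise-free test targets

for degree in (1, 2, 9):
    coeffs = np.polyfit(x, y, degree)             # fit polynomial of given degree
    mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(degree, round(mse, 2))  # degree 1 underfits, 9 overfits, 2 generalizes best
```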
|
Nov 13, 2015 |
Big Data Doesn't Exist
32:28
https://traffic.libsyn.com/secure/dataskeptic/big-data-doesnt-exist.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137842
2407283aea97cb88e7147ae3cc152e7a
The recent opinion piece Big Data Doesn't Exist on TechCrunch by Slater Victoroff is an interesting discussion about the usefulness of data both big and small. Slater joins me this episode to discuss and expand on this discussion. Slater Victoroff is CEO of indico Data Solutions, a company whose services turn raw text and image data into human insight. He and his co-founders studied at Olin College of Engineering, where indico was born. indico was then accepted into the Techstars Accelerator program in the Fall of 2014 and went on to raise $3M in seed funding. His recent essay "Big Data Doesn't Exist" received a lot of traction on TechCrunch, and I have invited Slater to join me today to discuss his perspective and touch on a few topics in the machine learning space as well.
|
Nov 06, 2015 |
[MINI] Covariance and Correlation
14:29
https://traffic.libsyn.com/secure/dataskeptic/covariance_and_correlation.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/2/b/d/1/2bd1713290199214/covariance_correlation.png
11137844
f69c021cb000d94652b27721621af0c0
The degree to which two variables change together can be calculated in the form of their covariance. This value can be normalized to the correlation coefficient, which has the advantage of being a unitless measure strictly bounded between -1 and 1. This episode discusses how we arrive at these values and why they are important.
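A minimal sketch with numpy (my own example): covariance depends on the variables' units, while the correlation coefficient rescales it to a unit-free value in [-1, 1].

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.normal(size=1000)
y = 2 * x + rng.normal(scale=0.5, size=1000)  # strongly related to x

print(np.cov(x, y)[0, 1])       # ~2: magnitude depends on units of x and y
print(np.corrcoef(x, y)[0, 1])  # ~0.97: unitless, bounded in [-1, 1]
```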
|
Oct 30, 2015 |
Bayesian A/B Testing
30:11
https://traffic.libsyn.com/secure/dataskeptic/bayesian-methods-for-hackers.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/0/e/4/b/0e4bd71bb64c6e45/DS_-_New_Logo_assets_-_JL_DS_Logo_Stacked_-_Color_2.jpg
11137846
3646397a9caee83882f1a78339bdf542
Today's guest is Cameron Davidson-Pilon. Cameron has a master's degree in quantitative finance from the University of Waterloo. Think of it as statistics on stock markets. For the last two years he's been the team lead of data science at Shopify. He's the founder of dataorigami.net, which produces screencasts teaching methods and techniques of applied data science. He's also the author of the book Bayesian Methods for Hackers: Probabilistic Programming and Bayesian Inference, just released in print, which you can also get in digital form. This episode focuses on the topic of Bayesian A/B testing, which spans just one chapter of the book. Related to today's discussion is the Data Origami post The class imbalance problem in A/B testing. Lastly, Data Skeptic will be giving away a copy of the print version of the book to one lucky listener with a US-based delivery address. To participate, you'll need to write a review of any site, book, course, or podcast of your choice on datasciguide.com. After it goes live, tweet a link to it with the hashtag #WinDSBook to be given an entry in the contest. This contest will end November 20th, 2015, at which time I'll draw a single randomized winner and contact them for delivery details via direct message on Twitter.
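A minimal sketch of the Bayesian A/B approach (my own example with made-up counts, not from the book): with Beta-Binomial conjugacy, sample from each variant's posterior and estimate the probability that B's conversion rate beats A's.

```python
import numpy as np

rng = np.random.default_rng(8)
conv_a, n_a = 45, 1000   # hypothetical conversions / trials for variant A
conv_b, n_b = 60, 1000   # ... and for variant B

# Beta(1, 1) prior + binomial likelihood -> Beta posterior for each rate.
post_a = rng.beta(1 + conv_a, 1 + n_a - conv_a, size=100_000)
post_b = rng.beta(1 + conv_b, 1 + n_b - conv_b, size=100_000)

print((post_b > post_a).mean())  # P(rate_B > rate_A | data)
```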
|
Oct 23, 2015 |
[MINI] The Central Limit Theorem
13:07
https://traffic.libsyn.com/secure/dataskeptic/Central_Limit_Theorem.mp3?dest-id=201630
https://ssl-static.libsyn.com/p/assets/5/9/d/7/59d727f2484333a5/central_limit_theorem.png
11137849
e9d6423ec527424cfec4add6948d31fd
The central limit theorem is an important statistical result which states that typically, the mean of a large enough set of independent trials is approximately normally distributed. This episode explores how this might be used to determine if an Amazon parrot like Yoshi produces more or less waste than an African Grey, under the assumption that the individual distributions are not normal.
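A minimal simulation, assuming skewed (exponential) daily waste amounts as a stand-in for the parrot data: individual observations are far from normal, but means of many observations are approximately normally distributed.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
samples = rng.exponential(scale=2.0, size=(10_000, 50))  # 10k samples of n=50
means = samples.mean(axis=1)

# Skewness near 0 suggests approximate normality of the sample means.
print(stats.skew(rng.exponential(2.0, 10_000)))  # ~2 for the raw skewed draws
print(stats.skew(means))                         # close to 0 for the means
```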
|
Oct 16, 2015 |
Accessible Technology
38:44
|