Data Science at Home

By Francesco Gadaleta


Description

Making machine learning easy for everyone

Episode Date
Episode 33: Decentralized Machine Learning and the proof-of-train
00:17:40

In an attempt to democratize machine learning, data scientists should be able to train their models on data they do not necessarily own, nor even see. A model that is privately trained should be verifiable and uniquely identifiable across its entire life cycle, from its random initialization to the optimal values of its parameters.
How does blockchain allow all this? Fitchain is the decentralized machine learning platform that gives models an identity and a certification of their training procedure, the proof-of-train.

Jun 11, 2018
Episode 32: I am back. I have been building fitchain
00:23:14

I know, I have been away too long without publishing much in the last 3 months.
But, there’s a reason for that. I have been building a platform that combines machine learning with blockchain technology.
Let me introduce you to fitchain and tell you more in this episode.

If you want to collaborate on the project or just think it’s interesting, drop me a line on the contact page at fitchain.io

Jun 04, 2018
Founder Interview – Francesco Gadaleta of Fitchain
00:31:04

Cross-posting from Cryptoradio.io

Overview

Francesco Gadaleta introduces Fitchain, a decentralized machine learning platform that combines blockchain technology and AI to solve the data manipulation problem in restrictive environments such as healthcare or financial institutions. Francesco Gadaleta is the founder of Fitchain.io and senior advisor to Abe AI. Fitchain is a platform that officially started in October [...]

May 24, 2018
Episode 31: The End of Privacy
00:39:03

Data is a complex topic, not only related to machine learning algorithms, but also and especially to privacy and security of individuals, the same individuals who create such data just by using the many mobile apps and services that characterize their digital life.

In this episode I am together with B.J. Mendelson, author of “Social Media is Bullshit” from St. Martin’s Press and world-renowned speaker on the myths and realities of today’s Internet platforms. B.J. has a new book about privacy and sent me a free copy of “Privacy, and how to get it back” that I read in just one day. That was enough to realise how much we have in common when it comes to data and data collection.


Apr 02, 2018
Episode 30: Neural networks and genetic evolution: an unfeasible approach
00:22:19

Despite what researchers claim about genetic evolution, in this episode we give a realistic view of the field.

Nov 21, 2017
Episode 29: Fail your AI company in 9 steps
00:14:27

In order to succeed with artificial intelligence, it is better to know how to fail first. It is easier than you think.
Here are 9 easy steps to fail your AI startup.

Nov 11, 2017
Episode 28: Towards Artificial General Intelligence: preliminary talk
00:20:34

The enthusiasm for artificial intelligence is raising some concerns, especially with respect to some ventured conclusions about what AI can really do and what its direct descendant, artificial general intelligence, would be capable of doing in the immediate future. From stealing jobs to exterminating the entire human race, the creativity (of some) seems to have no limits.
In this episode I make sure that everyone comes back to reality - which might sound less exciting than Hollywood, but definitely more… real.

Nov 04, 2017
Episode 27: Techstars accelerator and the culture of fireflies
00:17:42

In the aftermath of the Barclays Accelerator, powered by Techstars experience, one of the most innovative and influential startup accelerators in the world, I’d like to give back to the community lessons learned, including the need for confidence, soft-skills, and efficiency, to be applied to startups that deal with artificial intelligence and data science.
In this episode I also share some thoughts about the culture of fireflies in modern and dynamic organisations.

Oct 30, 2017
Episode 26: Deep Learning and Alzheimer
00:54:02

In this episode I speak about Deep Learning technology applied to Alzheimer's disorder prediction. I had a great chat with Saman Sarraf, machine learning engineer at Konica Minolta, former lab manager at the Rotman Research Institute at Baycrest, University of Toronto, and author of DeepAD: Alzheimer's Disease Classification via Deep Convolutional Neural Networks using MRI and fMRI.

I hope you enjoy the show.

Oct 23, 2017
Episode 25: How to become data scientist [RB]
00:16:16

In this episode, I speak about the requirements and the skills needed to become a data scientist and join an amazing community that is changing the world with data analytics.

Oct 16, 2017
Episode 24: How to handle imbalanced datasets
00:21:21

In machine learning, and data science in general, it is very common to deal at some point with imbalanced datasets and class distributions. This is the typical case where the number of observations that belong to one class is significantly lower than the number belonging to the other classes. Actually, this happens all the time, in several domains, from finance to healthcare to social media, just to name a few I have personally worked with.
Think about a bank detecting fraudulent transactions among millions or billions of daily operations, or equivalently in healthcare for the identification of rare disorders.
In genetics but also with clinical lab tests this is a normal scenario, in which, fortunately there are very few patients affected by a disorder and therefore very [...]
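The scenario above can be sketched in a few lines of Python. This is a minimal random-oversampling routine - the function name `oversample_minority` and the toy 95/5 class split are my own illustration, not code from the episode - that duplicates minority-class rows until both classes are equally represented.

```python
import random

def oversample_minority(X, y, seed=0):
    """Balance a binary dataset by randomly duplicating minority-class rows."""
    rng = random.Random(seed)
    pos = [(x, label) for x, label in zip(X, y) if label == 1]
    neg = [(x, label) for x, label in zip(X, y) if label == 0]
    minority, majority = (pos, neg) if len(pos) < len(neg) else (neg, pos)
    # Draw random minority samples (with replacement) until class counts match.
    balanced = majority + [rng.choice(minority) for _ in range(len(majority))]
    rng.shuffle(balanced)
    X_bal, y_bal = zip(*balanced)
    return list(X_bal), list(y_bal)

# 95% negatives, 5% positives: the fraud-detection situation described above.
X = [[i] for i in range(100)]
y = [1 if i < 5 else 0 for i in range(100)]
X_bal, y_bal = oversample_minority(X, y)
print(sum(y_bal), len(y_bal))  # 95 190
```

Oversampling is only one option; class weighting or undersampling the majority class follow the same logic of rebalancing what the learner sees.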

Oct 09, 2017
Episode 23: Why do ensemble methods work?
00:18:59

Ensemble methods have been designed to improve the performance of a single model, when that model is not very accurate. According to the general definition of ensembling, it consists of building a number of single classifiers and then combining or aggregating their predictions into one classifier that is usually stronger than any single one.

The key idea behind ensembling is that some models will do well when they model certain aspects of the data while others will do well in modelling other aspects.
In this episode I show with a numeric example why and when ensemble methods work.
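A numeric example along these lines: take three independent classifiers, each correct 70% of the time, combined by majority vote. The short script below - an illustrative sketch, not the episode's own example - enumerates the eight possible outcomes and sums the probability that at least two members are right.

```python
from itertools import product

# Three independent classifiers, each correct with probability p = 0.7.
p = 0.7
ensemble_acc = 0.0
for outcomes in product([True, False], repeat=3):
    prob = 1.0
    for correct in outcomes:
        prob *= p if correct else (1 - p)
    # The majority vote is correct when at least 2 of the 3 members are correct.
    if sum(outcomes) >= 2:
        ensemble_acc += prob

print(round(ensemble_acc, 3))  # 0.784
```

The majority vote reaches about 78.4% accuracy, better than any of the 70%-accurate members - provided their errors are independent, which is the crucial assumption behind ensembling.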

Oct 03, 2017
Episode 22: Parallelising and distributing Deep Learning
00:19:42

Continuing the discussion of the last two episodes, there is one more aspect of deep learning that I would love to consider, and have therefore left for a full episode: parallelising and distributing deep learning on relatively large clusters.

As a matter of fact, computing architectures are changing in a way that encourages parallelism more than ever before. Deep learning is no exception, and despite the great improvements achieved with commodity GPUs (graphics processing units), when it comes to speed there is still room for improvement.

Together with the last two episodes, this one completes the picture of deep learning at scale. Indeed, as I mentioned in the previous episode, How to master optimisation in deep learning, the function op [...]
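As a rough sketch of what data-parallel training looks like - the two-worker setup and the toy linear model below are my own illustration, not code from the episode - each worker computes a gradient on its own data shard, and the gradients are then averaged before the shared parameters are updated.

```python
# Toy simulation of synchronous data-parallel training.
def grad(w, shard):
    # Gradient of the mean squared error for the model y = w * x on one shard.
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

data = [(x, 2.0 * x) for x in range(1, 9)]   # ground truth: y = 2x
shards = [data[0:4], data[4:8]]              # split across two "workers"

w = 0.0
for _ in range(200):
    grads = [grad(w, s) for s in shards]     # computed in parallel in practice
    w -= 0.01 * sum(grads) / len(grads)      # synchronous gradient averaging

print(round(w, 3))  # 2.0
```

In a real cluster each shard's gradient would be computed on a separate machine or GPU, and the averaging step is where communication cost enters the picture.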

Sep 25, 2017
Episode 21: Additional optimisation strategies for deep learning
00:15:08

In the last episode How to master optimisation in deep learning I explained some of the most challenging tasks of deep learning and some methodologies and algorithms to improve the speed of convergence of a minimisation method for deep learning.
I explored the family of gradient descent methods - even though not exhaustively - giving a list of approaches that deep learning researchers are considering for different scenarios. Every method has its own benefits and drawbacks, pretty much depending on the type of data, and data sparsity. But there is one method that seems to be, at least empirically, the best approach so far.
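One member of the gradient descent family discussed above is momentum, which smooths updates by accumulating a velocity term. Here is a minimal sketch on a one-dimensional quadratic; the learning rate and decay values are illustrative assumptions, not the episode's recommendations.

```python
# Gradient descent with momentum on f(w) = w^2.
def grad(w):
    return 2 * w

w, v = 5.0, 0.0
lr, beta = 0.1, 0.9
for _ in range(500):
    v = beta * v + grad(w)   # accumulate a velocity term from past gradients
    w -= lr * v              # step along the smoothed direction
```

Setting `beta = 0` recovers plain gradient descent; larger values let past gradients damp oscillations across steep directions.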

Feel free to listen to the previous episode,

Sep 18, 2017
Episode 20: How to master optimisation in deep learning
00:19:29

The secret behind deep learning is not really a secret. It is function optimisation. What a neural network essentially does, is optimising a function. In this episode I illustrate a number of optimisation methods and explain which one is the best and why.
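The point that a neural network essentially optimises a function can be illustrated with the simplest optimiser of all: plain gradient descent on a toy quadratic. The function and step size below are my own illustrative choices.

```python
# Minimise f(w) = (w - 3)^2 by repeatedly stepping against the gradient.
def grad(w):
    return 2 * (w - 3)

w, lr = 0.0, 0.1
for _ in range(100):
    w -= lr * grad(w)   # move opposite to the slope, scaled by the learning rate

print(round(w, 4))  # 3.0
```

Training a neural network follows the same loop, only with millions of parameters and a loss surface that is far from this nicely convex parabola - which is exactly why the choice of optimisation method matters.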

Aug 28, 2017
Episode 19: How to completely change your data analytics strategy with deep learning
00:15:56

Over the past few years, neural networks have re-emerged as powerful machine-learning models, reaching state-of-the-art results in several fields like image recognition and speech processing. More recently, neural network models started to be applied also to textual data in order to deal with natural language, and there too with promising results. In this episode I explain why deep learning performs the way it does, and what some of the most tedious causes of failure are.

Aug 09, 2017
Episode 18: Machines that learn like humans
00:42:06

Artificial Intelligence allows machines to learn patterns from data. The way humans learn, however, is different and more efficient. With Lifelong Machine Learning, machines can learn the way human beings do: faster, and more efficiently.

Mar 28, 2017
Episode 17: Protecting privacy and confidentiality in data and communications
00:17:31

Talking about security of communication and privacy is never enough, especially when political instabilities are driving leaders towards decisions that will affect people on a global scale.

Feb 15, 2017
Episode 16: 2017 Predictions in Data Science
00:20:31

We strongly believe 2017 will be a very interesting year for data science and artificial intelligence. Let me tell you what I expect and why.

Dec 23, 2016
Episode 15: Statistical analysis of phenomena that smell like chaos
00:10:14

Is the market really predictable? How do stock prices increase? What are their dynamics? Here is what I think about the magic and the reality of predictions applied to markets and the stock exchange.

Dec 05, 2016