Episode 33: Decentralized Machine Learning and the proof-of-train
In the attempt to democratize machine learning, data scientists should be able to train their models on data they do not necessarily own, nor even see. A model that is trained privately should be verifiable and uniquely identifiable across its entire life cycle, from its random initialization to the optimal values of its parameters.
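To make the identification idea concrete, here is a toy sketch (not the actual fitchain protocol, which is not described here): hashing a model's parameters together with lifecycle metadata such as its initialization seed yields a fingerprint that changes with every update. All names and fields below are illustrative assumptions.

```python
import hashlib
import json

def model_fingerprint(params, metadata):
    # Serialize parameters and lifecycle metadata deterministically,
    # then hash them into a unique, reproducible identifier.
    payload = json.dumps({"params": params, "meta": metadata}, sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

# Hypothetical model state: a few weights plus where it came from.
fp = model_fingerprint(
    params=[0.12, -0.7, 1.5],
    metadata={"init_seed": 42, "epoch": 10},
)
print(fp[:16])  # short prefix of a stable id for this exact model state
```

The same parameters and metadata always produce the same digest, while any change to either produces a completely different one, which is what makes such a fingerprint usable across a model's life cycle.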
|Jun 11, 2018|
Episode 32: I am back. I have been building fitchain
I know, I have been away too long, publishing very little in the last three months.
If you want to collaborate on the project or just think it’s interesting, drop me a line on the contact page at fitchain.io
|Jun 04, 2018|
Founder Interview – Francesco Gadaleta of Fitchain
Cross-posting from Cryptoradio.io
Francesco Gadaleta introduces Fitchain, a decentralized machine learning platform that combines blockchain technology and AI to solve the data manipulation problem in restrictive environments such as healthcare or financial institutions. Francesco Gadaleta is the founder of Fitchain.io and senior advisor to Abe AI. Fitchain is a platform that officially started in October [...]
|May 24, 2018|
Episode 31: The End of Privacy
Data is a complex topic, not only related to machine learning algorithms, but also and especially to privacy and security of individuals, the same individuals who create such data just by using the many mobile apps and services that characterize their digital life.
In this episode I am joined by B.J. Mendelson, author of “Social Media is Bullshit” from St. Martin’s Press and world-renowned speaker on the myths and realities of today’s Internet platforms. B.J. has a new book about privacy and sent me a free copy of “Privacy, and How to Get It Back”, which I read in just one day. That was enough to realise how much we have in common when it comes to data and data collection.
|Apr 02, 2018|
Episode 30: Neural networks and genetic evolution: an unfeasible approach
Despite what researchers claim about genetic evolution, in this episode we give a realistic view of the field.
|Nov 21, 2017|
Episode 29: Fail your AI company in 9 steps
In order to succeed with artificial intelligence, it is better to know how to fail first. It is easier than you think.
|Nov 11, 2017|
Episode 28: Towards Artificial General Intelligence: preliminary talk
The enthusiasm for artificial intelligence is raising some concerns, especially with respect to hasty conclusions about what AI can really do and what its direct descendant, artificial general intelligence, would be capable of doing in the immediate future. From stealing jobs to exterminating the entire human race, the creativity (of some) seems to have no limits.
|Nov 04, 2017|
Episode 27: Techstars accelerator and the culture of fireflies
In the aftermath of my experience at the Barclays Accelerator, powered by Techstars, one of the most innovative and influential startup accelerators in the world, I’d like to give back to the community the lessons I learned, including the need for confidence, soft skills, and efficiency, as they apply to startups that deal with artificial intelligence and data science.
|Oct 30, 2017|
Episode 26: Deep Learning and Alzheimer
In this episode I speak about Deep Learning technology applied to the prediction of Alzheimer’s disease. I had a great chat with Saman Sarraf, machine learning engineer at Konica Minolta, former lab manager at the Rotman Research Institute at Baycrest, University of Toronto and author of DeepAD: Alzheimer’s Disease Classification via Deep Convolutional Neural Networks using MRI and fMRI.
I hope you enjoy the show.
|Oct 23, 2017|
Episode 25: How to become data scientist [RB]
In this episode, I speak about the requirements and the skills needed to become a data scientist and join an amazing community that is changing the world with data analytics.
|Oct 16, 2017|
Episode 24: How to handle imbalanced datasets
In machine learning, and in data science in general, it is very common to deal at some point with imbalanced datasets and skewed class distributions. This is the typical case where the number of observations belonging to one class is significantly lower than the number belonging to the other classes. In fact this happens all the time, in several domains, from finance to healthcare to social media, just to name a few I have personally worked with.
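One of the simplest fixes discussed in contexts like this is random oversampling: duplicating minority-class samples until the classes are balanced. A minimal sketch (my own illustration, not code from the episode; libraries such as imbalanced-learn offer more principled variants like SMOTE):

```python
import random
from collections import Counter

def oversample_minority(X, y, seed=0):
    # Rebalance a binary dataset by duplicating randomly chosen
    # minority-class samples until both classes have equal counts.
    rng = random.Random(seed)
    counts = Counter(y)
    majority, minority = sorted(counts, key=counts.get, reverse=True)
    deficit = counts[majority] - counts[minority]
    minority_idx = [i for i, label in enumerate(y) if label == minority]
    extra = [rng.choice(minority_idx) for _ in range(deficit)]
    X_bal = list(X) + [X[i] for i in extra]
    y_bal = list(y) + [y[i] for i in extra]
    return X_bal, y_bal

# Toy dataset: four observations of class 0, only one of class 1.
X = [[0.1], [0.2], [0.3], [0.4], [0.9]]
y = [0, 0, 0, 0, 1]
X_bal, y_bal = oversample_minority(X, y)
print(Counter(y_bal))  # both classes now equally represented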
|Oct 09, 2017|
Episode 23: Why do ensemble methods work?
Ensemble methods are designed to improve the performance of a single model when that model alone is not very accurate. In its general form, ensembling consists of building a number of individual classifiers and then combining or aggregating their predictions into one classifier that is usually stronger than any single one.
The key idea behind ensembling is that some models do well at modelling certain aspects of the data, while others do well at modelling other aspects.
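The simplest aggregation rule is a majority vote. A small sketch (my own illustration, with hypothetical model outputs) of why diverse, better-than-random models combine into something stronger:

```python
from collections import Counter

def majority_vote(predictions):
    # For each sample, return the label predicted by the most models.
    # The ensemble is right whenever more than half of its members are.
    return [Counter(col).most_common(1)[0][0] for col in zip(*predictions)]

# Three hypothetical classifiers, each wrong on a different sample:
model_a = [1, 0, 1, 1]
model_b = [1, 1, 0, 1]
model_c = [0, 1, 1, 1]
ensemble = majority_vote([model_a, model_b, model_c])
print(ensemble)  # [1, 1, 1, 1]
```

Each individual model misclassifies one sample, but because they err on different samples, the vote recovers the correct label everywhere: the errors must be (at least partly) uncorrelated for this to work.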
|Oct 03, 2017|
Episode 22: Parallelising and distributing Deep Learning
Continuing the discussion of the last two episodes, there is one more aspect of deep learning that I would love to cover and that deserves a full episode of its own: parallelising and distributing deep learning on relatively large clusters.
As a matter of fact, computing architectures are changing in a way that encourages parallelism more than ever before. Deep learning is no exception, and despite the great improvements brought by commodity GPUs (graphics processing units), when it comes to speed there is still room for improvement.
Together with the last two episodes, this one completes the picture of deep learning at scale. Indeed, as I mentioned in the previous episode, How to master optimisation in deep learning, the function op [...]
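The core of synchronous data parallelism fits in a few lines. A sketch, assuming toy gradient vectors from two hypothetical workers (this is what an all-reduce step computes in a real distributed setup):

```python
def average_gradients(worker_grads):
    # Each worker computes gradients on its own data shard; the
    # synchronised update applies the element-wise average of them all.
    n = len(worker_grads)
    return [sum(g) / n for g in zip(*worker_grads)]

# Two hypothetical workers, each with gradients from its own mini-batch:
grads_w1 = [0.2, -0.4, 0.1]
grads_w2 = [0.4, 0.0, -0.1]
avg = average_gradients([grads_w1, grads_w2])
print(avg)
```

Averaging gradients over all shards is mathematically equivalent to computing the gradient on the combined mini-batch, which is why this scheme preserves the behaviour of plain SGD while spreading the computation across machines.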
|Sep 25, 2017|
Episode 21: Additional optimisation strategies for deep learning
In the last episode, How to master optimisation in deep learning, I explained some of the most challenging tasks of deep learning and some methodologies and algorithms that improve the speed of convergence of a minimisation method.
Feel free to listen to the previous episode first.
|Sep 18, 2017|
Episode 20: How to master optimisation in deep learning
The secret behind deep learning is not really a secret: it is function optimisation. What a neural network essentially does is optimise a function. In this episode I illustrate a number of optimisation methods and explain which one is best and why.
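Every optimiser covered in discussions like this (momentum, RMSProp, Adam, ...) builds on the same baseline: step against the gradient. A minimal sketch on a toy one-dimensional function (my own illustration, not code from the episode):

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    # Repeatedly move the parameter a small step opposite the gradient,
    # shrinking the objective until it settles near a minimum.
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Minimise f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(x_min, 4))  # converges towards 3.0
```

The learning rate is the whole game here: too small and convergence crawls, too large and the iterates overshoot and diverge, which is exactly the trade-off the more sophisticated optimisers try to automate.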
|Aug 28, 2017|
Episode 19: How to completely change your data analytics strategy with deep learning
Over the past few years, neural networks have re-emerged as powerful machine-learning models, reaching state-of-the-art results in several fields like image recognition and speech processing. More recently, neural network models started to be applied also to textual data in order to deal with natural language, and there too with promising results. In this episode I explain why deep learning performs the way it does, and what some of the most tedious causes of failure are.
|Aug 09, 2017|
Episode 18: Machines that learn like humans
Artificial intelligence allows machines to learn patterns from data. The way humans learn, however, is different and more efficient. With Lifelong Machine Learning, machines can learn the way human beings do: faster and more efficiently.
|Mar 28, 2017|
Episode 17: Protecting privacy and confidentiality in data and communications
Talking about the security of communication and privacy is never enough, especially when political instabilities are driving leaders towards decisions that will affect people on a global scale.
|Feb 15, 2017|
Episode 16: 2017 Predictions in Data Science
We strongly believe 2017 will be a very interesting year for data science and artificial intelligence. Let me tell you what I expect and why.
|Dec 23, 2016|
Episode 15: Statistical analysis of phenomena that smell like chaos
Is the market really predictable? How do stock prices move? What are their dynamics? Here is what I think about the magic and the reality of predictions applied to markets and the stock exchange.
|Dec 05, 2016|