When most people want to learn about Naive Bayes, they want to learn about the Multinomial Naive Bayes Classifier – which sounds really fancy, but is actually quite simple. This video walks you through it one step at a time, and by the end you’ll no longer be naive about Naive Bayes!
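The idea the video builds up can be sketched in a few lines of plain Python. This is a minimal, from-scratch illustration with hypothetical toy messages (the words and labels below are made up for the example); it picks the class with the highest log posterior, using Laplace smoothing so unseen words don't zero out a class.

```python
from collections import Counter
import math

# Hypothetical toy training data: word lists labeled "normal" or "spam".
train = [
    (["dear", "friend", "lunch"], "normal"),
    (["dear", "friend", "money"], "normal"),
    (["money", "money", "viagra"], "spam"),
]

# Per-class word frequencies and class priors.
word_counts = {"normal": Counter(), "spam": Counter()}
class_counts = Counter()
for words, label in train:
    word_counts[label].update(words)
    class_counts[label] += 1

vocab = {w for words, _ in train for w in words}

def log_posterior(words, label, alpha=1.0):
    """Log prior plus summed log word likelihoods, with Laplace smoothing alpha."""
    total = sum(word_counts[label].values())
    lp = math.log(class_counts[label] / sum(class_counts.values()))
    for w in words:
        lp += math.log((word_counts[label][w] + alpha) / (total + alpha * len(vocab)))
    return lp

msg = ["dear", "friend"]
pred = max(word_counts, key=lambda c: log_posterior(msg, c))
print(pred)  # → normal
```

Working in log space avoids multiplying many small probabilities together, which would underflow on realistic vocabularies.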
Here’s an interesting session from the SciPy 2020 virtual conference.
As a foundational tutorial in statistics and Bayesian inference, the intended audience is Pythonistas who are interested in gaining a foundational knowledge of probability theory and the basics of parameter estimation. Knowledge of `numpy`, `matplotlib`, and Python is a prerequisite for this tutorial, in addition to curiosity and an excitement to learn new things!
While COVID is not directly referenced, it’s clear that the current pandemic counts as an extreme event.
Nassim Nicholas Taleb spent 21 years as a risk taker before becoming a researcher in philosophical, mathematical, and (mostly) practical problems with probability. Taleb is the author of a multivolume essay, the Incerto (The Black Swan, Fooled by Randomness, and Antifragile), covering broad facets of uncertainty. It has been translated into 36 languages. In addition to his trader life, Taleb has also published, as a backup of the Incerto, more than 45 scholarly papers in statistical physics, statistics, philosophy, ethics, economics, international affairs, and quantitative finance, all around the notion of risk and probability. He spent time as a professional researcher (Distinguished Professor of Risk Engineering at NYU’s School of Engineering and Dean’s Professor at U. Mass Amherst). His current focus is on the properties of systems that can handle disorder (“antifragile”). Taleb refuses all honors and anything that “turns knowledge into a spectator sport”.
Lex Fridman interviews Michael Jordan – not that Michael Jordan.
Michael I Jordan is a professor at Berkeley, and one of the most influential people in the history of machine learning, statistics, and artificial intelligence. He has been cited over 170,000 times and has mentored many of the world-class researchers defining the field of AI today, including Andrew Ng, Zoubin Ghahramani, Ben Taskar, and Yoshua Bengio. This conversation is part of the Artificial Intelligence podcast.
0:00 – Introduction
3:02 – How far are we in development of AI?
8:25 – Neuralink and brain-computer interfaces
14:49 – The term “artificial intelligence”
19:00 – Does science progress by ideas or personalities?
19:55 – Disagreement with Yann LeCun
23:53 – Recommender systems and distributed decision-making at scale
43:34 – Facebook, privacy, and trust
1:01:11 – Are human beings fundamentally good?
1:02:32 – Can a human life and society be modeled as an optimization problem?
1:04:27 – Is the world deterministic?
1:04:59 – Role of optimization in multi-agent systems
1:09:52 – Optimization of neural networks
1:16:08 – Beautiful idea in optimization: Nesterov acceleration
1:19:02 – What is statistics?
1:29:21 – What is intelligence?
1:37:01 – Advice for students
1:39:57 – Which language is more beautiful: English or French?
Siraj Raval has a video exploring a paper about genomics and creating reliable machine learning systems.
Deep learning classifiers make the ladies (and gentlemen) swoon, but they often misclassify novel data that’s not in the training set – and do so with high confidence. This has serious real-world consequences! In medicine, this could mean misdiagnosing a patient. In autonomous vehicles, this could mean ignoring a stop sign. Machines are increasingly tasked with making life-or-death decisions like that, so it’s important that we figure out how to correct this problem! I found a new, relatively obscure yet extremely fascinating paper out of Google Research that tackles this problem head on. In this episode, I’ll explain the work of these researchers, we’ll write some code, do some math, do some visualizations, and by the end I’ll freestyle rap about AI and genomics. I had a lot of fun making this, so I hope you enjoy it!
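The overconfidence problem is easy to demonstrate with nothing but the softmax itself. Below is a minimal sketch (the logits are hypothetical, standing in for a trained network's output on an input far outside its training distribution): a large logit gap yields a near-1.0 "confidence" even though the model has never seen anything like the input, which is why raw max-softmax probability is a weak signal for detecting out-of-distribution data.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of raw scores."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Hypothetical logits a classifier might emit on an out-of-distribution input.
# Nothing forces the gaps to shrink on unfamiliar data, so confidence stays high.
ood_logits = [8.0, 1.0, 0.5]
probs = softmax(ood_logits)
confidence = max(probs)
print(round(confidence, 3))  # → 0.999
```

The takeaway: a softmax always sums to 1 over the known classes, so it has no way to say "none of the above" – that is the gap the paper's detection method is trying to fill.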
Great Learning has provided this free 7 hour course on statistics for Data Science.
This course will be taught by Dr. Abhinanda Sarkar, who has his Ph.D. in Statistics from Stanford University. He has taught applied mathematics at the Massachusetts Institute of Technology (MIT); been on the research staff at IBM; led Quality, Engineering Development, and Analytics functions at General Electric (GE); and has co-founded OmiX Labs.
These are the topics covered in this full course:
Statistics vs Machine Learning – 2:22
Types of Statistics [Descriptive, Prescriptive and Predictive] – 9:05
Types of Data – 1:50:45
Correlation – 2:46:02
Covariance – 2:52:33
Introduction to Probability – 4:26:55
Conditional Probability with Bayes’ Theorem – 5:24:00
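The last topic on the list, Bayes' theorem, can be made concrete with the classic diagnostic-test calculation. The numbers below are hypothetical (a 1% base rate, a 95% true-positive rate, and a 5% false-positive rate), chosen only to show how a rare condition keeps the posterior surprisingly low even after a positive test.

```python
# Hypothetical rates for a diagnostic-test example of Bayes' theorem.
p_d = 0.01              # P(disease): base rate in the population
p_pos_given_d = 0.95    # P(positive | disease): test sensitivity
p_pos_given_not_d = 0.05  # P(positive | no disease): false-positive rate

# Law of total probability: P(positive) over both ways a positive can occur.
p_pos = p_pos_given_d * p_d + p_pos_given_not_d * (1 - p_d)

# Bayes' theorem: P(disease | positive) = P(positive | disease) P(disease) / P(positive).
p_d_given_pos = p_pos_given_d * p_d / p_pos
print(round(p_d_given_pos, 3))  # → 0.161
```

Even with a fairly accurate test, the posterior is only about 16%, because false positives from the large healthy population swamp the true positives – the intuition conditional probability is meant to build.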
Lex Fridman interviews Grant Sanderson, a math educator and creator of 3Blue1Brown, a popular YouTube channel that uses programmatically-animated visualizations to explain concepts in linear algebra, calculus, and other fields of mathematics.
0:00 – Introduction
1:56 – What kind of math would aliens have?
3:48 – Euler’s identity and his least favorite piece of notation
10:31 – Is math discovered or invented?
14:30 – Difference between physics and math
17:24 – Why is reality compressible into simple equations?
21:44 – Are we living in a simulation?
26:27 – Infinity and abstractions
35:48 – Most beautiful idea in mathematics
41:32 – Favorite video to create
45:04 – Video creation process
50:04 – Euler identity
51:47 – Mortality and meaning
55:16 – How do you know when a video is done?
56:18 – What is the best way to learn math for beginners?
59:17 – Happy moment