François Chollet - On the Measure of Intelligence

13,392 views

Machine Learning Street Talk

A day ago

This is the Machine Learning Street Talk episode in which Dr. Tim Scarfe, Yannic Kilcher and Connor Shorten cover François Chollet's paper "On the Measure of Intelligence". Chollet thinks that deep learning methods are great for pattern recognition but are not the route to AGI: generalisation comes from a high level of abstraction and reasoning capability. He strongly advocates that we start looking at program synthesis methods. He created the ARC dataset and Kaggle challenge to test developer-aware generalisation, and a formalism for measuring intelligence as a function of generalisation difficulty and priors.
00:00:00 MAIN SHOW FLASHY INTRO
00:09:51 SHOW STARTS
00:11:21 GENERALISATION LEVELS
00:14:21 THE G FACTOR
00:22:20 INCLUDING THE CONTEXT OF INTELLIGENCE i.e. Creators, society, evolution
00:26:51 DERMGAN PAPER - GANS TO HELP US MODEL KNOWN UNKNOWNS(?)
00:37:41 WOZNIAK COFFEE CUP vs AlphaGo and broad intelligence
00:43:11 PRIORS, CORE KNOWLEDGE (DON'T MISS THIS!)
00:46:31 MULTI TASK BENCHMARKS
00:47:01 ARC CHALLENGE (DON'T MISS!)
00:48:51 LEG AND HUTTER, UNIVERSAL INTELLIGENCE
00:54:21 CHOLLET'S FORMALISM OF INTELLIGENCE
01:02:17 SPARSE FACTOR GRAPH TO LEARN RELATIONSHIPS
01:03:31 HOW SMART IS ALPHA GO, DEVELOPER AWARE GENERALISATION
01:04:41 AUTOML ZERO
01:05:41 THE EXTENDED MIND
01:12:41 ARC CHALLENGE 2
01:20:21 Hofstadter's string analogy problem
01:22:31 HOW WOULD WE SOLVE ARC? (DON'T MISS!)
01:34:31 META LEARNING AND PROGRAM SYNTHESIS (DON'T MISS!)
01:37:21 SIMPLEST SOLUTION TO ARC, CHOLLET MAKES IMPLICIT UNSPOKEN ASSUMPTIONS?
01:40:21 DNNS ARE GLORIFIED HASH TABLES SKIT
01:43:31 MORE ARC CONVERSATION, RULE FINDING, GAN solution
01:47:11 REDDIT COMMENTS
01:51:51 COMMENT RE: MLST FROM REDDIT! (FUNNY)
01:55:31 REDDIT Q continued- meta learning, consciousness, alphago
02:14:51 LOOKING AT KAGGLE SOLUTIONS
02:16:41 BACK TO REDDIT COMMENTS
02:17:51 BUILDING A GENERATIVE MODEL
02:23:51 FINAL TAKES ON PAPER
02:32:31 TIM'S FINAL TAKES ON CHOLLET
Paper: arxiv.org/abs/1911.01547
Abstract:
"To make deliberate progress towards more intelligent and more human-like artificial systems, we need to be following an appropriate feedback signal: we need to be able to define and evaluate intelligence in a way that enables comparisons between two systems, as well as comparisons with humans. Over the past hundred years, there has been an abundance of attempts to define and measure intelligence, across both the fields of psychology and AI. We summarize and critically assess these definitions and evaluation approaches, while making apparent the two historical conceptions of intelligence that have implicitly guided them. We note that in practice, the contemporary AI community still gravitates towards benchmarking intelligence by comparing the skill exhibited by AIs and humans at specific tasks such as board games and video games. We argue that solely measuring skill at any given task falls short of measuring intelligence, because skill is heavily modulated by prior knowledge and experience: unlimited priors or unlimited training data allow experimenters to "buy" arbitrary levels of skills for a system, in a way that masks the system's own generalization power. We then articulate a new formal definition of intelligence based on Algorithmic Information Theory, describing intelligence as skill-acquisition efficiency and highlighting the concepts of scope, generalization difficulty, priors, and experience. Using this definition, we propose a set of guidelines for what a general AI benchmark should look like. Finally, we present a benchmark closely following these guidelines, the Abstraction and Reasoning Corpus (ARC), built upon an explicit set of priors designed to be as close as possible to innate human priors. We argue that ARC can be used to measure a human-like form of general fluid intelligence and that it enables fair general intelligence comparisons between AI systems and humans."
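The abstract's core idea, intelligence as skill-acquisition efficiency with respect to priors and experience, can be caricatured in a few lines. This is a toy sketch, not Chollet's actual Algorithmic-Information-Theory formalism; the numbers and the simple subtraction of priors are illustrative assumptions only:

```python
# Toy caricature (NOT Chollet's actual AIT-based formula): score a system
# by the skill it acquires beyond its built-in priors, per unit of
# experience consumed. All quantities below are illustrative.

def skill_acquisition_efficiency(skill, priors, experience):
    """Skill attributable to the system itself, divided by experience used."""
    if experience <= 0:
        raise ValueError("experience must be positive")
    own_contribution = max(skill - priors, 0.0)
    return own_contribution / experience

# Two systems reach the same skill (0.9). System A ships with heavy priors
# and lots of training data; system B starts leaner and learns from less.
a = skill_acquisition_efficiency(skill=0.9, priors=0.5, experience=1000.0)
b = skill_acquisition_efficiency(skill=0.9, priors=0.1, experience=100.0)
# Same final skill, but B "bought" less of it, so B scores higher.
```

This is the abstract's point about "buying" skill: measured skill alone cannot distinguish A from B, but efficiency relative to priors and experience can.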

Comments: 43
@AbdennacerAyeb, 4 years ago
The best channel for staying up to date on AI. Thank you to the whole team.
@user-xg6ez8mj7i, 4 years ago
Very insightful and informative; hope to see this kind of discussion more often.
@theexplorerJP, 3 years ago
Awesome discussion
@joefioti5698, 4 years ago
You guys are fantastic, you should post the audio from these episodes as a podcast so people can listen from podcast apps too. Keep up the great work!
@machinelearningdojowithtim2898, 4 years ago
We do! Search for machine learning street talk podcast 😀😀
@joefioti5698, 4 years ago
@@machinelearningdojowithtim2898 Nice, will be listening regularly!
@JinayShah, 4 years ago
Never been so early! Just started listening....
@abby5493, 4 years ago
Best video you have made! 😃
@sayakpaul3152, 4 years ago
The return!
@johnvanderpol2, a year ago
A hard task is done by dividing it into smaller tasks. Knowing how to divide the task is just another task.
@PeterOtt, 4 years ago
Alright this paper might be the best paper of 2019, but Tim, can we also get more cat footage? I see that little guy back there around 10 minutes 🐈
@abby5493, 4 years ago
I second more cat footage 🐈
@MachineLearningStreetTalk, 4 years ago
That is Kina the cat! There is actually an introduction to Kina and we asked her what she thought about Chollet rejecting the universal intelligence theory. Next week we will shout out the first person to find the time index! 😂
@dr.mikeybee, 2 years ago
Natural-language-to-SPARQL translation is a game changer.
@billcosby8411, a year ago
Yes, it would completely revolutionize enterprise analytics.
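To make this thread's idea concrete, here is a minimal template-based natural-language-to-SPARQL sketch. Production systems use semantic parsers or LLMs rather than regex templates, and the `:worksIn` predicate and company-graph ontology below are purely hypothetical:

```python
# Hypothetical sketch: map English question templates to SPARQL over a
# made-up company knowledge graph. Illustrative only; not a real system.
import re

TEMPLATES = [
    # (question pattern, SPARQL template; {0} is filled from the match)
    (re.compile(r"who works in (\w+)", re.I),
     "SELECT ?person WHERE {{ ?person :worksIn :{0} }}"),
    (re.compile(r"how many employees in (\w+)", re.I),
     "SELECT (COUNT(?p) AS ?n) WHERE {{ ?p :worksIn :{0} }}"),
]

def nl_to_sparql(question):
    """Return a SPARQL query for the first matching template, else None."""
    for pattern, template in TEMPLATES:
        m = pattern.search(question)
        if m:
            return template.format(m.group(1))
    return None

query = nl_to_sparql("Who works in Sales?")
# query == 'SELECT ?person WHERE { ?person :worksIn :Sales }'
```

The "game changer" in the comment is that modern models learn this mapping rather than requiring hand-written templates; the sketch just shows the input/output contract.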
@paxdriver, 3 years ago
I think GANs are the key to AGI. Human intelligence is predicated on our propensity to forecast and hypothesize everything from body language to intonation, where speech in this case would be the data. A GAN that keeps training the neural network on its hypotheses, in parallel with the algorithm operating in the wild, could over time create intelligence. But even then, no matter what, it would always have to ask a human "is this result significant to you humans?" because it is pattern matching "like" intelligence, but its acuity for generalization is always ultimately engineered by the model's structure. Our brains' model was evolved, but there's also quantum mechanics flipping our biochemical switches in a continuum. Silicon isn't an instantiation of consciousness, it's an expression of consciousness. But by that argument our consciousness is then the expression of the sun's... lol, philosophy is magnificently intricate.
@dr.mikeybee, 2 years ago
Is it possible to include guesses with a confidence metric in datasets in order to make extrapolation possible? Give known training data 100% confidence and guesses at the far extremes of the manifold low confidence.
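The commenter's idea amounts to per-example confidence weights in the loss function. A minimal illustrative sketch (real frameworks expose this as sample weights on the loss; the numbers below are made up):

```python
# Sketch: weight each training example's loss contribution by a
# confidence score, so speculative "guesses" near the edge of the data
# manifold pull on the model less than verified labels do.

def weighted_mse(examples):
    """examples: list of (prediction, target, confidence in [0, 1])."""
    total_weight = sum(c for _, _, c in examples)
    if total_weight == 0:
        raise ValueError("at least one example needs nonzero confidence")
    return sum(c * (p - t) ** 2 for p, t, c in examples) / total_weight

data = [
    (0.9, 1.0, 1.0),   # verified label: full confidence
    (0.2, 1.0, 0.1),   # extrapolated guess: low confidence
]
loss = weighted_mse(data)
```

The large error on the low-confidence guess contributes only a tenth of its unweighted value, which is exactly the trade-off the comment proposes.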
@tobiasurban8065, a year ago
Great talk! The sunglasses are extremely rude.
@99dynasty, 2 years ago
To generalize, that was quite a roasting 😂
@andybaldman, a year ago
I've been looking for a systematic way to link consciousness and information, and this is the closest thing I've found. Consciousness is what converts information (experience) from the realm beyond language and ideas into the realm of spreadable/communicable information.
@TheReferrer72, 4 years ago
Geoffrey Hinton: It seemed to me there's no other way the brain could work. It has to work by learning the strength of connections. And if you want to make a device do something intelligent, you’ve got two options: You can program it, or it can learn. And people certainly weren't programmed, so we had to learn. This had to be the right way to go. And we are still having this debate....
@TimScarfe, 4 years ago
Peter, Chollet is still advocating for learning (programs)
@snippletrap, 3 years ago
"People certainly weren't programmed". This is laughably false. People are born with countless biases and unlearned subroutines. Some of them, like recognizing faces and language, begin in infancy and can be "switched off" or "broken", as in Broca's aphasia or visual agnosia. The brain is programmed. And it is programmed, in part, to learn.
@JasonBlank, 2 years ago
@@snippletrap Hinton cites the Baldwin effect as to how learned behaviours become innate. So for him it is still learning.
@flaskapp9885, 3 years ago
Hey Tim, could you please recommend papers or books like these? Thanks!
@artandculture5262, a year ago
Dehumanization is the way!
@dr.mikeybee, 2 years ago
Solving those ARC puzzles can be done the same way as winning Atari games. An adversarial network should solve every one. The difference is that the computer has to "design" its own objective function. What we are able to do is guess what the test designer wants, based on having seen similar puzzles. Therefore there is an initial categorization problem: choose a puzzle type and matching objective function from a set of puzzles and objective functions. Once an objective function is selected, run its corresponding GAN to find the solution. One needs the appropriate priors to "recognize" a solution. We need priors, and so does an ML algorithm.
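For context on how ARC has actually been attacked, a simpler baseline than a per-puzzle GAN is brute-force program search over a small DSL of grid transforms, which is roughly what the leading Kaggle entries did. A minimal sketch, with a four-primitive DSL chosen purely for illustration:

```python
# Minimal brute-force program synthesis for ARC-style tasks: try each
# program in a tiny DSL and keep the first one consistent with all
# training input/output pairs. Grids are lists of lists of ints.

def identity(g): return [row[:] for row in g]
def flip_h(g):   return [row[::-1] for row in g]      # mirror left-right
def flip_v(g):   return g[::-1]                        # mirror top-bottom
def rot90(g):    return [list(r) for r in zip(*g[::-1])]

DSL = [identity, flip_h, flip_v, rot90]

def synthesize(train_pairs):
    """Return the first DSL program matching every (input, output) pair."""
    for program in DSL:
        if all(program(inp) == out for inp, out in train_pairs):
            return program
    return None  # no program in this tiny DSL explains the examples

train = [([[1, 0], [0, 0]], [[0, 1], [0, 0]])]  # output is a horizontal flip
prog = synthesize(train)  # finds flip_h; apply prog(...) to the test grid
```

Real ARC solvers compose hundreds of primitives and search deep, but the loop is the same: verify a candidate program against the demonstration pairs, then apply it to the test input. This is the program synthesis direction Chollet advocates in the episode.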
@MMc9081, 3 years ago
Yannic says the only thing (in a system) you can really measure is skill (13:55). Is the ability to gain skills efficiently at a previously unknown task a skill in itself? Or is this actually intelligence? Maybe it is just semantics, but the answer may be both. Chollet states (p.27) that "skill is merely the output of the process of intelligence", so he clearly wants to distinguish between them. By the definition of intelligence given in the paper (also p.27), "The intelligence of a system is a measure of its skill-acquisition efficiency over a scope of tasks, with respect to priors, experience, and generalization difficulty" and assuming that Yannic is correct that skill IS all you can measure, then intelligence IS a skill. But, in Chollet's view, it seems intelligence is a unique skill, as it is a meta-skill of ALL skills.
@PatrickOliveras, 3 years ago
Now my goal will be to build the first AI that has a prior that consists of the entire catalog of IKEA
@CopperKettle, a year ago
Normalize the sound levels, please
@dr.mikeybee, 2 years ago
Do I know symmetry as a concept? Or do I just know a bunch of examples labeled as symmetry? Is that the same thing? I can also give a definition -- which is, of course, just another list, this time of properties. In a CNN, a machine can learn to properly classify something as symmetric or asymmetric. Is that fundamentally different from what I do? Do I exist in a realm of Platonic forms? Or do I simply have a model in my brain that I run my vision through? I really don't know the answer to this, but more and more, I suspect that my own intelligence works in all the same ways ML works and can work -- just with more compute and better priors. Nature doesn't usually reinvent what works well. We see this across diverse species. And make no mistake, ML is as much a natural process as a bird building a nest.
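Whatever the philosophical answer, the procedural content of "symmetry" compresses to very little code, which is part of why the question bites. A minimal check for mirror symmetry on a grid:

```python
# "Knowing" mirror symmetry, procedurally: a grid is left-right
# symmetric iff every row equals its own reversal.

def is_h_symmetric(grid):
    """True if the grid equals its left-right mirror image."""
    return all(row == row[::-1] for row in grid)

assert is_h_symmetric([[1, 2, 1], [0, 3, 0]])   # mirror-symmetric rows
assert not is_h_symmetric([[1, 2, 3]])           # asymmetric row
```

A CNN that classifies symmetry learns an approximation of this predicate from examples; whether possessing the closed-form rule is a different *kind* of knowledge is exactly the commenter's open question.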
@DistortedV12, 3 years ago
I thought Francois was actually going to join in
@andreww.8262, 2 years ago
Can you guys work on a better video ranking algo for YouTube so people actually get to see your videos? Haha
@GarrethOriley, a year ago
If the task is human intelligence (the only known, and completely undefined, intelligence), teach an AI abstracting logic and you'd still hit that barn more often than not.
@programmabilities, a year ago
this dude is just reading lines from his smart-glasses
@filoautomata, 3 years ago
A human child has general intelligence. An entity has general intelligence if it can create a cup of coffee in an average household. A human child that has never known coffee cannot create coffee in an average household. Therefore this child does not have general intelligence. This does not make sense. You need data, whether little or big, in order to create an actionable model. Even humans need repeated experience in order to learn, especially on tasks they are still bad at. If a task is quite similar to another task, or the rules for this particular task are a subset of rules previously learned, then of course the agent will be able to do the task from few-shot or even zero-shot samples.
@donaldrobertson1808, a year ago
The two men doeth lie too often
@GarrethOriley, a year ago
So after 2h 30min talk you realize you should have first discussed your alignment?