Rosa Cao: Are apparently successful DNN models also truly explanatory? | Philosophy of Deep Learning

1,290 views

NYU Center for Mind, Brain and Consciousness


1 year ago

A talk by Rosa Cao as part of a 2023 workshop on The Philosophy of Deep Learning: wp.nyu.edu/consciousness/the-...

Comments: 1
@labsanta (1 year ago)
My notes:

Rosa Cao (Stanford) discusses whether apparently successful DNN models are also truly explanatory.
* Philosophical questions about AI and machine learning.
* Questioning what the success of DNN models actually consists in.
* Investigating representations in DNN models to understand how they work theoretically.
* Asking whether DNN models have internal states relevantly like ours.
* Using activations in intermediate layers of neural networks to predict neural activity in the brain.

Comparing models' abilities to human vision and language use, and asking whether they have representations similar to ours.
* Models trained for image classification are unexpectedly good at predicting neural activity.
* Large language models are sometimes relevantly like us in language generation, but not always in language use.
* The explanatory project involves comparing models to human behavior and neural activity.
* Examining whether models have human-like representations is a route to understanding their inner workings.
* Assuming humans have world models, do chatbots have them too?

Comparing human and chatbot capacities to get at internal models and representations.
* Behavior as a criterion for attributing internal models.
* Two approaches to assigning representations: naturalistic theories of content and neuroscientific practice.

Probing neural networks as artificial organisms; the type of probe used affects the patterns found.
* Machine learning researchers treat these systems as artificial organisms.
* Probes are sophisticated decoders that identify patterns in activations.
* Probes can "find" patterns that aren't genuinely present.
* Patterns should be manipulable and causally involved in behavior.
* Downstream processes are useful for evaluating representational content.

The nature of representations in deep neural networks and their relation to successful prediction, with the example of a GPT-like model trained on Othello game transcripts.
* Successful interpretation of representations suggests their existence.
* Representations may be less objective than initially thought.
* Probing inner states can reveal representations of board state in a GPT-like model trained on Othello game transcripts.
* Successful decoding and manipulation of these representations suggest they are real and usable.
* The debate centers on whether successful deep neural network models capture something real about the world.

Models versus theories in science, with examples from machine learning and psychology.
* Distinguishing between models and theories in science.
* Model-based versus model-free approaches in psychology.
* The importance of discerning what a model actually claims about the world.
* Analogous contrasts elsewhere in science, such as realism versus instrumentalism.
* The need to consider which functional roles and contents are shared by internal models.

Limitations of attributing a world model to an artificial system, and alternative ways to assess such attributions.
* The ability to separate out different pieces of the system.
* What an artificial system's world model would be.
* Language use and how the system carves up the world.
* What it would take for an artificial system to have beliefs.
* Assessing the effectiveness of the model attribution.

Different types of internal representations in neural networks and how they relate to traditional computational models.
* Neural network models are more like model organisms than like traditional computational models.
* Different types of internal representations (e.g., perceptual representations vs. world models) should be distinguished.
* The speaker doubts there is a sharp difference between truly explanatory and non-truly-explanatory models.

Q&A: representations and their types, the limits of models in capturing them, and real-world applications.
* Different kinds of representations and their distinctions should be made clear.
* Intuitions about models' understanding stem from introspection.
* Our access to our own reasoning patterns is itself mediated by brain activity.
* Real-world applications of the research are discussed.

Using probes to understand brain activity, with caution against automatically identifying neural activity with mental states.
* Building a probe to access brain inputs and transformations.
* Difficulties in decoding beliefs from brain activity.
* Caution against automatically identifying neural activity with theoretical representations.
* Emphasis on pluralism and the multiple uses of models in science.
* Different aspects of models become salient in different contexts.

Susanna Schellenberg asks about understanding in artificial systems compared to humans.
* Schellenberg introduces herself as a philosopher and cognitive scientist at Rutgers.
* Her takeaway: the difference between meaning and understanding in humans versus non-sensory-grounded artificial systems.
* She asks for ways to assess and rank different types of understanding.
* The speaker suggests artificial systems lack the phenomenology and joy of understanding that humans have.
* Understanding in more complex artificial systems is more likely to play a role in expanding their capacities.
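The probing method the notes describe — train a simple decoder on a network's intermediate activations, then test whether the decoded pattern can be manipulated — can be sketched in a few lines. This is a toy illustration under stated assumptions, not the talk's or the Othello experiments' actual setup: the "activations" here are synthetic, and the edit only flips the probe's own readout, whereas a genuine causal test would patch the edited activation back into the network and check that downstream behavior changes.

```python
# Toy sketch of activation probing (illustrative; synthetic data, not the
# talk's actual experiments).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend hidden activations: 200 samples, 16 dimensions, with one direction
# that linearly encodes a binary property of the input.
n, d = 200, 16
encode_dir = rng.normal(size=d)
acts = rng.normal(size=(n, d))
labels = (acts @ encode_dir > 0).astype(int)

# The probe is just a linear classifier trained on the activations.
probe = LogisticRegression().fit(acts[:150], labels[:150])
acc = probe.score(acts[150:], labels[150:])
print(f"probe accuracy: {acc:.2f}")

# A minimal "manipulation": reflect one activation across the probe's
# decision boundary and confirm the decoded property flips. Cao's caution
# applies: high probe accuracy alone doesn't show the network *uses* this
# pattern; that requires intervening and observing a behavioral change.
w, b = probe.coef_[0], probe.intercept_[0]
x = acts[150].copy()
x_edit = x - 2.0 * (x @ w + b) / (w @ w) * w
flipped = probe.predict(x_edit[None])[0] != probe.predict(x[None])[0]
print("readout flipped after edit:", flipped)
```

Because the synthetic labels are exactly linear in the activations, the probe decodes them well; the interesting philosophical question the talk raises is what such decodability does and does not show.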