Are Brain Organoids Conscious? AI? Christof Koch on Consciousness vs. Intelligence in IIT [Clip 4]

2,644 views

Ihm Curious

A day ago

Christof Koch explains why brain organoids are conscious, but AI is not, according to Integrated Information Theory (IIT). Dr. Koch is a contributor to IIT and studies consciousness at the Allen Institute for Brain Science, which he used to run.
Full interview: / ihmcurious
More clips: • Christof Koch Intervie...
For a more complete description of IIT, see the books and audiobooks below.
Video about researchers calling IIT "pseudoscience": • Consciousness Theory D...
Get Christof's books and audiobooks (I get a small commission, at no cost to you, which supports the channel)
- BOOK ABOUT IIT: The Feeling of Life Itself: Why Consciousness Is Widespread but Can't Be Computed amzn.to/3xiJvsd
- UPCOMING BOOK: Then I Am Myself the World: What Consciousness Is and How to Expand It (Preorder for May 7, 2024) amzn.to/3vt45FX
- EASY TO READ: Consciousness: Confessions of a Romantic Reductionist amzn.to/49oPBom
Christof's pen (MUJI pen, writes like a dream, great colors): amzn.to/4aPGvCa

Comments: 40
@mkp8176 · A month ago
Love this series so much!
@cate01a · A month ago
Hope you eventually upload the full interview on YT for free after a bit
@GeoffryGifari · 11 days ago
But what kind of architecture can we assign consciousness to, though? Is there a clear line? What about neuromorphic computing utilizing memristors, for example? What if quantum computers become strong enough?
@superawesomegoku6512 · A month ago
I feel like consciousness is an emergent quality of intelligence and the connections of neurons
@alkeryn1700 · A month ago
I think the exact opposite: our neurons and the whole physical world are emergent from consciousness.
@BootyRealDreamMurMurs · A month ago
@alkeryn1700 Why not both, as a biconditional relationship?
@spinningaround · A month ago
Is there a full version?
@ihmcurious · A month ago
The full interview is up on patreon.com/IhmCurious, and more clips are on the way.
@henrycardona2940 · A month ago
We know how to radically boost our intelligence and create intelligent machines, but what does a radically conscious being look like?
@braphog21 · A month ago
Christof is completely misguided at 11:05 when he's explaining the differences between neurons and LLMs. Yes, LLMs like GPT-4 run on transistors, which only have 3 connections each, but that's looking too closely; Christof needs to zoom out and realise that there is a higher structure above these transistors which actually mimics neurons quite well. Biological neurons are analogue and have thousands of incoming connections and thousands of outputs. Artificial neurons can also have thousands of incoming connections and thousands of outputs (like in a feed-forward network). It's also important to note that these networks of artificial neurons are non-linear (the maths is done using floating-point arithmetic, which is non-linear, and the activation functions are also non-linear), so they cannot be reduced to a Perceptron network, which is very different from biological neurons.

So if we can create artificial neurons that very closely mimic the structure of biological neurons, and we can create networks of these artificial neurons that behave similarly to biological neurons (i.e. they both have an ability to gain intelligence by perceiving things), then what separates the two? The two neurons' functionality appears to be very similar, which only leaves the possibility of the network architecture being different. Why would current-day architectures of neural networks not contain any 'consciousness' at all but contain a lot of 'intelligence'? It's clear that consciousness and intelligence are NOT orthogonal concepts, but that consciousness is a property of intelligence that increases as intelligence increases.

I think Christof is wrong to ascribe so much consciousness to a tiny brain organoid that has very low intelligence. I also think he's wrong to ascribe so much intelligence to GPT-4 and yet so little consciousness (that's not to say I think GPT-4 is very conscious; I don't think it's very smart. It definitely is a little intelligent, so I think it has a little bit of consciousness).
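The artificial-neuron claim in the comment above can be sketched in a few lines: a single unit with thousands of incoming connections and a non-linear activation. This is a hypothetical toy (the input size and ReLU choice are illustrative assumptions, not GPT-4's actual architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy: one artificial neuron with thousands of inputs.
# 4096 is an illustrative number, not taken from any real model.
n_inputs = 4096
x = rng.standard_normal(n_inputs)   # incoming signals
w = rng.standard_normal(n_inputs)   # one weight per connection
b = 0.1                             # bias term

def relu(z):
    """Non-linear activation: clamps negative values to zero."""
    return np.maximum(z, 0.0)

pre = w @ x + b    # weighted sum over thousands of connections (linear part)
out = relu(pre)    # the non-linearity that keeps stacked layers from
                   # collapsing into a single linear map
```

Without the activation function, stacking layers would reduce to one matrix product, which is the commenter's point about why these networks are more than a Perceptron-style linear map.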
@notthere83 · A month ago
Maybe so, but I believe there's also something you're missing: continuity is key for consciousness. Just try to imagine if you fell asleep every few milliseconds and then woke up again with completely different inputs, only to fall asleep again after a few milliseconds. You wouldn't be able to form a sense of anything. Maybe if LLMs were running continuously and receiving inputs all the time, one could argue that they're conscious, but that's not what's happening. They're not even constructed to proactively do anything, but only process data once they're given input. I don't see how anything constructed like that could ever qualify as "conscious".
@emilyvs2252 · 14 days ago
Maybe his argument hinges on complexity? I don't know exactly how you can calculate that, but maybe he's saying the human brain has a complexity of neurons^50-100000 while computers have one of neurons^2-3. Also, the idea that computers could only be conscious if they mimic human brains seems a bit problematic to me, since we don't even really know what consciousness is or how it is tied to brains specifically. So the idea that consciousness has some correlation with complexity makes a little bit more sense to me.

But still... if you compare neurons to pixels and colours to emotions: let's say we have a very simple organism that has only two "pixels" that can experience only two colours, black and white. Black means pain and white means happiness. If both pixels were black, couldn't it be that the experience of that blackness would be just as intense as the deepest imaginable pain for a creature with billions of pixels? It seems to me that the experience of signals might be separate from the signals themselves. So measuring consciousness just by complexity doesn't seem too convincing to me either. Just because a creature has less nuance in the way it experiences sensations, its experience doesn't have to be any 'lesser' for it. So, going back to computers, maybe complexity doesn't really answer the question of whether or not it has 'it': consciousness.
@raresmircea · 8 days ago
Intelligence and consciousness *are* orthogonal. From philosopher David Pearce in conversation with someone: _"Our most intense experiences (like agony, uncontrollable panic, pleasure) are mediated by the most evolutionarily ancient regions of the brain._ _Compare introspection & logico-linguistic thinking. Intensity of experience doesn't vary with intelligence. A gifted mathematician isn’t more predisposed to torture than a person with low IQ"._ Also him: _"Not just wrong but ethically catastrophic. Pretend for a moment that Dennett is a realist about phenomenal consciousness rather than a quasi-behaviorist. If the dimmer-switch (or high-dimensional versus low-dimensional) metaphor of human and nonhuman animal consciousness were correct, then we would expect "primitive" experiences like agony, terror and orgasm to be faint and impoverished, whereas generative syntax and logico-linguistic thought-episodes would be extremely vivid and intense. The opposite is the case. Our most intense experiences are also the most evolutionarily ancient. Humans should treat nonhuman animals accordingly."_
@angellestat2730 · A month ago
Consciousness explained by Harrison Ford.
@sipper2136 · A month ago
It doesn't seem obvious to me why the complexity of the causal relationships at or between each node (transistors or neurons) should scale consciousness more than an increased number of nodes that is able to generate equally complex transformations. Certainly the ability of nodes to interact is necessary but I see no reason why you could not view the system instead at a higher level of abstraction and group nodes together into supernodes simply for the purposes of the consciousness calculation. You would have fewer nodes with more diverse connections leading to a higher calculation which leads me to believe that placing the level of abstraction at the smallest information processing unit as opposed to anywhere else (including the system in total) is arbitrary.
@ihmcurious · A month ago
In IIT, the system can be viewed at higher levels of abstraction. If there is more intrinsic causal power (higher phi) at a higher level, then that level would be where consciousness emerges. On the other hand, abstracting to a higher level reduces granularity, and higher-level things are generally causally dependent on lower-level things, so there's often more causal power to find at lower levels. For these and other reasons, phi is often lower at higher levels of abstraction. But maybe if you go down too low, e.g. to the level of quantum uncertainty, you won't find much intrinsic causal power, since higher-level structures are robust to these fluctuations (i.e. they may be differences that don't "make a difference" from the intrinsic perspective of the system). IIT is very complicated and probably wrong, but Giulio Tononi has anticipated many objections. See the books in the description, or the many Google-able and open-access papers on IIT, if you're interested in a more thorough discussion than I was able to have in this interview.
@filippomilani9014 · A month ago
The only doubt I have about the way Christof Koch places brain organoids in the graph is that he seems to automatically place the organoid at a higher level of consciousness than the jellyfish (also due to the way the graph is made in the first place), even though he knows that the brain organoid has no input/perception and no output/action. I would think that without input or output it would be very difficult to have any conscious experience, since in nature consciousness seems to generally scale with intelligence (as in the capacity to execute different behaviors). So shouldn't we see brain organoids just as what they are, groups of neurons, where their potential for consciousness depends on their outputs and inputs, and maybe also on the way they reward themselves? I'm curious (lol) if IIT or other consciousness theories have looked into the role of reinforcement and reward in explaining intelligence and consciousness
@BetterChoicesNeeded · A day ago
Why tf are ppl tryna put these things in robot crab legs?!?? Why are ppl even making these things anyways..
@grivza · A month ago
3:01 That sounds like a very naive view, implying that the repertoire of "conscious" activities has anything to do with the strength of consciousness. Activities can be described as "conscious activities" only if there is already a developed consciousness to perceive them. But are those activities vital for the development of a "stronger" consciousness? Sex, drugs and rock n' roll? Doubtful, very much so.
@patrickl5290 · A month ago
What is a “developed consciousness” though? I think the idea of consciousness being a continuous quality makes more sense
@grivza · A month ago
@@patrickl5290 It may be a continuous quality, I am not ruling that out. I am simply talking about his example about the baby, you can bring it to a rock n' roll party and it isn't going to be much more conscious than it was before. It's something different that develops this capacity, maybe through experiences but not from experiences themselves and certainly not just any experiences.
@carltongannett · A month ago
See I would think the dog has as much consciousness as a human but much less intelligence. I figure intelligence just happens to correlate with consciousness in mammals because brains are multipurpose.
@grivza · A month ago
@carltongannett From seeing how Helen Keller describes her experience of learning her first word, my understanding is that consciousness is directly related to the capacity for the symbolic, so I would actually say that no animal is as conscious as humans, but that's a bit of a different topic. I still don't buy the whole "experiences" interpretation.
@ihmcurious · A month ago
When Christof talks about "more" consciousness, he's talking about expanding the repertoire of possible conscious states. For example, a human is theoretically capable of a wider range of possible conscious states than a bee or a dog, and the human's conscious states would tend to be richer in information. In the language of Integrated Information Theory (IIT), a system with more consciousness (higher phi) is capable of distinguishing more "differences that make a difference" from the perspective of the system itself. Read if you dare: architalbiol.org/index.php/aib/article/viewFile/15056/23165867
@Gome.o · A month ago
The argument that a dog seems less conscious than a human being seems dubious
@GhostSamaritan · A month ago
They lack metacognition 🤷🏽‍♂️
@Gome.o · A month ago
@@GhostSamaritan How do you know? Are you a dog typing on a human keyboard?
@Gome.o · A month ago
@@GhostSamaritan How do you know? are you a cute doggo who can communicate with dogs?
@alkeryn1700 · A month ago
@GhostSamaritan You are making a major mistake in thinking that intelligence and consciousness are related. An LLM is smarter than a dog on most of our metrics, yet the dog definitely has more qualia.
@theonlylolking · 14 days ago
Stop worshipping dogs
@joeybasile1572 · A month ago
No.
@doblo2670 · A month ago
I'm sorry, but this guy is so biased towards humans. His only explanation as to why a human-derived brain organoid is more conscious than animals is that it's human-derived, even though what makes humans special is the number of neurons we possess and the incredibly efficient way their connections are constructed. Brain organoids are just neurons. They don't have that "human brain structure", because they don't have the rest of what makes humans human besides the brain: the rest of the body. Neurons only learn; their function is ultimately determined by input. Babies are unconscious because they are incredibly reflex-based. Once they get enough input they start learning to do more stuff, which I won't get into in a YT comment. Human neurons are no different from most animals' neurons; we just have an incredible brain structure, but that requires a lot of genetically set-in-stone input to teach those neurons to form those structures