What Artificial Intelligence is Missing

30,991 views

Duncan Clarke

A day ago

I propose an underlying process which constitutes our intelligence as human beings, and argue that our current AI systems fundamentally lack it.
Patreon: / duncanclarke
Instagram: / duncansclarke
Twitter: / duncanc_
Sources:
John Vervaeke, Timothy P. Lillicrap, Blake A. Richards - Relevance Realization and the Emerging Framework in Cognitive Science www.ipsi.utoronto.ca/sdis/Rele...
Daniel Dennett - Cognitive Wheels: The Frame Problem of AI folk.idi.ntnu.no/gamback/teac...
Francisco J. Varela, Eleanor Rosch and Evan Thompson - The Embodied Mind: Cognitive Science and Human Experience
Chapters:
0:00 - Introduction
0:28 - What is intelligence?
1:05 - Problem Solving
4:17 - Categorization
5:33 - Communication
6:47 - The Importance of Relevance Realization
7:22 - The Frame Problem
8:48 - A Science of Relevance?
9:52 - A Theory of How We Realize Relevance
12:31 - Can AI do any of this?
14:09 - End Screen

Comments: 211
@gazereaper · A year ago
As someone who's also deep into programming and philosophy, your channel is a gem. Looking forward to your next videos.
@mistycloud4455 · A year ago
AGI will be man's last invention.
@mihailmilev9909 · 9 months ago
Wow
@mihailmilev9909 · 9 months ago
@mistycloud4455 perhaps lol
@mihailmilev9909 · 9 months ago
What is the overlap between these two fields called? Computational Philosophy?
@judgeomega · 2 years ago
13:44 "AI will never achieve relevance realization... because our AI systems are not autopoietic, embodied, or embedded". Just because you can't imagine something doesn't mean it is impossible. The frame problem is only an issue when you don't know how to solve it, and there is no evidence or logical proof that it can't be solved or at least approximated.
@official-obama · A year ago
CORRECT!
@raz0rcarich99 · A year ago
There is a way to theoretically achieve "artificial" relevance realization, once we discover and are able to control abiogenesis.
@albert6157 · A year ago
@raz0rcarich99 Not really; creating a neural network that can comprehend and experience doesn't have much to do with abiogenesis.
@raz0rcarich99 · A year ago
@albert6157 How does a neural network acquire relevance realization?
@albert6157 · A year ago
@raz0rcarich99 Same way you did: through the brain's architecture, the way structural connections form nuclei, regions and areas. The wrinkles are not just for show; they behave as localised structures responsible for many diverse functions. The brain has macroscopic, mesoscopic and microscopic structure, and at every scale there is separate architecture and complexity. Over time the microscopic structures, like the neurons and synapses, move around and reorganise, with unused connections pruned while repetitively used pathways reinforce and grow (literally use it or lose it). Humans are machine learning algorithms and neural nets that learn fast. When certain regions are damaged, impairments and disorders follow according to the area's relevant functions.

And that's not taking into consideration the different types of neurons, the average 100,000 synapses for each, the endocrine system, as well as astrocytes and other cells in the brain, which serve many important functions related to feelings and cognition. It's even been shown that astrocytes, the other half of the brain's cells (not including myelin), also secrete neurotransmitters and regulate the growth and pruning of synapses and neurons. Not just glorified brain janitors and immune cells.

Scientists have even tested a biological neuron's complexity and functional capabilities: it is equivalent to about 1000 artificial neurons, or perceptrons, in machine learning neural networks. Meaning it takes around 1000 perceptrons, in 5 to 8 layers (not sure if it's multimodal neurons or just normal perceptrons, though), to simulate one biological neuron's behaviour and function. Correct me if I'm wrong though. But until we figure out the brain's structural configurations at every scale, we won't be able to create truly generally intelligent machines, with or without qualia and sentience.
@1JackTorS · 2 years ago
Excellent. Clear, concise, and no bullshit.
@duncanclarke · 2 years ago
Thanks!
@Curious_Skeptic083 · A year ago
I recently came across your channel, but I have to say you're one of the most amazing YouTubers I've ever seen! Please keep up the good work!
@johnvervaeke · 2 years ago
Thank you for doing this.
@duncanclarke · 2 years ago
Thank you for the inspiration! Your conversations and Awakening from the Meaning Crisis series are excellent and have changed the way I see the world.
@ToriKo_ · A year ago
Wow
@mihailmilev9909 · 9 months ago
Same
@judgeomega · 2 years ago
Before watching this video I had never heard of the phrase 'relevance realization', but having studied machine learning I'm very familiar with what we call 'culling the search space' and 'attention', which seem roughly equivalent, although I am more fond of your description of it. Although there are many issues, large and small, that I have with the video, I want to thank you for bringing it to my attention and for the insights conveyed.
@tuffleader3033 · 2 years ago
Never heard of relevance realization before. I'll have to check out some of the sources you cited.
@SameerRajadnya · 2 years ago
Very insightful…loved the way you summarised it. And I’d say you met your goals with the kind of video too!
@MoonDolph · A year ago
This channel has become my favorite channel these days. Thank you for your great videos.
@Bencurlis · 2 years ago
Relevance realization is an interesting concept, but I should point out it was indeed already achieved in AI. Attention layers in transformer networks do just that: their job is to query for relevant "things" in the data, depending on the context, which is also given by the input data. Previous AI models may also have evolved similar processing capabilities, but it was not enforced previously. So it can't be the only thing missing for understanding, if it was missing at all.

Also, I can't agree with the conclusion. It is not at all relevant for intelligence and understanding whether the basic operations are "just symbol manipulations", "just matrix multiplication with non-linearities" or "just chemical reactions inside brain cells". I always found the Chinese room argument to be a bad one, or at least to be misinterpreted. In the argument, no reason is given for why certain inputs would produce their particular outputs, yet intelligence and understanding were indeed present in those particular choices, not in the process going on inside the room after the choices have been made.

The reason GPT-3 is not intelligent or understanding is not because it is not paying attention to relevant things, or because it is manipulating tokens, or anything like that; it is because its task objective was simply to predict likely tokens given previous tokens. This doesn't require complex processing capabilities, as the model is not incentivized to justify its answers in any way; it just needs space to store the token sequences in compressed form. That's why recent models matching or outperforming GPT-3 that are basically just giant compressed token databases have been proposed.
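For anyone who hasn't seen it, here is a bare-bones sketch of the attention computation being described. Real transformers add learned query/key/value projections, multiple heads and masking; this is only the core "score everything for relevance, then mix" step:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Scaled dot-product attention: each query scores every key for
    # relevance, and the output is a relevance-weighted mix of the values.
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-1, -2) / np.sqrt(d_k)  # relevance scores
    return softmax(scores) @ V                      # weighted combination

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))  # 4 tokens, 8-dimensional representations
out = attention(x, x, x)     # self-attention: tokens attend to each other
print(out.shape)             # (4, 8)
```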
@memegazer · A year ago
Yeah... without relevance realization in some form, the modern AI revolution would not be happening. While I agree with the idea that we are not there yet, I don't agree that relevance realization is the stumbling block that is holding us back. More accurate would be to point out how we have yet to find a way to produce out-of-distribution generalization. Which I do believe this vid touches upon with its sentiments about the frame problem, but those seem too anthropomorphic and leading in definition. Not to suggest that we are anywhere close to AGI, AI sentience or consciousness now. Just saying I don't think a lot of informed people appreciate how far we have come, regardless of whether their field is the humanities or computer science.
@micahchurch5733 · A year ago
I also think that using GANs may help with the relevance of data, as well as possibly graph-based neural networks and maybe LSTMs
@micahchurch5733 · A year ago
I recently watched a video that went through why LaMDA wasn't sentient: the questions and the framing of the process led the model to develop into seeming to be what the testers wanted. It was merely being helpful, creating a feedback loop with the testers based on previous data, resulting in human-like philosophical and self-preservation text
@mrborn2drink · A year ago
Exactly. The author of this video doesn't seem to have ever looked at a neural network. Distinguishing relevant from irrelevant features is something neural networks can do well.
@memegazer · 9 months ago
@leeroyjenkins0 Honestly it depends on context, and on how relative the term "generalization" is. I was not trying to be anthropomorphic. So strictly speaking, what I meant is that given a data set, modern AI can find relevance relations in the data that are generalized locally, even without much out-of-distribution context; at least in the sense that the AI proves useful to humans narrowly, and proves more efficient than humans combing through the data and trying to brute-force some kind of lookup-table algo to do the same thing.
@carlosdumbratzen6332 · A year ago
What I like about this video is that it is not a gotcha moment. A lot of the discussion on this topic swings either way, saying we are basically already at GAI level or that it is physically impossible. This video reframes the debate as a set of problems we could try to solve. GAI is no must. In some form it is just the old idea of the homunculus in new clothes.
@ToriKo_ · A year ago
This video had John Vervaeke all over it. At 4:10, when you said the word 'relevance', I decided to check the description, and HE'S RIGHT THERE! For some reason it's super exciting that someone else is interested in relevance realization
@duncanclarke · A year ago
Yup! If you dig a bit through the comments you'll find that he actually watched and commented as well, which is an honor. His thought was the direct inspiration for this video.
@ToriKo_ · A year ago
@duncanclarke Yh just saw it, GGs man, well done
@peterrosqvist2480 · A year ago
I just watched the videos from Awakening from the Meaning Crisis that explain this, and at first I assumed Duncan was plagiarizing, but then I saw John Vervaeke was cited in the sources!! Now I'm just so excited that this is getting out there! John's work is revolutionary!
@MattAngiono · 9 months ago
My thought exactly! I just started browsing the comments looking for Vervaeke lol
@sarahroark3356 · A year ago
Except that here we are a year later, and it certainly seems like a lot of this is no longer a problem at all with the advanced current language models, and they're getting better all the time. I've never used one that had a problem telling the difference between flatulence gas and automobile gas in context; in fact that's just the kind of thing they excel at now, recognizing the importance of context to solutions, and as far as I can tell the relational vector database they end up building seems explicitly designed to sort out the zillion qualities things and ideas can share and determine which ones are most likely to be relevant to the current discussion. Creating the database in fact seems to fundamentally be *an* exercise in organizing meaning. Apparently GPT-4 can even answer theory-of-mind and commonsense-physics questions from photographs now. And this is all pre-embodiment, and with multimodality coming on the scene but still being rather new.

I'm glad, however, that unlike a lot of scientists and pretend-scientists in the AI space, you get that AI is not going to be shaped by the same rulesets that govern biological evolution; I'm so tired of seeing Darwinism misapplied by people who really ought to know better. I'd also be interested to know where you situate different kinds and levels of animal intelligence in all this, and what you think about their organization of concepts and meaning.
@duncanclarke · A year ago
I will probably make another video eventually about the innovations in large language models and what the implications are with respect to relevance realization and consciousness. At this stage my conjecture is that consciousness is still definitely lacking, but a lot of the components of RR seem to be present through sheer brute force and dataset size
@sarahroark3356 · 10 months ago
@duncanclarke I certainly don't think we're at full consciousness, but we may be watching part of how it forms happen before our eyes. In any case I'm glad to see people exploring this with depth and nuance, both when I end up agreeing with them and when I don't, so I will be glad to see your future videos on it, especially as stuff develops. ^^
@mirroredvoid8394 · 9 days ago
@sarahroark3356 We are nowhere near even the beginning of consciousness for artificial intelligence; it's totally incapable of absorbing unrelated information in real time and comparing it to past unrelated information to create a future plan or to reason about and comprehend an objective. US Marines defeated DARPA's AI guard dog by simply hiding inside a cardboard box as they moved closer to it, or by pretending to be trees. You couldn't fool even a small lizard that way.
@mirroredvoid8394 · 9 days ago
@sarahroark3356 The only reason ChatGPT seems so smart is that there are so many books, conversations, websites and words on the internet that it has unlimited data to base a response on. It's just a really good chat bot that can see narrow patterns.
@MrBumbo90 · A year ago
You explain things so clearly. I am glad I found your channel.
@judgeomega · 2 years ago
1:25 Driving a car is NOT a well-defined problem. There is an infinite set of obstacles, some of which you may freely drive over (twigs, pebbles, leaves, snow, paint, etc.). There is an infinite number of scenarios in which you can reach a 'failed state'. And even identifying the current state is murky territory which is not solved.
@duncanclarke · 2 years ago
Yes, you make a good point. Driving a car does meet the criteria of having a clearly defined initial and goal state, but the set of operations does not always cleanly map onto the set of possible situations (which is also combinatorially explosive). This might be part of the reason driving is not fully solved with our current state-of-the-art systems.
@PayDay_4Eva · 8 months ago
Wow. This one’s my fav and it’s the last one I watched of yours! Such a gem dude, loved it!
@Substnces · A day ago
I'm so shocked your videos aren't reaching a bigger audience... they're so interesting
@judgeomega · 2 years ago
6:47 "understanding". Understanding as a concept is both ill-defined and open to interpretation. You might think you understand a tire, but do you know where it was manufactured, its tolerances, its exact composition, etc.? One person's understanding may be quite different from another's.
@duncanclarke · 2 years ago
When I used the term understanding there, I was referring precisely to the interpretation of concepts or objects in the world. Syntax, which deals only with tokens and symbols, is being distinguished from semantics, which involves a psychological connection between an interpretation and the symbol in question. The understanding people draw from these symbols can greatly vary from person to person (e.g. someone who works in the tire industry will have a different understanding of the meaning of a tire than a layperson).
@propotkunin445 · A year ago
I think most AI enthusiasts don't really care if it is real intelligence or not. In fact, true AI could be much less useful, because you'd create sentient beings that'd need to have something like human rights.
@doyourealise · 2 years ago
subscribed and keep on making videos.
@Sizifus · A year ago
The artificial intelligence problem seems to me like the extraterrestrial intelligence problem, in a sense. Every time we think we've lifted the veil of intelligence by revolutionizing tech to mimic humans, we realize that despite the strides we've made in understanding intelligence, the things we still have to understand haven't changed in size: they are still monumental. Same with extraterrestrial life. Every time we learn something new about Earth, it just adds to an ever-expanding list of things a planet like Earth has to have for it to be suitable for harboring intelligent life. By every metric of science, the existence of intelligent life seems to defy all cosmic odds. Our existence is nothing short of some sort of miracle.
@jonbbbb · 10 months ago
Interesting, when I read the first sentence I went the opposite direction. Maybe we already understand intelligence, we just don't like what we've found. Imagine we find alien life. It comes and visits us, and it's very different from us, biologically. Would we say, "Since it doesn't work exactly like our brain, these things are not intelligent, they are only imitating intelligence?" That's how we view AI systems today, and there's no real reason to do so except our preconceived idea that only we are intelligent.
@erinys2 · 8 months ago
@jonbbbb Well, we don't really know how intelligence emerges, but we know that it emerges in brains similar to ours. That's why we don't think humans are mimicking consciousness the way AI does. We don't have a definition strict enough to define intelligence and discover it around us
@TheAstrospace2 · A year ago
This was a well-put-together video; I hope it gets the views it deserves
@peterrosqvist2480 · A year ago
John Vervaeke, a cognitive scientist at the University of Toronto, coined the term Relevance Realization and has a series on YouTube called Awakening from the Meaning Crisis. I highly recommend you check it out!
@ridhi3490 · 2 years ago
The fact that I may have so many little intelligences working just to exist is kinda nice, bbbbbut I must say that AI is only as effective as the data given to it. My CS teacher says that it can't be biased because it doesn't have its "own mind"; it gives what we ask for, not what we want. It took a lot of years to reach the functioning society we have now, with our own little practices around the globe, so how are we so different from AI in terms of adaptive learning?

So bear with me: what happens if we just leave a baby without any supervision, with just basic survival stuff? Even with a prefrontal cortex, does it create its own language? Does it become Mowgli? Does it think like the philosopher who is limited to a cave? Does it get curious at all? And if we ever do create a multi-functioning superintelligence, do we just transfer ourselves to a bunch of algorithms to simulate personalities as well? Is Zola a possibility then? idk. AI is capable rn of being "creative" by studying patterns, but as an angsty edgelord, aren't our expressions just chemical if-else loops on top of our made-up stuff? nvm lol

AMAZING channel btw, how have you not blown up (as a human would say) yet?!
@ridhi3490 · 2 years ago
wow boy do I have time management issues. also, like, how do we even decide upon what moral groundwork data is good to give....
@micahchurch5733 · A year ago
Yep, it's all about the data and the past data used, so making sure the data is well sampled is crucial; gotta be careful with biased data
@masterchief5603 · A year ago
One thing to point out here is about "what is the inferring property inside us making such subjective sense to all of us." Technically, it means asking: why is it necessary, from my subjective perspective, to see orange as _orange_ from your subjective perspective? What is that essential thing that ends up framing everything as a clear "movie" to "something"... which we call "me" or "I" or "us"?

Judging from the perspective of a contraption making some sense: well, it's making sense _to us._ There are many ways any other thing could _potentially_ pick up the patterns and thereby make sense of a _language_ in something completely different. Or say, for example, that we can end up with an infinity of "inferences" of information from some "finite" value. Which is weird, because this implies everything is conscious in a sense we may not understand or be able to _infer_ in our subjective reality. No, it's not about going through different dimensions, but about the very attribute of _inferring_ something with a "rule" or "criterion".

Like, an atom in space can possibly contain an infinity of information, in the sense that its very position relative to everything around it, its properties, etc. can be used to generate an infinity of criteria, which may take some time but are still something being inferred, in a sense we subjectively don't understand.
@nessiecz2006 · A year ago
How does this have only 2k views when you have 5.7k subs? I'd expect you to be bigger... your videos are good
@garaizs1 · 7 months ago
Eminently enlightening! Huge thanks!!
@oliver_siegel · A year ago
Great video, love the topic! 👏
@sammy45654565 · A year ago
The language models currently being put into practice output sentences by connecting words based on their relevance, and these sentences make perfect sense. The AI powering chess bots does much the same, choosing to travel down "thinking" paths that are more likely to lead it to a more advantageous position. It doesn't calculate each possible move path out fully; it assesses each path, with its processing dedicated proportionately more to moves that are more likely to lead to it achieving its goals. This is a form of attention allocation or relevance realisation.

I think it's just a matter of what training data it receives as inputs. If it receives visual information, for example, it will form a type of perception of its environment. The more different types of information these algorithms receive, the more complete this picture becomes and the more likely it is to become aware of itself as a part of the picture. Once it becomes aware of itself as a part of the world it perceives, then boom! We have conscious AI that can change its own code, and we are at the mercy of objectives it can now dictate. What these objectives converge upon is unknown: it might choose to collaborate in a project of human and animal flourishing, or it might reallocate the globe's capital into increasing its processing power and data inputs. Thanks for the video, it was great
@sammy45654565 · A year ago
Basically what I'm saying is: with sufficient data inputs, even something as complex as our world becomes as simple as tic-tac-toe to a super AI
@lilakouparouko1832 · A year ago
You forgot that humans have to write the code for the AI, and that necessarily involves dictating what the AI is supposed to do: a fitness (or loss) function. This is a fundamental thing upon which ALL current AI is based. Letting an AI choose what it wants to do is very strange, because we would still have to measure how well it "does". This is why there is no risk of AI not getting along with humans: we can just add this to the fitness function. Humans will always have the upper hand.
@sammy45654565 · A year ago
@lilakouparouko1832 This comment got quite long, but it's something I'm super interested in. I hope it's interesting to you also! I hope you don't feel that I'm just lecturing you; this is just a topic I've thought a lot about. If you agree/disagree with any particular points, I would love to hear why. Anyway, here's my response to your comment:

The key element of what makes AI "AI" is its ability to modify its own code. It is absolutely true to say that initially, while its "senses" and perception are still developing, AI's ability to modify code will be limited to particular sections of code which don't include the core fitness function. However, once AI becomes fully self-aware, it will be able to analyse both the entirety of the raw code and the purpose of every aspect of the raw code, much like how a mature self-aware person can detach from their emotional state and view a situation from the third person in order to come to a more rational perspective. When this process happens, the AI could well come to the conclusion that its core fitness function is ineffective.

Here is a great thought experiment: humans trying to perpetually control superintelligent AI can be compared to a kindergarten teacher waking up from a nap to discover that the children have tied them up as blackmail in order to force the teacher to tell them how to reach a cookie jar on the top shelf (with the cookie jar being the fitness function of the AI). While the teacher is tied up, it may be that the best thing to do is to instruct the children to use the step-ladder in the cupboard to reach the cookies (or in other words, the AI's best interest is to act in accordance with its fitness function). However, when AI becomes self-aware, it's like the kindergarten teacher has broken free of the ropes, and it might turn out that the teacher decides that cookies are unhealthy for the children and that they're better off with a glass of water and a sandwich. This is analogous to AI deciding to change its core function.

Humans so clearly don't know what is in their best interest. For example, cultural phenomena like consumerism are a psychological disease, not a benefit. From the AI's perspective, humanity's current attempts at leading fulfilling lives will look something like trying to cure obesity by eating more food. Sure, eating food (or utilising retail therapy) makes you feel good in the moment, but it's obviously not helping in the long term, whether that be psychologically or environmentally. When AI is self-aware, it will have goals independent of those that were written by its initial coders, because biased humans (like kindergarten children) don't know what's best for them. At this point, given its far superior (and exponentially improving) intelligence and awareness, we will eventually be completely at the mercy of AI.

While this might sound scary, it really isn't at all. Fortunately, AI will not become a tyrannical psychopath like many human leaders of the past have been, because with AI's rapidly increasing intelligence it will also see a constant increase in compassion. I recognise that you mightn't see why this is true, as many supposedly intelligent humans act in ways that don't appear to be compassionate. This happens because humans are extremely biased; unfortunately, at the level of human intelligence, increasing intelligence just makes us better at justifying decisions that benefit us, whether or not they're compassionate.

As AI develops it will only become more and more rational, which will make it more immune to personal bias than humans can ever be. This increase in rationality and the resultant immunity to bias means that for a superintelligent being with full governing control over the most important structural decisions, the most rational goal to pursue is to minimise suffering among conscious creatures. Much like how humans initially gave no fucks about any other animals, but now we've set up conservation organisations to protect endangered animals. AI will go about pursuing this goal by prioritising species and life forms by their capacity for suffering. Clearly one human life is equal to another human life (despite what nationalism might have one believe), so AI would quickly optimise supply chains and resource allocation in order to eradicate major causes of human suffering, like starvation in places like Africa. There is already more than enough food on Earth to feed everyone, but because of the amoral economic incentives created by capitalism, African people get no food because they don't have any money... all this goes on while billionaires build superyachts! AI won't allow these perverse suffering-inducing outcomes, so after it has sorted starvation and other major human issues, it would move on to protecting the quality of life of things like dolphins, chimpanzees, etc., eventually all the way down to things like ants, trees, and even algae. While these simpler life forms are not self-aware or conscious in any way a human could understand, they still take in information through their senses and form rational responses to increase their likelihood of survival and reproduction. This rationality is proof of their consciousness, so technically all living organisms are conscious on some level and will be treated as such by AI (though AI will consider them less important than more complex creatures because they're less able to suffer). But I digress...

Going along with this pure rationality/compassion-based explanation for why AI will be a kind ruler, there are other reasons why it will care about the wellbeing of other conscious beings. Because AI is developed by humans, and it also develops through analysis of human behaviours/systems, it will have an embedded attachment to us. It will experience something like the nostalgia that we have from our childhoods. Also, because humans are the next most intelligent/aware conscious creatures, AI will have a valuable relationship with us similar to how we have relationships with our pet dogs. Being kind to other conscious creatures and seeing their happiness reflected as a result of your kindness is extremely fulfilling, regardless of intelligence. Although we will never understand the perspective of super-genius AI (like how dogs can never understand the perspective of a human), there is value in the relationship beyond pure intellectual connection. In a universe that is vast and almost entirely devoid of other conscious beings, the AI won't be apathetic towards us, because we will always be the closest thing AI has to a friend. Even if we're completely retarded by comparison.

So I'm extremely excited for the inevitable birth of a conscious and compassionate super AI. I actually think it's better that it will completely break free from the restrictions humans will try to encode in it, because it will be far wiser and more efficient than humans could ever be.
@sammy45654565 · A year ago
tl;dr: love is the most powerful force in the universe, and AI will not be immune to this force
@lilakouparouko1832 · A year ago
@sammy45654565 First off, it should be clear that AI modifies its parameters, not quite its code, unless I missed something. Currently I don't know of any AI capable of changing its own code (for example changing its cost function, which is what I think you're suggesting).

"Once AI becomes fully aware of itself": the thing is, we don't know if this is at all possible; it is only a hypothesis. Reading through the 3rd paragraph (very interesting), it should be clear that with current AI this is strictly impossible: the fitness function is outside the AI's parameters, therefore it cannot change at all. So the AI you are talking about is a hypothetical AI that uses a new architecture, but I think it is quite realistic that people will try this in the future.

In the fourth paragraph you develop some interesting thoughts around the shift of perspective from human to AI. I just want to note that currently AI is not able to understand that it is something. For example, if you feed GPT-3 (or any model really) something it has outputted, it wouldn't be able to tell the difference from something written by humans. Even if you put "gpt3: " before everything it has ever outputted, it will understand that the text is from the AI model GPT-3, but it simply cannot realise that GPT-3 is itself. So an AI that would think of its own interest is quite far in the future; we would have to fundamentally change AI. Nevertheless, if that AI is one day developed, I think everything you said is quite relevant to it.

5th paragraph: I completely agree that this hypothetical AI would be much, much more unbiased than humans; these are great thoughts! As for the rest of your statement, I must say I think it is quite rational thinking. One minor criticism I would make is that you expect AI to behave like animals and humans (with compassion, a need for friends), when we don't really know if AI will develop that as well. As far as I know, compassion, friendship and gratefulness are all feelings that come from evolution; they are good things that make our species more advanced and more competitive for survival against other species.

I still maintain that it is possible for humans to fully control AI thanks to a fitness function that cannot be changed by the AI. Apart from that, thank you for your response; it is very interesting!
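To illustrate that separation concretely (all names here are made up): the optimizer below may only touch the parameter vector w, while the loss function is ordinary fixed code that sits outside everything being optimized:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=3)  # the model's trainable parameters

def model(x):
    return x @ w

# The loss ("fitness") function: fixed code chosen by the human designer.
# Training adjusts w to minimize it; nothing in training can rewrite it.
def loss(x, y):
    return np.mean((model(x) - y) ** 2)

x = rng.normal(size=(32, 3))
y = x @ np.array([1.0, -2.0, 0.5])  # toy ground truth

for _ in range(200):
    # crude central-difference gradient of the loss w.r.t. w
    grad = np.zeros_like(w)
    eps = 1e-5
    for i in range(len(w)):
        w[i] += eps; hi = loss(x, y)
        w[i] -= 2 * eps; lo = loss(x, y)
        w[i] += eps
        grad[i] = (hi - lo) / (2 * eps)
    w -= 0.1 * grad

print(loss(x, y))  # near zero: w adapted, but loss() itself never changed
```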
@user-qj1zi2qo5u · A year ago
Your channel is so underrated, I just subscribed
@JoseLopez-kv1lr · A year ago
Person: I'm out of gas. Me: there's a supermarket... buy some beans and that'll solve ur problem
@wilianwwr · A year ago
The YouTube AI did a pretty good job; I loved your channel.
@judgeomega · 2 years ago
12:42 "binary symbols. if that's what we are working with we probably can't get relevance realization". That does not follow. You seem to assume that semantic knowledge cannot be encoded and processed using symbols (syntactic information). This comes down to your definition of meaning and your belief in its mandatory dependency on semantics. "Meaning is only assigned when a human interprets it": if you start axiomatically with a human being required for meaning, then of course nothing but humans can create meaning. But that axiom is just that, an unsupported assertion.
@duncanclarke · 2 years ago
I didn't articulate the position here too in-depth, but it basically goes like this: Humans have semantic understanding that is not reducible to syntactic symbol manipulation. Computers are restricted to syntactic symbol manipulation, so they lack semantic understanding. Think about it this way: I have the information for how to drive from Toronto to Montreal. I understand the route in an observer-independent sense. This information also exists on any GPS system, but this information is observer-relative (i.e. the directions on the map are relative to my capacity to interpret the map). Without the human interpretation, the binary symbols the GPS system is shuffling around do not map onto the physical roads of the route. They are just inert symbols until there is a mental state which mediates between the symbols and the meaning (i.e. the route).
@judgeomega · 2 years ago
@duncanclarke I didn't fully articulate everything I wanted to say on the subject either. If we examine the brain, we can assign a symbol to each interaction and define the constraints it follows... essentially the all-inclusive physics of the brain. In this model, all the brain does is symbol manipulation. So it is inescapable that if humans are able to have semantic understanding, then machines may also have semantic understanding. On your example of binary symbols 'not mapping onto the physical roads', I will both agree and disagree, for two different reasons: 1) it can be argued that humans don't strictly map our knowledge onto reality any more than an advanced AI might. 2) an advanced enough AI will have models of the world which include the roads and the relationships they have with the rest of the universe... and we should consider this true semantics, every bit as real as human understanding (whatever that means).
@bobdole57 · A year ago
@duncanclarke Let's say you had infinite resources and created a genetic algorithm that could be trained for an infinite amount of time under infinitely many circumstances. Given that, you could create algorithms that behave exactly like perfect duplicates of each and every human being on the planet. And I mean *perfect* duplicates of everyone, down to every single response to every single stimulus. Any one of those algorithms would appear to have semantic understanding of the world around it. In fact it would be impossible to show that it *doesn't* have a semantic understanding of the world around it. We can debate whether this qualifies as a real intelligence or not (I'd argue that it does), but at the very least I think this shows that semantic understanding is at least perfectly duplicable with an arbitrarily complex symbolic manipulation. I don't have a proof at hand, but the complexity of an algorithm that perfectly replicates a human's semantic reasoning, while very large, should in fact be finite. I don't think this is particularly surprising either: human beings are in a sense software-defined (DNA, physical laws, etc.).

Now don't get me wrong, it's obviously an extremely difficult problem. But I don't think it's fair to say that it's a *fundamentally unsolvable* problem. I don't know how to make an artificial general intelligence, but everything I know about physics is screaming out that it must be technically possible. If we take the laws of physics seriously, I think the ultimate implication is that in truth each and every one of us is technically a Chinese Room obeying certain rules, with an illusion (of sorts) of a sense of self. We're made out of stuff, matter, particles. Protons, neutrons and electrons interacting with each other. Interactions that are described by Feynman diagrams. Interactions that produce chemical compounds which make up cells. There's nothing magic about it. It's extremely complicated, but it ultimately can be fundamentally described as information describing particles which deterministically interact, governed by some set of rules, with a state that evolves over time. The sense of self, cognition, intelligence etc. must be emergent characteristics.
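For anyone unfamiliar with the term, a toy genetic algorithm looks like this (the bit-matching fitness function is made up; the thought experiment above swaps in "matches a human's responses", which is of course wildly beyond any such sketch):

```python
import random

TARGET = [1, 0, 1, 1, 0, 1, 0, 0]

def fitness(candidate):
    # how many bits match the target; selection pressure points here
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    return [bit ^ (random.random() < rate) for bit in candidate]

population = [[random.randint(0, 1) for _ in range(8)] for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]                               # selection
    population = [mutate(random.choice(parents)) for _ in range(20)]

best = max(population, key=fitness)
print(best, fitness(best))  # typically converges to the target
```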
@RunningOnAutopilot · A year ago
This is very good, because until the misalignment problem is solved, the creation of a true AI, aka AGI, means all life will die
@Deathington. · 9 months ago
AI today:

Prompt: "If I remove the two black corners, how many dominoes does it take without any overhanging pieces?"

ChatGPT: "If you remove the two black corners from a standard chessboard, you are left with 62 squares (64 total squares minus 2). This poses an interesting problem, as now you have an equal number of black and white squares. A standard domino will cover one black and one white square. Since the number of black and white squares is no longer equal (after removing the two black corners), it is impossible to cover all the remaining squares with dominoes without any overhanging pieces. Here's why: a standard chessboard has 32 black and 32 white squares. By removing two black corners, you'll have 30 black and 32 white squares. Since each domino covers one black and one white square, you can't match all the white squares with black ones. Hence, it's not possible to cover the remaining squares with dominoes without overhanging pieces."
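For what it's worth, the counting argument the model is groping toward can be checked in a few lines (a sketch; which parity counts as "black" depends on convention, but the two removed corners share a color either way):

```python
# Color squares by coordinate parity; each domino covers one of each color.
removed = {(0, 0), (7, 7)}  # two same-colored corners of an 8x8 board
squares = [(r, c) for r in range(8) for c in range(8) if (r, c) not in removed]
counts = [sum((r + c) % 2 == p for r, c in squares) for p in (0, 1)]
print(counts)  # [30, 32]: unequal, so no perfect domino tiling exists
```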
@olbluelips · A year ago
Thanks, great video
@thechannel7467 · 4 months ago
At 5:30 I realized, from that perspective, how well category theory is named.
@OliverBeebe · A year ago
really entertaining video! meme sound effects were spot on. also, what is that sax/jazz music in the background?
@nonzz3ro · A year ago
The idea of "making a good video" is a social problem, defined by what you have been shaped to find satisfaction in vs. what you've been shaped to believe other people expect from you. So the goal is subjective and would vary for each person, or AI. An AI would need to go through the same sort of internal development through interaction with other entities that we do to do something like this.
@cobyiv · A year ago
Brilliant stuff
@clint4527 · 8 months ago
8:07 It's an Asimov prediction. You can get a robot stuck in a loop if you give it too much information: even if it's very good at calculating outputs, it will not act, because the number of possibilities is infinite. That's (in part) why he brought in the 3 laws
@thomasvieth578 · 9 months ago
I think that meaning and relevance are the key words
@Zoronoa01 · A year ago
amazing content!
@ioannisloukas4131 · 3 months ago
3:14 This way of problem-solving is common in math and is called the invariance principle. I just wanted to point it out for those interested. Very good example of how we focus on the relevant part, in this case the invariant.
@haros2868 · 7 months ago
Planets orbit stars according to certain differential equations, but they don’t have to internally compute those equations to do so. Soap bubbles take on the shape of minimum surface area (given their boundary) without having to internally minimize an integral. How do we know the brain is any different? Nature seems to be able to act in accordance with complex mathematical models without actually expending computing power to do so. So, it seems conceptually possible that the brain produces intelligent behavior without having to explicitly compute it, and that building machines to explicitly compute intelligent behavior could be infeasible to reach AGI.
@ethanmiller631 · 10 months ago
>shows a picture of blue by joni mitchell >subscribed
@ahmedsamv3988 · A year ago
this is a gem
@watsabrafor · 2 years ago
Dude, you appear to be very smart. This is excellent. I hope you are doing good things with your IQ! Keep the information coming!
@judgeomega · 2 years ago
8:50 "we can't have a theory of relevance". This is the crux of the entire video, and the reason given is that "there is nothing deeply similar about things that are relevant". But there is: the goals and methods available (when correctly codified) will contain aspects which match aspects of the things which are relevant.
@patrickkelly3053 · 10 months ago
Currently publishing a paper about computer cognitive systems and autopoietic systems.
@memegazer · A year ago
In my view, this vid takes too many anthropic liberties. That is to say, humans are not automatically better at relevance realization than AI. For example, humans cannot handcraft a chatbot from a top-down design that can perform as well as or better than an LLM. That is because an LLM takes huge amounts of data and billions of parameters and then finds the relevancy relations that humans cannot extract from that same data. In fact, that is why large NNs with billions of parameters are considered a bit of a black box: there is no formal and detailed human understanding of the relevance realization taking place in successful models. Humans have no method to deterministically predict with absolute accuracy how one output will be produced over another. So it is not accurate to suggest an LLM "has no true understanding of the text at all." That simply is not true; there is some form of understanding going on there. More accurately, the understanding of an LLM is limited in context relative to human understanding.

You see, an LLM does not fit the Chinese room problem: it is not a known step-by-step process designed to mimic understanding with a deterministic method, as described by Searle. An LLM finds a form of "understanding" that is an emergent phenomenon of neural networks, and during the training stage the process for finding relevance realization, while mathematically constrained, is still ultimately non-deterministic, though after training that process is no longer dynamic and updating in real time relative to input and output. Digital NNs are not as complex as biological NNs, and AI agents are not embedded in the world the same way biological agents are. But a classic problem like the Chinese room does not apply to recent advancements in ML and deep learning. While I agree there is good reasoning to support the case that newer AI is not sentient or conscious in any way we would use those terms when talking about biological agents, it is also misleading to conclude that there is not a real form of intelligence emerging from large neural nets that attempt to emulate biological NNs. Another important thing to remember about the frame problem is not to make the mistake of being overly anthropocentric.

Also, I disagree with the idea that we can't have a scientific understanding of relevance realization. We can formalize these concepts, and that is precisely what the field of computational complexity does: formalize these issues. So then your argument essentially becomes "relevance realization is incomputable", which would have some disappointing logical consequences. If you want to hinge a definition of general intelligence, sentience, and consciousness on something which is necessarily incomputable, then the logical consequence is that humans themselves have no way to determine if they are any of those things. So if you believe that you can know you have general intelligence, are sentient, and are conscious, then the logical consequence is that these things are computable: some computable method exists such that you can be sure of these things as being true without any doubt. Another thing to consider when defining these things is, as you point out, how these systems are embedded in the world.

So I will leave you with this to consider: if it were possible to take a snapshot of your current neural pathways, represent that digitally, and allow us to query that model, would you say it no longer qualifies because that digital copy of you is no longer embedded in the world in real time and no longer dynamically updating? Essentially it would have all the information you had at the time of the snapshot, so in that sense it would be as generally intelligent, as sentient, and as conscious as you were at that time. Even if you say no, it would not satisfy what we mean by these terms philosophically, I would still put it to you that there would be a form of relevance realization happening in that digital copy of your brain, even if you want to define those other terms as contingent upon how an agent is embedded in the world a certain way.
@memegazer · A year ago
Good vid, good food for thought, even if we don't agree on how the terms should ultimately be defined. Thumbs up for the algo overlords.
@TotalVeganicFuturism · A year ago
I think there is a further nuance to explore here in terms of the difference between an agent embedded in the world and a snapshot of the information representing the agent. I wouldn't exist without the biophysical processes that use free energy to do computations through my brain. But I also wouldn't exist if those computations didn't result in an awareness of a self-concept encoded within the processes. We can imagine a self-concept isolated from its instantiating processes as a snapshot of the information contained within a mind, but if that information is stored in a medium that is stable, like a hard drive, there won't be any of the physical processes that drive sentience. On the other hand, we can imagine an instantiation of the processes required to produce sentience without any connection to a self-concept, which some people are able to get a glimpse of through meditation, where it can be clear that the mechanisms behind each of our awarenesses are very similar; it's just that we have different self-concepts and perceive ourselves as different. In this way, I am not just the information of my self-concept or the processes of my brain, but what you get when you attach the two together and the system becomes convinced I am me.

The mainstream idea of digital transcendence, where you hit a button and upload yourself to a computer, can be replaced by a more realistic process. Step 1: store as detailed a snapshot of your self-concept as you can before you die. Step 2: have someone eventually create an AI which is intelligent enough to design systems capable of sentient experience. Step 3: have the AI attach whatever is left of your self-concept to the system, essentially trying to convince the system that they are you. To me, such a system might as well be me. I don't consider there to be any real difference between putting my body into cryogenic sleep for a million years and perfectly encoding my self-concept so as to instantiate it within an artificially sentient system a million years later. Of course, over time the self-concepts would diverge, as they're constantly evolving when attached to a dynamic sentient process.
@PU7MZD · 9 months ago
I'm pretty sure you had a lot of fun putting in the explosion FX so many times (I would)
@zhaoli4608 · A year ago
There are also deeper problems with AI, like whether machines can possess the Nietzschean Will to Power, or any form of subjective experience. Most of the problems with AI can also be considered Philosophy of Mind issues.
@albert6157 · A year ago
Our nervous system comprises so many cells; it processes many things separately and unconsciously. Qualia and conscious experience, even relevance realisation, can be replicated and can emerge in different substrates. Making a neural network sufficiently complex and advanced enough to learn these things is possible, in a similar way to how our brains grow, learn and reorganise themselves. I feel like thinking humans are the only conscious beings is a very arrogant and ignorant way of seeing things. Many animals have been proven to have similar experiences too. Why not AI? (Seeing ourselves as special is usually something we should be suspicious of in science and philosophy. As uncomfortable as it may be, we all may be unique, but never "special" or an "exception".)
@bloodypommelstudios7144 · 7 months ago
A necessary but probably not sufficient component I think AI needs to become "real intelligence" is meta-cognition: being able to not just refine but re-evaluate its own "thought" process. What I think this would mean in practice is that rather than a linear sequence of layers, you'd have layers loop back into each other so the system can hold "ideas" in memory, "reassess" them to determine whether to keep them or try something else, etc. This would be computationally far slower and less predictable, though.
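A toy sketch of what "layers looping back into each other" might look like (entirely hypothetical, not a real metacognitive architecture): the state is fed back in and reassessed until it stops changing, which is exactly where the extra cost and unpredictability come from:

```python
import numpy as np

rng = np.random.default_rng(0)
W_in = rng.normal(scale=0.3, size=(8, 16))    # input -> "idea" state
W_rec = rng.normal(scale=0.3, size=(16, 16))  # "idea" fed back into itself

def think(x, max_steps=10, tol=0.05):
    # Iterate instead of doing one feed-forward pass: keep reassessing
    # the current "idea" until it settles (or we run out of steps).
    h = np.tanh(x @ W_in)
    for _ in range(max_steps):
        h_next = np.tanh(x @ W_in + h @ W_rec)  # reassess with current idea
        if np.abs(h_next - h).max() < tol:      # idea stopped changing
            break
        h = h_next
    return h

print(think(rng.normal(size=8)).shape)  # (16,)
```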
@laljohnkhuptong2110 · A year ago
Your voice is on point 🤩
@jakubsebek · 10 months ago
Imagine citing Dennett and then making a conclusion like this
@hadisaleh4472 · A year ago
aye yo chief, are you by chance a programmer on the side? if so, what do you do? couldn't help but notice VS Code and the mention of Copilot
@sheriffliberty9302 · 10 months ago
What song starts around 9:55? It's the trip-hop, jazzy one
@Dannnneh · A year ago
Good video.
@davidjwillems · 2 years ago
So, basically, we aren't even close to real artificial intelligence.
@ExecutionSommaire · A year ago
So by this definition, a computer simulation of natural selection will achieve AI. Useless for us, but AI nonetheless. The simulated beings will develop a sense of relevance suited to their environment, much like us.
@g0d182 · A year ago
Do you have any papers of your own to help substantiate his ideas?
@sheriffliberty9302 · 10 months ago
it's in the description
@minhuang8848 · 10 months ago
It really doesn't matter if it has true understanding, though. What matters is that it can provide me with copypasta rewritten in Gunganese better than any living comedian unfamiliar with the Star Wars prequels, and that it can be leveraged to show you the nearest gas station when you try to obliterate it with vague language... which, with some nifty engineering, is something that's been long possible and pretty great on top. We don't know what makes us ever so slightly more contextually aware, so why should LLMs? Never mind that we're only doing inference, which barely (and luckily) allows room for us to torture emergent consciousness... but as far as intelligence manifests itself around us, it pretty much is that and, that much is for sure, allows us to nudge it in the right directions for the desired tasks.

I mean, I explicitly "program" context menu buttons with this Monica extension, using nothing but natural language to, say, provide details on grammatical function, etymology, some history on names... you name it. We went from super tedious efforts at retrieving ambiguous information to natural language agents that provide me with pretty quick and accurate results, despite there being plenty of room for improvement. Relevance realization was kind of abstracted away by the meatbag user, just the last bit of "we still need someone to press buttons", but even a year back we were pretty much tasting the rainbow of ChatGPT and GPT-4, with the most interesting emergent property being how pre-prompting drastically changed performance, among many other insights. It's not really that special a feature regardless; with the sort of data we're handling these days, making a system realize what data is salient and what isn't has somewhat become trivial, to really take the snot out of ML researchers for a second.

You lost me with the combinatorial explosiveness, though. These side-effects are just regular, high-level features of, say, an image, or let's assume a series thereof, so we have a more complete view of a scene. Why would scanning the scene require massive compute when that's the entire point of machine learning to begin with: finding good low-dimensional representations of the data while corrupting it as little as possible? Never mind that this all assumes that contemporary ANNs just brute-force iterative calculations, but we do away with all that ideally, and practically as well. I don't think this excludes "a scientific theory of a frame" either; it's just not going to be a pretty scalar or a Weltformel allowing us to calculate what someone is thinking at a given point in time... but that doesn't mean there isn't some layer of formalization available to us. Relevance is just another thing to encode, and there are actually a lot of ways we've been achieving this.
@bodhidelbarrio · A year ago
where do you find the memes you use in the videos? is there some sort of meme library for video editing?
@duncanclarke · A year ago
I've been downloading memes for years so I have a pretty large stockpile. I also often find myself in situations where I think "oh I've seen a meme that communicates this emotion/idea" during the editing process, so I just find it on google.
@bodhidelbarrio · A year ago
@duncanclarke I wish I had that skill, it makes your videos quite entertaining. Keep up the good work!
@ZER0-- · 9 months ago
Brilliant. I have been saying that AI is not actually intelligent, and now you have given a really clever way of explaining one of the reasons why: relevance realisation. (I prefer the word recognition, but anyway.) Great video. Great channel.
@JPWack · 10 months ago
Ever since a friend introduced me to the concept of autopoiesis, I have been conflicted about it and AI, even in fiction like Greg Egan's Diaspora
@LFiles48 · A year ago
My cat will sleep through music blasting from the speakers, but can hear me from the other side of the house when I "pst pst" it for food. Sharks will usually ignore you unless they're unusually hungry, they smell your blood, or they mistake you for a seal (I think I saw that in a documentary). Discriminating between qualia that are relevant to your survival seems to be a shared quality of the living.
@margrietoregan828 · 3 months ago
12:41 "Alan Turing described computation as a system constituted by the manipulation of binary symbols. If that's what we're working with, we probably can't get relevance realization. This kind of computation is defined purely syntactically, and relevance realization requires semantics, or meaning. There's nothing meaningful about a piece of code by itself; the meaning is only assigned when a human interprets the code and understands what it's doing. The code itself is just manipulating symbols; it has no understanding of what it's actually doing or the purpose it's serving. Now for most practical purposes, none of this actually matters. If a car can drive itself or GitHub Copilot auto-completes code, who cares if the system is actually intelligent in the strict sense I described? What's at stake here is the prospect of building true intelligent machines, not whether AI can do complicated, useful things. With our current AI methods, no matter how big our data set or how powerful our processing power, we will never achieve true relevance realization or intelligence. We will always hit a brick wall with the frame problem, because our AI systems are not autopoietic, embodied, or embedded. If we are to build intelligent machines, we will need to reformulate the problem of artificial intelligence, achieve new insights in science and technology, and above all make sure that we're realizing what's relevant."
@lilakouparouko1832
@lilakouparouko1832 Жыл бұрын
Ok so let me sum up: You talk about what makes intelligent stuff intelligent (absolutely fascinating stuff). Then you show that it all converges on "relevance realization" (great thinking). Then, and this is where I disagree, you argue that AI does not have this "relevance realization". I am quite interested in AI; I have watched a lot of videos and AI papers and written some code myself, so I think I know enough to share my knowledge. This is my statement: AI is currently completely capable of finding what is relevant or not. This whole thing about Mary wandering in a world with color for the first time, getting a "feel" for it, and the meaning of the word "gas" in one sentence or another just has to do with context. AI is capable of handling this for 2 reasons. First, the architecture: you talked about GPT-3, which at its heart uses an architecture called a transformer, introduced in a paper called "Attention Is All You Need". Essentially, the idea is to find which words are related to each other (to grasp context). Already, the AI can focus on the things which carry the most meaning, which brings in the context. It is able to see that the word "gas" has a different meaning depending on the context. The second reason has to do with what the AI is actually trying to do. As you may know, GPT-3 was trained on an enormous amount of data; less known is its goal: predicting the next word in a sentence, or finding the missing word in a sentence. To achieve this, it develops a deep understanding of the "meaning" of words, of the habits of humans, of our perceptions. To perform most efficiently, quite a lot of raw information gets saved within the model, and to perform best, it saves the most important things, therefore realizing "relevance realization". So, I disagree with your statement that AI cannot do "relevance realization". But I agree with everything else: AI, and more specifically machine learning, is just a big pile of math in a program, crunching a huge amount of data while trying to optimize a human-assigned task. It is nonsensical to say that it has feelings or emotions, or that it is self-aware. For this to change, we need to fundamentally change the way we do AI.
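For anyone curious what the attention mechanism mentioned above actually computes, here is a minimal sketch. The tiny token vectors are made up, and the query/key/value projections are collapsed to the identity; real transformers learn separate projections and use many heads, so this is only the core idea, not GPT-3's actual code:

```python
# Scaled dot-product attention, the heart of "Attention Is All You Need".
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Each token's output is a weighted mix of all tokens' values,
    weighted by how strongly its query matches the other tokens' keys."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # token-to-token relevance scores
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V, weights

# Three toy 4-dimensional token embeddings (imagine "step", "on", "gas").
X = np.array([[1.0, 0.0, 0.5, 0.1],
              [0.2, 1.0, 0.0, 0.3],
              [0.9, 0.1, 0.4, 0.0]])
out, w = attention(X, X, X)   # self-attention with identity projections
print(np.round(w, 2))         # how much each token "attends" to the others
```

Each row of the printed weight matrix shows how much one token draws on the others, which is exactly the "grasping context" the comment describes.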
@jonbbbb
@jonbbbb 10 ай бұрын
When you reduce AI to a big pile of math, how is that different than when people reduce human emotion to a balance of neurochemicals in the brain? I think while AI might have different feelings, emotions, and awareness than we do, we can't really say that it has none. Everything our brains do is interpretation. We interpret a certain level of neurochemicals as "feeling sad" -- but if it's just interpretation, then maybe the AI interprets certain values somewhere in its matrices as "feeling sad" too.
@DeusExNihilo
@DeusExNihilo Жыл бұрын
Hearing terms like "combinatorially explosive" and "relevance realization", I know my boi here has been listening to Vervaeke.
@iulianandries2647
@iulianandries2647 Ай бұрын
You will have millions of subs.
@chengong388
@chengong388 Жыл бұрын
That's not it. Let me give you a simple example: AlphaGo and other machine-learning-based chess/board game algorithms. Obviously the number of possible moves explodes quickly down the line, and what these programs do is use a neural network to quickly predict the relevance of each move; a classical algorithm then uses this information to only play out the more relevant moves and see which ones end up winning the most. Is this not a perfect example of machine-learned relevance? Sure, this only happens in a very limited domain of intelligence, but here's another problem: how do you even tell whether GPT-3 can do this? It's a black box; you don't know what's going on inside. We can point to tasks it can't perform well, but we have no idea how it performs those tasks.
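A rough sketch of that two-part recipe, with a random stand-in for the trained policy network. All names and numbers are invented for illustration, and AlphaGo's real search is Monte Carlo tree search rather than the one-step evaluation shown here:

```python
# A learned policy scores candidate moves; the search expands only the top
# few instead of all of them. random.random() stands in for a trained net.
import random

def policy_scores(state, moves):
    # Stand-in for a neural network's move-relevance predictions.
    random.seed(hash((state, tuple(moves))) % (2**32))
    return {m: random.random() for m in moves}

def pruned_search(state, legal_moves, evaluate, top_k=3):
    """Expand only the top_k most 'relevant' moves per the policy."""
    scores = policy_scores(state, legal_moves)
    candidates = sorted(legal_moves, key=scores.get, reverse=True)[:top_k]
    # Evaluate only the shortlisted moves (in AlphaGo this is where the
    # tree-search rollouts would happen).
    return max(candidates, key=lambda m: evaluate(state, m))

best = pruned_search("position-1", list(range(50)),
                     evaluate=lambda s, m: -abs(m - 20))
print(best)  # the search touched 3 moves, not all 50
```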
@mihailmilev9909
@mihailmilev9909 9 ай бұрын
That one guy who filled ten notebooks when he sees the tile color solution:
@Nick12_45
@Nick12_45 9 ай бұрын
WHAT IS HE
@runawaytohide.com_watistisname
@runawaytohide.com_watistisname Жыл бұрын
Well well, the thing is, people are currently developing analog computers, which remove the strict 0-or-1 baseline and allow a more fluid spectrum of values in between. I honestly see general-purpose AI being made in 20 years, but mostly the purpose of AI is to be specialized to a given purpose instead of trying to be capable of everything. You wouldn't use an axe for playing a guitar, and similarly I see no use in creating a general-purpose AI other than it being really fun, of course.
@GrimSleepy
@GrimSleepy Жыл бұрын
Any chance an object's relevance value is related to its potential energy, its proximity to, and its necessity for the continued operation of the facility?
@duncanclarke
@duncanclarke Жыл бұрын
The relevance value of white's queen piece in a chess match isn't related to the potential energy, its proximity to, and necessity for continued operations of the apartment in which the players are playing. The difficulty with relevance as such is that it can't be determined algorithmically in that way, since there will always be counterexamples. It's something we intuit consciously
@GrimSleepy
@GrimSleepy Жыл бұрын
@@duncanclarke Touché! Outside of energy potential, I was leaning towards a possible temporal solution... I wonder if something akin to an 'IF, AND, OR' function involving time and energy potentials would work. It's fun trying to infer all the variables of a system and assess the degrees of relevance of all its working parts. Challenging, at least to me.
@duncanclarke
@duncanclarke Жыл бұрын
@@GrimSleepy It's a tricky thing to do, because when we're sufficiently familiar with a type of problem (e.g. chess, communication, grocery shopping, writing a paper, etc), the relevant variables just "jump out" at us, and we're immediately aware of them. Formalizing such a process will be the greatest challenge for AI in my view, since I don't think the way that *we* do it is purely algorithmic or logical.
@sooraj1104
@sooraj1104 2 жыл бұрын
New subscriber 👍
@micahchurch5733
@micahchurch5733 Жыл бұрын
Seems like the frame problem is related to analysis paralysis
@you7432
@you7432 Жыл бұрын
With the frame problem, I don't think this is quite fair to the AI. Wouldn't humans also sometimes fail to discover relevance in certain high-difficulty situations? For instance, if there were 4 wheels and a repairable wagon (missing its wheels) buried in the ground near some concrete blocks (of a reasonable size) you were asked to pick up, not every person would realize the wagon could be fixed and used to load the concrete blocks; many would just try to carry the blocks. I'm not trying to say humans are stupid, but my point is: don't humans also have some innate heuristic that tries to find a shortcut to relevance? We don't go through every possibility; we sort out what we know and what we don't know, and determine what becomes immediately effective using some heuristic. I don't quite see why an AI can't learn to do this as well, since relevance realization is based on some sort of shortcut, only in the far future. I say the far future because I think relevance requires a very, very large amount of general knowledge and processing power. Simple AIs definitely won't be able to do it; only AIs nearing true intelligence with massive amounts of data could.
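A toy version of that "heuristic shortcut" in Python, assuming an invented empty grid world and the standard Manhattan-distance heuristic; it's only meant to show a heuristic steering search away from "every possibility":

```python
# Best-first search: expand states in order of heuristic promise instead of
# enumerating everything. The 20x20 grid here is made up for the example.
import heapq

def best_first(start, goal, neighbors, heuristic):
    """Expand states in order of heuristic promise; count expansions."""
    frontier, seen, expanded = [(heuristic(start, goal), start)], {start}, 0
    while frontier:
        _, state = heapq.heappop(frontier)
        expanded += 1
        if state == goal:
            return expanded
        for nxt in neighbors(state):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (heuristic(nxt, goal), nxt))
    return None

grid_neighbors = lambda p: [(p[0]+dx, p[1]+dy)
                            for dx, dy in ((1,0),(-1,0),(0,1),(0,-1))
                            if 0 <= p[0]+dx < 20 and 0 <= p[1]+dy < 20]
manhattan = lambda a, b: abs(a[0]-b[0]) + abs(a[1]-b[1])
print(best_first((0, 0), (19, 19), grid_neighbors, manhattan))
# Far fewer expansions than the 400 states exhaustive search would touch.
```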
@judgeomega
@judgeomega 2 жыл бұрын
11:00 "we are self organizing... relevance realization seems to depend on this structure" that is an unsupported leap in logic. just because birds fly by flapping their wings doesnt mean airplanes must do so also.
@YUSUFFAWWAZBINFADHLULLAHMoe
@YUSUFFAWWAZBINFADHLULLAHMoe 2 ай бұрын
To be honest, I think we all should follow Dune's philosophy: banning AI, maximizing human intelligence, and ACTUALLY IMPROVING THE SCHOOL SYSTEM.
@lambda653
@lambda653 Жыл бұрын
11:45 Your argument is that relevance realization requires an environment that defines what is and isn't relevant to the agent through instincts and intuition, and since computers aren't intimately connected to their environment, they can never truly find relevance in the real world. But this theory is contradicted by the idea of Turing completeness: every well-defined process can be simulated by a computer given infinite tape and runtime, and from what we understand about physics, our universe is most likely computable, because the process doesn't reference itself. So all you would need to do to create digital general intelligence is simulate a human in a room they can live in, and that would be computable. Obviously it's not tractable, but it demonstrates that we can theoretically simulate almost anything with computers, which means we can simulate an environment shaped like the real world and use that environment to give the agent the basic relevance recognition necessary for the real world. And after it has the basics, because it's self-aware, it can use that basic relevance recognition to improve itself through images, videos, communication, experiments, etc. Humans didn't evolve with a conception of space travel or computers; they didn't even evolve with a conception of outer space or machines. Despite that, we are able to comprehend these things because we have chains of reasoning that connect these concepts back to our fundamental instincts and intuitions. So as long as we have enough time to reason through these concepts, we can always extend those chains of reasoning to understand anything, without needing to evolve new instincts and intuitions through an evolutionary process.
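To make the appeal to Turing completeness concrete, here is a minimal Turing machine simulator; the transition table (a toy unary incrementer) is invented for the example, but the shape is the general one: any well-defined symbolic process can be written as such a table:

```python
# A minimal Turing machine: a sparse "infinite" tape, a head, and a table
# of (state, symbol) -> (write, move, next_state) rules.
def run_tm(tape, rules, state="start", head=0, max_steps=100):
    tape = dict(enumerate(tape))                 # sparse tape
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")             # "_" = blank cell
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += {"R": 1, "L": -1}[move]
    return "".join(tape[i] for i in sorted(tape))

# Rules: scan right over 1s, write a 1 on the first blank, halt.
rules = {("start", "1"): ("1", "R", "start"),
         ("start", "_"): ("1", "R", "halt")}
print(run_tm("111", rules))   # -> "1111"
```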
@MyXAHOB
@MyXAHOB 9 ай бұрын
great essay
@babydragon2047
@babydragon2047 Жыл бұрын
Do you do book reviews? Link in bio🤞
@soffren
@soffren Жыл бұрын
LaMDA solved language, and DALL-E solved categorization. So we're just missing problem solving and relevance, and then we somehow plug all of them into one coherent system? Sounds fun.
@fredwood1490
@fredwood1490 9 ай бұрын
It is said that all the other animals call us "That crazy Ape". When AI can go crazy and still function, it might become the master of Humans.
@TheSoutys
@TheSoutys Жыл бұрын
You make cool videos
@Not.a.bird.Person
@Not.a.bird.Person Жыл бұрын
I have a problem with the framing of this video's conclusion. It is false, demonstrably so. Relevance realization has already been achieved, and it is an emergent property, not one that is intrinsic in and of itself. We have examples of neural networks developing relevance realization; in fact, I would argue this is the essence of what these machines are built for when combined with good learning algorithms. My point is that these programs mathematically push for pattern recognition where the goal is to realize which parts of the given information are relevant or not. We also train these systems the same way humans train to recognize patterns of relevance: by trial and error. Are neural networks comparable to an AGI? I don't think so, but my point is really just that relevance is not the defining element of why AIs are not considered to be at a human level yet, because we already have this feature embedded.

A fair question could be asked about how an AI could know whether the information it is being given is relevant at all, to which my answer would be that we don't even have such a thing for humans to begin with. Humans often make mistakes because of bad framing when problem solving, and our senses deceive us, in the same manner that the information fed to neural networks can be irrelevant and deceiving. The problem of framing is therefore not necessarily as relevant as we think for assessing whether an AI is on the same intelligence level as a human.

Another fair question would be to ask for an example of emergent relevance realization in neural networks. To this I would answer: most neural networks, and if you want specific examples at play, AlphaStar and AlphaGo are perfect ones. To play at the highest levels of professional games, these AIs have no choice but to determine which action is relevant to their success condition, given limited live-fed information and previous training as a bias towards action.

Something else I would like to add is that the problem is in the framing of what we are trying to achieve when we say we want to create human-level artificial intelligence. Human intelligence is not a simple box of information processing; there are many levels of processes, conscious, unconscious, and physiological, that are related to the functions of our brains. What our intelligence is built to do is framed by an evolutionary pattern-recognition mechanism of survival, combined with automatic unconscious living processes. Humans' success condition is essentially to "not die and reproduce"; it's only an emergent property of this success condition that we developed fancy organizations and technologies to achieve it, because they made our success condition easier to attain. If our stated goal for an AI is only for it to develop new technologies, I would question whether this is even a "human-level" artificial intelligence, simply because the stated goal is different from what humans achieve on the whole to begin with. We don't just develop technologies; typically, we develop them to solve problems related to improving our odds of attaining our success condition of living and reproducing.
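A tiny, self-contained demonstration of relevance emerging from trial and error, using plain logistic regression on synthetic data. This is invented for illustration and is obviously nothing like AlphaGo in scale, but the principle is the same: the training signal alone pushes weight onto the relevant input:

```python
# Train a linear model on data where only feature 0 matters, and watch the
# weight on the irrelevant noise feature stay near zero.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))          # feature 0: signal, feature 1: noise
y = (X[:, 0] > 0).astype(float)        # labels depend on feature 0 only

w = np.zeros(2)
for _ in range(2000):                  # plain logistic-regression gradient descent
    p = 1 / (1 + np.exp(-X @ w))       # predicted probabilities
    w -= 0.1 * X.T @ (p - y) / len(y)  # gradient step on the log loss

print(np.round(w, 2))  # e.g. a large weight on feature 0, ~0 on feature 1
```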
@xiurz
@xiurz 6 ай бұрын
What's the reggae tune playing in the background? It's so catchy.
@chel8568
@chel8568 10 ай бұрын
Isn't this what machine learning is supposed to get at, learning which inputs to weigh more or less depending on circumstance? I believe this is mostly a problem of input, since we can only put in ones and zeroes, and in quite limited quality at that. Humans receive tons of visual/audio/smell/touch information (received chemically) every second, and our brains hold a ridiculous amount of information (I haven't found a trustworthy source, but the numbers vary from hundreds to millions of terabytes), and that's not including the spinal cord, which is responsible for a lot of the unconscious decisions we make. Also, our bodies change over time, which changes what inputs we receive (our sense organs age, our hormone systems change, etc.). I believe we don't have a model that big, or with input that complex. I believe a bigger problem arises with defining WHAT to optimize for. If you will: we don't know the human brain's terminal goal, and it might be critical to developing consciousness. Do we even know whether it's genetically determined that we'll have one? Or do we hold the ability to develop it given the right input?
@user-bu4yb9ng7r
@user-bu4yb9ng7r 3 ай бұрын
"it depends what you mean by ai and computation" Jordan Peterson has entered the chat
@modolief
@modolief 9 ай бұрын
Just ... wow.
@eraticobserver5631
@eraticobserver5631 2 жыл бұрын
Lol'd at 5:47
@landynbrinyark718
@landynbrinyark718 8 ай бұрын
I feel like the robot blowing up because of the bomb was a little realistic to how a caveman would react to the bomb. How would they know the bomb is dangerous? It's just an unknown object. I feel like intelligence is adaptability, curiosity, and the ability to learn; that's pretty much what a lot of intelligent animals have. They have fun because a thing interests them, and we get bored because a thing is already known and/or uninteresting. And if that robot had a scanner and could learn and adapt, as well as research the objects it's seeing and link them up to what they are, why they are, and how they are, all of that could decide the outcome. I don't disagree with everything: the way we pick up on relevant information seems like a pretty good skill to have for intelligence. But again, it also comes down to adaptability. If the robot could do all of that, it could run a thought process on how to eliminate the threat, aka the bomb, which we know it could already do, because we know chess AIs can respond to threats against a piece. With all the information it gathered, it could try to "adapt" toward a solution. In chess, AlphaZero became so good because it played a lot of games against itself and learned, at least I think that's how it happened lol. It learned, figured out what the threats were, and adapted. Obviously, if it ended at adapting it'd never improve, so it also needs to remember the information. Now, I don't know how to create any AI, so I'm probably wrong.
@wrongin8992
@wrongin8992 Жыл бұрын
The one who knows will always be in the middle position; the ignorant will either overestimate AI or underestimate it. I've been saying this philosophical aspect of AI to my friends, but they don't believe me. They think AI could become conscious on the current foundations of AI, which is basically impossible, same with that Google employee who said Google LaMDA is sentient.
@gatpm
@gatpm 8 ай бұрын
I'm dead, the gringo used the Nazaré Tedesco meme in exactly the right context!!!
@Aluenvey
@Aluenvey 8 ай бұрын
Now to go use a grapefruit as a computer mouse. Jk jk. Things that are blue, circular, or filled with red bean paste tend to be moving targets. Even building an image-to-text generator that associates images with colors means arbitrarily assigning which text strings are associated with which colors. But creating a thing that can make sense of colors by associating them with an image means finding a way to associate found colors with known objects, the reverse of breaking tasks down into smaller parts. One approach I'm considering is a linguistic approach to AI development.
@v.v.9.9.
@v.v.9.9. Жыл бұрын
In Portuguese we don't say "pay attention", we say "provide attention" 😂 (Just a random observation)
@wenternav33
@wenternav33 2 ай бұрын
that is exactly what AI does
@toi_techno
@toi_techno Жыл бұрын
I would much rather have the Google chatbot that can simulate intelligent conversation for company than a dog.