Better Architectures for Neural Networks - Chris Manning vs Yann LeCun

3,487 views

The Data Science Channel

9 months ago

Better Architectures for Neural Networks: a debate between Chris Manning and Yann LeCun on how better structure and techniques can be used to improve the performance of deep neural networks.

Comments: 5
@OnionKnight541
@OnionKnight541 7 months ago
There is an anthology from 2000 called "Minds, Brains, and Computers," and it is an amazing starting point for anyone interested in cognitive science (which is the bridge between plain computer science and artificial superintelligence).
@richardnunziata3221
@richardnunziata3221 5 months ago
I think Chris is looking more towards Geoffrey Hinton's capsule networks or GNNs. I do think CNNs create feature hierarchies, but they are kernel based, which is why you need many layers. None of these, I think, will solve the total problem, which lies in neurobiology and in machines other than tensor multipliers, such as liquid networks, Bayesian flow networks, and the like.
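To illustrate the "kernel based, so you need many layers" point, here is a minimal sketch (my own illustration in PyTorch, not code from the video; the TinyCNN name and layer widths are arbitrary). Each 3x3 kernel only sees a small window, so it is depth, not any single kernel, that grows the receptive field and lets later layers respond to larger patterns.

```python
# Illustrative sketch only (not from the video): three stacked 3x3
# convolutions. Each kernel sees only a small window, so depth is what
# grows the receptive field: 3x3 -> 5x5 -> 7x7 pixels here.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):  # hypothetical name, for illustration only
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),   # receptive field 3x3
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),  # receptive field 5x5
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),  # receptive field 7x7
        )

    def forward(self, x):
        return self.features(x)

if __name__ == "__main__":
    x = torch.randn(1, 3, 32, 32)   # one 32x32 RGB image
    print(TinyCNN()(x).shape)       # torch.Size([1, 64, 32, 32])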
@flareonspotify
@flareonspotify 7 months ago
Make the objective function gain more Shannon entropy.
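One possible reading of this suggestion, as a rough sketch rather than anything from the video: add a Shannon-entropy bonus to an ordinary classification loss, so training is nudged toward predictions whose distribution does not collapse. The function name, the beta weight, and the PyTorch framing are all my own illustrative choices.

```python
# Illustrative sketch only (one reading of the comment, not from the video):
# cross-entropy loss with a Shannon-entropy bonus, so gradient descent is
# nudged toward higher-entropy predictions. Names and beta are hypothetical.
import torch
import torch.nn.functional as F

def entropy_regularized_loss(logits, targets, beta=0.1):
    """Cross-entropy minus beta * mean Shannon entropy of the predictions."""
    ce = F.cross_entropy(logits, targets)
    probs = F.softmax(logits, dim=-1)
    entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=-1).mean()
    return ce - beta * entropy  # subtracting the bonus rewards higher entropy

logits = torch.randn(4, 10, requires_grad=True)   # toy batch, 10 classes
targets = torch.randint(0, 10, (4,))
print(entropy_regularized_loss(logits, targets))
```

In practice a term like this is usually a small regularizer (a confidence penalty) added to the main loss rather than the objective itself.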
@fredt3217
@fredt3217 4 months ago
What they are mainly looking for is how humans determine truth. This is easy for ordinary truths, but when humans do not want to accept a truth it gets difficult; since that too is based on physics, the equation is simple. The problem is that they do not have the structure humans have to complete it.

Take the baby example with gravity: the baby simply concentrates on the pattern of what the object is doing, and if no negative associations come up it accepts it, because it would take a negative not to. The same goes for words. It gets difficult in humans because we can also choose not to accept a truth for the same reason, so we accept truths that are not true if we want to. Then you need another, more complex equation to verify it. If we don't do this, those negative truths can be inputted and associated. But they don't have the structure needed for that.

If you want to know how humans learn so quickly, that is the basic reason and the mechanism: you associate one pattern with another. If the negative outweighs the positive, we don't make the association, since the mind goes to the negative rather than the positive, cutting the process off before it completes; it needs to snap the association into place in the mind. That is how true and false are born. It is very easy to trick the mind into denying truths this way.

For example, suppose I said that people are being abused all around the world because computer scientists spend more time helping governments carry out arbitrary laws than preventing them. Even though the first part is true, if you were a computer scientist it would mean accepting some of the blame, so you would want to deny it. But if we developed a program to detect arbitrary laws and give the reasons and negative consequences, it would stop them through detection and the negative associations. And once they accept this truth, the negatives of doing it would come up, so they would not want to accept it until they accept their share of the blame. Check: the most widely used reason to justify illegal laws is still in use even though it has caused more harm in the world than any other pattern.

I'm in the group that thinks computers are needed to detect illegal laws and explain why they are arbitrary, because such laws corrupt all the other fields that are supposed to stop them, and you cannot bribe, threaten, or discredit a computer with unjustified ignorance triggers the way you can with people. Because we don't have this, we don't have an end to laws used for crimes. And all because this truth gets pushed out by the negative associations of what would happen if computer scientists detected it. So to us the truth is that we know laws can be crimes and that we should be doing this, but it is not important in their minds even though it is. And they will think they are not to blame, even though they belong to two or more of the groups a computer would check to see how the crime got passed when it should not have.

But until you build the three association processes like humans have (the perceived state, day-long, long-term, and imagination), I have no idea how to do it, since you would not be able to check the data, consequences, and so on the way humans do. So how do you determine truths? There may be a way to run the negative-acceptance equation to see what the truth is and why they made that choice, but it would be less complicated to build the system the way it should be built, since you need all the parts. You can do simple language, but not complex language, without the prefrontal association processes, for example. So the answer to most of the questions here is: study truth and how humans determine it.

I'm working on a video about this, but not many will want to watch it since it is about 10 hours long, and it is about detecting racism, identifying those involved, and exposing them all. And I wasn't in the best mood. Long story short: if you want to build a computer that thinks like a human, you need to understand how your own mind works, not how to get neural nets to do tricks.
@russianbotfarm3036
@russianbotfarm3036 7 months ago
Even if human-style, structured learning proves inferior to 'pure', huge-data, (relatively) structureless learning, I'd still want to know how we do it.
Related videos:

Intro to Machine Learning & Neural Networks. How Do They Work?
1:42:18
Math and Science
130K views
Future of NLP - Chris Manning Stanford CoreNLP
9:41
The Data Science Channel
3.9K views
AI Native 2023 - Fireside Chat: What's Next for AI with Yann LeCun - Session #7
51:22
Heroes of NLP: Chris Manning
46:32
DeepLearningAI
16K views
Yann LeCun, Jerome Pesenti: AI, Extinction or Rennaissance? - TLF 2023
32:14
Geoffrey Hinton | Will digital intelligence replace biological intelligence?
1:58:38
Schwartz Reisman Institute
154K views
GEOMETRIC DEEP LEARNING BLUEPRINT
3:33:23
Machine Learning Street Talk
174K views
Andrew Ng and Chris Manning Discuss Natural Language Processing
47:28
Stanford Online
13K views