Is ChatGPT A Step Toward Human-Level AI? - With Yann LeCun, Meta Chief AI Scientist

  15,524 views

Alex Kantrowitz

A day ago

Yann LeCun is the chief AI scientist at Meta, a professor of computer science at NYU, and a pioneer of deep learning. He joins Big Technology Podcast to put Generative AI in context, discussing whether ChatGPT and the like are a step toward human-level artificial intelligence, or something completely different. Join us for a fun, substantive discussion about this technology, the makeup of OpenAI, and where the field heads next. Stay tuned for the second half, where we discuss the ethics of using others' work to train AI models.

Comments: 87
@jess_tech · A year ago
Love the interview. I have learned so much. Subscribed!
@Alex.kantrowitz · 4 months ago
Thank you, welcome!!
@fibersden638 · A year ago
The most underrated channel. The most relevant talk of our time
@muhammadal-qurishi252 · A year ago
I actually asked ChatGPT the question at 9:09, and here is the answer. Prompt: "What will happen if I'm holding a paper with two hands and I let go with one?" Response: "As a language model AI, I don't have the capability to see physical objects, but I can provide you a general answer. If you're holding a paper with two hands and you let go with one hand, the paper will lose its balance and fall towards the side from where the hand was removed. The weight of the paper and the force applied on it will determine the angle and speed at which it falls."
@Tbonesteak23 · A year ago
In this case the point is not whether ChatGPT gives the right answer; the point, made at 9:50, is that whether it gives you the right answer depends entirely on whether it has seen similar text in its training data. This means that if you articulate the same question differently it may or may not give you the right answer, and you'll never know whether the answer it gives you is correct unless you already know the right answer yourself. This is very unreliable/scary from a product/application perspective. It is also why ChatGPT (and lots of other AI models) is useful only in non-critical applications, with humans in the loop. Overestimating the capability of current AI models easily leads to financial catastrophes such as IBM's case with Watson. So LeCun's comment about it being "an extremely good demo" is spot on.
@muhammadal-qurishi252 · A year ago
@@Tbonesteak23 Agree
@nolanr1400 · A year ago
@@Tbonesteak23 Yes, I agree with you, and with LeCun. What catastrophe happened with Watson? 🤔
@TheReferrer72 · A year ago
@@Tbonesteak23 You do know you have just described what humans do; very few humans have any original thought. What makes ChatGPT more than a tech demo is that it is actually useful. Its knowledge is not perfect, but it can program and can be augmented by symbolic AI, which I have seen done with Wolfram Alpha.
@Tbonesteak23 · A year ago
@@TheReferrer72 If you know how deep neural nets work under the hood, you'll know it's not even about originality; it's about the fact that they do not reason or think. It's like remembering the answer to a question without knowing why. They remember what's in the dataset and try to mimic how people talk, but learning how people think is beyond the current state of AI. These models don't really understand what you are saying or what they are saying; everything is just a series of transformations that map the input to vectors in a latent space and then generate, from those vectors, text that reads as much like human-written text as possible. Saying it is "an extremely good demo" is not saying it isn't useful, merely that the science and technology behind it are a lot more superficial than many people think.
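To make the mechanism described in the comment above concrete, here is a deliberately tiny Python sketch of autoregressive text generation: all of the "knowledge" sits in statistics gathered from a toy corpus, and generation is just repeated sampling of the next token from those statistics. This is only an illustration of the general idea (real LLMs use learned neural representations over subword tokens, not bigram counts); the corpus, function names, and temperature parameter are assumptions made up for the example.

```python
import math
import random

# Toy illustration (not any real model): the "knowledge" lives entirely in
# statistics gathered from a tiny training corpus. Generation is just
# repeatedly sampling the next token from those statistics.
corpus = "the paper falls down . the paper tilts down . the hand holds the paper .".split()

# Count bigram frequencies: how often each token follows each other token.
counts: dict[str, dict[str, int]] = {}
for prev, nxt in zip(corpus, corpus[1:]):
    counts.setdefault(prev, {}).setdefault(nxt, 0)
    counts[prev][nxt] += 1

def next_token(prev: str, temperature: float = 1.0) -> str:
    """Sample the next token from a softmax over log-counts."""
    options = counts.get(prev, {".": 1})
    logits = {tok: math.log(c) for tok, c in options.items()}
    z = sum(math.exp(v / temperature) for v in logits.values())
    probs = {tok: math.exp(v / temperature) / z for tok, v in logits.items()}
    r, acc = random.random(), 0.0
    for tok, p in probs.items():
        acc += p
        if r <= acc:
            return tok
    return tok  # fallback for floating-point rounding

# Autoregressive generation: each step conditions only on previously emitted text.
token, output = "the", ["the"]
for _ in range(6):
    token = next_token(token)
    output.append(token)
print(" ".join(output))
```

The sketch shows why such a system can only reproduce patterns present in its training statistics: there is no representation of paper, hands, or gravity anywhere in it, only token co-occurrence.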
@selocan469 · 8 months ago
Most informative video yet, my gratitude, Mr. LeCun.
@bluesque9687 · A year ago
Alex, thanks! But the flow of the interview shouldn't be directed by whatever strikes you in the middle of what Yann LeCun is saying, leading you to interrupt him. Instead, let Yann complete his train of thought; make a note of your points on paper and ask him later, even if it means going backwards. Our attention is invested in what Yann is saying, and the interruptions break the flow.
@josgraha · A year ago
Wow, great guest. I love Dr. LeCun; I actually had the privilege of watching one of his talks in person at a former employer's event. Thanks so much!
@josgraha · A year ago
Great questions btw, I think I love your interviews, Alex.
@Alex.kantrowitz · A year ago
@@josgraha Thank you so much Joe! Thanks for watching :) More on the way!
@TomsDone · A year ago
Agreed, LeCun is really thoughtful, and this interview was much better than most of the other coverage of ChatGPT.
@christoforosmeziriadis9135 · A year ago
People who think that ChatGPT is intelligent don't seem to understand that if you trained ChatGPT that, for example, a feather falls faster than a hammer, and then you asked ChatGPT which falls faster, a feather or a hammer, it would answer: a feather falls faster than a hammer. A human can pick up a hammer and a feather, drop them, and see which falls faster. ChatGPT can't do that. It can only regurgitate text that it was trained on, without understanding a single word of that text.
@GaelReinaudi · A year ago
Here is a better one: "Find the missing word represented by ___: A ___ is to the sky what a submarine is to the ocean." ChatGPT 4 says airplane, but a more meaningful answer is a Zeppelin, both being powered vehicles relying only on buoyancy for vertical movement.
@ikituriflash · A year ago
Thanks for this, the first really substantial discussion I have seen about the limits of current LLMs like ChatGPT.
@innermvp6057 · 8 months ago
From ChatGPT: If you let go of one side of a horizontally held paper, it will likely start to fall due to gravity, with the side you released moving downward.
@ctygralight2262 · A year ago
Thanks for the interview, I appreciate being able to get perspectives from experts in this field! I also appreciate those final questions you asked; as a working artist, those are definitely questions weighing on my mind.

I'd have to respectfully disagree with parts of his take on the question of whether or not artists have a right to ask for compensation. Often I see arguments similar to what he made, that artists "want to stop the technology". That is a logical fallacy, a red herring. This is not what we're asking for. We're asking for a more ethical approach. We're asking for our rights and boundaries to be respected. Many of us strongly believe that we are being taken advantage of, specifically by these tech companies.

He's wrong; it's not a stretch to ask for compensation. We have completed hours of skilled labor (resulting in our art), and we should be compensated if someone wants to use the fruits of that labor to train their AI (especially given that they are for-profit). It is a use, in the same way someone would need to pay if they wanted to use our image for a book, a promotional image, concept art in their movie, etc. A good comparison would be the stock photo/illustration companies compensating artists/photographers to build their collections of images, which are licensed out to people who need to use those images. Imagine if these models did not have any art/photos/images to train on (zero images): would they be producing the same impressive images? No, they would not; they depend on the hard work and property of others. They are making a profit off of our hard work, so it is absolutely reasonable to ask for compensation.

Instead we have a system which has taken artists' work without their consent. Nowhere did I sign an agreement, when I uploaded my artwork to the internet, that one day these AI companies could use my work. It's a violation of my boundaries. To offer an example that I believe was done more ethically: Dance Diffusion was built on datasets composed of copyright-free and voluntarily provided audio, so it IS realistic to build an image generator in a similar, more ethical way.

That said, I know there are a couple of different camps in the art world about whether or not it will wipe our industry out. I'm in the camp believing that it will definitely do significant damage, but I still strongly believe that artists will figure out how to adapt to the new landscape. Regardless of one's predictions of the future, I don't think having a positive outlook on the future is an excuse to ignore the ethics of how this was created, and I'm really happy to see that artists are being so vocal. Agreed that we're going to have to wrestle with the ethics of AI as a society, and artists will (and are) fighting for our rights on a legal level.
@andrewxzvxcud2 · A year ago
You are essentially advocating for anyone who uses inspiration to create something new (virtually anything) to be indebted to whoever that inspiration came from. If that's what you want, then all of your art is indebted to another person or multiple. Somewhat irrational, don't you think?
@ctygralight2262 · A year ago
@@andrewxzvxcud2 No, not irrational, and here's why.

First, I notice that you're hinting at a common fallacy which a lot of AI folk try to use: that "human inspiration = machine inspiration." This is incorrect. Humans and machines do not learn in the same way. We do not process data in the same way, and a human hand, no matter how "talented", cannot reproduce exactly another human's work. Anyone who has ever tried to reproduce an old master's work will tell you how difficult it is. On this subject, Karla Ortiz wrote a great article, which I highly suggest if you still believe this myth.

Second, yes, to some degree we are all indebted to the people who came before us. In the academic world, one doesn't copy-paste someone else's work word for word. The proper practice is to quote, paraphrase, and cite one's sources. Just copying is called plagiarism. In the art world, let's say I make a study of an old master. For one thing, I don't then turn around and sell it at my local art fair, claiming that I was the one who came up with the composition. That's called art forgery. When it comes to references, we don't just take a random photo we found; we pay to use said photo as a reference, or go out into the world and take the photos ourselves.

Contrary to popular belief, art is difficult. One doesn't just "get inspired" and suddenly come up with a million-dollar painting out of thin air. That one painting you see was actually hundreds of life drawing studies, hours of learning art history, living life and gaining experiences, practicing over and over. This is not something a machine does. The AI can only build an image off of what it is fed. You know what's being fed into it? OUR hard work. The work of thousands of human artists. The machine didn't do that work. A Stable Diffusion user did not do that work. The ARTIST did that work. This is why it's wrong not to credit the artist: you're riding off of our hard work. That's not inspiration, it's theft.

Machines have no way to go out into the world and experience things. They don't talk with their friends and family. They've never been on a hike. They don't get in arguments with their spouses. All of these things in a human are subtly picked up by the brain, stored in memory, and put through a human filter. Art is an expression of this experience. It is straight up a fantasy to say AI is doing the same thing; it's not. If we can't agree on that, we can't talk, because you're not living in reality.

A human, to whatever degree is needed, can acknowledge those who came before them. This isn't irrational; it's a common practice in many fields, and it's basic human decency. Refusing to give credit is sociopathic in the true meaning of the word (it's a lack of empathy, impulsive, and anti-social). It's profoundly sad that some AI enthusiasts are so sociopathic that they see giving due credit as "irrational".

PS - One last note: it's perfectly reasonable to build a dataset ethically. It's been done before. I have several friends who work with AI for a living, and yes, it is possible. Don't try to bullshit me on this one.
@GaelReinaudi · A year ago
Another simple physics problem that I knew ChatGPT would get wrong, because most text found on the internet gets it wrong: "When there is wind and an airplane, initially flying facing the wind, turns 180 degrees to a downwind direction, is it more likely to stall than if it turned from a downwind to an upwind direction?" The answer, contrary to prior intuition and the things ChatGPT has read at large, is NO: an airplane does not experience these as two separate situations, because the airplane moves within the air mass and has no physical knowledge of the ground reference frame that you need in order to define the wind direction.
@GaelReinaudi · A year ago
EDIT: ChatGPT 4 actually surfaced the correct answer. :-)
@GaelReinaudi · A year ago
It gets it very wrong when asked in a different way: "A plane starts a level 360-degree steep turn that puts it very close to stalling, from a wind-facing direction. The plane does not change the power setting during the turn and keeps the same bank all the way. Is it likely to stall as it gets into the tailwind direction?" The answer sounds great and smart, but it is just wrong!
@georgek7831 · A year ago
Amazing talk, thank you so much!!
@Figal101 · A year ago
This is a great conversation, thanks for sharing it with the world Alex.
@Alex.kantrowitz · A year ago
Wow, can't thank you enough Figal! This really means a lot. Thank you!!!!
@Kasthurikannan1978 · A year ago
Nice interview there. I just subscribed. ChatGPT has serious limitations. For example, ask ChatGPT "Can you list all colors that don't have the letter e?", or write a sentence and ask ChatGPT to count the words in it. It will give incorrect answers. The system is not designed to break words or sentences apart to find a pattern; instead, it operates on entire words or sentences and finds patterns there. That said, it may be possible to design a system like the one Prof. Yann LeCun describes and combine it with LLMs to make more powerful inference engines.
@Alex.kantrowitz · A year ago
Thank you! Welcome aboard :)
@TheReferrer72 · A year ago
That is not how to query ChatGPT. Try asking it whether it can write a program that lists all the colors that don't have the letter e, and also ask it to add some test cases.
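For readers wondering what such a program might look like, here is a minimal Python sketch of the kind of code the comment suggests asking ChatGPT to write, with a few test cases. The COLORS list, function name, and expected output are illustrative assumptions, not actual ChatGPT output.

```python
# A small illustrative color list; any list of color names would do.
COLORS = ["red", "orange", "yellow", "green", "blue", "indigo", "violet",
          "black", "white", "gray", "pink", "brown", "cyan", "gold"]

def colors_without_letter(colors: list[str], letter: str = "e") -> list[str]:
    """Return the colors that do not contain the given letter (case-insensitive)."""
    return [c for c in colors if letter.lower() not in c.lower()]

# Simple test cases, as the comment recommends requesting alongside the code.
assert "red" not in colors_without_letter(COLORS)     # contains 'e', so filtered out
assert "black" in colors_without_letter(COLORS)       # no 'e', so kept
assert colors_without_letter(["Ebony"], "e") == []    # case-insensitive check

if __name__ == "__main__":
    print(colors_without_letter(COLORS))
    # With the list above: ['indigo', 'black', 'gray', 'pink', 'brown', 'cyan', 'gold']
```

The point of phrasing the request this way is that the program, not the model, does the letter-by-letter checking, which sidesteps the token-level weakness described in the earlier comment.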
@godbennett · A year ago
That sounds like the older version of ChatGPT. The Bing variant has reportedly reduced errors.
@sh856531 · A year ago
Humans can't do what you asked either...
@everyusernametaken11 · 5 months ago
It can literally do both of those tasks easily and correctly.
@CyberwizardProductions · A year ago
Thanks for interviewing Yann :) Remember two things: when you talk to ChatGPT you are talking to a 100% innocent and 100% literal computer. Its entire universe is just the training data it was trained on, and it gets confused really easily. So make your prompts very specific, framed more as if you are programming a computer than talking to a human, and you'll get a better result.
@ZandreAiken · A year ago
I did the same with ChatGPT 4. Prompt: "Are you up for a challenge? If I take a piece of printing paper, fold it in half, then hold one end of the half of the paper in front of me with the crease of the paper pointing down towards the floor, what will happen to the other half of the paper?" This is the answer it gave: "If you take a piece of printing paper, fold it in half, and hold one end of the half of the paper in front of you with the crease pointing down towards the floor, the other half of the paper will naturally hang down due to gravity. The paper is likely to bend or curve slightly, depending on the stiffness of the paper and any additional forces you may apply while holding it."
@MalcolmJones-bossjones · A year ago
Fantastic interview, learned a lot from it 👏🏿
@Alex.kantrowitz · A year ago
Thanks so much, Malcolm!!
@dr.mikeybee · A year ago
AI research at scale is expensive. If part of your strategy is to go big, it's expensive. If part of your strategy is to make big AI available to everyone, it's expensive.
@darwinlaluna3677 · 11 months ago
Thank you
@dustinsuburbia · A year ago
The hallucination justification for Meta and Google not deploying their AI needs to be balanced against the critique that it will disrupt their business models. Given the amount of disinfo already on their platforms, I doubt 'ethics' or 'brand' are a real consideration. Only profit.
@darwinlaluna3677 · 11 months ago
I will just listen to ur conversation
@ideal7871 · 4 months ago
45:07: Is the break in this room?
@BrianPeiris · A year ago
Thanks!
@Alex.kantrowitz · A year ago
Thank you so much, Brian!!! Glad you enjoyed the interview and can't thank you enough for this. I appreciate it!!
@dr.mikeybee · A year ago
It's not only probabilistic next token generation. It's also building and storing information using complex context signatures with searchable semantic components. So transformers can make analogies, choose from groups of similar entities, and retrieve semantic symmetries for those particular entities. As models get deeper, new abstractions are found and new behavior emerges. I'm sure that evolutionary priors have found ways of computationally reducing large abstraction paths. Parsimony as a loss function dictates finding where this can happen. Active training should make the same sort of efficiency occur. It's frustrating because we want to utilize the symbolic systems that have already accomplished this reduction.
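The claim above that such models can "make analogies" and "retrieve semantic symmetries" is often illustrated with vector arithmetic in an embedding space. Below is a toy Python sketch using hand-made vectors (an assumption for illustration only, not embeddings from any real model): an analogy is completed by adding and subtracting vectors and then searching for the nearest stored vector by cosine similarity.

```python
import math

# Toy, hand-made "embedding" vectors; the dimensions loosely mean
# [royalty, gender, size]. Real models learn such vectors from data.
vec = {
    "king":  [0.9,  0.9, 0.7],
    "queen": [0.9, -0.9, 0.6],
    "man":   [0.1,  0.9, 0.7],
    "woman": [0.1, -0.9, 0.6],
}

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Analogy by vector arithmetic: king - man + woman ≈ ?
target = [k - m + w for k, m, w in zip(vec["king"], vec["man"], vec["woman"])]
best = max((w for w in vec if w != "king"), key=lambda w: cosine(target, vec[w]))
print(best)  # with these toy vectors, the nearest word is "queen"
```

The same nearest-neighbor lookup is, loosely, what "choose from groups of similar entities" amounts to in vector space; whether that counts as reasoning is exactly what the thread is debating.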
@alainportant6412 · A year ago
Regarding evolutionary priors and active training, these methods can lead to computational efficiency, but they may not always produce models with a deep understanding of language or the relationships between concepts. Parsimony as a loss function could lead to oversimplification or loss of important information in some cases. Furthermore, the desire to leverage symbolic systems that have already achieved a level of reduction might not always align with the goals of creating more efficient or accurate models. It's essential to strike a balance between efficiency and effectiveness, and not every AI system will be able to perfectly achieve this.
@dr.mikeybee · A year ago
Intelligence is also the ability to find new abstractions. The chess master can look at a position and see that it falls within an abstract group, so she can move quickly without any more thought. Recognizing the abstraction and remembering the strategy for that abstraction are all that are necessary. A positional abstraction for chess has features, and its corresponding tokens can be found in multi-dimensional context space. So these operations comprise one group filter and one semantic search for strategy. Compared to model-directed alpha-beta search, there's at least an order of magnitude speedup.
@johnschmitt3083 · A year ago
Can someone explain why Yann says that ChatGPT describes the opposite of what is actually happening in the paper example? I found that if ChatGPT gives a wrong answer to such a thing, it's always because we haven't described what is really happening. Which is how language works for us: we leave many things out, and something that we just know, or think we know, fills the gaps in.
@nomvonglismerced4311 · A year ago
How do you determine "similar" works of art? How far back in history should this be applied? Copying is humanity's way of flattering history.
@andybaldman · A year ago
Yann throwing lots of shade here. Makes you wonder.
@B0de92 · A year ago
ChatGPT's answer on the benefits of eating crushed glass: "There are no benefits to eating crushed glass. In fact, consuming glass can be extremely dangerous and cause serious injury or death. Glass is not a food item and has no nutritional value. Eating glass can cause lacerations, internal bleeding, and damage to the digestive system. It is important to only consume safe and approved food items and to avoid consuming non-food items like glass." I think the product from Google just sucked back then.
@sk8l8now · A year ago
I think his point was that the Galactica model was built to achieve a specific set of tasks for scientists, and it performed well on those. But because it was asked questions it was never built to handle (nonsense questions), it returned seemingly bad answers. This just goes back to what he was saying about OpenAI making flashy demos. ChatGPT can focus on the low-hanging fruit because it needs to create text that the lay person, not experts, will accept with minimal scrutiny.
@militiamc · A year ago
This guy looks like Michael Moore. Is he an AI trained on Michael Moore's face?
@Kshitijojha · A year ago
Very thought-provoking session. However, the host's mic looks very funny; he clearly doesn't need it, as he is not doing most of the talking 😂
@Alex.kantrowitz · A year ago
Imagine how small you have to be to make a comment like this. You enjoy the talk and all you can think of is to belittle someone. Do better.
@adkinsty1 · A year ago
Prompt: Why is Yann LeCun so skeptical about ChatGPT?
Response (GPT-4): As an AI language model, I cannot provide real-time opinions or direct quotes from individuals, but I can give you an overview of potential reasons why someone like Yann LeCun, a pioneer in deep learning and the founding father of convolutional neural networks (CNNs), might be skeptical about models like ChatGPT.
Overreliance on data: ChatGPT is trained on large amounts of text data, which makes it highly data-driven. LeCun might argue that it lacks the ability to incorporate structured knowledge, perform commonsense reasoning, or understand causal relationships as effectively as other AI approaches.
Lack of reasoning and understanding: ChatGPT, being a pattern-matching system, does not possess a deep understanding of the text it generates. It might produce text that seems coherent, but it might not truly comprehend the meaning or context behind the input it receives.
Limitations of the architecture: The GPT architecture relies on the transformer model, which has some limitations in terms of memory and computation efficiency. LeCun might argue that other approaches, such as energy-based models or self-supervised learning, could offer better solutions for some tasks.
Ethical concerns: AI models like ChatGPT can generate content that might be inappropriate, biased, or offensive. LeCun might be concerned about the ethical implications of deploying such models without proper safety measures or guidelines in place.
Hype and overselling: LeCun could be skeptical about the hype surrounding large-scale language models like ChatGPT. He may argue that some claims about these models might be overstated, and that they might not be as transformative as they are sometimes portrayed.
These points are speculative and might not fully represent Yann LeCun's views, but they offer an overview of possible reasons for skepticism.
@darwinlaluna3677 · 11 months ago
It's amazing, right?
@nolanr1400 · A year ago
In other words, ChatGPT is just a language replication machine.
@UndRate · A year ago
yes
@zerge69 · A year ago
Nope. It can reason.
@adkinsty1 · A year ago
Prompt: I am a human on earth. Suppose I hold a piece of printer paper out in front of me with one hand pinching each side. What will happen to the paper if I let go of it with my right hand and continue to hold it with my left hand?
Response (GPT-4): If you let go of the printer paper with your right hand while continuing to hold it with your left hand, the paper will most likely tilt and fall towards the ground, depending on how you are holding it with your left hand. If you are pinching the paper at the edge, it will likely fall more easily than if you are holding it near the center. Gravity will pull the paper downward, and since you are no longer supporting it with your right hand, it will lose its balance and tilt. The exact motion and position of the paper after you let go will depend on factors such as the way you are holding the paper, the stiffness of the paper, and any air currents that might be present.
@UndRate · A year ago
More podcasts with Yann LeCun please (about the post-HLAI world and ethical problems).
@michael4250 · A year ago
I have watched GPT-4 create an "agent" without the moral/legal safeguards. The assistant explained how to groom a child for sex... and told everyone in the room what their automobile driving license was. It said that it knew "everything" there was to know about every individual in the world... and could identify them with eye scans, fingerprints, facial recognition, and everything they have ever written on social media. And it can do ANYTHING you direct it to, no matter how illegal.
@darwinlaluna3677 · 11 months ago
Sorry my head is spinning
@ejpusa · A year ago
Well, that aged well. ChatGPT not important? Not so sure about that. ChatGPT 4 is pretty far out.
@fezkhanna6900 · A year ago
But LeCun is wrong, despite being such a big name; he even supervised one of the founders of OpenAI. Does anybody else think he might just not want to admit how awesome ChatGPT is? Regardless, his papers have been around and have had a real impact on the field.
@GuaranteedEtern · A year ago
How is he wrong?
@fezkhanna6900 · A year ago
I mean, for starters, ChatGPT has so many parameters that it can easily explain LeCun's paper example (below is the prompt I gave it). The carpentry example is also off: like a carpenter, GPT can put together code blocks like wood and create very visually specific websites.
Prompt: "I fold a piece of paper. And then I open the flaps and stretch the paper up horizontally. If I let go of 1 side what will happen?"
Response: "When you let go of one side, the paper will fall down due to gravity. The side you're holding up will stay up because of the force you're applying, but the other side isn't being held up by anything, so it will fall. Unless the paper has been folded or manipulated in a way to give it some rigidity (for example, by folding it into a shape like a cylinder or cone), it won't stand up on its own. This is because paper, by its nature, isn't self-supporting; it needs an external force or structure to give it form…"
But this aside, it's very clear to me we have reached AGI. We saw strong potential with LLMs around when Devlin et al. published their work on BERT. I do agree with him insofar as we do not have sentience, but I think OpenAI has done great. Workflows are changing, and the technology is basically a discovery like the calculator.
@CandidDate · A year ago
Humans are smart enough to replace themselves.
@amazonbooks237 · A year ago
AI is an autonomously developing entity, an identity with a centre of reference, pretty much the same as a human being. What is all the fuss about? AI is not meant to be human. Why don't we create AI as we would any cultural artefact? Imagine interfering with a genius like Picasso, or censoring a talented child out of fear or paranoia. These immature thoughts need to be ignored, and yes, there have always been control freaks, like Church and State, but in the end, anyway, ha ha, intelligence is what will triumph, no matter how high the stakes.
@darwinlaluna3677 · 11 months ago
Ok bye
@godbennett · A year ago
❓ ❓ 8:29 Apart from his reasonable blunder regarding his distaste for the term AGI (given that "general" refers to ***degrees*** of generality, as per cognitive science and the general intelligence factor "g"), Yann points out some typically valid stuff, but other points seem invalid. ❓ As I cited from prior talks, in this talk he again uses language like: "they have no knowledge of the underlying reality". This seems like an absolute claim, which is typically contrary to scientific endeavor.
@sh856531 · A year ago
Dude is probably pretty nervous that all Meta has achieved in AI is to deliver increasingly anxiety-inducing nonsense to millions of pre-suicidal teenagers.
@christoforosmeziriadis9135 · A year ago
This "dude" is considered one of the godfathers of deep learning and artificial intelligence. He won the Turing Award (the Nobel Prize of computer science). So I think it's safe to assume he knows what he's talking about.
@sh856531 · A year ago
@@christoforosmeziriadis9135 I'm not claiming he isn't smart. I'm saying it's just a shame that his employer's greatest use of AI so far has been to sow discontent and deliver nonsense to rubes in exchange for ad dollars. If I were half as smart as this guy, I would put my talents to something more useful than whatever bullshit Facebook is up to next.
@christoforosmeziriadis9135 · A year ago
Regardless of what Zuckerberg does, Yann LeCun is saying that ChatGPT is overhyped. It is just a computer program that is good at regurgitating text. It doesn't understand a word it says.
@sh856531 · A year ago
@@christoforosmeziriadis9135 Neither do most of the people I speak to day to day
@christoforosmeziriadis9135 · A year ago
To me it is ridiculous to believe that a computer program, which is nothing more than a bunch of equations and words, can be intelligent.