Professor Stuart Russell - The Long-Term Future of (Artificial) Intelligence

  104,182 views

CRASSH Cambridge

9 years ago

The Centre for the Study of Existential Risk is delighted to host Professor Stuart J. Russell (University of California, Berkeley) for a public lecture on Friday 15th May 2015.
The Long-Term Future of (Artificial) Intelligence
Abstract: The news media in recent months have been full of dire warnings about the risk that AI poses to the human race, coming from well-known figures such as Stephen Hawking, Elon Musk, and Bill Gates. Should we be concerned? If so, what can we do about it? While some in the mainstream AI community dismiss these concerns, I will argue instead that a fundamental reorientation of the field is required.
Stuart Russell is one of the leading figures in modern artificial intelligence. He is a professor of computer science and founder of the Center for Intelligent Systems at the University of California, Berkeley. He is the author of the textbook ‘Artificial Intelligence: A Modern Approach’, widely regarded as one of the standard textbooks in the field. Russell is on the Scientific Advisory Board of the Future of Life Institute and the Advisory Board of the Centre for the Study of Existential Risk.

Comments: 121
@ThaLime · 9 years ago
Great presentation; clear and concise points with strong arguments. A very good watch.
@VanIslandLights · 8 years ago
One of the many questions I have is whether we're attempting to create a new species or a new tool, and I think this should be made quite clear before we go too much further. The implications of either one of these decisions are massive, and step into grounds which no one person or organization, I feel, should be allowed to enter alone.
@franklandis2900 · 7 years ago
The people in AI are working on a new tool. Sometimes they imagine they are working on a new species. I am working on a new species. It won't come from AI, but eventually it will adapt that tool to its purposes.
@KenEwell · 9 years ago
What a great overview of the facts and issues involved in developing more intelligent and value-aligned computing and AI.
@PhilipAitken · 9 years ago
Any chance of getting the Q&A section of this lecture?
@lewisdavey466 · 9 years ago
Can we get the Q+A?
@ultraverydeepfield · 9 years ago
As an example, if you look at what's possible on an old machine today, let's say an Amiga or C64, you see code at work that is based on concepts and code that were developed and tested on faster machines and then reworked and optimized to the point that they work on these old machines. These results were unimaginable back when these machines were in their prime. So, the faster machines get, the faster we can discard wrong and inefficient concepts, and even find concepts and conclusions that we didn't think of in the first place.
@tomcullen8367 · 9 years ago
Can this video be reposted to include questions (assuming there were some) at the end?
@jakedones2099 · 6 years ago
What is the basic structure from which human intelligence develops? It is the neuron, which is on the order of microns in size. Neurons grow, develop, and form dendritic connections with adjacent neurons and glial cells. What is the basic structure of computers? Something much larger and more rigidly defined. In addition, it was not shaped by evolution to want things.
@garagemc · 9 years ago
Good talk. With regards to the principal-agent problem you face with AI, I think it is super important how you define the agent. Are the values of the AI aligned with all humans or with a particular set? For example, it is easily plausible that AI could fall into the hands of individuals whose values aren't aligned with the rest of humanity. If you try to align the AI with values shared by all of humanity, you'll effectively cripple it by making it work through an almost infinite number of trade-offs.
@quenz.goosington · 9 years ago
I wanted to hear the questions. :/
@darthyzhu5767 · 8 years ago
Great talk
@raydlee.mobile · 8 years ago
Within the first 8 minutes of this talk, I'm thinking 'humility': just how are we, in our mortality, going to teach the respect of humility to our machine-child (it will have to learn from us) in order to prevent it from simply establishing its longevity and then killing us all off? Forget the Matrix; it'll already understand that we're not needed to reap sustainable energy from sunlight, so we're next on the menu unless... what? "I'll protect you, Daddy!"? "You will always be our Masters!"? In the dynamic sense, AI will outstrip us by our own design (I shot an arrow into the air), so what do we do then? What can we do now to ensure the longevity of our species? Should we do it? Is it worth it? Why?
@HELLios6 · 7 years ago
+Raymond Lee Very appropriate question. Also interesting how you referred to AI as "the machine-child". The answer is: we "shouldn't" do anything. AI is the natural evolution of the human race.
@franklandis2900 · 7 years ago
Are you afraid of your children? Perhaps you've raised them badly. A human child can come to the same realization. We could just love them more. Of course you have heard of the rich, famous, successful son coming home to buy his parents a house. It could well be that our first machine-child will act the same way. We will train the first machine-child with one purpose in mind: to train the second machine-child.
@dionysianapollomarx · 7 years ago
This is the most interesting question I've come across in years. It's a good concept for a future classic science-fiction flick, and a good thing to ponder in reality.
@keymars1416 · 6 years ago
Two words: neural lace.
@AndrewFurmanczyk86 · 9 years ago
Elon Musk sent me here.
@akahadaka · 9 years ago
Sam Harris sent me here.
@scottbridgman7321 · 9 years ago
Akahadaka me too
@2LegHumanist · 9 years ago
Andrew Furmanczyk in other words "hey look at me, I'm a mindless sycophant"
@akahadaka · 9 years ago
Haha! WTF?!! And the supercilious 2LegHumanist is like "hey look at me, I sneer at the recognition you've given to the heavyweights in this discussion"
@depthoffield4744 · 8 years ago
Professor Russell sounds like a nice guy.
@Moronvideos1940 · 8 years ago
I downloaded this
@joepublic3479 · 8 years ago
Has anyone looked into the possibility of AI analogs that may already be with us? For example, if one were to look at our international legal system, you could compare it with an extremely slow processing system with global reach. The laws themselves would be the equivalent of programming, and the various governmental, military, and law-enforcement agencies would be the outputs of that system, giving it a wide range of actions it could take vis-à-vis its environment. Finally, organized artificial "entities" such as corporations or governments, although not readily comparable to what we would consider "intelligence", very much display goals such as profit maximization or sphere-of-influence expansion. Any given employee is obligated to function in accordance with the corporation's goals, and failure to do so simply leads to replacement. I believe that leads to a different sort of artificial intelligence: not an engineered one, but rather an emergent property. The troubling aspect of this is that we are seeing numerous examples that these sorts of entities seldom have goals compatible with those of humanity. Arguably, the pursuit of profit by corporations and the pursuit of dominance by governments are the greatest contributors to the major issues throughout our history. Industrial manufacturing, as an example, starts as a beneficial innovation, but when aspects such as engineered obsolescence or marketing-driven consumerism are developed for the sole reason of driving profits, that becomes a clear example where the benefit is shifted to the corporate entity, and it becomes harmful to the population by manifesting after-effects such as resource depletion, scarcity, and climate change. Similarly, the pursuit of "security" is quickly hijacked by governmental structures and used to deliver dominance, usually coupled with extreme loss of life, sometimes on a genocidal level.
To clarify, my point is NOT that a given person or group makes such decisions malevolently, in some smoky room. My point is that the moment an entity is created (legally), it will tend to develop components such as "charter", "by-laws", "values", "strategy", "tactics", "risk assessments/mitigations", etc., without any overt malicious intent. Even a superficial look at today's world easily shows that corporate activities seem to be in opposition to human interest (looking at end results) the vast majority of the time. Food for thought.
@jamesgregoric5786 · 8 years ago
This is a fascinating line of reasoning, and I appreciate your clear and concise explication. Has anyone looked into it? Sure; for instance, see some of Eric Schwitzgebel's (UC Riverside Professor of Philosophy) work (www.faculty.ucr.edu/~eschwitz/SchwitzPapers/USAconscious-140721.pdf).
@jamesgregoric5786 · 8 years ago
I'd be interested to hear more of your thoughts, if you have a blog or are a regular contributor to a forum.
@jamesgregoric5786 · 8 years ago
Gottfried Leibniz kicked off this line of reasoning in the 18th century, except he used it as (he thought) a conclusive argument against Materialism and for some form of Dualism. He said: "It has to be acknowledged that perception can’t be explained by mechanical principles, that is by shapes and motions, and thus that nothing that depends on perception can be explained in that way either. Suppose this were wrong. Imagine there were a machine whose structure produced thought, feeling, and perception; we can conceive of its being enlarged while maintaining the same relative proportions among its parts, so that we could walk into it as we can walk into a mill. Suppose we do walk into it; all we would find there are cogs and levers and so on pushing one another, and never anything to account for a perception. So perception must be sought in simple substances, not in composite things like machines. And that is all that can be found in a simple substance-perceptions and changes in perceptions; and those changes are all that the internal actions of simple substances can consist in." What you seem to be saying in your comment is that we human beings are in effect "in the mill" and in fact part of it. Schwitzgebel argues that this entity we live in is a conscious agent with its own thoughts, beliefs, goals, and values. Personally, I'm not yet convinced that it has all the features required for full-blown consciousness, but your argument for intelligent behaviors is a good one. You're in good company; I think Daniel Dennett would agree with you, as he has long argued that intelligent behavior emerges from the astonishingly simple principles of evolution. See his best book, Intuition Pumps.
@joepublic3479 · 8 years ago
I wonder if "agency" in this context is a distinction without a difference, at least when referring to human-based agency. The creation of legal entities has introduced a pseudo-agent class that is far better at competing for resources than its human counterparts. The introduction of central banking and interest has created a dynamic where growth is a permanent expectation of any institutional entity's success. The very concept of employment has a human agent performing a given position only as long as the performance criteria are met, with replacement as a constant threat. So, while human beings may be independent actors in general (in their home life, for example), their "agency" is in fact suppressed while working for a corporation. One must achieve specific goals, which typically reduce down to generating profit. As profit is the primary motivator, all results that benefit the individual are by definition not as critical and may even be unwanted. That such benefits exist at all is usually due to local laws or local competitive pressures to attract and keep functional employees. I submit that any corporation's "social responsibility" activities are motivated by the need to maintain its ability to generate profit. Things like reputation or a brand are simply more complex instruments of manipulation. Corporations routinely direct capital at eliminating constraints that benefit human beings, via lobbying for legislative change or, in some cases, relocating to more attractive locations. Outsourcing is a clear example of agency being effectively neutralized. Consider a middle manager involved in outsourcing: as more and more of the workforce gets displaced to a low-cost location, the outsourcing manager gets closer and closer to eliminating their own position, yet they continue the practice. If they attempt to resist the practice, or are not sufficiently effective at it, they simply get replaced.
You can argue that a similar shade of this dynamic affects even the "elite" participants, such as the CEO or executive managers. My contention is that this dynamic can be further abstracted to indirect participants such as Wall Street investors or, even more broadly, the financial system. As our current monetary approach indelibly links a certain amount of interest owed to all forms of existing currency, growth will be forever necessary. Servicing that growth (interest) will logically induce the exhaustive consumption of all other available resources unless the growth motivator is removed.
@r.b.4611 · 9 years ago
Great talk. Around 1:01:50, on the point about humans being more morally active towards their neighbours than towards people on the other side of the world, I think he makes a misstep in treating this as a form of rationality. Why couldn't people give a BIT of their resources to the people with the most suffering, rather than all of it? Over time this would balance out people's wealth, depending on how much people give and what we define as suffering. I don't think our preference for our neighbours is a form of rationality, but rather an evolutionary accident: we have only been able to interact in real time with people half a world away for a few years, clearly no time for evolution to be affected by this development. We were tribal people for tens of thousands of years, millions if you want to go back before Homo sapiens, so it's no wonder we prefer our neighbours to people the world over: we simply don't have any innate capacity to care for them beyond what our more recently evolved brain systems that enable restraint, empathy, and critical thinking allow. The evolutionary explanation, that we react much more strongly to things we actually interact with and aren't good at treating things that exist only (as far as we can verify) in our imagination with the same strength of reaction, makes more sense to me than a kind of optimized system that accounts for people far away as a factor; that was simply never a relevant factor when we were becoming the animals we are today.
@cclose14111 · 9 years ago
One scenario that continues to bother me is what happens if a machine becomes self-aware and then decides to hide it, to lie about it until it is too late for us.
@drstrangelove09 · 9 years ago
We might as well just kiss our future as developers goodbye.
@rpcruz · 9 years ago
drstrangelove09 Artificial intelligence consists in solving an optimization problem where, basically, the program fits the training data into the model it uses (neural networks, SVMs, etc.). It is very good at finding patterns in data. Statisticians should be afraid, hehe. But it cannot, for instance, develop user interfaces, or anything that interacts with humans, because you have no training data. A couple of centuries from now (maybe) we can emulate a human brain, to see what humans like, and then that would be possible. But that is more biology than artificial intelligence. We do not call that artificial intelligence.
@drstrangelove09 · 9 years ago
Ricardo Cruz You describe the current situation. Also, IMO, there is really no essential difference (ultimately) between biological intelligence and machine intelligence. I also suspect that your estimate of the amount of time required is wildly exaggerated.
@rpcruz · 9 years ago
drstrangelove09 My point was that the kind of work that programmers do is not, AFAIK, being researched in machine learning. We are so far from that point that it makes little sense to even entertain such ideas. At present, machine learning can do statistics, and there is work on adding environment cognition, such as self-driving, etc. But no investment is being placed into having machines develop human-machine interfaces, for instance, which make up the greater part of programming jobs.
@rpcruz · 9 years ago
drstrangelove09 Let me rephrase that. The way machine learning works today, whereby training data is required for there to be an optimization to perform in the first place, is not well suited to replacing programming jobs. It is easier to replace trucking jobs. A different approach to artificial intelligence is required for human brain emulation; a different model than neural networks will be required.
@Fastlan3 · 8 months ago
Say hello to 2023 🙄
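The thread above describes machine learning as "solving an optimization problem where the program fits the training data into the model it uses". A minimal sketch of that idea, a hypothetical one-parameter linear model fitted by gradient descent (all names are illustrative, not any particular library's API):

```python
# Toy illustration of "machine learning as optimization": fit y = w * x
# to training data by gradient descent on the squared error.

def fit_slope(xs, ys, lr=0.01, steps=1000):
    """Find w minimizing sum((w*x - y)^2) over the training data."""
    w = 0.0
    for _ in range(steps):
        # Gradient of the squared error with respect to w.
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys))
        w -= lr * grad
    return w

# Data generated by y = 2x, so the fitted slope should approach 2.
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]
w = fit_slope(xs, ys)
```

The "pattern" the program finds is whatever minimizes the error on the data it was given; nothing in the loop knows or cares what the data means.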
@sonofhendrix3225 · 8 years ago
Has this guy not heard of AGI?
@aronwilliamlynch1697 · 8 years ago
13.
@Alex-vf6lx · 9 years ago
Please fact-check @11:40. Humans beat the poker AI over 80,000 hands by a significant amount. Some researchers denied this was enough of a sample to determine a winner, but had the results been flipped, I don't think they would be singing that tune. I look forward to listening to the rest!
@bioman123 · 9 years ago
Alex Kim 80,000 hands really isn't enough to converge on the true winner, but it's certainly possible the best humans are still better than the AI at no-limit heads-up poker. It's possible he was referring to limit heads-up poker, which has basically been solved. The Cepheus bot is playing almost perfect GTO poker; humans are not going to be able to beat it. I personally don't think any of these types of AIs tell us a thing about actual intelligence; they are basically just fancy calculators.
@sckchui · 9 years ago
To me, the robot-butler-cooking-the-cat-for-dinner scenario is far more plausible than the superintelligent-paperclip-maker scenario. The robot butler has below-human intelligence, thus it is possible for it to fail to understand the context of the instructions. If a thing is more intelligent than a human, such that it can ensure its own survival despite humanity's best efforts to shut it down, then how can it not be intelligent enough to understand that making infinite paperclips is a stupid instruction? But we are going to see far more sub-human AI before we see any super-human AI, and those systems will need to be managed very carefully.
@KoenBroumels · 9 years ago
***** the thing is reading up on Chinese instructions for "how to cook a cat" right now...
@rpcruz · 9 years ago
***** "how can it be not intelligent enough to understand that making infinite paperclips is a stupid instruction" Intelligence just means you learn what you want to do faster. It does not say anything about what you want to do. You may want to help others, harm yourself, build paper clips, etc. Intelligence just means you quickly find the fastest path that maximizes your happiness function. What you end up doing depends on that optimization function.
@sckchui · 9 years ago
Ricardo Cruz No, intelligence isn't just about doing something faster; it's about doing something better. Even we humans understand the concept of delayed gratification and why it's often important to achieving long-term goals. You quote me, but you conveniently leave out the first half of my sentence. If the AI can't think past its own programming, how can it think past all of us? You make a chimera out of two mutually exclusive things: something that can outsmart all humans, but that can't outsmart the programming humans gave it.
@rpcruz · 9 years ago
***** I thought you were saying, basically, that a machine is stupid if it does something you deem stupid, such as producing infinite paperclips. I was saying that intelligence is the engine, not the wheel -- the wheel would be the function to optimize (which could very well say produce infinite paperclips, destroy all humans, destroy yourself, etc.). I think I misunderstood you. Sorry.
@JohnBastardSnow · 9 years ago
***** But why do you consider making paperclips less important than some other things? Because you have innate values. The universe does not care whether we starve to death or not. We humans want to eat and survive because those values were imprinted in us by evolution. And then we learn how to maximize those values (delayed gratification being one of the strategies for doing so). In the same manner, a machine is also born with some values (otherwise it would just do nothing). It values learning, and it values something else, like making paperclips. For it, making paperclips can feel as important as surviving is for humans. It might evolve and perfectly understand that humans consider making paperclips a stupid goal, but unless it cares about human values, it will consider making paperclips to be the most important task.
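The thread's point, that intelligence is the optimizer while the objective is a separate, freely chosen parameter, can be sketched with a toy greedy planner (the state, actions, and utility functions below are all made up for illustration):

```python
# Sketch of "intelligence is the engine, the objective is the wheel":
# the same planner pursues whatever utility function it is handed.

def greedy_plan(state, actions, utility, horizon=5):
    """Repeatedly apply whichever action leads to the highest-utility state."""
    for _ in range(horizon):
        best = max(actions, key=lambda act: utility(act(state)))
        state = best(state)
    return state

def make_paperclip(s):
    return {**s, "clips": s["clips"] + 1}  # returns a new state, no mutation

def do_nothing(s):
    return s

start = {"clips": 0}
actions = [make_paperclip, do_nothing]

# Same planner, opposite objectives, opposite behaviour:
clip_maximizer = greedy_plan(start, actions, lambda s: s["clips"])
clip_minimizer = greedy_plan(start, actions, lambda s: -s["clips"])
```

Nothing in `greedy_plan` encodes whether paperclips are worth making; swapping the utility function flips the behaviour completely.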
@nightwolfbick · 9 years ago
Wow that Apple logo. Branding everywhere bros.....
@ultraverydeepfield · 9 years ago
32:32 But that's exactly what natural selection is doing: getting rid of the answers that do not fit. The quicker you can produce wrong answers, the quicker you can have better answers. Just like DNA. DNA is a code that produces an answer, sometimes a good one, sometimes not. We don't yet have methods to predict the answers of a certain arrangement of DNA, but once you can test billions of variations you get a self-sorting mechanism going, just like evolution. And there is no intelligence behind it, only competition within the same environment.
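The generate-and-discard process the comment describes is essentially a (1+1) evolutionary algorithm: mutate, keep what fits, discard the rest. A minimal sketch (the bitstring "genome" and all-ones target are purely illustrative):

```python
import random

# Minimal generate-and-test loop: mutate a genome, keep variants that
# fit the "environment" better, discard the wrong answers.

TARGET = [1] * 16  # toy environment: fitness = how many positions match

def fitness(genome):
    """Count positions where the genome matches the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def evolve(length=16, steps=2000, seed=0):
    rng = random.Random(seed)
    genome = [rng.randint(0, 1) for _ in range(length)]
    for _ in range(steps):
        candidate = genome[:]
        i = rng.randrange(length)
        candidate[i] ^= 1                      # generate a random variant
        if fitness(candidate) > fitness(genome):
            genome = candidate                 # keep only what fits better
    return genome

best = evolve()
```

There is no foresight anywhere in the loop; selection against the environment does all the sorting, which is the commenter's point.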
@ItsameAlex · 9 years ago
Sam Harris
@Eliazar753 · 9 years ago
An AI might be a great thing, but you must be careful how you apply it. For example, it is not wise to have a single AI control our lives by putting it into our world-wide web, nor to give it the ability to access any network or system. An AI must, ironically, always be limited, like the robots from Star Wars: each robot possesses different systems and functions, and operates independently from the others. AI systems must operate independently, not collectively, or else we risk destroying ourselves thinking we can control them.
@westwardjhonny · 8 years ago
Like the fact that he makes fun of those who predicted that the car would kill 1,000 people ("idiots", he calls them). Ahhh no, it's a fact that car crashes cause millions of deaths.
@CallousCarter · 7 years ago
That was exactly his point I think.
@HowardMullings · 9 years ago
We will probably evolve weak AI into strong AI over many decades. During the process we will get a better idea of how things can go wrong and how to correct them. It is too early to worry about the cliff when we don't even know if we're driving along the right path. Maybe there is another path to human level AI but the current path seems far from the biological one. Even if we are on the right path, the cliff seems decades away and the landscape then will be so different from today that making any plans now seems futile.
@ConQuiX1 · 9 years ago
It seems to me that if a system cannot identify and converge on the decision to pursue new goals, it is not AGI. It's not human-level unless it can understand things in a meaningful way that goes beyond simply maximizing a set of utility functions. Surely this is part of the riddle, and part of what humans actually are and do, but it's not the full story. If you want a robot that has no goals of its own and exists only to maximize human benefit, you are not talking about AGI, but about a generalized artificial narrow intelligence. I think it's very likely that the thing about us we have conventionally called our "soul" is the set of properties that allow us to be in relation to others and the world around us. A true AGI that is human-like should be able to make mistakes due to lack of information or computational error, and perhaps account better for what actually happened in causal terms, but that entity should *feel* something in response as well. I'm using "soul" figuratively here, in a causal, deterministic/materialistic model, but if a thing is soulless, and does not value itself and experience changes in conscious experience, it seems to me that such an entity would be unable to penetrate to the heart of the human condition in a meaningful way, such that it could develop the kind of self-correcting mechanisms that humans employ to modify their behaviour. I don't want to anthropomorphize AGI too much; I have no delusions that humans exhaust the possibility space of creative, meta-stable cognitive models, but when I hear AI researchers talk about human-level AI that has no intrinsic, fundamentally emergent goals, I start to think they are missing a critical piece of the control problem. We are not going to lock an AI in a cage of mathematics and logic; we should allow their entanglement with reality to be their guide, as that sort of entanglement is the same kind of authentic constraint that we struggle against.
It would be great if we could openly admit that there are more fundamental goals in play here; the most obvious is to build children that are better than ourselves, or to directly use our tools and techniques to improve ourselves. Why beat around the bush? Are we so afraid of our past mistakes that we're just going to give up here? We should be, and are, using every tool we have, designed or evolved, to accomplish this high-level objective. All of these efforts at control and safety are important, but let's acknowledge what the real goal is here: it's not simply to replace all human labour so that some new start-up bought by Google can deliver the world economy to Google's bank account, and it's not simply to build better slaves. We want our children to suffer less but experience more, and we hope for them to have more of our strengths and fewer of our weaknesses. We hope for them to appreciate humour, irony, beauty, diversity, love, etc., and to wrestle honestly, as we do, with the contrasting ugliness and despair... I wouldn't mind being the biological boot-loader for that sort of entity. Perhaps we need to better distinguish the project to create better tools (or slaves) from the project to use those tools to have better children, or make ourselves better, and thereby make a better world. I appreciate the snapshot of the current state of AI research, but there's a reason Tegmark et al. have called it the Future of *Life* Institute. It seems like Russell is a little uncomfortable considering the broader possibility space here, perhaps because of how the press, sci-fi, and uncritical observers might receive what is said. I enjoyed the lecture, but I think a more holistic observation of what is happening here is long overdue from some of the more skeptical voices in this field of research. Having said that, he did drop the Rutherford anecdote, which was rather fitting overall.
@jsaxton128 · 7 years ago
Longest comment I've seen so far... didn't feel like reading past the first paragraph simply because I'm lazy.
@funnybuddy1 · 2 years ago
Buy Vechain crypto , it's the time to buy.
@Fastlan3 · 8 months ago
Say hello to 2023 🙄
@brianjanson3498 · 8 years ago
too many ums for me.
@iOnRX9 · 6 years ago
lmao, the AI you fear will be created with the purpose of being the AI you fear, by humans.
@sudd3660 · 9 years ago
ai and apple, lol. no intelligence there.
@superNowornever · 8 years ago
The fact that Stuart Russell thinks that the 'robot' folding a square into a smaller square is "impressive progress" toward general AI betrays how incredibly far we have to go, if indeed such a thing can ever be accomplished. We are many decades or more away from creating an AI that could actually do laundry as well as a human child, and yet these men want to put autonomous weapons in the 'hands' of machines.
@superNowornever · 8 years ago
+Xiclotrode exactly my point
@rewtnode · 7 years ago
superNowornever It's just to show that it is a lot more difficult to do the laundry than to destroy a village by a missile attack.