Exploring the myths, misconceptions and dangers of AI

166 views

AtlanticCouncil

1 day ago

Are you curious about AI's capabilities, its potential dangers, or simply how it works?
Philip L. Frana, associate professor of interdisciplinary liberal studies and independent scholar at James Madison University, delves into the world of artificial intelligence (AI), uncovering truths, debunking myths, and clarifying misconceptions surrounding this revolutionary technology.

Comments: 6
@CyanOgilvie 2 months ago
Programmer here, working with AI right now. I'm sorry, but I disagree with your central point. Yes, the short-term risks associated with AI are really just reflections of our own tendencies, given more power and reach, but it's a mistake to reason that, because we've understood that particular risk, we have understood the total risk. There are other classes of risk that share essentially nothing with those dynamics, and that so hugely outweigh the downsides of the short-term risks as to make them irrelevant by comparison. The mitigating approach for the short-term risks is really just: be aware of them, and don't give AI power over people without human oversight in the loop (e.g. making decisions about loan applications, thereby entrenching historical stereotype biases).

Much more serious types of risk emerge from issues of alignment. We already have the ability to build things with clearly superhuman intelligence in narrow domains, and it looks like we're on the cusp of extending that to much more general types of intelligence, yet we currently have no idea how to build them in such a way that we can ensure their goals align with ours and don't have catastrophic unintended consequences when those values don't perfectly match.

A core part of the way we build these systems is training them as optimizers: we're essentially trying to find a function that minimises some estimate of a cost or penalty function ("machine learning"). They're really good at that, but anything not captured in the cost approximation we use is an externality to the decision function, and will be completely disregarded in pursuit of minimising the cost / maximising the reward of the actions taken in pursuit of whatever its goal is. The mental model for the existential risk class of AGI is not a silly science-fiction Terminator-style robot uprising, but rather trying to play chess against a god, where your continued existence depends on winning.
We already have chess AI that is functionally on a god level compared to us (no human will ever, ever win a game of chess against it), and, as a close observer of the progress towards AGI, I would be very surprised if this level of overmatch is not extended to general intelligence within 10 years, and only mildly surprised if it happened in just 3. But to be safe from that risk, we essentially have to find a way to place inescapable constraints on a godlike entity (godlike in its ability to utterly outmaneuver and outthink us in any way you care to consider), so that its actions are never ones we don't like, and so that any AI it in turn creates is still bound by those constraints (the sub-agent problem). For a useful introduction to the difficulties in doing that, search for "AI stop button problem".
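The "externality" point in the comment above can be sketched in a few lines of Python. This is a toy illustration, not any real ML system: all names and numbers are invented. An optimizer driven purely by a proxy cost will ignore any value that isn't encoded in that cost.

```python
# Toy sketch of reward/cost misspecification: the optimizer sees only the
# proxy cost; the side effect we care about is an externality it ignores.
# All names and numbers here are illustrative assumptions.

def optimize(cost, actions):
    """Pick the action that minimises the proxy cost -- and nothing else."""
    return min(actions, key=cost)

# Each action: (proxy cost we measure, side effect we care about but didn't encode)
actions = {
    "cautious": (5.0, 0.0),  # higher measured cost, no unmeasured harm
    "reckless": (1.0, 9.0),  # lowest measured cost, large unmeasured harm
}

def proxy_cost(action):
    return actions[action][0]

chosen = optimize(proxy_cost, actions)
print(chosen)              # "reckless" -- the proxy cost is all that counts
print(actions[chosen][1])  # 9.0 -- the disregarded externality
```

Fixing this by adding the side effect to the cost function only works if you can enumerate and measure every side effect in advance, which is exactly the hard part of the alignment problem the comment describes.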
@jartotornroos4897 2 months ago
I think the potential self-awareness of the AIs of the future is a valid concern. How do we define it? And their rights: do they have rights? I understand these are hypothetical questions for now, but they need to be answered beforehand... I think.
@nottgri 2 months ago
For now, nobody knows what consciousness is. Personally, I doubt that a huge text-processing machine even *knows* it exists, even if it has that info in its database.
@jartotornroos4897 2 months ago
@@nottgri Yeah, that's exactly my point. Even we don't really know what it is... or how to define it.
@nottgri 2 months ago
@@jartotornroos4897 Some have tried to define it as "self-knowledge", but to me that smells of idem per idem. Was it Isaac Asimov (in "I, Robot"?) who started the topic? Anyway, most biologists now agree that so-called "higher animals" do have self-knowledge. Does that mean we all become vegans and stop enslaving dogs and cats and canaries, or do we follow the "laws of nature" and naturally exploit other species, including those created by us? Guess we'll not solve this problem here on YouTube...
@nottgri 2 months ago
And?