Jaan Tallinn: AI Risks, Investments, and AGI - #59

  815 views

Manifold

A day ago

Comments: 3
@kreek22 3 months ago
The level of abstraction involved in AI-related existential risk is, as Steve noted, a real problem for effective communication with most of the elite, since most of them are not members of the cognitive elite. The problem is intensified by several other factors: this is something new under the sun, it is a black box, it is massively dual-use (really omni-use), and, as mentioned, safety work converts almost automatically into more frontier incursions. The situation we face is akin to an invasion of infant aliens from a race of ultra-intelligent aliens, where we do not fully appreciate how intelligent the infants will soon be and have no conception of what they will want when they become themselves.

Discussions of potential long-term risk scenarios remind me of discussions of metaphysics and theology. The fact that current LLMs are black boxes goads the imaginations of both optimists and pessimists. Predicting a future contingent on these mysterious entities is not an altogether empirical endeavor; consequently, this game produces a high degree of diversity of opinion. And who's to say that the entire future is not at the mercy of the imaginations of these machines as much as of their capabilities?

The is/ought distinction extends to machinic intelligence. This leads my imagination to suspect that entities with a better grasp of the "is" will select "oughts" that we will never understand. An entity which is smarter than you is, to some extent, inherently a black box. Even if our AIs were transparent at this stage of sub-human g, they would transition to opacity as they surpassed the human range. Narrow AIs like AlphaGo Zero were already opaque in their special spheres. But, like the leading LLMs now, that DeepMind system was designed (if I'm not mistaken) to be opaque even before it surpassed humans.

Another source of unpredictability in humans and smart machines is that there does not seem to be such a thing as pure reason. Chaos, probability, anosognosia, and many other factors prevent pure reason from arising. Maybe cognitive fallacies could be structurally embedded in an AI system to handicap it as an agent in the world, ideally fallacies that would not interfere too much with its useful work. For example, being unable to act in the face of perceived ambiguity, due to a peculiar form of risk aversion, would make many possible AI attacks impossible. If the installed fallacy is a known characteristic of the system, it might be possible to cancel it or compensate for it after the results are produced. Of course, a sufficiently intelligent and self-aware system might be able to discover such limitations and extricate itself surreptitiously.

One of the questions I'm left with after this conversation, which was briefly touched on, is what the East Asian nations think about AI prospects. It could come to matter if the Chinese gain traction in their huge semiconductor catch-up efforts. It's hard to imagine that they'll catch America in this field in less than ten years, unless America abdicates or China achieves the greatest espionage coup of all time.
@black_eagle 3 months ago
The question is: is AI risk so abstract that it isn't really a threat? I.e., is it something nerds who live in their heads and read a lot of science fiction have dreamed up, but that has yet to be demonstrated as a real, tangible problem? That's kinda how I see it. It's more in the realm of imagination and ideology -- something like the demons and devils in the theology of the high-IQ techie crowd. But maybe I'm just too dumb to understand it, which I'm sure is what the nerds will say.
@Painted_Rocks 3 months ago
Prior Jaan podcast: kzfaq.info/get/bejne/jbh4m5iHmqzYg4E.html