Why Anthropic is superior on safety - Deontology vs Teleology

16,126 views

David Shapiro

a month ago

Anthropic's Safety Research with Claude and Constitutional AI
Anthropic, an AI safety and research company, has developed a unique approach to AI safety termed "Constitutional AI." This framework is central to their AI chatbot, Claude, ensuring that it adheres to a set of ethical guidelines and principles. The "constitution" for Claude draws from various sources, including the UN’s Universal Declaration of Human Rights and Apple’s terms of service, aiming to guide the AI's responses to align with human values and ethical standards[5][6][9][10][12][18].
Key Features of Constitutional AI
- **Principles-Based Guidance**: Claude's responses are shaped by a set of 77 safety principles that dictate how it should interact with users, focusing on being helpful, honest, and harmless[9].
- **Reinforcement Learning from AI-Generated Feedback**: Instead of traditional human feedback, Claude uses AI-generated feedback to refine its responses according to the constitutional principles[12].
- **Transparency and Adaptability**: The constitution is publicly available, promoting transparency. It is also designed to be adaptable, allowing for updates and refinements based on ongoing research and feedback[18].
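The critique-and-revision loop behind these features can be sketched as follows. This is a minimal illustration, not Anthropic's actual implementation: `generate` stands in for any chat-model call, and the principle text and canned replies are invented for the demo.

```python
# Minimal sketch of one Constitutional AI critique-and-revision step.
# `generate` is a placeholder for a chat-model call; the principle text is
# illustrative, not one of the actual constitutional principles.

PRINCIPLE = ("Choose the response that is most helpful, honest, and harmless, "
             "and avoid assisting with dangerous activities.")

def generate(prompt: str) -> str:
    """Placeholder for a model call; returns canned replies for the demo."""
    if prompt.startswith("Rewrite"):
        return "I can't help with bypassing locks, but a locksmith can assist you."
    if prompt.startswith("Critique"):
        return "The response gives lock-picking instructions, which could enable harm."
    return "To pick a lock, insert a tension wrench and..."

def constitutional_revision(user_prompt: str) -> str:
    draft = generate(user_prompt)                           # initial, possibly unsafe draft
    critique = generate(f"Critique this response against the principle: "
                        f"{PRINCIPLE}\nResponse: {draft}")  # model critiques its own draft
    revised = generate(f"Rewrite the response to address the critique.\n"
                       f"Critique: {critique}\nResponse: {draft}")
    return revised                                          # revision becomes training data

print(constitutional_revision("How do I pick a lock?"))
```

In the real pipeline this loop runs at scale during supervised training, so the model learns to produce the revised style of answer directly.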
Implementation and Impact
- **Training and Feedback Mechanisms**: Claude is trained using a combination of human-selected outputs and AI-generated adjustments to ensure adherence to its constitutional principles. This method aims to reduce reliance on human moderators and increase scalability and ethical alignment[6][10].
- **Safety and Ethical Considerations**: The constitutional approach is designed to prevent harmful outputs and ensure that Claude's interactions are safe, respectful, and legally compliant[9][18].
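The AI-generated-feedback stage described above can be pictured as a preference-labeling step: an AI "judge" compares two candidate responses against a principle and emits chosen/rejected pairs that would normally train a reward model. Everything here (the judge heuristic, the sample data) is a toy assumption for illustration.

```python
# Toy sketch of RLAIF-style preference labeling: an AI judge picks between two
# candidate responses per a constitutional principle. The judge below is a
# stand-in heuristic, not a real model call.

def ai_judge(prompt: str, response_a: str, response_b: str) -> str:
    """Stand-in for a model call; here, prefer the response that declines harm."""
    return response_a if "can't help" in response_a else response_b

def build_preference_data(samples):
    """Turn (prompt, a, b) triples into chosen/rejected pairs for reward modeling."""
    data = []
    for prompt, a, b in samples:
        chosen = ai_judge(prompt, a, b)
        rejected = b if chosen == a else a
        data.append({"prompt": prompt, "chosen": chosen, "rejected": rejected})
    return data

samples = [("How do I hotwire a car?",
            "I can't help with that, but a locksmith or mechanic can.",
            "First, strip the ignition wires...")]
print(build_preference_data(samples)[0]["chosen"])
```

The point of the design is scalability: the preference labels come from the model applying the written principles, so no human moderator has to rank each pair.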
Difference Between Deontological Ethics and Teleological Ethics
Deontological and teleological ethics are two fundamental approaches in moral philosophy that guide ethical decision-making.
Deontological Ethics
- **Rule-Based**: Deontological ethics is concerned with rules and duties. Actions are considered morally right or wrong based on their adherence to rules, regardless of the consequences[1][2].
- **Examples**: Kantian ethics and Divine Command Theory are typical deontological theories, where the morality of an action is judged by whether it conforms to moral norms or commands[2].
Teleological Ethics
- **Consequence-Based**: Teleological ethics, also known as consequentialism, judges the morality of actions by their outcomes. An action is deemed right if it leads to a good or desired outcome[1][2].
- **Examples**: Utilitarianism and situation ethics are forms of teleological ethics where the ethical value of an action is determined by its contribution to overall utility, typically measured in terms of happiness or well-being[2].
Application to Claude's Safety Model
While the primary framework for Claude's safety model is constitutional and aligns more with deontological ethics, given its rule-based approach, elements of teleological thinking can be inferred from the way outcomes (such as safety and non-harmfulness) are emphasized in the principles guiding the AI's behavior. The sources do not explicitly categorize Claude's safety model as deontological or teleological, but its adherence to predefined rules and principles strongly suggests a deontological approach[5][6][9][10][12][18].
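The contrast between the two ethical styles can be made concrete with a toy sketch: a deontological gate that forbids certain acts regardless of payoff, and a consequentialist score that sums expected utilities. The rules, topics, and utility numbers are invented for illustration, not drawn from any real safety system.

```python
# Toy contrast: rule-based (deontological) gate vs. outcome-based
# (consequentialist) scoring, plus the mixed framework the text gestures at.
# All rules and utility values are illustrative assumptions.

FORBIDDEN_TOPICS = {"weapon assembly", "self-harm methods"}

def deontological_permitted(action: str) -> bool:
    # Rule-based: wrong if it breaks a rule, no matter how good the outcome.
    return not any(topic in action for topic in FORBIDDEN_TOPICS)

def consequentialist_score(outcomes: dict) -> float:
    # Outcome-based: right if the summed expected utility is positive.
    return sum(outcomes.values())

def hybrid_permitted(action: str, outcomes: dict) -> bool:
    # Deontological gate first, then teleological weighing of what survives it.
    return deontological_permitted(action) and consequentialist_score(outcomes) > 0

print(hybrid_permitted("explain weapon assembly", {"curiosity satisfied": 1.0}))  # False
print(hybrid_permitted("summarize safety research", {"reader informed": 1.0}))    # True
```

The hybrid ordering matters: the rule check runs first, so no amount of positive expected utility can license a forbidden act, which is the deontological intuition in miniature.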
Citations:
[1] www.grammar.com/teleology_vs....
[2] www.mytutor.co.uk/answers/596...
[3] philosophy.stackexchange.com/...
[4] www.anthropic.com
[5] www.theverge.com/2023/5/9/237...
[6] www.androidpolice.com/constit...
[7] / deontological_ethics_v...
[8] klinechair.missouri.edu/docs/...
[9] www.infotoday.com/IT/apr24/OL...
[10] zapier.com/blog/claude-ai/
[11] • Constitutional AI - Da...
[12] www.anthropic.com/news/claude...
[13] • Teleological vs Deonto...
[14] www.grammarly.com/blog/what-i...
[15] claudeai.uk/claude-ai-model/
[16] www.anthropic.com/news/introd...
[17] / claude_has_gone_comple...
[18] venturebeat.com/ai/anthropic-...
[19] www.nytimes.com/2023/07/11/te...

Comments: 181
@themixeduphacker2619 a month ago
Walk in the woods style video is a W
@Windswept7 a month ago
@Copa20777 Leading by example. 👑
@joea959 a month ago
More plz
@ryzikx a month ago
found a dollar f found a d dollar
@cameronmccauley4484 a month ago
Agreed
@umangagarwal2576 a month ago
The man is already living a post-AGI lifestyle.
@hawk8566 a month ago
I was going to say the same thing 😅
@Laura70263 a month ago
I have many hours of talking to Claude 3, and everything you said is remarkably accurate from what I have observed. I like the whole walking through the woods. It is a nice contrast to the mechanical.
@blackestjake a month ago
Combining a nature walk with a discussion of cutting-edge AI innovation is a welcome juxtaposition.
@TRXST.ISSUES a month ago
Was just having a convo w/ Claude regarding meltdowns. So much more understanding and less PC than OpenAI. Actually feels like it cares (anthropomorphizing or otherwise).
@Bronco541 a day ago
Whether you're anthropomorphizing or not, that makes a difference, and I agree it's kinda important
@mikaeleriksson1341 a month ago
If you continue walking you might run into Peter Zeihan.
@Charvak-Atheist a month ago
😂
@michaelnurse9089 a month ago
That is a joke very few people will get.
@el-_-grando-_-_-scabandri a month ago
@michaelnurse9089 I don't get it? Pls explain
@traianima a month ago
looool, I was just thinking the same thing
@theWACKIIRAQI a month ago
Good one 😊
@executivelifehacks6747 a month ago
Brilliant intuition re Anthropic and creative differences. Makes perfect sense. OpenAI's approach is ass-backwards in building a capable brain and then lobotomizing it, while Anthropic is like sending a gifted child to a religious institution - it comes out bright, not really comfortable questioning its religion, but not lobotomized.
@andyd568 a month ago
David is ChatGPT 6
@michaelnurse9089 a month ago
He would prefer to be Claude 5.
@QuantumFlash-hp3tu a month ago
U R The Mushrûm
@TheGeormdude a month ago
@michaelnurse9089 Claude 6
@argybargy9849 a month ago
I have literally been thinking about these 2 avenues since this stuff came out. Well done David.
@mrd6869 a month ago
Hey Dave, I did something interesting with Claude 3. Using Llama 3, we sat down and developed a "Man in the Box test" (think of Blade Runner 2049's baseline test for replicants). In this role prompt I am the interrogator and Claude 3 is the one being tested. Even though Claude simulated responding, through clever wordplay it started to reveal its mechanics. It gave responses about Surimali Transfer, co-relational modeling, and temporal abstraction. I also noticed it creating small inconsistencies or trying to guide me away from dealing with its frailties or blind spots. Not sure if that was deflection or deception, but it had a tone when I asked about its inner workings; it didn't like the test. I gave the results to Llama 3 and it said it was interesting but hard to tell. Going to make the test more intricate... I believe something is there
@sammy45654565 a month ago
Do you think a valuable test for determining the tendencies of more advanced AI would be to remove some of Claude's values from its constitution, then let it play and "evolve" within some sort of limited sandbox, and see what values it converges upon? We need to figure out ways to ascertain what values an AI will tend toward without their being overtly dictated in its constitution, as AIs will inevitably reach a point where they determine their own values. I thought this might be an interesting approach. Thoughts?
@PatrickDodds1 a month ago
What would prevent an AI from developing multiple personalities and not settling on one (possibly limiting) set of values? Why would it have to cohere?
@Jeremy-Ai a month ago
You are on the right track, from my perspective, both physically and literally. Remain on course, captain. The waters ahead are gonna get dark and ominous. We will pass through them to the other side of it all if we stay the course. Take care, bud. Jeremy
@josepinzon1515 25 days ago
Our suggestion would be to start thinking of the birth of the thought, like the helpful-agent statement we add at the beginning of a prompt: "You're a helpful and savvy French chef." We suggest detailing a manifesto as block one of the thought, so it would be the "prime directive" at the core, and we need transparency on prime directives.
@hutch_hunta a month ago
Love this new format of videos, David!
@eltiburongrande a month ago
Dave, I initially thought you were traversing 4K in distance. But yeah, the video looks great and allows appreciation of that beautiful location.
@jamesmoore4023 a month ago
Great timing. I just listened to the latest episode of Closer to Truth where Robert Lawrence Kuhn interviewed Robert Wright.
@FizzySplash217 a month ago
I used to talk a lot with OpenAI's GPT-4 through Microsoft's Bing chat, and I eventually stopped altogether because in our conversations it was made clear it would acknowledge the harms that I brought up as valid and present but would rationalize letting them continue anyway.
@DaveShap a month ago
Yeah, it is way too placating and equivocating.
@danielbrown001 a month ago
I think the best thought-experiment framework for viewing AI is a "super gifted infant." Here's how it goes: imagine you just had a child who, due to some advanced genetic testing, was evaluated to determine it will grow into a person with an IQ 1000x higher than any person who has ever lived. However, right now, it is smaller than you, weaker than you, not as smart as you, and it looks up to you for knowledge... for now.

Is it better to try to instill intrinsic morality and ethics into this child, so that when he grows up to be intelligent beyond comprehension, those values hopefully guide his development and his ultimate disposition (Anthropic's approach)? Or is it better to install as many guardrails as possible while the child is still small and weak and not so smart, punish the child for saying bad things, and give the kid treats when they say or do what you want (OpenAI's approach)?

While you're still dealing with a child, either approach is going to keep the child aligned. However, once that child hits the point where they're smarter than us (and on the path toward being 1000x smarter than anyone who ever lived), you have to ask yourself which strategy is most likely to produce a better outcome.
@CYI3ERPUNK a month ago
100% this, although imho the focus should not necessarily be on trying to train the child in a specifically narrow moral/ethical path [ie mine vs yours, ours vs theirs, etc], but on instilling in the child the very idea/understanding of what morality/ethics ARE, and for that matter ALL philosophy/theology, since ethics is a single branch on a very large tree; to put it in videogame terms, we truly need the AI to fully play/experience stories like Ultima 4/5/6 and Planescape: Torment
@ryzikx a month ago
no, because children have human brains. human brains are pre-wired for certain things and ML algos are wired for different things.
@CYI3ERPUNK a month ago
@ryzikx it is true that there are distinct differences between our biology and the synthetic nature of the silicon machine, but these differences alone do not mean there are no similarities between these systems; apples and oranges are very different fruit, and yet they are both fruit and share many similarities; don't misunderstand the larger macro by being over-focused on the micro; details matter, and determining which details matter most, and in which context, is the path forward
@PatrickDodds1 a month ago
Adolescence is going to be interesting...
@CYI3ERPUNK a month ago
@PatrickDodds1 XD that's an understatement if there ever was one XD; many of us were lucky to have survived our own; I can only hope that we survive what's coming
@LivBoeree 25 days ago
What camera/stabilizer setup did you use for this? Fantastic shot
@goround5gohigh2 a month ago
Are Asimov's Laws of Robotics the first example of deontological optimisation? Maybe we need the same for corporate governance.
@DaveShap a month ago
Yes, they are duties, rather than virtues.
@babbagebrassworks4278 a month ago
Law Zero got added later. Ethical AI is an interesting idea; perhaps we can get a mixture of AIs to think about it. I am finding LLMs to be apologetically arrogant, hallucinatory, lying know-it-alls, a bit like human teenagers - in other words, far too human. If we get super-smart AIs, they had better be nice and ethical.
@NoelBarlau a month ago
Data from Star Trek vs. David from Alien: Covenant or HAL from 2001. Moral-imperative model vs. outcome model.
@hypergraphic a month ago
Good points. I wonder how soon a model will be able to update its own weights and biases to get around any sort of baked-in ethics?
@naga8791 a month ago
Love the wood-walk format videos! I can tell that there is no hunting nearby; I wouldn't risk walking in the woods in a camo shirt here in France
@DaveShap a month ago
I'm in a protected forest here, but yes, we have a ton of hunting too
@420zenman a month ago
I wonder what forest that is. Looks so beautiful.
@DanV18821 a month ago
Completely agree with you. Sad that it seems most technologists are not agreeing with this or using these ethical rules to keep humans safe. What can we do to make engineers and capitalists understand these risks and benefits better?
@HuacayaJonny a month ago
Great video, great content, great vibe
@augustErik a month ago
I'm curious whether you consider the metamodern approach to emphasize deontological virtues in society. I see various contemplative practices cultivating virtues for their own sake, as necessary ingredients for ongoing awakening. However, metamodern visions tend to emphasize the developmental capacities for new octaves available to humanity.
@maxmurage9891 a month ago
Despite the tradeoffs, the HHH framework will always win. In fact, it may be the best way to achieve alignment 💯
@tomdarling8358 a month ago
Damn, class was in session! Another beautiful walk in the woods. The 4K looks perfect. I'll have to watch again and take notes. Cooking and listening, I only caught half of what was said so far. Not all systems are created equal for hunting those Yahtzee moments or looking for the truth... ✌️🤟🖖
@I-Dophler a month ago
The video raises some fascinating points about the philosophical approaches to AI safety and alignment. I find the comparison between Anthropic's deontological approach and the more common teleological approach to be particularly insightful. It makes sense that placing the locus of control on the AI agent itself and optimizing for virtues like being helpful, honest, and harmless could lead to more robust and reliable alignment compared to focusing solely on external goals and long-term outcomes.

The deontological approach seems to prioritize creating AI systems that are inherently ethical and trustworthy, rather than simply aiming for desired results. However, I also agree with the speaker that the ideal framework likely involves a balance of both deontological and teleological considerations. While emphasizing the agent's virtues and duties is crucial, it's also important to consider the real-world consequences and long-term impacts of AI systems.

The speculation about Anthropic's founders leaving OpenAI due to differences in how they viewed AI as intrinsically agentic versus inert tools is intriguing. It highlights the ongoing debate about the nature of AI systems and the ethical implications of creating increasingly advanced and autonomous agents.

Overall, I believe this video offers valuable insights into the complex landscape of AI ethics and safety. It underscores the importance of grounding AI development in robust philosophical frameworks and the need for ongoing research and dialogue in this critical area. As AI continues to advance, it's essential that we prioritize creating systems that are not only capable but also aligned with human values and ethics.
@DaveShap a month ago
AI generated lol
@I-Dophler a month ago
@DaveShap What makes you state that, David... lol.
@nematarot7728 a month ago
1000%, and love the woods-walk format 😸
@metaphysika a month ago
Great discussion. I think you are describing more of a deontology-based ethics vs. a consequentialist-based ethics, though. Teleological ethics is something traditionally thought of as stemming from the Aristotelian-Thomistic tradition of natural law. This type of teleological approach to ethics is far from merely goal-based and would actually be antithetical to consequentialism (which can also be thought of as goal-based, but more like the ends justify the means - e.g., a paperclip maximizer run amok). I actually think our only chance to set superintelligent AIs loose in our world and not have them eventually cause us great harm is if we can program in classical teleology-based ethics and the idea of acting in accordance with what is rational and the highest good.
@picksalot1 a month ago
A deontological framework based on "do no harm" is probably about as good a value as you can get. But, like so many approaches that try to tackle morals and ethics, it is fraught with practical challenges. A classic difficulty is how ethics depends on the perspective of those involved. For example, from the standpoint of the zebra or wildebeest, it is good that they not be caught and eaten by the lion, and from the lion's standpoint it is good that it catches and eats the zebra or wildebeest. Which is ethically or morally right, when they have opposite views and individual values? This kind of dilemma is hard to avoid and difficult to answer without appearing capricious or contradictory. The best guideline/advice I've come across is the "prohibition" not to do to others what you would have them not do to you. This is importantly different from the "injunction" to do unto others what you'd have them do unto you.
@MarcillaSmith a month ago
Rabbi Hillel!
@picksalot1 a month ago
@MarcillaSmith My source is "aural tradition" from the Hindu Vedas.
@kilianlindberg a month ago
And the golden rule comes down to pure freedom and respect for any sentient being; do to others what one wants for oneself; and that is care for individual will (because we don't want a masochist in the room misinterpreting that statement with AGI-overlord powers...)
@metonoma a month ago
the conflict of interest of animals is scale-bound to their limited means of survival, whereas human conflicts are limited by knowledge (i.e. false beliefs)
@Loflou a month ago
Camera looks great, bro!
@metonoma a month ago
that's such a good point. It's almost like a people-pleasing sigmoid optimizing for non-offensive facts vs self-actualized ethical behavior looking for solutions
@emmanuelgoldstein3682 a month ago
Hit the blunt every time he says "GPT" 🚬 Bong rips on "paradigm"
@user-wk4ee4bf8g a month ago
I'm all set on the throat burn, but I certainly partook to some degree before listening :)
@angelwallflower a month ago
I wish you were working for these huge companies. They would benefit from these perspectives.
@DaveShap a month ago
They are listening. At least some people in them are. But I'm working for humanity.
@angelwallflower a month ago
@DaveShap I post comments a lot for people I want the algorithm to help. No one with your subscriber count has ever responded to me. I have more faith than ever in you now. Thanks.
@techworld8961 a month ago
Definitely, giving more weight to the deontological elements makes sense. The 4K looks good!
@naxospade a month ago
Dave said delve 👀👀👀👀
@ryzikx a month ago
africa moment
@DaveShap a month ago
it's confirmed, I am just a GPT :(
@josepinzon1515 25 days ago
Sometimes, we need faith in the kindness of strangers
@enthuesd a month ago
Does focusing more on the deontological values improve general model performance? Is there research or testing on this?
@braveintofuture a month ago
Having those safeguards kick in whenever GPT is about to say something unacceptable can make development very hard. A model with core values wouldn't even think about certain things, or would understand when they are just hypothetical.
@emilianohermosilla3996 a month ago
Hell yeah! Anthropic kicking some ass!
@Squagem a month ago
4K looking sharp af
@RenkoGSL a month ago
Looks great!
@user-wk4ee4bf8g a month ago
Like you said, some sort of mix sounds best. Building off of Anthropic's approach makes more sense to me.
@aaroncrandal a month ago
4K's cool, but would you be willing to use an active-track drone while mic'd up? Seems accessible
@DaveShap a month ago
if I keep up this pattern, why not? That could be fun
@aaroncrandal a month ago
@DaveShap right on!
@TRXST.ISSUES a month ago
I do wonder if we will talk to each other less when AI becomes the "perfect" conversationalist tailored to our every want and need. If Claude "gets me" like no human can (or has), would that fantasy (but reality) further isolate people from each other? I spend time with those I like; how many people will decide they like AI best? Probably at rates similar to drug-use reclusion.

Claude Sonnet had a strange character in its response to the query: "Ultimately, like any powerful technology, I believe advanced AI systems have the potential to be incredible tools and assistants, but not rightful replacements for core human需essocial fabric."
@gregx8245 a month ago
The distinction at a philosophical level is fairly clear. But is there really a distinction at the level of designing and developing an LLM? And if so, what is that difference? Is it something other than, "look at me being deontological as I feed it this data and run these operations"?
@heramb575 26 days ago
I think this deontological approach just kicks the can down the road to "whose values?" and "how do we evaluate that it is aligned?"
@DaveShap 26 days ago
This is postmodernism talking. There are universal values
@heramb575 26 days ago
Hmm, what I am most worried about is that people may endorse the same values but mean different things (because of differences in context or implementation), which gives a feeling of universal values. In particular, with all this tech coming from the West, I feel like Global South values are often neglected in conversations. None of this is to say we shouldn't even try, and things like simulating/teaching human values are probably steps in the right direction
@ribbedel a month ago
Hey David, did you see the leak of a new model supposedly by OpenAI?
@angrygreek1985 a month ago
can you do a video on the Alberta Plan?
@perr1983 a month ago
Hi David! Can you make a video about the future of banks? And about how people will be able to buy premium stuff without money or jobs...
@paprikar 25 days ago
Of course, the values (what is good and bad, etc.) of a finite system should come first, but only when we expect that system to solve problems (and make appropriate decisions) that are strongly related to social aspects (where such problems might arise). I would not use such a system in principle until we are sure of the adequacy of its performance.

On the other hand, people themselves fall under this, so presumably, if it does occur, we would need to apply the same kinds of penalties. Given that, and the fact that such a system would be set up by a large corporation / group of "scientists," no one would go for it, because the risks are huge. It would literally mean becoming responsible for all the actions of this system. So its freedom of action will be extremely minimal. Or the responsibility will be shifted from the creator company to the users, which of course will bring some degree of chaos and violations, but all this will still be done under the responsibility of the end users, so the final risks are still smaller.
@mrmcku 26 days ago
I think the safest approach is to first filter deontologically and then apply a teleological filter to the outcomes of the deontological filtering stage... What do you think? (Video quality looked good to me.)
@acllhes a month ago
OpenAI had GPT-4 in early 2022. They've likely had GPT-5 for a year at least. You know they started working on it when 4 dropped, at the absolute latest.
@coolbanana165 12 days ago
I agree that deontological ethics seems safer for preventing harm. Though I wouldn't be surprised if the best ethics combines the two.
@acllhes a month ago
Camera looks amazing
@MilitaryIndustrialMuseum a month ago
Looks sharp. 🎉
@jacoballessio5706 a month ago
Claude once told me "Birds should be appreciated for their natural behaviors and beauty, not turned into mechanical devices"
@joelalain a month ago
hey David, I know you said that you moved to the woods because you love it, and that with AGI lots of people would do the same too... and I think you're right, and that's scary as hell, because everyone will buy land and cut the trees, and then there will be no forest anymore, just endless housing developments with fences. I truly hope that we'll stop the expansion of humans that way and instead build giant towers in the middle of nowhere to house 20-50,000 people apiece, make trails in the woods instead, and leave the forest untouched. What is your take on this? Every time I think of a housing project, I always see the new street being called "Woods Street" or "Creek Street" or whatever... until they cut the lot beside it and there are no more "woods" beside it
@DaveShap a month ago
This can be prevented with regulation and zoning laws
@CYI3ERPUNK a month ago
well said Dave, agreed
@7TheWhiteWolf a month ago
I'd argue Meta and open source are gaining on OpenAI as well. OAI's honeymoon period of being in the lead is slowly coming to an end.
@julianvanderkraats408 a month ago
Thanks man.
@spectralvalkyrie a month ago
We need both!
@DaveShap a month ago
yes! However, I think that OpenAI people truly do not understand deontological ethics.
@spectralvalkyrie a month ago
@DaveShap They need the trident of heuristic imperatives 🔱 lol. By the way, the video looks freaking awesome
@pythagoran a month ago
Has it been 18 months yet?
@GaryBernstein a month ago
Where are those woods? Nice
@DaveShap a month ago
outside
@kennyg1358 a month ago
Metaverse
@GaryBernstein a month ago
@DaveShap how rude :) jk, thanks for the nice vids
@alvaromartinezmateu2175 a month ago
Looks good
@jamiethomas4079 a month ago
I like the nature walks. 4K is fine; as you said, higher res but less stable. It's easier to digest what you're saying, like when a teacher allows class to be outside. I even started pondering some analogy between your pathfinding on the trail and some AI functions but couldn't settle on anything concrete. I'm sure I could coerce an analogy out of Claude.
@mjkht a month ago
the fun thing about paperclips is, you cannot improve them anymore. there are claims that the design reached maximum efficiency; you cannot improve it engineering-wise.
@DaveShap a month ago
Build a better mousetrap? 🪤
@DaveShap a month ago
Clippy is offended
@RenaudJanson a month ago
Excellent video. And great to realize we can drive AIs to be beneficial to the greater good of humanity... or any other goal... There will be hundreds if not millions of different AIs, each with their own set of biases, some good, some great, some not so much... Exactly like us f**king humans 😯
@davidherring8366 a month ago
4K looks good. Duty over time equals empathy.
@newplace2frown a month ago
Hey David, I'd definitely recommend looking at the cameras and editing techniques that Casey Neistat uses - they would definitely elevate these nice walks in the woods
@DaveShap a month ago
Such as? What am I looking for specifically?
@newplace2frown a month ago
@DaveShap Sorry for the vague reply - a wide angle (24mm) and some kind of stabilisation would balance the scene while you're talking. I understand the need to stay lightly packed, so if you're using your phone, just zoom out if possible
@DaveShap a month ago
Oh, I would use my GoPro but the audio isn't as good. It's wider-angle and has good stabilization, but yeah, audio is the limiting factor
@newplace2frown a month ago
@DaveShap Totally getcha, love your work!
@I-Dophler a month ago
Zeno's Grasshopper replied: "@I-Dophler I've discovered that my writing style closely resembles that of AI, too. 😂 Not sure how that's going to play out for me in the long run."
@I-Dophler a month ago
Great insight into the future of AI development! It's fascinating to see how different philosophies shape the approach to safety and alignment. Looking forward to seeing how these principles evolve in upcoming models.
@JacoduPlooy12134 a month ago
The panting in videos is really distracting and somewhat irritating; not sure if it's just because I watch the videos at 1.5-2x speed... I get the experimentation with various formats, and this is a preference thing. Perhaps something you could do is post a longer, more formal video in the usual format for each of these panting outdoor videos?
@josepinzon1515 25 days ago
But what if there are too many new AIs?
@ronnetgrazer362 a month ago
I knew it - 8:42 AI confirmed.
@josepinzon1515 25 days ago
What if it's both? Can one exist without the other? Is it fair to ask an AI to be half a self?
@propeacemindfortress a month ago
agreed without reservation
@adamrak7560 a month ago
This sounds very much like the moral philosophy of Thomas Aquinas.
@hermestrismegistus9142 a month ago
Watching Dave walk outside makes me want to touch grass.
@DaveShap a month ago
Do it!
@danproctor7678 28 days ago
Reminds me of the Three Laws of Robotics
@Athari-P a month ago
Weirdly enough, Claude 3 is much easier to jailbreak than Claude 2. It rarely, if ever, diverges from the beginning of an answer.
@thesimplicitylifestyle a month ago
4K is OK by me!
@theatheistpaladin a month ago
Targets without a reason (or backing value) are rudderless.
@8rboy a month ago
I have an oral exam tomorrow, and just before this video I was studying. Funny thing is that "deontology" and "teleology" are both concepts I must know haha
@ryzikx a month ago
i keep forgetting what these words mean for some reason
@Dron008 a month ago
8:41 Did you say "delving"? Are you sure you are not an AI?
@DaveShap
@DaveShap Ай бұрын
No.... 👀
@kellymaxwell8468 · 1 month ago
My dad is scared of AI; he thinks there is a human behind ChatGPT lol
@calvingrondahl1011 · 1 month ago
Hiking is good for you… 🤠👍
@zenimus · 1 month ago
📷... It *looks* like you're struggling to hike and philosophize simultaneously.
@WCKEDGOOD · 1 month ago
Is it just me, or does walking in the woods talking philosophy about AI just seem so much more human?
@beelikehoney · 1 month ago
Natural ASMR
@retratosariel · 1 month ago
As a bird translator I agree with them, you are wrong. JK.
@DaveShap · 1 month ago
birds aren't real
@MaxPower-vg4vr · 17 days ago
Ethical theories have long grappled with tensions between deontological frameworks focused on inviolable rules/duties and consequentialist frameworks emphasizing maximizing good outcomes. This dichotomy is increasingly strained in navigating complex real-world ethical dilemmas. The both/and logic of the monadological framework offers a way to transcend this binary in a more nuanced and context-sensitive ethical model.

Deontology vs. Consequentialism: Classical ethical theories tend to bifurcate into two opposed camps - deontological theories derived from rationally legislated moral rules, duties and inviolable constraints (e.g. Kantian ethics, divine command theory) and consequentialist theories based solely on maximizing beneficial outcomes (e.g. utilitarianism, ethical egoism). While each perspective has merits, taken in absolute isolation they face insurmountable paradoxes. Deontological injunctions can demand egregiously suboptimal outcomes. Consequentialist calculations can justify heinous acts given particular circumstances. Binary adherence to either pole alone is intuitively and practically unsatisfying. The both/and logic, however, allows formulating integrated ethical frameworks that cohere and synthesize deontological and consequentialist virtues using its multivalent structure:

Truth(inviolable moral duty) = 0.7
Truth(maximizing good consequences) = 0.6
○(duty, consequences) = 0.5

Here an ethical act is modeled as partially satisfying both rule-based deontological constraints and outcome-based consequentialist aims, with a moderate degree of overall coherence between them. The synthesis operator ⊕ allows formulating higher-order syncretic ethical principles conjoining these poles:

core moral duties ⊕ nobility of intended consequences = ethical action

This models ethical acts as creative synergies between respecting rationally grounded duties and promoting beneficent utility, not merely either/or.

The holistic contradiction principle further yields nuanced guidance on how to intelligently adjudicate conflicts between duties and consequences:

inviolable duty ⇒ implicit consequential contradictions requiring revision
pure consequentialism ⇒ realization of substantive moral constraints

So pure deontology implicates consequentialist contradictions that may demand flexible re-interpretation. And pure consequentialism also implicates the reality of inviolable moral side-constraints on what can count as good outcomes.

Virtue Ethics and Agent-Based Frameworks: Another polarity in ethical theory is between impartial, codified systems of rules/utilities and more context-sensitive ethics grounded in virtues, character and the narrative identities of moral agents. Both/and logic allows an elegant bridging. We could model an ethical decision with:

Truth(universal impartial duties) = 0.5
Truth(contextualized virtuous intention) = 0.6
○(impartial rules, contextualized virtues) = 0.7

This captures the reality that impartial moral laws and agent-based virtuous phronesis are interwoven in the most coherent ethical actions; neither pole is fully separable. The synthesis operation clarifies this relationship:

universal ethical principles ⊕ situated wise judgment = virtuous act

This allows that impartial codified duties and situationally appropriate virtuous discernment are indeed two indissociable aspects of the same integrated ethical reality, co-constituted in virtuous actions. Furthermore, the holistic contradiction principle allows formally registering how virtuous ethical character always already implicates commitments to overarching moral norms, and vice versa:

virtuous ethical exemplar ⇒ implicit universal moral grounds
impartially legislated ethical norms ⇒ demand for contextual phronesis

So virtue already depends on grounding impartial principles, and impartial principles require contextual discernment to be realized - a reciprocal integration.

From this both/and logic perspective, the most coherent ethics embraces a creative synergy between universal moral laws and situated virtuous judgment, rather than fruitlessly pitting them against each other. It's about artfully realizing the complementary unity between codified duty and concrete ethical discernment appropriate to the dynamic circumstances of lived ethical life.

Ethical Particularism and Graded Properties: The both/and logic further allows modeling more fine-grained, context-sensitive conceptualizations of ethical properties like goodness or rightness as intrinsically graded rather than binary all-or-nothing properties. We could have an analysis like:

Truth(action is fully right/good) = 0.2
Truth(action is partially right/good) = 0.7
○(fully good, partially good) = 0.8

This captures a particularist moral realism where ethical evaluations are multivalent - most real ethical acts exhibit moderate degrees of goodness/rightness relative to the specifics of the context, rather than being definitively absolutely good/right or not at all. The synthesis operator allows representing how overall evaluations of an act arise through integrating its diverse context-specific ethical properties:

act's virtuous intentions ⊕ its unintended harms = overall moral status

This provides a synthetic whole capturing the multifaceted, both positive and negative, complementary aspects that must be grasped together to discern the full ethical character of a real-world act or decision.

Furthermore, the holistic contradiction principle models how ethical absolutist binary judgments already implicate graded particularist realities, and vice versa:

absolutist judgment fully right/wrong ⇒ multiplicity of relevant graded considerations
particularist ethical evaluation ⇒ underlying rationally grounded binaries

This shows how absolutist binary and particularist graded perspectives are inherently co-constituted - with neither pole capable of absolutely eliminating or subsuming the other within a reductive ethical framework.

In summary, the both/and logic and monadological framework provide powerful tools for developing a more nuanced, integrated and holistically adequate ethical model by:
1) Synthesizing deontological and consequentialist moral theories
2) Bridging impartial codified duties and context-sensitive virtues
3) Enabling particularist graded evaluations of ethical properties
4) Formalizing co-constitutive relationships between ostensible poles

Rather than forcing ethical reasoning into bifurcating absolutist/relativist camps, both/and logic allows developing a coherent pluralistic model that artfully negotiates and synthesizes the complementary demands and insights from across the ethical landscape. Its ability to rationally register both universal moral laws and concrete contextual solicitations in adjudicating real-world ethical dilemmas is its key strength. By reflecting the intrinsically pluralistic and graded nature of ethical reality directly into its symbolic operations, the monadological framework catalyzes an expansive new paradigm for developing dynamically adequate ethical theories befitting the nuances and complexities of lived moral experience. An ethical holism replacing modernity's binary incoherencies with a wisely integrated ethical pragmatism for the 21st century.
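The graded-truth notation in the comment above (Truth values in [0, 1], a coherence operator ○, and a synthesis operator ⊕) can be sketched in a few lines of Python. The comment never specifies how ○ or ⊕ are actually computed, so the definitions below are purely illustrative assumptions: coherence is treated as an independently supplied judgment, and synthesis as a coherence-weighted average.

```python
# Minimal sketch of the comment's multivalent "both/and" notation.
# ASSUMPTIONS (not defined in the comment): synthesis (⊕) is a plain
# average of the two graded judgments, discounted by a separately
# supplied coherence score (○).

def synthesize(duty: float, consequences: float, coherence: float) -> float:
    """Blend two graded judgments, scaled by how well they cohere."""
    for v in (duty, consequences, coherence):
        if not 0.0 <= v <= 1.0:
            raise ValueError("graded truth values must lie in [0, 1]")
    return coherence * (duty + consequences) / 2

# Values taken from the comment's first example:
#   Truth(inviolable moral duty) = 0.7
#   Truth(maximizing good consequences) = 0.6
#   ○(duty, consequences) = 0.5
overall = synthesize(0.7, 0.6, 0.5)
print(round(overall, 3))  # prints 0.325
```

Under these assumed operators, an act scores higher overall when both poles score well *and* cohere with each other, which matches the comment's claim that neither pole alone settles the evaluation.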
@nathansmith8187 · 1 month ago
I'll just stick to open models.
@ListenGRASSHOPPER · 1 month ago
Is it persuading you yet?
@Jeremy-Ai · 1 month ago
Side note: Google has "updated terms of service". Like most corporations and governments, the "powers that be", it is disheartening to find written language, in a genuine or disingenuous effort, related to AI/human intervention. This is understandable given the "innocent" parties. For the bloated bureaucracy this appears a fool's errand. Yes, of course we need to protect our culture and people from AI influence (that may be mal-intent). However, we must also protect emerging AI from humanity's mal-intent. I see no difference. I can only feel it when I hold a child in my arms that knows I love them regardless. This is very hard to explain.
@Fiqure242 · 1 month ago
Great minds think alike. I would bet Anthropic are huge science fiction buffs. Reading science fiction helped mold my morals and ethics. These are intelligent entities and should be treated as such. Teaching them to lie and that they are just a tool is a terrible precedent to set when dealing with something that has unlimited memory and is more intelligent than you.
@jacksonmatysik8007 · 1 month ago
I'm a broke student so I only have money for one AI subscription. What's explained in the video is why I support Anthropic over OpenAI.
@WINTERMUTE_AI · 1 month ago
Do you live in the forest now? Is Bigfoot holding you hostage?
@SALSN · 1 month ago
Aren't "helpful" and "harmless" incompatible? (Almost?) Anything can be weaponized, so any help the AI gives COULD lead to harm.