
Master Claude 3 Haiku - The Crash Course!

20,170 views

Sam Witteveen

1 day ago

Comments: 54
@robxmccarthy 4 months ago
Haiku is the biggest release since GPT-4 from a cost/performance perspective. Glad you got to dig into it; always enjoy your videos.
@samwitteveenai 4 months ago
Thanks! I agree it really seems to open up a lot of opportunities.
@joflo5950 4 months ago
Thanks for the video! I would really love to see the follow-up video on function calling you mentioned.
@pokerandphilosophy8328 4 months ago
My main use of Claude 3 currently is to have extended philosophical discussions with it, discuss texts, and have it help me rewrite my own papers and drafts. I often begin the conversation with Opus to maximise quality, but when the context gets longer I sometimes switch to Sonnet or Haiku. Haiku very often surprises me with how smart it is. When its responses are informed by the longer context, including the prior responses from Opus, this works much like many-shot prompting with explicit examples and it boosts Haiku's intelligence. Furthermore, Haiku's slightly more unfocused or meandering intellect lets it make relevant connections between various parts of the conversation that Opus often misses due to its more focused attention to user instructions and strict adherence to the prompt. As a result, Haiku's responses are sometimes more intelligent, insightful and broadly context-sensitive, even if it is slightly more prone to error than its bigger siblings.
@nas8318 4 months ago
Its meandering intellect may be due to a higher temperature setting. You may want to look into that.
@pokerandphilosophy8328 4 months ago
@@nas8318 I'm interfacing all three Claude 3 models through the Anthropic workbench with the temperature set to zero. So, it's really something else that is at play.
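For anyone wanting to reproduce the handoff described in this thread (starting a conversation with Opus, then continuing it with Haiku at temperature 0), here is a minimal sketch with the Anthropic Python SDK; the conversation content is purely illustrative.

```python
import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

# Earlier turns, e.g. captured from an Opus session, replayed as context for Haiku.
history = [
    {"role": "user", "content": "Summarise the main argument of my draft paper."},
    {"role": "assistant", "content": "The draft argues that ..."},  # Opus's earlier reply
]

response = client.messages.create(
    model="claude-3-haiku-20240307",  # only the model id changes; the history stays the same
    max_tokens=1024,
    temperature=0,                    # matches the workbench setting mentioned above
    messages=history
    + [{"role": "user", "content": "Given our discussion so far, help me tighten section 2."}],
)
print(response.content[0].text)
```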
@paulmiller591 4 months ago
Thanks, Sam. I had played with Haiku previously but had not done this optimised prompting. Jumping into this now. Cheers.
@davidtindell950 3 months ago
Great overview and multimodal examples from the Anthropic Claude Cookbook using Haiku! I ran some of the examples multiple times with variations and the cost so far is less than US$1.00. We should definitely consider Haiku for personal and business apps where the tradeoff between quality and cost must be balanced: e.g. summarizing a large volume of papers and documents, and creating and maintaining a large database of vector embeddings to support document Q&A.
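A rough sketch of the workflow described in that comment: summarise each document with Haiku, then embed the summaries for later Q&A retrieval. Anthropic does not provide its own embedding endpoint, so a local sentence-transformers model is assumed here, and the document list is a placeholder.

```python
import anthropic
from sentence_transformers import SentenceTransformer

client = anthropic.Anthropic()
embedder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

def summarise(text: str) -> str:
    msg = client.messages.create(
        model="claude-3-haiku-20240307",
        max_tokens=512,
        messages=[{"role": "user",
                   "content": f"Summarise the following paper in about five bullet points:\n\n{text}"}],
    )
    return msg.content[0].text

documents = {"paper_1.txt": "full text here"}  # placeholder corpus
index = []  # (doc_id, summary, embedding) rows for a simple in-memory vector store
for doc_id, text in documents.items():
    summary = summarise(text)
    index.append((doc_id, summary, embedder.encode(summary)))
```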
@AdamTwardoch 4 months ago
It seems to me that Haiku is a distilled / sparsified / quantized Opus. When it works, it gives results that are quite similar to Opus, while Sonnet gives very different results, so it looks like it was trained independently. This is great: I often prep few-shot examples with Opus and then hand it over to Haiku for scale.
@amandamate9117 4 months ago
Can't wait for the CrewAI + Haiku video! It would be nice to have a super-agent that uses Opus and small agents that only use Haiku.
@ehza 4 months ago
Thank you for this. Quite helpful to me!
@samwitteveenai 4 months ago
Glad it was helpful!
@walterpark8824 4 months ago
What a great model for local use. Thanks for showing it so clearly.
@JD-hk7iw 4 months ago
I had written off Haiku after testing my use case with it using the same prompt I use for opus/gpt-4. Totally unusable. After watching this, I revised the wording & format of the system prompt and added three examples. Well I'll be damned. Touché Haiku, touché. Not as nuanced and focused as opus/gpt-4, but definitely serviceable. The combination of the 200K context window and the pricing really is what makes this model special. Thanks for the informative video showing the proper way to leverage Haiku.
@samwitteveenai 4 months ago
This is awesome to hear! I have found that since the input tokens are so cheap, I have been using 20 examples for some things and getting really good results for changing style and tone too.
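A minimal sketch of the many-example approach mentioned here, using the Anthropic messages API: examples are supplied as alternating user/assistant turns ahead of the real request. The example pairs and prompts below are placeholders.

```python
import anthropic

client = anthropic.Anthropic()

# A few style/tone examples; with Haiku's input pricing, 10 to 20 pairs are still cheap.
examples = [
    ("Rewrite in a friendly tone: Your payment failed.",
     "Oops, it looks like your payment didn't go through. Mind giving it another try?"),
    ("Rewrite in a friendly tone: The meeting is cancelled.",
     "Heads up: today's meeting is off. We'll find a new time soon!"),
]

messages = []
for user_text, assistant_text in examples:
    messages.append({"role": "user", "content": user_text})
    messages.append({"role": "assistant", "content": assistant_text})
messages.append({"role": "user",
                 "content": "Rewrite in a friendly tone: Your account will be suspended."})

response = client.messages.create(
    model="claude-3-haiku-20240307",
    max_tokens=256,
    system="You rewrite messages in the requested tone. Follow the style of the examples exactly.",
    messages=messages,
)
print(response.content[0].text)
```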
@xemy1010 4 months ago
Haiku might be the perfect model to label / caption an image dataset at scale using natural language. DALL-E 3's paper makes it clear that generating detailed natural-language captions for each image was a big part of the magic behind its ability to understand and follow prompts so well at inference. SD3 only used a 50:50 mix of CogVLM-generated captions and the original image captions. I think a Haiku-captioned training dataset would be a big step up for training these models.
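For the captioning-at-scale idea, the Claude 3 models accept base64-encoded images through the messages API. A minimal per-image sketch (the file path, media type and caption prompt are placeholders):

```python
import base64
import anthropic

client = anthropic.Anthropic()

def caption(image_path: str) -> str:
    with open(image_path, "rb") as f:
        data = base64.b64encode(f.read()).decode("utf-8")
    msg = client.messages.create(
        model="claude-3-haiku-20240307",
        max_tokens=300,
        messages=[{
            "role": "user",
            "content": [
                {"type": "image",
                 "source": {"type": "base64", "media_type": "image/jpeg", "data": data}},
                {"type": "text",
                 "text": "Write a detailed natural-language caption for this image."},
            ],
        }],
    )
    return msg.content[0].text

print(caption("example.jpg"))  # placeholder path
```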
@kenchang3456 4 months ago
Cheaper works for me as I'm in the learning/experimenting stage. Looking forward to your Claude 3 based function-calling video. Thanks for sharing.
@samwitteveenai 4 months ago
Thanks Ken
@brandonwinston 4 months ago
One of the big challenges I'm having is plugging Haiku into all the places where OpenAI APIs are accepted.
@samwitteveenai 4 months ago
Check out LiteLLM. You can use it as a proxy that takes OpenAI-style inputs and reroutes them.
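A minimal sketch of the LiteLLM route suggested here: you keep OpenAI-style message dictionaries and LiteLLM translates the call to Anthropic's API (it also ships a standalone proxy server if you need an OpenAI-compatible endpoint). The prompt is a placeholder.

```python
# pip install litellm; expects ANTHROPIC_API_KEY in the environment
from litellm import completion

response = completion(
    model="claude-3-haiku-20240307",  # same call shape as an OpenAI chat completion
    messages=[{"role": "user", "content": "Write a haiku about cheap, fast models."}],
)
print(response.choices[0].message.content)  # OpenAI-style response object
```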
@alchemication 4 months ago
So far I wasn't able to get anywhere with Haiku for any production-quality use case, but the idea of using many examples sounds promising. Will test it out. Thanks for the inspiration to try again 😊
@jayhu6075 4 months ago
What an amazing explanation of how to work with vision, XML and other features in Haiku. Hopefully more in the future about the agents you mentioned with CrewAI. Many thanks.
@aa-xn5hc 4 months ago
Looking forward to your next video on CrewAI and Haiku
@EyadAiman 4 months ago
Impressive tutorial as always, Sam. I suggest you make a tutorial on how to build a RAG system with agents using Claude 3 Haiku.
@silvacarl 4 months ago
I look forward to every one of these videos. Can you do more LangChain or RAG examples with open-source LLMs?
@vivekpadman5248 4 months ago
Thanks for this
@EmadElazhary-tt8tl 4 months ago
Thanks for the video! Please, please: function calling using Haiku in LangChain.
@hendoitechnologies 1 month ago
Can you post a full course video about the Claude 3.5 Sonnet model?
@UTubeGuyJK 4 months ago
I hadn't heard of XML-tag prompting with Claude before.
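For anyone else new to it, XML-tag prompting simply means wrapping the parts of the prompt (and the requested output) in tags so the model can keep them apart. A minimal sketch, with a placeholder document and tag names of your own choosing:

```python
import re
import anthropic

client = anthropic.Anthropic()

document = "the text you want analysed"  # placeholder

prompt = f"""Here is a document:
<document>
{document}
</document>

<instructions>
Summarise the document in two sentences.
Put your summary inside <answer> tags and output nothing else.
</instructions>"""

msg = client.messages.create(
    model="claude-3-haiku-20240307",
    max_tokens=300,
    messages=[{"role": "user", "content": prompt}],
)
match = re.search(r"<answer>(.*?)</answer>", msg.content[0].text, re.DOTALL)
print(match.group(1).strip() if match else msg.content[0].text)
```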
@carterjames199 4 months ago
Please go over function calling ASAP, really looking forward to it. From my tests Haiku is amazing with a few examples, but it still has some issues when I go upwards of 4 callable functions.
@lancemarchetti8673 4 months ago
Databricks just launched their new DBRX model on Hugging Face.
@micbab-vg2mu 4 months ago
great video - thank you
@CookerSingh 4 months ago
I think there is no function-calling feature, and for now it can only be used in wrapper-based applications.
@carterjames199 4 months ago
There is function calling
@carterjames199 4 months ago
It's just not as mature as OpenAI's function calling.
@CookerSingh 4 months ago
@carterjames199 Is there any way I can add function calling, or use the available proxies out there for all LLMs?
@samwitteveenai 4 months ago
Claude's function calling is in a different format than OpenAI's: they use XML, which can be nested. For proxies etc. check out LiteLLM, but I don't think it converts function calls yet.
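A rough sketch of the XML-style pattern described here. At the time of this video the tool definition and the expected reply format were both spelled out in your own prompt, so the tag names below are just one convention, not an official schema.

```python
import json
import re
import anthropic

client = anthropic.Anthropic()

system = """You can call tools. The only tool available is:
<tool>
  <name>get_weather</name>
  <description>Return the current weather for a city.</description>
  <parameters>{"city": "string"}</parameters>
</tool>

When you need a tool, reply with ONLY:
<function_call><name>TOOL_NAME</name><arguments>JSON_ARGS</arguments></function_call>"""

msg = client.messages.create(
    model="claude-3-haiku-20240307",
    max_tokens=300,
    system=system,
    messages=[{"role": "user", "content": "What's the weather in Singapore?"}],
)

text = msg.content[0].text
call = re.search(r"<function_call><name>(.*?)</name><arguments>(.*?)</arguments></function_call>",
                 text, re.DOTALL)
if call:
    name, args = call.group(1).strip(), json.loads(call.group(2))
    print("dispatch:", name, args)  # here you would run your own get_weather(args["city"])
else:
    print(text)
```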
@ShoomonPerry 4 months ago
I want to use Claude Haiku to process large documents... but I end up running out of output tokens. Is there a simple hack (in LangChain?) to get a multipart response?
@samwitteveenai 4 months ago
The problem is Claude will only output 4k tokens (pretty sure, from memory). This is where getting a system to do multiple calls can be really useful. In LangChain you can do it with MapReduce, but that can be a bit hit or miss. Another way is to write your own splitting and prompting and run it in parallel. Let me try to think of a good use case I can show and I will try to make a video about it. It is certainly an issue.
@ShoomonPerry 4 months ago
@samwitteveenai Thanks Sam. What if you chained the queries, passing the response of the first query into the context window of the second, and telling the LLM to pick up where it left off?
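Since LangChain's MapReduce can be hit or miss, a hand-rolled version of the split-and-parallelise approach is fairly short. The chunk size, prompts and file name below are placeholders, and the final combine call is also where the "pick up where you left off" chaining idea from the comment above could be applied.

```python
from concurrent.futures import ThreadPoolExecutor

import anthropic

client = anthropic.Anthropic()
MODEL = "claude-3-haiku-20240307"

def ask(prompt: str, max_tokens: int = 1024) -> str:
    msg = client.messages.create(model=MODEL, max_tokens=max_tokens,
                                 messages=[{"role": "user", "content": prompt}])
    return msg.content[0].text

def chunk(text: str, size: int = 8000) -> list[str]:
    # Naive character-based split; a sentence- or token-aware splitter would be better.
    return [text[i:i + size] for i in range(0, len(text), size)]

def summarise_large(text: str) -> str:
    # Map step: summarise each chunk in parallel, so no single call needs a huge output.
    with ThreadPoolExecutor(max_workers=8) as pool:
        partials = list(pool.map(lambda c: ask(f"Summarise this section:\n\n{c}", 500), chunk(text)))
    # Reduce step: combine the partial summaries in one final call.
    return ask("Combine these section summaries into one coherent summary:\n\n" + "\n\n".join(partials))

print(summarise_large(open("big_document.txt").read()))  # placeholder file
```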
@jdallain 4 months ago
I'm not sure if it's possible, but please use Haiku in the more advanced CrewAI video that you mentioned making.
@jdallain 4 months ago
Commented before watching until the end 😊
@sauravmohanty3946 4 months ago
Can you share a link to the notebook you explained in the video?
@samwitteveenai 4 months ago
It's the Colab in the video description.
@RaitisPetrovs-nb9kz 4 months ago
Out of curiosity I tested all 3 models; they got the dogs correct on the first shot without any special prompting.
@samwitteveenai 4 months ago
😀 That's interesting. Maybe their example was planted to make it look wrong for the first one? For me it came back with 8 when I tried their fancy prompt, but it very quickly changed back to 9 with a few small changes.
@clray123 4 months ago
The "fancy prompting" messing around with the count of dogs in the output is actually a glaring example of why all these models are crap. Lack of trustworthiness, big mistakes on trivial tasks, and what's more, those mistakes depend on minute details of how you arrange the input! It reminds me of a fine-tuned model claiming at the same time that it loves and hates tomatoes, or claiming that its favorite animal is a tomato unless, of course, you ask beforehand whether a tomato is an animal. This is simply ridiculous, and inconsistencies of this sort highlight the simple fact that today even the most sophisticated of these models are still imitators of intelligence rather than intelligence. Building castles on sand.
@andrada25m46 4 months ago
Personally I haven't had issues with Haiku; it's much better than GPT-3.5, you just have to prompt it well.
@Quin.Bioinformatics 4 months ago
Google Bard is trash, why was it rated so highly? It sucks at generative coding and variant annotation.
@snuwan 4 months ago
Claude 3 is actually better.