
Large Language Models Are Zero-Shot Reasoners

31,629 views

IBM Technology

1 day ago

Comments: 13
@manomancan · 1 year ago
Thank you for this video! Though, wasn't it already published? I can even remember the beats of the opening lines.
@IBMTechnology · 1 year ago
You're right. On occasion we have to republish to fix an issue found after publishing. We hope you will enjoy some of our truly new content.
@arijitgoswami3652 · 1 year ago
Thanks for this video! Would love to see the next video cover the Tree of Thoughts method of prompting.
@EvanBoyar · 6 months ago
There's such a strange, uncanny-valley feeling watching someone who's been inverted (flipped along the vertical axis, the way a mirror appears to do).
@Zulu369 · 1 year ago
Excellent explanation. I suggest that next time you add a little history at the beginning of your video about where the term comes from (see the original publication where the term was first coined).
@enthanna · 1 year ago
Great video, thank you. Can you make a video about the current state of LLMs in the marketplace? There are lots of claims out there of models as capable as GPT, but it's really hard to separate fact from fiction. Thanks again!
@yuchentuan7011 · 1 year ago
Thanks. I have one question: to do prompt tuning on a foundation model, how do we choose datasets for the general public domain (not a specific domain), and under which circumstances should we train with few-shot prompts versus zero-shot prompts? Thanks.
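For anyone newer to the zero-shot vs. few-shot distinction the question raises, here is a minimal sketch of the two prompt styles for the same task; the review text and labels are invented purely for illustration.

```python
# Hypothetical example: the same sentiment-classification task phrased
# as a zero-shot prompt (instruction only) and a few-shot prompt
# (worked examples, then the new input).

# Zero-shot: the model gets only an instruction and the new input.
ZERO_SHOT_PROMPT = (
    "Classify the sentiment of the review as positive or negative.\n"
    "Review: The battery dies within an hour.\n"
    "Sentiment:"
)

# Few-shot: a couple of labeled examples precede the new input,
# demonstrating the expected input/output format.
FEW_SHOT_PROMPT = (
    "Review: I loved every minute of it.\nSentiment: positive\n\n"
    "Review: Total waste of money.\nSentiment: negative\n\n"
    "Review: The battery dies within an hour.\nSentiment:"
)
```

Both prompts end at "Sentiment:" so the model's completion supplies the label; the few-shot version simply prepends demonstrations.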
@fredericc2184 · 10 months ago
I just tried the same direct prompt on GPT-4 and got the correct answer!
@michaeldausmann6066 · 7 months ago
Good video. Is there an established way to provide step-by-step examples to the LLM? E.g., will I get better results if I explicitly number my steps and provide enumerated examples? Can I use arrows to indicate example -> step -> final?
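There is no single established standard, but explicitly numbered steps are a common convention in chain-of-thought demonstrations. A minimal sketch of one way to render a worked example with enumerated steps (the helper name and the arithmetic example are invented for illustration):

```python
def format_cot_example(question: str, steps: list[str], answer: str) -> str:
    """Render one worked example with explicitly numbered reasoning steps,
    in the spirit of the commenter's suggestion (hypothetical format)."""
    lines = [f"Q: {question}"]
    lines.extend(f"Step {i}: {step}" for i, step in enumerate(steps, start=1))
    lines.append(f"Answer: {answer}")
    return "\n".join(lines)

# Build one demonstration to prepend to a prompt.
example = format_cot_example(
    "What is 3 * (2 + 4)?",
    ["2 + 4 = 6", "3 * 6 = 18"],
    "18",
)
```

Concatenating one or more such examples before the new question gives the model a consistent, numbered template to imitate.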
@amparoconsuelo9451 · 11 months ago
Can subsequent SFT and RLHF with different, additional, or lesser content change the character of, improve, or degrade a GPT model? Can you modify a GPT model?
@sirk3v · 1 year ago
Has this been reuploaded, or do I just have a really bad case of déjà vu? I'm 100% sure I watched this video before, and it wasn't anywhere within the past 18 hours.
@cyberpunk2978 · 11 months ago
It's hallucination