Stanford CS224N NLP with Deep Learning | 2023 | Lecture 11 - Natural Language Generation

20,342 views

Stanford Online

9 months ago

For more information about Stanford's Artificial Intelligence professional and graduate programs visit: stanford.io/ai
This lecture covers:
1. What is NLG?
2. A review: neural NLG model and training algorithm
3. Decoding from NLG models
4. Training NLG models
5. Evaluating NLG Systems
6. Ethical Considerations
What is natural language generation?
Natural language generation is one side of natural language processing:
NLP = Natural Language Understanding (NLU) + Natural Language Generation (NLG)
NLG focuses on systems that produce fluent, coherent, and useful language output for human consumption.
Deep Learning is powering next-gen NLG systems!
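To make the decoding topic concrete, here is a minimal sketch (not taken from the lecture) of generating text from an autoregressive language model with greedy decoding versus temperature sampling; `model` is a hypothetical callable that returns per-position next-token logits for a sequence of token ids.

```python
# Illustrative sketch only: greedy vs. temperature sampling when decoding
# from an autoregressive language model. `model` is hypothetical.
import torch
import torch.nn.functional as F

def decode(model, prefix_ids, max_new_tokens=20, temperature=1.0):
    ids = list(prefix_ids)
    for _ in range(max_new_tokens):
        logits = model(torch.tensor(ids))[-1]           # logits for the next token
        if temperature == 0.0:
            next_id = int(torch.argmax(logits))         # greedy: always the most likely token
        else:
            probs = F.softmax(logits / temperature, dim=-1)
            next_id = int(torch.multinomial(probs, 1))  # sample; higher temperature = more diverse
        ids.append(next_id)
    return ids
```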
To learn more about this course visit: online.stanford.edu/courses/c...
To follow along with the course schedule and syllabus visit: web.stanford.edu/class/cs224n/
Xiang Lisa Li
xiangli1999.github.io/
Professor Christopher Manning
Thomas M. Siebel Professor in Machine Learning, Professor of Linguistics and of Computer Science
Director, Stanford Artificial Intelligence Laboratory (SAIL)
#naturallanguageprocessing #deeplearning

Comments: 8
@uraskarg710
@uraskarg710 9 months ago
Great lecture! Thanks!
@420_gunna
@420_gunna 3 months ago
Great lecture! :)
@mshonle
@mshonle 9 months ago
Ah, interesting… I had wondered about the distinction between NLU and NLP and now it makes sense! Cheers!
@robertokalinovskyy7347
@robertokalinovskyy7347 6 months ago
Great lecture!
@p4r7h-v
@p4r7h-v 28 days ago
thanks!!
@l501l501l
@l501l501l 8 months ago
Hi there, based on the schedule on your official course website, maybe this video should be Lecture 10, and "Prompting, Reinforcement Learning from Human Feedback" (by Jesse Mu) should be Lecture 11?
@mshonle
@mshonle 9 months ago
Can using dropout during inference be another way to set the temperature and perform sampling? E.g., if training had a 10% dropout rate, why not apply a similar random dropout during inference? The neurons which get zeroed out could depend on some distribution, such as selecting neurons evenly or favoring the earlier layers or targeting attention heads at specific layers. One might expect the token distributions would be more varied than what beam search alone could find.
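The idea in this comment can be sketched as follows (an illustration of the commenter's suggestion, not anything shown in the lecture); `model` is a hypothetical PyTorch network containing nn.Dropout layers, and repeated forward passes with dropout kept active yield varied next-token distributions.

```python
# Sketch of the comment above: keep dropout active at inference so
# repeated forward passes give different next-token distributions.
# `model` is a hypothetical network; not from the lecture.
import torch
import torch.nn as nn

def enable_mc_dropout(model: nn.Module) -> None:
    """Put only the Dropout layers into training mode, leaving everything else in eval."""
    model.eval()
    for module in model.modules():
        if isinstance(module, nn.Dropout):
            module.train()

@torch.no_grad()
def sample_next_token_distributions(model, input_ids, num_passes=5):
    enable_mc_dropout(model)
    # Each pass zeroes out a different random subset of activations,
    # so the resulting softmax distributions differ between passes.
    return [torch.softmax(model(input_ids)[-1], dim=-1) for _ in range(num_passes)]
```

Only the Dropout modules are switched back to training mode so that layers with running statistics (e.g. batch norm) stay frozen; this keeps the variation coming solely from the dropout masks.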
@JQ0004
@JQ0004 9 months ago
The TA seems to have attended Ng's classes a lot; he imitates "ok cool" a lot. 😀