Today, we're joined by Christopher Manning, the Thomas M. Siebel Professor in Machine Learning at Stanford University and a recent recipient of the 2024 IEEE John von Neumann Medal. In our conversation with Chris, we discuss his contributions to foundational research areas in NLP, including word embeddings and attention. We explore his perspectives on the intersection of linguistics and large language models, their ability to learn human language structures, and their potential to teach us about human language acquisition. We also dig into the concept of "intelligence" in language models, as well as the reasoning capabilities of LLMs. Finally, Chris shares his current research interests, the alternative architectures he anticipates emerging beyond the LLM, and the opportunities ahead in AI research.
🎧 / 🎥 Listen or watch the full episode on our page: twimlai.com/go/686.
🔔 Subscribe to our channel for more great content just like this: kzfaq.info?sub_confi...
🗣️ CONNECT WITH US!
===============================
Subscribe to the TWIML AI Podcast: twimlai.com/podcast/twimlai/
Follow us on Twitter: / twimlai
Follow us on LinkedIn: / twimlai
Join our Slack Community: twimlai.com/community/
Subscribe to our newsletter: twimlai.com/newsletter/
Want to get in touch? Send us a message: twimlai.com/contact/
📖 CHAPTERS
===============================
00:00 - Introduction
02:10 - Emergence of LLMs
03:15 - Perspectives of a linguist in AI
07:43 - LLMs and human language acquisition
14:09 - Relationship of intelligence and LLMs
20:02 - Reasoning in LLMs
27:37 - Breakthroughs of world models
29:19 - GloVe paper
32:39 - Contextual representations and word vectors
34:49 - Embeddings vs. transformer architectures in retrieval
35:51 - Attention as a mechanism
38:44 - Evolution of attention and transformers
40:21 - Current areas of interest
45:51 - New architectural ideas
50:30 - Recap and future directions
🔗 LINKS & RESOURCES
===============================
GloVe: Global Vectors for Word Representation - nlp.stanford.edu/pubs/glove.pdf
2024 IEEE John von Neumann Medal: Christopher D. Manning Video - ieeetv.ieee.org/channels/ieee...
Direct Preference Optimization: Your Language Model is Secretly a Reward Model - arxiv.org/abs/2305.18290
Pushdown Layers: Encoding Recursive Structure in Transformer Language Models - arxiv.org/abs/2310.19089
For a COMPLETE LIST of links and references, head over to twimlai.com/go/686.
📸 Camera: amzn.to/3TQ3zsg
🎙️Microphone: amzn.to/3t5zXeV
🚦Lights: amzn.to/3TQlX49
🎛️ Audio Interface: amzn.to/3TVFAIq
🎚️ Stream Deck: amzn.to/3zzm7F5