ORPO: Monolithic Preference Optimization without Reference Model (Paper Explained)

19,107 views

Yannic Kilcher

29 days ago

Paper: arxiv.org/abs/2403.07691
Abstract:
While recent preference alignment algorithms for language models have demonstrated promising results, supervised fine-tuning (SFT) remains imperative for achieving successful convergence. In this paper, we study the crucial role of SFT within the context of preference alignment, emphasizing that a minor penalty for the disfavored generation style is sufficient for preference-aligned SFT. Building on this foundation, we introduce a straightforward and innovative reference model-free monolithic odds ratio preference optimization algorithm, ORPO, eliminating the necessity for an additional preference alignment phase. We demonstrate, both empirically and theoretically, that the odds ratio is a sensible choice for contrasting favored and disfavored styles during SFT across the diverse sizes from 125M to 7B. Specifically, fine-tuning Phi-2 (2.7B), Llama-2 (7B), and Mistral (7B) with ORPO on the UltraFeedback alone surpasses the performance of state-of-the-art language models with more than 7B and 13B parameters: achieving up to 12.20% on AlpacaEval2.0 (Figure 1), 66.19% on IFEval (instruction-level loose, Table 6), and 7.32 in MT-Bench (Figure 12). We release code and model checkpoints for Mistral-ORPO-α (7B) and Mistral-ORPO-β (7B).
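Editor's note: for readers who want to see the shape of the objective the abstract describes, here is a minimal PyTorch-style sketch (my illustration, not the authors' released code; `logp_chosen`, `logp_rejected`, and `lam` are assumed names for the length-normalized log-likelihoods and the weighting hyperparameter):

```python
import torch
import torch.nn.functional as F

def orpo_loss(logp_chosen: torch.Tensor,
              logp_rejected: torch.Tensor,
              lam: float = 0.1) -> torch.Tensor:
    # logp_chosen / logp_rejected: length-normalized log-likelihoods
    # log p(y_w|x) and log p(y_l|x) of the favored / disfavored responses
    # under the single model being trained (shape: [batch]).

    # odds(y|x) = p / (1 - p), so log-odds = log p - log(1 - p);
    # log1p(-exp(logp)) computes log(1 - p) stably for p < 1.
    log_odds_chosen = logp_chosen - torch.log1p(-torch.exp(logp_chosen))
    log_odds_rejected = logp_rejected - torch.log1p(-torch.exp(logp_rejected))

    # Relative preference term: -log sigmoid(log odds ratio).
    loss_or = -F.logsigmoid(log_odds_chosen - log_odds_rejected)

    # Plain SFT negative log-likelihood on the favored response,
    # plus the weighted odds-ratio penalty.
    loss_sft = -logp_chosen
    return (loss_sft + lam * loss_or).mean()
```

Note the contrast with DPO: no frozen reference model appears anywhere in the expression, which is the "monolithic", reference-model-free point of the title.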
Authors: Jiwoo Hong, Noah Lee, James Thorne
Links:
Homepage: ykilcher.com
Merch: ykilcher.com/merch
YouTube: / yannickilcher
Twitter: / ykilcher
Discord: ykilcher.com/discord
LinkedIn: / ykilcher
If you want to support me, the best thing to do is to share out the content :)
If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: www.subscribestar.com/yannick...
Patreon: / yannickilcher
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n

Comments: 47
@r9999t
@r9999t 21 days ago
Glad you're back to technical content this time. Any AI YouTuber can give us the latest AI news, but you're just about the only one who can give technical insight into the stories.
@lone0017
@lone0017 21 days ago
6 videos in 7 days! I'm on holiday, and this is a perfectly timed treat.
@EternalKernel
@EternalKernel 21 days ago
Thank you for being awesome, Yannic. I send people from the classes I "TA" for to you because you're reliably strong in your analysis.
@peach412
@peach412 21 days ago
26:30
@borisbondarenko314
@borisbondarenko314 21 days ago
I really like the more technical content from you. I usually read tech news on Telegram, and your ML News episodes are great, but fairly plain and simple. Paper explanations like this have real impact on the DS community; such videos seed new ideas and deepen understanding of the field for those trying to dive deeper. Of course it's less popular, since the material is complex for the audience, but it's much more interesting. So thank you for this format.
@tensorturtle1566
@tensorturtle1566 21 days ago
Great to see research from my homeland of South Korea represented!
@max0x7ba
@max0x7ba 1 day ago
That log of the probability is also a power transform, often used to narrow or widen a distribution.
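Editor's note: concretely, dividing log-probabilities by a temperature T before the softmax is the same as raising the probabilities to the power 1/T and renormalizing,

```latex
\operatorname{softmax}\!\left(\frac{\log p_i}{T}\right)
  = \frac{p_i^{1/T}}{\sum_j p_j^{1/T}},
```

so T < 1 narrows the distribution and T > 1 widens it.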
@I-0-0-I
@I-0-0-I 21 days ago
Thanks for explaining basic terms along with the more complex stuff, for dilettantes like myself. Cheers.
@MyCiaoatutti
@MyCiaoatutti 21 days ago
"Specifically, 1 - p(y|x) in the denominators amplifies the gradients when the corresponding side of the likelihood p(y|x) is low". I think that (1 - p(y|x)) has two different meanings here: it can be the result of differentiation, by coincidence, and also the "corresponding side" of the likelihood, i.e., 1 - p(y|x). So, when it says the "corresponding side" of p(y|x) is low, it means that 1 - p(y|x) is low.
@justheuristic
@justheuristic 21 days ago
The main loss function (7) looks like it can be meaningfully simplified with school-level math.
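Editor's note: that checks out. Using the identity σ(log t) = t / (1 + t), the relative-preference term inside the loss collapses to a two-way softmax over the odds (my algebra, following the loss as stated in the paper):

```latex
-\log \sigma\!\left(\log \frac{\operatorname{odds}_\theta(y_w \mid x)}{\operatorname{odds}_\theta(y_l \mid x)}\right)
  = -\log \frac{\operatorname{odds}_\theta(y_w \mid x)}{\operatorname{odds}_\theta(y_w \mid x) + \operatorname{odds}_\theta(y_l \mid x)},
```

i.e., a Bradley-Terry-style probability that the favored response beats the disfavored one under the model's odds.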
@Zed_Oud
@Zed_Oud 21 days ago
27:57
@blender6426
@blender6426 21 days ago
Nice, I was waiting for this after you mentioned ORPO in ML News :))
@jondo7680
@jondo7680 16 hours ago
You should make a video just focusing on the log and explaining its role in neural networks.
@simaogoncalves1957
@simaogoncalves1957 2 hours ago
16:12
@meselfobviouslyme6292
@meselfobviouslyme6292 21 days ago
Thank you, Mr Kilcher, for delving into the paper "ORPO: Monolithic Preference Optimization without Reference Model".
@syeshwanth6790
@syeshwanth6790 21 days ago
Where do y_w and y_l come from? Are they from the training dataset, or does the LLM being trained generate them, with humans or reward models then labelling them as wins and losses?
@yannickpezeu3419
@yannickpezeu3419 21 days ago
I liked the self-deprecation at
@kaikapioka9711
@kaikapioka9711 21 days ago
Thx again yan! 🎉
@SLAM2977
@SLAM2977 21 days ago
There seems to be a conceptual problem: where are the preferences coming from, given that they are expressed over multiple responses to the same prompt? Suppose we wish to fine-tune a foundation model for chat. We would not have the preferences before having done SFT and gathered some responses on chat-template-formatted prompts; that would force us to do SFT first and then SFT plus the odds-ratio loss. Doable, but surely not a single-pass approach.
@axelmarora6743
@axelmarora6743 21 days ago
great! now apply ORPO to a reward model and round we go!