
Analyzing the Costs of Large Language Models in Production

4,494 views

TensorOps

1 day ago

Comments: 15
@CresGallego 7 months ago
Really great insights. The economics are well explained.
@tensorops 7 months ago
Thank you!
@billykotsos4642 7 months ago
The economics are broken because the hardware setup just isn't there. Instead of paying by the hour, you pay by the token/call, which is insane. The cloud was built on the idea that you fire up an instance and you know what you pay, but these days you need huge cloud instances to run these huge models. The costs of running these models will drop significantly in about three years, and you won't have to think about these things.
@billykotsos4642 7 months ago
Being handed a bill based on tokens generated by a model is preposterous. These LLM apps cost so much right now that you need a solid use case in mind; otherwise, just wait a couple more years until running inference on these LLMs isn't as expensive. The only reason these LLMs are so expensive to run is that they are SOTA and Nvidia is the only player right now.
@loopaal 6 months ago
Fantastic!
@tensorops 6 months ago
Thank you so much 😀
@mohamedfouad1309 8 months ago
😊
@lionhuang9209 8 months ago
Where can we download the slides?
@balainblue 6 months ago
Can you explain the math of 5 requests per minute translating to $9,000 per month?
@tensorops 6 months ago
We recommend the calculator at gptforwork.com/tools/openai-chatgpt-api-pricing-calculator. Assuming roughly 220K requests per month, with proper prompts that usually run 1,000-2,000 tokens, you can get to these costs. We would also like to remind you that a single request to an LLM application often triggers more than one API call to an LLM.
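The arithmetic behind that reply can be sketched as a back-of-the-envelope estimate. The per-token prices and token counts below are assumptions chosen to match the figures in the thread (GPT-4-class pricing of $0.03/$0.06 per 1K input/output tokens, a 1,000-token prompt, and a 200-token completion), not quotes; check your provider's current price list.

```python
# Back-of-the-envelope LLM cost model. All pricing figures are assumptions.
INPUT_PRICE_PER_1K = 0.03   # $ per 1K prompt tokens (GPT-4-class, assumed)
OUTPUT_PRICE_PER_1K = 0.06  # $ per 1K completion tokens (assumed)
PROMPT_TOKENS = 1000        # assumed prompt size per request
COMPLETION_TOKENS = 200     # assumed completion size per request

# 5 requests/minute, 30-day month -> ~216K requests (the "220K" in the reply)
requests_per_month = 5 * 60 * 24 * 30

cost_per_request = (PROMPT_TOKENS / 1000) * INPUT_PRICE_PER_1K \
                 + (COMPLETION_TOKENS / 1000) * OUTPUT_PRICE_PER_1K

monthly_cost = requests_per_month * cost_per_request
print(requests_per_month, round(monthly_cost))  # 216000 9072
```

Under these assumptions the bill lands at roughly $9K/month; longer prompts or multiple LLM calls per user request push it higher.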
@balainblue 6 months ago
@tensorops Thank you so much.
@balainblue 6 months ago
@tensorops Can you please elaborate on that? "A single request to an LLM application triggers more than one API call to an LLM."
@tensorops 6 months ago
@balainblue We give an example in the next webinar where one query triggers many LLM calls. Even simple chains like Map-Reduce or Refine can cause many LLM calls to OpenAI for an action as simple as summarization.
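The fan-out described in that reply can be illustrated with a toy map-reduce summarization chain. This is a sketch, not the webinar's code: the LLM call is stubbed out with a counter so the point (one user request, many API calls) is visible without hitting any real API.

```python
# Toy map-reduce summarization chain with the LLM call stubbed out,
# so we can count how many API calls one user request would trigger.
calls = 0

def llm(prompt: str) -> str:
    """Stand-in for a real LLM API call; just counts invocations."""
    global calls
    calls += 1
    return f"summary({len(prompt)} chars)"

def map_reduce_summarize(document: str, chunk_size: int = 100) -> str:
    # Map step: split the document and summarize each chunk
    # -> one LLM call per chunk.
    chunks = [document[i:i + chunk_size]
              for i in range(0, len(document), chunk_size)]
    partial = [llm("Summarize: " + c) for c in chunks]
    # Reduce step: combine the partial summaries -> one more LLM call.
    return llm("Combine: " + " ".join(partial))

map_reduce_summarize("x" * 450)  # 450 chars -> 5 chunks
print(calls)  # 6 (5 map calls + 1 reduce call) for a single "summarize" request
```

So one "summarize this document" request is billed as six API calls here; real chains with retries, refinement loops, or agents can multiply the count further.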
@balainblue 6 months ago
@tensorops Thank you. I look forward to it.