
How To Use Meta Llama3 With Huggingface And Ollama

  41,457 views

Krish Naik

1 day ago

Comments: 64
@sanadasaradha8638 4 months ago
Instead of showing every new model, it would be better to implement a single open-source LLM for all use cases, including fine-tuning. At the same time, it would be better to build an end-to-end project with an open-source LLM.
@THOSHI-cn6hg 4 months ago
Agreed
@devagarwal3250 4 months ago
There are already similar videos showing the new model. It would be better to make a video on how to implement it.
@KumR 4 months ago
I am with you. New models will keep coming. The focus needs to be on an end-to-end project.
@Shubhampalzy 3 months ago
How do I fine-tune? I need some help building a custom chatbot trained on a custom dataset using Llama 3. Please help.
@farajacod3717 2 months ago
@Shubhampalzy did you find a way to fine-tune Llama 3?
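For the fine-tuning questions in this thread, a rough LoRA sketch with Hugging Face transformers and peft; the dataset file, column name, and hyperparameters are placeholders rather than anything shown in the video, and gated access to the meta-llama repo plus a bf16-capable GPU are assumed.

```python
# Rough LoRA fine-tuning sketch (assumptions: gated access to meta-llama/Meta-Llama-3-8B,
# an Ampere-or-newer GPU for bf16, and a JSONL file with a "text" column of formatted
# prompt/response pairs -- all placeholders, not the video's actual setup).
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_id = "meta-llama/Meta-Llama-3-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token            # Llama ships without a pad token

model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16,
                                             device_map="auto")
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM",
                                         target_modules=["q_proj", "v_proj"]))

dataset = load_dataset("json", data_files="my_chat_data.jsonl")["train"]   # hypothetical file
tokenized = dataset.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
                        remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="llama3-lora", per_device_train_batch_size=1,
                           gradient_accumulation_steps=8, num_train_epochs=1, bf16=True),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("llama3-lora")                 # writes only the small adapter weights
```

The saved adapter can then be loaded back on top of the base model for a custom chatbot; serving the result through Ollama would additionally require converting or merging the weights.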
@ParthivShah 20 hours ago
Thank you for this video, Krish sir.
@rumingliu9787 4 months ago
Thanks, sir. Very helpful. Just one question: what's the benefit of Ollama compared with Hugging Face? I guess it is deployed locally but has some basic requirements for your laptop's hardware.
@KunalDixitEdukraft 4 months ago
Hi Krish, firstly, thanks for your consistent efforts to keep us updated and learning the latest tech in the realm of data science. How can I sponsor you on GitHub and earn a badge?
@viratsasikishorevarma3535 4 months ago
Hi Krish sir, I need some help. Please make a video on this basic topic: how and why to set up a virtual environment for Python. ❤
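In the meantime, a minimal sketch of the virtual-environment workflow being asked about (the shell commands appear as comments; package names are only examples):

```python
# Minimal virtual-environment sketch (assumes Python 3.8+ is on PATH).
#   python -m venv .venv                # create the environment in ./.venv
#   source .venv/bin/activate           # activate on Linux/macOS
#   .venv\Scripts\activate              # activate on Windows
#   pip install langchain requests      # example packages now install into .venv only
# The same environment can also be created programmatically:
import venv

venv.create(".venv", with_pip=True)     # builds ./.venv with its own isolated pip
```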
@nitinjain4519 4 months ago
When using the Llama 3 model, it sometimes gives me an incomplete answer. What can I do to avoid incompleteness when generating responses from the Serverless Inference API?
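One common cause, offered here as a hedged guess, is the default generation length; a sketch of raising it explicitly with the huggingface_hub client (model id and token are placeholders):

```python
# Hedged sketch: truncated answers from the serverless Inference API are often just the
# default token limit, so set max_new_tokens explicitly (model id and token are placeholders).
from huggingface_hub import InferenceClient

client = InferenceClient("meta-llama/Meta-Llama-3-8B-Instruct", token="hf_...")
answer = client.text_generation(
    "Explain LoRA fine-tuning in three sentences.",
    max_new_tokens=512,      # allow longer completions than the small default
    temperature=0.7,
)
print(answer)
```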
@r21061991 4 months ago
Hey Krish, it would be more helpful if you could take a session on how to use an offline LLM on a custom dataset for Q&A.
@girishkumar862 4 months ago
Hi, there will be 10 billion models coming in the future, and so on...
@saharyarmohamadi9176 1 month ago
Hi Krish, thank you for the great knowledge you are sharing. I want to run Ollama on AWS SageMaker; do you know of, or have, any video about doing that? I already saw your video on installing and working locally, but I don't know how to do it on the cloud.
@happyhours.0214 4 months ago
Sir, please make an LLM video on how to train LLM models on custom data.
@THOSHI-cn6hg 4 months ago
Yupppp
@aryansalge4508 4 months ago
That's fine-tuning. He has videos on it.
@mhemanthkmr 4 months ago
Hi Krish, I also tried Llama 3 in Ollama and the response is slow, but on your machine the response is fast. You are using a GPU, so which GPU are you using?
@shotbotop3790 4 months ago
He has a Titan RTX (around 64 GB VRAM) 💀
@janneskleinau6332 4 months ago
Please make a video on how to fine-tune LLaMA! I would appreciate it :) Love your videos, btw.
@theyoungitscholar4127 13 days ago
Is there a way I can use int8 (i.e., select a specific quantization) for Llama 3.1 using Ollama?
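Ollama publishes pre-quantized builds per model, so one hedged answer is to pull an explicit q8_0 tag; the exact tag name below is an assumption and should be checked against the model's tag list on ollama.com.

```python
# Hedged sketch: pull an 8-bit (q8_0) build of Llama 3.1 and query it over Ollama's local
# REST API (the tag name is an assumption -- verify it on the model's page or with `ollama list`).
import subprocess
import requests

tag = "llama3.1:8b-instruct-q8_0"                       # assumed int8 quantization tag
subprocess.run(["ollama", "pull", tag], check=True)

resp = requests.post("http://localhost:11434/api/generate",
                     json={"model": tag, "prompt": "Hello!", "stream": False})
print(resp.json()["response"])
```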
@siddhanthbhattacharyya4206 4 months ago
Krish, I wanted to know what the prerequisites would be to follow your LangChain series. How much knowledge do I need?
@KumR 4 months ago
New models will keep mushrooming every day. I think videos should now focus more on end-to-end projects using these models, not just sentiment analysis, language translation, or text summarization: some real-life project, end to end.
@vysaivicky4724 4 months ago
Sir, one doubt: how much knowledge of DSA is required in the data science field? Please clarify.
@nishant9847 1 month ago
Where will those downloaded model files get saved?
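As a hedged note on the usual default locations (they differ per OS and can be relocated with environment variables such as OLLAMA_MODELS and HF_HOME):

```python
# Hedged sketch: print the common default cache locations -- Ollama keeps model blobs under
# ~/.ollama/models and the Hugging Face hub caches under ~/.cache/huggingface/hub
# (these are typical defaults, not guaranteed on every install).
from pathlib import Path

for p in (Path.home() / ".ollama" / "models",
          Path.home() / ".cache" / "huggingface" / "hub"):
    print(p, "->", "exists" if p.exists() else "not found")
```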
@dhaneshv-tz7qc 2 months ago
Can you make a video on Llama 3 fine-tuning and API creation?
@gan13166 4 months ago
Do we really need Ollama to run Llama 3? When we are able to download/clone the entire model from HF, do we still need Ollama to run it? What is the next step after downloading the model from HF? How do we use it in LangChain code without Ollama? Is that possible?
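It is possible without Ollama; a hedged sketch of wrapping a Hub download in a transformers pipeline and handing it to LangChain (the langchain_huggingface package, gated-model access, and sufficient memory are assumptions):

```python
# Hedged sketch: run a Hub-downloaded Llama 3 directly in LangChain, no Ollama involved
# (assumes `pip install transformers accelerate langchain-huggingface` and gated access
# to the meta-llama repo).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from langchain_huggingface import HuggingFacePipeline

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16,
                                             device_map="auto")

llm = HuggingFacePipeline(pipeline=pipeline("text-generation", model=model,
                                            tokenizer=tokenizer, max_new_tokens=256))
print(llm.invoke("In one sentence, what does Ollama add on top of this setup?"))
```

Ollama mainly adds a packaged, quantized runtime and a local server, which is convenient but not required.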
@podunkman2709 3 months ago
Take a look at this demo:
>>> How many liters of water per minute can a Dutch windmill pump out?
That's an interesting question! The answer depends on the specific design and size of the windmill. However, I can give you some general information. Traditionally, Dutch windmills are designed to pump water from shallow sources, such as polders or wells, for irrigation purposes. The capacity of these windmills varies greatly, but a typical small to medium-sized windmill might be able to pump around 0.5 to 2 liters per minute (L/min). Some larger industrial-scale windpumps can pump much more, up to 10-20 L/min or even more, depending on the turbine design and the pressure head of the water.
Really? What a sh****
@JorgeLopez-gw9xc 4 months ago
I have Ollama on my computer and I am currently using it to run AI models through Python. I need to correct complex instructions that I can only run with the 70B model; the problem is that, due to its complexity, it takes a long time to execute (2 minutes). How can I lower the times? Currently the model runs on the CPU; how can I configure Ollama to use the GPU?
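Ollama normally picks up a supported GPU automatically, so a hedged first step is to confirm the driver is visible and check where the model actually loaded before dropping to a smaller or more heavily quantized tag (the model tag below is illustrative):

```python
# Hedged troubleshooting sketch: Ollama offloads to the GPU automatically when the CUDA/ROCm
# runtime is visible, so check the driver, then check where the model is actually running.
import subprocess

subprocess.run(["nvidia-smi"])        # is the NVIDIA driver/GPU visible at all?
subprocess.run(["ollama", "ps"])      # the PROCESSOR column shows the CPU/GPU split per model
# A 70B model rarely fits in consumer VRAM; a smaller or more quantized tag is often the
# practical fix (tag below is illustrative).
subprocess.run(["ollama", "run", "llama3:8b", "Summarise LoRA in one line."])
```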
@kavururajesh1760 4 months ago
Hi Krish, can you please upload a video on Moirai, the time-series LLM model?
@asadurrehman3591 3 months ago
Sir, please tell me about this error: RuntimeError: "triu_tril_cuda_template" not implemented for 'BFloat16'.
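A hedged note on that error: it is typically seen with older PyTorch builds whose CUDA triu/tril kernels lack BFloat16 support, so upgrading torch or loading the model in float16 usually sidesteps it (the model id below is an example):

```python
# Hedged sketch: either upgrade PyTorch (e.g. `pip install --upgrade torch`) or avoid the
# BFloat16 kernel by loading the model in float16 instead.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct",   # example model id
    torch_dtype=torch.float16,               # float16 avoids the unimplemented bf16 triu/tril op
    device_map="auto",
)
```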
@itzmeakash9695 4 months ago
Hello sir, I have a doubt. Is there any platform where I can find the latest research papers to read? Also, how can I stay updated on the latest developments in the fields of general AI and AI?
@vipinsou3170 4 months ago
It's Google 😂
@itzmeakash9695 4 months ago
@vipinsou3170 please say that once again
@user-ue6lv9in8s 4 months ago
Papers with Code
@siddhanthbhattacharyya4206 4 months ago
arXiv; it's managed by Cornell University.
@herashak 4 months ago
When doing question answering I got an error about logits and LlamaForCausalLM not being compatible; not sure how you got that to work as you said.
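A hedged guess at that error: the extractive "question-answering" pipeline expects span-prediction logits that a causal LM like Llama does not produce, so phrasing the question as a text-generation prompt avoids the mismatch:

```python
# Hedged sketch: use the text-generation pipeline for QA with a causal LM instead of the
# extractive question-answering pipeline (model id is an example; gated access assumed).
from transformers import pipeline

generator = pipeline("text-generation", model="meta-llama/Meta-Llama-3-8B-Instruct",
                     device_map="auto")
context = "Ollama runs models locally; the Hugging Face Hub hosts the weights."
question = "Where does Ollama run models?"
prompt = f"Context: {context}\nQuestion: {question}\nAnswer:"
print(generator(prompt, max_new_tokens=50)[0]["generated_text"])
```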
@anaghacasaba9351 2 months ago
How can we fine-tune Llama 3 with a PDF?
@OmSingh-ng3np 4 months ago
This can be fine-tuned in the same way, right?
@claudiograssi5192 4 months ago
To run locally, which GPU do you use?
@vamsitharunkumarsunku4583 3 months ago
How do I download the Llama 3 model locally from NVIDIA NIM? Kindly make a video on it, please. Thank you.
@cairo8905 4 months ago
Hi, I have a voice model on Google Drive but I don't know how to upload it to Hugging Face. Can you tell me how to upload it? Or I could give you the model link and you upload it, if you don't mind 😁
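A hedged sketch of the usual route: copy the files from Drive to a local folder, then push them with huggingface_hub (the repo name, folder path, and token are placeholders):

```python
# Hedged sketch: push a locally downloaded model folder to the Hugging Face Hub
# (repo id, folder path, and token are placeholders; a write-scoped token is required).
from huggingface_hub import HfApi

api = HfApi(token="hf_...")                              # write token from your HF settings
api.create_repo("your-username/my-voice-model", exist_ok=True)
api.upload_folder(folder_path="./my_voice_model",        # local copy of the Drive files
                  repo_id="your-username/my-voice-model")
```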
@0f9yxtizitdl 4 months ago
Liked your new look, Mr. Clean.
@JourneyWithMystics 3 months ago
Brother, how do I convert a Hindi video into Hindi text? Please 🙏 reply, much needed ❤
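One hedged option for that: Whisper supports Hindi, so the transformers speech-recognition pipeline can transcribe the video's audio track (the file name is a placeholder; extracting the audio first, e.g. with ffmpeg, is assumed):

```python
# Hedged sketch: transcribe Hindi audio to Hindi text with Whisper via transformers
# ("audio.mp3" is a placeholder extracted from the video; ffmpeg must be installed for decoding).
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")
result = asr("audio.mp3",
             return_timestamps=True,                     # needed for clips longer than 30 s
             generate_kwargs={"language": "hindi", "task": "transcribe"})
print(result["text"])
```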
@tejas4054 4 months ago
ChatGPT does this job too, so why should we use Llama?
@ChemFam. 4 months ago
Sir, how and from where will we get the API key?
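For the Hugging Face side this is an access token rather than a classic API key; a hedged sketch (the token value is a placeholder created under Settings -> Access Tokens on huggingface.co):

```python
# Hedged sketch: create an access token at huggingface.co -> Settings -> Access Tokens,
# then make it available to the hub libraries (the token string below is a placeholder).
import os
from huggingface_hub import login

os.environ["HUGGINGFACEHUB_API_TOKEN"] = "hf_..."       # the variable LangChain looks for
login(token=os.environ["HUGGINGFACEHUB_API_TOKEN"])     # caches it for huggingface_hub/transformers
```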
@rajsharma-bd3sl 2 months ago
Dude, don't just copy from Hugging Face and make a video... try applying these models to some problem like NER.
@Superteastain 4 months ago
This guy's good.
@spiritualworld842 4 months ago
Sir, I'm totally stuck between the data field and the software field. Please give me some suggestions to overcome this depression 😢😪
@tejas4054 4 months ago
The best way is to not watch YouTube; it's far too overloaded. Go back in time: use books, read programming books, and use pen and paper. This information overload on YouTube is dangerous.
@kshitijnishant4968 4 months ago
My command prompt is raising an error saying Ollama is not found. Any help, guys?
@krishnaik06 4 months ago
You need to download and install it.
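Expanding on that reply with a hedged sketch: "not found" usually just means the Ollama binary is not installed or not on PATH; install it from ollama.com, then verify.

```python
# Hedged sketch: install Ollama first (Linux: `curl -fsSL https://ollama.com/install.sh | sh`;
# Windows/macOS: download the installer from https://ollama.com/download), then check PATH.
import shutil
import subprocess

if shutil.which("ollama") is None:
    print("ollama is not on PATH - install it first, then reopen the terminal")
else:
    subprocess.run(["ollama", "run", "llama3", "Say hello in one word."], check=True)
```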
@tarunmohapatra5734 4 months ago
Sir, please activate Neurolab.
@danielfischer4079 4 months ago
Ollama is downloading really slowly for me, anyone else?
@surajramamurthysuresh7446 3 months ago
Yes, it's very slow...
@deepak4166 4 months ago
WhatsApp Meta AI is awesome 🎉
@AnkitVerma-62990 4 months ago
First Comment 😅
@mohsenghafari7652 4 months ago
😂😂❤
@tejas4054 4 months ago
Why do you keep shaking your legs in the video, bro?
@itxmeJunaid 4 months ago
😮
@rishiraj2548 4 months ago
🙏🙂
@mohsenghafari7652 4 months ago
Thanks, Krish. Please answer my email ❤