Ollama: Run Large Language Models Locally. Run Llama 2, Code Llama, and other models

41,316 views

Krish Naik


1 day ago

Get up and running with large language models, locally.
Run Llama 2, Code Llama, and other models. Customize and create your own.
url: ollama.com/
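The workflow shown in the video boils down to a few CLI commands; a minimal sketch (install script and model tag as documented on ollama.com, prompt text illustrative):

```shell
# Install on Linux/macOS (Windows has a dedicated installer on ollama.com)
curl -fsSL https://ollama.com/install.sh | sh

# Download a model and chat with it; everything runs locally after the pull
ollama pull llama2
ollama run llama2 "Write one sentence about retrieval-augmented generation."

# See which models are installed and their on-disk sizes
ollama list
```

`ollama run` with no prompt argument drops into an interactive REPL instead.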
----------------------------------------------------------------------------------------------------
Support me by joining the membership so that I can upload these kinds of videos
/ @krishnaik06
-----------------------------------------------------------------------------------
►LLM Fine Tuning Playlist: • Steps By Step Tutorial...
►AWS Bedrock Playlist: • Generative AI In AWS-A...
►LlamaIndex Playlist: • Announcing LlamaIndex ...
►Google Gemini Playlist: • Google Is On Another L...
►Langchain Playlist: • Amazing Langchain Seri...
►Data Science Projects:
• Now you Can Crack Any ...
►Learn In One Tutorials
Statistics in 6 hours: • Complete Statistics Fo...
Machine Learning In 6 Hours: • Complete Machine Learn...
Deep Learning in 5 hours: • Deep Learning Indepth ...
►Learn In a Week Playlist
Statistics: • Live Day 1- Introducti...
Machine Learning : • Announcing 7 Days Live...
Deep Learning: • 5 Days Live Deep Learn...
NLP : • Announcing NLP Live co...
---------------------------------------------------------------------------------------------------
My Recording Gear
Laptop: amzn.to/4886inY
Office Desk: amzn.to/48nAWcO
Camera: amzn.to/3vcEIHS
Writing Pad: amzn.to/3OuXq41
Monitor: amzn.to/3vcEIHS
Audio Accessories: amzn.to/48nbgxD
Audio Mic: amzn.to/48nbgxD

Comments: 62
@neerajshrivastava5600 3 days ago
Krish, Fantastic Video and great explanation!!! Keep it up
@vishalnagda7 4 months ago
I'm feeling lucky that I got this video in my suggestions.
@mehdi9771 4 months ago
We need long-version videos like before, and thanks for your efforts ❤
@rajendarkatravath2207 4 months ago
Thanks Krish for sharing this knowledge. What an amazing model it is!
@computerauditor 2 months ago
Really insightful krish!!
@divyaramesh3105 4 months ago
Thank you Krish sir. In Building RAG from Scratch, Sunny sir showed Ollama. Both of you give foundational knowledge and updates in GenAI. It was very useful, sir.
@devanshgupta6064 4 months ago
Please share Sunny sir's YouTube handle
@divyaramesh3105 4 months ago
@@devanshgupta6064 @sunnysavita10
@sailikitha8502 4 months ago
Sunny Savita @sunnysavita10
@kenchang3456 4 months ago
Hey Krish, thanks for doing this video in Windows.
@ankitshaw2011 4 months ago
Thank you so much for these videos
@AjaySharma-jv6qn 4 months ago
Content is helpful, thanks for your effort.🎉
@durgakorde3589 4 months ago
R u a data scientist?
@BelhsanMohamed 2 months ago
as always thanks for the information
@marcoaerlic2576 1 month ago
Thanks for the video.
@deekshad4774 3 months ago
You are the best!🤓
@NISHANTKumar-ct3pb 4 months ago
Thanks, it's a great video. Wanted to ask: when we say local, what is the configuration, a CPU- or GPU-based system? Are the models compressed/quantized or the same as the original? Is there a model-size limitation relative to the local system config?
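On the size question above, a rough back-of-envelope helps: weights dominate memory, at roughly params × bits-per-weight / 8 bytes, and Ollama's default tags are 4-bit quantized builds (an assumption worth verifying per tag). A hypothetical sketch:

```python
def approx_model_memory_gb(n_params_billion: float,
                           bits_per_weight: float = 4.0,
                           overhead: float = 1.2) -> float:
    """Very rough estimate: weight bytes plus ~20% for KV cache/runtime overhead."""
    weight_bytes = n_params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

# A 7B model at 4-bit quantization: roughly 4.2 GB
print(round(approx_model_memory_gb(7), 1))
# The same model at fp16 (16 bits per weight): roughly 16.8 GB
print(round(approx_model_memory_gb(7, bits_per_weight=16.0), 1))
```

So a quantized 7B model fits comfortably in 8 GB of RAM, while an unquantized one generally does not; anything that doesn't fit runs partly (or entirely) on CPU and is correspondingly slow.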
@roshanchandel7929 2 months ago
The heroes we need!!
@manjeshtiwari7434 4 months ago
Thank you so much for such a great video. I have a query: I am getting a very slow response. Does response speed depend on the system config? I checked system usage and while running it isn't using many resources. Can you tell me how we can increase the response speed?
@nasiksami2351 4 months ago
Great tutorial! Can you please make a video on fine-tuning a model on a custom CSV dataset and integrating it with Ollama? For instance, consider I have a class-imbalance problem in my dataset. Can I fine-tune a model, then ask it in Ollama to generate more samples of the minority class using the fine-tuned model?
@haritdey430 4 months ago
Nice video sir
@ranemghalion581 23 days ago
Thank you
@usingsk 2 months ago
Thanks for sharing knowledge. Can we fine-tune the downloaded model with company domain content without the data being shared? I mean, does it comply with IPR if we use it locally?
@jacobashwinmathew3763 4 months ago
Can you make a complete video on production-ready open-source LLMs, basically LLMOps?
@velugucharan8096 4 months ago
Sir, please complete the fine-tuning LLMs playlist as much as possible.
@lionelshaghlil1754 3 months ago
Thanks Krish, the brilliant, innovative master of AI 😊. I have a question related to hosting: assuming I'd like to deploy my solution on a server, will I need to have Ollama and my app in two separate Docker containers that communicate with each other, or could they be implemented in one single container?
@krishnaik06 3 months ago
It can be implemented in one Docker container
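Either layout works in practice. The common two-container sketch below uses the official `ollama/ollama` image from Docker Hub and Ollama's default port 11434 (model tag and prompt are illustrative):

```shell
# Run Ollama in its own container (add --gpus=all for NVIDIA GPU passthrough)
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# Pull a model inside the running container
docker exec -it ollama ollama pull llama2

# The app container (or host) then calls the HTTP API
curl http://localhost:11434/api/generate \
  -d '{"model": "llama2", "prompt": "Hello", "stream": false}'
```

Inside a shared Docker network the app would address the service by container name, e.g. `http://ollama:11434`, instead of localhost.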
@ayushmishra5861 2 months ago
Have you got clarity on this? Can you please share?
@krishnaprasadsheshadri6206 4 months ago
Can we get a video about reading tables using Unstructured and similar frameworks?
@user-lq7sx8qw5t 4 months ago
Great content Krish... need these coding files, kindly share them
@tharunps8048 4 months ago
Since it is running locally, using this model with an organization's data doesn't expose it, right?
@YashDeveloper-rq2yc 4 months ago
Bro, using these techniques can I turn it into a superb AI assistant? And what capabilities can I use?
@shashank046 1 month ago
Hi, how do I use the GPU in Open WebUI? My model's response is really slow and it is not using the GPU, even though I used the GPU install command as mentioned on the Open WebUI GitHub page.
@omarnahdi3380 4 months ago
Hey sir 😄, please make a video on BioMistral (an LLM trained on medical and scientific data). It would perfectly fit your AI Nutritionist. Thanks for your daily dose of GenAI
@pssab8 4 months ago
Excellent videos. I set up the Mistral model locally on Ubuntu 20.04 and found that it takes more than a minute for every response, running in CPU mode only. Can you suggest how to improve the performance?
@amazingedits9298 3 months ago
These models run on your computer's hardware, so quicker responses need good hardware like a GPU.
@KumR 4 months ago
Do we need to download the entire 7 GB Llama 2 locally to use it with Ollama?
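On the size question: the weights do have to be pulled once, but the default `llama2` tag is a 4-bit quantized build (roughly 4 GB on disk), not a full-precision checkpoint; other quantizations are separate tags. The tag names below are examples to check against the model's page on ollama.com:

```shell
ollama pull llama2                # default (quantized) tag
ollama pull llama2:7b-chat-q4_0   # an explicit quantization tag (example)
ollama list                       # shows the on-disk size of each model
```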
@susnatakanjilal703 4 months ago
Sir, I need to create a custom text dataset from Common Crawl for the Bengali language and train Llama 2 using it. Can you please demonstrate a similar project?
@kashishvarshney2225 4 months ago
Hello sir, what is the minimum system configuration for Ollama?
@copilotcoder 4 months ago
Sir, please create a codebase-understanding model using Ollama and test it on an open-source codebase
@starkgaming1425 4 months ago
Please release a step-by-step guide on how to fine-tune the Gemini API in Python. I tried by referring to the documentation but encountered a lot of errors with the OAuth setup!
@SomethingSpiritual 3 months ago
Why is Ollama not using the full GPU? It's maxing out the CPU only, please guide
@AjayYadav-xi9sj 4 months ago
Make a video on the Python framework for Ollama. Make an end-to-end project and also host it somewhere real people can use it
@nagasudha6928 4 months ago
Hi Krish, this is Sudha from ISRO Hyderabad. I would like to know how to provide documents to Ollama and get answers from them.
@VishalTank-vk5ju 1 month ago
Hello Krish, I am facing an issue with the Ollama service. I have an RTX 4090 GPU with 80 GB of RAM and 24 GB of VRAM. When I run the Llama 3 70B model and ask it a question, it initially loads on the GPU, but after 5-10 seconds it shifts entirely to the CPU. This causes the response time to be slow. Please provide me with a solution. Thank you in advance. Note: GPU load is 6-12% and CPU load is 70%.
@manasjohri2495 4 months ago
Can you please tell me how we can run Ollama on the GPU? Right now it is running on the CPU.
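For the several GPU questions above, a few hedged checks (exact commands vary by Ollama version and install method, so verify against the docs for yours):

```shell
# Which models are loaded, and how they are split between CPU and GPU
ollama ps

# On a Linux systemd install, the server log says whether a GPU was detected
journalctl -u ollama --no-pager | grep -i gpu

# On NVIDIA, confirm the driver stack is visible at all
nvidia-smi
```

If `ollama ps` reports a large CPU share for a 70B model on a 24 GB card, that is expected behavior rather than a bug: layers that don't fit in VRAM are offloaded to the CPU, so a smaller model or a more aggressive quantization is the usual fix.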
@sanjaynt7434 4 months ago
Can this read a document and answer my questions about that document?
@hassanahmad1483 2 months ago
How do I deploy these custom GPTs?
@ashishdayal172 2 months ago
Hi Krish, I am facing an error creating the Modelfile. Please help
@naveenkumarmaurya3182 4 months ago
Hi Krish, I'm getting this output: "Ollama run codella! 🐰💨 (Note: I'm just an AI, I don't have personal preferences or the ability to run code, but I can certainly help you with any questions or tasks you may have!)"
@rajarshidey424 2 months ago
How can we get the code?
@rishiraj2548 4 months ago
🙏💯👍
@VishalKumar-gv6gy 2 months ago
Does it require a GPU?
@DeadJDona 4 months ago
please finish that Chrome update 😢
@mohammedalfarsi4361 4 months ago
Do these models support the Arabic language?
@parthwagh3607 9 days ago
Thank you so much Krish. I am having a problem running models downloaded from Hugging Face that have safetensors files. I have these files in oobabooga/text-generation-webui and need to use them with Ollama. I followed everything, even created a Modelfile with the path to the safetensors directory, but `ollama create model_name -f modelfile` is not running. Please help me.
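For the safetensors import above: as of recent Ollama versions, `ollama create` can only import safetensors checkpoints for a handful of supported architectures (Llama-family among them), and the Modelfile's `FROM` must point at the directory holding `config.json` and the `*.safetensors` shards, not at a single file. A hypothetical sketch (paths illustrative):

```
# Modelfile (hypothetical path to the checkpoint directory)
FROM /home/user/text-generation-webui/models/my-llama-checkpoint
```

followed by `ollama create my-model -f Modelfile`. If the architecture isn't supported, the usual route is converting to GGUF first (e.g. with llama.cpp's conversion script) and pointing `FROM` at the resulting `.gguf` file.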
@YashDeveloper-rq2yc 4 months ago
After installing, will it work offline?
@krishnaik06 4 months ago
Yes
@YashDeveloper-rq2yc 4 months ago
@@krishnaik06 Thanks for sharing quality content
@Nagireddy-lw7rl 21 days ago
Hi Krish sir, I need the Ollama chatbot Python code; please provide it. I checked your GitHub.
@user-fs9mz3rn6q 4 months ago
Every time we see a kid we ask him to recite a poem, and when you have so many LLM models you only want a poem on machine learning
@jatinchawla1680 3 months ago
llm = ollama(base_url='localhost:11434', model="llama 2")
TypeError: 'module' object is not callable
Can someone please help with this?
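The usual causes of that TypeError are (a) calling the module `ollama` instead of the `Ollama` class from `langchain_community.llms`, and (b) the model tag: Ollama tags contain no spaces (`llama2`, not `"llama 2"`), and `base_url` needs the scheme (`http://localhost:11434`). Under the hood the wrapper just POSTs JSON to the local server; a stdlib-only sketch of that request body (the field names match Ollama's `/api/generate` endpoint, the helper itself is illustrative):

```python
import json

def build_generate_payload(model: str, prompt: str, stream: bool = False) -> str:
    """Build the JSON body sent to Ollama's /api/generate endpoint."""
    # Ollama model tags never contain spaces: "llama2", not "llama 2".
    if " " in model:
        raise ValueError(f"invalid model tag: {model!r}")
    return json.dumps({"model": model, "prompt": prompt, "stream": stream})

body = build_generate_payload("llama2", "Tell me a joke")
print(body)
```

With LangChain itself the fix would look like `from langchain_community.llms import Ollama` and `llm = Ollama(base_url="http://localhost:11434", model="llama2")`, assuming a current `langchain-community` install.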