How to Install and Test LLaMA 3 Locally [2024]

19,235 views

CodewithBro

1 month ago

After the release of Llama 3, I thought I should make a video to walk anyone who is looking to use it locally through the setup. I hope this video helps :)
Related links:
Download Ollama: ollama.com/download
Open web ui: github.com/open-webui/open-webui
Llama3 download link: ollama.com/library/llama3
Link from video: llama.meta.com/llama3/
Release note from Meta: ai.meta.com/blog/meta-llama-3/
- - - - - - - - - - - - - - - - - - - - - -
Follow us on social networks:
Instagram: / codewithbro_
---
Support us on Patreon: / codewithbro
#ai #artificialintelligence #llama3 #metaai #install #macos #machinelearning #aitools #programming #llms

Comments: 49
@codewithbro95 · 1 month ago
Model variants
------------------------
Instruct is fine-tuned for chat/dialogue use cases. Example:
    ollama run llama3
    ollama run llama3:70b
Pre-trained is the base model. Example:
    ollama run llama3:text
    ollama run llama3:70b-text
@SevScout · 1 day ago
I love how in the intro, we're being told about a new AI, by an old one.
@codewithbro95 · 1 day ago
Hahahaa
@Knuhben · 1 month ago
Nice video! Can you do one on how to set up a local database out of PDF files? So the AI would be able to search those PDFs and answer questions about their content.
@cptcrogge · 2 days ago
I'm glad that my PC handles 70B with 16GB VRAM and 64GB of RAM. I wonder what the absolute minimum requirements are for 400B.
@codewithbro95 · 1 day ago
You're good to go! What OS are you running? No one knows the 400B-param requirements yet until it's available, but I'm pretty sure there's going to be a quantized version, which would probably need 64GB of RAM. Maybe!
@gallyyouko5993 · 1 month ago
How can I run the original non-quantized version of Llama 3 8B? (It is almost 15 GB.)
@codewithbro95 · 1 month ago
My best suggestion for you is to get access to the Hugging Face repo. You will have to apply to Meta for this. Here is a helpful link: huggingface.co/meta-llama/Meta-Llama-3-8B
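For reference, once Meta approves your access, a minimal sketch of pulling the weights with the huggingface_hub Python client (the repo id is from the link above; the target directory is a hypothetical choice, and you need to have logged in with huggingface-cli using an approved account first):

    from huggingface_hub import snapshot_download  # pip install huggingface_hub

    # Download the gated repo's full-precision weights (~15 GB) locally.
    snapshot_download(
        repo_id="meta-llama/Meta-Llama-3-8B",
        local_dir="llama3-8b",  # hypothetical target directory
    )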
@gallyyouko5993 · 1 month ago
@codewithbro95 I get it, but I am looking for a web UI to run it.
@codewithbro95 · 1 month ago
@gallyyouko5993 You can use this: github.com/open-webui/open-webui. It's what I used in the video :)
@SirDragonClaw · 1 month ago
How can I run the larger version of the model?
@codewithbro95 · 1 month ago
ollama run llama3:70b
@dylandsilva3387 · 4 days ago
Is there a way to run it with a custom-developed UI? Can someone please guide me on how?
@codewithbro95 · 4 days ago
You can use the Ollama API. Docs here: github.com/ollama/ollama/blob/main/docs/api.md
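For reference, a minimal sketch of what a custom UI's backend could call, assuming Ollama is running on its default port 11434 and llama3 has already been pulled:

    import requests

    # Ask the local Ollama server for a single, non-streamed completion.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3", "prompt": "Hello!", "stream": False},
    )
    print(resp.json()["response"])

A custom UI then just has to render that response (or use the /api/chat endpoint from the same docs for multi-turn conversations).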
@Baly5 · 28 days ago
I didn't really get the part on Docker, can you help me?
@codewithbro95 · 27 days ago
How can I help?
@user-iz9sj1nn5q · 6 days ago
7:30 Ollama web UI
@GiochiamoinsiemeadAndrydex · 17 days ago
How do I change the installation location and the location where the model is downloaded?
@chintanpatel2229 · 15 days ago
kzfaq.info/get/bejne/q9BhidF1tbidqKM.html
@codewithbro95 · 6 days ago
You can have a look at their repo; there's an issue about this here: github.com/ollama/ollama/issues/2551. Hope that helps!
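As far as I know, Ollama also reads an OLLAMA_MODELS environment variable to decide where downloaded models are stored, so setting it to another directory before starting the Ollama server should relocate the downloads; the issue above covers the per-OS details.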
@maorahuvim2108 · 1 month ago
How can I run it with LangChain?
@codewithbro95 · 1 month ago
python.langchain.com/docs/guides/development/local_llms/
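For reference, a minimal sketch with the community Ollama wrapper from those docs (assumes pip install langchain-community and a local Ollama server with llama3 already pulled; the import path can differ between LangChain versions):

    from langchain_community.llms import Ollama

    # Point LangChain at the local Ollama server (default http://localhost:11434).
    llm = Ollama(model="llama3")
    print(llm.invoke("Explain Llama 3 in one sentence."))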
@podunkman2709 · 7 days ago
llama 3 in action:

>>> How many liters of water per minute can a Dutch windmill pump out?

That's an interesting question! The answer depends on the specific design and size of the windmill. However, I can give you some general information. Traditionally, Dutch windmills are designed to pump water from shallow sources, such as polders or wells, for irrigation purposes. The capacity of these windmills varies greatly, but a typical small to medium-sized windmill might be able to pump around 0.5 to 2 liters per minute (L/min). Some larger industrial-scale windpumps can pump much more, up to 10-20 L/min or even more, depending on the turbine design and the pressure head of the water.

Really?
@PedroHenriquePS00000 · 25 days ago
Why do none of these have a proper graphical interface... I hate having a black screen to stare at.
@codewithbro95 · 24 days ago
You can use the web UI I showed in the video.
@jesuispasla2729 · 27 days ago
How many GB of RAM would be needed?
@codewithbro95 · 21 days ago
What version do you wanna run?
@jesuispasla2729 · 20 days ago
@codewithbro95 Well, the best one possible with 16 GB of RAM on Linux Ubuntu.
@jesuispasla2729 · 20 days ago
@codewithbro95 Best model on 16 GB RAM, Linux Ubuntu?
@rs-wd9or · 1 month ago
How can we add a model?
@codewithbro95 · 1 month ago
Follow the steps and run the ollama command as in the video; it will download the model to your computer.
@rs-wd9or · 1 month ago
@codewithbro95 I meant there is no option to select a model in the bar of Ollama Web UI. How can we download it there?
@codewithbro95 · 1 month ago
@rs-wd9or No need to; Open Web UI integrates with Ollama automatically, so all the models you download with Ollama will be listed there automatically.
@hoangroyalir · 1 month ago
@codewithbro95 I have downloaded the Llama model using the command "ollama run llama3", but Open Web UI didn't see the models. What should I do now? I use this command to start Open Web UI:
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
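A quick sanity check worth trying for this kind of setup (assuming Ollama's default port): list the models the Ollama server itself reports. If llama3 is missing, the pull didn't finish; if it is listed, the problem is more likely the Open Web UI container failing to reach host.docker.internal:11434.

    import requests

    # GET /api/tags lists every model the local Ollama server knows about.
    tags = requests.get("http://localhost:11434/api/tags").json()
    print([m["name"] for m in tags.get("models", [])])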
@recaia · 1 month ago
Better than GPT-3.5?
@codewithbro95 · 1 month ago
The 400B maybe, but it’s yet to be released!
@waves42069 · 1 month ago
It's really slow.
@codewithbro95 · 1 month ago
There are minimum requirements for running the model; it works pretty well on my M1 with 16GB RAM and an 8-core GPU.
@-_.DI2BA._- · 1 month ago
@codewithbro95 Does the pre-trained 400B model work on an M3 with 128GB RAM?
@codewithbro95 · 1 month ago
@-_.DI2BA._- Not sure; the 400B is yet to be released by Meta. They are still training it.
@viniciusmelo5652 · 25 days ago
The content is fine, but explanation-wise...
@viniciusmelo5652 · 25 days ago
When you just say "go to the documentation" or whatever, you didn't say shit.
@codewithbro95 · 18 days ago
@viniciusmelo5652 Thanks for the feedback, I will try my best to do better next time...
@Thecurioshow1 · 25 days ago
😂😂😂😂😂
@benbork9835 · 1 month ago
Stop clickbaiting, the 400B is not even out.
@codewithbro95 · 1 month ago
Mark talks about it in the video?
@benbork9835 · 1 month ago
@codewithbro95 If 70B is already this good, 400B is going to be crazy.
@tiolv1174 · 1 month ago
@codewithbro95 · 1 month ago
🔥