Create your own CUSTOMIZED Llama 3 model using Ollama

  11,776 views

AI DevBytes

1 day ago

Llama 3 | In this video we will walk through step by step how to create a custom Llama 3 model using Ollama.
🚀 What You'll Learn:
* How to create an Ollama ModelFile
* Adding a custom system prompt for your custom Llama 3
* Customizing Llama 3 model parameters
Chapters:
00:00:00 - Intro
00:01:03 - Getting Llama 3
00:01:48 - Testing Llama 3 model
00:02:38 - Creating Custom Model file
00:07:40 - Creating Custom Llama 3 model
00:09:20 - Testing Custom Llama 3 model
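For reference, a Modelfile along the lines of what the video builds might look like the sketch below (the system prompt and parameter values here are illustrative, not the exact ones from the video):

```
# Inherit from the base Llama 3 model pulled via `ollama pull llama3:8b`
FROM llama3:8b

# Sampling parameters (illustrative values)
PARAMETER temperature 0.7
PARAMETER top_p 0.9

# Custom system prompt
SYSTEM """You are a helpful assistant that answers concisely."""
```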
🧑‍💻 My MacBook Pro Specs:
Apple MacBook Pro M3 Max
14-Core CPU
30-Core GPU
36GB Unified Memory
1TB SSD Storage
📺 Other Videos you might like:
MACS OLLAMA SETUP - How To Run UNCENSORED AI Models on Mac (M1/M2/M3): • OLLAMA | Want To Run U...
WINDOWS OLLAMA SETUP - Run FREE Local UNCENSORED AI Models on Windows with Ollama: • OLLAMA | Want To Run U...
🖼️ Ollama & LLava | Build a FREE Image Analyzer Chatbot Using Ollama, LLava & Streamlit! • Mastering AI Vision Ch...
🤖 Streamlit & Ollama | How to Build a Local UNCENSORED AI Chatbot: • Streamlit & Ollama | H...
🚀 Build Your Own AI 🤖 Chatbot with Streamlit and OpenAI: A Step-by-Step Tutorial: • Build AI Chatbot with ...
🔗 Useful links
Modelfile Github Repo: github.com/AIDevBytes/Custom-...
Ollama ModelFile docs: github.com/ollama/ollama/blob...
Llama3 Model location: ollama.com/library/llama3:8b
_____________________________________
🔔 Subscribe to @aidevbytes for more tutorials and coding tips
👍 Like this video if you found it helpful!
💬 Share your thoughts and questions in the comments section below!
GitHub: github.com/AIDevBytes
🏆 My Goals for the Channel 🏆
_____________________________________
My goal for this channel is to share the knowledge I have gained over 20+ years in the field of technology in an easy-to-consume way. My focus will be on offering tutorials related to cloud technology, development, generative AI, and security-related topics.
I'm also considering expanding my content to include short videos focused on tech career advice, particularly aimed at individuals aspiring to enter "Big Tech." Drawing from my experiences as both an individual contributor and a manager at Amazon Web Services, where I currently work, I aim to share insights and guidance to help others navigate their career paths in the tech industry.
_____________________________________

Comments: 40
@indiboy7 26 days ago
Perfect. Exactly what I was searching for!
@Unknown_22925 1 month ago
Wow, you're awesome! That video was short, informative, and great. Thanks a bunch!😊
@AIDevBytes 1 month ago
Thanks! Glad you found it helpful. I try to keep it short so you don't fall asleep halfway through! 😁
@user-tt7rr1lt7o 24 days ago
GREAT CONTENT
@Lucas2RC 22 days ago
This video is great. Thanks for the content.
@IdPreferNot1 27 days ago
Have you tried a dolphin version (or equivalent) of Llama 3 and got a good working Modelfile? I would have thought this video would blow up by now, since this topic is still hard to find on the interwebs.
@AIDevBytes 27 days ago
I have played with the dolphin version a little. I may create a dedicated video for those who are interested. The channel is still new, so it's hard for its videos to blow up right away 😁.
@IdPreferNot1 27 days ago
@@AIDevBytes That would be great if you can get one to work well... seems like many are having an issue getting it to work well under Ollama... myself included.
@AIDevBytes 27 days ago
I'll probably get a video covering creating custom dolphin llama 3 and dolphin mixtral models sometime tomorrow.
@john_blues 21 days ago
Is there a way to increase the context length past 8k? If so, does it degrade performance?
@AIDevBytes 21 days ago
The maximum context length is set by the model, so for Llama 3 you can't go past the 8K context window. Theoretically, the larger the context window, the more data the model has to go through, which can make it harder for the model to differentiate important details from irrelevant information in the context. Usually you see this with massive context windows, 100K+ and up. You can check out Phi-3, which has a 128K context window. It's a pretty good model for its size: ollama.com/library/phi3:3.8b
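For context, the window a model actually runs with is set by the `num_ctx` parameter in the Modelfile; it can be lowered below the model's maximum, but raising it past the model's limit won't help. A minimal sketch:

```
FROM llama3:8b
# Request a 4K context window (must not exceed Llama 3's 8K maximum)
PARAMETER num_ctx 4096
```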
@john_blues 21 days ago
@@AIDevBytes Thanks. I was hoping it would be possible to get it closer to 128K, which I believe is what ChatGPT and Gemini have. That makes it better for long-form responses/content. I'll check out Phi-3.
@sertenejoacustic 27 days ago
Would you use this in prod? Also, how powerful is your dev machine hardware-wise? Keep up the great work, bud!
@AIDevBytes 26 days ago
Thanks! Yes, you could use this in prod; I would recommend running it on a dedicated server with proper GPU power. My machine's specs: Apple MacBook Pro M3 Max, 14-Core CPU, 30-Core GPU, 36GB Unified Memory, 1TB SSD Storage.
@sertenejoacustic 26 days ago
@@AIDevBytes thanks! Really appreciate you
@AIDevBytes 26 days ago
@@sertenejoacustic happy to help!
@thomasdeshayes9292 17 days ago
Thanks. can we use Jupyter lab instead?
@AIDevBytes 17 days ago
Yes, as long as the notebook is running on a computer with a GPU.
@hotprinzify 22 days ago
You didn't show where you saved the Modelfile, what kind of document it is, or where llama3 lives on your computer.
@AIDevBytes 22 days ago
Be sure to check out the videos I reference in the description for setting up Ollama on Windows or Mac if you need a deeper dive into Ollama; they have a more detailed overview of installing and running it.
MACS OLLAMA SETUP - How To Run UNCENSORED AI Models on Mac (M1/M2/M3): kzfaq.info/get/bejne/Zpl6kr1nq8C8hGg.html
WINDOWS OLLAMA SETUP - Run FREE Local UNCENSORED AI Models on Windows with Ollama: kzfaq.info/get/bejne/e5ubkqyd09PJmJc.html
Ollama models are pulled into their own special directory that you shouldn't alter. The Modelfile can live in any directory you like; it's a plain-text file with no extension. See the model file here on GitHub: github.com/DevTechBytes/Custom-Llama3-Model. When running the ollama commands, make sure you are in the directory where you stored your Modelfile.
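As a sketch of the workflow described above (the model name `my-llama3` is just an example):

```
# From the directory containing your Modelfile:
ollama create my-llama3 -f ./Modelfile

# Then run the custom model:
ollama run my-llama3
```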
@eevvxx80 12 days ago
Thanks mate, I have a question. Can I add my text to llama3?
@AIDevBytes 12 days ago
Can you explain further? Do you mean adding your own text to the SYSTEM parameter? I'm not sure I'm following your question.
@Raj-kt3mz 21 days ago
This is amazing
@laalbujhakkar 15 days ago
So, what's the point of "customizing" when I can just change the system prompt? Isn't it like copying /bin/ls to /bin/myls and feeling like I accomplished something?
@AIDevBytes 15 days ago
This is a very simple example, but the purpose would be if you wanted to change multiple parameters as part of the model and use it in another application. For example, you could use the model with something like Open WebUI and then lock users into only using the model you customized with your new parameters.
@mirzaakhena 20 days ago
In many of your videos I've seen you only copy-paste the existing template. Can you help explain, or maybe create a video on how to make a custom template?
@AIDevBytes 20 days ago
The templates are model-specific, so you don't want to change them. You will get strange output from the models if you try to create a custom template in your model file.
@mirzaakhena 20 days ago
@@AIDevBytes Alright, fair enough. I thought the template, parameters, and other settings would be inherited from the ancestor model.
@AIDevBytes 20 days ago
You are correct, those are inherited. What I noticed while testing lots of different models is that when you don't include the template in the custom model, the output starts including weird characters with some models, so I'm not sure if this is a bug in Ollama. That's why you always see me copy and paste the template into new model files.
@mirzaakhena 20 days ago
@@AIDevBytes Ok, thanks. I was just wondering whether I could create a new role, something like function_call or function_response, in the template, instead of it being embedded in the assistant's reply.
@lucasbrown7338 19 days ago
Hold on, so my data stays on my device with this new AI? Now that's a win for privacy. The MediaTek Dimensity platform collab with Meta AI seems like a very interesting one!
@AIDevBytes 19 days ago
Yep! The beauty of Open-Source models!
@hamzahassan6726 9 days ago
Hi, I am trying to make a model file with these configurations:

# Set the base model
FROM llama3:latest

# Set custom parameter values
PARAMETER num_gpu 1
PARAMETER num_thread 6
PARAMETER num_keep 24
PARAMETER stop
PARAMETER stop
PARAMETER stop

# Set the model template
TEMPLATE "{{ if .System }}system {{ .System }}{{ end }}{{ if .Prompt }}user {{ .Prompt }}{{ end }}assistant

and getting Error: unexpected EOF
Could you tell me what I am doing wrong?
@AIDevBytes 9 days ago
Looks like you didn't close your double quotes at the end of your template. A simple mistake which can drive you crazy 😁 Let me know if that fixes your issue.
EDIT: Also, use triple quotes like this when using multiple lines for your template:
TEMPLATE """
Template values go here
"""
@hamzahassan6726 9 days ago
@@AIDevBytes Getting the same error with this:

# Set the base model
FROM llama3:latest

# Set custom parameter values
PARAMETER num_gpu 1
PARAMETER num_thread 6
PARAMETER num_keep 24
PARAMETER stop
PARAMETER stop
PARAMETER stop

# Set the model template
TEMPLATE """
{{ if .System }}system {{ .System }}{{ end }}{{ if .Prompt }}user {{ .Prompt }}{{ end }}assistant
"""
@AIDevBytes 9 days ago
When I get some free time and I'm at my computer again today, I will give it a try to see if I can isolate the problem and let you know.
@hamzahassan6726 9 days ago
@@AIDevBytes thanks mate. much appreciated
@AIDevBytes 9 days ago
@@hamzahassan6726 I copied the model file content you had, pasted it into a new file, and was able to create a new model. I am not quite sure why you are getting the error "Error: unexpected EOF"; I have not been able to duplicate it. Also, one thing to call out: it looks like you are not using the llama3 template from Ollama, but that doesn't appear to be causing the issue. I would make sure you are not using rich text format in your model file and ensure that it is plain text only. If you go to the llama3 model (ollama.com/library/llama3:latest/blobs/8ab4849b038c), the template looks like this:
{{ if .System }}system {{ .System }}{{ end }}{{ if .Prompt }}user {{ .Prompt }}{{ end }}assistant {{ .Response }}
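The unclosed-quote failure discussed in this thread can be caught before running `ollama create` with a quick script like the one below. This is a hypothetical helper, not part of Ollama: it only checks that quote delimiters on TEMPLATE lines come in pairs, which is the classic cause of "Error: unexpected EOF".

```python
def check_modelfile_quotes(text: str) -> list[str]:
    """Return a list of quoting problems found in a Modelfile's text."""
    problems = []
    # Triple-quoted blocks ('"""') must open and close, so the
    # delimiter count across the whole file must be even.
    if text.count('"""') % 2 != 0:
        problems.append('unbalanced triple quotes (""")')
    # A single-line TEMPLATE value with an odd number of double
    # quotes means the closing quote was forgotten.
    for i, line in enumerate(text.splitlines(), start=1):
        if (line.strip().startswith("TEMPLATE")
                and '"""' not in line
                and line.count('"') % 2 != 0):
            problems.append(f"line {i}: odd number of double quotes")
    return problems

good = 'FROM llama3:latest\nTEMPLATE """\nhello\n"""'
bad = 'FROM llama3:latest\nTEMPLATE "hello'
print(check_modelfile_quotes(good))  # []
print(check_modelfile_quotes(bad))   # flags line 2
```

Running a check like this on the Modelfile before `ollama create` would surface the missing closing quote immediately instead of via the opaque EOF error.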