
Unlock Ollama's Modelfile | How to Upgrade your Model's Brain using the Modelfile

  19,308 views

Prompt Engineer


Days ago

Comments: 41
@drmetroyt
@drmetroyt 5 months ago
Thanks for taking up the request ... 😊
@PromptEngineer48
@PromptEngineer48 5 months ago
🤗 Welcome
@TokyoNeko8
@TokyoNeko8 5 months ago
I use the web UI, and I feel it's much easier to manage the Modelfiles, plus you get the obvious history tracking of the chat, etc.
@renierdelacruz4652
@renierdelacruz4652 5 months ago
Great video, thanks very much.
@PromptEngineer48
@PromptEngineer48 5 months ago
You are welcome!
@JavierCamacho
@JavierCamacho 4 months ago
Stupid question: does this create a new model file, or does it just create an instruction file for the base model to follow?
@PromptEngineer48
@PromptEngineer48 4 months ago
It creates a new model file.
@JavierCamacho
@JavierCamacho 4 months ago
@PromptEngineer48 So the size on the drive gets duplicated? I mean, 4 GB of llama3 plus an extra 4 GB for whatever copy we make?
@PromptEngineer48
@PromptEngineer48 4 months ago
@@JavierCamacho No, the old one is not duplicated; just the new one is used.
@JavierCamacho
@JavierCamacho 4 months ago
@@PromptEngineer48 Thanks.
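For context on the thread above, here is a minimal sketch of the Modelfile flow being discussed; the model name, system prompt, and parameter value are assumptions for illustration:

```shell
# Build a customized model from a base model using a Modelfile.
# "my-llama3" and the prompt text below are made-up examples.
cat > Modelfile <<'EOF'
FROM llama3
SYSTEM "You are a concise, helpful coding assistant."
PARAMETER temperature 0.7
EOF

# Then, with Ollama installed locally:
#   ollama create my-llama3 -f Modelfile
#   ollama run my-llama3
```

This also explains the storage question: Ollama stores model weights as content-addressed blobs, so a derived model that keeps the same `FROM` weights adds only a small manifest on disk, not another multi-gigabyte copy.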
@user-ms2ss4kg3m
@user-ms2ss4kg3m 3 months ago
Great, thanks.
@PromptEngineer48
@PromptEngineer48 3 months ago
You are welcome!
@fkxfkx
@fkxfkx 5 months ago
Great 👍
@PromptEngineer48
@PromptEngineer48 5 months ago
Thank you! Cheers!
@enesnesnese
@enesnesnese 2 months ago
Thanks for the clear explanation. But can we also do this for the llama3 model built on the Ollama image in Docker? I assume that containers do not have access to our local files.
@PromptEngineer48
@PromptEngineer48 2 months ago
Yes, you can
@enesnesnese
@enesnesnese 2 months ago
@@PromptEngineer48 How? Should I create a file named Modelfile in the container, or should I create it locally? I'm confused.
@PromptEngineer48
@PromptEngineer48 2 months ago
@@enesnesnese You should create the Modelfile locally, and you can run the model created from this Modelfile in the container.
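A sketch of that Docker workflow, assuming the official `ollama/ollama` image and a running container named `ollama` (both names are assumptions):

```shell
# Create the Modelfile on the host first.
cat > Modelfile <<'EOF'
FROM llama3
SYSTEM "Answer briefly."
EOF

# Then copy it into the container and build the model there
# (shown as comments since they require Docker and a running container):
#   docker cp Modelfile ollama:/tmp/Modelfile
#   docker exec -it ollama ollama create my-custom -f /tmp/Modelfile
#   docker exec -it ollama ollama run my-custom
```

Alternatively, bind-mounting the host directory into the container at `docker run` time avoids the `docker cp` step.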
@enesnesnese
@enesnesnese 2 months ago
@@PromptEngineer48 Got it. Thanks.
@saramirabi1485
@saramirabi1485 A month ago
I have a question: is it possible to fine-tune llama-3 in Ollama?
@khalidkifayat
@khalidkifayat 5 months ago
Nice one. My question was: how do you use mistral_prompt for production purposes, or send it to a client?
@PromptEngineer48
@PromptEngineer48 5 months ago
Yes. You can push this to your Ollama account under your models. Then anyone will be able to pull the model with something like `ollama pull promptengineer48/mistral_prompt`. I will show the process in the next video on Ollama for sure.
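A sketch of that publishing flow; the namespace `promptengineer48` comes from the thread, and the commands assume you have an ollama.com account with your machine's public key added to it:

```shell
# Models pushed to the registry must be namespaced as <account>/<model>.
MODEL="promptengineer48/mistral_prompt"

# Shown as comments since they require a local Ollama install and a signed-in account:
#   ollama cp mistral_prompt "$MODEL"
#   ollama push "$MODEL"
# Anyone can then fetch it with:
#   ollama pull "$MODEL"
echo "would push: $MODEL"
```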
@khalidkifayat
@khalidkifayat 5 months ago
@@PromptEngineer48 Appreciated, mate.
@autoboto
@autoboto 5 months ago
This is great info. One thing I have wanted to do is migrate all my local models to another drive. On Win11 I was using WSL2 with Linux Ollama; then I installed Windows Ollama and lost the reference to the local models. I'd rather not download the models again. It would also be nice to migrate models to another SSD and have Ollama reference the alternate model path. OLLAMA_MODELS on Windows works, but only for downloading new models. When I copied models from the original WSL2 location to the new location, Ollama would not recognize them in the list command. Curious if anyone has needed to relocate a large number of models to a new location and gotten Ollama to reference the new path.
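For anyone hitting the same issue: Ollama keeps two directories under the models path, `blobs/` (the weight layers) and `manifests/` (the model index), and copying blobs without the matching manifests is a common reason `ollama list` comes up empty. A hedged sketch of the migration, with example paths that are assumptions:

```shell
# Windows (run in cmd/PowerShell; values are examples):
#   setx OLLAMA_MODELS "D:\ollama\models"
#   robocopy "%USERPROFILE%\.ollama\models" "D:\ollama\models" /E
# Restart the Ollama service, then verify with: ollama list

# Linux/macOS equivalent: move the whole directory, keeping blobs/ and
# manifests/ together, and export the new location for the server process.
export OLLAMA_MODELS="$HOME/ssd2/ollama/models"
echo "models dir: $OLLAMA_MODELS"
```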
@PromptEngineer48
@PromptEngineer48 5 months ago
Got it.
@michaelroberts1120
@michaelroberts1120 4 months ago
What exactly does this do that koboldcpp or SillyTavern does not already do in a much simpler way?
@PromptEngineer48
@PromptEngineer48 4 months ago
Basically, if I can get the models running on Ollama, we open another door for integration.
@user-wr4yl7tx3w
@user-wr4yl7tx3w 4 months ago
Do you have a video showing how to use CrewAI and Ollama together?
@PromptEngineer48
@PromptEngineer48 4 months ago
kzfaq.info/get/bejne/fbGiaLiDr9yydIU.html
@UTubeGuyJK
@UTubeGuyJK 5 months ago
How does Modelfile not have a file extension? This keeps me up at night, not understanding how that works :)
@PromptEngineer48
@PromptEngineer48 5 months ago
I will find the reason and give you a night's sleep.
@robertranjan
@robertranjan 4 months ago
❯ ollama run mistral
>>> must a computer filename have an extension?
A computer file name does not strictly have to have an extension, but it is a common convention in many computing systems, including popular operating systems like Windows and macOS. An extension provides additional information about the type or format of the data contained within the file.
For instance, a file named "example" with no extension would still be considered a valid file, but the system might not recognize it as a text file and may not open it with the default text editor. In contrast, if the same file is saved with the ".txt" extension, the system is more likely to open it using the appropriate text editor.
One popular file like `Modelfile` without an extension is `Dockerfile`. I think the developers named it after that one...
@romanmed9035
@romanmed9035 3 months ago
How do I find out when the model was actually updated? When was it filled with data, and how outdated is it?
@PromptEngineer48
@PromptEngineer48 3 months ago
You will have to put a different name on the model...
@romanmed9035
@romanmed9035 3 months ago
@@PromptEngineer48 Thank you, but I asked how to find out how current the data is when I download someone else's model, not when I make my own.
@PromptEngineer48
@PromptEngineer48 3 months ago
If you run the `ollama list` command in cmd, you will see the list of all models on your system.
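On the original question about model age: `ollama list` shows a MODIFIED column (when the model was last changed locally), and `ollama show` prints model details; neither reveals the training-data cutoff, which generally has to come from the publisher's model card. A sketch, with `llama3` as an example name:

```shell
# Shown as comments since they require a local Ollama install:
#   ollama list                      # NAME, ID, SIZE, MODIFIED columns
#   ollama show llama3               # parameters, template, license
#   ollama show llama3 --modelfile   # the Modelfile the model was built from
CMD="ollama list"
echo "try: $CMD"
```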
@EngineerAAJ
@EngineerAAJ 5 months ago
Is it possible to prepare a model with RAG and then save it as a new model?
@PromptEngineer48
@PromptEngineer48 5 months ago
To prepare a model like that, we would need to fine-tune the model separately using other tools, get the .bin or GGUF file, then convert it for Ollama integration.
@EngineerAAJ
@EngineerAAJ 5 months ago
@@PromptEngineer48 Thanks, I will try to take a deeper look into that, but something says that I won't have enough memory for that :(
@PromptEngineer48
@PromptEngineer48 5 months ago
Try it on RunPod.
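The GGUF route mentioned above can be sketched like this; the file name and model name are assumptions:

```shell
# Wrap a fine-tuned GGUF file as a local Ollama model by pointing FROM at the file.
cat > Modelfile <<'EOF'
FROM ./my-finetuned.gguf
SYSTEM "You are a domain-tuned assistant."
EOF

# Then, with Ollama installed and the GGUF file present:
#   ollama create my-finetuned -f Modelfile
#   ollama run my-finetuned
```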
@kevinfox9535
@kevinfox9535 4 days ago
This no longer works.