Flowise Ollama Tutorial | How to Load Local LLM on Flowise

  13,195 views

Leon van Zyl

1 day ago

Flowise Ollama Tutorial | How to Load Local LLM on Flowise
In this Flowise Ollama tutorial video I will show you how to load local LLMs in Flowise using Ollama.
Want to learn how to create Flowise Ollama agents? This is the video for you!
🙏 Support My Channel:
Buy me a coffee ☕ : www.buymeacoffee.com/leonvanzyl
📑 Useful Links:
Ollama: ollama.com
💬 Chat with Like-Minded Individuals on Discord:
/ discord
🧠 I can build your chatbots for you!
www.cognaitiv.ai
🕒 TIMESTAMPS:
00:00 - Intro
00:21 - Local Models overview
01:00 - Ollama setup
02:13 - Start Ollama server
03:49 - Local Conversation Chain
05:22 - Local RAG Chatbot
08:26 - Open Source Limitations
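As a companion to the "Ollama setup" and "Start Ollama server" chapters: once Ollama is installed and `ollama serve` is running, it exposes an HTTP API on port 11434. The sketch below (an editor's illustration, not code from the video) queries Ollama's `/api/tags` endpoint to list installed models, returning an empty list if the server isn't reachable:

```python
import json
import urllib.error
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default listen address


def list_ollama_models(base_url: str = OLLAMA_URL) -> list:
    """Return the installed model tags, or [] if the server is unreachable."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=2) as resp:
            data = json.load(resp)
        return [m["name"] for m in data.get("models", [])]
    except (urllib.error.URLError, OSError):
        return []


if __name__ == "__main__":
    print(list_ollama_models())
```

If this prints an empty list while your Flowise flows fail, start the server first (`ollama serve`) and pull a model (e.g. `ollama pull llama2`) before wiring up the ChatOllama node.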

Comments: 104
@sadyaz64 4 months ago
Thank you! Please make more videos on open source models.
@redrhino2048 4 months ago
Hi Leon. Good work! Keep rolling out tutorials like this with Ollama! In my case, I didn't have to use the MMAP parameter. Everything works fine.
@swhitings007 4 months ago
Big thanks for these videos Leon! You do such a great job of editing as well.
@leonvanzyl 4 months ago
Thank you 🙏
@user-oj2ge8cb5z 4 months ago
I've been watching your work on YouTube for a very long time and wanted to say thank you very much for what you do, and to wish you the best of luck!
@leonvanzyl 4 months ago
Thank you!
@BadBite 4 days ago
Oh! thank you very much. Really appreciate your work! 🎉
@leonvanzyl 4 days ago
You're welcome 🤗
@RuiminWang-hk1wz 4 months ago
Thanks for your videos. They really help me a lot!
@conneyk 4 months ago
This is exactly what I was looking for! Thank you so much! I'd been trying to get Ollama working with Flowise over the last few days…
@leonvanzyl 4 months ago
Glad I could help 🙏
@Kartratte 4 months ago
Hello, and thank you so much… I started testing Flowise with this and it worked.
@leonvanzyl 4 months ago
Glad to hear 👍
@abelpouillet5114 1 month ago
Thank you very much! Please make more videos on open source models!
@fatemehjahedpari815 3 months ago
Great Videos. Thanks a lot!
@KraaiduToit 19 days ago
Thanks, you are an amazing tutor. This is a great tutorial.
@leonvanzyl 19 days ago
Thank you
@zhalberd 12 days ago
Great video thank you.
@leonvanzyl 12 days ago
You're welcome 🤗
@toursian 21 days ago
Thanks for your awesome videos. Please add more videos on open source models. Thanks again.
@Romusic1 2 months ago
great , thanks!❤
@leonvanzyl 2 months ago
You're welcome
@whackojaco 4 months ago
Thank you for your videos, Leon. I learn a lot from you, and it's great to see a fellow South African talking about AI and LLMs.
@leonvanzyl 4 months ago
Thanks Jaco! Glad you enjoy it 😁.
@cyborgmetropolis7652 1 month ago
Great stuff. I tried some of the earlier tutorials using Ollama instead of OpenAI (the positive/negative review reply tutorial) and found the if/else didn't work with llama3. I'd like to learn more about how to make agents that are completely local and don't use any 3rd-party services like OpenAI, Pinecone, etc., but maybe that's not possible without losing too much functionality.
@nabildjelloudi7087 3 months ago
Thanks, keep going!
@leonvanzyl 3 months ago
Will do 😁
@nabildjelloudi7087 3 months ago
I just have an issue with the OpenAI API key. It shows me this error when I try to run the flow in Flowise: "InsufficientQuotaError: 429 You exceeded your current quota....". Should I pay for tokens? @@leonvanzyl
@PIOT23 4 months ago
Love the open source content! Would love to see a good video on Mixtral
@leonvanzyl 4 months ago
Hehe, my PC can barely run it 😂.
@IliasSeddik 7 days ago
Thank you for this video, but I don't know why I'm not able to connect ChatOllama to the Conversation Chain. Is there anything additional to do?
@antonslashcev8800 4 months ago
Hey Leon, amazing tutorials, thank you! I'm trying to build a project in Flowise using your tutorials; maybe you could help with two questions: 1. Is there a way to make a multi-agent system where agents with different roles and functions can give instructions or feedback to each other before executing (like AutoGen)? I saw your tutorial on how to make something like this using a Conversation Chain, but is it possible to make a more advanced system with Agents? 2. How do I load images from external URLs? I don't see such a template. If I upload a PDF with an image, will it work? Thanks!
@subhamagrawal4740 7 days ago
Nice video! What to do if Ollama is running behind a proxy server? In that case this does not work. Is there any alternative in Flowise?
@jiuvk8393 3 months ago
Can I choose the installation folder for Ollama and the models? I usually use an external drive to save space on my computer.
@BruWozniak 2 months ago
Been mind-blown by pretty much every single one of your videos / tutorials 👏. I'm super grateful! 🙏 Just a thought: how about an entirely open source and local stack running on Docker? For example, Flowise, ChromaDB, and Mixtral (a quantized model running through Ollama?) running in different containers locally. And then, wow, a deployment pipeline to, say, Google Cloud with a CI/CD script via GitHub Actions. So: dev locally with the Docker containers, push to GitHub when satisfied, automatically build and deploy with GitHub Actions, and boom, the app is available on Google Cloud. That would be incredible! I'm going to try it right now, lots of research and trial and error ahead... 😁
@leonvanzyl 2 months ago
Thank you for the feedback! That sounds like an awesome project 😁
@BruWozniak 2 months ago
@@leonvanzyl Looks like ```ollama run mixtral:8x7b``` is going to be a little challenging for my modest hardware 😁 Gonna try ```ollama run gemma:2b```, maybe even ```gemma:7b```; it is still open source and apparently lightweight...
@BruWozniak 2 months ago
Ah sorry ```no markdown``` over here 😁
@mehdibelkhayat5088 4 months ago
Hi Leon, thanks a lot for the video. For my purpose I used nomic-embed-text as the embedding model; it's faster. I managed to connect my Ollama + Flowise custom tool directly to my CRM API, but it worked only with the llama2 model and not Mistral... I struggled for a while, then found the trick! No need to go through Make or n8n... I'm still working on it. Cheers
@leonvanzyl 4 months ago
Keep me in the loop. I haven't found a reliable way to use open source models with agents.
@mehdibelkhayat5088 4 months ago
@@leonvanzyl For now I have good results with llava:13b or llama2; they are the only ones that use the custom tool on Flowise with Ollama (with the others, no results: nexusraven, orca2, gemma, phi, openchat, mistral...). I'll keep you posted.
@rickyS-D76 4 months ago
Hi Leon, great content. I really love your content and presentation. Can you please make a RAG video where you embed different types of files, like CSV, PDF, and DOC, and chat with those? Thanks
@leonvanzyl 4 months ago
Hey, I actually have a video on RAG in this series. We use a web-loader in that video, but you can simply swap the loader out for anything else.
@meister4831 3 months ago
Thank you. How does this solution compare to using LocalAI as you showed in an older video?
@leonvanzyl 3 months ago
They do pretty much the same thing. Ollama is just a newer application for running models locally
@anilrajshinde7062 3 months ago
All your videos are great. I am creating small web applications using Flowise. Can you create a video on adding a streaming effect after creating the API, so that it reflects in the web application? This would be very useful.
@xavierf2229 1 month ago
Is it possible to make a chatbot for my website using Llama, and AI tools to sell? Thanks
@centraldeexames7300 4 months ago
Hi Leon! Thank you for sharing more excellent content. I'm struggling to figure out how to insert a system prompt along with my own prompt into the flow of a chatbot I'm creating in Flowise. I'm using an open source uncensored LLM via Replicate, and it needs a system prompt to behave the way I'd like. I would be very grateful if you could help in any way.
@leonvanzyl 4 months ago
You can set the system prompt by clicking on Additional Parameters on the chain, or you can assign a Chat Prompt Template. Apologies if I misunderstood ☺️
@justdavebz 3 months ago
How does this change if I am using docker?
@vish_9409 4 months ago
Can you please help me with how to add our own PDFs to this?
@HermesMacedo 4 months ago
Leon, how do I make the flow send media (image, audio, video, PDF and other files) during the conversation, and not just links? For example: getting files from a Google Drive.
@arberstudio 4 months ago
Request structured data, parse the JSON, and use client-side rendering. I used this for a recipe sample app; it works very well.
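The structured-data approach in this comment can be sketched as follows: ask the flow for JSON via Flowise's prediction API, then parse before rendering. This is an editor's sketch; the base URL and chatflow ID are placeholders you would copy from your own Flowise instance:

```python
import json
import urllib.request


def build_prediction_request(base_url: str, chatflow_id: str,
                             question: str) -> urllib.request.Request:
    """Build a POST request for Flowise's prediction endpoint."""
    body = json.dumps({"question": question}).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/api/v1/prediction/{chatflow_id}",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


# Placeholder chatflow ID -- copy the real one from the Flowise UI.
req = build_prediction_request("http://localhost:3000", "<chatflow-id>",
                               "Give me a recipe as JSON")
print(req.full_url)
```

The `text` field of the response would then be `json.loads`-ed and rendered client side, as the commenter describes.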
@DarkKnight-uk7mq 4 months ago
Thanks for another great video. We would really appreciate it if you made a bigger project, like chatbots for a real estate or e-commerce website.
@leonvanzyl 4 months ago
Great ideas
@DarkKnight-uk7mq 4 months ago
@@leonvanzyl thanks
@KevinBahnmuller 4 months ago
a video about Ollama function calling with flowise would be very nice :)
@leonvanzyl 4 months ago
Very few models support function calling. It's actually limited to OpenAI and Mistral at the moment. You could therefore simply download the Mistral model in Ollama 👍. Be warned, the hardware requirements for Mistral function calling are steep 😄
@JoaquinTorroba 4 months ago
👏🏼
@ricardofernandez2286 3 months ago
Hi Leon, very useful tutorial! I'm running this on CPU (8 vCPUs + 30 GB of RAM) and it is extremely slow. In fact Ollama uses only a few resources and I can't make it use all the available CPUs or RAM. I know that a GPU is the way to go with LLMs, but perhaps you have some suggestions on how to make this configuration perform a little better. Thank you!
@leonvanzyl 3 months ago
Hopefully Ollama will improve over time
@jiuvk8393 3 months ago
I did everything exactly the same as you for the RAG flow and made sure the Ollama server is running (I talked to the model in the terminal and it responded fine immediately). I also made sure MMap is on, but I still get: "Error: Request to Ollama server failed: 404 Not Found".
@leonvanzyl 3 months ago
That message seems to indicate that the Ollama server is unavailable
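One way to narrow down a 404 like the one in this thread (an editor's sketch, not from the video) is to call Ollama's `/api/generate` endpoint directly, outside Flowise. If the same 404 appears here, the model tag configured in the ChatOllama node probably doesn't match the output of `ollama list` exactly; if this works, the problem is on the Flowise side:

```python
import json
import urllib.error
import urllib.request


def try_generate(base_url: str, model: str, prompt: str):
    """POST to Ollama's /api/generate; return the reply text, or None on failure."""
    body = json.dumps({"model": model, "prompt": prompt,
                       "stream": False}).encode("utf-8")
    req = urllib.request.Request(
        f"{base_url}/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=60) as resp:
            return json.load(resp).get("response")
    except urllib.error.HTTPError as err:
        # A 404 here usually means the model tag is wrong or not pulled.
        print(f"HTTP {err.code}: check the model tag against 'ollama list'")
    except (urllib.error.URLError, OSError):
        print("Server unreachable: is 'ollama serve' running?")
    return None
```

For example, `try_generate("http://localhost:11434", "llama2", "hello")` with a running server either returns text or prints which side is failing.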
@youwang9156 4 months ago
Thank you for your video! Just wondering: if we host everything locally in Flowise, after we set it up, can we use the generated Python API somewhere else? Or can we only use the API generated by Flowise locally, e.g. in VS Code?
@leonvanzyl 4 months ago
If you're hosting it locally, then you can only access it locally. I have a video on deploying Flowise in this series, but I'm guessing you want to use Open Source models in the cloud? Your best option is to use Huggingface (video coming soon).
@youwang9156 4 months ago
Thank you so much, you literally saved my life. I've been considering Hugging Face as well, but open source models like Mixtral don't work with LangChain's output parser; only the OpenAI models do.
@youwang9156 4 months ago
Do you think I can deploy Ollama and Flowise locally, build an output parser framework, and eventually use it through the local API generated by local Flowise? My goal is to find a cheaper way to run an output parser with decent performance, since OpenAI costs so much. @@leonvanzyl
@Skiplegday1 4 months ago
Is there a way to share the created chatbot instance? If I would like someone else to try out the chatbot, for example.
@leonvanzyl 4 months ago
You can export a flow from the settings of the flow. The other person can then import the flow on their end.
@JoseManuel-fp7bn 2 months ago
Hi Leon. I have tried, but I get the fetch failed error. I get the message "Ollama is running" when I open localhost in the browser, but somehow Flowise doesn't detect it. What could it be? Thanks!
@hujeffrey5823 2 months ago
me too
@eeling9212 2 months ago
I'm getting the fetch failed error too.
@stephensamuel2770 4 months ago
Can I use it to create a knowledge-based chatbot for a website?
@leonvanzyl 4 months ago
Absolutely, send me an email and my agency will assist. Link in description
@Machiuka 4 months ago
The models are very slow to download. Is there any way to download those models separately and not via the ollama pull command?
@leonvanzyl 4 months ago
You can download them from Huggingface.
@Machiuka 4 months ago
@@leonvanzyl The problem solved itself. Maybe there was a network problem; today everything worked flawlessly. Thank you for sharing this tutorial!
@zubinbalsara8414 3 months ago
I am getting a "Fetch Failed" error. My Flowise is running in Docker on localhost:3000 and the Ollama server is running on the machine (not Docker) at localhost:11434. Can you please help me? Has Flowise running in Docker got anything to do with this issue? I can run ChatOpenAI without any problem; it's just Ollama.
@user-lh8ym9dx8k 2 months ago
host.docker.internal:11434
@hujeffrey5823 2 months ago
I have the same issue
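For the Docker situation in this thread, the host.docker.internal suggestion above is the usual fix: inside a container, localhost refers to the container itself, not to the machine running Ollama. A small sketch of the Base URL choice (assuming Ollama's default port 11434; note that on plain Linux Docker, host.docker.internal additionally requires `--add-host=host.docker.internal:host-gateway`):

```python
def ollama_base_url(flowise_in_docker: bool) -> str:
    """Pick the ChatOllama Base URL when Ollama runs on the host machine."""
    if flowise_in_docker:
        # 'localhost' inside the container is the container itself,
        # so Flowise must address the Docker host instead.
        return "http://host.docker.internal:11434"
    return "http://localhost:11434"


print(ollama_base_url(flowise_in_docker=True))   # when Flowise runs in Docker
print(ollama_base_url(flowise_in_docker=False))  # when both run on the host
```

Paste the returned URL into the ChatOllama node's Base URL field instead of the localhost default.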
@anindabanik208 4 months ago
Wow, it's awesome, but my machine is very slow 😢. Is there any alternative, like a Kaggle notebook?
@leonvanzyl 4 months ago
Yeah, these models are resource intensive. You could always try smaller models. Kaggle is a no-go: the point of this video, and of Ollama, is to run these models locally. We will look at hosted solutions as well, like Hugging Face, but again, those can result in costs for API usage or infrastructure. There is a reason why OpenAI is so popular 😊
@khalidkifayat 4 months ago
Nice tutorial Leon, a few questions: 1. Can we use these open source models to create a chatbot and give it to clients? If yes, where will it reside? 2. For data privacy it's a good option, but how do we make it production-ready while keeping the privacy factor?
@leonvanzyl 4 months ago
Thanks! The point of the video is to run the bots locally. If you want to use these models in the cloud you would need to use hosted services like Hugging Face or AWS Bedrock. I'll definitely release a video on those. There is a cost involved in using these services of course, so I just wanted to give you guys a free local alternative.
@randomguyfrominternet 4 months ago
You don't always need to host everything in the cloud. You can also have your own server at home, in the garage, or at the office. All you need is good enough hardware for the model and a public IP with well-configured networking. But local hosting is a whole different topic to learn. So you either go:
- Self-hosted server
- Cloud
- Combination of both (e.g. hosting Flowise, files and databases on your own server and calling your model hosted on a stateless cloud compute endpoint from it)
@muchossablos 4 months ago
Leon, how to update Flowise ?
@leonvanzyl 4 months ago
Check out the first video in the series. There is a chapter for upgrading Flowise.
@thatsweirdt 4 months ago
Hello, could you please create content using Hugging Face chat and embedding models?
@leonvanzyl 4 months ago
Working on a Huggingface video actually.
@lumi.ai_ 4 months ago
Can anyone help me? I am unable to upsert.
@leonvanzyl 4 months ago
What's the error? I had to enable MMap to get it to work, did you try that?
@lumi.ai_ 4 months ago
@@leonvanzyl No, I haven't tried that yet, but I will and get back to you. Thanks for the response. I was starting to think I shouldn't use Flowise, but please help us and we will make great LLMs. I hope you can solve our problems.
@florentflote 4 months ago
@nhtna4706 4 months ago
No more API usage? No more spending? No need for GPUs?
@leonvanzyl 4 months ago
Your PC does not have a GPU? ☺️ Unfortunately, you need powerful hardware to run the more impressive models.
@gonzalodijoux5953 2 months ago
RAG doesn't work well. Ollama doesn't use the document.
@leonvanzyl 2 months ago
Which embedding model and vector store did you use?
@pamelavelasquez7244 3 months ago
Thanks for the video tutorial, but embedding is not working for me. The Ollama server is running and MMap is enabled on the embedding node, yet I get this error:

2024-04-15 22:51:49 [ERROR]: fetch failed
TypeError: fetch failed
    at Object.fetch (node:internal/deps/undici/undici:11372:11)
    at async OllamaEmbeddings._request (D:\flowise\Flowise\node_modules\@langchain\community\dist\embeddings\ollama.cjs:110:26)
    at async RetryOperation._fn (D:\flowise\Flowise\node_modules\p-retry\index.js:50:12)
2024-04-15 22:51:49 [ERROR]: [server]: Error: TypeError: fetch failed
    at buildFlow (D:\flowise\Flowise\packages\server\dist\utils\index.js:415:19)
    at async utilBuildChatflow (D:\flowise\Flowise\packages\server\dist\utils\buildChatflow.js:229:36)
    at async createInternalPrediction (D:\flowise\Flowise\packages\server\dist\controllers\internal-predictions\index.js:7:29)
2024-04-15 22:55:43 [INFO]: PUT /api/v1/chatflows/0d375ada-df1f-4d66-941e-1f495ea9f4e5
2024-04-15 22:55:48 [INFO]: POST /api/v1/vector/internal-upsert/0d375ada-df1f-4d66-941e-1f495ea9f4e5
2024-04-15 23:00:50 [ERROR]: TypeError: fetch failed
    at InMemoryVectorStore_VectorStores.upsert (D:\flowise\Flowise\packages\components\dist\nodes\vectorstores\InMemory\InMemoryVectorStore.js:26:27)
    at async buildFlow (D:\flowise\Flowise\packages\server\dist\utils\index.js:352:37)
    at async upsertVector (D:\flowise\Flowise\packages\server\dist\utils\upsertVector.js:117:32)
    at async Object.upsertVectorMiddleware (D:\flowise\Flowise\packages\server\dist\services\vectors\index.js:9:16)
    at async createInternalUpsert (D:\flowise\Flowise\packages\server\dist\controllers\vectors\index.js:28:29)
@jamminrebel3614 4 months ago
Great video as always, thanks for the premium content you deliver 🦾💙. Is this only for local models, or could I use OpenRouter credentials here? Or would I build a dedicated API agent instead of a chatbot? Sorry for the confusion =D
@leonvanzyl 4 months ago
You're welcome 🤗. I'm not familiar with OpenRouter; maybe someone in the comments can assist.