Ollamac and OllamaKit creator here! 👋🏼 Great video, Karin!! ❤
@LinuxH2O • 14 days ago
Really informative, something I was kind of in need of. Thanks for showing things off.
@Another0neTime • A month ago
Thank you for the video and for sharing your knowledge.
@KD-SRE • 21 days ago
I use '/bye' to exit the Ollama CLI.
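For reference, this is what that looks like in a terminal session (the model name is just an example; typing /? inside the session lists the other slash commands):

```
$ ollama run llama3.2
>>> /bye
$
```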
@andrelabbe5050 • A month ago
I enjoyed the video. Easy to understand and, most importantly, it shows what you can do without too much hassle on a not-too-powerful MacBook. From the video I believe I have the same model as the one you used. I do like the idea of setting presets for the 'engine'. I also use the Copilot apps, so I can check how both perform on the same question. I have just tested deepseek-coder-v2 with the same questions as you... Funny thing, it is not exactly the same answer. Also, on my 16GB Mac, the memory activity gets a nice yellow colour. Sadly, contrary to the Mac in the video, I have more stuff running in the background, like Dropbox, etc., which I cannot really kill just for the sake of it.
@guitaripod • A month ago
Wondering what it'd take to get something running on iOS. Even with a 2B model it might prove useful.
@tsalVlog • A month ago
Great video!
@juliocuesta • A month ago
If I understood correctly, the idea could be to create a macOS app that includes some feature that requires an LLM. The app is distributed without the LLM, and the user is notified that the feature will only be available if they download the model. This notice could be implemented in a view containing a button that downloads the file and configures the macOS app to start using it.
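If it helps, here is a minimal SwiftUI sketch of that flow, assuming the app pulls a GGUF file from a server you control; the view name, URL, and file name are hypothetical placeholders, and you would still need a runtime (a llama.cpp wrapper, or a local Ollama install) to actually load the weights:

```swift
import SwiftUI

// A sketch of download-on-demand model setup. ModelDownloadView, the URL,
// and the file name are hypothetical placeholders, not a real endpoint.
struct ModelDownloadView: View {
    @State private var isDownloading = false
    @State private var modelReady = false

    // Hypothetical URL; in practice you would host the weights yourself.
    private let modelURL = URL(string: "https://example.com/models/phi3-mini.gguf")!

    var body: some View {
        VStack(spacing: 12) {
            if modelReady {
                Text("Model installed. AI features enabled.")
            } else {
                Text("This feature needs a local model (a one-time download).")
                Button(isDownloading ? "Downloading..." : "Download model") {
                    Task { await downloadModel() }
                }
                .disabled(isDownloading)
            }
        }
        .padding()
    }

    @MainActor
    private func downloadModel() async {
        isDownloading = true
        defer { isDownloading = false }
        do {
            // Download to a temporary file, then move it into Application
            // Support so the app can find it again on the next launch.
            let (tempFile, _) = try await URLSession.shared.download(from: modelURL)
            let support = try FileManager.default.url(for: .applicationSupportDirectory,
                                                      in: .userDomainMask,
                                                      appropriateFor: nil,
                                                      create: true)
            let destination = support.appendingPathComponent("phi3-mini.gguf")
            try? FileManager.default.removeItem(at: destination)
            try FileManager.default.moveItem(at: tempFile, to: destination)
            modelReady = true
        } catch {
            print("Model download failed: \(error)")
        }
    }
}
```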
@kamertonaudiophileplayer847 • A month ago
Awesome video!
@ericwilliams4554 • A month ago
Great video. Thank you. I am interested to know if any developers are using this in their iOS apps.
@SwiftyPlace • A month ago
This is not working for iOS. If you want to run an LLM on an iPhone, you will need to use a smaller model, and those usually don't perform as well. Most iPhones have less than 8GB of RAM. That is also why Apple Intelligence processes more advanced, complex tasks in the cloud.
@officialcreatisoft • A month ago
I've tried using LLMs locally, but I only have 8GB of RAM. Great video!
@SwiftyPlace • A month ago
Unfortunately, Apple shipped the base models with 8GB of RAM. A lot of people have the same problem as you.
@jayadky5983 • A month ago
I feel like you can still run the Phi3 model on your device.
@midnightcoder • A month ago
Any way of running it on iOS?
@EsquireR • A month ago
Only watchOS, sorry.
@mindrivers • A month ago
Dear Karin, could you please advise how to put my entire Xcode project into a context window and ask the model about my entire codebase?
@bobgodwinx • A month ago
LLMs have a long way to go. 4GB to run a simple question is a no-go. They have to reduce it to 20MB, and then people will start paying attention.