FINALLY! Open-Source "LLaMA Code" Coding Assistant (Tutorial)

129,276 views

Matthew Berman

5 months ago

This is a free, 100% open-source coding assistant (Copilot) based on Code LLaMA living in VSCode. It is super fast and works incredibly well. Plus, no internet connection is required!
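For anyone who wants the local piece up front, the offline half of the setup is just Ollama plus one model pull (a minimal sketch, assuming the standard Linux/macOS install script; model tag as used in the video):

# Install Ollama (Linux/macOS install script)
curl -fsSL https://ollama.com/install.sh | sh

# Pull the Code Llama completion model used in the video
ollama pull codellama:7b-code

# Sanity check: Ollama serves on localhost:11434 by default; this lists local models
curl http://localhost:11434/api/tags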
Download Cody for VS Code today: srcgr.ph/ugx6n
Join My Newsletter for Regular AI Updates 👇🏼
www.matthewberman.com
Need AI Consulting? ✅
forwardfuture.ai/
Rent a GPU (MassedCompute) 🚀
bit.ly/matthew-berman-youtube
USE CODE "MatthewBerman" for 50% discount
My Links 🔗
👉🏻 Subscribe: / @matthew_berman
👉🏻 Twitter: / matthewberman
👉🏻 Discord: / discord
👉🏻 Patreon: / matthewberman
Media/Sponsorship Inquiries 📈
bit.ly/44TC45V

Comments: 292
@matthew_berman 5 months ago
Llama Code 70b video coming soon!
@DopeTropic 5 months ago
Can you make a video with a fine-tuning guide for a local LLM?
@orangeraven3869 5 months ago
codellama 70b has been amazing for me so far. The code is definitely SOTA for a local model. Can't wait to see tunes and merges like Phind or DeepSeek in the near future. Will you cover miqu 70b too? Rumors aside, it's the closest to GPT-4 of any local model yet, and I predict it produces a surprise or two if you put it through your normal benchmarks.
@Ricolaaaaaaaaaaaaaaaaa 5 months ago
@orangeraven3869 How is it compared to the latest GPT-4 build?
@SaveTheDoctor-fl7hn 5 months ago
LOL can't wait!
@Chodak166 5 months ago
How about the current Hugging Face leader, the Moreh MoMo 72b model?
@5Komma5 5 months ago
Need to sign in to use the plugin. No thanks. That is not completely local.
@carktok 5 months ago
Are you saying you had to log in to authenticate your license to use a local instance of their software for free? 🤯
@nicolaspace1182 5 months ago
@carktok Yes, and that is a deal breaker for many people, believe it or not.
@cesarruiz1202 5 months ago
Yeah, but that's mainly because they're paying for the OpenAI and Claude 2 completion APIs so you can use them at no cost. Also, if you want to, I think you can self-host Cody without logging in to Sourcegraph.
@vaisakhkm783 5 months ago
Cody is open source; you can run it completely locally.
@SFSylvester 5 months ago
@vaisakhkm783 It's not open-source if they force you to login. My machine, my rules!
@rohithgoud30 5 months ago
I typically don't rely too heavily on AI when coding. I use TabbyML, which has a limited model, but it works for me. It's completely open-source and includes a VSCode extension too. It's free and doesn't require login. I use the DeepSeekCoder 6.7B model locally.
@hrgdavor 5 months ago
thanks for the hint, I was looking for that. I hate that cloud crap.
@haroldasraz 4 months ago
Cheers for the suggestion.
@YadraVoat 4 months ago
VSCode? Why not VSCodium?
@justingolden21 2 months ago
Just tried Tabby, thanks!
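For anyone wanting to try the TabbyML route suggested above, a minimal sketch of serving it via Docker (the image and flags are from memory of Tabby's docs at the time, and the exact model registry name is an assumption, so double-check both):

# Serve Tabby with a DeepSeekCoder model on a CUDA GPU (model name is an assumption)
docker run -it --gpus all -p 8080:8080 -v $HOME/.tabby:/data \
  tabbyml/tabby serve --model TabbyML/DeepseekCoder-6.7B --device cuda

The Tabby VS Code extension then points at http://localhost:8080, no login involved.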
@a5tr00 5 months ago
Since you have to sign in, does it send any data upstream when you use local models?
@mayorc 5 months ago
The problem with Cody is that it only does autocomplete with local models, something you can do with many VS Code extensions like LLaMA Coder and many more. All the nice features use the online version, which is extremely limited in number of requests on the free plan (expanding the monthly numbers a bit would make it easier to test, or to grow a serious interest that later leads to a paid plan). There is also a fair number of extensions that do those nice features (chat, document, smells, refactoring, explain, and tests) all in one extension and for free, using local models (Ollama or OpenAI-compatible endpoints). Cody does these features a little better and interacts better with the codebase, probably due to the bigger context window (at least in my tests) and a nicer implementation/integration in VS Code, but unless you pay you're not really going to benefit from them, because the low number of free requests isn't enough to seriously dive in.
@ruifigueiredo5695 5 months ago
Matthew just confirmed in a post above that the limitations on the free tier do not apply if you run the model locally.
@alx8439 5 months ago
Can you suggest any particular alternative among that "number of extensions"?
@mayorc 5 months ago
@alx8439 There are many. I've tested a few so far, but I'm not using them at the moment so I don't remember the names. What I did was search for extensions with names like "chat, gpt, AI, code, llama", and many will come up; then you have to test them one by one (that's what I did). I suggest you go for the ones whose description and screenshots already show customization options, like a base URL for Ollama or OpenAI-compatible local servers. I think one of them has "genie" in the name.
@woozie_tv 5 months ago
@alx8439 I'm curious about those too
@alx8439 5 months ago
I'll answer myself then: Twinny, Privy, Continue, TabbyML
@jbo8540 5 months ago
Matt Williams, a member of the ollama team, shows how to make this work 100% free and open source in his video "Writing Better Code with Ollama"
@mickelodiansurname9578 5 months ago
thanks for the heads up man
@brian2590 5 months ago
This is how I am set up. Works great!
@LanceJordan 5 months ago
link please?
@mickelodiansurname9578 5 months ago
@LanceJordan "Writing Better Code with Ollama". Btw, there's an issue on YT with putting links into a comment, even YT links; seemingly a lot of comments with links end up going missing!
@ArthurMartins-jw8fq 3 months ago
Does it have knowledge of the entire codebase?
@evanmarshall9498 5 months ago
Does this method also allow completion for large code bases, like you went over in a previous tutorial using universal-ctags? Or do you still have to download and use universal-ctags? I think it was your aider-chat tutorial. I don't work with Python, so using this VS Code extension and Cody is much better for me (front-end developer using HTML, CSS and JS).
@RichardGetzPhotography 5 months ago
Is it Cody that understands? I think it is the LLM that does. Also, why $9 if I am running everything locally?
@AlexanderBukh 5 months ago
How is it local if I have to authorize with a 3rd party 😮
@HUEHUEUHEPony 5 months ago
it is not, it is clickbait
@hqcart1 5 months ago
nothing is free dude.
@zachlevine1857 5 months ago
Pay a little money and have fun my people!
@olimpialucio 5 months ago
Thank you very much for your reply. What type of HW is required to run this model locally?
@iseverynametakenwtf1 5 months ago
Can you select the OpenAI one and run it through LM Studio locally too?
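Partial answer: LM Studio runs its own OpenAI-compatible server locally (default http://localhost:1234/v1), so the question is really whether the extension lets you override the OpenAI base URL, not anything on LM Studio's side. With a model loaded and the server started, a quick check that it's reachable:

# LM Studio's local OpenAI-compatible server (default port 1234)
curl http://localhost:1234/v1/models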
@KodandocomFaria 5 months ago
I know it is a sponsored video, but is there any open source alternative to the Cody extension? We need a completely local solution, because Cody may use telemetry and gather some information behind the scenes.
@Nik.leonard 5 months ago
Continue does chat and fix, but doesn't do autocompletion, and is quite unstable. There is another one that does autocomplete with Ollama (LlamaCode).
@UvekProblem 5 months ago
You have collama, which is a fork of Cody and uses llama.cpp
@hqcart1 5 months ago
@Nik.leonard Phind, best free one.
@alx8439 5 months ago
Twinny, Privy, TabbyML
@kartiknarang3152 2 months ago
One more issue with Cody is that it can take only 15 files as context at a time, while I need an assistant that can take a whole project folder.
@Resursator 5 months ago
The only time I'm coding is while on a flight. I'm so glad I can use an LLM from now on!
@AlexanderBukh 5 months ago
About 40 minutes of battery life. Yep, I ran LLMs on my 15-watt 7520U laptop. My 5900HX would gobble the battery even faster, I think.
@supercurioTube 5 months ago
Wait, you have GitHub Copilot enabled there too, which shows up in your editor. Are you sure the completion isn't coming from the GitHub Copilot extension rather than from Cody with the local model?
@kate-pt2ny 5 months ago
The suggested text in the video has Cody's icon on it, so you can see it is Cody generating the code.
@jawadmansoor6064 5 months ago
Can it only work with Ollama? What if I have a llama.cpp server running on the same port as Ollama, will that work? What URL (complete, including port) does Ollama serve, so that I can run my server on the same URL? It will of course be localhost, like localhost:8080 (where the llama.cpp server normally runs) or localhost:8081/v1/chat/completion (if api_like_OAI is used). So what does Ollama use?
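To the port question: Ollama serves on http://localhost:11434 by default, and it exposes its own API rather than the OpenAI-style path, e.g.:

# Ollama's native completion endpoint (default port 11434)
curl http://localhost:11434/api/generate -d '{
  "model": "codellama:7b-code",
  "prompt": "def fibonacci(n):",
  "stream": false
}'

So a llama.cpp server would have to mimic that endpoint and port, not just /v1/chat/completion, for an extension that is hard-wired to Ollama.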
@ew3995 5 months ago
can you use this for reviewing PRs?
@scitechtalktv9742 5 months ago
What an amazing new development! Thanks for your video. A question: can I use this to completely translate a Python code repository to C++, with the goal of making it run faster? How exactly would we go about doing this?
@janalgos 5 months ago
how does Cody compare to the Cursor extension with GitHub Copilot?
@vransomware7601 5 months ago
Can it be run using text-generation-webui?
@TubatsiM 5 months ago
I followed your instructions and got stuck at 2:38: I'm using Linux, so I'm seeing a different output. And thanks for your assistance.
@kate-pt2ny 5 months ago
I chose the Ollama local model; can Cody only use codellama:7b-code? Can I switch to other models, and if so, where can I change that?
@vivekpadman5248 5 months ago
Thanks for the video, this assistant is absolutely a blessing.
@michai333 5 months ago
Thanks so much! We need a video on how to train a local model via LM Studio / VS / Python.
@ScottWinterringer 5 months ago
or just use oobabooga and stop using junk?
@froggy5967 5 months ago
Might I ask how much memory your M2 Max has, and is it the 14-inch? Thinking about getting a Max 14" as well. Thanks
@mc9723 5 months ago
Even if it's not world-changing breakthroughs, the speed at which all this tech is expanding cannot be overstated. I remember one of the research labs talking about how every morning they would wake up and another lab had solved something they had just started/were about to start. This is a crazy time to be alive; stay healthy everyone.
@paolovolante 5 months ago
Hi, thanks! I use ChatGPT 3.5 for generating Python code by just describing what I want. It kind of works... In your opinion, is the solution you propose better than GPT-3.5?
@Daniel-xh9ot 5 months ago
Way better than GPT-3.5; GPT-3.5 is pretty outdated even for simple tasks.
@stvn0378 5 months ago
I'm pretty capped using a 2080S (8GB)/16GB RAM. Have you tried out HF Spaces yet? Would love to figure out a way to test Dolphin Mixtral etc.
@d-popov 5 months ago
That's great! But how is it magically linked to Ollama? How do I specify another Ollama-hosted model (13b/34b)?
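At the time of the video, the model used by the experimental Ollama provider was configurable from VS Code's settings.json; a sketch of the relevant keys (the exact setting names are an assumption based on the then-current Cody extension and may have changed since):

{
  // assumption: setting keys as of the Cody extension current at the time
  "cody.autocomplete.advanced.provider": "experimental-ollama",
  "cody.autocomplete.experimental.ollamaOptions": {
    "url": "http://localhost:11434",
    "model": "codellama:13b-code"
  }
}

The model value would be any tag you have already pulled, e.g. after running ollama pull codellama:13b-code.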
@shaileshsundram 4 months ago
I am using a 2017 MacBook Air. Will using it be instantaneous?
@rrrrazmatazzz-zq9zy 4 months ago
Can it reference variables in other files in the same directory while working in a separate file?
@DiomedesDominguez 3 months ago
Do I need a GPU with 4 GB VRAM or more for the 7b? Also, Python is the easiest of the programming languages; can I use Cody locally for C/C++ or C# and other more robust languages?
@olimpialucio 5 months ago
Is it possible to use it on a Windows and WSL system? If yes, how should we install LLaMA?
@thethiny 5 months ago
Same steps
@Joe_Brig 5 months ago
I'm looking for a local code assistant. I don't mind supporting the project, with a license for example, but I don't want to log in on each use, or at all. How often does this phone home? Will it work if my IDE is offline? Pass.
@ryzikx 5 months ago
wait for the Llama Code 70b tutorial
@InnocentiusLacrimosa 5 months ago
@ryzikx that should require > 40GB VRAM.
@AlexanderBukh 5 months ago
@ryzikx 70b would require 2x 4090 or 3090. 34b takes 1.
@kshitijnigam 5 months ago
Tabby and Code Llama can do that, let me find the link to the playlist
@toml6535 4 months ago
How do I get the Cody settings when using WebStorm? Or can I only do this in VS Code?
@SageGoatKing 5 months ago
Am I misunderstanding something, or are you advertising this as an open source solution while it's still dependent on a 3rd-party service? What exactly is Cody? I would have assumed that if it's completely local, it's just a plugin that lets you use local models on your machine. Yet you describe it as having multiple versions with different features in each tier, including a paid tier. How exactly does that qualify as open source?
@zachlevine1857 5 months ago
He shows you how fast it is.
@BrandosLounge 5 months ago
No matter what I do, I always get this when asking for instructions: "retrieved codebase context before initialization". Is there a Discord where we can get support for this?
@LanceJordan 5 months ago
I seem to have missed something. Even though I followed the steps exactly, I can't tell if I'm using the local model or not, but when I unplugged my modem, it didn't respond until I plugged it back in. So I'm doing something wrong. I am running Windows with the WSL Linux subsystem; typically I can install and run anything Linux/Ubuntu, and I do have the ollama server running. 🤷🏻‍♂
@jakeaquilina505 4 months ago
Is there an extension for Visual Studio rather than VS Code?
@Krisdomain 5 months ago
How can you not enjoy creating unit tests?
@Baleur 5 months ago
So the local one is the 7b version, not the 70b? Or is it a typo in the release?
@InnocentiusLacrimosa 5 months ago
70b was released and it can be run locally, but it is a massive model and should require around 40GB VRAM.
@skybuck2000 2 months ago
Seems to conflict with the OmniPascal extension/code completion; not sure if both can be used? Any ideas?
@DanVoronov 5 months ago
Despite the extension being available in the marketplace of VSCodium, after registration it attempts to open regular Visual Studio Code (VSC) and doesn't function properly. It's unfortunate to encounter developers creating coding helpers that turn out to be broken tools.
@quincy1048 5 months ago
Any plan to roll this into a Visual Studio extension for C++/C# coding?
@piratepartyftw 4 months ago
Will the Chat function be available with Ollama soon?
@nobound 3 months ago
I have a similar setup, but I'm encountering difficulty getting Cody to function offline. Despite specifying the local model (codellama) and disabling telemetry, the logs indicate that it's still attempting to connect to Sourcegraph for each operation.
@user-mz2ei2nx2p 5 months ago
Can anyone tell me if there is a difference in code production between Q4 and Q8? I mean, will Q8 produce fewer errors? Is it more "complete"? Thanks!
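For rough scale, the quantization level maps almost directly onto size: a 7B-parameter model is about 7 GB at Q8 (one byte per weight) versus roughly 3.5-4 GB at Q4. Q8 stays closer to the original fp16 weights, so it should make somewhat fewer mistakes, but in practice the Q8-to-Q4 gap is usually smaller than the gap between model sizes (7b vs 13b vs 34b).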
@skybuck2000 2 months ago
Also, it's already downloaded, so what is the pull for?
@michaelvarney. 4 months ago
How do you deploy this on a completely air-gapped network? No network connections during install.
@aijokker 5 months ago
Is it better than GPT-4?
@cyanophage4351 3 months ago
Tried it on Windows and couldn't get it to connect to my Ollama. The dropdown was set to "experimental-ollama" and "codellama", but when I asked in the chat "what can you do" it would reply with "I'm Claude from Anthropic", so not sure what is up with that.
@rogermarquez1314 5 months ago
Is this just for Mac users?
@skybuck2000 2 months ago
Must the pull be placed in some special folder? This is not explained, and I doubt this will work the way I did it. I don't want models on the SSD C: drive but on the HD G: drive, to experiment with it and save space on the SSDs, which really need it for things like Windows updates. I've got twice 4 TB on SSD, but still...
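On keeping models off the C: drive: Ollama chooses where to store pulled models via the OLLAMA_MODELS environment variable, so pointing it at another disk before pulling should do it (a sketch, assuming a WSL setup with the G: drive mounted at /mnt/g; restart the Ollama server afterwards):

# Store pulled models on the G: drive instead of the default ~/.ollama location
export OLLAMA_MODELS=/mnt/g/ollama/models
ollama pull codellama:7b-code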
@skybuck2000 2 months ago
Tried it with C, because Python apparently isn't installed in VS Code by default. It didn't work for C code, but I see Cody is working somewhat; a yellow light bulb appears. I came here for code translation, though code generation is interesting too and similar. Can Cody translate code too, from Go to Delphi/Pascal? That is what I am interested in...
@henrychien9177 5 months ago
What about Windows? Any way to run llama?
@WhiteDragon103 4 months ago
ollama : The term 'ollama' is not recognized as the name of a cmdlet, function, script file, or operable program. Is there a working tutorial for Windows 10?
@xCallMeLucky 23 days ago
restart your pc
@themaridv2000 4 months ago
Apparently they only support the given models, and the llama option actually only uses codellama-13b. Basically it can't run something like Mistral or other llama models. Am I right?
@MasterSage50307 3 months ago
Hi @matthew_berman, I just tried following your instructions and it seems that the Cody provider dropdown never lists any of the models I pulled. All I see is "experimental-codellama:7b", even though I actually downloaded the 13b parameter model. Is there another option? Also, with the free account, if you're running a local model, are we still held to the 20-chat/500-autocomplete limits?
@JoeBrigAI 4 months ago
No local models when using the JetBrains plugin?
@pierruno 5 months ago
Can you write in the title which OS this tutorial is for?
@technovangelist 5 months ago
It's not actually fully offline. It still uses their services for embedding and caching, even when using local models.
@mdazhardware 5 months ago
Thanks for this awesome tutorial. How do you do this on Windows?
@skybuck2000 2 months ago
I get some strange window that says: edit instruction code. I guess I have to tell it what to do... generate Fibonacci sequence code, perhaps?
@Ludecan 5 months ago
This is so cool, but doesn't the Cody login kind of invalidate the local benefits? A 3rd party still gets access to your code.
@mayorc 5 months ago
Yes, though I don't know how and whether the code is retained long-term once you start chatting with your codebase. Plus the free version has a very limited number of requests you can issue a month: 500 autocomplete requests (which you would probably burn through in a day or two, considering that the moment you stop typing it processes a request within a few seconds). That is solvable with the local model, but then you have only 20 chat messages or built-in commands per month, which makes them useless unless you choose the paid plan.
@ruifigueiredo5695 5 months ago
Does anyone know if the 500 autocompletions per month on the free tier also apply if we run codellama locally?
@matthew_berman 5 months ago
You get unlimited code completions with a local model.
@synaestesia-bg3ew 5 months ago
@matthew_berman It said "Windows version is coming soon", so I had to stop at the download step and cannot continue this tutorial. Not everyone has a Linux machine or a powerful Mac. Could you warn people about prerequisites before starting new videos? That would help, thanks.
@Ray88G 5 months ago
Can you please also include steps for those who are using Windows?
@Yewbzee 5 months ago
Does anybody know if this can code SwiftUI?
@kninghtanirecaps1470 3 months ago
Can I use this without internet?
@Sigmatechnica 2 months ago
What's the point of a local model if you have to sign into some random service to use it???
@peterfallman1106 5 months ago
Great, but what are the requirements for Microsoft servers and clients?
@yagoa 5 months ago
how do I do it if Ollama is on my LAN?
@user-dy9mp1pf2t 4 months ago
Curious how they compare.
@Sergatx 5 months ago
I just tried running this while offline and it doesn't work. How is this local?
@skybuck2000 2 months ago
Now the only thing I need to figure out is how to add a command to the Cody pop-up menu or something: "translate from Go language to Pascal language", so I don't have to re-type it constantly... testing a big translation now...
@jayashankarmaddipoti6964 5 months ago
Seems like Ollama is compatible with both Linux and Mac. How can Windows users run it?
@RonaldvanWeerd 5 months ago
Try running it in a Docker container. Works fine for me.
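For reference, the documented way to run the Ollama server in Docker (CPU variant shown; GPU setups need the extra runtime flags from Docker's docs):

# Start the Ollama server container, exposing the default port
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# Then pull a model inside the container
docker exec -it ollama ollama pull codellama:7b-code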
@nufh 5 months ago
Damn... That is super dope.
@mrdl9199 2 months ago
Thanks for this awesome tutorial
@skybuck2000 2 months ago
However, I did not yet install the Go extension. Maybe if the Go extension is installed, Cody can then do code translation from the Go language? Hmmm, not sure yet... probably not... but just maybe...
@haydnrayturner1383 5 months ago
*sigh* Any idea when Ollama is coming to Windows??
@efexzium 4 months ago
Do you know any open source projects like this?
@georgeknerr 5 months ago
Love your channel Matthew! For me however, 100% local means not having to have an account with an external vendor to run your coding assistant completely locally. I'm looking for just that.
@neronenerone7366 4 months ago
How about using the same idea but with GPT Pilot?
@skybuck2000 2 months ago
Cody settings, provider: now it says experimental-ollama. Curious how it connects to the pulled/downloaded model... watching the video, continuing...
@C650101 1 month ago
Can it do C#?
@keithprice3369 5 months ago
So, Pro is free for 2 more days? 😁
@skybuck2000 2 months ago
You lost me at the terminal step. How do you get into ollama, is that its folder?
@warezit 5 months ago
🎯 Key Takeaways for quick navigation:
00:00 💻 Introduction to Local Coding Assistants
- Introduction to the concept of a local coding assistant and its advantages
- Mention of the coding assistant Cody set up with Ollama for local development
01:07 🔧 Setting Up the Coding Environment
- Guide on installing Visual Studio Code and the Cody extension
- Instructions on signing in and authorizing the Cody extension for use
02:00 🚀 Enabling Local Autocomplete with Ollama
- Steps to switch from GPT-4 to local model support using Ollama
- Downloading and setting up the Ollama model for local inference
03:39 🛠️ Demonstrating Local Autocomplete in Action
- A practical demonstration of the local autocomplete feature
- Examples include writing a Fibonacci method and generating code snippets
05:27 🌟 Exploring Additional Features of Cody
- Description of other useful features in Cody not powered by local models
- Examples include chatting with the assistant, adding documentation, and generating unit tests
07:04 📣 Conclusion and Sponsor Acknowledgment
- Final thoughts on the capabilities of Cody and its comparison to GitHub Copilot
- Appreciation for Cody's sponsorship of the video
Made with HARPA AI
@bhanunamikaze2508 5 months ago
This is awesome
@planetchubby 5 months ago
Nice! Seems to work pretty well on my Linux laptop. Would be great if I could save my 10 euros a month for Copilot.
@Lucas-iv6ld 2 days ago
I'm saving this
@dannyprats824 5 months ago
Does this need a dedicated GPU?
@matthew_berman 5 months ago
No
@monaluthra4769 5 months ago
Please make a tutorial on how to use AlphaGeometry
@skybuck2000 2 months ago
OK, it worked. Kinda funny: I wrote the first two lines and the last line, and Cody did the rest after I told it to "generate Fibonacci sequence code"... thanks, might be useful some day. A bit flimsy, but interesting. Next I'll try whether it can translate code too:

function Fibonacci: integer;
var
  a, b, c: integer;
begin
  a := 0;
  b := 1;
  while b < 100 do
  begin
    writeln(b);
    c := a + b;
    a := b;
    b := c;
  end;
  Result := b; { return the first Fibonacci number >= 100 }
end;
@manhomme3870 5 months ago
Would it be possible to use MPT-7B instead? Anybody have an idea?
@tubasweb 5 months ago
Can you do it on a real PC?
@user-nm9sy6fr7h 4 months ago
Enterprise AI is the best alternative to OpenAI, always helpful with coding questions
@skybuck2000 2 months ago
It also automatically opened a command prompt... can proceed from there... plus there is an item in the start menu... probably linked to this messy installation.
@YadraVoat 4 months ago
1:17 - Um, why Visual Studio Code when there's VSCodium available?
@freaq.creation 5 months ago
It's not working... I get an error where it says it can't find the model :(
@skybuck2000 2 months ago
Once codellama is selected from the pull-down list, it now shows: CodyCompletionProvider:initialized: experimental-ollama/codellama:7b-code
@bradstudio 5 months ago
The Nova editor needs support for this.
META's New Code LLaMA 70b BEATS GPT4 At Coding (Open Source)
9:25
Matthew Berman
79K views
Using Llama Coder As Your AI Assistant
9:18
Matt Williams
65K views
Writing Better Code with Ollama
4:43
Matt Williams
41K views
Intro to RAG for AI (Retrieval Augmented Generation)
14:31
Matthew Berman
36K views
Is CODE LLAMA Really Better Than GPT4 For Coding?!
10:21
Matthew Berman
111K views
All You Need To Know About Running LLMs Locally
10:30
bycloud
121K views
Boost Productivity with FREE AI in VSCode (Llama 3 Copilot)
5:39
Mervin Praison
25K views