
How To Install CODE LLaMA LOCALLY (TextGen WebUI)

80,962 views

Matthew Berman

11 months ago

In this video, I show you how to install Code LLaMA locally using Text Generation WebUI. We'll install the WizardLM fine-tuned version of Code LLaMA, which recently beat GPT4 in HumanEval.
Enjoy :)
Join My Newsletter for Regular AI Updates 👇🏼
www.matthewber...
Need AI Consulting? ✅
forwardfuture.ai/
Rent a GPU (MassedCompute) 🚀
bit.ly/matthew...
USE CODE "MatthewBerman" for 50% discount
My Links 🔗
👉🏻 Subscribe: / @matthew_berman
👉🏻 Twitter: / matthewberman
👉🏻 Discord: / discord
👉🏻 Patreon: / matthewberman
Media/Sponsorship Inquiries 📈
bit.ly/44TC45V
Links:
Model From Video - huggingface.co...
WizardLM - huggingface.co...
Instructions: gist.github.co...
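For quick reference, the install flow demonstrated in the video boils down to roughly the sequence below. This is only a sketch of what is shown on screen; the CUDA version and the model repo are the ones used in the video and may be out of date for newer setups, so treat them as assumptions.

```bash
# Sketch of the install flow from the video (assumes an NVIDIA GPU; the CUDA
# version and model repo are the ones shown on screen, not current advice).
conda create -n textgen python=3.10 -y
conda activate textgen
conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia

git clone https://github.com/oobabooga/text-generation-webui
cd text-generation-webui
pip install -r requirements.txt

# Download the WizardCoder fine-tune of Code LLaMA (GPTQ-quantized) into models/,
# then launch the web UI and load the model from the Model tab.
python download-model.py TheBloke/WizardCoder-Python-13B-V1.0-GPTQ
python server.py
```

After the UI is up, the video loads the model, sets the generation parameters, and picks a prompt template on the Default tab before generating code.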

Comments: 255
@jackflash6377 11 months ago
Hands down the best AI-focused YouTube channel. Like your style: all stuff, no fluff. Easy-to-follow tutorials and always cutting-edge content. Subbed and joined! Now, let's push for something Aider-like for local models.
@matthew_berman 11 months ago
Thank you! Doesn’t Aider support local models now?
@emil5684 11 months ago
@matthew_berman Aider? I don't know what that is. I'm just starting to study AI.
@michaelmalzacher6018 11 months ago
@emil5684 respect
@MrLargonaut 11 months ago
You have been my go-to source for how-to's since GPT4 launched, which is when I joined the game. I started from literal scratch, knowing nothing about coding at all, instead letting a lifetime dream propel me. Thank you especially for your 'from scratch' videos, because there are many things that I don't know the jargon or phrasing for. For my amateur position, these types of vids do me the most good. *edited for them lovely mispellin's*
@matthew_berman 11 months ago
Love to hear it!
@Shinkaze33 11 months ago
Holy crap, this is amazing. I just ran some programming tests and it's REALLY good. Can't believe this is running on my local machine when 8 months ago this sort of tech required a datacenter... just WOW... all the WOW.
@matthew_berman 11 months ago
Yea it’s super impressive. Especially because you can run this on pretty much any computer with the 1B versions
@harisjaved1379 11 months ago
MAN YOU DELIVERED! We asked you, and you delivered! Thanks Matt!
@jeffersonvega622 11 months ago
00:00 📋 Introduction to installing Code LLaMA locally
01:01 🧪 Setting up the Conda environment and cloning the code
02:34 🛠 Troubleshooting installation issues
03:34 📦 Downloading and configuring the Code LLaMA model
05:05 💻 Setting model parameters and using prompt templates
06:07 📚 Conclusion and call to action
@creatiiveart341 11 months ago
I am on M1 Max Mac 64 GB and I am getting the following error when I try to load the model with ExLlama_HF :(

Traceback (most recent call last):
  File "/Users/kc/text-generation-webui/modules/exllama_hf.py", line 14, in <module>
    from exllama.model import ExLlama, ExLlamaCache, ExLlamaConfig
ModuleNotFoundError: No module named 'exllama'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/kc/text-generation-webui/modules/ui_model_menu.py", line 182, in load_model_wrapper
    shared.model, shared.tokenizer = load_model(shared.model_name, loader)
  File "/Users/kc/text-generation-webui/modules/models.py", line 79, in load_model
    output = load_func_map[loader](model_name)
  File "/Users/kc/text-generation-webui/modules/models.py", line 322, in ExLlama_HF_loader
    from modules.exllama_hf import ExllamaHF
  File "/Users/kc/text-generation-webui/modules/exllama_hf.py", line 21, in <module>
    from model import ExLlama, ExLlamaCache, ExLlamaConfig
ModuleNotFoundError: No module named 'model'
@interesting_vdos 11 months ago
I'm getting the same error. Can anyone help on this please
@interesting_vdos 11 months ago
@creatiiveart341 were you able to fix this error? Please let me know if you were able to fix this issue
@Joeespo2009 11 months ago
Same error here too
@creatiiveart341 11 months ago
@interesting_vdos Nope! I tried a few things but no luck so far :(
@owenrichards661 11 months ago
Same with me.
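On the ModuleNotFoundError above: ExLlama and ExLlama_HF are CUDA-only loaders, so they are not expected to work on an M1/M2 Mac at all. A common workaround on Apple Silicon is to use a GGUF-quantized build of the same model with the llama.cpp loader instead. The sketch below assumes the repo and file names follow TheBloke's usual naming convention; check the actual Hugging Face repo before downloading.

```bash
# Run from the text-generation-webui folder. Repo and file names are assumptions —
# verify them on Hugging Face first.
pip install huggingface_hub
huggingface-cli download TheBloke/WizardCoder-Python-13B-V1.0-GGUF \
  wizardcoder-python-13b-v1.0.Q4_K_M.gguf \
  --local-dir models
# Then select the llama.cpp loader for this file in the Model tab.
```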
@mercadolibreventas 11 months ago
You are awesome! I am now a Patreon supporter! As you continue to build out that methodology, I'll continue to add more recurring support. Thanks! Guys like you are gold; giving without expecting is the key to abundance!
@matthew_berman 11 months ago
Much appreciated!! Keep the feedback and requests coming :)
@seanbergman8927 11 months ago
Great video. Looking forward to trying this soon. Thanks for walking through the entire process, including resolving the errors you encountered.
@matthew_berman 11 months ago
You got it!
@xmysty 11 months ago
I have learned so much from you - local results beyond expectations. Can't thank you enough, Matthew; you've condensed so much into clearly explained new information 👏 Oh, and thanks to TheBloke. ▶ all your vids
@matthew_berman 11 months ago
Thanks so much, I love hearing this!
@johnnybueti 11 months ago
How long did that task take to generate with your RTX 4090? Great video. :)
@russellmm 11 months ago
"From scratch" typically means truly from scratch, not starting from the Anaconda environment - or "from scratch" could mean from their installer... Other than that (which I see happen a lot), I enjoy your channel.
@Shinkaze33 11 months ago
Onscreen Typo @2:30 in the package name. You typed tortchvision instead of torchvision. The correct command should be: conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia
@astroportterraformationfor2776 9 months ago
Great how-to. What are the minimum and recommended hardware characteristics? System CPU and RAM? Minimum GPU and RAM?
@vargonian 11 months ago
One question I have before I try it out: One of the biggest limitations of ChatGPT is that its knowledge only extends to 2021, so there are lots of libraries / updates it's unfamiliar with. Are the models for Code LLaMA more up to date?
@imadreamerboy 11 months ago
According to Meta, the cutoff for Llama 2 is somewhere in July 2022, but some info is up to date through early 2023.
@contractorwolf 11 months ago
love your teaching style Matthew!
@user-ui9rx6ni3n 11 months ago
Great job, clear and straight to the point. Many thanks!!!! Not sure if you have plans to explain (in your lovely way) how we can build our own purpose-built model and train and optimize it on private data.
@normanlove222 11 months ago
Maybe I missed it, but what Programming languages can we ask about? Just Python? or other languages as well?
@nicosilva4750 11 months ago
This is great. What are the recommended requirements: GPU cores, and RAM size and type, for the two largest models?
@ryanwebster3267 11 months ago
Yes, please! I agree that this would be nice to know.
@matthew_berman 11 months ago
70b you will need 48gb vram. 34b you can put on 24gb and possibly less if you used a quantized version and offload some to the CPU. It’s never a simple mapping of model to GPU.
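As a rough sanity check on those numbers, a 4-bit GPTQ model needs on the order of half a gigabyte of VRAM per billion parameters for the weights alone, before KV cache and CUDA overhead. A back-of-the-envelope sketch (the ~0.5 bytes/parameter figure is an approximation, not an exact rule):

```bash
# Approximate weight size at 4-bit quantization (~0.5 bytes per parameter).
for params_b in 13 34 70; do
  echo "${params_b}B model: ~$((params_b / 2)) GB of weights at 4-bit"
done
# 13B -> ~6 GB, 34B -> ~17 GB (tight on a 24 GB card), 70B -> ~35 GB (hence ~48 GB VRAM)
```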
@ryanwebster3267 11 months ago
@matthew_berman Thank you!
@IvanRosaT 7 months ago
This is so cool! But in practice, if you follow along, either the repo hasn't been maintained or it has been maintained too much, lol. Most of us hit issues with the missing exllama module, which breaks the chain; there's more info in the GitHub instructions. As a concept, though, it looks great.
@user-pp9xk8kb2r 11 months ago
PSA for those going through the demo, the template needs to be changed at the bottom of the screen on the Default tab from QA to Alpaca-with-input.
@Proprogrammer001 11 months ago
The "QA" one worked fine for me
@tomski2671 11 months ago
Does anybody know if this model can be further trained on user data? If so, how difficult would that process be?
@mickelodiansurname9578 11 months ago
Be good if you could plug it into VS Code as a coding assistant extension.
@Tyronne_ 11 months ago
Needed this! Thanks bro, much appreciated.
@jayprice8246 11 months ago
Bro, you are killing the AI tutorial game right now. Thanks for all your awesome content!!!!
@matthew_berman 11 months ago
Haha thank you!
@xavierzolander 6 months ago
u da man, matt!
@SasaBocki 11 months ago
Is it possible to run any of the WizardCoder Python models without an NVIDIA GPU?
@marcfruchtman9473 11 months ago
That is great. Thanks for the video. Out of curiosity, what GPU are you running?
@matthew_berman 11 months ago
RTX4090
@bubbajones5873 11 months ago
Wow! Talk about timing. I just sat down to do this and needed this video 🎉
@Spacewarpstudio 11 months ago
Unfortunately when using this to write c# code for Unity, it doesn't even seem to come close to what I can do with GPT-4, especially when it comes to developing code bit by bit. I've had amazing results with GPT-4 getting certain things working then adding more functionality as I go, copy pasting errors and asking for them to be corrected etc. I've tried doing similar with this in chat mode, or chat-instruct mode, and it doesn't seem to have a clue what I'm talking about. I ask it to correct its mistake and it just spits out totally random unrelated code. Until I can have a conversation about the code we're working on together like I can with chat GPT, this has extremely limited use.
@philcox2355 2 months ago
Thanks Matthew So far so good.
@UserB_tm 11 months ago
I'm still testing out this model, but so far I'm not impressed. I asked it to write a basic Python module that creates a text document named "hello world", writes "hello world" inside of it, and saves it to the home directory. On the first attempt it opened up a screenshot app and saved the screenshot to the desktop; on the second attempt it did the same thing. And then when I asked it to correct the code, it said it was good. Finally I had ChatGPT write it in like 5 seconds and it worked perfectly. I'm going to do a few more tests, but I'm not sure at this point.
@Lorant1984 11 months ago
Have you run further tests? If so, how did Llama fare against ChatGPT?
@roymikael4888 11 months ago
Great info. Keep up the good work.
@matthew_berman 11 months ago
Thanks!
@user-pp9xk8kb2r 11 months ago
Awesome video! Keep up the great work!
@SanctuaryLife 11 months ago
Great Job Matt!
@mikegodfrey4482 11 months ago
The error message I am getting is this: "ERROR: Could not find repositories/exllama/. Make sure that exllama is cloned inside repositories/ and is up to date." It's up to date and it's in my Anaconda folder. Any ideas?
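A fix several people reported for this particular error at the time was to clone the upstream exllama repo into the repositories/ folder that the loader looks for, inside the text-generation-webui install itself (not the Anaconda folder). This is only a sketch; it assumes the older ExLlama (v1) loader, an NVIDIA/CUDA setup, and that text-generation-webui lives in your home directory, so adjust paths to your install.

```bash
cd ~/text-generation-webui
mkdir -p repositories && cd repositories
git clone https://github.com/turboderp/exllama
pip install -r exllama/requirements.txt  # ExLlama needs CUDA; it won't build without an NVIDIA GPU
```

Newer builds of the web UI bundle ExLlamaV2 instead, in which case simply updating text-generation-webui may be the easier route.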
@shootdaj 11 months ago
Thank you so much Matthew! You're a godsend 🙏
@personone6881 11 months ago
Stupid question: when you first bring up the Anaconda Prompt and you change the directory from your user folder on C: to a top-level path on D: (C:\Users\mberm>d:) - is that an external drive? If so, more importantly for me to understand: is that the directory where you installed your initial anaconda3 installation? Just to be clear, what path did you install it to? I've just done mine on C:\anaconda3 ... have I fekked up already?
@Bundit_Buddhahai 4 months ago
Thank you for your great video. Just asking: where is the downloaded WizardCoder-Python-13B-V1.0-GPTQ model located in the Windows directory structure?
@emil5684 11 months ago
There's no link to install PyTorch. This worked for me: conda install pytorch torchvision torchaudio pytorch-cuda -c pytorch -c nvidia
@caseyclayton01 11 months ago
On my 4090 I can get the 34B "working", but it seems to run out of memory pretty quickly. With only around 52 tokens of input it was running out for me, and with less input, at 12 tokens, it was super slow and again errored out before completing the response. I was using the Phind v2 CodeLlama 34B.
@perc-ai 11 months ago
Your RAM is the bottleneck.
@muneerraza8521 11 months ago
Subscribed and enabled notifications for all videos. Thanks from Pakistan !
@coolmn786 11 months ago
I wish I could like this video x10. Brilliant video, thanks man!
@wrOngplan3t 11 months ago
Linux Mint 21.1. This is the first step-by-step that went without a hitch! The only thing was that the very last input field was different. From the "Prompt" drop-down list at the bottom I used "Alpaca-with-input" as a template and changed it to yours (I'm lazy, easier editing lol). I'm not sure if I have to do this every time though. Also, I'm not familiar with the somewhat strange-looking format ("Below is an instruction that describes a task", etc.). Any more info on that, btw? I'm not familiar with Python either; my amateur coding language of choice is Processing Java (Java is close enough), and some C++ for Arduino. My simple first coding test is like yours: output integers 1-100 inclusive, with the added sum of them all. Worked great! Great tutorial, awesome stuff! Thanks!
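For anyone wondering about that format: it is the standard Alpaca instruction template, which the "Alpaca-with-input" preset fills in. Roughly (a sketch - the exact wording can vary slightly between presets and versions):

```
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Input:
{input}

### Response:
```

Instruction-tuned models like WizardCoder were trained on prompts in this shape, which is why picking a matching template matters for output quality.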
@ML-ud5pf 11 months ago
Very nice instructional video! Which AI tool would be best suited for non-programmers to create their own Python code (e.g. for developing an API client)? I find that GPT-4 still requires me to know coding and that I cannot efficiently write code by prompting AI. Any advice on which tool to use for that case would be much appreciated!
@itlackey1920 11 months ago
Thank you this is very straightforward and helpful as always! Now to figure out how to get it to run on my Arc770 🤔
@hiroroong693 11 months ago
Great tutorial! I love the clear steps. The current prompt gives a single answer; how do I change it to a conversational style with memory of past content?
@dirtyPeter2 10 months ago
Love it. Thanks!
@saravanajogan1221 11 months ago
Thank you keep up the good work 👏
@matthew_berman 11 months ago
Thanks!
@ricardo_cravo 11 months ago
Thank you! I love you! Great video! It worked!
@rforestier 11 months ago
Superfan of the channel, I would like to see a video of LLaVA: Large Language and Vision Assistant
@matthew_berman 11 months ago
Second time I’m hearing about it, I’ll have to check it out!
@OriginalRaveParty 11 months ago
You're a boss. Thank you very much 👍
@jareddinwiddie2332 a month ago
I installed Conda but it doesn't open. When I grab the window before it closes, it stays as a plain command prompt, not Conda.
@theresalwaysanotherway3996 11 months ago
You might want to mention that the max new tokens/context length can both be set to 4096 because it's LLaMA 2, and also that the 34B is the one that competes with GPT-4; the 13B is not as good.
@keylanoslokj1806 11 months ago
What PC can run it, though?
@CoffeeblackUk 11 months ago
I have a pretty awesome bug. If I ask it a question, the first answer is spot on. Then if I ask another and click Generate, it forgets what we were talking about, and if I click Continue, it gives me a sales speech about using Google Assistant. :-) Good stuff though.
@csabakallaicranq7055 11 months ago
Great video!
@REDULE26 11 months ago
Nice tutorial ^^
@matthew_berman 11 months ago
Thank you!
@enthrax1639 11 months ago
Hi Matthew... appreciate your work... thanks... One question: how do you choose which version would be best for one's PC or laptop?
@AMindInOverdrive 8 months ago
Any time I'm following these tutorials I always get an error that nobody else in the universe gets, LOL. I was missing git - after a quick Google search I found that I needed to download and install it. Thanks for your hard work making these videos for noobs like me ;-) Appreciate you, man. Edit: Finally got to the end, but mine displays no code in the response... not sure why, but I'll try to figure it out LOL
@KevinTheCardigan 8 months ago
The video is terrible because the packages and lines are outdated. I follow the instructions step by step, and get an error regarding typing_extensions being the wrong version. I downgrade to an appropriate version, and it only causes another error, which requires an upgrade on my pytorch. Upgrading pytorch then makes my typing_extensions obsolete. I'm going in circles and I hate the uploader because I spent my entire day on their terrible instructions.
@alx8439 11 months ago
Each time I hear that a quantized version doesn't lose a lot of quality, I ask how people back that statement :) In my experience, quantization is like a soft form of lobotomy - all the good stuff you see on leaderboards just fades away when you take a closer look at the quantized version of the same-size model.
@scuzzynate11 8 months ago
Hey Matthew - any chance you can drop the instructions you mentioned at 2:19? Not seeing the comment on my end here.
@DariuszMakowski 10 months ago
Did you do any videos about how to train on top of that model using PDFs or C++/Python source files, to do our own fine-tuning?
@stephenthumb2912 11 months ago
very easy to miss the torch install, thanks for pointing that out.
@jeremywatson 11 months ago
Awesome work. Is it possible to do the same setup on a Mac? I saw your video on how good M1/M2 is. Looking forward to your next video nevertheless! I love the no-nonsense, this-is-how-it-is approach, i.e. no drivel!
@genebeidl4011 11 months ago
The WizardLM models don't load for me as it says the header is too large for both the 34B and 13B models. I have a 4090 so at a minimum the 13B model should load. TheBloke doesn't seem to have a 34B quantized model.
@grizzlybeer6356 11 months ago
You are a bad influence on me, Matt; because of you I am literally obsessed with anything AI. In fact you were such a bad influence on me that I took all the machine-learning-related certificate courses on Coursera and spent many hours listening to Andrew Ng..... lol. Love you bro! Thanks for being such a bad influence!
@mercadolibreventas 11 months ago
Hi Matthew, can you do a video about Docker Desktop, which now seems to be integrated into VS Code? It seems so much cleaner; it loads the container directly into it. I have been using a NAS Docker and it always seems to have issues, especially with so many projects and dependencies; the computer begins to lag and eventually I need to format and reinstall everything. Can you help with a video on how to do all these projects you keep putting out the proper way, so we spend more time understanding, utilizing, and building learning patterns? Organizing all these tests, and using the NAS only to store the containers/images. Thanks!
@user-jn8ht9ww4q 6 months ago
I got this error: "ImportError: DLL load failed while importing exllamav2_ext: The specified module could not be found." Does anyone know how to solve that?
@anamnaeem3399 10 months ago
I am facing the following error: OSError: [WinError 127] The specified procedure could not be found. Error loading "C:\ProgramData\miniconda3\envs\textgen\lib\site-packages\torch\lib\nvfuser_codegen.dll" or one of its dependencies. Anyone who knows a fix for this?
@almahmeed 10 months ago
Hi .. This is really helpful .. Just a question, how can I remove the models that did not work for me as I was testing with many options?
@matthew_berman 10 months ago
Go into the install folder and look for the "models" folder.
@almahmeed 10 months ago
@@matthew_berman Thank you so much, Mathew .. I hope I can be sharing some results soon :)
@yogenghodke 6 months ago
White Bobby Deol
@RichardGetzPhotography 11 months ago
LMAO!! 5:28 are you looking at someone who might be a bit too creative with their code :)
@user-kt6ee3ue1l 10 months ago
Great video! Question: what is the difference between GPTQ and AWQ models?
@diegoigr7 7 months ago
If you are having this error "ImportError: cannot import name 'Doc' from 'typing_extensions'" simply upgrade this library: "pip install typing_extensions==4.8.0 --upgrade" =)
@mikegodfrey4482 11 months ago
I'm at the last step, loading the model with a model loader. I'm using Llama 2 3B to test on a laptop. I've tried every model loader and can't seem to get it to work.
@Tr3kkR 8 months ago
I'm getting an error for the install of cchardet (as you did, noted in the build instructions on your Git repo) and am installing the Visual Studio Build Tools 2022. cchardet requires Microsoft Visual C++ 14.0 or greater. Since you probably already had this installed, you may have had a different error.
@mohamed_salah3165 6 months ago
Where are the links you said you were going to put in the comments/description?
@shukanimator 11 months ago
I've been using ChatGPT to write Python scripts for a few months and it's almost always necessary to tell GPT what errors happened or copy in some API or library documentation so it can fix the code. Is there a way to use this WizardCoder model and do a back and forth with the errors so that the AI can fix stuff? I have it running on the WebUI, and it's been able to generate a few working Python scripts using the 'default' method you demoed, but when the code doesn't work, the 'chat' mode isn't able to write code as well. It's almost as if it's not running the same model when in 'chat' mode.
@mikegodfrey4482 11 months ago
Another error I'm getting is "Failed building wheel for cchardet". Frustrating.
@peterpavelka8565 11 months ago
same here, have you managed to fix it?
@mikegodfrey4482 11 months ago
@peterpavelka8565 Nope, unfortunately. Looking at some alternatives.
@duanelawrence4311 11 months ago
I never got Code Llama working based on this video. It never appeared to install Llama, just the prerequisites.
@AntmanClashBro 11 months ago
Hi Matt, I am struggling a lot to follow along with the text-generation-webui install. My error is "Failed building wheel for llama-cpp-python" when following the steps after pip install -r requirements_nocuda.txt. Any ideas? I am on a Mac with a 2.9 GHz dual-core Intel Core i5.
@PomStas 9 months ago
I don't know why, but I'm getting a "module can't be found" error when I try to load a model.
@vasile2321 6 months ago
I've got an i7-6700 + 16 GB RAM + GTX 1650 and it's running very slowly... What configuration do you have to run this?
@KratomSyndicate 11 months ago
The issue is that its latest Python is 3.9, CUDA is 10.2, and the PyTorch version is 1.9.0, so it's a few years old :( Same as GPT-4 - my biggest issue with ChatGPT is that it just doesn't have new info to reference, like GitHub projects that are out now but weren't when it was trained.
@RedCloudServices 10 months ago
Matthew, I have an old git repo that I would like to revive with a different backend. Would Code Llama allow me to upsert a repo and improve the code based on instructions?
@nicolasottavi9158 11 months ago
Great! If I understood correctly, it only works for Python code generation? No PHP or JS?
@NoName-bu5cj 11 months ago
Why don't you use the dockerized text-generation-webui? They have an image for that. Much easier and no hassle.
@korchi 11 months ago
Hey Matt, is there a way you can use the local GPT project with Code LLaMA and ingest your code? The use case would be asking Code LLaMA to add a function or change a function in your code. It would be great if you could do a video using CPU only (preferably on a Mac).
@modolief 11 months ago
Matthew, sorry to bother you, but can you talk about, or give me some feedback about *conda* ? I used *pyenv* on my Mac about 2 years ago to install Python. I don't know how to square that with your comments about conda.
@gazzalifahim 11 months ago
Just A-W-E-S-O-M-E!! I have 2 questions. My PC currently has Python 3.11; if I install Python 3.10 via Conda, will there be any conflict? (I am a noob at this Python stuff) 😅 Second, I have an Nvidia 940MX in my laptop; can I run anything greater than 13B parameters?
@MrDataStorm007 11 months ago
Thank you so much !
@antonkozyk 9 months ago
How can I use this LLM not only in the text-generation-webui but in my own apps? Is there some API, or how does it work?
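Recent versions of text-generation-webui can expose an OpenAI-compatible HTTP API when the server is started with the --api flag, which is one way to call the model from your own apps. A sketch, assuming a current build where the API listens on port 5000 (older builds used a different extension and port, so check the docs for your version):

```bash
# Start the server with the API enabled:
#   python server.py --api
# Then call the OpenAI-compatible chat endpoint:
curl http://127.0.0.1:5000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Write a Python function that reverses a string."}], "max_tokens": 200}'
```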
@Person-hb3dv 8 months ago
Is there a way to integrate the model or the web ui into vscode using some extension or something? It would be really nice if the model could have access to my code so it can use it as context.
@echofloripa 11 months ago
Great video, thanks for the work. Do you think it will work on a 6 GB GPU? I tried LLaMA 7B and it worked - super slow, but it worked 😅
@christopherbrown1187 3 months ago
It does not work for me. EnvironmentNameNotFound: Could not find conda environment: tg. You can list all discoverable environments with `conda info --envs`.
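That error just means the name passed to conda activate doesn't match any environment that was actually created. A quick sketch of how to check (the name "textgen" is only an example - use whatever name you created in the first step):

```bash
conda info --envs                        # list the environments that actually exist
conda create -n textgen python=3.10 -y   # create it if it's missing
conda activate textgen
```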
@pathead 11 months ago
A+ Video!!
@goonie79 11 months ago
Thank you for such a great video. Is there a way to use Colab for this project?
@ulfschack 11 months ago
So what if you don't have Nvidia? (I use Stable Diffusion on my M2 MacBook, and it's fast.)
@chrisdorian4068 11 months ago
How about doing a video on installing and fine tuning Llava?
@matthew_berman 11 months ago
Did you mean Llama? Not sure what Llava is.
@radioreactivity3561 11 months ago
@matthew_berman LLaVA is LLaMA with computer vision and image captioning.
@qmameraesto 11 months ago
When running Code Llama locally, is there any way for it to have the context of my entire codebase? I am exploring this option to be able to find bugs and quickly address them, and from what I have seen with both this Web UI and ChatGPT, we can only provide context via the prompt message. Has anyone tried this?
@MyWhyAI 10 months ago
Great tutorial! I ran into a subprocess-exited-with-error; I needed to install Microsoft Visual C++ 14.0 or higher. Hope this helps someone who runs into this error. The commands you show on screen are not in the description of the video, and you have a spelling mistake in one of the commands at 2:25 ("tortch"). In the parameters it only lets me set it to 4096; how do I set it to 16k?