META's New Code LLaMA 70b BEATS GPT4 At Coding (Open Source)

78,214 views

Matthew Berman


4 months ago

Join My Newsletter for Regular AI Updates 👇🏼
www.matthewberman.com
Need AI Consulting? ✅
forwardfuture.ai/
Rent a GPU (MassedCompute) 🚀
bit.ly/matthew-berman-youtube
USE CODE "MatthewBerman" for 50% discount
My Links 🔗
👉🏻 Subscribe: / @matthew_berman
👉🏻 Twitter: / matthewberman
👉🏻 Discord: / discord
👉🏻 Patreon: / matthewberman
Media/Sponsorship Inquiries 📈
bit.ly/44TC45V
Links:
MassedCompute - bit.ly/matthew-berman-youtube USE CODE "MatthewBerman" for 50% discount
ollama.ai/library/codellama
ai.meta.com/blog/code-llama-l...
huggingface.co/papers/2308.12950
huggingface.co/codellama/Code...
ai.meta.com/llama
github.com/defog-ai/sqlcoder
/ 1752329471867371659
zuck/posts/1...
ai.meta.com/resources/models-...
/ 1752013879532782075
Disclosures:
I am an investor in LMStudio

Comments: 215
@matthew_berman
@matthew_berman 4 months ago
I'm creating a video testing Code LLaMA 70b in full. What tests should I give it?
@santiagomartinez3417
@santiagomartinez3417 4 months ago
Metaprogramming or transfer learning.
@johnclay7422
@johnclay7422 4 months ago
those which are not available on youtube.
@SteveSimpson
@SteveSimpson 4 months ago
Please show how to add the downloaded model to LMStudio. I add the downloaded model to a subdir in LMStudio's models directory but LMStudio doesn't see it.
@so_annoying
@so_annoying 4 months ago
What about writing a Kubernetes operator 🤣
@OwenIngraham
@OwenIngraham 4 months ago
Suggesting optimizations on existing code, preferably with context across many code files.
@ordinarygg
@ordinarygg 4 months ago
It worked, you just have a bad driver.
@bradstudio
@bradstudio 4 months ago
Could you make a video on how to train an LLM on a GitHub repo and then be able to ask questions and instruct it to make code, for example, a plug-in?
@lironharel
@lironharel 4 months ago
Thanks for actually showing the errors you encountered and keeping it as real as possible! Great and enjoyable content❤
@jacobnunya808
@jacobnunya808 4 months ago
True. Keeps expectations realistic.
@EdToml
@EdToml 4 months ago
Mixtral 8x7B was able to build a working snake game in python here...
@efifragin7455
@efifragin7455 4 months ago
Can you share exactly which model it was? I'm also looking for a model that can run on my PC (i9-11K, RTX 3060, 16 GB) so I can code and make programs like Snake.
@EdToml
@EdToml 4 months ago
@@efifragin7455 I have a 7700 CPU with 64 GB of 5600 memory and an RX 6600 XT (8 GB) GPU, and am using ROCm 5.7. The model is Mixtral 8x7B Q4_K_M (TheBloke on Hugging Face). Using llama.cpp, about 7 GB gets loaded onto the GPU with about 26 GB in CPU memory.
@charlies4850
@charlies4850 4 months ago
@@efifragin7455Use OpenRouter
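For readers curious what the snake-game test in this thread actually asks of a model, the core logic can be sketched without graphics. This is a hypothetical minimal version for illustration, not any model's actual output; a real answer would add pygame or curses rendering on top:

```python
from collections import deque


class Snake:
    """Minimal grid-based snake-game logic (no rendering)."""

    def __init__(self, width: int, height: int):
        self.width, self.height = width, height
        self.body = deque([(width // 2, height // 2)])  # head is body[0]
        self.food = (0, 0)
        self.alive = True

    def step(self, dx: int, dy: int) -> None:
        """Advance one tick in direction (dx, dy)."""
        if not self.alive:
            return
        head_x, head_y = self.body[0]
        new_head = (head_x + dx, head_y + dy)
        # Die on wall collision or self collision.
        if (not (0 <= new_head[0] < self.width and 0 <= new_head[1] < self.height)
                or new_head in self.body):
            self.alive = False
            return
        self.body.appendleft(new_head)
        if new_head == self.food:
            self.food = None  # caller places new food; snake keeps the extra cell
        else:
            self.body.pop()  # no food eaten: length unchanged
```

The interesting part for benchmarking is exactly this state machine (growth, wall death, self collision); models that get the rendering right but the collision logic wrong produce the half-working games seen in the video.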
@auriocus
@auriocus 4 months ago
The error comes from libGL failing to load and is clearly NOT in the code that CodeLlama wrote. It's a problem with your machine's graphics drivers.
@theguildedcage
@theguildedcage 4 months ago
I appreciate your disclosure. I intend to check this out.
@TubelatorAI
@TubelatorAI 4 months ago
0:00 1. Meta's New Code Llama 70B 👾 — Introduction to Meta's latest coding model, known for its power and performance.
0:22 2. Testing Code Llama 70B with Snake Game 🐍 — The host plans to test the model's capabilities by building the popular Snake game.
0:25 3. Announcement by AI at Meta 📢 — AI at Meta announces the release of Code Llama 70B, a more performant version of their code-generation LLM.
0:56 4. Different Versions of Code Llama 70B 💻 — An overview of the three versions: base model, Python-specific model, and Instruct model.
1:21 5. License and Commercial Use 💼 — Confirmation that the models are available for both research and commercial use, under the same license as previous versions.
1:40 6. Mark Zuckerberg's Thoughts 💭 — Mark Zuckerberg shares his thoughts on the importance of AI models like Code Llama for writing and editing code.
2:37 7. Outperforming GPT-4 🎯 — A comparison of Code Llama 70B and GPT-4 on SQL code generation, where Code Llama 70B comes out the clear winner.
3:25 8. Evolution of Code Llama Models ⚡ — An overview of the various Code Llama releases, highlighting the capabilities of the 70B model.
4:21 9. Using Ollama with Code Llama 70B 🖥 — Integration of Code Llama 70B with Ollama for seamless code generation and execution.
5:18 10. Testing a Massive Quantized Version 🧪 — The host tests a large quantized build and shares the requirements for running it.
5:47 11. Selecting GPU Layers — Choosing the appropriate number of GPU layers for better performance.
6:08 12. Testing the Model — Running a test to ensure the model is functioning correctly.
6:43 13. Running the Test — Asking the model to generate code for a specific task.
7:27 14. Generating Code — Observing the model's output and judging its effectiveness.
8:16 15. Code Cleanup — Removing unnecessary code and preparing the generated code for execution.
8:40 16. Testing the Generated Code — Attempting to run the generated code and troubleshooting errors.
9:09 17. Further Testing — Continuing to experiment with the generated code to improve its functionality.
9:15 18. Verifying Code Llama 70B's Capabilities — Acknowledging that the model has successfully generated working code.
9:20 19. Conclusion and Call to Action — Encouraging viewers to like, subscribe, and anticipate the next video.
Generated with Tubelator AI Chrome Extension!
@geographyman562
@geographyman562 4 months ago
It would be great to get a price breakdown of how much computer you need to run these locally, and how those ranges compare to the VM-host options.
@marcinkrupinski
@marcinkrupinski 4 months ago
Cool, all the time we get better and better open source models!
@allenbythesea
@allenbythesea 4 months ago
awesome videos man, I learn so much from these. I wish there were models tuned for c# though. Very few of us create large applications with python.
@mrquicky
@mrquicky 4 months ago
It is surprising that the DeepSeek Coder 6.7B model was not listed in the rubric, though I recall Matthew reviewing it and confirming that it did create a working version of the Snake game. That was the most interesting part of the video for me: seeing that it is not even being ranked anymore. I'd assume a 70-billion-parameter model would use more memory and run more slowly than a 6.7-billion-parameter one.
@RevMan001
@RevMan001 4 months ago
If I can download the model from TheBloke, why do I have to apply for access from Meta? Is it just for the license?
@technerd10191
@technerd10191 4 months ago
For LLM and CodeLlama inference, the M3 Max with 64 GB of unified memory (about 50 GB actually usable) seems promising. So it would be interesting to see how Macs perform on quantized 70B-param LLMs...
@Phasma6969
@Phasma6969 4 months ago
Bro do you mean DRAM, not "unified memory"? Lol wut
@TDKOnafets
@TDKOnafets 4 months ago
@@Phasma6969 No, its not DRAM
@DanteS-119
@DanteS-119 4 months ago
Why don’t you just build a server with a decent beefy GPU and then hundreds of gigs of RAM? Genuine question, I love Apple Silicon just as much as the next guy
@VuLamDang
@VuLamDang 3 months ago
@@DanteS-119 the power draw will be too high. M series chips are scarily efficient
@NicolasSouthern
@NicolasSouthern 3 months ago
@@DanteS-119 I think it's because if you don't have enough ram on the GPU itself, it'll start processing on the CPU which is extremely slow. The apple silicon has the unified memory, so the GPU can access it with very little bottleneck. I believe theoretically, you could build a Mac with a hundred gigs of unified memory, and be able to load the largest models out there. If you wanted to load the largest models into a GPU memory you'd need to find ones with 24-48GB of ram (not the lower level cards). Having 128GB of system ram would not help, as the GPU can't really utilize that. The apple silicon is a bit different, there isn't really a "GPU", but there are processor cores that are built for GPU functions. A lot like integrated graphics on intel chips, you can run a desktop environment without a GPU, because the intel chips have limited capabilities. Apple just blew that concept out of the water with their graphics chips.
@user-iq8lr8wc8l
@user-iq8lr8wc8l 4 months ago
I'm running LM Studio on my system running Debian 12 (Bookworm) and it's running well. I really want to be able to run models locally on this system to do my work when I'm home. Any ideas about local models, etc., would be helpful.
@MrDoobieStooba
@MrDoobieStooba 4 months ago
Thanks for the great content Matt!
@kevyyar
@kevyyar 4 months ago
Can you create a video on how to set up these LLMs in VS Code with extensions like Continue, Twinny, etc.? I have downloaded Ollama and the models I need, but I'm not sure how to configure them to run in the VS Code extensions.
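For the Continue extension specifically, pointing it at a local Ollama model is usually a small config change. A sketch of the `models` entry in `~/.continue/config.json` follows; exact field names vary between Continue versions, so treat this as an assumption to check against the extension's own documentation:

```json
{
  "models": [
    {
      "title": "CodeLlama 70B (local)",
      "provider": "ollama",
      "model": "codellama:70b-instruct"
    }
  ]
}
```

With Ollama running locally and the model pulled, the extension can then be switched to this entry from its model dropdown.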
@pcdowling
@pcdowling 4 months ago
I have codellama 70b working well on ollama. Rtx 4090 / 7950x / 64gb. The newest version of olama uses about 10-20% gpu utilization and offloads the rest to the cpu, using about 55% of the cpu. Overall it runs reasonably well for my use.
@freedom_aint_free
@freedom_aint_free 3 months ago
What is its context window? How big is the code it can generate? Is it accurate?
@jackonell1451
@jackonell1451 4 months ago
Great vid ! What's "second state" though ?
@aboghaly2000
@aboghaly2000 4 months ago
Hello Matthew, great job on your work! Could you please compare the performance of Large Language Models on Intel, Nvidia, and Apple platforms?
@stargator4945
@stargator4945 4 months ago
I used a Mixtral Instruct 8x7B model and it was quite good, especially with languages other than English. So would this 70B model actually be better?
@voncolborn9437
@voncolborn9437 4 months ago
Matt, you mentioned you were using a VM from Mass Compute with the model pre-installed. Who are they? So to be clear, you were not running the VM locally, right?
@cacogenicist
@cacogenicist 4 months ago
Sure would be cool to be able to run this on my own hardware. So, what are we talking VRAM-wise? 92GB do it? ... sadly, I don't have a couple A6000s sitting around.
@warezit
@warezit 4 months ago
🎯 Key Takeaways for quick navigation:
00:00 🚀 Meta's Code LLaMA 70b Announcement
- Meta announces its most powerful coding model yet, Code LLaMA 70b, open-source and designed for coding tasks.
- The model comes in three versions: the base model, a Python-specific variant, and an instruct model optimized for following instructions.
- Code LLaMA 70b is notable for its human-evaluation performance and a license permitting both research and commercial use.
02:31 💾 SQL Coder 70b Performance Highlights
- SQL Coder 70b, fine-tuned on Code LLaMA 70b, shows superior performance on PostgreSQL text-to-SQL generation.
- The model outperforms all publicly accessible LLMs, including GPT-4, by a significant margin on SQL eval benchmarks.
- Rishab from Defog highlights the model's effectiveness and its open-sourcing on Hugging Face.
03:39 📈 Code LLaMA 70b Technical and Access Details
- Introduction of Code LLaMA 70b as a powerful software-development tool, with easy access and a license covering research and commercial use.
- Details on the expansion of the Code LLaMA series, including future plans for LLaMA 3 and the model's exceptional benchmark results.
- Mention of MassedCompute support for testing the model, and an overview of the quantized version's requirements.
06:11 🐍 Testing Code LLaMA 70b with a Snake Game
- Demonstration of the model's capabilities by asking it to write a Snake game in Python on a cloud-based virtual machine.
- Highlights of the model's potential and limitations on complex code-generation tasks, and the practicalities of running such a model.
- Disclosure of the author's investment in LM Studio for full transparency.
Made with HARPA AI
@vishnunallani
@vishnunallani 4 months ago
What kind of machine is needed to run these types of models?
@countofst.germain6417
@countofst.germain6417 4 months ago
I just found this channel, it is great to see an AI channel that actually knows how to code.
@elon-69-musk
@elon-69-musk 4 months ago
give more examples and thorough testing pls
@dungalunga2116
@dungalunga2116 4 months ago
I’d like to see you run it on your mac.
@sumitmamoria
@sumitmamoria 4 months ago
Which version will run reasonably fast on rtx3090 ?
@endgamefond
@endgamefond 4 months ago
What virtual computer do you use?
@janalgos
@janalgos 4 months ago
It still underperforms GPT-4 Turbo, though, right?
@user-iq8lr8wc8l
@user-iq8lr8wc8l 4 months ago
I don't see the link to MassedCompute.
@zkmalik
@zkmalik 4 months ago
yes ! please make more on the new llama model!
@hishtadlut1005
@hishtadlut1005 4 months ago
Did the snake game work in the end? What was the problem there?
@DevPythonUnity
@DevPythonUnity 4 months ago
How does one become an investor in LM Studio?
@dgiri2333
@dgiri2333 2 months ago
I need Ollama for text (NLP) to SQL queries, or natural language to Django ORM. Are there any LLMs for that?
@william5931
@william5931 4 months ago
can you make a video on mamba?
@1986hr
@1986hr 4 months ago
How well does it perform with C# code?
@hqcart1
@hqcart1 4 months ago
It's trained on Python; it might not be as good for C#.
@Leto2ndAtreides
@Leto2ndAtreides 4 months ago
Wonder if the Macbook guy was running a quantized version or not. The maxed out M3 Macbook has a 128GB option also.
@frankjohannessen6383
@frankjohannessen6383 4 months ago
An unquantized 70B would probably need around 150 GB of RAM.
@dungalunga2116
@dungalunga2116 4 months ago
Would it run on an M3 Max with 36 GB of RAM?
@BlayneOliver
@BlayneOliver 4 months ago
That’s you just flexing 😅
@Noobsitogamer10
@Noobsitogamer10 3 months ago
Coding battles, LLaMA crushes with its mad skills yet stays so chill.
@user-bd8jb7ln5g
@user-bd8jb7ln5g 4 months ago
So you are an investor in LM Studio, perfect. Can you please tell them to allow increasing the font size? My vision vacillates between good and poor, and sometimes I have problems reading LM Studio's text. BTW, I'm seeing LM Studio's release frequency ramping up 👍
@DailyTuna
@DailyTuna 4 months ago
Your videos are awesome!
@matthew_berman
@matthew_berman 4 months ago
Glad you like them!
@janfilips3244
@janfilips3244 4 months ago
Matthew, is there a way to reach out to you directly?
@matthew_berman
@matthew_berman 4 months ago
My email is in my bio.
@Pithukuly
@Pithukuly 4 months ago
I'm mostly looking for a model that can generate Vertica SQL syntax.
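Dialect-specific SQL (Vertica, PostgreSQL, etc.) is usually steered through the prompt rather than a dedicated model. As a generic illustration of how text-to-SQL prompts are assembled — this mirrors the common shape of such prompts, not sqlcoder's or any specific model's required template:

```python
def build_sql_prompt(question: str, schema: str, dialect: str = "PostgreSQL") -> str:
    """Assemble a text-to-SQL prompt for a code LLM.

    The dialect name (e.g. "Vertica") is injected into the instruction text;
    ending with a bare SELECT nudges the model to complete a query rather
    than write prose.
    """
    return (
        f"### Task\nWrite a {dialect} query that answers the question below.\n\n"
        f"### Schema\n{schema}\n\n"
        f"### Question\n{question}\n\n"
        f"### Answer\nSELECT"
    )
```

Usage: pass the `CREATE TABLE` statements for the relevant tables as `schema` and set `dialect="Vertica"`; the returned string is what gets sent to the model.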
@AliYar-Khan
@AliYar-Khan 4 months ago
How much compute power it requires to run locally ?
@MelroyvandenBerg
@MelroyvandenBerg 4 months ago
The RAM requirement was already stated. As for GPU: you need about 48 GB of VRAM to fit the entire model, which means 2x RTX 3090 or better. You could also run CPU-only, depending on the CPU, but I think that will result in about 1 token per second. Hopefully we'll soon have ASICs, since I don't think GPUs can keep up.
@synaestesia-bg3ew
@synaestesia-bg3ew 4 months ago
@MelroyvandenBerg, this is quite sad.
@osamaa.h.altameemi5592
@osamaa.h.altameemi5592 4 months ago
can you share the link for mass-compute? (the ones who provided the VM)
@kenhedges
@kenhedges 4 months ago
It's in the Description.
@LuckyLAK17
@LuckyLAK17 2 months ago
...please, a test with installation/access instructions would be great. Thanks.
@emmanuelgoldstein3682
@emmanuelgoldstein3682 4 months ago
GPT-4 ranks at 86.6 on HumanEval versus CodeLlama's 67.8. Meta used the zero-shot numbers for GPT-4 in their benchmark comparison, which is pretty dishonest.
@michaeldarling5552
@michaeldarling5552 4 months ago
🙄👆👆👆👆👆👆👆👆👆👆👆👆👆
@romantroman6270
@romantroman6270 4 months ago
They used GPT-4's HumanEval score from all the way back in March.
@michaelpiper8198
@michaelpiper8198 4 months ago
I already have a setup that can code snake that I plug into AI so this should be amazing 🤩
@DanOneOne
@DanOneOne 4 months ago
I asked it to write a program to connect Bluetooth 3D glasses to a PC. It responded: "It's not a good idea, because Bluetooth is limited to 10 m. Use Wi-Fi." I said: "10 m is good enough for me, please write this program." "OK, I will." And that was it 😆
@CV-wo9hj
@CV-wo9hj 4 months ago
Would love to see you running it locally. What specifications are needed?
@footube3
@footube3 3 months ago
At 4 bit quantisation (the most compression you'll really want to perform) you'd need a machine with 35GB of memory in order to run it (whether its CPU RAM, GPU RAM or a mixture of the two). For it to be fast you need that memory to be as high bandwidth as possible, where GPUs are generally the highest bandwidth, but where some CPUs have pretty high memory bandwidth too (e.g. Mac M1/M2/M3 & AMD Epyc).
@CV-wo9hj
@CV-wo9hj 3 months ago
@@footube3 gah when I got my Mac Studio M2, I couldn't imagine why I needed more than 32 gigs 🤦
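The back-of-the-envelope math behind memory figures like those above: weight memory is roughly parameter count × bits per weight / 8, plus some overhead for the KV cache and runtime buffers. A small sketch — the ~20% overhead factor is an assumption for illustration, not a measured value:

```python
def approx_model_memory_gb(n_params: float, bits_per_weight: int,
                           overhead: float = 1.2) -> float:
    """Rough memory needed to hold model weights at a given quantization.

    n_params: parameter count (e.g. 70e9 for a 70B model)
    bits_per_weight: 16 for fp16; 8, 5, or 4 for common quantizations
    overhead: fudge factor for KV cache and runtime buffers (assumed ~20%)
    """
    weight_bytes = n_params * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

# 70B at 4-bit: 35 GB of weights alone, ~42 GB with the assumed overhead,
# in line with the ~35 GB figure above.
# 70B at fp16: 140 GB of weights, in line with "around 150 GB" estimates.
```

The same arithmetic explains why a 64 GB machine handles a 4-bit 70B model but a 32 GB one does not.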
@DoctorMandible
@DoctorMandible 4 months ago
AI will replace some Jr devs. Never replace coding entirely as you suggest.
@dominick253
@dominick253 4 months ago
I think if anything it will just expose more people to programming. At least that's the effect it had on me. Before, I felt like it was such a huge mountain to climb; now I feel like the AI can do the templates and 90% of the work, and I can focus on getting everything to work together to actually make the project.
@JT-Works
@JT-Works 4 months ago
Never say never...
@starblaiz1986
@starblaiz1986 4 months ago
"Never" is a long time ;) When AI gets to human-level intelligence (likely this year, or at most by the end of the decade), what will stop it from replacing programmers?
@EdToml
@EdToml 4 months ago
I suspect coding will become much more of a collaboration. Less so with poor human coders, and much more so with good and great coders.
@seriousjan5655
@seriousjan5655 4 months ago
@@EdToml Actually, as someone who has made a living from programming for 13 years: writing the actual code is the last step. These models do not know what they are doing; they just have huge sets of probabilities. Last week I spent an hour with 6 colleagues discussing 3 options of approach from technical, economic, and future-development standpoints. No, no replacement by AI. Sorry.
@vcekal
@vcekal 4 months ago
Hey Matt, will you do a vid on the leaked early version of mistral-medium? Would be cool!
@NimVim
@NimVim 2 months ago
How did you manage to get a checkmark? I thought only 100k+ channels and pre-existing businesses could get verified?
@vcekal
@vcekal 2 months ago
@@NimVim I found a security vulnerability in YouTube which allowed me to do that. It's patched now, though.
@harisjaved1379
@harisjaved1379 4 months ago
Matt how do you become an investor in LM studio? I am also interested in becoming an investor
@samuelcatlow
@samuelcatlow 4 months ago
It's on their website
@stickmanland
@stickmanland 4 months ago
Me, looking on with my lovely 3GB Geforce GT 780
@K.F-R
@K.F-R 4 months ago
1. Install Ollama. 2. Run 'ollama run codellama:70b-instruct'. No forms or fees. Two or three clicks and you're running.
@gaweyn
@gaweyn 1 month ago
but why in LM Studio, why not in an open-source project?
@onoff5604
@onoff5604 4 months ago
yes please try it out! (and let us know the results of your experiments with snake please...)
@fabiankliebhan
@fabiankliebhan 3 months ago
Do you plan to test the new mistral-next model available on the LLM Chatbot Arena? It is crazy good. Possibly better than GPT-4.
@LanceJordan
@LanceJordan 4 months ago
What was the secret sauce to "get it to work" ?
@BrianCarver
@BrianCarver 4 months ago
Hey @matthew_berman, love your videos, this one sounds a little different. Are you using AI to generate any parts of your videos now?
@matthew_berman
@matthew_berman 4 months ago
Nope! What sounds different about it?
@first-thoughtgiver-of-will2456
@first-thoughtgiver-of-will2456 4 months ago
Thank you for investing in LM Studio. I regard you as the most transparent AI Engineer journalist (for lack of a better term). Please keep up the important and quality work you've been doing for AI.
@dan-cj1rr
@dan-cj1rr 4 months ago
A dude on YouTube clickbaiting everyone about AI isn't an engineer.
@matthew_berman
@matthew_berman 4 months ago
❤️
@ChrisS-oo6fl
@ChrisS-oo6fl 4 months ago
@@matthew_berman I take it that although you invested in LM Studio, you'll still discuss other projects like oobabooga, open-llm, 02 LM Studio, HuggingChat, silly, or the countless others if there's anything notable to cover, right? Or inform the public of the options and tools that are available? I do use LM Studio, but for some reason I personally don't trust it, especially with any uncensored model. Even as an extremely novice user I find it a little meh, so I often stick with oobabooga for most stuff. I also use other platforms for different use cases, like my Home Assistant LLM API. It's human nature to become biased and unintentionally push, showcase, or primarily feature a resource we are personally invested in. I personally prefer my creators to remain neutral, with diverse content and experiences, unless I sought them out for their products.
@scottamolinari
@scottamolinari 4 months ago
Can I make a request? If you are going to highlight the text you are reading, just highlight the whole sentence with click and drag (which you do later in the video) and get rid of that highlighted cursor.
@theresalwaysanotherway3996
@theresalwaysanotherway3996 4 months ago
I'd be very interested to see this compared to the current best open-source programming model (excluding the recent alpha Mistral-medium leak), DeepSeek 33B. As far as I can tell it's not as good, but maybe this 70B really is the new front-runner.
@fbravoc9748
@fbravoc9748 4 months ago
Amazing video! How can I become an investor in LMStudio?
@beelikehoney
@beelikehoney 4 months ago
please test this version!
@reynoeka9241
@reynoeka9241 4 months ago
Please test it on a MacBook Pro M2 Max.
@StephenRayner
@StephenRayner 4 months ago
Haven't watched yet, but I really want to fine-tune this bad boy. This will be so nuts!
@michaelestrinone2111
@michaelestrinone2111 4 months ago
Does it support c# and .net 8?
@hqcart1
@hqcart1 4 months ago
No, just Python and JS.
@michaelestrinone2111
@michaelestrinone2111 4 months ago
@@hqcart1 Thank you. I am using GPT3.5 with average success, but it is not up to date with .net 8, and I don't know if open-source LLMs exist that are trained on this framework.
@hqcart1
@hqcart1 4 months ago
@@michaelestrinone2111 Use Phind; it's an online coding AI and free. Its level is somewhere between GPT-3.5 and 4.
@benscottbongiben
@benscottbongiben 4 months ago
It'd be good to see it locally.
@user-iq8lr8wc8l
@user-iq8lr8wc8l 4 months ago
Coding as we know it will be replaced and a new programming paradigm will emerge. This is absolutely wonderful; I'm glad I lived to see this. I've only been experimenting with AI for about 2 months and I can't get enough of it.
@voncolborn9437
@voncolborn9437 4 months ago
And then what? There will only be "programmers" who know how to ask questions and hope they get what they need? That doesn't sound very promising for the future of computing.
@chineseducksauce9085
@chineseducksauce9085 4 months ago
@@voncolborn9437 yes it does
@MetaphoricMinds
@MetaphoricMinds 4 months ago
Thank you! Remember everyone, download while you can. Regulations are on their way!
@TheReferrer72
@TheReferrer72 4 months ago
Don't be silly. These LLMs are not AGI.
@michaeldarling5552
@michaeldarling5552 4 months ago
@@TheReferrer72 You're assuming the government knows the difference!
@TheReferrer72
@TheReferrer72 4 months ago
@@michaeldarling5552 Governments are much smarter than people give them credit for.
@brunobergami6482
@brunobergami6482 2 months ago
"I think this will make programming obsolete" - Matthew. Lol, why do people still believe that full trust in code will be handed to AI?
@knowhrishi
@knowhrishi 4 months ago
We need test video pleaseeeeee
@mazensmz
@mazensmz 4 months ago
Hi Noobi, you need to delete the old prompt before prompting again, because it will consider the old prompts part of the context.
@avi7278
@avi7278 4 months ago
Yeah, because compared to GPT-4 it has the intellect of a chipmunk.
@mirek190
@mirek190 4 months ago
Mixtral 8x7B doesn't have such limitations. You can ask for completely different code later and it's not a problem. I think the Llama 2 architecture is too obsolete now.
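The manual context management being discussed here (clearing old prompts so they don't pollute the next answer) can also be automated. A simplistic sketch of the idea, keeping only the most recent turns within a budget; it uses a character budget as a crude stand-in for real token counting, and is a hypothetical helper rather than any tool's actual API:

```python
def trim_history(messages: list[str], max_chars: int) -> list[str]:
    """Keep only the most recent messages that fit within a character budget.

    Walks the history from newest to oldest, accumulating messages until
    the budget would be exceeded, then returns them in original order.
    """
    kept: list[str] = []
    total = 0
    for msg in reversed(messages):
        if total + len(msg) > max_chars:
            break
        kept.append(msg)
        total += len(msg)
    return list(reversed(kept))
```

A real chat front end would count tokens with the model's tokenizer and usually pin the system prompt, but the truncation logic is the same shape.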
@wayne8797
@wayne8797 4 months ago
Can this run on a M1 Max 64gb MBP?
@cacogenicist
@cacogenicist 4 months ago
Not acceptably, I wouldn't think.
@user-iq8lr8wc8l
@user-iq8lr8wc8l 4 months ago
Ask it what tests to perform!
@MelroyvandenBerg
@MelroyvandenBerg 4 months ago
Let's go Code LLama!
@karolinagutierrez4383
@karolinagutierrez4383 3 months ago
Sweet, this Llama model crushes GPT-4 at coding.
@miguelangelpallares8234
@miguelangelpallares8234 1 month ago
Please test in Macbook Pro M2 Max
@SuperZymantas
@SuperZymantas 4 months ago
Is it better than GPT-3 or 4? Bard and GPT wrote working code too, and it works. Or can this model produce more lines of code?
@GaelNoh
@GaelNoh 4 months ago
Llama is impressive!
@ReligionAndMaterialismDebunked
@ReligionAndMaterialismDebunked 4 months ago
Early crew. Shalom. :3 Noice!
@vladvrinceanu5430
@vladvrinceanu5430 4 months ago
Bro, LM Studio I guess messed something up with the new updates. I cannot run even old models on my MBP 14 Pro M1 (M1 Pro with highest core count) as I was able to before. Improvements to make: being able to use models for scientific purposes, such as generating molecular formulas and so on (there is not a single scientific LLM tool supported in LM Studio, even if the model is available on Hugging Face); and fix GPU Metal support for the M1 MBP 14 — in fact I was able to use it before, but not anymore.
@user-oc2db7nm8o
@user-oc2db7nm8o 3 months ago
LLama is super impressive at coding.
@pebre79
@pebre79 4 months ago
Please run it on M1 and M3.
@montserrathernandezgonzale6856
@montserrathernandezgonzale6856 3 months ago
Looks like GPT-4 is getting put out to pasture.
@Derek-bg6rg
@Derek-bg6rg 4 months ago
This video has me wishing I was a coding LLaMA too.
@MEXICANOGOLD
@MEXICANOGOLD 4 months ago
META made a coding animal cooler than all the rest.
@MelroyvandenBerg
@MelroyvandenBerg 4 months ago
If you are an investor in the project, also put that on screen in the video in the future, not just in the description. OK?
@user-iq8lr8wc8l
@user-iq8lr8wc8l 4 months ago
and try not to piss it off!!!
@user-bc2kc9hn1p
@user-bc2kc9hn1p 4 months ago
Where is the .NET version?
@fuzzylogicq
@fuzzylogicq 4 months ago
A lot of these models seem to assume everything is Python; for most other low-level languages, no model can beat GPT-4. Yet!
@MH-sl4kv
@MH-sl4kv 4 months ago
I'm surprised it didn't refuse and give you a lecture on the ethics of caging snakes and making them move around looking for food in a little box until they run out of room and die. The censorship on AI is getting insane.
@nannan3347
@nannan3347 4 months ago
*cries in RTX 4080*
@mattmaas5790
@mattmaas5790 4 months ago
On the other hand, I have a 4090 but still probably won't use the 70B version, because it'll be slower than a 30B version.
@kyrilgarcia
@kyrilgarcia 4 months ago
Same, but with a 3060. There is no such thing as enough VRAM 🤣
@brunoais
@brunoais 4 months ago
2:14: Not in the near future. AI is still programming worse than a junior programmer. Right now it's almost as good as a code monkey.
@user-gm1im2vx9o
@user-gm1im2vx9o 3 months ago
Enterprise AI FTW!
@virtualassistantbureau
@virtualassistantbureau 4 months ago
Bard is better it said but I use it too
@mickelodiansurname9578
@mickelodiansurname9578 4 months ago
So the virtual rig Matt set up there was $4 USD for 3 hours (from the link in his description). So let's say your working day of coding is 8 hours total where you need an on-demand LLM. Remember, you could host all three Llamas, Mistral, Stable Diffusion — whatever open-source model you want, and likely Llama 3 when it's released — on that rig, though you couldn't run them all at once; you might get 2 running concurrently if you don't dump them both into RAM. Hell, a few coders could get together at this point and do a time-share deal. BUT IT'S FOR LESS THAN A BUCK FIFTY AN HOUR! I spend more on coffee in a week than that!
@FunwithBlender
@FunwithBlender 4 months ago
A bit of an anticlimax at the end there.