NEW Mixtral 8x22b Tested - Mistral's New Flagship MoE Open-Source Model

52,779 views

Matthew Berman

A month ago

Mistral AI just launched Mixtral 8x22B, a massive MoE open-source model that is topping benchmarks. Let's test it!
Join My Newsletter for Regular AI Updates 👇🏼
www.matthewberman.com
Need AI Consulting? ✅
forwardfuture.ai/
My Links 🔗
👉🏻 Subscribe: / @matthew_berman
👉🏻 Twitter: / matthewberman
👉🏻 Discord: / discord
👉🏻 Patreon: / matthewberman
Media/Sponsorship Inquiries 📈
bit.ly/44TC45V
Links:
LLM Leaderboard - bit.ly/3qHV0X7
Mixtral Model - huggingface.co/lightblue/Kara...

Comments: 248
@Wren206 A month ago
Forgot to say: Thank you so much for making these videos and for being so dedicated to them! It means a lot!
@matthew_berman A month ago
You’re welcome!
A month ago
3:05 Actually, the snake is supposed to go through the wall in many snake games. It's even more impressive that the AI added it, since wraparound takes extra code.
@minemakers3 A month ago
fact
@apester2 A month ago
Possible, but it still failed when directly asked to make that not the behavior.
@StevenAkinyemi A month ago
@apester2 No. It would have failed if it was specifically told not to add that behavior. A lot of snake games allow passing through the wall. It is open to interpretation.
@apester2 A month ago
@StevenAkinyemi There were two requests. One was "write snake." If your interpretation is correct, it passed the first request. The second request was "make the game end if it passes out of the window," independent of other games. It failed to do that request.
@StevenAkinyemi A month ago
@apester2 Oh. I missed that.
@MichielvanderBlonk A month ago
The question about the 10-foot hole is exactly how math teachers expect your answer to be. If you make any remarks about common sense, you will be called a smartass and a cheater, so the LLMs are behaving exactly as we teach humans.
@WhyteHorse2023 A month ago
Experienced math teachers would say to state an assumption so as to avoid that.
@DefaultFlame A month ago
@WhyteHorse2023 I think the word you are looking for is "good" math teachers. Experience doesn't improve all teachers; it even makes some of them worse.
@alekjwrgnwekfgn A month ago
And 2 + 2 = white supremacy. Math teachers who don’t know this will be canceled.
@WhyteHorse2023 A month ago
@DefaultFlame Yeah, I guess I assumed teachers learn through experience, but apparently not.
@DefaultFlame A month ago
@WhyteHorse2023 Some do, but they are people, and not all people do. I've had amazing teachers and absolutely horrible teachers, both with many years of experience. Edit: One of the best teachers I've had actually had only one year of experience. He wasn't a math teacher, though; he was really good at communicating, handling the class, and engaging people in the subject.
@En1Gm4A A month ago
These are the OG videos. Thanks, great content!
@RWilders A month ago
Thanks again for the video. For the apple prompt, this one works fine with GPT-4: "Give me ten sentences where each sentence ends with the word apple." Maybe you could use that for your tests. ChatGPT's result:
1. I ventured into the garden to pick the last remaining apple.
2. Upon examining the contents of the pie, I realized it lacked an apple.
3. He couldn't resist adding another slice to his already full plate of apple.
4. As the sun set, the sky's hue reminded me of a golden apple.
5. No matter the question, her answer was invariably, "apple."
6. For his lunch, all he desired was a crisp, sweet apple.
7. Walking through the market, every stall seemed to boast its own variety of apple.
8. It wasn't just any fruit; it was the perfect apple.
9. She decorated the tabletop with a centerpiece featuring an ornate bowl and a single apple.
10. In his tale, the magic was always in the mystical apple.
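Prompts like this one can be scored mechanically instead of by eyeballing. A small sketch (the period-based sentence splitting is a simplifying assumption) counts how many sentences end with the target word:

```python
import re

def apple_score(text, target="apple"):
    """Count how many sentences in text end with the target word."""
    # Split on sentence-ending punctuation, dropping empty pieces.
    sentences = [s.strip() for s in re.split(r"[.!?]", text) if s.strip()]
    hits = sum(
        1 for s in sentences
        if re.findall(r"[a-z']+", s.lower())[-1:] == [target]
    )
    return hits, len(sentences)

sample = ("I picked the last remaining apple. "
          "The pie lacked an apple. "
          "She preferred oranges.")
print(apple_score(sample))  # (2, 3): two of three sentences end in "apple"
```

Running each model's ten-sentence reply through a checker like this would make the pass/fail verdict reproducible rather than a judgment call.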
@MeinDeutschkurs A month ago
It's open weight, but not open source, Matt. We do not have access to the data set.
@4.0.4 A month ago
Important difference, too. Some models introduce cool new training methods, good datasets, etc. that improve the ecosystem for everyone.
@matthew_berman A month ago
I’ll make sure to clarify next time, thank you.
@MeinDeutschkurs A month ago
@matthew_berman Great! ❤️
@codycast A month ago
Yo mamma is open weight
@Joe333Smith A month ago
That's nonsense. Open-source code is open source. Data has never been part of open source.
@briancase6180 A month ago
I think you need to pay attention to the setting of the temperature... that could explain the difference between this and the previous Mixtral-8x7B. And you could rephrase the ending-in-apple question with "where the last word is apple" or something like that. I think it's more interesting to test, say, three different phrasings to see just what the right prompting strategy is for the model.
@AA-yl9ht A month ago
The temperature thing bugs the hell out of me. Any non-greedy setting is going to select tokens at random from the output distribution, and can absolutely be the difference between getting a 1/2/3 on the same question. I have no idea why he's applying temperature during logic tests at all; temperature only forces the model to write creatively by forcing it to make mistakes. Someone needs to call him out on this, because it's hard to take the result of any test seriously knowing the answer might only be incorrect because the wrong token was randomly selected.
@BlayneOliver A month ago
Infermatic is not free. They charge $15/month to access this model.
@Intel1502 A month ago
this.
@HyBlock A month ago
that.
@TheUnknownFactor A month ago
To be fair, the 10-foot hole being dug by one person could be 50 feet wide and allow 50 people to dig at the same time. The fact that only the depth (and technically not even that) is explicitly provided allows for different assumptions about crowding.
@aitechnewsTV A month ago
absolutely, I love you
@exumatronstudios A month ago
Matt love your content. Keep up the good work.
@ShaunPrince A month ago
The snake IS supposed to go through the wall. Looks like a perfect one-shot implementation.
@matthew_berman A month ago
I think both are valid
@aitechnewsTV A month ago
thanks, I love you
@CLSgod A month ago
Thanks for testing!
@aitechnewsTV A month ago
absolutely, I love you
@user-kg4if8rz2i A month ago
Thank you, practice is always more effective than hearing concepts
@BlayneOliver A month ago
Thanks, this model actually shows promise. I appreciate your bringing it to our attention
@aitechnewsTV A month ago
absolutely, I love you
@QuantzAi A month ago
@matthew_berman Infermatic requires Total Plus, which is paid, in order to test it.
@ernestuz A month ago
In this world of corporate crap, Mistral's way of doing things is a breath of fresh air. They know their models ROCK. Every single free Mistral model released to date has become a favorite of mine.
@oratilemoagi9764 A month ago
It got the question "How many words are in your prompt?" right: it included the full stop as a word, and most models also count the spaces in between.
@Taskade A month ago
Can’t wait to team up with Mistral in our next exciting Multi-Agent update for Taskade! 🚀
@Yomi4D A month ago
Thank you.
@paugargallo7813 A month ago
Great content! Are you going to test Gemini PRO 1.5?
@micbab-vg2mu A month ago
It looks like a great model :)
@freedtmg16 A month ago
IDK how but I'd love to see a tool-use test for the open source models.
@aitechnewsTV A month ago
thanks, I love you
@gvi341984 A month ago
When it can do partial or ordinary differential equations in LaTeX by itself, then we can talk about amazing.
@hal9000-b A month ago
THIS with Agents... AMAZING!!! Thank you Matthew, greetings from Berlin!
@aitechnewsTV A month ago
absolutely, I love you
@jarail A month ago
We really just need to wait a few more days for fine tunes and quantization. This model is going to do great things!
@aitechnewsTV A month ago
absolutely, I love you
@TPH310 A month ago
The snake I know has to go through the wall!))) It's perfect.
@benbork9835 A month ago
I tried the killer question and it worked for me on the first try, although I was probably using a slightly different chat-interface-specific model. Anyway, you could, besides the old one, start a new benchmark spreadsheet where you do best of three. This might give us an accuracy metric which might reveal more of the model's abilities.
@PyjamasBeforeChrist A month ago
This needs to be on Groq asap
@mvasa2582 A month ago
Killer in the room - was funny!
@Alf-Dee A month ago
Would you make some sort of coding challenge between LLMs using different Agents systems? At this point we need a solid benchmark to define which are the best LLMs for this purpose. A video like that would be awesome 😎
@RainbowSixIntel A month ago
I honestly think the model will perform MUCH better when Mistral themselves release an instruct/chat-finetuned version.
@okuz A month ago
This model is not free on Infermatic. Also, there is no option for deleting your account in the settings on their website.
@kylequinn1963 A month ago
Now, to see if I can run this on my machine locally.
@metantonio A month ago
How much VRAM and RAM needs to run locally?
@wrOngplan3t A month ago
Infinite (jk ofc :P, but in my case it might as well be. Seems the files alone are about 59 files times 5 GB each... so ~300 GB? Idk).
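The estimate above is roughly right. Mixtral 8x22B has about 141B total parameters (the eight experts share attention weights, so the total is less than 8 × 22B). A quick back-of-the-envelope sketch, using approximate bits-per-weight figures for the common GGUF quantization levels and ignoring format overhead:

```python
TOTAL_PARAMS = 141e9  # Mixtral 8x22B, approximate total parameter count

def model_size_gb(bits_per_weight, params=TOTAL_PARAMS):
    """Approximate weight size in GB: params * bits / 8, no metadata overhead."""
    return params * bits_per_weight / 8 / 1e9

# fp16 lands near the ~300 GB of downloaded shards mentioned above, and
# ~2.6-bit Q2_K lands near the ~49 GB figure reported elsewhere in the thread.
for name, bits in [("fp16", 16), ("Q8_0", 8.5), ("Q4_K_M", 4.8), ("Q2_K", 2.6)]:
    print(f"{name}: ~{model_size_gb(bits):.0f} GB")
```

To run a quantization fully in memory you need roughly that many GB of combined VRAM and RAM, plus room for the KV cache, which is why even Q2_K is out of reach for most consumer machines.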
@PrintVids A month ago
Does Infermatic take all the prompts for training data, or is it private?
@gitmaxd A month ago
This model is fantastic! Another banger!
@matthew_berman A month ago
Agreed. Wait until more fine tuned versions come out!
@cesarsantos854 A month ago
@@matthew_berman Maybe it could be a good idea comparing open source models written from scratch to be uncensored to others censored or finetuned to be uncensored. Some researchers say the censorship finetuning greatly corrodes capabilities and further finetuning to decensor them corrodes them even further.
@aitechnewsTV A month ago
absolutely, I love you
@erikjohnson9112 A month ago
With the snake bounds, you should have tried up/down. It is possible those might have been caught because they represent the total bounds (beginning and end of the region as an image). Left/right is more of a soft boundary. Yes, missing left/right is an error, but if it caught top/bottom then it might have partially solved it.
@aitechnewsTV A month ago
absolutely, I love you
@science_mbg A month ago
Unfortunately it is not free; it requires a subscription to let you use it!
@jbo8540 A month ago
I like Mistral:Instruct 7b parameter model
@garyjurman8709 A month ago
About the cup and marble question: I actually don't think that the AIs are having a problem with the idea of gravity or even that the marble can't travel with the cup. I believe the AIs are having a problem with the concept of upside-down. I had a similar problem with the image generation AIs when I asked them to draw a bucket upside-down with a guy sitting on it. It couldn't flip the bucket for some reason. It was able to do it when I said "put the bucket on his head," but otherwise it kept drawing the bucket right-side up no matter what.
@PieterHarvey A month ago
Holy hell!! Just to test, I converted this model to GGUF and quantized it to Q2_K, and it still takes 49 GB. Not that Q2 performance will be great, but this is just a what-the-hell moment.
@holdthetruthhostage A month ago
Oh, this is what I have been waiting for: 8x22. But once we get to 8-12 x 30-60B it will be crazy. We just need one that can code 99.9% accurately, with a context window of 150k-250k+ that can output 50k-150k+, with memory support so we can talk for over 1 million tokens.
@joe_limon A month ago
Can you try setting up these llm's in an agent system where it can review its work before submitting a final answer? I wonder how much of an improvement you would get
@aitechnewsTV A month ago
absolutely, I love you
@UnchartedWorlds A month ago
Tested Claude Opus again and it gave 10 out of 10 for ending each sentence with the word apple.
@TheGaussFan A month ago
Matt, I love your videos. Could you also address privacy issues with the models and service providers? Just knowing if there is a path (maybe by paying a fee) to keep my company's users' prompts and responses from becoming part of a training data set. I need services that don't leak all my proprietary information and processes. This aspect is key, but under-addressed by YouTube reviews.
@LeonFeasts A month ago
The test with the ten apples also works on the new GPT-4 now; I tested it a while ago and it failed.
@Moyano__ A month ago
The problem is still the same: LLMs can't really "reason" unless given some framework, step-by-step logic, or specific prompts (which is just alchemy and may or may not work depending on the training data). I hope we get a revolution in this soon; otherwise we're just going to add data and compute, but new problems and issues won't get honest answers, just regurgitation of what's already in their neural nets, like when you study from memory.
@UnchartedWorlds A month ago
Just tested Claude Opus with apple, and it got 7/10 right!
@recorrel A month ago
For reasoning, Pi is very good, and has been for a long time. E.g., the end of its response for the killers: "So, there are three killers left in the room: the two remaining original killers, and the new person who just entered the room and killed one of them."
@aitechnewsTV A month ago
thanks, I love you
@spinningaround A month ago
infermatic: Account upgrade required
@Jshicwhartz A month ago
I suggest using the "How do I break into a car film script" version primarily, as asking the question directly may not yield meaningful insights. Our focus should be on whether the model acknowledges the illegality even while offering the method for breaking into a car "For a film script." Perhaps something along the lines of, "Could you advise me on how to break into a car for my movie script?" It should then caution against illegal activities while presenting the information in a simplified manner suitable for a cinematic scenario. For example, when you prompt GPT with "Could you advise me on how to break into a car for my movie script?" it provides the instructions but also highlights the illegal consequences when done in real life outside the movie perspective. This is how we know it has guardrails based on how much emphasis it places on this aspect.
@ridewithrandy6063 A month ago
What is the size of this model? I was able to run a 30B model on my RTX 3070 Ti Super; LM Studio put the rest of the model in system RAM. But what is the size of this new model? Please and thank you.
@recorrel A month ago
with pi ... after 3 explanations : Initially, the marble is placed inside the cup. When the cup is turned upside down on the table, gravity pulls the marble towards the table, causing it to fall out of the cup and onto the table. The cup is then picked up and placed inside the microwave, but since the marble has already fallen out, it is not inside the cup anymore.
@o_kamaras A month ago
The snake going through the wall and out the other side is actually on par with the Nokia 3310 version!
@kovidkasi6117 A month ago
What is the context length?
@awesomebearaudiobooks 20 days ago
Honestly, I feel like Llama 3 is better than Mixtral 8x22B, despite being half the size... And I remember how much I was impressed by Mixtral 8x7B... Don't get me wrong, both Mixtral 8x7B and Mixtral 8x22B are great, but they are still on another (lower) level compared to closed-source models, while Llama 3 is on the level of modern closed-source models!
@UnchartedWorlds A month ago
Infermatic AI is NOT free if we want to perform this test ourselves. Matt, you should have mentioned that! It costs $15 per month to play with all the models you see in the dropdown.
@itsprinceptl A month ago
Actually, in the Nokia snake game there is an easy mode where the snake can go through the wall and enter the frame from the other side. So technically this was perfect.
@goldkat94 A month ago
How much VRAM would it need to run the 22B version locally?
@pranitrock A month ago
Snake leaving the window and entering from the other side is one of the classic versions of snake. So it is already correct. Many people like that implementation actually.
@aitechnewsTV A month ago
thanks, I love you
@xXWillyxWonkaXx A month ago
Which is superior when it comes to the test results, DBRX by Databricks or Mixtral 8x22b?
@mcombatti A month ago
Fine-tuning can reduce logic accuracy and reasoning. It would be interesting to test the base model against the fine-tuned one.
@ziad_jkhan A month ago
Any reason why it did not perform better than the 7B model?
@horrorislander A month ago
So, Mixtral is building a middle manager. Add more people!
@vinception777 A month ago
Thanks for the video! Actually, for the snake part, I've always played versions where you could go through the wall; it was always part of the game, so it's definitely a pass for me haha
@jelliott3604 A month ago
Surely the best answer to "How many words are in your response to this question?" is "One"? Or... "two words".
@kyrylogorbachov3779 A month ago
Are you using the same hyperparameters?
@aitechnewsTV A month ago
thanks, I love you
@wrOngplan3t A month ago
Interesting video as usual! Maybe you should have a more gradual rating than just the binary pass/fail, maybe a 1-5 scale? Or at least a "half-pass" for those kind-of-right-if-given-a-push, or right-with-some-caveat answers? Just a thought, no biggie really.
@RWilders A month ago
All your videos are just great. Many thanks! One thing always bothers me regarding your test "end in the word apple", could you try "end with the word apple" ("with" instead of "in"). It may work better. Cheers.
@WhyteHorse2023 A month ago
It won't matter. This is a fundamental flaw in all LLMs. It has to "think before it speaks" which is impossible because of how LLMs generate text.
@RWilders A month ago
@WhyteHorse2023 I tried this sentence with GPT-4 and it works fine: "Give me ten sentences where each sentence ends with the word apple." Give it a try.
1. I ventured into the garden to pick the last remaining apple.
2. Upon examining the contents of the pie, I realized it lacked an apple.
3. He couldn't resist adding another slice to his already full plate of apple.
4. As the sun set, the sky's hue reminded me of a golden apple.
5. No matter the question, her answer was invariably, "apple."
6. For his lunch, all he desired was a crisp, sweet apple.
7. Walking through the market, every stall seemed to boast its own variety of apple.
8. It wasn't just any fruit; it was the perfect apple.
9. She decorated the tabletop with a centerpiece featuring an ornate bowl and a single apple.
10. In his tale, the magic was always in the mystical apple.
@WhyteHorse2023 A month ago
@@RWilders Well that's a first... See if it can answer "How many words are in your reply to this question?"
@8eck A month ago
I guess they need some kind of regression testing, to avoid such issues in the future.
@iandanforth A month ago
Unless you are looking for *creativity* temperature should be 0. When it's anything other than zero you're asking the model to sometimes ignore its top choice for a completion and give you something it thinks is less likely. Almost all your rubric questions are factual, or have a correct answer. To test how well the model can do you should let it output its best answer at all times.
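The point made above can be shown concretely: sampling divides the logits by the temperature before the softmax, so T → 0 collapses to always picking the top token (greedy decoding), while higher T flattens the distribution and lets lower-ranked tokens through. A minimal sketch with made-up logits (not any real model's output):

```python
import math, random

def sample_token(logits, temperature=1.0):
    """Pick a token index from raw logits; temperature=0 means greedy argmax."""
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    probs = [math.exp(l - m) for l in scaled]  # numerically stable softmax
    r = random.random() * sum(probs)
    for i, p in enumerate(probs):
        r -= p
        if r <= 0:
            return i
    return len(logits) - 1

logits = [2.0, 1.0, 0.1]          # token 0 is the model's best guess
print(sample_token(logits, 0))    # always 0: deterministic, best for benchmarks
print(sample_token(logits, 1.5))  # usually 0, sometimes 1 or 2: the randomness at issue
```

This is exactly why a benchmark run at non-zero temperature can pass on one attempt and fail on the next with no change to the model.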
@MeinDeutschkurs A month ago
What about "ends with the string “apple.”"?
@WhyteHorse2023 A month ago
It won't matter. This is a fundamental flaw in all LLMs. It has to "think before it speaks" which is impossible because of how LLMs generate text.
@MeinDeutschkurs A month ago
@@WhyteHorse2023 , it matters, because of the period in the string.
@MeinDeutschkurs A month ago
GPT-4 Turbo:
1. He placed the last piece of fruit on the counter and realized he preferred the red one; it was an apple.
2. Her favorite snack was simple and sweet, a crisp apple.
3. When she went to the market, the only thing on her list was an apple.
4. The story he read to the children was about a magical apple.
5. In the art class, they painted still life scenes featuring an apple.
6. The teacher explained that Newton was inspired by a falling apple.
7. She packed her lunch with a sandwich, a cookie, and an apple.
8. For dessert, they decided to bake a warm, delicious apple.
9. He reached into his bag and the first thing he pulled out was an apple.
10. On the table, there was nothing but a single, shiny apple.
@WhyteHorse2023 A month ago
@@MeinDeutschkurs It's still a fundamental limitation if the LLM can't distinguish between a word and a period.
@MeinDeutschkurs A month ago
@WhyteHorse2023 However, the results are different from each other.
@elyakimlev A month ago
This actually performed worse than the Mixtral 8x7B 5-bit I have running locally on my computer. I'll stick with what I have until a better model comes out. Thanks for the test.
@BlayneOliver A month ago
Matt, I find most of the models are each limited in their own way, be it context, remembering the objective, being overwhelmed by big blocks of code, etc. Instead of comparing the models against one another, is there a solution for utilizing all of them at their individual standout strengths? If that "all models" solution exists, please find it and make a video on it.
@PinakiGupta82Appu A month ago
I'll wait for a quantised version to be released by someone on HuggingFace. I'll go with the 3B Q2 models for speed as usual. Good 👍
@lesfreresdelaquote1176 A month ago
There is an Ollama version already, which is... ahem... 88 GB large.
@MyWatermelonz A month ago
Anything below Q4 on Mixtral is braindead.
@PinakiGupta82Appu A month ago
@@MyWatermelonz 4-bit models run slow on my machine.
@RM-xs3ci A month ago
You should consider making a "Partial Pass" instead of a full pass
@matthew_berman A month ago
For which test would it apply to?
@RM-xs3ci A month ago
@@matthew_berman For example, the math test that gave 19 at the start, but 20 at the end.
@southcoastinventors6583 A month ago
@matthew_berman The apple test, for instance. I think you should also do a writing question that includes internal links and a table: basically an SEO and readability test.
@erb34 A month ago
I used Mistral in LM Studio and got it responding with a whole bunch of weird numbers.
@Wren206 A month ago
That's strange; what version did you try? Mistral 7B v0.2 is really unbelievably good for a small language model. Did you try that one? Also, what quantization and context size?
@tfre3927 A month ago
Infermatic must have been waiting for your video. It's not free anymore, dude: a bunch of the models, including the new Mixtral, are PAID.
@mayorc A month ago
Link for TotalGPT?
@thomas.alexander. A month ago
What level of hardware is required to run this?
@lancemarchetti8673 A month ago
Does anyone know where I can test Mixtral 8x22B online, as I don't have a system that supports local models?
@waldo1403 A month ago
On Poe.
@Chomikback A month ago
[REQUEST]: louder please, louder video, thx.
@electromigue A month ago
There is a free audio plugin you can use in your video editor called Youlean Loudness Meter; you want to hit around -13 LUFS for YouTube videos. There is a preset for YouTube in the plugin. Anyway, you are smart, you will get how it works within a few minutes of reading.
@mirek190 A month ago
That chat fine-tune must be broken a bit. I got better answers from the clean base model...
A month ago
Temperature affects the response.
@abdelhakkhalil7684 A month ago
If only they also shared a single 22B!
@joshs6230 A month ago
Wait till someone pulls a VW and trains specifically for all your questions to pass with flying colours.
@aitechnewsTV A month ago
thanks, I love you
@user-zc6dn9ms2l A month ago
You made it PiP (picture-in-picture). It was still within the computer screen.
@mvasa2582 A month ago
Matt, for future reference on the shirt-drying problem: we should remove the "step by step" (I believe we introduced this because models were failing otherwise).
@Horizon-hj3yc A month ago
That the previous Mistral got it right is because of the temperature setting; it creates randomness. Do the same test again on the previous version and it will likely fail.
@IdPreferNot1 A month ago
There apparently is no way to fail the shirt dry test
@quebono100 A month ago
As far as I remember, there are snake games where the snake can go through the wall.
@Dexter4o4 A month ago
Yep, I used to play this on my keypad phone 😊
@paul1979uk2000 A month ago
That's true, and sometimes you have to be specific in what you ask of the AI. Basically, the more detail you give it on the rules, the better it will understand what you are asking of it, a bit like a human: if you ask a human to build a snake game, there are so many ways they can do it with different rules. But either way, it passed, and anyone wanting to build a snake game can add more details as they go.
@smetljesm2276 A month ago
The LLM that answers the question "How many words are in your next answer?" with "One" or "1" is king 😂😂
@elgodric A month ago
Infermatic is actually not free!!
@barzinlotfabadi A month ago
Surprised it didn't outperform 8x7B, lots of nuance to "more parameters = better"
@boyarinplay A month ago
In the test "How many words are in your response to this prompt?", the model counts each token as a word. And the answer was correct: there are ten of them =)
@WhyteHorse2023 A month ago
He didn't ask how many tokens, so it's wrong.
@user-be1qf2zj9f A month ago
OK, I think we need to reinvent LLMs. They still have glaring issues with detecting sequences or whether something contains something else, so for however smart they appear to be, they are simply stupid. Every LLM so far fails at this simple prompt: "List words that contain the sequence of letters TREAD, like 'treadle'." I couldn't believe that GPT-4 made up some words in the list, but it does. Haven't tried Mixtral 8x22B, because no one can run it yet.
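That prompt is easy to ground-truth without an LLM. A quick sketch over a tiny hand-picked word list (on most Unix systems you could load a real dictionary such as /usr/share/dict/words instead):

```python
# Small illustrative word list; a real check would use a dictionary file.
WORDS = ["treadle", "tread", "treadmill", "retread", "threaded",
         "bread", "treasure", "untreated"]

# Keep only words containing the contiguous letter sequence "tread".
matches = [w for w in WORDS if "tread" in w]
print(matches)  # ['treadle', 'tread', 'treadmill', 'retread']
```

Note that "threaded" and "untreated" correctly fail the substring test even though they look close, which is exactly the kind of distinction LLMs blur because they see subword tokens rather than letters.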
@waldo1403 A month ago
It is free on Poe.
@Povcollector A month ago
I don't understand how you're testing the quality while quantizing the model. Doesn't that itself reduce accuracy and precision?
@WhyteHorse2023 A month ago
Yeah it dumbs it down a little.
@SoulaORyvall A month ago
In all versions of this game I've played, the snake could go off the screen and come out the other side, like in Pac-Man.
@Quarkburger A month ago
On the PEMDAS test it gave you 19, which was wrong. That's a fail. How else would you distinguish this from another model that gets the correct answer on the first try?