
Mastering Google's VLM PaliGemma: Tips And Tricks For Success and Fine Tuning

10,141 views

Sam Witteveen

1 day ago

Comments: 20
@paulmiller591 3 months ago
This is an exciting sub-field. We have a lot of clients making observations, so we're keen to try this. Happy travels, Sam.
@amandamate9117 3 months ago
Excellent video, can't wait for more visual model examples, especially with ScreenAI for agents that browse the web.
@user-en4ek6xt6w 3 months ago
Thank you for your video
@sundarrajendiran2722 9 days ago
Can we upload multiple images in the demo and ask questions whose answer is in any one of the images?
@SonGoku-pc7jl 3 months ago
Thanks, we will see Phi-3 with vision to compare :)
@unclecode 3 months ago
Fascinating. I wonder if there is any example of fine-tuning for segmentation. If so, the way we collate the data should be different. I have one question about the timeline at 15 minutes and 30 seconds: I noticed a part of the code that splits the dataset into train and test, but after the split it says `train_ds = split_ds["test"]`. Shouldn't it be "train"? I think that might be a mistake. What do you think? Very interesting content, especially if the model has the general knowledge to get into a game like your McDonald's example. This definitely has great applications in the medical and education fields as well. Thank you for the content.
@samwitteveenai 3 months ago
Just look at the output from the model when you do segmentation and copy that. Yes, you will need to update the collate function. The "test" part is correct because it is just setting it to train on a very small number of examples; in a real training run, yes, use the 'train' split, which is 95% of the data, as opposed to the 5% in the test split.
@unclecode 3 months ago
@@samwitteveenai Oh ok, that was just for the video demo, thanks for the clarification 👍
@unclecode 2 months ago
@@samwitteveenai Thanks, I get it now, the "test" split is just for the demo in this Colab. Although it would've been clearer if they had used a subset of, say, 100 rows from the train split. I experimented a bit; the model is super friendly to fine-tuning. Whatever they did, it made this model really easy to tune. We're at a point where "tune-friendly" actually makes sense.
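The split logic discussed in this thread can be sketched in plain Python (a minimal stand-in for the `train_test_split` call in the Colab; the dataset size, 5% test fraction, and variable names here are illustrative, following the discussion above):

```python
import random

def train_test_split(rows, test_size=0.05, seed=42):
    """Shuffle and split rows into a large 'train' and small 'test' slice."""
    rows = rows[:]  # copy so the caller's list is not mutated
    random.Random(seed).shuffle(rows)
    n_test = max(1, int(len(rows) * test_size))
    return {"train": rows[n_test:], "test": rows[:n_test]}

data = list(range(1000))          # stand-in for the image/text examples
split_ds = train_test_split(data)

# The Colab deliberately trains on the small 5% slice to keep the demo fast:
train_ds = split_ds["test"]       # ~50 examples, quick to iterate on
# For a real fine-tune, use split_ds["train"] (~950 examples) instead.
```

Training on the small slice is only a speed trick for the demo; the confusing part is just that the fast slice happens to be named "test".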
@SenderyLutson 3 months ago
I think the Aria dataset from Meta is also open.
@samwitteveenai 3 months ago
Interesting dataset. I didn't know about this. Thanks.
@ricardocosta9336 3 months ago
Ty my dude
@FirstArtChannel 3 months ago
Inference speed and model size still seem slower/larger than a multimodal LLM such as LLaVA, or am I wrong?
@samwitteveenai 3 months ago
Honestly, it's been a while since I played with LLaVA, and mostly I have just used it on Ollama, so I'm not sure how it compares. Phi-3-Vision is also worth checking out. I may make a video on that as well.
@miguelalba2106 2 months ago
Do you know how complete the dataset should be for fine-tuning? I have lots of image-text pairs of clothes, but some have more details than others, so I guess during training the model will be confused. E.g. there are thousands of images of dresses with only the color, and thousands of images with color + other details.
@AngusLou 3 months ago
Is it possible to make the whole thing local?
@SenderyLutson 3 months ago
How much VRAM does this model consume while running? And the Q4 version?
@samwitteveenai 3 months ago
The inference was running on a T4, so it is manageable. The fine-tuning was on an A100.
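As rough context for the VRAM question above, here is back-of-envelope arithmetic only: the weight memory of a ~3B-parameter model (PaliGemma's approximate size) at a few precisions, ignoring activations and KV cache, which add a few more GB in practice:

```python
# Rough VRAM needed just for the weights of a ~3B-parameter model.
params = 3e9
bytes_per_param = {"fp16/bf16": 2.0, "int8": 1.0, "q4": 0.5}

for name, nbytes in bytes_per_param.items():
    gb = params * nbytes / 1024**3
    print(f"{name}: ~{gb:.1f} GB of weights")

# bf16 weights come to roughly 5.6 GB, comfortably under a T4's 16 GB,
# which is consistent with inference running on a T4 as mentioned above;
# a Q4 quantization would shrink the weights to under 2 GB.
```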
@willjohnston8216 3 months ago
Do you know if they are going to release a model for real-time video sentiment analysis? I thought there was a demo of that by either Google or OpenAI?
@samwitteveenai 3 months ago
Not sure, but you can do some of this already with Gemini, just not in real time (publicly, at least).