Free Colab notebook to fine-tune new Vision Language Models (VLMs) on your datasets, including code to fine-tune simple LLMs with LoRA.
From simple PyTorch notebooks ... to advanced massively parallel 8-TPU JAX/Flax notebooks for fine-tuning LLMs and VLMs with Keras 3.
Full code examples, plus my recommendations for free compute infrastructure to run these examples: Google Colab, Vertex AI, Model Garden, Kaggle, etc.
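To show the core idea behind the LoRA fine-tuning mentioned above, here is a minimal NumPy sketch (a hypothetical toy layer, not the actual notebook code): the pretrained weight W stays frozen, and only two small low-rank matrices B and A are trained, so the number of trainable parameters drops dramatically.

```python
import numpy as np

# Minimal LoRA sketch (toy layer, not the PaliGemma/Gemma code):
# instead of updating a frozen weight W (d_out x d_in), LoRA trains two
# small matrices B (d_out x r) and A (r x d_in) and adds their product.
d_in, d_out, rank = 512, 512, 8
rng = np.random.default_rng(0)

W = rng.normal(size=(d_out, d_in))        # frozen pretrained weight
A = rng.normal(size=(rank, d_in)) * 0.01  # trainable low-rank factor
B = np.zeros((d_out, rank))               # trainable, initialized to zero

def lora_forward(x):
    # y = W x + B (A x): base output plus the low-rank update
    return W @ x + B @ (A @ x)

x = rng.normal(size=(d_in,))
y = lora_forward(x)

full_params = W.size            # parameters touched by full fine-tuning
lora_params = A.size + B.size   # trainable parameters with LoRA
print(full_params, lora_params)
```

With rank 8 on a 512x512 layer, LoRA trains about 3% of the parameters; because B starts at zero, the adapted layer initially behaves exactly like the frozen base layer.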
Fine-tune PaliGemma (from the beginning of my video) notebook:
github.com/google-research/bi...
If you only want to PEFT fine-tune your Gemma model (an LLM), I recommend this guide (with full model parallelism):
ai.google.dev/gemma/docs/dist...
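The model parallelism in that guide builds on the Keras 3 distribution API. A rough config sketch of the setup (an assumption based on the public Keras 3 API, not the guide's exact code; the layout key and mesh shape are illustrative and require a multi-accelerator runtime):

```python
import keras

# Sketch: shard model weights across 8 accelerators (needs a TPU/GPU mesh).
devices = keras.distribution.list_devices()
mesh = keras.distribution.DeviceMesh(
    shape=(1, 8),                     # 1-way data, 8-way model parallelism
    axis_names=("batch", "model"),
    devices=devices,
)
layout_map = keras.distribution.LayoutMap(mesh)
# Illustrative rule: shard embedding columns across the "model" axis.
layout_map["token_embedding/embeddings"] = (None, "model")

keras.distribution.set_distribution(
    keras.distribution.ModelParallel(layout_map=layout_map,
                                     batch_dim_name="batch")
)
```

Any model built after `set_distribution` then places its variables according to the layout map.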
For advanced coders: inference with Gemma using JAX and Flax, which runs on a free Google Colab T4 GPU:
ai.google.dev/gemma/docs/jax_...
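At its core, the inference in that notebook is an autoregressive decoding loop. A self-contained toy sketch of greedy decoding (the tiny "model" here is hypothetical, standing in for Gemma):

```python
import numpy as np

# Toy greedy-decoding loop: a sketch of autoregressive inference in
# general, not the actual Gemma/Flax notebook code.
VOCAB = 16
rng = np.random.default_rng(42)
W_embed = rng.normal(size=(VOCAB, 8))  # hypothetical tiny "model" weights
W_out = rng.normal(size=(8, VOCAB))

def next_token_logits(tokens):
    # Hypothetical model: average token embeddings, project to vocab.
    h = W_embed[tokens].mean(axis=0)
    return h @ W_out

def generate(prompt, max_new_tokens=5):
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        logits = next_token_logits(tokens)
        tokens.append(int(np.argmax(logits)))  # greedy: pick the best token
    return tokens

out = generate([1, 2, 3])
print(out)
```

A real LLM swaps in a transformer for `next_token_logits` (with a KV cache so each step is cheap), but the loop structure is the same.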
All rights with the authors:
keras.io/guides/distribution/
ai.google.dev/gemma/docs/setup
ai.google.dev/gemma/docs/pali...
#airesearch
#ai
#aicoding