Gemma 3n Toolkit
Everything you need to get started with Gemma 3n development, from quick setup to advanced deployment strategies.
Quick Start Tools
Ollama Setup
Get Gemma 3n running locally with Ollama in under 5 minutes.
# Install Ollama first
ollama run gemma3n:e4b
# Or for smaller model
ollama run gemma3n:e2b
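Once a model is pulled, Ollama also serves a local REST API (port 11434 by default), so you can script against it. A minimal sketch, assuming the e2b tag pulled above:

import json
import urllib.request

# Ollama's local generate endpoint (default port 11434)
url = "http://localhost:11434/api/generate"
payload = {
    "model": "gemma3n:e2b",  # or gemma3n:e4b for the larger model
    "prompt": "Explain Gemma 3n in one sentence.",
    "stream": False,  # single JSON response instead of a token stream
}
req = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])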
Hugging Face
Use Gemma 3n models directly from Hugging Face Hub.
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "google/gemma-3n-e4b-it"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
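A fuller load-and-generate sketch: the bfloat16 dtype and device_map="auto" (which needs the accelerate package) are optional choices that roughly halve memory versus fp32, in line with the hardware table below.

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "google/gemma-3n-e4b-it"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,  # halves memory vs. fp32
    device_map="auto",           # requires the accelerate package
)

inputs = tokenizer("Explain Gemma 3n in one sentence.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))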
E2B vs E4B
Interactive tool to help you choose the right model size.
Platform Integrations
📱 iOS Development
Deploy Gemma 3n models on iOS devices with optimized performance.
Recommended Setup:
- Gemma 3n E2B for iPhone (roughly 2GB when 4-bit quantized)
- CoreML conversion for optimal performance
- GGUF quantization for reduced size
# Convert to CoreML
pip install coremltools
# Follow our detailed iOS guide
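The coremltools flow is: trace the PyTorch module, convert the trace to an ML Program, and save an .mlpackage. A minimal sketch with a toy module; it only illustrates the API, and exporting the full Gemma 3n graph is covered in the detailed iOS guide:

import torch
import coremltools as ct

# Toy stand-in module; replace with the traced model you actually want to ship.
class TinyNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(16, 4)

    def forward(self, x):
        return self.linear(x)

example_input = torch.rand(1, 16)
traced = torch.jit.trace(TinyNet().eval(), example_input)

# Convert the traced graph to an ML Program and save it as an .mlpackage
mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(shape=example_input.shape)],
    convert_to="mlprogram",
)
mlmodel.save("TinyNet.mlpackage")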
🔧 Fine-tuning Tools
Customize Gemma 3n models for your specific use cases.
Available Methods:
- LoRA (Low-Rank Adaptation)
- Unsloth for 2x faster training
- Google Colab notebooks ready
# Start with Unsloth
pip install unsloth
# Or try our LoRA tutorial
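A minimal LoRA sketch with the peft library; the target_modules names are assumptions, so inspect the loaded model to confirm which projection layers to adapt:

from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("google/gemma-3n-e4b-it")

# Low-rank adapters on the attention projections; r and alpha are common starting points.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # assumed module names; print(base) to confirm
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable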
Hardware Requirements
| Model | RAM (FP16) | RAM (4-bit) | Best Use Case |
|---|---|---|---|
| Gemma 3n E2B | 4GB | 2GB | Mobile, Edge devices |
| Gemma 3n E4B | 8GB | 4GB | Laptops, Workstations |
- Both models run efficiently on CPU-only setups
- Significant speedup with CUDA/Metal support
- E2B optimized for iOS and Android deployment
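On CUDA machines, one way to reach the 4-bit figures above is bitsandbytes quantization through transformers. A minimal sketch (the E2B repo id is an assumption, and bitsandbytes needs a GPU; on CPU-only setups use the GGUF builds via Ollama instead):

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 quantization via bitsandbytes; roughly matches the "RAM (4-bit)" column.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-3n-e2b-it",  # assumed repo id for the E2B instruction-tuned checkpoint
    quantization_config=quant_config,
    device_map="auto",
)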
Official Resources
Access the latest official documentation, research papers, and community discussions directly from Google and the open source community.
Google Official
DeepMind Official Page
Official model overview, architecture details, and performance benchmarks.
deepmind.google
AI for Developers
Technical documentation, API references, and integration guides.
ai.google.dev
Official Announcement
Launch announcement, key features, and architectural innovations.
developers.googleblog.com
Download Models
Hugging Face Hub
Official model repositories with multiple format options
Ollama Library
Easy one-command installation for local development
ollama run gemma3n:e2b
ollama run gemma3n:e4b
Community & Research
Ready to Build with Gemma 3n?
Join thousands of developers already using Gemma 3n in production.