Available Models
Select a model to use for chat. Models will be downloaded if not already cached.
Llama-3.2-1B-Instruct
The Llama 3.2 instruction-tuned, text-only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks.
1.2 GB
Not downloaded
Llama-3.2-3B-Instruct
The Llama 3.2 instruction-tuned, text-only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks.
2.4 GB
Not downloaded
DeepSeek-R1-Distill-Qwen-1.5B
DeepSeek-R1-Distill-Qwen-1.5B is a compact dense model distilled from DeepSeek-R1's reasoning outputs into a 1.5B Qwen base model.
1.4 GB
Not downloaded
Phi-3.5-mini-instruct
The model belongs to the Phi-3 model family and supports 128K token context length.
2.1 GB
Not downloaded
SmolLM2-1.7B-Instruct
SmolLM2 is capable of solving a wide range of tasks while being lightweight enough to run on-device. This is the 1.7B-parameter model.
1.1 GB
Not downloaded
Qwen2.5-0.5B-Instruct
Qwen2.5 has significantly more knowledge and greatly improved capabilities in coding and mathematics.
2.0 GB
Not downloaded
Qwen2.5-Coder-1.5B-Instruct
Qwen2.5-Coder is the latest series of Code-Specific Qwen large language models.
1.3 GB
Not downloaded
Qwen2.5-Coder-3B-Instruct
Qwen2.5-Coder is the latest series of Code-Specific Qwen large language models.
2.1 GB
Not downloaded
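The "downloaded if not already cached" behavior mentioned at the top of this list could be sketched as follows. This is a minimal illustration, not the app's actual code: the names `ensure_model` and `fetch` are hypothetical, and the real download step (stubbed out here) would pull the model weights over the network.

```python
from pathlib import Path

def fetch(name: str, dest: Path) -> None:
    # Stand-in for the real network download; here we only create a
    # placeholder weights file so the sketch is self-contained.
    dest.mkdir(parents=True, exist_ok=True)
    (dest / "weights.bin").write_bytes(b"")

def ensure_model(name: str, cache_dir: Path) -> Path:
    """Return the cached path for `name`, downloading it first if absent."""
    path = cache_dir / name
    if not (path / "weights.bin").exists():
        fetch(name, path)   # cache miss: download once
    return path             # cache hit: reuse the existing files

if __name__ == "__main__":
    import tempfile
    cache = Path(tempfile.mkdtemp())
    first = ensure_model("Llama-3.2-1B-Instruct", cache)
    second = ensure_model("Llama-3.2-1B-Instruct", cache)  # no re-download
    print(first == second)
```

On the first call the model is fetched into the cache directory; every later call for the same name resolves to the already-cached files.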