Supported LLM Base Models

Navigator supports a variety of open-source LLMs, and more are added all the time. Below is the current list of supported models, with links to their Hugging Face model pages where you can learn more about each model's strengths and weaknesses.

For most use cases that require training or inference on consumer hardware, we recommend a 7B-parameter model. This is the sweet spot between model quality and resource requirements on consumer devices. All LLM features in Navigator work well with 7B-parameter models.
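To see why 7B sits at that sweet spot, a back-of-the-envelope estimate of weight memory helps. This is a minimal sketch: it counts model weights only and ignores activations, KV cache, and framework overhead, so real usage will be somewhat higher.

```python
def weight_memory_gb(params_billions: float, bits_per_param: float) -> float:
    """Approximate size of a model's weights in gigabytes (1 GB = 1e9 bytes).

    Weights only -- activations, KV cache, and runtime overhead are excluded.
    """
    bytes_total = params_billions * 1e9 * (bits_per_param / 8)
    return bytes_total / 1e9

# A 7B model needs roughly 14 GB at 16-bit, 7 GB at 8-bit, and 3.5 GB
# at 4-bit -- which is why 4-bit 7B models fit on consumer devices.
for bits in (16, 8, 4):
    print(f"7B @ {bits}-bit: ~{weight_memory_gb(7, bits):.1f} GB")
```

By the same arithmetic, a 70B model still needs roughly 35 GB even at 4-bit, which keeps it out of reach of most consumer hardware.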

Full List of Supported Models

Click on a model below to learn more on Hugging Face:

TinyLlama/TinyLlama-1.1B-Chat-v1.0

Parameters: 1.1 billion

Description: Optimized for lightweight deployment with efficient performance.

View on Hugging Face

gradientai/Llama-3-8B-Instruct-262k

Parameters: 8 billion

Description: Llama 3 8B Instruct with its context window extended to 262k tokens, well suited to tasks over long documents.

View on Hugging Face

mlx-community/Meta-Llama-3-8B-Instruct

Parameters: 8 billion

Description: Available in 4-bit and 8-bit quantized versions, balancing size and performance.

View on Hugging Face

mlx-community/gemma-1.1-2b-it-4bit

Parameters: 2 billion

Description: Instruction-tuned ("it") version of Gemma 1.1 2B, quantized to 4-bit precision for efficient deployment.

View on Hugging Face

mlx-community/Mixtral-8x22B-Instruct-v0.1-4bit

Parameters: 8×22 billion (sparse Mixture-of-Experts; ~141 billion total, ~39 billion active per token)

Description: Sparse Mixture-of-Experts instruction model supporting diverse tasks, with 4-bit quantization for better efficiency.

View on Hugging Face

mlx-community/WizardLM2-8x22B-4bit-mlx

Parameters: 8×22 billion (sparse Mixture-of-Experts, ~141 billion total)

Description: Advanced language model suitable for extensive instructional prompts, available in 4-bit format for reduced resource consumption.

View on Hugging Face

mlx-community/Meta-Llama-3-70B-Instruct-4bit

Parameters: 70 billion

Description: One of the largest models in the Llama series, finely tuned for complex instructional tasks, with 4-bit optimization.

View on Hugging Face

mlx-community/Phi-3-mini-4k-instruct

Parameters: 3.8 billion

Description: Phi-3 Mini with a 4k-token context window, available in 4-bit and 8-bit versions for use in low-resource environments.

View on Hugging Face

mlx-community/Phi-3-mini-128k-instruct

Parameters: 3.8 billion

Description: Phi-3 Mini with an extended 128k-token context window, supporting both 4-bit and 8-bit precision for flexible deployment.

View on Hugging Face

mlx-community/OpenELM-270M-Instruct

Parameters: 270 million

Description: Ultra-lightweight model optimized for instructional tasks in resource-constrained environments.

View on Hugging Face

mlx-community/OpenELM-450M-Instruct

Parameters: 450 million

Description: Slightly larger version for handling more complex tasks while maintaining efficiency.

View on Hugging Face

mlx-community/OpenELM-1_1B-Instruct

Parameters: 1.1 billion

Description: Balances performance with efficiency, available in 4-bit and 8-bit quantizations for different use cases.

View on Hugging Face

mlx-community/Qwen1.5-1.8B-Chat-4bit

Parameters: 1.8 billion

Description: Chat-oriented model optimized for conversational AI, available in 4-bit for resource efficiency.

View on Hugging Face

mlx-community/Qwen1.5-0.5B-Chat-4bit

Parameters: 500 million

Description: Lightweight chat model designed for fast inference, available in 4-bit format.

View on Hugging Face

mlx-community/Qwen1.5-7B-Chat-4bit

Parameters: 7 billion

Description: Mid-sized chat model providing a balance between performance and computational cost, in 4-bit format.

View on Hugging Face

mlx-community/Qwen1.5-72B-Chat-4bit

Parameters: 72 billion

Description: High-end chat model for sophisticated conversational AI tasks, quantized to 4-bit to reduce memory requirements.

View on Hugging Face

Maykeye/TinyLLama-v0

Parameters: ~5 million (experimental)

Description: Experimental, tiny recreation of the Llama architecture, designed for ultra-low-resource environments and quick experimentation, prioritizing efficiency over capability.

View on Hugging Face