Supported LLM Base Models

Navigator supports a variety of open-source LLMs and is adding more all the time. Below is the current list of supported models, with links to each model's Hugging Face page where you can learn more about its strengths and weaknesses.

For most use cases that require training or inference on consumer hardware, we recommend a 7B parameter model. This size is the sweet spot between model performance and resource requirements on consumer devices, and all LLM features in Navigator work well with 7B parameter models.
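To make that tradeoff concrete, a useful rule of thumb is roughly two bytes per parameter at fp16, one byte at 8-bit, and half a byte at 4-bit, plus some overhead for activations and the KV cache. The back-of-the-envelope sketch below (the 20% overhead factor is an assumption; real usage varies with context length and framework) shows why a quantized 7B model fits comfortably on consumer hardware:

```python
# Rough memory estimate for model weights at different precisions.
# The 20% overhead factor for activations / KV cache is an assumption;
# actual usage depends on context length, batch size, and framework.
BYTES_PER_PARAM = {"fp16": 2.0, "8-bit": 1.0, "4-bit": 0.5}

def estimate_memory_gb(num_params: float, precision: str, overhead: float = 0.20) -> float:
    """Approximate RAM/VRAM needed to load a model with `num_params` parameters."""
    weight_bytes = num_params * BYTES_PER_PARAM[precision]
    return weight_bytes * (1 + overhead) / 1e9

for precision in ("fp16", "8-bit", "4-bit"):
    print(f"7B model @ {precision}: ~{estimate_memory_gb(7e9, precision):.1f} GB")
# 7B model @ fp16:  ~16.8 GB -- needs a high-end GPU or plenty of system RAM
# 7B model @ 8-bit:  ~8.4 GB -- fits many consumer GPUs
# 7B model @ 4-bit:  ~4.2 GB -- comfortable on most modern laptops and desktops
```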

Full List of Supported Models

Model Selection Considerations

When selecting a model, consider these key factors:

  • Parameter Count: Generally, larger models (higher parameter counts) offer better performance but require more computing resources.
  • Memory Requirements: Each model requires a specific amount of RAM. Ensure your hardware meets these requirements.
  • Specialization: Some models are optimized for specific tasks (e.g., code generation, instruction following).
  • Quantization: Many models offer quantized versions (4-bit, 8-bit) that reduce memory requirements at a small cost to performance; the reference sketch after this list shows what 4-bit loading typically looks like.
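Navigator handles model loading for you, so the following is only a reference sketch of how a quantized 7B model is typically loaded with the Hugging Face transformers and bitsandbytes libraries (both assumed to be installed, along with a CUDA-capable GPU). The model ID is illustrative, not a statement of what Navigator uses internally:

```python
# Illustrative sketch of loading a 7B model in 4-bit precision; not Navigator's
# internal loading code. Requires transformers, bitsandbytes, and a CUDA GPU.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # illustrative 7B model

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # 4-bit weights: ~4 GB instead of ~14 GB at fp16
    bnb_4bit_compute_dtype=torch.float16,   # run computations in half precision
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",                      # place layers on available devices automatically
)

prompt = "Explain why 7B parameter models suit consumer hardware."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```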