Companion: Setting Up and Creating Custom LLM Chatbots
This guide will walk you through setting up Companion (your personal AI chat application), creating custom LLM chatbots, and deploying them for use in Companion.
Getting Started with Companion
Installing the Required Applications
To run Companion and deploy flows to it, you'll need to install and set up Navigator first.
- Open Navigator, create your account, and sign in.
Dependencies will install during setup. Accept the Xcode pop-up when prompted (it may be hidden behind another window).
- Once in Navigator, set up a cluster:
- Go to the Clusters tab
- Click "Host a New Cluster"
- Add a name and description for your cluster
- Click into your new cluster and add a Node
- Select your device and any other devices running Navigator that you want to use for hosting models
- Open Companion and log in with the same credentials as your Navigator account.
- Wait for the included LLM to install (this may take some time).
- Once the status dot next to the icon turns green, you can start chatting with the default model.
For an in-depth guide on clusters, see Deploying & Using Clusters.
Understanding the Companion Interface
Threads and Attachments
- Threads: Create new conversation threads using the left panel
- Attachments: Automatically created by Companion when generating content
- Click any attachment to open it in the attachment panel
- Use the back arrow on the panel to flip between attachments
Creating a Custom LLM Chatbot
Before creating a custom chatbot, you'll need a trained model artifact. Choose one of these options:
- Download Pre-made Model
- Create Your Own Model
Setting Up Your Custom LLM in Navigator
- Create a new Canvas in Navigator
- Drag the LLM element onto the Canvas
- Configure the LLM Element:
- Cluster: Select the device cluster where you want the LLM to run
- Model: Select a model compatible with your available devices
- Hugging Face API Key: Add if required for models that need Hugging Face permissions
Models that can't run on your available devices will have their requirements highlighted in red
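Gated models on Hugging Face require an access token. As a general illustration (not specific to Navigator's settings panel), Hugging Face client libraries read a token from the `HF_TOKEN` environment variable; the value below is a placeholder, and real tokens are created in your Hugging Face account settings:

```python
import os

# Placeholder token for illustration only; real tokens start with "hf_"
# and are generated under Settings > Access Tokens on huggingface.co.
os.environ["HF_TOKEN"] = "hf_xxxxxxxxxxxxxxxx"

# Hugging Face client libraries (e.g. huggingface_hub) pick up
# HF_TOKEN automatically when downloading gated models.
print("token configured:", os.environ["HF_TOKEN"].startswith("hf_"))
```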
- Open the LLM Element settings and make the following adjustments:
- Model Dropdown: Select a model you have trained yourself
- Model Adapter Folder Path: Use this to upload trained models or models shared by others
- Leave all other settings at their default values
- Click "Run" to test your model
Dependencies will be installed the first time this flow runs, which may take some time
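The exact contents of an adapter folder depend on how the model was trained. As a rough sketch, LoRA-style adapters saved with Hugging Face PEFT typically contain an `adapter_config.json` plus a weights file, so a quick sanity check before uploading can catch a wrong path (`looks_like_adapter` is a hypothetical helper, not part of Navigator):

```python
from pathlib import Path

# Files typically present in a PEFT/LoRA adapter folder.
REQUIRED = {"adapter_config.json"}
WEIGHT_FILES = {"adapter_model.safetensors", "adapter_model.bin"}

def looks_like_adapter(folder: str) -> bool:
    """Return True if the folder resembles a saved adapter."""
    path = Path(folder)
    if not path.is_dir():
        return False
    names = {p.name for p in path.iterdir()}
    return REQUIRED <= names and bool(names & WEIGHT_FILES)
```

Full fine-tuned models ship different files (e.g. full weight shards and a tokenizer), so treat this check as a heuristic for adapter folders only.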
Compatible Elements for Companion
You can use several different elements to create chatbots for Companion:
- LLM Chat: Create a standard chatbot with a local or cloud-based LLM
- LLM (webFrame): Use webFrame for optimized performance on distributed systems
- Document QnA: Create a chatbot that can answer questions based on your documents
- Advanced RAG: Build a sophisticated retrieval-augmented generation system for knowledge-intensive applications
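To make the retrieval idea behind Document QnA and Advanced RAG concrete, here is a minimal, self-contained sketch of lexical retrieval (cosine similarity over word counts). Production systems use embeddings and a vector index instead, and none of these function names come from Navigator:

```python
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    # Bag-of-words term counts; real RAG systems use learned embeddings.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    # Rank documents by similarity to the query; the top-k results
    # would be passed to the LLM as context in a RAG pipeline.
    q = vectorize(query)
    return sorted(docs, key=lambda d: cosine(q, vectorize(d)), reverse=True)[:k]
```

In a full pipeline the retrieved passages are inserted into the LLM prompt, which is what lets a Document QnA chatbot answer from your own files.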
Deploying Your Custom Chatbot to Companion
- Once you've created and tested your LLM flow in Navigator:
- Click the three-dot menu on your Canvas
- Select "Save for Deployment"
- Enter a name for the deployment version
- Go to your cluster in Navigator:
- Click "Create Deployment"
- Select your saved Canvas version
- Configure deployment settings as needed
- Click "Deploy"
- Wait for the deployment status to show as "Active"
- Open Companion - your custom model will automatically appear in the left sidebar
Using Your Custom Models in Companion
- Each icon in the left sidebar represents a different deployment from Navigator
- You can switch between models at any time by clicking their icons
- Create separate conversation threads for each model
- Each model maintains its own conversation history
For more detailed guidance on fine-tuning a model with custom information, see LLM Dataset Generation.
Advanced Tips
- For optimal performance, match model size to your available hardware resources
- Consider using webFrame for larger models to distribute processing across multiple devices
- When using Document QnA, ensure your documents are properly processed and indexed
- Advanced RAG provides the best results for domain-specific knowledge applications
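As a rule of thumb for matching model size to hardware, weight memory is roughly parameter count times bytes per parameter (about 2 bytes for fp16/bf16, about 0.5 for 4-bit quantization), plus overhead for activations and KV cache. A hypothetical helper for a quick estimate:

```python
def model_memory_gb(params_billion: float, bytes_per_param: float = 2.0) -> float:
    """Rough weight-memory estimate in GiB.

    bytes_per_param: ~2.0 for fp16/bf16, ~1.0 for int8, ~0.5 for 4-bit.
    Actual usage is higher due to activations and KV cache.
    """
    return params_billion * 1e9 * bytes_per_param / (1024 ** 3)

# A 7B model in fp16 needs roughly 13 GiB for weights alone,
# so it will not fit on a device with 8 GB of memory without quantization.
print(round(model_memory_gb(7), 1))        # fp16
print(round(model_memory_gb(7, 0.5), 1))   # 4-bit
```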