LLM Chatbot
If you want to chat with a custom (fine-tuned) LLM, you'll need a Trained Model Artifact. If you haven't created one yet, you can:
- Download a pre-made one
- Create your own using this guide
- Start by creating a new Canvas and dragging the LLM element onto it.
- Select the device cluster you want the LLM to run on.
- Select the Large Language Model you want to chat with. Models that can't run on your available devices will highlight their requirements in red.
- Add a Hugging Face API Key to access gated models that require accepting permissions on Hugging Face (see the authentication sketch after these steps).
- Open the LLM Element settings and make the following adjustments:
- Model Dropdown: Select a model you have trained yourself. If you haven't trained a model, download a pre-made one from the link above and add it to the Model Adapter Folder Path.
- Model Adapter Folder Path: Use this to upload pre-made trained models we provide, or models shared by others (see the adapter-loading sketch after these steps).
- Leave all other settings at their defaults.
- You can now hit run.
Dependencies are installed the first time this flow is run, so the first run may take a while.
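For reference, gated Hugging Face models are unlocked with an API token. Below is a minimal sketch of authenticating with the `huggingface_hub` library; the token value is a placeholder, and the LLM element itself only needs the key entered in its settings, not this code.

```python
# Illustrative sketch only: authenticate with Hugging Face so gated models can be downloaded.
# The token string below is a placeholder; create a real token under your Hugging Face
# account's Access Tokens page, and accept the model's terms on its model page first.
from huggingface_hub import login

login(token="hf_xxxxxxxxxxxxxxxx")  # placeholder token, not a real key
```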
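Similarly, the Model Adapter Folder Path corresponds to the common pattern of applying fine-tuned adapter weights on top of a base model. The sketch below shows roughly how that looks with the `transformers` and `peft` libraries; the base model name and adapter path are placeholders, and this is illustrative rather than the LLM element's actual implementation.

```python
# Illustrative sketch only: load a base model and apply a trained adapter folder on top of it.
# The model name and adapter path below are placeholders, not values used by the LLM element.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_name = "meta-llama/Llama-2-7b-hf"    # placeholder base model
adapter_path = "path/to/model-adapter-folder"   # placeholder trained adapter folder

tokenizer = AutoTokenizer.from_pretrained(base_model_name)
base_model = AutoModelForCausalLM.from_pretrained(base_model_name)

# Apply the fine-tuned adapter weights to the base model
model = PeftModel.from_pretrained(base_model, adapter_path)

# Chat with the fine-tuned model
prompt = "Hello! What can you help me with today?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```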