LLM Chatbot


If you want to chat with a custom (fine-tuned) LLM, you'll need a Trained Model Artifact. If you haven't created one yet, you can:

- Download a pre-made one

- Create your own using this guide

  1. Create a new Canvas and drag the LLM element onto it.
  2. Select the device cluster you want the LLM to run on.
  3. Select the Large Language Model you want to chat with. Models that can't run on your available devices show their unmet requirements in red.
  4. Add a Hugging Face API Key to access models that require you to accept Hugging Face permissions.
  5. Open the LLM Element settings and make the following adjustments:
    1. Model Dropdown: Select a model you have trained yourself. If you haven't trained a model, download a pre-made one from the link above and point the Model Adapter Folder Path at it.
    2. Model Adapter Folder Path: Use this to upload trained models provided by us, or models shared by others.
  6. Leave all other settings at their defaults.
  7. You can now hit run.
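
Before pointing the Model Adapter Folder Path at a directory, it can help to check that the folder actually contains an adapter checkpoint. The sketch below assumes the adapter follows the common Hugging Face PEFT (LoRA) layout — an `adapter_config.json` plus a weights file — which is typical for fine-tuned adapters, though the exact files this tool expects may differ:

```python
import json
from pathlib import Path

# Typical Hugging Face PEFT (LoRA) adapter layout. This is an assumption
# about what a "Model Adapter Folder" holds; the tool may expect more files.
REQUIRED_CONFIG = "adapter_config.json"
WEIGHT_FILES = ("adapter_model.safetensors", "adapter_model.bin")

def looks_like_adapter_folder(folder: str) -> bool:
    """Return True if `folder` resembles a PEFT adapter checkpoint."""
    path = Path(folder)
    config = path / REQUIRED_CONFIG
    if not config.is_file():
        return False
    try:
        json.loads(config.read_text())  # config must be valid JSON
    except (json.JSONDecodeError, OSError):
        return False
    # At least one weights file must sit next to the config.
    return any((path / name).is_file() for name in WEIGHT_FILES)
```

A check like this catches a half-downloaded or mis-unzipped adapter before a run fails partway through.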

Dependencies are installed the first time this flow is run, so the first run may take a while.