Train an LLM
After generating your LLM Dataset, you're ready to train your custom language model. This guide walks you through the training process.
Setup Process
- Create a new Canvas
- Drag the LLM Trainer Element onto your Canvas
- Open the element settings by clicking the ... button in the corner
Configure Training Settings
Configure the following parameters to prepare your training:
- Base Model Architecture
The default model is a good choice if you want to get a trained model quickly.
Want more options? webAI supports a variety of base models for different use cases. Check out our Supported LLM Base Models page to learn more about all available models.
Supported LLM Base Models
- Dataset Folder Path
Using the Select Directory button, choose the folder where you saved your LLM dataset during the Dataset Generation step.
Need a dataset? Try one of our samples:
Sci-Fi Novels
Logistics Warehouse Management
- Artifact Save Path
Using the Select Directory button, choose the folder where you want to save your trained adapter.
- Base Model Assets Path
Using the Select Directory button, specify the folder where you want to save your base model.
- Evaluator API Key
Add a Groq, OpenAI, Claude, or Gemini API key to enable the Faithfulness and Relevancy benchmarks in your training metrics (a conceptual sketch of how such benchmarks are typically scored appears after this settings list).
If using Groq, a single API key is sufficient. If using other providers (OpenAI, Claude, or Gemini), you must provide at least two different API keys.
You can get a free Groq key here:
Groq API Key
- Batch Size
Recommended setting: 4 for testing. Batch size controls how many training examples are processed per optimization step; smaller values use less memory, and 4 is a safe starting point while you verify the flow.
For all other settings, the default values work well for most use cases.
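Why the evaluator step asks for an API key: the Faithfulness and Relevancy benchmarks use an external model as a judge, which is why a provider key is required. The sketch below is only a rough conceptual illustration of how such LLM-as-judge metrics are commonly computed, not webAI's implementation; the judge model name, prompt wording, and 1-5 scale are assumptions, and the OpenAI Python SDK is used purely as an example provider.

```python
# Hypothetical sketch of LLM-as-judge scoring, NOT webAI's implementation.
# Judge model, prompt wording, and the 1-5 scale are assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # reads the evaluator key from the OPENAI_API_KEY environment variable

def judge(question: str, context: str, answer: str) -> dict:
    """Ask a judge model to rate an answer for faithfulness and relevancy (1-5)."""
    prompt = (
        "Rate the ANSWER on two 1-5 scales and reply only as 'faithfulness,relevancy'.\n"
        "Faithfulness: is the answer fully supported by the CONTEXT?\n"
        "Relevancy: does the answer actually address the QUESTION?\n\n"
        f"QUESTION: {question}\nCONTEXT: {context}\nANSWER: {answer}"
    )
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed judge model; any capable model would do
        messages=[{"role": "user", "content": prompt}],
    )
    # Assumes the judge follows the requested 'faithfulness,relevancy' format.
    faith, rel = reply.choices[0].message.content.strip().split(",")
    return {"faithfulness": int(faith), "relevancy": int(rel)}

print(judge(
    "Where are incoming pallets stored?",
    "Incoming pallets are staged in Warehouse B before putaway.",
    "They are stored in Warehouse B.",
))
```

In webAI you never write this yourself; once a key is provided, the LLM Trainer Element makes the evaluation calls for you and reports the scores in your training metrics.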
Start Training
- Click the Run button to begin the training process
The first time you run this flow, dependencies will be installed, which may take some time.
- Training progress will be displayed in the element
Next Steps
Once training completes, you can use your custom LLM in various workflows or with the Document QnA element.