Element Registry
Full List of Elements inside webAI
-
Object Detector
A YOLOv8 object detector that can identify 91 different object classes. For in-depth information on the data used to train the model, see here.
Connect to inputs such as a Camera or Media Loader and outputs such as Output Preview or Zone Counter.
If you want to identify certain objects but not others, connect the Class Filter element.
-
Class Filter
Limit the classes an object detector will identify by creating a whitelist of the classes you want to detect. Simply open the settings and list the class names you want to detect, separated by commas.
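For example, to detect only people and vehicles, the settings field might contain a list like the one below. These class names are illustrative; they must match the label set of the connected detector:

```
person, car, truck
```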
-
Object Tracker
Tracks objects identified by an object detector or classifier across a video feed.
Connect it to inputs such as a Camera and outputs such as Output Preview or Zone Counter.
-
Barcode Reader
Scan barcodes with this pre-trained barcode reader. Connect it to inputs such as camera feeds or the Media Loader and outputs such as Output Preview or Zone Counter.
Reads the following barcode types:
- EAN_13
- EAN_8
- UPC_A
- UPC_E
- QR_CODE
-
Zone Counter
The Zone Counter keeps track of the number of objects detected within certain zones. Use this instead of Output Preview as the output element for object detection or classification models.
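Conceptually, a zone counter tests each detection against a zone and tallies the hits. The sketch below is a minimal Python illustration of that idea, not webAI's implementation; the function name and the (x1, y1, x2, y2) box format are assumptions for the example:

```python
# Conceptual sketch of zone counting: a detection is counted when its
# bounding-box center falls inside the zone rectangle.
def count_in_zone(detections, zone):
    """detections: list of (x1, y1, x2, y2) boxes; zone: (x1, y1, x2, y2)."""
    zx1, zy1, zx2, zy2 = zone
    count = 0
    for x1, y1, x2, y2 in detections:
        cx, cy = (x1 + x2) / 2, (y1 + y2) / 2  # box center
        if zx1 <= cx <= zx2 and zy1 <= cy <= zy2:
            count += 1
    return count

boxes = [(10, 10, 50, 50), (200, 200, 240, 260)]
print(count_in_zone(boxes, zone=(0, 0, 100, 100)))  # -> 1
```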
-
Text Reader
Need to read or process text from an image or video feed? This element is a pre-trained Optical Character Recognition (OCR) model.
Connect it to input elements such as Cameras or the Media Loader and outputs such as Output Preview to see the inference.
-
Object Detection Inference
Detect and locate objects in images or videos using models you've trained with the Object Detection Trainer. Connect to a data source and an output element such as Output Preview to see the inference.
-
Image Classification Trainer
Classifiers are computer vision models that determine whether an object is present in a frame or not (think Hot Dog, Not Hot Dog). Not sure that's the right model for your solution? Check out this Guide to Use Cases and Model Architectures.
You will also need a dataset ready for training. If you don't have one, download one here.
-
Object Detection Trainer
Object Detection models allow users to identify objects of certain defined classes. They receive an image as input and output the image with bounding boxes and labels on detected objects.
You will need a dataset of images and annotations in the COCO JSON format. If you don't have a dataset, you can download one here.
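For reference, a minimal COCO JSON file has three top-level lists: images, annotations, and categories, with each bbox given as [x, y, width, height]. The file name and values below are illustrative:

```json
{
  "images": [
    {"id": 1, "file_name": "frame_001.jpg", "width": 640, "height": 480}
  ],
  "annotations": [
    {"id": 1, "image_id": 1, "category_id": 1,
     "bbox": [100, 120, 80, 60], "area": 4800, "iscrowd": 0}
  ],
  "categories": [
    {"id": 1, "name": "person"}
  ]
}
```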
-
Image Classification Inference
Understand and categorize images under a specific label. Requires data inputs, a trained classification model, and an output element such as Output Preview to see the inference.
-
Output Preview
Output Preview is the Navigator-native preview window for viewing the inference on your vision-specific models.
Drag the element onto the canvas at the end of the flow and connect the last element to the endpoint on the left of the Output Preview element. Open the settings to select the minimum level of confidence for bounding boxes drawn in the output preview.
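The effect of that setting is simple to picture: any detection scoring below the threshold is not drawn. A toy Python illustration, not webAI's code; the field names and threshold value are made up for the example:

```python
# Detections below the chosen minimum confidence are filtered out before drawing.
detections = [
    {"label": "person", "confidence": 0.92},
    {"label": "dog", "confidence": 0.41},
]

MIN_CONFIDENCE = 0.5  # hypothetical value chosen in the element's settings

drawn = [d for d in detections if d["confidence"] >= MIN_CONFIDENCE]
print(drawn)  # only the 0.92 "person" detection would get a bounding box
```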
-
Media Loader
The Media Loader is how you upload images and videos into Navigator for processing and inference. Drag the Media Loader onto the canvas and open the settings.
Select the folder of images or videos to upload. Supported file formats:
- .jpg
- .png
- .jpeg
- .npy
- .raw
- .mp4
- .avi
- .mov
Frame Rate: how fast image or video frames are processed. The default setting of "0" processes images at 1 frame per second and videos at the FPS set in the file.
Stay Alive: by default, this element shuts down after all files are uploaded and processed. If your flow needs it to keep running, select this setting.
-
Camera
Camera elements are how you connect a range of cameras to your models and solutions. Navigator supports:
- Computer cameras: built-in cameras or USB-connected cameras
- Network cameras: accessible through a URL, such as an IP or RTSP camera
To connect a camera, drag the element onto the canvas and select which camera you'd like to use in the settings. Rename the camera if you like.
To add an additional camera:
- Open the Devices Menu. If the computer the camera is connected to has not yet been added to the Device Registry, connect it before adding the camera.
- Select 'Input Devices' and then 'Add Input Device'.
- Select which type of camera you're connecting and click Next.
For Computer Cameras: give it a name, select the computer it's attached to, and enter the ID linked with the camera.
For Network Cameras: give it a name, add the URL where the camera is located, and select the computer that should analyze the video (example URL formats below).
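Network camera URLs typically follow standard RTSP or HTTP patterns. The addresses below are placeholders only; the exact path and port depend on your camera vendor:

```
rtsp://username:password@192.168.1.42:554/stream1
http://192.168.1.42:8080/video
```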
-
LLM Dataset Generator
The LLM Dataset Generator is the first step to creating a custom LLM, whether you're building a local expert or a custom model to use with the Document QnA element.
Before you can generate your dataset, gather all the relevant documents you want your model to be built from. These documents must be in a single folder and in the following formats: PDF, text, or docx.
-
LLM Trainer
Train your own custom LLM with this element. Select a supported model, connect your dataset from the LLM Dataset Generator, and click Run. For more info on how to build an LLM Dataset, read our deep dive here.
-
Response API
Connect this element to the output end of the LLM Chat or Document QnA elements to get responses back from the API.
-
Document QnA
Document QnA (also known as RAG, Retrieval-Augmented Generation) is an element for querying and interacting with documents. Once you load them in, you can ask questions of the whole document repository and the model will respond with answers based on the documents. The model will also provide citations showing where in your documents it found its information.
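To make the RAG pattern concrete, here is a minimal, self-contained Python sketch of the retrieve-then-answer loop. It is not webAI's implementation: the embed function is a toy bag-of-words stand-in for a real embedding model, and the final step returns the retrieved passage instead of prompting an LLM:

```python
# Minimal RAG sketch: embed documents, retrieve the best match for a
# question, then answer from the retrieved context (toy version).
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": a word-count vector. Real systems use neural models.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

documents = [
    "The warranty covers parts and labor for two years.",
    "Returns are accepted within 30 days with a receipt.",
]

def answer(question: str) -> str:
    # 1. Retrieve: rank documents by similarity to the question.
    q = embed(question)
    best = max(documents, key=lambda d: cosine(q, embed(d)))
    # 2. Generate: a real system would prompt an LLM with this context;
    #    returning it directly still shows the citation-style grounding.
    return f"Based on your documents: {best}"

print(answer("How long is the warranty?"))
```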