- Adding AI models to ThingsBoard
- Provider configuration
- Model configuration
- Connectivity test
- Next steps
Available since TB Version 4.2
AI models are machine learning or large language models that can process data, generate predictions, detect anomalies, or produce human-like responses. In the context of ThingsBoard, AI models are used to extend IoT data processing capabilities by enabling advanced analytics and automation.
By integrating external AI providers (such as OpenAI, Google Gemini, Azure OpenAI, Amazon Bedrock, etc.), you can:
- Predict future values (e.g., energy consumption or equipment temperature).
- Detect anomalies in real-time telemetry streams (see the industrial equipment fault detection example).
- Classify device states (e.g., OK, Warning, Fault).
- Generate responses or natural-language insights for operators and end-users.
ThingsBoard allows you to configure and connect to different AI providers, manage model settings, and use the models inside the Rule Engine for automation and decision-making.
Adding AI models to ThingsBoard
To add an AI model in ThingsBoard, follow these steps:
- Open the “Settings” page in your ThingsBoard instance.
- Go to the “AI models” tab.
- Click the “Add model” button (located in the top-right corner).
- This will open a form where you can configure the AI model:
- Name – provide a meaningful name for the AI model.
- Provider – select the AI provider and specify its authentication credentials.
- Model ID – choose which model to use (or deployment name, in the case of Azure OpenAI).
- Advanced settings – configure optional parameters (such as temperature, top P, max tokens) if supported by the provider.
- Click “Save” to complete adding the new AI model.
Once saved, the model becomes available for use in the AI request node of the Rule Engine.
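The form fields above map naturally onto a configuration payload. The sketch below assembles one in Python; note that the JSON field names and structure here are assumptions for illustration, not the documented ThingsBoard AI-model schema.

```python
# Illustrative sketch only: the JSON field names below are assumptions,
# not the documented ThingsBoard AI-model schema.
import json

def build_ai_model_payload(name, provider, api_key, model_id, **advanced):
    """Assemble a configuration payload mirroring the form fields above."""
    return {
        "name": name,                      # meaningful display name
        "provider": provider,              # e.g. "OPENAI", "AZURE_OPENAI"
        "credentials": {"apiKey": api_key},
        "modelId": model_id,               # model ID, or deployment name for Azure OpenAI
        # Advanced settings are optional; unset values fall back to provider defaults.
        "advancedSettings": {k: v for k, v in advanced.items() if v is not None},
    }

payload = build_ai_model_payload(
    "energy-forecaster", "OPENAI", "sk-...", "gpt-4o",
    temperature=0.2, topP=None,            # topP left unset -> provider default
)
print(json.dumps(payload, indent=2))
```

Submitting such a payload programmatically would additionally require authenticating against the ThingsBoard REST API first; the exact AI-model endpoint is not covered in this guide.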



Provider configuration
In the “Provider” section, select the AI provider you want to use and the authentication method for that provider (e.g., API key or key file).
Supported AI providers
ThingsBoard currently supports integration with the following AI providers:

OpenAI
- Authentication: API key.
- You can obtain your API key from the OpenAI dashboard.
Azure OpenAI
- Authentication: API key and endpoint.
- You need to create a deployment of the desired model in Azure AI Studio.
- Obtain the API key and endpoint URL from the deployment page.
- Optionally, you may set the service version.
Google AI Gemini
- Authentication: API key.
- You can obtain the API key from the Google AI Studio.
Google Vertex AI Gemini
- Authentication: Service account key file.
- Required parameters:
- Google Cloud Project ID.
- Location of the target model (region).
- Service account key file with sufficient permissions to interact with Vertex AI.
Mistral AI
- Authentication: API key.
- You can obtain your API key from the Mistral AI portal.
Anthropic
- Authentication: API key.
- You can obtain your API key from the Anthropic console.
Amazon Bedrock
- Authentication: AWS IAM credentials.
- Required parameters:
- Access key ID.
- Secret access key.
- AWS region (where inference will run).
Note: Authentication with Bedrock API keys is not supported.
GitHub Models
- Authentication: Personal access token.
- Token must have the models:read permission.
- You can create a token by following this guide.
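The credential requirements above can be summarized in one place. In this sketch the field names are illustrative shorthand, not ThingsBoard's actual configuration schema:

```python
# Summary of what each provider's credentials section needs.
# Field names are illustrative assumptions, not ThingsBoard's schema.
# None marks optional parameters.
PROVIDER_CREDENTIALS = {
    "OpenAI": {"apiKey": "..."},
    "Azure OpenAI": {"apiKey": "...",
                     "endpoint": "https://<resource>.openai.azure.com",
                     "serviceVersion": None},            # optional
    "Google AI Gemini": {"apiKey": "..."},
    "Google Vertex AI Gemini": {"projectId": "...",
                                "location": "us-central1",
                                "serviceAccountKeyFile": "key.json"},
    "Mistral AI": {"apiKey": "..."},
    "Anthropic": {"apiKey": "..."},
    "Amazon Bedrock": {"accessKeyId": "...",
                       "secretAccessKey": "...",
                       "region": "us-east-1"},
    "GitHub Models": {"personalAccessToken": "..."},     # needs models:read
}

def required_fields(provider):
    """Return the non-optional credential fields for a provider."""
    return [k for k, v in PROVIDER_CREDENTIALS[provider].items() if v is not None]
```

For example, `required_fields("Azure OpenAI")` yields the API key and endpoint, while the service version stays optional.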
Model configuration
After you've selected and authenticated your AI provider, you need to specify which particular AI model to use (or deployment name in the case of Azure OpenAI).
For some providers (like OpenAI), ThingsBoard offers autocomplete options with popular models. You are not limited to this list – you can specify any model ID supported by the provider, including model aliases or snapshots. For production usage, we recommend using model snapshots to ensure predictable performance (model aliases may be updated by the provider to point to a new snapshot, which can change response quality).

Advanced model settings
Some models support advanced configuration parameters (depending on the provider), such as:
- Temperature – Adjusts the level of randomness in the model’s output. Higher values increase randomness, while lower values decrease it.
- Top P – Creates a pool of the most probable tokens for the model to choose from. Higher values create a larger and more diverse pool, while lower values create a smaller one.
- Top K – Restricts the model’s choices to a fixed set of the “K” most likely tokens.
- Presence penalty - Applies a fixed penalty to the likelihood of a token if it has already appeared in the text.
- Frequency penalty - Applies a penalty to a token’s likelihood that increases based on its frequency in the text.
- Maximum output tokens – Sets the maximum number of tokens that the model can generate in a single response, limiting its length.
If advanced settings cause errors, try clearing their values so that provider defaults are applied; this often resolves incompatibility issues for certain models.
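Valid ranges for these parameters vary by provider. The helper below sanity-checks values before saving, using common OpenAI-style ranges as an assumed baseline; consult your provider's documentation for the authoritative limits.

```python
# Sanity-check advanced settings before saving. The ranges below follow
# common OpenAI-style conventions and are assumptions; providers may differ.
RANGES = {
    "temperature": (0.0, 2.0),
    "topP": (0.0, 1.0),
    "presencePenalty": (-2.0, 2.0),
    "frequencyPenalty": (-2.0, 2.0),
}

def validate_advanced_settings(settings):
    """Return a list of problems; unset (None) values use provider defaults."""
    problems = []
    for key, (lo, hi) in RANGES.items():
        value = settings.get(key)
        if value is not None and not lo <= value <= hi:
            problems.append(f"{key}={value} outside [{lo}, {hi}]")
    top_k = settings.get("topK")
    if top_k is not None and (not isinstance(top_k, int) or top_k < 1):
        problems.append(f"topK={top_k} must be a positive integer")
    max_tokens = settings.get("maxOutputTokens")
    if max_tokens is not None and max_tokens < 1:
        problems.append(f"maxOutputTokens={max_tokens} must be at least 1")
    return problems
```

An empty settings dictionary passes validation, mirroring the behavior described above: omitted values simply fall back to provider defaults.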
Connectivity test
Click the “Check connectivity” button to validate your configuration. A test request is sent to the provider API using the supplied credentials and model settings.
If the response is successful, you will see a green checkmark ✅.


Best practice: Always use the connectivity check after configuring a provider to ensure smooth runtime execution.
If an error occurs (e.g., invalid API key, non-existent model), an error message with details will be displayed ❌.

This feature ensures your configuration is valid and prevents runtime errors when models are used in production.
Note: Even though the test request is trivial (e.g., “What is the capital of X country?”), providers usually charge for it. However, the cost is minimal.
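If you want to verify credentials outside ThingsBoard first, a minimal request against the provider API does the same job. This sketch assumes OpenAI's chat completions endpoint (adapt the URL and body for other providers); request building is separated from sending so the request can be inspected without network access.

```python
# Stand-alone connectivity probe sketch for an OpenAI-style API.
# Assumes OpenAI's chat completions endpoint; adapt for other providers.
import json
import urllib.request

def build_test_request(api_key, model_id):
    """Build a tiny, cheap test request (providers usually bill even this)."""
    url = "https://api.openai.com/v1/chat/completions"
    headers = {"Authorization": f"Bearer {api_key}",
               "Content-Type": "application/json"}
    body = {"model": model_id,
            "messages": [{"role": "user",
                          "content": "What is the capital of France?"}],
            "max_tokens": 5}               # keep the billed test request tiny
    return url, headers, body

def check_connectivity(api_key, model_id):
    """Send the test request; return True on HTTP 200, False otherwise."""
    url, headers, body = build_test_request(api_key, model_id)
    req = urllib.request.Request(url, data=json.dumps(body).encode(),
                                 headers=headers)
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status == 200
    except Exception as exc:               # invalid key, unknown model, ...
        print(f"Connectivity check failed: {exc}")
        return False
```

Calling `check_connectivity("sk-...", "gpt-4o")` mirrors what the built-in check does: an invalid key or non-existent model ID surfaces as an error instead of a successful response.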
Next steps
- Connect your device - Learn how to connect devices based on your connectivity technology or solution.
- Data visualization - These guides contain instructions on how to configure complex ThingsBoard dashboards.
- Data processing & actions - Learn how to use the ThingsBoard Rule Engine.
- IoT Data analytics - Learn how to use the rule engine to perform basic analytics tasks.
- Hardware samples - Learn how to connect various hardware platforms to ThingsBoard.
- Advanced features - Learn about advanced ThingsBoard features.
- Contribution and Development - Learn about contribution and development in ThingsBoard.