queue.type |
TB_QUEUE_TYPE |
kafka |
kafka (Apache Kafka) |
queue.prefix |
TB_QUEUE_PREFIX |
|
Global queue prefix. If specified, the prefix is prepended to the default topic name: 'prefix.default_topic_name'. The prefix is applied to all topics (and to consumer groups for Kafka); see the example after this table |
queue.in_memory.stats.print-interval-ms |
TB_QUEUE_IN_MEMORY_STATS_PRINT_INTERVAL_MS |
60000 |
Statistics printing interval for the in-memory queue (debug level) |
queue.kafka.bootstrap.servers |
TB_KAFKA_SERVERS |
localhost:9092 |
Kafka Bootstrap Servers |
queue.kafka.ssl.enabled |
TB_KAFKA_SSL_ENABLED |
false |
Enable/disable SSL for Kafka communication |
queue.kafka.ssl.truststore.location |
TB_KAFKA_SSL_TRUSTSTORE_LOCATION |
|
The location of the trust store file |
queue.kafka.ssl.truststore.password |
TB_KAFKA_SSL_TRUSTSTORE_PASSWORD |
|
The password of the trust store file, if specified |
queue.kafka.ssl.keystore.location |
TB_KAFKA_SSL_KEYSTORE_LOCATION |
|
The location of the key store file. This is optional for the client and can be used for two-way authentication for the client |
queue.kafka.ssl.keystore.password |
TB_KAFKA_SSL_KEYSTORE_PASSWORD |
|
The store password for the key store file. This is optional for the client and only needed if ‘ssl.keystore.location’ is configured. Key store password is not supported for PEM format |
queue.kafka.ssl.key.password |
TB_KAFKA_SSL_KEY_PASSWORD |
|
The password of the private key in the key store file or the PEM key specified in ‘keystore.key’ |
queue.kafka.acks |
TB_KAFKA_ACKS |
all |
The number of acknowledgments the producer requires the leader to have received before considering a request complete. This controls the durability of records that are sent. The following settings are allowed: 0, 1 and all |
queue.kafka.retries |
TB_KAFKA_RETRIES |
1 |
Number of retries; the producer will resend any record whose send fails with a potentially transient error |
queue.kafka.compression.type |
TB_KAFKA_COMPRESSION_TYPE |
none |
none or gzip |
queue.kafka.batch.size |
TB_KAFKA_BATCH_SIZE |
16384 |
Default batch size. This setting gives the upper bound of the batch size to be sent |
queue.kafka.linger.ms |
TB_KAFKA_LINGER_MS |
1 |
This variable adds a small amount of artificial delay: rather than immediately sending out a record, the producer waits up to the specified delay so that records can be batched together |
queue.kafka.max.request.size |
TB_KAFKA_MAX_REQUEST_SIZE |
1048576 |
The maximum size of a request in bytes. This setting will limit the number of record batches the producer will send in a single request to avoid sending huge requests |
queue.kafka.max.in.flight.requests.per.connection |
TB_KAFKA_MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION |
5 |
The maximum number of unacknowledged requests the client will send on a single connection before blocking |
queue.kafka.buffer.memory |
TB_BUFFER_MEMORY |
33554432 |
The total bytes of memory the producer can use to buffer records waiting to be sent to the server |
queue.kafka.replication_factor |
TB_QUEUE_KAFKA_REPLICATION_FACTOR |
1 |
The replication factor: the number of copies of the data maintained across multiple Kafka brokers |
queue.kafka.max_poll_interval_ms |
TB_QUEUE_KAFKA_MAX_POLL_INTERVAL_MS |
300000 |
The maximum delay between invocations of poll() when using consumer group management. This places an upper bound on the amount of time that the consumer can be idle before fetching more records |
queue.kafka.max_poll_records |
TB_QUEUE_KAFKA_MAX_POLL_RECORDS |
8192 |
The maximum number of records returned in a single call to poll() |
queue.kafka.max_partition_fetch_bytes |
TB_QUEUE_KAFKA_MAX_PARTITION_FETCH_BYTES |
16777216 |
The maximum amount of data per-partition the server will return. Records are fetched in batches by the consumer |
queue.kafka.fetch_max_bytes |
TB_QUEUE_KAFKA_FETCH_MAX_BYTES |
134217728 |
The maximum amount of data the server will return. Records are fetched in batches by the consumer |
queue.kafka.request.timeout.ms |
TB_QUEUE_KAFKA_REQUEST_TIMEOUT_MS |
30000 |
The maximum amount of time the client will wait for the response of a request (30 seconds) |
queue.kafka.session.timeout.ms |
TB_QUEUE_KAFKA_SESSION_TIMEOUT_MS |
10000 |
The timeout used to detect client failures when using Kafka's group management facility (10 seconds) |
queue.kafka.auto_offset_reset |
TB_QUEUE_KAFKA_AUTO_OFFSET_RESET |
earliest |
earliest, latest or none |
queue.kafka.use_confluent_cloud |
TB_QUEUE_KAFKA_USE_CONFLUENT_CLOUD |
false |
Enable/disable the use of Confluent Cloud |
queue.kafka.confluent.ssl.algorithm |
TB_QUEUE_KAFKA_CONFLUENT_SSL_ALGORITHM |
https |
The endpoint identification algorithm used by clients to validate server hostname. The default value is https |
queue.kafka.confluent.sasl.mechanism |
TB_QUEUE_KAFKA_CONFLUENT_SASL_MECHANISM |
PLAIN |
The mechanism used to authenticate Schema Registry requests. SASL/PLAIN should only be used with TLS/SSL as a transport layer to ensure that clear passwords are not transmitted on the wire without encryption |
queue.kafka.confluent.sasl.config |
TB_QUEUE_KAFKA_CONFLUENT_SASL_JAAS_CONFIG |
org.apache.kafka.common.security.plain.PlainLoginModule required username=\"CLUSTER_API_KEY\" password=\"CLUSTER_API_SECRET\"; |
JAAS configuration used to specify the SASL mechanism and credentials for broker authentication |
queue.kafka.confluent.security.protocol |
TB_QUEUE_KAFKA_CONFLUENT_SECURITY_PROTOCOL |
SASL_SSL |
Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL |
queue.kafka.consumer-properties-per-topic.tb_ota_package.key |
|
max.poll.records |
Key-value properties for the Kafka consumer per specific topic; e.g., tb_ota_package is the topic name for OTA updates, and tb_rule_engine.sq is the topic name for the default SequentialByOriginator queue. Check the TB_QUEUE_CORE_OTA_TOPIC and TB_QUEUE_RE_SQ_TOPIC params |
queue.kafka.consumer-properties-per-topic.tb_ota_package.key.value |
TB_QUEUE_KAFKA_OTA_MAX_POLL_RECORDS |
10 |
Example of specific consumer properties value per topic |
queue.kafka.consumer-properties-per-topic.tb_version_control.key |
|
max.poll.interval.ms |
Example of specific consumer properties value per topic for VC |
queue.kafka.consumer-properties-per-topic.tb_version_control.key.value |
TB_QUEUE_KAFKA_VC_MAX_POLL_INTERVAL_MS |
600000 |
Example of specific consumer properties value per topic for VC |
queue.kafka.other-inline |
TB_QUEUE_KAFKA_OTHER_PROPERTIES |
|
In this section, you can specify custom parameters (semicolon separated) for the Kafka consumer/producer/admin clients; see the example after this table |
queue.kafka.topic-properties.core |
TB_QUEUE_KAFKA_CORE_TOPIC_PROPERTIES |
retention.ms:604800000;segment.bytes:52428800;retention.bytes:1048576000;partitions:1;min.insync.replicas:1 |
Kafka properties for Core topics |
queue.kafka.topic-properties.notifications |
TB_QUEUE_KAFKA_NOTIFICATIONS_TOPIC_PROPERTIES |
retention.ms:604800000;segment.bytes:52428800;retention.bytes:1048576000;partitions:1;min.insync.replicas:1 |
Kafka properties for Notifications topics |
queue.kafka.topic-properties.version-control |
TB_QUEUE_KAFKA_VC_TOPIC_PROPERTIES |
retention.ms:604800000;segment.bytes:52428800;retention.bytes:1048576000;partitions:1;min.insync.replicas:1 |
Kafka properties for Version Control topics |
queue.kafka.topic-properties.housekeeper |
TB_QUEUE_KAFKA_HOUSEKEEPER_TOPIC_PROPERTIES |
retention.ms:604800000;segment.bytes:52428800;retention.bytes:1048576000;partitions:10;min.insync.replicas:1 |
Kafka properties for Housekeeper tasks topic |
queue.kafka.consumer-stats.enabled |
TB_QUEUE_KAFKA_CONSUMER_STATS_ENABLED |
true |
Prints the lag between the consumer group offset and the last message offset in Kafka topics |
queue.kafka.consumer-stats.print-interval-ms |
TB_QUEUE_KAFKA_CONSUMER_STATS_MIN_PRINT_INTERVAL_MS |
60000 |
Statistics printing interval for Kafka's consumer-groups stats |
queue.kafka.consumer-stats.kafka-response-timeout-ms |
TB_QUEUE_KAFKA_CONSUMER_STATS_RESPONSE_TIMEOUT_MS |
1000 |
Time to wait for the stats-loading requests to Kafka to finish |
queue.partitions.hash_function_name |
TB_QUEUE_PARTITIONS_HASH_FUNCTION_NAME |
murmur3_128 |
murmur3_32, murmur3_128 or sha256 |
queue.core.topic |
TB_QUEUE_CORE_TOPIC |
tb_core |
Default topic name |
queue.core.notifications_topic |
TB_QUEUE_CORE_NOTIFICATIONS_TOPIC |
tb_core.notifications |
For high-priority notifications that require minimum latency and processing time |
queue.core.poll-interval |
TB_QUEUE_CORE_POLL_INTERVAL_MS |
25 |
Interval in milliseconds to poll messages by Core microservices |
queue.core.partitions |
TB_QUEUE_CORE_PARTITIONS |
10 |
Number of partitions used by Core microservices |
queue.core.pack-processing-timeout |
TB_QUEUE_CORE_PACK_PROCESSING_TIMEOUT_MS |
2000 |
Timeout for processing a message pack by Core microservices |
queue.core.ota.topic |
TB_QUEUE_CORE_OTA_TOPIC |
tb_ota_package |
Default topic name for OTA updates |
queue.core.ota.pack-interval-ms |
TB_QUEUE_CORE_OTA_PACK_INTERVAL_MS |
60000 |
The interval at which OTA updates are processed for devices. Used to avoid overloading the network with many parallel OTA updates |
queue.core.ota.pack-size |
TB_QUEUE_CORE_OTA_PACK_SIZE |
100 |
The number of OTA update notifications fetched from the queue at a time. The queue stores pairs of firmware and device IDs |
queue.core.usage-stats-topic |
TB_QUEUE_US_TOPIC |
tb_usage_stats |
Stats topic name |
queue.core.stats.enabled |
TB_QUEUE_CORE_STATS_ENABLED |
true |
Enable/disable statistics for Core microservices |
queue.core.stats.print-interval-ms |
TB_QUEUE_CORE_STATS_PRINT_INTERVAL_MS |
60000 |
Statistics printing interval for Core microservices |
queue.core.housekeeper.topic |
TB_HOUSEKEEPER_TOPIC |
tb_housekeeper |
Topic name for Housekeeper tasks |
queue.vc.topic |
TB_QUEUE_VC_TOPIC |
tb_version_control |
Default topic name |
queue.vc.partitions |
TB_QUEUE_VC_PARTITIONS |
10 |
Number of partitions to associate with this queue. Used for scaling the number of messages that can be processed in parallel |
queue.vc.poll-interval |
TB_QUEUE_VC_INTERVAL_MS |
25 |
Interval in milliseconds between polling of the messages if no new messages arrive |
queue.vc.pack-processing-timeout |
TB_QUEUE_VC_PACK_PROCESSING_TIMEOUT_MS |
180000 |
Timeout before retrying all failed and timed-out messages from the processing pack |
queue.vc.msg-chunk-size |
TB_QUEUE_VC_MSG_CHUNK_SIZE |
250000 |
Limit for single queue message size |
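
For orientation, the sketch below shows how a few of the variables above could be combined for a Kafka-backed deployment with a global topic prefix and SSL enabled. Host names, paths, passwords, and the replication factor are illustrative placeholders, not recommended values.

```bash
# Illustrative sketch only: hosts, paths, and passwords are placeholders.
export TB_QUEUE_TYPE=kafka
export TB_QUEUE_PREFIX=prod                       # topics become 'prod.tb_core', 'prod.tb_ota_package', etc.
export TB_KAFKA_SERVERS=kafka-1:9092,kafka-2:9092
export TB_KAFKA_SSL_ENABLED=true
export TB_KAFKA_SSL_TRUSTSTORE_LOCATION=/etc/thingsboard/kafka.truststore.jks
export TB_KAFKA_SSL_TRUSTSTORE_PASSWORD=changeit
export TB_QUEUE_KAFKA_REPLICATION_FACTOR=3        # keep a copy of each topic on 3 brokers
```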
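The topic-properties defaults above show the expected value format: semicolon-separated key:value pairs. The sketch below assumes queue.kafka.other-inline accepts additional Kafka client properties in the same form; the specific keys and values are illustrative, not defaults.

```bash
# Per-topic properties: 'key:value' pairs separated by ';' (illustrative values).
export TB_QUEUE_KAFKA_CORE_TOPIC_PROPERTIES="retention.ms:86400000;segment.bytes:52428800;retention.bytes:1048576000;partitions:3;min.insync.replicas:2"
# Extra consumer/producer/admin client properties, assumed to use the same 'key:value;key:value' form.
export TB_QUEUE_KAFKA_OTHER_PROPERTIES="metadata.max.age.ms:180000;connections.max.idle.ms:180000"
```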