
kafka 配置大全(中文,英文)

2016-12-02
配置名 | 默认值 | 英文描述 | 中文描述
zookeeper.connectZookeeper host stringZookeeper主机字符串
advertised.host.namenullDEPRECATED: only used when `advertised.listeners` or `listeners` are not set. Use `advertised.listeners` instead.

Hostname to publish to ZooKeeper for clients to use. In IaaS environments, this may need to be different from the interface to which the broker binds. If this is not set, it will use the value for `host.name` if configured. Otherwise it will use the value returned
from java.net.InetAddress.getCanonicalHostName().
弃用的:仅当未设置`advertised.listeners`或`listeners`时使用。 请改用`advertised.listeners`。

要发布到ZooKeeper以供客户端使用的主机名。 在IaaS环境中,这可能需要与代理绑定的接口不同。 如果这没有设置,它将使用`host.name`的值(如果配置)。 否则它将使用从java.net.InetAddress.getCanonicalHostName()返回的值。
advertised.listenersnullListeners to publish to ZooKeeper for clients to use, if different than the listeners above. In IaaS environments, this may need to be different from the interface to which the broker binds. If this is not set, the value for `listeners` will be used.发布到ZooKeeper的客户端使用的侦听器,如果不同于上面的侦听器。 在IaaS环境中,这可能需要与代理绑定的接口不同。 如果没有设置,将使用`listeners`的值。
advertised.portnullDEPRECATED: only used when `advertised.listeners` or `listeners` are not set. Use `advertised.listeners` instead.

The port to publish to ZooKeeper for clients to use. In IaaS environments, this may need to be different from the port to which the broker binds. If this is not set, it will publish the same port that the broker binds to.
弃用的:仅当未设置`advertised.listeners`或`listeners`时使用。 请改用`advertised.listeners`。

发布到ZooKeeper的端口供客户端使用。 在IaaS环境中,这可能需要与代理绑定的端口不同。 如果没有设置,它将发布代理绑定到的相同端口。
auto.create.topics.enableTRUEEnable auto creation of topic on the server启用在服务器上自动创建topic
auto.leader.rebalance.enableTRUEEnables auto leader balancing. A background thread checks and triggers leader balance if required at regular intervals启用自动leader平衡。后台线程会定期检查,并在需要时触发leader平衡
background.threads10The number of threads to use for various background processing tasks用于各种后台处理任务的线程数
broker.id-1The broker id for this server. If unset, a unique broker id will be generated. To avoid conflicts between zookeeper generated broker id's and user configured broker id's, generated broker ids start from reserved.broker.max.id + 1.此服务器的broker id。如果未设置,将自动生成一个唯一的broker id。为避免Zookeeper生成的broker id与用户配置的broker id冲突,自动生成的broker id从reserved.broker.max.id + 1开始。
compression.typeproducerSpecify the final compression type for a given topic. This configuration accepts the standard compression codecs ('gzip', 'snappy', 'lz4'). It additionally accepts 'uncompressed' which is equivalent to no compression; and 'producer' which means retain the
original compression codec set by the producer.
指定给定topic的最终压缩类型。 此配置接受标准压缩编解码器('gzip','snappy','lz4')。 它还接受'uncompressed'(相当于不压缩)以及'producer'(保留由producer设置的原始压缩编解码器)。
delete.topic.enableFALSEEnables delete topic. Delete topic through the admin tool will have no effect if this config is turned off启用删除topic。 如果此配置已关闭,通过管理工具删除topic将没有任何效果
host.name""DEPRECATED: only used when `listeners` is not set. Use `listeners` instead.

hostname of broker. If this is set, it will only bind to this address. If this is not set, it will bind to all interfaces
弃用的:仅在未设置`listeners`时使用。 请改用`listeners`。代理的主机名。 如果设置,它将只绑定到此地址。 如果没有设置,它将绑定到所有接口
leader.imbalance.check.interval.seconds300The frequency with which the partition rebalance check is triggered by the controller控制器触发分区重新平衡检查的频率
leader.imbalance.per.broker.percentage10The ratio of leader imbalance allowed per broker. The controller would trigger a leader balance if it goes above this value per broker. The value is specified in percentage.每个broker允许的leader不平衡比率。如果某个broker超过此值,控制器将触发leader平衡。该值以百分比表示。
listenersnullListener List - Comma-separated list of URIs we will listen on and their protocols.

Specify hostname as 0.0.0.0 to bind to all interfaces.

Leave hostname empty to bind to default interface.

Examples of legal listener lists:

PLAINTEXT://myhost:9092,TRACE://:9091

PLAINTEXT://0.0.0.0:9092, TRACE://localhost:9093
监听器列表 - 我们将监听的URI及其协议的逗号分隔列表。将主机名指定为0.0.0.0可绑定到所有接口;主机名留空则绑定到默认接口。合法的监听器列表示例:PLAINTEXT://myhost:9092,TRACE://:9091 以及 PLAINTEXT://0.0.0.0:9092, TRACE://localhost:9093
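下面是一个最小的示意程序,演示上文几项与监听相关的broker配置在server.properties中对应的key=value形式;实际部署时直接编辑server.properties即可,主机名、端口和路径均为假设的示例值。

```java
import java.util.Properties;

// 示意:与上文 listeners / advertised.listeners 等条目对应的 broker 配置键值
// (主机名、端口和路径为假设的示例值)
public class BrokerListenerConfigSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("broker.id", "0");
        // 绑定所有网卡;对外公布给 ZooKeeper/客户端的地址用 advertised.listeners 单独指定
        props.setProperty("listeners", "PLAINTEXT://0.0.0.0:9092");
        props.setProperty("advertised.listeners", "PLAINTEXT://broker1.example.com:9092");
        props.setProperty("log.dirs", "/data/kafka-logs");
        props.setProperty("zookeeper.connect", "zk1:2181,zk2:2181,zk3:2181");

        // 按 server.properties 的 key=value 形式打印出来
        props.forEach((k, v) -> System.out.println(k + "=" + v));
    }
}
```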
log.dir/tmp/kafka-logsThe directory in which the log data is kept (supplemental for log.dirs property)保存日志数据的目录(对log.dirs属性的补充)
log.dirsnullThe directories in which the log data is kept. If not set, the value in log.dir is used保存日志数据的目录。 如果未设置,则使用log.dir中的值
log.flush.interval.messages9223372036854775807The number of messages accumulated on a log partition before messages are flushed to disk 消息刷新到磁盘之前在日志分区上累积的消息数
log.flush.interval.msnullThe maximum time in ms that a message in any topic is kept in memory before flushed to disk. If not set, the value in log.flush.scheduler.interval.ms is used任何topic中的消息在刷新到磁盘之前保存在内存中的最大时间(以毫秒为单位)。 如果未设置,则使用log.flush.scheduler.interval.ms中的值
log.flush.offset.checkpoint.interval.ms60000The frequency with which we update the persistent record of the last flush which acts as the log recovery point更新最后一次刷新的持久记录(作为日志恢复点)的频率
log.flush.scheduler.interval.ms9223372036854775807The frequency in ms that the log flusher checks whether any log needs to be flushed to disk日志刷新器检查是否有任何日志需要刷新到磁盘的频率(以毫秒为单位)
log.retention.bytes-1The maximum size of the log before deleting it删除日志之前的日志的最大大小
log.retention.hours168The number of hours to keep a log file before deleting it (in hours), tertiary to log.retention.ms property删除日志文件之前保留的小时数(以小时为单位),第三级为log.retention.ms属性
log.retention.minutesnullThe number of minutes to keep a log file before deleting it (in minutes), secondary to log.retention.ms property. If not set, the value in log.retention.hours is used在删除日志文件之前保持日志文件的分钟数(以分钟为单位),次于log.retention.ms属性。 如果未设置,则使用log.retention.hours中的值
log.retention.msnullThe number of milliseconds to keep a log file before deleting it (in milliseconds), If not set, the value in log.retention.minutes is used在删除日志文件之前保留日志文件的毫秒数(以毫秒为单位),如果未设置,则使用log.retention.minutes中的值
log.roll.hours168The maximum time before a new log segment is rolled out (in hours), secondary to log.roll.ms property新日志段推出之前的最长时间(以小时为单位),次于log.roll.ms属性
log.roll.jitter.hours0The maximum jitter to subtract from logRollTimeMillis (in hours), secondary to log.roll.jitter.ms property从logRollTimeMillis中减去的最大抖动(以小时为单位),次于log.roll.jitter.ms属性
log.roll.jitter.msnullThe maximum jitter to subtract from logRollTimeMillis (in milliseconds). If not set, the value in log.roll.jitter.hours is used从logRollTimeMillis中减去的最大抖动(以毫秒为单位)。 如果未设置,则使用log.roll.jitter.hours中的值
log.roll.msnullThe maximum time before a new log segment is rolled out (in milliseconds). If not set, the value in log.roll.hours is used新日志段推出之前的最长时间(以毫秒为单位)。 如果未设置,则使用log.roll.hours中的值
log.segment.bytes1073741824The maximum size of a single log file单个日志文件的最大大小
log.segment.delete.delay.ms60000The amount of time to wait before deleting a file from the filesystem从文件系统中删除文件之前等待的时间
message.max.bytes1000012The maximum size of message that the server can receive服务器可以接收的消息的最大大小
min.insync.replicas1When a producer sets acks to "all" (or "-1"), min.insync.replicas specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful. If this minimum cannot be met, then the producer will raise an exception
(either NotEnoughReplicas or NotEnoughReplicasAfterAppend).When used together, min.insync.replicas and acks allow you to enforce greater durability guarantees. A typical scenario would be to create a topic with a replication factor of 3, set min.insync.replicas
to 2, and produce with acks of "all". This will ensure that the producer raises an exception if a majority of replicas do not receive a write.
当生产者将acks设置为"all"(或"-1")时,min.insync.replicas指定必须确认写入的副本的最小数量,写入才被认为成功。 如果无法满足这个最小值,生产者将引发异常(NotEnoughReplicas或NotEnoughReplicasAfterAppend)。当一起使用时,min.insync.replicas和acks允许你实施更强的持久性保证。 典型的场景是创建一个复制因子为3的主题,将min.insync.replicas设置为2,并以acks为"all"进行生产。
这将确保在大多数副本没有接收到写入时,生产者引发异常。
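下面是按上文描述的典型场景(复制因子为3的topic、min.insync.replicas=2、生产者acks=all)给出的一个最小示意。假设使用提供AdminClient的较新kafka-clients版本;broker地址与topic名均为假设的示例值。

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

// 示意:复制因子 3 + min.insync.replicas=2 + acks=all 的组合
public class MinInsyncReplicasSketch {
    public static void main(String[] args) throws Exception {
        Properties adminProps = new Properties();
        adminProps.put("bootstrap.servers", "broker1:9092");

        try (AdminClient admin = AdminClient.create(adminProps)) {
            NewTopic topic = new NewTopic("demo-topic", 3, (short) 3);
            topic.configs(Collections.singletonMap("min.insync.replicas", "2"));
            admin.createTopics(Collections.singleton(topic)).all().get();
        }

        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", "broker1:9092");
        producerProps.put("acks", "all"); // 等待全部同步副本确认
        producerProps.put("key.serializer", StringSerializer.class.getName());
        producerProps.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            // 若同步副本数低于 min.insync.replicas,这次发送将以 NotEnoughReplicas 异常失败
            producer.send(new ProducerRecord<>("demo-topic", "key", "value")).get();
        }
    }
}
```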
num.io.threads8The number of io threads that the server uses for carrying out network requests服务器用于执行网络请求的io线程数
num.network.threads3the number of network threads that the server uses for handling network requests服务器用于处理网络请求的网络线程数
num.recovery.threads.per.data.dir1The number of threads per data directory to be used for log recovery at startup and flushing at shutdown每个数据目录的线程数,用于在启动时进行日志恢复并在关闭时刷新
num.replica.fetchers1Number of fetcher threads used to replicate messages from a source broker. Increasing this value can increase the degree of I/O parallelism in the follower broker.用于从源broker复制消息的提取线程数。 增加此值可以提高跟随器broker中的I / O并行度。
offset.metadata.max.bytes4096The maximum size for a metadata entry associated with an offset commit与offset提交关联的元数据条目的最大大小
offsets.commit.required.acks-1The required acks before the commit can be accepted. In general, the default (-1) should not be overridden可以接受提交之前所需的acks。 通常,不应覆盖默认值(-1)
offsets.commit.timeout.ms5000Offset commit will be delayed until all replicas for the offsets topic receive the commit or this timeout is reached. This is similar to the producer request timeout.偏移提交将被延迟,直到偏移主题的所有副本都收到提交或达到此超时。 这类似于生产者请求超时。
offsets.load.buffer.size5242880Batch size for reading from the offsets segments when loading offsets into the cache.用于在将偏移量装入缓存时从偏移段读取的批量大小。
offsets.retention.check.interval.ms600000Frequency at which to check for stale offsets检查旧偏移的频率
offsets.retention.minutes1440Log retention window in minutes for offsets topic偏移topic的日志保留时间(分钟)
offsets.topic.compression.codec0Compression codec for the offsets topic - compression may be used to achieve "atomic" commits用于偏移topic的压缩编解码器 - 压缩可以用于实现“原子”提交
offsets.topic.num.partitions50The number of partitions for the offset commit topic (should not change after deployment)偏移提交topic的分区数(部署后不应更改)
offsets.topic.replication.factor3The replication factor for the offsets topic (set higher to ensure availability). To ensure that the effective replication factor of the offsets topic is the configured value, the number of alive brokers has to be at least the replication factor at the
time of the first request for the offsets topic. If not, either the offsets topic creation will fail or it will get a replication factor of min(alive brokers, configured replication factor)
偏移topic的复制因子(设置得更高以确保可用性)。为确保偏移topic的实际复制因子等于配置值,在第一次请求偏移topic时,存活的broker数必须不少于该复制因子;否则偏移topic要么创建失败,要么得到min(存活broker数, 配置的复制因子)的复制因子
offsets.topic.segment.bytes104857600The offsets topic segment bytes should be kept relatively small in order to facilitate faster log compaction and cache loads偏移主题段字节应保持相对较小,以便于加快日志压缩和缓存加载
port9092DEPRECATED: only used when `listeners` is not set. Use `listeners` instead.

the port to listen and accept connections on
弃用的:仅在未设置`listeners`时使用。 使用`listeners`代替。

用于监听和接受连接的端口
queued.max.requests500The number of queued requests allowed before blocking the network threads在阻止网络线程之前允许的排队请求数
quota.consumer.default9223372036854775807DEPRECATED: Used only when dynamic default quotas are not configured for or in Zookeeper. Any consumer distinguished by clientId/consumer group will get throttled if it fetches more bytes than this value per-second弃用的:仅当未在Zookeeper中配置动态默认配额时使用。 由clientId/consumer组区分的任何消费者,如果每秒获取的字节数超过此值,则会受到限制
quota.producer.default9223372036854775807DEPRECATED: Used only when dynamic default quotas are not configured for , or in Zookeeper. Any producer distinguished by clientId will get throttled if it produces more bytes than this value per-second弃用的:仅当未在Zookeeper中配置动态默认配额时使用。 由clientId区分的任何生产者,如果每秒产生的字节数超过此值,则会受到限制
replica.fetch.min.bytes1Minimum bytes expected for each fetch response. If not enough bytes, wait up to replicaMaxWaitTimeMs每个获取响应所需的最小字节数。 如果没有足够的字节,请等待到replicaMaxWaitTimeMs
replica.fetch.wait.max.ms500max wait time for each fetcher request issued by follower replicas. This value should always be less than the replica.lag.time.max.ms at all times to prevent frequent shrinking of ISR for low throughput topics由跟随者副本发出的每个获取器请求的最大等待时间。 此值应始终小于replica.lag.time.max.ms以防止低吞吐量topic的ISR频繁收缩
replica.high.watermark.checkpoint.interval.ms5000The frequency with which the high watermark is saved out to diskhigh watermark保存到磁盘的频率
replica.lag.time.max.ms10000If a follower hasn't sent any fetch requests or hasn't consumed up to the leaders log end offset for at least this time, the leader will remove the follower from isr如果follower在至少这么长的时间内没有发送任何fetch请求,或者没有消费到leader日志的末端偏移,leader将把该follower从ISR中移除
replica.socket.receive.buffer.bytes65536The socket receive buffer for network requests用于网络请求的套接字接收缓冲区
replica.socket.timeout.ms30000The socket timeout for network requests. Its value should be at least replica.fetch.wait.max.ms网络请求的套接字超时。 其值应至少为replica.fetch.wait.max.ms
request.timeout.ms30000The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted.配置控制客户端等待请求响应的最大时间量。 如果在超时时间之前没有收到响应,客户端将在必要时重新发送请求,如果重试次数耗尽,则请求失败。
socket.receive.buffer.bytes102400The SO_RCVBUF buffer of the socket sever sockets. If the value is -1, the OS default will be used.服务器套接字的SO_RCVBUF缓冲区。 如果值为-1,将使用操作系统默认值。
socket.request.max.bytes104857600The maximum number of bytes in a socket request套接字请求中的最大字节数
socket.send.buffer.bytes102400The SO_SNDBUF buffer of the socket sever sockets. If the value is -1, the OS default will be used.服务器套接字的SO_SNDBUF缓冲区。 如果值为-1,将使用操作系统默认值。
unclean.leader.election.enableTRUEIndicates whether to enable replicas not in the ISR set to be elected as leader as a last resort, even though doing so may result in data loss指示是否允许不在ISR集中的副本作为最后手段被选为leader,即使这样做可能会导致数据丢失
zookeeper.connection.timeout.msnullThe max time that the client waits to establish a connection to zookeeper. If not set, the value in zookeeper.session.timeout.ms is used客户端等待与zookeeper建立连接的最长时间。 如果未设置,则使用zookeeper.session.timeout.ms中的值
zookeeper.session.timeout.ms6000Zookeeper session timeoutZookeeper会话超时
zookeeper.set.aclFALSESet client to use secure ACLs设置客户端使用安全ACLS
broker.id.generation.enableTRUEEnable automatic broker id generation on the server. When enabled the value configured for reserved.broker.max.id should be reviewed.在服务器上启用自动broker id生成。 启用时,应检查为reserved.broker.max.id配置的值。
broker.racknullRack of the broker. This will be used in rack aware replication assignment for fault tolerance. Examples: `RACK1`, `us-east-1d`代理的机架。 这将在机架感知复制分配中用于容错。 示例:`RACK1`,`us-east-1d`
connections.max.idle.ms600000Idle connections timeout: the server socket processor threads close the connections that idle more than this空闲连接超时:服务器socket处理器线程关闭空闲的连接超过这个时间
controlled.shutdown.enableTRUEEnable controlled shutdown of the server启用服务器的受控关闭
controlled.shutdown.max.retries3Controlled shutdown can fail for multiple reasons. This determines the number of retries when such failure happens受控关机可能由于多种原因而失败。 这将确定发生此类故障时的重试次数
controlled.shutdown.retry.backoff.ms5000Before each retry, the system needs time to recover from the state that caused the previous failure (Controller fail over, replica lag etc). This config determines the amount of time to wait before retrying.在每次重试之前,系统需要时间从导致先前故障的状态(控制器故障切换,副本滞后等)恢复。 此配置确定重试之前等待的时间量。
controller.socket.timeout.ms30000The socket timeout for controller-to-broker channels控制器到代理通道的套接字超时时间
default.replication.factor1default replication factors for automatically created topics自动创建的topic的默认复制因子
fetch.purgatory.purge.interval.requests1000The purge interval (in number of requests) of the fetch request purgatoryfetch请求purgatory的清除间隔(以请求数计)
group.max.session.timeout.ms300000The maximum allowed session timeout for registered consumers. Longer timeouts give consumers more time to process messages in between heartbeats at the cost of a longer time to detect failures.注册的消费者的最大允许会话超时时间。 更长的超时使消费者有更多的时间在心跳检测,但花费更长的时间来检测故障。
group.min.session.timeout.ms6000The minimum allowed session timeout for registered consumers. Shorter timeouts lead to quicker failure detection at the cost of more frequent consumer heartbeating, which can overwhelm broker resources.注册的消费者的最小允许会话超时。 更短的超时可以更快地检测故障,代价是更频繁的消费者心跳,这可能会压垮broker资源。
inter.broker.protocol.version0.10.1-IV2Specify which version of the inter-broker protocol will be used.

This is typically bumped after all brokers were upgraded to a new version.

Example of some valid values are: 0.8.0, 0.8.1, 0.8.1.1, 0.8.2, 0.8.2.0, 0.8.2.1, 0.9.0.0, 0.9.0.1 Check ApiVersion for the full list.
指定将使用代理间协议的哪个版本。

这通常在所有代理都升级到新版本之后再提升。

一些有效值的示例是:0.8.0,0.8.1,0.8.1.1,0.8.2,0.8.2.0,0.8.2.1,0.9.0.0,0.9.0.1。完整列表请查看ApiVersion。
log.cleaner.backoff.ms15000The amount of time to sleep when there are no logs to clean当没有日志要清理时,睡眠的时间量
log.cleaner.dedupe.buffer.size134217728The total memory used for log deduplication across all cleaner threads用于所有清除程序线程的日志重复数据删除的总内存
log.cleaner.delete.retention.ms86400000How long are delete records retained?删除的记录保留多长时间?
log.cleaner.enableTRUEEnable the log cleaner process to run on the server? Should be enabled if using any topics with a cleanup.policy=compact including the internal offsets topic. If disabled those topics will not be compacted and continually grow in size.是否在服务器上运行日志清理器进程? 如果使用任何cleanup.policy=compact的主题(包括内部偏移主题),则应启用。 如果禁用,这些主题将不会被压缩,并且大小会不断增长。
log.cleaner.io.buffer.load.factor0.9Log cleaner dedupe buffer load factor. The percentage full the dedupe buffer can become. A higher value will allow more log to be cleaned at once but will lead to more hash collisions日志清除器重复数据删除缓冲区负载因子。 重复数据删除缓冲区已满的百分比。 较高的值将允许同时清除更多的日志,但会导致更多的哈希冲突
log.cleaner.io.buffer.size524288The total memory used for log cleaner I/O buffers across all cleaner threads用于清理所有清除程序线程的日志清除器I / O缓冲区的总内存
log.cleaner.io.max.bytes.per.second1.7976931348623157E308The log cleaner will be throttled so that the sum of its read and write i/o will be less than this value on average日志清理器将被节流,使其读写I/O之和平均小于此值
log.cleaner.min.cleanable.ratio0.5The minimum ratio of dirty log to total log for a log to eligible for cleaning脏日志占总日志的最小比率,达到该比率的日志才有资格被清理
log.cleaner.min.compaction.lag.ms0The minimum time a message will remain uncompacted in the log. Only applicable for logs that are being compacted.消息在日志中保持未压缩状态的最短时间。 仅适用于正在压缩的日志。
log.cleaner.threads1The number of background threads to use for log cleaning用于日志清理的后台线程数
log.cleanup.policy[delete]The default cleanup policy for segments beyond the retention window. A comma separated list of valid policies. Valid policies are: "delete" and "compact"超出保留时间段的段的默认清除策略。 逗号分隔的有效策略列表。 有效的策略是:“delete”和“compact”
log.index.interval.bytes4096The interval with which we add an entry to the offset index我们向偏移索引添加条目的间隔
log.index.size.max.bytes10485760The maximum size in bytes of the offset index偏移索引的最大大小(以字节为单位)
log.message.format.version0.10.1-IV2Specify the message format version the broker will use to append messages to the logs. The value should be a valid ApiVersion. Some examples are: 0.8.2, 0.9.0.0, 0.10.0, check ApiVersion for more details. By setting a particular message format version,
the user is certifying that all the existing messages on disk are smaller or equal than the specified version. Setting this value incorrectly will cause consumers with older versions to break as they will receive messages with a format that they don't understand.
指定代理将用于将消息附加到日志的消息格式版本。 该值应为有效的ApiVersion。 一些示例是:0.8.2,0.9.0.0,0.10.0,详情请查看ApiVersion。 通过设置特定的消息格式版本,用户证明磁盘上的所有现有消息小于或等于指定的版本。 不正确地设置此值将导致旧版本的消费者出错,因为他们将收到无法理解的格式的消息。
log.message.timestamp.difference.max.ms9223372036854775807The maximum difference allowed between the timestamp when a broker receives a message and the timestamp specified in the message. If log.message.timestamp.type=CreateTime, a message will be rejected if the difference in timestamp exceeds this threshold.
This configuration is ignored if log.message.timestamp.type=LogAppendTime.
代理接收消息时的时间戳和消息中指定的时间戳之间允许的最大差异。 如果log.message.timestamp.type=CreateTime,则当时间戳的差异超过此阈值时,该消息将被拒绝。 如果log.message.timestamp.type=LogAppendTime,则忽略此配置。
log.message.timestamp.typeCreateTimeDefine whether the timestamp in the message is message create time or log append time. The value should be either `CreateTime` or `LogAppendTime`定义消息中的时间戳是消息创建时间还是日志附加时间。 该值应为`CreateTime`或`LogAppendTime`
log.preallocateFALSEShould pre allocate file when create new segment? If you are using Kafka on Windows, you probably need to set it to true.应该在创建新段时预分配文件? 如果您在Windows上使用Kafka,则可能需要将其设置为true。
log.retention.check.interval.ms300000The frequency in milliseconds that the log cleaner checks whether any log is eligible for deletion日志清除程序检查任何日志是否有资格删除的频率(以毫秒为单位)
max.connections.per.ip2147483647The maximum number of connections we allow from each ip address我们从每个IP地址允许的最大连接数
max.connections.per.ip.overrides""Per-ip or hostname overrides to the default maximum number of connectionsper-ip或hostname覆盖默认最大连接数
num.partitions1The default number of log partitions per topic每个topic的默认日志分区数
principal.builder.classclass org.apache.kafka.common.security.auth.DefaultPrincipalBuilderThe fully qualified name of a class that implements the PrincipalBuilder interface, which is currently used to build the Principal for connections with the SSL SecurityProtocol.实现PrincipalBuilder接口的类的完全限定名,该接口当前用于为使用SSL SecurityProtocol的连接构建Principal。
producer.purgatory.purge.interval.requests1000The purge interval (in number of requests) of the producer request purgatory生产者请求purgatory的清除间隔(请求数)
replica.fetch.backoff.ms1000The amount of time to sleep when fetch partition error occurs.发生抓取分区错误时休眠的时间。
replica.fetch.max.bytes1048576The number of bytes of messages to attempt to fetch for each partition. This is not an absolute maximum, if the first message in the first non-empty partition of the fetch is larger than this value, the message will still be returned to ensure that progress
can be made. The maximum message size accepted by the broker is defined via message.max.bytes (broker config) or max.message.bytes (topic config).
尝试为每个分区提取的消息的字节数。 这不是绝对最大值,如果提取的第一个非空分区中的第一个消息大于此值,则仍会返回该消息以确保可以取得进展。 代理接受的最大消息大小通过message.max.bytes(broker config)或max.message.bytes(topic config)定义。
replica.fetch.response.max.bytes10485760Maximum bytes expected for the entire fetch response. This is not an absolute maximum, if the first message in the first non-empty partition of the fetch is larger than this value, the message will still be returned to ensure that progress can be made.
The maximum message size accepted by the broker is defined via message.max.bytes (broker config) or max.message.bytes (topic config).
整个获取响应所需的最大字节数。 这不是绝对最大值,如果提取的第一个非空分区中的第一个消息大于此值,则仍会返回该消息以确保可以取得进展。 代理接受的最大消息大小通过message.max.bytes(broker config)或max.message.bytes(topic config)定义。
reserved.broker.max.id1000Max number that can be used for a broker.id可用于broker.id的最大数量
sasl.enabled.mechanisms[GSSAPI]The list of SASL mechanisms enabled in the Kafka server. The list may contain any mechanism for which a security provider is available. Only GSSAPI is enabled by default.Kafka服务器中启用的SASL机制列表。 该列表可以包含安全提供者可用的任何机制。 默认情况下仅启用GSSAPI。
sasl.kerberos.kinit.cmd/usr/bin/kinitKerberos kinit command path.Kerberos kinit命令路径。
sasl.kerberos.min.time.before.relogin60000Login thread sleep time between refresh attempts.登录线程在刷新尝试之间的休眠时间。
sasl.kerberos.principal.to.local.rules[DEFAULT]A list of rules for mapping from principal names to short names (typically operating system usernames). The rules are evaluated in order and the first rule that matches a principal name is used to map it to a short name. Any later rules in the list are
ignored. By default, principal names of the form {username}/{hostname}@{REALM} are mapped to {username}. For more details on the format please see security authorization and acls.
用于从主体名称到短名称(通常是操作系统用户名)的映射规则列表。 按顺序评估规则,并且使用与主体名称匹配的第一个规则将其映射到短名称。 将忽略列表中的任何后续规则。 默认情况下,{username} / {hostname} @ {REALM}形式的主体名称映射到{username}。 有关格式的详细信息,请参阅安全授权和acls。
sasl.kerberos.service.namenullThe Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config.Kafka运行的Kerberos主体名称。 这可以在Kafka的JAAS配置或Kafka的配置中定义。
sasl.kerberos.ticket.renew.jitter0.05Percentage of random jitter added to the renewal time.添加到更新时间的随机抖动的百分比。
sasl.kerberos.ticket.renew.window.factor0.8Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket.登录线程将休眠,直到达到从上次刷新到票据到期时间的指定窗口因子,届时它将尝试更新票据。
sasl.mechanism.inter.broker.protocolGSSAPISASL mechanism used for inter-broker communication. Default is GSSAPI.SASL机制用于代理间通信。 默认为GSSAPI。
security.inter.broker.protocolPLAINTEXTSecurity protocol used to communicate between brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.用于在代理之间通信的安全协议。 有效值为:PLAINTEXT,SSL,SASL_PLAINTEXT,SASL_SSL。
ssl.cipher.suitesnullA list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites
are supported.
密码套件列表。 这是一种命名的认证,加密,MAC和密钥交换算法的组合,用于使用TLS或SSL网络协议协商网络连接的安全设置。 默认情况下,支持所有可用的密码套件。
ssl.client.authnoneConfigures kafka broker to request client authentication. The following settings are common: ssl.client.auth=required If set to required client authentication is required. ssl.client.auth=requested This means client authentication is optional. Unlike
required, if this option is set the client can choose not to provide authentication information about itself. ssl.client.auth=none This means client authentication is not needed.
配置kafka代理以请求客户端认证。 以下是常见设置:ssl.client.auth=required 表示需要客户端身份验证;ssl.client.auth=requested 表示客户端认证是可选的,与required不同,设置此选项时客户端可以选择不提供自身的身份验证信息;ssl.client.auth=none 表示不需要客户端身份验证。
ssl.enabled.protocols[TLSv1.2, TLSv1.1, TLSv1]The list of protocols enabled for SSL connections.为SSL连接启用的协议列表。
ssl.key.passwordnullThe password of the private key in the key store file. This is optional for client.密钥存储文件中私钥的密码。 这对于客户端是可选的。
ssl.keymanager.algorithmSunX509The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine.密钥管理器工厂用于SSL连接的算法。 默认值是为Java虚拟机配置的密钥管理器工厂算法。
ssl.keystore.locationnullThe location of the key store file. This is optional for client and can be used for two-way authentication for client.密钥存储文件的位置。 这对于客户端是可选的,并且可以用于客户端的双向认证。
ssl.keystore.passwordnullThe store password for the key store file. This is optional for client and only needed if ssl.keystore.location is configured. 密钥存储文件的存储密码。 这对于客户端是可选的,只有在配置了ssl.keystore.location时才需要。
ssl.keystore.typeJKSThe file format of the key store file. This is optional for client.密钥存储文件的文件格式。 这对于客户端是可选的。
ssl.protocolTLSThe SSL protocol used to generate the SSLContext. Default setting is TLS, which is fine for most cases. Allowed values in recent JVMs are TLS, TLSv1.1 and TLSv1.2. SSL, SSLv2 and SSLv3 may be supported in older JVMs, but their usage is discouraged due to
known security vulnerabilities.
用于生成SSLContext的SSL协议。 默认设置为TLS,这在大多数情况下是合适的。 最近的JVM中允许的值为TLS,TLSv1.1和TLSv1.2。 较旧的JVM可能支持SSL,SSLv2和SSLv3,但由于已知的安全漏洞,不建议使用它们。
ssl.providernullThe name of the security provider used for SSL connections. Default value is the default security provider of the JVM.用于SSL连接的安全提供程序的名称。 默认值是JVM的默认安全提供程序。
ssl.trustmanager.algorithmPKIXThe algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine.信任管理器工厂用于SSL连接的算法。 默认值是为Java虚拟机配置的信任管理器工厂算法。
ssl.truststore.locationnullThe location of the trust store file. 信任存储文件的位置。
ssl.truststore.passwordnullThe password for the trust store file. 信任存储文件的密码。
ssl.truststore.typeJKSThe file format of the trust store file.信任存储文件的文件格式。
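上面这组ssl.*配置在broker侧使用;后文的producer/consumer客户端配置中也出现同名条目。下面是一个最小的客户端SSL配置示意,文件路径与密码均为假设的示例值。

```java
import java.util.Properties;

// 示意:客户端侧启用 SSL 时常见的一组配置(与上文同名配置相对应)
public class SslClientConfigSketch {
    public static Properties sslProps() {
        Properties props = new Properties();
        props.put("security.protocol", "SSL");
        props.put("ssl.truststore.location", "/etc/kafka/client.truststore.jks");
        props.put("ssl.truststore.password", "changeit");
        // 仅在 broker 设置 ssl.client.auth=required(双向认证)时才需要 keystore
        props.put("ssl.keystore.location", "/etc/kafka/client.keystore.jks");
        props.put("ssl.keystore.password", "changeit");
        props.put("ssl.key.password", "changeit");
        return props;
    }
}
```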
authorizer.class.name""The authorizer class that should be used for authorization应该用于授权的授权程序类
metric.reporters[]A list of classes to use as metrics reporters. Implementing the MetricReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics.用作度量报告器的类的列表。 实现MetricReporter接口允许插入将通知新度量标准创建的类。 总是包括JmxReporter以注册JMX统计信息。
metrics.num.samples2The number of samples maintained to compute metrics.维持计算度量的样本数。
metrics.sample.window.ms30000The window of time a metrics sample is computed over.计算度量样本的时间窗口。
quota.window.num11The number of samples to retain in memory for client quotas要在客户端配额的内存中保留的样本数
quota.window.size.seconds1The time span of each sample for client quotas客户端配额的每个样本的时间跨度
replication.quota.window.num11The number of samples to retain in memory for replication quotas要在复制配额的内存中保留的样本数
replication.quota.window.size.seconds1The time span of each sample for replication quotas复制配额的每个样本的时间跨度
ssl.endpoint.identification.algorithmnullThe endpoint identification algorithm to validate server hostname using server certificate. 端点标识算法,使用服务器证书验证服务器主机名。
ssl.secure.random.implementationnullThe SecureRandom PRNG implementation to use for SSL cryptography operations. 用于SSL加密操作的SecureRandom PRNG实现。
zookeeper.sync.time.ms2000How far a ZK follower can be behind a ZK leaderZK follower可以落后ZK leader多远
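以上为broker级配置。下面是一个查看某个broker上这些配置实际生效值的最小示意,假设使用提供AdminClient的较新kafka-clients版本(0.11及以后);broker地址与id为示例值。

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.common.config.ConfigResource;

// 示意:用 AdminClient 列出 broker 0 上各配置项的生效值
public class DescribeBrokerConfigSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource broker = new ConfigResource(ConfigResource.Type.BROKER, "0");
            Config config = admin.describeConfigs(Collections.singleton(broker))
                                 .all().get().get(broker);
            config.entries().forEach(e ->
                    System.out.println(e.name() + " = " + e.value()));
        }
    }
}
```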
cleanup.policy[delete]A string that is either "delete" or "compact". This string designates the retention policy to use on old log segments. The default policy ("delete") will discard old segments when their retention time or size limit has been reached. The "compact" setting
will enable log compaction on the topic.
"delete" or "compact"的字符串。 此字符串指定要在旧日志段上使用的保留策略。 默认策略(“删除”)会在达到保留时间或大小限制时丢弃旧细分。 “compact”设置将对主题启用日志压缩。
compression.typeproducerSpecify the final compression type for a given topic. This configuration accepts the standard compression codecs ('gzip', 'snappy', lz4). It additionally accepts 'uncompressed' which is equivalent to no compression; and 'producer' which means retain the
original compression codec set by the producer.
指定给定主题的最终压缩类型。 此配置接受标准压缩编解码器('gzip','snappy',lz4)。 它还接受'uncompressed'(相当于不压缩)以及'producer'(保留由生产者设置的原始压缩编解码器)。
delete.retention.ms86400000The amount of time to retain delete tombstone markers for log compacted topics. This setting also gives a bound on the time in which a consumer must complete a read if they begin from offset 0 to ensure that they get a valid snapshot of the final stage
(otherwise delete tombstones may be collected before they complete their scan).
保留删除日志压缩主题的逻辑删除标记的时间量。 此设置还给出了消费者必须完成读取的时间的界限,如果它们从偏移0开始,以确保它们获得最后阶段的有效快照(否则可以在完成扫描之前收集删除的tombstones)。
file.delete.delay.ms60000The time to wait before deleting a file from the filesystem从文件系统中删除文件之前等待的时间
flush.messages9223372036854775807This setting allows specifying an interval at which we will force an fsync of data written to the log. For example if this was set to 1 we would fsync after every message; if it were 5 we would fsync after every five messages. In general we recommend you
not set this and use replication for durability and allow the operating system's background flush capabilities as it is more efficient. This setting can be overridden on a per-topic basis (see the per-topic configuration section).
此设置允许指定一个间隔,我们将按该间隔对写入日志的数据强制执行fsync。 例如,如果设置为1,我们将在每条消息后fsync; 如果是5,我们将在每5条消息后fsync。 一般来说,我们建议您不要设置此值,而是使用复制来保证持久性,并让操作系统利用其后台刷新功能,因为这样更高效。 此设置可以按主题覆盖(请参阅每主题配置部分)。
flush.ms9223372036854775807This setting allows specifying a time interval at which we will force an fsync of data written to the log. For example if this was set to 1000 we would fsync after 1000 ms had passed. In general we recommend you not set this and use replication for durability
and allow the operating system's background flush capabilities as it is more efficient.
此设置允许指定一个时间间隔,我们将按该间隔对写入日志的数据强制执行fsync。 例如,如果设置为1000,我们将在1000毫秒过去后fsync。 一般来说,我们建议您不要设置此值,而是使用复制来保证持久性,并让操作系统利用其后台刷新功能,因为这样更高效。
follower.replication.throttled.replicas[]A list of replicas for which log replication should be throttled on the follower side. The list should describe a set of replicas in the form [PartitionId]:[BrokerId],[PartitionId]:[BrokerId]:... or alternatively the wildcard '*' can be used to throttle
all replicas for this topic.
在follower端应限制日志复制的副本列表。 该列表应以 [PartitionId]:[BrokerId],[PartitionId]:[BrokerId]:... 的形式描述一组副本,或者使用通配符'*'来限制此主题的所有副本。
index.interval.bytes4096This setting controls how frequently Kafka adds an index entry to it's offset index. The default setting ensures that we index a message roughly every 4096 bytes. More indexing allows reads to jump closer to the exact position in the log but makes the index
larger. You probably don't need to change this.
此设置控制Kafka向其偏移索引添加索引条目的频率。 默认设置确保我们大约每4096个字节对消息进行索引。 更多索引允许读取更接近日志中的确切位置,但使索引更大。 你可能不需要改变这个。
leader.replication.throttled.replicas[]A list of replicas for which log replication should be throttled on the leader side. The list should describe a set of replicas in the form [PartitionId]:[BrokerId],[PartitionId]:[BrokerId]:... or alternatively the wildcard '*' can be used to throttle all
replicas for this topic.
在leader端应限制日志复制的副本列表。 该列表应以 [PartitionId]:[BrokerId],[PartitionId]:[BrokerId]:... 的形式描述一组副本,或者使用通配符'*'来限制此主题的所有副本。
max.message.bytes1000012This is largest message size Kafka will allow to be appended. Note that if you increase this size you must also increase your consumer's fetch size so they can fetch messages this large.这是Kafka允许附加的最大message大小。 请注意,如果您增加此大小,您还必须增加消费者抓取大小,以便他们可以抓取这么大的message。
message.format.version0.10.1-IV2Specify the message format version the broker will use to append messages to the logs. The value should be a valid ApiVersion. Some examples are: 0.8.2, 0.9.0.0, 0.10.0, check ApiVersion for more details. By setting a particular message format version,
the user is certifying that all the existing messages on disk are smaller or equal than the specified version. Setting this value incorrectly will cause consumers with older versions to break as they will receive messages with a format that they don't understand.
指定代理将用于将消息附加到日志的消息格式版本。 该值应为有效的ApiVersion。 一些示例是:0.8.2,0.9.0.0,0.10.0,详情请查看ApiVersion。 通过设置特定的消息格式版本,用户证明磁盘上的所有现有消息小于或等于指定的版本。 不正确地设置此值将导致旧版本的消费者出错,因为他们将收到无法理解的格式的消息。
message.timestamp.difference.max.ms9223372036854775807The maximum difference allowed between the timestamp when a broker receives a message and the timestamp specified in the message. If message.timestamp.type=CreateTime, a message will be rejected if the difference in timestamp exceeds this threshold. This
configuration is ignored if message.timestamp.type=LogAppendTime.
代理接收消息时的时间戳和消息中指定的时间戳之间允许的最大差异。 如果message.timestamp.type=CreateTime,则当时间戳的差异超过此阈值时,该消息将被拒绝。 如果message.timestamp.type=LogAppendTime,则忽略此配置。
message.timestamp.typeCreateTimeDefine whether the timestamp in the message is message create time or log append time. The value should be either `CreateTime` or `LogAppendTime`定义消息中的时间戳是消息创建时间还是日志附加时间。 该值应为`CreateTime`或`LogAppendTime`
min.cleanable.dirty.ratio0.5This configuration controls how frequently the log compactor will attempt to clean the log (assuming log compaction is enabled). By default we will avoid cleaning a log where more than 50% of the log has been compacted. This ratio bounds the maximum space
wasted in the log by duplicates (at 50% at most 50% of the log could be duplicates). A higher ratio will mean fewer, more efficient cleanings but will mean more wasted space in the log.
此配置控制日志压缩程序尝试清理日志的频率(假设启用了日志压缩)。 默认情况下,我们将避免清理已压缩比例超过50%的日志。 此比率限制了日志中被重复项浪费的最大空间(在50%时,最多50%的日志可能是重复项)。 较高的比率意味着更少但更高效的清理,但也意味着日志中浪费更多空间。
min.compaction.lag.ms0The minimum time a message will remain uncompacted in the log. Only applicable for logs that are being compacted.消息在日志中保持未压缩状态的最短时间。 仅适用于正在压缩的日志。
min.insync.replicas1When a producer sets acks to "all" (or "-1"), min.insync.replicas specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful. If this minimum cannot be met, then the producer will raise an exception
(either NotEnoughReplicas or NotEnoughReplicasAfterAppend).When used together, min.insync.replicas and acks allow you to enforce greater durability guarantees. A typical scenario would be to create a topic with a replication factor of 3, set min.insync.replicas
to 2, and produce with acks of "all". This will ensure that the producer raises an exception if a majority of replicas do not receive a write.
当生产者将acks设置为"all"(或"-1")时,min.insync.replicas指定必须确认写入的副本的最小数量,写入才被认为成功。 如果无法满足这个最小值,生产者将引发异常(NotEnoughReplicas或NotEnoughReplicasAfterAppend)。当一起使用时,min.insync.replicas和acks允许你实施更强的持久性保证。 典型的场景是创建一个复制因子为3的主题,将min.insync.replicas设置为2,并以acks为"all"进行生产。 这将确保在大多数副本没有接收到写入时,生产者引发异常。
preallocateFALSEShould pre allocate file when create new segment?应该在创建新段时预分配文件?
retention.bytes-1This configuration controls the maximum size a log can grow to before we will discard old log segments to free up space if we are using the "delete" retention policy. By default there is no size limit only a time limit.如果使用"delete"保留策略,此配置控制日志在我们丢弃旧日志段以释放空间之前可以增长到的最大大小。 默认情况下没有大小限制,只有时间限制。
retention.ms604800000This configuration controls the maximum time we will retain a log before we will discard old log segments to free up space if we are using the "delete" retention policy. This represents an SLA on how soon consumers must read their data.如果使用"delete"保留策略,此配置控制我们在丢弃旧日志段以释放空间之前保留日志的最长时间。 这代表了消费者必须在多长时间内读取其数据的SLA。
segment.bytes1073741824This configuration controls the segment file size for the log. Retention and cleaning is always done a file at a time so a larger segment size means fewer files but less granular control over retention.此配置控制日志的段文件大小。 保留和清除总是一次执行一个文件,因此较大的段大小意味着更少的文件,但对保留的粒度控制较少。
segment.index.bytes10485760This configuration controls the size of the index that maps offsets to file positions. We preallocate this index file and shrink it only after log rolls. You generally should not need to change this setting.此配置控制将偏移映射到文件位置的索引的大小。 我们预先分配此索引文件,并仅在日志滚动后收缩它。 您通常不需要更改此设置。
segment.jitter.ms0The maximum random jitter subtracted from the scheduled segment roll time to avoid thundering herds of segment rolling从预定的段滚动时间中减去的最大随机抖动,以避免大量段同时滚动
segment.ms604800000This configuration controls the period of time after which Kafka will force the log to roll even if the segment file isn't full to ensure that retention can delete or compact old data.此配置控制Kafka将强制日志滚动(即使分段文件未满)的时间段,以确保保留可以删除或压缩旧数据。
unclean.leader.election.enableTRUEIndicates whether to enable replicas not in the ISR set to be elected as leader as a last resort, even though doing so may result in data loss指示是否启用不在ISR集中的副本作为最后手段被选为leader,即使这样做可能会导致数据丢失
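以上为topic级配置,可以在创建topic时按topic覆盖。下面是一个最小示意,假设使用提供AdminClient的较新kafka-clients版本;topic名与各项取值均为假设的示例。

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

// 示意:创建 topic 时覆盖上文列出的若干 topic 级配置
public class TopicConfigSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");

        Map<String, String> topicConfigs = new HashMap<>();
        topicConfigs.put("cleanup.policy", "compact");        // 对该 topic 启用日志压缩
        topicConfigs.put("min.cleanable.dirty.ratio", "0.3"); // 更积极地触发压缩
        topicConfigs.put("segment.ms", "86400000");           // 至少每天滚动一个段

        try (AdminClient admin = AdminClient.create(props)) {
            NewTopic topic = new NewTopic("compacted-demo", 6, (short) 3);
            topic.configs(topicConfigs);
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}
```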
bootstrap.serversA list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping—this list only impacts the initial hosts used to discover
the full set of servers. This list should be in the form host1:port1,host2:port2,.... Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set
of servers (you may want more than one, though, in case a server is down).
用于建立与Kafka集群的初始连接的主机/端口对的列表。 客户端将使用所有服务器,而不考虑在此指定哪些服务器进行引导 - 此列表仅影响用于发现全套服务器的初始主机。 此列表应采用host1:port1,host2:port2,...的形式。由于这些服务器仅用于初始连接以发现完整的集群成员资格(可能会动态更改),所以此列表不需要包含全部服务器(但您可能需要多个,以防某台服务器宕机)。
key.serializerSerializer class for key that implements the Serializer interface.实现Serializer接口的密钥的Serializer类。
value.serializerSerializer class for value that implements the Serializer interface.用于实现Serializer接口的值的Serializer类。
acks1The number of acknowledgments the producer requires the leader to have received before considering a request complete. This controls the durability of records that are sent. The following settings are allowed: acks=0 If set to zero then the producer
will not wait for any acknowledgment from the server at all. The record will be immediately added to the socket buffer and considered sent. No guarantee can be made that the server has received the record in this case, and the retries configuration will not
take effect (as the client won't generally know of any failures). The offset given back for each record will always be set to -1. acks=1 This will mean the leader will write the record to its local log but will respond without awaiting full acknowledgement
from all followers. In this case should the leader fail immediately after acknowledging the record but before the followers have replicated it then the record will be lost. acks=all This means the leader will wait for the full set of in-sync replicas to acknowledge
the record. This guarantees that the record will not be lost as long as at least one in-sync replica remains alive. This is the strongest available guarantee. This is equivalent to the acks=-1 setting.
生产者需要领导者在考虑请求完成之前已经接收的确认的数目。这控制发送的记录的持久性。允许以下设置:acks = 0如果设置为零,则生产者将不会等待来自服务器的任何确认。该记录将立即添加到套接字缓冲区并考虑发送。在这种情况下,不能保证服务器已经接收到记录,并且重试配置将不会生效(因为客户端通常不知道任何故障)。每个记录返回的偏移将始终设置为-1。 acks = 1这将意味着领导者将记录写入其本地日志,但将作出响应,而不等待所有追随者的完全确认。在这种情况下,如果领导者在确认记录之后立即失败,但在追随者复制它之前失败,则记录将丢失。
acks = all这意味着领导者将等待完整的同步副本集来确认记录。这保证只要至少一个同步中的副本保持活动,记录就不会丢失。这是最强的可用保证。这相当于acks = -1设置。
buffer.memory33554432The total bytes of memory the producer can use to buffer records waiting to be sent to the server. If records are sent faster than they can be delivered to the server the producer will block for max.block.ms after which it will throw an exception.This setting
should correspond roughly to the total memory the producer will use, but is not a hard bound since not all memory the producer uses is used for buffering. Some additional memory will be used for compression (if compression is enabled) as well as for maintaining
in-flight requests.
生产者可以用来缓冲等待发送到服务器的记录的内存的总字节数。 如果记录的发送速度比可以传递到服务器的速度快,那么生产者将阻塞max.block.ms,之后将抛出异常。这个设置应该大致对应于生产者将使用的总内存,但不是硬性限制,因为并非生产者使用的所有内存都用于缓冲。 一些额外的内存将用于压缩(如果启用压缩)以及维护in-flight请求。
compression.typenoneThe compression type for all data generated by the producer. The default is none (i.e. no compression). Valid values are none, gzip, snappy, or lz4. Compression is of full batches of data, so the efficacy of batching will also impact the compression ratio
(more batching means better compression).
生产者生成的所有数据的压缩类型。 默认值为none(即不压缩)。 有效值为none,gzip,snappy或lz4。 压缩作用于完整的数据批次,因此批处理的效果也会影响压缩比(批次越大,压缩效果越好)。
retries0Setting a value greater than zero will cause the client to resend any record whose send fails with a potentially transient error. Note that this retry is no different than if the client resent the record upon receiving the error. Allowing retries without
setting max.in.flight.requests.per.connection to 1 will potentially change the ordering of records because if two batches are sent to a single partition, and the first fails and is retried but the second succeeds, then the records in the second batch may appear
first.
设置大于零的值将导致客户端重新发送任何发送失败且可能存在临时错误的记录。 请注意,此重试与客户端在接收到错误时重新发送记录没有什么不同。 允许重试而不将max.in.flight.requests.per.connection设置为1,可能会改变记录的顺序,因为如果两个批次发送到单个分区,第一个失败并重试但第二个成功,则第二批中的记录可能先出现。
ssl.key.passwordnullThe password of the private key in the key store file. This is optional for client.密钥存储文件中的私钥的密码。 这对于客户端是可选的。
ssl.keystore.locationnullThe location of the key store file. This is optional for client and can be used for two-way authentication for client.密钥存储文件的位置。 这对于客户端是可选的,并且可以用于客户端的双向认证。
ssl.keystore.passwordnullThe store password for the key store file. This is optional for client and only needed if ssl.keystore.location is configured. 密钥存储文件的存储密码。 这对于客户端是可选的,只有在配置了ssl.keystore.location时才需要。
ssl.truststore.locationnullThe location of the trust store file. 信任存储文件的位置。
ssl.truststore.passwordnullThe password for the trust store file. 信任存储文件的密码。
batch.size16384The producer will attempt to batch records together into fewer requests whenever multiple records are being sent to the same partition. This helps performance on both the client and the server. This configuration controls the default batch size in bytes.
No attempt will be made to batch records larger than this size. Requests sent to brokers will contain multiple batches, one for each partition with data available to be sent. A small batch size will make batching less common and may reduce throughput (a batch
size of zero will disable batching entirely). A very large batch size may use memory a bit more wastefully as we will always allocate a buffer of the specified batch size in anticipation of additional records.
每当多个记录被发送到同一分区时,生产者将尝试把记录一起批处理到更少的请求中。 这有助于客户端和服务器上的性能。 此配置控制默认批量大小(以字节为单位)。 不会尝试将大于此大小的记录进行批处理。 发送到代理的请求将包含多个批次,每个有数据可发送的分区对应一个批次。 较小的批量大小会使批处理不那么常见,并可能降低吞吐量(批量大小为零将完全禁用批处理)。 非常大的批量大小可能会更浪费内存,因为我们总是会按指定的批量大小预先分配缓冲区,以备额外的记录。
client.id""An id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging.在发出请求时传递给服务器的id字符串。 这样做的目的是通过允许在服务器端请求记录中包括一个逻辑应用程序名称,能够跟踪超出ip / port的请求源。
connections.max.idle.ms540000Close idle connections after the number of milliseconds specified by this config.在此配置指定的毫秒数后关闭空闲连接。
linger.ms0The producer groups together any records that arrive in between request transmissions into a single batched request. Normally this occurs only under load when records arrive faster than they can be sent out. However in some circumstances the client may
want to reduce the number of requests even under moderate load. This setting accomplishes this by adding a small amount of artificial delay—that is, rather than immediately sending out a record the producer will wait for up to the given delay to allow other
records to be sent so that the sends can be batched together. This can be thought of as analogous to Nagle's algorithm in TCP. This setting gives the upper bound on the delay for batching: once we get batch.size worth of records for a partition it will be
sent immediately regardless of this setting, however if we have fewer than this many bytes accumulated for this partition we will 'linger' for the specified time waiting for more records to show up. This setting defaults to 0 (i.e. no delay). Setting linger.ms=5,
for example, would have the effect of reducing the number of requests sent but would add up to 5ms of latency to records sent in the absense of load.
生产者会将在请求传输间隔内到达的记录合并到单个批处理请求中。通常这只在负载下发生,即记录到达的速度快于发送速度。然而在某些情况下,即使在中等负载下,客户端也可能希望减少请求的数量。该设置通过添加少量的人为延迟来实现这一点:生产者不是立即发出记录,而是等待至多给定的延迟,以允许其他记录一起发送,使发送可以被批处理。这可以被认为类似于TCP中的Nagle算法。这个设置给出了批处理延迟的上限:一旦我们获得一个分区的batch.size大小的记录,它将被立即发送,而不管这个设置;但如果这个分区累积的字节数少于该值,我们将'逗留'指定的时间,等待更多记录到达。此设置默认为0(即没有延迟)。例如,设置linger.ms=5将减少发送的请求数,但在没有负载的情况下会给发送的记录增加最多5ms的延迟。
max.block.ms60000The configuration controls how long KafkaProducer.send() and KafkaProducer.partitionsFor() will block.These methods can be blocked either because the buffer is full or metadata unavailable.Blocking in the user-supplied serializers or partitioner will not
be counted against this timeout.
配置控制KafkaProducer.send()和KafkaProducer.partitionsFor()将阻塞的时间。这些方法可能被阻止,因为缓冲区已满或元数据不可用。用户提供的序列化程序或分区程序中的阻塞将不会计入此超时 。
max.request.size1048576The maximum size of a request in bytes. This is also effectively a cap on the maximum record size. Note that the server has its own cap on record size which may be different from this. This setting will limit the number of record batches the producer will
send in a single request to avoid sending huge requests.
请求的最大大小(以字节为单位)。 这也是有效的最大记录大小的上限。 请注意,服务器有自己的记录大小上限,可能与此不同。 此设置将限制生产者在单个请求中发送的记录批次数,以避免发送大量请求。
partitioner.classclass org.apache.kafka.clients.producer.internals.DefaultPartitionerPartitioner class that implements the Partitioner interface.实现分区器接口的分区器类。
receive.buffer.bytes32768The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used.读取数据时使用的TCP接收缓冲区大小(SO_RCVBUF)。 如果值为-1,将使用操作系统默认值。
request.timeout.ms30000The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted.配置控制客户端等待请求响应的最大时间量。 如果在超时时间之前没有收到响应,客户端将在必要时重新发送请求,如果重试次数耗尽,则请求失败。
sasl.kerberos.service.namenullThe Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config.Kafka运行的Kerberos主体名称。 这可以在Kafka的JAAS配置或Kafka的配置中定义。
sasl.mechanismGSSAPISASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the default mechanism.SASL机制用于客户端连接。 这可以是安全提供者可用的任何机制。 GSSAPI是默认机制。
security.protocolPLAINTEXTProtocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.用于与代理通信的协议。 有效值为:PLAINTEXT,SSL,SASL_PLAINTEXT,SASL_SSL。
send.buffer.bytes131072The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used.发送数据时使用的TCP发送缓冲区(SO_SNDBUF)的大小。 如果值为-1,将使用操作系统默认值。
ssl.enabled.protocols[TLSv1.2, TLSv1.1, TLSv1]The list of protocols enabled for SSL connections.为SSL连接启用的协议列表。
ssl.keystore.typeJKSThe file format of the key store file. This is optional for client.密钥存储文件的文件格式。 这对于客户端是可选的。
ssl.protocolTLSThe SSL protocol used to generate the SSLContext. Default setting is TLS, which is fine for most cases. Allowed values in recent JVMs are TLS, TLSv1.1 and TLSv1.2. SSL, SSLv2 and SSLv3 may be supported in older JVMs, but their usage is discouraged due to
known security vulnerabilities.
用于生成SSLContext的SSL协议。 默认设置为TLS,这在大多数情况下是合适的。 最近的JVM中允许的值为TLS,TLSv1.1和TLSv1.2。 较旧的JVM可能支持SSL,SSLv2和SSLv3,但由于已知的安全漏洞,不建议使用它们。
ssl.providernullThe name of the security provider used for SSL connections. Default value is the default security provider of the JVM.用于SSL连接的安全提供程序的名称。 默认值是JVM的默认安全提供程序。
ssl.truststore.typeJKSThe file format of the trust store file.信任存储文件的文件格式。
timeout.ms30000The configuration controls the maximum amount of time the server will wait for acknowledgments from followers to meet the acknowledgment requirements the producer has specified with the acks configuration. If the requested number of acknowledgments are
not met when the timeout elapses an error will be returned. This timeout is measured on the server side and does not include the network latency of the request.
配置控制服务器等待来自跟随者的确认以满足生产者使用ack配置指定的确认要求的最大时间量。 如果在超时过期时未满足所请求的确认数量,则将返回错误。 此超时在服务器端测量,不包括请求的网络延迟。
block.on.buffer.fullFALSEWhen our memory buffer is exhausted we must either stop accepting new records (block) or throw errors. By default this setting is false and the producer will no longer throw a BufferExhaustException but instead will use the max.block.ms value to block,
after which it will throw a TimeoutException. Setting this property to true will set the max.block.ms to Long.MAX_VALUE. Also if this property is set to true, parameter metadata.fetch.timeout.ms is no longer honored.This parameter is deprecated and will be
removed in a future release. Parameter max.block.ms should be used instead.
当我们的内存缓冲区用尽时,我们必须停止接受新的记录(阻塞)或抛出错误。 默认情况下,此设置为false,生产者将不再抛出BufferExhaustException,而是使用max.block.ms值来阻塞,之后将抛出TimeoutException。 将此属性设置为true会把max.block.ms设置为Long.MAX_VALUE。 此外,如果此属性设置为true,则不再支持参数metadata.fetch.timeout.ms。此参数已弃用,将在以后的版本中删除,应使用参数max.block.ms。
interceptor.classesnullA list of classes to use as interceptors. Implementing the ProducerInterceptor interface allows you to intercept (and possibly mutate) the records received by the producer before they are published to the Kafka cluster. By default, there are no interceptors.要用作拦截器的类的列表。 实现ProducerInterceptor接口允许您在生成器接收到的记录发布到Kafka集群之前拦截(并且可能变异)记录。 默认情况下,没有拦截器。
max.in.flight.requests.per.connection5The maximum number of unacknowledged requests the client will send on a single connection before blocking. Note that if this setting is set to be greater than 1 and there are failed sends, there is a risk of message re-ordering due to retries (i.e., if
retries are enabled).
客户端在阻止之前在单个连接上发送的未确认请求的最大数量。 请注意,如果此设置设置为大于1并且发送失败,则可能会由于重试(即启用重试)而导致消息重新排序。
metadata.fetch.timeout.ms60000The first time data is sent to a topic we must fetch metadata about that topic to know which servers host the topic's partitions. This config specifies the maximum time, in milliseconds, for this fetch to succeed before throwing an exception back to the
client.
第一次将数据发送到主题时,我们必须获取有关该主题的元数据,以了解哪些服务器托管主题的分区。 此配置指定在将异常返回到客户端之前此次提取成功的最大时间(以毫秒为单位)。
metadata.max.age.ms300000The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes to proactively discover any new brokers or partitions.即使我们没有看到任何分区领导更改以主动发现任何新的代理或分区,我们强制刷新元数据的时间(以毫秒为单位)。
metric.reporters[]A list of classes to use as metrics reporters. Implementing the MetricReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics.用作度量报告器的类的列表。 实现MetricReporter接口允许插入将被通知新度量标准创建的类。 总是包括JmxReporter以注册JMX统计信息。
metrics.num.samples2The number of samples maintained to compute metrics.维持计算度量的样本数。
metrics.sample.window.ms30000The window of time a metrics sample is computed over.计算度量样本的时间窗口。
reconnect.backoff.ms50The amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all requests sent by the consumer to the broker.尝试重新连接到给定主机之前等待的时间。 这避免了在紧密环路中重复连接到主机。 此回退适用于消费者发送给代理的所有请求。
retry.backoff.ms100The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios.尝试对指定主题分区重试失败的请求之前等待的时间。 这避免了在一些故障情况下在紧密循环中重复发送请求。
sasl.kerberos.kinit.cmd/usr/bin/kinitKerberos kinit command path.Kerberos kinit命令路径。
sasl.kerberos.min.time.before.relogin60000Login thread sleep time between refresh attempts.登录线程在刷新尝试之间的休眠时间。
sasl.kerberos.ticket.renew.jitter0.05Percentage of random jitter added to the renewal time.添加到更新时间的随机抖动的百分比。
sasl.kerberos.ticket.renew.window.factor0.8Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket.登录线程将睡眠,直到从上次刷新到票据到期的时间的指定窗口因子已经到达,在该时间它将尝试更新票证。
ssl.cipher.suitesnullA list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites
are supported.
密码套件列表。 这是一种命名的认证,加密,MAC和密钥交换算法的组合,用于使用TLS或SSL网络协议协商网络连接的安全设置。默认情况下,支持所有可用的密码套件。
ssl.endpoint.identification.algorithmnullThe endpoint identification algorithm to validate server hostname using server certificate. 端点标识算法,使用服务器证书验证服务器主机名。
ssl.keymanager.algorithmSunX509The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine.密钥管理器工厂用于SSL连接的算法。 默认值是为Java虚拟机配置的密钥管理器工厂算法。
ssl.secure.random.implementationnullThe SecureRandom PRNG implementation to use for SSL cryptography operations. 用于SSL加密操作的SecureRandom PRNG实现。
ssl.trustmanager.algorithmPKIXThe algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine.信任管理器工厂用于SSL连接的算法。 默认值是为Java虚拟机配置的信任管理器工厂算法。
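以上为生产者配置。下面是把本节若干生产者配置组合到一个KafkaProducer中的最小示意;各项取值仅为示例,broker地址与topic名为假设值。

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

// 示意:常见生产者配置的组合用法
public class ProducerConfigSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092,broker2:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        props.put("acks", "all");              // 等待全部同步副本确认
        props.put("retries", "3");             // 发送失败时重试
        props.put("batch.size", "32768");      // 每个分区的批量大小(字节)
        props.put("linger.ms", "5");           // 最多等待 5ms 以便攒批
        props.put("compression.type", "lz4");  // 对整批数据压缩
        props.put("buffer.memory", "67108864");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("demo-topic", "key", "value"),
                    (metadata, exception) -> {
                        if (exception != null) {
                            exception.printStackTrace();
                        } else {
                            System.out.println("写入 " + metadata.topic()
                                    + "-" + metadata.partition() + "@" + metadata.offset());
                        }
                    });
        }
    }
}
```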
bootstrap.serversA list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping—this list only impacts the initial hosts used to discover
the full set of servers. This list should be in the form host1:port1,host2:port2,.... Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set
of servers (you may want more than one, though, in case a server is down).
用于建立与Kafka集群的初始连接的主机/端口对的列表。 客户端将使用所有服务器,而不考虑在此指定哪些服务器进行引导 - 此列表仅影响用于发现全套服务器的初始主机。 此列表应采用host1:port1,host2:port2,...的形式。由于这些服务器仅用于初始连接以发现完整的集群成员资格(可能会动态更改),所以此列表不需要包含全部服务器(但您可能需要多个,以防某台服务器宕机)。
key.deserializerDeserializer class for key that implements the Deserializer interface.用于实现Deserializer接口的键的Deserializer类。
value.deserializerDeserializer class for value that implements the Deserializer interface.用于实现Deserializer接口的值的Deserializer类。
fetch.min.bytes1The minimum amount of data the server should return for a fetch request. If insufficient data is available the request will wait for that much data to accumulate before answering the request. The default setting of 1 byte means that fetch requests are answered
as soon as a single byte of data is available or the fetch request times out waiting for data to arrive. Setting this to something greater than 1 will cause the server to wait for larger amounts of data to accumulate which can improve server throughput a bit
at the cost of some additional latency.
服务器应为获取请求返回的最小数据量。 如果可用数据不足,请求将等待积累足够多的数据后再应答。 默认设置为1字节,表示只要有单个字节的数据可用,或者获取请求在等待数据到达时超时,就会应答获取请求。 将此值设置为大于1将导致服务器等待更多数据累积,这可以以一些额外延迟为代价略微提高服务器吞吐量。
group.id""A unique string that identifies the consumer group this consumer belongs to. This property is required if the consumer uses either the group management functionality by using subscribe(topic) or the Kafka-based offset management strategy.标识此消费者所属的使用者组的唯一字符串。 如果消费者通过使用subscribe(主题)或基于Kafka的偏移管理策略使用组管理功能,则需要此属性。
heartbeat.interval.ms3000The expected time between heartbeats to the consumer coordinator when using Kafka's group management facilities. Heartbeats are used to ensure that the consumer's session stays active and to facilitate rebalancing when new consumers join or leave the group.
The value must be set lower than session.timeout.ms, but typically should be set no higher than 1/3 of that value. It can be adjusted even lower to control the expected time for normal rebalances.
使用Kafka的组管理设施时,心跳到消费者协调器之间的预期时间。 心跳用于确保消费者的会话保持活动并且当新消费者加入或离开组时促进重新平衡。 该值必须设置为低于session.timeout.ms,但通常应设置为不高于该值的1/3。 它可以调整得更低,以控制正常再平衡的预期时间。
max.partition.fetch.bytes1048576The maximum amount of data per-partition the server will return. If the first message in the first non-empty partition of the fetch is larger than this limit, the message will still be returned to ensure that the consumer can make progress. The maximum
message size accepted by the broker is defined via message.max.bytes (broker config) or max.message.bytes (topic config). See fetch.max.bytes for limiting the consumer request size
服务器将返回的每个分区的最大数据量。 如果提取的第一个非空分区中的第一条消息大于此限制,则仍会返回消息以确保消费者可以取得进展。 代理接受的最大消息大小通过message.max.bytes(broker config)或max.message.bytes(topic config)定义。 请参阅fetch.max.bytes以限制使用者请求大小
session.timeout.ms10000The timeout used to detect consumer failures when using Kafka's group management facility. The consumer sends periodic heartbeats to indicate its liveness to the broker. If no heartbeats are received by the broker before the expiration of this session timeout,
then the broker will remove this consumer from the group and initiate a rebalance. Note that the value must be in the allowable range as configured in the broker configuration by group.min.session.timeout.ms and group.max.session.timeout.ms.
用于在使用Kafka的组管理工具时检测消费者故障的超时。 消费者发送周期性心跳以向代理指示其活跃度。 如果在此会话超时期满之前代理没有收到心跳,则代理将从组中移除此消费者并启动重新平衡。 请注意,该值必须在代理配置中由group.min.session.timeout.ms和group.max.session.timeout.ms限定的允许范围内。
ssl.key.passwordnullThe password of the private key in the key store file. This is optional for client.密钥存储文件中的私钥的密码。 这对于客户端是可选的。
ssl.keystore.locationnullThe location of the key store file. This is optional for client and can be used for two-way authentication for client.密钥存储文件的位置。 这对于客户端是可选的,并且可以用于客户端的双向认证。
ssl.keystore.passwordnullThe store password for the key store file. This is optional for client and only needed if ssl.keystore.location is configured. 密钥存储文件的存储密码。 这对于客户端是可选的,只有在配置了ssl.keystore.location时才需要。
ssl.truststore.locationnullThe location of the trust store file. 信任存储文件的位置。
ssl.truststore.passwordnullThe password for the trust store file. 信任存储文件的密码。
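下面是一个为客户端启用 SSL 的配置草图(假设性示例:密钥库/信任库的路径与密码均为虚构),把上面的 ssl.keystore.* 与 ssl.truststore.* 配置同 security.protocol=SSL 放在一起使用:
import java.util.Properties;

public class SslClientConfigSketch {
    public static Properties sslProps() {
        Properties props = new Properties();
        props.put("security.protocol", "SSL");
        // 信任库:用于验证 broker 证书
        props.put("ssl.truststore.location", "/var/private/ssl/client.truststore.jks"); // 假设的路径
        props.put("ssl.truststore.password", "changeit");                               // 假设的密码
        // 密钥库:仅在需要客户端双向认证时配置
        props.put("ssl.keystore.location", "/var/private/ssl/client.keystore.jks");     // 假设的路径
        props.put("ssl.keystore.password", "changeit");
        props.put("ssl.key.password", "changeit");
        return props;
    }
}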
auto.offset.resetlatestWhat to do when there is no initial offset in Kafka or if the current offset does not exist any more on the server (e.g. because that data has been deleted): earliest: automatically reset the offset to the earliest offset; latest: automatically reset the offset to the latest offset; none: throw exception to the consumer if no previous offset is found for the consumer's group; anything else: throw exception to the consumer.
当Kafka中没有初始偏移量,或当前偏移量在服务器上已不存在(例如该数据已被删除)时该怎么办:earliest:自动将偏移量重置为最早的偏移量;latest:自动将偏移量重置为最新的偏移量;none:如果没有为该消费者组找到之前的偏移量,则向消费者抛出异常;其他任何值:向消费者抛出异常。
connections.max.idle.ms540000Close idle connections after the number of milliseconds specified by this config.在此配置指定的毫秒数后关闭空闲连接。
enable.auto.commitTRUEIf true the consumer's offset will be periodically committed in the background.如果为true,则消费者的偏移量将在后台定期提交。
exclude.internal.topicsTRUEWhether records from internal topics (such as offsets) should be exposed to the consumer. If set to true the only way to receive records from an internal topic is subscribing to it.来自内部主题(如偏移量)的记录是否应向消费者公开。 如果设置为true,则从内部主题接收记录的唯一方法是订阅它。
fetch.max.bytes52428800The maximum amount of data the server should return for a fetch request. This is not an absolute maximum, if the first message in the first non-empty partition of the fetch is larger than this value, the message will still be returned to ensure that the
consumer can make progress. The maximum message size accepted by the broker is defined via message.max.bytes (broker config) or max.message.bytes (topic config). Note that the consumer performs multiple fetches in parallel.
服务器应针对抓取请求返回的最大数据量。 这不是绝对最大值,如果提取的第一个非空分区中的第一个消息大于此值,则仍会返回消息以确保消费者可以取得进展。 代理接受的最大消息大小通过message.max.bytes(broker config)或max.message.bytes(topic config)定义。 请注意,消费者并行执行多个提取。
max.poll.interval.ms300000The maximum delay between invocations of poll() when using consumer group management. This places an upper bound on the amount of time that the consumer can be idle before fetching more records. If poll() is not called before expiration of this timeout,
then the consumer is considered failed and the group will rebalance in order to reassign the partitions to another member.
使用消费组管理时poll()调用之间的最大延迟。 这提供了消费者在获取更多记录之前可以空闲的时间量的上限。 如果在超时到期之前未调用poll(),则消费者被视为失败,并且组将重新平衡以便将分区重新分配给另一个成员。
max.poll.records500The maximum number of records returned in a single call to poll().在对poll()的单个调用中返回的最大记录数。
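下面是一个简单的 poll 循环草图(假设性示例:地址、组名与主题名均为虚构),演示 max.poll.records 与 max.poll.interval.ms 如何共同约束单次处理批次的大小与处理时间:
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PollLoopSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092"); // 假设的地址
        props.put("group.id", "demo-group");             // 假设的组名
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        // 单次 poll() 最多返回 200 条记录,减少单批处理时间
        props.put("max.poll.records", "200");
        // 两次 poll() 之间允许的最大处理时间;超过则触发重新平衡
        props.put("max.poll.interval.ms", "300000");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("demo-topic")); // 假设的主题
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(100);
                for (ConsumerRecord<String, String> record : records) {
                    // 对每批记录的处理必须在 max.poll.interval.ms 之内完成
                    System.out.println(record.offset() + ": " + record.value());
                }
            }
        }
    }
}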
partition.assignment.strategy[class org.apache.kafka.clients.consumer.RangeAssignor]The class name of the partition assignment strategy that the client will use to distribute partition ownership amongst consumer instances when group management is used使用组管理时,客户端用于在消费者实例之间分配分区所有权的分区分配策略的类名
receive.buffer.bytes65536The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used.读取数据时使用的TCP接收缓冲区大小(SO_RCVBUF)。 如果值为-1,将使用操作系统默认值。
request.timeout.ms305000The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted.此配置控制客户端等待请求响应的最大时间。 如果在超时之前未收到响应,客户端将在必要时重新发送请求,或在重试次数耗尽后使请求失败。
sasl.kerberos.service.namenullThe Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config.Kafka运行的Kerberos主体名称。 这可以在Kafka的JAAS配置或Kafka的配置中定义。
sasl.mechanismGSSAPISASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the default mechanism.SASL机制用于客户端连接。 这可以是安全提供者可用的任何机制。 GSSAPI是默认机制。
security.protocolPLAINTEXTProtocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.用于与代理通信的协议。 有效值为:PLAINTEXT,SSL,SASL_PLAINTEXT,SASL_SSL。
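下面是一个使用 SASL(Kerberos/GSSAPI)连接代理的客户端配置草图(假设性示例:JAAS 文件路径与服务名仅作示意,实际取决于部署环境):
import java.util.Properties;

public class SaslClientConfigSketch {
    public static Properties saslProps() {
        // 通常还需通过 -Djava.security.auth.login.config 指定 JAAS 配置文件(此处路径仅为示意)
        System.setProperty("java.security.auth.login.config", "/etc/kafka/kafka_client_jaas.conf");

        Properties props = new Properties();
        props.put("security.protocol", "SASL_SSL");       // 也可使用 SASL_PLAINTEXT
        props.put("sasl.mechanism", "GSSAPI");            // 默认机制即为 GSSAPI(Kerberos)
        props.put("sasl.kerberos.service.name", "kafka"); // 必须与 broker 端的 Kerberos 主体名一致
        return props;
    }
}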
send.buffer.bytes131072The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used.发送数据时使用的TCP发送缓冲区(SO_SNDBUF)的大小。 如果值为-1,将使用操作系统默认值。
ssl.enabled.protocols[TLSv1.2, TLSv1.1, TLSv1]The list of protocols enabled for SSL connections.为SSL连接启用的协议列表。
ssl.keystore.typeJKSThe file format of the key store file. This is optional for client.密钥存储文件的文件格式。 这对于客户端是可选的。
ssl.protocolTLSThe SSL protocol used to generate the SSLContext. Default setting is TLS, which is fine for most cases. Allowed values in recent JVMs are TLS, TLSv1.1 and TLSv1.2. SSL, SSLv2 and SSLv3 may be supported in older JVMs, but their usage is discouraged due to
known security vulnerabilities.
用于生成SSLContext的SSL协议。 默认设置为TLS,在大多数情况下都适用。 较新的JVM中允许的值为TLS,TLSv1.1和TLSv1.2。 较旧的JVM可能支持SSL,SSLv2和SSLv3,但由于已知的安全漏洞,不建议使用它们。
ssl.providernullThe name of the security provider used for SSL connections. Default value is the default security provider of the JVM.用于SSL连接的安全提供程序的名称。 默认值是JVM的默认安全提供程序。
ssl.truststore.typeJKSThe file format of the trust store file.信任存储文件的文件格式。
auto.commit.interval.ms5000The frequency in milliseconds that the consumer offsets are auto-committed to Kafka if enable.auto.commit is set to true.如果enable.auto.commit设置为true,则消费者偏移的频率(以毫秒为单位)将自动提交到Kafka。
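若不希望按 auto.commit.interval.ms 周期自动提交,可以关闭 enable.auto.commit 并手动提交偏移量。 下面是一个草图(假设性示例:地址、组名与主题名均为虚构):
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ManualCommitSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092"); // 假设的地址
        props.put("group.id", "demo-group");             // 假设的组名
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        // 关闭自动提交,由应用在处理完成后显式提交偏移量
        props.put("enable.auto.commit", "false");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("demo-topic")); // 假设的主题
            ConsumerRecords<String, String> records = consumer.poll(1000);
            // ……处理 records……
            consumer.commitSync(); // 同步提交当前已拉取批次的偏移量
        }
    }
}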
check.crcsTRUEAutomatically check the CRC32 of the records consumed. This ensures no on-the-wire or on-disk corruption to the messages occurred. This check adds some overhead, so it may be disabled in cases seeking extreme performance.自动检查所消费记录的CRC32。 这可确保消息没有发生网络传输中或磁盘上的损坏。 此检查会增加一些开销,因此在追求极致性能的情况下可以禁用。
client.id""An id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging.在发出请求时传递给服务器的id字符串。 这样做的目的是通过允许在服务器端请求记录中包括一个逻辑应用程序名称,能够跟踪超出ip / port的请求源。
fetch.max.wait.ms500The maximum amount of time the server will block before answering the fetch request if there isn't sufficient data to immediately satisfy the requirement given by fetch.min.bytes.如果没有足够的数据来立即满足fetch.min.bytes给出的要求,则服务器在应答提取请求之前将阻止的最长时间。
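fetch.min.bytes 与 fetch.max.wait.ms 共同决定了吞吐量与延迟之间的折中。 下面是一个偏向吞吐量的配置草图(数值仅为示意,并非推荐值):
import java.util.Properties;

public class FetchTuningSketch {
    public static Properties throughputLeaningProps() {
        Properties props = new Properties();
        // 等到至少 64KB 数据或最多 500ms 后才应答 fetch 请求:
        // 以少量延迟换取更高的服务器吞吐量
        props.put("fetch.min.bytes", "65536");
        props.put("fetch.max.wait.ms", "500");
        return props;
    }
}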
interceptor.classesnullA list of classes to use as interceptors. Implementing the ConsumerInterceptor interface allows you to intercept (and possibly mutate) records received by the consumer. By default, there are no interceptors.要用作拦截器的类的列表。 实现ConsumerInterceptor接口允许您拦截(并可能修改)消费者接收的记录。 默认情况下,没有拦截器。
metadata.max.age.ms300000The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes to proactively discover any new brokers or partitions.强制刷新元数据的时间段(以毫秒为单位):即使没有看到任何分区领导权变化,也会刷新元数据,以便主动发现新的代理或分区。
metric.reporters[]A list of classes to use as metrics reporters. Implementing the MetricReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics.用作度量报告器的类的列表。 实现MetricReporter接口允许插入将通知新度量标准创建的类。 总是包括JmxReporter以注册JMX统计信息。
metrics.num.samples2The number of samples maintained to compute metrics.维持计算度量的样本数。
metrics.sample.window.ms30000The window of time a metrics sample is computed over.计算度量样本的时间窗口。
reconnect.backoff.ms50The amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all requests sent by the consumer to the broker.尝试重新连接到给定主机之前等待的时间。 这避免了在紧密环路中重复连接到主机。 此回退适用于消费者发送给代理的所有请求。
retry.backoff.ms100The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios.尝试对指定主题分区重试失败的请求之前等待的时间。 这避免了在一些故障情况下在紧密循环中重复发送请求。
sasl.kerberos.kinit.cmd/usr/bin/kinitKerberos kinit command path.Kerberos kinit命令路径。
sasl.kerberos.min.time.before.relogin60000Login thread sleep time between refresh attempts.登录线程在刷新尝试之间的休眠时间。
sasl.kerberos.ticket.renew.jitter0.05Percentage of random jitter added to the renewal time.添加到更新时间的随机抖动的百分比。
sasl.kerberos.ticket.renew.window.factor0.8Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket.登录线程将睡眠,直到从上次刷新到票据到期的时间的指定窗口因子已经到达,在该时间它将尝试更新票证。
ssl.cipher.suitesnullA list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites
are supported.
密码套件列表。 这是一种命名的认证,加密,MAC和密钥交换算法的组合,用于使用TLS或SSL网络协议协商网络连接的安全设置。默认情况下,支持所有可用的密码套件。
ssl.endpoint.identification.algorithmnullThe endpoint identification algorithm to validate server hostname using server certificate. 端点标识算法,使用服务器证书验证服务器主机名。
ssl.keymanager.algorithmSunX509The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine.密钥管理器工厂用于SSL连接的算法。 默认值是为Java虚拟机配置的密钥管理器工厂算法。
ssl.secure.random.implementationnullThe SecureRandom PRNG implementation to use for SSL cryptography operations. 用于SSL加密操作的SecureRandom PRNG实现。
ssl.trustmanager.algorithmPKIXThe algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine.信任管理器工厂用于SSL连接的算法。 默认值是为Java虚拟机配置的信任管理器工厂算法。
group.idA string that uniquely identifies the group of consumer processes to which this consumer belongs. By setting the same group id multiple processes indicate that they are all part of the same consumer group.唯一标识此消费者所属的消费者进程组的字符串。 通过设置相同的组ID,多个进程指示它们都是同一使用者组的一部分。
zookeeper.connectSpecifies the ZooKeeper connection string in the form hostname:port where host and port are the host and port of a ZooKeeper server. To allow connecting through other ZooKeeper nodes when that ZooKeeper machine is down you can also specify multiple hosts in the form hostname1:port1,hostname2:port2,hostname3:port3.
The server may also have a ZooKeeper chroot path as part of its ZooKeeper connection string which puts its data under some path in the global ZooKeeper namespace. If so the consumer should use the same chroot path in its connection string. For example to give a chroot path of /chroot/path you would give the connection string as hostname1:port1,hostname2:port2,hostname3:port3/chroot/path.
以hostname:port形式指定ZooKeeper连接字符串,其中host和port是某台ZooKeeper服务器的主机和端口。 为了在该ZooKeeper机器宕机时仍可通过其他ZooKeeper节点连接,也可以按hostname1:port1,hostname2:port2,hostname3:port3的形式指定多个主机。
服务器还可以在其ZooKeeper连接字符串中包含一个ZooKeeper chroot路径,将其数据放在全局ZooKeeper命名空间的某个路径下。 如果是这样,消费者应在其连接字符串中使用相同的chroot路径。 例如,要指定chroot路径/chroot/path,连接字符串应写为hostname1:port1,hostname2:port2,hostname3:port3/chroot/path。
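下面是旧版(基于ZooKeeper的)高阶消费者的配置草图(假设性示例:ZooKeeper 地址、chroot 与组名均为虚构;该 API 位于 kafka.consumer / kafka.javaapi.consumer 包中,新应用应优先使用上文基于 bootstrap.servers 的新消费者):
import java.util.Properties;
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.javaapi.consumer.ConsumerConnector;

public class OldConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // 旧版消费者通过 zookeeper.connect 而非 bootstrap.servers 连接集群
        props.put("zookeeper.connect", "zk1:2181,zk2:2181,zk3:2181/kafka"); // 假设的地址与 chroot
        props.put("group.id", "legacy-group");                               // 假设的组名
        props.put("zookeeper.session.timeout.ms", "6000");
        props.put("auto.commit.interval.ms", "60000");

        ConsumerConnector connector =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
        connector.shutdown();
    }
}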
consumer.idnullGenerated automatically if not set.未设置时自动生成。
socket.timeout.ms30 * 1000The socket timeout for network requests. The actual timeout set will be max.fetch.wait + socket.timeout.ms.网络请求的套接字超时。 实际的超时设置将是max.fetch.wait + socket.timeout.ms。
socket.receive.buffer.bytes64 * 1024The socket receive buffer for network requests用于网络请求的套接字接收缓冲区
fetch.message.max.bytes1024 * 1024The number of bytes of messages to attempt to fetch for each topic-partition in each fetch request. These bytes will be read into memory for each partition, so this helps control the memory used by the consumer. The fetch request size must be at least as
large as the maximum message size the server allows or else it is possible for the producer to send messages larger than the consumer can fetch.
尝试在每个获取请求中为每个主题分区获取的消息字节数。 这些字节将按分区读入内存,因此这有助于控制消费者使用的内存。 获取请求大小必须至少与服务器允许的最大消息大小一样大,否则生产者可能发送大于消费者所能提取的消息。
num.consumer.fetchers1The number fetcher threads used to fetch data.用于提取数据的提取线程数。
auto.commit.enableTRUEIf true, periodically commit to ZooKeeper the offset of messages already fetched by the consumer. This committed offset will be used when the process fails as the position from which the new consumer will begin.如果为true,则定期向ZooKeeper提交消费者已获取消息的偏移量。 当进程失败时,该已提交的偏移量将作为新消费者开始消费的位置。
auto.commit.interval.ms60 * 1000The frequency in ms that the consumer offsets are committed to zookeeper.消费者偏移量提交给zookeeper的频率(以毫秒为单位)。
queued.max.message.chunks2Max number of message chunks buffered for consumption. Each chunk can be up to fetch.message.max.bytes.缓冲消耗的消息块的最大数量。 每个块最多可以达到fetch.message.max.bytes。
rebalance.max.retries4When a new consumer joins a consumer group the set of consumers attempt to "rebalance" the load to assign partitions to each consumer. If the set of consumers changes while this assignment is taking place the rebalance will fail and retry. This setting
controls the maximum number of attempts before giving up.
当新消费者加入消费者组时,该组消费者尝试“重新平衡”负载以向每个消费者分配分区。 如果在进行此分配时,消费者集合发生变化,则重新平衡将失败并重试。 此设置控制放弃之前的最大尝试次数。
fetch.min.bytes1The minimum amount of data the server should return for a fetch request. If insufficient data is available the request will wait for that much data to accumulate before answering the request.服务器应为获取请求返回的最小数据量。 如果可用数据不足,请求将等待累积到该数据量后再应答。
fetch.wait.max.ms100The maximum amount of time the server will block before answering the fetch request if there isn't sufficient data to immediately satisfy fetch.min.bytes如果没有足够的数据立即满足fetch.min.bytes,则服务器在应答提取请求之前将阻止的最长时间
rebalance.backoff.ms2000Backoff time between retries during rebalance. If not set explicitly, the value in zookeeper.sync.time.ms is used.
重新平衡期间重试之间的退避时间。 如果未显式设置,则使用zookeeper.sync.time.ms中的值。
refresh.leader.backoff.ms200Backoff time to wait before trying to determine the leader of a partition that has just lost its leader.在尝试确定刚刚失去其领导者的分区的领导者之前等待的退避时间。
auto.offset.resetlargestWhat to do when there is no initial offset in ZooKeeper or if an offset is out of range: * smallest : automatically reset the offset to the smallest offset * largest : automatically reset the offset to the largest offset * anything else: throw exception to the consumer
当ZooKeeper中没有初始偏移量或偏移量超出范围时该怎么办:* smallest:自动将偏移量重置为最小偏移量 * largest:自动将偏移量重置为最大偏移量 * 其他任何值:向消费者抛出异常
consumer.timeout.ms-1Throw a timeout exception to the consumer if no message is available for consumption after the specified interval如果在指定的时间间隔后没有消息可用,则向使用者抛出超时异常
exclude.internal.topicsTRUEWhether messages from internal topics (such as offsets) should be exposed to the consumer.来自内部主题的消息(如偏移量)是否应向消费者公开。
client.idgroup id valueThe client id is a user-specified string sent in each request to help trace calls. It should logically identify the application making the request.客户端标识是在每个请求中发送的用户指定的字符串,以帮助跟踪调用。 它应该在逻辑上标识发出请求的应用程序。
zookeeper.session.timeout.ms 6000ZooKeeper session timeout. If the consumer fails to heartbeat to ZooKeeper for this period of time it is considered dead and a rebalance will occur.ZooKeeper会话超时。 如果消费者在此时间段内未能向ZooKeeper发送心跳,则被视为已死亡,并将触发重新平衡。
zookeeper.connection.timeout.ms6000The max time that the client waits while establishing a connection to zookeeper.客户端在建立与zookeeper的连接时等待的最长时间。
zookeeper.sync.time.ms 2000How far a ZK follower can be behind a ZK leaderZK follower可以落后于ZK leader多远
offsets.storagezookeeperSelect where offsets should be stored (zookeeper or kafka).选择偏移量应存储在哪里(zookeeper或kafka)。
offsets.channel.backoff.ms1000The backoff period when reconnecting the offsets channel or retrying failed offset fetch/commit requests.重新连接偏移通道或重试失败的偏移提取/提交请求时的退避周期。
offsets.channel.socket.timeout.ms10000Socket timeout when reading responses for offset fetch/commit requests. This timeout is also used for ConsumerMetadata requests that are used to query for the offset manager.读取偏移提取/提交请求的响应时的套接字超时。 此超时还用于用于查询偏移管理器的ConsumerMetadata请求。
offsets.commit.max.retries5Retry the offset commit up to this many times on failure. This retry count only applies to offset commits during shut-down. It does not apply to commits originating from the auto-commit thread. It also does not apply to attempts to query for the offset
coordinator before committing offsets. i.e., if a consumer metadata request fails for any reason, it will be retried and that retry does not count toward this limit.
失败时最多重试偏移提交这么多次。 此重试计数仅适用于关闭期间的偏移提交, 不适用于源自自动提交线程的提交, 也不适用于在提交偏移之前查询偏移协调器的尝试。 即,如果消费者元数据请求由于任何原因失败,它将被重试,并且该重试不计入此限制。
dual.commit.enabledTRUEIf you are using "kafka" as offsets.storage, you can dual commit offsets to ZooKeeper (in addition to Kafka). This is required during migration from zookeeper-based offset storage to kafka-based offset storage. With respect to any given consumer group,
it is safe to turn this off after all instances within that group have been migrated to the new version that commits offsets to the broker (instead of directly to ZooKeeper).
如果你使用“kafka”作为offsets.storage,你可以在提交偏移量到Kafka之外,同时把偏移量双重提交到ZooKeeper。 在从基于zookeeper的偏移存储迁移到基于kafka的偏移存储期间需要这样做。 对于任何给定的消费者组,在该组中的所有实例都迁移到向代理提交偏移量的新版本(而不是直接提交到ZooKeeper)之后,可以安全地关闭此选项。
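下面是一个从 ZooKeeper 偏移存储迁移到 Kafka 偏移存储的配置草图(仅作说明,具体迁移步骤以所用版本的文档为准):
import java.util.Properties;

public class OffsetsMigrationSketch {
    public static Properties migrationProps() {
        Properties props = new Properties();
        // 第一阶段:偏移量存储切换到 kafka,同时双重提交到 ZooKeeper,
        // 以便尚未升级的实例仍能读取 ZooKeeper 中的偏移量
        props.put("offsets.storage", "kafka");
        props.put("dual.commit.enabled", "true");
        // 第二阶段(该组所有实例迁移完成后):可将 dual.commit.enabled 设为 false
        return props;
    }
}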
partition.assignment.strategyrangeSelect between the "range" or "roundrobin" strategy for assigning partitions to consumer streams.The round-robin partition assignor lays out all the available partitions and all the available consumer threads. It then proceeds to do a round-robin assignment
from partition to consumer thread. If the subscriptions of all consumer instances are identical, then the partitions will be uniformly distributed. (i.e., the partition ownership counts will be within a delta of exactly one across all consumer threads.) Round-robin
assignment is permitted only if: (a) Every topic has the same number of streams within a consumer instance (b) The set of subscribed topics is identical for every consumer instance within the group. Range partitioning works on a per-topic basis. For each topic,
we lay out the available partitions in numeric order and the consumer threads in lexicographic order. We then divide the number of partitions by the total number of consumer streams (threads) to determine the number of partitions to assign to each consumer.
If it does not evenly divide, then the first few consumers will have one extra partition.
在用于向消费者流分配分区的“range”或“roundrobin”策略之间进行选择。轮询(round-robin)分区分配器列出所有可用分区和所有可用的消费者线程,然后按轮询方式将分区分配给消费者线程。 如果所有消费者实例的订阅相同,则分区将均匀分布。 (即,所有消费者线程之间的分区所有权数量相差不会超过1。)只有在以下情况下才允许轮询分配:(a)每个主题在一个消费者实例内具有相同数量的流;(b)组内每个消费者实例订阅的主题集合相同。 范围(range)分配按主题进行。对于每个主题,我们按数字顺序排列可用分区,并按字典顺序排列消费者线程, 然后将分区数除以消费者流(线程)总数,以确定分配给每个消费者的分区数。 如果不能整除,则前几个消费者将多分到一个分区。
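为说明范围(range)分配的取整规则,下面用一个小例子计算分区到消费者线程的分配(假设某主题有 7 个分区、3 个消费者线程,数字仅为示意):
public class RangeAssignmentMath {
    public static void main(String[] args) {
        int partitions = 7;
        int threads = 3;
        int perThread = partitions / threads; // 2
        int extra = partitions % threads;     // 1:前 1 个线程各多分到一个分区
        for (int t = 0; t < threads; t++) {
            int start = t * perThread + Math.min(t, extra);
            int count = perThread + (t < extra ? 1 : 0);
            System.out.println("thread-" + t + " 获得分区 " + start + " .. " + (start + count - 1));
        }
        // 输出:thread-0 获得分区 0..2,thread-1 获得分区 3..4,thread-2 获得分区 5..6
    }
}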
config.storage.topickafka topic to store configs用于存储配置的kafka主题
group.idA unique string that identifies the Connect cluster group this worker belongs to.标识此worker所属的Connect集群组的唯一字符串。
key.converterConverter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the keys in messages written to or read from Kafka, and since this is independent of connectors it allows any connector
to work with any serialization format. Examples of common formats include JSON and Avro.
Converter类用于在Kafka Connect格式和写入Kafka的序列化格式之间进行转换。 这控制写入或读取Kafka的消息中的键的格式,并且由于它独立于连接器,因此它允许任何连接器使用任何序列化格式。 常见格式的示例包括JSON和Avro。
offset.storage.topickafka topic to store connector offsets in用于存储连接器偏移量的kafka主题
status.storage.topickafka topic to track connector and task status用于跟踪连接器和任务状态的kafka主题
value.converterConverter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the values in messages written to or read from Kafka, and since this is independent of connectors it allows any connector
to work with any serialization format. Examples of common formats include JSON and Avro.
Converter类用于在Kafka Connect格式和写入Kafka的序列化格式之间进行转换。 这控制写入或读取Kafka的消息中的值的格式,并且由于它独立于连接器,因此它允许任何连接器使用任何序列化格式。 常见格式的示例包括JSON和Avro。
internal.key.converterConverter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the keys in messages written to or read from Kafka, and since this is independent of connectors it allows any connector
to work with any serialization format. Examples of common formats include JSON and Avro. This setting controls the format used for internal bookkeeping data used by the framework, such as configs and offsets, so users can typically use any functioning Converter
implementation.
Converter类用于在Kafka Connect格式和写入Kafka的序列化格式之间进行转换。 这控制写入或读取Kafka的消息中的键的格式,并且由于它独立于连接器,因此它允许任何连接器使用任何序列化格式。 常见格式的示例包括JSON和Avro。 此设置控制用于框架使用的内部记录数据的格式,例如配置和偏移量,因此用户通常可以使用任何正在运行的Converter实现。
internal.value.converterConverter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the values in messages written to or read from Kafka, and since this is independent of connectors it allows any connector
to work with any serialization format. Examples of common formats include JSON and Avro. This setting controls the format used for internal bookkeeping data used by the framework, such as configs and offsets, so users can typically use any functioning Converter
implementation.
Converter类用于在Kafka Connect格式和写入Kafka的序列化格式之间进行转换。 这控制写入或读取Kafka的消息中的值的格式,并且由于它独立于连接器,因此它允许任何连接器使用任何序列化格式。 常见格式的示例包括JSON和Avro。 此设置控制用于框架使用的内部记录数据的格式,例如配置和偏移量,因此用户通常可以使用任何正在运行的Converter实现。
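下面是一个分布式模式 Connect worker 配置的草图(假设性示例:通常写在 worker 的 properties 文件中,这里为与其他示例保持一致用 Java Properties 表示;主题名与地址均为虚构):
import java.util.Properties;

public class ConnectWorkerConfigSketch {
    public static Properties workerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");            // 假设的地址
        props.put("group.id", "connect-cluster-demo");              // Connect 集群组 id(假设)
        // 外部数据的转换器
        props.put("key.converter", "org.apache.kafka.connect.json.JsonConverter");
        props.put("value.converter", "org.apache.kafka.connect.json.JsonConverter");
        // 框架内部记录数据的转换器
        props.put("internal.key.converter", "org.apache.kafka.connect.json.JsonConverter");
        props.put("internal.value.converter", "org.apache.kafka.connect.json.JsonConverter");
        // 存储配置、偏移量和状态的内部主题(名称为假设)
        props.put("config.storage.topic", "connect-configs");
        props.put("offset.storage.topic", "connect-offsets");
        props.put("status.storage.topic", "connect-status");
        return props;
    }
}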
bootstrap.servers[localhost:9092]A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping—this list only impacts the initial hosts used to discover
the full set of servers. This list should be in the form host1:port1,host2:port2,.... Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set
of servers (you may want more than one, though, in case a server is down).
用于建立与Kafka集群的初始连接的主机/端口对的列表。 客户端将使用所有服务器,而不考虑在此指定哪些服务器进行引导 - 此列表仅影响用于发现全套服务器的初始主机。 此列表应采用host1:port1,host2:port2,...的形式。由于这些服务器仅用于初始连接以发现完整的集群成员资格(可能会动态更改),所以此列表不需要包含完整集的服务器(您可能想要多个服务器,以防万一服务器关闭)。
heartbeat.interval.ms3000The expected time between heartbeats to the group coordinator when using Kafka's group management facilities. Heartbeats are used to ensure that the worker's session stays active and to facilitate rebalancing when new members join or leave the group. The
value must be set lower than session.timeout.ms, but typically should be set no higher than 1/3 of that value. It can be adjusted even lower to control the expected time for normal rebalances.
使用Kafka的组管理设施时,心跳到组协调器之间的预计时间。 心跳用于确保工作者的会话保持活动状态,并在新成员加入或离开组时方便重新平衡。 该值必须设置为低于session.timeout.ms,但通常应设置为不高于该值的1/3。 它可以调整得更低,以控制正常再平衡的预期时间。
rebalance.timeout.ms60000The maximum allowed time for each worker to join the group once a rebalance has begun. This is basically a limit on the amount of time needed for all tasks to flush any pending data and commit offsets. If the timeout is exceeded, then the worker will be
removed from the group, which will cause offset commit failures.
重新平衡开始后,每个worker加入组的最长允许时间。 这基本上是对所有任务刷新挂起数据并提交偏移量所需时间的限制。 如果超过该超时,worker将被从组中移除,这会导致偏移提交失败。
session.timeout.ms10000The timeout used to detect worker failures. The worker sends periodic heartbeats to indicate its liveness to the broker. If no heartbeats are received by the broker before the expiration of this session timeout, then the broker will remove the worker from
the group and initiate a rebalance. Note that the value must be in the allowable range as configured in the broker configuration by group.min.session.timeout.ms and group.max.session.timeout.ms.
用于检测worker失败的超时。 工作者发送定期心跳以向代理指示其活跃度。 如果代理在此会话超时期满之前未收到心跳,则代理将从组中删除工作程序并启动重新平衡。 请注意,该值必须在代理配置中通过group.min.session.timeout.ms和group.max.session.timeout.ms配置的允许范围内。
ssl.key.passwordnullThe password of the private key in the key store file. This is optional for client.密钥存储文件中的私钥的密码。 这对于客户端是可选的。
ssl.keystore.locationnullThe location of the key store file. This is optional for client and can be used for two-way authentication for client.密钥存储文件的位置。 这对于客户端是可选的,并且可以用于客户端的双向认证。
ssl.keystore.passwordnullThe store password for the key store file. This is optional for client and only needed if ssl.keystore.location is configured. 密钥存储文件的存储密码。 这对于客户端是可选的,只有在配置了ssl.keystore.location时才需要。
ssl.truststore.locationnullThe location of the trust store file. 信任存储文件的位置。
ssl.truststore.passwordnullThe password for the trust store file. 信任存储文件的密码。
connections.max.idle.ms540000Close idle connections after the number of milliseconds specified by this config.在此配置指定的毫秒数后关闭空闲连接。
receive.buffer.bytes32768The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used.读取数据时使用的TCP接收缓冲区大小(SO_RCVBUF)。 如果值为-1,将使用操作系统默认值。
request.timeout.ms40000The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted.此配置控制客户端等待请求响应的最大时间。 如果在超时之前未收到响应,客户端将在必要时重新发送请求,或在重试次数耗尽后使请求失败。
sasl.kerberos.service.namenullThe Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config.Kafka运行的Kerberos主体名称。 这可以在Kafka的JAAS配置或Kafka的配置中定义。
sasl.mechanismGSSAPISASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the default mechanism.SASL机制用于客户端连接。 这可以是安全提供者可用的任何机制。 GSSAPI是默认机制。
security.protocolPLAINTEXTProtocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.用于与代理通信的协议。 有效值为:PLAINTEXT,SSL,SASL_PLAINTEXT,SASL_SSL。
send.buffer.bytes131072The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used.发送数据时使用的TCP发送缓冲区(SO_SNDBUF)的大小。 如果值为-1,将使用操作系统默认值。
ssl.enabled.protocols[TLSv1.2, TLSv1.1, TLSv1]The list of protocols enabled for SSL connections.为SSL连接启用的协议列表。
ssl.keystore.typeJKSThe file format of the key store file. This is optional for client.密钥存储文件的文件格式。 这对于客户端是可选的。
ssl.protocolTLSThe SSL protocol used to generate the SSLContext. Default setting is TLS, which is fine for most cases. Allowed values in recent JVMs are TLS, TLSv1.1 and TLSv1.2. SSL, SSLv2 and SSLv3 may be supported in older JVMs, but their usage is discouraged due to
known security vulnerabilities.
用于生成SSLContext的SSL协议。 默认设置为TLS,在大多数情况下都适用。 较新的JVM中允许的值为TLS,TLSv1.1和TLSv1.2。 较旧的JVM可能支持SSL,SSLv2和SSLv3,但由于已知的安全漏洞,不建议使用它们。
ssl.providernullThe name of the security provider used for SSL connections. Default value is the default security provider of the JVM.用于SSL连接的安全提供程序的名称。 默认值是JVM的默认安全提供程序。
ssl.truststore.typeJKSThe file format of the trust store file.信任存储文件的文件格式。
worker.sync.timeout.ms3000When the worker is out of sync with other workers and needs to resynchronize configurations, wait up to this amount of time before giving up, leaving the group, and waiting a backoff period before rejoining.当worker与其他worker不同步并需要重新同步配置时,最多等待此时间量;超时后放弃并离开组,并在重新加入之前等待一个退避周期。
worker.unsync.backoff.ms300000When the worker is out of sync with other workers and fails to catch up within worker.sync.timeout.ms, leave the Connect cluster for this long before rejoining.当worker与其他worker不同步并且未能在worker.sync.timeout.ms内赶上时,先离开Connect集群这么长时间,然后再重新加入。
access.control.allow.methods""Sets the methods supported for cross origin requests by setting the Access-Control-Allow-Methods header. The default value of the Access-Control-Allow-Methods header allows cross origin requests for GET, POST and HEAD.通过设置Access-Control-Allow-Methods头来设置跨源请求支持的方法。 Access-Control-Allow-Methods标头的默认值允许GET,POST和HEAD的跨源请求。
access.control.allow.origin""Value to set the Access-Control-Allow-Origin header to for REST API requests.To enable cross origin access, set this to the domain of the application that should be permitted to access the API, or '*' to allow access from any domain. The default value only
allows access from the domain of the REST API.
为REST API请求设置Access-Control-Allow-Origin标头的值。要启用跨源访问,请将其设置为应被允许访问API的应用程序的域,或设置为'*'以允许来自任何域的访问。 默认值仅允许来自REST API所在域的访问。
client.id""An id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging.在发出请求时传递给服务器的id字符串。 这样做的目的是通过允许在服务器端请求记录中包括一个逻辑应用程序名称,能够跟踪超出ip / port的请求源。
metadata.max.age.ms300000The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes to proactively discover any new brokers or partitions.强制刷新元数据的时间段(以毫秒为单位):即使没有看到任何分区领导权变化,也会刷新元数据,以便主动发现新的代理或分区。
metric.reporters[]A list of classes to use as metrics reporters. Implementing the MetricReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics.用作度量报告器的类的列表。 实现MetricReporter接口允许插入将通知新度量标准创建的类。 总是包括JmxReporter以注册JMX统计信息。
metrics.num.samples2The number of samples maintained to compute metrics.维持计算度量的样本数。
metrics.sample.window.ms30000The window of time a metrics sample is computed over.计算度量样本的时间窗口。
offset.flush.interval.ms60000Interval at which to try committing offsets for tasks.尝试提交任务的偏移量的间隔。
offset.flush.timeout.ms5000Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before cancelling the process and restoring the offset data to be committed in a future attempt.等待记录刷新以及分区偏移数据提交到偏移存储的最大毫秒数;超时后将取消该过程,并将偏移数据留待未来的尝试中提交。
reconnect.backoff.ms50The amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all requests sent by the consumer to the broker.尝试重新连接到给定主机之前等待的时间。 这避免了在紧密环路中重复连接到主机。 此回退适用于消费者发送给代理的所有请求。
rest.advertised.host.namenullIf this is set, this is the hostname that will be given out to other workers to connect to.如果设置了此项,这是提供给其他worker用于连接的主机名。
rest.advertised.portnullIf this is set, this is the port that will be given out to other workers to connect to.如果设置了此项,这是提供给其他worker用于连接的端口。
rest.host.namenullHostname for the REST API. If this is set, it will only bind to this interface.REST API的主机名。 如果设置,它将只绑定到此接口。
rest.port8083Port for the REST API to listen on.用于REST API的端口。
retry.backoff.ms100The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios.尝试对指定主题分区重试失败的请求之前等待的时间。 这避免了在一些故障情况下在紧密循环中重复发送请求。
sasl.kerberos.kinit.cmd/usr/bin/kinitKerberos kinit command path.Kerberos kinit命令路径。
sasl.kerberos.min.time.before.relogin60000Login thread sleep time between refresh attempts.登录线程在刷新尝试之间的休眠时间。
sasl.kerberos.ticket.renew.jitter0.05Percentage of random jitter added to the renewal time.添加到更新时间的随机抖动的百分比。
sasl.kerberos.ticket.renew.window.factor0.8Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket.登录线程将睡眠,直到从上次刷新到票据到期的时间的指定窗口因子已经到达,在该时间它将尝试更新票证。
ssl.cipher.suitesnullA list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites
are supported.
密码套件列表。 这是一种命名的认证,加密,MAC和密钥交换算法的组合,用于使用TLS或SSL网络协议协商网络连接的安全设置。 默认情况下,支持所有可用的密码套件。
ssl.endpoint.identification.algorithmnullThe endpoint identification algorithm to validate server hostname using server certificate. 端点标识算法,使用服务器证书验证服务器主机名。
ssl.keymanager.algorithmSunX509The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine.密钥管理器工厂用于SSL连接的算法。 默认值是为Java虚拟机配置的密钥管理器工厂算法。
ssl.secure.random.implementationnullThe SecureRandom PRNG implementation to use for SSL cryptography operations. 用于SSL加密操作的SecureRandom PRNG实现。
ssl.trustmanager.algorithmPKIXThe algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine.信任管理器工厂用于SSL连接的算法。 默认值是为Java虚拟机配置的信任管理器工厂算法。
task.shutdown.graceful.timeout.ms5000Amount of time to wait for tasks to shutdown gracefully. This is the total amount of time, not per task. All tasks have shutdown triggered, then they are waited on sequentially.等待任务正常关闭的时间。 这是总时间量,而不是每个任务的时间。 所有任务都会先触发关闭,然后按顺序等待它们完成。
application.idAn identifier for the stream processing application. Must be unique within the Kafka cluster. It is used as 1) the default client-id prefix, 2) the group-id for membership management, 3) the changelog topic prefix.流处理应用程序的标识符。 在Kafka集群中必须是唯一的。 它用作1)默认的client-id前缀,2)用于成员资格管理的group-id,3)changelog主题前缀。
bootstrap.serversA list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping—this list only impacts the initial hosts used to discover
the full set of servers. This list should be in the form host1:port1,host2:port2,.... Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set
of servers (you may want more than one, though, in case a server is down).
用于建立与Kafka集群的初始连接的主机/端口对的列表。 客户端将使用所有服务器,而不考虑在此指定哪些服务器进行引导 - 此列表仅影响用于发现全套服务器的初始主机。 此列表应采用host1:port1,host2:port2,...的形式。由于这些服务器仅用于初始连接以发现完整的集群成员资格(可能会动态更改),所以此列表不需要包含完整集的服务器(您可能想要多个服务器,以防万一服务器关闭)。
client.id""An id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging.在发出请求时传递给服务器的id字符串。 这样做的目的是通过允许在服务器端请求记录中包括一个逻辑应用程序名称,能够跟踪超出ip / port的请求源。
zookeeper.connect""Zookeeper connect string for Kafka topics management.Zookeeper连接字符串用于Kafka主题管理。
key.serdeclass org.apache.kafka.common.serialization.Serdes$ByteArraySerdeSerializer / deserializer class for key that implements the Serde interface.用于实现Serde接口的密钥的Serializer / deserializer类。
partition.grouperclass org.apache.kafka.streams.processor.DefaultPartitionGrouperPartition grouper class that implements the PartitionGrouper interface.实现PartitionGrouper接口的分区分组器类。
replication.factor1The replication factor for change log topics and repartition topics created by the stream processing application.由流处理应用程序创建的更改日志主题和重新分区主题的复制因素。
state.dir/tmp/kafka-streamsDirectory location for state store.状态存储的目录位置。
timestamp.extractorclass org.apache.kafka.streams.processor.ConsumerRecordTimestampExtractorTimestamp extractor class that implements the TimestampExtractor interface.实现TimestampExtractor接口的Timestamp提取器类。
value.serdeclass org.apache.kafka.common.serialization.Serdes$ByteArraySerdeSerializer / deserializer class for value that implements the Serde interface.用于实现Serde接口的值的Serializer / deserializer类。
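下面是一个 Kafka Streams 应用配置的草图(假设性示例:应用名、地址与 Serde 类仅作示意,key.serde/value.serde 的取值形式以所用版本为准):
import java.util.Properties;

public class StreamsConfigSketch {
    public static Properties streamsProps() {
        Properties props = new Properties();
        // 应用标识:同时用作默认 client-id 前缀、group-id 和 changelog 主题前缀
        props.put("application.id", "demo-streams-app"); // 假设的应用名
        props.put("bootstrap.servers", "broker1:9092");   // 假设的地址
        // key / value 的默认 Serde
        props.put("key.serde", "org.apache.kafka.common.serialization.Serdes$StringSerde");
        props.put("value.serde", "org.apache.kafka.common.serialization.Serdes$StringSerde");
        // 本地状态存储目录与 changelog 主题的副本数
        props.put("state.dir", "/tmp/kafka-streams");
        props.put("replication.factor", "1");
        return props;
    }
}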
windowstore.changelog.additional.retention.ms86400000Added to a window's maintainMs to ensure data is not deleted from the log prematurely. Allows for clock drift. Default is 1 day加到窗口的maintainMs上,以确保数据不会被过早地从日志中删除。 允许时钟漂移。 默认值为1天
application.server""A host:port pair pointing to an embedded user defined endpoint that can be used for discovering the locations of state stores within a single KafkaStreams application指向嵌入式用户定义端点的主机:端口对,可用于在单个KafkaStreams应用程序中发现状态存储的位置
buffered.records.per.partition1000The maximum number of records to buffer per partition.每个分区缓冲的最大记录数。
cache.max.bytes.buffering10485760Maximum number of memory bytes to be used for buffering across all threads要用于所有线程缓冲的最大内存字节数
commit.interval.ms30000The frequency with which to save the position of the processor.保存处理器位置的频率。
metric.reporters[]A list of classes to use as metrics reporters. Implementing the MetricReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics.用作度量报告器的类的列表。 实现MetricReporter接口允许插入将被通知新度量标准创建的类。 总是包括JmxReporter以注册JMX统计信息。
metrics.num.samples2The number of samples maintained to compute metrics.维持计算度量的样本数。
metrics.sample.window.ms30000The window of time a metrics sample is computed over.计算度量样本的时间窗口。
num.standby.replicas0The number of standby replicas for each task.每个任务的备用副本数。
num.stream.threads1The number of threads to execute stream processing.执行流处理的线程数。
poll.ms100The amount of time in milliseconds to block waiting for input.阻止等待输入的时间(以毫秒为单位)。
rocksdb.config.setternullA Rocks DB config setter class that implements the RocksDBConfigSetter interfaceRocks DB配置设置器类实现RocksDBConfigSetter接口
state.cleanup.delay.ms60000The amount of time in milliseconds to wait before deleting state when a partition has migrated.迁移分区时删除状态之前等待的时间(以毫秒为单位)。