
Notes on learning Kafka (part 1: before replication)

2014-11-29 07:11
Sequential disk access can in some cases be faster than random memory access!

The memory overhead of objects is very high, often doubling the size of the data stored (or worse).
Java garbage collection becomes increasingly fiddly and slow as the in-heap data increases.

All data is immediately written to a persistent log on the filesystem without necessarily flushing to disk. In effect this just means
that it is transferred into the kernel's pagecache.
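The point above can be seen outside Kafka too. A minimal sketch (not Kafka code, and the file name is made up): `os.write()` hands bytes to the kernel's pagecache, and another reader sees them immediately even though `fsync()` has not been called, because reads are served from that same cache.

```python
import os
import tempfile

# Hypothetical "log segment" file, just for illustration.
path = os.path.join(tempfile.mkdtemp(), "segment.log")

fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
os.write(fd, b"message-0\n")   # lands in the kernel pagecache
# note: no os.fsync(fd) here -- the data may not yet be on the physical disk

with open(path, "rb") as reader:   # a second reader, like a consumer tailing the log
    print(reader.read())           # the bytes are visible, served from the pagecache

os.close(fd)
```

Durability against a power failure still requires an eventual flush; the write is only "persistent" in the sense that it has left the application's own buffers.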

Kafka uses pull instead of push.

To avoid busy-waiting when no data is available, parameters in the pull (fetch) request allow the consumer request to block in a "long poll", waiting until data arrives.
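A toy illustration of the long-poll idea (this is a simulation with a queue, not the real Kafka protocol or client API): the fetch blocks for up to a timeout and returns early the moment data arrives, instead of repeatedly polling an empty broker.

```python
import queue
import threading
import time

broker = queue.Queue()  # stands in for a partition with no new data yet

def long_poll(q: queue.Queue, timeout: float):
    """Block until a message arrives or the timeout expires."""
    try:
        return q.get(timeout=timeout)
    except queue.Empty:
        return None  # empty response; the consumer simply issues another poll

# A producer delivers a message 0.1 s into the consumer's 2 s wait.
threading.Timer(0.1, broker.put, args=("msg-1",)).start()

start = time.monotonic()
msg = long_poll(broker, timeout=2.0)
elapsed = time.monotonic() - start
print(msg)  # the consumer wakes as soon as the message arrives, well before 2 s
```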

(Kafka's persistent log reminds me of Git's storage model ... :-))

So effectively Kafka guarantees at-least-once delivery by default and allows the user to implement at-most-once delivery by disabling retries on the producer and committing its offset prior to processing a batch of messages. Exactly-once delivery requires cooperation with the destination storage system, but Kafka provides the offset which makes implementing this straightforward.
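The commit-ordering point can be modeled in a few lines (a simplified sketch, not the Kafka client API; `log`, `committed`, and the crash flags are all invented for illustration): commit after processing and a crash causes a duplicate; commit before processing and a crash causes a loss.

```python
log = ["m0", "m1", "m2"]   # the broker's partition, as a plain list
committed = 0              # the consumer's saved offset (next message to read)
processed = []             # side effects observed by the downstream system

def consume_at_least_once(crash_before_commit=False):
    """Process first, commit after: a crash re-delivers the message."""
    global committed
    msg = log[committed]
    processed.append(msg)            # side effect happens first
    if crash_before_commit:
        return                       # offset not saved -> msg will be seen again
    committed += 1

def consume_at_most_once(crash_after_commit=False):
    """Commit first, process after: a crash can lose the message."""
    global committed
    msg = log[committed]
    committed += 1                   # offset saved before processing
    if crash_after_commit:
        return                       # msg was never processed -> lost forever
    processed.append(msg)

consume_at_least_once(crash_before_commit=True)  # "m0" processed, not committed
consume_at_least_once()                          # "m0" delivered again: duplicate
consume_at_most_once(crash_after_commit=True)    # "m1" committed, never processed
consume_at_most_once()                           # "m2" handled normally
print(processed)  # ['m0', 'm0', 'm2'] -- a duplicate and a loss, no "m1"
```

Exactly-once then amounts to storing the offset atomically with the processing result in the destination system, which is why the offset Kafka exposes is the key ingredient.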
Tags: kafka