
Kafka cluster configuration and installation

2017-03-09 19:48 · 369 views
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0
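Since this cluster has three nodes (the three hosts listed later in `zookeeper.connect`), each broker's `server.properties` differs only in `broker.id` and `host.name`, as the closing note explains. A sketch of the overrides for the other two nodes (the specific id values are assumptions; any unique integers work):

```properties
# broker on 10.189.122.208 (example id)
broker.id=1
host.name=10.189.122.208

# broker on 10.189.122.213 (example id)
broker.id=2
host.name=10.189.122.213
```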


############################# Socket Server Settings #############################

# The port the socket server listens on
port=9092

# Hostname the broker will bind to. If not set, the server will bind to all interfaces
host.name=10.189.122.207

# Hostname the broker will advertise to producers and consumers. If not set, it uses the
# value for "host.name" if configured. Otherwise, it will use the value returned from
# java.net.InetAddress.getCanonicalHostName().
#advertised.host.name=<hostname routable by clients>

# The port to publish to ZooKeeper for clients to use. If this is not set,
# it will publish the same port that the broker binds to.
#advertised.port=<port accessible by clients>

# The number of threads handling network requests
num.network.threads=3

# The number of threads doing disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600
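These settings are raw byte counts: the socket buffers above are 100 KiB and the request cap is 100 MiB. A quick sanity check of the arithmetic (no Kafka needed):

```shell
# socket.send.buffer.bytes / socket.receive.buffer.bytes = 100 KiB
echo $((100 * 1024))          # 102400

# socket.request.max.bytes = 100 MiB
echo $((100 * 1024 * 1024))   # 104857600
```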



############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/tmp/kafka-logs
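Note that on many systems `/tmp` is cleared on reboot, so this default is only suitable for testing; a production cluster would point `log.dirs` at dedicated data disks, comma separated as the comment says. A hedged example (the paths are hypothetical):

```properties
# example only -- use dedicated data disks rather than /tmp in production
log.dirs=/data1/kafka-logs,/data2/kafka-logs
```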


# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=2

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1


############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000


############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion
log.retention.hours=24

# A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
# segments don't drop below log.retention.bytes.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824
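The segment size (and the commented-out `log.retention.bytes`) is 1 GiB expressed in bytes, and the retention check interval that follows is 5 minutes in milliseconds:

```shell
# log.segment.bytes = 1 GiB
echo $((1024 * 1024 * 1024))  # 1073741824

# log.retention.check.interval.ms = 5 minutes
echo $((5 * 60 * 1000))       # 300000
```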


# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

# By default the log cleaner is disabled and the log retention policy will default to just delete segments after their retention expires.
# If log.cleaner.enable=true is set the cleaner will be enabled and individual logs can then be marked for log compaction.
log.cleaner.enable=false


############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=10.189.122.207:2181,10.189.122.208:2181,10.189.122.213:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000
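As the comment above notes, an optional chroot can be appended so that all Kafka znodes live under one subtree of the ZooKeeper ensemble, which is useful when the ensemble is shared with other applications. A variant of the connect string above (the `/kafka` path is an example, not what this cluster uses):

```properties
# example: same ensemble, with all Kafka znodes rooted under /kafka
zookeeper.connect=10.189.122.207:2181,10.189.122.208:2181,10.189.122.213:2181/kafka
```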


The `server.properties` file under the `config` directory is configured as shown above. On the other broker machines, only `broker.id` and `host.name` (the hostname shown in ZooKeeper) need to be changed; each broker is then started with `bin/kafka-server-start.sh config/server.properties`.
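Once all three brokers are up, the cluster can be smoke-tested from any node. A sketch of the commands, assuming the ZooKeeper-based `kafka-topics.sh` CLI of the Kafka versions this configuration (`host.name`, `port`) belongs to; the topic name is arbitrary:

```shell
# start the broker in the background (run on each node)
nohup bin/kafka-server-start.sh config/server.properties > /dev/null 2>&1 &

# create a replicated topic and confirm it is visible cluster-wide
bin/kafka-topics.sh --create --zookeeper 10.189.122.207:2181 \
  --replication-factor 2 --partitions 2 --topic test
bin/kafka-topics.sh --list --zookeeper 10.189.122.207:2181
```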
Tags: bigdata, kafka