
Preliminary Notes on Storm (Part 2)

2016-01-16
Abstract: Storm's DRPC and Trident

5. Storm's DRPC mechanism

RPC (remote procedure call) simply means that a function runs on a server while you call it from a client: the client obtains a proxy for that function through the server's IP address and port, invokes it, and gets the result back. DRPC is the distributed form of this. Writing a Storm DRPC topology looks much like writing an ordinary Storm topology, except that the spout (which receives requests) and the bolt that returns results to the client are already provided; you only need to write the bolt(s) in between. The server-side code is as follows:

public class DrpcServer {
    public static class HelloBolt extends BaseRichBolt {
        private Map stormConf;
        private TopologyContext context;
        private OutputCollector collector;

        @Override
        public void prepare(Map stormConf, TopologyContext context,
                OutputCollector collector) {
            this.stormConf = stormConf;
            this.context = context;
            this.collector = collector;
        }

        @Override
        public void execute(Tuple tuple) {
            // In the tuple sent by the DRPC spout, the first field is the request id
            // and the second field is the argument string.
            Long requestId = tuple.getLong(0);
            String value = tuple.getString(1);
            value = "hello" + value;
            // Emit the request id together with the result so that the return bolt
            // can route the answer back to the right client call.
            collector.emit(new Values(requestId, value));
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("functionname", "value"));
        }
    }

    public static void main(String[] args) {
        // Publish a DRPC function named "hello".
        LinearDRPCTopologyBuilder linearDRPCTopologyBuilder = new LinearDRPCTopologyBuilder("hello");
        linearDRPCTopologyBuilder.addBolt(new HelloBolt());

        Config conf = new Config();
        try {
            StormSubmitter.submitTopology(DrpcServer.class.getSimpleName(), conf,
                    linearDRPCTopologyBuilder.createRemoteTopology());
        } catch (AlreadyAliveException e) {
            e.printStackTrace();
        } catch (InvalidTopologyException e) {
            e.printStackTrace();
        }
    }
}
The client is as follows:

public class DrpcClient {

    public static void main(String[] args) {
        DRPCClient client = new DRPCClient("115.28.138.100", 3772);
        try {
            // The first argument is the DRPC function name, the second is the parameter.
            String result = client.execute("hello", "world");
            System.out.println(result);
        } catch (TException e) {
            e.printStackTrace();
        } catch (DRPCExecutionException e) {
            e.printStackTrace();
        }
    }
}
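For quick experiments you do not need a running cluster or the Thrift port at all: the same topology can be wired to an in-process DRPC server. Below is a minimal local-mode sketch using Storm's LocalDRPC and LocalCluster; the class name DrpcLocalTest and the topology name "drpc-local" are illustrative choices, and it reuses the HelloBolt defined above.

public class DrpcLocalTest {
    public static void main(String[] args) {
        // Reuse the HelloBolt defined in DrpcServer above.
        LinearDRPCTopologyBuilder builder = new LinearDRPCTopologyBuilder("hello");
        builder.addBolt(new DrpcServer.HelloBolt());

        LocalDRPC drpc = new LocalDRPC();
        LocalCluster cluster = new LocalCluster();
        // createLocalTopology wires the DRPC spout and return bolt to the in-process DRPC server.
        cluster.submitTopology("drpc-local", new Config(), builder.createLocalTopology(drpc));

        // Call the "hello" function directly; no remote Thrift port (3772) is involved.
        System.out.println(drpc.execute("hello", "world"));

        drpc.shutdown();
        cluster.shutdown();
    }
}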

6. Storm's Trident

Trident is a higher-level abstraction built on top of Storm. It removes much of the boilerplate of plain Storm programs, and its tuples are delivered in groups (batches) rather than one at a time. Trident ships with a ready-made spout, FixedBatchSpout; an analysis of its code can be found at

http://www.cnblogs.com/chengxin1982/p/3999641.html
A more detailed analysis of some of Trident's APIs is available at
http://www.bubuko.com/infodetail-467560.html
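For reference, here is a minimal sketch of how FixedBatchSpout (from storm.trident.testing) is typically constructed: it takes the output fields, a maximum batch size, and the tuples to emit. The sample sentences below are made up purely for illustration.

// A fixed, in-memory batch spout: emits the given tuples in batches of at most 3.
FixedBatchSpout spout = new FixedBatchSpout(new Fields("line"), 3,
        new Values("hello storm"),
        new Values("hello trident"),
        new Values("storm trident batch"));
// Emit the data once instead of cycling through it forever.
spout.setCycle(false);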

Below is the word-count practice code; the two versions are similar.

Code 1

public class TridentWordCount {
    public static class MySpout implements IBatchSpout {
        @Override
        public void open(Map conf, TopologyContext context) {
        }

        @Override
        public void emitBatch(long batchId, TridentCollector collector) {
            Collection<File> files = FileUtils.listFiles(new File("D:\\test\\"), new String[]{"txt"}, false);
            for (File file : files) {
                try {
                    List<String> lines = FileUtils.readLines(file);
                    for (String line : lines) {
                        collector.emit(new Values(line));
                    }
                    // This method is called repeatedly, so rename each file after
                    // reading it to avoid processing it again.
                    FileUtils.moveFile(file, new File(file.getAbsolutePath() + "--checked" + System.currentTimeMillis()));
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }

        @Override
        public void ack(long batchId) {
        }

        @Override
        public void close() {
        }

        @Override
        public Map getComponentConfiguration() {
            Config conf = new Config();
            conf.setMaxTaskParallelism(1);
            return conf;
        }

        @Override
        public Fields getOutputFields() {
            return new Fields("line");
        }
    }

    public static class SplitBolt extends BaseFunction {
        @Override
        public void execute(TridentTuple tuple, TridentCollector collector) {
            String line = tuple.getStringByField("line");
            String[] words = line.split("\t");
            for (String word : words) {
                collector.emit(new Values(word));
            }
        }
    }

    public static class SumBolt extends BaseFunction {
        HashMap<String, Integer> map = new HashMap<String, Integer>();

        @Override
        public void execute(TridentTuple tuple, TridentCollector collector) {
            String word = tuple.getString(0);
            Integer num = map.get(word);
            if (num == null) {
                num = 0;
            }
            map.put(word, ++num);
            for (Entry<String, Integer> entry : map.entrySet()) {
                System.out.println(entry.getKey() + "---" + entry.getValue());
            }
        }
    }

    public static void main(String[] args) {
        TridentTopology tridentTopology = new TridentTopology();
        tridentTopology.newStream("SpoutId", new MySpout())
                .each(new Fields("line"), new SplitBolt(), new Fields("word"))
                .each(new Fields("word"), new SumBolt(), new Fields(""));
        LocalCluster localCluster = new LocalCluster();
        localCluster.submitTopology(TridentWordCount.class.getSimpleName(), new Config(), tridentTopology.build());
    }
}
Code 2

public class TridentWordCount2 {
    public static class MySpout implements IBatchSpout {
        @Override
        public void open(Map conf, TopologyContext context) {
        }

        @Override
        public void emitBatch(long batchId, TridentCollector collector) {
            Collection<File> files = FileUtils.listFiles(new File("D:\\test\\"), new String[]{"txt"}, false);
            for (File file : files) {
                try {
                    List<String> lines = FileUtils.readLines(file);
                    for (String line : lines) {
                        collector.emit(new Values(line));
                    }
                    // This method is called repeatedly, so rename each file after
                    // reading it to avoid processing it again.
                    FileUtils.moveFile(file, new File(file.getAbsolutePath() + "--checked" + System.currentTimeMillis()));
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }

        @Override
        public void ack(long batchId) {
        }

        @Override
        public void close() {
        }

        @Override
        public Map getComponentConfiguration() {
            Config conf = new Config();
            conf.setMaxTaskParallelism(1);
            return conf;
        }

        @Override
        public Fields getOutputFields() {
            return new Fields("line");
        }
    }

    public static class SplitBolt extends BaseFunction {
        @Override
        public void execute(TridentTuple tuple, TridentCollector collector) {
            String line = tuple.getStringByField("line");
            String[] words = line.split("\t");
            for (String word : words) {
                collector.emit(new Values(word));
            }
        }
    }

    // A custom aggregator.
    public static class MyAggregate extends BaseAggregator<Map<String, Integer>> {

        HashMap<String, Integer> hashMap = new HashMap<String, Integer>();

        // Called once when a batch arrives; the returned value is passed to the two methods below.
        @Override
        public Map<String, Integer> init(Object batchId,
                TridentCollector collector) {
            return hashMap;
        }

        // Called for every tuple in the batch.
        @Override
        public void aggregate(Map<String, Integer> val, TridentTuple tuple,
                TridentCollector collector) {
            String value = tuple.getString(0);
            Integer num = val.get(value);
            if (num == null) {
                num = 0;
            }
            val.put(value, ++num);
        }

        // Called when the batch is finished; emits the aggregated result.
        @Override
        public void complete(Map<String, Integer> val,
                TridentCollector collector) {
            collector.emit(new Values(val));
        }
    }

    public static class SumBolt extends BaseFunction {
        @Override
        public void execute(TridentTuple tuple, TridentCollector collector) {
            Map<String, Integer> map = (Map<String, Integer>) tuple.getValue(0);
            for (Entry<String, Integer> entry : map.entrySet()) {
                System.out.println(entry.getKey() + "---" + entry.getValue());
            }
        }
    }

    public static void main(String[] args) {
        TridentTopology tridentTopology = new TridentTopology();
        tridentTopology.newStream("SpoutId", new MySpout())
                .each(new Fields("line"), new SplitBolt(), new Fields("word"))
                // Group by the value of the "word" field; tuples with the same word form one group.
                .groupBy(new Fields("word"))
                // Aggregate each group with the custom aggregator.
                .aggregate(new Fields("word"), new MyAggregate(), new Fields("map"))
                .each(new Fields("map"), new SumBolt(), new Fields(""));
        LocalCluster localCluster = new LocalCluster();
        localCluster.submitTopology(TridentWordCount2.class.getSimpleName(), new Config(), tridentTopology.build());
    }
}
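For comparison, a Trident word count does not strictly require a hand-written BaseAggregator: Trident ships with built-in aggregators such as Count, and the grouped stream can be aggregated into state with persistentAggregate. The following is a minimal sketch, assuming the MySpout and SplitBolt defined above plus Trident's built-in Count aggregator and the in-memory MemoryMapState (both from Storm's testing/builtin packages); the stream and field names are illustrative.

TridentTopology topology = new TridentTopology();
topology.newStream("SpoutId", new MySpout())
        .each(new Fields("line"), new SplitBolt(), new Fields("word"))
        .groupBy(new Fields("word"))
        // Keep a running count per word in an in-memory map state.
        .persistentAggregate(new MemoryMapState.Factory(), new Count(), new Fields("count"));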
Tags: Storm