
[apache-hive-1.2.1] The number of reducers in Hive

2016-01-25
hive> select count(1) from serde_regex;

Automatically selecting local only mode for query
Query ID = hadoop_20160125101917_ab5615a4-e6f1-47e3-9e97-6795c3268cea
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapreduce.job.reduces=<number>

The formula: number of reducers = InputFileSize / hive.exec.reducers.bytes.per.reducer (rounded up, and capped by hive.exec.reducers.max; see below).
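A quick worked example with an assumed input size (the 1 GB figure is hypothetical, not taken from the query above): with the default of 256,000,000 bytes per reducer,

number of reducers = ceil(1,000,000,000 / 256,000,000) = 4

as long as 4 does not exceed hive.exec.reducers.max.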

Three parameters are involved.

hive.exec.reducers.bytes.per.reducer controls how much input a single reducer handles. Default: 256 MB (256,000,000 bytes).

hive> set hive.exec.reducers.bytes.per.reducer;
hive.exec.reducers.bytes.per.reducer=256000000
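As a sketch (the 128,000,000 value below is illustrative, not from the original post), lowering this threshold makes Hive estimate more reducers for queries whose reducer count is size-driven; note that a plain count(1) with no GROUP BY, like the query above, still compiles to a single reducer:

hive> set hive.exec.reducers.bytes.per.reducer=128000000;
hive> set hive.exec.reducers.bytes.per.reducer;
hive.exec.reducers.bytes.per.reducer=128000000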

hive.exec.reducers.max controls the maximum number of reducers. If InputFileSize / bytes per reducer exceeds this maximum, the value set here is used instead.

hive> set hive.exec.reducers.max;
hive.exec.reducers.max=1009
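As a sketch of capping the estimate (the value 50 is made up for illustration): with the setting below, even if the size-based formula yields more than 50 reducers, at most 50 are launched for the query.

hive> set hive.exec.reducers.max=50;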

mapred.reduce.tasks (queried below as mapreduce.reduce.tasks; the newer name shown in the job log above is mapreduce.job.reduces): if this parameter is set explicitly, Hive skips the calculation entirely and uses this value as the reducer count. The default of -1 means it is unset, so the formula above applies.

hive> set mapreduce.reduce.tasks;
mapred.reduce.tasks=-1
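For completeness, forcing a constant reducer count works as the job log above suggests (the value 10 is illustrative); setting the parameter back to -1 restores the size-based estimate:

hive> set mapreduce.job.reduces=10;
hive> set mapreduce.job.reduces=-1;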