
Learning MapReduce WordCount - Hadoop Study Notes

2017-03-29

WordCount, as the title says!

Part 1: The Basic Version

1. TokenizerMapper.java

package hadooptest2;

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
//A mapper must extend org.apache.hadoop.mapreduce.Mapper and implement the map function.
//The four type parameters of Mapper are the key and value classes of the map input pair,
//followed by the key and value classes of the map output pair.
public class TokenizerMapper extends Mapper<LongWritable,Text,Text,IntWritable>{
	
	private final static IntWritable one = new IntWritable(1);
	private Text word = new Text();
	

	  /**
	   * The framework calls map once for each key/value pair in the input
	   * @param key the byte offset of this line within the input file (not a line number)
	   * @param value the content of the line
	   * @throws IOException
	   */
	public void map(LongWritable key,Text value,Context context)throws IOException, InterruptedException
	{
		
		StringTokenizer itr = new StringTokenizer(value.toString());
		while(itr.hasMoreTokens())
		{
			//StringTokenizer's nextToken() returns the individual words the line splits into
			word.set(itr.nextToken());
			//context.write(key, value) emits the pair as an intermediate result
			context.write(word, one);
		}
	}
}
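
To make the mapper concrete, take the hypothetical input line "hello world hello" (any whitespace-separated text behaves the same). One call to map emits three intermediate pairs:

(hello, 1), (world, 1), (hello, 1)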

2. IntSumReducer.java

package hadooptest2;
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

/**
 * The Reducer receives the intermediate results emitted by the Mapper and runs the reduce function.
 * The four type parameters of Reducer are the key and value classes of the reduce input pair,
 * followed by the key and value classes of the reduce output pair.
 */
public class IntSumReducer extends Reducer<Text,IntWritable,Text,IntWritable>
{

	private IntWritable result = new IntWritable();

	  /**
	   * reduce sums the counts for each identical key (i.e. the same word) and writes the final result
	   * @param key the word
	   * @param values reduce receives its input in the form <key, List<value>>,
	   *               because the framework routes every value with the same key to one reduce call;
	   *               all the counts for a given word therefore arrive in this list
	   * @throws IOException
	   */
	public void reduce(Text key,Iterable<IntWritable>values,Context context) throws IOException,InterruptedException
	{
		
		int sum = 0;
		for(IntWritable val:values)
		{
			//simply sum the values in the list
			sum += val.get();
		}
		result.set(sum);
		context.write(key,result);
	}
}
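
Continuing the hypothetical example from above: the shuffle phase groups the intermediate pairs by key, so reduce is called once with hello -> <1, 1> and once with world -> <1>, and writes the final counts (hello, 2) and (world, 1).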

3. WordCount.java

package hadooptest2;
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
public class WordCount {
	public static void main(String[] args)throws IOException,ClassNotFoundException,InterruptedException
	{
		//Configuration holds the Hadoop configuration;
		//job-level settings can also be made on it in code
		Configuration conf = new Configuration();
		
		if(args.length !=2)
		{
			System.err.println("Usage: wordcount <in> <out>");
			System.exit(2);
		}
		//Job.getInstance replaces the constructor new Job(conf, name), which is deprecated
		Job job = Job.getInstance(conf, "word count");
		job.setJarByClass(WordCount.class);
		//Specify the Mapper class
		job.setMapperClass(TokenizerMapper.class);
		//Specify the Reducer class
		job.setReducerClass(IntSumReducer.class);
		//Specify the key class of the reduce output
		job.setOutputKeyClass(Text.class);
		//Specify the value class of the reduce output
		job.setOutputValueClass(IntWritable.class);
		
		//Input path
		FileInputFormat.addInputPath(job,new Path(args[0]));
		//Output path
		FileOutputFormat.setOutputPath(job,new Path(args[1]));
		//waitForCompletion submits the job to Hadoop and waits for it to finish
		System.exit(job.waitForCompletion(true)?0:1);
		
	}
}
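
One minimal way to run the job (the jar name wordcount.jar and the HDFS paths input and output are assumptions for illustration, adjust them to your environment):

hadoop jar wordcount.jar hadooptest2.WordCount input output
hdfs dfs -cat output/part-r-00000

Note that the job refuses to start if the output directory already exists, so remove it first (hdfs dfs -rm -r output) when re-running.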

Part 2: Early Aggregation with a Combiner

Approach: register a combiner, which aggregates the map output early so that less data has to be transferred to the reducers; see the code sketch after the tips below.

Tips:

  • The combine step takes place between map and reduce and merges the intermediate results once
  • Hadoop does not guarantee that the combiner runs: it may run once, run several times, or not run at all
  • A combiner is not appropriate for every scenario, and careless use can corrupt the result; it suits operations such as maximum, minimum, and sum
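
For WordCount the aggregation is a plain sum, which is associative and commutative, so the existing IntSumReducer can double as the combiner. A minimal sketch of the change, a single extra line in WordCount.java's main:

		//run IntSumReducer over each map task's local output before the shuffle,
		//so identical words are pre-summed and less data crosses the network
		job.setCombinerClass(IntSumReducer.class);

This is only safe because summing partial sums gives the same answer as summing everything at once; a non-associative operation such as averaging would produce a wrong result.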

 
