Writing the WordCount Program, Part 1: The Fixed Structure Explained

Summary: the WordCount flow diagram, the standard forms of the map and reduce functions in MapReduce (map: (K1, V1) → list(K2, V2), reduce: (K2, list(V2)) → list(K3, V3)), a complete WordCount code template, and an improved version built on Configured, Tool and ToolRunner.

WordCount Flow Diagram


The map and reduce Function Signatures in MapReduce

In MapReduce, the map and reduce functions follow this general form:
map: (K1, V1) → list(K2, V2)
reduce: (K2, list(V2)) → list(K3, V3)
The Mapper base class:
protected void map(KEY key, VALUE value,
    Context context) throws IOException, InterruptedException {
}
The Reducer base class:
protected void reduce(KEY key, Iterable<VALUE> values,
    Context context) throws IOException, InterruptedException {
}

Context is the context object; the map and reduce methods use it to emit their output key/value pairs via context.write and to interact with the framework.
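For WordCount the generic signatures instantiate concretely as follows (assuming the default TextInputFormat, which hands the mapper each line of input as a Text value keyed by its LongWritable byte offset):

map: (LongWritable offset, Text line) → list(Text word, IntWritable 1)
reduce: (Text word, list(IntWritable)) → list(Text word, IntWritable sum)

Each call to context.write(key, value) inside map or reduce emits one such output pair.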

Code Template

WordCount code

The basis for the code, i.e. the fixed pattern:
input --> map --> reduce --> output
The Java code below reproduces the functionality of this command: bin/yarn jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.0.jar wordcount input output

package com.lizh.hadoop.mapreduce;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    private static Log logger = LogFactory.getLog(WordCount.class);
    //step1 Mapper class
    
    public static class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable>{
        private Text mapoutputKey = new Text();
        private static final IntWritable mapOutputValues = new IntWritable(1); // a single shared instance
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String linevalue = value.toString();
            StringTokenizer stringTokenizer = new StringTokenizer(linevalue);
            while(stringTokenizer.hasMoreTokens()){
                String wordValue = stringTokenizer.nextToken();
                mapoutputKey.set(wordValue);
                context.write(mapoutputKey, mapOutputValues);
                logger.info("-----WordCountMapper-----"+mapOutputValues.get());
            }
        }
        
    }
    
    //step2 Reduce class
    public static class WordCountReduces extends Reducer<Text, IntWritable, Text, IntWritable>{

        private IntWritable reduceOutputValues =  new IntWritable();
        
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable iv : values) {
                sum = sum + iv.get();
            }
            reduceOutputValues.set(sum);
            context.write(key, reduceOutputValues);
        }
        
    }
    
    //step3 driver component job 
    
    public int run(String[] args) throws Exception{
        // 1. get the configuration (core-site.xml, hdfs-site.xml)
        Configuration configuration = new Configuration();
        
        //2 create job
        Job job = Job.getInstance(configuration, this.getClass().getSimpleName());
        //3 run jar
        job.setJarByClass(this.getClass());
        
        //4 set job
        //input-->map--->reduce-->output
        //4.1 input
        Path path = new Path(args[0]);
        FileInputFormat.addInputPath(job, path);
        
        //4.2 map
        job.setMapperClass(WordCountMapper.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
        
        //4.3 reduce
        job.setReducerClass(WordCountReduces.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        
        //4.4 output
        Path outputpath = new Path(args[1]);
        FileOutputFormat.setOutputPath(job, outputpath);
        
        //5 submit job
        boolean rv = job.waitForCompletion(true);
        
        return rv ? 0:1;
        
    }
    
    public static void main(String[] args) throws Exception{
        
        int rv = new WordCount().run(args);
        System.exit(rv);
    }
}
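To run this class instead of the bundled examples jar, package it into a jar and submit it to the cluster; a sketch of the command, where wc.jar and the input/output paths are placeholders:

bin/yarn jar wc.jar com.lizh.hadoop.mapreduce.WordCount /user/hadoop/input /user/hadoop/output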


Mapper Class Processing

map processing logic
-------------- input --------
<0, hadoop yarn>
-------------- processing ---------
hadoop yarn --> split --> hadoop, yarn
-------------output-------
<hadoop,1>
<yarn,1>

public static class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable>{
        private Text mapoutputKey = new Text();
        // a single shared instance
        private static final IntWritable mapOutputValues =  new IntWritable(1);
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String linevalue = value.toString();
            StringTokenizer stringTokenizer = new StringTokenizer(linevalue);
            while(stringTokenizer.hasMoreTokens()){
                String wordValue = stringTokenizer.nextToken();
                mapoutputKey.set(wordValue);
                context.write(mapoutputKey, mapOutputValues);
                logger.info("-----WordCountMapper-----"+mapOutputValues.get());
            }
        }
        
    }

Reducer Class Processing

reduce processing: map --> shuffle --> reduce

------------ input (the output of map) -----------------
<hadoop,1>
<hadoop,1>
<hadoop,1>
---------------- grouping ----------------
Values with the same key are merged into a single collection:
<hadoop,1>
<hadoop,1>    ->  <hadoop,list(1,1,1)>
<hadoop,1>
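After the shuffle, the reducer is called once per distinct key, with that key's values gathered into an Iterable (keys arrive in sorted order). For the grouped input above, the reduce call receives <hadoop, list(1,1,1)>, sums the list, and emits <hadoop, 3>.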
    //step2 Reduce class
    public static class WordCountReduces extends Reducer<Text, IntWritable, Text, IntWritable>{

        private IntWritable reduceOutputValues =  new IntWritable();
        
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable iv : values) {
                sum = sum + iv.get();
            }
            reduceOutputValues.set(sum);
            context.write(key, reduceOutputValues);
        }
        
    }

An Improved MapReduce Pattern

The MapReduce driver extends the Configured class and implements the Tool interface.
The run method declared by the Tool interface is overridden to set up and submit the job.
Configured takes care of the configuration initialization.

package com.lizh.hadoop.mapreduce;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class WordCountMapReduce extends Configured implements Tool {

    private static Log logger = LogFactory.getLog(WordCountMapReduce.class);
    //step1 Mapper class
    
    public static class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable>{
        private Text mapoutputKey = new Text();
        private static final IntWritable mapOutputValues = new IntWritable(1); // a single shared instance
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String linevalue = value.toString();
            StringTokenizer stringTokenizer = new StringTokenizer(linevalue);
            while(stringTokenizer.hasMoreTokens()){
                String wordValue = stringTokenizer.nextToken();
                mapoutputKey.set(wordValue);
                context.write(mapoutputKey, mapOutputValues);
                logger.info("-----WordCountMapper-----"+mapOutputValues.get());
            }
        }
        
    }
    
    //step2 Reduce class
    public static class WordCountReduces extends Reducer<Text, IntWritable, Text, IntWritable>{

        private IntWritable reduceOutputValues =  new IntWritable();
        
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable iv : values) {
                sum = sum + iv.get();
            }
            reduceOutputValues.set(sum);
            context.write(key, reduceOutputValues);
        }
        
    }
    
    //step3 driver component job 
    
    public int run(String[] args) throws Exception{
        // 1. get the configuration (core-site.xml, hdfs-site.xml)
        Configuration configuration = super.getConf(); // improvement: reuse the Configuration injected by ToolRunner
        
        //2 create job
        Job job = Job.getInstance(configuration, this.getClass().getSimpleName());
        //3 run jar
        job.setJarByClass(this.getClass());
        
        //4 set job
        //input-->map--->reduce-->output
        //4.1 input
        Path path = new Path(args[0]);
        FileInputFormat.addInputPath(job, path);
        
        //4.2 map
        job.setMapperClass(WordCountMapper.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
        
        //4.3 reduce
        job.setReducerClass(WordCountReduces.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        
        //4.4 output
        Path outputpath = new Path(args[1]);
        FileOutputFormat.setOutputPath(job, outputpath);
        
        //5 submit job
        boolean rv = job.waitForCompletion(true); // true: print job progress to the console
        
        return rv ? 0:1;
        
    }
    
    public static void main(String[] args) throws Exception{
        
        //int rv = new WordCountMapReduce().run(args);
        Configuration configuration = new Configuration();
        // run the job through the ToolRunner utility
        int rv  = ToolRunner.run(configuration, new WordCountMapReduce(), args);
        System.exit(rv);
    }
}
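Because the job is now launched through ToolRunner, generic Hadoop options supplied on the command line are parsed and applied to the Configuration before run is invoked; a sketch of such an invocation (wc.jar is a placeholder jar name):

bin/yarn jar wc.jar com.lizh.hadoop.mapreduce.WordCountMapReduce -D mapreduce.job.reduces=2 input output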

Extracting a Template

package org.apache.hadoop.mapreduce;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class WordCountMapReduce extends Configured implements Tool {

    /**
     * Mapper Class : public class Mapper<KEYIN, VALUEIN, KEYOUT, VALUEOUT>
     */
    public static class WordCountMapper extends //
            Mapper<LongWritable, Text, Text, LongWritable> {

        private Text mapOutputKey = new Text();
        private LongWritable mapOutputValue = new LongWritable(1);

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // template placeholder: tokenize the line and emit <word, 1> via context.write,
            // as in the WordCountMapper class shown earlier
        }
    }

    /**
     * Reducer Class : public class Reducer<KEYIN, VALUEIN, KEYOUT, VALUEOUT>
     */
    public static class WordCountReducer extends //
            Reducer<Text, LongWritable, Text, LongWritable> {

        private LongWritable outputValue = new LongWritable();

        @Override
        protected void reduce(Text key, Iterable<LongWritable> values,
                Context context) throws IOException, InterruptedException {
            // template placeholder: sum the values and emit <word, count> via context.write,
            // as in the WordCountReduces class shown earlier
        }
    }

    /**
     * Driver : create, set up and submit the Job
     * 
     * @param args
     * @throws Exception
     */
    public int run(String[] args) throws Exception {
        // 1.Get Configuration
        Configuration conf = super.getConf();

        // 2.Create Job
        Job job = Job.getInstance(conf);
        job.setJarByClass(getClass());

        // 3.Set Job
        // Input --> map --> reduce --> output
        // 3.1 Input
        Path inPath = new Path(args[0]);
        FileInputFormat.addInputPath(job, inPath);

        // 3.2 Map class
        job.setMapperClass(WordCountMapper.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(LongWritable.class);

        // 3.3 Reduce class
        job.setReducerClass(WordCountReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(LongWritable.class);

        // 3.4 Output
        Path outPath = new Path(args[1]);

        // delete the output path if it already exists; otherwise FileOutputFormat
        // rejects the job because the output directory must not already exist
        FileSystem dfs = FileSystem.get(conf);
        if (dfs.exists(outPath)) {
            dfs.delete(outPath, true);
        }

        FileOutputFormat.setOutputPath(job, outPath);

        // 4.Submit Job
        boolean isSuccess = job.waitForCompletion(true);
        return isSuccess ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        

        Configuration conf = new Configuration();
    
        
        // run job
        int status = ToolRunner.run(//
                conf,//
                new WordCountMapReduce(),//
                args);

        // exit program
        System.exit(status);
    }
}
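Compared with the earlier version, this template also deletes the output path up front if it already exists (so the job can be rerun without failing) and uses LongWritable instead of IntWritable for the count value; the map and reduce bodies are left as placeholders to be filled in for each job.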