3. InputFormat Responsibilities
Divides input data into logical input splits
Data in HDFS is divided into blocks, but processed as input splits
An InputSplit may contain any number of blocks (usually 1)
Each Mapper processes one input split
Creates RecordReaders to extract <key, value> pairs
2/24/13
4. InputFormat Class
public abstract class InputFormat<K, V> {
  public abstract List<InputSplit> getSplits(JobContext context) throws ...;
  public abstract RecordReader<K, V> createRecordReader(InputSplit split,
      TaskAttemptContext context) throws ...;
}
5. Most Common InputFormats
TextInputFormat
Each '\n'-terminated line is a value
The byte offset of that line is the key
Why not a line number?
KeyValueTextInputFormat
Key and value are separated by a separator (tab by default)
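Wiring one of these up is a one-liner on the Job; as a sketch (assuming the new-API Hadoop 2.x classes and property name), the separator for KeyValueTextInputFormat can also be changed via configuration:

```java
// Sketch: use KeyValueTextInputFormat and change its separator from tab to comma.
job.setInputFormatClass(KeyValueTextInputFormat.class);
job.getConfiguration().set(
    "mapreduce.input.keyvaluelinerecordreader.key.value.separator", ",");
```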
6. Binary InputFormats
SequenceFileInputFormat
SequenceFiles are flat files consisting of binary <key, value> pairs
AvroInputFormat
Avro supports rich data structures (not necessarily <key, value> pairs) serialized to files or messages
Compact, fast, language-independent, self-describing, dynamic
7. Some Other InputFormats
NLineInputFormat
Input files should not be too big, since splits are calculated in a single thread (NLineInputFormat#getSplitsForFile)
CombineFileInputFormat
An abstract class, but not so difficult to extend
SeparatorInputFormat
How-to here: http://blog.rguha.net/?p=293
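For NLineInputFormat above, the number of lines per split is configurable; a sketch, assuming the new-API class:

```java
// Sketch: each mapper receives (at most) 1000 input lines
job.setInputFormatClass(NLineInputFormat.class);
NLineInputFormat.setNumLinesPerSplit(job, 1000);
```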
8. Some Other InputFormats
MultipleInputs
Supports multiple input paths with a different InputFormat and Mapper for each path
MultipleInputs.addInputPath(job, firstPath, FirstInputFormat.class, FirstMapper.class);
MultipleInputs.addInputPath(job, secondPath, SecondInputFormat.class, SecondMapper.class);
10. InputFormat Interesting Facts
Ideally, the InputSplit size is equal to the HDFS block size
Or an InputSplit contains multiple collocated HDFS blocks
InputFormat may prevent splitting a file
A whole file is then processed by a single mapper (e.g. gzip)
protected boolean FileInputFormat#isSplitable(JobContext context, Path file);
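To prevent splitting, an InputFormat can override isSplitable; a minimal sketch (class name is hypothetical):

```java
// Sketch: a TextInputFormat variant whose files are never split,
// so each whole file is read by a single mapper
public class NonSplittableTextInputFormat extends TextInputFormat {
  @Override
  protected boolean isSplitable(JobContext context, Path file) {
    return false;
  }
}
```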
11. InputFormat Interesting Facts
Mapper knows the file/offset/size of the split that it processes
MapContext#getInputSplit()
Useful for later debugging on a local machine
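For example, a mapper can log its split's location for later debugging; a sketch, assuming a FileInputFormat-based job (so the split is a FileSplit):

```java
// Sketch: log which part of which file this mapper is processing
FileSplit split = (FileSplit) context.getInputSplit();
System.err.println("Processing " + split.getPath()
    + " at offset " + split.getStart()
    + ", length " + split.getLength());
```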
12. InputFormat Interesting Facts
PathFilter (applied by the InputFormat) specifies which files to include in the input data
PathFilter hiddenFileFilter = new PathFilter() {
  public boolean accept(Path p) {
    String name = p.getName();
    return !name.startsWith("_") && !name.startsWith(".");
  }
};
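A custom filter class is registered on the job; a sketch (the filter class name is hypothetical, and FileInputFormat applies the hidden-file filter above by default):

```java
// Sketch: apply a custom PathFilter class to the job's input
FileInputFormat.setInputPathFilter(job, MyPathFilter.class);
```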
14. RecordReader Logic
Must handle a common situation when InputSplit and
HDFS block boundaries do not match
Image source: Hadoop: The Definitive Guide by Tom White
15. RecordReader Logic
Exemplary solution, based on LineRecordReader
Skips* everything from its block until the first '\n'
Reads from the second block until it sees '\n'
*except the very first block (an offset equal to 0)
Image source: Hadoop: The Definitive Guide by Tom White
16. Keys And Values
Keys must implement the WritableComparable interface
Since they are sorted before being passed to the Reducers
Values must implement "at least" the Writable interface
18. Writable And WritableComparable
public interface Writable {
  void write(DataOutput out) throws IOException;
  void readFields(DataInput in) throws IOException;
}
public interface WritableComparable<T> extends Writable, Comparable<T> {
}
public interface Comparable<T> {
  public int compareTo(T o);
}
19. Example: SongWritable
class SongWritable implements Writable {
  String title;
  int year;
  byte[] content;
  …
  public void write(DataOutput out) throws ... {
    out.writeUTF(title);
    out.writeInt(year);
    out.writeInt(content.length);
    out.write(content);
  }
  public void readFields(DataInput in) throws ... {
    title = in.readUTF();
    year = in.readInt();
    content = new byte[in.readInt()];
    in.readFully(content);
  }
}
20. Mapper
Takes input in the form of a <key, value> pair
Emits a set of intermediate <key, value> pairs
Stores them locally and later passes them to the Reducers
But earlier: partition + sort + spill + merge
22. MapContext Object
Allows the user's map code to communicate with the MapReduce system
public InputSplit getInputSplit();
public TaskAttemptID getTaskAttemptID();
public void setStatus(String msg);
public boolean nextKeyValue() throws ...;
public KEYIN getCurrentKey() throws ...;
public VALUEIN getCurrentValue() throws ...;
public void write(KEYOUT key, VALUEOUT value) throws ...;
public Counter getCounter(String groupName, String counterName);
23. Examples Of Mappers
Implement highly specialized Mappers and reuse/chain them
when possible
IdentityMapper
InverseMapper
RegexMapper
TokenCounterMapper
24. TokenCounterMapper
public class TokenCounterMapper extends Mapper<Object, Text, Text, IntWritable> {
  private final static IntWritable one = new IntWritable(1);
  private Text word = new Text();

  @Override
  public void map(Object key, Text value, Context context)
      throws IOException, InterruptedException {
    StringTokenizer itr = new StringTokenizer(value.toString());
    while (itr.hasMoreTokens()) {
      word.set(itr.nextToken());
      context.write(word, one);
    }
  }
}
25. General Advice
Reuse Writable instances instead of creating a new one each time
The Apache Commons StringUtils class seems to be the most efficient for String tokenization
26. Chain Of Mappers
Use multiple Mapper classes within a single Map task
The output of the first Mapper becomes the input of the
second, and so on until the last Mapper
The output of the last Mapper will be written to the task's
output
Encourages implementation of reusable and highly
specialized Mappers
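A sketch of such a chain using the old-API ChainMapper (the mapper class names here are hypothetical; the call shape mirrors the ChainReducer example later in this deck):

```java
// Sketch: tokenize lines, then lowercase the tokens, within one Map task
ChainMapper.addMapper(conf, TokenizeMapper.class,
    LongWritable.class, Text.class, Text.class, Text.class, true, null);
ChainMapper.addMapper(conf, LowercaseMapper.class,
    Text.class, Text.class, Text.class, Text.class, false, null);
```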
28. Partitioner
Specifies which Reducer a given <key, value> pair is sent to
Desired: an even distribution of the intermediate data
Skewed data may overload a single reducer and make the whole job run longer
public abstract class Partitioner<KEY, VALUE> {
  public abstract int getPartition(KEY key, VALUE value, int numPartitions);
}
29. HashPartitioner
The default choice for general-purpose use cases
public int getPartition(K key, V value, int numReduceTasks) {
  return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
}
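The `& Integer.MAX_VALUE` masks the sign bit, so keys with a negative hashCode still map to a valid partition; a plain-Java sketch of the same arithmetic:

```java
// Plain-Java sketch of HashPartitioner's arithmetic: clearing the sign bit
// keeps the partition index in [0, numReduceTasks) even for negative hash codes.
public class HashPartitionDemo {
    static int partition(Object key, int numReduceTasks) {
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }

    public static void main(String[] args) {
        String key = "polygenelubricants"; // hashes to Integer.MIN_VALUE
        System.out.println(key.hashCode());
        System.out.println(partition(key, 10));
    }
}
```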
32. TotalOrderPartitioner
Three samplers
InputSampler.RandomSampler<K,V>
Sample from random points in the input
InputSampler.IntervalSampler<K,V>
Sample from s splits at regular intervals
InputSampler.SplitSampler<K,V>
Samples the first n records from s splits
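A sketch of wiring a sampler to TotalOrderPartitioner (new-API classes assumed; the sampling parameters are illustrative):

```java
// Sketch: sample ~10% of keys (up to 10000, from at most 10 splits),
// write the partition file, and use it for totally ordered output
InputSampler.Sampler<Text, Text> sampler =
    new InputSampler.RandomSampler<Text, Text>(0.1, 10000, 10);
InputSampler.writePartitionFile(job, sampler);
job.setPartitionerClass(TotalOrderPartitioner.class);
```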
34. Reducer Run Method
public void run(Context context) throws … {
  setup(context);
  while (context.nextKey()) {
    reduce(context.getCurrentKey(), context.getValues(), context);
  }
  cleanup(context);
}
35. Chain Of Mappers After A Reducer
The ChainReducer class allows chaining multiple Mapper classes after a Reducer within the Reduce task
Combined with ChainMapper, one can get [MAP+ / REDUCE MAP*]
ChainReducer.setReducer(conf, XReduce.class, LongWritable.class, Text.class,
Text.class, Text.class, true, reduceConf);
ChainReducer.addMapper(conf, CMap.class, Text.class, Text.class,
LongWritable.class, Text.class, false, null);
ChainReducer.addMapper(conf, DMap.class, LongWritable.class, Text.class,
LongWritable.class, LongWritable.class, true, null);
39. Job Class Methods
public void setInputFormatClass(..);
public void setOutputFormatClass(..);
public void setMapperClass(..);
public void setCombinerClass(..);
public void setReducerClass(..);
public void setPartitionerClass(..);
public void setMapOutputKeyClass(..);
public void setMapOutputValueClass(..);
public void setOutputKeyClass(..);
public void setOutputValueClass(..);
public void setSortComparatorClass(..);
public void setGroupingComparatorClass(..);
public void setNumReduceTasks(int tasks);
public void setJobName(String name);
public float mapProgress();
public float reduceProgress();
public boolean isComplete();
public boolean isSuccessful();
public void killJob();
public void submit();
public boolean waitForCompletion(..);
40. ToolRunner
Supports parsing of generic options, allowing the user to specify configuration options on the command line
hadoop jar examples.jar SongCount
-D mapreduce.job.reduces=10
-D artist.gender=FEMALE
-files dictionary.dat
-libjars math.jar,spotify.jar
songs counts
41. Side Data Distribution
public class MyMapper<K, V> extends Mapper<K, V, V, K> {
  String gender = null;
  File dictionary = null;

  protected void setup(Context context) throws … {
    Configuration conf = context.getConfiguration();
    gender = conf.get("artist.gender", "MALE");
    dictionary = new File("dictionary.dat");
  }
}
42. WordCount With ToolRunner
public class WordCount extends Configured implements Tool {
  public int run(String[] args) throws Exception {
    if (args.length != 2) {
      System.out.printf("Usage: %s [options] <input> <output>%n", getClass().getSimpleName());
      return -1;
    }
    Job job = new Job(getConf());
    FileInputFormat.setInputPaths(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    ...
    return job.waitForCompletion(true) ? 0 : 1;
  }

  public static void main(String[] allArgs) throws Exception {
    int exitCode = ToolRunner.run(new Configuration(), new WordCount(), allArgs);
    System.exit(exitCode);
  }
}
43. MRUnit
Built on top of JUnit
Provides mock InputSplit, Context and other classes
Can test
The Mapper class
The Reducer class
The full MapReduce job
The pipeline of MapReduce jobs
44. MRUnit Example
public class IdentityMapTest extends TestCase {
  private MapDriver<Text, Text, Text, Text> driver;

  @Before
  public void setUp() {
    driver = new MapDriver<Text, Text, Text, Text>(new MyMapper<Text, Text, Text, Text>());
  }

  @Test
  public void testMyMapper() {
    driver
        .withInput(new Text("foo"), new Text("bar"))
        .withOutput(new Text("oof"), new Text("rab"))
        .runTest();
  }
}
45. Example: Secondary Sort
The reduce(key, Iterator<value>) method gets an iterator over values
These values are not sorted for a given key
Sometimes we want to get them sorted
Useful to find minimum or maximum value quickly
46. Secondary Sort Is Tricky
A couple of custom classes are needed
WritableComparable
Partitioner
SortComparator (optional, but recommended)
GroupingComparator
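The heart of these classes is the composite key's ordering: compare the natural key first, then the secondary field. A Hadoop-free sketch of that logic (the real key, called TitleWithTs on the next slides, would additionally implement Writable's write/readFields):

```java
// Plain-Java sketch of a composite key: natural key (title) first,
// then the secondary field (ts) to break ties.
public class TitleWithTs implements Comparable<TitleWithTs> {
    final String title;
    final long ts;

    public TitleWithTs(String title, long ts) {
        this.title = title;
        this.ts = ts;
    }

    @Override
    public int compareTo(TitleWithTs o) {
        int cmp = title.compareTo(o.title);
        return cmp != 0 ? cmp : Long.compare(ts, o.ts);
    }
}
```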
48. Custom Partitioner
HashPartitioner uses a hash of the whole key
The same titles may go to different reducers (because they are combined with a timestamp in the key)
Use a custom partitioner that partitions only on the first part of the key
int getPartition(TitleWithTs key, LongWritable value, int num) {
  return (key.title.hashCode() & Integer.MAX_VALUE) % num;
}
49. Ordering Of Keys
Keys need to be ordered before being passed to the reducer
Order by the natural key and, for the same natural key, by the value portion of the key
Implement sorting in WritableComparable or use a Comparator class
job.setSortComparatorClass(SongWithTsComparator.class);
50. Data Passed To The Reducer
By default, each unique key forces a separate reduce() invocation
(Disturbia#1, 1) → reduce method is invoked
(Disturbia#4, 4) → reduce method is invoked
(Disturbia#7, 7) → reduce method is invoked
(Fast car#2, 2) → reduce method is invoked
(Fast car#2, 2)
(Fast car#6, 6) → reduce method is invoked
(SOS#4, 4) → reduce method is invoked
51. Data Passed To The Reducer
The class set via setGroupingComparatorClass determines which keys and values are passed in a single call to the reduce method
Just look at the natural key when grouping
(Disturbia#1, 1) → reduce method is invoked
(Disturbia#4, 4)
(Disturbia#7, 7)
(Fast car#2, 2) → reduce method is invoked
(Fast car#2, 2)
(Fast car#6, 6)
(SOS#4, 4) → reduce method is invoked
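The effect can be sketched in plain Java: scanning the sorted composite keys and comparing only the natural key yields one reduce() call per title, three for the data above:

```java
// Plain-Java sketch: counting reduce() invocations when grouping
// compares only the natural key (the title before '#').
public class GroupingDemo {
    static int countReduceCalls(String[] sortedCompositeKeys) {
        int calls = 0;
        String prevTitle = null;
        for (String key : sortedCompositeKeys) {
            String title = key.substring(0, key.indexOf('#'));
            if (!title.equals(prevTitle)) {
                calls++; // a new natural key starts a new reduce() call
            }
            prevTitle = title;
        }
        return calls;
    }

    public static void main(String[] args) {
        String[] keys = { "Disturbia#1", "Disturbia#4", "Disturbia#7",
                          "Fast car#2", "Fast car#2", "Fast car#6", "SOS#4" };
        System.out.println(countReduceCalls(keys)); // 3
    }
}
```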
53. Question – A Possible Answer
Implement TotalSort, but
Each Reducer produces an additional file containing a pair <minimum_value, number_of_values>
After the job ends, a single-threaded application
Reads these files to build the index
Calculates which value in which file is the median
Finds this value in this file