- 'WordCountJob' is the "Hello World" program of Hadoop. Through this job we will walk through the MapReduce flow chart.
- How we run our program (a minimal driver sketch follows):
"$ hadoop jar test.jar DriverCode file.txt TestOutput"
- Now consider the previous file, split into 4 input splits:
INPUT SPLITS
---> 1st input split --> RecordReader(Interface) --> Mapper -->|
---> 2nd input split --> RecordReader(Interface) --> Mapper -->|
---> 3rd input split --> RecordReader(Interface) --> Mapper -->|--> REDUCER (Identity REDUCER)
---> 4th input split --> RecordReader(Interface) --> Mapper -->|
- We do not need to write any extra code for the RecordReader(Interface); the Hadoop framework takes care of it.
- How does the RecordReader(Interface) read these input splits or records (on what basis does it convert each record into [key, value])?
Answer: There are 4 input formats for this (a driver snippet for choosing a format is sketched just below this list):
1) TextInputFormat
2) KeyValueTextInputFormat
3) SequenceFileInputFormat
4) SequenceFileAsTextInputFormat
By default: TextInputFormat
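- As a sketch (assuming the same org.apache.hadoop.mapreduce API used in the driver above), a non-default format would be chosen in the driver like this; if nothing is set, TextInputFormat is used:

    // Illustrative: switch the job from the default TextInputFormat to KeyValueTextInputFormat.
    job.setInputFormatClass(org.apache.hadoop.mapreduce.lib.input.KeyValueTextInputFormat.class);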
Now, if the format is TextInputFormat, the RecordReader(Interface) converts each record into [key, value] ---> (byteoffset, entire line).
byteoffset: the address (position) of the line within the file. entire line: the line that the RecordReader read.
For example:
(0, hi how are you)
(16, how is your job)
- For as many lines as the file has, that many (byteoffset, entire line) pairs are produced, and the mapper runs that many times.
- You write only one piece of mapper code.
- The mapper code produces the pairs given below (a mapper sketch follows):
(hi, 1)
(how, 1)
(are, 1)
(you, 1)
hi/how/... : Text
1 : IntWritable
so each intermediate pair is of type (Text, IntWritable).
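- A minimal sketch of such a mapper (assuming the org.apache.hadoop.mapreduce API; the class name WordMapper and the whitespace tokenization are illustrative):

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    // Illustrative mapper: input key = byteoffset (LongWritable), input value = entire line (Text),
    // output = (word, 1) as (Text, IntWritable).
    public class WordMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable byteOffset, Text line, Context context)
                throws IOException, InterruptedException {
            for (String token : line.toString().split("\\s+")) {  // split the line into words
                if (!token.isEmpty()) {
                    word.set(token);
                    context.write(word, ONE);  // emit (word, 1)
                }
            }
        }
    }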
- The data produced by the mapper code is intermediate data, and it is further sent to the REDUCER for processing.
- Keys should not be duplicated, but values can be duplicated.
- The intermediate data now passes through two further phases: Shuffling & Sorting.
- SHUFFLING PHASE: combines all the values associated with a single identical key, e.g. (how, [1,1,1,1,1]), (is, [1,1,1,1,1,1]), etc. (a reducer sketch that consumes these grouped values follows below).
- SORTING PHASE: done automatically by the framework; the keys are compared and arranged so that each key appears only once, with its list of values.
- This gives us a parallel system, which is the main objective of Hadoop.
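- A minimal sketch of a reducer that sums the grouped values produced by shuffling (assuming the org.apache.hadoop.mapreduce API; the class name WordReducer is illustrative):

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Reducer;

    // Illustrative reducer: input = (word, [1,1,...]), output = (word, total count).
    public class WordReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text word, Iterable<IntWritable> counts, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable count : counts) {  // e.g. (how, [1,1]) -> 2
                sum += count.get();
            }
            context.write(word, new IntWritable(sum));
        }
    }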
- In the Java collections framework we use wrapper classes instead of primitive types. In addition to these, Hadoop introduces box classes:
  Wrapper class    Primitive type    Box class
  1) Integer       int               IntWritable
  2) Long          long              LongWritable
  3) Float         float             FloatWritable
  4) Double        double            DoubleWritable
  5) String        String            Text
  6) Character     char              -do- (Text)
- To convert int --> IntWritable:
  new IntWritable(int);
  and vice versa:
  get();
- To convert String --> Text: new Text(string); and Text --> String: toString(); (a short sketch follows)
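- An illustrative snippet (variable names are made up; assumes org.apache.hadoop.io.IntWritable and org.apache.hadoop.io.Text are imported):

    IntWritable boxedInt = new IntWritable(5);    // int  -> IntWritable
    int plainInt = boxedInt.get();                // IntWritable -> int
    Text boxedText = new Text("hi");              // String -> Text
    String plainString = boxedText.toString();    // Text -> String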
- FINALLY THE REDUCER GIVES ITS OUTPUT, AND THAT OUTPUT IS HANDED TO THE RECORDWRITER.
  The RecordWriter knows how to write key,value pairs.
- The RecordWriter writes the output (O/P) into a file named part-00000.
- Now there is an output directory named TestOutput. It contains one directory and two files (a sample of its contents follows):
  the _logs directory, the _SUCCESS marker file, and the output file part-00000.
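- Assuming the input file contained only the two example lines above ("hi how are you" / "how is your job"), and the default TextOutputFormat (key and value separated by a tab, keys sorted), part-00000 would look roughly like:

    are	1
    hi	1
    how	2
    is	1
    job	1
    you	1
    your	1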