
Hadoop with Cloudera VM (the Word Count Example)


OP: Lisrelchen, posted 2017-04-18 07:55:15


This post demonstrates a single-node Hadoop cluster using the Cloudera QuickStart Virtual Machine. Cloudera packages a Hadoop installation and Cloudera Manager in a QuickStart VM so that people can learn Hadoop without the hassle of installing it on different operating systems.

Downloads

Download VirtualBox, a virtualization software package, for your host machine's operating system: https://www.virtualbox.org/wiki/Downloads

Download the Cloudera QuickStart Virtual Machine (VM): http://www.cloudera.com/content/dev-center/en/home/developer-admin-resources/quickstart-vm.html

Import and start the VM

In VirtualBox Manager, click File -> Import Appliance, then enter the file path of the Cloudera VM in the prompt window and import it.

After importing, start the VM in VirtualBox Manager by right-clicking it and selecting "Start".
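If you prefer to drive VirtualBox from the host's command line instead of the GUI, the VBoxManage tool that ships with VirtualBox can do the same import and start. A minimal sketch, assuming the downloaded appliance file is named cloudera-quickstart-vm.ova (your filename and VM name will differ):

# on the HOST machine: import the appliance (filename is hypothetical)
VBoxManage import cloudera-quickstart-vm.ova
# list registered VMs to see the name the import produced
VBoxManage list vms
# start the VM by the name shown above (name here is hypothetical)
VBoxManage startvm "cloudera-quickstart-vm" --type gui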

Compile wordcount.jar

Now we are inside the VM. Open a terminal.

# print the working directory, to check where we currently are
[cloudera@localhost ~]$ pwd
/home/cloudera
# if not there, change to this directory:
[cloudera@localhost ~]$ cd /home/cloudera/
# note the machine's name is localhost, which is what we want;
# a different hostname here can cause problems later.

First, open the gedit text editor, copy and paste the Java program from https://www.cloudera.com/content/cloudera-content/cloudera-docs/HadoopTutorial/CDH4/Hadoop-Tutorial/ht_wordcount1_source.html into gedit, and save the file as /home/cloudera/WordCount.java.
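Before compiling, a quick sanity check helps: the file should be where we expect it and should declare the org.myorg package, since the main class passed to Hadoop later (org.myorg.WordCount) must match that package. A minimal check:

# confirm the source file is where we expect it
[cloudera@localhost ~]$ ls /home/cloudera/WordCount.java
/home/cloudera/WordCount.java
# the package declaration should read org.myorg, matching the class name used below
[cloudera@localhost ~]$ grep "^package" WordCount.java
package org.myorg;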

# export CLASSPATH
[cloudera@localhost ~]$ export CLASSPATH=/usr/lib/hadoop/client-0.20/\*:/usr/lib/hadoop/\*
# display the value of CLASSPATH
[cloudera@localhost ~]$ echo $CLASSPATH
/usr/lib/hadoop/client-0.20/*:/usr/lib/hadoop/*
# make a directory to hold the compiled classes
[cloudera@localhost ~]$ mkdir wordcount_classes
# compile the class, writing the output to the wordcount_classes directory
[cloudera@localhost ~]$ javac -d wordcount_classes/ WordCount.java
# build the .jar file, which will be used to run the word count job in Hadoop
[cloudera@localhost ~]$ jar -cvf wordcount.jar -C wordcount_classes/ .
added manifest
adding: org/(in = 0) (out= 0)(stored 0%)
adding: org/myorg/(in = 0) (out= 0)(stored 0%)
adding: org/myorg/WordCount.class(in = 1546) (out= 749)(deflated 51%)
adding: org/myorg/WordCount$Map.class(in = 1938) (out= 798)(deflated 58%)
adding: org/myorg/WordCount$Reduce.class(in = 1611) (out= 649)(deflated 59%)
# list files in the current directory; wordcount.jar should now be listed
[cloudera@localhost ~]$ ls
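Before handing the jar to Hadoop, you can also list its contents without extracting it; based on the jar output above, the three WordCount classes should sit under org/myorg/:

# list the jar's contents; expect WordCount, WordCount$Map and WordCount$Reduce under org/myorg/
[cloudera@localhost ~]$ jar -tf wordcount.jar
META-INF/
META-INF/MANIFEST.MF
org/
org/myorg/
org/myorg/WordCount.class
org/myorg/WordCount$Map.class
org/myorg/WordCount$Reduce.class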
Put some files on HDFS

This is a word-frequency count job, so we need some text files from which words will be counted. We create a few short text files in the current directory and then put them into the Hadoop Distributed File System (HDFS). The input files must be on HDFS for the Hadoop job to read them.

# create a text file with content "Hello World Bye World" and save it as file0
[cloudera@localhost ~]$ echo "Hello World Bye World" >file0
# create a text file with content "Hello Hadoop Bye Hadoop" and save it as file1
[cloudera@localhost ~]$ echo "Hello Hadoop Bye Hadoop" >file1
# make a new directory "wordcount" on HDFS under /user/cloudera/
[cloudera@localhost ~]$ hadoop fs -mkdir /user/cloudera/wordcount
# make an "input" subdirectory now that /user/cloudera/wordcount exists
[cloudera@localhost ~]$ hadoop fs -mkdir /user/cloudera/wordcount/input
# put file0 into the HDFS directory /user/cloudera/wordcount/input
[cloudera@localhost ~]$ hadoop fs -put file0 /user/cloudera/wordcount/input
# put file1 into the HDFS directory /user/cloudera/wordcount/input
[cloudera@localhost ~]$ hadoop fs -put file1 /user/cloudera/wordcount/input
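It is worth verifying that both files actually made it onto HDFS before launching the job:

# list the input directory on HDFS; file0 and file1 should both appear
[cloudera@localhost ~]$ hadoop fs -ls /user/cloudera/wordcount/input
# print a file's contents straight from HDFS
[cloudera@localhost ~]$ hadoop fs -cat /user/cloudera/wordcount/input/file0
Hello World Bye World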
Run the Hadoop job

In the terminal, run the Hadoop job. You need to supply the .jar file, the main class, the input folder path on HDFS, and the output folder path on HDFS.

Important: the output folder must not already exist on HDFS; if it does, the job will fail with an error. Hadoop creates the output folder for you during the run. Make sure the specified output folder does not exist on HDFS, and delete it if it does, as shown below.
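A sketch of checking for and clearing a leftover output folder (hadoop fs -rm -r deletes recursively; older CDH releases spell this hadoop fs -rmr):

# check whether the output folder already exists on HDFS (an error here means it does not, which is fine)
[cloudera@localhost ~]$ hadoop fs -ls /user/cloudera/wordcount/output
# if it exists, remove it recursively so the job can recreate it
[cloudera@localhost ~]$ hadoop fs -rm -r /user/cloudera/wordcount/output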

[cloudera@localhost ~]$ hadoop jar wordcount.jar org.myorg.WordCount /user/cloudera/wordcount/input /user/cloudera/wordcount/output
14/03/15 11:56:11 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
14/03/15 11:56:12 INFO mapred.FileInputFormat: Total input paths to process : 2
14/03/15 11:56:12 INFO mapred.JobClient: Running job: job_201403151136_0001
14/03/15 11:56:13 INFO mapred.JobClient:  map 0% reduce 0%
14/03/15 11:56:24 INFO mapred.JobClient:  map 100% reduce 0%
14/03/15 11:56:30 INFO mapred.JobClient:  map 100% reduce 100%
14/03/15 11:56:31 INFO mapred.JobClient: Job complete: job_201403151136_0001
14/03/15 11:56:31 INFO mapred.JobClient: Counters: 33
14/03/15 11:56:31 INFO mapred.JobClient:   File System Counters
14/03/15 11:56:31 INFO mapred.JobClient:     FILE: Number of bytes read=71
14/03/15 11:56:31 INFO mapred.JobClient:     FILE: Number of bytes written=481817
14/03/15 11:56:31 INFO mapred.JobClient:     FILE: Number of read operations=0
14/03/15 11:56:31 INFO mapred.JobClient:     FILE: Number of large read operations=0
14/03/15 11:56:31 INFO mapred.JobClient:     FILE: Number of write operations=0
14/03/15 11:56:31 INFO mapred.JobClient:     HDFS: Number of bytes read=290
14/03/15 11:56:31 INFO mapred.JobClient:     HDFS: Number of bytes written=31
14/03/15 11:56:31 INFO mapred.JobClient:     HDFS: Number of read operations=5
14/03/15 11:56:31 INFO mapred.JobClient:     HDFS: Number of large read operations=0
14/03/15 11:56:31 INFO mapred.JobClient:     HDFS: Number of write operations=2
14/03/15 11:56:31 INFO mapred.JobClient:   Job Counters
14/03/15 11:56:31 INFO mapred.JobClient:     Launched map tasks=2
14/03/15 11:56:31 INFO mapred.JobClient:     Launched reduce tasks=1
14/03/15 11:56:31 INFO mapred.JobClient:     Data-local map tasks=2
14/03/15 11:56:31 INFO mapred.JobClient:     Total time spent by all maps in occupied slots (ms)=14671
14/03/15 11:56:31 INFO mapred.JobClient:     Total time spent by all reduces in occupied slots (ms)=3756
14/03/15 11:56:31 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
14/03/15 11:56:31 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
14/03/15 11:56:31 INFO mapred.JobClient:   Map-Reduce Framework
14/03/15 11:56:31 INFO mapred.JobClient:     Map input records=2
14/03/15 11:56:31 INFO mapred.JobClient:     Map output records=8
14/03/15 11:56:31 INFO mapred.JobClient:     Map output bytes=78
14/03/15 11:56:31 INFO mapred.JobClient:     Input split bytes=244
14/03/15 11:56:31 INFO mapred.JobClient:     Combine input records=8
14/03/15 11:56:31 INFO mapred.JobClient:     Combine output records=6
14/03/15 11:56:31 INFO mapred.JobClient:     Reduce input groups=4
14/03/15 11:56:31 INFO mapred.JobClient:     Reduce shuffle bytes=97
14/03/15 11:56:31 INFO mapred.JobClient:     Reduce input records=6
14/03/15 11:56:31 INFO mapred.JobClient:     Reduce output records=4
14/03/15 11:56:31 INFO mapred.JobClient:     Spilled Records=12
14/03/15 11:56:31 INFO mapred.JobClient:     CPU time spent (ms)=1060
14/03/15 11:56:31 INFO mapred.JobClient:     Physical memory (bytes) snapshot=400343040
14/03/15 11:56:31 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=1997451264
14/03/15 11:56:31 INFO mapred.JobClient:     Total committed heap usage (bytes)=281878528
14/03/15 11:56:31 INFO mapred.JobClient:   org.apache.hadoop.mapreduce.lib.input.FileInputFormatCounter
14/03/15 11:56:31 INFO mapred.JobClient:     BYTES_READ=46

Examine the word count results, which are stored in a text file in the output folder on HDFS.

# list the files in the output folder (on HDFS)
[cloudera@localhost ~]$ hadoop fs -ls /user/cloudera/wordcount/output
Found 3 items
-rw-r--r--   3 cloudera cloudera          0 2014-03-15 11:56 /user/cloudera/wordcount/output/_SUCCESS
drwxr-xr-x   - cloudera cloudera          0 2014-03-15 11:56 /user/cloudera/wordcount/output/_logs
-rw-r--r--   3 cloudera cloudera         31 2014-03-15 11:56 /user/cloudera/wordcount/output/part-00000
# examine the word frequency file
[cloudera@localhost ~]$ hadoop fs -cat /user/cloudera/wordcount/output/part-00000
Bye     2
Hadoop  2
Hello   2
World   2
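If you would rather have the results as one local file (handy when a bigger job writes several part-XXXXX files), hadoop fs -getmerge concatenates everything in the output folder onto the local filesystem:

# merge the part files from the HDFS output folder into a single local file
[cloudera@localhost ~]$ hadoop fs -getmerge /user/cloudera/wordcount/output wordcount_results.txt
# view the merged results locally
[cloudera@localhost ~]$ cat wordcount_results.txt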

References
[1] https://www.cloudera.com/content/cloudera-content/cloudera-docs/HadoopTutorial/CDH4/Hadoop-Tutorial/ht_wordcount1.html
[2] http://bangforeheadonbrickwall.wordpress.com/2013/01/29/making-the-cloudera-hadoop-wordcount-tutorial-work/
[3] http://stackoverflow.com/questions/16556182/clouderas-cdh4-wordcount-hadoop-tutorial-issues



Reply #1
MouJack007, posted 2017-04-18 09:50:46
Thanks for sharing, OP!
