1. Download the software:
wget http://apache.fayea.com/pig/pig-0.15.0/pig-0.15.0.tar.gz
2. Extract and install:
tar -zxvf pig-0.15.0.tar.gz
mv pig-0.15.0 /usr/local/
ln -s /usr/local/pig-0.15.0 /usr/local/pig
3. Configure the environment variables (for example in ~/.bash_profile):
export PATH=$HOME/bin:/usr/local/hadoop/bin:/usr/local/hadoop/sbin:/usr/local/pig/bin:$PATH
export PIG_CLASSPATH=/usr/local/hadoop/etc/hadoop
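After reloading the profile, you can verify that Pig is on the PATH; the version line below is what a 0.15.0 build should report:
[hadoop@host61 ~]$ source ~/.bash_profile
[hadoop@host61 ~]$ pig -version
Apache Pig version 0.15.0 (r1682971) compiled Jun 01 2015, 11:44:35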
4. Enter the grunt shell:
Start Pig in local mode: all files and all execution stay on the local machine; this mode is generally used for testing programs.
[hadoop@host61 ~]$ pig -x local
15/10/03 01:14:09 INFO pig.ExecTypeProvider: Trying ExecType : LOCAL
15/10/03 01:14:09 INFO pig.ExecTypeProvider: Picked LOCAL as the ExecType
2015-10-03 01:14:09,756 [main] INFO org.apache.pig.Main - Apache Pig version 0.15.0 (r1682971) compiled Jun 01 2015, 11:44:35
2015-10-03 01:14:09,758 [main] INFO org.apache.pig.Main - Logging error messages to: /home/hadoop/pig_1443860049744.log
2015-10-03 01:14:10,133 [main] INFO org.apache.pig.impl.util.Utils - Default bootup file /home/hadoop/.pigbootup not found
2015-10-03 01:14:12,648 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
2015-10-03 01:14:12,656 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
2015-10-03 01:14:12,685 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file system at: file:///
2015-10-03 01:14:13,573 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - io.bytes.per.checksum is deprecated. Instead, use dfs.bytes-per-checksum
grunt>
Start Pig in MapReduce mode (the mode used in real work):
[hadoop@host63 ~]$ pig
15/10/03 02:11:54 INFO pig.ExecTypeProvider: Trying ExecType : LOCAL
15/10/03 02:11:54 INFO pig.ExecTypeProvider: Trying ExecType : MAPREDUCE
15/10/03 02:11:54 INFO pig.ExecTypeProvider: Picked MAPREDUCE as the ExecType
2015-10-03 02:11:55,086 [main] INFO org.apache.pig.Main - Apache Pig version 0.15.0 (r1682971) compiled Jun 01 2015, 11:44:35
2015-10-03 02:11:55,087 [main] INFO org.apache.pig.Main - Logging error messages to: /home/hadoop/pig_1443863515062.log
2015-10-03 02:11:55,271 [main] INFO org.apache.pig.impl.util.Utils - Default bootup file /home/hadoop/.pigbootup not found
2015-10-03 02:11:59,735 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
2015-10-03 02:11:59,740 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
2015-10-03 02:11:59,742 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file system at: hdfs://host61:9000/
2015-10-03 02:12:06,256 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
2015-10-03 02:12:06,257 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to map-reduce job tracker at: host61:9001
2015-10-03 02:12:06,265 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
grunt>
5. Pig can be run in three ways (a script-mode sketch follows this list):
1. Script
2. Grunt (interactive shell)
3. Embedded (in a host language such as Java)
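A minimal script-mode sketch (the script name and its contents are illustrative; it reuses the load/filter pattern from section 9 below):
[hadoop@host63 ~]$ cat > filter_big.pig <<'EOF'
-- load a '#'-delimited file, keep the large entries, save the result
records = LOAD '/tmp/mytest.txt' USING PigStorage('#') AS (filename:chararray, size:int);
big = FILTER records BY size > 4096;
STORE big INTO '/tmp/mytest_big';
EOF
[hadoop@host63 ~]$ pig -x local filter_big.pig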
6. Log in to Pig and exercise the common commands:
[hadoop@host63 ~]$ pig
15/10/03 06:01:01 INFO pig.ExecTypeProvider: Trying ExecType : LOCAL
15/10/03 06:01:01 INFO pig.ExecTypeProvider: Trying ExecType : MAPREDUCE
15/10/03 06:01:01 INFO pig.ExecTypeProvider: Picked MAPREDUCE as the ExecType
2015-10-03 06:01:01,412 [main] INFO org.apache.pig.Main - Apache Pig version 0.15.0 (r1682971) compiled Jun 01 2015, 11:44:35
2015-10-03 06:01:01,413 [main] INFO org.apache.pig.Main - Logging error messages to: /home/hadoop/pig_1443877261408.log
2015-10-03 06:01:01,502 [main] INFO org.apache.pig.impl.util.Utils - Default bootup file /home/hadoop/.pigbootup not found
2015-10-03 06:01:03,657 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
2015-10-03 06:01:03,657 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
2015-10-03 06:01:03,662 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file system at: hdfs://host61:9000/
2015-10-03 06:01:05,968 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
2015-10-03 06:01:05,968 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to map-reduce job tracker at: host61:9001
2015-10-03 06:01:05,979 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
grunt> help
Commands:
<pig latin statement>; - See the PigLatin manual for details: http://hadoop.apache.org/pig
File system commands:
fs <fs arguments> - Equivalent to Hadoop dfs command: http://hadoop.apache.org/common/docs/current/hdfs_shell.html
Diagnostic commands:
describe <alias>[::<alias] - Show the schema for the alias. Inner aliases can be described as A::B.
explain [-script <pigscript>] [-out <path>] [-brief] [-dot|-xml] [-param <param_name>=<param_value>]
[-param_file <file_name>] [<alias>] - Show the execution plan to compute the alias or for entire script.
-script - Explain the entire script.
-out - Store the output into directory rather than print to stdout.
-brief - Don't expand nested plans (presenting a smaller graph for overview).
-dot - Generate the output in .dot format. Default is text format.
-xml - Generate the output in .xml format. Default is text format.
-param <param_name - See parameter substitution for details.
-param_file <file_name> - See parameter substitution for details.
alias - Alias to explain.
dump <alias> - Compute the alias and writes the results to stdout.
Utility Commands:
exec [-param <param_name>=param_value] [-param_file <file_name>] <script> -
Execute the script with access to grunt environment including aliases.
-param <param_name - See parameter substitution for details.
-param_file <file_name> - See parameter substitution for details.
script - Script to be executed.
run [-param <param_name>=param_value] [-param_file <file_name>] <script> -
Execute the script with access to grunt environment.
-param <param_name - See parameter substitution for details.
-param_file <file_name> - See parameter substitution for details.
script - Script to be executed.
sh <shell command> - Invoke a shell command.
kill <job_id> - Kill the hadoop job specified by the hadoop job id.
set <key> <value> - Provide execution parameters to Pig. Keys and values are case sensitive.
The following keys are supported:
default_parallel - Script-level reduce parallelism. Basic input size heuristics used by default.
debug - Set debug on or off. Default is off.
job.name - Single-quoted name for jobs. Default is PigLatin:<script name>
job.priority - Priority for jobs. Values: very_low, low, normal, high, very_high. Default is normal
stream.skippath - String that contains the path. This is used by streaming.
any hadoop property.
help - Display this message.
history [-n] - Display the list statements in cache.
-n Hide line numbers.
quit - Quit the grunt shell.
grunt> help sh
(grunt prints the same help text shown above once more, and then reports a parse error, since help does not accept an argument:)
2015-10-03 06:02:22,264 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1000: Error during parsing. Encountered " <EOL> "\n "" at line 2, column 8.
Was expecting one of:
"cat" ...
"clear" ...
"cd" ...
"cp" ...
"copyFromLocal" ...
"copyToLocal" ...
"dump" ...
"\\d" ...
"describe" ...
"\\de" ...
"aliases" ...
"explain" ...
"\\e" ...
"help" ...
"history" ...
"kill" ...
"ls" ...
"mv" ...
"mkdir" ...
"pwd" ...
"quit" ...
"\\q" ...
"register" ...
"using" ...
"as" ...
"rm" ...
"set" ...
"illustrate" ...
"\\i" ...
"run" ...
"exec" ...
"scriptDone" ...
<IDENTIFIER> ...
<PATH> ...
<QUOTEDSTRING> ...
Details at logfile: /home/hadoop/pig_1443877261408.log
List the current directory:
grunt> ls
hdfs://host61:9000/user/hadoop/.Trash <dir>
List the root directory:
grunt> ls /
hdfs://host61:9000/in <dir>
hdfs://host61:9000/out <dir>
hdfs://host61:9000/user <dir>
Change directory:
grunt> cd /
Show the current directory:
grunt> ls
hdfs://host61:9000/in <dir>
hdfs://host61:9000/out <dir>
hdfs://host61:9000/user <dir>
grunt> cd /in
grunt> ls
hdfs://host61:9000/in/jdk-8u60-linux-x64.tar.gz<r 3> 181238643
hdfs://host61:9000/in/mytest1.txt<r 3> 23
hdfs://host61:9000/in/mytest2.txt<r 3> 24
hdfs://host61:9000/in/mytest3.txt<r 3> 4
View a file's contents:
grunt> cat mytest1.txt
this is the first file
Copy a file from HDFS to the local operating system:
grunt> copyToLocal /in/mytest5.txt /home/hadoop/mytest.txt
[hadoop@host63 ~]$ ls -l mytest.txt
-rw-r--r--. 1 hadoop hadoop 102 Oct 3 06:23 mytest.txt
Prefixing an operating-system command with sh lets you run it from inside grunt:
grunt> sh ls -l /home/hadoop/mytest.txt
-rw-r--r--. 1 hadoop hadoop 102 Oct 3 06:23 /home/hadoop/mytest.txt
7. Pig's data model:
bag: a table
tuple: a row (a record)
field: a column (an attribute)
Pig does not require that different tuples in the same bag have the same number, or the same types, of fields; see the sketch below.
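A minimal sketch of this flexibility (the file /tmp/uneven.txt and its rows are hypothetical): when no schema is declared, tuples of different widths can coexist in the same bag:
grunt> a = LOAD '/tmp/uneven.txt' USING PigStorage(',');
grunt> DUMP a;
(alice,20)
(bob,30,beijing)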
8. Common Pig Latin statements (a short combined sketch follows; section 9 walks through a full session):
LOAD: specifies how data is loaded;
FOREACH: scans row by row and applies some processing;
FILTER: filters rows;
DUMP: prints the result to the screen;
STORE: saves the result to a file;
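A minimal sketch chaining these statements together (the path and schema are illustrative):
grunt> raw = LOAD '/tmp/demo.txt' USING PigStorage('#') AS (name:chararray, size:int);
grunt> big = FILTER raw BY size > 1000;
grunt> names = FOREACH big GENERATE name;
grunt> DUMP names;
grunt> STORE names INTO '/tmp/demo_out';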
9. A data-processing example:
Generate a test file:
[hadoop@host63 tmp]$ ls -l / |awk '{if(NR != 1)print $NF"#"$5}' >/tmp/mytest.txt
[hadoop@host63 tmp]$ cat /tmp/mytest.txt
bin#4096
boot#1024
dev#3680
etc#12288
home#4096
lib#4096
lib64#12288
lost+found#16384
media#4096
mnt#4096
opt#4096
proc#0
root#4096
sbin#12288
selinux#0
srv#4096
sys#0
tmp#4096
usr#4096
var#4096
Load the file (in MapReduce mode this path refers to HDFS, so copy the file into HDFS first, e.g. with copyFromLocal):
grunt> records = LOAD '/tmp/mytest.txt' USING PigStorage('#') AS (filename:chararray,size:int);
2015-10-03 07:35:48,479 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
2015-10-03 07:35:48,480 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
2015-10-03 07:35:48,497 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - io.bytes.per.checksum is deprecated. Instead, use dfs.bytes-per-checksum
2015-10-03 07:35:48,716 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
2015-10-03 07:35:48,723 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - io.bytes.per.checksum is deprecated. Instead, use dfs.bytes-per-checksum
2015-10-03 07:35:48,723 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
Display the data:
grunt> DUMP records;
(bin,4096)
(boot,1024)
(dev,3680)
(etc,12288)
(home,4096)
(lib,4096)
(lib64,12288)
(lost+found,16384)
(media,4096)
(mnt,4096)
(opt,4096)
(proc,0)
(root,4096)
(sbin,12288)
(selinux,0)
(srv,4096)
(sys,0)
(tmp,4096)
(usr,4096)
(var,4096)
Show the structure of records:
grunt> DESCRIBE records;
records: {filename: chararray,size: int}
Filter records:
grunt> filter_records = FILTER records BY size > 4096;
grunt> DUMP filter_records;
(etc,12288)
(lib64,12288)
(lost+found,16384)
(sbin,12288)
grunt> DESCRIBE filter_records;
filter_records: {filename: chararray,size: int}
Group by size:
grunt> group_records = GROUP records BY size;
grunt> DUMP group_records;
(0,{(sys,0),(proc,0),(selinux,0)})
(1024,{(boot,1024)})
(3680,{(dev,3680)})
(4096,{(var,4096),(usr,4096),(tmp,4096),(srv,4096),(root,4096),(opt,4096),(mnt,4096),(media,4096),(lib,4096),(home,4096),(bin,4096)})
(12288,{(etc,12288),(lib64,12288),(sbin,12288)})
(16384,{(lost+found,16384)})
grunt> DESCRIBE group_records;
group_records: {group: int,records: {(filename: chararray,size: int)}}
Flatten the groups back into plain rows:
grunt> format_records = FOREACH group_records GENERATE group, FLATTEN(records);
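Dumping format_records should yield one (group, filename, size) row per original record, for example:
grunt> DUMP format_records;
(0,sys,0)
(0,proc,0)
(0,selinux,0)
(1024,boot,1024)
...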
Deduplicate:
grunt> dis_records = DISTINCT records;
Sort:
grunt> ord_records = ORDER dis_records BY size DESC;
Take the first 3 rows:
grunt> top_records = LIMIT ord_records 3;
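Given the data above, top_records should hold the largest rows; the order among the tied 12288-byte entries is not guaranteed, so a DUMP would look something like:
grunt> DUMP top_records;
(lost+found,16384)
(sbin,12288)
(lib64,12288)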
Compute the maximum size in each group:
grunt> max_records = FOREACH group_records GENERATE group, MAX(records.size);
grunt> DUMP max_records;
(0,0)
(1024,1024)
(3680,3680)
(4096,4096)
(12288,12288)
(16384,16384)
View the execution plan (EXPLAIN prints the logical, physical, and MapReduce plans):
grunt> EXPLAIN max_records;
Save the result sets:
grunt> STORE group_records INTO '/tmp/mytest_group';
grunt> STORE filter_records INTO '/tmp/mytest_filter';
grunt> STORE max_records INTO '/tmp/mytest_max';
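STORE writes a directory of part files rather than a single file; assuming default settings, the output can be inspected from grunt (the exact part-file name, e.g. part-r-00000, may differ):
grunt> fs -ls /tmp/mytest_max
grunt> fs -cat /tmp/mytest_max/part-r-00000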
10. UDFs
Pig supports writing UDFs in Java, Python, and JavaScript; a registration sketch follows.
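A minimal sketch of registering and calling a Java UDF (the jar path and the class com.example.UPPER are hypothetical; REGISTER and the fully qualified call are standard Pig Latin):
grunt> REGISTER /home/hadoop/myudfs.jar;
grunt> upper_names = FOREACH records GENERATE com.example.UPPER(filename);
grunt> DUMP upper_names;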