This article shows how to write a custom Hive UDF in Eclipse, step by step.
While doing log analysis I used Hive from the Hadoop stack, but some of the processing was beyond what Hive's built-in functions could handle, so I needed a UDF to extend them.
1 In Eclipse, create a new Java project named hiveudf, then create a new class with package com.afan and name UDFLower.
2 Add two jars to the project's build path: hadoop-core-1.1.2.jar (from Hadoop 1.1.2) and hive-exec-0.9.0.jar (from Hive 0.9.0).
3 Write the UDF class:

import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.io.Text;

// Note: no package declaration here, so the class is later registered
// under the bare name 'UDFLower' (see the CREATE TEMPORARY FUNCTION step).
public class UDFLower extends UDF {

    // Hive calls evaluate() once per row; returning null for null input
    // keeps NULL values as NULL in the query result.
    public Text evaluate(final Text s) {
        if (null == s) {
            return null;
        }
        return new Text(s.toString().toLowerCase());
    }
}
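Before deploying, the UDF's core logic can be sanity-checked outside Hive. Below is a minimal plain-Java sketch of the same null-safe lowercasing, using String in place of Hadoop's Text so it runs without the Hadoop/Hive jars; the class name NullSafeLower is made up for illustration and is not part of the original project.

```java
public class NullSafeLower {
    // Mirrors UDFLower.evaluate(): null in -> null out, otherwise lowercase.
    public static String lower(String s) {
        if (s == null) {
            return null;
        }
        return s.toLowerCase();
    }

    public static void main(String[] args) {
        System.out.println(lower("WHO")); // prints "who"
        System.out.println(lower(null));  // prints "null"
    }
}
```

If this behaves as expected, the only remaining differences inside Hive are the Text wrapping and the per-row invocation.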
4 Compile the project and export it as a jar named udf_hive.jar.
(The original article showed six screenshots of the Eclipse JAR export wizard here, one per step.)
5 Copy udf_hive.jar to a folder on the configured Linux machine, at the path /root/data/udf_hive.jar.
6 Open the Hive command line to test:
hive> add jar /root/data/udf_hive.jar;
Added udf_hive.jar to class path
Added resource: udf_hive.jar
Create the UDF function:
hive> create temporary function my_lower as 'UDFLower';
The string after AS is the class's fully qualified name: if UDFLower lived in a package, say cn.jiang, you would write 'cn.jiang.UDFLower'; since our class has no package, the bare class name 'UDFLower' is enough.
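Hive resolves that AS '...' string by loading the class reflectively, which is why it must be the fully qualified name whenever the class lives in a package. The small standalone demo below illustrates the rule with java.lang.String as a stand-in class (the class name FqcnLookupDemo is made up for this sketch).

```java
public class FqcnLookupDemo {
    public static void main(String[] args) throws Exception {
        // Fully qualified name: reflection finds the class.
        Class<?> c = Class.forName("java.lang.String");
        System.out.println(c.getSimpleName()); // prints "String"

        // Bare name of a packaged class: reflection cannot find it.
        try {
            Class.forName("String");
        } catch (ClassNotFoundException e) {
            System.out.println("bare name not found"); // prints "bare name not found"
        }
    }
}
```

The same lookup failure inside Hive would surface as an error when creating or first invoking the function.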
Create a test table:
hive> create table dual (name string);
Load the data file test.txt, whose contents are:
WHO
AM
I
HELLO
hive> load data local inpath '/root/data/test.txt' into table dual;
hive> select name from dual;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_201105150525_0003, Tracking URL = http://localhost:50030/jobdetails.jsp?jobid=job_201105150525_0003
Kill Command = /usr/local/hadoop/bin/../bin/hadoop job -Dmapred.job.tracker=localhost:9001 -kill job_201105150525_0003
2011-05-15 06:46:05,459 Stage-1 map = 0%, reduce = 0%
2011-05-15 06:46:10,905 Stage-1 map = 100%, reduce = 0%
2011-05-15 06:46:13,963 Stage-1 map = 100%, reduce = 100%
Ended Job = job_201105150525_0003
OK
WHO
AM
I
HELLO
Use the UDF:
hive> select my_lower(name) from dual;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_201105150525_0002, Tracking URL = http://localhost:50030/jobdetails.jsp?jobid=job_201105150525_0002
Kill Command = /usr/local/hadoop/bin/../bin/hadoop job -Dmapred.job.tracker=localhost:9001 -kill job_201105150525_0002
2011-05-15 06:43:26,100 Stage-1 map = 0%, reduce = 0%
2011-05-15 06:43:34,364 Stage-1 map = 100%, reduce = 0%
2011-05-15 06:43:37,484 Stage-1 map = 100%, reduce = 100%
Ended Job = job_201105150525_0002
OK
who
am
i
hello
The test passes.