This article describes how we compressed data stored on HDFS; it walks through the details and should serve as a useful reference.
Our company runs a Hadoop cluster of just under 30 nodes with 120 TB of HDFS capacity in total. Recently the monitoring kept firing disk-space alerts (the threshold is 5% free). We had been too busy with business work to tidy up the cluster; when I finally did, I found about 34 TB of actual data, which with 3-way replication occupies roughly 103 TB of HDFS. During cleansing the data had been loaded as plain text with no compression at all, so there was clearly a lot of room for optimization. One log of the apps installed on users' phones alone takes about 5 TB, so I started with that.
Hive offers three file storage formats: TEXTFILE, SEQUENCEFILE, and RCFILE. The first two are row-oriented; RCFile is a column-oriented format introduced by Hive. It follows a "partition horizontally first, then vertically" design: data is split into row groups, and each row group is stored column by column, so a query that does not touch certain columns can skip them entirely at the I/O level. So I chose RCFILE, compressed with Gzip.
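To make the column-pruning point concrete, here is a minimal sketch (the table and column names are made-up placeholders, not our real schema): with an RCFile table, a query that touches only one column has to read and decompress only that column's blocks in each row group.

-- Hypothetical table, for illustration only
CREATE TABLE app_install_demo (
  imei STRING,   -- device id
  pkg  STRING,   -- installed package name
  ver  STRING    -- package version
)
PARTITIONED BY (day INT)
STORED AS RCFILE;

-- Only the pkg column (plus partition pruning on day) is read;
-- imei and ver are skipped at the I/O level.
SELECT pkg, count(*) AS installs
FROM app_install_demo
WHERE day = 20140101
GROUP BY pkg;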
Along the way I also made a fairly silly mistake. A former colleague (who has since left) had looked into RCFile before, so I used show create table XX to view his table's DDL, which showed:
CREATE EXTERNAL TABLE XX( ...... )
PARTITIONED BY ( day int )
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY '\t'
  COLLECTION ITEMS TERMINATED BY ','
  LINES TERMINATED BY '\n'
STORED AS
  INPUTFORMAT 'org.apache.hadoop.hive.ql.io.RCFileInputFormat'
  OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.RCFileOutputFormat'
LOCATION '/user/hive/data/XX';
I copied it as-is, swapped in my own columns, created an app_install table in that RCFile format, and ran a SQL job to load the existing data into it:
set mapred.job.priority=VERY_HIGH;
set hive.merge.mapredfiles=true;
set hive.merge.smallfiles.avgsize=200000000;
set hive.exec.compress.output=true;
set mapred.output.compress=true;
set mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec;
set io.compression.codecs=org.apache.hadoop.io.compress.GzipCodec;
set mapred.job.name=app_install.$_DAY;
insert overwrite table app_install1 PARTITION (day=$_DAY)
select XXX from tb1 where day=$_DAY;
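As an aside, a quick way to sanity-check that the rewritten partition really comes out compressed is to compare directory sizes before and after from the Hive CLI; the path below is only a placeholder standing in for the table's actual LOCATION plus partition directory.

-- Rough size check from the Hive CLI (substitute the real partition path)
dfs -du /user/hive/data/app_install1/day=20140101;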
The job failed. The Hadoop task logs showed:
FATAL ExecReducer: java.lang.UnsupportedOperationException: Currently the writer can only accept BytesRefArrayWritable
    at org.apache.hadoop.hive.ql.io.RCFile$Writer.append(RCFile.java:880)
    at org.apache.hadoop.hive.ql.io.RCFileOutputFormat$2.write(RCFileOutputFormat.java:140)
    at org.apache.hadoop.hive.ql.exec.FileSinkOperator.processOp(FileSinkOperator.java:588)
    at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
    at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:762)
    at org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:84)
    at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
    at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:762)
    at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.createForwardJoinObject(CommonJoinOperator.java:389)
    at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.genObject(CommonJoinOperator.java:715)
    at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.genObject(CommonJoinOperator.java:697)
    at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.genObject(CommonJoinOperator.java:697)
    at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.checkAndGenObject(CommonJoinOperator.java:856)
    at org.apache.hadoop.hive.ql.exec.JoinOperator.endGroup(JoinOperator.java:265)
    at org.apache.hadoop.hive.ql.exec.ExecReducer.reduce(ExecReducer.java:198)
    at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:519)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:420)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
    at org.apache.hadoop.mapred.Child.main(Child.java:249)
Posts online said this was a Hive bug, and I spent a whole day assuming that was the case. Eventually I tried rewriting the CREATE TABLE statement the way people online suggested:
CREATE EXTERNAL TABLE XX( ...... )
PARTITIONED BY ( day int )
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY '\t'
  COLLECTION ITEMS TERMINATED BY ','
  LINES TERMINATED BY '\n'
STORED AS RCFILE
LOCATION '/user/hive/data/XX';
This time it ran fine. Then I ran show create table XX again, and the output had turned back into
STORED AS
  INPUTFORMAT 'org.apache.hadoop.hive.ql.io.RCFileInputFormat'
  OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.RCFileOutputFormat'
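In hindsight, the likely root cause is not the INPUTFORMAT/OUTPUTFORMAT pair itself but the SerDe: STORED AS RCFILE also sets the columnar SerDe, whereas the statement copied from show create table kept ROW FORMAT DELIMITED, i.e. the default LazySimpleSerDe, whose row output the RCFile writer cannot accept, which is exactly what the BytesRefArrayWritable error complains about. If that reading is right, a fully spelled-out equivalent of STORED AS RCFILE would look roughly like this (a sketch only, columns elided as before):

CREATE EXTERNAL TABLE XX( ...... )
PARTITIONED BY ( day int )
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.columnar.ColumnarSerDe'
STORED AS
  INPUTFORMAT 'org.apache.hadoop.hive.ql.io.RCFileInputFormat'
  OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.RCFileOutputFormat'
LOCATION '/user/hive/data/XX';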
Maddening: in the end the whole problem came down to the fact that the statement you write and what show create table prints back are not quite the same thing. A small issue, but it cost a fair amount of effort.