This post walks through what to do when an INSERT into a Hive external table backed by HBase fails, and how I eventually resolved it.
After successfully mapping a Hive external table onto HBase, queries against the HBase data worked fine. Inserting data into HBase through the table, however, failed with the following error:
Error: java.lang.RuntimeException: java.lang.NoSuchMethodError: org.apache.hadoop.hbase.client.Put.setDurability(Lorg/apache/hadoop/hbase/client/Durability;)V
        at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:168)
        at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.lang.NoSuchMethodError: org.apache.hadoop.hbase.client.Put.setDurability(Lorg/apache/hadoop/hbase/client/Durability;)V
        at org.apache.hadoop.hive.hbase.HiveHBaseTableOutputFormat$MyRecordWriter.write(HiveHBaseTableOutputFormat.java:142)
        at org.apache.hadoop.hive.hbase.HiveHBaseTableOutputFormat$MyRecordWriter.write(HiveHBaseTableOutputFormat.java:117)
        at org.apache.hadoop.hive.ql.io.HivePassThroughRecordWriter.write(HivePassThroughRecordWriter.java:40)
        at org.apache.hadoop.hive.ql.exec.FileSinkOperator.process(FileSinkOperator.java:743)
        at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:837)
        at org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:97)
        at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:837)
        at org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:115)
        at org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:169)
        at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:561)
        at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:159)
        ... 8 more
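The important part of this trace is the `NoSuchMethodError` descriptor: `(Lorg/apache/hadoop/hbase/client/Durability;)V` is JVM notation for "takes one `Durability` argument, returns void", i.e. the `hbase-client` jar on Hive's classpath is too old to contain `Put.setDurability(Durability)`. As a rough illustration (a hypothetical Python helper, not part of Hive or HBase), such an error line can be decomposed like this:

```python
import re

def parse_nosuchmethod(line):
    """Extract class, method, argument descriptor and return type
    from a java.lang.NoSuchMethodError message."""
    m = re.search(r"NoSuchMethodError: ([\w.$]+)\.(\w+)\((.*?)\)(\S+)", line)
    if not m:
        return None
    cls, method, args, ret = m.groups()
    return {"class": cls, "method": method, "args": args, "returns": ret}

err = ("java.lang.NoSuchMethodError: "
       "org.apache.hadoop.hbase.client.Put.setDurability"
       "(Lorg/apache/hadoop/hbase/client/Durability;)V")
info = parse_nosuchmethod(err)
```

Here `info["class"]` is the class whose loaded bytecode is missing the method, which points directly at a version mismatch in whichever jar provides it.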
Searching online, I found that MapR had already fixed this problem. The Hive 1.2.1-1601 Release Notes at http://doc.mapr.com/display/components/Hive+Release+Notes;jsessionid=73C03B3BB0D8547A19E6CCEF80010D30#HiveReleaseNotes-Hive1.2.1-1601ReleaseNotes describe commit fe18d11, whose symptoms match my error exactly. But that fix is a MapR patch to their own Hive distribution, and I am running the Apache release, so switching to MapR's Hive was not realistic. I tried applying the updated jars and configuration from their patch to my own Hive installation, but that only surfaced new problems. That approach was a dead end, so I needed another route.
Checking the official Apache Hive releases, two versions were available: apache-hive-1.2.1-bin.tar.gz and apache-hive-2.0.0-bin.tar.gz. Since I was on 1.2.1, upgrading to 2.0.0 was worth a try.
下載2.0.0版本并安裝,主要是修改了hive-site.xml文件(執(zhí)行cp hive-default.xml.template hive-site.xml)。同時(shí)在hive/lib目錄下引入hbase的jar包:
guava-14.0.1.jar
protobuf-java-2.5.0.jar
hbase-client-1.1.1.jar
hbase-common-1.1.1.jar
zookeeper-3.4.6.jar
hbase-server-1.1.1.jar
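The hive.aux.jars.path property in hive-site.xml is just this jar list joined into comma-separated file:// URIs. A small sketch (a hypothetical helper, assuming the jars live under /data/hive/lib as in my setup) of how that value is assembled:

```python
def aux_jars_value(lib_dir, jars):
    """Join jar names into the comma-separated file:// URI list
    expected by Hive's hive.aux.jars.path property."""
    return ",".join(f"file://{lib_dir}/{jar}" for jar in jars)

jars = [
    "guava-14.0.1.jar",
    "protobuf-java-2.5.0.jar",
    "hbase-client-1.1.1.jar",
    "hbase-common-1.1.1.jar",
    "zookeeper-3.4.6.jar",
    "hbase-server-1.1.1.jar",
]
value = aux_jars_value("/data/hive/lib", jars)
```

Generating the value this way avoids the easy mistake of a stray space or missing comma in the long one-line property.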
The relevant parts of hive-site.xml:
<property>
  <name>hive.exec.scratchdir</name>
  <value>/tmp/hive</value>
  <description>HDFS root scratch dir for Hive jobs which gets created with write all (733) permission. For each connecting user, an HDFS scratch dir: ${hive.exec.scratchdir}/<username> is created, with ${hive.scratch.dir.permission}.</description>
</property>
<property>
  <name>hive.exec.local.scratchdir</name>
  <value>/data/hive/logs</value>
  <description>Local scratch space for Hive jobs</description>
</property>
<property>
  <name>hive.downloaded.resources.dir</name>
  <value>/tmp/hive/temp0_resources</value>
  <description>Temporary local directory for added resources in the remote file system.</description>
</property>
...
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>password</value>
  <description>password to use against metastore database</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://localhost:3306/hive_db?createDatabaseIfNotExist=true</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>username</value>
  <description>Username to use against metastore database</description>
</property>
<property>
  <name>hive.session.id</name>
  <value>temp0</value>
  <description/>
</property>
<property>
  <name>hive.aux.jars.path</name>
  <value>file:///data/hive/lib/guava-14.0.1.jar,file:///data/hive/lib/protobuf-java-2.5.0.jar,file:///data/hive/lib/hbase-client-1.1.1.jar,file:///data/hive/lib/hbase-common-1.1.1.jar,file:///data/hive/lib/zookeeper-3.4.6.jar,file:///data/hive/lib/hbase-server-1.1.1.jar</value>
  <description>The location of the plugin jars that contain implementations of user defined functions and serdes.</description>
</property>
<property>
  <name>hive.querylog.location</name>
  <value>/data/hive/logs</value>
  <description>Location of Hive run time structured log file</description>
</property>
<property>
  <name>hive.zookeeper.quorum</name>
  <value>slave1,slave2,master,slave4,slave5,slave6,slave7</value>
  <description>
    List of ZooKeeper servers to talk to. This is needed for:
    1. Read/write locks - when hive.lock.manager is set to org.apache.hadoop.hive.ql.lockmgr.zookeeper.ZooKeeperHiveLockManager,
    2. When HiveServer2 supports service discovery via Zookeeper.
    3. For delegation token storage if zookeeper store is used, if hive.cluster.delegation.token.store.zookeeper.connectString is not set
  </description>
</property>
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>slave1,slave2,master,slave4,slave5,slave6,slave7</value>
</property>
<property>
  <name>hive.server2.logging.operation.log.location</name>
  <value>/data/hive/logs/operation_logs</value>
  <description>Top level directory where operation logs are stored if logging functionality is enabled</description>
</property>
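Before restarting Hive it is worth confirming that every jar referenced by hive.aux.jars.path actually exists on disk, since a stale path only shows up as a failure at query time. A minimal sketch (a hypothetical validator, using a trimmed-down config fragment) of extracting those local paths from the XML:

```python
import xml.etree.ElementTree as ET

# Trimmed-down fragment standing in for a real hive-site.xml.
FRAGMENT = """<configuration>
  <property>
    <name>hive.aux.jars.path</name>
    <value>file:///data/hive/lib/hbase-client-1.1.1.jar,file:///data/hive/lib/hbase-server-1.1.1.jar</value>
  </property>
</configuration>"""

def aux_jar_paths(xml_text):
    """Return the local filesystem paths listed in hive.aux.jars.path."""
    root = ET.fromstring(xml_text)
    for prop in root.findall("property"):
        if prop.findtext("name") == "hive.aux.jars.path":
            uris = prop.findtext("value").split(",")
            return [u[len("file://"):] for u in uris if u.startswith("file://")]
    return []

paths = aux_jar_paths(FRAGMENT)
# Each returned path could then be checked with os.path.isfile(path).
```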
修改配置文件后,啟動(dòng)metaStore的后臺(tái)進(jìn)程,執(zhí)行hive就可進(jìn)入hive的命令行了,執(zhí)行insert into table hbase的表,執(zhí)行成功。