
Using Sqoop to Test an Oracle Database Connection

Published: 2021-08-03 09:21:59  Source: 億速云  Author: chen  Category: Cloud Computing

This article introduces how to use Sqoop to test and use a connection to an Oracle database. Plenty of people run into trouble with this in real-world work, so the walkthrough below shows how to handle those situations. Read it carefully and you should be able to apply it yourself.

Testing and using the Oracle database connection

① Connect to the Oracle database and list all databases

[hadoop@eb179 sqoop]$ sqoop list-databases --connect jdbc:oracle:thin:@10.1.69.173:1521:ORCLBI --username huangq -P

or: sqoop list-databases --connect jdbc:oracle:thin:@10.1.69.173:1521:ORCLBI --username huangq --password 123456

or, for MySQL: sqoop list-databases --connect jdbc:mysql://172.19.17.119:3306/ --username hadoop --password hadoop

Warning: /home/hadoop/sqoop/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /home/hadoop/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
Warning: $HADOOP_HOME is deprecated.
14/08/17 11:59:24 INFO sqoop.Sqoop: Running Sqoop version: 1.4.5
Enter password:
14/08/17 11:59:27 INFO oracle.OraOopManagerFactory: Data Connector for Oracle and Hadoop is disabled.
14/08/17 11:59:27 INFO manager.SqlManager: Using default fetchSize of 1000
14/08/17 11:59:51 INFO manager.OracleManager: Time zone has been set to GMT
MRDRP
MKFOW_QH
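
As the import log further down warns, passing the password with --password on the command line is insecure, and -P only offers an interactive prompt. A hedged alternative is Sqoop's --password-file option, which reads the password from a file; the file path used below is only an illustration, not something from the original session:

          # store the password in a file readable only by the hadoop user (path is an example)
          echo -n "123456" > /home/hadoop/.oracle.pwd && chmod 400 /home/hadoop/.oracle.pwd
          sqoop list-databases --connect jdbc:oracle:thin:@10.1.69.173:1521:ORCLBI --username huangq --password-file file:///home/hadoop/.oracle.pwd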

② Import an Oracle table into HDFS

Notes:

By default Sqoop runs 4 map tasks, each writing the rows it imports to its own file, with all files placed in the same target directory; in this example -m 1 means only a single map task is used.
A text-file import cannot store binary fields, and it cannot distinguish a NULL value from the string "null" (a hedged sketch of the options that address this follows these notes).
Running the command below also generates a Java class named after the table (ORD_UV.java here, as the log confirms; you can check with ls). Code generation is a required part of Sqoop's import: before writing data from the source database to HDFS, Sqoop first uses the generated class to deserialize the rows.
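
Because a plain-text import cannot tell a NULL column from the literal string "null", Sqoop provides --null-string (for string columns) and --null-non-string (for all other column types) to choose the substitution token. A minimal sketch, assuming the same connection as above and a hypothetical target directory:

          sqoop import --connect jdbc:oracle:thin:@10.1.69.173:1521:ORCLBI --username huangq -P \
              --table ORD_UV -m 1 --target-dir /user/sqoop/test_nulls \
              --null-string '\\N' --null-non-string '\\N'   # write NULLs as \N in the text files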

[hadoop@eb179 ~]$ sqoop import --connect jdbc:oracle:thin:@10.1.69.173:1521:ORCLBI --username huangq --password 123456 --table ORD_UV -m 1 --target-dir /user/sqoop/test --direct-split-size 67108864
Warning: /home/hadoop/sqoop/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /home/hadoop/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
Warning: $HADOOP_HOME is deprecated.
14/08/17 15:21:34 INFO sqoop.Sqoop: Running Sqoop version: 1.4.5
14/08/17 15:21:34 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
14/08/17 15:21:34 INFO oracle.OraOopManagerFactory: Data Connector for Oracle and Hadoop is disabled.
14/08/17 15:21:34 INFO manager.SqlManager: Using default fetchSize of 1000
14/08/17 15:21:34 INFO tool.CodeGenTool: Beginning code generation
14/08/17 15:21:46 INFO manager.OracleManager: Time zone has been set to GMT
14/08/17 15:21:46 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM ORD_UV t WHERE 1=0
14/08/17 15:21:46 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /home/hadoop/hadoop
Note: /tmp/sqoop-hadoop/compile/328657d577512bd2c61e07d66aaa9bb7/ORD_UV.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
14/08/17 15:21:47 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hadoop/compile/328657d577512bd2c61e07d66aaa9bb7/ORD_UV.jar
14/08/17 15:21:47 INFO manager.OracleManager: Time zone has been set to GMT
14/08/17 15:21:47 INFO manager.OracleManager: Time zone has been set to GMT
14/08/17 15:21:47 INFO mapreduce.ImportJobBase: Beginning import of ORD_UV
14/08/17 15:21:47 INFO manager.OracleManager: Time zone has been set to GMT
14/08/17 15:21:49 INFO db.DBInputFormat: Using read commited transaction isolation
14/08/17 15:21:49 INFO mapred.JobClient: Running job: job_201408151734_0027
14/08/17 15:21:50 INFO mapred.JobClient:  map 0% reduce 0%
14/08/17 15:22:12 INFO mapred.JobClient:  map 100% reduce 0%
14/08/17 15:22:17 INFO mapred.JobClient: Job complete: job_201408151734_0027
14/08/17 15:22:17 INFO mapred.JobClient: Counters: 18
14/08/17 15:22:17 INFO mapred.JobClient:   Job Counters 
14/08/17 15:22:17 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=15862
14/08/17 15:22:17 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
14/08/17 15:22:17 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
14/08/17 15:22:17 INFO mapred.JobClient:     Launched map tasks=1
14/08/17 15:22:17 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0
14/08/17 15:22:17 INFO mapred.JobClient:   File Output Format Counters
14/08/17 15:22:17 INFO mapred.JobClient:     Bytes Written=1472
14/08/17 15:22:17 INFO mapred.JobClient:   FileSystemCounters
14/08/17 15:22:17 INFO mapred.JobClient:     HDFS_BYTES_READ=87
14/08/17 15:22:17 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=33755
14/08/17 15:22:17 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=1472
14/08/17 15:22:17 INFO mapred.JobClient:   File Input Format Counters
14/08/17 15:22:17 INFO mapred.JobClient:     Bytes Read=0
14/08/17 15:22:17 INFO mapred.JobClient:   Map-Reduce Framework
14/08/17 15:22:17 INFO mapred.JobClient:     Map input records=81
14/08/17 15:22:17 INFO mapred.JobClient:     Physical memory (bytes) snapshot=192405504
14/08/17 15:22:17 INFO mapred.JobClient:     Spilled Records=0
14/08/17 15:22:17 INFO mapred.JobClient:     CPU time spent (ms)=1540
14/08/17 15:22:17 INFO mapred.JobClient:     Total committed heap usage (bytes)=503775232
14/08/17 15:22:17 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=2699571200
14/08/17 15:22:17 INFO mapred.JobClient:     Map output records=81
14/08/17 15:22:17 INFO mapred.JobClient:     SPLIT_RAW_BYTES=87
14/08/17 15:22:17 INFO mapreduce.ImportJobBase: Transferred 1.4375 KB in 29.3443 seconds (50.1631 bytes/sec)
14/08/17 15:22:17 INFO mapreduce.ImportJobBase: Retrieved 81 records.
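
To confirm what the single map task wrote, the standard HDFS shell can list and preview the target directory. part-m-00000 is the usual name for the first (and here only) map output file, but check the listing rather than assuming it:

          hadoop fs -ls /user/sqoop/test
          hadoop fs -cat /user/sqoop/test/part-m-00000 | head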

③ Export data to Oracle and import into HBase

Use sqoop export to load data from HDFS into a remote database:

          sqoop export --connect jdbc:oracle:thin:@192.168.**.**:**:** --username ** --password ** -m 1 --table VEHICLE --export-dir /user/root/VEHICLE
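
A hedged note on delimiters: sqoop export parses the files under --export-dir using a default field delimiter (a comma, matching what Sqoop's text imports write), so if the data were written with a different separator, --input-fields-terminated-by must declare it. The delimiter below is only an example, and the masked connection placeholders are left as in the original:

          sqoop export --connect jdbc:oracle:thin:@192.168.**.**:**:** --username ** --password ** -m 1 --table VEHICLE --export-dir /user/root/VEHICLE --input-fields-terminated-by ','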

Import data into HBase:

          sqoop import --connect jdbc:oracle:thin:@192.168.**.**:**:** --username ** --password ** -m 1 --table VEHICLE --hbase-create-table --hbase-table VEHICLE --hbase-row-key ID --column-family VEHICLEINFO --split-by ID
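
After the HBase import finishes, a quick check from the HBase shell can confirm that the table was created and holds rows; the scan limit is only there to keep the output short:

          hbase shell
          list                            # should include 'VEHICLE'
          scan 'VEHICLE', {LIMIT => 5}    # show the first few rows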

That wraps up "Using Sqoop to Test an Oracle Database Connection"; thanks for reading. For more practical articles on related topics, you can follow the 億速云 website.
