This article shows how to use the Hadoop distcp command to copy files across clusters. The content is concise and easy to follow; by the end you should be able to apply it yourself.
Hadoop provides the distcp command for copying data between different Hadoop clusters.
The basic usage is: hadoop distcp -pbc hdfs://namenode1/test hdfs://namenode2/test
A distcp copy runs as a map-only MapReduce job: the work is done entirely by map tasks, with no reduce phase.
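Because there is no reduce phase, the number of map tasks (-m) and the per-map bandwidth cap (-bandwidth) are the main throughput knobs. A minimal sketch, assuming hypothetical cluster addresses (the command is printed rather than executed, so it can be inspected before running):

```shell
# Hypothetical NameNode URIs -- substitute your own clusters and paths.
SRC="hdfs://namenode1:8020/user/data/logs"
DST="hdfs://namenode2:8020/user/data/logs"

# Map-only copy: at most 20 concurrent map tasks,
# each throttled to 50 MB/s.
CMD="hadoop distcp -m 20 -bandwidth 50 $SRC $DST"
echo "$CMD"
```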
usage: distcp OPTIONS [source_path...] <target_path>
              OPTIONS
 -append                Reuse existing data in target files and append new
                        data to them if possible
 -async                 Should distcp execution be blocking
 -atomic                Commit all changes or none
 -bandwidth <arg>       Specify bandwidth per map in MB
 -delete                Delete from target, files missing in source
 -diff <arg>            Use snapshot diff report to identify the
                        difference between source and target
 -f <arg>               List of files that need to be copied
 -filelimit <arg>       (Deprecated!) Limit number of files copied to <= n
 -i                     Ignore failures during copy
 -log <arg>             Folder on DFS where distcp execution logs are
                        saved
 -m <arg>               Max number of concurrent maps to use for copy
 -mapredSslConf <arg>   Configuration for ssl config file, to use with
                        hftps://
 -overwrite             Choose to overwrite target files unconditionally,
                        even if they exist.
 -p <arg>               preserve status (rbugpcaxt)(replication,
                        block-size, user, group, permission,
                        checksum-type, ACL, XATTR, timestamps). If -p is
                        specified with no <arg>, then preserves
                        replication, block size, user, group, permission,
                        checksum type and timestamps. raw.* xattrs are
                        preserved when both the source and destination
                        paths are in the /.reserved/raw hierarchy (HDFS
                        only). raw.* xattr preservation is independent of
                        the -p flag. Refer to the DistCp documentation for
                        more details.
 -sizelimit <arg>       (Deprecated!) Limit number of files copied to <= n
                        bytes
 -skipcrccheck          Whether to skip CRC checks between source and
                        target paths.
 -strategy <arg>        Copy strategy to use. Default is dividing work
                        based on file sizes
 -tmp <arg>             Intermediate work path to be used for atomic
                        commit
 -update                Update target, copying only missing files or
                        directories
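Several of these options are commonly combined for incremental synchronization: -update skips files that are already identical at the target, -delete removes target files that no longer exist at the source, and -p with no argument preserves replication, block size, owner, group, permissions, checksum type and timestamps. A sketch with hypothetical paths (the command is printed for inspection, not executed):

```shell
# Hypothetical warehouse paths -- substitute your own.
SRC="hdfs://namenode1:8020/warehouse/events"
DST="hdfs://namenode2:8020/warehouse/events"

# -update: copy only missing/changed files
# -delete: mirror deletions from source to target
# -p:      preserve file attributes (no <arg> given)
CMD="hadoop distcp -update -delete -p $SRC $DST"
echo "$CMD"
```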
Clusters running different Hadoop versions have incompatible RPC protocol versions, so the command hadoop distcp hdfs://namenode1/test hdfs://namenode2/test cannot be used between them directly.
To copy between different Hadoop versions, use HftpFileSystem instead. It is a read-only filesystem, so DistCp must run on the destination cluster (more precisely, on TaskTrackers that can write to the destination cluster). The source is specified as hftp://<dfs.http.address>/<path> (by default, dfs.http.address is <namenode>:50070).
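A cross-version copy can be sketched as follows, with hypothetical addresses; note that only the source uses the hftp scheme (it is read-only), and the job must be submitted on the destination cluster. The command is printed rather than executed:

```shell
# Run this on the DESTINATION cluster.
# hftp is read-only, so it may only appear on the source side;
# 50070 is the default dfs.http.address port.
SRC="hftp://namenode1:50070/user/data/logs"
DST="hdfs://namenode2:8020/user/data/logs"
CMD="hadoop distcp $SRC $DST"
echo "$CMD"
```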
The above covers how the Hadoop distcp command copies files across clusters.