How to Configure and Use the HDFS Trash

Published: 2021-08-20 09:22:34  Source: 億速云  Views: 404  Author: chen  Category: web development

This article covers how to configure and use the HDFS trash (recycle bin). Accidentally deleting files is a common problem in practice; the walkthrough below shows how to enable the trash so that deleted files can be recovered.

HDFS creates a trash directory for each user: /user/<username>/.Trash/.
Every file or directory a user deletes through the shell is kept in the trash for a retention period. If it is not restored within that period, HDFS permanently deletes it, and the user can never get it back.
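The mapping from a deleted path to its trash location can be sketched with a small shell helper (the `trash_path` function is hypothetical, for illustration only; HDFS computes this path itself when you run `hdfs dfs -rm`):

```shell
# Hypothetical helper: compute where HDFS moves a deleted path.
# Deleted items land under /user/<user>/.Trash/Current, mirroring
# the original absolute path.
trash_path() {
  local user="$1" path="$2"
  echo "/user/${user}/.Trash/Current${path}"
}

trash_path hadoop /gw_test.log3
# → /user/hadoop/.Trash/Current/gw_test.log3
```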

1. The trash is disabled by default. With the default settings, a file deleted from HDFS cannot be recovered.
    [hadoop@hadoop002 hadoop]$ hdfs dfs -rm /gw_test.log2
    Deleted /gw_test.log2
2. To enable the trash, edit the core-site.xml file:
    [hadoop@hadoop002 hadoop]$ vi etc/hadoop/core-site.xml 
    <?xml version="1.0" encoding="UTF-8"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
    <!--
    Licensed under the Apache License, Version 2.0 (the "License");
    you may not use this file except in compliance with the License.
    You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software
    distributed under the License is distributed on an "AS IS" BASIS,
    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    See the License for the specific language governing permissions and
    limitations under the License. See accompanying LICENSE file.
    -->

    <!-- Put site-specific property overrides in this file. -->
    
    <configuration>
    <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop002:9000</value>
    </property>
    <!-- How often, in minutes, the checkpointer thread on the NameNode creates a new checkpoint from the Current directory under .Trash. Default 0 means the value of fs.trash.interval is used. -->
    <property>
         <name>fs.trash.checkpoint.interval</name>
         <value>0</value>
    </property>
    <!-- Number of minutes a checkpoint directory under .Trash is kept before being deleted. The server-side setting takes precedence over the client. Default 0 means the trash is disabled. -->
    <property>
          <name>fs.trash.interval</name>
         <value>1440</value> <!-- retention period in minutes (1440 = 24 hours) -->
    </property>
    
    </configuration>
    [hadoop@hadoop002 hadoop]
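Before restarting, it can be worth sanity-checking the edit. The snippet below is a self-contained sketch that reads `fs.trash.interval` back out of a minimal copy of the file with grep and sed (the `/tmp` path and the demo file are illustrative); on a live cluster, `hdfs getconf -confKey fs.trash.interval` should report the effective value.

```shell
# Write a minimal core-site.xml fragment, then read the value back.
cat > /tmp/core-site-demo.xml <<'EOF'
<configuration>
  <property>
    <name>fs.trash.interval</name>
    <value>1440</value>
  </property>
</configuration>
EOF

# Print the <value> on the line following the matching <name> element.
grep -A1 '<name>fs.trash.interval</name>' /tmp/core-site-demo.xml \
  | sed -n 's:.*<value>\(.*\)</value>.*:\1:p'
# → 1440
```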
3. Restart the HDFS service
    # stop the HDFS service
    [hadoop@hadoop002 hadoop]$ sbin/stop-dfs.sh 
    Stopping namenodes on [hadoop002]
    hadoop002: no namenode to stop
    hadoop002: no datanode to stop
    Stopping secondary namenodes [hadoop002]
    hadoop002: no secondarynamenode to stop
    # start the HDFS service
    [hadoop@hadoop002 hadoop]$ sbin/start-dfs.sh 
    Starting namenodes on [hadoop002]
    hadoop002: starting namenode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-namenode-hadoop002.out
    hadoop002: starting datanode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-datanode-hadoop002.out
    Starting secondary namenodes [hadoop002]
    hadoop002: starting secondarynamenode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-secondarynamenode-hadoop002.out
    [hadoop@hadoop002 hadoop]$
4. When a file is deleted from HDFS, it is moved into the trash under /user/hadoop/.Trash/Current
    # delete the file /gw_test.log3
    [hadoop@hadoop002 hadoop]$ hdfs dfs -rm  /gw_test.log3
    18/05/25 15:27:47 INFO fs.TrashPolicyDefault: Moved: 'hdfs://hadoop002:9000/gw_test.log3' to trash at: hdfs://hadoop002:9000/user/hadoop/.Trash/Current/gw_test.log3
    # confirm that gw_test.log3 is gone from the root directory
    [hadoop@hadoop002 hadoop]$ hdfs dfs -ls /
    Found 3 items
    drwxr-xr-x   - root   root                0 2018-05-23 13:16 /root
    drwx------   - hadoop supergroup          0 2018-05-22 11:23 /tmp
    drwxr-xr-x   - hadoop supergroup          0 2018-05-22 11:22 /user
    [hadoop@hadoop002 hadoop]$ 
    # list the contents of the trash directory
    [hadoop@hadoop002 hadoop]$ hdfs dfs -ls /user/hadoop/.Trash/Current
    Found 1 items
    -rw-r--r--   1 hadoop supergroup         25 2018-05-23 13:04 /user/hadoop/.Trash/Current/gw_test.log3

5. Restore a file
    # move the file back out of the trash
    [hadoop@hadoop002 hadoop]$ hdfs dfs -mv /user/hadoop/.Trash/Current/gw_test.log3 /gw_test.log3
    # verify that the file has been restored under the root directory
    [hadoop@hadoop002 hadoop]$ hdfs dfs -ls /
    Found 4 items
    -rw-r--r--   1 hadoop supergroup         25 2018-05-23 13:04 /gw_test.log3
    drwxr-xr-x   - root   root                0 2018-05-23 13:16 /root
    drwx------   - hadoop supergroup          0 2018-05-22 11:23 /tmp
    drwxr-xr-x   - hadoop supergroup          0 2018-05-22 11:22 /user
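Restoring is just a rename: stripping the `/user/hadoop/.Trash/Current` prefix yields the original path. This can be sketched with a hypothetical helper that builds the restore command (the function name and the hard-coded user are assumptions for illustration):

```shell
# Hypothetical helper: given a path inside the trash, print the
# hdfs command that moves it back to its original location.
restore_cmd() {
  local trash_file="$1"
  local original="${trash_file#/user/hadoop/.Trash/Current}"
  echo "hdfs dfs -mv ${trash_file} ${original}"
}

restore_cmd /user/hadoop/.Trash/Current/gw_test.log3
# → hdfs dfs -mv /user/hadoop/.Trash/Current/gw_test.log3 /gw_test.log3
```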

6. Delete a file, bypassing the trash
    # the -skipTrash flag bypasses the trash; the file is deleted immediately and permanently
    [hadoop@hadoop002 hadoop]$ hdfs dfs -rm -skipTrash /gw_test.log3
    Deleted /gw_test.log3
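To relate the two deletion modes back to the configuration: with `fs.trash.interval` set to 1440, a file deleted without `-skipTrash` survives in the trash for roughly 24 hours, while `-skipTrash` removes it immediately. The arithmetic:

```shell
# Convert the configured trash retention from minutes to hours.
interval_min=1440                       # value of fs.trash.interval
interval_hours=$(( interval_min / 60 ))
echo "${interval_hours}"
# → 24
```

The trash can also be emptied on demand with `hdfs dfs -expunge`, which forces a checkpoint and deletes checkpoints older than the configured interval.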

That concludes this introduction to configuring and using the HDFS trash. Thanks for reading; for more practical articles on related topics, see the 億速云 website.
