
How to Install and Configure HUE

發(fā)布時間:2021-11-16 10:35:41 來源:億速云 閱讀:269 作者:小新 欄目:云計算

This article shows how to install and configure HUE. The content is concise and clearly organized; hopefully it helps you work through the process step by step.

HUE Installation and Configuration

1. Download HUE: http://cloudera.github.io/hue/docs-3.0.0/manual.html#_hadoop_configuration

2. Install HUE's build dependencies (as root)

RedHat                                   Ubuntu
gcc                                      gcc
g++                                      g++
libxml2-devel                            libxml2-dev
libxslt-devel                            libxslt-dev
cyrus-sasl-devel                         libsasl2-dev
cyrus-sasl-gssapi                        libsasl2-modules-gssapi-mit
mysql-devel                              libmysqlclient-dev
python-devel                             python-dev
python-setuptools                        python-setuptools
python-simplejson                        python-simplejson
sqlite-devel                             libsqlite3-dev
ant                                      ant
krb5-devel                               libkrb5-dev
libtidy (for unit tests only)            libtidy-0.99-0
mvn (from maven2 package or tarball)     mvn
openldap-devel                           openldap-dev / libldap2-dev

$ yum install -y gcc gcc-c++ libxml2-devel libxslt-devel cyrus-sasl-devel cyrus-sasl-gssapi mysql-devel python-devel python-setuptools python-simplejson sqlite-devel ant krb5-devel libtidy openldap-devel

Note that yum's C++ compiler package is gcc-c++, and Maven (mvn) is installed separately, from the maven2 package or a tarball.
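The yum command above applies to RedHat systems. On Ubuntu the same dependencies come from the apt packages listed in the table; the following is a minimal sketch, guarded so it only attempts the install as root and does not abort if a package has been renamed on a newer release:

```shell
# Ubuntu dependency list, taken from the table above
pkgs="gcc g++ libxml2-dev libxslt-dev libsasl2-dev libsasl2-modules-gssapi-mit \
libmysqlclient-dev python-dev python-setuptools python-simplejson libsqlite3-dev \
ant libkrb5-dev libtidy-0.99-0 libldap2-dev"

# Only attempt the install as root on a system that actually has apt-get;
# '|| true' keeps the sketch from aborting on packages renamed in newer releases
if command -v apt-get >/dev/null 2>&1 && [ "$(id -u)" -eq 0 ]; then
    apt-get install -y $pkgs || true
fi
```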

3. Edit the pom.xml file

$ vim /opt/hue/maven/pom.xml

a.) Update the Hadoop and Spark versions:

      <hadoop-mr1.version>2.6.0</hadoop-mr1.version>

      <hadoop.version>2.6.0</hadoop.version>

       <spark.version>1.4.0</spark.version>

b.) Change hadoop-core to hadoop-common:

       <artifactId>hadoop-common</artifactId>

c.) Change the hadoop-test version to 1.2.1:

       <artifactId>hadoop-test</artifactId>

       <version>1.2.1</version>

d.) Delete the two ThriftJobTrackerPlugin.java files, found in the following two locations:

/usr/hdp/hue/desktop/libs/hadoop/java/src/main/java/org/apache/hadoop/thriftfs/ThriftJobTrackerPlugin.java

/usr/hdp/hue/desktop/libs/hadoop/java/src/main/java/org/apache/hadoop/mapred/ThriftJobTrackerPlugin.java

4. Build

$ cd /opt/hue

$ make apps

5. Start the HUE service

$ ./build/env/bin/supervisor

$ ps aux | grep hue          # find the supervisor's PID

$ kill <PID>                 # SIGTERM; use kill -9 only if the process refuses to exit
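The lookup-and-kill step can also be wrapped in a small helper. This is a convenience sketch, not part of Hue itself: it uses pgrep -f to match against the full command line and sends plain SIGTERM first.

```shell
# Stop the first process whose command line matches a pattern.
# Returns non-zero when nothing matches.
stop_by_pattern() {
    local pattern="$1" pid
    pid="$(pgrep -f "$pattern" | head -n 1)"
    [ -n "$pid" ] || return 1
    kill "$pid"    # SIGTERM; escalate to kill -9 only if it will not exit
}

# Usage against the Hue supervisor started above:
# stop_by_pattern 'build/env/bin/supervisor'
```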

6. hue.ini configuration

$ vim /usr/hdp/hue/hue-3.10.0/desktop/conf/hue.ini

a.) [desktop] configuration

      [desktop]

      # Webserver listens on this address and port
      http_host=xx.xx.xx.xx
      http_port=8888

      # Time zone name
      time_zone=Asia/Shanghai

      # Webserver runs as this user
      server_user=hue
      server_group=hue

      # This should be the Hue admin and proxy user
      default_user=hue

      # This should be the hadoop cluster admin
      default_hdfs_superuser=hdfs

      [hadoop]
          [[hdfs_clusters]]

              [[[default]]]
              # Enter the filesystem uri

              # If HDFS is NOT configured for HA, use:

              fs_defaultfs=hdfs://xx.xx.xx.xx:8020          ## the Hadoop NameNode host

              # If HDFS IS configured for HA, use:
              fs_defaultfs=hdfs://mycluster                     ## logical name; must match fs.defaultFS in core-site.xml

              # NameNode logical name.
              ## logical_name=carmecluster

              # Use WebHdfs/HttpFs as the communication mechanism.
              # Domain should be the NameNode or HttpFs host.
              # Default port is 14000 for HttpFs.

              # If HDFS is not configured for HA, use:

              webhdfs_url=http://xx.xx.xx.xx:50070/webhdfs/v1

              # If HDFS is configured for HA, HUE can only reach HDFS through Hadoop HttpFS.
              # Install HttpFS manually:   $ sudo yum install hadoop-httpfs
              # Start the HttpFS service:  $ ./hadoop-httpfs start &
              webhdfs_url=http://xx.xx.xx.xx:14000/webhdfs/v1

      [[yarn_clusters]]

         [[[default]]]
             # Enter the host on which you are running the ResourceManager
             resourcemanager_host=xx.xx.xx.xx

             # The port where the ResourceManager IPC listens on
             resourcemanager_port=8050

             # Whether to submit jobs to this cluster
             submit_to=True

             # Resource Manager logical name (required for HA)
             ## logical_name=

             # Change this if your YARN cluster is Kerberos-secured
             ## security_enabled=false

             # URL of the ResourceManager API
            resourcemanager_api_url=http://xx.xx.xx.xx:8088

            # URL of the ProxyServer API
            proxy_api_url=http://xx.xx.xx.xx:8088

            # URL of the HistoryServer API
            history_server_api_url=http://xx.xx.xx.xx:19888

            # URL of the Spark History Server
            ## spark_history_server_url=http://localhost:18088

      [[mapred_clusters]]

          [[[default]]]
               # Enter the host on which you are running the Hadoop JobTracker
               jobtracker_host=xx.xx.xx.xx

               # The port where the JobTracker IPC listens on
               jobtracker_port=8021

               # JobTracker logical name for HA
               ## logical_name=

               # Thrift plug-in port for the JobTracker
               thrift_port=9290

              # Whether to submit jobs to this cluster
              submit_to=False

         [beeswax]

              # Host where HiveServer2 is running.
              # If Kerberos security is enabled, use fully-qualified domain name (FQDN).
              hive_server_host=xx.xx.xx.xx

              # Port where HiveServer2 Thrift server runs on.
              hive_server_port=10000

              # Hive configuration directory, where hive-site.xml is located
              hive_conf_dir=/etc/hive/conf

              # Timeout in seconds for thrift calls to Hive service
              ## server_conn_timeout=120


        [hbase]
              # Comma-separated list of HBase Thrift servers for clusters in the format of '(name|host:port)'.
              # Use full hostname with security.
              # If using Kerberos we assume GSSAPI SASL, not PLAIN.
              hbase_clusters=(Cluster|xx.xx.xx.xx:9090)

              # If connecting to HBase fails, start the Thrift service: $ nohup hbase thrift start &

         [zookeeper]

              [[clusters]]

                 [[[default]]]
              # Zookeeper ensemble. Comma separated list of Host/Port.
              # e.g. localhost:2181,localhost:2182,localhost:2183
              host_ports=xx.xx.xx.xx:2181,xx.xx.xx.xx:2181,xx.xx.xx.xx:2181

         [liboozie]
              # The URL where the Oozie service runs on. This is required in order for
              # users to submit jobs. Empty value disables the config check.
              oozie_url=http://xx.xx.xx.xx:11000/oozie
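Before pointing Hue at these services, it helps to confirm each endpoint is actually reachable. The sketch below is an assumption-laden convenience, not part of Hue: hostnames are placeholders for the values used in hue.ini, and the /dev/tcp redirection requires bash.

```shell
# Return 0 if host:port accepts a TCP connection, 1 otherwise.
# Uses bash's /dev/tcp pseudo-files; the subshell closes the socket on exit.
port_open() {
    (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null
}

# Placeholder endpoints; substitute the addresses configured above
for endpoint in "namenode:50070" "resourcemanager:8088" "hiveserver2:10000" "oozie:11000"; do
    host="${endpoint%:*}"
    port="${endpoint#*:}"
    if port_open "$host" "$port"; then
        echo "$endpoint open"
    else
        echo "$endpoint closed"
    fi
done

# A WebHDFS directory listing then confirms the webhdfs_url value end to end:
# curl -s "http://namenode:50070/webhdfs/v1/?op=LISTSTATUS&user.name=hue"
```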

b.) Related Hadoop configuration

hdfs-site.xml configuration file

<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>

core-site.xml configuration file

<property>
  <name>hadoop.proxyuser.hue.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hue.groups</name>
  <value>*</value>
</property>

If the HUE server sits outside the Hadoop cluster, it can access HDFS by running an HttpFS server. The HttpFS service needs only a single open port.

httpfs-site.xml configuration file
<property>
  <name>httpfs.proxyuser.hue.hosts</name>
  <value>*</value>
</property>
<property>
  <name>httpfs.proxyuser.hue.groups</name>
  <value>*</value>
</property>

c.) MapReduce 0.20 (MR1) configuration

HUE communicates with the JobTracker through a plug-in jar that lives under MapReduce's lib directory.

If the JobTracker and HUE run on the same host, copy the jar:

$ cd /usr/share/hue
$ cp desktop/libs/hadoop/java-lib/hue-plugins-*.jar /usr/lib/hadoop-0.20-mapreduce/lib

If the JobTracker runs on a different host, scp the Hue plugins jar to the JobTracker host instead.

Add the following to mapred-site.xml, then restart the JobTracker:

<property>
  <name>jobtracker.thrift.address</name>
  <value>0.0.0.0:9290</value>
</property>
<property>
  <name>mapred.jobtracker.plugins</name>
  <value>org.apache.hadoop.thriftfs.ThriftJobTrackerPlugin</value>
  <description>Comma-separated list of jobtracker plug-ins to be activated.</description>
</property>

d.) Oozie configuration

oozie-site.xml configuration file

<property>
    <name>oozie.service.ProxyUserService.proxyuser.hue.hosts</name>
    <value>*</value>
</property>
<property>
    <name>oozie.service.ProxyUserService.proxyuser.hue.groups</name>
    <value>*</value>
</property>

