
Installing TensorFlowOnSpark for Big Data

Published: 2020-09-05 02:32:40  Source: Web  Views: 16384  Author: cs312779641  Category: Big Data

1. Overview

This article installs TensorFlowOnSpark on a small CentOS cluster and verifies that it works.

2. Environment

Node: master
  OS: CentOS 7.3 64-bit
  Address: 192.168.2.31 (master)
  Software: JDK 1.8, Scala 2.12.3, Hadoop 2.7.3, Spark 2.1.1, TensorFlowOnSpark 0.8.0, Python 2.7

Node: slave001
  OS: CentOS 7.3 64-bit
  Address: 192.168.2.32 (Spark worker)
  Software: JDK 1.8, Hadoop 2.7.3, Spark 2.1.1

Node: slave002
  OS: CentOS 7.3 64-bit
  Address: 192.168.2.33 (Spark worker)
  Software: JDK 1.8, Hadoop 2.7.3, Spark 2.1.1


3. Installation

3.1 Remove the bundled JDK

# rpm -e --nodeps java-1.7.0-openjdk-1.7.0.99-2.6.5.1.el6.x86_64
rpm -e --nodeps java-1.6.0-openjdk-1.6.0.38-1.13.10.4.el6.x86_64
rpm -e --nodeps tzdata-java-2016c-1.el6.noarch


3.2 Install the JDK

rpm -ivh jdk-8u144-linux-x64.rpm


3.3 Add the Java path (append to /etc/profile)

export JAVA_HOME=/usr/java/jdk1.8.0_144



3.4 Verify Java

[root@master opt]# java -version
java version "1.8.0_144"
Java(TM) SE Runtime Environment (build 1.8.0_144-b01)
Java HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode)



3.5 Set up passwordless SSH

cd /root/.ssh/
ssh-keygen -t rsa
cat id_rsa.pub >> authorized_keys 
scp id_rsa.pub authorized_keys root@192.168.2.32:/root/.ssh/
scp id_rsa.pub authorized_keys root@192.168.2.33:/root/.ssh/



3.6 Install Python 2.7

yum install -y gcc 
wget https://www.python.org/ftp/python/2.7.13/Python-2.7.13.tgz
tar vxf Python-2.7.13.tgz
cd Python-2.7.13
./configure --prefix=/usr/local
make && make install
 
[root@master opt]# python
Python 2.7.13 (default, Aug 24 2017, 16:10:35) 
[GCC 4.4.7 20120313 (Red Hat 4.4.7-18)] on linux2
Type "help", "copyright", "credits" or "license" for more information.


3.7 Install pip and setuptools

tar zxvf pip-1.5.4.tar.gz
tar zxvf setuptools-2.0.tar.gz
cd setuptools-2.0
python setup.py install
cd pip-1.5.4
python setup.py install


3.8 Install and configure Hadoop

3.8.1 Install Hadoop on all three machines

tar zxvf hadoop-2.7.3.tar.gz -C /usr/local/
cd /usr/local/hadoop-2.7.3/bin
[root@master bin]# ./hadoop version
Hadoop 2.7.3
Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r baa91f7c6bc9cb92be5982de4719c1c8af91ccff
Compiled by root on 2016-08-18T01:41Z
Compiled with protoc 2.5.0
From source with checksum 2e4ce5f957ea4db193bce3734ff29ff4
This command was run using /usr/local/hadoop-2.7.3/share/hadoop/common/hadoop-common-2.7.3.jar



3.8.2 Configure Hadoop

Configure the master:
vi /usr/local/hadoop-2.7.3/etc/hadoop/core-site.xml 
<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/usr/local/hadoop/tmp</value>
        <description>Abase for other temporary directories.</description>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9001</value>
    </property>
</configuration>



Configure the slaves. Note that fs.defaultFS must point at the NameNode (master) on every node:

[root@slave001 hadoop-2.7.3]# vi ./etc/hadoop/core-site.xml 
<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/usr/local/hadoop/tmp</value>
        <description>Abase for other temporary directories.</description>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9001</value>
    </property>
</configuration>
[root@slave002 hadoop-2.7.3]# vi ./etc/hadoop/core-site.xml 
<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/usr/local/hadoop/tmp</value>
        <description>Abase for other temporary directories.</description>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9001</value>
    </property>
</configuration>


3.8.3 Configure HDFS

vi /usr/local/hadoop-2.7.3/etc/hadoop/hdfs-site.xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/usr/local/hadoop/tmp/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/usr/local/hadoop/tmp/dfs/data</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address</name>
        <value>master:9001</value>
    </property>
</configuration>
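All of Hadoop's *-site.xml files share the same configuration/property shape. Purely as an illustration of that shape (not part of the official Hadoop tooling), the block above can be generated with a short Python helper:

```python
import xml.etree.ElementTree as ET

def hadoop_site_xml(props):
    """Render a dict of Hadoop settings as a *-site.xml <configuration> block."""
    conf = ET.Element("configuration")
    for name, value in props.items():
        prop = ET.SubElement(conf, "property")
        ET.SubElement(prop, "name").text = name
        ET.SubElement(prop, "value").text = value
    return ET.tostring(conf, encoding="unicode")

# The hdfs-site.xml values used on this cluster
xml = hadoop_site_xml({
    "dfs.replication": "1",
    "dfs.namenode.name.dir": "file:/usr/local/hadoop/tmp/dfs/name",
    "dfs.datanode.data.dir": "file:/usr/local/hadoop/tmp/dfs/data",
    "dfs.namenode.rpc-address": "master:9001",
})
print(xml)
```

In practice you edit the XML files directly as shown above; the helper just makes the required structure explicit.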



3.9 Install Scala

tar -zxvf scala-2.12.3.tgz -C /usr/local/
 
# Add Scala to the environment
vi /etc/profile
export SCALA_HOME=/usr/local/scala-2.12.3/
export PATH=$PATH:/usr/local/scala-2.12.3/bin
source /etc/profile



3.10 Install Spark on all three machines

tar -zxvf spark-2.1.1-bin-hadoop2.7.tgz -C /usr/local/
 
vi /etc/profile
export JAVA_HOME=/usr/java/jdk1.8.0_144/
export SCALA_HOME=/usr/local/scala-2.12.3/
export PATH=$PATH:/usr/local/scala-2.12.3/bin
export SPARK_HOME=/usr/local/spark-2.1.1-bin-hadoop2.7/
export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin
source /etc/profile


Modify the Spark configuration:

cd /usr/local/spark-2.1.1-bin-hadoop2.7/

vi ./conf/spark-env.sh.template

export JAVA_HOME=/usr/java/jdk1.8.0_144/

export SCALA_HOME=/usr/local/scala-2.12.3/

#export SPARK_HOME=/usr/local/spark-2.1.1-bin-hadoop2.7/

export SPARK_MASTER_IP=192.168.2.31

export SPARK_WORKER_MEMORY=1g

export HADOOP_CONF_DIR=/usr/local/hadoop-2.7.3/etc/hadoop

export HADOOP_HDFS_HOME=/usr/local/hadoop-2.7.3/

export SPARK_DRIVER_MEMORY=1g

Save and exit, then rename the file:

mv spark-env.sh.template spark-env.sh

 

# Edit the slaves file

[root@master conf]# vi slaves.template

192.168.2.32

192.168.2.33

[root@master conf]# mv slaves.template slaves

 

3.11 Update /etc/hosts on all three machines

vi /etc/hosts

192.168.2.31 master

192.168.2.32 slave001

192.168.2.33 slave002
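To sanity-check the mapping (a throwaway sketch, not part of the installation), you can parse the /etc/hosts-style lines into a name-to-IP dictionary:

```python
# Parse /etc/hosts-style lines into a name -> IP mapping and verify the cluster entries
hosts = """192.168.2.31 master
192.168.2.32 slave001
192.168.2.33 slave002"""

mapping = {}
for line in hosts.splitlines():
    ip, name = line.split()
    mapping[name] = ip

print(mapping["master"])   # 192.168.2.31
print(sorted(mapping))     # ['master', 'slave001', 'slave002']
```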

4. Start the services


[root@master local]# cd hadoop-2.7.3/sbin/
./start-all.sh

localhost: Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
localhost: Error: JAVA_HOME is not set and could not be found.

If start-all.sh fails with the JAVA_HOME error above, set it in Hadoop's environment file:

vi /usr/local/hadoop-2.7.3/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.8.0_144/

Then restart the services:

./start-all.sh

# Start Spark

cd /usr/local/spark-2.1.1-bin-hadoop2.7/sbin/

./start-all.sh


5. Install TensorFlow

First install CUDA as a prerequisite. Create a yum repository file:

vim /etc/yum.repos.d/linuxtech.testing.repo

with the following content:
[linuxtech-testing]  
name=LinuxTECH Testing  
baseurl=http://pkgrepo.linuxtech.net/el6/testing/  
enabled=0  
gpgcheck=1  
gpgkey=http://pkgrepo.linuxtech.net/el6/release/RPM-GPG-KEY-LinuxTECH.NET  
 
sudo rpm -i cuda-repo-rhel6-8.0.61-1.x86_64.rpm
sudo yum clean all
sudo yum install cuda
rpm -ivh --nodeps dkms-2.1.1.2-1.el6.rf.noarch.rpm 
yum install cuda
yum install epel-release
yum install -y zlib* 
# Symlink CUDA and refresh the linker cache
ln -s /usr/local/cuda-8.0 /usr/local/cuda
ldconfig /usr/local/cuda/lib64
vi /etc/profile
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64"
export CUDA_HOME=/usr/local/cuda

 

Update pip:
pip install --upgrade pip

Install the TensorFlow 0.8.0 wheel:
pip install --upgrade https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.8.0-cp27-none-linux_x86_64.whl

After installation, try importing it:
# python
>>> import tensorflow
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.7/site-packages/tensorflow/__init__.py", line 23, in <module>
    from tensorflow.python import *
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/__init__.py", line 45, in <module>
    from tensorflow.python import pywrap_tensorflow
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/pywrap_tensorflow.py", line 28, in <module>
    _pywrap_tensorflow = swig_import_helper()
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/pywrap_tensorflow.py", line 24, in swig_import_helper
    _mod = imp.load_module('_pywrap_tensorflow', fp, pathname, description)
ImportError: libcudart.so.7.5: cannot open shared object file: No such file or directory
# This fails because required system libraries are missing
yum install openssl -y
yum install openssl-devel -y
yum install gcc gcc-c++ gcc*
# Update pip and reinstall the wheel:
pip install --upgrade pip
pip install --upgrade https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.8.0-cp27-none-linux_x86_64.whl
>>> import tensorflow
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.7/site-packages/tensorflow/__init__.py", line 23, in <module>
    from tensorflow.python import *
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/__init__.py", line 45, in <module>
    from tensorflow.python import pywrap_tensorflow
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/pywrap_tensorflow.py", line 28, in <module>
    _pywrap_tensorflow = swig_import_helper()
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/pywrap_tensorflow.py", line 24, in swig_import_helper
    _mod = imp.load_module('_pywrap_tensorflow', fp, pathname, description)
ImportError: /lib64/libc.so.6: version `GLIBC_2.15' not found (required by /usr/local/lib/python2.7/site-packages/tensorflow/python/_pywrap_tensorflow.so)
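Before swapping any libraries, it helps to confirm which glibc the system actually provides. One way to do that from Python (a Linux-only sketch using the glibc-specific gnu_get_libc_version call):

```python
import ctypes

# Ask the running C library for its version string (glibc-specific, Linux-only)
libc = ctypes.CDLL("libc.so.6")
libc.gnu_get_libc_version.restype = ctypes.c_char_p
version = libc.gnu_get_libc_version().decode()
print(version)

major, minor = (int(p) for p in version.split(".")[:2])
# The TensorFlow 0.8.0 wheel above needs at least GLIBC 2.15
print((major, minor) >= (2, 15))
```

If the second line prints False, you will hit the ImportError above and need a newer glibc.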
# This is because TensorFlow was built against a newer glibc than the one the system ships.
You can check which GLIBCXX versions the system's libstdc++ exposes:

# strings /usr/lib64/libstdc++.so.6 | grep GLIBCXX

GLIBCXX_3.4

GLIBCXX_3.4.1

GLIBCXX_3.4.2

GLIBCXX_3.4.3

GLIBCXX_3.4.4

GLIBCXX_3.4.5

GLIBCXX_3.4.6

GLIBCXX_3.4.7

GLIBCXX_3.4.8

GLIBCXX_3.4.9

GLIBCXX_3.4.10

GLIBCXX_3.4.11

GLIBCXX_3.4.12

GLIBCXX_3.4.13

GLIBCXX_FORCE_NEW

GLIBCXX_DEBUG_MESSAGE_LENGTH

 

Drop in a newer libstdc++ build, extract libstdc++.so.6.0.20, and point the libstdc++.so.6 symlink at it, replacing the original:

[root@master 4.4.7]# ln -s /opt/libstdc++.so.6/libstdc++.so.6.0.20 /usr/lib64/libstdc++.so.6
ln: creating symbolic link `/usr/lib64/libstdc++.so.6': File exists
[root@master 4.4.7]# mv /usr/lib64/libstdc++.so.6 /root/
[root@master 4.4.7]# ln -s /opt/libstdc++.so.6/libstdc++.so.6.0.20 /usr/lib64/libstdc++.so.6
[root@master 4.4.7]# strings /usr/lib64/libstdc++.so.6 | grep GLIBCXX

 

[root@master ~]# strings /usr/lib64/libstdc++.so.6 | grep GLIBCXX

GLIBCXX_3.4

GLIBCXX_3.4.1

GLIBCXX_3.4.2

GLIBCXX_3.4.3

GLIBCXX_3.4.4

GLIBCXX_3.4.5

GLIBCXX_3.4.6

GLIBCXX_3.4.7

GLIBCXX_3.4.8

GLIBCXX_3.4.9

GLIBCXX_3.4.10

GLIBCXX_3.4.11

GLIBCXX_3.4.12

GLIBCXX_3.4.13

GLIBCXX_3.4.14

GLIBCXX_3.4.15

GLIBCXX_3.4.16

GLIBCXX_3.4.17

GLIBCXX_3.4.18

GLIBCXX_3.4.19

GLIBCXX_3.4.20

GLIBCXX_DEBUG_MESSAGE_LENGTH

Watch out here: this step is full of pitfalls, and you must actually replace the original symlink.

pip install tensorflowonspark

 

TensorFlowOnSpark is now ready to use.


Possible error:

ImportError: /lib64/libc.so.6: version `GLIBC_2.17' not found (required by /usr/local/lib/python2.7/site-packages/tensorflow/python/_pywrap_tensorflow.so)

If you hit it, build and install glibc 2.17:

tar zxvf glibc-2.17.tar.gz  

mkdir build  

cd build  

../glibc-2.17/configure  --prefix=/usr --disable-profile --enable-add-ons --with-headers=/usr/include --with-binutils=/usr/bin  

make -j4  

make install  

Test and verify TensorFlow:


import tensorflow as tf
import numpy as np
x_data = np.float32(np.random.rand(2, 100)) 
y_data = np.dot([0.100, 0.200], x_data) + 0.300
 
b = tf.Variable(tf.zeros([1]))
W = tf.Variable(tf.random_uniform([1, 2], -1.0, 1.0))
y = tf.matmul(W, x_data) + b
 
loss = tf.reduce_mean(tf.square(y - y_data))
optimizer = tf.train.GradientDescentOptimizer(0.5)
train = optimizer.minimize(loss)
 
init = tf.initialize_all_variables()
 
sess = tf.Session()
sess.run(init)
 
 
for step in xrange(0, 201):
    sess.run(train)
    if step % 20 == 0:
        print step, sess.run(W), sess.run(b)
 
# Expected best fit: W: [[0.100  0.200]], b: [0.300]


Make sure /etc/profile contains:
export JAVA_HOME=/usr/java/jdk1.8.0_144/
export SCALA_HOME=/usr/local/scala-2.12.3/
export PATH=$PATH:/usr/local/scala-2.12.3/bin
export SPARK_HOME=/usr/local/spark-2.1.1-bin-hadoop2.7/
export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64"
export CUDA_HOME=/usr/local/cuda
export PYTHONPATH=$SPARK_HOME/python:$SPARK_HOME/python/lib/py4j-0.10.4-src.zip:$PYTHONPATH
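The PYTHONPATH line works because Python can import modules directly from a zip archive. The equivalent from inside a script (the SPARK_HOME default below is the path used on this cluster; adjust the py4j version to whatever ships with your Spark):

```python
import os
import sys

# Mirror the PYTHONPATH entries from /etc/profile inside a script
spark_home = os.environ.get("SPARK_HOME", "/usr/local/spark-2.1.1-bin-hadoop2.7")
for entry in (os.path.join(spark_home, "python"),
              os.path.join(spark_home, "python", "lib", "py4j-0.10.4-src.zip")):
    if entry not in sys.path:
        sys.path.insert(0, entry)

print(any(p.endswith("py4j-0.10.4-src.zip") for p in sys.path))  # True
```

With those entries on sys.path, `import pyspark` resolves without any extra packaging.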

This completes the installation.


Download: http://down.51cto.com/data/2338827
