This article walks through how Flannel is configured when deployed in a Kubernetes cluster: the flanneld command-line options, the DaemonSet manifest, the CNI pieces involved, and how subnets end up assigned to nodes.
The command-line options of the current latest release, Flannel v0.10.0, are listed below:
Usage: /opt/bin/flanneld [OPTION]...
  -etcd-cafile string
        SSL Certificate Authority file used to secure etcd communication
  -etcd-certfile string
        SSL certification file used to secure etcd communication
  -etcd-endpoints string
        a comma-delimited list of etcd endpoints (default "http://127.0.0.1:4001,http://127.0.0.1:2379")
  -etcd-keyfile string
        SSL key file used to secure etcd communication
  -etcd-password string
        password for BasicAuth to etcd
  -etcd-prefix string
        etcd prefix (default "/coreos.com/network")
  -etcd-username string
        username for BasicAuth to etcd
  -healthz-ip string
        the IP address for healthz server to listen (default "0.0.0.0")
  -healthz-port int
        the port for healthz server to listen(0 to disable)
  -iface value
        interface to use (IP or name) for inter-host communication. Can be specified multiple times to check each option in order. Returns the first match found.
  -iface-regex value
        regex expression to match the first interface to use (IP or name) for inter-host communication. Can be specified multiple times to check each regex in order. Returns the first match found. Regexes are checked after specific interfaces specified by the iface option have already been checked.
  -ip-masq
        setup IP masquerade rule for traffic destined outside of overlay network
  -kube-api-url string
        Kubernetes API server URL. Does not need to be specified if flannel is running in a pod.
  -kube-subnet-mgr
        contact the Kubernetes API for subnet assignment instead of etcd.
  -kubeconfig-file string
        kubeconfig file location. Does not need to be specified if flannel is running in a pod.
  -log_backtrace_at value
        when logging hits line file:N, emit a stack trace
  -public-ip string
        IP accessible by other nodes for inter-host communication
  -subnet-file string
        filename where env variables (subnet, MTU, ... ) will be written to (default "/run/flannel/subnet.env")
  -subnet-lease-renew-margin int
        subnet lease renewal margin, in minutes, ranging from 1 to 1439 (default 60)
  -v value
        log level for V logs
  -version
        print version and exit
  -vmodule value
        comma-separated list of pattern=N settings for file-filtered logging
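For orientation, here are the two typical ways these flags get combined. This is only an illustrative sketch; the etcd endpoints are placeholders, not values from my cluster:

# Kubernetes-backed subnet management (the mode used in the rest of this article)
/opt/bin/flanneld --ip-masq --kube-subnet-mgr

# etcd-backed subnet management (only when -kube-subnet-mgr is not used)
/opt/bin/flanneld --ip-masq \
  --etcd-endpoints=http://10.0.0.2:2379,http://10.0.0.3:2379 \
  --etcd-prefix=/coreos.com/network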
A few notes:

We use -kube-subnet-mgr, which makes Flannel read its configuration from the corresponding ConfigMap through the Kubernetes APIServer. We also leave -kubeconfig-file and -kube-api-url unset, because we deploy Flannel as Pods via a DaemonSet, so flanneld authenticates to the Kubernetes APIServer with its ServiceAccount.

The alternative is to read the Flannel configuration directly from etcd, which requires setting the corresponding -etcd* flags.
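If you take the etcd route, the network configuration has to be written into etcd before flanneld starts. A minimal sketch, assuming the etcd v2 API and the default -etcd-prefix of /coreos.com/network (adjust the endpoint to your own etcd):

# flanneld reads the value of <etcd-prefix>/config at startup
etcdctl --endpoints=http://10.0.0.2:2379 \
  set /coreos.com/network/config '{"Network": "10.244.0.0/16", "Backend": {"Type": "host-gw"}}'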
-subnet-file defaults to /run/flannel/subnet.env and usually does not need to be changed. Flannel writes the environment variables describing this node's subnet into that file, and this is where the subnet information is actually read from later, for example:
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.26.1/24
FLANNEL_MTU=1500
FLANNEL_IPMASQ=true
-subnet-lease-renew-margin is how far ahead of the etcd lease expiry the subnet lease gets renewed automatically; the default is 60 minutes (1h). Since the lease TTL is 24h, this value naturally must stay below 24h, i.e. in the range [1, 1439] minutes.
Every command-line option above can also be supplied as an environment variable: uppercase the flag name, replace hyphens with underscores, and prepend the FLANNELD_ prefix. For example, --etcd-endpoints=http://10.0.0.2:2379 corresponds to the environment variable FLANNELD_ETCD_ENDPOINTS=http://10.0.0.2:2379.
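A minimal sketch of the environment-variable form, for instance in a systemd unit or container spec where you would rather not build up a long command line (the values are illustrative):

# Equivalent to: flanneld --etcd-endpoints=... --etcd-prefix=/coreos.com/network --ip-masq
export FLANNELD_ETCD_ENDPOINTS=http://10.0.0.2:2379
export FLANNELD_ETCD_PREFIX=/coreos.com/network
export FLANNELD_IP_MASQ=true
/opt/bin/flanneld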
Deploying Flannel with a Kubernetes DaemonSet is the obvious, uncontroversial choice. Alongside it we create the corresponding ClusterRole, ClusterRoleBinding, ServiceAccount, and ConfigMap. The complete YAML manifest looks like this:
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
  - kind: ServiceAccount
    name: flannel
    namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    k8s-app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "host-gw"
      }
    }
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel
  namespace: kube-system
  labels:
    tier: node
    k8s-app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        k8s-app: flannel
    spec:
      imagePullSecrets:
        - name: harborsecret
      serviceAccountName: flannel
      containers:
        - name: kube-flannel
          image: registry.vivo.xyz:4443/coreos/flannel:v0.10.0-amd64
          command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr" ]
          securityContext:
            privileged: true
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          volumeMounts:
            - name: run
              mountPath: /run
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
        - name: install-cni
          image: registry.vivo.xyz:4443/coreos/flannel-cni:v0.3.0
          command: ["/install-cni.sh"]
          #command: ["sleep","10000"]
          env:
            # The CNI network config to install on each node.
            - name: CNI_NETWORK_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: kube-flannel-cfg
                  key: cni-conf.json
          volumeMounts:
            #- name: cni
            #  mountPath: /etc/cni/net.d
            - name: cni
              mountPath: /host/etc/cni/net.d
            - name: host-cni-bin
              mountPath: /host/opt/cni/bin/
      hostNetwork: true
      tolerations:
        - key: node-role.kubernetes.io/master
          operator: Exists
          effect: NoSchedule
      volumes:
        - name: run
          hostPath:
            path: /run
        #- name: cni
        #  hostPath:
        #    path: /etc/kubernetes/cni/net.d
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
        - name: host-cni-bin
          hostPath:
            path: /etc/cni/net.d
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate
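Assuming the manifest above is saved as kube-flannel.yaml, deploying it and sanity-checking the result looks roughly like this (the label selector comes from the manifest):

kubectl apply -f kube-flannel.yaml
# Expect one kube-flannel Pod per node, each with the kube-flannel and install-cni containers
kubectl -n kube-system get ds kube-flannel
kubectl -n kube-system get pods -l k8s-app=flannel -o wide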
It is easy to mix up a few things here. What we usually call Flannel (coreos/flannel) is really flanneld. Everyone knows Kubernetes hooks up network plugins through the CNI standard, yet if you read the Flannel (coreos/flannel) code you will find no implementation of the CNI interface. If you have worked with other CNI plugins, you know there is also a binary that kubelet invokes, which in turn calls the backend network plugin. For Flannel (coreos/flannel), what is that binary, and where does its code live?

In this deployment that binary is /etc/cni/net.d/flannel on the host, and its source is at https://github.com/containernetworking/plugins. Most annoyingly, it is also simply named flannel; it would have been far less confusing to call it something like flannelk8s, the way contiv netplugin has its contivk8s.
The Flannel Pod above also contains a container named install-cni; its script lives at https://github.com/coreos/flannel-cni.
/opt/bin/flanneld --> https://github.com/coreos/flannel
/etc/cni/net.d/flannel --> https://github.com/containernetworking/plugins
/install-cni.sh --> https://github.com/coreos/flannel-cni
The kube-flannel container runs our protagonist, flanneld. The directories/files worth paying attention to inside this container are:
/etc/kube-flannel/cni-conf.json
/etc/kube-flannel/net-conf.json
/run/flannel/subnet.env
/opt/bin/flanneld
Here is what they look like in my environment:
/run/flannel # ls /etc/kube-flannel/
cni-conf.json   net-conf.json
/run/flannel # cat /etc/kube-flannel/cni-conf.json
{
  "name": "cbr0",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    }
  ]
}
/run/flannel # cat /etc/kube-flannel/net-conf.json
{
  "Network": "10.244.0.0/16",
  "Backend": {
    "Type": "host-gw"
  }
}
/run/flannel # cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.26.1/24
FLANNEL_MTU=1500
FLANNEL_IPMASQ=true
/run/flannel # ls /opt/bin/
flanneld           mk-docker-opts.sh
/run/flannel # cat /opt/bin/mk-docker-opts.sh
#!/bin/sh

usage() {
    echo "$0 [-f FLANNEL-ENV-FILE] [-d DOCKER-ENV-FILE] [-i] [-c] [-m] [-k COMBINED-KEY]

Generate Docker daemon options based on flannel env file
OPTIONS:
    -f  Path to flannel env file. Defaults to /run/flannel/subnet.env
    -d  Path to Docker env file to write to. Defaults to /run/docker_opts.env
    -i  Output each Docker option as individual var. e.g. DOCKER_OPT_MTU=1500
    -c  Output combined Docker options into DOCKER_OPTS var
    -k  Set the combined options key to this value (default DOCKER_OPTS=)
    -m  Do not output --ip-masq (useful for older Docker version)
" >&2

    exit 1
}

flannel_env="/run/flannel/subnet.env"
docker_env="/run/docker_opts.env"
combined_opts_key="DOCKER_OPTS"
indiv_opts=false
combined_opts=false
ipmasq=true

while getopts "f:d:icmk:?h" opt; do
    case $opt in
        f) flannel_env=$OPTARG ;;
        d) docker_env=$OPTARG ;;
        i) indiv_opts=true ;;
        c) combined_opts=true ;;
        m) ipmasq=false ;;
        k) combined_opts_key=$OPTARG ;;
        [\?h]) usage ;;
    esac
done

if [ $indiv_opts = false ] && [ $combined_opts = false ]; then
    indiv_opts=true
    combined_opts=true
fi

if [ -f "$flannel_env" ]; then
    . $flannel_env
fi

if [ -n "$FLANNEL_SUBNET" ]; then
    DOCKER_OPT_BIP="--bip=$FLANNEL_SUBNET"
fi

if [ -n "$FLANNEL_MTU" ]; then
    DOCKER_OPT_MTU="--mtu=$FLANNEL_MTU"
fi

if [ -n "$FLANNEL_IPMASQ" ] && [ $ipmasq = true ]; then
    if [ "$FLANNEL_IPMASQ" = true ]; then
        DOCKER_OPT_IPMASQ="--ip-masq=false"
    elif [ "$FLANNEL_IPMASQ" = false ]; then
        DOCKER_OPT_IPMASQ="--ip-masq=true"
    else
        echo "Invalid value of FLANNEL_IPMASQ: $FLANNEL_IPMASQ" >&2
        exit 1
    fi
fi

eval docker_opts="\$${combined_opts_key}"

if [ "$docker_opts" ]; then
    docker_opts="$docker_opts "
fi

echo -n "" >$docker_env

for opt in $(set | grep "DOCKER_OPT_"); do
    OPT_NAME=$(echo $opt | awk -F "=" '{print $1;}')
    OPT_VALUE=$(eval echo "\$$OPT_NAME")

    if [ "$indiv_opts" = true ]; then
        echo "$OPT_NAME=\"$OPT_VALUE\"" >>$docker_env
    fi

    docker_opts="$docker_opts $OPT_VALUE"
done

if [ "$combined_opts" = true ]; then
    echo "${combined_opts_key}=\"${docker_opts}\"" >>$docker_env
fi
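mk-docker-opts.sh only matters if you want dockerd itself to put containers on the flannel subnet via its own bridge; with the CNI setup described in this article kubelet drives the flannel plugin instead, so the script goes unused. Still, as a sketch of how it would be consumed, fed by the subnet.env shown above (the output path is the script's default):

/opt/bin/mk-docker-opts.sh -c -k DOCKER_NETWORK_OPTIONS
cat /run/docker_opts.env
# roughly: DOCKER_NETWORK_OPTIONS="--bip=10.244.26.1/24 --ip-masq=false --mtu=1500"
# dockerd can then pick these up, e.g. via EnvironmentFile= in its systemd unit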
As its name suggests, the install-cni container is responsible for installing the CNI plugin: it copies the flannel binary and the other CNI binaries from the image onto the host's /etc/cni/net.d. Note that this directory has to match kubelet's CNI configuration; if you have not changed kubelet's defaults, this is also the CNI directory kubelet is configured with by default. The directories/files worth paying attention to inside the install-cni container are:
/host/etc/cni/net.d/
/host/opt/cni/bin/
/host/etc/cni/net.d/10-flannel.conflist
Here is what they look like in my environment:
/host/etc/cni/net.d # pwd
/host/etc/cni/net.d
/host/etc/cni/net.d # ls
10-flannel.conflist  dhcp        ipvlan    noop     tuning
bridge               flannel     loopback  portmap  vlan
cnitool              host-local  macvlan   ptp
/host/etc/cni/net.d # cd /host/opt/cni/bin/
/host/opt/cni/bin # ls
10-flannel.conflist  dhcp        ipvlan    noop     tuning
bridge               flannel     loopback  portmap  vlan
cnitool              host-local  macvlan   ptp
/opt/cni/bin # ls
bridge   dhcp     host-local  loopback  noop     ptp     vlan
cnitool  flannel  ipvlan      macvlan   portmap  tuning
/opt/cni/bin # cat /host/etc/cni/net.d/10-flannel.conflist
{
  "name": "cbr0",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    }
  ]
}
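For reference, the heart of /install-cni.sh can be sketched in a few lines of shell. This is a simplified paraphrase of https://github.com/coreos/flannel-cni rather than the actual script; the paths correspond to the volumeMounts in the DaemonSet above:

# Copy the CNI plugin binaries bundled in the image onto the host
cp -f /opt/cni/bin/* /host/opt/cni/bin/

# Render the CNI network config (exposed from the ConfigMap as $CNI_NETWORK_CONFIG)
# into the directory kubelet reads its CNI config from, atomically via a temp file
TMP_CONF=/host/etc/cni/net.d/.tmp-flannel-cfg
echo "$CNI_NETWORK_CONFIG" > "$TMP_CONF"
mv "$TMP_CONF" /host/etc/cni/net.d/10-flannel.conflist

# Sleep forever so the Pod stays Running
while true; do sleep 3600; done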
A diagram makes all of this much clearer. Note that the colored parts are the Volume mappings; they are worth paying particular attention to.
The flow for creating a container network is: kubelet --> flannel (the CNI binary) --> flanneld. If Pods are created concurrently on a host, you will see several flannel processes in the background, but they normally exit within a few seconds, whereas flanneld is a long-running daemon.
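To make the kubelet --> flannel step concrete, the invocation kubelet performs (via libcni) for each Pod sandbox looks roughly like the following. Treat it as an illustration of the CNI calling convention, not an exact reproduction: libcni actually extracts the single plugin entry from 10-flannel.conflist and injects the network name for you, and the container ID and netns path below are placeholders:

# The flannel CNI plugin reads /run/flannel/subnet.env (written by flanneld),
# then delegates to the bridge and host-local plugins found on CNI_PATH.
CNI_COMMAND=ADD \
CNI_CONTAINERID=example0123 \
CNI_NETNS=/proc/12345/ns/net \
CNI_IFNAME=eth0 \
CNI_PATH=/opt/cni/bin \
/etc/cni/net.d/flannel <<'EOF'
{
  "name": "cbr0",
  "type": "flannel",
  "delegate": {
    "hairpinMode": true,
    "isDefaultGateway": true
  }
}
EOF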
OpenShift also uses the Flannel host-gw container network by default, and its documentation draws a clear data flow diagram for host-gw:
The ip routes on Node 1:
default via 192.168.0.100 dev eth0 proto static metric 100
10.1.15.0/24 dev docker0 proto kernel scope link src 10.1.15.1
10.1.20.0/24 via 192.168.0.200 dev eth0
The ip routes on Node 2:
default via 192.168.0.200 dev eth0 proto static metric 100
10.1.20.0/24 dev docker0 proto kernel scope link src 10.1.20.1
10.1.15.0/24 via 192.168.0.100 dev eth0
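In other words, all the host-gw backend has to do is keep one route per remote node up to date, pointing that node's Pod subnet at its host IP. The manual equivalent of what flanneld programs for the two nodes above would be roughly:

# On Node 1: Node 2's Pod subnet is reachable via Node 2's host IP
ip route add 10.1.20.0/24 via 192.168.0.200 dev eth0
# On Node 2: Node 1's Pod subnet is reachable via Node 1's host IP
ip route add 10.1.15.0/24 via 192.168.0.100 dev eth0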
In my cluster the subnets are managed with kube-subnet-mgr rather than directly through etcd v2.
When flanneld starts, the corresponding Node must already have a PodCIDR configured; you can check whether the .spec.podCIDR field is set in the node object returned by kubectl get node.
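For example (the node name and CIDR are from my environment, shown in full further below):

kubectl get node 10.21.36.79 -o jsonpath='{.spec.podCIDR}'
# 10.244.29.0/24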
There are two ways to configure a Node's CIDR (example flags follow the list):
manually configure kubelet's --pod-cidr on each Node;
set --allocate-node-cidrs=true --cluster-cidr=xx.xx.xx.xx/yy on kube-controller-manager and let the CIDR controller assign a PodCIDR to every node automatically.
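A sketch of what each option looks like; the concrete CIDRs are illustrative:

# Option 1: per node, on kubelet
kubelet --pod-cidr=10.244.29.0/24 ...

# Option 2: cluster-wide, on kube-controller-manager; the CIDR controller then carves
# a per-node subnet (a /24 by default, see --node-cidr-mask-size) out of the cluster CIDR
kube-controller-manager --allocate-node-cidrs=true --cluster-cidr=10.244.0.0/16 ...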
You will also notice that every Node carries a number of annotations starting with flannel; these are updated during RegisterNetwork every time flanneld starts, and they essentially serve as the node's subnet lease (a quick way to inspect them is shown after the list below).
flannel.alpha.coreos.com/backend-data: "null"
flannel.alpha.coreos.com/backend-type: host-gw
flannel.alpha.coreos.com/kube-subnet-manager: "true"
flannel.alpha.coreos.com/public-ip: xx.xx.xx.xx
flannel.alpha.coreos.com/public-ip-overwrite: yy.yy.yy.yy (optional)
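A quick way to check them on a given node (the node name is from my environment):

kubectl get node 10.21.36.79 -o yaml | grep flannel.alpha.coreos.com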
Here is the full object of one node in my environment:
# kubectl get no 10.21.36.79 -o yaml
apiVersion: v1
kind: Node
metadata:
  annotations:
    flannel.alpha.coreos.com/backend-data: "null"
    flannel.alpha.coreos.com/backend-type: host-gw
    flannel.alpha.coreos.com/kube-subnet-manager: "true"
    flannel.alpha.coreos.com/public-ip: 10.21.36.79
    node.alpha.kubernetes.io/ttl: "0"
    volumes.kubernetes.io/controller-managed-attach-detach: "true"
  creationTimestamp: 2018-02-09T07:18:06Z
  labels:
    beta.kubernetes.io/arch: amd64
    beta.kubernetes.io/os: linux
    kubernetes.io/hostname: 10.21.36.79
  name: 10.21.36.79
  resourceVersion: "45074326"
  selfLink: /api/v1/nodes/10.21.36.79
  uid: 5f91765e-0d69-11e8-88cb-f403434bff24
spec:
  externalID: 10.21.36.79
  podCIDR: 10.244.29.0/24
status:
  addresses:
  - address: 10.21.36.79
    type: InternalIP
  - address: 10.21.36.79
    type: Hostname
  allocatable:
    alpha.kubernetes.io/nvidia-gpu: "0"
    cpu: "34"
    memory: 362301176Ki
    pods: "200"
  capacity:
    alpha.kubernetes.io/nvidia-gpu: "0"
    cpu: "40"
    memory: 395958008Ki
    pods: "200"
  conditions:
  - lastHeartbeatTime: 2018-02-27T14:07:30Z
    lastTransitionTime: 2018-02-13T13:05:57Z
    message: kubelet has sufficient disk space available
    reason: KubeletHasSufficientDisk
    status: "False"
    type: OutOfDisk
  - lastHeartbeatTime: 2018-02-27T14:07:30Z
    lastTransitionTime: 2018-02-13T13:05:57Z
    message: kubelet has sufficient memory available
    reason: KubeletHasSufficientMemory
    status: "False"
    type: MemoryPressure
  - lastHeartbeatTime: 2018-02-27T14:07:30Z
    lastTransitionTime: 2018-02-13T13:05:57Z
    message: kubelet has no disk pressure
    reason: KubeletHasNoDiskPressure
    status: "False"
    type: DiskPressure
  - lastHeartbeatTime: 2018-02-27T14:07:30Z
    lastTransitionTime: 2018-02-13T13:05:57Z
    message: kubelet is posting ready status
    reason: KubeletReady
    status: "True"
    type: Ready
  daemonEndpoints:
    kubeletEndpoint:
      Port: 10250
  images:
  - names:
    - registry.vivo.xyz:4443/bigdata_release/tensorflow1.5.0@sha256:6d61595c8e85d3724ec42298f8f97cdc782c5d83dd8f651c2eb037c25f525071
    - registry.vivo.xyz:4443/bigdata_release/tensorflow1.5.0:v2.0
    sizeBytes: 3217838862
  - names:
    - registry.vivo.xyz:4443/bigdata_release/tensorflow1.3.0@sha256:d14b7776578e3e844bab203b17ae504a0696038c7106469504440841ce17e85f
    - registry.vivo.xyz:4443/bigdata_release/tensorflow1.3.0:v1.9
    sizeBytes: 2504726638
  - names:
    - registry.vivo.xyz:4443/coreos/flannel-cni@sha256:dc5b5b370700645efcacb1984ae1e48ec9e297acbb536251689a239f13d08850
    - registry.vivo.xyz:4443/coreos/flannel-cni:v0.3.0
    sizeBytes: 49786179
  - names:
    - registry.vivo.xyz:4443/coreos/flannel@sha256:2a1361c414acc80e00514bc7abdbe0cd3dc9b65a181e5ac7393363bcc8621f39
    - registry.vivo.xyz:4443/coreos/flannel:v0.10.0-amd64
    sizeBytes: 44577768
  - names:
    - registry.vivo.xyz:4443/google_containers/pause-amd64@sha256:3b3a29e3c90ae7762bdf587d19302e62485b6bef46e114b741f7d75dba023bd3
    - registry.vivo.xyz:4443/google_containers/pause-amd64:3.0
    sizeBytes: 746888
  nodeInfo:
    architecture: amd64
    bootID: bc7a36a4-2d9b-4caa-b852-445a5fb1b0b9
    containerRuntimeVersion: docker://1.12.6
    kernelVersion: 3.10.0-514.el7.x86_64
    kubeProxyVersion: v1.7.4+793658f2d7ca7
    kubeletVersion: v1.7.4+793658f2d7ca7
    machineID: edaf7dacea45404b9b3cfe053181d317
    operatingSystem: linux
    osImage: CentOS Linux 7 (Core)
    systemUUID: 30393137-3136-4336-5537-3335444C4C30
That should give you a clearer picture of how Flannel is configured; the best way to consolidate it is to run through the setup in a cluster of your own.