Offline installation of KubeSphere 2.1.1 and Kubernetes
Hi, I'm getting this error during an offline Multi-node installation:
Cauchy
From the error it looks like that image cannot be found. Could the machine's disk space be too small, so that the images were not fully imported? Please check that, then delete the *.tmp files under scripts/os and retry.
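For example, a rough sketch (run from the unpacked installer directory; paths and image names are illustrative):

```bash
df -h                               # confirm the node has enough free disk space for the images
docker images | grep -i kubesphere  # spot-check whether the offline images were actually imported
rm -f scripts/os/*.tmp              # clear the leftover marker files, then rerun the installer
```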
seal82
- edited
The installation fails because a few environment packages cannot be installed. They all exist in the local apt_repo. When I tried installing them manually one by one, I found that python3-software-properties depends on python3-pycurl, and the pip_repo folder does contain python3-pycurl, which means python3-pycurl is not being installed correctly. On top of that, this is a Python 3 dependency, yet the installer does not seem to install pip3 at all; there is only pip27.
Cauchy
- edited
seal82
Your system probably isn't a fresh install of an official clean image, is it? This looks like a dependency conflict. For apt dependency problems you can try troubleshooting with aptitude:
https://blog.csdn.net/Davidietop/article/details/88934783
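For example, a hedged sketch of letting aptitude propose a resolution:

```bash
sudo apt-get install aptitude                      # if it is not already present in the local apt_repo
sudo aptitude install software-properties-common   # aptitude shows the conflict and offers solutions
# accept or reject the proposed downgrades/holds until the dependency chain resolves
```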
seal82
Cauchy I'm on a clean Ubuntu install with nothing else on it.
- For some reason I can't post images, so I'll describe it in text.
- The error says software-properties-common depends on python3-software-properties, and /kubeinstaller/apt_repo/16.04.6/iso does contain python3-software-properties, so I tried installing that package manually with dpkg -i. It then failed saying it needs python3-pycurl. So I went to /kubeinstaller/pip_repo/pip27/iso and found pycurl-7.19.0.tar.gz there, but when I tried to extract it manually with sudo tar, it told me the files inside the archive are read-only and cannot be extracted. This raises two problems:
- 1. This is a Python 3 dependency, but /kubeinstaller/pip_repo/ only contains a pip27 folder and no pip3 folder, which means there is no local pip3 repository.
- 2. Even if some internal mechanism lets Python 3 share the pip27 folder as its local repository, the pycurl-7.19.0.tar.gz archive is completely read-only and simply cannot be opened.
Feynman
If the offline installer is giving you trouble, I suggest installing Kubernetes yourself first and then following the "Install KubeSphere on K8s offline" guide. That is much more convenient and also avoids problems with missing OS dependency packages.
- edited
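Roughly, that approach looks like the sketch below (file and resource names are illustrative; use whatever your offline package actually ships, with the images pushed to your local registry first):

```bash
# 1. Bring up Kubernetes yourself (kubeadm or any other supported way), then verify:
kubectl get nodes
kubectl get sc                               # KubeSphere needs a usable (default) StorageClass
# 2. Apply the ks-installer manifest shipped with the offline package:
kubectl apply -f kubesphere-installer.yaml   # illustrative file name
# 3. Follow the installation log until it reports success:
kubectl logs -n kubesphere-system deploy/ks-installer -f
```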
CentOS 7.5, three-node installation, fails with the error below:
TASK [prepare/nodes : GlusterFS | Installing glusterfs-client (YUM)] ********************************************************************************************************************************
Tuesday 14 April 2020 19:30:35 +0800 (0:00:00.429) 0:00:47.080 *********
FAILED - RETRYING: GlusterFS | Installing glusterfs-client (YUM) (5 retries left).
FAILED - RETRYING: GlusterFS | Installing glusterfs-client (YUM) (5 retries left).
FAILED - RETRYING: GlusterFS | Installing glusterfs-client (YUM) (5 retries left).
FAILED - RETRYING: GlusterFS | Installing glusterfs-client (YUM) (4 retries left).
FAILED - RETRYING: GlusterFS | Installing glusterfs-client (YUM) (4 retries left).
FAILED - RETRYING: GlusterFS | Installing glusterfs-client (YUM) (4 retries left).
FAILED - RETRYING: GlusterFS | Installing glusterfs-client (YUM) (3 retries left).
FAILED - RETRYING: GlusterFS | Installing glusterfs-client (YUM) (3 retries left).
FAILED - RETRYING: GlusterFS | Installing glusterfs-client (YUM) (3 retries left).
FAILED - RETRYING: GlusterFS | Installing glusterfs-client (YUM) (2 retries left).
FAILED - RETRYING: GlusterFS | Installing glusterfs-client (YUM) (2 retries left).
FAILED - RETRYING: GlusterFS | Installing glusterfs-client (YUM) (2 retries left).
FAILED - RETRYING: GlusterFS | Installing glusterfs-client (YUM) (1 retries left).
FAILED - RETRYING: GlusterFS | Installing glusterfs-client (YUM) (1 retries left).
FAILED - RETRYING: GlusterFS | Installing glusterfs-client (YUM) (1 retries left).
fatal: [node2]: FAILED! => {
"attempts": 5,
"changed": false,
"rc": 1,
"results": [
"Loaded plugins: langpacks\nResolving Dependencies\n--> Running transaction check\n---> Package glusterfs-fuse.x86_64 0:3.12.2-47.2.el7 will be installed\n--> Processing Dependency: glusterfs-libs(x86-64) = 3.12.2-47.2.el7 for package: glusterfs-fuse-3.12.2-47.2.el7.x86_64\n--> Processing Dependency: glusterfs-client-xlators(x86-64) = 3.12.2-47.2.el7 for package: glusterfs-fuse-3.12.2-47.2.el7.x86_64\n--> Processing Dependency: glusterfs(x86-64) = 3.12.2-47.2.el7 for package: glusterfs-fuse-3.12.2-47.2.el7.x86_64\n--> Running transaction check\n---> Package glusterfs.x86_64 0:3.8.4-53.el7.centos will be updated\n--> Processing Dependency: glusterfs(x86-64) = 3.8.4-53.el7.centos for package: glusterfs-api-3.8.4-53.el7.centos.x86_64\n---> Package glusterfs.x86_64 0:3.12.2-47.2.el7 will be an update\n---> Package glusterfs-client-xlators.x86_64 0:3.8.4-53.el7.centos will be updated\n--> Processing Dependency: glusterfs-client-xlators(x86-64) = 3.8.4-53.el7.centos for package: glusterfs-api-3.8.4-53.el7.centos.x86_64\n---> Package glusterfs-client-xlators.x86_64 0:3.12.2-47.2.el7 will be an update\n---> Package glusterfs-libs.x86_64 0:3.8.4-53.el7.centos will be updated\n--> Processing Dependency: glusterfs-libs(x86-64) = 3.8.4-53.el7.centos for package: glusterfs-cli-3.8.4-53.el7.centos.x86_64\n---> Package glusterfs-libs.x86_64 0:3.12.2-47.2.el7 will be an update\n--> Finished Dependency Resolution\n You could try using --skip-broken to work around the problem\n You could try running: rpm -Va --nofiles --nodigest\n"
]
}
MSG:
Error: Package: glusterfs-api-3.8.4-53.el7.centos.x86_64 (@anaconda)
Requires: glusterfs(x86-64) = 3.8.4-53.el7.centos
Removing: glusterfs-3.8.4-53.el7.centos.x86_64 (@anaconda)
glusterfs(x86-64) = 3.8.4-53.el7.centos
Updated By: glusterfs-3.12.2-47.2.el7.x86_64 (bash)
glusterfs(x86-64) = 3.12.2-47.2.el7
Error: Package: glusterfs-cli-3.8.4-53.el7.centos.x86_64 (@anaconda)
Requires: glusterfs-libs(x86-64) = 3.8.4-53.el7.centos
Removing: glusterfs-libs-3.8.4-53.el7.centos.x86_64 (@anaconda)
glusterfs-libs(x86-64) = 3.8.4-53.el7.centos
Updated By: glusterfs-libs-3.12.2-47.2.el7.x86_64 (bash)
glusterfs-libs(x86-64) = 3.12.2-47.2.el7
Error: Package: glusterfs-api-3.8.4-53.el7.centos.x86_64 (@anaconda)
Requires: glusterfs-client-xlators(x86-64) = 3.8.4-53.el7.centos
Removing: glusterfs-client-xlators-3.8.4-53.el7.centos.x86_64 (@anaconda)
glusterfs-client-xlators(x86-64) = 3.8.4-53.el7.centos
Updated By: glusterfs-client-xlators-3.12.2-47.2.el7.x86_64 (bash)
glusterfs-client-xlators(x86-64) = 3.12.2-47.2.el7
fatal: [node1]: FAILED! => {
"attempts": 5,
"changed": false,
"rc": 1,
"results": [
"Loaded plugins: langpacks\nResolving Dependencies\n--> Running transaction check\n---> Package glusterfs-fuse.x86_64 0:3.12.2-47.2.el7 will be installed\n--> Processing Dependency: glusterfs-libs(x86-64) = 3.12.2-47.2.el7 for package: glusterfs-fuse-3.12.2-47.2.el7.x86_64\n--> Processing Dependency: glusterfs-client-xlators(x86-64) = 3.12.2-47.2.el7 for package: glusterfs-fuse-3.12.2-47.2.el7.x86_64\n--> Processing Dependency: glusterfs(x86-64) = 3.12.2-47.2.el7 for package: glusterfs-fuse-3.12.2-47.2.el7.x86_64\n--> Running transaction check\n---> Package glusterfs.x86_64 0:3.8.4-53.el7.centos will be updated\n--> Processing Dependency: glusterfs(x86-64) = 3.8.4-53.el7.centos for package: glusterfs-api-3.8.4-53.el7.centos.x86_64\n---> Package glusterfs.x86_64 0:3.12.2-47.2.el7 will be an update\n---> Package glusterfs-client-xlators.x86_64 0:3.8.4-53.el7.centos will be updated\n--> Processing Dependency: glusterfs-client-xlators(x86-64) = 3.8.4-53.el7.centos for package: glusterfs-api-3.8.4-53.el7.centos.x86_64\n---> Package glusterfs-client-xlators.x86_64 0:3.12.2-47.2.el7 will be an update\n---> Package glusterfs-libs.x86_64 0:3.8.4-53.el7.centos will be updated\n--> Processing Dependency: glusterfs-libs(x86-64) = 3.8.4-53.el7.centos for package: glusterfs-cli-3.8.4-53.el7.centos.x86_64\n---> Package glusterfs-libs.x86_64 0:3.12.2-47.2.el7 will be an update\n--> Finished Dependency Resolution\n You could try using --skip-broken to work around the problem\n You could try running: rpm -Va --nofiles --nodigest\n"
]
}
MSG:
Error: Package: glusterfs-api-3.8.4-53.el7.centos.x86_64 (@anaconda)
Requires: glusterfs(x86-64) = 3.8.4-53.el7.centos
Removing: glusterfs-3.8.4-53.el7.centos.x86_64 (@anaconda)
glusterfs(x86-64) = 3.8.4-53.el7.centos
Updated By: glusterfs-3.12.2-47.2.el7.x86_64 (bash)
glusterfs(x86-64) = 3.12.2-47.2.el7
Error: Package: glusterfs-cli-3.8.4-53.el7.centos.x86_64 (@anaconda)
Requires: glusterfs-libs(x86-64) = 3.8.4-53.el7.centos
Removing: glusterfs-libs-3.8.4-53.el7.centos.x86_64 (@anaconda)
glusterfs-libs(x86-64) = 3.8.4-53.el7.centos
Updated By: glusterfs-libs-3.12.2-47.2.el7.x86_64 (bash)
glusterfs-libs(x86-64) = 3.12.2-47.2.el7
Error: Package: glusterfs-api-3.8.4-53.el7.centos.x86_64 (@anaconda)
Requires: glusterfs-client-xlators(x86-64) = 3.8.4-53.el7.centos
Removing: glusterfs-client-xlators-3.8.4-53.el7.centos.x86_64 (@anaconda)
glusterfs-client-xlators(x86-64) = 3.8.4-53.el7.centos
Updated By: glusterfs-client-xlators-3.12.2-47.2.el7.x86_64 (bash)
glusterfs-client-xlators(x86-64) = 3.12.2-47.2.el7
fatal: [master]: FAILED! => {
"attempts": 5,
"changed": false,
"rc": 1,
"results": [
"Loaded plugins: langpacks\nResolving Dependencies\n--> Running transaction check\n---> Package glusterfs-fuse.x86_64 0:3.12.2-47.2.el7 will be installed\n--> Processing Dependency: glusterfs-libs(x86-64) = 3.12.2-47.2.el7 for package: glusterfs-fuse-3.12.2-47.2.el7.x86_64\n--> Processing Dependency: glusterfs-client-xlators(x86-64) = 3.12.2-47.2.el7 for package: glusterfs-fuse-3.12.2-47.2.el7.x86_64\n--> Processing Dependency: glusterfs(x86-64) = 3.12.2-47.2.el7 for package: glusterfs-fuse-3.12.2-47.2.el7.x86_64\n--> Running transaction check\n---> Package glusterfs.x86_64 0:3.8.4-53.el7.centos will be updated\n--> Processing Dependency: glusterfs(x86-64) = 3.8.4-53.el7.centos for package: glusterfs-api-3.8.4-53.el7.centos.x86_64\n---> Package glusterfs.x86_64 0:3.12.2-47.2.el7 will be an update\n---> Package glusterfs-client-xlators.x86_64 0:3.8.4-53.el7.centos will be updated\n--> Processing Dependency: glusterfs-client-xlators(x86-64) = 3.8.4-53.el7.centos for package: glusterfs-api-3.8.4-53.el7.centos.x86_64\n---> Package glusterfs-client-xlators.x86_64 0:3.12.2-47.2.el7 will be an update\n---> Package glusterfs-libs.x86_64 0:3.8.4-53.el7.centos will be updated\n--> Processing Dependency: glusterfs-libs(x86-64) = 3.8.4-53.el7.centos for package: glusterfs-cli-3.8.4-53.el7.centos.x86_64\n---> Package glusterfs-libs.x86_64 0:3.12.2-47.2.el7 will be an update\n--> Finished Dependency Resolution\n You could try using --skip-broken to work around the problem\n** Found 1 pre-existing rpmdb problem(s), 'yum check' output follows:\npython-requests-2.6.0-7.el7_7.noarch has missing requires of python-urllib3 >= ('0', '1.10.2', '1')\n"
]
}
MSG:
Error: Package: glusterfs-api-3.8.4-53.el7.centos.x86_64 (@anaconda)
Requires: glusterfs(x86-64) = 3.8.4-53.el7.centos
Removing: glusterfs-3.8.4-53.el7.centos.x86_64 (@anaconda)
glusterfs(x86-64) = 3.8.4-53.el7.centos
Updated By: glusterfs-3.12.2-47.2.el7.x86_64 (bash)
glusterfs(x86-64) = 3.12.2-47.2.el7
Error: Package: glusterfs-cli-3.8.4-53.el7.centos.x86_64 (@anaconda)
Requires: glusterfs-libs(x86-64) = 3.8.4-53.el7.centos
Removing: glusterfs-libs-3.8.4-53.el7.centos.x86_64 (@anaconda)
glusterfs-libs(x86-64) = 3.8.4-53.el7.centos
Updated By: glusterfs-libs-3.12.2-47.2.el7.x86_64 (bash)
glusterfs-libs(x86-64) = 3.12.2-47.2.el7
Error: Package: glusterfs-api-3.8.4-53.el7.centos.x86_64 (@anaconda)
Requires: glusterfs-client-xlators(x86-64) = 3.8.4-53.el7.centos
Removing: glusterfs-client-xlators-3.8.4-53.el7.centos.x86_64 (@anaconda)
glusterfs-client-xlators(x86-64) = 3.8.4-53.el7.centos
Updated By: glusterfs-client-xlators-3.12.2-47.2.el7.x86_64 (bash)
glusterfs-client-xlators(x86-64) = 3.12.2-47.2.el7
PLAY RECAP ******************************************************************************************************************************************************************************************
master : ok=8 changed=4 unreachable=0 failed=1
node1 : ok=9 changed=7 unreachable=0 failed=1
node2 : ok=9 changed=7 unreachable=0 failed=1
Tuesday 14 April 2020 19:32:30 +0800 (0:01:54.751) 0:02:41.831 *********
===============================================================================
prepare/nodes : GlusterFS | Installing glusterfs-client (YUM) ------------------------------------------------------------------------------------------------------------------------------ 114.75s
prepare/nodes : Ceph RBD | Installing ceph-common (YUM) ------------------------------------------------------------------------------------------------------------------------------------- 22.55s
prepare/nodes : KubeSphere| Installing JQ (YUM) ---------------------------------------------------------------------------------------------------------------------------------------------- 6.85s
prepare/nodes : Copy admin kubeconfig to root user home -------------------------------------------------------------------------------------------------------------------------------------- 2.89s
download : Download items -------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2.38s
download : Sync container -------------------------------------------------------------------------------------------------------------------------------------------------------------------- 1.95s
prepare/nodes : Install kubectl bash completion ---------------------------------------------------------------------------------------------------------------------------------------------- 1.54s
prepare/nodes : Create kube config dir ------------------------------------------------------------------------------------------------------------------------------------------------------- 1.43s
prepare/nodes : Set kubectl bash completion file --------------------------------------------------------------------------------------------------------------------------------------------- 1.21s
fetch config --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 1.08s
modify resolv.conf --------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 1.01s
prepare/nodes : Fetch /etc/os-release -------------------------------------------------------------------------------------------------------------------------------------------------------- 0.71s
Get the tag date ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 0.59s
modify resolv.conf --------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 0.44s
prepare/nodes : Ceph RBD | Installing ceph-common (APT) -------------------------------------------------------------------------------------------------------------------------------------- 0.43s
download : include_tasks --------------------------------------------------------------------------------------------------------------------------------------------------------------------- 0.43s
prepare/nodes : GlusterFS | Installing glusterfs-client (APT) -------------------------------------------------------------------------------------------------------------------------------- 0.43s
prepare/nodes : KubeSphere | Installing JQ (APT) --------------------------------------------------------------------------------------------------------------------------------------------- 0.34s
kubesphere-defaults : Configure defaults ----------------------------------------------------------------------------------------------------------------------------------------------------- 0.28s
set dev version ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 0.27s
failed!
**********************************
please refer to https://kubesphere.io/docs/v2.1/zh-CN/faq/faq-install/
**********************************
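For the record, the conflict above comes from the glusterfs-api and glusterfs-cli 3.8.4 packages preinstalled from the CentOS media (@anaconda): they pin the old version while yum tries to update only part of the glusterfs family to 3.12.2. A hedged workaround sketch is to update (or remove) the whole family in one transaction before rerunning the installer:

```bash
rpm -qa 'glusterfs*'           # see which 3.8.4 packages came from the install media
sudo yum update 'glusterfs*'   # update api/cli/libs/client-xlators together in one transaction
# or, if the api/cli packages are not needed on the node:
# sudo yum remove glusterfs-api glusterfs-cli
```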
ks-5937
The kubesphere-all-offline-v2.1.1.tar.gz offline package seems to be missing the flannel images, so only the default calico can be used?
Or is there a way to patch it: which images does flannel need, so that I can put them back manually?
2020-04-18 13:09:24,698 p=16068 u=xxxxxx | TASK [download : download_container | Download image if required ( 192.168.1.202:5000/coreos/flannel-cni:v0.3.0 )] *******************************************************************
2020-04-18 13:09:24,698 p=16068 u=xxxxxx | Saturday 18 April 2020 13:09:24 +0800 (0:00:00.115) 0:01:46.334 ********
2020-04-18 13:09:24,909 p=16068 u=xxxxxx | FAILED - RETRYING: download_container | Download image if required ( 192.168.1.202:5000/coreos/flannel-cni:v0.3.0 ) (4 retries left).
2020-04-18 13:09:24,978 p=16068 u=xxxxxx | FAILED - RETRYING: download_container | Download image if required ( 192.168.1.202:5000/coreos/flannel-cni:v0.3.0 ) (4 retries left).
2020-04-18 13:09:25,020 p=16068 u=xxxxxx | FAILED - RETRYING: download_container | Download image if required ( 192.168.1.202:5000/coreos/flannel-cni:v0.3.0 ) (4 retries left).
2020-04-18 13:09:31,195 p=16068 u=xxxxxx | FAILED - RETRYING: download_container | Download image if required ( 192.168.1.202:5000/coreos/flannel-cni:v0.3.0 ) (3 retries left).
2020-04-18 13:09:32,151 p=16068 u=xxxxxx | FAILED - RETRYING: download_container | Download image if required ( 192.168.1.202:5000/coreos/flannel-cni:v0.3.0 ) (3 retries left).
2020-04-18 13:09:33,086 p=16068 u=xxxxxx | FAILED - RETRYING: download_container | Download image if required ( 192.168.1.202:5000/coreos/flannel-cni:v0.3.0 ) (3 retries left).
2020-04-18 13:09:37,372 p=16068 u=xxxxxx | FAILED - RETRYING: download_container | Download image if required ( 192.168.1.202:5000/coreos/flannel-cni:v0.3.0 ) (2 retries left).
2020-04-18 13:09:39,327 p=16068 u=xxxxxx | FAILED - RETRYING: download_container | Download image if required ( 192.168.1.202:5000/coreos/flannel-cni:v0.3.0 ) (2 retries left).
2020-04-18 13:09:41,250 p=16068 u=xxxxxx | FAILED - RETRYING: download_container | Download image if required ( 192.168.1.202:5000/coreos/flannel-cni:v0.3.0 ) (2 retries left).
2020-04-18 13:09:43,558 p=16068 u=xxxxxx | FAILED - RETRYING: download_container | Download image if required ( 192.168.1.202:5000/coreos/flannel-cni:v0.3.0 ) (1 retries left).
2020-04-18 13:09:46,493 p=16068 u=xxxxxx | FAILED - RETRYING: download_container | Download image if required ( 192.168.1.202:5000/coreos/flannel-cni:v0.3.0 ) (1 retries left).
2020-04-18 13:09:49,409 p=16068 u=xxxxxx | FAILED - RETRYING: download_container | Download image if required ( 192.168.1.202:5000/coreos/flannel-cni:v0.3.0 ) (1 retries left).
2020-04-18 13:09:49,759 p=16068 u=xxxxxx | fatal: [k8s-m-202 -> k8s-m-202]: FAILED! => {
"attempts": 4,
"changed": true,
"cmd": [
"/usr/bin/docker",
"pull",
"192.168.1.202:5000/coreos/flannel-cni:v0.3.0"
],
"delta": "0:00:00.052470",
"end": "2020-04-18 13:09:49.745324",
"rc": 1,
"start": "2020-04-18 13:09:49.692854"
}
STDERR:
Error response from daemon: manifest for 192.168.1.202:5000/coreos/flannel-cni:v0.3.0 not found
MSG:
non-zero return code
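If you want to try patching the registry by hand, a rough sketch (the source repository below is an assumption; check the installer's image list for any other flannel images your version expects):

```bash
# on a machine with internet access, pull the image the task is asking for,
# then retag and push it into the local registry at 192.168.1.202:5000
docker pull quay.io/coreos/flannel-cni:v0.3.0
docker tag quay.io/coreos/flannel-cni:v0.3.0 192.168.1.202:5000/coreos/flannel-cni:v0.3.0
docker push 192.168.1.202:5000/coreos/flannel-cni:v0.3.0
```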
After the installation, the log never shows the success screen. The log is below, and I cannot access the service. What could be the reason? Uninstalling and reinstalling gives the same result.
```
2020-04-23T03:29:05Z INFO : shell-operator v1.0.0-beta.5
2020-04-23T03:29:05Z INFO : HTTP SERVER Listening on 0.0.0.0:9115
2020-04-23T03:29:05Z INFO : Use temporary dir: /tmp/shell-operator
2020-04-23T03:29:05Z INFO : Initialize hooks manager …
2020-04-23T03:29:05Z INFO : Search and load hooks …
2020-04-23T03:29:05Z INFO : Load hook config from '/hooks/kubesphere/installRunner.py'
2020-04-23T03:29:06Z INFO : Initializing schedule manager …
2020-04-23T03:29:06Z INFO : KUBE Init Kubernetes client
2020-04-23T03:29:06Z INFO : KUBE-INIT Kubernetes client is configured successfully
2020-04-23T03:29:06Z INFO : MAIN: run main loop
2020-04-23T03:29:06Z INFO : MAIN: add onStartup tasks
2020-04-23T03:29:06Z INFO : Running schedule manager …
2020-04-23T03:29:06Z INFO : QUEUE add all HookRun@OnStartup
2020-04-23T03:29:06Z INFO : MSTOR Create new metric shell_operator_live_ticks
2020-04-23T03:29:06Z INFO : MSTOR Create new metric shell_operator_tasks_queue_length
2020-04-23T03:29:06Z INFO : GVR for kind 'ConfigMap' is /v1, Resource=configmaps
2020-04-23T03:29:06Z INFO : EVENT Kube event '2d28f5bb-f5a5-4134-b784-46f90a4011d1'
2020-04-23T03:29:06Z INFO : QUEUE add TASK_HOOK_RUN@KUBE_EVENTS kubesphere/installRunner.py
2020-04-23T03:29:09Z INFO : TASK_RUN HookRun@KUBE_EVENTS kubesphere/installRunner.py
2020-04-23T03:29:09Z INFO : Running hook 'kubesphere/installRunner.py' binding 'KUBE_EVENTS' ...
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that
the implicit localhost does not match 'all'
PLAY [localhost] ***************************************************************
TASK [download : include_tasks] ************************************************
skipping: [localhost]
TASK [download : Download items] ***********************************************
skipping: [localhost]
TASK [download : Sync container] ***********************************************
skipping: [localhost]
TASK [kubesphere-defaults : Configure defaults] ********************************
ok: [localhost] => {
"msg": "Check roles/kubesphere-defaults/defaults/main.yml"
}
TASK [preinstall : check k8s version] ******************************************
changed: [localhost]
TASK [preinstall : init k8s version] *******************************************
ok: [localhost]
TASK [preinstall : Stop if kuernetes version is nonsupport] ********************
ok: [localhost] => {
"changed": false,
"msg": "All assertions passed"
}
TASK [preinstall : check helm status] ******************************************
changed: [localhost]
TASK [preinstall : Stop if Helm is not available] ******************************
ok: [localhost] => {
"changed": false,
"msg": "All assertions passed"
}
TASK [preinstall : check storage class] ****************************************
changed: [localhost]
TASK [preinstall : Stop if StorageClass was not found] *************************
ok: [localhost] => {
"changed": false,
"msg": "All assertions passed"
}
TASK [preinstall : check default storage class] ********************************
changed: [localhost]
TASK [preinstall : Stop if defaultStorageClass was not found] ******************
skipping: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=9 changed=4 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that
the implicit localhost does not match 'all'
PLAY [localhost] ***************************************************************
TASK [download : include_tasks] ************************************************
skipping: [localhost]
TASK [download : Download items] ***********************************************
skipping: [localhost]
TASK [download : Sync container] ***********************************************
skipping: [localhost]
TASK [kubesphere-defaults : Configure defaults] ********************************
ok: [localhost] => {
"msg": "Check roles/kubesphere-defaults/defaults/main.yml"
}
TASK [metrics-server : Metrics-Server | Checking old installation files] *******
ok: [localhost]
TASK [metrics-server : Metrics-Server | deleting old prometheus-operator] ******
skipping: [localhost]
TASK [metrics-server : Metrics-Server | deleting old metrics-server files] *****
[DEPRECATION WARNING]: evaluating {'failed': False, u'stat': {u'exists':
False}, u'changed': False} as a bare variable, this behaviour will go away and
you might need to add |bool to the expression in the future. Also see
CONDITIONAL_BARE_VARS configuration toggle.. This feature will be removed in
version 2.12. Deprecation warnings can be disabled by setting
deprecation_warnings=False in ansible.cfg.
ok: [localhost] => (item=metrics-server)
TASK [metrics-server : Metrics-Server | Getting metrics-server installation files] ***
changed: [localhost]
TASK [metrics-server : Metrics-Server | Creating manifests] ********************
changed: [localhost] => (item={u'type': u'config', u'name': u'values', u'file': u'values.yaml'})
TASK [metrics-server : Metrics-Server | Check Metrics-Server] ******************
changed: [localhost]
TASK [metrics-server : Metrics-Server | Installing metrics-server] *************
changed: [localhost]
TASK [metrics-server : Metrics-Server | Installing metrics-server retry] *******
skipping: [localhost]
TASK [metrics-server : Metrics-Server | Waitting for v1beta1.metrics.k8s.io ready] ***
FAILED - RETRYING: Metrics-Server | Waitting for v1beta1.metrics.k8s.io ready (60 retries left).
changed: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=8 changed=5 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that
the implicit localhost does not match 'all'
PLAY [localhost] ***************************************************************
TASK [download : include_tasks] ************************************************
skipping: [localhost]
TASK [download : Download items] ***********************************************
skipping: [localhost]
TASK [download : Sync container] ***********************************************
skipping: [localhost]
TASK [kubesphere-defaults : Configure defaults] ********************************
ok: [localhost] => {
"msg": "Check roles/kubesphere-defaults/defaults/main.yml"
}
TASK [common : Kubesphere | Check kube-node-lease namespace] *******************
changed: [localhost]
TASK [common : KubeSphere | Get system namespaces] *****************************
ok: [localhost]
TASK [common : set_fact] *******************************************************
ok: [localhost]
TASK [common : debug] **********************************************************
ok: [localhost] => {
"msg": [
"kubesphere-system",
"kubesphere-controls-system",
"kubesphere-monitoring-system",
"kube-node-lease",
"kubesphere-logging-system",
"openpitrix-system",
"kubesphere-devops-system",
"istio-system",
"kubesphere-alerting-system",
"istio-system"
]
}
TASK [common : KubeSphere | Create kubesphere namespace] ***********************
changed: [localhost] => (item=kubesphere-system)
changed: [localhost] => (item=kubesphere-controls-system)
changed: [localhost] => (item=kubesphere-monitoring-system)
changed: [localhost] => (item=kube-node-lease)
changed: [localhost] => (item=kubesphere-logging-system)
changed: [localhost] => (item=openpitrix-system)
changed: [localhost] => (item=kubesphere-devops-system)
changed: [localhost] => (item=istio-system)
changed: [localhost] => (item=kubesphere-alerting-system)
changed: [localhost] => (item=istio-system)
changed: [localhost] => (item=istio-system)
TASK [common : KubeSphere | Labeling system-workspace] *************************
changed: [localhost] => (item=default)
changed: [localhost] => (item=kube-public)
changed: [localhost] => (item=kube-system)
changed: [localhost] => (item=kubesphere-system)
changed: [localhost] => (item=kubesphere-controls-system)
changed: [localhost] => (item=kubesphere-monitoring-system)
changed: [localhost] => (item=kube-node-lease)
changed: [localhost] => (item=kubesphere-logging-system)
changed: [localhost] => (item=openpitrix-system)
changed: [localhost] => (item=kubesphere-devops-system)
changed: [localhost] => (item=istio-system)
changed: [localhost] => (item=kubesphere-alerting-system)
changed: [localhost] => (item=istio-system)
changed: [localhost] => (item=istio-system)
TASK [common : KubeSphere | Create ImagePullSecrets] ***************************
changed: [localhost] => (item=default)
changed: [localhost] => (item=kube-public)
changed: [localhost] => (item=kube-system)
changed: [localhost] => (item=kubesphere-system)
changed: [localhost] => (item=kubesphere-controls-system)
changed: [localhost] => (item=kubesphere-monitoring-system)
changed: [localhost] => (item=kube-node-lease)
changed: [localhost] => (item=kubesphere-logging-system)
changed: [localhost] => (item=openpitrix-system)
changed: [localhost] => (item=kubesphere-devops-system)
changed: [localhost] => (item=istio-system)
changed: [localhost] => (item=kubesphere-alerting-system)
changed: [localhost] => (item=istio-system)
TASK [common : KubeSphere | Getting kubernetes master num] *********************
changed: [localhost]
TASK [common : KubeSphere | Setting master num] ********************************
ok: [localhost]
TASK [common : Kubesphere | Getting common component installation files] *******
changed: [localhost] => (item=common)
changed: [localhost] => (item=ks-crds)
TASK [common : KubeSphere | Create KubeSphere crds] ****************************
changed: [localhost]
TASK [common : Kubesphere | Checking openpitrix common component] **************
changed: [localhost]
TASK [common : include_tasks] **************************************************
skipping: [localhost] => (item={u'ks': u'mysql-pvc', u'op': u'openpitrix-db'})
skipping: [localhost] => (item={u'ks': u'etcd-pvc', u'op': u'openpitrix-etcd'})
TASK [common : Getting PersistentVolumeName (mysql)] ***************************
skipping: [localhost]
TASK [common : Getting PersistentVolumeSize (mysql)] ***************************
skipping: [localhost]
TASK [common : Setting PersistentVolumeName (mysql)] ***************************
skipping: [localhost]
TASK [common : Setting PersistentVolumeSize (mysql)] ***************************
skipping: [localhost]
TASK [common : Getting PersistentVolumeName (etcd)] ****************************
skipping: [localhost]
TASK [common : Getting PersistentVolumeSize (etcd)] ****************************
skipping: [localhost]
TASK [common : Setting PersistentVolumeName (etcd)] ****************************
skipping: [localhost]
TASK [common : Setting PersistentVolumeSize (etcd)] ****************************
skipping: [localhost]
TASK [common : Kubesphere | Check mysql PersistentVolumeClaim] *****************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system mysql-pvc -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.525579", "end": "2020-04-23 03:30:10.804687", "msg": "non-zero return code", "rc": 1, "start": "2020-04-23 03:30:10.279108", "stderr": "Error from server (NotFound): persistentvolumeclaims \"mysql-pvc\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"mysql-pvc\" not found"], "stdout": "", "stdout_lines": []}
…ignoring
TASK [common : Kubesphere | Setting mysql db pv size] **************************
skipping: [localhost]
TASK [common : Kubesphere | Check redis PersistentVolumeClaim] *****************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system redis-pvc -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.517019", "end": "2020-04-23 03:30:11.472357", "msg": "non-zero return code", "rc": 1, "start": "2020-04-23 03:30:10.955338", "stderr": "Error from server (NotFound): persistentvolumeclaims \"redis-pvc\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"redis-pvc\" not found"], "stdout": "", "stdout_lines": []}
…ignoring
TASK [common : Kubesphere | Setting redis db pv size] **************************
skipping: [localhost]
TASK [common : Kubesphere | Check minio PersistentVolumeClaim] *****************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system minio -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.523614", "end": "2020-04-23 03:30:12.126804", "msg": "non-zero return code", "rc": 1, "start": "2020-04-23 03:30:11.603190", "stderr": "Error from server (NotFound): persistentvolumeclaims \"minio\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"minio\" not found"], "stdout": "", "stdout_lines": []}
…ignoring
TASK [common : Kubesphere | Setting minio pv size] *****************************
skipping: [localhost]
TASK [common : Kubesphere | Check openldap PersistentVolumeClaim] **************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system openldap-pvc-openldap-0 -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.529545", "end": "2020-04-23 03:30:12.921917", "msg": "non-zero return code", "rc": 1, "start": "2020-04-23 03:30:12.392372", "stderr": "Error from server (NotFound): persistentvolumeclaims \"openldap-pvc-openldap-0\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"openldap-pvc-openldap-0\" not found"], "stdout": "", "stdout_lines": []}
…ignoring
TASK [common : Kubesphere | Setting openldap pv size] **************************
skipping: [localhost]
TASK [common : Kubesphere | Check etcd db PersistentVolumeClaim] ***************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system etcd-pvc -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.517999", "end": "2020-04-23 03:30:13.570670", "msg": "non-zero return code", "rc": 1, "start": "2020-04-23 03:30:13.052671", "stderr": "Error from server (NotFound): persistentvolumeclaims \"etcd-pvc\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"etcd-pvc\" not found"], "stdout": "", "stdout_lines": []}
…ignoring
TASK [common : Kubesphere | Setting etcd pv size] ******************************
skipping: [localhost]
TASK [common : Kubesphere | Check redis ha PersistentVolumeClaim] **************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl get pvc -n kubesphere-system data-redis-ha-server-0 -o jsonpath='{.status.capacity.storage}'\n", "delta": "0:00:00.516003", "end": "2020-04-23 03:30:14.290529", "msg": "non-zero return code", "rc": 1, "start": "2020-04-23 03:30:13.774526", "stderr": "Error from server (NotFound): persistentvolumeclaims \"data-redis-ha-server-0\" not found", "stderr_lines": ["Error from server (NotFound): persistentvolumeclaims \"data-redis-ha-server-0\" not found"], "stdout": "", "stdout_lines": []}
…ignoring
TASK [common : Kubesphere | Setting redis ha pv size] **************************
skipping: [localhost]
TASK [common : Kubesphere | Creating common component manifests] ***************
changed: [localhost] => (item={u'path': u'etcd', u'file': u'etcd.yaml'})
changed: [localhost] => (item={u'name': u'mysql', u'file': u'mysql.yaml'})
changed: [localhost] => (item={u'path': u'redis', u'file': u'redis.yaml'})
TASK [common : Kubesphere | Creating mysql sercet] *****************************
changed: [localhost]
TASK [common : Kubesphere | Deploying etcd and mysql] **************************
skipping: [localhost] => (item=etcd.yaml)
skipping: [localhost] => (item=mysql.yaml)
TASK [common : Kubesphere | Getting minio installation files] ******************
skipping: [localhost] => (item=minio-ha)
TASK [common : Kubesphere | Creating manifests] ********************************
skipping: [localhost] => (item={u'name': u'custom-values-minio', u'file': u'custom-values-minio.yaml'})
TASK [common : Kubesphere | Check minio] ***************************************
skipping: [localhost]
TASK [common : Kubesphere | Deploy minio] **************************************
skipping: [localhost]
TASK [common : debug] **********************************************************
skipping: [localhost]
TASK [common : fail] ***********************************************************
skipping: [localhost]
TASK [common : Kubesphere | create minio config directory] *********************
skipping: [localhost]
TASK [common : Kubesphere | Creating common component manifests] ***************
skipping: [localhost] => (item={u'path': u'/root/.config/rclone', u'file': u'rclone.conf'})
TASK [common : include_tasks] **************************************************
skipping: [localhost] => (item=helm)
skipping: [localhost] => (item=vmbased)
TASK [common : Kubesphere | Check ha-redis] ************************************
skipping: [localhost]
TASK [common : Kubesphere | Getting redis installation files] ******************
skipping: [localhost] => (item=redis-ha)
TASK [common : Kubesphere | Creating manifests] ********************************
skipping: [localhost] => (item={u'name': u'custom-values-redis', u'file': u'custom-values-redis.yaml'})
TASK [common : Kubesphere | Check old redis status] ****************************
skipping: [localhost]
TASK [common : Kubesphere | Delete and backup old redis svc] *******************
skipping: [localhost]
TASK [common : Kubesphere | Deploying redis] ***********************************
skipping: [localhost]
TASK [common : Kubesphere | Getting redis PodIp] *******************************
skipping: [localhost]
TASK [common : Kubesphere | Creating redis migration script] *******************
skipping: [localhost] => (item={u'path': u'/etc/kubesphere', u'file': u'redisMigrate.py'})
TASK [common : Kubesphere | Check redis-ha status] *****************************
skipping: [localhost]
TASK [common : ks-logging | Migrating redis data] ******************************
skipping: [localhost]
TASK [common : Kubesphere | Disable old redis] *********************************
skipping: [localhost]
TASK [common : Kubesphere | Deploying redis] ***********************************
skipping: [localhost] => (item=redis.yaml)
TASK [common : Kubesphere | Getting openldap installation files] ***************
skipping: [localhost] => (item=openldap-ha)
TASK [common : Kubesphere | Creating manifests] ********************************
skipping: [localhost] => (item={u'name': u'custom-values-openldap', u'file': u'custom-values-openldap.yaml'})
TASK [common : Kubesphere | Check old openldap status] *************************
skipping: [localhost]
TASK [common : KubeSphere | Shutdown ks-account] *******************************
skipping: [localhost]
TASK [common : Kubesphere | Delete and backup old openldap svc] ****************
skipping: [localhost]
TASK [common : Kubesphere | Check openldap] ************************************
skipping: [localhost]
TASK [common : Kubesphere | Deploy openldap] ***********************************
skipping: [localhost]
TASK [common : Kubesphere | Load old openldap data] ****************************
skipping: [localhost]
TASK [common : Kubesphere | Check openldap-ha status] **************************
skipping: [localhost]
TASK [common : Kubesphere | Get openldap-ha pod list] **************************
skipping: [localhost]
TASK [common : Kubesphere | Get old openldap data] *****************************
skipping: [localhost]
TASK [common : Kubesphere | Migrating openldap data] ***************************
skipping: [localhost]
TASK [common : Kubesphere | Disable old openldap] ******************************
skipping: [localhost]
TASK [common : Kubesphere | Restart openldap] **********************************
skipping: [localhost]
TASK [common : KubeSphere | Restarting ks-account] *****************************
skipping: [localhost]
TASK [common : Kubesphere | Check ha-redis] ************************************
changed: [localhost]
TASK [common : Kubesphere | Getting redis installation files] ******************
skipping: [localhost] => (item=redis-ha)
TASK [common : Kubesphere | Creating manifests] ********************************
skipping: [localhost] => (item={u'name': u'custom-values-redis', u'file': u'custom-values-redis.yaml'})
TASK [common : Kubesphere | Check old redis status] ****************************
skipping: [localhost]
TASK [common : Kubesphere | Delete and backup old redis svc] *******************
skipping: [localhost]
TASK [common : Kubesphere | Deploying redis] ***********************************
skipping: [localhost]
TASK [common : Kubesphere | Getting redis PodIp] *******************************
skipping: [localhost]
TASK [common : Kubesphere | Creating redis migration script] *******************
skipping: [localhost] => (item={u'path': u'/etc/kubesphere', u'file': u'redisMigrate.py'})
TASK [common : Kubesphere | Check redis-ha status] *****************************
skipping: [localhost]
TASK [common : ks-logging | Migrating redis data] ******************************
skipping: [localhost]
TASK [common : Kubesphere | Disable old redis] *********************************
skipping: [localhost]
TASK [common : Kubesphere | Deploying redis] ***********************************
changed: [localhost] => (item=redis.yaml)
TASK [common : Kubesphere | Getting openldap installation files] ***************
changed: [localhost] => (item=openldap-ha)
TASK [common : Kubesphere | Creating manifests] ********************************
changed: [localhost] => (item={u'name': u'custom-values-openldap', u'file': u'custom-values-openldap.yaml'})
TASK [common : Kubesphere | Check old openldap status] *************************
changed: [localhost]
TASK [common : KubeSphere | Shutdown ks-account] *******************************
skipping: [localhost]
TASK [common : Kubesphere | Delete and backup old openldap svc] ****************
skipping: [localhost]
TASK [common : Kubesphere | Check openldap] ************************************
changed: [localhost]
TASK [common : Kubesphere | Deploy openldap] ***********************************
changed: [localhost]
TASK [common : Kubesphere | Load old openldap data] ****************************
skipping: [localhost]
TASK [common : Kubesphere | Check openldap-ha status] **************************
skipping: [localhost]
TASK [common : Kubesphere | Get openldap-ha pod list] **************************
skipping: [localhost]
TASK [common : Kubesphere | Get old openldap data] *****************************
skipping: [localhost]
TASK [common : Kubesphere | Migrating openldap data] ***************************
skipping: [localhost]
TASK [common : Kubesphere | Disable old openldap] ******************************
skipping: [localhost]
TASK [common : Kubesphere | Restart openldap] **********************************
skipping: [localhost]
TASK [common : KubeSphere | Restarting ks-account] *****************************
skipping: [localhost]
TASK [common : Kubesphere | Getting minio installation files] ******************
changed: [localhost] => (item=minio-ha)
TASK [common : Kubesphere | Creating manifests] ********************************
changed: [localhost] => (item={u'name': u'custom-values-minio', u'file': u'custom-values-minio.yaml'})
TASK [common : Kubesphere | Check minio] ***************************************
changed: [localhost]
TASK [common : Kubesphere | Deploy minio] **************************************
```
After a minimal install of CentOS 7.5, running the AllInOne installation hits the following error:
```
2020-04-23 21:23:09,749 p=24810 u=root | Thursday 23 April 2020 21:23:09 +0800 (0:00:00.228) 0:00:16.730 ********
2020-04-23 21:23:09,787 p=24810 u=root | skipping: [ks-allinone]
2020-04-23 21:23:09,861 p=24810 u=root | TASK [Create repo.d] ******************************************************************************
2020-04-23 21:23:09,861 p=24810 u=root | Thursday 23 April 2020 21:23:09 +0800 (0:00:00.111) 0:00:16.842 ********
2020-04-23 21:23:09,977 p=24810 u=root | fatal: [ks-allinone]: FAILED! => {
"changed": true,
"rc": 1
}
STDERR:
mkdir: cannot create directory "/etc/yum.repos.d": File exists
MSG:
non-zero return code
2020-04-23 21:23:09,978 p=24810 u=root | …ignoring
2020-04-23 21:23:10,096 p=24810 u=root | TASK [Creat client.repo] **************************************************************************
```
Is this because it conflicts with the yum I installed myself? How do I fix it?
At the end I also see this error:
```
2020-04-23 22:21:53,336 p=4091 u=root | fatal: [ks-allinone -> ks-allinone]: FAILED! => {
"attempts": 4,
"changed": false,
"dest": "/root/releases/kubeadm-v1.16.7-amd64",
"state": "absent",
"url": "http://192.168.31.141:5080/k8s_repo/iso/v1.16.7/kubeadm"
}
MSG:
Request failed: <urlopen error [Errno 111] Connection refused>
2020-04-23 22:21:53,338 p=4091 u=root | NO MORE HOSTS LEFT ********************************************************************************
2020-04-23 22:21:53,339 p=4091 u=root | PLAY RECAP ****************************************************************************************
2020-04-23 22:21:53,340 p=4091 u=root | ks-allinone : ok=122 changed=26 unreachable=0 failed=1
2020-04-23 22:21:53,340 p=4091 u=root | localhost : ok=1 changed=0 unreachable=0 failed=0
2020-04-23 22:21:53,341 p=4091 u=root | Thursday 23 April 2020 22:21:53 +0800 (0:00:23.580) 0:03:27.193 ********
2020-04-23 22:21:53,341 p=4091 u=root | ===============================================================================
2020-04-23 22:21:53,344 p=4091 u=root | download : download_file | Download item ————————————————– 23.58s
```
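The last error is the node failing to reach the local file repository that the installer serves on port 5080 (per the URL in the log). A quick hedged check:

```bash
# from the failing node: is the URL the installer tried actually reachable?
curl -I http://192.168.31.141:5080/k8s_repo/iso/v1.16.7/kubeadm
# on the machine running the installer: is anything listening on port 5080?
ss -lntp | grep 5080
# if nothing is listening, rerun the installer so it brings its local HTTP service back up,
# and make sure no local firewall is blocking the port
```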