When creating a deployment issue, please follow the template below. The more information you provide, the easier it is to get a timely answer. If an issue is not created from the template, the administrators reserve the right to close it.
Make sure the post is clearly formatted and easy to read, and use markdown code block syntax to format code and logs.
You cannot expect someone to spend half an hour answering a question you spent only one minute creating.
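For example, a log excerpt becomes much easier to read when wrapped in a fenced code block:

````
```
INFO[10:52:48 CST] Downloading Installation Files
```
````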

Operating system information
For example: virtual machine or bare metal, CentOS 7.5/Ubuntu 18.04, 4C/8G.

Kubernetes version information
Paste the output of `kubectl version` below.

Container runtime
Paste the output of `docker version` / `crictl version` / `nerdctl version` below.

KubeSphere version information
For example: v2.1.1/v3.0.0. Was it an offline or online installation? Was it installed on an existing Kubernetes cluster or with kk?

What is the problem
Include the error logs, preferably with screenshots.

```
+--------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
| name   | sudo | curl | openssl | ebtables | socat | ipset | conntrack | docker | nfs client | ceph client | glusterfs client | time         |
+--------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
| node2  | y    | y    | y       | y        | y     | y     | y         |        |            |             |                  | CST 10:52:31 |
| node5  | y    | y    | y       | y        | y     | y     | y         |        |            |             |                  | CST 10:52:31 |
| node3  | y    | y    | y       | y        | y     | y     | y         |        |            |             |                  | CST 10:52:31 |
| master | y    | y    | y       | y        | y     | y     | y         |        |            |             |                  | CST 10:52:31 |
| node4  | y    | y    | y       | y        | y     | y     | y         |        |            |             |                  | CST 10:52:31 |
| node1  | y    | y    | y       | y        | y     | y     | y         |        |            |             |                  | CST 10:52:31 |
+--------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+

This is a simple check of your environment.
Before installation, you should ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations

Continue this installation? [yes/no]: yes
INFO[10:52:48 CST] Downloading Installation Files               
INFO[10:52:48 CST] Downloading kubeadm ...                      
INFO[10:53:23 CST] Downloading kubelet ...                      
INFO[10:55:11 CST] Downloading kubectl ...                      
INFO[10:55:49 CST] Downloading helm ...                         
INFO[10:56:27 CST] Downloading kubecni ...                      
INFO[10:57:01 CST] Configuring operating system ...             
[node5 10.44.26.12] MSG:
net.ipv6.conf.eth0.accept_dad = 0
net.ipv6.conf.eth0.accept_ra = 1
net.ipv6.conf.eth0.accept_ra_defrtr = 1
net.ipv6.conf.eth0.accept_ra_rtr_pref = 1
net.ipv6.conf.eth0.accept_ra_rt_info_max_plen = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
vm.swappiness = 1
fs.inotify.max_user_instances = 524288
no crontab for root
[node2 10.44.26.9] MSG:
net.ipv6.conf.eth0.accept_dad = 0
net.ipv6.conf.eth0.accept_ra = 1
net.ipv6.conf.eth0.accept_ra_defrtr = 1
net.ipv6.conf.eth0.accept_ra_rtr_pref = 1
net.ipv6.conf.eth0.accept_ra_rt_info_max_plen = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
vm.swappiness = 1
fs.inotify.max_user_instances = 524288
no crontab for root
[node1 10.44.26.8] MSG:
net.ipv6.conf.eth0.accept_dad = 0
net.ipv6.conf.eth0.accept_ra = 1
net.ipv6.conf.eth0.accept_ra_defrtr = 1
net.ipv6.conf.eth0.accept_ra_rtr_pref = 1
net.ipv6.conf.eth0.accept_ra_rt_info_max_plen = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
vm.swappiness = 1
fs.inotify.max_user_instances = 524288
no crontab for root
[node3 10.44.26.10] MSG:
net.ipv6.conf.eth0.accept_dad = 0
net.ipv6.conf.eth0.accept_ra = 1
net.ipv6.conf.eth0.accept_ra_defrtr = 1
net.ipv6.conf.eth0.accept_ra_rtr_pref = 1
net.ipv6.conf.eth0.accept_ra_rt_info_max_plen = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
vm.swappiness = 1
fs.inotify.max_user_instances = 524288
no crontab for root
[node4 10.44.26.11] MSG:
net.ipv6.conf.eth0.accept_dad = 0
net.ipv6.conf.eth0.accept_ra = 1
net.ipv6.conf.eth0.accept_ra_defrtr = 1
net.ipv6.conf.eth0.accept_ra_rtr_pref = 1
net.ipv6.conf.eth0.accept_ra_rt_info_max_plen = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
vm.swappiness = 1
fs.inotify.max_user_instances = 524288
no crontab for root
[master 10.44.26.7] MSG:
net.ipv6.conf.eth0.accept_dad = 0
net.ipv6.conf.eth0.accept_ra = 1
net.ipv6.conf.eth0.accept_ra_defrtr = 1
net.ipv6.conf.eth0.accept_ra_rtr_pref = 1
net.ipv6.conf.eth0.accept_ra_rt_info_max_plen = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
vm.swappiness = 1
fs.inotify.max_user_instances = 524288
no crontab for root
INFO[10:57:03 CST] Installing docker ...                        
INFO[10:58:42 CST] Start to download images on all nodes        
[node1] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.2
[node5] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.2
[node2] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.2
[node3] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.2
[node4] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.2
[master] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/etcd:v3.4.13
[node1] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.20.4
[node3] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.20.4
[node2] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.20.4
[node4] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.20.4
[node5] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.20.4
[master] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.2
[master] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.20.4
[node1] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.6.9
[master] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.20.4
[node4] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.6.9
[node5] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.6.9
[node3] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.6.9
[node2] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.6.9
[node1] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.15.12
[node3] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.15.12
[node4] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.15.12
[node5] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.15.12
[master] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.20.4
[node2] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.15.12
[node1] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.16.3
[master] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.20.4
[node3] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.16.3
[node4] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.16.3
[node5] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.16.3
[node1] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.16.3
[node3] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.16.3
[node2] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.16.3
[node4] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.16.3
[node5] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.16.3
[master] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.6.9
[node1] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.16.3
[node2] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.16.3
[master] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.15.12
[node3] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.16.3
[node4] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.16.3
[node5] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.16.3
[node2] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.16.3
[node1] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.16.3
[master] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.16.3
[node3] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.16.3
[node5] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.16.3
[master] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.16.3
[node4] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.16.3
[node2] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.16.3
[master] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.16.3
[master] Downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.16.3
INFO[11:01:04 CST] Generating etcd certs                        
INFO[11:01:05 CST] Synchronizing etcd certs                     
INFO[11:01:05 CST] Creating etcd service                        
[master 10.44.26.7] MSG:
etcd will be installed
INFO[11:01:19 CST] Starting etcd cluster                        
[master 10.44.26.7] MSG:
Configuration file will be created
INFO[11:01:19 CST] Refreshing etcd configuration                
[master 10.44.26.7] MSG:
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /etc/systemd/system/etcd.service.
Waiting for etcd to start
INFO[11:01:25 CST] Backup etcd data regularly                   
INFO[11:01:31 CST] Get cluster status                           
[master 10.44.26.7] MSG:
Cluster will be created.
INFO[11:01:31 CST] Installing kube binaries                     
Push /home/ecarx/kubesphere/kubekey/v1.20.4/amd64/kubeadm to 10.44.26.12:/tmp/kubekey/kubeadm   Done
Push /home/ecarx/kubesphere/kubekey/v1.20.4/amd64/kubeadm to 10.44.26.8:/tmp/kubekey/kubeadm   Done
Push /home/ecarx/kubesphere/kubekey/v1.20.4/amd64/kubeadm to 10.44.26.11:/tmp/kubekey/kubeadm   Done
Push /home/ecarx/kubesphere/kubekey/v1.20.4/amd64/kubeadm to 10.44.26.7:/tmp/kubekey/kubeadm   Done
Push /home/ecarx/kubesphere/kubekey/v1.20.4/amd64/kubeadm to 10.44.26.10:/tmp/kubekey/kubeadm   Done
Push /home/ecarx/kubesphere/kubekey/v1.20.4/amd64/kubeadm to 10.44.26.9:/tmp/kubekey/kubeadm   Done
Push /home/ecarx/kubesphere/kubekey/v1.20.4/amd64/kubelet to 10.44.26.12:/tmp/kubekey/kubelet   Done
Push /home/ecarx/kubesphere/kubekey/v1.20.4/amd64/kubelet to 10.44.26.8:/tmp/kubekey/kubelet   Done
Push /home/ecarx/kubesphere/kubekey/v1.20.4/amd64/kubelet to 10.44.26.11:/tmp/kubekey/kubelet   Done
Push /home/ecarx/kubesphere/kubekey/v1.20.4/amd64/kubectl to 10.44.26.12:/tmp/kubekey/kubectl   Done
Push /home/ecarx/kubesphere/kubekey/v1.20.4/amd64/kubectl to 10.44.26.8:/tmp/kubekey/kubectl   Done
Push /home/ecarx/kubesphere/kubekey/v1.20.4/amd64/kubectl to 10.44.26.11:/tmp/kubekey/kubectl   Done
Push /home/ecarx/kubesphere/kubekey/v1.20.4/amd64/kubelet to 10.44.26.10:/tmp/kubekey/kubelet   Done
Push /home/ecarx/kubesphere/kubekey/v1.20.4/amd64/kubelet to 10.44.26.9:/tmp/kubekey/kubelet   Done
Push /home/ecarx/kubesphere/kubekey/v1.20.4/amd64/helm to 10.44.26.8:/tmp/kubekey/helm   Done
Push /home/ecarx/kubesphere/kubekey/v1.20.4/amd64/helm to 10.44.26.12:/tmp/kubekey/helm   Done
Push /home/ecarx/kubesphere/kubekey/v1.20.4/amd64/kubelet to 10.44.26.7:/tmp/kubekey/kubelet   Done
Push /home/ecarx/kubesphere/kubekey/v1.20.4/amd64/kubectl to 10.44.26.10:/tmp/kubekey/kubectl   Done
Push /home/ecarx/kubesphere/kubekey/v1.20.4/amd64/kubectl to 10.44.26.9:/tmp/kubekey/kubectl   Done
Push /home/ecarx/kubesphere/kubekey/v1.20.4/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 10.44.26.8:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz   Done
Push /home/ecarx/kubesphere/kubekey/v1.20.4/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 10.44.26.12:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz   Done
Push /home/ecarx/kubesphere/kubekey/v1.20.4/amd64/helm to 10.44.26.11:/tmp/kubekey/helm   Done
Push /home/ecarx/kubesphere/kubekey/v1.20.4/amd64/helm to 10.44.26.10:/tmp/kubekey/helm   Done
Push /home/ecarx/kubesphere/kubekey/v1.20.4/amd64/helm to 10.44.26.9:/tmp/kubekey/helm   Done
Push /home/ecarx/kubesphere/kubekey/v1.20.4/amd64/kubectl to 10.44.26.7:/tmp/kubekey/kubectl   Done
Push /home/ecarx/kubesphere/kubekey/v1.20.4/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 10.44.26.9:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz   Done
Push /home/ecarx/kubesphere/kubekey/v1.20.4/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 10.44.26.11:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz   Done
Push /home/ecarx/kubesphere/kubekey/v1.20.4/amd64/helm to 10.44.26.7:/tmp/kubekey/helm   Done
Push /home/ecarx/kubesphere/kubekey/v1.20.4/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 10.44.26.10:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz   Done
Push /home/ecarx/kubesphere/kubekey/v1.20.4/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 10.44.26.7:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz   Done
INFO[11:01:47 CST] Initializing kubernetes cluster              
[master 10.44.26.7] MSG:
[preflight] Running pre-flight checks
W0217 11:01:48.348719    5614 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
W0217 11:01:48.351437    5614 cleanupnode.go:99] [reset] Failed to evaluate the "/var/lib/kubelet" directory. Skipping its unmount and cleanup: lstat /var/lib/kubelet: no such file or directory
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
[master 10.44.26.7] MSG:
[preflight] Running pre-flight checks
W0217 11:01:49.007376    5832 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
W0217 11:01:49.010039    5832 cleanupnode.go:99] [reset] Failed to evaluate the "/var/lib/kubelet" directory. Skipping its unmount and cleanup: lstat /var/lib/kubelet: no such file or directory
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
ERRO[11:01:49 CST] Failed to init kubernetes cluster: Failed to exec command: sudo env PATH=$PATH /bin/sh -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=FileExisting-crictl" 
W0217 11:01:49.108964    5864 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.20.4
[preflight] Running pre-flight checks
        [WARNING FileExisting-ebtables]: ebtables not found in system path
        [WARNING FileExisting-ethtool]: ethtool not found in system path
        [WARNING FileExisting-tc]: tc not found in system path
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.12. Latest validated version: 19.03
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR FileExisting-conntrack]: conntrack not found in system path
        [ERROR FileExisting-ip]: ip not found in system path
        [ERROR FileExisting-iptables]: iptables not found in system path
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1  node=10.44.26.7
WARN[11:01:49 CST] Task failed ...                              
WARN[11:01:49 CST] error: interrupted by error                  
Error: Failed to init kubernetes cluster: interrupted by error
Usage:
  kk create cluster [flags]

Flags:
      --download-cmd string      The user defined command to download the necessary binary files. The first param '%s' is output path, the second param '%s', is the URL (default "curl -L -o %s %s")
  -f, --filename string          Path to a configuration file
  -h, --help                     help for cluster
      --skip-pull-images         Skip pre pull images
      --with-kubernetes string   Specify a supported version of kubernetes (default "v1.19.8")
      --with-kubesphere          Deploy a specific version of kubesphere (default v3.1.0)
      --with-local-storage       Deploy a local PV provisioner
  -y, --yes                      Skip pre-check of the installation

Global Flags:
      --debug        Print detailed information (default true)
      --in-cluster   Running inside the cluster

Failed to init kubernetes cluster: interrupted by error
```

KubeSphere version: v3.1.1

Kubernetes version: v1.20.4

kubeadm could not find conntrack, ip, iptables, and related tools in the system PATH. You can use the following commands to check whether these tools are correctly on the path:

```bash
echo $PATH
sudo which conntrack
```
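If any of them are missing, here is a minimal sketch of how you might check and install every tool the preflight check complained about, assuming a yum-based CentOS 7 host (the package names are assumptions; verify them for your distribution):

```bash
# Check each binary flagged by kubeadm's preflight check:
# fatal errors: conntrack, ip, iptables; warnings: ebtables, ethtool, tc
for bin in conntrack ip iptables ebtables ethtool tc; do
  command -v "$bin" >/dev/null || echo "missing: $bin"
done

# Assumed CentOS 7 package names:
#   conntrack -> conntrack-tools
#   ip, tc    -> iproute
#   iptables, ebtables, ethtool -> packages of the same name
sudo yum install -y conntrack-tools iproute iptables ebtables ethtool
```

After the missing tools are installed on every node, the installation can be retried with `kk create cluster -f <your-config-file>` (see the usage output in the log above).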