When creating a deployment issue, please follow the template below. The more information you provide, the easier it is to get a timely answer. If an issue does not follow the template, the administrators reserve the right to close it.
Keep the post clearly formatted and readable, and format code with markdown code block syntax.
If you spend only one minute creating an issue, you cannot expect someone else to spend half an hour answering it.
Operating system information
Baidu Cloud lightweight application server (LS): 2 cores / 4 GB RAM / 80 GB disk / 6 Mbps bandwidth
CentOS Linux release 7.6.1810 (Core)
Kubernetes version information
Paste the output of the kubectl version command below:
[root@ls ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.12", GitCommit:"b058e1760c79f46a834ba59bd7a3486ecf28237d", GitTreeState:"clean", BuildDate:"2022-07-13T14:59:18Z", GoVersion:"go1.16.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.12", GitCommit:"b058e1760c79f46a834ba59bd7a3486ecf28237d", GitTreeState:"clean", BuildDate:"2022-07-13T14:53:39Z", GoVersion:"go1.16.15", Compiler:"gc", Platform:"linux/amd64"}
Container runtime
Paste the output of docker version / crictl version / nerdctl version below:
[root@ls ~]# docker version / crictl version / nerdctl version
"docker version" accepts no arguments.
See 'docker version --help'.
Usage:  docker version [OPTIONS]
Show the Docker version information
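The output above shows the three commands were entered as a single command line, so Docker treated everything after "version" as extra arguments. Each version command has to be run separately; a minimal sketch (the command -v guard and the || true for a stopped daemon are additions for robustness, not part of the template):

```shell
# Query each runtime CLI separately; a "/" does not chain commands in the
# shell, which is why the combined invocation failed with "accepts no
# arguments". Tools that are not installed are skipped instead of erroring.
for tool in docker crictl nerdctl; do
    if command -v "$tool" >/dev/null 2>&1; then
        "$tool" version || true   # tolerate a daemon that is not running
    else
        echo "$tool: not installed"
    fi
done
```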
KubeSphere version information
For example: v2.1.1/v3.0.0. Offline or online installation. Installed on an existing Kubernetes cluster, or installed with kk.
./kk create cluster --with-kubernetes v1.22.12 --with-kubesphere v3.3.1
What is the problem
What is the error log? A screenshot is best.
The installation reports errors; I am not sure whether it succeeded or whether some configuration is missing.
[root@ls backup]# ./kk create cluster --with-kubernetes v1.22.12 --with-kubesphere v3.3.1
(KubeKey ASCII art banner)
16:19:25 CST [GreetingsModule] Greetings
16:19:25 CST message: [ls.mIRQmxZ6]
Greetings, KubeKey!
16:19:25 CST success: [ls.mIRQmxZ6]
16:19:25 CST [NodePreCheckModule] A pre-check on nodes
16:19:26 CST success: [ls.mIRQmxZ6]
16:19:26 CST [ConfirmModule] Display confirmation form
+-------------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
| name        | sudo | curl | openssl | ebtables | socat | ipset | ipvsadm | conntrack | chrony | docker | containerd | nfs client | ceph client | glusterfs client | time         |
+-------------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
| ls.mIRQmxZ6 | y    | y    | y       | y        | y     | y     |         | y         | y      |        |            |            |             |                  | CST 16:19:26 |
+-------------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
This is a simple check of your environment.
Before installation, ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations
Continue this installation? [yes/no]: yes
16:19:32 CST success: [LocalHost]
16:19:32 CST [NodeBinariesModule] Download installation binaries
16:19:32 CST message: [localhost]
downloading amd64 kubeadm v1.22.12 …
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 43.7M 100 43.7M 0 0 945k 0 0:00:47 0:00:47 --:--:-- 1025k
16:20:19 CST message: [localhost]
downloading amd64 kubelet v1.22.12 …
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 115M 100 115M 0 0 894k 0 0:02:11 0:02:11 --:--:-- 1026k
16:22:32 CST message: [localhost]
downloading amd64 kubectl v1.22.12 …
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 44.7M 100 44.7M 0 0 994k 0 0:00:46 0:00:46 --:--:-- 1034k
16:23:19 CST message: [localhost]
downloading amd64 helm v3.9.0 …
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 44.0M 100 44.0M 0 0 1010k 0 0:00:44 0:00:44 --:--:-- 989k
16:24:03 CST message: [localhost]
downloading amd64 kubecni v0.9.1 …
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 37.9M 100 37.9M 0 0 988k 0 0:00:39 0:00:39 --:--:-- 1034k
16:24:43 CST message: [localhost]
downloading amd64 crictl v1.24.0 …
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 13.8M 100 13.8M 0 0 993k 0 0:00:14 0:00:14 --:--:-- 974k
16:24:57 CST message: [localhost]
downloading amd64 etcd v3.4.13 …
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 16.5M 100 16.5M 0 0 931k 0 0:00:18 0:00:18 --:--:-- 956k
16:25:16 CST message: [localhost]
downloading amd64 docker 20.10.8 …
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
4 58.1M 4 2785k 0 0 113k 0 0:08:43 0:00:24 0:08:19 111k
4 58.1M 4 2894k 0 0 113k 0 0:08:44 0:00:25 0:08:19 111k
57 58.1M 57 33.1M 0 0 115k 0 0:08:35 0:04:54 0:03:41 114k
84 58.1M 84 49.0M 0 0 114k 0 0:08:39 0:07:18 0:01:21 116k
100 58.1M 100 58.1M 0 0 114k 0 0:08:39 0:08:39 --:--:-- 110k
16:33:55 CST success: [LocalHost]
16:33:55 CST [ConfigureOSModule] Get OS release
16:33:55 CST success: [ls.mIRQmxZ6]
16:33:55 CST [ConfigureOSModule] Prepare to init OS
16:33:56 CST success: [ls.mIRQmxZ6]
16:33:56 CST [ConfigureOSModule] Generate init os script
16:33:56 CST success: [ls.mIRQmxZ6]
16:33:56 CST [ConfigureOSModule] Exec init os script
16:33:57 CST stdout: [ls.mIRQmxZ6]
setenforce: SELinux is disabled
Disabled
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
vm.swappiness = 1
fs.inotify.max_user_instances = 524288
kernel.pid_max = 65535
16:33:57 CST success: [ls.mIRQmxZ6]
16:33:57 CST [ConfigureOSModule] configure the ntp server for each node
16:33:57 CST skipped: [ls.mIRQmxZ6]
16:33:57 CST [KubernetesStatusModule] Get kubernetes cluster status
16:33:57 CST success: [ls.mIRQmxZ6]
16:33:57 CST [InstallContainerModule] Sync docker binaries
16:34:00 CST success: [ls.mIRQmxZ6]
16:34:00 CST [InstallContainerModule] Generate docker service
16:34:00 CST success: [ls.mIRQmxZ6]
16:34:00 CST [InstallContainerModule] Generate docker config
16:34:00 CST success: [ls.mIRQmxZ6]
16:34:00 CST [InstallContainerModule] Enable docker
16:34:02 CST success: [ls.mIRQmxZ6]
16:34:02 CST [InstallContainerModule] Add auths to container runtime
16:34:02 CST skipped: [ls.mIRQmxZ6]
16:34:02 CST [PullModule] Start to pull images on all nodes
16:34:02 CST message: [ls.mIRQmxZ6]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.5
16:34:04 CST message: [ls.mIRQmxZ6]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.22.12
16:34:32 CST message: [ls.mIRQmxZ6]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.22.12
16:35:00 CST message: [ls.mIRQmxZ6]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.22.12
16:35:15 CST message: [ls.mIRQmxZ6]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.22.12
16:35:40 CST message: [ls.mIRQmxZ6]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.0
16:35:53 CST message: [ls.mIRQmxZ6]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.15.12
16:36:16 CST message: [ls.mIRQmxZ6]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.23.2
16:36:47 CST message: [ls.mIRQmxZ6]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.23.2
16:38:03 CST message: [ls.mIRQmxZ6]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.23.2
16:39:13 CST message: [ls.mIRQmxZ6]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.23.2
16:39:21 CST success: [ls.mIRQmxZ6]
16:39:21 CST [ETCDPreCheckModule] Get etcd status
16:39:21 CST success: [ls.mIRQmxZ6]
16:39:21 CST [CertsModule] Fetch etcd certs
16:39:21 CST success: [ls.mIRQmxZ6]
16:39:21 CST [CertsModule] Generate etcd Certs
[certs] Generating "ca" certificate and key
[certs] admin-ls.mIRQmxZ6 serving cert is signed for DNS names [etcd etcd.kube-system etcd.kube-system.svc etcd.kube-system.svc.cluster.local lb.kubesphere.local localhost ls.mIRQmxZ6] and IPs [127.0.0.1 ::1 192.168.0.4]
[certs] member-ls.mIRQmxZ6 serving cert is signed for DNS names [etcd etcd.kube-system etcd.kube-system.svc etcd.kube-system.svc.cluster.local lb.kubesphere.local localhost ls.mIRQmxZ6] and IPs [127.0.0.1 ::1 192.168.0.4]
[certs] node-ls.mIRQmxZ6 serving cert is signed for DNS names [etcd etcd.kube-system etcd.kube-system.svc etcd.kube-system.svc.cluster.local lb.kubesphere.local localhost ls.mIRQmxZ6] and IPs [127.0.0.1 ::1 192.168.0.4]
16:39:22 CST success: [LocalHost]
16:39:22 CST [CertsModule] Synchronize certs file
16:39:24 CST success: [ls.mIRQmxZ6]
16:39:24 CST [CertsModule] Synchronize certs file to master
16:39:24 CST skipped: [ls.mIRQmxZ6]
16:39:24 CST [InstallETCDBinaryModule] Install etcd using binary
16:39:25 CST success: [ls.mIRQmxZ6]
16:39:25 CST [InstallETCDBinaryModule] Generate etcd service
16:39:26 CST success: [ls.mIRQmxZ6]
16:39:26 CST [InstallETCDBinaryModule] Generate access address
16:39:26 CST success: [ls.mIRQmxZ6]
16:39:26 CST [ETCDConfigureModule] Health check on exist etcd
16:39:26 CST skipped: [ls.mIRQmxZ6]
16:39:26 CST [ETCDConfigureModule] Generate etcd.env config on new etcd
16:39:26 CST success: [ls.mIRQmxZ6]
16:39:26 CST [ETCDConfigureModule] Refresh etcd.env config on all etcd
16:39:26 CST success: [ls.mIRQmxZ6]
16:39:26 CST [ETCDConfigureModule] Restart etcd
16:39:27 CST stdout: [ls.mIRQmxZ6]
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /etc/systemd/system/etcd.service.
16:39:27 CST success: [ls.mIRQmxZ6]
16:39:27 CST [ETCDConfigureModule] Health check on all etcd
16:39:27 CST success: [ls.mIRQmxZ6]
16:39:27 CST [ETCDConfigureModule] Refresh etcd.env config to exist mode on all etcd
16:39:28 CST success: [ls.mIRQmxZ6]
16:39:28 CST [ETCDConfigureModule] Health check on all etcd
16:39:28 CST success: [ls.mIRQmxZ6]
16:39:28 CST [ETCDBackupModule] Backup etcd data regularly
16:39:28 CST success: [ls.mIRQmxZ6]
16:39:28 CST [ETCDBackupModule] Generate backup ETCD service
16:39:28 CST success: [ls.mIRQmxZ6]
16:39:28 CST [ETCDBackupModule] Generate backup ETCD timer
16:39:28 CST success: [ls.mIRQmxZ6]
16:39:28 CST [ETCDBackupModule] Enable backup etcd service
16:39:28 CST success: [ls.mIRQmxZ6]
16:39:28 CST [InstallKubeBinariesModule] Synchronize kubernetes binaries
16:39:35 CST success: [ls.mIRQmxZ6]
16:39:35 CST [InstallKubeBinariesModule] Synchronize kubelet
16:39:35 CST success: [ls.mIRQmxZ6]
16:39:35 CST [InstallKubeBinariesModule] Generate kubelet service
16:39:35 CST success: [ls.mIRQmxZ6]
16:39:35 CST [InstallKubeBinariesModule] Enable kubelet service
16:39:36 CST success: [ls.mIRQmxZ6]
16:39:36 CST [InstallKubeBinariesModule] Generate kubelet env
16:39:36 CST success: [ls.mIRQmxZ6]
16:39:36 CST [InitKubernetesModule] Generate kubeadm config
16:39:36 CST success: [ls.mIRQmxZ6]
16:39:36 CST [InitKubernetesModule] Init cluster using kubeadm
16:39:49 CST stdout: [ls.mIRQmxZ6]
W1114 16:39:36.642263 36200 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.22.12
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost ls.mirqmxz6 ls.mirqmxz6.cluster.local] and IPs [10.233.0.1 192.168.0.4 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 9.002890 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.22" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node ls.mirqmxz6 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node ls.mirqmxz6 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: mrk68c.5db4ila5dqmumoib
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join lb.kubesphere.local:6443 --token mrk68c.5db4ila5dqmumoib \
--discovery-token-ca-cert-hash sha256:dac3a2315219dff930b4464e601e3f81ccc4a9742e4ac439adbd740318c72d41 \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join lb.kubesphere.local:6443 --token mrk68c.5db4ila5dqmumoib \
--discovery-token-ca-cert-hash sha256:dac3a2315219dff930b4464e601e3f81ccc4a9742e4ac439adbd740318c72d41
16:39:49 CST success: [ls.mIRQmxZ6]
16:39:49 CST [InitKubernetesModule] Copy admin.conf to ~/.kube/config
16:39:49 CST success: [ls.mIRQmxZ6]
16:39:49 CST [InitKubernetesModule] Remove master taint
16:39:49 CST stdout: [ls.mIRQmxZ6]
Error from server (NotFound): nodes "ls.mIRQmxZ6" not found
16:39:49 CST [WARN] Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubectl taint nodes ls.mIRQmxZ6 node-role.kubernetes.io/master=:NoSchedule-"
Error from server (NotFound): nodes "ls.mIRQmxZ6" not found: Process exited with status 1
16:39:50 CST stdout: [ls.mIRQmxZ6]
Error from server (NotFound): nodes "ls.mIRQmxZ6" not found
16:39:50 CST [WARN] Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubectl taint nodes ls.mIRQmxZ6 node-role.kubernetes.io/control-plane=:NoSchedule-"
Error from server (NotFound): nodes "ls.mIRQmxZ6" not found: Process exited with status 1
16:39:50 CST success: [ls.mIRQmxZ6]
16:39:50 CST [InitKubernetesModule] Add worker label
16:39:50 CST stdout: [ls.mIRQmxZ6]
Error from server (NotFound): nodes "ls.mIRQmxZ6" not found
16:39:50 CST message: [ls.mIRQmxZ6]
add worker label failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubectl label --overwrite node ls.mIRQmxZ6 node-role.kubernetes.io/worker="
Error from server (NotFound): nodes "ls.mIRQmxZ6" not found: Process exited with status 1
16:39:50 CST retry: [ls.mIRQmxZ6]
16:39:55 CST stdout: [ls.mIRQmxZ6]
Error from server (NotFound): nodes "ls.mIRQmxZ6" not found
16:39:55 CST message: [ls.mIRQmxZ6]
add worker label failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubectl label --overwrite node ls.mIRQmxZ6 node-role.kubernetes.io/worker="
Error from server (NotFound): nodes "ls.mIRQmxZ6" not found: Process exited with status 1
16:39:55 CST retry: [ls.mIRQmxZ6]
16:40:00 CST stdout: [ls.mIRQmxZ6]
Error from server (NotFound): nodes "ls.mIRQmxZ6" not found
16:40:00 CST message: [ls.mIRQmxZ6]
add worker label failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubectl label --overwrite node ls.mIRQmxZ6 node-role.kubernetes.io/worker="
Error from server (NotFound): nodes "ls.mIRQmxZ6" not found: Process exited with status 1
16:40:00 CST retry: [ls.mIRQmxZ6]
16:40:05 CST stdout: [ls.mIRQmxZ6]
Error from server (NotFound): nodes "ls.mIRQmxZ6" not found
16:40:05 CST message: [ls.mIRQmxZ6]
add worker label failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubectl label --overwrite node ls.mIRQmxZ6 node-role.kubernetes.io/worker="
Error from server (NotFound): nodes "ls.mIRQmxZ6" not found: Process exited with status 1
16:40:05 CST retry: [ls.mIRQmxZ6]
16:40:10 CST stdout: [ls.mIRQmxZ6]
Error from server (NotFound): nodes "ls.mIRQmxZ6" not found
16:40:10 CST message: [ls.mIRQmxZ6]
add worker label failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubectl label --overwrite node ls.mIRQmxZ6 node-role.kubernetes.io/worker="
Error from server (NotFound): nodes "ls.mIRQmxZ6" not found: Process exited with status 1
16:40:10 CST failed: [ls.mIRQmxZ6]
error: Pipeline[CreateClusterPipeline] execute failed: Module[InitKubernetesModule] exec failed:
failed: [ls.mIRQmxZ6] [AddWorkerLabel] exec failed after 5 retires: add worker label failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubectl label --overwrite node ls.mIRQmxZ6 node-role.kubernetes.io/worker="
Error from server (NotFound): nodes "ls.mIRQmxZ6" not found: Process exited with status 1
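One detail worth noting from the log itself: the kubeadm output registers the node under the lowercased name ls.mirqmxz6 (see the [mark-control-plane] lines), while KubeKey keeps querying the mixed-case hostname ls.mIRQmxZ6, which is why every kubectl call here returns NotFound. Kubernetes node names are lowercased on registration; a minimal sketch of that normalization (the tr pipeline is an illustration, not the kubelet's actual code):

```shell
# The node registers under the lowercased hostname, so ls.mIRQmxZ6
# becomes node ls.mirqmxz6, and lookups using the raw mixed-case
# hostname fail with NotFound.
hostname="ls.mIRQmxZ6"
node_name=$(printf '%s' "$hostname" | tr '[:upper:]' '[:lower:]')
echo "$node_name"   # ls.mirqmxz6
```

The usual workaround is to give the machine an all-lowercase hostname (for example with hostnamectl set-hostname), clean up with ./kk delete cluster, and re-run the installation; kubectl get nodes should then show the node and the taint/label steps should succeed.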