I installed KubeSphere 3.4.1 (ks3.4.1) in a Kubernetes cluster and enabling logging failed. I then turned logging on in the cluster-configuration (ClusterConfiguration), and it failed again.
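For context, logging was enabled roughly as sketched below. This follows the usual KubeSphere procedure of editing the ks-installer ClusterConfiguration; the exact field values are an assumption about what the cluster-configuration on this cluster contains:

kubectl edit clusterconfiguration ks-installer -n kubesphere-system
# inside the editor, set the logging component to enabled, e.g.:
#   spec:
#     logging:
#       enabled: true
# then follow the installer log until it finishes (or fails):
kubectl logs -n kubesphere-system deploy/ks-installer -f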
After that, the opensearch-cluster-master-0 pod never becomes Ready. Here is the kubectl describe output for the failing pod:
[root@k8s-master1 ~]# kubectl describe pod -n kubesphere-logging-system opensearch-cluster-master-0
Name: opensearch-cluster-master-0
Namespace: kubesphere-logging-system
Priority: 0
Node: k8s-node2/10.10.10.212
Start Time: Fri, 22 Mar 2024 23:03:34 +0800
Labels: app.kubernetes.io/component=opensearch-cluster-master
app.kubernetes.io/instance=opensearch-master
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=opensearch
app.kubernetes.io/version=2.6.0
controller-revision-hash=opensearch-cluster-master-6bd578f6b7
helm.sh/chart=opensearch-2.11.0
statefulset.kubernetes.io/pod-name=opensearch-cluster-master-0
Annotations: cni.projectcalico.org/podIP: 192.168.169.141/32
cni.projectcalico.org/podIPs: 192.168.169.141/32
configchecksum: eb70eea659188d444ae38c6d48b6cce2f1c6bd3d9a4b073c07f3839a50e6a01
Status: Pending
IP: 192.168.169.141
IPs:
IP: 192.168.169.141
Controlled By: StatefulSet/opensearch-cluster-master
Init Containers:
fsgroup-volume:
Container ID: docker://6245061e0c6f96fd9576c1dc51df1df52b4a1e652e69d9ac340d24ec9d21be8b
Image: busybox:latest
Image ID: docker-pullable://busybox@sha256:5acba83a746c7608ed544dc1533b87c737a0b0fb730301639a0179f9344b1678
Port: <none>
Host Port: <none>
Command:
sh
-c
Args:
chown -R 1000:1000 /usr/share/opensearch/data
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Fri, 22 Mar 2024 23:48:24 +0800
Finished: Fri, 22 Mar 2024 23:48:24 +0800
Ready: False
Restart Count: 13
Environment: <none>
Mounts:
/usr/share/opensearch/data from opensearch-cluster-master (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pqq4h (ro)
Containers:
opensearch:
Container ID:
Image: opensearchproject/opensearch:2.6.0
Image ID:
Ports: 9200/TCP, 9300/TCP
Host Ports: 0/TCP, 0/TCP
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Requests:
cpu: 1
memory: 512Mi
Readiness: tcp-socket :9200 delay=0s timeout=3s period=5s #success=1 #failure=3
Startup: tcp-socket :9200 delay=5s timeout=3s period=10s #success=1 #failure=30
Environment:
node.name: opensearch-cluster-master-0 (v1:metadata.name)
cluster.initial_master_nodes: opensearch-cluster-master-0,
discovery.seed_hosts: opensearch-cluster-master-headless
cluster.name: opensearch-cluster
network.host: 0.0.0.0
OPENSEARCH_JAVA_OPTS: -Xmx512M -Xms512M
node.roles: master,
Mounts:
/usr/share/opensearch/config/opensearch.yml from config (rw,path="opensearch.yml")
/usr/share/opensearch/data from opensearch-cluster-master (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pqq4h (ro)
Conditions:
Type Status
Initialized False
Ready False
ContainersReady False
PodScheduled True
Volumes:
opensearch-cluster-master:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: opensearch-cluster-master-opensearch-cluster-master-0
ReadOnly: false
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: opensearch-cluster-master-config
Optional: false
kube-api-access-pqq4h:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
Warning FailedScheduling 46m default-scheduler 0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.
Normal Scheduled 46m default-scheduler Successfully assigned kubesphere-logging-system/opensearch-cluster-master-0 to k8s-node2
Normal Pulled 45m kubelet Successfully pulled image "busybox:latest" in 16.875207627s
Normal Pulled 45m kubelet Successfully pulled image "busybox:latest" in 398.894733ms
Normal Pulled 45m kubelet Successfully pulled image "busybox:latest" in 434.789595ms
Normal Created 45m (x4 over 45m) kubelet Created container fsgroup-volume
Normal Started 45m (x4 over 45m) kubelet Started container fsgroup-volume
Normal Pulled 45m kubelet Successfully pulled image "busybox:latest" in 397.899912ms
Normal Pulling 44m (x5 over 46m) kubelet Pulling image "busybox:latest"
Normal Pulled 44m kubelet Successfully pulled image "busybox:latest" in 15.40814988s
Warning BackOff 66s (x193 over 45m) kubelet Back-off restarting failed container
[root@k8s-master1 ~]#
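The events above point at two things: the PVC was initially unbound ("pod has unbound immediate PersistentVolumeClaims"), and the fsgroup-volume init container keeps crashing on chown -R 1000:1000 /usr/share/opensearch/data with exit code 1. For reference, the PVC binding, the StorageClass, and the init container's own log can be inspected with commands like the following sketch (resource names are taken from the describe output above):

# did the PVC bind, and which StorageClass backs it?
kubectl get pvc -n kubesphere-logging-system opensearch-cluster-master-opensearch-cluster-master-0
kubectl get storageclass
# log of the failing init container (the chown step)
kubectl logs -n kubesphere-logging-system opensearch-cluster-master-0 -c fsgroup-volume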