Results 21 - 30 of 723 for host:kubernetes.io
Cluster Administration | Kubernetes
Lower-level detail relevant to creating or administering a Kubernetes cluster.
kubernetes.io/docs/concepts/cluster-administration/
Service Internal Traffic Policy | Kubernetes
If two Pods in your cluster want to communicate, and both Pods are actually running on the same node, use _Service Internal Traffic Policy_ to keep network traffic within that node. Avoiding a round trip via the cluster network can help with reliability, performance (network latency and throughput), or cost.
kubernetes.io/docs/concepts/services-networking/service-traffic-policy/
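For illustration, a minimal sketch of a Service using this policy; the name, selector, and ports are hypothetical, and only `spec.internalTrafficPolicy` comes from the linked page:

```yaml
# Sketch: a Service that keeps internal traffic on the originating node.
apiVersion: v1
kind: Service
metadata:
  name: my-service              # hypothetical name
spec:
  selector:
    app: my-app                 # hypothetical label
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
  internalTrafficPolicy: Local  # route only to endpoints on the same node
```

With `internalTrafficPolicy: Local`, traffic from a node with no ready local endpoint is dropped rather than forwarded to another node, so this is best suited to workloads that run an endpoint on every node.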
Ingress Controllers | Kubernetes
In order for an [Ingress](/docs/concepts/services-networking/ingress/) to work in your cluster, there must be an _ingress controller_ running. You need to select at least one ingress controller and make sure it is set up in your cluster. This page lists common ingress controllers that you can deploy.
kubernetes.io/docs/concepts/services-networking/ingress-controllers/
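As an illustration of the resource such a controller serves, a minimal Ingress sketch; the class name `nginx`, host, and backend Service are hypothetical and must match whatever controller you actually deploy:

```yaml
# Sketch: an Ingress that assumes a controller registered under the
# IngressClass "nginx" is already installed in the cluster.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: nginx       # must match a deployed controller's class
  rules:
    - host: example.local       # hypothetical host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web       # hypothetical backend Service
                port:
                  number: 80
```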
Local ephemeral storage | Kubernetes
Nodes have local ephemeral storage, backed by locally-attached writeable devices or, sometimes, by RAM. "Ephemeral" means that there is no long-term guarantee about durability. Pods use ephemeral local storage for scratch space, caching, and for logs. The kubelet can provide scratch space to Pods using local ephemeral storage to mount emptyDir volumes into containers. The kubelet also uses this kind of storage to hold node-level container logs, container images, and the writable layers of running containers.
kubernetes.io/docs/concepts/storage/ephemeral-storage/
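For illustration, a minimal sketch of a Pod that uses local ephemeral storage: an emptyDir volume for scratch space plus ephemeral-storage requests and limits. The pod name, image, and sizes are hypothetical:

```yaml
# Sketch: scratch space via emptyDir, with ephemeral-storage accounting.
apiVersion: v1
kind: Pod
metadata:
  name: scratch-demo            # hypothetical name
spec:
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9   # placeholder image
      resources:
        requests:
          ephemeral-storage: "1Gi"        # example request
        limits:
          ephemeral-storage: "2Gi"        # example limit
      volumeMounts:
        - name: scratch
          mountPath: /tmp/scratch
  volumes:
    - name: scratch
      emptyDir:
        sizeLimit: 500Mi                  # example cap on the volume
```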
Customizing components with the kubeadm API | Kubernetes
This page covers how to customize the components that kubeadm deploys. For control plane components you can use flags in the ClusterConfiguration structure or patches per-node. For the kubelet and kube-proxy you can use KubeletConfiguration and KubeProxyConfiguration, respectively. All of these options are possible via the kubeadm configuration API. For more details on each field in the configuration you can navigate to our API reference pages. Note: To reconfigure a cluster that has already been created, see Reconfiguring a kubeadm cluster.
kubernetes.io/docs/setup/production-environment/tools/kubeadm/control-plane-flags/
Disruptions | Kubernetes
This guide is for application owners who want to build highly available applications, and thus need to understand what types of disruptions can happen to Pods. It is also for cluster administrators who want to perform automated cluster actions, like upgrading and autoscaling clusters. Voluntary and involuntary disruptions: Pods do not disappear until someone (a person or a controller) destroys them, or there is an unavoidable hardware or system software error.
kubernetes.io/docs/concepts/workloads/pods/disruptions/
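The linked page also covers PodDisruptionBudgets, the API for limiting how many Pods a voluntary disruption may take down at once; a minimal sketch, with a hypothetical name, selector, and threshold:

```yaml
# Sketch: keep at least 2 matching Pods available during voluntary
# disruptions such as node drains.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb                 # hypothetical name
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: web                  # hypothetical label
```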
Assigning Pods to Nodes | Kubernetes
You can constrain a Pod so that it is restricted to run on particular node(s), or to prefer to run on particular nodes. There are several ways to do this, and the recommended approaches all use label selectors to facilitate the selection. Often, you do not need to set any such constraints; the scheduler will automatically do a reasonable placement (for example, spreading your Pods across nodes so as not to place Pods on a node with insufficient free resources).
kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/
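For illustration, a sketch of the simplest label-selector mechanism the page describes, `nodeSelector`; the label key/value and Pod details are hypothetical:

```yaml
# Sketch: constrain a Pod to nodes carrying a matching label.
apiVersion: v1
kind: Pod
metadata:
  name: ssd-workload            # hypothetical name
spec:
  nodeSelector:
    disktype: ssd               # schedule only onto nodes labeled disktype=ssd
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9   # placeholder image
```

A node would first need the matching label, for example via `kubectl label nodes <node-name> disktype=ssd`.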
Troubleshooting kubeadm | Kubernetes
As with any program, you might run into an error installing or running kubeadm. This page lists some common failure scenarios and provides steps that can help you understand and fix the problem. If your problem is not listed below, follow these steps: If you think your problem is a bug with kubeadm, go to github.com/kubernetes/kubeadm and search for existing issues. If no issue exists, please open one and follow the issue template.
kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/
Installing Kubernetes with deployment tools | Kubernetes
Production-Grade Container Orchestration
kubernetes.io/docs/setup/production-environment/tools/
Pod Scheduling Readiness | Kubernetes
FEATURE STATE: Kubernetes v1.30 [stable]
Pods were considered ready for scheduling once created. The Kubernetes scheduler does its due diligence to find nodes to place all pending Pods. However, in a real-world case, some Pods may stay in a "miss-essential-resources" state for a long period. These Pods actually churn the scheduler (and downstream integrators like Cluster Autoscaler) in an unnecessary manner. By specifying or removing a Pod's .spec.schedulingGates, you can control when a Pod is ready to be considered for scheduling.
kubernetes.io/docs/concepts/scheduling-eviction/pod-scheduling-readiness/
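For illustration, a minimal sketch of a Pod created with a scheduling gate; the Pod name and gate name are hypothetical, and only the `.spec.schedulingGates` field comes from the linked page:

```yaml
# Sketch: the scheduler skips this Pod until every gate entry is removed
# from .spec.schedulingGates.
apiVersion: v1
kind: Pod
metadata:
  name: gated-pod                         # hypothetical name
spec:
  schedulingGates:
    - name: example.com/quota-check       # hypothetical gate
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9    # placeholder image
```

Until the gate is removed, the Pod stays Pending and is not considered for node placement.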