Results 1 - 10 of 2,182 for host:kubernetes.io (0.05 sec)
- Garbage Collection | Kubernetes
Garbage collection is a collective term for the various mechanisms Kubernetes uses to clean up cluster resources. This allows the cleanup of resources such as: terminated Pods; completed Jobs; objects without owner references; unused containers and container images; dynamically provisioned PersistentVolumes with a StorageClass reclaim policy of Delete; stale or expired CertificateSigningRequests (CSRs); Nodes deleted either on a cloud when the cluster uses a cloud controller manager, or on-premises when the cluster uses an addon similar to a cloud controller manager; and Node Lease objects. Owners and dependents: many objects in Kubernetes link to each other through owner references.
kubernetes.io/docs/concepts/architecture/garbage-collection/
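For illustration, a minimal sketch of the owner-reference mechanism the snippet mentions; the names, UID, and image below are placeholders, not taken from the page. A dependent object lists its owner under `metadata.ownerReferences`, and the garbage collector removes the dependent once its owner is deleted:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod             # hypothetical dependent object
  ownerReferences:
  - apiVersion: apps/v1
    kind: ReplicaSet            # hypothetical owner
    name: example-replicaset
    uid: d9607e19-f88f-11e6-a518-42010a800195  # placeholder UID of the owner
    controller: true
    blockOwnerDeletion: true    # foreground deletion of the owner waits on this dependent
spec:
  containers:
  - name: app
    image: nginx                # placeholder image
```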
- Container Runtime Interface (CRI) | Kubernetes
The CRI is a plugin interface that enables the kubelet to use a wide variety of container runtimes without needing to recompile the cluster components. You need a working container runtime on each Node in your cluster so that the kubelet can launch Pods and their containers. The Container Runtime Interface (CRI) is the main protocol for communication between the kubelet and the container runtime, defined as a gRPC protocol between these node components.
kubernetes.io/docs/concepts/architecture/cri/
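In practice the CRI surfaces as a socket the kubelet connects to; a hedged sketch of the relevant kubelet configuration, assuming containerd's default socket path (other runtimes such as CRI-O listen on a different path):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# The kubelet speaks the CRI gRPC protocol to whatever runtime
# listens on this socket; the containerd path is an assumption.
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
```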
- Set up a High Availability etcd Cluster with kubeadm | Kubernetes
By default, kubeadm runs a local etcd instance on each control plane node. It is also possible to treat the etcd cluster as external and provision etcd instances on separate hosts. The differences between the two approaches are covered in the Options for Highly Available topology page. This task walks through the process of creating a high availability external etcd cluster of three members that can be used by kubeadm during cluster creation.
kubernetes.io/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/
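A hedged sketch of how a kubeadm configuration might point at such an external three-member etcd cluster during `kubeadm init --config`; the endpoint addresses and certificate paths are assumptions:

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
etcd:
  external:
    endpoints:                  # hypothetical addresses of the three members
    - https://10.0.0.10:2379
    - https://10.0.0.11:2379
    - https://10.0.0.12:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt   # assumed certificate locations
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
```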
- Dynamic Volume Provisioning | Kubernetes
Dynamic volume provisioning allows storage volumes to be created on-demand. Without dynamic provisioning, cluster administrators have to manually make calls to their cloud or storage provider to create new storage volumes, and then create PersistentVolume objects to represent them in Kubernetes. The dynamic provisioning feature eliminates the need for cluster administrators to pre-provision storage. Instead, it automatically provisions storage when users create PersistentVolumeClaim objects. Background: the implementation of dynamic volume provisioning is based on the API object StorageClass from the API group storage.k8s.io.
kubernetes.io/docs/concepts/storage/dynamic-provisioning/
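A minimal sketch of the two objects involved: a StorageClass (the `fast` name and the CSI provisioner below are assumptions) and a PersistentVolumeClaim whose creation triggers provisioning:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast                    # hypothetical class name
provisioner: ebs.csi.aws.com    # assumed CSI driver; varies by cluster
reclaimPolicy: Delete
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  storageClassName: fast        # requesting this class provisions a volume on demand
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
```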
- Annotations | Kubernetes
You can use Kubernetes annotations to attach arbitrary non-identifying metadata to objects. Clients such as tools and libraries can retrieve this metadata. Attaching metadata to objects: you can use either labels or annotations to attach metadata to Kubernetes objects. Labels can be used to select objects and to find collections of objects that satisfy certain conditions. In contrast, annotations are not used to identify and select objects. The metadata in an annotation can be small or large, structured or unstructured, and can include characters not permitted by labels.
kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
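For illustration, a Pod carrying a couple of annotations; the keys and values are invented, since annotation content is arbitrary by design:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: annotated-pod                 # hypothetical example
  annotations:
    example.com/build: "2024-11-04.1" # non-identifying metadata for tools to read
    example.com/git-commit: "41c7a8d"
spec:
  containers:
  - name: app
    image: nginx                      # placeholder image
```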
- Limit Ranges | Kubernetes
By default, containers run with unbounded compute resources on a Kubernetes cluster. Using Kubernetes resource quotas, administrators (also termed cluster operators) can restrict consumption and creation of cluster resources (such as CPU time, memory, and persistent storage) within a specified namespace. Within a namespace, a Pod can consume as much CPU and memory as is allowed by the ResourceQuotas that apply to that namespace. As a cluster operator, or as a namespace-level administrator, you might also be concerned about making sure that a single object cannot monopolize all available resources within a namespace.
kubernetes.io/docs/concepts/policy/limit-range/
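The page's subject is the LimitRange object; a minimal sketch with assumed values, showing per-container defaults and a ceiling within one namespace:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: per-container-limits   # hypothetical name
  namespace: dev               # assumed namespace
spec:
  limits:
  - type: Container
    defaultRequest:            # applied when a container sets no request
      cpu: 250m
      memory: 128Mi
    default:                   # applied when a container sets no limit
      cpu: 500m
      memory: 256Mi
    max:                       # no container may exceed these limits
      cpu: "1"
      memory: 512Mi
```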
- Resource Management for Pods and Containers | Kubernetes
When you specify a Pod, you can optionally specify how much of each resource a container needs. The most common resources to specify are CPU and memory (RAM); there are others. When you specify the resource request for containers in a Pod, the kube-scheduler uses this information to decide which node to place the Pod on. When you specify a resource limit for a container, the kubelet enforces those limits so that the running container is not allowed to use more of that resource than the limit you set.
kubernetes.io/docs/concepts/configuration/manage-resources-containers/
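A minimal sketch of requests and limits on a container (the values are placeholders): the scheduler places the Pod based on `requests`, and the kubelet enforces `limits`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo     # hypothetical example
spec:
  containers:
  - name: app
    image: nginx          # placeholder image
    resources:
      requests:           # used by kube-scheduler for node placement
        cpu: 250m
        memory: 64Mi
      limits:             # enforced by the kubelet at runtime
        cpu: 500m
        memory: 128Mi
```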
- Pod Priority and Preemption | Kubernetes
FEATURE STATE: Kubernetes v1.14 [stable]. Pods can have priority. Priority indicates the importance of a Pod relative to other Pods. If a Pod cannot be scheduled, the scheduler tries to preempt (evict) lower-priority Pods to make scheduling of the pending Pod possible. Warning: in a cluster where not all users are trusted, a malicious user could create Pods at the highest possible priorities, causing other Pods to be evicted or never scheduled.
kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/
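Priority is assigned through a PriorityClass; a hedged sketch with an invented name and value:

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority          # hypothetical class name
value: 1000000                 # larger value means higher priority
globalDefault: false
description: "Pods that may preempt lower-priority Pods."
---
apiVersion: v1
kind: Pod
metadata:
  name: important-app          # hypothetical example
spec:
  priorityClassName: high-priority
  containers:
  - name: app
    image: nginx               # placeholder image
```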
- Pod Topology Spread Constraints | Kubernetes
You can use topology spread constraints to control how Pods are spread across your cluster among failure-domains such as regions, zones, nodes, and other user-defined topology domains. This can help to achieve high availability as well as efficient resource utilization. You can set cluster-level constraints as a default, or configure topology spread constraints for individual workloads. Motivation: imagine that you have a cluster of up to twenty nodes, and you want to run a workload that automatically scales how many replicas it uses.
kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/
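A minimal sketch of one such constraint, spreading Pods labeled `app: web` across zones (the label and skew value are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: spread-demo                  # hypothetical example
  labels:
    app: web
spec:
  topologySpreadConstraints:
  - maxSkew: 1                       # zones may differ by at most one matching Pod
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule # the softer alternative is ScheduleAnyway
    labelSelector:
      matchLabels:
        app: web
  containers:
  - name: app
    image: nginx                     # placeholder image
```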
- Upgrading kubeadm clusters | Kubernetes
This page explains how to upgrade a Kubernetes cluster created with kubeadm from version 1.30.x to version 1.31.x, and from version 1.31.x to 1.31.y (where y > x). Skipping MINOR versions when upgrading is unsupported. For more details, please visit the Version Skew Policy. To see information about upgrading clusters created using older versions of kubeadm, please refer to the following pages instead: Upgrading a kubeadm cluster from 1.29 to 1.30; Upgrading a kubeadm cluster from 1. …
kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
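As a rough orientation (confirm the exact, version-specific steps against the page itself): the documented flow runs `kubeadm upgrade plan` and then `kubeadm upgrade apply` on the first control plane node, `kubeadm upgrade node` on the remaining nodes, and upgrades kubelet and kubectl separately through the OS package manager on each node.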