Results 491 - 500 of 669 for host:kubernetes.io (0.02 sec)

  1. Pod Security Policies | Kubernetes

    Removed feature PodSecurityPolicy was deprecated in Kubernetes v1.21, and removed from Kubernetes in v1.25. Instead of using PodSecurityPolicy, you can enforce similar restrictions on Pods using either or both of: Pod Security Admission, or a third-party admission plugin that you deploy and configure yourself. For a migration guide, see Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller. For more information on the removal of this API, see PodSecurityPolicy Deprecation: Past, Present, and Future.
    kubernetes.io/docs/concepts/security/pod-security-policy/
    Registered: Fri Nov 15 06:37:18 UTC 2024
    - 426.5K bytes
    - Viewed (0)
  2. Scheduling, Preemption and Eviction | Kubernetes

    In Kubernetes, scheduling refers to making sure that Pods are matched to Nodes so that the kubelet can run them. Preemption is the process of terminating Pods with lower Priority so that Pods with higher Priority can schedule on Nodes. Eviction is the process of terminating one or more Pods on Nodes. Topics covered: Scheduling (Kubernetes Scheduler; Assigning Pods to Nodes; Pod Overhead; Pod Topology Spread Constraints; Taints and Tolerations; Scheduling Framework; Dynamic Resource Allocation; Scheduler Performance Tuning; Resource Bin Packing for Extended Resources; Pod Scheduling Readiness; Descheduler) and Pod Disruption. Pod disruption is the process by which Pods on Nodes are terminated either voluntarily or involuntarily.
    kubernetes.io/docs/concepts/scheduling-eviction/
    Registered: Fri Nov 15 06:37:22 UTC 2024
    - 430.2K bytes
    - Viewed (0)
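    The preemption idea in the snippet above (terminate lower-priority Pods so higher-priority Pods can schedule) can be sketched roughly. This is a hypothetical illustration, not the real scheduler: the actual logic considers many more rules (it only preempts Pods with lower priority than the pending Pod, respects PodDisruptionBudgets, graceful termination, and so on).

    ```python
    def pods_to_preempt(pending_cpu, node_capacity, running):
        """Pick lowest-priority pods to evict until the pending pod's CPU fits.

        running: list of (name, priority, cpu_request) tuples.
        Returns the victim names, or None if the pod cannot fit even after
        evicting everything considered.
        """
        used = sum(cpu for _, _, cpu in running)
        free = node_capacity - used
        victims = []
        # Consider lowest-priority pods first as preemption targets.
        for name, prio, cpu in sorted(running, key=lambda p: p[1]):
            if free >= pending_cpu:
                break
            victims.append(name)
            free += cpu
        return victims if free >= pending_cpu else None
    ```
    
    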
  3. Organizing Cluster Access Using kubeconfig File...

    Use kubeconfig files to organize information about clusters, users, namespaces, and authentication mechanisms. The kubectl command-line tool uses kubeconfig files to find the information it needs to choose a cluster and communicate with the API server of a cluster. Note: A file that is used to configure access to clusters is called a kubeconfig file. This is a generic way of referring to configuration files. It does not mean that there is a file named kubeconfig.
    kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/
    Registered: Fri Nov 15 06:32:20 UTC 2024
    - 437K bytes
    - Viewed (0)
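    As a rough sketch of what the snippet describes, a client resolves the API server for the current context by following name references between the contexts and clusters sections of a kubeconfig. The data below is illustrative, and kubectl's real loading rules are richer (merging multiple files, flags, and environment variables):

    ```python
    def resolve_server(kubeconfig):
        """Follow current-context -> context.cluster -> cluster.server."""
        ctx_name = kubeconfig["current-context"]
        ctx = next(c["context"] for c in kubeconfig["contexts"]
                   if c["name"] == ctx_name)
        cluster = next(c["cluster"] for c in kubeconfig["clusters"]
                       if c["name"] == ctx["cluster"])
        return cluster["server"]

    # Minimal kubeconfig-shaped data (hypothetical names and address).
    cfg = {
        "current-context": "dev",
        "contexts": [{"name": "dev",
                      "context": {"cluster": "dev-cluster", "user": "alice"}}],
        "clusters": [{"name": "dev-cluster",
                      "cluster": {"server": "https://1.2.3.4:6443"}}],
    }
    print(resolve_server(cfg))  # https://1.2.3.4:6443
    ```
    
    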
  4. Volumes | Kubernetes

    On-disk files in a container are ephemeral, which presents some problems for non-trivial applications when running in containers. One problem occurs when a container crashes or is stopped. Container state is not saved so all of the files that were created or modified during the lifetime of the container are lost. During a crash, kubelet restarts the container with a clean state. Another problem occurs when multiple containers are running in a Pod and need to share files.
    kubernetes.io/docs/concepts/storage/volumes/
    Registered: Fri Nov 15 06:32:30 UTC 2024
    - 545.2K bytes
    - Viewed (0)
  5. Configuration | Kubernetes

    Resources that Kubernetes provides for configuring Pods.
    kubernetes.io/docs/concepts/configuration/
    Registered: Fri Nov 15 06:31:27 UTC 2024
    - 424.4K bytes
    - Viewed (0)
  6. Node-specific Volume Limits | Kubernetes

    This page describes the maximum number of volumes that can be attached to a Node for various cloud providers. Cloud providers like Google, Amazon, and Microsoft typically have a limit on how many volumes can be attached to a Node. It is important for Kubernetes to respect those limits. Otherwise, Pods scheduled on a Node could get stuck waiting for volumes to attach. Kubernetes default limits The Kubernetes scheduler has default limits on the number of volumes that can be attached to a Node:
    kubernetes.io/docs/concepts/storage/storage-limits/
    Registered: Fri Nov 15 06:31:22 UTC 2024
    - 429.2K bytes
    - Viewed (0)
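    The scheduling consequence described above boils down to a capacity check: a Node is only a candidate if the Pod's new volumes fit under the Node's attach limit. The limit value used here is an arbitrary placeholder; real limits vary by cloud provider and are reported per Node.

    ```python
    def fits_volume_limit(attached_count, pod_new_volumes, node_limit=16):
        """True if the pod's volumes fit under the node's attach limit.

        node_limit=16 is a placeholder, not any provider's actual limit.
        """
        return attached_count + pod_new_volumes <= node_limit
    ```
    
    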
  7. Debug Running Pods | Kubernetes

    This page explains how to debug Pods running (or crashing) on a Node. Before you begin Your Pod should already be scheduled and running. If your Pod is not yet running, start with Debugging Pods. For some of the advanced debugging steps you need to know on which Node the Pod is running and have shell access to run commands on that Node. You don't need that access to run the standard debug steps that use kubectl.
    kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/
    Registered: Fri Nov 15 06:52:28 UTC 2024
    - 488.8K bytes
    - Viewed (0)
  8. Handling retriable and non-retriable pod failur...

    FEATURE STATE: Kubernetes v1.31 [stable] (enabled by default: true) This document shows you how to use the Pod failure policy, in combination with the default Pod backoff failure policy, to improve the control over the handling of container- or Pod-level failure within a Job. The definition of Pod failure policy may help you to: better utilize the computational resources by avoiding unnecessary Pod retries, and avoid Job failures due to Pod disruptions (such as preemption, API-initiated eviction, or taint-based eviction).
    kubernetes.io/docs/tasks/job/pod-failure-policy/
    Registered: Fri Nov 15 06:53:34 UTC 2024
    - 457.7K bytes
    - Viewed (0)
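    Conceptually, a Pod failure policy is an ordered list of rules that map a failure (for example, a container exit code) to an action such as FailJob, Ignore, or Count; the first matching rule wins. The sketch below is a simplified illustration of that matching, not the Job controller's implementation, and the example exit codes are made up:

    ```python
    def classify_failure(exit_code, rules):
        """rules: ordered list of (action, operator, values); first match wins."""
        for action, op, values in rules:
            if op == "In" and exit_code in values:
                return action
            if op == "NotIn" and exit_code not in values:
                return action
        return "Count"  # default: count the failure toward the backoff limit

    rules = [("FailJob", "In", [42]),   # hypothetical non-retriable bug
             ("Ignore", "In", [137])]   # hypothetical disruption-style kill
    ```
    
    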
  9. Windows debugging tips | Kubernetes

    Node-level troubleshooting My Pods are stuck at "ContainerCreating" or restarting over and over Ensure that your pause image is compatible with your Windows OS version. See Pause container to see the latest / recommended pause image and/or get more information. Note: If using containerd as your container runtime, the pause image is specified in the plugins.plugins.cri.sandbox_image field of the config.toml configuration file. My pods show status as ErrImagePull or ImagePullBackOff
    kubernetes.io/docs/tasks/debug/debug-cluster/windows/
    Registered: Fri Nov 15 06:54:23 UTC 2024
    - 435.9K bytes
    - Viewed (0)
  10. HorizontalPodAutoscaler Walkthrough | Kubernetes

    A HorizontalPodAutoscaler (HPA for short) automatically updates a workload resource (such as a Deployment or StatefulSet), with the aim of automatically scaling the workload to match demand. Horizontal scaling means that the response to increased load is to deploy more Pods. This is different from vertical scaling, which for Kubernetes would mean assigning more resources (for example: memory or CPU) to the Pods that are already running for the workload.
    kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/
    Registered: Fri Nov 15 06:54:33 UTC 2024
    - 485.9K bytes
    - Viewed (0)
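    The scaling rule behind the walkthrough is the documented HPA algorithm: desiredReplicas = ceil(currentReplicas × currentMetricValue / desiredMetricValue). A quick sketch of that arithmetic:

    ```python
    import math

    def desired_replicas(current_replicas, current_value, target_value):
        """HPA scaling rule: scale the replica count by the metric ratio,
        rounding up so the target is never exceeded per replica."""
        return math.ceil(current_replicas * (current_value / target_value))

    # Load doubled relative to target, so the replica count doubles.
    print(desired_replicas(3, 200, 100))  # 6
    ```

    In practice the controller also applies tolerances, stabilization windows, and min/max replica bounds before acting on this number.
    
    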