
What's New in Kubernetes 1.18 Webinar Slides

Slides from webinar about the latest Kubernetes release. Watch the webinar recording: https://bit.ly/k8s-1-18-slide-deck


  1. 1. Copyright © 2020 Mirantis, Inc. All rights reserved What's New in Kubernetes 1.18 WEBINAR | March 17, 2020
  2. 2. 2 The content contained herein is for informational purposes only, may not be referenced or added to any contract, and should not be relied upon to make purchasing decisions. It is not a commitment, promise, or legal obligation to provide any features, functionality, capabilities, code, etc. or to provide anything within any schedule, date, time, etc. All Mirantis product and service decisions remain at Mirantis sole and exclusive discretion. Plus, I can't guarantee what features actually make it into Kubernetes 1.18 when it's released next week. Disclaimer
  3. 3. 3 Featured Presenter Nick Chase Head of Technical Content at Mirantis Nick Chase is Head of Technical Content for Mirantis and a former member of the Kubernetes release team. He is a former software developer and author or co-author of more than a dozen books on various programming topics, including the OpenStack Architecture Guide, Understanding OPNFV, and Machine Learning for Mere Mortals. Reach him on Twitter @NickChase.
  4. 4. 4 A Little Housekeeping ● Please submit questions in the Questions panel. ● We’ll provide a link where you can download the slides at the end of the webinar.
  5. 5. 5 ● Generally Available ● Beta ● Alpha ● Q&A Agenda
  6. 6. Copyright © 2020 Mirantis, Inc. All rights reserved Generally available Production ready and enabled by default
  7. 7. 7 RunAsUserName for Windows
  8. 8. 8 ● Windows worker nodes ● Control plane still runs on Linux RunAsUserName for Windows
  9. 9. 9 apiVersion: v1 kind: Pod metadata: name: username-demo-pod spec: securityContext: windowsOptions: runAsUserName: "ContainerUser" containers: - name: username-demo image: mcr.microsoft.com/windows/servercore:ltsc2019 command: ["ping", "-t", "localhost"] nodeSelector: kubernetes.io/os: windows RunAsUserName for Windows
  10. 10. 10 kubectl apply -f run-as-username-pod.yaml kubectl exec -it username-demo-pod -- powershell echo $env:USERNAME ContainerUser RunAsUserName for Windows
  11. 11. 11 ● Limitations ○ Must be a valid (non-empty) username in DOMAIN\USER format ○ DOMAIN ■ Optional ■ NetBIOS name or DNS name ○ USER ■ <= 20 characters ■ Can have dots or spaces ■ No control characters ■ None of / \ : * ? " < > | RunAsUserName for Windows
  12. 12. 12 Support gMSA for Windows workloads
  13. 13. 13 ● Group Managed Service Account ○ Password management ○ Single identity for group of servers ● Deploy GMSACredentialSpec CRD ● Install validation webhooks (multiple steps) ● Provision gMSAs in Active Directory Support gMSA for Windows workloads
  14. 14. 14 ● Create the GMSACredentialSpec object: apiVersion: windows.k8s.io/v1alpha1 kind: GMSACredentialSpec metadata: name: gmsa-WebApp1 #This is an arbitrary name but it will be used as a reference credspec: ActiveDirectoryConfig: GroupManagedServiceAccounts: - Name: WebApp1 #Username of the GMSA account Scope: CONTOSO #NETBIOS Domain Name - Name: WebApp1 #Username of the GMSA account Scope: contoso.com #DNS Domain Name CmsPlugins: - ActiveDirectory DomainJoinConfig: DnsName: contoso.com #DNS Domain Name DnsTreeName: contoso.com #DNS Domain Name Root Guid: 244818ae-87ac-4fcd-92ec-e79e5252348a #GUID MachineAccountName: WebApp1 #Username of the GMSA account NetBiosName: CONTOSO #NETBIOS Domain Name Sid: S-1-5-21-2126449477-2524075714-3094792973 #SID of GMSA Support gMSA for Windows workloads
  15. 15. 15 ● Configure cluster role to enable RBAC on specific gMSA credential specs apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: webapp1-role rules: - apiGroups: ["windows.k8s.io"] resources: ["gmsacredentialspecs"] verbs: ["use"] resourceNames: ["gmsa-WebApp1"] Support gMSA for Windows workloads
  16. 16. 16 ● Assign role to service accounts to use specific gMSA credentialspecs apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: allow-default-svc-account-read-on-gmsa-WebApp1 namespace: default subjects: - kind: ServiceAccount name: default namespace: default roleRef: kind: ClusterRole name: webapp1-role apiGroup: rbac.authorization.k8s.io Support gMSA for Windows workloads
  17. 17. 17 ● Configure Pod to use the gMSA credential spec apiVersion: apps/v1 kind: Deployment metadata: labels: run: with-creds name: with-creds namespace: default spec: replicas: 1 selector: matchLabels: run: with-creds Support gMSA for Windows workloads template: metadata: labels: run: with-creds spec: securityContext: windowsOptions: gmsaCredentialSpecName: gmsa-WebApp1 containers: - image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019 imagePullPolicy: Always name: iis nodeSelector: beta.kubernetes.io/os: windows
  18. 18. 18 ● Configure container to use the gMSA spec apiVersion: apps/v1 kind: Deployment metadata: labels: run: with-creds name: with-creds namespace: default spec: replicas: 1 selector: matchLabels: run: with-creds Support gMSA for Windows workloads template: metadata: labels: run: with-creds spec: containers: - image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019 imagePullPolicy: Always name: iis securityContext: windowsOptions: gmsaCredentialSpecName: gmsa-WebApp1 nodeSelector: beta.kubernetes.io/os: windows
  19. 19. 19 Raw block device using persistent volume source
  20. 20. 20 ● Volume plugins supporting raw block devices ○ AWSElasticBlockStore ○ AzureDisk ○ CSI ○ FC (Fibre Channel) ○ GCEPersistentDisk ○ iSCSI ○ Local volume ○ OpenStack Cinder ○ RBD (Ceph Block Device) ○ VsphereVolume Raw block device using persistent volume source
  21. 21. 21 ● Persistent Volumes using a Raw Block Volume apiVersion: v1 kind: PersistentVolume metadata: name: block-pv spec: capacity: storage: 10Gi accessModes: - ReadWriteOnce volumeMode: Block persistentVolumeReclaimPolicy: Retain fc: targetWWNs: ["50060e801049cfd1"] lun: 0 readOnly: false Raw block device using persistent volume source
  22. 22. 22 ● Persistent Volume Claim requesting a Raw Block Volume apiVersion: v1 kind: PersistentVolumeClaim metadata: name: block-pvc spec: accessModes: - ReadWriteOnce volumeMode: Block resources: requests: storage: 10Gi Raw block device using persistent volume source
  23. 23. 23 ● Add to container ○ Specify device path instead of mount path apiVersion: v1 kind: Pod metadata: name: pod-with-block-volume spec: containers: - name: fc-container image: fedora:26 command: ["/bin/sh", "-c"] args: [ "tail -f /dev/null" ] volumeDevices: - name: data devicePath: /dev/xvda volumes: - name: data persistentVolumeClaim: claimName: block-pvc Raw block device using persistent volume source
  24. 24. 24 Cloning a PVC
  25. 25. 25 ● Use an existing PersistentVolumeClaim as the DataSource for a new PVC apiVersion: v1 kind: PersistentVolumeClaim metadata: name: cloned-pvc spec: storageClassName: my-csi-plugin dataSource: name: existing-src-pvc-name kind: PersistentVolumeClaim accessModes: - ReadWriteOnce resources: requests: storage: 10Gi Cloning a PVC
  26. 26. 26 Kubectl diff
  27. 27. 27 ● Similar to kubectl apply kubectl diff -f some-resources.yaml ● Specify KUBECTL_EXTERNAL_DIFF to use your favorite diff tool KUBECTL_EXTERNAL_DIFF=meld kubectl diff -f some-resources.yaml kubectl diff
  28. 28. 28 APIServer DryRun
  29. 29. 29 kubectl apply --dry-run=server (replaces the deprecated --server-dry-run flag) APIServer DryRun
  30. 30. 30 Pass Pod information in CSI calls
  31. 31. 31 ● Adds new fields to volume_context for NodePublishVolumeRequest ○ csi.storage.k8s.io/pod.name: {pod.Name} ○ csi.storage.k8s.io/pod.namespace: {pod.Namespace} ○ csi.storage.k8s.io/pod.uid: {pod.UID} ○ csi.storage.k8s.io/serviceAccount.name: {pod.Spec.ServiceAccountName} Pass Pod information in CSI calls
  32. 32. 32 ● Previously required the cluster-driver-registrar sidecar container ○ That container created the CSIDriver object automatically ● Now include the CSIDriver object manually in driver manifests Pass Pod information in CSI calls
  33. 33. 33 apiVersion: storage.k8s.io/v1beta1 kind: CSIDriver metadata: name: testcsidriver.example.com spec: podInfoOnMount: true Pass Pod information in CSI calls
  34. 34. 34 Skip attach for non-attachable CSI volumes
  35. 35. 35 ● Some CSI volume types don't have attach operations: ○ NFS ○ Secrets ○ Ephemeral Skip attach for non-attachable CSI volumes
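For such drivers, the CSIDriver object's attachRequired field lets Kubernetes skip the attach/detach machinery entirely; a minimal sketch (the driver name here is hypothetical):

```yaml
# Sketch of a CSIDriver object for a driver with no attach operation.
# attachRequired: false tells Kubernetes to skip ControllerPublishVolume
# and to not create VolumeAttachment objects for this driver.
apiVersion: storage.k8s.io/v1beta1
kind: CSIDriver
metadata:
  name: testnfsdriver.example.com   # hypothetical driver name
spec:
  attachRequired: false
```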
  36. 36. Copyright © 2020 Mirantis, Inc. All rights reserved Beta Enabled by default, but not necessarily ready for production environments. Not likely to change.
  37. 37. 37 CertificateSigningRequest API
  38. 38. 38 ● Create the request ● Create the object and send to K8s ● Approve the request ○ Manual or automatic ● Associated with a private key ○ Can be held by a pod ■ Identity ■ Authorization ● Be careful who can approve requests! CertificateSigningRequest API
  39. 39. 39 ● Must be set up to serve the certificates API ● Default signer implementation in controller manager ○ Pass CA's keypair --cluster-signing-cert-file and --cluster-signing-key-file to controller manager CertificateSigningRequest API
  40. 40. 40 cat <<EOF | cfssl genkey - | cfssljson -bare server { "hosts": [ "my-svc.my-namespace.svc.cluster.local", "my-pod.my-namespace.pod.cluster.local", "192.0.2.24", "10.0.34.2" ], "CN": "my-pod.my-namespace.pod.cluster.local", "key": { "algo": "ecdsa", "size": 256 } } EOF 2017/03/21 06:48:17 [INFO] generate received request 2017/03/21 06:48:17 [INFO] received CSR 2017/03/21 06:48:17 [INFO] generating key: ecdsa-256 2017/03/21 06:48:17 [INFO] encoded CSR CertificateSigningRequest API
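The generated server.csr is then wrapped in a CertificateSigningRequest object and sent to the cluster; a sketch following the upstream certificates API, with the request field holding the base64-encoded CSR:

```yaml
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: my-svc.my-namespace
spec:
  # Base64-encoded contents of server.csr, e.g. the output of
  # `cat server.csr | base64 | tr -d '\n'`
  request: <base64-encoded server.csr>
  usages:
  - digital signature
  - key encipherment
  - server auth
```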
  41. 41. 41 ● Generates 2 files ○ Actual request (server.csr) ○ Encoded key for the final certificate (server-key.pem) kubectl get csr NAME AGE REQUESTOR CONDITION my-svc.my-namespace 10m yourname@example.com Pending kubectl certificate approve my-svc.my-namespace ● Download to server.crt kubectl get csr my-svc.my-namespace -o jsonpath='{.status.certificate}' | base64 --decode > server.crt ● Use server.crt and server-key.pem as keypair for HTTPS server CertificateSigningRequest API
  42. 42. 42 Even pod spreading across failure domains
  43. 43. 43 ● Pod affinity = unlimited pods per topology domain ● Pod anti-affinity = at most 1 pod per domain ● topologySpreadConstraints covers the range in between apiVersion: v1 kind: Pod metadata: name: mypod spec: topologySpreadConstraints: - maxSkew: <integer> topologyKey: <string> whenUnsatisfiable: <string> labelSelector: <object> Even pod spreading across failure domains
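Filling in the placeholders, a sketch that spreads web pods evenly across zones, using the zone label current in the 1.18 era (pod name and labels are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
  labels:
    app: web            # must match the labelSelector below
spec:
  topologySpreadConstraints:
  - maxSkew: 1          # allow at most 1 pod of difference between zones
    topologyKey: failure-domain.beta.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule   # or ScheduleAnyway for a soft constraint
    labelSelector:
      matchLabels:
        app: web
  containers:
  - name: web
    image: nginx
```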
  44. 44. 44 ● Default policy (alpha) apiVersion: kubescheduler.config.k8s.io/v1alpha2 kind: KubeSchedulerConfiguration profiles: pluginConfig: - name: PodTopologySpread args: defaultConstraints: - maxSkew: 1 topologyKey: failure-domain.beta.kubernetes.io/zone whenUnsatisfiable: ScheduleAnyway Even pod spreading across failure domains
  45. 45. 45 Add pod-startup liveness-probe holdoff for slow starting pods
  46. 46. 46 apiVersion: v1 kind: Pod metadata: labels: test: liveness name: liveness-exec spec: containers: - name: liveness image: k8s.gcr.io/busybox args: - /bin/sh - -c - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600 livenessProbe: exec: command: - cat - /tmp/healthy initialDelaySeconds: 5 periodSeconds: 5 Add pod-startup liveness-probe holdoff for slow-starting pods
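The 1.18 addition itself is the startupProbe, which holds off liveness checks until it succeeds; a sketch of the stanza that would sit alongside the livenessProbe above:

```yaml
# Gives a slow-starting container up to failureThreshold * periodSeconds
# (here 30 * 10 = 300s) to come up before liveness checks take over.
startupProbe:
  exec:
    command:
    - cat
    - /tmp/healthy
  failureThreshold: 30
  periodSeconds: 10
```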
  47. 47. 47 Kubeadm for Windows
  48. 48. 48 ● Create a K8s node on Windows ● Run Windows-based containers ○ For Windows containers get Windows Server 2019 license (or higher) ● Control plane still runs on Linux Kubeadm for Windows
  49. 49. 49 New Endpoint API
  50. 50. 50 ● Services with > 100 endpoints -> EndpointSlices ● EndpointSliceProxying feature gate (alpha) ● Will eventually replace the v1 Endpoints API New Endpoint API
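A minimal EndpointSlice object under the v1beta1 discovery API available in 1.18 (names and addresses here are illustrative):

```yaml
apiVersion: discovery.k8s.io/v1beta1
kind: EndpointSlice
metadata:
  name: example-abc
  labels:
    # ties the slice back to its owning Service
    kubernetes.io/service-name: example
addressType: IPv4
ports:
- name: http
  protocol: TCP
  port: 80
endpoints:
- addresses:
  - "10.1.2.3"
  conditions:
    ready: true
  topology:
    kubernetes.io/hostname: node-1
```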
  51. 51. 51 Node Topology Manager
  52. 52. 52 ● For performance/latency-sensitive workloads ● Coordinates CPU Manager and Device Manager allocations ● Hint providers ● Four supported policies (--topology-manager-policy) ○ none (default) ○ best-effort ○ restricted ○ single-numa-node ● All policies except none take pod resource requests into account Node Topology Manager
  53. 53. 53 ● requests < limits ● Burstable QoS class spec: containers: - name: nginx image: nginx resources: limits: memory: "200Mi" requests: memory: "100Mi" Node Topology Manager
  54. 54. 54 ● requests == limits ● Guaranteed QoS class spec: containers: - name: nginx image: nginx resources: limits: memory: "200Mi" cpu: "2" example.com/device: "1" requests: memory: "200Mi" cpu: "2" example.com/device: "1" Node Topology Manager
  55. 55. 55 ● Only extended resources requested (no CPU or memory) ● BestEffort QoS class spec: containers: - name: nginx image: nginx resources: limits: example.com/deviceA: "1" example.com/deviceB: "1" requests: example.com/deviceA: "1" example.com/deviceB: "1" Node Topology Manager
  56. 56. 56 ● Limitations for Non-Uniform Memory Access ● Max NUMA nodes = 8 ○ avoids state explosion ● Scheduler is not topology-aware ○ Pods can still fail admission on the node ● Only the Device Manager and the CPU Manager support the Topology Manager's HintProvider interface ○ Memory and hugepages are not considered Node Topology Manager
  57. 57. 57 IPv6 support
  58. 58. 58 ● Feature parity with IPv4 ● kubeadm uses the default gateway's network interface for the API server advertise address ○ Specify kubeadm init --apiserver-advertise-address=<ip-address> to change it ○ For example --apiserver-advertise-address=fd00::101 IPv6 support
  59. 59. 59 Pod overhead: account resources tied to the pod sandbox, but not specific containers
  60. 60. 60 kind: RuntimeClass apiVersion: node.k8s.io/v1beta1 metadata: name: kata-fc handler: kata-fc overhead: podFixed: memory: "120Mi" cpu: "250m" ... Pod Overhead: account resources tied to the pod sandbox, but not specific containers apiVersion: v1 kind: Pod metadata: name: test-pod spec: runtimeClassName: kata-fc containers: - name: busybox-ctr image: busybox stdin: true tty: true resources: limits: cpu: 500m memory: 100Mi - name: nginx-ctr image: nginx resources: limits: cpu: 1500m memory: 100Mi
  61. 61. 61 Adding AppProtocol to Services and Endpoints
  62. 62. 62 ● AppProtocol ● Optional field ○ Endpoint ○ EndpointSlice ○ Service ■ UDP, TCP, SCTP Adding AppProtocol to Services and Endpoints
  63. 63. 63 ● Specific protocol ○ postgresql:// ○ https:// ○ mysql:// Adding AppProtocol to Services and Endpoints
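Sketch of a Service port carrying the new appProtocol hint (the service name and selector are illustrative; the appProtocol value itself is free-form):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-db
spec:
  selector:
    app: my-db
  ports:
  - port: 5432
    protocol: TCP            # L4 protocol: TCP, UDP, or SCTP
    appProtocol: postgresql  # application-level protocol hint
```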
  64. 64. Copyright © 2020 Mirantis, Inc. All rights reserved Alpha Disabled by default, may change in the future
  65. 65. 65 Skip Volume ownership change
  66. 66. 66 ● Ownership changes to match securityContext by default ● Can be slow for large volumes ● New fsGroupChangePolicy field ● No effect on ephemeral volumes ○ secret ○ configMap ○ ephemeral Skip Volume Ownership Change
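A sketch of the new field in a pod's securityContext; OnRootMismatch skips the recursive ownership change when the volume root already matches fsGroup (pod name, image, and claim name are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fsgroup-demo
spec:
  securityContext:
    fsGroup: 1000
    # Only change ownership/permissions if the volume root doesn't already
    # match fsGroup; "Always" is the old recursive behavior.
    fsGroupChangePolicy: "OnRootMismatch"
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-pvc   # hypothetical existing claim
```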
  67. 67. 67 Configurable scale velocity for HPA
  68. 68. 68 ● Horizontal Pod Autoscaler ● Highest recommendation in window ● Configure with ○ --horizontal-pod-autoscaler-downscale-stabilization ○ behavior.scaleDown.stabilizationWindowSeconds ● Specify periodSeconds ○ Length of time for which condition must be true Configurable scale velocity for HPA
  69. 69. 69 ● Create defaults Configurable scale velocity for HPA behavior: scaleDown: stabilizationWindowSeconds: 300 policies: - type: Percent value: 100 periodSeconds: 15 scaleUp: stabilizationWindowSeconds: 0 policies: - type: Percent value: 100 periodSeconds: 15 - type: Pods value: 4 periodSeconds: 15 selectPolicy: Max
  70. 70. 70 ● Limit scale down: behavior: scaleDown: policies: - type: Percent value: 10 periodSeconds: 60 - type: Pods value: 5 periodSeconds: 60 selectPolicy: Max Configurable scale velocity for HPA
  71. 71. 71 behavior: scaleDown: policies: - type: Pods value: 4 periodSeconds: 60 - type: Percent value: 10 periodSeconds: 60 Configurable scale velocity for HPA
  72. 72. 72 Provide OIDC discovery for service account token issuer
  73. 73. 73 ● Enables federation of clusters ● Identity provider --> relying parties ● Must be OIDC compliant ● system:service-account-issuer-discovery ClusterRole ○ No role bindings included ○ Admin binds to system:authenticated or system:unauthenticated Provide OIDC discovery for service account token issuer
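Since no bindings ship by default, the admin binds the ClusterRole themselves; a sketch binding it to all authenticated users:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: service-account-issuer-discovery
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:service-account-issuer-discovery
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  # or system:unauthenticated to make discovery public
  name: system:authenticated
```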
  74. 74. 74 Immutable Secrets and ConfigMaps
  75. 75. 75 ● Can be set individually on each Secret or ConfigMap ● Prevents any changes to the data ● Can't be un-set; delete and recreate the object instead Immutable Secrets and ConfigMaps
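The feature is a single top-level immutable field; a sketch of an immutable ConfigMap (name and data are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  log-level: info
# Once true, the data can no longer be changed; to update,
# delete and recreate the object (typically under a new name).
immutable: true
```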
  76. 76. 76 Kubectl debug
  77. 77. 77 ● For containers with no OS / debugging capabilities ● Provides debugging container kubectl alpha debug -it ephemeral-demo --image=busybox --target=ephemeral-demo Defaulting debug container name to debugger-8xzrl. If you don't see a command prompt, try pressing enter. / # Kubectl debug
  78. 78. 78 Run multiple scheduling profiles
  79. 79. 79 ● Policies vs Profiles ● Policies ○ Filter (PodFitsHostPorts, CheckNodeMemoryPressure) ○ Scoring (SelectorSpreadPriority, ImageLocalityPriority) Run multiple Scheduling Profiles
  80. 80. 80 ● Profiles ○ Use plugins ○ Can be enabled, disabled, reordered ○ Extension points (e.g. QueueSort, Permit, Unreserve) ■ Single QueueSort plugin; only one pending-pods queue ○ For example: NodePreferAvoidPods, VolumeRestrictions, PrioritySort ● Request a specific profile via the pod's .spec.schedulerName field Run multiple Scheduling Profiles
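A sketch of a two-profile scheduler configuration (the second profile's name is illustrative; it disables all scoring plugins):

```yaml
apiVersion: kubescheduler.config.k8s.io/v1alpha2
kind: KubeSchedulerConfiguration
profiles:
- schedulerName: default-scheduler
- schedulerName: no-scoring-scheduler   # illustrative second profile
  plugins:
    preScore:
      disabled:
      - name: '*'
    score:
      disabled:
      - name: '*'
```

A pod opts into the second profile by setting `schedulerName: no-scoring-scheduler` in its spec.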
  81. 81. 81 Generic data populators
  82. 82. 82 ● Populate a new PVC via a CRD ● Must have a controller installed ● Same namespace ● Dynamic provisioners must support that resource ● Write your own ○ Create the PV ○ Bind it to the PVC Generic data populators
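With this feature, a PVC's dataSource can reference an arbitrary custom resource handled by an installed populator controller; the Backup kind and example.com API group here are hypothetical:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: populated-pvc
spec:
  dataSource:
    apiGroup: example.com   # hypothetical populator API group
    kind: Backup            # hypothetical CRD, same namespace as the PVC
    name: backup-2020-03-17
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```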
  83. 83. 83 Extending the HugePage feature
  84. 84. 84 ● Not supported on Windows ● Must be pre-allocated ● requests == limits ● Isolated at the container level ● Each container gets its own hugepage limit in its cgroup sandbox, per the spec ● Control via ResourceQuota (like cpu or memory, using the hugepages-<size> token) ● Multiple sizes Extending the HugePage feature
  85. 85. 85 apiVersion: v1 kind: Pod metadata: name: huge-pages-example spec: volumes: - name: hugepage-2mi emptyDir: medium: HugePages-2Mi - name: hugepage-1gi emptyDir: medium: HugePages-1Gi ... Extending the HugePage feature containers: - name: example image: fedora:latest command: - sleep - inf volumeMounts: - mountPath: /hugepages-2Mi name: hugepage-2mi - mountPath: /hugepages-1Gi name: hugepage-1gi resources: limits: hugepages-2Mi: 100Mi hugepages-1Gi: 2Gi memory: 100Mi requests: memory: 100Mi
  86. 86. 86 Training Promotion Special Offer
  87. 87. 87 Mirantis Training - Kubernetes training.mirantis.com Webinar attendees! Get 15% off Mirantis training! Use coupon code: WEBMIR2020 Kubernetes & Docker Bootcamp I (KD100) Learn Docker and Kubernetes to deploy, run, and manage containerized applications 2 days Kubernetes & Docker Bootcamp II (KD200) Advanced training for Kubernetes professionals, preparation for CKA exam 3 days Accelerated Kubernetes & Docker Bootcamp (KD250) Most popular course! A combination of KD100 & KD200 at an accelerated pace, preps for the CKA exam 4 days Kubernetes in Production Bootcamp (KP300) In Development Advanced training focused on production grade architecture, operational best practices, and cluster management. 2 days
  88. 88. 88 Thank You! Q&A Download the slides: bit.ly/k8s-1-18_slides We’ll send you the slides & recording later this week.
