Cloud adoption is shifting toward multi-cloud, and the share of Private Cloud is growing.
Within the Private Cloud, OpenStack and Kubernetes have become the mainstream choices.
[Diagram: cluster layout. Infra nodes #1/#2 run kubelet, a container runtime, a Nexus container registry, and Prometheus/Grafana pods. The control plane (etcd, controller-manager, scheduler, kube-apiserver) sits behind a load balancer. Storage nodes #1..#N each run Ceph OSD, MDS, and MON daemons.]
[Diagram: two deployment models compared.
Legacy: containers on Kubernetes run directly on x86 servers over physical infrastructure — a Layer 2 switch, a hardware LoadBalancer, and SAN- or iSCSI-based storage.
SDDC: containers on Kubernetes run on VM servers over virtual infrastructure — SDC (Software Defined Computing), virtual NFV/SDN networking (vRouter, OVS, virtual LoadBalancer), and SDS (Software Defined Storage) backed by CEPH — on top of the same physical switches and x86 servers.]
• Provides Kubernetes on top of the existing legacy infrastructure
• Deploying Kubernetes for multiple tenants is difficult
• Suited to a single workload for a single department
• Adding a LoadBalancer / Ingress requires manual L4 port work on the network hardware
• For storage, you work with LUNs allocated by a storage engineer, and configuring them over existing SAN equipment or iSCSI adapters requires vendor engineer support
• Capacity expansion requires vendor support
• Separate infrastructure must be purchased per use case: block device, object storage, shared storage, etc.
• Provides Kubernetes on infrastructure running on IaaS (the entire infrastructure can be managed)
• Optimized for deploying Kubernetes for multiple tenants
• Different workloads from different departments can each be served by their own Kubernetes cluster
• When running on OpenStack, the LB is typically built with Octavia (NFV) and is allocated automatically when requested from Kubernetes
• For storage, pools can be created and operated directly, and offered as multiple pools by speed or workload (SAS/SSD/NVMe)
• Block device / shared volume / object storage can all be served from a single Ceph storage cluster
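The multi-pool setup described above is usually wired together in cinder.conf with one backend section per Ceph pool; a minimal sketch — the pool names (`volumes-ssd`/`volumes-hdd`) and the `cinder` RBD user are illustrative assumptions, not taken from this deployment:

```ini
; cinder.conf — one RBD backend per Ceph pool (pool/user names are illustrative)
[DEFAULT]
enabled_backends = ceph-ssd,ceph-hdd

[ceph-ssd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph-ssd
rbd_pool = volumes-ssd
rbd_user = cinder
rbd_ceph_conf = /etc/ceph/ceph.conf

[ceph-hdd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph-hdd
rbd_pool = volumes-hdd
rbd_user = cinder
rbd_ceph_conf = /etc/ceph/ceph.conf
```

Each backend is then exposed as a volume type (via `openstack volume type create` and a `volume_backend_name` property), which is what the `--type ceph-ssd` / `--type ceph-hdd` flags later in this deck select.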
[Diagram: database workloads running on an openstack / KVM / CentOS stack.]
(virtenv) [root@test5 openstack-deploy-train-test]# openstack network create octavia-network
(virtenv) [root@test5 openstack-deploy-train-test]# openstack subnet create --network octavia-network --subnet-range xx.xx.4.0/24 octavia-subnet1
(virtenv) [root@test5 openstack-deploy-train-test]# openstack router add subnet osc-demo-router octavia-subnet1
(virtenv) [root@test5 openstack-deploy-train-test]# openstack security group create octavia-sg
(virtenv) [root@test5 openstack-deploy-train-test]# openstack security group rule create --dst-port 9443 --ingress --protocol tcp octavia-sg
(virtenv) [root@test5 openstack-deploy-train-test]# openstack flavor create --id 100 --vcpus 4 --ram 4096 octavia-flavor
(virtenv) [root@test5 octavia]# source create_single_CA_intermediate_CA.sh
################# Verifying the Octavia files ###########################
etc/octavia/certs/client.cert-and-key.pem: OK
etc/octavia/certs/server_ca.cert.pem: OK
!!!!!!!!!!!!!!!Do not use this script for deployments!!!!!!!!!!!!!
Please use the Octavia Certificate Configuration guide:
https://docs.openstack.org/octavia/latest/admin/guides/certificates.html
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
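The generated certificates are then referenced from octavia.conf; a minimal sketch — the two paths verified by the script output above are taken as-is, while the key filename, client CA filename, and passphrase are assumptions:

```ini
[certificates]
cert_generator = local_cert_generator
ca_certificate = /etc/octavia/certs/server_ca.cert.pem
ca_private_key = /etc/octavia/certs/server_ca.key.pem      ; filename assumed
ca_private_key_passphrase = not-secure-passphrase          ; assumed

[controller_worker]
client_ca = /etc/octavia/certs/client_ca.cert.pem          ; filename assumed

[haproxy_amphora]
client_cert = /etc/octavia/certs/client.cert-and-key.pem
server_ca = /etc/octavia/certs/server_ca.cert.pem
```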
(virtenv) [root@test5 ~]# openstack image create --property hw_scsi_model=virtio-scsi --property hw_disk_bus=scsi --tag amphora
--file packages/octavia/amphora-x64-haproxy-ubuntu.raw --disk-format raw amphora-image
(virtenv) [root@test5 openstack-deploy-train-test]# kolla-ansible deploy -i /etc/kolla/multinode
(virtenv) [root@test5 ~]# openstack keypair create --public-key /root/.ssh/id_rsa.pub octavia_ssh_key
/etc/sysconfig/network-scripts/ifcfg-br-ex
DEVICE=br-ex
NAME=br-ex
ONBOOT=yes
TYPE=OVSBridge
BOOTPROTO=none
IPADDR=<external service network IP>
PREFIX=24
MTU=9000
VLAN=yes
[nova]
availability_zone= ssss
[root@test8 ~]# cat /etc/sysconfig/network-scripts/route-br-ex
xx.xx.4.0/24 via [external service IP] dev br-ex
[Diagram: Neutron/OVS traffic flow with Octavia. Compute nodes 1–3 host worker instances 1–4 and Amphora instances 1–2 (the Octavia VMs). Each instance's eth0 attaches through a security-group bridge (qbr-xxx / tapxxx / qvb-xxx–qvo-xxx veth pair) to br-int, which connects over bond-srv to br-ex (External/Service network) and over bond-tun to br-tun (VXLAN tunneling network). Two cases (case1/case2) show the same path.]
1. Network flow when using Octavia: if you treat the Amphora as just another VM, traffic follows exactly the same path.
2. Security rules are applied to the LoadBalancer as well, per IP and port.
[Diagram: the same Neutron/OVS flow, with one compute node replaced by an Ironic (bare-metal) server attached directly via bond-svr. Worker and Amphora instances connect through security-group bridges (qbr-xxx / tapxxx / qvb-xxx–qvo-xxx) to br-int, then to br-ex (External/Service network) and br-tun (VXLAN tunneling network).]
1. Network flow when using Octavia: if you treat the Amphora as just another VM, traffic follows exactly the same path.
2. Security rules are applied to the LoadBalancer as well, per IP and port.
(virtenv) [root@test5 data]# openstack flavor create --disk 200 --ram 117760 --vcpus 18 worker-flavor
+----------------------------+--------------------------------------+
| Field | Value |
+----------------------------+--------------------------------------+
| OS-FLV-DISABLED:disabled | False |
| OS-FLV-EXT-DATA:ephemeral | 0 |
| disk | 200 |
| id | 8ffead97-9ff7-4627-83e9-8dd59d4db698 |
| name | worker-flavor |
| os-flavor-access:is_public | True |
| properties | |
| ram | 117760 |
| rxtx_factor | 1.0 |
| swap | |
| vcpus | 18 |
+----------------------------+--------------------------------------+
(virtenv) [root@test5 data]# openstack flavor create --disk 200 --ram 32768 --vcpus 8 master-flavor
+----------------------------+--------------------------------------+
| Field | Value |
+----------------------------+--------------------------------------+
| OS-FLV-DISABLED:disabled | False |
| OS-FLV-EXT-DATA:ephemeral | 0 |
| disk | 200 |
| id | 93ed4569-244d-4121-981d-b5f97d7f46ad |
| name | master-flavor |
| os-flavor-access:is_public | True |
| properties | |
| ram | 32768 |
| rxtx_factor | 1.0 |
| swap | |
| vcpus | 8 |
+----------------------------+--------------------------------------+
(virtenv) [root@test5 data]# openstack volume create --size 200 --image centos7-x86-64-2003 --type ceph-ssd --bootable master01-volume
(virtenv) [root@test5 data]# openstack volume create --size 200 --image centos7-x86-64-2003 --type ceph-hdd --bootable test01-volume
(virtenv) [root@test5 data]# openstack server create --volume test01-volume --security-group sg --flavor worker-flavor
--key-name test5-keypair --nic net-id=04f9ad30-b2ab9-b013-d5298de69116,v4-fixed-ip=XX.XX.XX.XX test01
(virtenv) [root@test5 data]# nova list
+--------------------------------------+----------------------+--------+------------+-------------+---------------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+----------------------+--------+------------+-------------+---------------------------------+
| a436192e-664e-47e3-82b7-93c3055b268f | a-test01 | ACTIVE | - | Running | test--network=xx.xx.10.41 |
| 3beab3b0-95e8-4903-9f11-423bb65ca47c | a-test02 | ACTIVE | - | Running | test--network=xx.xx.10.42 |
| 79fe665a-544c-4cc9-aa8f-35bca9cb06ac | a-test03 | ACTIVE | - | Running | test--network=xx.xx.10.43 |
| 1e86268d-cae1-434d-a26e-ef09ffcdee9b | a-test04 | ACTIVE | - | Running | test--network=xx.xx.10.44 |
| 55e50216-0bd3-479d-b66f-ee235fc8a736 | a-test05 | ACTIVE | - | Running | test--network=xx.xx.10.45 |
| c7c8c10a-d4a2-4a03-857d-c60dec95798c | a-test06 | ACTIVE | - | Running | test--network=xx.xx.10.46 |
| 3790eab3-bb85-49f6-99b4-464db1a13d86 | a-test07 | ACTIVE | - | Running | test--network=xx.xx.10.47 |
| 3f7aa9e1-75aa-4118-b586-79104e4a1302 | a-test08 | ACTIVE | - | Running | test--network=xx.xx.10.48 |
| d8f5333b-46d7-4aea-b708-8e9b1fa30dfd | a-test09 | ACTIVE | - | Running | test--network=xx.xx.10.49 |
| 4e9b7eb5-f175-4f99-89e4-c8059ef13240 | a-test10 | ACTIVE | - | Running | test--network=xx.xx.10.50 |
| d43f4c6f-7493-45a2-b804-070a40adfa41 | test--master01 | ACTIVE | - | Running | test--network=xx.xx.10.11 |
| 347c7726-d9e7-4e06-a5fe-3be1f3beae41 | test--master02 | ACTIVE | - | Running | test--network=xx.xx.10.12 |
| 93b9b9ae-d415-4b9b-8710-37611a89462a | test--master03 | ACTIVE | - | Running | test--network=xx.xx.10.13 |
| fe5fdbd6-7949-4dd4-869c-1187e9ae2aec | test01 | ACTIVE | - | Running | test--network=xx.xx.10.21 |
| 507123ed-7d61-41b7-8deb-b0936a9e8cdd | test02 | ACTIVE | - | Running | test--network=xx.xx.10.22 |
| e8fc684f-d26d-4aa3-a1b0-7cf22f586f14 | test03 | ACTIVE | - | Running | test--network=xx.xx.10.23 |
| 7357d45e-e6c6-41e8-b67e-c6ae4e2d3a68 | test04 | ACTIVE | - | Running | test--network=xx.xx.10.24 |
| 5d2ca83f-1064-41c7-b79c-8820d6f25d09 | test05 | ACTIVE | - | Running | test--network=xx.xx.10.25 |
| 2e6ab299-f3c1-4c57-9d29-974ca2227211 | test06 | ACTIVE | - | Running | test--network=xx.xx.10.26 |
| eee6c54a-4948-4b77-ac43-e2e490fa5d04 | test07 | ACTIVE | - | Running | test--network=xx.xx.10.27 |
| 85ea1490-bf06-42c4-85f3-0c8dc8fd4000 | test08 | ACTIVE | - | Running | test--network=xx.xx.10.28 |
| 81d41852-a73b-4286-8f3b-c843b499a6d5 | test09 | ACTIVE | - | Running | test--network=xx.xx.10.29 |
| 396a7f69-952e-4d0b-9dac-e9680cc8912e | test10 | ACTIVE | - | Running | test--network=xx.xx.10.30 |
| 15648b26-3307-42ad-b9c5-425ca420dc8a | test11 | ACTIVE | - | Running | test--network=xx.xx.10.31 |
| f11161a1-722e-49df-b06f-f4573562f4cb | test12 | ACTIVE | - | Running | test--network=xx.xx.10.32 |
+--------------------------------------+----------------------+--------+------------+-------------+---------------------------------+
openstack port set 0a861ba5-c8f2-4b07-aa5c-13041f1bf6c5 --allowed-address ip-address=10.233.0.0/18 --allowed-address ip-address=10.233.64.0/18
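The `openstack port set` above must be repeated for every master/worker instance port, since Neutron port security otherwise drops traffic sourced from the Calico pod (10.233.64.0/18) and service (10.233.0.0/18) CIDRs. A dry-run sketch that only prints the commands for review — the port-ID list is a placeholder (in practice it would come from `openstack port list`):

```shell
#!/bin/sh
# Calico CIDRs used by this deployment (from the command above)
SVC_CIDR="10.233.0.0/18"
POD_CIDR="10.233.64.0/18"

# Placeholder port IDs; normally gathered with: openstack port list -f value -c ID
PORT_IDS="0a861ba5-c8f2-4b07-aa5c-13041f1bf6c5"

for port in $PORT_IDS; do
  # echo instead of executing, so the commands can be reviewed first
  echo openstack port set "$port" \
    --allowed-address ip-address="$SVC_CIDR" \
    --allowed-address ip-address="$POD_CIDR"
done
```

Drop the `echo` once the generated commands look right for your port list.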
inventory/hosts
[master]
test-master01 ansible_host=xx.xx.10.11 api_address=xx.xx.10.11
test-master02 ansible_host=xx.xx.10.12 api_address=xx.xx.10.12
test-master03 ansible_host=xx.xx.10.13 api_address=xx.xx.10.13
[worker]
test0-01 ansible_host=xx.xx.10.21 api_address=xx.xx.10.21
test0-02 ansible_host=xx.xx.10.22 api_address=xx.xx.10.22
test0-03 ansible_host=xx.xx.10.23 api_address=xx.xx.10.23
test0-04 ansible_host=xx.xx.10.24 api_address=xx.xx.10.24
test0-05 ansible_host=xx.xx.10.25 api_address=xx.xx.10.25
---
etcd_kubeadm_enabled: false
bin_dir: /usr/local/bin
loadbalancer_apiserver_port: 6443
loadbalancer_apiserver_healthcheck_port: 8081
upstream_dns_servers:
- xx.xx.xx.76
- xx.xx.xx.76
cloud_provider: external
external_cloud_provider: openstack
k8s-net-calico.yml
nat_outgoing: true
global_as_num: "64512"
calico_mtu: 8930
calico_datastore: "etcd"
calico_iptables_backend: "Legacy"
typha_enabled: false
typha_secure: false
calico_network_backend: bird
calico_ipip_mode: 'CrossSubnet'
calico_vxlan_mode: 'Never'
calico_ip_auto_method: "interface=eth0"
(virtenv) [root@test-master01 ~]# kubectl get node -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
a-svc01 Ready <none> 61m v1.18.9 xx.xx.10.41 <none> CentOS Linux 7 (Core) 3.10.0-1127.el7.x86_64 docker://19.3.13
test-master01 Ready master 62m v1.18.9 xx.xx.10.11 <none> CentOS Linux 7 (Core) 3.10.0-1127.el7.x86_64 docker://19.3.13
test-master02 Ready master 62m v1.18.9 xx.xx.10.12 <none> CentOS Linux 7 (Core) 3.10.0-1127.el7.x86_64 docker://19.3.13
test-master03 Ready master 62m v1.18.9 xx.xx.10.13 <none> CentOS Linux 7 (Core) 3.10.0-1127.el7.x86_64 docker://19.3.13
(virtenv) [root@test-master01 ~]# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-5df9d68dd6-sjgpr 1/1 Running 0 59m
calico-node-5pqb4 1/1 Running 1 61m
coredns-d687dc8df-w84dw 1/1 Running 0 59m
csi-cinder-controllerplugin-664b4964cf-mnbtg 5/5 Running 0 58m
csi-cinder-nodeplugin-28d8b 2/2 Running 0 8m6s
dns-autoscaler-6bb9b476-c5xsj 1/1 Running 0 59m
kube-apiserver-test-master01 1/1 Running 0 64m
(virtenv) [root@test--master01 ~]# kubectl get storageclass
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
cinder-csi cinder.csi.openstack.org Delete WaitForFirstConsumer false 19h
cinder-csi-hdd cinder.csi.openstack.org Delete WaitForFirstConsumer false 17h
cinder-csi-ssd cinder.csi.openstack.org Delete WaitForFirstConsumer false 18h
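The cinder-csi-ssd / cinder-csi-hdd classes above map to the Cinder volume types created earlier; a sketch of what such a StorageClass manifest looks like — the `type` parameter names the Cinder volume type, and the other fields match the `kubectl get storageclass` output:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cinder-csi-ssd
provisioner: cinder.csi.openstack.org
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: false
parameters:
  type: ceph-ssd   # Cinder volume type backed by the SSD pool
```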
(virtenv) [root@test--master01 ~]# tee nginx-ssd.yml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-pvc-cinderplugin-ssd
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: cinder-csi-ssd
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx-ssd
spec:
  containers:
  - name: nginx
    image: nginx   # illustrative image; the pod spec was truncated on the slide
    volumeMounts:
    - mountPath: /var/lib/www/html
      name: csi-data-cinderplugin
  volumes:
  - name: csi-data-cinderplugin
    persistentVolumeClaim:
      claimName: csi-pvc-cinderplugin-ssd
      readOnly: false
(virtenv) [root@test--master01 ~]# tee nginx-hdd.yml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-pvc-cinderplugin-hdd
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: cinder-csi-hdd
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx-hdd
spec:
  containers:
  - name: nginx
    image: nginx   # illustrative image; the pod spec was truncated on the slide
    volumeMounts:
    - mountPath: /var/lib/www/html
      name: csi-data-cinderplugin
  volumes:
  - name: csi-data-cinderplugin
    persistentVolumeClaim:
      claimName: csi-pvc-cinderplugin-hdd
      readOnly: false
(virtenv) [root@test--master01 osc]# kubectl get pod
NAME READY STATUS RESTARTS AGE
echoserver 1/1 Running 0 13m
nginx-hdd 1/1 Running 0 16s
nginx-ssd 1/1 Running 0 20s
(virtenv) [root@test--master01 osc]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                              STORAGECLASS     REASON   AGE
pvc-6c5d5108-075e-4e7f-9935-eea3cb2a057a   1Gi        RWO            Delete           Bound    default/csi-pvc-cinderplugin-ssd   cinder-csi-ssd            22s
pvc-a569676c-96f3-4a77-a92a-f6adf9e646cf   1Gi        RWO            Delete           Bound    default/csi-pvc-cinderplugin-hdd   cinder-csi-hdd            19s
(virtenv) [root@test--master01 osc]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
csi-pvc-cinderplugin-hdd Bound pvc-a569676c-96f3-4a77-a92a-f6adf9e646cf 1Gi RWO cinder-csi-hdd 21s
csi-pvc-cinderplugin-ssd Bound pvc-6c5d5108-075e-4e7f-9935-eea3cb2a057a 1Gi RWO cinder-csi-ssd 25s
# kubectl run echoserver --image=test--master01:5000/google-containers/echoserver:1.10 --port=8080
(virtenv) [root@test--master01 ~]#
kind: Service
apiVersion: v1
metadata:
name: loadbalanced-service
spec:
selector:
run: echoserver
type: LoadBalancer
ports:
- port: 80
targetPort: 8080
protocol: TCP
(virtenv) [root@test--master01 ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.233.0.1 <none> 443/TCP 116m
loadbalanced-service LoadBalancer 10.233.59.140 xx.xx.10.239 80:30534/TCP 3m5s
(virtenv) [root@test--master01 ~]# curl xx.xx.10.239
Hostname: echoserver
Pod Information:
-no pod information available-
Server values:
server_version=nginx: 1.13.3 - lua: 10008
Request Information:
client_address=xx.xx.10.11
method=GET
real path=/
query=
request_version=1.1
request_scheme=http
request_uri=http://xx.xx.4.153:8080/
Request Headers:
accept=*/*
host=xx.xx.4.153
user-agent=curl/7.29.0
Request Body:
-no body in request-
(virtenv) [root@test--master01 ~]# kubectl delete svc loadbalanced-service
service "loadbalanced-service" deleted
1. The controller watches the API server for Ingress events. When it finds an Ingress resource that satisfies its requirements, it begins creating AWS resources.
2. An ALB is created for the Ingress resource.
3. A TargetGroup is created for each backend specified in the Ingress resource.
4. A Listener is created for every port specified in the Ingress resource's annotations; if no port is specified, sensible defaults (80 or 443) are used.
5. A Rule is created for each path specified in the Ingress resource, so that traffic for a given path is routed to the correct TargetGroup.
apiVersion: apps/v1
kind: Deployment
----------
      ports:
      - containerPort: 8080

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-octavia-ingress
  annotations:
    kubernetes.io/ingress.class: "openstack"
    octavia.ingress.kubernetes.io/internal: "false"
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        backend:
          serviceName: webserver
          servicePort: 8080

(virtenv) [root@test--master01 osc]# kubectl get ing
NAME                   CLASS    HOSTS         ADDRESS   PORTS   AGE
test-octavia-ingress   <none>   foo.bar.com             80      7s
(virtenv) [root@test--master01 osc]# kubectl get ing
NAME                   CLASS    HOSTS         ADDRESS        PORTS   AGE
test-octavia-ingress   <none>   foo.bar.com   xx.xx.83.104   80      100s
(virtenv) [root@test--master01 osc]# IPADDRESS=xx.xx.83.104
(virtenv) [root@test--master01 osc]# curl -H "Host: foo.bar.com" http://$IPADDRESS/
Hostname: webserver-598ddccb79-gl8mn
Pod Information:
-no pod information available-
Server values:
server_version=nginx: 1.13.3 - lua: 10008
T. 02-516-0711 E. sales@osci.kr
5F, 32 Teheran-ro 83-gil, Gangnam-gu, Seoul (Samseong-dong, Narakium Samseong-dong A Building)
THANK YOU
AI & Machine Learning Presentation Template
 
%in kempton park+277-882-255-28 abortion pills for sale in kempton park
%in kempton park+277-882-255-28 abortion pills for sale in kempton park %in kempton park+277-882-255-28 abortion pills for sale in kempton park
%in kempton park+277-882-255-28 abortion pills for sale in kempton park
 
%in Harare+277-882-255-28 abortion pills for sale in Harare
%in Harare+277-882-255-28 abortion pills for sale in Harare%in Harare+277-882-255-28 abortion pills for sale in Harare
%in Harare+277-882-255-28 abortion pills for sale in Harare
 
WSO2CON 2024 - WSO2's Digital Transformation Journey with Choreo: A Platforml...
WSO2CON 2024 - WSO2's Digital Transformation Journey with Choreo: A Platforml...WSO2CON 2024 - WSO2's Digital Transformation Journey with Choreo: A Platforml...
WSO2CON 2024 - WSO2's Digital Transformation Journey with Choreo: A Platforml...
 
Crypto Cloud Review - How To Earn Up To $500 Per DAY Of Bitcoin 100% On AutoP...
Crypto Cloud Review - How To Earn Up To $500 Per DAY Of Bitcoin 100% On AutoP...Crypto Cloud Review - How To Earn Up To $500 Per DAY Of Bitcoin 100% On AutoP...
Crypto Cloud Review - How To Earn Up To $500 Per DAY Of Bitcoin 100% On AutoP...
 

[Open Source Consulting] Comparing Kubernetes and Kubernetes on OpenStack, and How to Build Them

  • 1.
  • 2.
  • 3.
  • 4. Organizations adopting cloud are shifting toward multi-cloud, and the share of private cloud is growing. Within private cloud, the adoption of openstack and kubernetes has become mainstream.
  • 5.
  • 6.
  • 7. [Architecture diagram] Infra nodes #1,#2 run the Kubernetes control plane (Kube-API, ETCD, Controller manager, scheduler) alongside a Nexus container registry and Prometheus/Grafana, each with Kubelet, Container Runtime, and container pods. Storage nodes #1..#N each run ceph OSD, ceph MDS, and ceph MON. Worker traffic enters through a Load Balancer.
  • 8.
  • 9. [Comparison diagram] Legacy form: physical infrastructure (Switch, Layer 2 Switch, hardware LoadBalancer, x86 servers) with SAN-based and iscsi-based storage, running Kubernetes containers directly on the servers. SDDC form: physical infrastructure (Switch, x86 servers) virtualized via SDC (Software Defined Computing), with a virtual NFV/SDN LoadBalancer (vRouter, OVS), virtual storage from SDS (Software Defined Storage, CEPH), and Kubernetes running on VM servers.

Legacy form:
• Provides kubernetes on top of the existing legacy infrastructure
• Deploying Kubernetes for multiple tenants is difficult
• Suitable when the service is a single workload owned by a single team
• Adding a LoadBalancer / ingress requires manual L4 port work on the network hardware
• For storage, you work with LUNs allocated by a storage engineer, and configuring with existing SAN equipment or iscsi adapters requires vendor engineer support
• Capacity expansion requires vendor support
• Different infrastructure must be purchased per use case: block device, object storage, shared storage

SDDC form:
• Provides kubernetes on infrastructure on top of IaaS (the entire infrastructure can be managed)
• Optimized for deploying Kubernetes for multiple tenants
• Different workloads from different teams can each be served as a separate Kubernetes cluster
• When running on openstack, the LB is typically built with Octavia (an NFV service) and is allocated automatically when configured from kubernetes
• For storage, you can create and operate pools yourself, with different pools per speed or workload (SAS/SSD/NVMe)
• Block device, shared volume, and object storage are all served from a single ceph storage
  • 10. [Diagram] Databases running on openstack / KVM / CentOS stacks.
  • 11.
  • 12.
(virtenv) [root@test5 openstack-deploy-train-test]# openstack network create octavia-network
(virtenv) [root@test5 openstack-deploy-train-test]# openstack subnet create --network octavia-network --subnet-range xx.xx.4.0/24 octavia-subnet1
(virtenv) [root@test5 openstack-deploy-train-test]# openstack router add subnet osc-demo-router octavia-subnet1
(virtenv) [root@test5 openstack-deploy-train-test]# openstack security group create octavia-sg
(virtenv) [root@test5 openstack-deploy-train-test]# openstack security group rule create --dst-port 9443 --ingress --protocol tcp octavia-sg
(virtenv) [root@test5 openstack-deploy-train-test]# openstack flavor create --id 100 --vcpus 4 --ram 4096 octavia-flavor
(virtenv) [root@test5 octavia]# source create_single_CA_intermediate_CA.sh
################# Verifying the Octavia files ###########################
etc/octavia/certs/client.cert-and-key.pem: OK
etc/octavia/certs/server_ca.cert.pem: OK
!!!!!!!!!!!!!!!Do not use this script for deployments!!!!!!!!!!!!!
Please use the Octavia Certificate Configuration guide:
https://docs.openstack.org/octavia/latest/admin/guides/certificates.html
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
  • 13.
(virtenv) [root@test5 ~]# openstack image create --property hw_scsi_model=virtio-scsi --property hw_disk_bus=scsi --tag amphora --file packages/octavia/amphora-x64-haproxy-ubuntu.raw --disk-format raw amphora-image
(virtenv) [root@test5 openstack-deploy-train-test]# kolla-ansible deploy -i /etc/kolla/multinode
(virtenv) [root@test5 ~]# openstack keypair create --public-key /root/.ssh/id_rsa.pub octavia_ssh_key

/etc/sysconfig/network-scripts/ifcfg-br-ex
DEVICE=br0
ONBOOT=yes
TYPE=OVSBridge
BOOTPROTO=none
NAME=br-ex
DEVICE=br-ex
ONBOOT=yes
IPADDR=<external service network IP>
PREFIX=24
MTU=9000
VLAN=yes
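The resources created in the previous steps (amphora image tag, keypair, security group, flavor, management network) are typically wired into Octavia's configuration. The sketch below is an assumption about how that looks when deploying with kolla-ansible (which merges overrides from /etc/kolla/config/); the option names are standard Octavia [controller_worker] settings, but the file path and the placeholder IDs are not from the slides:

```ini
# /etc/kolla/config/octavia.conf (sketch -- path assumed for kolla-ansible)
# Substitute the real IDs of the resources created in the previous steps.
[controller_worker]
amp_image_tag = amphora                   ; matches --tag amphora on the image
amp_ssh_key_name = octavia_ssh_key        ; keypair created above
amp_secgroup_list = <octavia-sg ID>       ; placeholder
amp_flavor_id = 100                       ; --id 100 from the flavor create
amp_boot_network_list = <octavia-network ID>  ; placeholder
```

After changing these overrides, a `kolla-ansible reconfigure` (or the deploy shown above) propagates them to the Octavia containers.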
  • 14.
[nova]
availability_zone = ssss

[root@test8 ~]# cat /etc/sysconfig/network-scripts/route-br-ex
xx.xx.4.0/24 via [external service IP] dev br-ex
  • 15. 15
  • 16. [Network flow diagram] Compute Nodes 1-3 host Worker Instances 1-4 and Amphora instances 1-2 (Octavia VMs). Each instance's eth0 attaches through a tap device (tapxxx) and a security-group bridge (qbr-xxx / qvb-xxx / qvo-xxx) to br-int, then out via br-tun and bond-tun to the Tunneling (vxlan) Network, or via br-ex and bond-srv to the External (Service) Network (case1 / case2 paths).
1. Network flow with Octavia: if you treat the amphora as just another VM, traffic follows exactly the same path.
2. Security rules are applied to the LoadBalancer as well, per IP and port.
  • 17. [Network flow diagram] Same topology as slide 16, with an Ironic (bare-metal) server attached directly to the External (Service) Network via bond-srv. Compute Nodes host Worker Instances and Amphora instances (Octavia VMs), each connected through tap devices and security-group bridges (qbr-xxx / qvb-xxx / qvo-xxx) to br-int, then via br-tun to the Tunneling (vxlan) Network or via br-ex to the External (Service) Network (case1 / case2 paths).
1. Network flow with Octavia: if you treat the amphora as just another VM, traffic follows exactly the same path.
2. Security rules are applied to the LoadBalancer as well, per IP and port.
  • 18. 18
  • 19.
(virtenv) [root@test5 data]# openstack flavor create --disk 200 --ram 117760 --vcpus 18 worker-flavor
+----------------------------+--------------------------------------+
| Field                      | Value                                |
+----------------------------+--------------------------------------+
| OS-FLV-DISABLED:disabled   | False                                |
| OS-FLV-EXT-DATA:ephemeral  | 0                                    |
| disk                       | 200                                  |
| id                         | 8ffead97-9ff7-4627-83e9-8dd59d4db698 |
| name                       | worker-flavor                        |
| os-flavor-access:is_public | True                                 |
| properties                 |                                      |
| ram                        | 117760                               |
| rxtx_factor                | 1.0                                  |
| swap                       |                                      |
| vcpus                      | 18                                   |
+----------------------------+--------------------------------------+

(virtenv) [root@test5 data]# openstack flavor create --disk 200 --ram 32768 --vcpus 8 master-flavor
+----------------------------+--------------------------------------+
| Field                      | Value                                |
+----------------------------+--------------------------------------+
| OS-FLV-DISABLED:disabled   | False                                |
| OS-FLV-EXT-DATA:ephemeral  | 0                                    |
| disk                       | 200                                  |
| id                         | 93ed4569-244d-4121-981d-b5f97d7f46ad |
| name                       | master-flavor                        |
| os-flavor-access:is_public | True                                 |
| properties                 |                                      |
| ram                        | 32768                                |
| rxtx_factor                | 1.0                                  |
| swap                       |                                      |
| vcpus                      | 8                                    |
+----------------------------+--------------------------------------+
  • 20.
(virtenv) [root@test5 data]# openstack volume create --size 200 --image centos7-x86-64-2003 --type ceph-ssd --bootable master01-volume
(virtenv) [root@test5 data]# openstack volume create --size 200 --image centos7-x86-64-2003 --type ceph-hdd --bootable test01-volume
(virtenv) [root@test5 data]# openstack server create --volume test01-volume --security-group sg --flavor worker-flavor --key-name test5-keypair --nic net-id=04f9ad30-b2ab9-b013-d5298de69116,v4-fixed-ip=XX.XX.XX.XX test01
  • 21.
(virtenv) [root@test5 data]# nova list
+--------------------------------------+----------------+--------+------------+-------------+---------------------------+
| ID                                   | Name           | Status | Task State | Power State | Networks                  |
+--------------------------------------+----------------+--------+------------+-------------+---------------------------+
| a436192e-664e-47e3-82b7-93c3055b268f | a-test01       | ACTIVE | -          | Running     | test--network=xx.xx.10.41 |
| 3beab3b0-95e8-4903-9f11-423bb65ca47c | a-test02       | ACTIVE | -          | Running     | test--network=xx.xx.10.42 |
| 79fe665a-544c-4cc9-aa8f-35bca9cb06ac | a-test03       | ACTIVE | -          | Running     | test--network=xx.xx.10.43 |
| 1e86268d-cae1-434d-a26e-ef09ffcdee9b | a-test04       | ACTIVE | -          | Running     | test--network=xx.xx.10.44 |
| 55e50216-0bd3-479d-b66f-ee235fc8a736 | a-test05       | ACTIVE | -          | Running     | test--network=xx.xx.10.45 |
| c7c8c10a-d4a2-4a03-857d-c60dec95798c | a-test06       | ACTIVE | -          | Running     | test--network=xx.xx.10.46 |
| 3790eab3-bb85-49f6-99b4-464db1a13d86 | a-test07       | ACTIVE | -          | Running     | test--network=xx.xx.10.47 |
| 3f7aa9e1-75aa-4118-b586-79104e4a1302 | a-test08       | ACTIVE | -          | Running     | test--network=xx.xx.10.48 |
| d8f5333b-46d7-4aea-b708-8e9b1fa30dfd | a-test09       | ACTIVE | -          | Running     | test--network=xx.xx.10.49 |
| 4e9b7eb5-f175-4f99-89e4-c8059ef13240 | a-test10       | ACTIVE | -          | Running     | test--network=xx.xx.10.50 |
| d43f4c6f-7493-45a2-b804-070a40adfa41 | test--master01 | ACTIVE | -          | Running     | test--network=xx.xx.10.11 |
| 347c7726-d9e7-4e06-a5fe-3be1f3beae41 | test--master02 | ACTIVE | -          | Running     | test--network=xx.xx.10.12 |
| 93b9b9ae-d415-4b9b-8710-37611a89462a | test--master03 | ACTIVE | -          | Running     | test--network=xx.xx.10.13 |
| fe5fdbd6-7949-4dd4-869c-1187e9ae2aec | test01         | ACTIVE | -          | Running     | test--network=xx.xx.10.21 |
| 507123ed-7d61-41b7-8deb-b0936a9e8cdd | test02         | ACTIVE | -          | Running     | test--network=xx.xx.10.22 |
| e8fc684f-d26d-4aa3-a1b0-7cf22f586f14 | test03         | ACTIVE | -          | Running     | test--network=xx.xx.10.23 |
| 7357d45e-e6c6-41e8-b67e-c6ae4e2d3a68 | test04         | ACTIVE | -          | Running     | test--network=xx.xx.10.24 |
| 5d2ca83f-1064-41c7-b79c-8820d6f25d09 | test05         | ACTIVE | -          | Running     | test--network=xx.xx.10.25 |
| 2e6ab299-f3c1-4c57-9d29-974ca2227211 | test06         | ACTIVE | -          | Running     | test--network=xx.xx.10.26 |
| eee6c54a-4948-4b77-ac43-e2e490fa5d04 | test07         | ACTIVE | -          | Running     | test--network=xx.xx.10.27 |
| 85ea1490-bf06-42c4-85f3-0c8dc8fd4000 | test08         | ACTIVE | -          | Running     | test--network=xx.xx.10.28 |
| 81d41852-a73b-4286-8f3b-c843b499a6d5 | test09         | ACTIVE | -          | Running     | test--network=xx.xx.10.29 |
| 396a7f69-952e-4d0b-9dac-e9680cc8912e | test10         | ACTIVE | -          | Running     | test--network=xx.xx.10.30 |
| 15648b26-3307-42ad-b9c5-425ca420dc8a | test11         | ACTIVE | -          | Running     | test--network=xx.xx.10.31 |
| f11161a1-722e-49df-b06f-f4573562f4cb | test12         | ACTIVE | -          | Running     | test--network=xx.xx.10.32 |
+--------------------------------------+----------------+--------+------------+-------------+---------------------------+
  • 22.
openstack port set 0a861ba5-c8f2-4b07-aa5c-13041f1bf6c5 --allowed-address ip-address=10.233.0.0/18 --allowed-address ip-address=10.233.64.0/18
  • 23.
inventory/hosts

[master]
test-master01 ansible_host=xx.xx.10.11 api_address=xx.xx.10.11
test-master02 ansible_host=xx.xx.10.12 api_address=xx.xx.10.12
test-master03 ansible_host=xx.xx.10.13 api_address=xx.xx.10.13

[worker]
test0-01 ansible_host=xx.xx.10.21 api_address=xx.xx.10.21
test0-02 ansible_host=xx.xx.10.22 api_address=xx.xx.10.22
test0-03 ansible_host=xx.xx.10.23 api_address=xx.xx.10.23
test0-04 ansible_host=xx.xx.10.24 api_address=xx.xx.10.24
test0-05 ansible_host=xx.xx.10.25 api_address=xx.xx.10.25

---
etcd_kubeadm_enabled: false
bin_dir: /usr/local/bin
loadbalancer_apiserver_port: 6443
loadbalancer_apiserver_healthcheck_port: 8081
upstream_dns_servers:
  - xx.xx.xx.76
  - xx.xx.xx.76
cloud_provider: external
external_cloud_provider: openstack
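The [master] and [worker] group names above are custom; kubespray of this era expects the groups kube-master, kube-node, etcd, and k8s-cluster. A sketch of how the custom groups might be mapped onto them via children groups (assuming kubespray's default group naming; not shown on the slides):

```ini
; Sketch: mapping custom inventory groups onto kubespray's expected groups
[kube-master:children]
master

[etcd:children]
master

[kube-node:children]
worker

[k8s-cluster:children]
kube-master
kube-node
```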
  • 24.
k8s-net-calico.yml

nat_outgoing: true
global_as_num: "64512"
calico_mtu: 8930
calico_datastore: "etcd"
calico_iptables_backend: "Legacy"
typha_enabled: false
typha_secure: false
calico_network_backend: bird
calico_ipip_mode: 'CrossSubnet'
calico_vxlan_mode: 'Never'
calico_ip_auto_method: "interface=eth0"

(virtenv) [root@test-master01 ~]# kubectl get node -o wide
NAME            STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION              CONTAINER-RUNTIME
a-svc01         Ready    <none>   61m   v1.18.9   xx.xx.10.41   <none>        CentOS Linux 7 (Core)   3.10.0-1127.el7.x86_64      docker://19.3.13
test-master01   Ready    master   62m   v1.18.9   xx.xx.10.11   <none>        CentOS Linux 7 (Core)   3.10.0-1127.el7.x86_64      docker://19.3.13
test-master02   Ready    master   62m   v1.18.9   xx.xx.10.12   <none>        CentOS Linux 7 (Core)   3.10.0-1127.el7.x86_64      docker://19.3.13
test-master03   Ready    master   62m   v1.18.9   xx.xx.10.13   <none>        CentOS Linux 7 (Core)   3.10.0-1127.el7.x86_64      docker://19.3.13
  • 25.
(virtenv) [root@test-master01 ~]# kubectl get pod -n kube-system
NAME                                           READY   STATUS    RESTARTS   AGE
calico-kube-controllers-5df9d68dd6-sjgpr       1/1     Running   0          59m
calico-node-5pqb4                              1/1     Running   1          61m
coredns-d687dc8df-w84dw                        1/1     Running   0          59m
csi-cinder-controllerplugin-664b4964cf-mnbtg   5/5     Running   0          58m
csi-cinder-nodeplugin-28d8b                    2/2     Running   0          8m6s
dns-autoscaler-6bb9b476-c5xsj                  1/1     Running   0          59m
kube-apiserver-test-master01                   1/1     Running   0          64m
  • 26.
  • 27.
(virtenv) [root@test--master01 ~]# kubectl get storageclass
NAME             PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
cinder-csi       cinder.csi.openstack.org   Delete          WaitForFirstConsumer   false                  19h
cinder-csi-hdd   cinder.csi.openstack.org   Delete          WaitForFirstConsumer   false                  17h
cinder-csi-ssd   cinder.csi.openstack.org   Delete          WaitForFirstConsumer   false                  18h

nginx-ssd.yml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-pvc-cinderplugin-ssd
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: cinder-csi-ssd

(pod spec fragment from the same file, mounting the claim)
  volumes:
    - name: csi-data-cinderplugin
      persistentVolumeClaim:
        claimName: csi-pvc-cinderplugin-ssd
        readOnly: false

nginx-hdd.yml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-pvc-cinderplugin-hdd
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: cinder-csi-hdd

(pod spec fragment from the same file, mounting the claim)
  volumes:
    - name: csi-data-cinderplugin
      persistentVolumeClaim:
        claimName: csi-pvc-cinderplugin-hdd
        readOnly: false
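The `volumes:` fragments above belong inside a pod spec. A minimal sketch of such a pod is shown below; the `nginx` image and the mount path are assumptions for illustration, since the slides only show the pod names and the PVC:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-ssd
spec:
  containers:
    - name: nginx
      image: nginx                        # assumed image; not shown on the slides
      volumeMounts:
        - name: csi-data-cinderplugin
          mountPath: /var/lib/www/html    # assumed mount path
  volumes:
    - name: csi-data-cinderplugin
      persistentVolumeClaim:
        claimName: csi-pvc-cinderplugin-ssd
        readOnly: false
```

Applying it triggers the WaitForFirstConsumer binding shown on the next slide: the Cinder volume is created and bound only once the pod is scheduled.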
  • 28.
(virtenv) [root@test--master01 osc]# kubectl get pod
NAME         READY   STATUS    RESTARTS   AGE
echoserver   1/1     Running   0          13m
nginx-hdd    1/1     Running   0          16s
nginx-ssd    1/1     Running   0          20s

(virtenv) [root@test--master01 osc]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                              STORAGECLASS     REASON   AGE
pvc-6c5d5108-075e-4e7f-9935-eea3cb2a057a   1Gi        RWO            Delete           Bound    default/csi-pvc-cinderplugin-ssd   cinder-csi-ssd            22s
pvc-a569676c-96f3-4a77-a92a-f6adf9e646cf   1Gi        RWO            Delete           Bound    default/csi-pvc-cinderplugin-hdd   cinder-csi-hdd            19s

(virtenv) [root@test--master01 osc]# kubectl get pvc
NAME                       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     AGE
csi-pvc-cinderplugin-hdd   Bound    pvc-a569676c-96f3-4a77-a92a-f6adf9e646cf   1Gi        RWO            cinder-csi-hdd   21s
csi-pvc-cinderplugin-ssd   Bound    pvc-6c5d5108-075e-4e7f-9935-eea3cb2a057a   1Gi        RWO            cinder-csi-ssd   25s
  • 29. 29
  • 30.
# kubectl run echoserver --image=test--master01:5000/google-containers/echoserver:1.10 --port=8080

(virtenv) [root@test--master01 ~]#
kind: Service
apiVersion: v1
metadata:
  name: loadbalanced-service
spec:
  selector:
    run: echoserver
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
  • 31.
(virtenv) [root@test--master01 ~]# kubectl get svc
NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)        AGE
kubernetes             ClusterIP      10.233.0.1      <none>         443/TCP        116m
loadbalanced-service   LoadBalancer   10.233.59.140   xx.xx.10.239   80:30534/TCP   3m5s

(virtenv) [root@test--master01 ~]# curl xx.xx.10.239
Hostname: echoserver

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=xx.xx.10.11
	method=GET
	real path=/
	query=
	request_version=1.1
	request_scheme=http
	request_uri=http://xx.xx.4.153:8080/

Request Headers:
	accept=*/*
	host=xx.xx.4.153
	user-agent=curl/7.29.0

Request Body:
	-no body in request-

(virtenv) [root@test--master01 ~]# kubectl delete svc loadbalanced-service
service "loadbalanced-service" deleted
  • 32.
1. The controller watches Ingress events from the API server. When it finds an Ingress resource that satisfies its requirements, it begins creating AWS resources.
2. An ALB is created for the Ingress resource.
3. A TargetGroup is created for each backend specified in the Ingress resource.
4. A Listener is created for every port specified via the Ingress resource's annotations. If no port is specified, sensible defaults (80 or 443) are used.
5. A Rule is created for each path specified in the Ingress resource, so that traffic for a given path is routed to the TargetGroup created for it.
  • 33.
apiVersion: apps/v1
kind: Deployment
----------
        ports:
          - containerPort: 8080

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-octavia-ingress
  annotations:
    kubernetes.io/ingress.class: "openstack"
    octavia.ingress.kubernetes.io/internal: "false"
spec:
  rules:
    - host: foo.bar.com
      http:
        paths:
          - path: /
            backend:
              serviceName: webserver
              servicePort: 8080

NAME                   CLASS    HOSTS         ADDRESS   PORTS   AGE
test-octavia-ingress   <none>   foo.bar.com             80      7s

(virtenv) [root@test--master01 osc]# kubectl get ing
NAME                   CLASS    HOSTS         ADDRESS        PORTS   AGE
test-octavia-ingress   <none>   foo.bar.com   xx.xx.83.104   80      100s
  • 34.
(virtenv) [root@test--master01 osc]# IPADDRESS=xx.xx.83.104
(virtenv) [root@test--master01 osc]# curl -H "Host: foo.bar.com" http://$IPADDRESS/
Hostname: webserver-598ddccb79-gl8mn

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008
  • 35. T. 02-516-0711  E. sales@osci.kr
5F, 32, Teheran-ro 83-gil, Gangnam-gu, Seoul (Samseong-dong, Narakium Samseong-dong A Building)
THANK YOU