2. 2
History of our services using OpenStack at GMO Internet, Inc.
Nova-network model and Diablo: Onamae.com VPS
Quantum overlay network: ConoHa Grizzly cluster
High performance network: GMO AppsCloud(Havana)
Juno ConoHa: Region, Domain, DNS and SDS
Juno GMO AppsCloud: Ironic and copy offload Cinder
Swift cluster (shared from each OpenStack)
# Agenda
12. 12
Onamae.com VPS (Diablo)
• Nova Network:
– Very simple (LinuxBridge)
– Flat networking is scalable.
• Only 1 NIC per VM.
• Only 1 public network IP.
– Little dependency on MQ (RabbitMQ); synchronous API
• More scalable than Juno, Kilo, Liberty and Mitaka
• Cloud?
– Only virtualization management
But there is no added value, such as free network configuration.
OpenStack service: Onamae.com VPS(Diablo)
13. 13
OpenStack service: Onamae.com VPS(Diablo) model
[Diagram: two compute nodes; on each node the VM's vNIC connects through a tap and a VLAN sub-interface on a Linux bridge to the physical NIC, which attaches to the VLAN network]
16. 16
ConoHa (Grizzly)
• Quantum network:
– Used an early version of the Open vSwitch full-mesh GRE-VLAN overlay
network with the LinuxBridge hybrid model
But
when the cluster grows large,
communication over the GRE mesh tunnels
becomes localized to specific nodes
(together with undercloud L2 network problems)
(broadcast storms?)
OpenStack service: ConoHa(Grizzly)
19. 19
GMO AppsCloud (Havana)
• Service XaaS model:
– KVM compute + private VLAN networks + Cinder + Swift
• Network:
– 10 Gbps wired (10GBase SFP+)
• Network model:
– IPv4 flat VLAN + Neutron LinuxBridge (not ML2) + original Brocade ADX L4 LBaaS driver
• Public API
– Provided the public API
• Ceilometer
• Glance
– Provided (GlusterFS backend)
• Cinder
– HP 3PAR (original active-active multipath) + NetApp
• ObjectStorage
– Swift cluster
• Bare-Metal Compute
– Modified Cobbler bare-metal deploy driver.
OpenStack service: GMO AppsCloud(Havana)
20. 20
OpenStack service: GMO AppsCloud(Havana) model
[Diagram: two compute nodes; each VM's vNICs attach through taps and VLAN sub-interfaces on Linux bridges to the node NICs, which connect to the VLAN networks and the public network]
Neutron, but with a simple LinuxBridge model
(fewer context switches)
>> a virtualized network for high-speed use cases such as game streaming
That is GMO AppsCloud
22. 22
GMO AppsCloud(Havana) public API
[Diagram: a web panel (httpd, PHP) and an API wrapper proxy (httpd, PHP, FuelPHP framework) perform OpenStack API input validation against the customer DB and customer system API, then forward requests through an L7 reverse-proxy endpoint to the Havana Nova, Neutron, Glance, Keystone, Cinder and Ceilometer APIs and the Swift proxy]
29. 29
Multi Region
SSD Only
Scalability
API
Simple and competitive pricing
# Newly Released ConoHa
30. 30
In ConoHa, we added two additional features.
– Multi-location region
– Domain Structure: Application to multi-location region
structure
– 1 Domain == 1 OEM service or Product service
– Domain on API validation wrapper proxy
Multi-Location region and domain structures
31. 31
The meaning of the word
• Domain
• Keystone domain
• With v2 API service (our cloud)
• != DNS Domain
• Location
• Different geographic locations on the Earth
• US(San Jose), JP(Tokyo), SG(Singapore)
• Region
• OpenStack region
• Location != Region
• Multiple Regions can be set up
in one Location
33. 33
CentOS 7.1 x86_64, Juno (RDO), MariaDB
Connect to the Tokyo Keystone from all regions.
Add each region's endpoints to the Tokyo Keystone (a sketch follows below).
We did not need to modify OpenStack code.
Multi Region Setting
# Specs
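A minimal sketch of the endpoint-registration step mentioned above, using python-keystoneclient v2.0. The admin token, URLs, region names and the compute-only filter are placeholders and assumptions; the actual deployment may equally have used the Juno-era keystone CLI or a DB-backed catalog.

# Minimal sketch (values are placeholders, not the real ConoHa endpoints):
# register each remote region's service endpoints in the Tokyo Keystone so
# that all regions authenticate against the same Keystone.
from keystoneclient.v2_0 import client as ks_client

keystone = ks_client.Client(
    token="ADMIN_TOKEN",                          # assumed admin token
    endpoint="http://tokyo-keystone:35357/v2.0",  # assumed Tokyo admin URL
)

for region, host in [("sin1", "compute.sg.example.com"),
                     ("sjc1", "compute.us.example.com")]:
    for svc in keystone.services.list():
        if svc.type != "compute":   # example: register only Nova here
            continue
        url = "https://%s:8774/v2/%%(tenant_id)s" % host
        keystone.endpoints.create(region=region, service_id=svc.id,
                                  publicurl=url, adminurl=url,
                                  internalurl=url)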
34. 34
[Diagram: Tokyo, San Jose and Singapore each run API management and a Keystone API and issue tokens locally; the Keystone DB is read/write in Tokyo only and is replicated read-only to San Jose and Singapore, so users and tenants are never created or deleted outside Tokyo, where our customer base and user administration live]
# User-registration is possible in Japan only
35. 35
# Issues and Restrictions on Multi Region
User-registration is possible in Japan only
VPN performance issue
Issues on replicating token table.
36. 36
[Diagram: for every "VM Create!" request against the OpenStack cluster (Nova, Neutron, Glance, Cinder), each component obtains its own token from the Keystone API, so one request produces a Nova user token, a Neutron token, a Glance token and a Cinder token, and the next request produces four more]
# Bloat access tokens
Too many tokens are created by each component.
37. 37
Setting example.conf
[keystone_authtoken]
token = <token with 100-year expiration>
[neutron_authtoken]
token = <token with 100-year expiration>
[glance_authtoken]
token = <token with 100-year expiration>
[cinder_authtoken]
token = <token with 100-year expiration>
# Issues on replicating token table.
Tokens with a 100-year expiration:
we set it up so that each component reuses a single long-lived token (a sketch follows below).
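A minimal sketch of the idea, assuming the service user, password and auth URL below are placeholders: obtain one token per service user and reuse it, instead of letting Neutron/Glance/Cinder request fresh tokens on every internal call. The 100-year lifetime itself would be controlled on the Keystone server side ([token] expiration in keystone.conf).

# Minimal sketch, not the production procedure. Values are placeholders.
from keystoneclient.v2_0 import client as ks_client

keystone = ks_client.Client(
    username="neutron",                 # assumed service user
    password="SERVICE_PASSWORD",
    tenant_name="service",
    auth_url="http://tokyo-keystone:35357/v2.0",
)

# This token is written into the component's auth settings and reused until
# its (100-year) expiry, so no new rows pile up in the replicated token table.
print(keystone.auth_token)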
40. 40
Swift cluster
GMO Internet, Inc.: VPS and Cloud services
Onamae.com VPS (2012/03) :
http://www.onamae-server.com/
Focus: global IPs, provided by simple "nova-network"
tenten VPS (2012/12)
http://www.tenten.vn/
Shared OSS use by group companies in Vietnam
ConoHa VPS (2013/07) :
http://www.conoha.jp/
Focus: Quantum (Neutron) overlay tenant network
GMO AppsCloud (2014/04) : http://cloud.gmo.jp/
OpenStack Havana based 1st region
Enterprise-grade IaaS with block storage, object storage,
LBaaS and baremetal compute was provided
Onamae.com Cloud (2014/11)
http://www.onamae-cloud.com/
Focus: low-price VM instances, baremetal compute and object storage
ConoHa Cloud (2015/05/18) http://www.conoha.jp/
Focus: ML2 VXLAN overlay, LBaaS, block storage, DNSaaS (Designate)
and original services by Keystone auth
GMO AppsCloud (2015/09/27) : http://cloud.gmo.jp/
2nd region, based on OpenStack Juno
Enterprise-grade IaaS with high-IOPS Ironic compute and Neutron LBaaS was provided
[Diagram: cluster lineage — OpenStack Diablo on CentOS 6.x (Nova, Keystone, Glance, nova-network) shares code with Grizzly on Ubuntu 12.04 (Nova, Keystone, Glance, Quantum with OVS + GRE tunnel overlay); Havana on CentOS 6.x (Nova, Keystone, Glance, Cinder, Neutron LBaaS, Ceilometer, baremetal compute) shares code with and is upgraded to Juno on CentOS 7.x (Nova, Keystone, Glance, Cinder, Ceilometer, Neutron LBaaS, Designate, Ironic, GSLB); all clusters share one Swift cluster]
41. 41
• The cost of operating multiple versions of OpenStack has increased
• It is difficult to upgrade or add new features
Managing multiple sites of OpenStack is a headache.
What are the problems with a multi-cluster setup?
43. 43
ConoHa: based on OpenStack Juno (IaaS)
• Multi-region OpenStack cluster
• Tokyo / Singapore / San Jose
• ... and so on
• Full SSD storage
• Multiple keystone service domain support
• ConoHa and Next service (now in development) ... OEM etc.
• LB as a Service: LVS-DSR (original)
• DNS as a service : OpenStack Designate
• OpenStack API and additional RESTful API
• Multi-language web panel support
• Japanese, English,
Korean, Mandarin Chinese
44. 44
• Create a scope within the domain
– Scoped items
• Flavor
• Images
• Volume type
– Shared items
• Public Networks
• Hypervisor
• Images (Default domain)
• Using Keystone API v2.0
Motivation
45. 45
• We reuse and customize the v3 domain code included in Juno Keystone
– Enable the Domain ID for the Juno Keystone v2 API
• SaaS implementation with python-keystoneclient
– Domain-ID-related processing
and data implementation
Domain ID from the token API (a sketch follows below)
User:
POST /v2.0/tokens
Admin (service):
GET /v2.0/tokens/{id}
Juno Keystone v2 API: does not support domains
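A minimal sketch of how a SaaS layer might read the domain ID out of the customized v2.0 token response. The endpoint, credentials and the exact JSON field name are assumptions; stock Juno Keystone v2.0 does not return domains at all.

# Minimal sketch; URL, credentials and the "domain_id" field are assumptions.
import requests

auth_url = "https://ident.example.com/v2.0/tokens"
body = {
    "auth": {
        "passwordCredentials": {"username": "gnc0000348",
                                "password": "PASSWORD"},
        "tenantName": "gnc0000348",
    }
}

token = requests.post(auth_url, json=body).json()["access"]["token"]

# With the customization, the token section carries the Keystone domain ID
# (e.g. "gnc" for ConoHa), which downstream services use for scoping.
print(token["id"], token.get("domain_id"))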
46. 46
Keystone: wrapper proxy at domain specific keystone endpoint
Domains and user prefix namespace
Domain | Product  | Prefix namespace
gnc    | ConoHa   | gnc
zjp    | JP OEM-1 | zjp
zsg    | SG OEM-1 | zsg
...    | OEM-n    | ...
Example) user: gnc0000348
Image name: gnc_centos7
(a sketch of the prefix lookup follows below)
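A minimal sketch of the prefix-based domain lookup that the API wrapper proxy performs before forwarding a request to the per-domain Keystone endpoint. The mapping, URLs and error handling are assumptions based on the talk; the real proxy is an httpd/PHP (FuelPHP) application.

# Minimal sketch; domain map, endpoint URLs and error handling are assumptions.
PREFIX_TO_DOMAIN = {
    "gnc": "gnc",   # ConoHa
    "zjp": "zjp",   # JP OEM-1
    "zsg": "zsg",   # SG OEM-1
}

DOMAIN_KEYSTONE = {
    "gnc": "https://ident.conoha.example.com/v2.0",
    "zjp": "https://ident.z.example.com/v2.0",
    "zsg": "https://ident.z-sg.example.com/v2.0",
}


def resolve_domain(username):
    """Return the domain ID for a prefixed user name such as 'gnc0000348'."""
    for prefix, domain in PREFIX_TO_DOMAIN.items():
        if username.startswith(prefix):
            return domain
    # Names that do not match any prefix are rejected as cross-domain access.
    raise ValueError("unknown domain for user %r" % username)


def keystone_endpoint_for(username):
    return DOMAIN_KEYSTONE[resolve_domain(username)]


print(keystone_endpoint_for("gnc0000348"))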
47. 47
We released a 2nd service on the same Juno infrastructure.
(2015/10/20 ~)
Added a 2nd domain: cloud.z.com
53. 55
Designate DNS: ConoHa cloud(Juno)
[Diagram: Designate components — client API, Central, DNS backend, storage DB and RabbitMQ, with endpoint and identity handled by the OpenStack Keystone]
Components of the DNS and GSLB(original) back-end services
Application of Designate DNS:
• DNS as a service (tenant)
• Undercloud infra network
• No Keystone auth config
55. 57
Compute and Cinder (ZFS): SSD
Toshiba enterprise SSD
• Strikes the balance of cost and performance we were after.
• Excellent IOPS performance, low latency
Compute local SSD
The benefits of local SSD storage on compute nodes:
• Provides faster storage than booting from Cinder.
• It is easy to take an online live snapshot of a VM instance.
• VM deployment is fast.
ConoHa: the compute option was modified to
• take online live snapshots of VM instances (a sketch follows below).
http://toshiba.semicon-storage.com/jp/product/storage-products/publicity/storage-20150914.html
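A minimal sketch of triggering such a snapshot with python-novaclient. Credentials, the instance name and the auth URL are placeholders, and the live-snapshot behaviour itself comes from ConoHa's modified compute option rather than anything on the client side.

# Minimal sketch; credentials, instance name and auth URL are placeholders.
from novaclient import client as nova_client

nova = nova_client.Client("2", "gnc0000348", "PASSWORD", "gnc0000348",
                          "https://ident.example.com/v2.0")

server = nova.servers.find(name="web01")
# create_image() asks Nova for a server snapshot; with the modified compute
# option this happens online, without stopping the instance.
image_id = nova.servers.create_image(server, "web01-snapshot-2015-10")
print(image_id)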
57. 59
NetApp storage: GMO AppsCloud (Juno)
If the same NetApp clustered Data ONTAP system is used for both
Glance and Cinder storage, copies between OpenStack services can be
offloaded to the NetApp side.
• Create volume from Glance image
(requires that the Glance image does not need conversion, e.g. qcow2 to raw)
• Volume QoS limit: an important function of multi-tenant storage
• Upper IOPS limit per volume (a sketch follows below)
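A minimal sketch of a per-volume-type IOPS cap using python-cinderclient QoS specs. Credentials, names and the NetApp-style spec key "maxIOPS" are assumptions here; the actual keys depend on the backend driver in use.

# Minimal sketch; credentials, names and the "maxIOPS" key are assumptions.
from cinderclient import client as cinder_client

cinder = cinder_client.Client("2", "admin", "PASSWORD", "admin",
                              "https://ident.example.com/v2.0")

qos = cinder.qos_specs.create("iops-limited", {"maxIOPS": "2000"})
vtype = cinder.volume_types.create("netapp-standard")
# Volumes of this type are then capped by the storage backend.
cinder.qos_specs.associate(qos, vtype.id)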
59. 61
Ironic with undercloud: GMO AppsCloud (Juno)
For compute server deployment.
Kilo Ironic and all-in-one
• Compute server: 10G boot
• Cloud-init: network
• Compute setup: Ansible
Undercloud Ironic (Kilo):
it uses a different network and
Ironic baremetal DHCP from the
service baremetal compute
Ironic (Kilo).
(OOO seed server)
Trunk allowed VLANs, LACP
67. 69
OpenStack Swift cluster (5 zones, 3 copies)
[Diagram: LVS-DSR and HAProxy (SSL) front two swift-proxy + Keystone nodes (Xeon E3-1230 3.3 GHz, 16 GB memory); each of the 5 zones runs Swift object servers (Xeon E3-1230 3.3 GHz) and Swift account/container servers (Xeon E5620 2.4 GHz x 2 CPUs, 64 GB memory, SSD x 2)]
68. 70
Swift cluster: multi-auth and multi-endpoint
[Diagram: one shared pool of Swift object, account and container servers is fronted by a swift-proxy + Keystone pair per cluster: Havana AppsCloud, Grizzly ConoHa (Havana, later upgraded to Juno), Juno ConoHa, Juno AppsCloud and Juno Z.com]
72. 74
Finally:
GMO AppsCloud based on Juno OpenStack was released on 10/27/2015.
• SanDisk Fusion ioMemory can also be deployed by Kilo Ironic on Juno OpenStack.
• Compute servers were deployed by Kilo Ironic with an undercloud all-in-one OpenStack.
Compute server configuration was applied with Ansible.
• Cinder and Glance are provided with the NetApp copy-offload storage mechanism.
• LBaaS uses an original Brocade ADX NAT-mode driver.
• LinuxBridge Neutron mode gives the best performance without an L3 switch.
On the other hand, ConoHa based on Juno OpenStack was released on 05/18/2015.
• The Designate DNS and GSLB service was started on ConoHa.
• Cinder storage is SDS, provided by NexentaStor ZFS storage as a single volume type.
• LBaaS is an original LVS-DSR driver.
• The OVS VXLAN overlay Neutron mode allows a higher degree of freedom.
• And the Z.com OEM OpenStack domain lives together with ConoHa.
74. 76
Develop OpenStack related tools
Docker Machine: a tool that creates Docker hosts (Golang).
We fixed a problem and sent a pull request.
Vagrant provider for ConoHa:
https://github.com/hironobu-s/vagrant-conoha
75. 77
CLI tool that handles ConoHa-specific APIs (Golang):
https://github.com/hironobu-s/conoha-iso
WordPress plugin that saves media files to Swift (Object Store):
https://wordpress.org/plugins/conoha-object-sync/
Develop OpenStack-related tools
Editor's Notes
Hi everyone.
We are from a team at GMO Internet that focuses on developing services based on OpenStack.
My name is Hyuntae Park and I am the manager of this team, involved in many OpenStack projects.
We recently released a public cloud service called ConoHa, which was built on OpenStack Juno.
We have always been developing products for the Japan market, but this product is targeted at users globally.
We've developed virtual server products in the past, but all of them were very simple products for users in Japan.
Because ConoHa is targeted at users globally, we faced architecture challenges that we didn't have before.
One example is "Multi Region".
I'd like to introduce ConoHa's multi-region structure and our operational know-how.
The agenda for today is as shown in this slide.
Introduction to OpenStack based services we developed in the past.
Overview of Multi Region.
Our original extensions to OpenStack.
Multi region supported Domain
Within all the services we've launched so far, we have 2,000 active physical nodes and over 100,000 VMs activated.
I will dive into the newly released ConoHa’s features.
As Park-san mentioned, multiple OpenStack clusters are operating in multiple products within our environment.
Primarily this is very inefficient in terms of cloud operation, and it goes against the fundamental business principle of "selection and concentration".
Starting with the Diablo cluster, we've built many OpenStack clusters such as Grizzly, Havana and Juno, and they are still in operation.
In terms of the Swift cluster, we share it among multiple OpenStack clusters by deploying a swift-proxy per Keystone auth.
However, it takes a lot of time and cost to deploy an OpenStack cluster each time.
There is a stackforge project called Tricircle that deals with OpenStack deployment across multiple sites, but for now we don't want to complicate our operation.
Now, let me explain how OpenStack is used at GMO Internet.
Based on the feedback we received from our users, we decided to upgrade ConoHa using the latest version of OpenStack.
In order for ConoHa to be accepted by users all over the globe, we needed to make sure we have the following features.
Multi region
SSD ONLY
Scalability
API
competitive pricing
For the time being, I will focus on talking about the Multi region feature as it is very unique.
Park-san has talked about the region structure in Multi-Region and how it works in the ConoHa cloud environment.
We have more to that.
And they are,
Application to multi location region structure (Domain structure).
Application of one domain to 1 OEM service or product service.
I’m going to dive into the details of these 2 features.
I’d like to take a few minutes to go over the definition of each terms.
When I say Domain, it refers to the Keystone Domain.
I will talk specifically about the domain that is operated with the Keystone V2 API.
I will also talk about the Designate DNS domain but that’s not all.
Location means areas that are physically in a location that are far from each other.
In our cases, they will be US (San Jose), Japan (Tokyo) and SG (Singapore).
Region refers to OpenStack region.
Which means that Location and Region are not used as same meanings.
There are cases where we set up multiple Regions in one Location.
Supporting multi region was our first priority out of all the features in the roadmap.
Physical location of the servers means a lot to our users.
The data center location we chose initially were Tokyo, Singapore and San Jose.
We’ve successfully built a multi region architected OpenStack environment between the 3 locations.
Now I’m going to explain the actual process we took to architect multi region.
The OS and OpenStack versions for ConoHa are as shown.
We were able to build it just by connecting to Tokyo KeyStone from all regions and adding each region endpoints to Tokyo KeyStone.
We of course didn’t have to modify any of OpenStack codes.
Restriction 1: The system that manages both the service site and user information only exists in Japan. Therefore user registration was only available in Japan.
Restriction 2: We of course use 10G network interfaces between physical servers and switches, but the network used for database replication between the regions is a 10 Mbps VPN, so it is not stable.
Restriction 3: As we anticipated tens of thousands of token issues per day, it was clear that the table capacity was not suitable for long-distance replication.
Although we were able to delete all the tokens that were automatically issued by components such as Glance and Neutron, we still anticipated a tremendous amount of queries.
I will talk more about it in the next page.
For internal transactions, we were able to reduce token creation by setting up tokens with a 100-year expiration date for each component such as Neutron and Glance.
As per the 3 restrictions I’ve mentioned, we had to give up the initial idea of a very simple multi region structure.
We separated the Keystone functionality from the database and then picked out the parts that require region-to-region replication and the parts that don't.
When handling data replication across 3 different regions over a very narrow bandwidth, we knew there was a chance that the data discrepancies caused by a massive amount of tokens would fail to sync.
Also, if the data replication between the regions happens to fail, services in the other regions will stop, and it would be difficult for us to reconcile the data in the recovery process.
Therefore we decided to architect as shown in the next slide.
Thank’s Park-san.
Now I’m going to talk about the relationship between the keystone V2 API domain operation and regions within the OpenStack authentication in Juno.
Why?
Well, what does this mean?
The cost of operating multiple versions of OpenStack has increased,
and
it is difficult to upgrade or add new features.
Managing multiple sites of OpenStack is a headache for us.
It is a dark age for cloud suppliers.
Even in that circumstance, we've developed a multi-location environment with OpenStack Juno and launched ConoHa.
Speaking of infrastructure features, we've deployed features such as DNS and LBaaS.
As a Keystone feature of the IaaS authentication infrastructure, we made it possible to split user management by domain.
Our motivation is this: a domain creates a scope within itself, and users belonging to a domain can use both scoped items and shared items.
We also use Keystone API v2.0 from the viewpoint of client and service support.
The Juno Keystone v2 API doesn't support Domains.
We reuse and customize the v3 domain code included in Juno Keystone.
Then we use the domain ID with the Juno Keystone v2 API.
What you see in the slide is the actual token information in the response JSON.
You can see "gnc" as a Domain ID.
By doing this, the Domain ID is available in the token, and as long as the SaaS we built on top of OpenStack processes requests based on the Domain ID, it works just fine.
There’s one more thing we need to do.
In our IaaS case, there is a wrapper proxy for Validation check in the Public API.
In Keystone, you determine the Domain either by the prefix of user name and tenant name or by looking up user ID or tenant ID in the keystone DB.
Also for other Nova, Glance and Cinder that we want to Scope, we add prefix name.
Once the Domain ID is determined, the request is routed to the Keystone API instance running for that Domain.
This October, we released our 2nd service on same Juno infrastructure.
We added a domain for the OEM service of our group company, "z.com".
z.com is a brand domain and an OEM partner in Asia that mainly distributes GMO Internet Group companies' services.
The 3 regions, which include the US, Singapore and Japan, are also available with this product.
Different domain IDs are used for each OEM partner.
By adding the Z.com product, the Keystone endpoints related to the z.com domain are separated into an admin API endpoint and a public API endpoint for users.
The PaaS and SaaS that we built using Keystone auth are designed to look up the Admin URL of this default domain.
This is because the domain ID is in the Keystone DB, and for reverse lookups such as GET token on the admin side, there is no issue with using the default-domain Keystone.
Also, ConoHa and z.com each have the wrapper proxy program introduced earlier in front of the Public API, Internal API and Admin API.
This is the detail of multi-domains and multi-endpoints.
The left side is the "gnc" domain for the ConoHa service.
The right side is the "zjp" domain for the z.com service.
When users access from the Internet, they connect to the wrapper proxy program, a PHP and nginx system.
A validation check verifies the values against the OpenStack DB (for example: nova, cinder, neutron or keystone).
Then, after passing the validation check, the request reaches the real Keystone.
The "zjp" domain on the other side follows the same scheme, but with different API value validation and a different API endpoint.
Keystone API servers are started per domain because each service domain needs different responses, such as its own catalog of endpoint URLs; if a "zjp" user accesses the "gnc" domain, the wrapper proxy rejects it as a cross-domain error.
Here is our example of how Keystone API servers are set up for each domain.
The default domain ID is an arbitrary name; you then use the default_catalog.templates file for each region and set it up to answer API calls.
As an endpoint registration method, this turned out to be easier to use and more versatile than SQL-based configuration.
Where the IP addresses appear, you would put the hostnames registered in DNS for each product.
DomainごとのKeystone API serverの設定例を示します。
Default Domain IDとして任意のものを入れて、リージョンごとに、default_catalog.templatesファイルをつかって、API応答するように設定します。
END point登録処理としては、こちらのほうが汎用性と使いやすさがSQLの設定よりあることがわかりました。
上記では、IPアドレスになっていますが、ここをProductごとにDNSに登録したhostnameにします。
===============