1. Do Linux KVM Hypervisor dream
of GPU-VDI computing?
@naoto_gohko (郷古 直仁)
(Japan OpenStack Users Group / GMO Internet Inc.,)
GPU-Accelerated VDI International Conference 2017 Asia
Community LT
2017/06/16, Okinawa
https://goo.gl/KGZaxW
2. LT presenter (it’s me) #1
• Naoto Gohko / 郷古 直仁
(@naoto_gohko)
• Cloud Service development division,
GMO Internet Inc.,
• Japan OpenStack Users Group (JOSUG) Member.
@MikumoConoHa
3. LT presenter (it’s me) #2
• My trend
“To live a beautiful life”
• Until last December, I kept a night-owl schedule:
arriving at the office at 11:00, leaving at 20:00
• This year,
I start work at 9:00 and leave the office at 18:00
• I was at a loss as to whether to apply for the OpenStack jobs
at OIST via LinkedIn. : )
4. Swift cluster
GMO Internet, Inc.: VPS and Cloud services
Onamae.com VPS (2012/03) :
http://www.onamae-server.com/
Focus: global IPs, provided by simple "nova-network"
tenten VPS (2012/12)
http://www.tenten.vn/
Share of OSS by Group companies in Vietnam
ConoHa VPS (2013/07) :
http://www.conoha.jp/
Focus: Quantum (Neutron) overlay tenant network
GMO AppsCloud (2014/04) : http://cloud.gmo.jp/
OpenStack Havana based 1st region
Enterprise grade IaaS with block storage, object storage,
LBaaS and baremetal compute was provided
Onamae.com Cloud (2014/11)
http://www.onamae-cloud.com/
Focus: low-price VM instances, baremetal compute and object storage
ConoHa Cloud (2015/05/18) http://www.conoha.jp/
Focus: ML2 vxlan overlay, LBaaS, block storage, DNSaaS (Designate)
and original services by keystone auth
(Diagram: OpenStack cluster history.
OpenStack Diablo on CentOS 6.x: Nova, Keystone, Glance, nova-network.
OpenStack Grizzly on Ubuntu 12.04: Nova, Keystone, Glance, Quantum (shared codes).
OpenStack Havana on CentOS 6.x: Keystone, Glance, Cinder, Swift shared cluster, Nova, Ceilometer, baremetal compute, Neutron LBaaS with ovs + gre tunnel overlay.
OpenStack Juno on CentOS 7.x: Keystone, Glance, Nova, Cinder, Ceilometer, Neutron LBaaS, Designate, Swift.)
GMO AppsCloud (2015/09/27) : http://cloud.gmo.jp/
2nd region by OpenStack Juno based
Enterprise grade IaaS with High IOPS Ironic Compute and Neutron LBaaS
(Diagram: 2nd region, upgraded to Juno: GSLB, Swift, Keystone, Glance, Cinder, Ceilometer, Nova, Neutron LBaaS, Ironic.)
5. Do Linux KVM hypervisor
dream of GPU-VDI computing?
What is its meaning?
6. GPU-VDI with Linux KVM hypervisor
Pros
• Linux KVM is open source
Cons
• Linux KVM and the Linux kernel are also open source
7. GPU-VDI with Linux Computing
(not limited to KVM)
Computing acceleration methods
Pass-through
• A) KVM-VM with GPU-PCI pass-through (with OpenStack)
• B) Container deployment (with Kubernetes 1.6~)
GPU virtualization
• C) KVMGT: full GPU virtualization (Intel embedded GPUs only)
API-intercept based
• D) virGL: virtio-GPU driver para-virtualization with KVM
(3D library acceleration on host-GPU OpenGL computing)
• F) Legacy: VMGL (limited to Linux workstations)
• G*) VirtCL: virtio-OpenCL (GPGPU rather than GPU-VDI)
9. A) KVM VM with GPU-PCI
pass-through (with OpenStack)
What is its meaning?
10. OpenStack for Scientific Research
https://www.openstack.org/science/
HPC and HTC (high-throughput computing)
Book URL
The Crossroads of Cloud and HPC: OpenStack for Scientific Research
https://www.openstack.org/assets/science/OpenStack-CloudandHPC6x9Booklet-
v4-online.pdf
11. GPGPU on OpenStack –
The Best Practice for GPGPU Internal Cloud
• My friend, Ohta-san.
(he is the leader of the
Japanese Raspberry-Pi user
group.)
• Open Source Summit Japan
2017
• LinuxCon China 2017
https://speakerdeck.com/masafumi_ohta/gpu-on-openstack
12. GPGPU on OpenStack –
The Best Practice for GPGPU Internal Cloud
• PCI pass-through has
become standard for GPGPU,
• but for GPU-VDI it
depends on the
number of GPUs per
compute node
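The Nova-side wiring for this PCI pass-through approach can be sketched as below; this is a hedged sketch using the Kilo/Liberty-era nova.conf option names, and the NVIDIA product ID (13f2) and flavor name are illustrative placeholders, not from the talk.

```shell
# Hedged sketch of Nova PCI pass-through wiring (Kilo/Liberty-era option names).
# Vendor ID 10de is NVIDIA; the product ID and flavor name are placeholders.
nova_conf='
[DEFAULT]
pci_passthrough_whitelist = { "vendor_id": "10de", "product_id": "13f2" }
pci_alias = { "vendor_id": "10de", "product_id": "13f2", "name": "gpu" }
'
printf '%s\n' "$nova_conf"

# A flavor then requests one GPU per instance via the alias:
flavor_cmd='openstack flavor set g1.large --property "pci_passthrough:alias"="gpu:1"'
printf '%s\n' "$flavor_cmd"
```

With this, each instance consumes a whole physical GPU, which is why the VDI density depends directly on the GPU count per compute node.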
14. How to VDI-GPU with Kubernetes 1.6
• Exp) Nvidia GPU:
•Pass-through PCI-GPU
•Docker run with-in KVM instance
(run KVM instance as a application with
system priviledge.)
Guest VM of Windows is OK
OR
•runv and frakti: Hypervisor-based container
A Windows guest VM is ??
15. GPU with Kubernetes 1.6
• Node affinity/anti-affinity
scheduler: beta
• Special Hardware (like a GPU)
• Multiple-GPU support
for Docker containers
• For CUDA use cases
GPU … CUDA ??
GPGPU?
(VDI is ok)
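The Kubernetes 1.6 GPU scheduling mentioned above can be sketched as a pod spec; a minimal sketch, assuming the alpha resource name of that release (`alpha.kubernetes.io/nvidia-gpu`, which required the `Accelerators=true` feature gate on nodes), with a placeholder image name.

```shell
# Hedged sketch: a Kubernetes 1.6 pod requesting one GPU via the alpha resource.
# The image name is a placeholder; nodes need --feature-gates=Accelerators=true.
pod_yaml='
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  containers:
  - name: cuda-app
    image: example/cuda-app:latest
    resources:
      limits:
        alpha.kubernetes.io/nvidia-gpu: 1
'
printf '%s\n' "$pod_yaml"
# Apply with: printf '%s' "$pod_yaml" | kubectl create -f -
```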
16. How to VDI-GPU with Kubernetes 1.6
• Exp) Nvidia GPU:
•Pass-through PCI-GPU
•Docker run within a KVM instance
(run the KVM instance as an application with system
privileges.)
A Windows guest is OK (PCI pass-through)
OR
•runv and frakti: Hypervisor-based container
A Windows guest VM is ??
17. How to VDI-GPU with Kubernetes 1.6
• Hypervisor-based container:
•Hypernetes: manages Frakti, HyperContainer,
CNI, Volumes…
https://hyper.sh
HyperHQ team:
“Success of CRI: Bringing Hypervisor-based
Containers to Kubernetes” (CloudNativeCon /
KubeCon 2017, Harry Zhang)
18. How to VDI-GPU with Kubernetes 1.6+
• Hypervisor-based container:
• Frakti: a hypervisor-based container runtime
https://github.com/kubernetes/frakti
(… It is similar to a Windows container)
• HyperContainer
https://hypercontainer.io/
• runv: hypervisor based runtime for OCI
https://github.com/hyperhq/runv
• hyperd: control daemon
https://github.com/hyperhq/hyperd
• hyperstart: init service (PID=1)
https://github.com/hyperhq/hyperstart
Frakti and runv use the same container image as runc
19. GPU-VDI with Linux Computing
(not limited to KVM)
Computing acceleration methods
Pass-through
• A) KVM-VM with GPU-PCI pass-through (with OpenStack)
• B) Container deployment (with Kubernetes 1.6~)
GPU virtualization
• C) KVMGT: full GPU virtualization (Intel embedded GPUs only)
API-intercept based
• D) virGL: virtio-GPU driver para-virtualization with KVM
(3D library acceleration on host-GPU OpenGL computing)
• F) Legacy: VMGL (limited to Linux workstations)
• G*) VirtCL: virtio-OpenCL (GPGPU rather than GPU-VDI)
21. VirtCL: A Framework for OpenCL Device
Abstraction and Management
https://www.researchgate.net/publication/273630028
Yi-Ping You, Hen-Jung Wu, Yeh-Ning Tsai, Yen-Ting Chao;
Department of Computer Science, National Chiao Tung University,
Taiwan
(not yet open source;
in development)
22. GPU-VDI with Linux Computing
(not limited to KVM)
Computing acceleration methods
Pass-through
• A) KVM-VM with GPU-PCI pass-through (with OpenStack)
• B) Container deployment (with Kubernetes 1.6~)
GPU virtualization
• C) KVMGT: full GPU virtualization (Intel embedded GPUs only)
API-intercept based
• D) virGL: virtio-GPU driver para-virtualization with KVM
(3D library acceleration on host-GPU OpenGL computing)
• F) Legacy: VMGL (limited to Linux workstations)
• G*) VirtCL: virtio-OpenCL (GPGPU rather than GPU-VDI)
23. D) virGL: virtio-GPU driver
para-virtualization with KVM
(needs guest support)
This is GPU-VDI
24. virGL: Guest/Host OpenGL/DirectX software
• Host/Guest = Qemu
• Gaming, rendering (OpenGL),
encoding, and similar workloads are
accelerated.
https://www.freedesktop.org/wiki/Software/gallium/
25. Virgil 3D GPU project
• For details, please see the
slide of this URL.
• Are Direct3D drivers for it easy??
• It allows the guest
operating system to use the
capabilities of the host GPU
to accelerate 3D rendering.
https://virgil3d.github.io/
26. News in Qemu graphics
• For details, please see the
slide of this URL.
• Linux kernel 4.4+
Qemu 2.5+
(with GL enabled build)
• Not yet: Windows guest,
DirectX(3D) drivers.
• OpenGL support in QEMU UIs
• Spice remote display:
in progress (VDI)
https://www.kraxel.org/slides/qemu-gfx-2016/
27. Exp) virtio-GPU: guest OS
Using virtio-gpu (with virgl aka opengl acceleration)
with libvirt and spice
• Fedora 24 or later (Host/Guest support)
Requirements for Guest:
• Linux kernel 4.4+ (version 4.2 without opengl build)
• Mesa 11.1
• xorg server 1.19, or 1.18 with commit "5627708 dri2: add
virtio-gpu pci ids" backported
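The guest version requirements above can be checked with a small helper; a minimal sketch, assuming GNU `sort -V` is available (it is on any Fedora 24+ guest).

```shell
# Hedged helper to check the guest minimums listed on this slide.
# ver_ge A B -> success (exit 0) when version A >= version B.
ver_ge() {
  [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Example: check the running kernel against the 4.4 minimum for virgl.
kernel="$(uname -r | cut -d- -f1)"
if ver_ge "$kernel" 4.4; then
  echo "kernel $kernel: OK for virtio-gpu with virgl"
else
  echo "kernel $kernel: too old (need 4.4+, or 4.2 without OpenGL)"
fi
```

The same `ver_ge` helper works for checking Mesa or Xorg versions reported by `pkg-config` or the distribution package manager.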
28. Exp) virtio-GPU: host OS
Using virtio-gpu (with virgl aka opengl acceleration)
with libvirt and spice
Requirements for Host:
• Qemu 2.6+
• virglrenderer
• spice-server 0.13.2+ (development release)
• spice-gtk 0.32+ (used by virt-viewer & friends)
Note that 0.32 got a new shared library major version, therefore
the tools using this must be compiled against that version.
• Mesa 10.6
• libepoxy 1.3.1
• libvirt 1.3+
29. Exp) virtio-GPU: libvirt guest config and run
Libvirt guest config:
Client GUI (spice, local only):
virt-viewer --attach $domain
(the final important bit is that spice needs a unix socket
connection for OpenGL to work)
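The libvirt guest config the slide refers to can be sketched as follows; a minimal sketch based on the speaker notes (`gl enable='yes'` and video `model type='virtio'`), with the exact element placement under `<devices>` being an assumption from libvirt's domain XML format. `listen type='none'` keeps spice on the local unix socket that OpenGL requires.

```shell
# Hedged sketch of the libvirt domain-XML fragments for virtio-gpu + virgl.
# These elements go under <devices> in the guest definition (virsh edit $domain):
# spice with gl enable='yes', local socket only, and a virtio video model.
guest_xml="
<graphics type='spice'>
  <listen type='none'/>
  <gl enable='yes'/>
</graphics>
<video>
  <model type='virtio'/>
</video>
"
printf '%s\n' "$guest_xml"
```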
30. Ex) Windows Server 2016 with
Hyper-V service on Linux
nested KVM
For Developer use on Windows VDI-Guest
31. Running Hyper-V in QEMU/KVM Guest
https://ladipro.wordpress.com/2017/02/24/running-hyperv-in-
kvm-guest/
(Feb. 2017, only Intel CPU)
Linux 4.10 or newer kernel (from ELREPO
http://elrepo.org/tiki/kernel-ml)
QEMU 2.7 or later (the latest QEMU 2.9 is better; that is what I tested)
SeaBIOS 1.10 or later (I tested the latest)
And qemu command line must include the +vmx parameter:
“-cpu hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time,+vmx”
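Putting the requirements above together, a full invocation might look like this sketch; only the `hv_*` and `+vmx` -cpu flags come from the slide, while the machine type, memory, and disk image name are illustrative assumptions.

```shell
# Hedged sketch: assembling a QEMU command line for nested Hyper-V.
# Only the -cpu flags come from the slide; machine type, memory, and the
# win2016.qcow2 disk image are illustrative placeholders.
hv_flags="hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time,+vmx"
qemu_cmd="qemu-system-x86_64 -machine q35,accel=kvm -cpu host,${hv_flags} -smp 4 -m 8192 -drive file=win2016.qcow2,if=virtio"
printf '%s\n' "$qemu_cmd"
```

`+vmx` exposes Intel VT-x to the guest so Windows can start its own hypervisor; the `hv_*` enlightenments keep Hyper-V happy on top of KVM.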
32. GPU support running Hyper-V 2016
with Nested KVM
I have not verified it yet.
Pass-through is better.
If you have Nutanix, then Nvidia vGPU is …?
(I want to test it …)
33. Do Linux KVM Hypervisor
dream of GPU-VDI
computing?
GPU-VDI on Linux KVM with virtio-GPU is community open source.
- No Windows driver yet (virtio-GPU)
- No spice client GPU support yet (virtio-GPU)
Editor's Notes
Hi, Everyone.
This title is a play on the book "Do Androids Dream of Electric Sheep?".
My name is Naoto Gohko.
I am working at GMO Internet Inc., which provides OpenStack-based public cloud services,
such as the ConoHa public cloud and the z.com public cloud.
She is ConoHa-chan, the mascot character of the ConoHa public cloud.
Also, I am a member of the OpenStack User Group.
AND my trend is “To live a beautiful life”.
I came to Okinawa, took a tour of OIST, and felt that it is a suitable place to live beautifully.
Since this year, I have started work at 9 o'clock and left the office at 18 o'clock
so that I can use my time effectively.
I am also participating in Ohkawa-san's GPU-VDI working community.
And here,
I will NOT talk about our service development history by OpenStack.
There are Linux KVM clusters.
Our multiple OpenStack clusters are operating in multiple Products within our environment.
Starting with the Diablo cluster with CentOS 6 Linux KVM.
Latest OpenStack cluster for ConoHa/z.com cloud is CentOS 7.2 Linux KVM.
To-day this LT topic is
“Do Linux KVM Hypervisor dream of GPU-VDI computing?”
What is its meaning?
GPU-VDI acceleration with Linux KVM hypervisor.
Pros:
1. Linux KVM is open source.
>> Many people have been using Linux KVM.
Demand on workstations is getting more and more complicated, so the need for GPU acceleration of graphics should be increasing.
However, being open source is a disadvantage for Linux KVM's growth rate as a product.
These are GPU acceleration methods for Linux computing, not limited to KVM:
- Pass-through
- GPU virtualization
- API-intercept based
Exp) Virgil3D for offering a 3D/OpenGL guest;
the virgl Gallium3D driver is in Mesa Git master
I will not explain KVMGT,
but it needs GPU vendor support.
It cannot be used with Nvidia or AMD GPUs.
A) GPU-PCI pass-through.
What is its meaning?
OpenStack is one way to provide computing programmably.
CERN, NASA, and many others use it.
My friend Ohta-san will present about GPGPU usage in OpenStack with PCI-Pass-through.
The GPU usage method supported by OpenStack Nova is PCI pass-through,
and a lot of information exists on the Internet.
This presentation is Open Source Summit Japan 2017.
PCI pass-through has become standard for GPGPU, but for GPU-VDI it depends on the number of GPUs per compute node
B) Container deployment
What is its meaning?
This is Google method.
This method was taught to me by an engineer at a company named Retty.
He worked for Google before.
He showed me a demo starting Windows as the guest OS.
GPU scheduling of Kubernetes 1.6+ is for running GPGPU applications directly in containers.
Yes, In the case of Nvidia, CUDA is commonly used.
So in this case too, PCI pass-through is used, because KVM runs as a system-privileged application in the container.
There is HyperContainer, which is a hypervisor-runtime container.
Get a container image, then start up the container using a hypervisor.
Frakti was adopted for k8s.
I do not know if this is available for VDI, but I am interested in technology.
I thought that this approach was similar to Windows container.
These are GPU acceleration methods for Linux computing, not limited to KVM:
- Pass-through
- GPU virtualization
- API-intercept based
Exp) Virgil3D for offering a 3D/OpenGL guest;
the virgl Gallium3D driver is in Mesa Git master
And this is only GPGPU area.
This paper title is “Enabling OpenCL support for GPGPU in Kernel-based Virtual Machine” in Taiwan.
OpenCL is a computing library in the same family as OpenGL.
I found a master's thesis in a university in Taiwan.
This is an interesting method as a way to benefit from GUEST OS.
These are GPU acceleration methods for Linux computing, not limited to KVM:
- Pass-through
- GPU virtualization
- API-intercept based
Exp) Virgil3D for offering a 3D/OpenGL guest;
the virgl Gallium3D driver is in Mesa Git master
VirGL operates as a virtio-GPU PCI driver.
The host KVM side provides the OpenGL API to the guest.
The freedesktop.org project provides a technique called Gallium
and is trying to use the GPU for both host and guest rendering.
Do you all know Steam?
The service called Steam is an application store that sells games on-line and a multi-platform development environment.
For details, please see the slide of this URL.
Virgil is a research project to investigate the possibility of creating a virtual 3D GPU for use inside qemu virtual machines,
that allows the guest operating system to use the capabilities of the host GPU to accelerate 3D rendering.
For details, please see the slide of this URL.
The new in Qemu graphics support.
For example)
IF we can use virtio-GPU on GUEST OS,
Using virtio-gpu with virgl opengl acceleration with libvirt and spice.
In Fedora 24 or later (host/guest supported)
Requirements for Guest:
Linux kernel 4.4+
Mesa 11.1
Xorg server 1.19, or 1.18 with the virtio-gpu backport
For details, please see the slide of this URL.
And requirements for the host:
Qemu 2.6 or later
virglrenderer
spice-server 0.13.2 or later
spice-gtk 0.32 or later
Mesa 10.6
libepoxy 1.3.1
libvirt 1.3 or later
For details, please see the slide of this URL.
And the libvirt guest config.
This is gl enable='yes'.
The video model flag is model type='virtio'.
And the GPU native client: this is virt-viewer.
The command to start the client graphics is:
“virt-viewer --attach $vm-domain”
Extra)
Hyper-V is also required in Windows guest VDI.
Even with development using Docker, it is necessary for running Linux containers via LinuxKit and Windows Hyper-V containers.
And special settings are required to run Hyper-V Server 2016 on KVM.
I found the following site and verified the operation of Hyper-V 2016 on Nested KVM.
Linux 4.10 or newer kernel
QEMU 2.7 or later (the latest QEMU 2.9 is better)
SeaBIOS 1.10 or later
And qemu command line must include the +vmx parameter:
“-cpu hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time,+vmx”
Parameters with the "hv_" prefix were introduced into qemu to run Hyper-V.