9. Computing Block
- Function as a Service
- Container as a Service
- Platform as a Service
- Virtual Machine Instance
Inside a Component:
- Stateless Firewall (IP/Port restriction)
- Load Balancer
- Subnet(s)
17. Links
- https://www.terraform.io/
- Y. Brikman, Terraform: Up and Running, O'Reilly (2019, 2nd ed.) https://www.amazon.co.uk/dp/1492046906
- https://docs.microsoft.com/en-us/azure/architecture/aws-professional/services
- https://docs.microsoft.com/en-us/azure/architecture/gcp-professional/services
- https://github.com/giuliov/terraform-fun
- https://www.slideshare.net/giuliov
18. Hardware spec:
- 1 KB RAM (16 KB after upgrade)
- 4 KB ROM (8 KB after upgrade)
Giulio Vian, Senior DevOps Engineer
Good afternoon everyone, and thank you for taking the time to attend this session.
We will explore how to abstract our Terraform code from being provider-specific and how to leverage a few cool features of Terraform 0.12 and 0.13.
So, even if you are not interested in technological agnosticism, you might enjoy a practical example of the latest Terraform capabilities.
All the code is publicly available in GitHub.
I was involved in a customer project and the customer insisted on using two major cloud vendors.
This is a common request from big customers: they do not want to put all their eggs in the same basket.
Studying the two vendors' documentation and knowing Terraform well, I demonstrated the ability to build the same infrastructure on either platform.
How did it work out in the end? Like most customers, management was uncomfortable with the multi-cloud concept and opted for a multi-vendor strategy instead: different workloads land on different cloud providers.
I think that consulting businesses can and should go multi-cloud.
In preparing this session, I chose to focus on the best-known and most widely used cloud platforms: Amazon AWS and Microsoft Azure. My customer made a different choice at the time.
Before we move on, an important announcement: I will not explain the basics of Terraform, because I assume you already know them, at least enough to understand the samples.
I hope you will discover with me that clouds are more similar than different.
But there is more.
The sample code uses Terraform 0.13 and will not work with earlier versions.
To allocate a Virtual Machine in AWS through Terraform, you write code that is specific to the AWS provider.
In this basic example you ask Terraform to configure an "aws_instance" resource. The fundamental properties to set up a VM are:
- the operating system image, which can be a bare-bones Linux, a full-fledged Oracle instance, or a custom configuration you set up yourself;
- the computing resources to use, mainly CPU and RAM, but possibly including GPUs, special networking, or special hardware;
- how the VM is connected to the network;
- and the machine identifier, which in AWS is a special tag.
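A minimal sketch of such a configuration might look like this (the region, AMI ID, and subnet variable are placeholders, not the values from the original demo):

```hcl
provider "aws" {
  region = "eu-west-1" # the region is tied to the provider configuration
}

variable "subnet_id" {
  description = "Existing subnet to attach the VM to (placeholder)"
  type        = string
}

resource "aws_instance" "demo" {
  ami           = "ami-0123456789abcdef0" # OS image (placeholder AMI ID)
  instance_type = "t2.micro"              # CPU/RAM sizing
  subnet_id     = var.subnet_id           # network attachment

  tags = {
    Name = "demo-vm" # in AWS the machine identifier is a special tag
  }
}
```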
Now, let’s take a look at the equivalent code for Azure.
This is the code to allocate a Ubuntu virtual machine in Azure.
You can notice some important differences that we will have to smooth out to make our code agnostic.
The Region (location) is an explicit parameter, while for AWS it is tied to the provider configuration.
The OS image is identified using four keys instead of a single identifier.
The networking is quite different as Azure has a separate resource, while AWS is just a property of the instance.
The Azure provider requires you to specify settings that have default values in AWS.
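For reference, a sketch of the Azure equivalent in 0.13-era azurerm syntax; the names, location, and the resource group and subnet variables are placeholders:

```hcl
provider "azurerm" {
  features {}
}

variable "resource_group_name" {
  type = string # placeholder: an existing resource group
}

variable "subnet_id" {
  type = string # placeholder: an existing subnet
}

# Networking is a separate resource in Azure, not a property of the VM
resource "azurerm_network_interface" "demo" {
  name                = "demo-nic"
  location            = "westeurope" # the region (location) is explicit
  resource_group_name = var.resource_group_name

  ip_configuration {
    name                          = "internal"
    subnet_id                     = var.subnet_id
    private_ip_address_allocation = "Dynamic"
  }
}

resource "azurerm_linux_virtual_machine" "demo" {
  name                  = "demo-vm"
  location              = "westeurope"
  resource_group_name   = var.resource_group_name
  size                  = "Standard_B1s"
  admin_username        = "adminuser"
  network_interface_ids = [azurerm_network_interface.demo.id]

  admin_ssh_key {
    username   = "adminuser"
    public_key = file("~/.ssh/id_rsa.pub")
  }

  # Settings like the OS disk have no default and must be spelled out
  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }

  # The OS image is identified by four keys instead of a single AMI ID
  source_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "18.04-LTS"
    version   = "latest"
  }
}
```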
Now, given the similarities how do we generalise the code?
"We can solve any problem by introducing an extra level of indirection."
I bet this isn’t a real surprise for you, right?
In practice, we must abstract the differences between providers through Terraform modules.
The module parameters must be provider-neutral and translated to provider-specific values and formats.
It is important to pick the right abstractions so that we end up with a rich model where we can define a great deal of detail and can combine simple components in complex ways.
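As a hypothetical illustration of such a provider-neutral module (the module path, OS names, and AMI ID are made up), a single neutral `os` parameter can be translated to each provider's image format through lookup tables:

```hcl
# modules/vm/variables.tf — provider-neutral input (hypothetical module)
variable "os" {
  description = "Neutral OS name, translated per provider"
  type        = string
}

# modules/vm/main.tf — translation to provider-specific values
locals {
  # Azure identifies an image by four keys...
  azure_images = {
    "ubuntu-18.04" = {
      publisher = "Canonical"
      offer     = "UbuntuServer"
      sku       = "18.04-LTS"
      version   = "latest"
    }
  }
  # ...while AWS uses a single AMI ID (placeholder value)
  aws_amis = {
    "ubuntu-18.04" = "ami-0123456789abcdef0"
  }
}
```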
My recommendation is a top-down approach.
A top-down approach starts by looking at the overall architecture of our systems and moves down to finer-grained components.
The diagram illustrates my personal choice for decomposing a system.
The Global block contains cross-cutting services like IAM / AAD (that is, users, groups, and permissions) and the networking that connects resources across regions (mostly for a disaster recovery implementation).
Within a region, you have a segment which represents an application, living in a distinct part of the network. A segment may represent availability zones too.
A Data block can be an S3 bucket / Azure Storage or an RDS / Azure Database instance.
Microsoft even has a couple of pages mapping AWS and GCP services to the Azure equivalent.
Note that you do not need to abstract every possible component.
For example, networking infrastructure like ExpressRoute (Azure) / Direct Connect (AWS) can be set up once and plugged into the abstract modules.
Terraform data sources are also a great help in decoupling modules and abstracting resources.
The goal is to minimise the migration effort.
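For instance (a sketch, with a made-up tag value and AMI ID), a module can look up an existing subnet by tag instead of receiving its ID as an input parameter:

```hcl
# Look up a subnet created elsewhere, by its Name tag
data "aws_subnet" "app" {
  filter {
    name   = "tag:Name"
    values = ["app-segment"] # placeholder tag value
  }
}

resource "aws_instance" "vm" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t2.micro"
  subnet_id     = data.aws_subnet.app.id # no subnet_id input needed
}
```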
Let’s look at a bit more detail before delving into code.
This is just a decomposition example.
A computing block abstracts network and computing resources that are tightly bound together. For example, an auto-scaling group / VM scale set needs a load balancer, and ports must be open for traffic to flow in and out.
This abstract block can be further specialised as a serverless (Function/Lambda) resource, a container (ACI/ECS/Fargate), or a VM.
I think this is enough abstract talk; let’s see a concrete example.
The demo code is not a full-blown decomposition. It demonstrates the allocation of a Virtual Machine in either AWS or Azure.
(switch to demo)
And this wraps up the demo.
We learned a few things about Terraform 0.13.
The count meta-argument can now be used with modules, which in my opinion is the best new feature.
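A sketch of the idea (the module paths and the variable are hypothetical): count selects which provider-specific module is instantiated, so the same root configuration can target either cloud.

```hcl
variable "cloud" {
  type = string # "aws" or "azure"
}

module "aws_vm" {
  source = "./modules/aws-vm" # hypothetical module path
  count  = var.cloud == "aws" ? 1 : 0
}

module "azure_vm" {
  source = "./modules/azure-vm" # hypothetical module path
  count  = var.cloud == "azure" ? 1 : 0
}
```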
You can use objects to simplify and reduce the number of module parameters.
Variables can be validated before use, producing a meaningful error message when required.
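Both points can be sketched together (the names are illustrative): a single object variable replaces several scalar parameters, and a validation block rejects bad input with a clear message.

```hcl
variable "vm_spec" {
  description = "One object instead of many scalar parameters"
  type = object({
    size = string # e.g. "small", "medium"
    os   = string # e.g. "ubuntu-18.04"
  })

  validation {
    condition     = contains(["small", "medium", "large"], var.vm_spec.size)
    error_message = "size must be one of: small, medium, large."
  }
}
```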
The region of the AWS provider can be an expression, dynamically calculated, and you can pass this provider configuration to submodules.
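In outline (the variable and module path are hypothetical), that looks like this:

```hcl
variable "regions" {
  type = list(string)
}

provider "aws" {
  alias  = "primary"
  region = var.regions[0] # the region is computed from an expression
}

module "network" {
  source = "./modules/network" # hypothetical module path
  providers = {
    aws = aws.primary # explicit provider configuration passed down
  }
}
```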
Terraform Data sources are a great way to simplify your modules, reducing coupling and the number of input / output parameters.
Here is a collection of hopefully useful links.
Terraform documentation.
The best-known book for learning Terraform. In case you bought the first edition, like me, the second edition is finally out.
A couple of Microsoft documentation pages listing the equivalent AWS or GCP services side by side.
The link of the GitHub repository with the complete source code and the link to this presentation’s slides.
Some information on yours truly.
I started with poor hardware when writing assembly code was not exceptional.
I worked for some companies over the years in quite a few different roles, now I work for Unum, a Fortune 500 insurance company.
Recognised by Microsoft with the Most Valuable Professional award in the last 5 years, I like to help communities throughout Europe.
And here are some references if you want to get in touch with me.