Effortless Use of Terraform with GitLab and Azure DevOps Pipelines for VM Deployment on AWS, Azure, and GCP

Introduction

Tiago Dias Generoso · Nov 27, 2023

In this era of Digital Transformation, where businesses are migrating their operations to the Cloud, automation becomes our guiding light. As we navigate this transformative landscape, the key to streamlining manual tasks, mitigating errors, and simplifying infrastructure creation lies in Infrastructure as Code (IaC). And when we talk IaC, we think Terraform.

I won’t delve into the complexities of Terraform theory here. Instead, join me on a hands-on journey where I leverage the dynamic duo of GitLab and Azure DevOps Pipelines to orchestrate a flawless Infrastructure deployment.

Why? Because sharing our learning experiences is a catalyst for collective growth. As I navigate the realms of IaC using pipelines, I want to demystify the process for you, offering insights I’ve discovered.

Whether you’re a seasoned pro or just starting your IaC adventure, there’s something for everyone. Let’s unravel the magic of Terraform and make our Cloud deployments a breeze!

My goal is not to provide ready-to-copy-and-paste code that guarantees immediate functionality; instead, I aim to demonstrate and explain the underlying workings of the code.

Ready to dive in? Let’s embark on this learning journey together!

Implementation Plan

In line with our goal to simplify the process of Virtual Machine (VM) creation using Pipelines for AWS, Azure, and GCP, this article outlines a straightforward implementation plan. Let’s break it down into manageable steps:

Prepare Cloud Provider Environments (AWS, Azure, GCP):

  • Set up projects, users, and access permissions within each Cloud Provider.
  • Lay the foundation for seamless interactions with AWS, Azure, and GCP.

Configure Pipeline Tool Environments (GitLab and Azure DevOps):

  • Establish a conducive environment within GitLab and Azure DevOps for smooth integration with your chosen Cloud Providers.
  • Ensure that the Pipeline tools are ready to orchestrate the deployment process effectively.

Set Up the Terraform Environment:

  • Organize the Terraform infrastructure by creating a dedicated folder for the project.
  • Implement necessary credentials to enable secure access to both Cloud Providers and the Pipeline tools.

Execute Terraform Deployment:

  • Showcase the creation of a remote state for Terraform.
  • Illustrate the implementation and destruction of VMs within each Cloud Provider using the streamlined Pipelines.

By following these comprehensive steps, you’ll be equipped with the knowledge and tools needed to effortlessly deploy VMs across AWS, Azure, and GCP. Let’s dive in and make the most of the power of Infrastructure as Code (IaC) with Terraform and efficient Pipelines.

AWS Preparation

Navigate to the AWS Management Console and search for "IAM".

In the IAM dashboard, select "Users" from the navigation menu.

Click the "Create User" button.

Enter a descriptive user name, such as "terraform-user", and click "Next: Permissions".

Select "Attach policies directly" and choose the "AdministratorAccess" policy.

Under the "Security credentials" tab, click "Create access key".

Select "Application running outside AWS" as the access key type and click "Next".

Click "Download Access Key" to save the access key ID and secret access key to your local machine.

Set the following environment variables (now, or later when you run Terraform):

export AWS_ACCESS_KEY_ID=YOUR_ACCESS_KEY_ID
export AWS_SECRET_ACCESS_KEY=YOUR_SECRET_ACCESS_KEY
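
To confirm the credentials work before running Terraform, you can optionally run the following check, assuming the AWS CLI is installed; it prints the account and user that the keys resolve to:

aws sts get-caller-identity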

Azure Preparation

Navigate to the Azure Portal and search for “Entra ID”.

In the Microsoft Entra ID dashboard, select “App registrations”.

Click “New registration”.

Enter a name, keep the default supported account types, and click “Register”.

Click the newly created app registration’s name; its overview page shows the Application (client) ID and Directory (tenant) ID.

Open “Certificates & secrets” (the client credentials section).

Click “New client secret”.

Copy the client secret value immediately and save it to a secure location; it is displayed only once.

You will also need your Subscription ID, which you can find on the Subscriptions page of the portal.

Set the following environment variables (now, or later when you run Terraform):

export ARM_CLIENT_ID=YOUR_CLIENT_ID
export ARM_TENANT_ID=YOUR_TENANT_ID
export ARM_SUBSCRIPTION_ID=YOUR_SUBSCRIPTION_ID
export ARM_CLIENT_SECRET=YOUR_CLIENT_SECRET
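
Optionally, you can verify the service principal before running Terraform, assuming the Azure CLI is installed (these are standard az commands, not part of the original setup):

az login --service-principal -u "$ARM_CLIENT_ID" -p "$ARM_CLIENT_SECRET" --tenant "$ARM_TENANT_ID"
az account show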

GCP Preparation

Navigate to the Google Cloud Console and search for “IAM”.

In the IAM dashboard, select “Service accounts”.

Click “+ Create Service Account”.

Enter a descriptive service account name, such as “terraform-tiagodg”, and click “Create and Continue”.

Click the three dots menu next to the newly created service account and select “Manage keys”.

Click “Add key” and then “Create a new key”.

Select JSON as the key type and click “Create”. Save the JSON key file to a secure location.

Go back to IAM, click “Grant access”, and add the newly created service account as the principal.

Select the roles to grant (for example, Compute Admin) and click “Save”.

Do not forget to enable the required APIs (for example, the Compute Engine API) under “APIs & Services”.

Set the following environment variable (now, or later when you run Terraform):

export GOOGLE_APPLICATION_CREDENTIALS=PATH/TO/YOUR_KEY_FILE.json
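
Optionally, assuming the gcloud CLI is installed, you can confirm the key file is valid by activating the service account with it:

gcloud auth activate-service-account --key-file="$GOOGLE_APPLICATION_CREDENTIALS"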

GitLab Preparation:

Create a GitLab project: Create a new project on GitLab at https://gitlab.com/

Generate SSH keys: Generate an SSH key pair using the command:

ssh-keygen -f gitlab_key

Add SSH public key to GitLab: Extract the contents of the gitlab_key.pub file and add it to your GitLab profile at https://gitlab.com/-/profile/keys.

Test SSH connectivity: Test the SSH connection to GitLab using the command:

ssh -T git@gitlab.com
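
If the key is configured correctly, GitLab replies with a greeting along the lines of:

Welcome to GitLab, @your-username!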

Azure DevOps Preparation:

Create an Azure DevOps organization: Create a new organization in Azure DevOps. You can also use an existing organization.

Create a new project: Within the organization, create a new project.

Generate SSH keys: Generate an SSH key pair using the command below. You can consult the Microsoft documentation on SSH key configuration for details.

ssh-keygen -f azureDevops_key

Add SSH public key to Azure DevOps: Extract the contents of the azureDevOps_key.pub file and add it under your Azure DevOps user settings (SSH public keys).

Test SSH connectivity: Test the SSH connection to Azure DevOps using the command:

ssh -T git@ssh.dev.azure.com

Terraform Configuration

I will split the Terraform configuration into three basic parts:
1 — A project / folder to create the infrastructure to allow us to have remote state on AWS, Azure and GCP
2 — A project / folder to create network infrastructure for AWS and Azure (not GCP)
3 — A project / folder to create the virtual machines on AWS, Azure and GCP

Remote state configuration

In Terraform, the “state” refers to a snapshot of your infrastructure that Terraform uses to keep track of the resources it manages. The state includes information about the resources, their configuration, and the relationships between them. It’s crucial for Terraform to maintain state so that it can accurately plan and apply changes to your infrastructure.

Remote state involves storing the Terraform state in a shared and remote location, typically in a backend storage system.

There are various backends available for storing remote state in Terraform, such as Amazon S3, Azure Storage, Google Cloud Storage, and more. You can configure Terraform to use a specific backend based on your requirements.
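
Note the chicken-and-egg aspect: the first project below creates the state buckets themselves, so it necessarily runs with local state; only the later projects point their backends at these buckets. As a preview, a backend block in those projects looks like this (a minimal sketch; the key path here is illustrative):

terraform {
  backend "s3" {
    bucket = "tiagogeneroso-remote-state"
    key    = "example/terraform.tfstate"
    region = "us-east-1"
  }
}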

main.tf

terraform {
  required_version = ">= 1.3.0"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "3.78.0"
    }
    google = {
      source  = "hashicorp/google"
      version = "5.6.0"
    }
  }
}

provider "aws" {
region = "us-east-1"
default_tags {
tags = {
owner = "tiagogeneroso"
managed-by = "terraform"
}
}
}

provider "azurerm" {
features {}
}

provider "google" {
credentials = file("credentials.json")
project = "terraform-tiagodg"
region = "us-central1"
zone = "us-central1-a"
}

variables.tf

variable "location" {
description = "Regiao Azure"
type = string
default = "East US"

}

variable "account_tier" {
description = "Tier do storage account"
type = string
default = "Standard"
}

variable "account_replication_type" {
description = "Tipo de replicacao"
type = string
default = "LRS"
}

locals.tf

locals {
  common_tags = {
    owner      = "tiagogeneroso"
    managed-by = "terraform"
  }
}

aws-bucket.tf

resource "aws_s3_bucket" "bucket" {
bucket = "tiagogeneroso-remote-state"
}

resource "aws_s3_bucket_versioning" "versioning" {
bucket = aws_s3_bucket.bucket.id
versioning_configuration {
status = "Enabled"
}
}

azure_storage_account.tf

resource "azurerm_resource_group" "resource_group" {
name = "rg-terraform-state"
location = var.location

tags = local.common_tags
}

resource "azurerm_storage_account" "storage_account" {
name = "tiagoterraformstate"
resource_group_name = azurerm_resource_group.resource_group.name
location = azurerm_resource_group.resource_group.location
account_tier = var.account_tier
account_replication_type = var.account_replication_type

blob_properties {
versioning_enabled = true
}

tags = local.common_tags
}

resource "azurerm_storage_container" "container" {
name = "remote-state"
storage_account_name = azurerm_storage_account.storage_account.name
container_access_type = "private"
}

gcp_bucket.tf

resource "google_storage_bucket" "default" {
name = "tiagogeneroso-remote-state"
force_destroy = false
location = "US"
storage_class = "STANDARD"
versioning {
enabled = true
}
}

Open a terminal in Visual Studio Code and run the following commands:

terraform init
terraform fmt
terraform validate
terraform plan -out plan.out
terraform apply plan.out

AWS VPC configuration

This configuration will create the AWS network components.

main.tf

terraform {
  required_version = ">= 1.3.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }

  backend "s3" {
    bucket = "tiagogeneroso-remote-state"
    key    = "aws-vpc/terraform.tfstate"
    region = "us-east-1"
  }
}

provider "aws" {
  region = "us-east-1"
  default_tags {
    tags = {
      owner      = "tiagogeneroso"
      managed-by = "terraform"
    }
  }
}

output.tf

output "subnet_id" {
description = "value"
value = aws_subnet.subnet.id
}

output "security_group_id" {
description = "ID da Security group"
value = aws_security_group.security_group.id
}

network.tf

resource "aws_vpc" "vpc" {
cidr_block = "10.0.0.0/16"

tags = {
Name = "vpc-terraform"
}
}

resource "aws_subnet" "subnet" {
vpc_id = aws_vpc.vpc.id
cidr_block = "10.0.1.0/24"

tags = {
Name = "subnet-terraform"
}
}

resource "aws_internet_gateway" "internet_gateway" {
vpc_id = aws_vpc.vpc.id

tags = {
Name = "internet-gateway-terraform"
}
}

resource "aws_route_table" "route_table" {
vpc_id = aws_vpc.vpc.id

route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.internet_gateway.id
}

tags = {
Name = "route-table-terraform"
}
}

resource "aws_route_table_association" "rta" {
subnet_id = aws_subnet.subnet.id
route_table_id = aws_route_table.route_table.id
}

resource "aws_security_group" "security_group" {
name = "security-group-terraform"
description = "Permitir porta 22"
vpc_id = aws_vpc.vpc.id

ingress {
description = "SSH"
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}

egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}

tags = {
Name = "security_group-terraform"
}
}

Open a terminal in Visual Studio Code and run the following commands:

terraform init
terraform fmt
terraform validate
terraform plan -out plan.out
terraform apply plan.out

Azure VNET configuration

This configuration will create the Azure network components.

main.tf

terraform {
  required_version = ">= 1.3.0"

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "3.78.0"
    }
  }

  backend "azurerm" {
    resource_group_name  = "rg-terraform-state"
    storage_account_name = "tiagoterraformstate"
    container_name       = "remote-state"
    key                  = "azure-vnet/terraform.tfstate"
  }
}

provider "azurerm" {
  features {}
}

locals.tf

locals {
  common_tags = {
    owner      = "tiagogeneroso"
    managed-by = "terraform"
  }
}

outputs.tf

output "subnet_id" {
description = "ID da Subnet na azure"
value = azurerm_subnet.subnet.id
}

output "security_group_id" {
description = "ID da network security group"
value = azurerm_network_security_group.nsg.id
}

variables.tf

variable "location" {
description = "Regiao Azure"
type = string
default = "East US"
}

network.tf

resource "azurerm_resource_group" "resource_group" {
name = "rg-vnet"
location = var.location

tags = local.common_tags
}

resource "azurerm_virtual_network" "vnet" {
name = "vnet-terraform"
location = var.location
resource_group_name = azurerm_resource_group.resource_group.name
address_space = ["10.0.0.0/16"]

tags = local.common_tags
}

resource "azurerm_subnet" "subnet" {
name = "subnet-terraform"
resource_group_name = azurerm_resource_group.resource_group.name
virtual_network_name = azurerm_virtual_network.vnet.name
address_prefixes = ["10.0.1.0/24"]
}

resource "azurerm_network_security_group" "nsg" {
name = "nsg-terraform"
location = var.location
resource_group_name = azurerm_resource_group.resource_group.name

security_rule {
name = "SSH"
priority = 100
direction = "Inbound"
access = "Allow"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "22"
source_address_prefix = "*"
destination_address_prefix = "*"
}

tags = local.common_tags
}

resource "azurerm_subnet_network_security_group_association" "snsga" {
subnet_id = azurerm_subnet.subnet.id
network_security_group_id = azurerm_network_security_group.nsg.id
}
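
As with the previous projects, open a terminal in Visual Studio Code and run the following commands to create the Azure network components:

terraform init
terraform fmt
terraform validate
terraform plan -out plan.out
terraform apply plan.out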

Virtual machine configuration

Now we will create the virtual machines on AWS, Azure, and GCP, reusing the components created in the previous sections.

locals.tf

locals {
  common_tags = {
    owner      = "tiagogeneroso"
    managed-by = "terraform"
  }
}

variables.tf

variable "aws_key_pub" {
description = "Chave Publica AWS"
type = string
}

variable "azure_key_pub" {
description = "Chave Publica Azure"
type = string
}

variable "location" {
description = "Regiao Azure"
type = string
default = "East US"
}
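
These two key variables have no defaults: in the pipelines they are injected through TF_VAR_ environment variables, as shown in the pipeline configurations later. For a local run, a minimal sketch, assuming your public key files are named aws-key.pub and azure-key.pub (matching the aws-key*/azure-key* patterns in the .gitignore shown later):

export TF_VAR_aws_key_pub="$(cat aws-key.pub)"
export TF_VAR_azure_key_pub="$(cat azure-key.pub)"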

vm-aws.tf

resource "aws_key_pair" "key" {
key_name = "aws-key-pipelines"
public_key = var.aws_key_pub
}

resource "aws_instance" "vm" {
ami = "ami-0fc5d935ebf8bc3bc"
instance_type = "t2.micro"
key_name = aws_key_pair.key.key_name
subnet_id = data.terraform_remote_state.vpc.outputs.subnet_id
vpc_security_group_ids = [data.terraform_remote_state.vpc.outputs.security_group_id]
associate_public_ip_address = true


tags = {
Name = "vm-terraform"
}
}
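
One caveat: AMI IDs are region-specific and rotate over time, so the hard-coded ID above may eventually stop resolving. As an alternative sketch (not part of the original setup), you could look up the latest Canonical Ubuntu 22.04 image with a data source and reference it as ami = data.aws_ami.ubuntu.id:

# Hypothetical lookup of the latest Ubuntu 22.04 AMI; 099720109477 is Canonical's AWS account.
data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"]

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"]
  }
}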

vm-azure.tf

resource "azurerm_resource_group" "rg" {
name = "rg-vm"
location = "East US"

tags = local.common_tags
}

resource "azurerm_public_ip" "ip" {
name = "public-ip-terraform"
resource_group_name = azurerm_resource_group.rg.name
location = azurerm_resource_group.rg.location
allocation_method = "Dynamic"

tags = local.common_tags
}

resource "azurerm_network_interface" "nic" {
name = "nic-terraform"
location = azurerm_resource_group.rg.location
resource_group_name = azurerm_resource_group.rg.name

ip_configuration {
name = "public-ip-terraform"
subnet_id = data.terraform_remote_state.vnet.outputs.subnet_id
private_ip_address_allocation = "Dynamic"
public_ip_address_id = azurerm_public_ip.ip.id
}
tags = local.common_tags
}

resource "azurerm_network_interface_security_group_association" "nisga" {
network_interface_id = azurerm_network_interface.nic.id
network_security_group_id = data.terraform_remote_state.vnet.outputs.security_group_id
}

resource "azurerm_linux_virtual_machine" "vm" {
name = "vm-terraform"
resource_group_name = azurerm_resource_group.rg.name
location = azurerm_resource_group.rg.location
size = "Standard_B1s"
admin_username = "terraform"
network_interface_ids = [azurerm_network_interface.nic.id, ]

admin_ssh_key {
username = "terraform"
public_key = var.azure_key_pub
}

os_disk {
caching = "ReadWrite"
storage_account_type = "Standard_LRS"
}

source_image_reference {
publisher = "canonical"
offer = "0001-com-ubuntu-server-jammy"
sku = "22_04-lts"
version = "latest"
}

tags = local.common_tags
}

gcp-vm.tf

resource "google_service_account" "default" {
account_id = "my-custom-sa"
display_name = "Custom SA for VM Instance"
}

resource "google_compute_instance" "default" {
name = "my-instance"
machine_type = "n2-standard-2"
zone = "us-central1-a"

tags = ["foo", "bar"]

boot_disk {
initialize_params {
image = "debian-cloud/debian-11"
labels = {
my_label = "value"
}
}
}

// Local SSD disk
scratch_disk {
interface = "NVME"
}

network_interface {
network = "default"

access_config {
// Ephemeral public IP
}
}

metadata = {
foo = "bar"
}

metadata_startup_script = "echo hi > /test.txt"

service_account {
# Google recommends custom service accounts that have cloud-platform scope and permissions granted via IAM Roles.
email = google_service_account.default.email
scopes = ["cloud-platform"]
}
}
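
To reach the new VMs over SSH, you can optionally expose their public IPs as outputs (a minimal sketch using attributes each provider exports; the output names are illustrative):

output "aws_vm_public_ip" {
  value = aws_instance.vm.public_ip
}

output "azure_vm_public_ip" {
  value = azurerm_linux_virtual_machine.vm.public_ip_address
}

output "gcp_vm_public_ip" {
  value = google_compute_instance.default.network_interface[0].access_config[0].nat_ip
}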

main.tf

terraform {
  required_version = ">= 1.5.0"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "5.24.0"
    }
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "3.78.0"
    }
    google = {
      source  = "hashicorp/google"
      version = "5.6.0"
    }
  }

  backend "azurerm" {
    resource_group_name  = "rg-terraform-state"
    storage_account_name = "tiagoterraformstate"
    container_name       = "remote-state"
    key                  = "pipeline-gitlab/terraform.tfstate"
  }
}

provider "aws" {
  region = "us-east-1"

  default_tags {
    tags = {
      owner      = "tiagogeneroso"
      managed-by = "terraform"
    }
  }
}

provider "azurerm" {
  features {}
}

provider "google" {
  credentials = file("credentials.json")
  project     = "terraform-tiagodg"
  region      = "us-central1"
  zone        = "us-central1-a"
}

data "terraform_remote_state" "vpc" {
  backend = "s3"
  config = {
    bucket = "tiagogeneroso-remote-state"
    key    = "aws-vpc/terraform.tfstate"
    region = "us-east-1"
  }
}

data "terraform_remote_state" "vnet" {
  backend = "azurerm"
  config = {
    resource_group_name  = "rg-terraform-state"
    storage_account_name = "tiagoterraformstate"
    container_name       = "remote-state"
    key                  = "azure-vnet/terraform.tfstate"
  }
}

data "terraform_remote_state" "gcs" {
  backend = "gcs"
  config = {
    bucket = "tiagogeneroso-remote-state"
    prefix = "terraform/terraform.tfstate"
  }
}

GitLab Pipeline Configuration

Go to your GitLab repository and navigate to Settings -> CI/CD.

Configure all the variables used to connect to the environments (the AWS, Azure, and GCP credentials, plus the TF_VAR_ public-key variables) as CI/CD variables.

Create a local folder and clone the repository you created in the preparation steps:

cd gitlab
git clone https://gitlab.com/terraform6003245/curso-terraform-tiagodg.git

Copy all the Terraform files into this folder.

Create a .gitignore file to exclude keys, state, and other generated files from synchronization:

aws-key*
azure-key*
.terraform*
*.out
*.tfvars
*.tfstate*

Create the Pipeline file: .gitlab-ci.yml

stages:
  - validate_plan
  - apply
  - destroy

.template:
  image:
    name: hashicorp/terraform:1.5.7
    entrypoint: [""]
  before_script:
    - terraform init

validate & plan:
  extends: .template
  stage: validate_plan
  script:
    - terraform validate
    - terraform plan -out plan.out
  cache:
    key: plan
    policy: push
    paths:
      - plan.out

apply:
  extends: .template
  stage: apply
  script:
    - terraform apply plan.out
  cache:
    key: plan
    policy: pull
    paths:
      - plan.out
  when: manual

destroy:
  extends: .template
  stage: destroy
  script:
    - terraform destroy -auto-approve
  when: manual
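
One design note: GitLab caches are best-effort and may not be shared across runners, so plan.out could occasionally be missing when the apply job runs. A sketch of a more deterministic variant passes the plan as an artifact instead of a cache entry:

validate & plan:
  extends: .template
  stage: validate_plan
  script:
    - terraform validate
    - terraform plan -out plan.out
  artifacts:
    paths:
      - plan.out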

Commit and push the files:

git add .
git commit -m "add terraform to create vms on aws, azure and GCP"
git push

Go back to GitLab to find the pipeline under CI/CD -> Pipelines.

Azure DevOps Pipeline Configuration

Go to the Azure DevOps project you created in the preparation phase and navigate to Pipelines -> Library. Create the variable groups referenced below (AWS-Credentials, Azure-Credentials, and Public Keys) with your credentials and public keys, then create the pipeline definition (azure-pipelines.yml):

name: 1.0
pool:
  vmImage: ubuntu-latest
trigger:
  - main
variables:
  - group: AWS-Credentials
  - group: Azure-Credentials
  - group: Public Keys

stages:
  - stage: validate_plan
    displayName: Validate & Plan
    jobs:
      - job: validate_plan
        displayName: Validate & Plan
        steps:
          - script: |
              terraform init
              terraform validate
              terraform plan -out plan.out
            displayName: Terraform Validate & Plan
            env:
              TF_VAR_aws_key_pub: $(TF_VAR_aws_key_pub)
              TF_VAR_azure_key_pub: $(TF_VAR_azure_key_pub)
              AWS_ACCESS_KEY_ID: $(AWS_ACCESS_KEY_ID)
              AWS_SECRET_ACCESS_KEY: $(AWS_SECRET_ACCESS_KEY)
              ARM_CLIENT_ID: $(ARM_CLIENT_ID)
              ARM_CLIENT_SECRET: $(ARM_CLIENT_SECRET)
              ARM_SUBSCRIPTION_ID: $(ARM_SUBSCRIPTION_ID)
              ARM_TENANT_ID: $(ARM_TENANT_ID)
          - publish: $(Build.SourcesDirectory)/plan.out
            artifact: plan
            displayName: Publish Plan

  - stage: apply
    displayName: Apply
    jobs:
      - job: approve_apply
        pool: server
        displayName: Approve Apply
        steps:
          - task: ManualValidation@0
            timeoutInMinutes: 1440
      - job: apply
        displayName: Terraform Apply
        dependsOn: approve_apply
        steps:
          - download: current
            artifact: plan
            displayName: Download Plan
          - script: |
              terraform init
              terraform apply $(Pipeline.Workspace)/plan/plan.out
            displayName: Terraform Apply
            env:
              TF_VAR_aws_key_pub: $(TF_VAR_aws_key_pub)
              TF_VAR_azure_key_pub: $(TF_VAR_azure_key_pub)
              AWS_ACCESS_KEY_ID: $(AWS_ACCESS_KEY_ID)
              AWS_SECRET_ACCESS_KEY: $(AWS_SECRET_ACCESS_KEY)
              ARM_CLIENT_ID: $(ARM_CLIENT_ID)
              ARM_CLIENT_SECRET: $(ARM_CLIENT_SECRET)
              ARM_SUBSCRIPTION_ID: $(ARM_SUBSCRIPTION_ID)
              ARM_TENANT_ID: $(ARM_TENANT_ID)

  - stage: destroy
    displayName: Destroy
    jobs:
      - job: approve_destroy
        pool: server
        displayName: Approve Destroy
        steps:
          - task: ManualValidation@0
            timeoutInMinutes: 1440
      - job: destroy
        displayName: Terraform Destroy
        dependsOn: approve_destroy
        steps:
          - script: |
              terraform init
              terraform destroy -auto-approve
            displayName: Terraform Destroy
            env:
              TF_VAR_aws_key_pub: $(TF_VAR_aws_key_pub)
              TF_VAR_azure_key_pub: $(TF_VAR_azure_key_pub)
              AWS_ACCESS_KEY_ID: $(AWS_ACCESS_KEY_ID)
              AWS_SECRET_ACCESS_KEY: $(AWS_SECRET_ACCESS_KEY)
              ARM_CLIENT_ID: $(ARM_CLIENT_ID)
              ARM_CLIENT_SECRET: $(ARM_CLIENT_SECRET)
              ARM_SUBSCRIPTION_ID: $(ARM_SUBSCRIPTION_ID)
              ARM_TENANT_ID: $(ARM_TENANT_ID)

Conclusion

Embracing Infrastructure as Code (IaC) is indispensable when deploying infrastructure in the cloud. Manual deployment without IaC would present significant challenges, making it nearly impossible to ensure consistency across environments. The adoption of pipelines further solidifies this imperative.

Utilizing pipelines to deploy infrastructure with Terraform aligns seamlessly with contemporary software development practices. This approach amplifies automation, guarantees consistency in deployments, fosters collaborative efforts among team members, and establishes robust testing and validation mechanisms. It is pivotal in adopting Infrastructure as Code, ensuring scalability and operational efficiency.

In conclusion, this article provides a foundational understanding of creating essential cloud components across major cloud providers. Incorporating basic pipelines offers a glimpse into the operational intricacies, illustrating their role in achieving consistent, collaborative, and automated infrastructure deployment. We trust this information proves beneficial in your exploration of these concepts.

Tiago Dias Generoso is a Distinguished IT Architect | Senior SRE | Master Inventor based in Pocos de Caldas, Brazil. The above article is personal and does not necessarily represent the employer’s positions, strategies or opinions.
