Provisioning GCP Cloud Functions with Terraform

October 12, 2020

In this post my goal is to show you how to provision and deploy your GCP Cloud Functions by using Terraform. Infrastructure as Code is a great way to define and keep track of all the cloud services you put together. My favourite reasons for IaC are that it opens up your infrastructure to peer review, and that it lets you define the exact same infrastructure in multiple environments without issue.

This post follows on from my previous post on developing GCP Cloud Functions locally with TypeScript. It'll be useful to have the same project setup if you're following along. If you want to skip that post you can also clone down this repo; be sure to use the developing-cloud-functions branch (trunk includes all the code referenced in this post!).

We'll start by installing Terraform. You can use Homebrew, or download a binary from the Terraform website. However, you may over time find you're running multiple projects on different versions. For this I recommend tfenv. This can also be installed via Homebrew if you're on macOS with: brew install tfenv. Check the repo out for additional installation instructions.
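With tfenv installed, pinning and switching versions looks something like this (the version here matches the one we'll lock Terraform to later in the post):

tfenv install 0.13.4
tfenv use 0.13.4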

Next up let's create the directories and files where we'll be keeping our infrastructure:

mkdir -p infra/{projects,state}
touch infra/{functions,providers,variables}.tf infra/tf infra/projects/{production,test}.tfvars

There are a fair few files here. It's worth stating now that Terraform is fairly free-form: all files in a directory are ultimately merged together, so you can organise these as you please.

I have defined three core files: functions.tf, providers.tf and variables.tf.

functions.tf is where we'll define our cloud function, along with the storage bucket (for deployment) and the service account the function will use.

providers.tf is where we will configure settings for our Terraform providers. This allows us to set the GCP project ID for the google provider. In here I have also locked Terraform to a specific version to prevent people using a different one (to avoid unexpected breaking changes).

variables.tf is where we will define global variables.

Next up we have the projects directory containing two files: production.tfvars and test.tfvars. This is where we will define the values of the global variables. When we execute Terraform we can tell it to use the correct tfvars file, so we can have specific settings per project (and, more importantly, target different GCP projects!).

Finally we have the tf file. This will become an executable script that reduces the verbosity of some of the commands we need to run, and helps make them slightly harder to get wrong.
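Putting that together, the infra directory will end up looking like this:

infra/
├── functions.tf
├── providers.tf
├── variables.tf
├── tf
├── projects/
│   ├── production.tfvars
│   └── test.tfvars
└── state/            # terraform state files will be written here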

Let's get into it. Starting with our tfvars.

Editing: infra/projects/production.tfvars

environment = "prod"
project     = "[YOUR-GCP-PROJECT-ID-HERE]"

Editing: infra/projects/test.tfvars

environment = "test"
project     = "[YOUR-GCP-PROJECT-ID-HERE]"

Now we need to define these variables in our infra/variables.tf file:

variable "environment" {}
variable "project" {}

So, what we've done above is define two variables: one called environment and the other project. environment represents whether we're running in production or another env. This is handy if you want to deploy multiple versions of the same function to the same GCP project. Additionally we have the project variable, which defines our GCP Project ID. You could make this different between production and test to truly separate your architecture.
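As an optional extra, you can give these variables types and descriptions so the configuration is a little more self-documenting; a quick sketch:

variable "environment" {
  type        = string
  description = "Short environment name used to prefix resources, e.g. prod or test."
}

variable "project" {
  type        = string
  description = "The GCP project ID to create resources in."
}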

Now let's configure our infra/providers.tf file:

locals {
  region = "europe-west1"
}

provider "google-beta" {
  project = var.project
  region  = local.region
}

provider "google" {
  project = var.project
  region  = local.region
}

terraform {
  required_version = "0.13.4"
}

In this file we have defined a local variable for region; this configures both the google-beta and google providers to use the europe-west1 region. Additionally you can see we're referencing the project variable we defined using the var.project syntax. Finally you can also see I have locked the version of Terraform down to exactly match 0.13.4 (the latest at the time of writing).
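If you'd like to pin the provider versions as well (not just the Terraform version), the terraform block can be extended with a required_providers section. A sketch, with illustrative version constraints:

terraform {
  required_version = "0.13.4"

  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 3.43"
    }
    google-beta = {
      source  = "hashicorp/google-beta"
      version = "~> 3.43"
    }
  }
}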

Ok, let's move onto the beefy file... infra/functions.tf.

# GCS bucket for storing our uploaded function zip.
# @see https://www.terraform.io/docs/providers/google/r/storage_bucket.html
resource "google_storage_bucket" "function_artifacts" {
  name = "${var.project}-function-artifacts"
}

# GCS Object for our hello_world function zip.
# @see https://www.terraform.io/docs/providers/google/r/storage_bucket_object.html
resource "google_storage_bucket_object" "hello_world_gcf" {
  name   = "${var.environment}_hello_world_gcf/${timestamp()}.zip"
  bucket = google_storage_bucket.function_artifacts.name
  source = "functions.zip"
}

# Define our function
# @see https://www.terraform.io/docs/providers/google/r/cloudfunctions_function.html
resource "google_cloudfunctions_function" "hello_world" {
  # Make the name unique per environment!
  name    = "${var.environment}-helloWorld"
  runtime = "nodejs10"

  # The exported function we wish to execute from within build/src/index.js
  entry_point = "helloWorld"

  source_archive_bucket = google_storage_bucket.function_artifacts.id
  source_archive_object = google_storage_bucket_object.hello_world_gcf.output_name

  # Our service account that only allows this function to write to logs and metrics.
  service_account_email = google_service_account.hello_world_gcf.email

  # Allow our function to be triggered via http requests.
  trigger_http = true

}

# Allow our cloud function to be invoked by all users - WARNING: this makes your cloud function public.
# Be sure to check the docs and consider if this is really what you want.
# @see https://www.terraform.io/docs/providers/google/r/cloudfunctions_cloud_function_iam.html
resource "google_cloudfunctions_function_iam_member" "hello_world" {
  cloud_function = google_cloudfunctions_function.hello_world.name
  role           = "roles/cloudfunctions.invoker"
  member         = "allUsers"
}

# A service account just for our helloWorld function.
# this ensures our cloud function only has the absolute minimum needed permissions.
# @see https://www.terraform.io/docs/providers/google/r/google_service_account.html
resource "google_service_account" "hello_world_gcf" {
  provider     = google-beta
  account_id   = "${var.environment}-gcf-helloworld"
  display_name = "${var.environment}-gcf-helloworld"
}

# Add the monitoring.metricWriter role to our service account.
# @see https://www.terraform.io/docs/providers/google/r/google_project_iam.html#google_project_iam_member
resource "google_project_iam_member" "hello_world_gcf_monitoring_writer" {
  provider = google-beta
  role     = "roles/monitoring.metricWriter"
  member   = "serviceAccount:${google_service_account.hello_world_gcf.email}"
}

# Add the logging.logWriter role to our service account.
# @see https://www.terraform.io/docs/providers/google/r/google_project_iam.html#google_project_iam_member
resource "google_project_iam_member" "hello_world_gcf_logging_writer" {
  provider = google-beta
  role     = "roles/logging.logWriter"
  member   = "serviceAccount:${google_service_account.hello_world_gcf.email}"
}

Now, there is a lot going on here. So let's break down the steps:

  1. First of all we define a Cloud Storage Bucket. This is where we will upload our packaged function code.
  2. We define the storage object; this will handle uploading the zip that contains our function code. Note that it references functions.zip. We'll make this zip later on (outside of Terraform)!
  3. We define our Cloud Function. We're referencing where the function is stored, and the service account we create for it. Additionally we have defined the function to have a http trigger.
  4. Next up we've made sure the function can be executed publicly. By default you won't be able to execute the function! Be mindful of whether your application actually needs this.
  5. The next three sections define a service account and associate the logging.logWriter and monitoring.metricWriter roles with it. This SA is assigned to our function, so the function will not be able to interact with any services other than those mentioned above.

That's quite a lot to take in. I'll break down the Terraform syntax below. It really took me some time to get used to this myself...

Let's go over one of the resources defined line by line to explain:

# First we define a resource with: 
# resource "resource_name_from_provider_docs" "my_unique_name"
# the "my_unique_name" will be stored in the terraform state file, and will map to the resources created within GCP.
resource "google_storage_bucket_object" "hello_world_gcf" {
  # Different resources have different properties. The docs are the best place to find out what these fields do.
  # here we've set a name with some string interpolation; this will create a new object called: test_hello_world_gcf/2020-10-12T21:54:44Z.zip
  name   = "${var.environment}_hello_world_gcf/${timestamp()}.zip"

  # Here we're referencing the resource above, and accessing the name property.
  bucket = google_storage_bucket.function_artifacts.name
  source = "functions.zip"
}

Hopefully that gives you a little guidance on how terraform resources are defined. As I said before, the docs are the absolute best place to go!

Next up let's set up our infra/tf file; this will need to be executable:

chmod +x infra/tf

Then the contents will look like so:

#!/bin/bash

# First argument: the Terraform command to run (plan, apply, destroy).
VERB=$1
# Second argument: the project/environment to target (test or production).
PROJECT=$2

# Initialise Terraform and (re)install the providers for the infra directory.
terraform init -reconfigure infra

# Run the requested command using the matching tfvars and state file.
terraform $VERB -var-file=./infra/projects/${PROJECT}.tfvars -state=./infra/state/${PROJECT}-terraform.tfstate infra

What we're doing here is initialising Terraform and installing the providers. The second command is variable: it could run terraform apply, terraform plan or terraform destroy. The $VERB variable is the first argument we accept in the tf script.

Then we define which variable file will be loaded with the -var-file flag. This consumes our second argument, PROJECT, which in this case could be test or production. Finally the -state option allows us to define where the Terraform state will be stored, so we keep two different state files, one per project.
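So, for example, running the script directly looks like this:

./infra/tf plan test
./infra/tf apply production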

When using multiple state files like this you have to ensure all your resources are defined with this in mind. Terraform will not know if a resource already exists and will attempt to recreate it. This is why we prefix our Terraform resources with the current environment wherever possible.

Note: The Terraform state file shouldn't be kept on your local machine. It can contain sensitive values, and if you lose the state file you will have to recreate your infrastructure. In a future post I'll cover managing your state using a remote storage option: https://www.terraform.io/docs/state/remote.html
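As a rough preview of what that looks like, a GCS remote backend is configured with a backend block like the one below (the bucket name is a placeholder and would need to exist beforehand):

terraform {
  backend "gcs" {
    bucket = "my-terraform-state-bucket"
    prefix = "cloud-functions"
  }
}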

With this all in place, let's modify our package.json to include the commands for running terraform:

-   "compile": "tsc",
+   "compile": "tsc && yarn tf:format",
    "fix": "gts fix",
-   "prepare": "yarn compile"
+   "prepare": "yarn compile && zip -r functions.zip build && zip -g functions.zip {package.json,yarn.lock}",
+   "tf:format": "terraform fmt -recursive infra",
+   "tf:plan": "yarn prepare && ./infra/tf plan",
+   "tf:plan:test": "yarn tf:plan test",
+   "tf:plan:prod": "yarn tf:plan production",
+   "tf:deploy": "yarn prepare && ./infra/tf apply",
+   "tf:deploy:test": "yarn tf:deploy test",
+   "tf:deploy:prod": "yarn tf:deploy production",
+   "tf:destroy:test": "./infra/tf destroy test",
+   "tf:destroy:prod": "./infra/tf destroy production"

That's a lot of commands... However the goal here is to make it obvious what is being run. When you run yarn tf:deploy:test it will compile your code and format your Terraform code (Terraform has its own standard format), then package the build dir along with package.json and yarn.lock into a zip. Finally it runs ./infra/tf apply test, which instructs Terraform to create the resources.

Additionally we have an option for tf:plan:{ENV} too. This will show a diff of what would change if you ran apply, which is handy for sanity checking and testing changes without applying them.
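A typical workflow when making a change might look like this:

# See what would change in the test environment.
yarn tf:plan:test

# Happy with the plan? Apply it.
yarn tf:deploy:test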

With a little luck you'll be able to run yarn tf:deploy:test and you'll have your newly provisioned infrastructure in no time!

Note: I have assumed you have set up a GCP Project linked to a billing account. You will also need to have configured the gcloud CLI and authenticated. You may also need to enable the Cloud Build API.
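If you haven't done that yet, the gcloud commands you'd reach for look roughly like this (use your own project ID):

gcloud auth application-default login
gcloud config set project [YOUR-GCP-PROJECT-ID-HERE]
gcloud services enable cloudbuild.googleapis.com cloudfunctions.googleapis.com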