Long-lived, static access keys are common in many companies' cloud infrastructure these days. Access keys are often never rotated, which poses a security risk - especially when they're used to deploy applications or run utility jobs (e.g. building AMIs / golden images, or deploying resources with Terraform).
When authenticating to AWS, the best practice is to assume a role and use short-lived credentials (session tokens). Thanks to AWS IAM OIDC Providers, we can now remove the need for access keys in CI/CD entirely, making our pipelines and infrastructure more secure and less fragile - we no longer need to worry about access key secrets being accidentally updated or deleted.
Here are some of the primary benefits of using this method of authentication for your workflows:
- No access keys are required, increasing security
- Far more granular control over how the credentials are used compared to access keys - for example, you can limit the role so it can only be assumed from a single branch of a single repository
- No need to rotate credentials, as the credentials provided by your cloud provider are short-lived
Introduction
This blog post covers how we can remove the need for access keys by using OIDC (OpenID Connect) to authenticate to AWS or Azure, as well as creating the required resources via Terraform.
Note: While this blog post is specifically written for GitHub Actions, you can apply the same principles to most CI/CD providers, such as:
- CircleCI - https://circleci.com/docs/openid-connect-tokens/
- BitBucket - https://support.atlassian.com/bitbucket-cloud/docs/deploy-on-aws-using-bitbucket-pipelines-openid-connect/
- GitLab - https://docs.gitlab.com/ee/ci/cloud_services/aws/
- You probably could also use it for Jenkins, but I think you’d need to host your own OIDC provider.
In the example (and tutorial) below, I'm using GitHub Actions, Terraform, Packer and Ansible to create a golden image which I'll use as a base image for Kubernetes nodes (that part won't be covered - but you can read this blog post about it on AWS here), though the concept can be applied to many different things, such as deploying applications or building application assets and pushing them to S3. Literally anything you do in GitHub Actions that uses your Azure/AWS credentials will benefit heavily from this.
However, I'll only be covering the following:
- Setting up the required resources on AWS and Azure so GitHub Actions can authenticate via OIDC
- Granting the required permissions for my use case, as an example
We'll also build the resources as a Terraform module so it can be re-used by you or your company in the future, and configure GitHub Actions to access AWS/Azure via OIDC.
How does it actually work?
The diagram below from GitHub's documentation explains it quite well.
To further quote GitHub's documentation:
- In your cloud provider, create an OIDC trust between your cloud role and your GitHub workflow(s) that need access to the cloud. I'll be going into this in more detail for both AWS and Azure below.
- Every time your job runs, GitHub’s OIDC Provider auto-generates an OIDC token. This token contains multiple claims to establish a security-hardened and verifiable identity about the specific workflow that is trying to authenticate.
- You could include a step or action in your job to request this token from GitHub’s OIDC provider, and present it to the cloud provider.
- Once the cloud provider successfully validates the claims presented in the token, it then provides a short-lived cloud access token that is available only for the duration of the job.
GitHub to AWS
An IAM identity provider is used to create a trust between GitHub's OIDC provider and our AWS account. The role used in the workflow then has conditions in its trust policy to lock it down to specific GitHub repositories. See this example IAM role trust policy from GitHub's documentation below:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::123456123456:oidc-provider/token.actions.githubusercontent.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringLike": {
          "token.actions.githubusercontent.com:sub": "repo:octo-org/octo-repo:*"
        },
        "StringEquals": {
          "token.actions.githubusercontent.com:aud": "sts.amazonaws.com"
        }
      }
    }
  ]
}
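For reference, the same trust relationship can be expressed in Terraform via an aws_iam_policy_document data source. Here's a rough sketch (it assumes an aws_iam_openid_connect_provider resource named github_actions, like the one we create later in this post):

```tf
// Sketch: Terraform equivalent of the example trust policy above
data "aws_iam_policy_document" "github_actions_trust" {
  statement {
    effect  = "Allow"
    actions = ["sts:AssumeRoleWithWebIdentity"]

    principals {
      type        = "Federated"
      identifiers = [aws_iam_openid_connect_provider.github_actions.arn]
    }

    condition {
      test     = "StringLike"
      variable = "token.actions.githubusercontent.com:sub"
      values   = ["repo:octo-org/octo-repo:*"]
    }

    condition {
      test     = "StringEquals"
      variable = "token.actions.githubusercontent.com:aud"
      values   = ["sts.amazonaws.com"]
    }
  }
}
```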
Here’s a simplified diagram of how it works:
Note: In this diagram, sp is “Service Principal” - the equivalent of an IAM role on Azure.
GitHub to Azure
While the underlying technologies are the same, the setup is a bit different on Azure.
Firstly, an Azure AD application is created along with a service principal. The service principal is the principal (or IAM role, if you're more familiar with AWS) and is what permissions get assigned to.
Secondly, a federated identity credential is created for the Azure AD application. While this credential is long-lived and would usually be treated as a secret, that's not an issue here: no client secret is needed for our repository's GitHub Actions workflows to authenticate - only the Client ID, Tenant ID and Subscription ID.
This diagram from Azure’s documentation displays the process very well:
In this case, the external workload is GitHub Actions and the external IdP is GitHub's OIDC Provider.
Implementation
Since we're using Terraform, we're going to create a module so it can be reused by you or your company.
Since I’m using this pipeline to build VM Images with Packer, there will be a few things that are specific to my use case (such as creating a secret that’s used in the pipeline, permissions, etc) but feel free to modify this to fit your use case.
Azure
Provider setup
Create the following in providers.tf:
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "3.48.0"
    }
    azuread = {
      source  = "hashicorp/azuread"
      version = "2.36.0"
    }
  }
  required_version = ">=1.4.0"
}

provider "azurerm" {
  features {}
}
Variables Setup
Before we look at creating any resources, let's set up some variables so this can be re-used in the future!
Create the following in variables.tf:
variable "gh_username" {
type = string
description = "GitHub Username, used to specify which repo/org for GH Actions permissions."
}
variable "repo" {
type = string
description = "Name of GitHub Repository that GitHub Actions role will use."
}
variable "az_rg_name" {
type = string
description = "Name of Azure Resource Group"
default = "golden-image-build"
}
variable "az_region" {
type = string
description = "Name of Azure region"
default = "Australia Southeast"
}
Configuring the OIDC Provider
In this section, we’ll create the following:
- An Azure Resource group, to keep everything clean and organized in one ‘solution’
- An AAD (Azure AD) Application and its Federated Identity Credential
- An AAD Service Principal (which is used by GitHub Actions)
We'll also create the following resources, which are OPTIONAL and only required if you need to access some form of secret in your pipeline, such as an Ansible Vault password:
- A Key Vault (to store the password used to encrypt ansible-vault secrets)
- The Key Vault secrets themselves
- AAD groups to control who can access the secret
- Access policies granting permissions to members of those groups
First, create a resource group (to contain the resources)
resource "azurerm_resource_group" "github_oidc" {
name = var.az_rg_name
location = var.az_region
}
Then create an Azure AD Application, Service Principal and Federated Identity Credential. In Azure AD, a service principal is the equivalent of an IAM role - and the Azure AD Application is equivalent to an IAM identity provider.
Here’s a brief overview of how those resources tie in together:
- Permissions are assigned to the Service Principal
- The Service Principal is the identity that the Azure AD Application uses to access resources
- The Federated Identity credential is what actually allows GitHub Actions to authenticate to Azure
resource "azuread_application" "github_oidc" {
display_name = "${var.repo}-gh-actions"
api {
requested_access_token_version = 2
}
}
resource "azuread_service_principal" "github_oidc" {
application_id = azuread_application.github_oidc.application_id
}
// Actual OpenID Connect connection
resource "azuread_application_federated_identity_credential" "github_oidc" {
application_object_id = azuread_application.github_oidc.object_id
display_name = "${var.repo}-gh-actions"
description = "Deployments for ${var.gh_username}/${var.repo} for Production Environment"
audiences = ["api://AzureADTokenExchange"]
issuer = "https://token.actions.githubusercontent.com"
// NOTE: WILDCARDS IN SUBJECT DON'T WORK
subject = "repo:${var.gh_username}/${var.repo}:ref:refs/heads/main"
}
Note: I've chosen to only allow jobs on the main branch to authenticate, as that's the only branch I want this job to run on. Alternatively, you can configure your GitHub Actions jobs to run in a specific environment, then grant the federated identity credential access for ALL jobs in that environment, e.g.:
resource "azuread_application_federated_identity_credential" "github_oidc" {
application_object_id = azuread_application.github_oidc.object_id
display_name = "${var.repo}-gh-actions"
description = "Deployments for ${var.gh_username}/${var.repo} for Production Environment"
audiences = ["api://AzureADTokenExchange"]
issuer = "https://token.actions.githubusercontent.com"
subject = "repo:${var.gh_username}/${var.repo}:environment:your-environment"
}
Now, let’s assign the service principal permissions:
data "azurerm_subscription" "primary" {}
resource "azurerm_role_assignment" "github_oidc" {
scope = data.azurerm_subscription.primary.id
role_definition_name = "Contributor"
principal_id = azuread_service_principal.github_oidc.id
}
If you only need to set up the connection and permissions between Azure and GitHub Actions, this is all that's required - deploy it with Terraform and you'll be good to go; you'll just need to modify your workflow as described in the section Configuring the GitHub Actions workflow.
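As an example, calling the module from a root configuration could look roughly like this (the module path and values below are placeholders - adjust them to your own layout):

```tf
module "github_oidc_azure" {
  // Hypothetical path - point this at wherever you keep the module
  source      = "./modules/github-oidc-azure"
  gh_username = "your-github-username"
  repo        = "your-repo-name"

  // az_rg_name and az_region have defaults - override them if needed
}
```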
However, if you're going to use Azure Key Vault to store Ansible Vault passwords, keep following along.
First, create a Key Vault and a secret - this is used to store an Ansible Vault password.
resource "azurerm_key_vault" "ansible_vault" {
name = "${var.repo}-image-build"
location = azurerm_resource_group.github_oidc.location
resource_group_name = azurerm_resource_group.github_oidc.name
tenant_id = data.azurerm_client_config.current.tenant_id
sku_name = "standard"
soft_delete_retention_days = 7
network_acls {
default_action = "Allow"
bypass = "AzureServices"
}
}
Add the following to variables.tf:
variable "upn" {
type = string
description = "User principal name"
}
We need this value to add ourselves (in Azure AD) to a group that we're creating to grant access to the Key Vault. To get the user principal name, log in to the Azure Portal and search for Users - you'll see a list of each user and their User Principal Name.
Now, create a group and add your Azure user to it - this is primarily so we can still manage the resource with Terraform.
resource "azuread_group" "kv-full" {
display_name = "${local.kv_name}-full"
security_enabled = true
}
resource "azuread_group_member" "me" {
group_object_id = azuread_group.kv-full.id
member_object_id = data.azuread_user.me.object_id
}
data "azuread_user" "me" {
user_principal_name = var.upn
}
Now, let’s create some permissions on the key vault.
resource "azuread_group" "kv-read" {
display_name = "${local.kv_name}-read"
security_enabled = true
}
// Add GH Actions SP to READ access grp
resource "azuread_group_member" "sp_read" {
group_object_id = azuread_group.kv-read.id
member_object_id = azuread_service_principal.github_oidc.object_id
}
################################################
# AZURE KEY VAULT - STORE ANSIBLE VAULT SECRET #
################################################
resource "azurerm_key_vault_access_policy" "read" {
  key_vault_id = azurerm_key_vault.ansible_vault.id
  tenant_id    = data.azurerm_client_config.current.tenant_id
  object_id    = azuread_group.kv-read.object_id

  lifecycle {
    create_before_destroy = true
  }

  key_permissions = [
    "Get",
  ]

  secret_permissions = [
    "Get",
  ]
}
resource "azurerm_key_vault_access_policy" "full" {
tenant_id = data.azurerm_client_config.current.tenant_id
object_id = data.azurerm_client_config.current.object_id
key_vault_id = azurerm_key_vault.ansible_vault.id
lifecycle {
create_before_destroy = true
}
key_permissions = [
"Create",
"Get",
]
secret_permissions = [
"Set",
"List",
"Get",
"Delete",
"Purge",
"Recover"
]
}
resource "azurerm_key_vault_access_policy" "full_grp" {
tenant_id = data.azurerm_client_config.current.tenant_id
object_id = azuread_group.kv-full.object_id
key_vault_id = azurerm_key_vault.ansible_vault.id
lifecycle {
create_before_destroy = true
}
key_permissions = [
"Create",
"Get",
]
secret_permissions = [
"Set",
"Get",
"Delete",
"Purge",
"Recover"
]
}
resource "azurerm_key_vault_secret" "ansible_vault_pass" {
name = "ansible-vault-pass"
value = var.vault_pass_secret_value
key_vault_id = azurerm_key_vault.ansible_vault.id
depends_on = [
azurerm_key_vault_access_policy.full,
azurerm_key_vault_access_policy.full_grp,
azurerm_key_vault_access_policy.read
]
}
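The secret's value comes from a variable we haven't declared yet, so add the following to variables.tf (marking it as sensitive keeps the value out of plan output):

```tf
variable "vault_pass_secret_value" {
  type        = string
  description = "Ansible Vault password to store in Key Vault"
  sensitive   = true
}
```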
Now that that’s all done, run the following to create the resources:
terraform plan
terraform apply
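Optionally, you can also add outputs so Terraform prints the three values GitHub Actions will need later on - here's a sketch using the resources defined above (the output names are just my own convention):

```tf
output "azure_client_id" {
  description = "Client ID of the AAD application used by GitHub Actions"
  value       = azuread_application.github_oidc.application_id
}

output "azure_tenant_id" {
  description = "Tenant ID for the azure/login action"
  value       = data.azurerm_subscription.primary.tenant_id
}

output "azure_subscription_id" {
  description = "Subscription ID for the azure/login action"
  value       = data.azurerm_subscription.primary.subscription_id
}
```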
AWS
Providers setup
Create the following in providers.tf:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "4.59.0"
    }
  }
  required_version = ">=1.4.0"
}
Variables Setup
Create the following in variables.tf:
variable "repo" {
type = string
description = "Name of GitHub Repository that GitHub Actions role will use."
}
variable "gh_username" {
type = string
description = "Name of GitHub user that owns the repository that gh actions will run on"
}
variable "gh_role_name" {
type = string
description = "IAM Role name for GitHub Actions"
}
variable "tags" {
default = {}
type = map(string)
}
Configuring the OIDC Provider
Since I initially created this module to set up the required permissions for an image-build workflow using Packer (with AWS) on GitHub Actions, I opted to use an existing module for the OIDC side of things.
See their documentation here
First, let's create the IAM OpenID Connect Identity Provider. You can omit this if you already have one in your account, as AWS allows only one identity provider per URL (token.actions.githubusercontent.com) per account. Add the following to main.tf:
resource "aws_iam_openid_connect_provider" "github_actions" {
url = "https://token.actions.githubusercontent.com"
client_id_list = ["sts.amazonaws.com"]
// GitHub's OIDC Thumbprint
thumbprint_list = ["6938fd4d98bab03faadb97b34396831e3780aea1"]
}
Then, create the role and required permissions.
```tf
module "github_oidc" {
  source                      = "philips-labs/github-oidc/aws"
  version                     = "0.6.0"
  role_name                   = var.gh_role_name
  repo                        = "${var.gh_username}/${var.repo}"
  openid_connect_provider_arn = aws_iam_openid_connect_provider.github_actions.arn
  role_policy_arns            = [module.gh_role_policy_packer.arn, module.gh_role_policy_secretsmanager.arn]
  default_conditions          = ["allow_main"]

  conditions = [{
    test     = "StringLike"
    variable = "token.actions.githubusercontent.com:sub"
    values   = ["repo:${var.gh_username}/${var.repo}:pull_request"]
  }]
}
```
If you're not setting up an image build pipeline, change the role_policy_arns parameter above to your desired IAM policies (so the IAM role you created can access the resources it needs) and you'll be off to the races once you deploy.
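For example, if your workflow only needed read access to S3, the module call might look something like this instead (using an AWS-managed policy as a stand-in for whatever policies you actually need):

```tf
module "github_oidc" {
  source                      = "philips-labs/github-oidc/aws"
  version                     = "0.6.0"
  role_name                   = var.gh_role_name
  repo                        = "${var.gh_username}/${var.repo}"
  openid_connect_provider_arn = aws_iam_openid_connect_provider.github_actions.arn

  // Attach whatever policies your workflow actually requires
  role_policy_arns = ["arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"]

  default_conditions = ["allow_main"]
}
```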
Otherwise, add the following to main.tf (or use the below as an example), which will grant our workflow permissions to do the following once deployed:
- Build AMIs with Packer
- Access a specific Secrets Manager secret
- Update an SSM parameter (which is used to get the latest AMI)
data "aws_iam_policy_document" "gh_role_permissions_packer" {
statement {
sid = "RatherSafeActions"
actions = [
"ec2:CopyImage",
"ec2:CreateImage",
"ec2:CreateSnapshot",
"ec2:CreateTags",
"ec2:CreateVolume",
"ec2:DescribeImages",
"ec2:DescribeImageAttribute",
"ec2:DescribeInstanceStatus",
"ec2:DescribeInstances",
"ec2:DescribeRegions",
"ec2:DescribeSecurityGroups",
"ec2:DescribeSnapshots",
"ec2:DescribeSubnets",
"ec2:DescribeVolumes",
"ec2:DescribeTags",
"ec2:RegisterImage",
"ec2:RunInstances",
"ec2:GetPasswordData",
"ec2:CreateKeyPair",
"ec2:DeleteKeyPair",
"ec2:CreateSecurityGroup",
"ec2:AuthorizeSecurityGroupIngress",
"ec2:DeleteSecurityGroup"
]
resources = ["*"]
}
statement {
sid = "DangerousActions"
actions = [
"ec2:AttachVolume",
"ec2:DeleteSnapshot",
"ec2:DeleteVolume",
"ec2:DeregisterImage",
"ec2:DetachVolume",
"ec2:ModifyImageAttribute",
"ec2:ModifyInstanceAttribute",
"ec2:ModifySnapshotAttribute",
"ec2:StopInstances",
"ec2:TerminateInstances"
]
resources = ["*"]
condition {
test = "StringEquals"
variable = "ec2:ResourceTag/Creator"
values = ["Packer"]
}
}
}
resource "aws_secretsmanager_secret" "ansible_vault_pass" {
name = "${var.repo}-ansible-vault-pass"
}
data "aws_iam_policy_document" "gh_role_permissions_secretsmanager" {
statement {
actions = [
"secretsmanager:GetResourcePolicy",
"secretsmanager:GetSecretValue",
"secretsmanager:DescribeSecret",
"secretsmanager:ListSecretVersionIds"
]
resources = [
aws_secretsmanager_secret.ansible_vault_pass.arn,
"${aws_secretsmanager_secret.ansible_vault_pass.arn}/*",
"${aws_secretsmanager_secret.ansible_vault_pass.arn}*",
]
}
statement {
actions = ["secretsmanager:ListSecrets"]
resources = ["*"]
}
statement {
actions = [
"secretsmanager:GetResourcePolicy",
"secretsmanager:GetSecretValue",
"secretsmanager:DescribeSecret",
"secretsmanager:ListSecretVersionIds",
"secretsManager:PutSecretValue",
"secretsManager:CreateSecret",
"secretsManager:UpdateSecret"
]
resources = [
aws_secretsmanager_secret.ansible_vault_pass.arn,
"${aws_secretsmanager_secret.ansible_vault_pass.arn}/*",
"${aws_secretsmanager_secret.ansible_vault_pass.arn}*"
]
}
statement {
actions = [
"ssm:DescribeParameters",
"ssm:PutParameter",
"ssm:GetParameters",
"ssm:GetParameter",
"ssm:GetParametersByPath",
"ssm:GetParameterHistory",
]
resources = [
aws_ssm_parameter.golden_image_id.arn
]
}
}
resource "aws_ssm_parameter" "golden_image_id" {
name = "golden-image-id"
type = "String"
value = "changeme"
}
// IAM policy that grants permissions to access SecretsManager
module "gh_role_policy_secretsmanager" {
  source  = "terraform-aws-modules/iam/aws//modules/iam-policy"
  version = "~> 3.0"

  name        = "${var.gh_role_name}-secretsmanager"
  path        = "/"
  description = "Grants permissions to GitHub OIDC Role (tf-aws repo) to read secretsmanager secret for img build pipeline"

  policy = data.aws_iam_policy_document.gh_role_permissions_secretsmanager.json
}

// Policy to grant permissions required for building AMIs
module "gh_role_policy_packer" {
  source  = "terraform-aws-modules/iam/aws//modules/iam-policy"
  version = "~> 3.0"

  name        = "${var.gh_role_name}-packer"
  path        = "/"
  description = "Grants permissions for GitHub OIDC Role (tf-aws repo) required permissions for packer role"

  policy = data.aws_iam_policy_document.gh_role_permissions_packer.json
}
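As with the Azure module, here's a rough sketch of calling this from a root configuration (the module path and values are placeholders):

```tf
module "github_actions_oidc_aws" {
  // Hypothetical path - point this at wherever you keep the module
  source       = "./modules/github-oidc-aws"
  gh_username  = "your-github-username"
  repo         = "your-repo-name"
  gh_role_name = "your-repo-gh-actions"

  tags = {
    ManagedBy = "terraform"
  }
}
```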
Configuring the GitHub Actions workflow
Secrets
Depending on which cloud provider you’re using, the secrets you need to create are quite different.
For AWS, you only need to create a secret called AWS_ROLE_ARN in your GitHub repository settings. Its value should be the ARN of the role that you created.
You can get this value by running the following AWS CLI command; it should output something similar to the below:
aws iam get-role --role-name YOUR_ROLE_NAME_HERE --query 'Role.Arn'
# in this case, mine is called 'tf-aws-github-actions'.
"arn:aws:iam::12345678901:role/github-actions/tf-aws-gh-actions"
If you’re running on Azure, you’ll need to create the following secrets:
- AZURE_CLIENT_ID
- AZURE_SUBSCRIPTION_ID
- AZURE_TENANT_ID
# get AZURE_CLIENT_ID
# first, get the ID of the service principal - REPLACE tf-aws WITH YOUR REPO NAME HERE
REPO_NAME="tf-aws"
SP_DISPLAY_NAME="${REPO_NAME}-gh-actions"
az ad app list --display-name $SP_DISPLAY_NAME --query "[*].appId" -o tsv
# get AZURE_SUBSCRIPTION_ID
az account show | jq -r '.id'
# get AZURE_TENANT_ID
az account tenant list
You can use the GitHub CLI, the GitHub web UI or GitHub's Terraform provider (there's a Terraform sketch after the links below) to create the secrets described above. See the corresponding documentation:
- https://docs.github.com/en/actions/security-guides/encrypted-secrets
- https://cli.github.com/manual/gh_secret_set
- https://registry.terraform.io/providers/integrations/github/latest/docs
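If you'd prefer to manage the secrets with Terraform as well, a minimal sketch using the integrations/github provider could look like this (the values below are placeholders - you'd normally wire in your own variables or outputs):

```tf
terraform {
  required_providers {
    github = {
      source  = "integrations/github"
      version = "~> 5.0"
    }
  }
}

provider "github" {
  owner = "your-github-username" // or set the GITHUB_OWNER / GITHUB_TOKEN env vars
}

resource "github_actions_secret" "aws_role_arn" {
  repository      = "your-repo-name"
  secret_name     = "AWS_ROLE_ARN"
  plaintext_value = "arn:aws:iam::123456789012:role/github-actions/tf-aws-gh-actions"
}
```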
Workflow Configuration
As I mentioned earlier, the configuration in the GitHub Actions workflow isn't that different from using access keys.
However, there is one significant difference - the permissions directive. If you don't add it, authentication won't work, as the workflow won't have permission to request the OIDC JWT token from GitHub's own OIDC provider.
permissions:
  id-token: write
  contents: read
Here's a breakdown of the permission fields above:
- id-token: write is required for requesting the JWT/OIDC token itself
- contents: read lets the job read the repository contents (e.g. for actions/checkout); if your job doesn't need to check out code, you can likely omit it
Authentication / Example Workflows
There aren't many differences in how the cloud provider login actions are configured (azure/login, aws-actions/configure-aws-credentials).
For AWS, it's pretty simple - you just don't pass any static credentials in:
---
name: Build VM Images

on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main

permissions:
  id-token: write
  contents: read

jobs:
  build-aws-ami:
    runs-on: ubuntu-latest
    container:
      image: joelfreeman/ansible-packer-boto3:latest
    steps:
      - name: Check out repository code
        uses: actions/checkout@v2
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v2
        with:
          role-to-assume: arn:aws:iam::012345678910:role/github-actions/tf-aws-gh-actions
          role-duration-seconds: 2100
          aws-region: us-east-1
      # and now you've got credentials!
And for Azure, you pass in the GitHub Actions secrets we created above:
name: Build VM Images

on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main

permissions:
  id-token: write
  contents: read

jobs:
  build-azure-image:
    runs-on: ubuntu-latest
    container:
      image: joelfreeman/ansible-packer-boto3:latest
    steps:
      - name: Check out repository code
        uses: actions/checkout@v2
      - name: 'Az CLI login'
        uses: azure/login@v1
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
Then you’re good to go!
If you’re curious to see what my full workflow for building VM images is like, take a look below:
---
name: Build VM Images

on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main

permissions:
  id-token: write
  contents: read

jobs:
  #TODO: ADD MOLECULE TESTING FOR ANSIBLE
  ansible-lint:
    runs-on: ubuntu-latest
    container:
      image: joelfreeman/ansible-packer-boto3:latest
    steps:
      - name: Check out repository code
        uses: actions/checkout@v2
      - name: Fix git permissions
        run: git config --global --add safe.directory /__w/tf-aws/tf-aws
      - name: Force ansible to use dummy vault
        run: |
          cp ./ansible/golden-image/vault.yaml ./ansible/golden-image/vault.old.yaml
          mv ./ansible/golden-image/dummy_vault.yaml ./ansible/golden-image/vault.yaml
      - name: Check Ansible Playbook Syntax
        env:
          ANSIBLE_VAULT_PASSWORD_FILE: ./ansible/golden-image/dummy-vault-pass
        run: |
          ansible-playbook \
            --syntax-check \
            --vault-password-file ./ansible/golden-image/dummy-vault-pass \
            ./ansible/golden-image/base.yml
          ansible-playbook \
            --syntax-check \
            --vault-password-file ./ansible/golden-image/dummy-vault-pass \
            ./ansible/golden-image/base_azure.yml
      - name: Run ansible-lint
        env:
          ANSIBLE_VAULT_PASSWORD_FILE: ./ansible/golden-image/dummy-vault-pass
        run: ansible-lint ./ansible/
  build-aws-ami:
    runs-on: ubuntu-latest
    container:
      image: joelfreeman/ansible-packer-boto3:latest
    steps:
      - name: Check out repository code
        uses: actions/checkout@v2
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v2
        with:
          role-to-assume: arn:aws:iam::012345678910:role/github-actions/tf-aws-gh-actions
          role-duration-seconds: 2100
          aws-region: us-east-1
      - name: Setup `packer`
        uses: hashicorp/setup-packer@main
        id: setup
        with:
          version: "1.8.3" # or `latest`
      - name: Fix git error
        run: git config --global --add safe.directory /__w/tf-aws/tf-aws
      - name: Validate Packerfile
        env:
          VAULT_AWS_SECRET_NAME: ${{ secrets.VAULT_AWS_SECRET_NAME }}
        run: |
          packer validate \
            -var "subnet_id=subnet-0b72ba8262022bc88" \
            -var "vault_pw_file_path=./bin/get_vault_pw.py" \
            -var "vault_path=./ansible/golden-image/vault.yaml" \
            ./components/golden-img/alma.pkr.hcl
      - name: Build base AWS AMI
        env:
          VAULT_AWS_SECRET_NAME: ${{ secrets.VAULT_AWS_SECRET_NAME }}
        run: |
          packer build \
            -var "subnet_id=subnet-0b72ba8262022bc88" \
            -var "vault_pw_file_path=./bin/get_vault_pw.py" \
            -var "vault_path=./ansible/golden-image/vault.yaml" \
            -timestamp-ui \
            ./components/golden-img/alma.pkr.hcl
          echo "AMI_ID=$(jq -r '.builds[-1].artifact_id' packer-manifest.json | cut -d ":" -f2)" >> $GITHUB_ENV
        shell: bash
      - name: Set SSM Parameter
        run: |
          echo "Setting SSM Parameter golden-image-id to $AMI_ID"
          aws ssm put-parameter --name "golden-image-id" --value $AMI_ID --overwrite
  build-azure-image:
    runs-on: ubuntu-latest
    container:
      image: joelfreeman/ansible-packer-boto3:latest
    steps:
      - name: Check out repository code
        uses: actions/checkout@v2
      - name: 'Az CLI login'
        uses: azure/login@v1
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
      - name: Setup `packer`
        uses: hashicorp/setup-packer@main
        id: setup
        with:
          version: "1.8.3" # or `latest`
      - name: Fix git error
        run: git config --global --add safe.directory /__w/tf-aws/tf-aws
      - name: Validate Packerfile
        env:
          VAULT_AZURE_SECRET_NAME: ${{ secrets.VAULT_AZURE_SECRET_NAME }}
          VAULT_AZURE_VAULT_NAME: ${{ secrets.VAULT_AZURE_VAULT_NAME }}
        run: |
          packer validate \
            -var "vault_pw_file_path=./bin/get_vault_pw_azure.sh" \
            -var "vault_path=./ansible/golden-image/vault.yaml" \
            ./components/golden-img/alma-azure.pkr.hcl
      - name: Build base image on azure
        env:
          VAULT_AZURE_SECRET_NAME: ${{ secrets.VAULT_AZURE_SECRET_NAME }}
          VAULT_AZURE_VAULT_NAME: ${{ secrets.VAULT_AZURE_VAULT_NAME }}
        run: |
          packer build \
            -var "vault_pw_file_path=./bin/get_vault_pw_azure.sh" \
            -var "vault_path=./ansible/golden-image/vault.yaml" \
            -timestamp-ui \
            ./components/golden-img/alma-azure.pkr.hcl
Next Steps
This concept can be applied to most CI/CD providers, and to most large services that access cloud resources - such as Terraform Cloud.
If you're reading this post then you're most likely interested in security - here are some great open-source resources/tools worth including in your pipelines:
- tfsec - Terraform static analysis tool with a focus on security
- Checkov - static analysis (security scanning) and policy-as-code tool
- docker scan - scans Docker images for application and package vulnerabilities
- trivy - finds vulnerabilities, misconfigurations and secrets in your code, container images, Kubernetes manifests and more
Lastly, thank you for reading - if you liked this post, feel free to connect with me on LinkedIn.
If you have any questions or inquiries, message me on LinkedIn above - or send me an email: contact@jxel.dev