-->

20/07/2020

IaC - Terraform Automation using Octopus Deploy


Octopus Deploy is a mainstream enterprise release-management tool, and it is now available to individual developers in the cloud under the name "Octopus Cloud". 
We don't need a build server of our own, as everything is hosted by Octopus Cloud and we use its resources for our deployments.

In this article we will learn how to deploy our "Infrastructure as Code" using Terraform and Octopus Deploy. Before that, if you need to familiarize yourself with Terraform basics, see my earlier article.

Let me highlight a couple of features we are looking for when considering this combination of Terraform and Octopus Deploy.

#1. Configuration Management vs. Provisioning: 
We are not looking for configuration management; Chef and Puppet do better in that department. We want a provisioning tool that creates the infrastructure in the same state no matter what order the resources are declared in and no matter how many times the tool is run.

#2. Idempotent:
This is the ability to create the infrastructure in the exact same state even after multiple runs. Terraform doesn't suffer from "configuration drift", unlike Chef and Puppet.

#3. Economic: 
We are dealing with what a single developer can do on a dev machine. So unlike Chef and Puppet, we don't need a master server to save the state or to run centralized updates. We can use either a shared location or, in our case, an S3 bucket for saving the state.

Step 1: Sign up for Octopus Cloud and check which features are included in the free tier. Click the link here.

Step 2: I am planning to target AWS for this automated deployment, as we have been working with AWS in all my recent posts. But this procedure is also applicable if you would like to target Azure.

Sign up for the AWS Free Tier subscription. Once you install the AWS CLI, open the AWS console in a browser and go to IAM.

Create a user "terraform" with "Administrator" access. Please note that this is not an ideal production scenario and you should never assign "Administrator" permissions to any IAM user. We just want to keep things simple for our demo. Download the access keys (Access Key ID and Secret Access Key) for this user.

Step 3: Log into your Octopus Cloud portal. The first thing we need to set up is an environment. As we intend to use our developer machine and deploy to AWS, we name it "AWS_Development".


 Please note that we don't need any deployment targets here, as we intend to do provisioning, not configuration management.

The next step is to add an AWS account that we will use for our deployments. Use the Access Key ID and Secret Key to add the "terraform" account as shown below.


Step 4: Now create a project group and a new project. I named them as shown below.


Let's add the AWS account we configured earlier as a variable in this project. Name it "AWS_Account" and in the value section click "Open Editor" and select as shown below.



Please set the scope to the environment we created earlier. This will be the only variable we intend to create for the time being. Save the variable.


It's time to create deployment steps. Go to "Process" in the left-hand pane and click the "Add Step" button.
Now choose the "Terraform" category and the "Apply a Terraform template" step as shown below.


Give it a meaningful name like "Terraform - Create AWS S3 Bucket". Leave the other selections, "Once on a worker pool" and "Default worker pool", as they are, since we intend to provision once using the shared resources provided by Octopus Cloud.

Now in the AWS Account section, select the variable we created for the AWS account, as shown below.


In the template section, set "Template Source" to "Source code" and paste the code below into it. Please change the bucket name to something unique of your choice (S3 bucket names must be globally unique).
provider "aws" {
  region = "us-west-2"
}

resource "aws_s3_bucket" "tf_demos3bucket" {
  bucket = "terrafromoctopusbucket"
  acl    = "private"
}
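As an optional refinement (a sketch, not part of the original steps), you can avoid hard-coding the bucket name by declaring a Terraform variable for it; the variable name below is my own choice:

```hcl
# Sketch: parameterizing the bucket name. "bucket_name" is a hypothetical
# variable name; the default still needs to be globally unique.
variable "bucket_name" {
  default = "terrafromoctopusbucket"
}

provider "aws" {
  region = "us-west-2"
}

resource "aws_s3_bucket" "tf_demos3bucket" {
  bucket = var.bucket_name   # use "${var.bucket_name}" on Terraform 0.11 and older
  acl    = "private"
}
```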
Save the step, click the "Create Release" button, and then click "Save". This action will create a new release with the current variables and deployment steps.


Now click "Deploy to AWS_Development" to trigger our first deployment.

Here is the output in both Octopus and AWS.



Now let's look at how to destroy the S3 bucket we created, using Octopus. 

Step 5: Create a second step in our project named "Terraform - Delete AWS S3 Bucket" and select the category and template as shown below.


Now in the details section, repeat what you did for the create step (the first step), including the selection of the AWS account and the source code we used; you can copy it from above. Save the new step and disable "Terraform - Create AWS S3 Bucket" (the first step).


Save, create a release, and use the "Deploy to AWS_Development" button to deploy it. It should now delete the AWS S3 bucket as per our configuration. Here is the output.


Now don't be deceived by the green color of this step and the "Destroy complete!" message. Look at the entire message: "Destroy complete! Resources: 0 destroyed."

This says nothing has been deleted. If you check my earlier article about the basics, both the apply and destroy commands were successful with the same configuration. What happened this time?

Now let's talk about the state file in Terraform. 

Terraform must store state about your managed infrastructure and configuration. This state is used by Terraform to map real world resources to your configuration, keep track of metadata, and to improve performance for large infrastructures.

By default this state is stored in a local file (terraform.tfstate) when you run Terraform commands from a local machine. 

But Octopus needs a remote state file to access the current state of the infrastructure. Each step runs on an ephemeral Octopus Cloud worker, so the local state file written by the create step is gone by the time the delete step runs. Since we didn't configure a remote state file, the delete step could not find any state to determine what exists, so it didn't see the newly created S3 bucket and reported "0 destroyed".

Step 6: Now we need to delete the S3 bucket manually and change the "Source code" in both the create and delete steps to the code shown below. 

terraform {
  backend "s3" {
    bucket = "terrafromoctopusstatebucket"
    key    = "Octopus S3 Operations"
    region = "us-west-2"
  }
}

provider "aws" {
  region = "us-west-2"
}

resource "aws_s3_bucket" "tf_demos3bucket" {
  bucket = "terrafromoctopusbucket"
  acl    = "private"
}

This source code says: create or refer to a remote state file in an AWS S3 bucket. We need to explicitly create a dedicated bucket to hold the state files for all our Terraform projects.

In this case we name it "terrafromoctopusstatebucket". So we need to manually create an S3 bucket with this name. I created it, and you can see it is empty for now.
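If you ever share this project with a team, a common hardening step for the S3 backend (not required for this demo) is DynamoDB-based state locking, so two concurrent deployments can't corrupt the state. This is only a sketch; the table name below is my own choice:

```hcl
# Sketch: S3 backend with state locking. "terraform-locks" is a hypothetical
# DynamoDB table; it must exist and have a string hash key named "LockID".
terraform {
  backend "s3" {
    bucket         = "terrafromoctopusstatebucket"
    key            = "Octopus S3 Operations"
    region         = "us-west-2"
    dynamodb_table = "terraform-locks"
  }
}
```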

Now, after updating both the create and delete steps with the "Source code" shown above, enable the create step and disable the delete step. Create a new release and deploy it.

This will create the S3 bucket and write a state file to the bucket named "terrafromoctopusstatebucket". Let's check it.

Look at the remote state file created in the S3 bucket and compare its timestamp with the creation time shown above.


Bucket created in AWS.

Step 7: Now edit the project, disable the create step and enable the delete step. Save, click "Create a new release", and deploy it. This time the deployment will delete the S3 bucket.

Here is the output of the Octopus deployment.

Here is the state file getting updated; please compare the timestamp with the screen above for the deletion step.

And lastly, the S3 bucket "terrafromoctopusbucket" was successfully deleted as intended.

Finally, we were able to provision and destroy infrastructure using the Terraform and Octopus integration. 

Please note that just by updating the "Source code" in both the create and delete steps, you can create any infrastructure on any cloud platform. Remember, Terraform is cloud-agnostic.
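For instance, a minimal sketch of the same idea targeting Azure might look like the following. This assumes the azurerm provider (2.x or later, which requires the empty "features" block) and an authenticated Azure account; the resource names are my own choice:

```hcl
# Sketch only: needs the azurerm provider and Azure credentials configured.
provider "azurerm" {
  features {}
}

# "demo" and the group name are hypothetical, chosen for illustration.
resource "azurerm_resource_group" "demo" {
  name     = "tf-octopus-demo-rg"
  location = "West US 2"
}
```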
