Table of contents
- Terraform interview questions 🔥
- What is Terraform and how is it different from other IaC tools?
- How do you call a module in a configuration file?
- What exactly is Sentinel? Can you provide a few examples of where we can use Sentinel policies?
- How to create multiple instances of the same resource in Terraform?
- How would you know from which paths Terraform is loading providers referenced in your Terraform configuration (*.tf files)?
- How would you save any particular resource while destroying the existing infrastructure?
- Which module is used to store the .tfstate file in S3?
- How do you manage sensitive data in Terraform, such as API keys or passwords?
- You are working on a Terraform project that needs to provision an S3 bucket and a user with read and write access to the bucket. What resources would you use to accomplish this, and how would you configure them?
- Who maintains Terraform providers?
- How can we export data from one module to another?
- Day 71 task is complete!
Terraform interview questions 🔥
What is Terraform and how is it different from other IaC tools?
Terraform is an open-source infrastructure as code (IaC) tool created by HashiCorp. It allows you to define and provision infrastructure resources across various cloud providers or on-premises environments using a declarative configuration language. Terraform focuses on providing a consistent and predictable provisioning workflow, infrastructure state management, and support for multiple cloud platforms.
Unlike many other IaC tools, Terraform supports a wide range of cloud providers, has a large and active community, and enables infrastructure orchestration and management of complex environments.
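As a minimal sketch of that declarative style (the AMI ID below is a placeholder), a configuration simply declares the resources you want, and Terraform works out how to create them:
provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "web" {
  ami           = "ami-123456" # placeholder AMI ID
  instance_type = "t2.micro"
}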
How do you call a module in a configuration file?
In Terraform, you define and use modules in your Terraform configuration. To use a module, you declare it in your Terraform files with a module block; note that source must point to the module's directory, not to an individual .tf file. For example:
module "my_ec2" {
  source    = "./modules/my_ec2"
  variable1 = "value1"
  variable2 = "value2"
}
Here, module refers to the module block, "my_ec2" is the name given to the module instance, source specifies the module's location, and variable1 and variable2 are variables passed to the module.
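For this call to work, the module itself must declare the variables it accepts. A minimal sketch of the module's variables.tf, assuming the ./modules/my_ec2 layout used above:
# ./modules/my_ec2/variables.tf
variable "variable1" {
  type = string
}

variable "variable2" {
  type = string
}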
What exactly is Sentinel? Can you provide a few examples of where we can use Sentinel policies?
Sentinel is Terraform's policy-as-code framework. It allows you to define and enforce policies for Terraform configurations and deployments, such as resource limits, usage restrictions, and compliance constraints. Sentinel policies are written in the Sentinel policy language and are used to validate and enforce constraints on infrastructure provisioning.
A basic Sentinel policy:
import "tfplan/v2" as tfplan

# Fail the plan if it changes 10 or more resources
main = rule {
    length(tfplan.resource_changes) < 10
}
This policy's main rule checks that the number of resource changes in the Terraform plan is less than 10. Sentinel policies are enforced in Terraform Cloud and Terraform Enterprise, where they run automatically between the plan and apply stages; they can also be developed and tested locally with the Sentinel CLI (sentinel apply, sentinel test). If every rule passes, the run is allowed to proceed; if a rule fails, the run is blocked.
Some other examples of Sentinel policies are:
- Enforcing a maximum number of instances per resource type
- Ensuring specific providers or versions are used
- Implementing security and compliance rules
- Enforcing resource tagging standards
- Restricting the use of certain resources
- Implementing cost optimization policies
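For instance, here is a rough sketch of an instance-type restriction (edge cases such as deletions and updates are ignored for brevity):
import "tfplan/v2" as tfplan

# Collect every aws_instance the plan wants to create
instances = filter tfplan.resource_changes as _, rc {
    rc.type is "aws_instance" and rc.change.actions contains "create"
}

# Only allow t2.micro for newly created instances
main = rule {
    all instances as _, rc {
        rc.change.after.instance_type is "t2.micro"
    }
}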
How to create multiple instances of the same resource in Terraform?
To create multiple instances of the same resource in Terraform, you can use the count or for_each meta-arguments (for_each on resources requires Terraform 0.12.6 or later).
Here's how you achieve it:
- Using count:
resource "aws_instance" "example" {
  count         = 3
  instance_type = "t2.micro"
  ami           = "ami-123456" # Ubuntu AMI
  tags = {
    Name = "Webserver-${count.index}"
  }
}
- Using for_each:
resource "aws_instance" "example" {
  for_each = toset([
    "web1",
    "web2",
    "web3"
  ])
  instance_type = "t2.micro"
  ami           = "ami-123456" # Ubuntu AMI
  tags = {
    Name = "Webserver-${each.key}"
  }
}
Both the count and for_each meta-arguments let you stamp out multiple instances from a single resource block; for_each additionally lets you give each instance a meaningful key, which makes the infrastructure more self-documenting and easier to troubleshoot.
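One practical difference, sketched below assuming the resources above (the two variants would live in separate configurations, since both use the resource name example): count instances are addressed by numeric index, while for_each instances are addressed by their key:
output "first_instance_ip" {
  value = aws_instance.example[0].public_ip      # count: numeric index
}

output "web1_ip" {
  value = aws_instance.example["web1"].public_ip # for_each: string key
}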
How would you know from which paths Terraform is loading providers referenced in your Terraform configuration (*.tf files)?
You need to enable debug messages to find this out. Which of the following would achieve this?
A. Set the environment variable TF_LOG=TRACE
B. Set verbose logging for each provider in your Terraform configuration
C. Set the environment variable TF_VAR_log=TRACE
D. Set the environment variable TF_LOG_PATH
Answer: Option A
To enable debug logging, you can set the environment variable TF_LOG to TRACE before running Terraform commands. For example, in a Unix/Linux-based system, you can execute the following command to enable debug logging:
export TF_LOG=TRACE
After setting the environment variable, any subsequent Terraform commands you run, such as terraform init or terraform apply, will generate detailed debug messages, including the paths from which Terraform is loading the providers referenced in your Terraform configuration files (*.tf files).
For provider loading specifically, TRACE (or DEBUG) level logging shows the paths. For example:
TF_LOG=TRACE terraform init
This prints detailed log messages during the terraform init command, including information about which paths Terraform is loading providers from. The messages look something like this:
[DEBUG] plugin: finding plugin "registry.terraform.io/hashicorp/aws" (meta.terraform-provider-aws)
[DEBUG] plugin: using plugin at ./plugins/registry.terraform.io/hashicorp/aws/linux_amd64/3.63.0
This shows Terraform locating and using the AWS provider plugin from the ./plugins directory. So in summary, set the TF_LOG environment variable to TRACE; running any Terraform command will then print debug-level logs, including info about provider loading paths.
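To keep that output for later inspection, the TF_LOG_PATH environment variable writes the logs to a file instead of the terminal:
export TF_LOG=TRACE
export TF_LOG_PATH=./terraform.log # the file path is your choice
terraform init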
How would you save any particular resource while destroying the existing infrastructure?
Note that the -target flag destroys only the resources you name (terraform destroy -target=aws_instance.webserver_3 would destroy exactly that instance). So to preserve one resource while destroying the rest, either run targeted destroys for every resource except the one you want to keep, or, more simply, remove the resource you want to keep from Terraform state first and then destroy everything else:
terraform state rm aws_instance.webserver_3
terraform destroy
After terraform state rm, Terraform no longer tracks aws_instance.webserver_3, so the destroy leaves it untouched (re-import it later if you want Terraform to manage it again).
Which module is used to store the .tfstate file in S3?
Strictly speaking, remote state in S3 is configured with Terraform's built-in s3 backend rather than a module; community modules such as terraform-aws-backend exist to provision the S3 bucket (and lock table) that the backend uses. To configure it, you add a backend "s3" block to your Terraform configuration:
terraform {
  backend "s3" {
    bucket = "my-bucket"
    key    = "path/to/my/key/terraform.tfstate"
    region = "us-east-1"
  }
}
This tells Terraform to store the .tfstate file in the S3 bucket named "my-bucket" under the key "path/to/my/key/terraform.tfstate"; region specifies the AWS region of the S3 bucket. When you run terraform init, Terraform initializes the S3 backend and starts using S3 to store the terraform.tfstate file.
The main benefits of this are:
- Centralized state storage: multiple Terraform users can access the same .tfstate
- Durability: S3 offers very high durability, and the bucket can be versioned to keep state history
- Locking: with a DynamoDB table configured, Terraform locks the state to prevent concurrent modifications (see the sketch below)
So in summary, the s3 backend stores .tfstate in an S3 bucket, and a module such as terraform-aws-backend can provision the AWS resources it needs.
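A sketch of the same backend with locking and encryption enabled (the DynamoDB table name terraform-locks is an assumed example; the table must already exist with a LockID string partition key):
terraform {
  backend "s3" {
    bucket         = "my-bucket"
    key            = "path/to/my/key/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks" # assumed table name, used for state locking
    encrypt        = true              # encrypt the state file at rest
  }
}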
How do you manage sensitive data in Terraform, such as API keys or passwords?
To manage sensitive data in Terraform, you can use the following approaches:
1) Using an environment variable to store a sensitive value in Terraform:
variable "db_password" {
description = "Database password"
}
resource "aws_db_instance" "example" {
# ... other configs
password = var.db_password
}
$ export TF_VAR_db_password="My5uper$ecureP4ssw0rd!"
When you run terraform plan or terraform apply, Terraform reads the TF_VAR_db_password environment variable and uses that value for the db_password variable. (The single quotes above keep the shell from expanding the $ in the password.)
2) You can also store the value in a terraform.tfvars file:
db_password = "My5uper$ecureP4ssw0rd!"
And pass it to Terraform (a file named exactly terraform.tfvars is loaded automatically, so -var-file is only required for other file names):
$ terraform apply -var-file=terraform.tfvars
The key things are:
- Define the variables in your .tf files
- Supply the sensitive value via an environment variable or a .tfvars file rather than hard-coding it
- Don't commit the .tfvars file to version control
This allows you to store and manage sensitive values externally, while still being able to use them in your Terraform configs.
3) You can use external secret management systems such as HashiCorp Vault or AWS Secrets Manager to store and retrieve sensitive data securely.
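A minimal sketch of reading a password from AWS Secrets Manager, assuming a secret named prod/db/password already exists (both the secret name and its use here are illustrative):
data "aws_secretsmanager_secret_version" "db" {
  secret_id = "prod/db/password" # assumed secret name
}

resource "aws_db_instance" "example" {
  # ... other configs
  password = data.aws_secretsmanager_secret_version.db.secret_string
}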
4) You can encrypt sensitive data using external tools or services before storing it in Terraform configuration files or version control systems.
You are working on a Terraform project that needs to provision an S3 bucket and a user with read and write access to the bucket. What resources would you use to accomplish this, and how would you configure them?
To provision an S3 bucket and a user with read and write access, you can use the following resources:
- aws_s3_bucket: defines the S3 bucket and its properties.
- aws_iam_user: creates an IAM user.
- aws_iam_access_key: generates an access key for the IAM user.
- aws_iam_user_policy: associates an IAM policy with the IAM user, granting access to the S3 bucket.
Here's an example of how you can configure these resources:
resource "aws_s3_bucket" "buck1" {
bucket = "your-bucket-name"
# Other S3 bucket configurations
}
resource "aws_iam_user" "sam" {
name = "your-user-name"
}
resource "aws_iam_access_key" "key1" {
user = aws_iam_user.sam.name
}
resource "aws_iam_user_policy" "example" {
name = "your-policy-name"
user = aws_iam_user.sam.name
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:PutObject"
],
"Resource": "arn:aws:s3:::${aws_s3_bucket.buck1.id}/*"
}
]
}
EOF
}
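To actually hand these credentials over, you can expose them as outputs, marking the secret as sensitive so it is hidden in CLI output:
output "access_key_id" {
  value = aws_iam_access_key.key1.id
}

output "secret_access_key" {
  value     = aws_iam_access_key.key1.secret
  sensitive = true # keep the secret out of plain CLI output
}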
Who maintains Terraform providers?
Terraform providers are maintained by a variety of parties:
- The official Terraform providers are maintained by HashiCorp, the company that created Terraform. This includes providers like aws, azurerm, google, etc.
- Some providers are maintained by the cloud providers themselves. For example, the alicloud provider is maintained by Alibaba Cloud.
- Some providers are maintained by the community or third-party companies. For example, the openstack provider is maintained by the OpenTelekomCloud community.
- Many less popular providers are maintained by individuals or small teams.
HashiCorp provides guidelines and development tooling (the Terraform Plugin SDK and Plugin Framework) to help the community create and maintain Terraform providers. But ultimately, each provider is maintained independently.
HashiCorp does maintain a set of "blessed" official providers, which undergo additional testing and support. But many useful providers exist outside of the official set.
In general, most providers aim to keep their code up-to-date with the latest Terraform and provider SDK versions. But support for the latest features can vary between providers.
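In a configuration, the namespace in a provider's source address on the Terraform Registry tells you who publishes it. For example:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws" # "hashicorp" namespace: an official, HashiCorp-maintained provider
      version = "~> 5.0"        # illustrative version constraint
    }
  }
}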
How can we export data from one module to another?
There are a few ways to export data from one Terraform module and import it into another. The simplest is outputs: define an output in the source module and consume it as an input variable in the destination module.
In the source module:
output "vpc_id" {
value = aws_vpc.main.id
}
In the destination module:
input "vpc_id" {
type = string
}
resource "aws_subnet" "example" {
vpc_id = var.vpc_id
}
Then when you call the destination module, pass the output as an argument:
module "network" {
source = "./network"
}
module "app" {
source = "./app"
vpc_id = module.network.vpc_id
}
Using outputs and inputs is the cleanest and most "Terraform-y" way to share data between modules.
Happy learning :)
Day 71 task is complete!
90DaysOfDevOps Tasks