CI/CD pipeline with GitLab, Nexus, Jenkins and AWS CodeDeploy
Table of contents
- Overview diagram
- Create Permission Set, myApplication and SSO users
- Create the Secrets Manager secret
- Create S3 bucket and S3 notification
- Create CodeDeploy resources and VPC endpoints
- Create some supporting EC2 instances
- Create the app that retrieves data from Secrets Manager
- Configure IAM Roles Anywhere
- Create the app that retrieves certificates from Vault
- Create the web app instances
- Testing and using the application
- Notes
Note: The Terraform code here was written for demo purposes and is not optimized; refactor it before using it in a real environment.
Overview diagram
Create Permission Set, myApplication and SSO users
We use the following Terraform to create them:
# Provider for SSO (us-east-1)
provider "aws" {
alias = "us_east_1"
region = "us-east-1"
}
# Provider for the other AWS resources
provider "aws" {
region = "ap-southeast-1"
}
# Register a new application
resource "aws_servicecatalogappregistry_application" "web_app" {
for_each = var.user_list
name = "WebApp-${each.key}"
description = "WebApp-${each.key}"
}
# Look up the latest Amazon Linux 2023 AMI via SSM
data "aws_ssm_parameter" "al_2023" {
name = "/aws/service/ami-amazon-linux-latest/al2023-ami-kernel-default-x86_64"
}
# Create the EC2 instances
resource "aws_instance" "web_server" {
for_each = var.user_list
instance_type = "t3.micro"
ami = data.aws_ssm_parameter.al_2023.value
subnet_id = "subnet-e707d7af"
iam_instance_profile = aws_iam_instance_profile.ec2_instance_profile.name
tags = merge(
{
Name = "WebServer-${each.key}"
Owner = "AROA4CWKQYRJITZOTL5FV:${each.key}" # SSO role id: AROA4CWKQYRJITZOTL5FV # tag này phải được thêm sau khi đã tạo ra permisssion set và lấy đc role SSO
codedeploy = "true"
},
{
awsApplication = aws_servicecatalogappregistry_application.web_app[each.key].application_tag["awsApplication"]
}
)
}
# Reference the existing SSO instance
data "aws_ssoadmin_instances" "example" {
provider = aws.us_east_1
}
# Create a Permission Set for all users
resource "aws_ssoadmin_permission_set" "user_permission_set" {
provider = aws.us_east_1
name = "UserPermissionSet"
description = "Permission set for all users"
instance_arn = tolist(data.aws_ssoadmin_instances.example.arns)[0]
session_duration = "PT2H"
}
resource "aws_iam_policy" "permission_set_policy" {
provider = aws.us_east_1
name = "LimitedUserAccessPolicy"
path = "/"
description = "Custom policy for user Permission Set with limited access"
policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Effect = "Allow",
Action = [
"servicecatalog:ListApplications",
"servicecatalog:GetApplication",
"servicecatalog:ListAssociatedResources",
"cloudwatch:GetMetricData",
"cloudwatch:ListMetrics",
"cloudwatch:GetDashboard",
"cloudwatch:GetMetricStatistics",
"cloudwatch:ListDashboards",
"cloudwatch:Describe*",
"SNS:List*",
"tag:GetResources",
"kms:GenerateDataKey",
"ssm:DescribeSessions",
"ssm:GetConnectionStatus",
"ssm:DescribeInstanceProperties",
"ssm:DescribeInstanceInformation",
"ec2:Describe*",
"ec2:Get*",
"ec2:List*",
"s3:GetBucketLocation",
"s3:ListAllMyBuckets",
"s3:GetAccountPublicAccessBlock",
"s3:GetBucketAcl",
"s3:GetBucketPolicyStatus",
"s3:GetBucketPublicAccessBlock",
"s3:ListAccessPoints",
"iam:Get*",
"iam:List*",
# CodePipeline readonly permissions
"codepipeline:GetPipeline",
"codepipeline:GetPipelineState",
"codepipeline:GetPipelineExecution",
"codepipeline:ListPipelines",
"codepipeline:ListPipelineExecutions",
"codepipeline:ListActionTypes",
"codepipeline:ListActionExecutions",
# Lambda readonly permissions
"lambda:GetFunction",
"lambda:GetFunctionConfiguration",
"lambda:ListFunctions",
"lambda:ListVersionsByFunction",
"lambda:GetPolicy",
"lambda:ListTags",
"lambda:GetLayerVersion",
"lambda:ListLayers",
"lambda:ListLayerVersions",
"lambda:GetAccountSettings",
# CodeDeploy readonly permissions
"codedeploy:BatchGetApplications",
"codedeploy:BatchGetDeploymentGroups",
"codedeploy:BatchGetDeployments",
"codedeploy:GetApplication",
"codedeploy:GetDeployment",
"codedeploy:GetDeploymentConfig",
"codedeploy:GetDeploymentGroup",
"codedeploy:GetDeploymentInstance",
"codedeploy:List*",
"codedeploy:Get*",
"codedeploy:BatchGet*",
"codebuild:ListProjects",
"codeartifact:List*",
"codecommit:ListRepositories"
],
Resource = "*"
},
{
Effect = "Allow"
Action = [
"ec2:StartInstances",
"ec2:StopInstances",
"ec2:RebootInstances"
]
Resource = "arn:aws:ec2:ap-southeast-1:<account-id>:instance/*"
Condition = {
StringLike = {
"ec2:ResourceTag/Owner" : "$${aws:userid}"
}
}
},
{
Effect = "Allow",
Action = [
"ssm:DescribeInstanceInformation",
"ssm:StartSession",
"ssm:TerminateSession",
"ssm:ResumeSession"
],
Resource = ["arn:aws:ec2:ap-southeast-1:<account-id>:instance/*"]
Condition = {
StringLike = {
"ssm:resourceTag/Owner" : "$${aws:userid}"
}
}
},
{
Effect = "Allow",
Action = [
"ssm:StartSession"
],
Resource = ["arn:aws:ssm:ap-southeast-1:<account-id>:document/SSM-SessionManagerRunShell"]
},
{
Effect = "Allow",
Action = [
"servicecatalog:DescribeProduct",
"servicecatalog:DescribeProductView",
"servicecatalog:DescribeProvisioningParameters",
"servicecatalog:DescribeRecord",
"servicecatalog:ListLaunchPaths",
"servicecatalog:ListRecordHistory",
"servicecatalog:ProvisionProduct",
"servicecatalog:ScanProvisionedProducts",
"servicecatalog:SearchProducts",
"servicecatalog:TerminateProvisionedProduct",
"servicecatalog:UpdateProvisionedProduct",
"servicecatalog:GetApplication",
"servicecatalog:ListAssociatedResources"
],
Resource = "*",
Condition = {
StringEquals = {
"servicecatalog:userLevel": "self"
}
}
},
{
Effect = "Allow",
Action = [
"cloudformation:DescribeStacks",
"cloudformation:DescribeStackEvents",
"cloudformation:DescribeStackResource",
"cloudformation:DescribeStackResources"
],
Resource = "*"
},
{
Effect = "Allow",
Action = [
"servicecatalog:ViewPortal"
],
Resource = "*"
},
{
Effect = "Allow",
Action = [
"s3:*"
],
Resource = ["arn:aws:s3:::immersionday-aaaa-jjjj", "arn:aws:s3:::immersionday-aaaa-jjjj/*"]
},
{
Effect = "Allow",
Action = [
"logs:Describe*",
"logs:List*",
"logs:Get*"
],
Resource = [
"arn:aws:logs:ap-southeast-1:<accountid>:log-group:/com/example/secrets-manager-retriever:*",
"arn:aws:logs:ap-southeast-1:<accountid>:log-group:/aws/lambda/test-trigger-codedeploy-function:*"
]
},
]
})
}
# Attach the customer managed policy to the Permission Set
resource "aws_ssoadmin_customer_managed_policy_attachment" "example" {
provider = aws.us_east_1
instance_arn = tolist(data.aws_ssoadmin_instances.example.arns)[0]
permission_set_arn = aws_ssoadmin_permission_set.user_permission_set.arn
customer_managed_policy_reference {
name = aws_iam_policy.permission_set_policy.name
path = "/"
}
}
# Create an IAM role for the EC2 instances
resource "aws_iam_role" "ec2_role" {
name = "EC2InstanceRole"
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = "sts:AssumeRole"
Effect = "Allow"
Principal = {
Service = "ec2.amazonaws.com"
}
}
]
})
}
# Attach the AmazonSSMManagedInstanceCore policy to the EC2 instance role
resource "aws_iam_role_policy_attachment" "ssm_managed_instance_core" {
policy_arn = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
role = aws_iam_role.ec2_role.name
}
resource "aws_iam_role_policy_attachment" "cloudwatch_access" {
policy_arn = "arn:aws:iam::aws:policy/CloudWatchFullAccess"
role = aws_iam_role.ec2_role.name
}
# Create a custom policy for CodeDeploy commands
resource "aws_iam_policy" "codedeploy_commands_policy" {
name = "CodeDeployCommandsPolicy"
path = "/"
description = "Custom policy for CodeDeploy commands"
policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = [
"codedeploy-commands-secure:GetDeploymentSpecification",
"codedeploy-commands-secure:PollHostCommand",
"codedeploy-commands-secure:PutHostCommandAcknowledgement",
"codedeploy-commands-secure:PutHostCommandComplete"
]
Effect = "Allow"
Resource = "*"
}
]
})
}
# Attach the custom CodeDeploy commands policy to the EC2 instance role
resource "aws_iam_role_policy_attachment" "codedeploy_commands" {
policy_arn = aws_iam_policy.codedeploy_commands_policy.arn
role = aws_iam_role.ec2_role.name
}
# Attach the AmazonEC2RoleforAWSCodeDeploy policy to the EC2 instance role
resource "aws_iam_role_policy_attachment" "ec2_for_codedeploy" {
policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2RoleforAWSCodeDeploy"
role = aws_iam_role.ec2_role.name
}
# Attach the AWSCodeDeployRole policy to the EC2 instance role
resource "aws_iam_role_policy_attachment" "codedeploy_role" {
policy_arn = "arn:aws:iam::aws:policy/service-role/AWSCodeDeployRole"
role = aws_iam_role.ec2_role.name
}
# Create an IAM instance profile for EC2
resource "aws_iam_instance_profile" "ec2_instance_profile" {
name = "EC2InstanceProfile"
role = aws_iam_role.ec2_role.name
}
variable "user_list" {
type = map(string)
default = {
"user1" = "test1"
"user2" = "test2"
}
}
output "sso_instance_arn" {
value = tolist(data.aws_ssoadmin_instances.example.arns)[0]
}
output "identity_store_id" {
value = tolist(data.aws_ssoadmin_instances.example.identity_store_ids)[0]
}
output "app_tag" {
value = {
for k, v in aws_servicecatalogappregistry_application.web_app : k => v.application_tag
}
}
output "permission_set_arn" {
value = aws_ssoadmin_permission_set.user_permission_set.arn
}
output "permission_set_policy_arn" {
value = aws_iam_policy.permission_set_policy.arn
}
Important note: once the tag key Owner is dedicated to these resources, do not add any other tag key with the same word. Even owner (lowercase first letter) will conflict, and the SSM session will fail to start!
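A quick way to double-check this, a minimal AWS CLI sketch with a placeholder <instance-id>: list the tags on an instance and make sure only a single Owner-like key exists.
# List any tag whose key is Owner or owner on a given instance (replace <instance-id>)
aws ec2 describe-tags \
  --filters "Name=resource-id,Values=<instance-id>" \
  --query "Tags[?Key=='Owner' || Key=='owner']" \
  --region ap-southeast-1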
Because the SSO users were created manually, Terraform import blocks are needed:
import {
to = aws_identitystore_user.sso_users["user1"]
id = "d-yyyyyyyyyy/54785438-f0f1-701b-bd3f-1d18c0346de7"
}
import {
to = aws_identitystore_user.sso_users["user2"]
id = "d-yyyyyyyyyy/24f88408-f0c1-70e7-2c3a-270605622240"
}
# PRINCIPAL_ID,PRINCIPAL_TYPE,TARGET_ID,TARGET_TYPE,PERMISSION_SET_ARN,INSTANCE_ARN
import {
to = aws_ssoadmin_account_assignment.user_assignments["user1"]
id = "54785438-f0f1-701b-bd3f-1d18c0346de7,USER,<account-id>,AWS_ACCOUNT,arn:aws:sso:::permissionSet/ssoins-xxxxxxxxxxxxxxxx/ps-94081c39d40b0cd0,arn:aws:sso:::instance/ssoins-xxxxxxxxxxxxxxxx"
}
import {
to = aws_ssoadmin_account_assignment.user_assignments["user2"]
id = "24f88408-f0c1-70e7-2c3a-270605622240,USER,<account-id>,AWS_ACCOUNT,arn:aws:sso:::permissionSet/ssoins-xxxxxxxxxxxxxxxx/ps-94081c39d40b0cd0,arn:aws:sso:::instance/ssoins-xxxxxxxxxxxxxxxx"
}
# Create SSO users
resource "aws_identitystore_user" "sso_users" {
provider = aws.us_east_1
for_each = var.user_list
identity_store_id = tolist(data.aws_ssoadmin_instances.example.identity_store_ids)[0]
display_name = each.key
user_name = each.key
name {
given_name = each.key
family_name = each.value
}
}
# Assign users to the AWS account with the permission set
resource "aws_ssoadmin_account_assignment" "user_assignments" {
provider = aws.us_east_1
for_each = var.user_list
instance_arn = tolist(data.aws_ssoadmin_instances.example.arns)[0]
permission_set_arn = aws_ssoadmin_permission_set.user_permission_set.arn
principal_id = aws_identitystore_user.sso_users[each.key].user_id
principal_type = "USER"
target_id = "<account-id>"
target_type = "AWS_ACCOUNT"
}
output "sso_user_ids" {
value = {
for k, v in aws_identitystore_user.sso_users : k => v.user_id
}
description = "The IDs of the created SSO users"
}
output "sso_user_assignments" {
value = {
for k, v in aws_ssoadmin_account_assignment.user_assignments : k => v.principal_id
}
description = "The assignments of SSO users to the AWS account"
}
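To fill in the IDs used in the import blocks above (identity store ID, user IDs, permission set ARN), a minimal AWS CLI sketch, assuming admin credentials in the account that hosts IAM Identity Center:
# SSO instance ARN and identity store ID
aws sso-admin list-instances --region us-east-1
# User IDs, used in the d-yyyyyyyyyy/<user-id> import IDs
aws identitystore list-users --identity-store-id d-yyyyyyyyyy --region us-east-1
# Permission set ARNs on the instance
aws sso-admin list-permission-sets \
  --instance-arn arn:aws:sso:::instance/ssoins-xxxxxxxxxxxxxxxx --region us-east-1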
Create the Secrets Manager secret
resource "aws_secretsmanager_secret" "imsday" {
name = "/cloudwatch/config"
}
This secret stores the following CloudWatch agent config:
{
  "agent": {
    "metrics_collection_interval": 60,
    "logfile": "/var/log/amazon-cloudwatch-agent.log"
  },
  "metrics": {
    "append_dimensions": {
      "InstanceId": "${aws:InstanceId}"
    },
    "metrics_collected": {
      "cpu": {
        "measurement": [
          "cpu_usage_idle",
          "cpu_usage_user",
          "cpu_usage_system"
        ],
        "metrics_collection_interval": 60,
        "resources": [
          "*"
        ]
      },
      "mem": {
        "measurement": [
          "mem_used_percent"
        ],
        "metrics_collection_interval": 60
      }
    }
  }
}
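The secret value can then be pushed with the AWS CLI; a minimal sketch, assuming the JSON above is saved locally as cloudwatch-config.json:
aws secretsmanager put-secret-value \
  --secret-id /cloudwatch/config \
  --secret-string file://cloudwatch-config.json \
  --region ap-southeast-1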
Create S3 bucket and S3 notification
resource "aws_s3_bucket" "immersionday" {
bucket = "immersionday-aaaa-jjjj"
}
# Use archive_file to zip the Lambda code
data "archive_file" "lambda_zip" {
type = "zip"
source_dir = "${path.module}/lambda_source" # Path to the directory containing the Lambda code
output_path = "${path.module}/lambda_function.zip"
}
resource "aws_sns_topic" "s3_artifact_notification" {
name = var.sns_topic_name
}
resource "aws_iam_role" "lambda_role" {
name = "lambda_codedeploy_role"
assume_role_policy = jsonencode({
Version = "2012-10-17",
Statement = [
{
Action = "sts:AssumeRole",
Effect = "Allow",
Principal = {
Service = "lambda.amazonaws.com"
}
}
]
})
}
resource "aws_iam_role_policy_attachment" "lambda_codedeploy_policy" {
role = aws_iam_role.lambda_role.name
policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}
resource "aws_iam_role_policy_attachment" "codedeploy_policy" {
role = aws_iam_role.lambda_role.name
policy_arn = "arn:aws:iam::aws:policy/AWSCodeDeployFullAccess"
}
resource "aws_iam_role_policy_attachment" "s3_full_access_policy" {
role = aws_iam_role.lambda_role.name
policy_arn = "arn:aws:iam::aws:policy/AmazonS3FullAccess"
}
resource "aws_lambda_function" "trigger_codedeploy" {
function_name = var.lambda_function_name
role = aws_iam_role.lambda_role.arn
handler = "lambda_function.lambda_handler"
runtime = "python3.12"
filename = data.archive_file.lambda_zip.output_path
environment {
variables = {
APPLICATION_NAME = var.application_name
DEPLOYMENT_GROUP_NAME = var.deployment_group_name
}
}
}
resource "aws_sns_topic_subscription" "lambda_sns_subscription" {
topic_arn = aws_sns_topic.s3_artifact_notification.arn
protocol = "lambda"
endpoint = aws_lambda_function.trigger_codedeploy.arn
}
# Allow SNS to invoke the Lambda
resource "aws_lambda_permission" "allow_sns" {
statement_id = "AllowSNSInvoke"
action = "lambda:InvokeFunction"
function_name = aws_lambda_function.trigger_codedeploy.function_name
principal = "sns.amazonaws.com"
source_arn = aws_sns_topic.s3_artifact_notification.arn
}
# Add a resource policy to the SNS topic
resource "aws_sns_topic_policy" "default" {
arn = aws_sns_topic.s3_artifact_notification.arn
policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Sid = "AllowS3ToPublishToSNS"
Effect = "Allow"
Principal = {
Service = "s3.amazonaws.com"
}
Action = "SNS:Publish"
Resource = aws_sns_topic.s3_artifact_notification.arn
Condition = {
ArnLike = {
"aws:SourceArn" = "arn:aws:s3:::${var.bucket_name}"
}
}
}
]
})
}
resource "aws_s3_bucket_notification" "bucket_notification" {
bucket = var.bucket_name
topic {
topic_arn = aws_sns_topic.s3_artifact_notification.arn
events = ["s3:ObjectCreated:*"]
filter_prefix = "artifacts/"
}
depends_on = [aws_sns_topic_policy.default, aws_lambda_permission.allow_sns]
}
variable "bucket_name" {
description = "The name of the S3 bucket"
type = string
default = "immersionday-aaaa-jjjj"
}
variable "sns_topic_name" {
description = "The name of the SNS topic"
type = string
default = "s3-artifact-notification"
}
variable "lambda_function_name" {
description = "The name of the Lambda function"
type = string
default = "trigger-codedeploy"
}
variable "application_name" {
description = "The name of the CodeDeploy application"
type = string
}
variable "deployment_group_name" {
description = "The name of the CodeDeploy deployment group"
type = string
}
output "sns_topic_arn" {
value = aws_sns_topic.s3_artifact_notification.arn
}
output "lambda_function_arn" {
value = aws_lambda_function.trigger_codedeploy.arn
}
output "s3_bucket_name" {
value = var.bucket_name
}
Lambda source:
UML:
Source code:
import json
import boto3
import os
import zipfile
import io
s3 = boto3.client('s3')
codedeploy = boto3.client('codedeploy')
def lambda_handler(event, context):
    print("Received event:", json.dumps(event))  # Log the entire event for debugging
    # Parse the SNS message directly from the event
    sns_message = json.loads(event['Records'][0]['Sns']['Message'])
    for record in sns_message['Records']:
        bucket = record['s3']['bucket']['name']
        key = record['s3']['object']['key']
        print(f"Processing file: {key} from bucket: {bucket}")  # Log the file being processed
        # Check if the file is already a zip
        if key.endswith('.zip'):
            zip_key = key
        else:
            # Download the file from S3
            response = s3.get_object(Bucket=bucket, Key=key)
            file_content = response['Body'].read()
            # Create a new zip file in memory
            zip_buffer = io.BytesIO()
            with zipfile.ZipFile(zip_buffer, 'w', zipfile.ZIP_DEFLATED) as zf:
                zf.writestr(os.path.basename(key), file_content)
            zip_buffer.seek(0)
            # Define the new key for the zip file
            zip_key = key.rsplit('.', 1)[0] + '.zip'
            # Upload the zipped file back to S3
            s3.put_object(Bucket=bucket, Key=zip_key, Body=zip_buffer.getvalue())
            print(f"Uploaded zipped file: {zip_key} to bucket: {bucket}")  # Log the uploaded zip file
        # Trigger CodeDeploy with the zipped artifact
        try:
            response = codedeploy.create_deployment(
                applicationName=os.environ['APPLICATION_NAME'],
                deploymentGroupName=os.environ['DEPLOYMENT_GROUP_NAME'],
                revision={
                    'revisionType': 'S3',
                    's3Location': {
                        'bucket': bucket,
                        'key': zip_key,
                        'bundleType': 'zip'
                    }
                },
                deploymentConfigName='CodeDeployDefault.AllAtOnce',
                description='Deployment triggered by S3 upload'
            )
            print(f"CodeDeploy deployment created: {response['deploymentId']}")  # Log the deployment ID
        except Exception as e:
            print(f"Error creating CodeDeploy deployment: {str(e)}")  # Log any errors in deployment creation
    return {
        'statusCode': 200,
        'body': json.dumps('Deployment triggered successfully')
    }
When a new artifact is written to the artifacts/ prefix, S3 immediately pushes an event to SNS, and SNS triggers the Lambda function to create a deployment in CodeDeploy.
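A quick way to exercise this path once everything is applied; a sketch assuming the bucket and Lambda function names used in this demo:
# Drop a test package into the watched prefix
aws s3 cp deployment-package.zip s3://immersionday-aaaa-jjjj/artifacts/deployment-package.zip
# Follow the Lambda logs to watch the deployment being created
aws logs tail /aws/lambda/test-trigger-codedeploy-function --follow --region ap-southeast-1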
Create CodeDeploy resources and VPC endpoints
CodeDeploy picks up the EC2 instances tagged codedeploy=true as deployment targets.
Terraform:
# Create the CodeDeploy application
resource "aws_codedeploy_app" "example" {
name = "example-app"
}
# Create an IAM role for CodeDeploy
resource "aws_iam_role" "codedeploy" {
name = "codedeploy-role"
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = "sts:AssumeRole"
Effect = "Allow"
Principal = {
Service = "codedeploy.amazonaws.com"
}
}
]
})
}
# Attach the service policy to the IAM role
resource "aws_iam_role_policy_attachment" "codedeploy" {
policy_arn = "arn:aws:iam::aws:policy/service-role/AWSCodeDeployRole"
role = aws_iam_role.codedeploy.name
}
# Create a CodeDeploy deployment group for the EC2 instances tagged codedeploy=true
resource "aws_codedeploy_deployment_group" "example" {
app_name = aws_codedeploy_app.example.name
deployment_group_name = "example-deployment-group"
service_role_arn = aws_iam_role.codedeploy.arn
deployment_config_name = "CodeDeployDefault.OneAtATime"
ec2_tag_set {
ec2_tag_filter {
key = "codedeploy"
type = "KEY_AND_VALUE"
value = "true"
}
}
auto_rollback_configuration {
enabled = true
events = ["DEPLOYMENT_FAILURE"]
}
}
# Create the CodeDeploy VPC endpoint
resource "aws_vpc_endpoint" "codedeploy" {
vpc_id = var.vpc_id
service_name = "com.amazonaws.${var.region}.codedeploy"
vpc_endpoint_type = "Interface"
subnet_ids = var.subnet_ids # Add this line
security_group_ids = [
aws_security_group.codedeploy_endpoint.id
]
private_dns_enabled = true
tags = {
Name = "codedeploy-vpc-endpoint"
}
}
# Create the CodeDeploy Commands Secure VPC endpoint
resource "aws_vpc_endpoint" "codedeploy_commands_secure" {
vpc_id = var.vpc_id
service_name = "com.amazonaws.${var.region}.codedeploy-commands-secure"
vpc_endpoint_type = "Interface"
subnet_ids = var.subnet_ids
security_group_ids = [
aws_security_group.codedeploy_endpoint.id
]
private_dns_enabled = true
tags = {
Name = "codedeploy-commands-secure-vpc-endpoint"
}
}
# Create a security group for the VPC endpoints
resource "aws_security_group" "codedeploy_endpoint" {
name_prefix = "codedeploy-endpoint-"
vpc_id = var.vpc_id
ingress {
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = [var.vpc_cidr_block]
}
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = [var.vpc_cidr_block]
}
tags = {
Name = "codedeploy-endpoint-sg"
}
}
# Variable definitions
variable "region" {
description = "AWS region"
type = string
default = "ap-southeast-1"
}
variable "vpc_id" {
description = "ID của VPC"
type = string
}
variable "vpc_cidr_block" {
description = "CIDR block của VPC"
type = string
}
variable "subnet_ids" {
description = "List of subnet IDs for VPC endpoints"
type = list(string)
}
tfvars file:
user_list = {
"user1" = "test1"
"user2" = "test2"
}
vpc_id = "vpc-3fdd2259"
vpc_cidr_block = "0.0.0.0/0"
sns_topic_name = "test-trigger-codedeploy-topic"
lambda_function_name = "test-trigger-codedeploy-function"
application_name = "example-app"
deployment_group_name = "example-deployment-group"
subnet_ids = ["subnet-e707d7af", "subnet-3502a253", "subnet-ea7ee1b3"]
bucket_name = "immersionday-aaaa-jjjj"
Create some supporting EC2 instances
A single security group is shared by the supporting instances. (In practice each instance should get its own security group scoped to its role; they are lumped together here only to keep the test setup small.)
Create the Vault instance
Terraform:
resource "aws_instance" "vault" {
ami = "ami-06af374d6f1809ce5"
instance_type = "t3.micro"
iam_instance_profile = "ec2-admin-role"
tags = {
Name = "Vault"
}
}
Install Vault
Install Docker:
sudo apt-get update
sudo apt-get install ca-certificates curl gnupg lsb-release
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin -y
sudo groupadd docker
sudo usermod -aG docker $(whoami)
sudo curl -L "https://github.com/docker/compose/releases/download/$(curl -s https://api.github.com/repos/docker/compose/releases/latest | grep -Po '"tag_name": "\K.*\d')/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
sudo curl -L https://raw.githubusercontent.com/docker/compose/1.29.2/contrib/completion/bash/docker-compose -o /etc/bash_completion.d/docker-compose
Set up Vault:
sudo apt-get update
sudo apt-get install ca-certificates curl gnupg lsb-release
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin
sudo groupadd docker
# sudo usermod -aG docker $USER
sudo usermod -aG docker $(whoami)
export VAULT_TOKEN="abc123"
export VAULT_ADDR='http://0.0.0.0:8200'
docker run --cap-add=IPC_LOCK -e 'VAULT_DEV_ROOT_TOKEN_ID=abc123' -p 8200:8200 --name iam-roles-anywhere-test -d vault:1.11.0
wget -O- https://apt.releases.hashicorp.com/gpg | gpg --dearmor | sudo tee /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update && sudo apt install vault
# http://54.251.17.198:8200/ui/vault/secrets
# Log in to the UI with the Token method, entering abc123
We create a Vault path secret/roleanywhere to store the IAM Roles Anywhere certificate, and set up AppRole with the following script:
#!/bin/bash
# Vault configuration
VAULT_ADDR="http://54.251.17.198:8200"
VAULT_TOKEN="abc123" # Thay thế bằng token Vault thực của bạn
# Kiểm tra xem jq đã được cài đặt chưa
if ! command -v jq &> /dev/null; then
echo "jq không được tìm thấy. Đang cài đặt jq..."
sudo yum install -y jq
fi
# Helper function to send requests to Vault
vault_request() {
local method=$1
local path=$2
local data=$3
curl --silent --show-error \
--header "X-Vault-Token: $VAULT_TOKEN" \
--request "$method" \
--data "$data" \
"$VAULT_ADDR/v1/$path"
}
# Check and enable AppRole auth if it is not already enabled
enable_approle() {
local auth_methods=$(vault_request "GET" "sys/auth")
if echo "$auth_methods" | jq -e '.["approle/"]' > /dev/null; then
echo "AppRole auth đã được kích hoạt."
else
echo "Đang kích hoạt AppRole auth..."
vault_request "POST" "sys/auth/approle" '{"type": "approle"}'
echo "AppRole auth đã được kích hoạt."
fi
}
# Create a policy for the AppRole
create_policy() {
local policy_name="roleanywhere-read-policy"
local policy_data=$(cat <<EOF
{
"policy": "path \"secret/data/roleanywhere\" { capabilities = [\"read\"] }"
}
EOF
)
echo "Đang tạo policy..."
vault_request "PUT" "sys/policies/acl/$policy_name" "$policy_data"
echo "Policy đã được tạo."
}
# Create the AppRole
create_approle() {
local role_name="roleanywhere-reader"
local role_data=$(cat <<EOF
{
"policies": ["roleanywhere-read-policy"],
"token_ttl": "1h",
"token_max_ttl": "4h"
}
EOF
)
echo "Đang tạo AppRole..."
vault_request "POST" "auth/approle/role/$role_name" "$role_data"
echo "AppRole đã được tạo."
# Lấy RoleID
local role_id=$(vault_request "GET" "auth/approle/role/$role_name/role-id" | jq -r .data.role_id)
echo "Role ID: $role_id"
# Generate a Secret ID
local secret_id_response=$(vault_request "POST" "auth/approle/role/$role_name/secret-id" "{}")
local secret_id=$(echo $secret_id_response | jq -r .data.secret_id)
echo "Secret ID: $secret_id"
}
# Run the functions
enable_approle
create_policy
create_approle
echo "Setup complete."
Create the Nexus instance
Terraform:
data "aws_ami" "ubuntu" {
most_recent = true
filter {
name = "name"
values = ["ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
owners = ["099720109477"] # Canonical
}
resource "aws_instance" "vault" {
ami = data.aws_ami.ubuntu.id
instance_type = "m5.xlarge"
iam_instance_profile = "ec2-admin-role"
root_block_device {
volume_size = 100
volume_type = "gp3"
}
tags = {
Name = "Nexus"
}
}
docker-compose:
version: "3"
services:
nexus:
image: sonatype/nexus3
restart: always
volumes:
- "nexus-data:/sonatype-work"
ports:
- "8081:8081"
- "8085:8085"
volumes:
nexus-data: {}
Note: for the compose stack to run, the instance hosting Nexus needs at least 2 GB of RAM (an m5.xlarge is used here); with less memory Nexus will not start.
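Once the stack is up, a quick health check and retrieval of the initial admin password; a sketch assuming the image's default data directory /nexus-data:
docker compose up -d
# Nexus takes a few minutes to start; the status endpoint returns 200 when it is ready
curl -sf http://localhost:8081/service/rest/v1/status && echo "Nexus is up"
# Initial admin password (path assumes the image's default data directory)
docker compose exec nexus cat /nexus-data/admin.password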
Create the Jenkins instance
Terraform:
data "aws_ami" "ubuntu" {
most_recent = true
filter {
name = "name"
values = ["ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
owners = ["099720109477"] # Canonical
}
resource "aws_instance" "vault" {
ami = data.aws_ami.ubuntu.id
instance_type = "t3.medium"
iam_instance_profile = "ec2-admin-role"
root_block_device {
volume_size = 30
volume_type = "gp3"
}
tags = {
Name = "Jenkins"
}
}
Docker compose:
version: '3.7'
services:
  jenkins:
    image: jenkins/jenkins:lts
    container_name: jenkins
    ports:
      - "8080:8080"
      - "50000:50000"
    volumes:
      - jenkins_home:/var/jenkins_home
    restart: always
volumes:
  jenkins_home:
The following plugins need to be installed:
Git
Pipeline
Nexus Artifact Uploader
AWS Steps
Amazon Web Services SDK :: All
Also configure Maven for Jenkins (the pipeline below uses the tool name M3).
Jenkinsfile:
pipeline {
agent any
tools {
maven 'M3'
}
stages {
stage('Checkout') {
steps {
git branch: 'master',
credentialsId: 'root-id',
url: 'http://13.250.8.254/java-app/immersion-day-app.git'
}
}
stage('Build') {
steps {
sh "mvn clean package -s settings.xml"
}
}
stage('Deploy to Nexus') {
steps {
nexusArtifactUploader(
nexusVersion: 'nexus3',
protocol: 'http',
nexusUrl: '3.0.102.196:8081',
groupId: 'com.example',
version: '1.0-SNAPSHOT',
repository: 'maven-snapshots',
credentialsId: 'd168fb98-1d95-4636-b2e4-07e7d768e8e6',
artifacts: [
[artifactId: 'ssm-parameter-retriever',
classifier: '',
file: 'target/ssm-parameter-retriever-1.0-SNAPSHOT-jar-with-dependencies.jar',
type: 'jar']
]
)
}
}
stage('Prepare Deployment Package') {
steps {
sh '''
# Create deployment package directory (including the scripts subdirectory used below)
mkdir -p deployment-package/scripts
# Copy JAR file
cp target/ssm-parameter-retriever-1.0-SNAPSHOT-jar-with-dependencies.jar deployment-package/app.jar
# Create appspec.yml
cat << EOF > deployment-package/appspec.yml
version: 0.0
os: linux
files:
  - source: /
    destination: /home/ec2-user/
hooks:
  BeforeInstall:
    - location: scripts/install_java.sh
      timeout: 300
      runas: root
  AfterInstall:
    - location: scripts/download_jar.sh
      timeout: 300
      runas: ec2-user
  ApplicationStart:
    - location: scripts/validate_service.sh
      timeout: 300
      runas: ec2-user
EOF
# Create install_java.sh
cat > deployment-package/scripts/install_java.sh << 'EOL'
#!/bin/bash
set -e
# Function to log messages
log() {
echo "[$(date)] $1" | tee -a /var/log/java_install.log
}
# Function to check and install Java 17
install_java_17() {
log "Checking Java installation..."
if type -p java; then
java_version=$(java -version 2>&1 | awk -F '"' '/version/ {print $2}')
log "Current Java version: $java_version"
if [[ "$java_version" == "17"* ]]; then
log "Java 17 is already installed."
return 0
fi
fi
log "Java 17 is not installed. Attempting to install..."
if command -v yum &> /dev/null; then
# For RHEL/CentOS/Amazon Linux
log "Using yum package manager..."
sudo yum update -y
if sudo yum install -y java-17-openjdk-devel; then
log "OpenJDK 17 installed successfully using yum."
else
log "Failed to install OpenJDK 17 using yum. Trying alternative method..."
# Try installing Amazon Corretto as an alternative
sudo rpm --import https://yum.corretto.aws/corretto.key
sudo curl -L -o /etc/yum.repos.d/corretto.repo https://yum.corretto.aws/corretto.repo
if sudo yum install -y java-17-amazon-corretto-devel; then
log "Amazon Corretto 17 installed successfully."
else
log "Failed to install Java 17. Please check system logs for more details."
return 1
fi
fi
elif command -v apt-get &> /dev/null; then
# For Ubuntu/Debian
log "Using apt-get package manager..."
sudo apt-get update
if sudo apt-get install -y openjdk-17-jdk; then
log "OpenJDK 17 installed successfully using apt-get."
else
log "Failed to install OpenJDK 17 using apt-get."
return 1
fi
else
log "Unable to install Java 17. No supported package manager found."
return 1
fi
# Set Java 17 as default
if command -v update-alternatives &> /dev/null; then
java_path=$(update-alternatives --list java | grep "java-17" | head -n 1)
if [ -n "$java_path" ]; then
sudo update-alternatives --set java "$java_path"
log "Java 17 set as default."
else
log "Failed to set Java 17 as default. Please set it manually."
fi
fi
# Verify installation
if java -version 2>&1 | grep -q "version \\"17"; then
log "Java 17 installed and configured successfully."
java -version 2>&1 | tee -a /var/log/java_install.log
return 0
else
log "Java 17 installation or configuration failed."
return 1
fi
}
# Main execution
log "Starting Java 17 installation and configuration process"
if [ "$(id -u)" -ne 0 ]; then
log "This script must be run as root"
exit 1
fi
install_java_17
if [ $? -eq 0 ]; then
log "Java 17 installation and configuration process completed successfully"
exit 0
else
log "Java 17 installation and configuration process failed"
# Additional debugging information
log "System information:"
uname -a | tee -a /var/log/java_install.log
log "Available disk space:"
df -h | tee -a /var/log/java_install.log
log "Memory usage:"
free -m | tee -a /var/log/java_install.log
log "Please check /var/log/java_install.log for more details"
exit 1
fi
EOL
chmod +x deployment-package/scripts/install_java.sh
# Create download_jar.sh
cat << 'EOL' > deployment-package/scripts/download_jar.sh
#!/bin/bash
set -e
# Use a log file in the ec2-user's home directory
LOG_FILE="/home/ec2-user/jar_download.log"
# Function to log messages
log() {
echo "[$(date)] $1" | tee -a "$LOG_FILE"
}
# S3 bucket and key information
S3_BUCKET="immersionday-aaaa-jjjj"
S3_KEY="artifacts/deployment-package.zip"
TEMP_DIR="/tmp/deployment-package"
DESTINATION="/home/ec2-user/app.jar"
log "Starting JAR download process"
log "Running as user: $(whoami)"
# Ensure the AWS CLI is available
if ! command -v aws &> /dev/null; then
log "AWS CLI is not installed. Please install it and try again."
exit 1
fi
# Create temporary directory
mkdir -p "$TEMP_DIR"
# Download the ZIP file from S3
log "Downloading deployment package from S3"
if aws s3 cp "s3://${S3_BUCKET}/${S3_KEY}" "${TEMP_DIR}/deployment-package.zip"; then
log "Successfully downloaded deployment package"
else
log "Failed to download deployment package from S3"
log "AWS S3 ls output:"
aws s3 ls "s3://${S3_BUCKET}/artifacts/" | tee -a "$LOG_FILE"
exit 1
fi
# Unzip the package
log "Unzipping deployment package"
if unzip -q "${TEMP_DIR}/deployment-package.zip" -d "$TEMP_DIR"; then
log "Successfully unzipped deployment package"
else
log "Failed to unzip deployment package"
log "Content of TEMP_DIR:"
ls -la "$TEMP_DIR" | tee -a "$LOG_FILE"
exit 1
fi
# Find and move the JAR file
JAR_FILE=$(find "$TEMP_DIR" -name "*.jar" -type f)
if [ -n "$JAR_FILE" ]; then
log "Found JAR file: $JAR_FILE"
if mv "$JAR_FILE" "$DESTINATION"; then
log "Successfully moved JAR file to $DESTINATION"
else
log "Failed to move JAR file to $DESTINATION"
log "Permissions of destination directory:"
ls -la "$(dirname "$DESTINATION")" | tee -a "$LOG_FILE"
exit 1
fi
else
log "No JAR file found in the deployment package"
log "Content of TEMP_DIR after unzip:"
find "$TEMP_DIR" -type f | tee -a "$LOG_FILE"
exit 1
fi
# Clean up
rm -rf "$TEMP_DIR"
log "JAR file download and placement completed successfully"
exit 0
EOL
chmod +x deployment-package/scripts/download_jar.sh
# Create validate_service.sh
cat << 'EOL' > deployment-package/scripts/validate_service.sh
#!/bin/bash
set -e
# Function to log messages
log() {
echo "[$(date)] $1"
}
# Configuration
APP_DIR="/home/ec2-user"
JAR_FILE="app.jar"
log "Starting service validation"
log "Running as user: $(whoami)"
# Check if JAR file exists
if [ -f "${APP_DIR}/${JAR_FILE}" ]; then
log "SUCCESS: ${JAR_FILE} found in ${APP_DIR}"
else
log "ERROR: ${JAR_FILE} not found in ${APP_DIR}"
ls -l "${APP_DIR}"
exit 1
fi
# Check Java installation
if command -v java &> /dev/null; then
log "SUCCESS: Java is installed"
java -version
else
log "ERROR: Java is not installed"
exit 1
fi
# Additional system information
log "Current working directory:"
pwd
log "Current user and groups:"
id
log "Disk space:"
df -h
log "Service validation completed successfully"
exit 0
EOL
# Create ZIP file using jar command
cd deployment-package
jar -cvf ../deployment-package.zip .
cd ..
echo "ZIP file created successfully: $(pwd)/deployment-package.zip"
'''
}
}
stage('Upload to S3') {
steps {
withAWS(role: 'arn:aws:iam::<account-id>:role/another-ec2-admin-role', roleAccount: '<account-id>') {
s3Upload(
bucket: 'immersionday-aaaa-jjjj',
path: 'artifacts/',
includePathPattern: 'deployment-package.zip'
)
}
}
}
}
post {
always {
cleanWs()
}
}
}
This Jenkins job will:
- Build the project and publish the .jar to Nexus (in case we want to pull the .jar straight from Nexus onto the running instances)
- Add the appspec and hook scripts, zip everything into a deployment package, and push it to S3 (see the verification sketch below)
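To verify a run landed where it should, a minimal sketch assuming the Nexus host and credentials configured above (the artifact name follows the Jenkinsfile):
# Search Nexus for the published snapshot
curl -s -u admin:<password> \
  "http://3.0.102.196:8081/service/rest/v1/search?repository=maven-snapshots&name=ssm-parameter-retriever" | jq '.items[].version'
# Confirm the deployment package reached S3 (this upload also triggers CodeDeploy)
aws s3 ls s3://immersionday-aaaa-jjjj/artifacts/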
The appspec file and hook scripts:
appspec.yml:
version: 0.0
os: linux
files:
  - source: /
    destination: /home/ec2-user/
hooks:
  BeforeInstall:
    - location: scripts/install_java.sh
      timeout: 300
      runas: root
  AfterInstall:
    - location: scripts/download_jar.sh
      timeout: 300
      runas: ec2-user
  ApplicationStart:
    - location: scripts/validate_service.sh
      timeout: 300
      runas: ec2-user
download_jar.sh:
#!/bin/bash
set -e
# Use a log file in the ec2-user's home directory
LOG_FILE="/home/ec2-user/jar_download.log"
# Function to log messages
log() {
echo "[$(date)] $1" | tee -a "$LOG_FILE"
}
# S3 bucket and key information
S3_BUCKET="immersionday-aaaa-jjjj"
S3_KEY="artifacts/deployment-package.zip"
TEMP_DIR="/tmp/deployment-package"
DESTINATION="/home/ec2-user/app.jar"
log "Starting JAR download process"
log "Running as user: $(whoami)"
# Ensure the AWS CLI is available
if ! command -v aws &> /dev/null; then
log "AWS CLI is not installed. Please install it and try again."
exit 1
fi
# Create temporary directory
mkdir -p "$TEMP_DIR"
# Download the ZIP file from S3
log "Downloading deployment package from S3"
if aws s3 cp "s3://${S3_BUCKET}/${S3_KEY}" "${TEMP_DIR}/deployment-package.zip"; then
log "Successfully downloaded deployment package"
else
log "Failed to download deployment package from S3"
log "AWS S3 ls output:"
aws s3 ls "s3://${S3_BUCKET}/artifacts/" | tee -a "$LOG_FILE"
exit 1
fi
# Unzip the package
log "Unzipping deployment package"
if unzip -q "${TEMP_DIR}/deployment-package.zip" -d "$TEMP_DIR"; then
log "Successfully unzipped deployment package"
else
log "Failed to unzip deployment package"
log "Content of TEMP_DIR:"
ls -la "$TEMP_DIR" | tee -a "$LOG_FILE"
exit 1
fi
# Find and move the JAR file
JAR_FILE=$(find "$TEMP_DIR" -name "*.jar" -type f)
if [ -n "$JAR_FILE" ]; then
log "Found JAR file: $JAR_FILE"
if mv "$JAR_FILE" "$DESTINATION"; then
log "Successfully moved JAR file to $DESTINATION"
else
log "Failed to move JAR file to $DESTINATION"
log "Permissions of destination directory:"
ls -la "$(dirname "$DESTINATION")" | tee -a "$LOG_FILE"
exit 1
fi
else
log "No JAR file found in the deployment package"
log "Content of TEMP_DIR after unzip:"
find "$TEMP_DIR" -type f | tee -a "$LOG_FILE"
exit 1
fi
# Clean up
rm -rf "$TEMP_DIR"
log "JAR file download and placement completed successfully"
exit 0
install_java.sh:
#!/bin/bash
set -e
# Function to log messages
log() {
echo "[$(date)] $1" | tee -a /var/log/java_install.log
}
# Function to check and install Java 17
install_java_17() {
log "Checking Java installation..."
if type -p java; then
java_version=$(java -version 2>&1 | awk -F '"' '/version/ {print $2}')
log "Current Java version: $java_version"
if [[ "$java_version" == "17"* ]]; then
log "Java 17 is already installed."
return 0
fi
fi
log "Java 17 is not installed. Attempting to install..."
if command -v yum &> /dev/null; then
# For RHEL/CentOS/Amazon Linux
log "Using yum package manager..."
sudo yum update -y
if sudo yum install -y java-17-openjdk-devel; then
log "OpenJDK 17 installed successfully using yum."
else
log "Failed to install OpenJDK 17 using yum. Trying alternative method..."
# Try installing Amazon Corretto as an alternative
sudo rpm --import https://yum.corretto.aws/corretto.key
sudo curl -L -o /etc/yum.repos.d/corretto.repo https://yum.corretto.aws/corretto.repo
if sudo yum install -y java-17-amazon-corretto-devel; then
log "Amazon Corretto 17 installed successfully."
else
log "Failed to install Java 17. Please check system logs for more details."
return 1
fi
fi
elif command -v apt-get &> /dev/null; then
# For Ubuntu/Debian
log "Using apt-get package manager..."
sudo apt-get update
if sudo apt-get install -y openjdk-17-jdk; then
log "OpenJDK 17 installed successfully using apt-get."
else
log "Failed to install OpenJDK 17 using apt-get."
return 1
fi
else
log "Unable to install Java 17. No supported package manager found."
return 1
fi
# Set Java 17 as default
if command -v update-alternatives &> /dev/null; then
java_path=$(update-alternatives --list java | grep "java-17" | head -n 1)
if [ -n "$java_path" ]; then
sudo update-alternatives --set java "$java_path"
log "Java 17 set as default."
else
log "Failed to set Java 17 as default. Please set it manually."
fi
fi
# Verify installation
if java -version 2>&1 | grep -q "version \"17"; then
log "Java 17 installed and configured successfully."
java -version 2>&1 | tee -a /var/log/java_install.log
return 0
else
log "Java 17 installation or configuration failed."
return 1
fi
}
# Main execution
log "Starting Java 17 installation and configuration process"
if [ "$(id -u)" -ne 0 ]; then
log "This script must be run as root"
exit 1
fi
install_java_17
if [ $? -eq 0 ]; then
log "Java 17 installation and configuration process completed successfully"
exit 0
else
log "Java 17 installation and configuration process failed"
# Additional debugging information
log "System information:"
uname -a | tee -a /var/log/java_install.log
log "Available disk space:"
df -h | tee -a /var/log/java_install.log
log "Memory usage:"
free -m | tee -a /var/log/java_install.log
log "Please check /var/log/java_install.log for more details"
exit 1
fi
validate_service.sh:
#!/bin/bash
set -e
# Function to log messages
log() {
echo "[$(date)] $1"
}
# Configuration
APP_DIR="/home/ec2-user"
JAR_FILE="app.jar"
log "Starting service validation"
log "Running as user: $(whoami)"
# Check if JAR file exists
if [ -f "${APP_DIR}/${JAR_FILE}" ]; then
log "SUCCESS: ${JAR_FILE} found in ${APP_DIR}"
else
log "ERROR: ${JAR_FILE} not found in ${APP_DIR}"
ls -l "${APP_DIR}"
exit 1
fi
# Check Java installation
if command -v java &> /dev/null; then
log "SUCCESS: Java is installed"
java -version
else
log "ERROR: Java is not installed"
exit 1
fi
# Additional system information
log "Current working directory:"
pwd
log "Current user and groups:"
id
log "Disk space:"
df -h
log "Service validation completed successfully"
exit 0
Create the GitLab instance
Terraform:
data "aws_ami" "ubuntu" {
most_recent = true
filter {
name = "name"
values = ["ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
owners = ["099720109477"] # Canonical
}
resource "aws_instance" "vault" {
ami = data.aws_ami.ubuntu.id
instance_type = "t3.medium"
iam_instance_profile = "ec2-admin-role"
root_block_device {
volume_size = 30
volume_type = "gp3"
}
tags = {
Name = "Gitlab"
}
}
docker compose:
services:
  web:
    image: 'gitlab/gitlab-ce:latest'
    restart: always
    hostname: 'gitlab.example.com'
    environment:
      GITLAB_OMNIBUS_CONFIG: |
        external_url 'http://gitlab.example.com'
        gitlab_rails['gitlab_shell_ssh_port'] = 2222
    ports:
      - '80:80'
      - '443:443'
      - '2222:22'
    volumes:
      - ${GITLAB_HOME:-/opt/gitlab}/config:/etc/gitlab
      - ${GITLAB_HOME:-/opt/gitlab}/logs:/var/log/gitlab
      - ${GITLAB_HOME:-/opt/gitlab}/data:/var/opt/gitlab
    shm_size: '256m'
volumes:
  gitlab-config:
  gitlab-logs:
  gitlab-data:
Once GitLab has started successfully, you can access it via:
Web interface: http://your_server_ip
SSH: ssh://your_server_ip:2222
Note that when using Git over SSH you need to specify port 2222, for example: git clone ssh://git@your_server_ip:2222/your_repository.git
To reset the GitLab root password:
docker exec -it gitlab_ce bin/bash
gitlab-rake "gitlab:password:reset[root]"
You may need to wait a few minutes before the prompt to enter a new password appears.
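With a root password set, the Java app source can be pushed to this instance; a sketch using the repository URL from the Jenkins checkout stage and assuming the java-app group/project already exists in GitLab:
cd immersion-day-app
git init -b master
git remote add origin http://13.250.8.254/java-app/immersion-day-app.git
git add . && git commit -m "Initial import"
git push -u origin master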
Create the app that retrieves data from Secrets Manager
pom.xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.example</groupId>
<artifactId>secrets-manager-retriever</artifactId>
<version>1.0-SNAPSHOT</version>
<properties>
<maven.compiler.source>1.8</maven.compiler.source>
<maven.compiler.target>1.8</maven.compiler.target>
<aws.java.sdk.version>1.12.529</aws.java.sdk.version>
</properties>
<dependencies>
<dependency>
<groupId>com.amazonaws</groupId>
<artifactId>aws-java-sdk-secretsmanager</artifactId>
<version>${aws.java.sdk.version}</version>
</dependency>
<dependency>
<groupId>com.amazonaws</groupId>
<artifactId>aws-java-sdk-cloudwatch</artifactId>
<version>${aws.java.sdk.version}</version>
</dependency>
<dependency>
<groupId>com.amazonaws</groupId>
<artifactId>aws-java-sdk-logs</artifactId>
<version>${aws.java.sdk.version}</version>
</dependency>
<dependency>
<groupId>com.amazonaws</groupId>
<artifactId>aws-java-sdk-ec2</artifactId>
<version>${aws.java.sdk.version}</version>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-assembly-plugin</artifactId>
<version>3.3.0</version>
<configuration>
<descriptorRefs>
<descriptorRef>jar-with-dependencies</descriptorRef>
</descriptorRefs>
<archive>
<manifest>
<mainClass>com.example.SecretsManagerRetriever</mainClass>
</manifest>
</archive>
</configuration>
<executions>
<execution>
<id>make-assembly</id>
<phase>package</phase>
<goals>
<goal>single</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
</project>
settings.xml:
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0
https://maven.apache.org/xsd/settings-1.0.0.xsd">
<mirrors>
<mirror>
<id>nexus</id>
<mirrorOf>*</mirrorOf>
<url>http://3.0.102.196:8081/repository/maven-public/</url>
</mirror>
</mirrors>
</settings>
src/main/java/com/example/SecretsManagerRetriever.java
package com.example;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.secretsmanager.AWSSecretsManager;
import com.amazonaws.services.secretsmanager.AWSSecretsManagerClientBuilder;
import com.amazonaws.services.secretsmanager.model.GetSecretValueRequest;
import com.amazonaws.services.secretsmanager.model.GetSecretValueResult;
import com.amazonaws.services.cloudwatch.AmazonCloudWatch;
import com.amazonaws.services.cloudwatch.AmazonCloudWatchClientBuilder;
import com.amazonaws.services.cloudwatch.model.*;
import com.amazonaws.services.logs.AWSLogs;
import com.amazonaws.services.logs.AWSLogsClientBuilder;
import com.amazonaws.services.logs.model.*;
import com.amazonaws.util.EC2MetadataUtils;
import java.io.FileWriter;
import java.io.IOException;
import java.util.Collections;
public class SecretsManagerRetriever {
private static final String REGION = "ap-southeast-1";
private static final String LOG_GROUP_NAME = "/com/example/secrets-manager-retriever";
private static final String METRIC_NAMESPACE = "SecretsManagerRetriever";
private static AmazonCloudWatch cloudWatchClient;
private static AWSLogs logsClient;
private static String logStreamName;
private static String instanceId;
public static void main(String[] args) {
if (args.length != 2) {
System.out.println("Usage: java -jar SecretsManagerRetriever.jar <secret-name> <local-file-path>");
System.exit(1);
}
String secretName = args[0];
String localFilePath = args[1];
ProfileCredentialsProvider credentialsProvider = new ProfileCredentialsProvider("developer");
cloudWatchClient = AmazonCloudWatchClientBuilder.standard()
.withRegion(REGION)
.withCredentials(credentialsProvider)
.build();
logsClient = AWSLogsClientBuilder.standard()
.withRegion(REGION)
.withCredentials(credentialsProvider)
.build();
// Get EC2 Instance ID for log stream name and metrics
instanceId = EC2MetadataUtils.getInstanceId();
if (instanceId == null || instanceId.isEmpty()) {
instanceId = "default-instance"; // Fallback if not running on EC2
}
logStreamName = instanceId;
try {
// Ensure Log Stream exists
createLogStream();
// Log start of operation
logMessage("Starting secret retrieval for: " + secretName);
// Retrieve secret from AWS Secrets Manager
String secretValue = getSecret(secretName, credentialsProvider);
// Write the secret value to the specified local file
writeToFile(localFilePath, secretValue);
// Log successful operation
logMessage("Secret value has been written to " + localFilePath);
// Send custom metrics
sendCustomMetric("SecretRetrievalSuccess", 1.0);
// Send secret length metric
sendSecretLengthMetric(secretValue);
} catch (Exception e) {
// Log error
logMessage("Error: " + e.getMessage());
e.printStackTrace();
// Send error metric
sendCustomMetric("SecretRetrievalError", 1.0);
} finally {
// Display information about where to find the metrics
System.out.println("\nMetrics Information:");
System.out.println("--------------------");
System.out.println("You can view the application metrics in AWS CloudWatch:");
System.out.println("- Region: " + REGION);
System.out.println("- Namespace: " + METRIC_NAMESPACE);
System.out.println("- Metrics: SecretRetrievalSuccess, SecretRetrievalError, SecretLength");
System.out.println("- Dimension: InstanceId = " + instanceId);
System.out.println("\nTo view these metrics:");
System.out.println("1. Go to the AWS CloudWatch console");
System.out.println("2. Select 'Metrics' from the left navigation pane");
System.out.println("3. Choose the '" + REGION + "' region");
System.out.println("4. Find and click on the '" + METRIC_NAMESPACE + "' namespace");
System.out.println("5. Select the metrics with the InstanceId dimension matching: " + instanceId);
System.out.println("6. You can then view and graph the metrics for this specific instance");
}
}
private static void createLogStream() {
try {
// Create Log Stream if it doesn't exist
try {
logsClient.createLogStream(new CreateLogStreamRequest(LOG_GROUP_NAME, logStreamName));
System.out.println("Log Stream created: " + logStreamName);
} catch (ResourceAlreadyExistsException e) {
System.out.println("Log Stream already exists: " + logStreamName);
}
} catch (Exception e) {
System.err.println("Error creating Log Stream: " + e.getMessage());
}
}
private static String getSecret(String secretName, ProfileCredentialsProvider credentialsProvider) {
AWSSecretsManager client = AWSSecretsManagerClientBuilder.standard()
.withRegion(REGION)
.withCredentials(credentialsProvider)
.build();
GetSecretValueRequest getSecretValueRequest = new GetSecretValueRequest()
.withSecretId(secretName);
GetSecretValueResult getSecretValueResult = client.getSecretValue(getSecretValueRequest);
return getSecretValueResult.getSecretString();
}
private static void writeToFile(String filePath, String content) throws IOException {
try (FileWriter writer = new FileWriter(filePath)) {
writer.write(content);
}
logMessage("File written: " + filePath);
}
private static void logMessage(String message) {
System.out.println(message);
InputLogEvent logEvent = new InputLogEvent()
.withTimestamp(System.currentTimeMillis())
.withMessage(message);
PutLogEventsRequest putLogEventsRequest = new PutLogEventsRequest()
.withLogGroupName(LOG_GROUP_NAME)
.withLogStreamName(logStreamName)
.withLogEvents(Collections.singletonList(logEvent));
try {
logsClient.putLogEvents(putLogEventsRequest);
} catch (Exception e) {
System.err.println("Error writing to CloudWatch Logs: " + e.getMessage());
}
}
private static void sendCustomMetric(String metricName, double value) {
Dimension dimension = new Dimension()
.withName("InstanceId")
.withValue(instanceId);
MetricDatum datum = new MetricDatum()
.withMetricName(metricName)
.withUnit("Count")
.withValue(value)
.withDimensions(dimension);
PutMetricDataRequest request = new PutMetricDataRequest()
.withNamespace(METRIC_NAMESPACE)
.withMetricData(datum);
try {
// Check if the metric already exists
DimensionFilter dimensionFilter = new DimensionFilter()
.withName("InstanceId")
.withValue(instanceId);
ListMetricsRequest listMetricsRequest = new ListMetricsRequest()
.withNamespace(METRIC_NAMESPACE)
.withMetricName(metricName)
.withDimensions(Collections.singletonList(dimensionFilter));
ListMetricsResult listMetricsResult = cloudWatchClient.listMetrics(listMetricsRequest);
if (listMetricsResult.getMetrics().isEmpty()) {
// Metric doesn't exist, create it
cloudWatchClient.putMetricData(request);
System.out.println("Created new metric: " + metricName + " with InstanceId: " + instanceId);
} else {
// Metric exists, just send the data
cloudWatchClient.putMetricData(request);
System.out.println("Sent data to existing metric: " + metricName + " with InstanceId: " + instanceId);
}
} catch (Exception e) {
System.err.println("Error sending metric data: " + e.getMessage());
}
}
private static void sendSecretLengthMetric(String secretValue) {
sendCustomMetric("SecretLength", secretValue.length());
}
}
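On a target instance, the jar is run with the secret name and an output path; a minimal sketch, assuming the AWS profile named "developer" that the code expects exists on the instance and a hypothetical output path for the CloudWatch agent config:
# Fetch the secret and write it to a local file
java -jar /home/ec2-user/app.jar /cloudwatch/config /tmp/amazon-cloudwatch-agent.json
# Optionally feed the retrieved config to the CloudWatch agent
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
  -a fetch-config -m ec2 -s -c file:/tmp/amazon-cloudwatch-agent.json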
UML: (diagram not included)
Scaffolding script to generate the project:
#!/bin/bash
# Project name
PROJECT_NAME="secret-manager-retriever"
# Create project directory
mkdir -p $PROJECT_NAME
# Create Maven directory structure
mkdir -p $PROJECT_NAME/src/main/java/com/example
mkdir -p $PROJECT_NAME/src/main/resources
mkdir -p $PROJECT_NAME/src/test/java/com/example
# Create pom.xml file
cat <<'EOL' > $PROJECT_NAME/pom.xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.example</groupId>
<artifactId>secrets-manager-retriever</artifactId>
<version>1.0-SNAPSHOT</version>
<properties>
<maven.compiler.source>1.8</maven.compiler.source>
<maven.compiler.target>1.8</maven.compiler.target>
<aws.java.sdk.version>1.12.529</aws.java.sdk.version>
</properties>
<dependencies>
<dependency>
<groupId>com.amazonaws</groupId>
<artifactId>aws-java-sdk-secretsmanager</artifactId>
<version>${aws.java.sdk.version}</version>
</dependency>
<dependency>
<groupId>com.amazonaws</groupId>
<artifactId>aws-java-sdk-cloudwatch</artifactId>
<version>${aws.java.sdk.version}</version>
</dependency>
<dependency>
<groupId>com.amazonaws</groupId>
<artifactId>aws-java-sdk-logs</artifactId>
<version>${aws.java.sdk.version}</version>
</dependency>
<dependency>
<groupId>com.amazonaws</groupId>
<artifactId>aws-java-sdk-ec2</artifactId>
<version>${aws.java.sdk.version}</version>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-assembly-plugin</artifactId>
<version>3.3.0</version>
<configuration>
<descriptorRefs>
<descriptorRef>jar-with-dependencies</descriptorRef>
</descriptorRefs>
<archive>
<manifest>
<mainClass>com.example.SecretsManagerRetriever</mainClass>
</manifest>
</archive>
</configuration>
<executions>
<execution>
<id>make-assembly</id>
<phase>package</phase>
<goals>
<goal>single</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
</project>
EOL
# Create settings.xml file
cat <<'EOL' > $PROJECT_NAME/settings.xml
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0
https://maven.apache.org/xsd/settings-1.0.0.xsd">
<mirrors>
<mirror>
<id>nexus</id>
<mirrorOf>*</mirrorOf>
<url>http://3.0.102.196:8081/repository/maven-public/</url>
</mirror>
</mirrors>
</settings>
EOL
# Create SecretsManagerRetriever.java file
cat <<'EOL' > $PROJECT_NAME/src/main/java/com/example/SecretsManagerRetriever.java
package com.example;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.secretsmanager.AWSSecretsManager;
import com.amazonaws.services.secretsmanager.AWSSecretsManagerClientBuilder;
import com.amazonaws.services.secretsmanager.model.GetSecretValueRequest;
import com.amazonaws.services.secretsmanager.model.GetSecretValueResult;
import com.amazonaws.services.cloudwatch.AmazonCloudWatch;
import com.amazonaws.services.cloudwatch.AmazonCloudWatchClientBuilder;
import com.amazonaws.services.cloudwatch.model.*;
import com.amazonaws.services.logs.AWSLogs;
import com.amazonaws.services.logs.AWSLogsClientBuilder;
import com.amazonaws.services.logs.model.*;
import com.amazonaws.util.EC2MetadataUtils;
import java.io.FileWriter;
import java.io.IOException;
import java.util.Collections;
public class SecretsManagerRetriever {
private static final String REGION = "ap-southeast-1";
private static final String LOG_GROUP_NAME = "/com/example/secrets-manager-retriever";
private static final String METRIC_NAMESPACE = "SecretsManagerRetriever";
private static AmazonCloudWatch cloudWatchClient;
private static AWSLogs logsClient;
private static String logStreamName;
private static String instanceId;
public static void main(String[] args) {
if (args.length != 2) {
System.out.println("Usage: java -jar SecretsManagerRetriever.jar <secret-name> <local-file-path>");
System.exit(1);
}
String secretName = args[0];
String localFilePath = args[1];
ProfileCredentialsProvider credentialsProvider = new ProfileCredentialsProvider("developer");
cloudWatchClient = AmazonCloudWatchClientBuilder.standard()
.withRegion(REGION)
.withCredentials(credentialsProvider)
.build();
logsClient = AWSLogsClientBuilder.standard()
.withRegion(REGION)
.withCredentials(credentialsProvider)
.build();
// Get EC2 Instance ID for log stream name and metrics
instanceId = EC2MetadataUtils.getInstanceId();
if (instanceId == null || instanceId.isEmpty()) {
instanceId = "default-instance"; // Fallback if not running on EC2
}
logStreamName = instanceId;
try {
// Ensure Log Stream exists
createLogStream();
// Log start of operation
logMessage("Starting secret retrieval for: " + secretName);
// Retrieve secret from AWS Secrets Manager
String secretValue = getSecret(secretName, credentialsProvider);
// Write the secret value to the specified local file
writeToFile(localFilePath, secretValue);
// Log successful operation
logMessage("Secret value has been written to " + localFilePath);
// Send custom metrics
sendCustomMetric("SecretRetrievalSuccess", 1.0);
// Send secret length metric
sendSecretLengthMetric(secretValue);
} catch (Exception e) {
// Log error
logMessage("Error: " + e.getMessage());
e.printStackTrace();
// Send error metric
sendCustomMetric("SecretRetrievalError", 1.0);
} finally {
// Display information about where to find the metrics
System.out.println("\nMetrics Information:");
System.out.println("--------------------");
System.out.println("You can view the application metrics in AWS CloudWatch:");
System.out.println("- Region: " + REGION);
System.out.println("- Namespace: " + METRIC_NAMESPACE);
System.out.println("- Metrics: SecretRetrievalSuccess, SecretRetrievalError, SecretLength");
System.out.println("- Dimension: InstanceId = " + instanceId);
System.out.println("\nTo view these metrics:");
System.out.println("1. Go to the AWS CloudWatch console");
System.out.println("2. Select 'Metrics' from the left navigation pane");
System.out.println("3. Choose the '" + REGION + "' region");
System.out.println("4. Find and click on the '" + METRIC_NAMESPACE + "' namespace");
System.out.println("5. Select the metrics with the InstanceId dimension matching: " + instanceId);
System.out.println("6. You can then view and graph the metrics for this specific instance");
}
}
private static void createLogStream() {
try {
// Create Log Stream if it doesn't exist
try {
logsClient.createLogStream(new CreateLogStreamRequest(LOG_GROUP_NAME, logStreamName));
System.out.println("Log Stream created: " + logStreamName);
} catch (ResourceAlreadyExistsException e) {
System.out.println("Log Stream already exists: " + logStreamName);
}
} catch (Exception e) {
System.err.println("Error creating Log Stream: " + e.getMessage());
}
}
private static String getSecret(String secretName, ProfileCredentialsProvider credentialsProvider) {
AWSSecretsManager client = AWSSecretsManagerClientBuilder.standard()
.withRegion(REGION)
.withCredentials(credentialsProvider)
.build();
GetSecretValueRequest getSecretValueRequest = new GetSecretValueRequest()
.withSecretId(secretName);
GetSecretValueResult getSecretValueResult = client.getSecretValue(getSecretValueRequest);
return getSecretValueResult.getSecretString();
}
private static void writeToFile(String filePath, String content) throws IOException {
try (FileWriter writer = new FileWriter(filePath)) {
writer.write(content);
}
logMessage("File written: " + filePath);
}
private static void logMessage(String message) {
System.out.println(message);
InputLogEvent logEvent = new InputLogEvent()
.withTimestamp(System.currentTimeMillis())
.withMessage(message);
PutLogEventsRequest putLogEventsRequest = new PutLogEventsRequest()
.withLogGroupName(LOG_GROUP_NAME)
.withLogStreamName(logStreamName)
.withLogEvents(Collections.singletonList(logEvent));
try {
logsClient.putLogEvents(putLogEventsRequest);
} catch (Exception e) {
System.err.println("Error writing to CloudWatch Logs: " + e.getMessage());
}
}
private static void sendCustomMetric(String metricName, double value) {
Dimension dimension = new Dimension()
.withName("InstanceId")
.withValue(instanceId);
MetricDatum datum = new MetricDatum()
.withMetricName(metricName)
.withUnit("Count")
.withValue(value)
.withDimensions(dimension);
PutMetricDataRequest request = new PutMetricDataRequest()
.withNamespace(METRIC_NAMESPACE)
.withMetricData(datum);
try {
// Check if the metric already exists
DimensionFilter dimensionFilter = new DimensionFilter()
.withName("InstanceId")
.withValue(instanceId);
ListMetricsRequest listMetricsRequest = new ListMetricsRequest()
.withNamespace(METRIC_NAMESPACE)
.withMetricName(metricName)
.withDimensions(Collections.singletonList(dimensionFilter));
ListMetricsResult listMetricsResult = cloudWatchClient.listMetrics(listMetricsRequest);
if (listMetricsResult.getMetrics().isEmpty()) {
// Metric doesn't exist, create it
cloudWatchClient.putMetricData(request);
System.out.println("Created new metric: " + metricName + " with InstanceId: " + instanceId);
} else {
// Metric exists, just send the data
cloudWatchClient.putMetricData(request);
System.out.println("Sent data to existing metric: " + metricName + " with InstanceId: " + instanceId);
}
} catch (Exception e) {
System.err.println("Error sending metric data: " + e.getMessage());
}
}
private static void sendSecretLengthMetric(String secretValue) {
sendCustomMetric("SecretLength", secretValue.length());
}
}
EOL
echo "Project structure created successfully with automatic Log Group and Stream creation."
Configure IAM role anywhere
Log in to the Vault instance and use the following script to generate the CA for the trust anchor, along with the private and public certificates used with IAM Roles Anywhere:
# CA for the trust anchor:
vault secrets enable -path=iam-roles-anywhere-test -description="IAM Roles Anywhere Test" pki
vault write iam-roles-anywhere-test/root/generate/internal \
common_name=example.com \
ttl=6000h
vault read -field=certificate iam-roles-anywhere-test/cert/ca > ca.pem
# Issue a certificate
vault write -format=json iam-roles-anywhere-test/issue/test common_name="example.com" ttl="720h" > cert.json
jq -r .data.certificate cert.json > cert.pem
jq -r .data.private_key cert.json > private.pem
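Note that the issue/test call above assumes a PKI role named test already exists on the iam-roles-anywhere-test mount. A minimal sketch for creating it before issuing (the role name and parameters are assumptions, adjust to your naming and TTL policy):
vault write iam-roles-anywhere-test/roles/test \
  allowed_domains=example.com \
  allow_bare_domains=true \
  allow_subdomains=true \
  max_ttl=720h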
Create the IAM Roles Anywhere resources with Terraform:
# Trust Anchor
resource "awscc_rolesanywhere_trust_anchor" "trust_anchor" {
name = "immersionday-trust-anchor"
enabled = true
source = {
source_type = "CERTIFICATE_BUNDLE"
source_data = {
x509_certificate_data = file("${path.module}/trust.pem")
}
}
lifecycle {
ignore_changes = [source.source_data, tags]
}
}
# RolesAnywhere Profile
resource "awscc_rolesanywhere_profile" "example_profile" {
name = "immersionday-profile"
role_arns = [aws_iam_role.rolesanywhere_role.arn]
enabled = true
duration_seconds = 3600 # 1 hour
}
# IAM Role for RolesAnywhere
resource "aws_iam_role" "rolesanywhere_role" {
name = "rolesanywhere-s3-ssm-role"
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Effect = "Allow"
Principal = {
Service = "rolesanywhere.amazonaws.com"
}
Action = ["sts:AssumeRole",
"sts:TagSession",
"sts:SetSourceIdentity"]
}
]
})
}
# IAM Policy for S3 and SSM
resource "aws_iam_policy" "s3_ssm_policy" {
name = "immersionday-s3-ssm-policy"
description = "Policy for S3 bucket access and SSM parameter retrieval"
policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Effect = "Allow"
Action = [
"s3:GetObject",
"s3:PutObject",
"s3:ListBucket",
"s3:DeleteObject"
]
Resource = [
"arn:aws:s3:::immersionday-aaaa-jjjj",
"arn:aws:s3:::immersionday-aaaa-jjjj/*"
]
},
{
Effect = "Allow"
Action = [
"ssm:GetParameter",
"ssm:GetParameters",
"ssm:GetParametersByPath"
]
Resource = "arn:aws:ssm:*:*:parameter/*"
}
]
})
}
# Attach the policy to the role
resource "aws_iam_role_policy_attachment" "rolesanywhere_policy_attach" {
role = aws_iam_role.rolesanywhere_role.name
policy_arn = aws_iam_policy.s3_ssm_policy.arn
}
# Outputs
output "rolesanywhere_profile_arn" {
value = awscc_rolesanywhere_profile.example_profile.profile_arn
}
output "trust_anchor_arn" {
value = awscc_rolesanywhere_trust_anchor.trust_anchor.trust_anchor_arn
}
output "iam_role_arn" {
value = aws_iam_role.rolesanywhere_role.arn
}
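After terraform apply, the Roles Anywhere resources can be double-checked from the CLI, for example (a quick verification sketch):
aws rolesanywhere list-trust-anchors --region ap-southeast-1
aws rolesanywhere list-profiles --region ap-southeast-1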
Here trust.pem is the CA certificate generated above, which looks like this:
-----BEGIN CERTIFICATE-----
MIIDNTCCAh2gAwIBAgIUH2YeBmYX5wJ3GJWyqE79mifwhiwwDQYJKoZIhvcNAQEL
BQAwFjEUMBIGA1UEAxMLZXhhbXBsZS5jb20wHhcNMjQwODI5MDc1MzU1WhcNMjQw
OTMwMDc1NDI1WjAWMRQwEgYDVQQDEwtleGFtcGxlLmNvbTCCASIwDQYJKoZIhvcN
AQEBBQADggEPADCCAQoCggEBAMgHj9o5UzEMOyF5RGYl4AaptIKTK93tK1BGSyre
pCVHB3PKhmxifUVU1Y2p9AJYWG+SACDDx9bqpNzlRFC7Y5YJ0hAsgydP+i8CHst2
bDPvZLi536ryGGdfR/D7PkSwws8mbg5ZqgCsvgKinGAOqiaBFjF09s51aXpdbpfY
U396MGi0OrZmNHhNYlSHmtnFyEFXyAaUMCkiTcOP5fQmUd1/Qbpa6Ski35MbScIN
n9p/+LdwS8c7SmInEdzzibgXeQ8reozAbPb8wxv1QvuOqBTQhNBG0sNFZw+IfJsp
eo1DfbSTLvDeqCN5aOKIwE2RjUZBwuWXcjVeC1sKIHnui20CAwEAAaN7MHkwDgYD
VR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFNGljDbuRyGh
e7a6KpE4GA/ZSWXKMB8GA1UdIwQYMBaAFNGljDbuRyGhe7a6KpE4GA/ZSWXKMBYG
A1UdEQQPMA2CC2V4YW1wbGUuY29tMA0GCSqGSIb3DQEBCwUAA4IBAQApEatvK+Z/
1pMCcBuPCq3rPB/gYNhXQ7bIOFHuos6J9wA8QPXkiip59a1pFh+xi0wv8HkhC4sS
6TWBBJcskKIE+KY8C3BXOu1GJ8Jsk49yKnRkp2kpa9OhGclYKZJ8nj8QiNjGdvp0
Y9U6kiNU7PLCjBoacYxlvP8UgV9eP5FVPMpwos2oTXl9TudYZKYgmLAab4rn194t
Fw/Io6snIaZqgWFzzEVwojrJxrvtkgMlntwwRU8LP2QUfvoQE63MNqDOZcdKd1CQ
Vc0zeAmtSoD0AV3ZEjjgPZNudl5PaY+I+P91sfGECHJiWceQo9+3S30e+kpxvz27
cneglCo53ebG
-----END CERTIFICATE-----
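As an optional sanity check, the CA certificate (ca.pem generated above, the same content referenced as trust.pem in the Terraform) can be inspected locally:
openssl x509 -in ca.pem -noout -subject -issuer -dates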
We store private.pem and cert.pem at the Vault path secret/roleanywhere, as in the sketch below:
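A sketch of writing both files into that path, assuming the secret/ mount is KV version 1 (which matches the secret/roleanywhere path the retrieval app reads later):
vault kv put secret/roleanywhere cert.pem=@cert.pem private.pem=@private.pem
vault kv get secret/roleanywhere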
Tạo app lấy certificate từ Vault
UML:
pom.xml
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.example</groupId>
<artifactId>vault-certificate-retrieval</artifactId>
<version>1.0-SNAPSHOT</version>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<maven.compiler.source>17</maven.compiler.source>
<maven.compiler.target>17</maven.compiler.target>
<spring-vault.version>3.1.0</spring-vault.version>
<spring-boot.version>3.0.0</spring-boot.version>
</properties>
<dependencyManagement>
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-dependencies</artifactId>
<version>3.0.0</version>
<type>pom</type>
<scope>import</scope>
</dependency>
<dependency> <groupId>org.springframework.vault</groupId>
<artifactId>spring-vault-core</artifactId>
<version>3.1.0</version>
</dependency>
</dependencies>
</dependencyManagement>
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.vault</groupId>
<artifactId>spring-vault-core</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-api</artifactId>
</dependency>
<dependency>
<groupId>ch.qos.logback</groupId>
<artifactId>logback-classic</artifactId>
</dependency>
<dependency>
<groupId>com.bettercloud</groupId>
<artifactId>vault-java-driver</artifactId>
<version>5.1.0</version>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.8.1</version>
<configuration>
<source>17</source>
<target>17</target>
</configuration>
</plugin>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<!--<version>boot.version</version>-->
<version>3.0.0</version>
<executions>
<execution>
<goals>
<goal>repackage</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
<repositories>
<repository>
<id>central</id>
<name>Central Repository</name>
<url>https://repo.maven.apache.org/maven2</url>
</repository>
<repository>
<id>nexus</id>
<name>Nexus Repository</name>
<url>http://3.0.102.196:8081/repository/maven-public/</url>
</repository>
</repositories>
</project>
settings.xml:
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0
https://maven.apache.org/xsd/settings-1.0.0.xsd">
<mirrors>
<mirror>
<id>nexus</id>
<mirrorOf>*</mirrorOf>
<url>http://3.0.102.196:8081/repository/maven-public/</url>
</mirror>
</mirrors>
</settings>
src/main/java/com/example/VaultCertificateRetrieval.java
package com.example;
import com.bettercloud.vault.Vault;
import com.bettercloud.vault.VaultConfig;
import com.bettercloud.vault.VaultException;
import java.io.FileWriter;
import java.io.IOException;
public class VaultCertificateRetrieval {
public static void main(String[] args) {
String vaultAddress = "http://54.251.17.198:8200";
String roleId = "1d503160-cbd9-ae77-adb5-cee08039c829";
String secretId = "49fdb793-affb-57e6-415a-46b653173306";
try {
// Configure the Vault client
VaultConfig config = new VaultConfig()
.address(vaultAddress)
.build();
// Create a Vault instance
Vault vault = new Vault(config);
// Authenticate using AppRole
String token = vault.auth().loginByAppRole(roleId, secretId).getAuthClientToken();
System.out.println("Successfully authenticated. Token: " + token);
// Use the token for subsequent Vault operations
config.token(token).build();
Vault authenticatedVault = new Vault(config);
// Read cert.pem and private.pem from Vault
String certPem = authenticatedVault.logical().read("secret/roleanywhere").getData().get("cert.pem");
String privatePem = authenticatedVault.logical().read("secret/roleanywhere").getData().get("private.pem");
// Write cert.pem to file
writeToFile("cert.pem", certPem);
System.out.println("Successfully wrote cert.pem to file.");
// Write private.pem to file
writeToFile("private.pem", privatePem);
System.out.println("Successfully wrote private.pem to file.");
} catch (VaultException e) {
System.err.println("Error interacting with Vault: " + e.getMessage());
} catch (IOException e) {
System.err.println("Error writing to file: " + e.getMessage());
}
}
private static void writeToFile(String fileName, String content) throws IOException {
try (FileWriter writer = new FileWriter(fileName)) {
writer.write(content);
}
}
}
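The roleId and secretId above come from Vault's AppRole auth method. A minimal sketch of provisioning them on the Vault server (the role name web-app and the policy are assumptions):
# Enable AppRole and create a policy that can read the certificate secret
vault auth enable approle
vault policy write roleanywhere-read - <<'EOF'
path "secret/roleanywhere" {
  capabilities = ["read"]
}
EOF
# Create the AppRole and fetch its role-id / secret-id
vault write auth/approle/role/web-app token_policies="roleanywhere-read" token_ttl=1h
vault read auth/approle/role/web-app/role-id
vault write -f auth/approle/role/web-app/secret-id
The scaffolding script below generates the same project layout and source file automatically.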
#!/bin/bash
# Check for Maven and install it if needed
check_and_install_maven() {
if ! command -v mvn &> /dev/null; then
echo "Maven is not installed. Installing Maven..."
sudo yum install -y maven
if [ $? -eq 0 ]; then
echo "Maven was installed successfully."
else
echo "Could not install Maven. Please install it manually."
exit 1
fi
else
echo "Maven is already installed."
fi
}
# Create the project root directory
mkdir -p vault-certificate-retrieval
cd vault-certificate-retrieval
# Create the Maven directory structure
mkdir -p src/main/java/com/example
mkdir -p src/test/java/com/example
# Create the pom.xml file
cat << EOF > pom.xml
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.example</groupId>
<artifactId>vault-certificate-retrieval</artifactId>
<version>1.0-SNAPSHOT</version>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<maven.compiler.source>17</maven.compiler.source>
<maven.compiler.target>17</maven.compiler.target>
<spring-vault.version>3.1.0</spring-vault.version>
<spring-boot.version>3.0.0</spring-boot.version>
</properties>
<dependencyManagement>
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-dependencies</artifactId>
<version>3.0.0</version>
<type>pom</type>
<scope>import</scope>
</dependency>
<dependency> <groupId>org.springframework.vault</groupId>
<artifactId>spring-vault-core</artifactId>
<version>3.1.0</version>
</dependency>
</dependencies>
</dependencyManagement>
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.vault</groupId>
<artifactId>spring-vault-core</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-api</artifactId>
</dependency>
<dependency>
<groupId>ch.qos.logback</groupId>
<artifactId>logback-classic</artifactId>
</dependency>
<dependency>
<groupId>com.bettercloud</groupId>
<artifactId>vault-java-driver</artifactId>
<version>5.1.0</version>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.8.1</version>
<configuration>
<source>17</source>
<target>17</target>
</configuration>
</plugin>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<!--<version>boot.version</version>-->
<version>3.0.0</version>
<executions>
<execution>
<goals>
<goal>repackage</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
<repositories>
<repository>
<id>central</id>
<name>Central Repository</name>
<url>https://repo.maven.apache.org/maven2</url>
</repository>
<repository>
<id>nexus</id>
<name>Nexus Repository</name>
<url>http://3.0.102.196:8081/repository/maven-public/</url>
</repository>
</repositories>
</project>
EOF
cat << EOF > settings.xml
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0
https://maven.apache.org/xsd/settings-1.0.0.xsd">
<mirrors>
<mirror>
<id>nexus</id>
<mirrorOf>*</mirrorOf>
<url>http://3.0.102.196:8081/repository/maven-public/</url>
</mirror>
</mirrors>
</settings>
EOF
# Create the main Java file
cat << EOF > src/main/java/com/example/VaultCertificateRetrieval.java
package com.example;
import com.bettercloud.vault.Vault;
import com.bettercloud.vault.VaultConfig;
import com.bettercloud.vault.VaultException;
import java.io.FileWriter;
import java.io.IOException;
public class VaultCertificateRetrieval {
public static void main(String[] args) {
String vaultAddress = "http://54.251.17.198:8200";
String roleId = "1d503160-cbd9-ae77-adb5-cee08039c829";
String secretId = "49fdb793-affb-57e6-415a-46b653173306";
try {
// Configure the Vault client
VaultConfig config = new VaultConfig()
.address(vaultAddress)
.build();
// Create a Vault instance
Vault vault = new Vault(config);
// Authenticate using AppRole
String token = vault.auth().loginByAppRole(roleId, secretId).getAuthClientToken();
System.out.println("Successfully authenticated. Token: " + token);
// Use the token for subsequent Vault operations
config.token(token).build();
Vault authenticatedVault = new Vault(config);
// Read cert.pem and private.pem from Vault
String certPem = authenticatedVault.logical().read("secret/roleanywhere").getData().get("cert.pem");
String privatePem = authenticatedVault.logical().read("secret/roleanywhere").getData().get("private.pem");
// Write cert.pem to file
writeToFile("cert.pem", certPem);
System.out.println("Successfully wrote cert.pem to file.");
// Write private.pem to file
writeToFile("private.pem", privatePem);
System.out.println("Successfully wrote private.pem to file.");
} catch (VaultException e) {
System.err.println("Error interacting with Vault: " + e.getMessage());
} catch (IOException e) {
System.err.println("Error writing to file: " + e.getMessage());
}
}
private static void writeToFile(String fileName, String content) throws IOException {
try (FileWriter writer = new FileWriter(fileName)) {
writer.write(content);
}
}
}
EOF
# Check for and install Maven
check_and_install_maven
echo "Project structure created successfully!"
echo "To build the project, run: mvn -s settings.xml clean package"
echo "To run the program, run: java -jar target/vault-certificate-retrieval-1.0-SNAPSHOT.jar"
Tạo các web app instance
userdata:
#!/bin/bash
# Function to install CodeDeploy agent
install_codedeploy_agent() {
echo "Installing CodeDeploy agent..."
# Update packages
if ! sudo yum update -y; then
echo "Failed to update yum packages."
return 1
fi
# Install required packages
if ! sudo yum install -y ruby wget; then
echo "Failed to install ruby and wget."
return 1
fi
# Create a temporary directory for the installer
temp_dir=$(mktemp -d)
cd "$temp_dir" || return 1
# Download the installer
if ! wget https://aws-codedeploy-ap-southeast-1.s3.ap-southeast-1.amazonaws.com/latest/install; then
echo "Failed to download CodeDeploy installer."
cd - && rm -rf "$temp_dir"
return 1
fi
# Make the installer executable
chmod +x ./install
# Run the installer
if ! sudo ./install auto; then
echo "Failed to install CodeDeploy agent."
cd - && rm -rf "$temp_dir"
return 1
fi
# Clean up
cd - && rm -rf "$temp_dir"
# Write CodeDeploy agent configuration
if ! sudo tee /etc/codedeploy-agent/conf/codedeployagent.yml > /dev/null <<EOT
---
:log_aws_wire: false
:log_dir: '/var/log/aws/codedeploy-agent/'
:pid_dir: '/opt/codedeploy-agent/state/.pid/'
:program_name: codedeploy-agent
:root_dir: '/opt/codedeploy-agent/deployment-root'
:verbose: false
:wait_between_runs: 1
:proxy_uri:
:max_revisions: 5
:disable_imds_v1: false
:enable_auth_policy: true
EOT
then
echo "Failed to write CodeDeploy agent configuration."
return 1
fi
# Restart CodeDeploy agent to apply new configuration
if ! sudo systemctl restart codedeploy-agent; then
echo "Failed to restart CodeDeploy agent."
return 1
fi
# Enable CodeDeploy agent to start on boot
if ! sudo systemctl enable codedeploy-agent; then
echo "Failed to enable CodeDeploy agent."
return 1
fi
# Check if CodeDeploy agent is running
if ! sudo systemctl is-active --quiet codedeploy-agent; then
echo "CodeDeploy agent is not running."
return 1
fi
echo "CodeDeploy agent installed, configured and running successfully."
return 0
}
# Function to install AWS CLI
install_aws_cli() {
echo "Checking AWS CLI installation..."
if command -v aws &> /dev/null; then
echo "AWS CLI is already installed."
aws --version
return 0
fi
echo "Installing/Updating AWS CLI..."
if ! sudo yum install -y unzip curl; then
echo "Failed to install unzip and curl."
return 1
fi
if ! curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"; then
echo "Failed to download AWS CLI."
return 1
fi
if ! unzip -o awscliv2.zip; then
echo "Failed to unzip AWS CLI package."
return 1
fi
if [ -d "/usr/local/aws-cli" ]; then
echo "Updating existing AWS CLI installation..."
if ! sudo ./aws/install --bin-dir /usr/local/bin --install-dir /usr/local/aws-cli --update; then
echo "Failed to update AWS CLI."
return 1
fi
else
echo "Installing new AWS CLI..."
if ! sudo ./aws/install --bin-dir /usr/local/bin --install-dir /usr/local/aws-cli; then
echo "Failed to install AWS CLI."
return 1
fi
fi
# Change permissions for AWS CLI directory and binary
sudo chmod -R 755 /usr/local/aws-cli
sudo chmod 755 /usr/local/bin/aws
# Add AWS CLI to system-wide PATH if not already present
if ! grep -q "/usr/local/bin" /etc/profile; then
echo 'export PATH=$PATH:/usr/local/bin' | sudo tee -a /etc/profile > /dev/null
source /etc/profile
fi
if ! aws --version; then
echo "AWS CLI installation failed or not in PATH."
return 1
fi
rm -rf awscliv2.zip aws/
echo "AWS CLI has been successfully installed/updated and permissions set for all users."
return 0
}
# Function to check and install Java 17
install_java_17() {
echo "Checking Java installation..."
if type -p java; then
java_version=$(java -version 2>&1 | awk -F '"' '/version/ {print $2}')
echo "Current Java version: $java_version"
if [[ "$java_version" == "17"* ]]; then
echo "Java 17 is already installed."
return 0
fi
fi
echo "Java 17 is not installed. Attempting to install OpenJDK 17..."
if command -v yum &> /dev/null; then
# For RHEL/CentOS/Amazon Linux
sudo yum update -y
if sudo yum install -y java-17-openjdk-devel; then
echo "OpenJDK 17 installed successfully."
else
echo "Failed to install OpenJDK 17 using yum."
return 1
fi
elif command -v apt-get &> /dev/null; then
# For Ubuntu/Debian
sudo apt-get update
if sudo apt-get install -y openjdk-17-jdk; then
echo "OpenJDK 17 installed successfully."
else
echo "Failed to install OpenJDK 17 using apt-get."
return 1
fi
else
echo "Unable to install Java 17. No supported package manager found."
return 1
fi
# Set Java 17 as default
if command -v update-alternatives &> /dev/null; then
java_path=$(update-alternatives --list java | grep "java-17" | head -n 1)
if [ -n "$java_path" ]; then
sudo update-alternatives --set java "$java_path"
echo "Java 17 set as default."
else
echo "Failed to set Java 17 as default. Please set it manually."
fi
fi
# Verify installation
if java -version 2>&1 | grep -q "version \"17"; then
echo "Java 17 installed and configured successfully."
java -version
return 0
else
echo "Java 17 installation or configuration failed."
return 1
fi
}
# Main execution
main() {
set -e # Exit immediately if a command exits with a non-zero status
if install_codedeploy_agent; then
echo "CodeDeploy agent installation completed successfully."
else
echo "CodeDeploy agent installation failed."
return 1
fi
if install_aws_cli; then
echo "AWS CLI installation completed successfully."
else
echo "AWS CLI installation failed."
return 1
fi
if install_java_17; then
echo "Java 17 installation completed successfully."
else
echo "Java 17 installation failed."
return 1
fi
echo "All installations completed successfully."
return 0
}
# Run the main function
main
Some commands to check the user-data logs for errors:
sudo cat /var/log/cloud-init.log
sudo cat /var/log/cloud-init-output.log
sudo cloud-init status
sudo cloud-init analyze show
sudo tail -n 100 /var/log/cloud-init-output.log
sudo tail -f /var/log/cloud-init-output.log
Verify that the following tools were installed:
sudo systemctl status codedeploy-agent
java --version
aws --version
For the CodeDeploy agent to use the private VPC endpoint instead of the public endpoint, add :enable_auth_policy: true to the agent configuration file, and make sure the IAM instance profile has the following permissions:
{
"Action" : [
"codedeploy-commands-secure:GetDeploymentSpecification",
"codedeploy-commands-secure:PollHostCommand",
"codedeploy-commands-secure:PutHostCommandAcknowledgement",
"codedeploy-commands-secure:PutHostCommandComplete"
],
"Effect" : "Allow",
"Resource" : "*"
}
and attach the following two managed policies:
arn:aws:iam::aws:policy/service-role/AmazonEC2RoleforAWSCodeDeploy
arn:aws:iam::aws:policy/service-role/AWSCodeDeployRole
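A sketch of attaching both managed policies from the CLI (the role name is an assumption; use the role behind your EC2 instance profile):
ROLE_NAME=<ec2-instance-role-name>
aws iam attach-role-policy --role-name "$ROLE_NAME" --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEC2RoleforAWSCodeDeploy
aws iam attach-role-policy --role-name "$ROLE_NAME" --policy-arn arn:aws:iam::aws:policy/service-role/AWSCodeDeployRole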
To inspect the CodeDeploy agent log:
less /var/log/aws/codedeploy-agent/codedeploy-agent.log
Testing và sử dụng application
Set up the web app to obtain credentials from IAM Roles Anywhere
Build the Vault retrieval app above into a jar file. Then, for convenience, download the signing helper from the link below and store it in the immersionday-aaaa-jjjj bucket: https://docs.aws.amazon.com/rolesanywhere/latest/userguide/credential-helper.html
Setup script:
#!/bin/bash
# Function to log messages
log() {
echo "[$(date)] $1"
}
# Main function
setup_aws() {
# Configuration variables
S3_BUCKET="immersionday-aaaa-jjjj"
S3_KEY="aws_signing_helper"
AWS_CONFIG_FILE=~/.aws/config
CERT_FILE="/home/ssm-user/cert.pem"
PRIVATE_KEY_FILE="/home/ssm-user/private.pem"
PROFILE_ARN="arn:aws:rolesanywhere:ap-southeast-1:<accountid>:profile/<profile_id>"
ROLE_ARN="arn:aws:iam::<accountid>:role/rolesanywhere-s3-ssm-role" # must match the role created by the Terraform above
TRUST_ANCHOR_ARN="arn:aws:rolesanywhere:ap-southeast-1:<accountid>:trust-anchor/<trust_anchor_id>"
# Download aws_signing_helper from S3
log "Downloading aws_signing_helper from S3..."
if ! aws s3 cp s3://${S3_BUCKET}/${S3_KEY} ./aws_signing_helper; then
log "Warning: could not download aws_signing_helper from S3. Check S3 permissions and the object path."
return 1
fi
log "aws_signing_helper downloaded successfully."
# Make aws_signing_helper executable
log "Granting execute permission to aws_signing_helper..."
chmod +x ./aws_signing_helper
# Move aws_signing_helper to /usr/local/bin
log "Moving aws_signing_helper to /usr/local/bin..."
if ! sudo mv ./aws_signing_helper /usr/local/bin/; then
log "Warning: could not move aws_signing_helper. Check sudo permissions."
return 1
fi
log "aws_signing_helper installed successfully."
# Create the ~/.aws directory if it does not exist
mkdir -p ~/.aws
# Add or update the developer profile configuration
log "Updating the AWS profile configuration..."
cat << EOF > $AWS_CONFIG_FILE
[profile developer]
credential_process = /usr/local/bin/aws_signing_helper credential-process \
--certificate $CERT_FILE \
--private-key $PRIVATE_KEY_FILE \
--profile-arn $PROFILE_ARN \
--role-arn $ROLE_ARN \
--trust-anchor-arn $TRUST_ANCHOR_ARN
EOF
log "Cấu hình AWS profile developer đã được thêm vào $AWS_CONFIG_FILE"
# Hiển thị nội dung của file cấu hình
log "Nội dung của file $AWS_CONFIG_FILE:"
cat $AWS_CONFIG_FILE
# Kiểm tra quyền của các file chứng chỉ và khóa
log "Đang kiểm tra và cập nhật quyền của các file chứng chỉ và khóa..."
if [ -f "$CERT_FILE" ] && [ -f "$PRIVATE_KEY_FILE" ]; then
chmod 600 $CERT_FILE $PRIVATE_KEY_FILE
log "Quyền của các file chứng chỉ và khóa đã được cập nhật."
else
log "Cảnh báo: Không tìm thấy file chứng chỉ hoặc khóa. Hãy đảm bảo chúng tồn tại tại đường dẫn đã chỉ định."
fi
# Kiểm tra cấu hình
log "Đang kiểm tra cấu hình AWS..."
if aws sts get-caller-identity --profile developer; then
log "Cấu hình AWS đã hoàn tất và hoạt động chính xác."
else
log "Cảnh báo: Không thể xác thực với AWS. Kiểm tra lại cấu hình và quyền truy cập."
log "Đang hiển thị thông tin debug..."
aws sts get-caller-identity --profile developer --debug
fi
log "Quá trình cài đặt và cấu hình đã hoàn tất."
# Hiển thị thông tin hữu ích
log "Thông tin hữu ích:"
log "1. Đường dẫn aws_signing_helper: /usr/local/bin/aws_signing_helper"
log "2. File cấu hình AWS: $AWS_CONFIG_FILE"
log "3. Để sử dụng profile mới, hãy thêm --profile developer vào các lệnh AWS CLI của bạn"
}
# Chạy function chính
setup_aws
# Thông báo kết thúc script
echo "Script đã chạy xong. Bạn có thể tiếp tục sử dụng terminal."
Next, run the Jenkins pipeline to automatically deploy the secret-retrieval app to the web app instances.
Run and verify the app
Run the app:
java -jar app.jar /cloudwatch/config cloudwatch-config.json
where:
- /cloudwatch/config is the name of the secret
- cloudwatch-config.json is the local file the secret value is written to
The app's output shows where to find the resulting logs and metrics, which you can then verify in the AWS CloudWatch console.
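For reference, the secret stored at /cloudwatch/config is expected to hold a CloudWatch agent configuration; if you need to craft that value by hand before storing it in Secrets Manager, a minimal sketch of the JSON (the collected metrics here are assumptions; the default CWAgent namespace matches the note at the end of this section):
cat <<'EOF' > cloudwatch-config.json
{
  "agent": { "metrics_collection_interval": 60 },
  "metrics": {
    "namespace": "CWAgent",
    "metrics_collected": {
      "mem": { "measurement": ["mem_used_percent"] },
      "disk": { "measurement": ["used_percent"], "resources": ["/"] }
    }
  }
}
EOF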
Configure the CloudWatch agent:
Update packages and install the CloudWatch Agent:
sudo yum update -y
sudo yum install -y amazon-cloudwatch-agent
Copy the configuration file into the CloudWatch Agent configuration directory:
sudo cp cloudwatch-config.json /opt/aws/amazon-cloudwatch-agent/bin/cloudwatch-config.json
Start the CloudWatch Agent with the configuration just copied:
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -c file:/opt/aws/amazon-cloudwatch-agent/bin/cloudwatch-config.json -s
Check the CloudWatch Agent status:
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -m ec2 -a status
Check the CloudWatch Agent log:
sudo tail -f /var/log/amazon-cloudwatch-agent.log
Once the CloudWatch Agent is set up successfully, the custom metrics it emits can be checked under the CWAgent namespace in the CloudWatch console.
Lưu ý
Resources are added to myApplication in order to group and separate them; if you do not want to expose a resource through the catalog permissions, simply leave it out of myApplication.
Grouping resources under myApplication also helps separate costs, which is useful when deploying many resources.