The sed command

A command for text processing: replacing strings across a whole input, extracting lines, deleting lines, and so on.
Everything can be specified via command-line parameters, so it also works non-interactively for batch processing.
sed is short for Stream EDitor.

### sed syntax
sed OPTIONS... [SCRIPT] [INPUTFILE...]
[SCRIPT] is an expression such as "s/foo/bar/g"
The "-e" option takes a [SCRIPT] immediately after it

$ echo "Tech Blog" | sed -e "s/Blog/Comment/g"
Tech Comment
$ echo "Tech Blog" | sed -e "s/ /-/g"
Tech-Blog

A backslash is used for escaping (here, the "!")
$ echo "Tech Blog" | sed -e "s/ /\!/"
Tech!Blog

Replace the second occurrence of "o" with "__"
$ echo "Hello World" | sed -e "s/o/__/2"
Hello W__rld

### Rewriting a file in place
$ echo "Hello World" > sample.txt
$ sed -e "s/World/Coffee/g" sample.txt
Hello Coffee
$ cat sample.txt
Hello World
$ sed -i -e "s/World/Shinbashi/g" sample.txt
$ cat sample.txt
Hello Shinbashi

sed can be used in many other ways
For now, the key thing to remember is that it does substitution; the line-oriented usage mentioned at the top is sketched below
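
For reference, extracting and deleting specific lines looks like this; a minimal sketch with made-up input:

$ printf 'one\ntwo\nthree\n' > lines.txt
$ sed -n '2p' lines.txt     # -n suppresses automatic printing, so only line 2 is printed
two
$ sed '2d' lines.txt        # delete line 2 and print the rest
one
three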

Using GitLab CI

.gitlab-ci.yml

stages:
  - npm
  - composer
  - upload

npm:
  stage: npm
  image: node:12.14.1-alpine3.11
  script:
    - npm install
    - npm audit fix
    - npm run production
    - tar czf node_modules.tar.gz node_modules 
  artifacts:
    paths:
      - node_modules.tar.gz

composer:
  stage: composer
  image: composer:1.9
  script:
    - composer install
    - zip -r ./${CI_PIPELINE_ID}.zip .
  artifacts:
    paths:
      - ./${CI_PIPELINE_ID}.zip

s3upload:
  stage: upload
  image: alpine:latest
  before_script:
    - apk add --no-cache python3 py3-pip
    - pip3 install awscli
  script:
    - aws s3 cp ./${CI_PIPELINE_ID}.zip s3://${S3BUCKET}/${APP}.zip

build:
  stage: build
  script:
    - echo compile and package
    - echo tag image version
    - branch_name=$(echo $CI_COMMIT_REF_NAME | sed 's/\//-/g')
    - version="$branch_name-$CI_PIPELINE_ID"
    - echo login ECR and push image
    - eval $(aws ecr get-login --no-include-email --region ap-northeast-1)
    - docker tag app:latest myimage:${version}
    - docker push myimage:${version}
  only:
    refs:
      - feature
      - develop
      - integration
      - hotfix
      - master
    changes:
      - src/*/*
  tags:
    - build-runner

deploy:
  stage: deploy
  script:
    - echo "Deploy app"
    - branch_name=$(echo $CI_COMMIT_REF_NAME | sed 's/\//-/g')
    - version="$branch_name-$CI_PIPELINE_ID"
    - echo $version > codedeploy/image_version.txt
    - cd codedeploy
    - zip -r deploy.zip appspec.yml image_version.txt scripts
    - aws s3 cp deploy.zip s3://codedeploy/automation/${CI_COMMIT_REF_NAME}/app/deploy.zip --metadata x-amz-meta-application-name=app,x-amz-meta-deploymentgroup-name=${obj}
  only:
    refs:
      - feature
      - develop
      - integration
      - hotfix
      - master
    changes:
      - src/**/*
  tags:
    - deploy-runner

install.sh

#!/bin/bash
# Script is run on instance

# Get app version
dir=$(dirname "$0")
version=$(cat ${dir}/../image_version.txt)

# Tracking version
OPS_DIR="/etc/ops"
export APP_VERSION=${version}

# Compose up
docker-compose up -d app
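
The compose file presumably references the exported APP_VERSION in its image tag (e.g. image: myimage:${APP_VERSION}); a hypothetical pre-flight check under that assumption, using the image name pushed in the build job above:

# hypothetical check: pull the tagged image if it is not already present locally
docker image inspect "myimage:${APP_VERSION}" > /dev/null 2>&1 || docker pull "myimage:${APP_VERSION}"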

appspec.yml

version: 0.0
os: linux
hooks:
  BeforeInstall:
    - location: scripts/install.sh
      timeout: 300
      runas: root

GitLab, CodeCommit, CodeBuild, CodePipeline, CodeDeploy

Terraform

resource "aws_codepipeline" "pipeline" {
	name = "my-pipeline"
	role_arn = aws_iam_role.codepipeline.arn

	artifact_store {
		location = aws_s3_bucket.pipeline_bucket.bucket
		type = "S3"
	}

	stage {
		name = "Source"

		action {
			name = "Source"
			category = "Source"
			owner = "AWS"
			provider = "CodeCommit"
			version = "1"
			output_artifacts = ["source"]
			configuration = {
				BranchName = "develop"
				RepositoryName = aws_codecommit_repository.my_repository.repository_name
			}
		}
	}

	stage {
		name = "Build"

		action {
			name = "Build"
			category = "Build"
			owner = "AWS"
			provider = "CodeBuild"
			version = "1"
			run_order = 2
			input_artifacts = ["source"]
			output_artifacts = ["build"]
			configuration = {
				ProjectName = aws_codebuild_project.my_project.name
			}
		}
	}

	stage {
		name = "Deploy"

		action {
			name = "Deploy"
			category = "Deploy"
			owner = "AWS"
			provider = "ECS"
			version = "1"
			run_order = 1
			input_artifacts = ["build"]

			configuration = {
				ClusterName = aws_ecs_cluster.my_cluster.name
				ServiceName = aws_ecs_service.my_service.name
				FileName = var.file_name
			}

		}
	}
}

.gitlab-ci.yml

image: golang:1.15

variables:
  REPO_NAME: gitlab.com/xxxxx/microservice

before_script:
  - mkdir -p $GOPATH/src/$(dirname $REPO_NAME)
  - ln -svf $CI_PROJECT_DIR $GOPATH/src/$REPO_NAME
  - cd $GOPATH/src/$REPO_NAME

stages:
  - test

test:
  stage: test
  script:
    - make test

So Terraform really does get used quite a lot.

Pushing directly to ECR instead of the GitLab container registry:

build-demo-app:
  stage: build
  script:
    - docker build -t demo-app:latest .
    - docker tag demo-app:latest xxxx.dkr.ecr.ap-northeast-1.amazonaws.com/demo-app-${CI_BUILD_REF_NAME}:latest
    - docker push xxxx.dkr.ecr.ap-northeast-1.amazonaws.com/demo-app-${CI_BUILD_REF_NAME}:latest

deploy-demo-app:
  stage: deploy
  script:
    - aws deploy create-deployment --application-name demo-app-${CI_BUILD_REF_NAME} --cli-input-json file://deployment.json --region ap-northeast-1
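
deployment.json itself isn't shown; a rough sketch of what create-deployment expects (the bucket, key, and deployment group name here are assumptions):

# hypothetical deployment.json consumed by the create-deployment call above
cat << 'EOS' > deployment.json
{
  "applicationName": "demo-app",
  "deploymentGroupName": "demo-app-group",
  "revision": {
    "revisionType": "S3",
    "s3Location": {
      "bucket": "my-deploy-bucket",
      "key": "demo-app/deploy.zip",
      "bundleType": "zip"
    }
  }
}
EOS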

Steps for deploying with GitLab, CodePipeline, and CodeDeploy

1. Push to GitLab
2. GitLab CI/CD uploads the project to S3
3. CodePipeline detects the S3 upload
4. CodeBuild stores the build artifact in S3
5. CodeDeploy deploys to the EC2 instance
-> But how do you configure CodePipeline to detect the S3 upload?
--> In CodePipeline, choose Amazon S3 as the source provider and select CloudWatch Events as the change detection option

- What is an artifact?
An artifact is the collection of data processed by pipeline actions: the application's source code, the built application, dependencies, definition files, templates, and so on. Artifacts are produced by some actions and consumed by others, and are stored as ZIP files in the S3 artifact bucket.

### Procedure
Create the project
Create the IAM user and S3 bucket
Configure GitLab CI/CD
Create and configure the EC2 instance used for deployment
Configure CodeBuild
Configure CodeDeploy
Configure CodePipeline

$ tree
.
├── appspec.yml
├── buildspec.yml
└── hooks
    └── restart.sh

$ docker run --rm -it -p 8080:8080 -v $(pwd):/app openjdk:11 bash
$ cd /app
$ ./gradlew build
$ java -jar ./build/libs/cisample-0.0.1-SNAPSHOT.jar --server.port=8080

- What is S3 bucket versioning?
A way of keeping multiple variants of an object in the same bucket. With versioning enabled, you can preserve, retrieve, and restore every version of every object stored in the bucket.
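
A CodePipeline S3 source requires versioning to be enabled on the bucket; enabling and checking it from the CLI looks roughly like this (the bucket name is a placeholder):

$ aws s3api put-bucket-versioning --bucket my-artifact-bucket --versioning-configuration Status=Enabled
$ aws s3api get-bucket-versioning --bucket my-artifact-bucket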

### Configuring GitLab CI/CD
Add a step that uploads the project contents to S3 whenever a push is detected
Store the required values as GitLab CI/CD variables and reference them at run time

.gitlab-ci.yml

image: "alpine:3"

stages:
  - build
  - push

build:
  stage: build
  script:
    - apk add --no-cache zip
    - zip -qr src.zip *
    - zip -u src.zip .gitlab-ci.yml
  artifacts:
    paths:
      - src.zip

deploy:
  stage: push
  script:
    - AWS_CLI_VERSION="1.18.31"
    - |-
      apk add --no-cache \
        python \
        py-pip \
        mailcap
    - pip install --upgrade awscli==$AWS_CLI_VERSION python-magic
    - mkdir ~/.aws
    - |-
      cat << EOS > ~/.aws/config
      [default]
      region = ap-northeast-1
      output = json
      EOS
    - |-
      cat << EOS > ~/.aws/credentials
      [default]
      aws_access_key_id = ${aws_access_key_id}
      aws_secret_access_key = ${aws_secret_access_key}
      EOS
    - aws s3 cp ./src.zip s3://${bucket_name}/src.zip
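
As a quick sanity check that the credentials written above actually work, a couple of extra commands can be run in the same job before the upload (a sketch; bucket_name is the same CI variable used above):

$ aws sts get-caller-identity
$ aws s3 ls s3://${bucket_name}/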

buildspec.yml

version: 0.2

phases:
  build:
    commands:
      - echo start build at `date`
      - ./gradlew build
artifacts:
  type: zip
  files:
    - ./appspec.yml
    - ./hooks/restart.sh
    - ./build/libs/cisample-0.0.1-SNAPSHOT.jar
  name: artifact
  discard-paths: yes

appspec.yml

version: 0.0
os: linux
files:
  - source: /
    destination: /usr/local/app
hooks:
  AfterInstall:
    - location: restart.sh
      timeout: 180
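
hooks/restart.sh itself isn't listed; a minimal sketch of what it might contain, assuming the jar is installed to /usr/local/app as configured in the files section above:

#!/bin/bash
# hypothetical restart.sh: stop the running jar (if any) and start the newly deployed one
pkill -f 'cisample-0.0.1-SNAPSHOT.jar' || true
nohup java -jar /usr/local/app/cisample-0.0.1-SNAPSHOT.jar --server.port=8080 \
  > /usr/local/app/app.log 2>&1 &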

I see, even when using GitLab you still go through CodePipeline in the same way. The flow makes sense now.

[Docker] CodeDeploy and CodePipeline

buildspec.yml

version: 0.2

env:
  variables:
    AWS_REGION_NAME: ap-northeast-1
    ECR_REPOSITORY_NAME: hpscript
    DOCKER_BUILDKIT: "1"

phases:
  install:
    runtime-versions:
      docker: 19

  pre_build:
    commands:
      - AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query 'Account' --output text)
      - aws ecr --region ap-northeast-1 get-login-password | docker login --username AWS --password-stdin https://${AWS_ACCOUNT_ID}.dkr.ecr.ap-northeast-1.amazonaws.com/hpscript
      - REPOSITORY_URI=${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION_NAME}.amazonaws.com/${ECR_REPOSITORY_NAME}
      - IMAGE_TAG=$(echo ${CODEBUILD_RESOLVED_SOURCE_VERSION} | cut -c 1-7)
  
  build:
    commands:
      - docker image build -t ${REPOSITORY_URI}:${IMAGE_TAG} .
  post_build:
    commands:
      - docker image push ${REPOSITORY_URI}:${IMAGE_TAG}
      - printf '{"name":"%s","ImageURI":"%s"}' $ECR_REPOSITORY_NAME $REPOSITORY_URI:$IMAGE_TAG > imageDetail.json

artifacts:
  files:
    - imageDetail.json     

appspec.yaml

version: 1
Resources:
- TargetService:
    Type: AWS::ECS::Service
    Properties:
      TaskDefinition: <TASK_DEFINITION>
      LoadBalancerInfo:
        ContainerName: test-container
        ContainerPort: 80

taskdef.json

{
	"executionRoleArn": "arn:aws:iam::hoge:role/escTaskExectionRole",
	"containerDefinitions": [
		{
			"logConfiguration": {
				"logDriver": "awslogs",
				"options": {
					"awslogs-group": "/ecs/fuga-def",
					"awslogs-region": "ap-northeast-1",
					"awslogs-stream-prefix": "esc"
				}
			},
			"portMappings": [
				{
					"hostPort": 80,
					"protocol": "tcp",
					"containerPort": 80
				}
			],
			"cpu": 256,
			"readonlyRootFilesystem": true,
			"memoryReservation": 512,
			"image": "<IMAGE1_NAME>",
			"essential": true,
			"name": "app"
		}
	],
	"memory": "1024",
	"taskRoleArn": null,
	"compatibilities": [
		"EC2",
		"FARGATE"
	],
	"family": "test-def",
	"requiresCompatibilities": [
		"FARGATE"
	],
	"networkMode": "awsvpc",
	"cpu": "512"
}
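
The pipeline registers this task definition itself during the Blue/Green deploy, but for a manual test the same file can be registered from the CLI (note that <TASK_DEFINITION>'s <IMAGE1_NAME> placeholder would have to be replaced with a real image URI first):

$ aws ecs register-task-definition --cli-input-json file://taskdef.json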

$ tree
.
├── Dockerfile
├── appspec.yaml
├── buildspec.yml
├── index.html
└── taskdef.json

$ git push ${CODECOMMIT_REPO_URL}

### CodePipeline
Action: Deploy
Action provider: Amazon ECS(Blue/Green)
Region: Asia Pacific (Tokyo)
Input artifact: BuildArtifact, SourceArtifact

AWS CodeDeploy application name: AppECS-test-container-service
AWS CodeDeploy deployment group: DgpECS-test-container-service
Amazon ECS task definition: SourceArtifact
AWS CodeDeploy AppSpec file: SourceArtifact

Input artifact with image details: BuildArtifact
Placeholder text in the task definition: IMAGE1_NAME
Variable namespace: DeployVariables

### Error message
An AppSpec file is required, but could not be found in the revision

What's wrong with you?

[Docker] I want to use CodeBuild

$ tree
.
├── Dockerfile
└── index.html

0 directories, 2 files

$ sudo docker build -t myapache .
$ sudo docker run -dit --name myapp -p 8080:80 myapache
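
The Dockerfile isn't listed here; for an Apache container that just serves index.html, it is presumably something like this two-liner (a sketch based on the official httpd image):

$ cat << 'EOS' > Dockerfile
FROM httpd:2.4
COPY index.html /usr/local/apache2/htdocs/
EOS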

buildspec.yml

version: 0.2

env:
  variables:
    AWS_REGION_NAME: ap-northeast-1
    ECR_REPOSITORY_NAME: hoge
    DOCKER_BUILDKIT: "1"

phases:
  install:
    runtime-versions:
      docker: 19

  pre_build:
    commands:
      - AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query 'Account' --output text)
      - aws ecr --region ap-northeast-1 get-login-password | docker login --username AWS --password-stdin https://${AWS_ACCOUNT_ID}.dkr.ecr.ap-northeast-1.amazonaws.com/hpscript
      - REPOSITORY_URI=${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION_NAME}.amazonaws.com/${ECR_REPOSITORY_NAME}
      - IMAGE_TAG=$(echo ${CODEBUILD_RESOLVED_SOURCE_VERSION} | cut -c 1-7)
  
  build:
    commands:
      - docker image build -t ${REPOSITORY_URI}:${IMAGE_TAG} .
  post_build:
    commands:
      - docker image push ${REPOSITORY_URI}:${IMAGE_TAG}  

### AWS CodeBuild
- Project configuration
Project name: test
Build Badge: Enable build badge

- Source
Source provider: AWS CodeCommit
Repository: (specify)
Reference type: Branch
Branch: (specify)

- Environment
Environment image: Managed image
Operating system: Amazon Linux 2
Runtime(s): Standard
Image: aws/codebuild/amazonlinux2-x86_64-standard:3.0
Image version: Always use the latest image for this runtime
Environment type: Linux
Privileged: on
Service Role: New service role
Additional configuration
Timeout: 1 Hour 0 Minutes
Queued timeout: 8 Hours 0 Minutes
Certificate: Do not install any certificate
Compute: 3 GB memory, 2 vCPUs

- Buildspec
Use a buildspec file

- Artifacts
Type: No artifacts

- Logs
CloudWatch logs: CloudWatch logs

OK, got this far. Next up: CodeDeploy and CodePipeline.

Mirroring a GitHub repository to CodeCommit

$ aws --version
aws-cli/1.22.64 Python/3.8.10 Linux/5.4.0-109-generic botocore/1.24.9
$ git --version
git version 2.25.1

$ git init
$ git add .
$ git commit -m "first commit"
$ git remote add origin https://github.com/hpscript/cicd.git
$ git push -u origin master

### Create the CodeCommit repository
$ GITHUB_ACCOUNT=hogehoge
$ REPO_NAME=fuga
$ GITHUB_REPO_URL=https://github.com/${GITHUB_ACCOUNT}/${REPO_NAME}.git

$ aws codecommit create-repository --repository-name ${REPO_NAME}

### Define the destination repository information
$ CODECOMMIT_REGION=ap-northeast-1
$ CODECOMMIT_REPO_URL=https://git-codecommit.${CODECOMMIT_REGION}.amazonaws.com/v1/repos/${REPO_NAME}

### Configure the credential helper
$ git config --global credential.helper '!aws codecommit credential-helper $@'
$ git config --global credential.UseHttpPath true

### Push to the CodeCommit repository
$ git push ${CODECOMMIT_REPO_URL} --all
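
--all only pushes branches; if tags should be mirrored as well, one more push does it:

$ git push ${CODECOMMIT_REPO_URL} --tags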

I see, makes sense.

[docker] nginx as a reverse proxy

$ sudo docker run --rm -v /home/vagrant/dev/docker/basic:/tmp -it nginx /bin/bash
$ cp /etc/nginx/conf.d/default.conf /tmp/
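
After copying it out, default.conf is edited so that nginx forwards requests to the web container; a minimal sketch, assuming the container and network names used in the run commands below:

$ cat << 'EOS' > default.conf
server {
    listen 80;
    location / {
        proxy_pass http://web-container/;
    }
}
EOS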

$ sudo docker build -t rproxy-image .
$ sudo docker build -t web-image .

$ sudo docker network create web-network
$ sudo docker run --name=web-container --net=web-network -d web-image
$ sudo docker run --name=rproxy-container --net=web-network -p 8080:80 -d rproxy-image

Yep, OK.

[docker] docker-compose.yml, part 2

docker-compose.yml

version: "2"
services:
  db-container:
    image: busybox
    volumes:
      - db-volume:/var/lib/mysql/

  mysql-container:
    image: mysql:8
    environment:
      - MYSQL_ROOT_PASSWORD=dbpass01
      - MYSQL_DATABASE=redmine
    command: --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci --default-authentication-plugin=mysql_native_password
    volumes_from:
      - db-container
    depends_on:
      - db-container
    restart: always

  pma-container:
    image: phpmyadmin/phpmyadmin
    environment:
      - PMA_HOST=mysql-container
      - PMA_USER=root
      - PMA_PASSWORD=dbpass01
    ports:
      - 8080:80
    depends_on:
      - mysql-container
    restart: always

  wp-container:
    image: wordpress
    environment:
      - WORDPRESS_DB_HOST=mysql-container
      - WORDPRESS_DB_USER=root
      - WORDPRESS_DB_PASSWORD=dbpass01
    ports:
      # pma-container already publishes host port 8080, so WordPress gets a different host port
      - 8081:80
    depends_on:
      - mysql-container
    restart: always

  rm-container:
    image: redmine
    environment:
      - REDMINE_DB_MYSQL=mysql-container
      - REDMINE_DB_USERNAME=root
      - REDMINE_DB_PASSWORD=dbpass01
    ports:
      - 3000:3000
    depends_on:
      - mysql-container
    restart: always

volumes:
  db-volume:
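
Bringing the stack up and checking it is the usual docker-compose flow:

$ sudo docker-compose up -d
$ sudo docker-compose ps
$ sudo docker-compose logs mysql-container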