AWS CloudWatch

Amazon CloudWatch provides real-time monitoring of Amazon Web Services (AWS) resources and the applications you run on AWS. You can use CloudWatch to collect and track metrics, which are variables measured for your resources and applications.

The CloudWatch homepage automatically displays metrics about each of the AWS services you are using. In addition, you can create custom dashboards to view custom application metrics or view custom collections of selected metrics.

You can monitor metrics and create alarms that send notifications when a threshold is exceeded, or that automatically make changes to the resources you are monitoring. For example, you can monitor CPU usage and disk reads and writes for Amazon EC2 instances, and use that data to decide whether to launch additional instances to handle increased load. You can also use this data to save money by shutting down underutilized instances.
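As a sketch of the alarm workflow just described, an EC2 CPU alarm can be created with the AWS CLI. The instance ID and SNS topic ARN below are placeholders, not values from this post, and the command needs configured AWS credentials to actually run:

```shell
# Hypothetical alarm: notify when the average CPUUtilization of one EC2
# instance stays above 80% for two consecutive 5-minute periods.
aws cloudwatch put-metric-alarm \
  --alarm-name high-cpu \
  --namespace AWS/EC2 \
  --metric-name CPUUtilization \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --statistic Average \
  --period 300 \
  --evaluation-periods 2 \
  --threshold 80 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:ap-northeast-1:123456789012:notify-ops
```

The --alarm-actions target could just as well be an Auto Scaling policy ARN, which is how the "launch additional instances" scenario above gets wired up.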

What is the Repro SDK?

Wait, it's made in Japan? Honestly, the "Japan brand" has hit rock bottom. When it comes to IT, there isn't a single thing going for it.
https://repro.io/jp/

A whole range of measures for improving the user retention rate comes together in this one tool. You can discover issues in an app with its analytics features, such as quantitative analysis that measures access counts and qualitative analysis that observes how users actually use the app. It also supports app growth through marketing features, such as push notifications and in-app messages targeted at users based on those issues. Furthermore, with Repro, AI can automatically identify users with high potential through machine learning and use them for marketing.

I see.

Cleanup

Open storage area
– Delete unnecessary files that waste space and free up room for what you need.

– Delete unnecessary files such as caches, temporary files, and residual data from your Android device or apps
– See which apps occupy the most space
– Identify and delete apps you no longer use

File output and generation

#!/usr/bin/perl --

print "Content-type:text/html\n\n";
$file="finance.csv";
# read the CSV and print each line as-is
open(IN, $file) or die "$!";
while(<IN>){
	print $_;
}
close (IN);

Output works fine; the problem is generating a file.

#!/usr/bin/perl --

print "Content-type:text/html\n\n";
$file="finance.csv";
$outfile="finance_out.csv";

# copy finance.csv line by line into finance_out.csv
open(IN, $file) or die "$!";
open(OUT, ">$outfile") or die "$!";

while(<IN>){
	print OUT $_;
}
close (IN);
close (OUT);

So you can do it just by opening OUT with a .csv extension. Kind of anticlimactic.

Huh.

#!/usr/bin/perl --

print "Content-type:text/html\n\n";
$file="finance.csv";
$outfile="finance_out.csv";

open(IN, $file) or die "$!";
open(OUT, ">$outfile") or die "$!";

while(<IN>){
	# strip the trailing newline
	chomp($_);

	# split each line on commas
	@data = split(/,/, $_);

	# remove half-width and full-width spaces from each field
	for ($i=0; $i<@data; $i++){
		$data[$i] =~ s/(　| )+//g;
	}

	# write the cleaned row back out
	print OUT join(",", @data), "\n";
}
close (IN);
close (OUT);

Crash Report

What is a crash report?

When an error occurs in an application, the app may be terminated, and at that moment the device saves information about the state inside the terminated app.

A screen is shown when the app exits; if you tap to send the report, the development team can use the information to fix the defect.

The crash report contains the following information, but no personally identifying information:
– App version
– Android version
– Device type
– Error occurrence time
– Error occurrence point
– Number of occurrences
– Message you entered

Let's take a look at Docomo's "app crash report".
App crash report

Unified log collection on AWS

AWS services needed for log management
– CloudWatch Logs
– CloudTrail
– AWS Config

Achieve log data operations with Splunk?
https://www.splunk.com
Argh, whenever logs come up, machine learning always shows up too, one way or another...

LogStorage For AWS

Purpose
The purposes of managing logs in a system are broadly 1) monitoring, 2) management, and 3) auditing. 1) Monitoring: whether the system on AWS is operating normally, whether the rules for network access are set properly, and whether the system has security problems are continuously logged and checked by monitoring in real time; if a problem does occur, logs are also used to investigate the cause. 2) Management: the same monitoring points can be analyzed as historical data, and trends in system usage and traffic can be analyzed and used to plan system expansion and changes. 3) Auditing: logs are important for leaving an operation trail of who did what and when.

Log management on AWS
AWS already provides services for collecting and storing logs, and log management and monitoring on AWS is done using those services:
– Management/access logs for AWS services: AWS CloudTrail, AWS Config → CloudWatch Logs, S3, SQS
– Network traffic logs on AWS: VPC Flow Logs, ELB Logs → CloudWatch Logs
– Access logs, application logs, and system logs inside EC2 instances: AWS CloudWatch

I see, so the key piece is CloudWatch.

ls -lhs

[vagrant@localhost test]$ ls -lhs
total 20K
4.0K -rw-rw-r-- 1 vagrant vagrant 104 Apr 26 08:44 2019 index.php
4.0K -rw-rw-r-- 1 vagrant vagrant 18 Apr 29 11:39 2019 test
12K -rw-r--r-- 1 vagrant vagrant 11K Apr 24 08:36 2019 test.swf

-l option: also show file details
-h option: show sizes in human-readable units
-s option: show the allocated size of each file (note: it does not sort by size; that is -S)
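A quick demo of these options in a scratch directory (the directory and file name here are made up for illustration):

```shell
# make a scratch directory with one small file
dir=$(mktemp -d)
printf 'hello\n' > "$dir/index.php"

# each output line starts with the allocated size (-s), then the usual
# long-format details (-l) with a human-readable size column (-h)
ls -lhs "$dir"
```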

Ohhh, I see.

Let's install Kibana

[vagrant@localhost ~]$ cd /etc/yum.repos.d
[vagrant@localhost yum.repos.d]$ ls
CentOS-Base.repo logstash.repo remi-php54.repo
CentOS-Debuginfo.repo mariadb.repo remi-php70.repo
CentOS-Media.repo mysql-community-source.repo remi-php71.repo
CentOS-Vault.repo mysql-community.repo remi-php72.repo
CentOS-fasttrack.repo nginx.repo remi-php73.repo
elasticsearch.repo nodesource-el.repo remi-safe.repo
epel-testing.repo remi-glpi91.repo remi.repo
epel.repo remi-glpi92.repo
jenkins.repo remi-glpi93.repo
[vagrant@localhost yum.repos.d]$ sudo touch kibana.repo
[vagrant@localhost yum.repos.d]$ sudo vi kibana.repo
[vagrant@localhost yum.repos.d]$ sudo yum install kibana
Installed:
kibana.x86_64 0:7.0.0-1

Complete!
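The contents written into kibana.repo with vi are not shown above. Since the package that got installed is 7.0.0, a plausible definition (my assumption, modeled on the elasticsearch.repo that appears later in this post) would be:

```
[kibana-7.x]
name=Kibana repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
```

(Elastic serves all of its 7.x packages, Kibana included, from the same yum repository.)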

Here-documents

[vagrant@localhost test]$ echo line1
line1
[vagrant@localhost test]$ echo line2
line2
[vagrant@localhost test]$ echo line3
line3
[vagrant@localhost test]$ cat << EOF
> line1
> line2
> line3
> EOF
line1
line2
line3

Wait, you can create a file with a here-document!?

[vagrant@localhost test]$ cat << EOF > test
> line1
> line2
> line3
> EOF
[vagrant@localhost test]$ cat test
line1
line2
line3

Awesome!!
So then, when I try to create the kibana repo...
[vagrant@localhost ~]$ sudo cat << _EOF_ > /etc/yum.repos.d/kibana.repo
> [kibana-4.5]
> name=Kibana repository for 4.5.x packages
> baseurl=http://packages.elastic.co/kibana/4.5/centos
> gpgcheck=1
> gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
> enabled=1
> _EOF_
-bash: /etc/yum.repos.d/kibana.repo: Permission denied

Oh, come on...
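The "Permission denied" happens because the `>` redirection is performed by the calling shell (the unprivileged vagrant user), not by the `cat` that sudo runs as root. A common fix is to let `sudo tee` open the file instead; sketched below against a temp file so it runs without root:

```shell
# In the real case you would write:
#   sudo tee /etc/yum.repos.d/kibana.repo > /dev/null <<'_EOF_' ... _EOF_
# because tee (run via sudo) opens the file, not the calling shell.
# Demonstrated here on a temp path that needs no root:
repo=$(mktemp)
tee "$repo" > /dev/null <<'_EOF_'
[kibana-4.5]
name=Kibana repository for 4.5.x packages
baseurl=http://packages.elastic.co/kibana/4.5/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
_EOF_
cat "$repo"
```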

Install Elasticsearch

[vagrant@localhost ~]$ rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
[vagrant@localhost ~]$ cd /etc/yum.repos.d/
[vagrant@localhost yum.repos.d]$ sudo touch elasticsearch.repo
[vagrant@localhost yum.repos.d]$ sudo vi elasticsearch.repo

[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

[vagrant@localhost yum.repos.d]$ sudo yum install elasticsearch

### NOT starting on installation, please execute the following statements to configure elasticsearch service to start automatically using chkconfig
sudo chkconfig --add elasticsearch
### You can start elasticsearch service by executing
sudo service elasticsearch start
Created elasticsearch keystore in /etc/elasticsearch
Verifying : elasticsearch-7.0.0-1.x86_64 1/1

Installed:
elasticsearch.x86_64 0:7.0.0-1

Complete!

[vagrant@localhost yum.repos.d]$ sudo chkconfig --add elasticsearch
[vagrant@localhost yum.repos.d]$ sudo service elasticsearch start
elasticsearch を起動中: OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000c5330000, 986513408, 0) failed; error=’Not enough space’ (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 986513408 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /var/log/elasticsearch/hs_err_pid5271.log
[FAILED]
Whaaaaat!!
Lately there have been way too many "not enough memory" errors...

[vagrant@localhost ~]$ sudo cat /etc/elasticsearch/jvm.options
## JVM configuration

################################################################
## IMPORTANT: JVM heap size
################################################################
##
## You should always set the min and max JVM heap
## size to the same value. For example, to set
## the heap to 4 GB, set:
##
## -Xms4g
## -Xmx4g
##
## See https://www.elastic.co/guide/en/elasticsearch/reference/current/heap-size.html
## for more information
##
################################################################

# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space

-Xms1g
-Xmx1g

################################################################
## Expert settings
################################################################
##
## All settings below this section are considered
## expert settings. Don't tamper with them unless
## you understand what you are doing
##
################################################################

## GC configuration
-XX:+UseConcMarkSweepGC
-XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly

## G1GC Configuration
# NOTE: G1GC is only supported on JDK version 10 or later.
# To use G1GC uncomment the lines below.
# 10-:-XX:-UseConcMarkSweepGC
# 10-:-XX:-UseCMSInitiatingOccupancyOnly
# 10-:-XX:+UseG1GC
# 10-:-XX:InitiatingHeapOccupancyPercent=75

## DNS cache policy
# cache ttl in seconds for positive DNS lookups noting that this overrides the
# JDK security property networkaddress.cache.ttl; set to -1 to cache forever
-Des.networkaddress.cache.ttl=60
# cache ttl in seconds for negative DNS lookups noting that this overrides the
# JDK security property networkaddress.cache.negative ttl; set to -1 to cache
# forever
-Des.networkaddress.cache.negative.ttl=10

## optimizations

# pre-touch memory pages used by the JVM during initialization
-XX:+AlwaysPreTouch

## basic

# explicitly set the stack size
-Xss1m

# set to headless, just in case
-Djava.awt.headless=true

# ensure UTF-8 encoding by default (e.g. filenames)
-Dfile.encoding=UTF-8

# use our provided JNA always versus the system one
-Djna.nosys=true

# turn off a JDK optimization that throws away stack traces for common
# exceptions because stack traces are important for debugging
-XX:-OmitStackTraceInFastThrow

# flags to configure Netty
-Dio.netty.noUnsafe=true
-Dio.netty.noKeySetOptimization=true
-Dio.netty.recycler.maxCapacityPerThread=0

# log4j 2
-Dlog4j.shutdownHookEnabled=false
-Dlog4j2.disable.jmx=true

-Djava.io.tmpdir=${ES_TMPDIR}

## heap dumps

# generate a heap dump when an allocation from the Java heap fails
# heap dumps are created in the working directory of the JVM
-XX:+HeapDumpOnOutOfMemoryError

# specify an alternative path for heap dumps; ensure the directory exists and
# has sufficient space
-XX:HeapDumpPath=/var/lib/elasticsearch

# specify an alternative path for JVM fatal error logs
-XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log

## JDK 8 GC logging

8:-XX:+PrintGCDetails
8:-XX:+PrintGCDateStamps
8:-XX:+PrintTenuringDistribution
8:-XX:+PrintGCApplicationStoppedTime
8:-Xloggc:/var/log/elasticsearch/gc.log
8:-XX:+UseGCLogFileRotation
8:-XX:NumberOfGCLogFiles=32
8:-XX:GCLogFileSize=64m

# JDK 9+ GC logging
9-:-Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m
# due to internationalization enhancements in JDK 9 Elasticsearch need to set the provider to COMPAT otherwise
# time/date parsing will break in an incompatible way for some date patterns and locals
9-:-Djava.locale.providers=COMPAT

It's the -Xms1g / -Xmx1g lines.
I want to change the VM's memory.

What do you mean the default is 2G...

For now, I'll change it to 500m.
[vagrant@localhost ~]$ sudo vi /etc/elasticsearch/jvm.options
[vagrant@localhost ~]$ sudo service elasticsearch start
elasticsearch を起動中: OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
[ OK ]
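For reference, the edit made in /etc/elasticsearch/jvm.options is just the two heap-size lines (keeping min and max equal, as the file's own comments recommend):

```
-Xms500m
-Xmx500m
```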