td-agent execution log

$ /var/log/td-agent/td-agent.log
~ omission  ~
2013-11-16 14:21:22 +0900 [info]: adding source type="forward"
2013-11-16 14:21:22 +0900 [info]: adding source type="http"
2013-11-16 14:21:22 +0900 [info]: adding source type="debug_agent"
2013-11-16 14:21:22 +0900 [info]: adding source type="tail"
2013-11-16 14:21:22 +0900 [warn]: 'pos_file PATH' parameter is not set to a 'tail' source.
2013-11-16 14:21:22 +0900 [warn]: this parameter is highly recommended to save the position to resume tailing.
2013-11-16 14:21:22 +0900 [info]: adding match pattern="td.*.*" type="tdlog"
2013-11-16 14:21:22 +0900 [info]: adding match pattern="debug.**" type="stdout"
2013-11-16 14:21:22 +0900 [info]: adding match pattern="mongo.**" type="mongo"
2013-11-16 14:21:22 +0900 [info]: adding match pattern="apache.access" type="file"
2013-11-16 14:21:22 +0900 [error]: unexpected error error_class=Mongo::ConnectionFailure error=#
  2013-11-16 14:21:22 +0900 [error]: /usr/lib64/fluent/ruby/lib/ruby/gems/1.9.1/gems/mongo-1.8.6/lib/mongo/networking.rb:167:in `receive_header'
~ omission  ~
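The warning above about `pos_file` can be fixed by adding one to the tail source. A minimal sketch of a td-agent `<source>` block, assuming an Apache access log is being tailed (the paths and tag are examples, not from this server's actual config):

```
# /etc/td-agent/td-agent.conf (sketch; paths and tag are examples)
<source>
  type tail
  path /var/log/httpd/access_log
  # pos_file records how far the file has been read,
  # so tailing resumes from the same position after a restart.
  pos_file /var/log/td-agent/httpd.access_log.pos
  tag apache.access
  format apache2
</source>
```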

/var/log/lsyncd/lsyncd.status

[vagrant@localhost ~]$ sudo tail -n 10 /var/log/lsyncd/lsyncd.status
Lsyncd status report at Tue Mar 12 00:44:51 2019

Sync1 source=/tmp/origin/
There are 0 delays
Excluding:
  nothing.


Inotify watching 1 directories
  1: /tmp/origin/

This looks like a status report rather than an lsyncd execution log.

What does /var/log/messages contain?

Various logs are written to the following files, depending on system activity.

Log file: contents
/var/log/messages: general system-related messages
/var/log/cron: messages about jobs executed periodically
/var/log/maillog: messages about mail
/var/log/spooler: printing messages
/var/log/boot.log: messages about OS boot

Therefore, "messages" holds general system-related messages.

[vagrant@localhost ~]$ sudo tail -n 10 /var/log/messages
May 29 08:22:59 localhost kernel: ata3.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
May 29 08:22:59 localhost kernel: ata3.00: failed command: FLUSH CACHE
May 29 08:22:59 localhost kernel: ata3.00: cmd e7/00:00:00:00:00/00:00:00:00:00/a0 tag 7
May 29 08:22:59 localhost kernel: res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
May 29 08:22:59 localhost kernel: ata3.00: status: { DRDY }
May 29 08:22:59 localhost kernel: ata3: hard resetting link
May 29 08:22:59 localhost kernel: ata3: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
May 29 08:22:59 localhost kernel: ata3.00: configured for UDMA/133
May 29 08:22:59 localhost kernel: ata3.00: device reported invalid CHS sector 0
May 29 08:22:59 localhost kernel: ata3: EH complete
[vagrant@localhost ~]$ sudo tail -n 10 /var/log/messages-20190526
May 25 12:24:02 localhost /etc/init.d/kibana: Attempting 'restart' on kibana
May 25 12:24:02 localhost /etc/init.d/kibana: Killing kibana (pid 10114) with SIGTERM
May 25 12:24:02 localhost /etc/init.d/kibana: Waiting kibana (pid 10114) to die...
May 25 12:24:03 localhost /etc/init.d/kibana: Waiting kibana (pid 10114) to die...
May 25 12:24:03 localhost /etc/init.d/kibana: kibana stopped.
May 25 12:24:03 localhost /etc/init.d/kibana: kibana started
May 25 12:36:01 localhost dhclient[963]: DHCPREQUEST on eth0 to 10.0.2.2 port 67 (xid=0x5c1ebdd1)
May 25 12:36:01 localhost dhclient[963]: DHCPACK from 10.0.2.2 (xid=0x5c1ebdd1)
May 25 12:36:03 localhost dhclient[963]: bound to 10.0.2.15 -- renewal in 39825 seconds.
May 26 03:27:07 localhost rsyslogd: [origin software="rsyslogd" swVersion="5.8.10" x-pid="1170" x-info="http://www.rsyslog.com"] rsyslogd was HUPed

[vagrant@localhost ~]$ sudo tail -n 10 /var/log/secure
May 27 22:14:51 localhost sshd[12653]: pam_unix(sshd:session): session closed for user vagrant
May 29 08:31:19 localhost sshd[13390]: Accepted password for vagrant from 192.168.35.1 port 64022 ssh2
May 29 08:31:19 localhost sshd[13390]: pam_unix(sshd:session): session opened for user vagrant by (uid=0)
May 29 08:31:19 localhost sshd[13392]: subsystem request for sftp
May 29 08:50:13 localhost sshd[13400]: Accepted password for vagrant from 192.168.35.1 port 49488 ssh2
May 29 08:50:13 localhost sshd[13400]: pam_unix(sshd:session): session opened for user vagrant by (uid=0)
May 29 08:51:32 localhost sudo: vagrant : TTY=pts/0 ; PWD=/home/vagrant ; USER=root ; COMMAND=/usr/bin/tail -n 10 /var/log/messages
May 29 08:52:25 localhost sudo: vagrant : TTY=pts/0 ; PWD=/home/vagrant ; USER=root ; COMMAND=/usr/bin/tail -n 10 /var/log/messages-20190526
May 29 08:58:59 localhost sudo: vagrant : TTY=pts/0 ; PWD=/home/vagrant ; USER=root ; COMMAND=/usr/bin/tail -n /var/log/secure
May 29 08:59:23 localhost sudo: vagrant : TTY=pts/0 ; PWD=/home/vagrant ; USER=root ; COMMAND=/usr/bin/tail -n 10 /var/log/secure

[vagrant@localhost ~]$ sudo tail -n 10 /var/log/cron
May 29 03:22:02 localhost anacron[13128]: Normal exit (1 job run)
May 29 04:01:01 localhost CROND[13336]: (root) CMD (run-parts /etc/cron.hourly)
May 29 04:01:01 localhost run-parts(/etc/cron.hourly)[13336]: starting 0anacron
May 29 04:01:01 localhost run-parts(/etc/cron.hourly)[13345]: finished 0anacron
May 29 06:01:02 localhost CROND[13347]: (root) CMD (run-parts /etc/cron.hourly)
May 29 06:01:02 localhost run-parts(/etc/cron.hourly)[13347]: starting 0anacron
May 29 06:01:02 localhost run-parts(/etc/cron.hourly)[13356]: finished 0anacron
May 29 07:01:02 localhost CROND[13372]: (root) CMD (run-parts /etc/cron.hourly)
May 29 07:01:03 localhost run-parts(/etc/cron.hourly)[13372]: starting 0anacron
May 29 07:01:03 localhost run-parts(/etc/cron.hourly)[13381]: finished 0anacron

[vagrant@localhost ~]$ sudo tail -n 10 /var/log/maillog

[vagrant@localhost ~]$ sudo tail -n 10 /var/log/spooler

[vagrant@localhost ~]$ sudo tail -n 10 /var/log/boot.log
Starting httpd: httpd: Could not reliably determine the server's fully qualified domain name, using localhost.localdomain for ServerName
[ OK ]
Starting crond: [ OK ]
Starting Jenkins [ OK ]
May 19 07:16:49 localhost NET[12444]: /sbin/dhclient-script : updated /etc/resolv.conf
May 20 20:09:00 localhost NET[14514]: /sbin/dhclient-script : updated /etc/resolv.conf
May 22 08:31:26 localhost NET[14936]: /sbin/dhclient-script : updated /etc/resolv.conf
May 24 19:43:41 localhost NET[6409]: /sbin/dhclient-script : updated /etc/resolv.conf
May 26 13:02:39 localhost NET[12459]: /sbin/dhclient-script : updated /etc/resolv.conf
May 29 03:04:33 localhost NET[13192]: /sbin/dhclient-script : updated /etc/resolv.conf

Well, now I clearly see that the output differs completely depending on the log file.

LogLevel and the error log

The error log has eight severity levels that determine which errors get recorded; messages at the specified level and more severe are logged. Use the "LogLevel" directive to set it.

Level: meaning
emerg: emergency; the server cannot run
alert: an error more serious than crit
crit: critical error
error: error
warn: warning
notice: notification message
info: server information
debug: debugging information

So basically, the levels to act on are "crit" and "error"?
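For instance, pulling only the serious entries out of an Apache-style error log can be done with a grep on the level tags. The sample log below is made up for illustration; real error-log paths vary by distro:

```shell
# Create a small made-up sample of an Apache error log.
cat > /tmp/sample_error_log <<'EOF'
[Wed May 29 08:00:01 2019] [notice] Apache/2.2.15 configured -- resuming normal operations
[Wed May 29 08:05:12 2019] [error] File does not exist: /var/www/html/favicon.ico
[Wed May 29 08:07:45 2019] [crit] (13)Permission denied: /var/www/html/.htaccess pcfg_openfile
EOF

# Keep only the crit and error lines -- the levels usually worth acting on.
grep -E '\[(crit|error)\]' /tmp/sample_error_log
```

The same pattern works against the real file, e.g. `sudo grep -E '\[(crit|error)\]' /var/log/httpd/error_log`.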

Integrated log collection on AWS

Services needed for log management on AWS
- CloudWatch Logs
- CloudTrail
- AWS Config

Realize log data operations with Splunk?
https://www.splunk.com
Ahh, whenever logging comes up, machine learning always shows up too, one way or another...

LogStorage For AWS

Purpose
The purpose of managing logs in a system falls largely into 1) monitoring, 2) management, and 3) auditing. 1) Monitoring checks in real time whether the system on AWS is operating normally, whether the rules for network access are set properly, and whether there are security problems, all of which are continuously logged; when a problem does occur, the logs are also used to investigate the cause. 2) Management analyzes the same points as historical data, so that trends in system usage and traffic can feed capacity expansion and change planning. 3) Auditing relies on logs as an operation trail of who did what, and when.

Log management on AWS
AWS already provides services for collecting and storing logs, so on AWS you manage and monitor logs using those services.
- Management/access logs for AWS services: AWS CloudTrail, AWS Config → CloudWatch Logs, S3, SQS
- Network traffic logs on AWS: VPC Flow Logs, ELB Logs → CloudWatch Logs
- Access logs, application logs, and system logs inside EC2 instances: AWS CloudWatch
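As a sketch of that last point, shipping /var/log/messages from an EC2 instance to CloudWatch Logs with the classic awslogs agent looks roughly like this (the group and stream names are examples, not settings from this environment):

```
# /etc/awslogs/awslogs.conf (sketch; group/stream names are examples)
[/var/log/messages]
file = /var/log/messages
log_group_name = /var/log/messages
log_stream_name = {instance_id}
datetime_format = %b %d %H:%M:%S
```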

I see, so the key piece is CloudWatch.

Let's learn Logstash

Input
Data is distributed across many systems in many formats, and Logstash provides a variety of input plugins that capture events from different sources simultaneously. You can easily, continuously, and smoothly ingest data from logs, metrics, web applications, data stores, and various cloud services.

Filter (is this the important part?)
Data parsing and transformation
As data travels from source to store, Logstash filters parse each event, identifying and structuring its fields. They further transform the data into a common format best suited for analysis that delivers business value.

- Derive structure from unstructured data with grok filters
- Get geographical information from IP addresses
- Anonymize personal information and completely exclude confidential fields
- Easily process data from any source, format, or schema
Oh, this looks pretty powerful...

Output
Choose a storage destination and ship your data.
Elasticsearch is the ideal destination, but other destinations are of course available too, without limiting your search and analysis possibilities.

Logstash lets you specify a variety of output destinations and route data wherever you like. This flexibility lets you feed many downstream systems.
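A minimal pipeline tying the three stages together might look like the sketch below; the file path, grok pattern, and index name are illustrative choices, not from any config in this post:

```
# logstash.conf (sketch; paths and index name are examples)
input {
  file {
    path => "/var/log/httpd/access_log"
    start_position => "beginning"
  }
}

filter {
  grok {
    # Parse Apache combined-format access logs into structured fields.
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  geoip {
    # Add geographical information derived from the client IP.
    source => "clientip"
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "apache-access-%{+YYYY.MM.dd}"
  }
}
```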

Yeah, now I want to play with some samples. OK, OK!

Laravel log

storage/logs/laravel.log

[2018-11-17 11:22:12] local.ERROR: SQLSTATE[42000]: Syntax error or access violation: 1071 Specified key was too long; max key length is 767 bytes (SQL: alter table `users` add unique `users_email_unique`(`email`)) {"exception":"[object] (Illuminate\\Database\\QueryException(code: 42000): SQLSTATE[42000]: Syntax error or access violation: 1071 Specified key was too long; max key length is 767 bytes (SQL: alter table `users` add unique `users_email_unique`(`email`)) at /home/vagrant/local/zeus/vendor/laravel/framework/src/Illuminate/Database/Connection.php:664, PDOException(code: 42000): SQLSTATE[42000]: Syntax error or access violation: 1071 Specified key was too long; max key length is 767 bytes at /home/vagrant/local/zeus/vendor/laravel/framework/src/Illuminate/Database/Connection.php:458)
[stacktrace]

httpd.conf error log

path: /etc/httpd/conf

ErrorLog logs/error_log

Set the log output destination with error_log in php.ini.
When specifying a file name, use an absolute path.
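In php.ini that looks roughly like this (the log path here is an example; the point is that it must be absolute):

```
; php.ini (sketch; the path is an example)
log_errors = On
error_log = /var/log/php/php_errors.log
```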

Centralized log service configuration settings

Log type
Logs are not all the same; their purposes differ depending on the type.
- application log
- access log
- security log
- other

Overall view of log processing
– collect: AmazonS3, Amazon Kinesis, Amazon DynamoDB, Amazon RDS(Aurora)
– process: AWS Lambda, KCL Apps
– analyze: Amazon EMR
– save: Amazon Redshift, Amazon Machine Learning

- Logs output from AWS services
- Logs specific to each environment, such as the OS and applications