K8s Study Notes Series 7 -- Using ELK to Collect and Analyze Application Logs in K8s

乡下的树 · March 10, 2021

1 A Log Collection Scheme for K8s

Business applications in a K8s cluster are highly dynamic: as the orchestrator works, containers are constantly created, destroyed, rescheduled across nodes, and scaled in and out.
We therefore need a log collection and analysis system that can:

  1. Collect – gather log data from multiple sources (a streaming log shipper)
  2. Transport – reliably move the log data to a central system (a message queue)
  3. Store – persist the logs as structured data (a search engine)
  4. Analyze – support convenient search and analysis, ideally with a GUI (web UI)
  5. Alert – provide error reporting and monitoring (a monitoring system)
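
In this walkthrough those roles map to: filebeat (collect, running as a Pod sidecar) -> Kafka (transport) -> logstash (consume and filter) -> Elasticsearch (store) -> Kibana (analyze), which is exactly the stack deployed in the sections below.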

1.1 Drawbacks of the traditional ELK model

(figure: traditional ELK model)

1.2 K8s container log collection model

(figure: K8s container log collection model)

2 Building the Tomcat Base Image

Convert dubbo-demo-consumer to run under Tomcat,
and prepare a Tomcat base image.

2.1 Prepare the Tomcat base image

2.1.1 Download the Tomcat 8 binary package

Run on the ops host k8s-7-200.itdo.top:

[root@k8s-7-200 ~]# cd /opt/src/
[root@k8s-7-200 ~]# wget https://mirrors.cloud.tencent.com/apache/tomcat/tomcat-8/v8.5.75/bin/apache-tomcat-8.5.75.tar.gz
[root@k8s-7-200 ~]# mkdir /data/dockerfile/tomcat
[root@k8s-7-200 ~]# tar xf apache-tomcat-8.5.75.tar.gz -C /data/dockerfile/tomcat
[root@k8s-7-200 ~]# cd /data/dockerfile/tomcat

2.1.2 Tune the Tomcat configuration

Remove the bundled web applications:

[root@k8s-7-200 ~]# rm -rf apache-tomcat-8.5.75/webapps/*

Disable the AJP connector:

[root@k8s-7-200 ~]#  vim apache-tomcat-8.5.75/conf/server.xml
  <!-- <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" /> -->

Adjust the logging configuration.
Remove the 3manager and 4host-manager entries from the handlers line:

[root@k8s-7-200 ~]#  vim apache-tomcat-8.5.75/conf/logging.properties
handlers = 1catalina.org.apache.juli.AsyncFileHandler, 2localhost.org.apache.juli.AsyncFileHandler, java.util.logging.ConsoleHandler

Set the log level to INFO:

1catalina.org.apache.juli.AsyncFileHandler.level = INFO
2localhost.org.apache.juli.AsyncFileHandler.level = INFO
java.util.logging.ConsoleHandler.level = INFO

Comment out every 3manager and 4host-manager logging setting:

#3manager.org.apache.juli.AsyncFileHandler.level = FINE
#3manager.org.apache.juli.AsyncFileHandler.directory = ${catalina.base}/logs
#3manager.org.apache.juli.AsyncFileHandler.prefix = manager.
#3manager.org.apache.juli.AsyncFileHandler.encoding = UTF-8
#4host-manager.org.apache.juli.AsyncFileHandler.level = FINE
#4host-manager.org.apache.juli.AsyncFileHandler.directory = ${catalina.base}/logs
#4host-manager.org.apache.juli.AsyncFileHandler.prefix = host-manager.
#4host-manager.org.apache.juli.AsyncFileHandler.encoding = UTF-8

2.2 Prepare the Docker image

2.2.1 Create the Dockerfile

[root@k8s-7-200 ~]# vim Dockerfile

# base image: adjust the registry path for your environment
FROM harbor.itdo.top/public/jre8:8u112
RUN /bin/cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime &&\
  echo 'Asia/Shanghai' >/etc/timezone
ENV CATALINA_HOME /opt/tomcat
ENV LANG zh_CN.UTF-8
# match the Tomcat version extracted above
ADD apache-tomcat-8.5.75/ /opt/tomcat
ADD config.yml /opt/prom/config.yml
ADD jmx_javaagent-0.3.1.jar /opt/prom/jmx_javaagent-0.3.1.jar
WORKDIR /opt/tomcat
ADD entrypoint.sh /entrypoint.sh
CMD ["/bin/bash","/entrypoint.sh"]

2.2.2 Prepare the files the Dockerfile needs

Download the jar needed for JVM monitoring:

[root@k8s-7-200 ~]# wget -O jmx_javaagent-0.3.1.jar https://repo1.maven.org/maven2/io/prometheus/jmx/jmx_prometheus_javaagent/0.3.1/jmx_prometheus_javaagent-0.3.1.jar

Config file read by the JMX agent:
[root@k8s-7-200 ~]# vim config.yml
Purpose: Prometheus file_sd-based discovery. It is mainly needed when Tomcat runs outside K8s; since ours runs inside K8s and uses pod-based discovery rules, this file is optional here.

---
rules:
 - pattern: '.*'

Container startup script:
[root@k8s-7-200 ~]# vim entrypoint.sh

#!/bin/bash
M_OPTS="-Duser.timezone=Asia/Shanghai -javaagent:/opt/prom/jmx_javaagent-0.3.1.jar=$(hostname -i):${M_PORT:-"12346"}:/opt/prom/config.yml" # pass Pod ip:port and the scrape rules to the JVM monitoring agent
C_OPTS=${C_OPTS}             # extra Apollo parameters appended at startup; defined in the Deployment yaml
MIN_HEAP=${MIN_HEAP:-"128m"} # initial JVM heap size
MAX_HEAP=${MAX_HEAP:-"128m"} # maximum JVM heap size
JAVA_OPTS=${JAVA_OPTS:-"-Xmn384m -Xss256k -Duser.timezone=GMT+08  -XX:+DisableExplicitGC -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+CMSParallelRemarkEnabled -XX:+UseCMSCompactAtFullCollection -XX:CMSFullGCsBeforeCompaction=0 -XX:+CMSClassUnloadingEnabled -XX:LargePageSizeInBytes=128m -XX:+UseFastAccessorMethods -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=80 -XX:SoftRefLRUPolicyMSPerMB=0 -XX:+PrintClassHistogram  -Dfile.encoding=UTF8 -Dsun.jnu.encoding=UTF8"}     # young-generation sizing and CMS GC settings
CATALINA_OPTS="${CATALINA_OPTS}"
JAVA_OPTS="${M_OPTS} ${C_OPTS} -Xms${MIN_HEAP} -Xmx${MAX_HEAP} ${JAVA_OPTS}"
sed -i -e "1a\JAVA_OPTS=\"$JAVA_OPTS\"" -e "1a\CATALINA_OPTS=\"$CATALINA_OPTS\"" /opt/tomcat/bin/catalina.sh   # inject JAVA_OPTS and CATALINA_OPTS into the startup script
cd /opt/tomcat && /opt/tomcat/bin/catalina.sh run 2>&1 >> /opt/tomcat/logs/stdout.log  # 'run' keeps Tomcat in the foreground so the container stays up; with this redirect order stdout is appended to stdout.log while stderr still reaches the container log

Make it executable:

[root@k8s-7-200 ~]# chmod +x entrypoint.sh

2.2.3 Build the Docker image and push it to Harbor

[root@k8s-7-200 ~]# docker build . -t harbor.itdo.top/base/tomcat:v8.5.75
[root@k8s-7-200 ~]# docker push harbor.itdo.top/base/tomcat:v8.5.75
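
Optionally, smoke-test the image locally before using it as a base (this assumes host port 18080 is free; a 404 response is expected since webapps was emptied):

[root@k8s-7-200 ~]# docker run --rm -d -p 18080:8080 --name tomcat-smoke harbor.itdo.top/base/tomcat:v8.5.75
[root@k8s-7-200 ~]# curl -I http://127.0.0.1:18080/    # any HTTP response means Tomcat is up
[root@k8s-7-200 ~]# docker rm -f tomcat-smoke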

Modify the dubbo-demo-web project

Note: switching the app from a runnable jar to Tomcat requires changes to the project's dependencies and configuration. Not being a developer, I simply use the tomcat branch of the dubbo-demo-web source:
https://gitee.com/stanleywang/dubbo-demo-web/tree/tomcat
Download the tomcat branch directly:

git clone -b tomcat https://gitee.com/itdotop/dubbo-demo-web.git
cd dubbo-demo-web
rm -rf .git 
git init  
git add .
git commit -m "tomcat commit"
git remote add origin [email protected]:itdotop/dubbo-demo-web.git
git push -u origin master:tomcat

Create a new Jenkins pipeline

  • New Item -> Enter an item name -> Pipeline -> OK
    Item name: tomcat-demo

image-1648091869067

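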
  • 勾选Discard old builds
    Days to keep builds : 3
    Max # of builds to keep : 30
    clipboard
  • 勾选This project is parameterized(添加下面11个参数)
  1. Add Parameter -> String Parameter
    Name : app_name
    Default Value : 空值
    Description : the project name. e.g: dubbo-demo-service
    image-1648092054753
  2. Add Parameter -> String Parameter
    Name : image_name
    Default Value :
    Description : project docker image name. e.g: app/dubbo-demo-service
    clipboard-1648092096216
  3. Add Parameter -> String Parameter
    Name : git_repo
    Default Value :
    Description : project git repository. e.g: https://gitee.com/itdotop/dubbo-demo-service.git (the project's central git repository address)
    clipboard-1648092120861
  4. Add Parameter -> String Parameter
    Name : git_ver
    Default Value : tomcat    # we use the tomcat branch
    Description : git commit id of the project.
    Note: in production a git commit id is recommended, since it is unique and cannot be tampered with.
    clipboard-1648092144425
  5. Add Parameter -> String Parameter
    Name : add_tag
    Default Value :
    Description : project docker image tag, date_timestamp recommended. e.g: 190117_1920
    image-1648092186883
  6. Add Parameter -> String Parameter
    Name : mvn_dir
    Default Value : ./
    Description : project maven directory. e.g: ./
    clipboard-1648092899719
  7. Add Parameter -> String Parameter
    Name : target_dir
    Default Value : ./dubbo-client/target (ask the developers for the actual path)
    Description : the relative path of target file such as .jar or .war package. e.g: ./dubbo-server/target
    clipboard-1648092935884
  8. Add Parameter -> String Parameter
    Name : mvn_cmd
    Default Value : mvn clean package -Dmaven.test.skip=true
    Description : maven command. e.g: mvn clean package -e -q -Dmaven.test.skip=true
    clipboard-1648092959749
  9. Add Parameter -> Choice Parameter
    Name : base_image
    Default Value :
    base/tomcat:v7.0.94
    base/tomcat:v8.5.75
    base/tomcat:v9.0.17
    Description : project base image list in harbor.itdo.top (the Docker base image the project runs on)
    clipboard-1648092996258
  10. Add Parameter -> Choice Parameter
    Name : maven
    Default Value :
    3.6.0-8u181
    3.2.5-6u025
    2.2.1-6u025
    3.6.1-11u013
    3.6.1-7u80
    3.6.1-8u221
    Description : different maven edition (the Maven version used for the build)
    clipboard-1648093020567
  11. Add Parameter -> String Parameter (new versus the jar pipeline: the webapp dir parameter)
    Name : root_url
    Default Value : ROOT
    Description : webapp dir.
    clipboard-1648093068282

Pipeline Script

Note: the generated Dockerfile does ADD ./project_dir /opt/tomcat/webapps/ROOT. Because the war file name is not fixed, we unpack the war and copy the resulting directory into the directory you define, ROOT by default.

pipeline {
  agent any 
    stages {
    stage('pull') { //get project code from repo 
      steps {
        sh "git clone ${params.git_repo} ${params.app_name}/${env.BUILD_NUMBER} && cd ${params.app_name}/${env.BUILD_NUMBER} && git checkout ${params.git_ver}"
        }
    }
    stage('build') { //exec mvn cmd
      steps {
        sh "cd ${params.app_name}/${env.BUILD_NUMBER}  && /var/jenkins_home/maven-${params.maven}/bin/${params.mvn_cmd}"
      }
    }
    stage('unzip') { //unzip  target/*.war -c target/project_dir
      steps {
        sh "cd ${params.app_name}/${env.BUILD_NUMBER} && cd ${params.target_dir} && mkdir project_dir && unzip *.war -d ./project_dir"
      }
    }
    stage('image') { //build image and push to registry
      steps {
        writeFile file: "${params.app_name}/${env.BUILD_NUMBER}/Dockerfile", text: """FROM harbor.itdo.top/${params.base_image}
ADD ${params.target_dir}/project_dir /opt/tomcat/webapps/${params.root_url}"""
        sh "cd  ${params.app_name}/${env.BUILD_NUMBER} && docker build -t harbor.itdo.top/${params.image_name}:${params.git_ver}_${params.add_tag} . && docker push harbor.itdo.top/${params.image_name}:${params.git_ver}_${params.add_tag}"
      }
    }
  }
}

Finally, save.

Build the application image

Build parameters:

dubbo-demo-web
app/dubbo-demo-web
[email protected]:itdotop/dubbo-demo-web.git  #私有仓库,使用ssh方式认证拉取
tomcat
20220218_1730
./
./dubbo-client/target
mvn clean package -Dmaven.test.skip=true
ROOT
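
With these parameter values, the pipeline's image stage (the writeFile step above) generates a Dockerfile equivalent to:

FROM harbor.itdo.top/base/tomcat:v8.5.75
ADD ./dubbo-client/target/project_dir /opt/tomcat/webapps/ROOT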

clipboard-1648093186412
Build succeeded
clipboard-1648093206892
Check the image in Harbor
image-1648093250147

Prepare the K8s resource manifests

No separate resource manifests are needed this time.

Apply the resource manifests (dev, test, and prod all need the change; test is used as the example here)

In the k8s dashboard, edit the application's Deployment directly and change the image value to the image Jenkins just built.
The example in this document is: harbor.itdo.top/app/dubbo-demo-web:tomcat_20220218_2250
clipboard-1648093291434
Notes:

Change the image to the tomcat build.
Remove:
              {
                "containerPort": 20880,
                "protocol": "TCP"
              }
Remove:
              {
                "name": "JAR_BALL",
                "value": "dubbo-client.jar"
              }    

Access in a browser

http://demo.itdo.top/hello?name=dev
http://demo-test.itdo.top/hello?name=test
http://demo-prod.itdo.top/hello?name=prod
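
The same check works from the command line (assuming the DNS records and ingress from the earlier chapters are in place):

[root@k8s-7-200 ~]# curl http://demo-test.itdo.top/hello?name=test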

clipboard-1648093335174
clipboard-1648093351162

3 Deploying Elasticsearch

Elasticsearch is a stateful service and is not recommended inside a Kubernetes cluster. Here a single-node Elasticsearch is deployed, on k8s-7-12.host.top.

ELK needs a JDK environment, so install the JDK first:
[root@k8s-7-12 ~]# tar zxf jdk1.8.0_72.tar.gz 
[root@k8s-7-12 ~]# mv jdk1.8.0_72 /usr/local/java

[root@k8s-7-12 ~]# vi /etc/profile
export JAVA_HOME=/usr/local/java
export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$JAVA_HOME/bin:$PATH

[root@k8s-7-12 ~]# source /etc/profile
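
A quick check that the JDK is active:

[root@k8s-7-12 ~]# java -version    # should report java version "1.8.0_72"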

3.1 Install Elasticsearch

3.1.1 Download the binary package

[root@k8s-7-12 ~]# cd /opt/src
[root@k8s-7-12 ~]# wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.8.3.tar.gz
[root@k8s-7-12 ~]# tar xf elasticsearch-6.8.3.tar.gz -C /opt/
[root@k8s-7-12 ~]# ln -s /opt/elasticsearch-6.8.3/ /opt/elasticsearch
[root@k8s-7-12 ~]# cd /opt/elasticsearch

3.1.2 Configure elasticsearch.yml

[root@k8s-7-12 ~]# mkdir -p /data/elasticsearch/{data,logs}
 
[root@k8s-7-12 ~]# vim config/elasticsearch.yml
# adjust the following settings
cluster.name: es.itdo.top
node.name: k8s-7-12.host.top
path.data: /data/elasticsearch/data
path.logs: /data/elasticsearch/logs
bootstrap.memory_lock: true    # lock the JVM memory to prevent swapping
network.host: 10.4.7.12
http.port: 9200

3.2 Other tuning

3.2.1 Set JVM parameters

[root@k8s-7-12 elasticsearch]# vim config/jvm.options
# set per your environment: make -Xms and -Xmx equal, around half the machine's RAM; the official advice is not to exceed 32g, or GC pauses get too long
-Xms512m 
-Xmx512m

3.2.2 Create an unprivileged user

[root@k8s-7-12 ~]# useradd -s /bin/bash -M es
[root@k8s-7-12 ~]# chown -R es.es /opt/elasticsearch-6.8.3
[root@k8s-7-12 ~]# chown -R es.es /data/elasticsearch/

3.2.3 Raise the file descriptor limits

[root@k8s-7-12 ~]# vim /etc/security/limits.d/es.conf
es hard nofile 65536
es soft fsize unlimited
es hard memlock unlimited
es soft memlock unlimited

3.2.4 Tune kernel parameters

[root@k8s-7-12 ~]# sysctl -w vm.max_map_count=262144
[root@k8s-7-12 ~]# echo "vm.max_map_count=262144" >> /etc/sysctl.conf
[root@k8s-7-12 ~]# sysctl -p
vm.max_map_count = 262144

3.3 Start Elasticsearch

3.3.1 Start the es service

[root@k8s-7-12 ~]#  su -c "/opt/elasticsearch/bin/elasticsearch -d" es
[root@k8s-7-12 ~]#  netstat -luntp|grep 9200
tcp6    0   0 10.4.7.12:9200     :::*          LISTEN   16784/java

Note: if errors occur during startup, the following may help.

Error message:
ERROR: [1] bootstrap checks failed
[1]: the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured

Fix:
[root@k8s-7-12 ~]# vim config/elasticsearch.yml

# uncomment and keep a single node
cluster.initial_master_nodes: ["node-1"]

Verify that the ES installation works:

[root@k8s-7-12 ~]# curl 'http://10.4.7.12:9200/?pretty'
{
  "name" : "k8s-7-12.host.top",
  "cluster_name" : "es.itdo.top",
  "cluster_uuid" : "Uigzj5NFTSi-ZgPR89M-8w",
  "version" : {
    "number" : "6.8.3",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "0c48c0e",
    "build_date" : "2019-08-29T19:05:24.312154Z",
    "build_snapshot" : false,
    "lucene_version" : "7.7.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}

image-1648129707968

3.3.2 Adjust the ES index template

number_of_shards is the shard count; number_of_replicas is the replica count. Three replicas are recommended in production, but this single-node ES cannot host replicas, so it is set to 0.

[root@k8s-7-12 ~]# curl -XPUT http://10.4.7.12:9200/_template/k8s -H 'content-Type:application/json' -d '{
 "template" : "k8s*",
 "index_patterns": ["k8s*"], 
 "settings": {
  "number_of_shards": 5,
  "number_of_replicas": 0
 }
}'
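
To confirm the template was stored, it can be queried back (standard ES template API):

[root@k8s-7-12 ~]# curl http://10.4.7.12:9200/_template/k8s?pretty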

image-1648129790748

4 Deploying Kafka and kafka-manager

Kafka will run on k8s-7-11.host.top; ZooKeeper runs on k8s-7-22.host.top.
The JDK installation is omitted here.

Install zookeeper (requires a Java environment):
[root@k8s-7-22 ~]# wget https://archive.apache.org/dist/zookeeper/zookeeper-3.4.14/zookeeper-3.4.14.tar.gz
[root@k8s-7-22 ~]# tar zxf zookeeper-3.4.14.tar.gz -C /opt/
[root@k8s-7-22 ~]# ln -s /opt/zookeeper-3.4.14 /opt/zookeeper
[root@k8s-7-22 ~]# mkdir -pv /data/zookeeper/data /data/zookeeper/logs

Configure zookeeper:
[root@k8s-7-22 ~]# vi /opt/zookeeper/conf/zoo.cfg
    tickTime=2000
    initLimit=10
    syncLimit=5
    dataDir=/data/zookeeper/data
    dataLogDir=/data/zookeeper/logs
    clientPort=2181
    
Start zookeeper:
[root@k8s-7-22 ~]# /opt/zookeeper/bin/zkServer.sh start
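
Optionally verify it is up (a single node should report standalone mode):

[root@k8s-7-22 ~]# /opt/zookeeper/bin/zkServer.sh status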

4.1 Install Kafka (single node)

4.1.1 Download the package

[root@k8s-7-11 ~]# cd /opt/src
[root@k8s-7-11 ~]# wget https://archive.apache.org/dist/kafka/2.2.0/kafka_2.12-2.2.0.tgz
[root@k8s-7-11 ~]# tar xf kafka_2.12-2.2.0.tgz -C /opt/
[root@k8s-7-11 ~]# ln -s /opt/kafka_2.12-2.2.0/ /opt/kafka
[root@k8s-7-11 ~]# cd /opt/kafka

4.1.2 Edit the configuration

[root@k8s-7-11 ~]# mkdir -p /data/kafka/logs
[root@k8s-7-11 ~]# vim config/server.properties
log.dirs=/data/kafka/logs
zookeeper.connect=k8s-7-22.host.top:2181    # ZooKeeper address
log.flush.interval.messages=10000
log.flush.interval.ms=1000
# add the following two lines
delete.topic.enable=true
host.name=k8s-7-11.host.top

4.1.3 Start Kafka

[root@k8s-7-11 ~]# bin/kafka-server-start.sh -daemon config/server.properties
[root@k8s-7-11 ~]# netstat -luntp|grep 9092
tcp6    0   0 10.4.7.11:9092     :::*          LISTEN   34240/java
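
To verify the broker is reachable, list its topics with the CLI bundled in the 2.2.0 package (topics will appear once filebeat starts shipping in section 5):

[root@k8s-7-11 ~]# bin/kafka-topics.sh --zookeeper k8s-7-22.host.top:2181 --list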

4.2 Obtain the kafka-manager Docker image

Run on the ops host k8s-7-200.host.top:
kafka-manager is Yahoo's open-source web UI for managing Kafka; it is optional.

4.2.1 Option 1: build it from a Dockerfile

1 Prepare the Dockerfile

[root@k8s-7-200 ~]# vim /data/dockerfile/kafka-manager/Dockerfile
FROM hseeberger/scala-sbt

# ZK_HOSTS hard-codes the ZooKeeper address; KM_VERSION is the kafka-manager version
ENV ZK_HOSTS=10.4.7.22:2181 \
     KM_VERSION=2.0.0.2

RUN mkdir -p /tmp && \
    cd /tmp && \
    wget https://github.com/yahoo/kafka-manager/archive/${KM_VERSION}.tar.gz && \
    tar xf ${KM_VERSION}.tar.gz && \
    cd /tmp/kafka-manager-${KM_VERSION} && \
    sbt clean dist && \
    unzip  -d / ./target/universal/kafka-manager-${KM_VERSION}.zip && \
    rm -fr /tmp/${KM_VERSION} /tmp/kafka-manager-${KM_VERSION}

WORKDIR /kafka-manager-${KM_VERSION}

EXPOSE 9000
ENTRYPOINT ["./bin/kafka-manager","-Dconfig.file=conf/application.conf"]

# Known issues:
# 1. kafka-manager has been renamed CMAK; the archive name and inner directory layout changed
# 2. the sbt build pulls many dependencies and is very slow; without a proxy it will most likely fail
# 3. my build failed for exactly that reason, and because of issue 1 this Dockerfile will likely need edits anyway
# 4. for production, be sure to build your own image!

2 Build the Docker image

[root@k8s-7-200 ~]# cd /data/dockerfile/kafka-manager
[root@k8s-7-200 ~]# docker build . -t harbor.itdo.top/infra/kafka-manager:v2.0.0.2
(a very long process)
[root@k8s-7-200 ~]# docker push harbor.itdo.top/infra/kafka-manager:v2.0.0.2

The build is extremely slow and will most likely fail, so the second option below — pulling a pre-built image — is usually easier.
Failure 1: dl.bintray.com has been taken offline and is unreachable
image-1648130051399

4.2.2 Option 2: pull a pre-built image

[root@k8s-7-200 ~]# docker pull sheepkiller/kafka-manager:latest  # unofficial image, long unmaintained, version too old
[root@k8s-7-200 ~]# docker pull stanleyws/kafka-manager:tagname   # image from the Oldboy course, but the ZK address is hard-coded to 10.4.7.11
[root@k8s-7-200 ~]# docker load < kafka-manager-v2.0.0.2.tar      # or load it locally; the image hard-codes ZK 10.4.7.11, which can be overridden via an env var in deployment.yaml
[root@k8s-7-200 ~]# docker images|grep kafka-manager
[root@k8s-7-200 ~]# docker tag  4e4a8c5dabab harbor.itdo.top/infra/kafka-manager:v2.0.0.2
[root@k8s-7-200 ~]# docker push harbor.itdo.top/infra/kafka-manager:v2.0.0.2

4.3 Deploy kafka-manager

[root@k8s-7-200 ~]# mkdir /data/k8s-yaml/kafka-manager && cd /data/k8s-yaml/kafka-manager

4.3.1 Prepare the Deployment manifest

[root@k8s-7-200 ~]# vim deployment.yaml
kind: Deployment
apiVersion: extensions/v1beta1   # extensions/v1beta1 is deprecated and removed in k8s 1.16+; use apps/v1 there
metadata:
  name: kafka-manager
  namespace: infra
  labels: 
    name: kafka-manager
spec:
  replicas: 1
  selector:
    matchLabels: 
      name: kafka-manager
  template:
    metadata:
      labels: 
        app: kafka-manager
        name: kafka-manager
    spec:
      containers:
      - name: kafka-manager
        image: harbor.itdo.top/infra/kafka-manager:v2.0.0.2
        ports:
        - containerPort: 9000
          protocol: TCP
        env:
        - name: ZK_HOSTS
          value: zk1.itdo.top:2181
        - name: APPLICATION_SECRET
          value: letmein             # kafka-manager default secret
        imagePullPolicy: IfNotPresent
      imagePullSecrets:
      - name: harbor
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      securityContext: 
        runAsUser: 0
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate: 
      maxUnavailable: 1
      maxSurge: 1
  revisionHistoryLimit: 7
  progressDeadlineSeconds: 600

4.3.2 Prepare the Service manifest

[root@k8s-7-200 ~]# vim service.yaml
kind: Service
apiVersion: v1
metadata: 
  name: kafka-manager
  namespace: infra
spec:
  ports:
  - protocol: TCP
    port: 9000
    targetPort: 9000
  selector: 
    app: kafka-manager

4.3.3 Prepare the Ingress manifest

[root@k8s-7-200 ~]# vim ingress.yaml
kind: Ingress
apiVersion: extensions/v1beta1
metadata: 
  name: kafka-manager
  namespace: infra
spec:
  rules:
  - host: km.itdo.top
    http:
      paths:
      - path: /
        backend: 
          serviceName: kafka-manager
          servicePort: 9000

4.3.4 Apply the manifests

On any worker node:

[root@k8s-7-200 ~]# kubectl apply -f http://k8s-yaml.itdo.top/kafka-manager/deployment.yaml
[root@k8s-7-200 ~]# kubectl apply -f http://k8s-yaml.itdo.top/kafka-manager/service.yaml
[root@k8s-7-200 ~]# kubectl apply -f http://k8s-yaml.itdo.top/kafka-manager/ingress.yaml

4.3.5 Add the DNS record

On k8s-7-11.host.top:

[root@k8s-7-11 ~]# vim /var/named/itdo.top.zone
km    A   10.4.7.10
[root@k8s-7-11 ~]# systemctl restart named
[root@k8s-7-11 ~]# dig -t A km.itdo.top @10.4.7.11 +short
10.4.7.10

4.3.6 Access in a browser

http://km.itdo.top
Add the cluster
image
View the cluster information
image
View topics
image

5 Deploying filebeat

The filebeat version should match the ES version as closely as possible.
image

daf1a5e905c415daf68a8192a069f913a1d48e2c79e270da118385ba12a93aaa91bda4953c3402a6f0abf1c177f7bcc916a70bcac41977f69a6566565a8fae9c

This is the SHA-512 checksum of the release. To obtain it, pick the matching version on the download page and click "sha"; a text file containing the checksum will be downloaded.
image
image
On the ops host k8s-7-200.host.top:

5.1 Build the Docker image

[root@k8s-7-200 ~]# mkdir /data/dockerfile/filebeat && cd /data/dockerfile/filebeat

5.1.1 Prepare the Dockerfile

The official ready-made Docker image is also worth considering: https://www.elastic.co/guide/en/beats/filebeat/current/running-on-docker.html

[root@k8s-7-200 ~]# vim Dockerfile
FROM debian:jessie
# if you change the version, download the matching LINUX 64-BIT sha from the official site and replace FILEBEAT_SHA1
ENV FILEBEAT_VERSION=7.5.1 \ 
    FILEBEAT_SHA1=daf1a5e905c415daf68a8192a069f913a1d48e2c79e270da118385ba12a93aaa91bda4953c3402a6f0abf1c177f7bcc916a70bcac41977f69a6566565a8fae9c  

RUN set -x && \
 apt-get update && \
 apt-get install -y wget && \
 wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-${FILEBEAT_VERSION}-linux-x86_64.tar.gz -O /opt/filebeat.tar.gz && \
 cd /opt && \
 echo "${FILEBEAT_SHA1} filebeat.tar.gz" | sha512sum -c - && \
 tar xzvf filebeat.tar.gz && \
 cd filebeat-* && \
 cp filebeat /bin && \
 cd /opt && \
 rm -rf filebeat* && \
 apt-get purge -y wget && \
 apt-get autoremove -y && \
 apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
COPY docker-entrypoint.sh /
ENTRYPOINT ["/bin/bash","/docker-entrypoint.sh"]

5.1.2 Prepare the startup script

docker-entrypoint.sh

#!/bin/bash
 
ENV=${ENV:-"test"}                    # environment whose logs we collect (test or prod); injected via the Deployment yaml
PROJ_NAME=${PROJ_NAME:-"no-define"}   # project name, which determines the topic name; injected via the Deployment yaml
MULTILINE=${MULTILINE:-"^\d{2}"}      # multiline pattern: this regex starts a new entry at lines beginning with two digits; adjust to your log format
# generate the config file
cat >/etc/filebeat.yaml << EOF
filebeat.inputs:
- type: log
  fields_under_root: true
  fields:
    topic: logm-${PROJ_NAME}
  paths:
    - /logm/*.log     # /logm: multiline logs
    - /logm/*/*.log
    - /logm/*/*/*.log
    - /logm/*/*/*/*.log
    - /logm/*/*/*/*/*.log
  scan_frequency: 120s
  max_bytes: 10485760
  multiline.pattern: ${MULTILINE}
  multiline.negate: true
  multiline.match: after
  multiline.max_lines: 100
- type: log
  fields_under_root: true
  fields:
    topic: logu-${PROJ_NAME}
  paths:
    - /logu/*.log    # /logu: single-line logs
    - /logu/*/*.log
    - /logu/*/*/*.log
    - /logu/*/*/*/*.log
    - /logu/*/*/*/*/*.log
    - /logu/*/*/*/*/*/*.log
output.kafka:
  hosts: ["10.4.7.11:9092"]   #定义kafka地址,多个kafka用逗号隔开
  topic: k8s-fb-${ENV}-%{[topic]}    #%{[topic]}对应logu-${PROJ_NAME}或logm-${PROJ_NAME},%是filebeat内部的变量
  version: 2.0.0      # 即使kafka版本超过2.0,也写2.0.0,因为目前最高支持2.0.0
  required_acks: 0
  max_message_bytes: 10485760
EOF
 
set -xe
 # launch filebeat
if [[ "$1" == "" ]]; then
     exec filebeat  -c /etc/filebeat.yaml 
else
    exec "$@"
fi

Log format:
image

[root@k8s-7-200 ~]# chmod u+x docker-entrypoint.sh

5.1.3 Build the image

[root@k8s-7-200 ~]# docker build . -t harbor.itdo.top/infra/filebeat:v7.5.1
[root@k8s-7-200 ~]# docker push harbor.itdo.top/infra/filebeat:v7.5.1
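
Before wiring the image into a Pod, the entrypoint can be sanity-checked locally; the env values below are hypothetical stand-ins for what the Deployment in 5.2.1 injects:

[root@k8s-7-200 ~]# docker run --rm -e ENV=test -e PROJ_NAME=demo harbor.itdo.top/infra/filebeat:v7.5.1 cat /etc/filebeat.yaml
[root@k8s-7-200 ~]# docker run --rm harbor.itdo.top/infra/filebeat:v7.5.1 filebeat version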

image

5.2 Run the Pod with a filebeat sidecar

5.2.1 Prepare the resource manifest

Use the dubbo-demo-consumer image and run filebeat alongside it as a sidecar to collect its logs.
Edit the Deployment yaml:

[root@k8s-7-200 ~]# vim /data/k8s-yaml/test/dubbo-demo-consumer/deployment-filebeat.yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: dubbo-demo-consumer
  namespace: test
  labels: 
    name: dubbo-demo-consumer
spec:
  replicas: 1
  selector:
    matchLabels: 
      name: dubbo-demo-consumer
  template:
    metadata:
      labels: 
        app: dubbo-demo-consumer
        name: dubbo-demo-consumer
      annotations:
        blackbox_path: "/hello?name=health"
        blackbox_port: "8080"
        blackbox_scheme: "http"
        prometheus_io_scrape: "true"
        prometheus_io_port: "12346"
        prometheus_io_path: "/"
    spec:
      containers:
      - name: dubbo-demo-consumer
        image: harbor.itdo.top/app/dubbo-demo-web:tomcat_191222_1200  # the tomcat image built earlier
        ports:
        - containerPort: 8080
          protocol: TCP
        env:
        - name: C_OPTS
          value: -Denv=fat -Dapollo.meta=http://config-test.itdo.top  # connect to Apollo
        imagePullPolicy: IfNotPresent
#-------- new content below --------
        volumeMounts:
        - mountPath: /opt/tomcat/logs
          name: logm
      - name: filebeat    
        image: harbor.itdo.top/infra/filebeat:v7.5.1  # the filebeat image
        imagePullPolicy: IfNotPresent
        env:
        - name: ENV
          value: test             # test environment
        - name: PROJ_NAME
          value: dubbo-demo-web   # project name
        volumeMounts:
        - mountPath: /logm        
          name: logm
      volumes:
      - emptyDir: {} # allocated in a random host directory; removed together with the container
        name: logm
#-------- end of new content --------
      imagePullSecrets:
      - name: harbor
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      securityContext: 
        runAsUser: 0
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate: 
      maxUnavailable: 1
      maxSurge: 1
  revisionHistoryLimit: 7
  progressDeadlineSeconds: 600

5.2.2 Apply the manifest

On any node:

[root@k8s-7-200 ~]# kubectl apply -f http://k8s-yaml.itdo.top/test/dubbo-demo-consumer/deployment-filebeat.yaml

After startup, check that the consumer Pod now has 3 containers running
image
Check that Tomcat starts and serves requests
image
Check the Tomcat logs. Note: exec into the filebeat container of dubbo-demo-consumer and check whether the logm directory contains logs.
Think of it as filebeat and Tomcat sharing one ephemeral volume; it is not persistent — the log data ultimately lands in ES.

[root@k8s-7-22 ~]# kubectl -n test exec -it dubbo-demo-consumer-...... -c filebeat -- /bin/bash
[root@k8s-7-22 ~]# ls /logm  

Note: filebeat shares the Pod IP with dubbo-demo-consumer — the UTS, net, and user namespaces are shared.
image

5.2.3 Verify

Open http://km.itdo.top in a browser; if the topic shows up in kafka-manager, it works
image
Verify the topic data:

[root@k8s-7-11 ~]# cd /opt/kafka/bin/
[root@k8s-7-11 ~]# ./kafka-console-consumer.sh --bootstrap-server 10.4.7.11:9092 --topic k8s-fb-prod-logm-dubbo-demo-web --from-beginning

--from-beginning reads historical data that has not been consumed yet

6 Deploying logstash

On the ops host k8s-7-200.host.top.
https://www.elastic.co/cn/support/matrix
Note: it is best to keep the version in line with the ES version.
logstash is started per environment; with enough resources it can also be run per project.

6.1 Prepare the Docker image

Official logstash images: https://hub.docker.com/_/logstash?tab=tags

6.1.1 Pull the official image

[root@k8s-7-200 ~]# docker pull logstash:6.8.6
[root@k8s-7-200 ~]# docker tag  d0a2dac51fcb harbor.itdo.top/infra/logstash:v6.8.6
[root@k8s-7-200 ~]# docker push harbor.itdo.top/infra/logstash:v6.8.6

6.1.2 Prepare the config files

Create the directory:

[root@k8s-7-200 ~]# mkdir /etc/logstash/

Create test.conf:

[root@k8s-7-200 ~]# vim /etc/logstash/logstash-test.conf
input {
  kafka {
    bootstrap_servers => "10.4.7.11:9092"
    client_id => "10.4.7.200"   # logstash runs on the ops host k8s-7-200
    consumer_threads => 4
    group_id => "k8s_test"               # consumer group for the test environment
    topics_pattern => "k8s-fb-test-.*"   # only consume topics starting with k8s-fb-test
  }
}
 
filter {
  json {
    source => "message"
  }
}
 
output {
  elasticsearch {
    hosts => ["10.4.7.12:9200"]   #=打到ES
    index => "k8s-test-%{+YYYY.MM.DD}"   #YYYY.MM.DD 按天收集,日志少的话,可以YYYY.MM按月来收集
  }
}

Create prod.conf:

[root@k8s-7-200 ~]# vim /etc/logstash/logstash-prod.conf
input {
  kafka {
    bootstrap_servers => "10.4.7.11:9092"
    client_id => "10.4.7.200"
    consumer_threads => 4
    group_id => "k8s_prod"                   
    topics_pattern => "k8s-fb-prod-.*" 
  }
}
 
filter {
  json {
    source => "message"
  }
}
 
output {
  elasticsearch {
    hosts => ["10.4.7.12:9200"]
    index => "k8s-prod-%{+YYYY.MM.DD}"
  }
}

Create dev.conf:

[root@k8s-7-200 ~]# vim /etc/logstash/logstash-dev.conf
input {
  kafka {
    bootstrap_servers => "10.4.7.11:9092"
    client_id => "10.4.7.200"
    consumer_threads => 4
    group_id => "k8s_dev"
    topics_pattern => "k8s-fb-dev-.*"
  }
}
 
filter {
  json {
    source => "message"
  }
}
 
output {
  elasticsearch {
    hosts => ["10.4.7.12:9200"]
    index => "k8s-dev-%{+YYYY.MM.DD}"
  }
}

6.2 Start logstash

6.2.1 Start the logstash instances (test, prod, dev)

Start one container per environment: -v mounts the host config directory, and -f selects the config file.

[root@k8s-7-200 ~]# docker run -d \
    --restart=always \
    --name logstash-test \
    -v /etc/logstash:/etc/logstash \
    harbor.itdo.top/infra/logstash:v6.8.6 \
    -f /etc/logstash/logstash-test.conf

[root@k8s-7-200 ~]# docker run -d \
    --restart=always \
    --name logstash-prod \
    -v /etc/logstash:/etc/logstash \
    harbor.itdo.top/infra/logstash:v6.8.6 \
    -f /etc/logstash/logstash-prod.conf

[root@k8s-7-200 ~]# docker run -d \
    --restart=always \
    --name logstash-dev \
    -v /etc/logstash:/etc/logstash \
    harbor.itdo.top/infra/logstash:v6.8.6 \
    -f /etc/logstash/logstash-dev.conf

[root@k8s-7-200 ~]# docker ps -a|grep logstash   # check container status

image

6.2.2 Check that ES is receiving data

First visit demo-test.itdo.top/hello?name=xx to generate some access logs.

[root@k8s-7-200 logstash]# curl http://10.4.7.12:9200/_cat/indices?v 
health status index               uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   k8s-test-2022.02.55 YyAhuCVrQDGlO_zyU1QwlQ   5   0          8            0       96kb           96kb
green  open   k8s-dev-2022.02.55  96ocIPrJR5m8jdKtZz4iIA   5   0         33            0    380.5kb        380.5kb
green  open   k8s-prod-2022.02.55 KFavKaavRCyOfGGwttbIhg   5   0
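
Note the .55 in these index names: %{+YYYY.MM.DD} uses DD, which is day-of-year in logstash date math (February 24 is day 55). Use %{+YYYY.MM.dd} if you want day-of-month.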

7 Deploying Kibana

On the ops host k8s-7-200.host.top:

7.1 Prepare the resources

7.1.1 Prepare the Docker image

[root@k8s-7-200 ~]# docker pull kibana:6.8.6
[root@k8s-7-200 ~]# docker tag adfab5632ef4 harbor.itdo.top/infra/kibana:v6.8.6
[root@k8s-7-200 ~]# docker push harbor.itdo.top/infra/kibana:v6.8.6

7.1.2 Create the yaml directory

[root@k8s-7-200 ~]# mkdir /data/k8s-yaml/kibana && cd /data/k8s-yaml/kibana

7.1.3 Prepare the Deployment manifest

[root@k8s-7-200 ~]# vim deployment.yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: kibana
  namespace: infra
  labels: 
    name: kibana
spec:
  replicas: 1
  selector:
    matchLabels: 
      name: kibana
  template:
    metadata:
      labels: 
        app: kibana
        name: kibana
    spec:
      containers:
      - name: kibana
        image: harbor.itdo.top/infra/kibana:v6.8.6
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 5601
          protocol: TCP
        env:
        - name: ELASTICSEARCH_URL    # ES address
          value: http://10.4.7.12:9200
      imagePullSecrets:
      - name: harbor
      securityContext: 
        runAsUser: 0
  strategy:
    type: RollingUpdate
    rollingUpdate: 
      maxUnavailable: 1
      maxSurge: 1
  revisionHistoryLimit: 7
  progressDeadlineSeconds: 600

7.1.4 Prepare the Service manifest

[root@k8s-7-200 ~]# vim service.yaml
kind: Service
apiVersion: v1
metadata: 
  name: kibana
  namespace: infra
spec:
  ports:
  - protocol: TCP
    port: 5601
    targetPort: 5601
  selector: 
    app: kibana

7.1.5 Prepare the Ingress manifest

[root@k8s-7-200 ~]# vim ingress.yaml
kind: Ingress
apiVersion: extensions/v1beta1
metadata: 
  name: kibana
  namespace: infra
spec:
  rules:
  - host: kibana.itdo.top
    http:
      paths:
      - path: /
        backend: 
          serviceName: kibana
          servicePort: 5601

7.2 Apply the resources

7.2.1 Apply the manifests

[root@k8s-7-200 ~]# kubectl apply -f http://k8s-yaml.itdo.top/kibana/deployment.yaml
[root@k8s-7-200 ~]# kubectl apply -f http://k8s-yaml.itdo.top/kibana/service.yaml
[root@k8s-7-200 ~]# kubectl apply -f http://k8s-yaml.itdo.top/kibana/ingress.yaml

7.2.2 Add the DNS record

[root@k8s-7-11 ~]# vim /var/named/itdo.top.zone
kibana         A  10.4.7.10
[root@k8s-7-11 ~]# systemctl restart named
[root@k8s-7-11 ~]# dig -t A kibana.itdo.top @10.4.7.11 +short
10.4.7.10

7.2.3 Access in a browser

Visit http://kibana.itdo.top
image
Newer Kibana versions ship with built-in monitoring; enable it here to see the ES information
image
image
image

7.3 Using Kibana

Create the test-environment index pattern
image
image
Done
image
After it is created, click Discover to see the logs
image
Create the prod-environment index pattern
image
image
image
Create the dev-environment index pattern
image
image

  1. Area selector
    image
  2. Time picker
    Choose the log time window
  • Quick ranges
  • Absolute time
  • Relative time
    image
  3. Environment selector
    Choose the logs for the corresponding environment
  • k8s-test-*
  • k8s-prod-*
  • k8s-dev-*
    image
  4. Project selector
  • Corresponds to filebeat's PROJ_NAME value
  • Add a filter
  • topic is ${PROJ_NAME}
    dubbo-demo-service
    dubbo-demo-web
    image
    image
  5. Keyword selector
  • exception
  • error
  • other business keywords
  • regular expressions are supported
    tips: stop the test-environment dubbo-demo-service and manually hit http://demo-test.itdo.top/hello?name=test to generate error logs
    Add the commonly used message, log.file.path, and hostname fields
    image
    As shown below
    image
    image
    image

Log handling for the dubbo-demo-service jar project

a. Modify jre8/entrypoint.sh
image
image
b. Modify the Jenkins pipeline: add the newly built base image to the base_image choices
image
c. Rebuild the dubbo-demo-service image
with the following parameters:

app_name:dubbo-demo-service
image_name:app/dubbo-demo-service
git_repo:https://gitee.com/itdotop/dubbo-demo-service.git
git_ver:apollo
add_tag:220224_1500_with_logs
mvn_dir:./
target_dir:./dubbo-server/target
mvn_cmd:mvn clean package -Dmaven.test.skip=true
base_image:base/jre8:8u112_with_logs
maven:3.6.1-8u221

d. Modify the dev-environment resource manifest

[root@k8s-7-200 dubbo-demo-service]# vim deployment-filebeat.yaml 
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: dubbo-demo-service
  namespace: app
  labels: 
    name: dubbo-demo-service
spec:
  replicas: 1
  selector:
    matchLabels: 
      name: dubbo-demo-service
  template:
    metadata:
      labels: 
        app: dubbo-demo-service
        name: dubbo-demo-service
      annotations:
        blackbox_port: "20880"
        blackbox_scheme: tcp
        prometheus_io_path: /
        prometheus_io_port: "12346"
        prometheus_io_scrape: "true"
    spec:
      containers:
      - name: dubbo-demo-service
        image: harbor.itdo.top/app/dubbo-demo-service:apollo_220224_1500_with_logs  # point at the newly built image
        ports:
        - containerPort: 20880
          protocol: TCP
        env:
        - name: C_OPTS
          value: -Denv=dev -Dapollo.meta=http://config.itdo.top
        - name: JAR_BALL
          value: dubbo-server.jar    
        imagePullPolicy: IfNotPresent
        #-------- new content below --------
        volumeMounts:
        - mountPath: /opt/logs   # log path
          name: logm
      - name: filebeat    
        image: harbor.itdo.top/infra/filebeat:v7.5.1
        imagePullPolicy: IfNotPresent
        env:
        - name: ENV
          value: dev   # the environment
        - name: PROJ_NAME
          value: dubbo-demo-service  # the project name
        volumeMounts:
        - mountPath: /logm        
          name: logm
      volumes:
      - emptyDir: {}
        name: logm
        #-------- end of new content --------
      imagePullSecrets: 
      - name: harbor
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      securityContext: 
        runAsUser: 0
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate: 
      maxUnavailable: 1
      maxSurge: 1
  revisionHistoryLimit: 7
  progressDeadlineSeconds: 600

f. Apply the dev-environment manifest (test and prod work the same way)

[root@k8s-7-21 src]# kubectl apply -f http://k8s-yaml.itdo.top/dubbo-demo-service/deployment-filebeat.yaml
[root@k8s-7-21 src]# kubectl apply -f http://k8s-yaml.itdo.top/test/dubbo-demo-service/deployment-filebeat.yaml
[root@k8s-7-21 src]# kubectl apply -f http://k8s-yaml.itdo.top/prod/dubbo-demo-service/deployment-filebeat.yaml  

g. Check the Kafka topics
image
h. View the ES logs in Kibana
image