K8s Study Notes Series 7 -- Collecting and Analyzing K8s Application Logs with ELK

乡下的树 · 2021-03-10

1 A Log Collection Scheme for K8s

Business applications in a K8s system are highly dynamic: as the orchestrator does its work, containers are constantly being created, destroyed, rescheduled, and scaled in and out.
We need a log collection and analysis system that can:

  1. Collect -- gather log data from many kinds of sources (a streaming log collector)
  2. Transport -- ship log data reliably to a central system (a message queue)
  3. Store -- keep logs as structured data (a search engine)
  4. Analyze -- offer convenient search and analysis, ideally with a GUI (web)
  5. Alert -- provide error reporting and monitoring (a monitoring system)

1.1 Drawbacks of the traditional ELK model

[diagram: traditional ELK model]

1.2 The K8s container log collection model

[diagram: K8s container log collection model]

2 Building the Tomcat Base Image

Convert dubbo-demo-consumer to start under Tomcat.
Prepare the Tomcat base image.

2.1 Prepare the Tomcat base image

2.1.1 Download the Tomcat 8 binary package

Run on the ops host k8s-7-200.itdo.top:

  [root@k8s-7-200 ~]# cd /opt/src/
  [root@k8s-7-200 ~]# wget https://mirrors.cloud.tencent.com/apache/tomcat/tomcat-8/v8.5.75/bin/apache-tomcat-8.5.75.tar.gz
  [root@k8s-7-200 ~]# mkdir /data/dockerfile/tomcat
  [root@k8s-7-200 ~]# tar xf apache-tomcat-8.5.75.tar.gz -C /data/dockerfile/tomcat
  [root@k8s-7-200 ~]# cd /data/dockerfile/tomcat

2.1.2 Tune the Tomcat configuration

Remove the bundled webapps:

  [root@k8s-7-200 ~]# rm -rf apache-tomcat-8.5.75/webapps/*

Disable the AJP connector:

  [root@k8s-7-200 ~]# vim apache-tomcat-8.5.75/conf/server.xml
  <!-- <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" /> -->

Adjust the logging configuration.
Remove the 3manager and 4host-manager handlers from the handlers line:

  [root@k8s-7-200 ~]# vim apache-tomcat-8.5.75/conf/logging.properties
  handlers = 1catalina.org.apache.juli.AsyncFileHandler, 2localhost.org.apache.juli.AsyncFileHandler, java.util.logging.ConsoleHandler

Change the log level to INFO:

  1catalina.org.apache.juli.AsyncFileHandler.level = INFO
  2localhost.org.apache.juli.AsyncFileHandler.level = INFO
  java.util.logging.ConsoleHandler.level = INFO

Comment out every 3manager and 4host-manager logging setting:

  #3manager.org.apache.juli.AsyncFileHandler.level = FINE
  #3manager.org.apache.juli.AsyncFileHandler.directory = ${catalina.base}/logs
  #3manager.org.apache.juli.AsyncFileHandler.prefix = manager.
  #3manager.org.apache.juli.AsyncFileHandler.encoding = UTF-8
  #4host-manager.org.apache.juli.AsyncFileHandler.level = FINE
  #4host-manager.org.apache.juli.AsyncFileHandler.directory = ${catalina.base}/logs
  #4host-manager.org.apache.juli.AsyncFileHandler.prefix = host-manager.
  #4host-manager.org.apache.juli.AsyncFileHandler.encoding = UTF-8
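If you would rather script these edits, here is a minimal sketch; it assumes the stock Tomcat 8.5 logging.properties layout (GNU sed), so review the result before baking it into an image:

  cd /data/dockerfile/tomcat/apache-tomcat-8.5.75/conf
  # rewrite the handlers line without the manager/host-manager handlers
  sed -i 's/^handlers = .*/handlers = 1catalina.org.apache.juli.AsyncFileHandler, 2localhost.org.apache.juli.AsyncFileHandler, java.util.logging.ConsoleHandler/' logging.properties
  # raise the remaining handlers from FINE to INFO
  sed -i 's/^\(1catalina\|2localhost\)\(.*level = \)FINE/\1\2INFO/' logging.properties
  sed -i 's/^\(java.util.logging.ConsoleHandler.level = \)FINE/\1INFO/' logging.properties
  # comment out all manager/host-manager settings
  sed -i -e 's/^3manager\./#&/' -e 's/^4host-manager\./#&/' logging.properties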

2.2 Prepare the Docker image

2.2.1 Create the Dockerfile

[root@k8s-7-200 ~]# vim Dockerfile

  FROM harbor.itdo.top/public/jre8:8u112
  # adjust the base image address to your own registry
  RUN /bin/cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime &&\
      echo 'Asia/Shanghai' >/etc/timezone
  ENV CATALINA_HOME /opt/tomcat
  ENV LANG zh_CN.UTF-8
  # adjust the Tomcat version to match the tarball extracted above
  ADD apache-tomcat-8.5.75/ /opt/tomcat
  ADD config.yml /opt/prom/config.yml
  ADD jmx_javaagent-0.3.1.jar /opt/prom/jmx_javaagent-0.3.1.jar
  WORKDIR /opt/tomcat
  ADD entrypoint.sh /entrypoint.sh
  CMD ["/bin/bash","/entrypoint.sh"]

2.2.2 Prepare the files the Dockerfile needs

Download the jar used for JVM monitoring:

  [root@k8s-7-200 ~]# wget -O jmx_javaagent-0.3.1.jar https://repo1.maven.org/maven2/io/prometheus/jmx/jmx_prometheus_javaagent/0.3.1/jmx_prometheus_javaagent-0.3.1.jar

The config file read by the jmx agent:
[root@k8s-7-200 ~]# vim config.yml
Purpose: Prometheus file_sd-based discovery. This mainly matters when Tomcat runs outside k8s; since ours runs inside k8s and is discovered through pod-based rules, this file is essentially optional here.

  ---
  rules:
  - pattern: '.*'

The container startup script:
[root@k8s-7-200 ~]# vim entrypoint.sh

  #!/bin/bash
  # pass the pod ip:port and the rules file to the JMX exporter agent
  M_OPTS="-Duser.timezone=Asia/Shanghai -javaagent:/opt/prom/jmx_javaagent-0.3.1.jar=$(hostname -i):${M_PORT:-"12346"}:/opt/prom/config.yml"
  C_OPTS=${C_OPTS}               # extra startup options (e.g. Apollo), defined in the Deployment yaml
  MIN_HEAP=${MIN_HEAP:-"128m"}   # initial JVM heap size
  MAX_HEAP=${MAX_HEAP:-"128m"}   # maximum JVM heap size
  # young generation / GC tuning defaults
  JAVA_OPTS=${JAVA_OPTS:-"-Xmn384m -Xss256k -Duser.timezone=GMT+08 -XX:+DisableExplicitGC -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+CMSParallelRemarkEnabled -XX:+UseCMSCompactAtFullCollection -XX:CMSFullGCsBeforeCompaction=0 -XX:+CMSClassUnloadingEnabled -XX:LargePageSizeInBytes=128m -XX:+UseFastAccessorMethods -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=80 -XX:SoftRefLRUPolicyMSPerMB=0 -XX:+PrintClassHistogram -Dfile.encoding=UTF8 -Dsun.jnu.encoding=UTF8"}
  CATALINA_OPTS="${CATALINA_OPTS}"
  JAVA_OPTS="${M_OPTS} ${C_OPTS} -Xms${MIN_HEAP} -Xmx${MAX_HEAP} ${JAVA_OPTS}"
  # inject JAVA_OPTS and CATALINA_OPTS into the Tomcat startup script
  sed -i -e "1a\JAVA_OPTS=\"$JAVA_OPTS\"" -e "1a\CATALINA_OPTS=\"$CATALINA_OPTS\"" /opt/tomcat/bin/catalina.sh
  # "run" keeps Tomcat in the foreground so the container does not exit; stdout.log feeds the container log
  cd /opt/tomcat && /opt/tomcat/bin/catalina.sh run 2>&1 >> /opt/tomcat/logs/stdout.log

Make it executable:

  [root@k8s-7-200 ~]# chmod +x entrypoint.sh

2.2.3 Build the Docker image and push it to Harbor

  [root@k8s-7-200 ~]# docker build . -t harbor.itdo.top/base/tomcat:v8.5.75
  [root@k8s-7-200 ~]# docker push harbor.itdo.top/base/tomcat:v8.5.75
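As an optional sanity check, you can run the base image locally before wiring it into Jenkins. The container name below is arbitrary, and the curl calls assume curl exists in the image; if it does not, run them from the host against the container IP instead:

  [root@k8s-7-200 ~]# docker run -d --rm --name tomcat-smoke harbor.itdo.top/base/tomcat:v8.5.75
  # webapps/ is empty, so Tomcat answering with a 404 still proves it is up
  [root@k8s-7-200 ~]# docker exec tomcat-smoke curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:8080/
  # the jmx_javaagent should serve Prometheus metrics on port 12346
  [root@k8s-7-200 ~]# docker exec tomcat-smoke curl -s http://127.0.0.1:12346/ | head
  [root@k8s-7-200 ~]# docker stop tomcat-smoke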

Convert the dubbo-demo-web project

Note: switching the app from a runnable jar to Tomcat means changing dependencies and configuration in the source. Since I am not a developer, we simply use the tomcat branch of the dubbo-demo-web source:
https://gitee.com/stanleywang/dubbo-demo-web/tree/tomcat
Download the tomcat branch directly:

  git clone -b tomcat https://gitee.com/itdotop/dubbo-demo-web.git
  cd dubbo-demo-web
  rm -rf .git
  git init
  git add .
  git commit -m "tomcat commit"
  git remote add origin git@gitee.com:itdotop/dubbo-demo-web.git
  git push -u origin master:tomcat
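To double-check that the branch reached your own repository (same URL as above), git ls-remote can list it:

  git ls-remote --heads git@gitee.com:itdotop/dubbo-demo-web.git tomcat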

Create a new Jenkins pipeline

  • New Item -> Enter an item name (tomcat-demo) -> Pipeline -> OK

  • Check Discard old builds
    Days to keep builds : 3
    Max # of builds to keep : 30

  • Check This project is parameterized (add the 11 parameters below)
  1. Add Parameter -> String Parameter
    Name : app_name
    Default Value : (empty)
    Description : the project name. e.g.: dubbo-demo-service

  2. Add Parameter -> String Parameter
    Name : image_name
    Default Value :
    Description : project docker image name. e.g.: app/dubbo-demo-service

  3. Add Parameter -> String Parameter
    Name : git_repo
    Default Value :
    Description : project git repository, i.e. the address of the project's central git repo. e.g.: https://gitee.com/itdotop/dubbo-demo-service.git

  4. Add Parameter -> String Parameter
    Name : git_ver
    Default Value : tomcat (we use the tomcat branch)
    Description : git commit id of the project.
    Note: in production, prefer the git commit id, since it is unique and cannot be tampered with.

  5. Add Parameter -> String Parameter
    Name : add_tag
    Default Value :
    Description : project docker image tag, date_timestamp recommended. e.g.: 190117_1920

  6. Add Parameter -> String Parameter
    Name : mvn_dir
    Default Value : ./
    Description : project maven directory. e.g.: ./

  7. Add Parameter -> String Parameter
    Name : target_dir
    Default Value : ./dubbo-client/target (usually ask the developers for this path)
    Description : the relative path of the target file such as a .jar or .war package. e.g.: ./dubbo-server/target

  8. Add Parameter -> String Parameter
    Name : mvn_cmd
    Default Value : mvn clean package -Dmaven.test.skip=true
    Description : maven command. e.g.: mvn clean package -e -q -Dmaven.test.skip=true

  9. Add Parameter -> Choice Parameter
    Name : base_image
    Choices :
    base/tomcat:v7.0.94
    base/tomcat:v8.5.75
    base/tomcat:v9.0.17
    Description : project base image list in harbor.itdo.top, i.e. the docker base image the project uses.

  10. Add Parameter -> Choice Parameter
    Name : maven
    Choices :
    3.6.0-8u181
    3.2.5-6u025
    2.2.1-6u025
    3.6.1-11u013
    3.6.1-7u80
    3.6.1-8u221
    Description : different maven editions, i.e. the maven version used for the build.

  11. Add Parameter -> String Parameter (new versus the jar pipeline: a webapp dir parameter)
    Name : root_url
    Default Value : ROOT
    Description : webapp dir.

Pipeline Script

Note: the generated Dockerfile does ADD ./project_dir /opt/tomcat/webapps/ROOT. Because the war file name is not fixed, we unpack the war and ADD the resulting directory under the name you choose (default ROOT).

  pipeline {
    agent any
    stages {
      stage('pull') { //get project code from repo
        steps {
          sh "git clone ${params.git_repo} ${params.app_name}/${env.BUILD_NUMBER} && cd ${params.app_name}/${env.BUILD_NUMBER} && git checkout ${params.git_ver}"
        }
      }
      stage('build') { //exec mvn cmd
        steps {
          sh "cd ${params.app_name}/${env.BUILD_NUMBER} && /var/jenkins_home/maven-${params.maven}/bin/${params.mvn_cmd}"
        }
      }
      stage('unzip') { //unzip target/*.war -c target/project_dir
        steps {
          sh "cd ${params.app_name}/${env.BUILD_NUMBER} && cd ${params.target_dir} && mkdir project_dir && unzip *.war -d ./project_dir"
        }
      }
      stage('image') { //build image and push to registry
        steps {
          writeFile file: "${params.app_name}/${env.BUILD_NUMBER}/Dockerfile", text: """FROM harbor.itdo.top/${params.base_image}
ADD ${params.target_dir}/project_dir /opt/tomcat/webapps/${params.root_url}"""
          sh "cd ${params.app_name}/${env.BUILD_NUMBER} && docker build -t harbor.itdo.top/${params.image_name}:${params.git_ver}_${params.add_tag} . && docker push harbor.itdo.top/${params.image_name}:${params.git_ver}_${params.add_tag}"
        }
      }
    }
  }

Finally, save the pipeline.

Build the application image

Build with the following parameters:

  app_name   : dubbo-demo-web
  image_name : app/dubbo-demo-web
  git_repo   : git@gitee.com:itdotop/dubbo-demo-web.git   (private repo, pulled with ssh authentication)
  git_ver    : tomcat
  add_tag    : 20220218_1730
  mvn_dir    : ./
  target_dir : ./dubbo-client/target
  mvn_cmd    : mvn clean package -Dmaven.test.skip=true
  root_url   : ROOT



The build succeeds.

Check the image in Harbor.

Prepare the k8s resource manifests

No separate manifests need to be prepared this time.

Apply the resource manifests (dev, test, and prod all need the same change; test is the example here)

In the k8s dashboard, edit the application's deployment directly and change the image value to the image Jenkins just built.
The example in this walkthrough is: harbor.itdo.top/app/dubbo-demo-web:tomcat_20220218_2250


Notes:

  1. Change the image to the tomcat build.
  2. Delete
     {
       "containerPort": 20880,
       "protocol": "TCP"
     }
  3. Delete
     {
       "name": "JAR_BALL",
       "value": "dubbo-client.jar"
     }

Access in a browser:

  http://demo.itdo.top/hello?name=dev
  http://demo-test.itdo.top/hello?name=test
  http://demo-prod.itdo.top/hello?name=prod
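The same check from the command line, if preferred:

  curl 'http://demo-test.itdo.top/hello?name=test'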


3 Deploying Elasticsearch

Elasticsearch is a stateful service, so running it inside the Kubernetes cluster is not recommended. Here we deploy a single-node Elasticsearch on k8s-7-12.host.top.

ELK needs a JDK, so install one first:

  [root@k8s-7-12 ~]# tar zxf jdk1.8.0_72.tar.gz
  [root@k8s-7-12 ~]# mv jdk1.8.0_72 /usr/local/java

  [root@k8s-7-12 ~]# vi /etc/profile
  export JAVA_HOME=/usr/local/java
  export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
  export PATH=$JAVA_HOME/bin:$PATH

  [root@k8s-7-12 ~]# source /etc/profile
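Confirm the JDK is on the PATH (the exact version string depends on your JDK build):

  [root@k8s-7-12 ~]# java -version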

3.1 Install Elasticsearch

3.1.1 Download the binary package

  [root@k8s-7-12 ~]# cd /opt/src
  [root@k8s-7-12 ~]# wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.8.3.tar.gz
  [root@k8s-7-12 ~]# tar xf elasticsearch-6.8.3.tar.gz -C /opt/
  [root@k8s-7-12 ~]# ln -s /opt/elasticsearch-6.8.3/ /opt/elasticsearch
  [root@k8s-7-12 ~]# cd /opt/elasticsearch

3.1.2 Configure elasticsearch.yml

  [root@k8s-7-12 ~]# mkdir -p /data/elasticsearch/{data,logs}

  [root@k8s-7-12 ~]# vim config/elasticsearch.yml
  # adjust the following settings
  cluster.name: es.itdo.top
  node.name: k8s-7-12.host.top
  path.data: /data/elasticsearch/data
  path.logs: /data/elasticsearch/logs
  bootstrap.memory_lock: true   # lock the JVM memory so it cannot be swapped out
  network.host: 10.4.7.12
  http.port: 9200

3.2 Other tuning

3.2.1 Set the JVM options

  [root@k8s-7-12 elasticsearch]# vi config/jvm.options
  # size for your environment: set -Xms and -Xmx to the same value, roughly half the machine's memory;
  # the official guidance is to stay below 32 GB, or GC pauses get too long
  -Xms512m
  -Xmx512m

3.2.2 Create an unprivileged user

  [root@k8s-7-12 ~]# useradd -s /bin/bash -M es
  [root@k8s-7-12 ~]# chown -R es.es /opt/elasticsearch-6.8.3
  [root@k8s-7-12 ~]# chown -R es.es /data/elasticsearch/

3.2.3 Raise the file descriptor limits

  [root@k8s-7-12 ~]# vim /etc/security/limits.d/es.conf
  es hard nofile 65536
  es soft fsize unlimited
  es hard memlock unlimited
  es soft memlock unlimited

3.2.4 调整内核参数

  1. [root@k8s-7-12 ~]# sysctl -w vm.max_map_count=262144
  2. [root@k8s-7-12 ~]# echo "vm.max_map_count=262144" > /etc/sysctl.conf
  3. [root@k8s-7-12 ~]# sysctl -p
  4. vm.max_map_count = 262144

3.3 Start Elasticsearch

3.3.1 Start the es service

  [root@k8s-7-12 ~]# su -c "/opt/elasticsearch/bin/elasticsearch -d" es
  [root@k8s-7-12 ~]# netstat -luntp|grep 9200
  tcp6 0 0 10.4.7.12:9200 :::* LISTEN 16784/java

Note: if startup fails, the following may help.

  Error message:
  ERROR: [1] bootstrap checks failed
  [1]: the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured

  Fix:
  [root@k8s-7-12 ~]# vim config/elasticsearch.yml

  # uncomment and keep a single node
  cluster.initial_master_nodes: ["node-1"]

Verify that the ES installation is working:

  [root@k8s-7-12 ~]# curl 'http://10.4.7.12:9200/?pretty'
  {
    "name" : "k8s-7-12.host.top",
    "cluster_name" : "es.itdo.top",
    "cluster_uuid" : "Uigzj5NFTSi-ZgPR89M-8w",
    "version" : {
      "number" : "6.8.3",
      "build_flavor" : "default",
      "build_type" : "tar",
      "build_hash" : "0c48c0e",
      "build_date" : "2019-08-29T19:05:24.312154Z",
      "build_snapshot" : false,
      "lucene_version" : "7.7.0",
      "minimum_wire_compatibility_version" : "5.6.0",
      "minimum_index_compatibility_version" : "5.0.0"
    },
    "tagline" : "You Know, for Search"
  }


3.3.2 Adjust the ES index template

number_of_shards sets the shard count and number_of_replicas the replica count. Production normally keeps replicas (3 is a common choice), but this single-node ES cannot allocate replicas, so it is set to 0:

  [root@k8s-7-12 ~]# curl -XPUT http://10.4.7.12:9200/_template/k8s -H 'content-Type:application/json' -d '{
    "template" : "k8s*",
    "index_patterns": ["k8s*"],
    "settings": {
      "number_of_shards": 5,
      "number_of_replicas": 0
    }
  }'
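To confirm the template was stored, query it back:

  [root@k8s-7-12 ~]# curl http://10.4.7.12:9200/_template/k8s?pretty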


4 Deploying Kafka and kafka-manager

Kafka runs on k8s-7-11.host.top; zookeeper (below) is installed on k8s-7-22.host.top.
JDK installation is omitted here.

Install zookeeper (requires a Java environment):

  [root@k8s-7-22 ~]# wget https://archive.apache.org/dist/zookeeper/zookeeper-3.4.14/zookeeper-3.4.14.tar.gz
  [root@k8s-7-22 ~]# tar zxf zookeeper-3.4.14.tar.gz -C /opt/
  [root@k8s-7-22 ~]# ln -s /opt/zookeeper-3.4.14 /opt/zookeeper
  [root@k8s-7-22 ~]# mkdir -pv /data/zookeeper/data /data/zookeeper/logs

Configure zookeeper:

  [root@k8s-7-22 ~]# vi /opt/zookeeper/conf/zoo.cfg
  tickTime=2000
  initLimit=10
  syncLimit=5
  dataDir=/data/zookeeper/data
  dataLogDir=/data/zookeeper/logs
  clientPort=2181

Start zookeeper:

  [root@k8s-7-22 ~]# /opt/zookeeper/bin/zkServer.sh start
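A quick health check:

  [root@k8s-7-22 ~]# /opt/zookeeper/bin/zkServer.sh status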

4.1 Install Kafka (single node)

4.1.1 Download the package

  [root@k8s-7-11 ~]# cd /opt/src
  [root@k8s-7-11 ~]# wget https://archive.apache.org/dist/kafka/2.2.0/kafka_2.12-2.2.0.tgz
  [root@k8s-7-11 ~]# tar xf kafka_2.12-2.2.0.tgz -C /opt/
  [root@k8s-7-11 ~]# ln -s /opt/kafka_2.12-2.2.0/ /opt/kafka
  [root@k8s-7-11 ~]# cd /opt/kafka

4.1.2 Adjust the configuration

  [root@k8s-7-11 ~]# mkdir -p /data/kafka/logs
  [root@k8s-7-11 ~]# vim config/server.properties
  log.dirs=/data/kafka/logs
  zookeeper.connect=k8s-7-22.host.top:2181   # zookeeper address
  log.flush.interval.messages=10000
  log.flush.interval.ms=1000
  # add the following two lines
  delete.topic.enable=true
  host.name=k8s-7-11.host.top

4.1.3 Start Kafka

  [root@k8s-7-11 ~]# bin/kafka-server-start.sh -daemon config/server.properties
  [root@k8s-7-11 ~]# netstat -luntp|grep 9092
  tcp6 0 0 10.4.7.11:9092 :::* LISTEN 34240/java
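Optionally, exercise the broker with a throwaway topic (the topic name here is arbitrary):

  [root@k8s-7-11 ~]# bin/kafka-topics.sh --bootstrap-server 10.4.7.11:9092 --create --topic smoke-test --partitions 1 --replication-factor 1
  [root@k8s-7-11 ~]# bin/kafka-topics.sh --bootstrap-server 10.4.7.11:9092 --list
  [root@k8s-7-11 ~]# bin/kafka-topics.sh --bootstrap-server 10.4.7.11:9092 --delete --topic smoke-test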

4.2 Get the kafka-manager docker image

On the ops host k8s-7-200.host.top:
kafka-manager is Yahoo's open-source web UI for Kafka. It is optional.

4.2.1 Option 1: build from a Dockerfile

1 Prepare the Dockerfile

  [root@k8s-7-200 ~]# vim /data/dockerfile/kafka-manager/Dockerfile
  FROM hseeberger/scala-sbt

  # the ZK address is hard-coded here; KM_VERSION is the kafka-manager version
  ENV ZK_HOSTS=10.4.7.22:2181 \
      KM_VERSION=2.0.0.2

  RUN mkdir -p /tmp && \
      cd /tmp && \
      wget https://github.com/yahoo/kafka-manager/archive/${KM_VERSION}.tar.gz && \
      tar xf ${KM_VERSION}.tar.gz && \
      cd /tmp/kafka-manager-${KM_VERSION} && \
      sbt clean dist && \
      unzip -d / ./target/universal/kafka-manager-${KM_VERSION}.zip && \
      rm -fr /tmp/${KM_VERSION} /tmp/kafka-manager-${KM_VERSION}

  WORKDIR /kafka-manager-${KM_VERSION}

  EXPOSE 9000
  ENTRYPOINT ["./bin/kafka-manager","-Dconfig.file=conf/application.conf"]

Known problems:
  1. kafka-manager has been renamed to CMAK; the tarball name and the directory inside it have changed.
  2. The sbt build downloads a great many dependencies and, for network reasons, is very slow; without a proxy it will most likely fail.
  3. My build failed; combined with point 1, this Dockerfile will probably need changes anyway.
  4. For production, be sure to build a fresh image of your own!

2 Build the docker image

  [root@k8s-7-200 ~]# cd /data/dockerfile/kafka-manager
  [root@k8s-7-200 ~]# docker build . -t harbor.itdo.top/infra/kafka-manager:v2.0.0.2
  (a very long process)
  [root@k8s-7-200 ~]# docker push harbor.itdo.top/infra/kafka-manager:v2.0.0.2

The build is extremely slow and will most likely fail, so option 2 below fetches a pre-built image instead.
Failure 1: dl.bintray.com has been shut down and is no longer reachable.

4.2.2 Option 2: pull a pre-built docker image

  [root@k8s-7-200 ~]# docker pull sheepkiller/kafka-manager:latest   # unofficial image; long unmaintained, version too old
  [root@k8s-7-200 ~]# docker pull stanleyws/kafka-manager:tagname    # the course author's image, but the ZK address is hard-coded to 10.4.7.11
  [root@k8s-7-200 ~]# docker load < kafka-manager-v2.0.0.2.tar       # or load it locally; ZK address hard-coded to 10.4.7.11, but it can be overridden via an env var in deployment.yaml
  [root@k8s-7-200 ~]# docker images|grep kafka-manager
  [root@k8s-7-200 ~]# docker tag 4e4a8c5dabab harbor.itdo.top/infra/kafka-manager:v2.0.0.2
  [root@k8s-7-200 ~]# docker push harbor.itdo.top/infra/kafka-manager:v2.0.0.2

4.3 Deploy kafka-manager

  [root@k8s-7-200 ~]# mkdir /data/k8s-yaml/kafka-manager && cd /data/k8s-yaml/kafka-manager

4.3.1 Prepare the Deployment manifest

  [root@k8s-7-200 ~]# vim deployment.yaml
  kind: Deployment
  apiVersion: extensions/v1beta1   # deprecated; removed in k8s 1.16 and later, use apps/v1 there
  metadata:
    name: kafka-manager
    namespace: infra
    labels:
      name: kafka-manager
  spec:
    replicas: 1
    selector:
      matchLabels:
        name: kafka-manager
    template:
      metadata:
        labels:
          app: kafka-manager
          name: kafka-manager
      spec:
        containers:
        - name: kafka-manager
          image: harbor.itdo.top/infra/kafka-manager:v2.0.0.2
          ports:
          - containerPort: 9000
            protocol: TCP
          env:
          - name: ZK_HOSTS
            value: zk1.itdo.top:2181
          - name: APPLICATION_SECRET
            value: letmein   # kafka-manager default secret
          imagePullPolicy: IfNotPresent
        imagePullSecrets:
        - name: harbor
        restartPolicy: Always
        terminationGracePeriodSeconds: 30
        securityContext:
          runAsUser: 0
        schedulerName: default-scheduler
    strategy:
      type: RollingUpdate
      rollingUpdate:
        maxUnavailable: 1
        maxSurge: 1
    revisionHistoryLimit: 7
    progressDeadlineSeconds: 600

4.3.2 Prepare the Service manifest

  [root@k8s-7-200 ~]# vim service.yaml
  kind: Service
  apiVersion: v1
  metadata:
    name: kafka-manager
    namespace: infra
  spec:
    ports:
    - protocol: TCP
      port: 9000
      targetPort: 9000
    selector:
      app: kafka-manager

4.3.3 Prepare the Ingress manifest

  [root@k8s-7-200 ~]# vim ingress.yaml
  kind: Ingress
  apiVersion: extensions/v1beta1
  metadata:
    name: kafka-manager
    namespace: infra
  spec:
    rules:
    - host: km.itdo.top
      http:
        paths:
        - path: /
          backend:
            serviceName: kafka-manager
            servicePort: 9000

4.3.4 Apply the resource manifests

On any worker node:

  [root@k8s-7-200 ~]# kubectl apply -f http://k8s-yaml.itdo.top/kafka-manager/deployment.yaml
  [root@k8s-7-200 ~]# kubectl apply -f http://k8s-yaml.itdo.top/kafka-manager/service.yaml
  [root@k8s-7-200 ~]# kubectl apply -f http://k8s-yaml.itdo.top/kafka-manager/ingress.yaml

4.3.5 Add a DNS record

On k8s-7-11.host.top:

  [root@k8s-7-11 ~]# vim /var/named/itdo.top.zone
  km A 10.4.7.10
  [root@k8s-7-11 ~]# systemctl restart named
  [root@k8s-7-11 ~]# dig -t A km.itdo.top @10.4.7.11 +short
  10.4.7.10

4.3.6 Access in a browser

Open http://km.itdo.top and add the cluster.

Then inspect the cluster information and the topics.

5 Deploying Filebeat

The Filebeat version should match the ES version as closely as possible.

  daf1a5e905c415daf68a8192a069f913a1d48e2c79e270da118385ba12a93aaa91bda4953c3402a6f0abf1c177f7bcc916a70bcac41977f69a6566565a8fae9c

This is the SHA checksum of the Filebeat tarball. To obtain it, pick the matching version on the download page and click "sha"; a text file containing the checksum is downloaded.

On the ops host k8s-7-200.host.top:

5.1 Build the docker image

  [root@k8s-7-200 ~]# mkdir /data/dockerfile/filebeat && cd /data/dockerfile/filebeat

5.1.1 Prepare the Dockerfile

There is also an official image you may consider: https://www.elastic.co/guide/en/beats/filebeat/current/running-on-docker.html

  [root@k8s-7-200 ~]# vim Dockerfile
  FROM debian:jessie
  # if you change the version, replace FILEBEAT_SHA1 with the official LINUX 64-BIT sha for that version
  ENV FILEBEAT_VERSION=7.5.1 \
      FILEBEAT_SHA1=daf1a5e905c415daf68a8192a069f913a1d48e2c79e270da118385ba12a93aaa91bda4953c3402a6f0abf1c177f7bcc916a70bcac41977f69a6566565a8fae9c

  RUN set -x && \
      apt-get update && \
      apt-get install -y wget && \
      wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-${FILEBEAT_VERSION}-linux-x86_64.tar.gz -O /opt/filebeat.tar.gz && \
      cd /opt && \
      echo "${FILEBEAT_SHA1} filebeat.tar.gz" | sha512sum -c - && \
      tar xzvf filebeat.tar.gz && \
      cd filebeat-* && \
      cp filebeat /bin && \
      cd /opt && \
      rm -rf filebeat* && \
      apt-get purge -y wget && \
      apt-get autoremove -y && \
      apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

  COPY docker-entrypoint.sh /
  ENTRYPOINT ["/bin/bash","/docker-entrypoint.sh"]

5.1.2 Prepare the startup script

docker-entrypoint.sh

  #!/bin/bash

  ENV=${ENV:-"test"}                   # environment the logs come from (test or prod); injected via the Deployment yaml
  PROJ_NAME=${PROJ_NAME:-"no-define"}  # project name, which determines the topic name; injected via the Deployment yaml
  MULTILINE=${MULTILINE:-"^\d{2}"}     # multiline pattern: a line starting with two digits starts a new event; adjust to your log format

  # generate the config file
  cat >/etc/filebeat.yaml << EOF
  filebeat.inputs:
  - type: log
    fields_under_root: true
    fields:
      topic: logm-${PROJ_NAME}
    # /logm holds multiline logs
    paths:
      - /logm/*.log
      - /logm/*/*.log
      - /logm/*/*/*.log
      - /logm/*/*/*/*.log
      - /logm/*/*/*/*/*.log
    scan_frequency: 120s
    max_bytes: 10485760
    multiline.pattern: ${MULTILINE}
    multiline.negate: true
    multiline.match: after
    multiline.max_lines: 100
  - type: log
    fields_under_root: true
    fields:
      topic: logu-${PROJ_NAME}
    # /logu holds single-line logs
    paths:
      - /logu/*.log
      - /logu/*/*.log
      - /logu/*/*/*.log
      - /logu/*/*/*/*.log
      - /logu/*/*/*/*/*.log
      - /logu/*/*/*/*/*/*.log
  output.kafka:
    hosts: ["10.4.7.11:9092"]        # kafka address(es); separate multiple brokers with commas
    topic: k8s-fb-${ENV}-%{[topic]}  # %{[topic]} expands inside filebeat to logu-${PROJ_NAME} or logm-${PROJ_NAME}
    version: 2.0.0                   # write 2.0.0 even for newer kafka; 2.0.0 is the highest protocol version supported here
    required_acks: 0
    max_message_bytes: 10485760
  EOF

  set -xe
  # start filebeat, or run whatever command was passed in
  if [[ "$1" == "" ]]; then
    exec filebeat -c /etc/filebeat.yaml
  else
    exec "$@"
  fi

[screenshot: sample log format]

Make the script executable:

  [root@k8s-7-200 ~]# chmod u+x docker-entrypoint.sh

5.1.3 Build the image

  [root@k8s-7-200 ~]# docker build . -t harbor.itdo.top/infra/filebeat:v7.5.1
  [root@k8s-7-200 ~]# docker push harbor.itdo.top/infra/filebeat:v7.5.1
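As a quick check (optional), the entrypoint execs whatever command is passed in, so you can render the generated config and ask filebeat to validate it; PROJ_NAME=demo here is just a placeholder value:

  [root@k8s-7-200 ~]# docker run --rm -e PROJ_NAME=demo harbor.itdo.top/infra/filebeat:v7.5.1 filebeat test config -c /etc/filebeat.yaml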


5.2 Run the pod with a Filebeat sidecar

5.2.1 Prepare the resource manifest

Use the dubbo-demo-consumer image built earlier, with filebeat running as a sidecar that collects its logs.
Edit the Deployment yaml:

  [root@k8s-7-200 ~]# vim /data/k8s-yaml/test/dubbo-demo-consumer/deployment-filebeat.yaml
  kind: Deployment
  apiVersion: extensions/v1beta1
  metadata:
    name: dubbo-demo-consumer
    namespace: test
    labels:
      name: dubbo-demo-consumer
  spec:
    replicas: 1
    selector:
      matchLabels:
        name: dubbo-demo-consumer
    template:
      metadata:
        labels:
          app: dubbo-demo-consumer
          name: dubbo-demo-consumer
        annotations:
          blackbox_path: "/hello?name=health"
          blackbox_port: "8080"
          blackbox_scheme: "http"
          prometheus_io_scrape: "true"
          prometheus_io_port: "12346"
          prometheus_io_path: "/"
      spec:
        containers:
        - name: dubbo-demo-consumer
          image: harbor.itdo.top/app/dubbo-demo-web:tomcat_191222_1200   # the tomcat image built earlier
          ports:
          - containerPort: 8080
            protocol: TCP
          env:
          - name: C_OPTS
            value: -Denv=fat -Dapollo.meta=http://config-test.itdo.top   # connect to Apollo
          imagePullPolicy: IfNotPresent
          #-------- added for log collection --------
          volumeMounts:
          - mountPath: /opt/tomcat/logs
            name: logm
        - name: filebeat
          image: harbor.itdo.top/infra/filebeat:v7.5.1   # the filebeat image
          imagePullPolicy: IfNotPresent
          env:
          - name: ENV
            value: test            # the environment
          - name: PROJ_NAME
            value: dubbo-demo-web  # the project name
          volumeMounts:
          - mountPath: /logm
            name: logm
        volumes:
        - emptyDir: {}   # allocated somewhere on the host; removed together with the pod
          name: logm
        #-------- end of additions --------
        imagePullSecrets:
        - name: harbor
        restartPolicy: Always
        terminationGracePeriodSeconds: 30
        securityContext:
          runAsUser: 0
        schedulerName: default-scheduler
    strategy:
      type: RollingUpdate
      rollingUpdate:
        maxUnavailable: 1
        maxSurge: 1
    revisionHistoryLimit: 7
    progressDeadlineSeconds: 600

5.2.2 Apply the manifest

On any node:

  [root@k8s-7-200 ~]# kubectl apply -f http://k8s-yaml.itdo.top/test/dubbo-demo-consumer/deployment-filebeat.yaml

After it starts, check the consumer: it should now have 3 containers running.

Check that Tomcat started and is reachable.


Check the Tomcat logs. Note: exec into the filebeat container of the dubbo-demo-consumer pod and check whether the logm directory contains logs.
Think of it as filebeat and tomcat being two containers sharing one ephemeral volume; nothing is persisted there, and the log data ultimately lives in ES.

  [root@k8s-7-22 ~]# kubectl -n test exec -it k8s_filebeat-dobbo-demo-consumer-...... /bin/bash
  [root@k8s-7-22 ~]# ls /logm

Note: filebeat and dubbo-demo-consumer share the pod IP; the UTS, net, and user namespaces are shared.

5.2.3 Verify

Open http://km.itdo.top in a browser: if the topic shows up in kafka-manager, the pipeline works.

Verify the topic data:

  [root@k8s-7-11 ~]# cd /opt/kafka/bin/
  [root@k8s-7-11 ~]# ./kafka-console-consumer.sh --bootstrap-server 10.4.7.11:9092 --topic k8s-fb-prod-logm-dubbo-demo-web --from-beginning

--from-beginning reads the topic's historical, not-yet-consumed data.

6 Deploying Logstash

On the ops host k8s-7-200.host.top.
Compatibility matrix: https://www.elastic.co/cn/support/matrix
Note: the version should match the ES version as closely as possible.
Logstash is started per environment; with enough resources, you can also run one per project.

6.1 Prepare the docker image

Official logstash images: https://hub.docker.com/_/logstash?tab=tags

6.1.1 Pull the official image

  [root@k8s-7-200 ~]# docker pull logstash:6.8.6
  [root@k8s-7-200 ~]# docker tag d0a2dac51fcb harbor.itdo.top/infra/logstash:v6.8.6
  [root@k8s-7-200 ~]# docker push harbor.itdo.top/infra/logstash:v6.8.6

6.1.2 Prepare the config files

Create the directory:

  [root@k8s-7-200 ~]# mkdir /etc/logstash/

Create test.conf:

  [root@k8s-7-200 ~]# vim /etc/logstash/logstash-test.conf
  input {
    kafka {
      bootstrap_servers => "10.4.7.11:9092"
      client_id => "10.4.7.200"           # logstash runs on the ops host k8s-7-200
      consumer_threads => 4
      group_id => "k8s_test"              # the consumer group for the test environment
      topics_pattern => "k8s-fb-test-.*"  # only consume topics starting with k8s-fb-test
    }
  }

  filter {
    json {
      source => "message"
    }
  }

  output {
    elasticsearch {
      hosts => ["10.4.7.12:9200"]         # ship to ES
      index => "k8s-test-%{+YYYY.MM.DD}"  # per-day indices; with little traffic, YYYY.MM collects per month
                                          # (note: uppercase DD is Joda day-of-year, which is why the indices
                                          # later look like k8s-test-2022.02.55; lowercase dd is day-of-month)
    }
  }
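Before starting the long-running containers, you can syntax-check a pipeline file with logstash's --config.test_and_exit flag:

  [root@k8s-7-200 ~]# docker run --rm -v /etc/logstash:/etc/logstash harbor.itdo.top/infra/logstash:v6.8.6 -f /etc/logstash/logstash-test.conf --config.test_and_exit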

创建prod.conf

  1. [root@k8s-7-200 ~]#vim /etc/logstash/logstash-prod.conf
  2. input {
  3. kafka {
  4. bootstrap_servers => "10.4.7.11:9092"
  5. client_id => "10.4.7.200"
  6. consumer_threads => 4
  7. group_id => "k8s_prod"
  8. topics_pattern => "k8s-fb-prod-.*"
  9. }
  10. }

  11. filter {
  12. json {
  13. source => "message"
  14. }
  15. }

  16. output {
  17. elasticsearch {
  18. hosts => ["10.4.7.12:9200"]
  19. index => “k8s-prod-%{+YYYY.MM.DD}"
  20. }
  21. }

Create dev.conf:

  [root@k8s-7-200 ~]# vim /etc/logstash/logstash-dev.conf
  input {
    kafka {
      bootstrap_servers => "10.4.7.11:9092"
      client_id => "10.4.7.200"
      consumer_threads => 4
      group_id => "k8s_dev"
      topics_pattern => "k8s-fb-dev-.*"
    }
  }

  filter {
    json {
      source => "message"
    }
  }

  output {
    elasticsearch {
      hosts => ["10.4.7.12:9200"]
      index => "k8s-dev-%{+YYYY.MM.DD}"
    }
  }

6.2 Start Logstash

6.2.1 Start the logstash containers (test, prod, dev)

Mount the host config directory and point each container at its environment's pipeline file:

  [root@k8s-7-200 ~]# docker run -d \
      --restart=always \
      --name logstash-test \
      -v /etc/logstash:/etc/logstash \
      harbor.itdo.top/infra/logstash:v6.8.6 \
      -f /etc/logstash/logstash-test.conf

  [root@k8s-7-200 ~]# docker run -d \
      --restart=always \
      --name logstash-prod \
      -v /etc/logstash:/etc/logstash \
      harbor.itdo.top/infra/logstash:v6.8.6 \
      -f /etc/logstash/logstash-prod.conf

  [root@k8s-7-200 ~]# docker run -d \
      --restart=always \
      --name logstash-dev \
      -v /etc/logstash:/etc/logstash \
      harbor.itdo.top/infra/logstash:v6.8.6 \
      -f /etc/logstash/logstash-dev.conf

  # check the container status
  [root@k8s-7-200 ~]# docker ps -a|grep logstash


6.2.2 Check that ES is receiving data

First hit demo-test.itdo.top/hello?name=xx a few times to generate access logs.

  [root@k8s-7-200 logstash]# curl http://10.4.7.12:9200/_cat/indices?v
  health status index               uuid                   pri rep docs.count docs.deleted store.size pri.store.size
  green  open   k8s-test-2022.02.55 YyAhuCVrQDGlO_zyU1QwlQ   5   0          8            0       96kb           96kb
  green  open   k8s-dev-2022.02.55  96ocIPrJR5m8jdKtZz4iIA   5   0         33            0    380.5kb        380.5kb
  green  open   k8s-prod-2022.02.55 KFavKaavRCyOfGGwttbIhg   5   0
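To peek at a stored document (the index wildcard matches the listing above):

  [root@k8s-7-200 ~]# curl 'http://10.4.7.12:9200/k8s-test-*/_search?size=1&pretty'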

7 Deploying Kibana

On the ops host k8s-7-200.host.top.

7.1 Prepare the resources

7.1.1 Prepare the docker image

  [root@k8s-7-200 ~]# docker pull kibana:6.8.6
  [root@k8s-7-200 ~]# docker tag adfab5632ef4 harbor.itdo.top/infra/kibana:v6.8.6
  [root@k8s-7-200 ~]# docker push harbor.itdo.top/infra/kibana:v6.8.6

7.1.2 Prepare the directory

  [root@k8s-7-200 ~]# mkdir /data/k8s-yaml/kibana && cd /data/k8s-yaml/kibana

7.1.3 Prepare the Deployment manifest

  [root@k8s-7-200 ~]# vim deployment.yaml
  kind: Deployment
  apiVersion: extensions/v1beta1
  metadata:
    name: kibana
    namespace: infra
    labels:
      name: kibana
  spec:
    replicas: 1
    selector:
      matchLabels:
        name: kibana
    template:
      metadata:
        labels:
          app: kibana
          name: kibana
      spec:
        containers:
        - name: kibana
          image: harbor.itdo.top/infra/kibana:v6.8.6
          imagePullPolicy: IfNotPresent
          ports:
          - containerPort: 5601
            protocol: TCP
          env:
          - name: ELASTICSEARCH_URL   # the ES address
            value: http://10.4.7.12:9200
        imagePullSecrets:
        - name: harbor
        securityContext:
          runAsUser: 0
    strategy:
      type: RollingUpdate
      rollingUpdate:
        maxUnavailable: 1
        maxSurge: 1
    revisionHistoryLimit: 7
    progressDeadlineSeconds: 600

7.1.4 Prepare the Service manifest

  [root@k8s-7-200 ~]# vim service.yaml
  kind: Service
  apiVersion: v1
  metadata:
    name: kibana
    namespace: infra
  spec:
    ports:
    - protocol: TCP
      port: 5601
      targetPort: 5601
    selector:
      app: kibana

7.1.5 Prepare the Ingress manifest

  [root@k8s-7-200 ~]# vim ingress.yaml
  kind: Ingress
  apiVersion: extensions/v1beta1
  metadata:
    name: kibana
    namespace: infra
  spec:
    rules:
    - host: kibana.itdo.top
      http:
        paths:
        - path: /
          backend:
            serviceName: kibana
            servicePort: 5601

7.2 Apply the resources

7.2.1 Apply the resource manifests

  [root@k8s-7-200 ~]# kubectl apply -f http://k8s-yaml.itdo.top/kibana/deployment.yaml
  [root@k8s-7-200 ~]# kubectl apply -f http://k8s-yaml.itdo.top/kibana/service.yaml
  [root@k8s-7-200 ~]# kubectl apply -f http://k8s-yaml.itdo.top/kibana/ingress.yaml

7.2.2 解析域名

  1. [root@k8s-7-11 ~]# vim /var/named/itdo.top.zone
  2. kibana A 10.4.7.10
  3. [root@k8s-7-11 ~]# systemctl restart named
  4. [root@k8s-7-11 ~]# dig -t A kibana.itdo.top @10.4.7.11 +short
  5. 10.4.7.10

7.2.3 Access in a browser

Open http://kibana.itdo.top

Newer Kibana versions ship with built-in monitoring; enable it and you can see the ES cluster's information.

7.3 Using Kibana

Create the index pattern for the test environment. Once it is created, click Discover and the logs appear.

Create the prod environment index pattern the same way, then the dev environment index pattern.

  1. Area selector
  2. Time picker -- choose the time window for the logs:
     • quick ranges
     • absolute time
     • relative time
  3. Environment selector -- choose the index pattern for the environment:
     • k8s-test-*
     • k8s-prod-*
     • k8s-dev-*
  4. Project selector -- corresponds to filebeat's PROJ_NAME value:
     • Add a filter
     • topic is ${PROJ_NAME}, e.g. dubbo-demo-service or dubbo-demo-web
  5. Keyword search (a couple of example queries follow this list):
     • exception
     • error
     • other business keywords
     • regular expressions are supported
     Tip: stop the test-environment dubbo-demo-service and hit http://demo-test.od.com/hello?name=test to generate error logs.
     Add the commonly used message, log.file.path, and hostname fields as columns.
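A couple of illustrative Discover queries (Lucene syntax; the topic field comes from the filebeat setup above):

  topic:dubbo-demo-web AND message:error
  message:exception OR message:error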

Log handling for the dubbo-demo-service (jar) project

a. Modify jre8/entrypoint.sh so the service writes its logs under /opt/logs (the path must match the volume mount in the manifest below).


b. Modify the Jenkins pipeline: add the freshly built base image to the base_image choices.


c. Rebuild the dubbo-demo-service image with the following parameters:

  app_name   : dubbo-demo-service
  image_name : app/dubbo-demo-service
  git_repo   : https://gitee.com/itdotop/dubbo-demo-service.git
  git_ver    : apollo
  add_tag    : 220224_1500_with_logs
  mvn_dir    : ./
  target_dir : ./dubbo-server/target
  mvn_cmd    : mvn clean package -Dmaven.test.skip=true
  base_image : base/jre8:8u112_with_logs
  maven      : 3.6.1-8u221

d. Modify the dev environment resource manifest

  [root@k8s-7-200 dubbo-demo-service]# vim deployment-filebeat.yaml
  kind: Deployment
  apiVersion: extensions/v1beta1
  metadata:
    name: dubbo-demo-service
    namespace: app
    labels:
      name: dubbo-demo-service
  spec:
    replicas: 1
    selector:
      matchLabels:
        name: dubbo-demo-service
    template:
      metadata:
        labels:
          app: dubbo-demo-service
          name: dubbo-demo-service
        annotations:
          blackbox_port: "20880"
          blackbox_scheme: tcp
          prometheus_io_path: /
          prometheus_io_port: "12346"
          prometheus_io_scrape: "true"
      spec:
        containers:
        - name: dubbo-demo-service
          image: harbor.itdo.top/app/dubbo-demo-service:apollo_220224_1500_with_logs   # point at the new image
          ports:
          - containerPort: 20880
            protocol: TCP
          env:
          - name: C_OPTS
            value: -Denv=dev -Dapollo.meta=http://config.itdo.top
          - name: JAR_BALL
            value: dubbo-server.jar
          imagePullPolicy: IfNotPresent
          #---- added for log collection ----
          volumeMounts:
          - mountPath: /opt/logs   # the log path inside the app container
            name: logm
        - name: filebeat
          image: harbor.itdo.top/infra/filebeat:v7.5.1
          imagePullPolicy: IfNotPresent
          env:
          - name: ENV
            value: dev   # the environment
          - name: PROJ_NAME
            value: dubbo-demo-service   # the project name
          volumeMounts:
          - mountPath: /logm
            name: logm
        volumes:
        - emptyDir: {}
          name: logm
        #---- end of additions ----
        imagePullSecrets:
        - name: harbor
        restartPolicy: Always
        terminationGracePeriodSeconds: 30
        securityContext:
          runAsUser: 0
        schedulerName: default-scheduler
    strategy:
      type: RollingUpdate
      rollingUpdate:
        maxUnavailable: 1
        maxSurge: 1
    revisionHistoryLimit: 7
    progressDeadlineSeconds: 600

f. Apply the dev environment manifest (test and prod are done the same way):

  [root@k8s-7-21 src]# kubectl apply -f http://k8s-yaml.itdo.top/dubbo-demo-service/deployment-filebeat.yaml
  [root@k8s-7-21 src]# kubectl apply -f http://k8s-yaml.itdo.top/test/dubbo-demo-service/deployment-filebeat.yaml
  [root@k8s-7-21 src]# kubectl apply -f http://k8s-yaml.itdo.top/prod/dubbo-demo-service/deployment-filebeat.yaml

g. Check the kafka topics.

h. View the ES logs in Kibana.