Real-time daily PV/UV statistics with Spark Streaming and Redis, with results written to MySQL for front-end display

I recently had a requirement to compute PV and UV in real time and display the results as date, hour, pv, uv. The statistics are per day and start over the next day. In practice the counts also have to be broken down by a type field, i.e. displayed as date, hour, pv, uv, type, but this post only covers the basic PV/UV case.
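To make the output shape concrete, here is a minimal sketch of one result row as a Scala case class; the field names are illustrative, not the actual table definition:

case class PvUvRow(
  date: String, // e.g. "2018-07-08"
  hour: Int,    // 0-23
  pv: Long,     // page views accumulated for this hour of the day
  uv: Long      // distinct users (guid) seen so far that day
)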

1. Project workflow


Log data is collected by Flume and lands on HDFS for other offline jobs; it is also sunk to Kafka. Spark Streaming pulls the data from Kafka and computes PV and UV, using a Redis set to deduplicate UV, and finally writes the results into a MySQL database for the front end to display.

2. Implementation details

1) Computing PV

There are two ways to pull data from Kafka, the receiver-based approach and the direct approach; here the direct approach is used. State is kept with the mapWithState operator, which plays the same role as updateStateByKey but performs better. In practice the incoming data has to be cleaned and filtered before it can be used, roughly as sketched below.
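The real cleaning step lives in FilterHelper.getHelperData, which is not shown in this post. The following is only a rough sketch of what such a step might look like, assuming each Kafka record has already been parsed into a fastjson JSONObject carrying datehour, guid and helperversion fields (these field names are assumptions):

import com.alibaba.fastjson.JSONObject
import org.apache.spark.streaming.dstream.DStream

object FilterHelperSketch {
  // Keep only records that carry the three fields we need and drop malformed ones,
  // returning (dateHour, guid, helperversion) tuples.
  def getHelperData(in: DStream[JSONObject]): DStream[(String, String, String)] = {
    in.filter(json =>
        json != null &&
        json.containsKey("datehour") &&
        json.containsKey("guid") &&
        json.containsKey("helperversion"))
      .map(json => (json.getString("datehour"), json.getString("guid"), json.getString("helperversion")))
  }
}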

Define a state update function:

// state update function for the real-time traffic counts
  val mapFunction = (datehour:String, pv:Option[Long], state:State[Long]) => {
    val accuSum = pv.getOrElse(0L) + state.getOption().getOrElse(0L)
    val output = (datehour,accuSum)
    state.update(accuSum)
    output
  }
Compute PV:
 val stateSpec = StateSpec.function(mapFunction)
 val helper_count_all = helper_data.map(x => (x._1,1L)).mapWithState(stateSpec).stateSnapshots().repartition(2)

That is all it takes to compute PV: mapWithState accumulates the count for each datehour key, and stateSnapshots() emits the full cumulative snapshot on every batch.
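As a quick illustration of what stateSnapshots() produces (the numbers are made up, assuming keys of the form "2018-07-08:10"):

// batch 1 contains 3 events for "2018-07-08:10"   -> the snapshot emits ("2018-07-08:10", 3)
// batch 2 contains 2 more events for the same key -> the snapshot emits ("2018-07-08:10", 5)
// keys with no new events are still present in every snapshot with their last value
helper_count_all.foreachRDD(rdd => rdd.take(10).foreach(println)) // print a few rows as a sanity check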

2) Computing UV

UV has to be deduplicated over the whole day, one batch at a time. Doing this with the native reduceByKey or groupByKey would demand too much of the cluster, so on our fairly modest hardware we provisioned a 93 GB Redis instance for deduplication. The idea: for every incoming record, add the guid to a Redis set whose key contains the date; every 20 seconds, read the size of that set and write it to the database.

helper_data_dis.foreachRDD(rdd => {
  rdd.foreachPartition(eachPartition => {
    var jedis: Jedis = null
    try {
      jedis = getJedis
      eachPartition.foreach(x => {
        val arr = x._2.split("\t")
        val date: String = arr(0).split(":")(0)

        // per-day helper UV set
        val key0 = "helper_" + date
        jedis.sadd(key0, x._1)
        jedis.expire(key0, ConfigFactory.rediskeyexists)
        // per-helperversion UV set
        val key = date + "_" + arr(1)
        jedis.sadd(key, x._1)
        jedis.expire(key, ConfigFactory.rediskeyexists)
      })
    } catch {
      case e: Exception => {
        logger.error(e)
        logger2.error(HelperHandle.getClass.getSimpleName + e)
      }
    } finally {
      if (jedis != null) {
        closeJedis(jedis)
      }
    }
  })
})

// get a Jedis connection from the pool
def getJedis: Jedis = {
  val jedis = RedisPoolUtil.getPool.getResource
  jedis
}

// return the Jedis connection to the pool
def closeJedis(jedis: Jedis): Unit = {
  RedisPoolUtil.getPool.returnResource(jedis)
}
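The read side, which appears later inside insertHelper, reduces to a single scard call. A minimal sketch (the date literal here is just an example):

// daily UV = size of the dedup set written by the code above
val jedis = getJedis
val dailyUv = jedis.scard("helper_" + "2018-07-08") // in the job, the current date is used
closeJedis(jedis)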

Redis connection-pool code, RedisPoolUtil.scala:

package com.js.ipflow.utils

import com.js.ipflow.start.ConfigFactory
import org.apache.commons.pool2.impl.GenericObjectPoolConfig
import redis.clients.jedis.JedisPool

/**
  * Redis connection-pool utility class
  * @author keguang
  */

object RedisPoolUtil extends Serializable{
  @transient private var pool: JedisPool = null

  /**
    * Read the Jedis configuration and trigger pool initialization
    */
  def initJedis: Unit ={
    ConfigFactory.initConfig()
    val maxTotal = 50
    val maxIdle = 30
    val minIdle = 10
    val redisHost = ConfigFactory.redishost
    val redisPort = ConfigFactory.redisport
    val redisTimeout = ConfigFactory.redistimeout
    val redisPassword = ConfigFactory.redispassword
    makePool(redisHost, redisPort, redisTimeout, redisPassword, maxTotal, maxIdle, minIdle)
  }

  def makePool(redisHost: String, redisPort: Int, redisTimeout: Int,redisPassword:String, maxTotal: Int, maxIdle: Int, minIdle: Int): Unit = {
   init(redisHost, redisPort, redisTimeout, redisPassword, maxTotal, maxIdle, minIdle, true, false, 10000)
  }

  /**
    * Initialize the Jedis connection pool
    * @param redisHost host
    * @param redisPort port
    * @param redisTimeout Redis connection timeout
    * @param redisPassword Redis password
    * @param maxTotal maximum number of connections
    * @param maxIdle maximum number of idle connections
    * @param minIdle minimum number of idle connections
    * @param testOnBorrow
    * @param testOnReturn
    * @param maxWaitMillis
    */
  def init(redisHost: String, redisPort: Int, redisTimeout: Int,redisPassword:String, maxTotal: Int, maxIdle: Int, minIdle: Int, testOnBorrow: Boolean, testOnReturn: Boolean, maxWaitMillis: Long): Unit = {
    if (pool == null) {
      val poolConfig = new GenericObjectPoolConfig()
      poolConfig.setMaxTotal(maxTotal)
      poolConfig.setMaxIdle(maxIdle)
      poolConfig.setMinIdle(minIdle)
      poolConfig.setTestOnBorrow(testOnBorrow)
      poolConfig.setTestOnReturn(testOnReturn)
      poolConfig.setMaxWaitMillis(maxWaitMillis)
      pool = new JedisPool(poolConfig, redisHost, redisPort, redisTimeout,redisPassword)

      val hook = new Thread {
        override def run = pool.destroy()
      }
      sys.addShutdownHook(hook.run)
    }
  }

  def getPool: JedisPool = {
    if(pool == null){
      initJedis
    }
    pool
  }

}

3) Saving results to the database

The results are saved to MySQL and refreshed every 20 seconds. Each time the front end refreshes, it re-queries the database, which gives a real-time PV/UV display. Note that the insert ... on duplicate key update statements below rely on a unique key on the target table (most naturally the dh/datehour column), so repeated writes for the same hour update the existing row instead of inserting a new one.

/**
  * Insert data into MySQL
  *
  * @param data (addTab(datehour)+helperversion)
  * @param tbName
  * @param colNames
  */
def insertHelper(data: DStream[(String, Long)], tbName: String, colNames: String*): Unit = {
  data.foreachRDD(rdd => {
    val tmp_rdd = rdd.map(x => x._1.substring(11, 13).toInt)
    if (!rdd.isEmpty()) {
      val hour_now = tmp_rdd.max() // the latest hour present in this batch; useful when recovering data
      rdd.foreachPartition(eachPartition => {
        var jedis: Jedis = null
        var conn: Connection = null
        try {
          jedis = getJedis
          conn = MysqlPoolUtil.getConnection()
          conn.setAutoCommit(false)
          val stmt = conn.createStatement()
          eachPartition.foreach(x => {
            if (colNames.length == 7) {
              val datehour = x._1.split("\t")(0)
              val helperversion = x._1.split("\t")(1)
              val date_hour = datehour.split(":")
              val date = date_hour(0)
              val hour = date_hour(1).toInt

              val colName0 = colNames(0) // date
              val colName1 = colNames(1) // hour
              val colName2 = colNames(2) // count_all
              val colName3 = colNames(3) // count
              val colName4 = colNames(4) // helperversion
              val colName5 = colNames(5) // datehour
              val colName6 = colNames(6) // dh

              val colValue0 = addYin(date)
              val colValue1 = hour
              val colValue2 = x._2.toInt
              val colValue3 = jedis.scard(date + "_" + helperversion) // e.g. key 2018-07-08_10.0.1.22
              val colValue4 = addYin(helperversion)
              var colValue5 = if (hour < 10) "'" + date + " 0" + hour + ":00 " + helperversion + "'" else "'" + date + " " + hour + ":00 " + helperversion + "'"
              val colValue6 = if (hour < 10) "'" + date + " 0" + hour + ":00'" else "'" + date + " " + hour + ":00'"

              var sql = ""
              if (hour == hour_now) { // UV is only updated for the current hour
                sql = s"insert into ${tbName}(${colName0},${colName1},${colName2},${colName3},${colName4},${colName5},${colName6}) values(${colValue0},${colValue1},${colValue2},${colValue3},${colValue4},${colValue5},${colValue6}) on duplicate key update ${colName2} =  ${colValue2},${colName3} = ${colValue3}"
                logger.warn(sql)
                stmt.addBatch(sql)
              } /* else {
              sql = s"insert into ${tbName}(${colName0},${colName1},${colName2},${colName4},${colName5},${colName6}) values(${colValue0},${colValue1},${colValue2},${colValue4},${colValue5},${colValue6}) on duplicate key update ${colName2} =  ${colValue2}"
            }*/
            } else if (colNames.length == 5) {
              val date_hour = x._1.split(":")
              val date = date_hour(0)
              val hour = date_hour(1).toInt
              val colName0 = colNames(0) // date
              val colName1 = colNames(1) // hour
              val colName2 = colNames(2) // helper_count_all
              val colName3 = colNames(3) // helper_count
              val colName4 = colNames(4) // dh

              val colValue0 = addYin(date)
              val colValue1 = hour
              val colValue2 = x._2.toInt
              val colValue3 = jedis.scard("helper_" + date) // e.g. key helper_2018-07-08
              val colValue4 = if (hour < 10) "'" + date + " 0" + hour + ":00'" else "'" + date + " " + hour + ":00'"

              var sql = ""
              if (hour == hour_now) { // UV is only updated for the current hour
                sql = s"insert into ${tbName}(${colName0},${colName1},${colName2},${colName3},${colName4}) values(${colValue0},${colValue1},${colValue2},${colValue3},${colValue4}) on duplicate key update ${colName2} =  ${colValue2},${colName3} = ${colValue3}"
                logger.warn(sql)
                stmt.addBatch(sql)
              }
            }
          })
          stmt.executeBatch() // execute the batched SQL statements
          conn.commit()
        } catch {
          case e: Exception => {
            logger.error(e)
            logger2.error(HelperHandle.getClass.getSimpleName + e)
          }
        } finally {
          if (jedis != null) {
            closeJedis(jedis)
          }

          if(conn != null){
            conn.close()
          }
        }
      })
    }
  })
}

// milliseconds from now until the next midnight
def resetTime = {
    val now = new Date()
    val todayEnd = Calendar.getInstance
    todayEnd.set(Calendar.HOUR_OF_DAY, 23) // Calendar.HOUR would be the 12-hour field
    todayEnd.set(Calendar.MINUTE, 59)
    todayEnd.set(Calendar.SECOND, 59)
    todayEnd.set(Calendar.MILLISECOND, 999)
    todayEnd.getTimeInMillis - now.getTime
 }

MySQL connection-pool code, MysqlPoolUtil.scala:

package com.js.ipflow.utils

import java.sql.{Connection, PreparedStatement, ResultSet}

import com.js.ipflow.start.ConfigFactory
import org.apache.commons.dbcp.BasicDataSource
import org.apache.logging.log4j.LogManager

/**
  * JDBC MySQL connection-pool utility class
  * @author keguang
  */
object MysqlPoolUtil {

  val logger = LogManager.getLogger(MysqlPoolUtil.getClass.getSimpleName)

  private var bs:BasicDataSource = null

  /**
    * Create the data source
    * @return
    */
  def getDataSource():BasicDataSource={
    if(bs==null){
      ConfigFactory.initConfig()
      bs = new BasicDataSource()
      bs.setDriverClassName("com.mysql.jdbc.Driver")
      bs.setUrl(ConfigFactory.mysqlurl)
      bs.setUsername(ConfigFactory.mysqlusername)
      bs.setPassword(ConfigFactory.mysqlpassword)
      bs.setMaxActive(50)           // maximum number of active connections
      bs.setInitialSize(20)          // connections created when the pool is initialized
      bs.setMinIdle(20)              // minimum number of idle connections kept without creating new ones
      bs.setMaxIdle(20)             // maximum number of idle connections kept in the pool; 0 means unlimited
      bs.setMaxWait(5000)             // max wait for a connection when none is available before throwing; -1 means wait forever
      bs.setMinEvictableIdleTimeMillis(10*1000)     // idle connections become evictable after 10 seconds
      bs.setTimeBetweenEvictionRunsMillis(1*60*1000)      // run the idle-connection evictor once a minute
      bs.setTestOnBorrow(true)
    }
    bs
  }

  /**
    * Shut down the data source
    */
  def shutDownDataSource(){
    if(bs!=null){
      bs.close()
    }
  }

  /**
    * Get a database connection
    * @return
    */
  def getConnection():Connection={
    var con:Connection = null
    try {
      if(bs!=null){
        con = bs.getConnection()
      }else{
        con = getDataSource().getConnection()
      }
    } catch{
      case e:Exception => logger.error(e)
    }
    con
  }

  /**
    * Close the result set, statement and connection
    */
  def closeCon(rs:ResultSet ,ps:PreparedStatement,con:Connection){
    if(rs!=null){
      try {
        rs.close()
      } catch{
        case e:Exception => println(e.getMessage)
      }
    }
    if(ps!=null){
      try {
        ps.close()
      } catch{
        case e:Exception => println(e.getMessage)
      }
    }
    if(con!=null){
      try {
        con.close()
      } catch{
        case e:Exception => println(e.getMessage)
      }
    }
  }
}

4) Fault tolerance

Any streaming job that consumes Kafka has to think about data loss. Offsets can in principle be saved to almost any store, e.g. MySQL, HDFS, HBase, Redis or ZooKeeper. Here we use Spark Streaming's built-in checkpoint mechanism to recover state and offsets when the application restarts.

checkpoint

With checkpointing, after a restart (planned or after a failure) the job can pick up the unfinished batches and resume reading Kafka from the corresponding offsets.

// initialize the configuration
ConfigFactory.initConfig()

val conf = new SparkConf().setAppName(ConfigFactory.sparkstreamname)
conf.set("spark.streaming.stopGracefullyOnShutdown","true")
conf.set("spark.streaming.kafka.maxRatePerPartition",consumeRate)
conf.set("spark.default.parallelism","24")
val sc = new SparkContext(conf)

while (true){
    val ssc = StreamingContext.getOrCreate(ConfigFactory.checkpointdir + DateUtil.getDay(0),getStreamingContext _ )
    ssc.start()
    ssc.awaitTerminationOrTimeout(resetTime)
    ssc.stop(false,true)
}






There is one checkpoint directory per day. Shortly after midnight the StreamingContext is destroyed on a timer and rebuilt, so the PV/UV statistics start over for the new day.

Note
ssc.stop(false, true) gracefully stops only the StreamingContext and must not destroy the SparkContext; ssc.stop(true, true) would stop the SparkContext as well and the whole program would simply exit.
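For reference, the two boolean parameters of StreamingContext.stop are stopSparkContext and stopGracefully, so the call above can be written with named arguments:

// stop only the StreamingContext, let in-flight batches finish, and keep the SparkContext
// alive so a new StreamingContext can be created for the next day
ssc.stop(stopSparkContext = false, stopGracefully = true)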

Application migration or upgrades

Along the way we upgraded the application a few times, for example when a feature was incomplete or contained a logic bug. That means changing the code and rebuilding the jar. If you just stop the job and start the new build, it will still read the old checkpoint, which can cause two problems:

  1. The old code keeps running, because the checkpoint also contains the serialized program;
  2. The job fails outright because deserialization fails.

Sometimes a code change does take effect without deleting the checkpoint. After a lot of testing I found that changes to the data-filtering logic, or changes to how state is saved, make the restart fail, and only deleting the checkpoint helps. But deleting the checkpoint also loses the unfinished batches and the Kafka offsets, i.e. it loses data. In that situation I usually do the following.

Deploy the new build on another cluster, or simply point it at a different checkpoint directory; since our code and configuration are separated, changing the checkpoint location in the config file is easy. Then run both programs in parallel: apart from the checkpoint directory, which is created fresh, they write into the same database. After they have run side by side for a while, stop the old one. I had read this suggestion in the official docs before, but it only really sank in once I had to guarantee data correctness myself.

5) Logging

Logging uses log4j2. A copy is kept locally, and ERROR-level messages are additionally sent to a mailbox through an email appender.

val logger = LogManager.getLogger(HelperHandle.getClass.getSimpleName)
  // logger whose ERROR-level messages are sent by email
  val logger2 = LogManager.getLogger("email")

3. Main code

Required Maven dependencies:

        
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.11</artifactId>
            <version>${spark.version}</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-streaming_2.11</artifactId>
            <version>${spark.version}</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <version>5.1.40</version>
        </dependency>
        <dependency>
            <groupId>commons-dbcp</groupId>
            <artifactId>commons-dbcp</artifactId>
            <version>1.4</version>
            <scope>provided</scope>
        </dependency>

Configuration-reading code, ConfigFactory.java:

package com.js.ipflow.start;

import com.google.common.io.Resources;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.dom4j.Document;
import org.dom4j.DocumentException;
import org.dom4j.Element;
import org.dom4j.io.SAXReader;

import java.io.File;

public class ConfigFactory {
    private final static Logger log = LogManager.getLogger("email");

    public static String kafkaipport;
    public static String kafkazookeeper;
    public static String kafkatopic;
    public static String kafkagroupid;
    public static String mysqlurl;
    public static String mysqlusername;
    public static String mysqlpassword;
    public static String redishost;
    public static int redisport;
    public static String redispassword;
    public static int redistimeout;
    public static int rediskeyexists;
    public static String sparkstreamname;
    public static int sparkstreamseconds;
    public static String sparkstreammaster = "spark://qcloud-spark01:7077"; // for local testing only
    public static String localpath;
    public static String checkpointdir;
    // public static String gracestopfile; // file used to gracefully kill the program
    public static String keydeserilizer;
    public static String valuedeserilizer;

    /**
     * Initialize all common configuration
     */
    public static void initConfig(){readCommons();}

    /**
     * Read the commons.xml file
     */
    private static void readCommons(){
        SAXReader reader = new SAXReader(); // build the XML parser
        Document document = null;
        try{
            document = reader.read(Resources.getResource("commons.xml"));
        }catch (DocumentException e){
            log.error("ConfigFactory.readCommons",e);
        }

        if(document != null){
            Element root = document.getRootElement();

            Element kafkaElement = root.element("kafka");
            kafkaipport = kafkaElement.element("ipport").getText();
            kafkazookeeper = kafkaElement.element("zookeeper").getText();
            kafkatopic = kafkaElement.element("topic").getText();
            kafkagroupid = kafkaElement.element("groupid").getText();
            keydeserilizer=kafkaElement.element("keySer").getText();
            valuedeserilizer=kafkaElement.element("valSer").getText();

            Element mysqlElement = root.element("mysql");
            mysqlurl = mysqlElement.element("url").getText();
            mysqlusername = mysqlElement.element("username").getText();
            mysqlpassword = mysqlElement.element("password").getText();

            Element redisElement = root.element("redis");
            redishost = redisElement.element("host").getText();
            redisport = Integer.valueOf(redisElement.element("port").getText());
            redispassword = redisElement.element("password").getText();
            redistimeout = Integer.valueOf(redisElement.element("timeout").getText());
            rediskeyexists = Integer.valueOf(redisElement.element("keyexists").getText());

            Element sparkElement = root.element("spark");
            // sparkstreammaster = sparkElement.element("streammaster").getText();
            sparkstreamname = sparkElement.element("streamname").getText();
            sparkstreamseconds = Integer.valueOf(sparkElement.element("seconds").getText());

            Element pathElement = root.element("path");
            localpath = pathElement.element("localpath").getText();
            checkpointdir = pathElement.element("checkpointdir").getText();
            // gracestopfile = pathElement.element("gracestopfile").getText();

        }else {
            log.warn("failed to read the commons.xml configuration file...");
        }
    }
}

The main business logic is as follows:

package com.js.ipflow.flash.helper

import java.sql.Connection
import java.util.{Calendar, Date}

import com.alibaba.fastjson.JSON
import com.js.ipflow.start.ConfigFactory
import com.js.ipflow.utils.{DateUtil, MysqlPoolUtil, RedisPoolUtil}
import kafka.serializer.StringDecoder
import org.apache.logging.log4j.LogManager
import org.apache.spark.streaming.dstream.DStream
import org.apache.spark.streaming.kafka.KafkaUtils
import org.apache.spark.streaming.{Seconds, State, StateSpec, StreamingContext}
import org.apache.spark.{SparkConf, SparkContext}
import redis.clients.jedis.Jedis

object HelperHandle {

  val logger = LogManager.getLogger(HelperHandle.getClass.getSimpleName)
  // logger whose ERROR-level messages are sent by email
  val logger2 = LogManager.getLogger("email")

  def main(args: Array[String]): Unit = {
    helperHandle(args(0))
  }

  def helperHandle(consumeRate: String): Unit = {

    // initialize the configuration
    ConfigFactory.initConfig()

    val conf = new SparkConf().setAppName(ConfigFactory.sparkstreamname)
    conf.set("spark.streaming.stopGracefullyOnShutdown", "true")
    conf.set("spark.streaming.kafka.maxRatePerPartition", consumeRate)
    conf.set("spark.default.parallelism", "30")
    val sc = new SparkContext(conf)

    while (true) {
      val ssc = StreamingContext.getOrCreate(ConfigFactory.checkpointdir + DateUtil.getDay(0), getStreamingContext _)
      ssc.start()
      ssc.awaitTerminationOrTimeout(resetTime)
      ssc.stop(false, true)
    }

    def getStreamingContext(): StreamingContext = {
      val stateSpec = StateSpec.function(mapFunction)
      val ssc = new StreamingContext(sc, Seconds(ConfigFactory.sparkstreamseconds))
      ssc.checkpoint(ConfigFactory.checkpointdir + DateUtil.getDay(0))
      val zkQuorm = ConfigFactory.kafkazookeeper
      val topics = ConfigFactory.kafkatopic
      val topicSet = Set(topics)
      val kafkaParams = Map[String, String](
        "metadata.broker.list" -> (ConfigFactory.kafkaipport)
        , "group.id" -> (ConfigFactory.kafkagroupid)
        , "auto.offset.reset" -> kafka.api.OffsetRequest.LargestTimeString
      )

      val rmessage = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
        ssc, kafkaParams, topicSet
      )

      // helper data: (dateHour, guid, helperversion)
      val helper_data = FilterHelper.getHelperData(rmessage.map(x => {
        val message = JSON.parseObject(x._2).getString("message")
        JSON.parseObject(message)
      })).repartition(60).cache()

      // (guid, datehour + helperversion)
      val helper_data_dis = helper_data.map(x => (x._2, addTab(x._1) + x._3)).reduceByKey((x, y) => y)

      // pv,uv
      val helper_count = helper_data.map(x => (x._1, 1L)).mapWithState(stateSpec).stateSnapshots().repartition(2)

      // helperversion
      val helper_helperversion_count = helper_data.map(x => (addTab(x._1) + x._3, 1L)).mapWithState(stateSpec).stateSnapshots().repartition(2)
      helper_data_dis.foreachRDD(rdd => {
        rdd.foreachPartition(eachPartition => {
          var jedis: Jedis = null
          try {
            jedis = getJedis
            eachPartition.foreach(x => {
              val arr = x._2.split("\t")
              val date: String = arr(0).split(":")(0)

              // per-day helper UV set
              val key0 = "helper_" + date
              jedis.sadd(key0, x._1)
              jedis.expire(key0, ConfigFactory.rediskeyexists)
              // per-helperversion UV set
              val key = date + "_" + arr(1)
              jedis.sadd(key, x._1)
              jedis.expire(key, ConfigFactory.rediskeyexists)
            })
          } catch {
            case e: Exception => {
              logger.error(e)
              logger2.error(HelperHandle.getClass.getSimpleName + e)
            }
          } finally {
            if (jedis != null) {
              closeJedis(jedis)
            }
          }
        })
      })
      insertHelper(helper_helperversion_count, "statistic_realtime_flash_helper", "date", "hour", "count_all", "count", "helperversion", "datehour", "dh")
      insertHelper(helper_count, "statistic_realtime_helper_count", "date", "hour", "helper_count_all", "helper_count", "dh")

      ssc
    }
  }

  ///////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
  // milliseconds from now until the next midnight
  def resetTime = {
    val now = new Date()
    val todayEnd = Calendar.getInstance
    todayEnd.set(Calendar.HOUR_OF_DAY, 23) // Calendar.HOUR would be the 12-hour field
    todayEnd.set(Calendar.MINUTE, 59)
    todayEnd.set(Calendar.SECOND, 59)
    todayEnd.set(Calendar.MILLISECOND, 999)
    todayEnd.getTimeInMillis - now.getTime
  }

  /**
    * Insert data into MySQL
    *
    * @param data (addTab(datehour)+helperversion)
    * @param tbName
    * @param colNames
    */
  def insertHelper(data: DStream[(String, Long)], tbName: String, colNames: String*): Unit = {
    data.foreachRDD(rdd => {
      val tmp_rdd = rdd.map(x => x._1.substring(11, 13).toInt)
      if (!rdd.isEmpty()) {
        val hour_now = tmp_rdd.max() // the latest hour present in this batch; useful when recovering data
        rdd.foreachPartition(eachPartition => {
          var jedis: Jedis = null
          var conn: Connection = null
          try {
            jedis = getJedis
            conn = MysqlPoolUtil.getConnection()
            conn.setAutoCommit(false)
            val stmt = conn.createStatement()
            eachPartition.foreach(x => {
              if (colNames.length == 7) {
                val datehour = x._1.split("\t")(0)
                val helperversion = x._1.split("\t")(1)
                val date_hour = datehour.split(":")
                val date = date_hour(0)
                val hour = date_hour(1).toInt

                val colName0 = colNames(0) // date
                val colName1 = colNames(1) // hour
                val colName2 = colNames(2) // count_all
                val colName3 = colNames(3) // count
                val colName4 = colNames(4) // helperversion
                val colName5 = colNames(5) // datehour
                val colName6 = colNames(6) // dh

                val colValue0 = addYin(date)
                val colValue1 = hour
                val colValue2 = x._2.toInt
                val colValue3 = jedis.scard(date + "_" + helperversion) // e.g. key 2018-07-08_10.0.1.22
                val colValue4 = addYin(helperversion)
                var colValue5 = if (hour < 10) "'" + date + " 0" + hour + ":00 " + helperversion + "'" else "'" + date + " " + hour + ":00 " + helperversion + "'"
                val colValue6 = if (hour < 10) "'" + date + " 0" + hour + ":00'" else "'" + date + " " + hour + ":00'"

                var sql = ""
                if (hour == hour_now) { // UV is only updated for the current hour
                  sql = s"insert into ${tbName}(${colName0},${colName1},${colName2},${colName3},${colName4},${colName5},${colName6}) values(${colValue0},${colValue1},${colValue2},${colValue3},${colValue4},${colValue5},${colValue6}) on duplicate key update ${colName2} =  ${colValue2},${colName3} = ${colValue3}"
                  logger.warn(sql)
                  stmt.addBatch(sql)
                } /* else {
                sql = s"insert into ${tbName}(${colName0},${colName1},${colName2},${colName4},${colName5},${colName6}) values(${colValue0},${colValue1},${colValue2},${colValue4},${colValue5},${colValue6}) on duplicate key update ${colName2} =  ${colValue2}"
              }*/
              } else if (colNames.length == 5) {
                val date_hour = x._1.split(":")
                val date = date_hour(0)
                val hour = date_hour(1).toInt
                val colName0 = colNames(0) // date
                val colName1 = colNames(1) // hour
                val colName2 = colNames(2) // helper_count_all
                val colName3 = colNames(3) // helper_count
                val colName4 = colNames(4) // dh

                val colValue0 = addYin(date)
                val colValue1 = hour
                val colValue2 = x._2.toInt
                val colValue3 = jedis.scard("helper_" + date) // e.g. key helper_2018-07-08
                val colValue4 = if (hour < 10) "'" + date + " 0" + hour + ":00'" else "'" + date + " " + hour + ":00'"

                var sql = ""
                if (hour == hour_now) { // UV is only updated for the current hour
                  sql = s"insert into ${tbName}(${colName0},${colName1},${colName2},${colName3},${colName4}) values(${colValue0},${colValue1},${colValue2},${colValue3},${colValue4}) on duplicate key update ${colName2} =  ${colValue2},${colName3} = ${colValue3}"
                  logger.warn(sql)
                  stmt.addBatch(sql)
                }
              }
            })
            stmt.executeBatch() // execute the batched SQL statements
            conn.commit()
          } catch {
            case e: Exception => {
              logger.error(e)
              logger2.error(HelperHandle.getClass.getSimpleName + e)
            }
          } finally {
            if (jedis != null) {
              closeJedis(jedis)
            }

            if(conn != null){
              conn.close()
            }
          }
        })
      }
    })
  }

  def addYin(str: String): String = {
    "'" + str + "'"
  }

  // append a tab to a string, used to build composite keys
  def addTab(str: String): String = {
    str + "\t";
  }

  // state update function for the real-time traffic counts
  val mapFunction = (datehour: String, pv: Option[Long], state: State[Long]) => {
    val accuSum = pv.getOrElse(0L) + state.getOption().getOrElse(0L)
    val output = (datehour, accuSum)
    state.update(accuSum)
    output
  }

  // get a Jedis connection from the pool
  def getJedis: Jedis = {
    val jedis = RedisPoolUtil.getPool.getResource
    jedis
  }

  // return the Jedis connection to the pool
  def closeJedis(jedis: Jedis): Unit = {
    RedisPoolUtil.getPool.returnResource(jedis)
  }

}








Author: 柯廣的網(wǎng)絡(luò)日志

WeChat official account: Java大數(shù)據(jù)與數(shù)據(jù)倉(cāng)庫(kù)