Setting Up a Solr Cluster on a Kubernetes Cluster


Preparation


  1. The usual first step: build a Solr image. I'm using Solr 6.5.0.
Dockerfile
FROM java:openjdk-8-jre
MAINTAINER leo.lee "lis85@163.com"

ENV SOLR_GROUP solr
ENV SOLR_USER solr
ENV SOLR_UID 8983
ENV HOME /home/${SOLR_USER}
ENV USER_BIN /usr/local/bin/

RUN apt-get update && \
    apt-get install -y iproute netcat lsof jq libxml2-utils xmlstarlet tar && \
    apt-get clean && \
    mkdir ${HOME} && \
    groupadd -r ${SOLR_GROUP} && \
    useradd -u ${SOLR_UID} -g ${SOLR_GROUP} -d ${HOME} ${SOLR_USER} && \
    chown -R ${SOLR_USER}:${SOLR_GROUP} ${HOME} && \
    chown -R ${SOLR_USER}:${SOLR_GROUP} ${USER_BIN}

USER ${SOLR_USER}
WORKDIR ${HOME}

ENV SOLR_VERSION 6.5.0
RUN curl -L -o ${HOME}/solr-${SOLR_VERSION}.tgz http://archive.apache.org/dist/lucene/solr/${SOLR_VERSION}/solr-${SOLR_VERSION}.tgz && \
    tar -C ${HOME} -xf ${HOME}/solr-${SOLR_VERSION}.tgz && \
    rm ${HOME}/solr-${SOLR_VERSION}.tgz

ENV SOLR_PREFIX ${HOME}/solr-${SOLR_VERSION}

ADD docker-run.sh ${USER_BIN}
ADD docker-stop.sh ${USER_BIN}
RUN chmod +x ${USER_BIN}/docker-run.sh && \
    chmod +x ${USER_BIN}/docker-stop.sh

EXPOSE 8983 7983 18983

CMD ["/usr/local/bin/docker-run.sh"]
docker-run.sh
#!/bin/bash

# IP detection.
DETECTED_IP_LIST=($(
  ip addr show | grep -e "inet[^6]" | \
    sed -e "s/.*inet[^6][^0-9]*\([0-9.]*\)[^0-9]*.*/\1/" | \
    grep -v "^127\."
))
DETECTED_IP=${DETECTED_IP_LIST[0]:-127.0.0.1}
echo "DETECTED_IP=${DETECTED_IP}"

# Set environment variables.
SOLR_PREFIX=${SOLR_PREFIX:-/opt/solr}
echo "SOLR_PREFIX=${SOLR_PREFIX}"

SOLR_HOST=${SOLR_HOST:-${DETECTED_IP}}
echo "SOLR_HOST=${SOLR_HOST}"
SOLR_PORT=${SOLR_PORT:-8983}
echo "SOLR_PORT=${SOLR_PORT}"
SOLR_SERVER_DIR=${SOLR_SERVER_DIR:-${SOLR_PREFIX}/server}
echo "SOLR_SERVER_DIR=${SOLR_SERVER_DIR}"
SOLR_HOME=${SOLR_HOME:-${SOLR_SERVER_DIR}/solr}
echo "SOLR_HOME=${SOLR_HOME}"
SOLR_HEAP_SIZE=${SOLR_HEAP_SIZE:-512m}
echo "SOLR_HEAP_SIZE=${SOLR_HEAP_SIZE}"
SOLR_ADDITIONAL_PARAMETERS=${SOLR_ADDITIONAL_PARAMETERS:-""}
echo "SOLR_ADDITIONAL_PARAMETERS=${SOLR_ADDITIONAL_PARAMETERS}"
ZK_HOST=${ZK_HOST:-""}
echo "ZK_HOST=${ZK_HOST}"
ZK_HOST_LIST=($(echo ${ZK_HOST} | sed -e 's/^\(.\{1,\}:[0-9]\{1,\}\)*\(.*\)$/\1/g' | tr -s ',' ' '))
echo "ZK_HOST_LIST=${ZK_HOST_LIST}"
ZK_ZNODE=$(echo ${ZK_HOST} | sed -e 's/^\(.\{1,\}:[0-9]\{1,\}\)*\(.*\)$/\2/g')
echo "ZK_ZNODE=${ZK_ZNODE}"

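# Jetty stop port; this follows bin/solr's convention of SOLR_PORT - 1000
# (7983 for the default 8983), matching the Dockerfile's EXPOSE.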
STOP_PORT=${STOP_PORT:-$(expr $SOLR_PORT - 1000)}
echo "STOP_PORT=${STOP_PORT}"

ENABLE_REMOTE_JMX_OPTS=${ENABLE_REMOTE_JMX_OPTS:-false}
echo "ENABLE_REMOTE_JMX_OPTS=${ENABLE_REMOTE_JMX_OPTS}"
RMI_PORT=${RMI_PORT:-"1$SOLR_PORT"}
echo "RMI_PORT=${RMI_PORT}"

SOLR_PID_DIR=${SOLR_PID_DIR:-${SOLR_PREFIX}/bin}
echo "SOLR_PID_DIR=${SOLR_PID_DIR}"

CORE_NAME=${CORE_NAME:-""}
echo "CORE_NAME=${CORE_NAME}"

COLLECTION_NAME=${COLLECTION_NAME:-""}
echo "COLLECTION_NAME=${COLLECTION_NAME}"
COLLECTION_CONFIG_NAME=${COLLECTION_CONFIG_NAME:-${COLLECTION_NAME}_configs}
echo "COLLECTION_CONFIG_NAME=${COLLECTION_CONFIG_NAME}"
NUM_SHARDS=${NUM_SHARDS:-1}
echo "NUM_SHARDS=${NUM_SHARDS}"
REPLICATION_FACTOR=${REPLICATION_FACTOR:-1}
echo "REPLICATION_FACTOR=${REPLICATION_FACTOR}"
MAX_SHARDS_PER_NODE=${MAX_SHARDS_PER_NODE:-1}
echo "MAX_SHARDS_PER_NODE=${MAX_SHARDS_PER_NODE}"
CLOUD_SCRIPTS_DIR=${SOLR_PREFIX}/server/scripts/cloud-scripts
echo "CLOUD_SCRIPTS_DIR=${CLOUD_SCRIPTS_DIR}"
SOLR_COLLECTIONS_API_PATH=/solr/admin/collections
echo "SOLR_COLLECTIONS_API_PATH=${SOLR_COLLECTIONS_API_PATH}"

CONFIGSET=${CONFIGSET:-data_driven_schema_configs}
echo "CONFIGSET=${CONFIGSET}"

ENABLE_CORS=${ENABLE_CORS:-false}
echo "ENABLE_CORS=${ENABLE_CORS}"
FILTER_NAME=${FILTER_NAME:-cross-origin}
echo "FILTER_NAME=${FILTER_NAME}"
FILTER_CLASS=${FILTER_CLASS:-org.eclipse.jetty.servlets.CrossOriginFilter}
echo "FILTER_CLASS=${FILTER_CLASS}"
URL_PATTERN=${URL_PATTERN:-/*}
echo "URL_PATTERN=${URL_PATTERN}"
ALLOWED_ORIGINS=${ALLOWED_ORIGINS:-*}
echo "ALLOWED_ORIGINS=${ALLOWED_ORIGINS}"
ALLOWED_METHODS=${ALLOWED_METHODS:-GET,POST,OPTIONS,DELETE,PUT,HEAD}
echo "ALLOWED_METHODS=${ALLOWED_METHODS}"
ALLOWED_HEADERS=${ALLOWED_HEADERS:-origin,content-type,accept}
echo "ALLOWED_HEADERS=${ALLOWED_HEADERS}"

SOLR_ACCESS_RETRY_COUNT=${SOLR_ACCESS_RETRY_COUNT:-10}
echo "SOLR_ACCESS_RETRY_COUNT=${SOLR_ACCESS_RETRY_COUNT}"
SOLR_ACCESS_INTERVAL=${SOLR_ACCESS_INTERVAL:-1}
echo "SOLR_ACCESS_INTERVAL=${SOLR_ACCESS_INTERVAL}"

# Start function
function start() {
  NODE_NAME=${SOLR_HOST}:${SOLR_PORT}_solr

  if [ "${ENABLE_CORS}" = "true" ]; then
    echo "Enabling CORS"
    cp ${SOLR_SERVER_DIR}/etc/webdefault.xml ${SOLR_SERVER_DIR}/etc/webdefault.xml.backup
    xmlstarlet ed \
      -N x="http://java.sun.com/xml/ns/javaee" \
      -s "/x:web-app" -t elem -n "filter" \
      -s "/x:web-app/filter[last()]" -t elem -n "filter-name" -v "${FILTER_NAME}" \
      -s "/x:web-app/filter[last()]" -t elem -n "filter-class" -v "${FILTER_CLASS}" \
      -s "/x:web-app/filter[last()]" -t elem -n "init-param" \
      -s "/x:web-app/filter[last()]/init-param[last()]" -t elem -n "param-name" -v "allowedOrigins" \
      -s "/x:web-app/filter[last()]/init-param[last()]" -t elem -n "param-value" -v "${ALLOWED_ORIGINS}" \
      -s "/x:web-app/filter[last()]" -t elem -n "init-param" \
      -s "/x:web-app/filter[last()]/init-param[last()]" -t elem -n "param-name" -v "allowedMethods" \
      -s "/x:web-app/filter[last()]/init-param[last()]" -t elem -n "param-value" -v "${ALLOWED_METHODS}" \
      -s "/x:web-app/filter[last()]" -t elem -n "init-param" \
      -s "/x:web-app/filter[last()]/init-param[last()]" -t elem -n "param-name" -v "allowedHeaders" \
      -s "/x:web-app/filter[last()]/init-param[last()]" -t elem -n "param-value" -v "${ALLOWED_HEADERS}" \
      -s "/x:web-app" -t elem -n "filter-mapping" \
      -s "/x:web-app/filter-mapping[last()]" -t elem -n "filter-name" -v "${FILTER_NAME}" \
      -s "/x:web-app/filter-mapping[last()]" -t elem -n "url-pattern" -v "${URL_PATTERN}" \
      ${SOLR_SERVER_DIR}/etc/webdefault.xml > ${SOLR_SERVER_DIR}/etc/webdefault.xml.cors
    mv ${SOLR_SERVER_DIR}/etc/webdefault.xml.cors ${SOLR_SERVER_DIR}/etc/webdefault.xml
  fi

  if [ -n "${ZK_HOST}" ]; then
    # Create a znode to ZooKeeper.
    for TMP_ZK_HOST in "${ZK_HOST_LIST[@]}"
    do
      ZK_HOST_NAME=$(echo ${TMP_ZK_HOST} | cut -d":" -f1)
      ZK_HOST_PORT=$(echo ${TMP_ZK_HOST} | cut -d":" -f2)

      MATCHED_ZNODE=$(${CLOUD_SCRIPTS_DIR}/zkcli.sh -zkhost ${ZK_HOST_NAME}:${ZK_HOST_PORT} -cmd list | grep -E "^\s+${ZK_ZNODE}\s+.*$")
      if [ -z "${MATCHED_ZNODE}" ]; then
        echo "Creating a znode ${ZK_ZNODE} to ZooKeeper at ${ZK_HOST_NAME}:${ZK_HOST_PORT}"
        ${CLOUD_SCRIPTS_DIR}/zkcli.sh -zkhost ${ZK_HOST_NAME}:${ZK_HOST_PORT} -cmd makepath ${ZK_ZNODE}

        # Wait until znode created.
        for i in `seq ${SOLR_ACCESS_RETRY_COUNT}`
        do
          MATCHED_ZNODE=$(${CLOUD_SCRIPTS_DIR}/zkcli.sh -zkhost ${ZK_HOST_NAME}:${ZK_HOST_PORT} -cmd list | grep -E "^\s+${ZK_ZNODE}\s+.*$")
          if [ -n "${MATCHED_ZNODE}" ]; then
            echo "A znode ${ZK_ZNODE} has been created to ZooKeeper at ${ZK_HOST_NAME}:${ZK_HOST_PORT}"
            break
          fi
          sleep ${SOLR_ACCESS_INTERVAL}
        done
      else
        echo "A znode ${ZK_ZNODE} already exist in ZooKeeper at ${ZK_HOST_NAME}:${ZK_HOST_PORT}"
      fi
    done

    # Start Solr in SolrCloud mode.
    echo "Starting solr in SolrCloud mode"
    ${SOLR_PREFIX}/bin/solr start -h ${SOLR_HOST} -p ${SOLR_PORT} -m ${SOLR_HEAP_SIZE} -d ${SOLR_SERVER_DIR} -s ${SOLR_HOME} -z ${ZK_HOST} -a "${SOLR_ADDITIONAL_PARAMETERS}"
  else
    # Start Solr standalone mode.
    echo "Starting solr in standalone mode"
    ${SOLR_PREFIX}/bin/solr start -h ${SOLR_HOST} -p ${SOLR_PORT} -m ${SOLR_HEAP_SIZE} -d ${SOLR_SERVER_DIR} -s ${SOLR_HOME} -a "${SOLR_ADDITIONAL_PARAMETERS}"
  fi

  # Get Solr process id.
  SOLR_PID=$(cat $(find ${SOLR_PID_DIR} -name solr-${SOLR_PORT}.pid -type f))
  if [ -z "${SOLR_PID}" ]; then
    SOLR_PID=$(ps auxww | grep start\.jar | grep solr.solr.home | grep -E "^.*\s-Djetty.port=${SOLR_PORT}[^0-9]{0,}.*$" | grep -v grep | awk '{print $2}')
  fi

  # Wait until Solr started.
  for i in `seq ${SOLR_ACCESS_RETRY_COUNT}`
  do
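    # `solr status` prints a human-readable line per instance followed by a
    # JSON block; the sed filters extract the block for this PID and port.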
    SOLR_STATUS_JSON=$(${SOLR_PREFIX}/bin/solr status | sed -n -E "/Solr process ${SOLR_PID} running on port ${SOLR_PORT}/,/}/p" | sed -n -e "/{/,/}/p")
    if [ -n "${SOLR_STATUS_JSON}" ]; then
      echo "${SOLR_STATUS_JSON}"
      break
    fi
    sleep ${SOLR_ACCESS_INTERVAL}
  done

  if [ -n "${ZK_HOST}" ]; then
    # Wait until the node is registered to live_nodes.
    for i in `seq ${SOLR_ACCESS_RETRY_COUNT}`
    do
      SOLR_CLUSTER_STATUS_JSON=$(curl -s "http://${SOLR_HOST}:${SOLR_PORT}${SOLR_COLLECTIONS_API_PATH}?action=CLUSTERSTATUS&wt=json")
      LIVE_NODE_LIST=($(echo ${SOLR_CLUSTER_STATUS_JSON} | jq -r ".cluster.live_nodes[]"))
      if [[ " ${LIVE_NODE_LIST[@]} " =~ " ${NODE_NAME} " ]]; then
        echo "A node ${NODE_NAME} has been registered"
        break
      else
        echo "A node ${NODE_NAME} is not registered yet"
      fi
      sleep ${SOLR_ACCESS_INTERVAL}
    done

    # Upload configset.
    COLLECTION_CONFIG_UPLOADED="0"
    for TMP_ZK_HOST in "${ZK_HOST_LIST[@]}"
    do
      ZK_HOST_NAME=$(echo ${TMP_ZK_HOST} | cut -d":" -f1)
      ZK_HOST_PORT=$(echo ${TMP_ZK_HOST} | cut -d":" -f2)

      # Check configset.
      MATCHED_COLLECTION_CONFIG_NAME=$(${CLOUD_SCRIPTS_DIR}/zkcli.sh -zkhost ${ZK_HOST_NAME}:${ZK_HOST_PORT} -cmd list | grep -E "^\s+${ZK_ZNODE}/configs/${COLLECTION_CONFIG_NAME}\s+.*$")
      if [ -z "${MATCHED_COLLECTION_CONFIG_NAME}" ]; then
        echo "Uploading ${SOLR_HOME}/configsets/${CONFIGSET}/conf for config ${COLLECTION_CONFIG_NAME} to ZooKeeper at ${ZK_HOST_NAME}:${ZK_HOST_PORT}${ZK_ZNODE}"
        ${CLOUD_SCRIPTS_DIR}/zkcli.sh -zkhost ${ZK_HOST_NAME}:${ZK_HOST_PORT}${ZK_ZNODE} -cmd upconfig -confdir ${SOLR_HOME}/configsets/${CONFIGSET}/conf/ -confname ${COLLECTION_CONFIG_NAME}

        # Wait until config uploaded.
        for i in `seq ${SOLR_ACCESS_RETRY_COUNT}`
        do
          MATCHED_COLLECTION_CONFIG_NAME=$(${CLOUD_SCRIPTS_DIR}/zkcli.sh -zkhost ${ZK_HOST_NAME}:${ZK_HOST_PORT} -cmd list | grep -E "^\s+${ZK_ZNODE}/configs/${COLLECTION_CONFIG_NAME}\s+.*$")
          if [ -n "${MATCHED_COLLECTION_CONFIG_NAME}" ]; then
            echo "Config ${COLLECTION_CONFIG_NAME} has been uploaded to ZooKeeper at ${ZK_HOST_NAME}:${ZK_HOST_PORT}${ZK_ZNODE}"
            COLLECTION_CONFIG_UPLOADED="1"
            break
          else
            echo "Config ${COLLECTION_CONFIG_NAME} is not uploaded in ZooKeeper at ${ZK_HOST_NAME}:${ZK_HOST_PORT}${ZK_ZNODE} yet"
          fi
          sleep ${SOLR_ACCESS_INTERVAL}
        done
      else
        echo "Config ${COLLECTION_CONFIG_NAME} already exists in ZooKeeper at ${ZK_HOST_NAME}:${ZK_HOST_PORT}${ZK_ZNODE}"
        COLLECTION_CONFIG_UPLOADED="1"
      fi
      if [ "${COLLECTION_CONFIG_UPLOADED}" = "1" ]; then
        break
      fi
    done

    # Create collection.
    SOLR_CLUSTER_STATUS_JSON=$(curl -s "http://${SOLR_HOST}:${SOLR_PORT}${SOLR_COLLECTIONS_API_PATH}?action=CLUSTERSTATUS&wt=json")
    COLLECTION_NAME_LIST=($(echo ${SOLR_CLUSTER_STATUS_JSON} | jq -r ".cluster.collections | keys[]"))
    if [[ " ${COLLECTION_NAME_LIST[@]} " =~ " ${COLLECTION_NAME} " ]]; then
      echo "A collection ${COLLECTION_NAME} already exists"
    else
      echo "Creating collection ${COLLECTION_NAME}"
      curl -s "http://${SOLR_HOST}:${SOLR_PORT}${SOLR_COLLECTIONS_API_PATH}?action=CREATE&name=${COLLECTION_NAME}&router.name=compositeId&numShards=${NUM_SHARDS}&replicationFactor=${REPLICATION_FACTOR}&maxShardsPerNode=${MAX_SHARDS_PER_NODE}&createNodeSet=EMPTY&collection.configName=${COLLECTION_CONFIG_NAME}&wt=json" | jq .

      # Wait until collection created
      for i in `seq ${SOLR_ACCESS_RETRY_COUNT}`
      do
        SOLR_CLUSTER_STATUS_JSON=$(curl -s "http://${SOLR_HOST}:${SOLR_PORT}${SOLR_COLLECTIONS_API_PATH}?action=CLUSTERSTATUS&wt=json")
        COLLECTION_NAME_LIST=($(echo ${SOLR_CLUSTER_STATUS_JSON} | jq -r ".cluster.collections | keys[]"))
        if [[ " ${COLLECTION_NAME_LIST[@]} " =~ " ${COLLECTION_NAME} " ]]; then
          echo "A collection ${COLLECTION_NAME} has been created"
          break
        else
          echo "A collection ${COLLECTION_NAME} has not been created yet"
        fi
        sleep ${SOLR_ACCESS_INTERVAL}        
      done
    fi

    # Find shard to add.
    SOLR_CLUSTER_STATUS_JSON=$(curl -s "http://${SOLR_HOST}:${SOLR_PORT}${SOLR_COLLECTIONS_API_PATH}?action=CLUSTERSTATUS&wt=json")
    ACTIVE_SHARD_NAME_LIST=($(echo ${SOLR_CLUSTER_STATUS_JSON} | jq -r ".cluster.collections.${COLLECTION_NAME}.shards | to_entries | .[] | select(.value.state == \"active\") | .key"))
    SHARD_NAME=${ACTIVE_SHARD_NAME_LIST[0]}
    MIN_REPLICA_COUNT=$(echo ${SOLR_CLUSTER_STATUS_JSON} | jq -r ".cluster.collections.${COLLECTION_NAME}.shards.${ACTIVE_SHARD_NAME_LIST[0]}.replicas | to_entries | .[] | select(.value.state == \"active\") | .key" | wc -l)
    for TMP_SHARD_NAME in "${ACTIVE_SHARD_NAME_LIST[@]}"
    do
      ACTIVE_REPLICA_COUNT=$(echo ${SOLR_CLUSTER_STATUS_JSON} | jq -r ".cluster.collections.${COLLECTION_NAME}.shards.${TMP_SHARD_NAME}.replicas | to_entries | .[] | select(.value.state == \"active\") | .key" | wc -l)
      if [ -z "${ACTIVE_REPLICA_COUNT}" ]; then
        ACTIVE_REPLICA_COUNT=0
      fi
      echo "${TMP_SHARD_NAME} has ${ACTIVE_REPLICA_COUNT} replica(s)"
      if [[ ${MIN_REPLICA_COUNT} -gt ${ACTIVE_REPLICA_COUNT} ]]; then
        SHARD_NAME=${TMP_SHARD_NAME}
        MIN_REPLICA_COUNT=${ACTIVE_REPLICA_COUNT}
      fi
    done

    echo "Target shard is ${SHARD_NAME}"

    # Add this node as a replica of the least-replicated shard only; adding it
    # to every shard would violate maxShardsPerNode=1.
    echo "Adding replica ${NODE_NAME} to ${COLLECTION_NAME}/${SHARD_NAME}"
    curl -s "http://${SOLR_HOST}:${SOLR_PORT}${SOLR_COLLECTIONS_API_PATH}?action=ADDREPLICA&collection=${COLLECTION_NAME}&shard=${SHARD_NAME}&node=${NODE_NAME}&wt=json" | jq .
  else
    echo "Creating Solr sore"
    if [ -n "${CORE_NAME}" ]; then
      # Create Solr core.
      ${SOLR_PREFIX}/bin/solr create_core -c ${CORE_NAME} -d ${CONFIGSET}
    fi
  fi

  echo "Initialized"
}

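# On container shutdown, run docker-stop.sh so this node deregisters its
# replicas before the JVM exits.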
trap "docker-stop.sh; exit 1" TERM KILL INT QUIT

# Start
start

# Infinite loop to keep the container's foreground process alive
while true
do
  sleep 1
done
docker-stop.sh
#!/bin/bash

# IP detection.
DETECTED_IP_LIST=($(
  ip addr show | grep -e "inet[^6]" | \
    sed -e "s/.*inet[^6][^0-9]*\([0-9.]*\)[^0-9]*.*/\1/" | \
    grep -v "^127\."
))
DETECTED_IP=${DETECTED_IP_LIST[0]:-127.0.0.1}
echo "DETECTED_IP=${DETECTED_IP}"

# Set environment variables.
SOLR_PREFIX=${SOLR_PREFIX:-/opt/solr}
echo "SOLR_PREFIX=${SOLR_PREFIX}"

SOLR_HOST=${SOLR_HOST:-${DETECTED_IP}}
echo "SOLR_HOST=${SOLR_HOST}"
SOLR_PORT=${SOLR_PORT:-8983}
echo "SOLR_PORT=${SOLR_PORT}"
SOLR_SERVER_DIR=${SOLR_SERVER_DIR:-${SOLR_PREFIX}/server}
echo "SOLR_SERVER_DIR=${SOLR_SERVER_DIR}"
SOLR_HOME=${SOLR_HOME:-${SOLR_SERVER_DIR}/solr}
echo "SOLR_HOME=${SOLR_HOME}"
ZK_HOST=${ZK_HOST:-""}
echo "ZK_HOST=${ZK_HOST}"
ZK_HOST_LIST=($(echo ${ZK_HOST} | sed -e 's/^\(.\{1,\}:[0-9]\{1,\}\)*\(.*\)$/\1/g' | tr -s ',' ' '))
echo "ZK_HOST_LIST=${ZK_HOST_LIST}"
ZK_ZNODE=$(echo ${ZK_HOST} | sed -e 's/^\(.\{1,\}:[0-9]\{1,\}\)*\(.*\)$/\2/g')
echo "ZK_ZNODE=${ZK_ZNODE}"

SOLR_COLLECTIONS_API_PATH=/solr/admin/collections
echo "SOLR_COLLECTIONS_API_PATH=${SOLR_COLLECTIONS_API_PATH}"

SOLR_ACCESS_RETRY_COUNT=${SOLR_ACCESS_RETRY_COUNT:-10}
echo "SOLR_ACCESS_RETRY_COUNT=${SOLR_ACCESS_RETRY_COUNT}"
SOLR_ACCESS_INTERVAL=${SOLR_ACCESS_INTERVAL:-1}
echo "SOLR_ACCESS_INTERVAL=${SOLR_ACCESS_INTERVAL}"

# Stop function.
function stop() {
  NODE_NAME=${SOLR_HOST}:${SOLR_PORT}_solr

  # SolrCloud mode?
  if [ -n "${ZK_HOST}" ]; then
    # Get collection list.
    SOLR_CLUSTER_STATUS_JSON=$(curl -s "http://${SOLR_HOST}:${SOLR_PORT}${SOLR_COLLECTIONS_API_PATH}?action=CLUSTERSTATUS&wt=json")
    COLLECTION_NAME_LIST=($(echo ${SOLR_CLUSTER_STATUS_JSON} | jq -r ".cluster.collections | keys[]"))
    for COLLECTION_NAME in "${COLLECTION_NAME_LIST[@]}"
    do
      # Get shard list in a collection.
      SHARD_NAME_LIST=($(echo ${SOLR_CLUSTER_STATUS_JSON} | jq -r ".cluster.collections.${COLLECTION_NAME}.shards | keys[]"))
      for SHARD_NAME in "${SHARD_NAME_LIST[@]}"
      do
        # Get replica list in a shard.
        REPLICA_NAME_LIST=($(echo ${SOLR_CLUSTER_STATUS_JSON} | jq -r ".cluster.collections.${COLLECTION_NAME}.shards.${SHARD_NAME}.replicas | to_entries | .[] | select(.value.node_name == \"${NODE_NAME}\") | .key"))
        if [ -n "${REPLICA_NAME_LIST[@]}" ]; then
          for REPLICA_NAME in "${REPLICA_NAME_LIST[@]}"
          do
            # Delete replica.
            echo "Deleting replica ${REPLICA_NAME}(${NODE_NAME}) from ${COLLECTION_NAME}/${SHARD_NAME}"
            curl -s "http://${SOLR_HOST}:${SOLR_PORT}${SOLR_COLLECTIONS_API_PATH}?action=DELETEREPLICA&collection=${COLLECTION_NAME}&shard=${SHARD_NAME}&replica=${REPLICA_NAME}&wt=json" | jq .
          done
        fi
      done
    done

    # Wait until replica deleted.
    for i in `seq ${SOLR_ACCESS_RETRY_COUNT}`
    do
      SOLR_CLUSTER_STATUS_JSON=$(curl -s "http://${SOLR_HOST}:${SOLR_PORT}${SOLR_COLLECTIONS_API_PATH}?action=CLUSTERSTATUS&wt=json")
      REPLICA_NAME_LIST=($(echo ${SOLR_CLUSTER_STATUS_JSON} | jq -r ".cluster.collections[].shards[].replicas | to_entries | .[] | select(.value.node_name == \"${NODE_NAME}\") | .key"))
      if [ ${#REPLICA_NAME_LIST[@]} -eq 0 ]; then
        echo "${NODE_NAME} has been deleted"
        break
      else
        echo "A node ${NODE_NAME} is not deleted yet"
      fi
      sleep ${SOLR_ACCESS_INTERVAL}
    done
  fi
  
  ${SOLR_PREFIX}/bin/solr stop -p ${SOLR_PORT}

  echo "Deleted"
}

# Stop
stop
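
With the Dockerfile and the two scripts in one directory, the image can be built, smoke-tested, and pushed. Here is a minimal sketch; the registry host and tag come from the deployment manifest below, and the container name solr-test is just a throwaway of my choosing:

docker build -t registry.docker.uih/library/leo-solr:6.5.0 .
docker run -d --name solr-test -p 8983:8983 registry.docker.uih/library/leo-solr:6.5.0
# With no ZK_HOST set, the container starts Solr in standalone mode.
curl -s "http://localhost:8983/solr/admin/info/system?wt=json" | jq .lucene
docker rm -f solr-test
docker push registry.docker.uih/library/leo-solr:6.5.0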

Deploying Solr


I deploy with a Replication Controller here: one Service plus a controller that keeps 3 pods running. ZooKeeper is the ensemble I deployed earlier, reachable through the zk-cs service.

#solr service
apiVersion: v1
kind: Service
metadata:
  name: solr-service
  namespace: default
  labels:
    app: solr-service
spec:
  type: NodePort
  ports:
  - name: solr
    port: 8983
    targetPort: 8983
    protocol: TCP
  - name: stop
    port: 7983
    targetPort: 7983
    protocol: TCP
  - name: rmi
    port: 18983
    targetPort: 18983
    protocol: TCP
  selector:
    app: solr-pod
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: solr-controller
  namespace: default
  labels:
    app: solr-controller
spec:
  replicas: 3
  selector:
    app: solr-pod
  template:
    metadata:
      labels:
        app: solr-pod
    spec:
      containers:
      - name: solr-container
        image: registry.docker.uih/library/leo-solr:6.5.0
        imagePullPolicy: IfNotPresent
        ports:
        - name: solr
          containerPort: 8983
          protocol: TCP
        - name: stop
          containerPort: 7983
          protocol: TCP
        - name: rmi
          containerPort: 18983
          protocol: TCP
        env:
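        # SOLR_HOST comes from the downward API (the pod IP) so each Solr
        # node registers itself in ZooKeeper under a reachable address.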
        - name: SOLR_HOST
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: ZK_HOST
          value: zk-cs:2181/solr
        - name: COLLECTION_NAME
          value: collection1
        - name: NUM_SHARDS
          value: "3"
        - name: COLLECTION_CONFIG_NAME
          value: data_driven_schema_configs

Deploy it with the following command:

kubectl create -f solr.yaml
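
To confirm the rollout and find the assigned NodePort, something like the following works (the label and service names are the ones defined in the manifest above):

kubectl get pods -l app=solr-pod -o wide
kubectl get svc solr-service
kubectl get svc solr-service -o jsonpath='{.spec.ports[?(@.name=="solr")].nodePort}'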

Once deployment completes, open the Solr service address in a browser (any node IP plus the NodePort reported above) and the familiar Solr admin UI appears.

[Screenshot: Solr admin dashboard]

[Screenshot: query page]

Enjoy!!!
