## 1. Foreword 🍐

Cloud-native technology helps enterprises cut costs and improve efficiency while making their businesses more flexible and scalable. Three factors drive this:

- **Cost pressure:** as business scale and data volume keep growing, the cost and management complexity of traditional infrastructure (physical servers and virtual machines) keep rising, putting ever more pressure on enterprises.
- **Business demand:** workloads are increasingly diverse and complex, and applications must be deployed and managed quickly and flexibly to meet market needs; traditional infrastructure cannot keep up.
- **Technology trends:** with the development of cloud computing, big data, and artificial intelligence, enterprises need smarter and more efficient IT infrastructure to support business innovation.

## 2. The Crane Open-Source Project 🍎

Crane, open-sourced under the leadership of Tencent Cloud, is the first cloud-native cost-optimization project in China. It follows the FinOps standard and is the first open-source cost-optimization solution certified by the FinOps Foundation. It gives enterprises running Kubernetes clusters a simple, reliable, and powerful automation tool.

Crane was designed to help enterprises manage and scale their Kubernetes clusters more effectively, enabling more efficient cloud-native application management. It is easy to use, highly customizable, and extensible: it offers a set of simple command-line tools that let both developers and administrators deploy applications to a Kubernetes cluster with ease, supports multiple cloud platforms, and can be tailored to specific business needs. Crane has already been deployed in production at Tencent, NetEase, AISpeech, Kujiale, Mingyuan Cloud, ThinkingData, and other companies, and its main contributors come from well-known companies including Tencent, Xiaohongshu, Google, eBay, Microsoft, and Tesla.

### 2.1. Overall Architecture 🍒

Craned is the core component of Crane; it manages the lifecycle of the CRDs and their APIs. Craned is deployed as a Deployment consisting of two containers:

- **Craned:** runs the operators that manage the CRDs, exposes the Web API consumed by the Dashboard, and provides the TimeSeries API used by the Predictors.
- **Dashboard:** a front-end project built on the TDesign Starter scaffold, providing easy-to-use product features.

① **Fadvisor** provides a set of exporters that compute billing and cost data for cluster cloud resources and store it in your monitoring system, e.g. Prometheus. Fadvisor supports multi-cloud billing APIs through Cloud Providers.

② **Metric Adapter** implements a custom metric apiserver. It reads CRD information and serves HPA metrics through the Custom/External Metric API.

③ **Crane Agent** is deployed as a DaemonSet on the cluster's nodes.

### 2.2. Main Features 🍅

🟩 **Cost visualization and optimization assessment** — exporters compute billing and cost data for cluster cloud resources and store it in your monitoring system, such as Prometheus; multi-dimensional cost insight and optimization assessment; multi-cloud billing support via Cloud Providers.

🟥 **Recommendation framework** — an extensible framework for analyzing various cloud resources, with built-in recommenders: resource recommendation, replica recommendation, HPA recommendation, and idle-resource recommendation.

🟪 **Prediction-based horizontal autoscaling** — EffectiveHorizontalPodAutoscaler supports prediction-driven scaling. Built on the community HPA for the underlying scaling control, it supports richer trigger policies (prediction, observation, periodic), making scaling more efficient while preserving service quality.

🟧 **Load-aware scheduling** — the dynamic scheduler builds a simple but efficient model from actual node utilization and filters out heavily loaded nodes to balance the cluster.

🟨 **Topology-aware scheduling** — Crane Scheduler works together with Crane Agent to support fine-grained, resource-topology-aware scheduling and multiple core-binding policies, solving the "noisy neighbor" problem in complex scenarios and using resources more efficiently.

🟦 **QoS-based colocation** — QoS features keep Pods running on Kubernetes stable: interference detection and active avoidance under multi-dimensional metrics, with precise operations and custom-metric integration; prediction-enhanced elastic resource overcommitment that reuses and caps idle cluster resources; and enhanced bypass cpuset management that binds cores while improving utilization.

## 3. Lab Prerequisites 🍊

A single-node cluster on Rocky Linux (an open-source enterprise distribution), installed in VMware Workstation, is all that is needed for this Crane lab.

Lab environment ⌛:

| OS version | Memory | Disk | Network mode | IP address |
| --- | --- | --- | --- | --- |
| Rocky Linux release 8.7 | ≥ 8 GB (recommended) | 30 GB | NAT | 192.168.200.60 |

Required software 👑:

| Component | Version |
| --- | --- |
| docker | v23.0.6 |
| kubectl | v1.27.1 |
| helm | v3.11.3 |
| kind | v0.18.0 |
### 3.1. System Initialization 📖

```shell
# 1. Set the hostname
hostnamectl set-hostname Crane

# 2. Stop and disable the firewall
systemctl stop firewalld && systemctl disable firewalld
systemctl status firewalld

# 3. Put SELinux into permissive mode
setenforce 0                                                            # temporary
getenforce
sed -i "s/SELINUX=enforcing/SELINUX=permissive/g" /etc/selinux/config   # permanent
cat /etc/selinux/config

# 4. Disable the swap partition
swapoff --version                    # check the swapoff version
swapoff -a                           # temporary ❎
sed -ri 's/.*swap.*/#&/' /etc/fstab  # permanent ❎ (takes effect after reboot)
swapon -v                            # empty output means swap is off

# 5. Check the NIC configuration
cat /etc/sysconfig/network-scripts/ifcfg-ens32
systemctl restart NetworkManager
nmcli connection up ens160

# 6. Switch the repos to the Aliyun mirror
sed -e 's|^mirrorlist=|#mirrorlist=|g' \
    -e 's|^#baseurl=http://dl.rockylinux.org/$contentdir|baseurl=https://mirrors.aliyun.com/rockylinux|g' \
    -i.bak \
    /etc/yum.repos.d/Rocky-*.repo
dnf makecache

# 7. Build the local cache ("fast" was dropped in dnf-based releases)
yum makecache

# 8. Update packages
yum update -y

# 9. Reboot
reboot
```

### 3.2. Installing Docker 📑

```shell
# 1. Install the gcc toolchain
yum install -y gcc gcc-c++

# 2. Install required dependencies
yum install -y yum-utils

# 3. Add the Aliyun Docker repo
yum-config-manager \
    --add-repo \
    https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# 4. Install docker-ce, the CLI, and containerd
yum install -y docker-ce docker-ce-cli containerd.io

# 5. Start Docker, enable it at boot, and check its status
systemctl start docker && systemctl enable docker && systemctl status docker
```

Check the installed Docker version. Before the service is started, only the client section appears:

```shell
[root@Crane ~]# docker version
Client: Docker Engine - Community
 Version:           23.0.6
 API version:       1.42
 Go version:        go1.19.9
 Git commit:        ef23cbc
 Built:             Fri May  5 21:19:08 2023
 OS/Arch:           linux/amd64
 Context:           default
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
```

Start the Docker service, enable it at boot, and check the service status:

```shell
systemctl start docker && systemctl enable docker && systemctl status docker
```

Check the version again — the server section is now present:

```shell
[root@Crane ~]# docker version
Client: Docker Engine - Community
 Version:           23.0.6
 API version:       1.42
 Go version:        go1.19.9
 Git commit:        ef23cbc
 Built:             Fri May  5 21:19:08 2023
 OS/Arch:           linux/amd64
 Context:           default

Server: Docker Engine - Community
 Engine:
  Version:          23.0.6
  API version:      1.42 (minimum version 1.12)
  Go version:       go1.19.9
  Git commit:       9dbdbd4
  Built:            Fri May  5 21:18:15 2023
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.6.21
  GitCommit:        3dce8eb055cbb6872793272b4f20ed16117344f8
 runc:
  Version:          1.1.7
  GitCommit:        v1.1.7-0-g860f061
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
```

### 3.3. Installing kubectl 📚

🔗 Reference: Install and Set Up kubectl on Linux | Kubernetes

Download kubectl on Linux with curl:

```shell
[root@Crane ~]# curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
[root@Crane ~]# ll
total 48096
-rw-------. 1 root root     1322 Mar 29  2022 anaconda-ks.cfg
-rw-r--r--  1 root root 49246208 May  7 11:21 kubectl
```

Validate the binary:

```shell
# 1️⃣ Download the kubectl checksum file
[root@Crane ~]# curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"

# 2️⃣ Verify the kubectl binary against the checksum file
[root@Crane ~]# echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check
kubectl: OK
```

Install kubectl and check the version:

```shell
# Install kubectl
[root@Crane ~]# sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

# Check the installed client version
[root@Crane ~]# kubectl version --client
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.1", GitCommit:"4c9411232e10168d7b050c49a1b59f6df9d7ea4b", GitTreeState:"clean", BuildDate:"2023-04-14T13:21:19Z", GoVersion:"go1.20.3", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v5.0.1

# Detailed version information
[root@Crane ~]# kubectl version --client --output=yaml
clientVersion:
  buildDate: "2023-04-14T13:21:19Z"
  compiler: gc
  gitCommit: 4c9411232e10168d7b050c49a1b59f6df9d7ea4b
  gitTreeState: clean
  gitVersion: v1.27.1
  goVersion: go1.20.3
  major: "1"
  minor: "27"
  platform: linux/amd64
kustomizeVersion: v5.0.1
```

### 3.4. Installing helm 📕

⌛ Reference: Helm | Installing Helm

```shell
# Fetch the helm install script (a proxy may be needed)
[root@Crane ~]# curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3

# List the working directory
[root@Crane ~]# ll
total 48112
-rw-------. 1 root root     1322 Mar 29  2022 anaconda-ks.cfg
-rw-r--r--. 1 root root    11345 May 10 11:53 get_helm.sh
-rw-r--r--. 1 root root 49246208 May 10 11:48 kubectl
-rw-r--r--. 1 root root       64 May 10 11:49 kubectl.sha256

# Make the script executable and run it to install helm
[root@Crane ~]# chmod 700 get_helm.sh
[root@Crane ~]# ./get_helm.sh
Downloading https://get.helm.sh/helm-v3.11.3-linux-amd64.tar.gz
Verifying checksum... Done.
Preparing to install helm into /usr/local/bin
helm installed into /usr/local/bin/helm

# Check the helm version
[root@Crane ~]# helm version
version.BuildInfo{Version:"v3.11.3", GitCommit:"323249351482b3bbfc9f5004f65d400aa70f9ae7", GitTreeState:"clean", GoVersion:"go1.20.3"}
```

### 3.5. Installing kind 📙

🔗 Reference: kind – Quick Start

```shell
# Download the kind binary
[root@Crane ~]# curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.18.0/kind-linux-amd64

# Make it executable
[root@Crane ~]# chmod +x ./kind

# Move it onto the PATH
[root@Crane ~]# sudo mv ./kind /usr/local/bin/kind

# Check the kind version
[root@Crane ~]# kind version
kind v0.18.0 go1.20.2 linux/amd64
```
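The kubectl and kind installs above both follow the same download-verify-install pattern. A minimal, self-contained sketch of the `sha256sum --check` step — using a locally created file in place of a real download, since the file names here are stand-ins, not real release artifacts:

```shell
#!/usr/bin/env sh
# Sketch of the checksum-verification pattern used for kubectl above.
# "artifact.bin" is a stand-in for a downloaded binary; in the real flow the
# .sha256 file is published alongside the artifact by the release page.
set -e
workdir=$(mktemp -d)
cd "$workdir"

printf 'pretend-binary-content\n' > artifact.bin   # stand-in for the download
sha256sum artifact.bin > artifact.bin.sha256       # normally shipped upstream

# Same shape as: echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check
sha256sum --check artifact.bin.sha256              # prints "artifact.bin: OK"
```

If the binary were corrupted or tampered with, `sha256sum --check` would print `FAILED` and exit non-zero, which is why it belongs before the `install` step.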
## 1. Foreword

GPT (Generative Pre-trained Transformer) is a pre-trained language model based on the Transformer architecture, developed by OpenAI, and it has attracted broad attention and research in natural language processing. In my view, the GPT series marks a major advance in the field. GPT models are pre-trained on large-scale language data, from which they learn rich linguistic knowledge and regularities and can generate high-quality text. Pre-training avoids the large amounts of labeled data that traditional supervised learning requires and improves the models' scalability and generalization. At the same time, the generative power of GPT models raises concerns: because they can produce convincing text, they could be misused for disinformation or fraud. We therefore need deeper research into and discussion of GPT models to ensure their applications benefit society. Overall, GPT is an important technical innovation in natural language processing with broad application prospects and research value; while pushing the technology forward, we must also mind its safety and responsible use.

## 2. About PlumGPT

PlumGPT is a Transformer-based pre-trained language model, jointly developed by the Institute of Computing Technology of the Chinese Academy of Sciences and Huawei. A branch of the GPT family, it can be used for natural language generation, text classification, question answering, and other NLP tasks. PlumGPT adopts a model structure similar to GPT-2, including multi-layer Transformer encoders and decoders and an autoregressive mechanism. Its pre-training used a large-scale Chinese corpus drawn from Baidu Baike, news, Weibo, and other sources, from which it learned rich linguistic knowledge and can generate high-quality text. Compared with other language models, PlumGPT offers high accuracy and efficiency on Chinese NLP tasks and can provide strong support for research and applications in Chinese natural language processing.

## 3. Logging In

Open the PlumGPT page and enter your account and password. On first login, the following notice appears:

> PlumGPT — This is a free research preview. 🔬 Our goal is to get external feedback in order to improve our systems and make them safer. 🚨 While we have safeguards in place, the system may occasionally generate incorrect or misleading information and produce offensive or biased content. It is not intended to give advice.
>
> PlumGPT — How we collect data: 🦾 Conversations may be reviewed by our AI trainers to improve our systems. 🔐 Please don't share any sensitive information in your conversations.

Then start chatting with PlumGPT.

## 4. Trying It Out

1. **Chatting** — conversations keep context across turns and stay logically coherent.
2. **Translation** — given a passage of English, it immediately returns a Chinese translation.
3. **Creative writing** — clear structure, fluent wording.
4. **Recommendation letters** — the generated template reads very well.
5. **Showing images** — it can render images via Markdown syntax, a nice touch.
6. **Encyclopedia assistant** — it supplies learning links and learning paths, saving the trouble of searching around.

## 5. Summary

As an AI language model, PlumGPT's strengths include:

- **Efficiency:** it answers large volumes of questions quickly without tiring or slipping.
- **Breadth of knowledge:** it covers technology, culture, history, language, and more.
- **Language processing:** it understands and handles natural language, including grammar, vocabulary, semantics, and sentiment.
- **Learning ability:** continued training improves answer quality and accuracy.
- **Interactivity:** it converses in real time, adjusting answers to the user's questions and feedback.

Looking forward to more from PlumGPT.
❌ A Pod fails to start: the nginx service is unreachable and the Pod status shows `ImagePullBackOff`.

```shell
[root@m1 ~]# kubectl get pods
NAME                    READY   STATUS             RESTARTS   AGE
nginx-f89759699-cgjgp   0/1     ImagePullBackOff   0          103m
```

💥 Inspect the nginx Pod in detail:

```shell
[root@m1 ~]# kubectl describe pod nginx-f89759699-cgjgp
Name:             nginx-f89759699-cgjgp
Namespace:        default
Priority:         0
Service Account:  default
Node:             n1/192.168.200.84
Start Time:       Fri, 10 Mar 2023 08:40:33 +0800
Labels:           app=nginx
                  pod-template-hash=f89759699
Annotations:      <none>
Status:           Pending
IP:               10.244.3.20
IPs:
  IP:  10.244.3.20
Controlled By:  ReplicaSet/nginx-f89759699
Containers:
  nginx:
    Container ID:
    Image:          nginx
    Image ID:
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-zk8sj (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-zk8sj:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-zk8sj
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason   Age                     From     Message
  ----     ------   ----                    ----     -------
  Normal   BackOff  57m (x179 over 100m)    kubelet  Back-off pulling image "nginx"
  Normal   Pulling  7m33s (x22 over 100m)   kubelet  Pulling image "nginx"
  Warning  Failed   2m30s (x417 over 100m)  kubelet  Error: ImagePullBackOff
```

So pulling the nginx image is failing, possibly because of the Docker service. Check whether Docker is running:

```shell
systemctl status docker
```

The Docker service is down 💢, so try restarting it manually. The restart fails with the following error:

```shell
[root@m1 ~]# systemctl restart docker
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xe" for details.
```

`docker version` likewise cannot reach the daemon:

```shell
[root@m1 ~]# docker version
Client: Docker Engine - Community
 Version:           20.10.17
 API version:       1.41
 Go version:        go1.17.11
 Git commit:        100c701
 Built:             Mon Jun  6 23:03:11 2022
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
```

Run `systemctl status docker` again and read the error output:

```shell
[root@m1 ~]# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Fri 2023-03-10 10:28:16 CST; 4min 35s ago
     Docs: https://docs.docker.com
 Main PID: 2221 (code=exited, status=1/FAILURE)

Mar 10 10:28:13 m1 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Mar 10 10:28:13 m1 systemd[1]: docker.service: Failed with result 'exit-code'.
Mar 10 10:28:13 m1 systemd[1]: Failed to start Docker Application Container Engine.
Mar 10 10:28:16 m1 systemd[1]: docker.service: Service RestartSec=2s expired, scheduling restart.
Mar 10 10:28:16 m1 systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
Mar 10 10:28:16 m1 systemd[1]: Stopped Docker Application Container Engine.
Mar 10 10:28:16 m1 systemd[1]: docker.service: Start request repeated too quickly.
Mar 10 10:28:16 m1 systemd[1]: docker.service: Failed with result 'exit-code'.
Mar 10 10:28:16 m1 systemd[1]: Failed to start Docker Application Container Engine.
```

The Docker daemon is exiting with status 1/FAILURE. ✅ Troubleshoot as follows:

1️⃣ Read the Docker service log for a more detailed failure reason:

```shell
sudo journalctl -u docker.service
```

2️⃣ The log pinpoints the cause: the daemon cannot parse the `/etc/docker/daemon.json` configuration file.

```shell
Mar 10 10:20:17 m1 systemd[1]: Starting Docker Application Container Engine...
Mar 10 10:20:17 m1 dockerd[1572]: unable to configure the Docker daemon with file /etc/docker/daemon.json: invalid character '"' after object key:value pair
Mar 10 10:20:17 m1 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Mar 10 10:20:17 m1 systemd[1]: docker.service: Failed with result 'exit-code'.
Mar 10 10:20:17 m1 systemd[1]: Failed to start Docker Application Container Engine.
Mar 10 10:20:19 m1 systemd[1]: docker.service: Service RestartSec=2s expired, scheduling restart.
Mar 10 10:20:19 m1 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
Mar 10 10:20:19 m1 systemd[1]: Stopped Docker Application Container Engine.
```

3️⃣ Check whether `/etc/docker/daemon.json` is correct. Here, `"registry-mirrors"` sets the Aliyun registry mirror and `"exec-opts"` makes the daemon use systemd as the cgroup driver:

```shell
[root@m1 ~]# cat /etc/docker/daemon.json
{
    "registry-mirrors": ["https://w2kavmmf.mirror.aliyuncs.com"]
    "exec-opts": ["native.cgroupdriver=systemd"]
}
```

At first glance everything looks fine, but look closely: a comma is missing at the end of the `"registry-mirrors"` line. That missing comma (`,`) is the syntax error — the root cause is found. 🟢 After the fix:

```shell
[root@m1 ~]# cat /etc/docker/daemon.json
{
    "registry-mirrors": ["https://w2kavmmf.mirror.aliyuncs.com"],
    "exec-opts": ["native.cgroupdriver=systemd"]
}
```

Save and exit with `:wq`.

4️⃣ Reload systemd and restart the Docker service:

```shell
systemctl daemon-reload
systemctl restart docker
systemctl status docker
```

5️⃣ Confirm that `docker version` now works (the first attempt below mistypes `docket`):

```shell
[root@m1 ~]# docket version
-bash: docket: command not found
[root@m1 ~]# docker version
Client: Docker Engine - Community
 Version:           20.10.17
 API version:       1.41
 Go version:        go1.17.11
 Git commit:        100c701
 Built:             Mon Jun  6 23:03:11 2022
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.17
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.17.11
  Git commit:       a89b842
  Built:            Mon Jun  6 23:01:29 2022
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.6.6
  GitCommit:        10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1
 runc:
  Version:          1.1.2
  GitCommit:        v1.1.2-0-ga916309
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
```

`docker info` also reports a healthy daemon:

```shell
[root@m1 ~]# docker info
Client:
 Context:    default
 Debug Mode: false
 Plugins:
  app: Docker App (Docker Inc., v0.9.1-beta3)
  buildx: Docker Buildx (Docker Inc., v0.8.2-docker)
  scan: Docker Scan (Docker Inc., v0.17.0)

Server:
 Containers: 20
  Running: 8
  Paused: 0
  Stopped: 12
 Images: 20
 Server Version: 20.10.17
 Storage Driver: overlay2
  Backing Filesystem: xfs
  Supports d_type: true
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1
 runc version: v1.1.2-0-ga916309
 init version: de40ad0
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 4.18.0-372.9.1.el8.x86_64
 Operating System: Rocky Linux 8.6 (Green Obsidian)
 OSType: linux
 Architecture: x86_64
 CPUs: 2
 Total Memory: 9.711GiB
 Name: m1
 ID: 4YIS:FHSB:YXRI:CED5:PJSJ:EAS2:BCR3:GJJF:FDPK:EDJH:DVKU:AIYJ
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Registry Mirrors:
  https://w2kavmmf.mirror.aliyuncs.com/
 Live Restore Enabled: false
```

With Docker back up, the Pod recovers and the nginx service is reachable again:

```shell
[root@m1 ~]# kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
nginx-f89759699-cgjgp   1/1     Running   0          174m
```

The Pod details are also back to normal:

```shell
[root@m1 ~]# kubectl describe pod nginx-f89759699-cgjgp
Name:             nginx-f89759699-cgjgp
Namespace:        default
Priority:         0
Service Account:  default
Node:             n1/192.168.200.84
Start Time:       Fri, 10 Mar 2023 08:40:33 +0800
Labels:           app=nginx
                  pod-template-hash=f89759699
Annotations:      <none>
Status:           Running
IP:               10.244.3.20
IPs:
  IP:  10.244.3.20
Controlled By:  ReplicaSet/nginx-f89759699
Containers:
  nginx:
    Container ID:   docker://88bdc2bfa592f60bf99bac2125b0adae005118ae8f2f271225245f20b7cfb3c8
    Image:          nginx
    Image ID:       docker-pullable://nginx@sha256:aa0afebbb3cfa473099a62c4b32e9b3fb73ed23f2a75a65ce1d4b4f55a5c2ef2
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Fri, 10 Mar 2023 10:37:42 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-zk8sj (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-zk8sj:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-zk8sj
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason   Age                    From     Message
  ----    ------   ----                   ----     -------
  Normal  BackOff  58m (x480 over 171m)   kubelet  Back-off pulling image "nginx"
```
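A syntax error like the missing comma above can be caught *before* restarting Docker by validating the JSON first. A minimal sketch — the `/tmp` paths and the use of python3's stdlib `json.tool` as the validator are illustrative choices, not part of Docker itself:

```shell
#!/usr/bin/env sh
# Validate a daemon.json candidate before asking systemd to restart Docker.
# python3 -m json.tool exits non-zero on any JSON syntax error.
check_daemon_json() {
    if python3 -m json.tool "$1" > /dev/null 2>&1; then
        echo "valid: $1"
    else
        echo "INVALID: $1 - fix it before 'systemctl restart docker'"
        return 1
    fi
}

# Reproduce the broken file from the troubleshooting session (comma missing
# after the "registry-mirrors" entry), then the corrected one:
cat > /tmp/daemon-broken.json <<'EOF'
{
    "registry-mirrors": ["https://w2kavmmf.mirror.aliyuncs.com"]
    "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
cat > /tmp/daemon-fixed.json <<'EOF'
{
    "registry-mirrors": ["https://w2kavmmf.mirror.aliyuncs.com"],
    "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

check_daemon_json /tmp/daemon-broken.json || true   # reports INVALID
check_daemon_json /tmp/daemon-fixed.json            # reports valid
```

Running such a check in place of (or before) `systemctl restart docker` turns the cryptic `invalid character '"' after object key:value pair` daemon error into an immediate, local validation failure.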
## 1. The Error

The error message reads: `'ping' is not recognized as an internal or external command, operable program or batch file.`

## 2. The Fix

Add the directory containing `ping.exe` to the `Path` environment variable:

1. Confirm that the executable `PING.EXE` exists under `c:\windows\system32`.
2. Run `sysdm.cpl` to open System Properties, then go to Advanced → Environment Variables. Under `Path` (in both the user variables and the system variables), click New, add `c:\windows\system32`, and click OK.
3. Reopen a CMD prompt and test `ping` again — the problem is solved.

This resolves the "'ping' is not recognized as an internal or external command, operable program or batch file" error.
## About Anolis OS

Anolis OS 8 is a fully open-source, neutral, and open distribution from the OpenAnolis community. It supports multiple compute architectures, is optimized for cloud scenarios, and is compatible with the CentOS software ecosystem. Anolis OS 8 aims to give developers and operators a stable, high-performance, secure, reliable, and open operating system.

## Downloading the Anolis Image

Download link: https://mirrors.openanolis.cn/anolis/8.6/isos/QU1/x86_64/

## Installation

This walkthrough uses Anolis OS 8.6 with the image `AnolisOS-8.6-QU1-x86_64-dvd.iso`.

1. Choose the language used during installation — here, Chinese.
2. Configure the installation settings.
3. Configure the network parameters.
4. Set up disk partitioning.
5. Start the installation.
6. Reboot the system.
7. Accept the license agreement.
8. Create a user account.
9. Start using the system.
10. Check the system version information.
## 1. Cluster Topology Planning

## 2. Hardware Requirements

Test environment:

| Node | CPU cores | Memory | Disk |
| --- | --- | --- | --- |
| master | 2+ | 4 GB+ | 20 GB+ |
| node | 4+ | 8 GB+ | 40 GB+ |

Production environments require considerably more.

## 3. Deployment Options

1. **kubeadm** — a K8s deployment tool providing `kubeadm init` and `kubeadm join` for quickly standing up a cluster. Reference: Bootstrapping clusters with kubeadm | Kubernetes.
2. **Binary packages.**

## 4. kubeadm Deployment — System Initialization

kubeadm is the community's tool for quickly deploying a Kubernetes cluster with two commands:

- create a master node: `kubeadm init`
- join worker nodes to the cluster: `kubeadm join <master IP:port>`

Environment:

| Role | Hostname | IP address | OS | Memory | CPU | Disk |
| --- | --- | --- | --- | --- | --- | --- |
| master | k8s-master | 192.168.200.31 | CentOS 7.9 | 2 GB | 2 cores | 30 GB |
| node1 | k8s-node1 | 192.168.200.32 | CentOS 7.9 | 8 GB | 2 cores | 30 GB |
| node2 | k8s-node2 | 192.168.200.33 | CentOS 7.9 | 8 GB | 2 cores | 30 GB |

Prerequisites: a CentOS 7.9 image; three VMs meeting the hardware requirements above; the VMs can reach each other; the VMs can reach the internet to pull images; the swap partition is disabled.

Pre-install steps:

```shell
# 1. Set the hostname (adjust per node)
hostnamectl set-hostname k8s-master

# 2. Add hostname entries
cat >> /etc/hosts << EOF
192.168.200.31 k8s-master
192.168.200.32 k8s-node1
192.168.200.33 k8s-node2
EOF

# 3. Stop and disable the firewall
systemctl stop firewalld && systemctl disable firewalld
systemctl status firewalld

# 4. Put SELinux into permissive mode
setenforce 0                                                            # temporary
getenforce
sed -i "s/SELINUX=enforcing/SELINUX=permissive/g" /etc/selinux/config   # permanent
cat /etc/selinux/config

# 5. Disable the swap partition
# Reference: https://www.cnblogs.com/architectforest/p/12982886.html
swapoff --version                    # check the swapoff version
swapoff -a                           # temporary ❎
sed -ri 's/.*swap.*/#&/' /etc/fstab  # permanent ❎ (takes effect after reboot)
swapon -v                            # empty output means swap is off
free -m                              # double-check with free
swapon -a                            # re-enables swap, should you ever need it

# 6. Check NIC connectivity
cat /etc/sysconfig/network-scripts/ifcfg-ens32

# 7. Configure the Aliyun mirror
cd /etc/yum.repos.d/ && mkdir bak && mv CentOS-* bak/
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo

# 8. Build the local cache
yum makecache fast

# 9. Update packages
yum update -y

# 10. Pass bridged IPv4 traffic to iptables chains
cat >> /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
cat /etc/sysctl.d/k8s.conf

# 11. Synchronize time across hosts
yum install -y ntpdate
ntpdate time.windows.com
```

## 5. kubeadm Deployment — Master Node

1️⃣ Install Docker on all nodes:

```shell
# Install the gcc toolchain (the VMs must be able to reach the internet)
yum install -y gcc && yum install -y gcc-c++

# 1. Remove old Docker versions
yum remove docker \
    docker-client \
    docker-client-latest \
    docker-common \
    docker-latest \
    docker-latest-logrotate \
    docker-logrotate \
    docker-engine

# 2. Install required dependencies
yum install -y yum-utils

# 3. Add the Docker repo. The default repo is hosted overseas and slow:
#    yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
#    Use the Aliyun mirror instead:
yum-config-manager \
    --add-repo \
    https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# 4. Install docker-ce, the CLI, and containerd
yum install -y docker-ce docker-ce-cli containerd.io

# 5. Start Docker and enable it at boot
systemctl start docker && systemctl enable docker && systemctl status docker

# 6. Check the version
docker version
```

2️⃣ Configure the Aliyun Docker and Kubernetes mirrors on all nodes:

```shell
# 7. Aliyun Docker registry mirror
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://w2kavmmf.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload && systemctl restart docker

# 8. Aliyun Kubernetes repo
cat >> /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
```

3️⃣ Install kubelet, kubeadm, and kubectl on all nodes:

```shell
# Pin the K8s version; without a version, the latest is installed.
yum install -y kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0
systemctl enable kubelet
```

4️⃣ Initialize the Kubernetes master node, pointing at the Aliyun image registry (the default registry is unreachable from here):

```shell
kubeadm init \
  --apiserver-advertise-address=192.168.200.31 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.18.0 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16
```

To start using the cluster, run the following as a regular user:

```shell
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

```shell
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS     ROLES    AGE    VERSION
k8s-master   NotReady   master   6m8s   v1.18.0
```

## 6. kubeadm Deployment — Worker Nodes

5️⃣ Join the worker nodes to the master. On each node, run the join command that `kubeadm init` printed:

```shell
kubeadm join 192.168.200.31:6443 --token 3myqeb.35plbttpfc0tjlvz \
    --discovery-token-ca-cert-hash sha256:b8378ad91dc3c88577869edd53937f0be1851ae972035b8449e4eae875ef2542
```

```shell
# The cluster stays NotReady until a CNI network plugin is installed
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS     ROLES    AGE    VERSION
k8s-master   NotReady   master   12m    v1.18.0
k8s-node01   NotReady   <none>   116s   v1.18.0
k8s-node02   NotReady   <none>   5s     v1.18.0

# Check the Kubernetes version
[root@k8s-master ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:58:59Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:50:46Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
```

The default token is valid for 24 hours. Once it expires it can no longer be used; create a new one with:

```shell
kubeadm token create --print-join-command
```

6️⃣ Install the CNI network plugin (flannel):

```shell
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```

```shell
[root@k8s-master ~]# kubectl get pods -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-7ff77c879f-9wt65             1/1     Running   0          26m
coredns-7ff77c879f-vf892             1/1     Running   0          26m
etcd-k8s-master                      1/1     Running   0          26m
kube-apiserver-k8s-master            1/1     Running   0          26m
kube-controller-manager-k8s-master   1/1     Running   0          26m
kube-flannel-ds-65b8n                1/1     Running   0          4m22s
kube-flannel-ds-nx6gj                1/1     Running   0          4m22s
kube-flannel-ds-r6f25                1/1     Running   0          4m22s
kube-proxy-9mvdl                     1/1     Running   0          26m
kube-proxy-pwd2b                     1/1     Running   0          14m
kube-proxy-zslgz                     1/1     Running   0          16m
kube-scheduler-k8s-master            1/1     Running   0          26m
```

The cluster nodes are now Ready:

```shell
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   28m   v1.18.0
k8s-node01   Ready    <none>   18m   v1.18.0
k8s-node02   Ready    <none>   16m   v1.18.0
```

7️⃣ Test the cluster by creating a Pod and verifying it runs:

```shell
# Create an nginx deployment
kubectl create deployment nginx --image=nginx

# nginx is up
# kubectl get pod
NAME                    READY   STATUS    RESTARTS   AGE
nginx-f89759699-r6j49   1/1     Running   0          88s

# Expose nginx's port 80 via NodePort
kubectl expose deployment nginx --port=80 --type=NodePort

# Inspect the exposed port
# kubectl get pod,svc
NAME                        READY   STATUS    RESTARTS   AGE
pod/nginx-f89759699-r6j49   1/1     Running   0          3m6s

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        37m
service/nginx        NodePort    10.101.19.205   <none>        80:31814/TCP   26s
```

Access: http://NodeIP:Port
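The `sed -ri 's/.*swap.*/#&/' /etc/fstab` one-liner from the initialization steps comments out every fstab line that mentions swap, which is what makes `swapoff` permanent. A self-contained sketch against a sample fstab (the device names and file path are illustrative, not from a real node):

```shell
#!/usr/bin/env sh
# Demonstrate the swap-disabling sed from the init steps on a sample fstab.
set -e
cat > /tmp/fstab.sample <<'EOF'
/dev/mapper/cl-root   /         xfs     defaults        0 0
UUID=abcd-1234        /boot     ext4    defaults        0 0
/dev/mapper/cl-swap   swap      swap    defaults        0 0
EOF

# In the replacement, "&" re-emits the whole matched line, so "#&" prefixes
# every line containing "swap" with a comment marker.
sed -ri 's/.*swap.*/#&/' /tmp/fstab.sample

grep '^#' /tmp/fstab.sample   # only the swap line is now commented out
```

Because the pattern is `.*swap.*`, any line containing the substring "swap" is commented — on a real system it is worth eyeballing `/etc/fstab` afterwards to confirm only the swap entry was touched.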
1️⃣ Overview

Deploy an AKS cluster with the Azure portal, then run a sample multi-container application with a web front end and a Redis instance in that cluster.

2️⃣ Prerequisites

An Azure subscription.

3️⃣ Walkthrough

🔴 Create the AKS cluster

1. Sign in to the Azure portal.
2. Type "Kubernetes services" in the search box 🔍.
3. Select "Create Kubernetes service".
4. On the "Basics" page, configure the following options and click Next.
   - Project details: choose a subscription; choose or create an Azure resource group, e.g. K8SResourceGroup.
   - Cluster details: cluster preset configuration: Standard ($$); Kubernetes cluster name: myAKSCluster; region: (Asia Pacific) Korea Central; availability zones: keep the default; Kubernetes version: keep the default; API server availability: 99.5% (cost-optimized).
   - Primary node pool: the number and size of nodes in the cluster's primary node pool. For production workloads, at least 3 nodes are recommended for resiliency; for dev/test workloads a single node is enough. Node size, scale method, and node count range: defaults.
5. On the "Node pools" page, keep the defaults and click Next.
6. On the "Access" page, Kubernetes role-based access control (RBAC) is enabled by default, giving fine-grained control over access to the Kubernetes resources deployed in the AKS cluster.
7. On the "Networking" page, keep the defaults. The kubenet network plugin creates a new VNet for the cluster with default values; the Azure CNI network plugin lets the cluster use a new or existing VNet with customized addresses, attaching application Pods directly to the VNet for native integration with VNet features.
8. On the "Integrations" page, keep the defaults, and leave the remaining options at their defaults as well.
9. Click "Review + create"; once validation passes, click "Create".
10. Wait for the deployment to finish — creating an AKS cluster takes a few minutes.
11. Select "Go to resource".

Connect to the AKS cluster

Manage the Kubernetes cluster with the Kubernetes command-line client, kubectl. If you use Azure Cloud Shell, kubectl is already installed. To work from a local shell instead: verify the Azure CLI is installed and sign in to Azure with `az login`.

1. Open Cloud Shell with the `>_` button at the top of the Azure portal.
2. Configure kubectl to connect to your Kubernetes cluster with `az aks get-credentials`. The following command downloads the credentials and points the Kubernetes CLI at them:

```shell
az aks get-credentials --resource-group K8SResourceGroup --name myAKSCluster
```

```shell
PS /home/xu> kubectl get nodes
NAME                                STATUS   ROLES   AGE   VERSION
aks-agentpool-90599387-vmss000000   Ready    agent   16m   v1.22.6
```

🟢 Delete the AKS cluster

To avoid Azure charges, clean up unneeded resources if you don't plan to go through the follow-up tutorials. Click the "Delete" button on the AKS cluster dashboard, or use `az aks delete` in Cloud Shell:

```shell
az aks delete --resource-group K8SResourceGroup --name myAKSCluster --yes --no-wait
```
## 5. Connecting Elasticsearch and Kibana

Either generate an enrollment token inside the elasticsearch container, or edit the kibana.yml configuration file inside the kibana container.

```shell
# Option 1: generate an enrollment token in the elasticsearch container (failed here)
elasticsearch@900c8e4dbe11:~$ ./bin/elasticsearch-create-enrollment-token --scope kibana
ERROR: Failed to determine the health of the cluster.
elasticsearch@900c8e4dbe11:~$

# Option 2 (used in this lab): edit kibana.yml in the kibana container,
# pointing the Elasticsearch address at local 127.0.0.1
kibana@900c8e4dbe11:~$ cat > config/kibana.yml <<EOF
#
# ** THIS IS AN AUTO-GENERATED FILE **
#
# Default Kibana configuration for docker target
server.host: "0.0.0.0"
server.shutdownTimeout: "5s"
elasticsearch.hosts: [ "http://127.0.0.1:9200" ]
monitoring.ui.container.elasticsearch.enabled: true
# Display language: Chinese
i18n.locale: "zh-CN"
EOF

# Verify the change
kibana@900c8e4dbe11:~$ cat config/kibana.yml
#
# ** THIS IS AN AUTO-GENERATED FILE **
#
# Default Kibana configuration for docker target
server.host: "0.0.0.0"
server.shutdownTimeout: "5s"
elasticsearch.hosts: [ "http://127.0.0.1:9200" ]
monitoring.ui.container.elasticsearch.enabled: true
i18n.locale: "zh-CN"

# Exit and restart the container
kibana@900c8e4dbe11:~$ exit
exit
[root@docker ~]# docker restart kibana
kibana
```

Kibana asks for an enrollment token generated in the elasticsearch container; alternatively, edit kibana.yml as above to reach the Kibana management UI. Load the sample data to see the effect.

## 6. Switching Kibana's UI to Chinese

The UI before the change is shown above. Since 6.7, Kibana has supported Chinese; switching the language only takes one extra line in kibana.yml:

```shell
[root@docker ~]# docker exec -it kibana bash
kibana@900c8e4dbe11:~$

# Rewrite kibana.yml with the language setting added
kibana@900c8e4dbe11:~$ cat > config/kibana.yml <<EOF
#
# ** THIS IS AN AUTO-GENERATED FILE **
#
# Default Kibana configuration for docker target
server.host: "0.0.0.0"
server.shutdownTimeout: "5s"
elasticsearch.hosts: [ "http://127.0.0.1:9200" ]
monitoring.ui.container.elasticsearch.enabled: true
# Display language: Chinese
i18n.locale: "zh-CN"
EOF

# Review the modified kibana.yml
kibana@900c8e4dbe11:~$ cat config/kibana.yml
#
# ** THIS IS AN AUTO-GENERATED FILE **
#
# Default Kibana configuration for docker target
server.host: "0.0.0.0"
server.shutdownTimeout: "5s"
elasticsearch.hosts: [ "http://192.168.200.66:9200" ]
monitoring.ui.container.elasticsearch.enabled: true
i18n.locale: "zh-CN"
kibana@900c8e4dbe11:~$
```

The Chinese UI is shown below.

## 7. Installing Heartbeat

1. Download and install Heartbeat:

```shell
curl -L -O https://artifacts.elastic.co/downloads/beats/heartbeat/heartbeat-8.2.0-x86_64.rpm
sudo rpm -vi heartbeat-8.2.0-x86_64.rpm
```

2. Edit `/etc/heartbeat/heartbeat.yml` to set the connection info:

```yaml
output.elasticsearch:
  hosts: ["<es_url>"]
  username: "elastic"
  password: "<password>"
  # If using Elasticsearch's default certificate
  ssl.ca_trusted_fingerprint: "<es cert fingerprint>"
setup.kibana:
  host: "<kibana_url>"
```

Here `<password>` is the password of the `elastic` user, `<es_url>` is the Elasticsearch URL, and `<kibana_url>` is the Kibana URL. To configure SSL with the default certificate generated by Elasticsearch, put its fingerprint in `<es cert fingerprint>`.

3. Add a monitor by editing the `heartbeat.monitors` setting in heartbeat.yml:

```yaml
heartbeat.monitors:
- type: http
  urls: ["http://localhost:9200"]
  schedule: "@every 10s"
```

Here `http://localhost:9200` is the monitored URL.

4. Start Heartbeat:

```shell
# The setup command loads the Kibana index pattern
sudo heartbeat setup
sudo service heartbeat-elastic start
```

5. Confirm Heartbeat's status — data is arriving from Heartbeat.

## 8. Miscellaneous

- The kibana container has no `vi` editor; use the `cat > file <<EOF` heredoc style to edit files instead.
- Permission problems: if updates fail for lack of privileges, enter the container as root.

```shell
# Cluster health
[root@docker ~]# curl -X GET "localhost:9200/_cluster/health?pretty"
{
  "cluster_name" : "docker-cluster",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 14,
  "active_shards" : 14,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 1,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 93.33333333333333
}

# Brief node info
[root@docker ~]# curl -X GET "localhost:9200/_cat/nodes?pretty&v"
ip         heap.percent ram.percent cpu load_1m load_5m load_15m node.role   master name
172.18.0.2           54          77   6    0.01    0.05     0.11 cdfhilmrstw *      900c8e4dbe11

# Index list
[root@docker ~]# curl -X GET "localhost:9200/_cat/indices?v"
health status index                                 uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   .ds-heartbeat-8.2.0-2022.05.15-000001 AkhgkfJgQL2SHqBNVuBp5g   1   1          0            0       225b           225b
green  open   kibana_sample_data_logs               8mgvHLdDTIm5TwvOlB2QXA   1   0      14074            0      9.2mb          9.2mb
green  open   kibana_sample_data_ecommerce          L5j4XqRhRqmA2mIDORvZpw   1   0       4675            0      4.3mb          4.3mb
```
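The `_cluster/health` JSON above is also easy to consume from scripts by extracting the `status` field. A minimal sketch that parses a saved health response with python3 — the `/tmp/health.json` sample mirrors the output above; against a live node you would feed it `curl -s localhost:9200/_cluster/health` instead:

```shell
#!/usr/bin/env sh
# Extract the cluster status ("green"/"yellow"/"red") from a health response.
set -e
cat > /tmp/health.json <<'EOF'
{
  "cluster_name": "docker-cluster",
  "status": "yellow",
  "number_of_nodes": 1,
  "unassigned_shards": 1
}
EOF

status=$(python3 -c 'import json,sys; print(json.load(sys.stdin)["status"])' < /tmp/health.json)
echo "cluster status: $status"

# A single-node cluster is typically "yellow" because replica shards have
# nowhere to be assigned - "red" is the state worth alerting on.
[ "$status" != "red" ] && echo "cluster is usable"
```

This matches what the lab observed: one node, 14 of 15 shards assigned, hence `yellow` with one `unassigned_shards` replica.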
Lab diagram

💥 Notes:

- Elasticsearch is very memory-hungry once installed; cap its memory manually.
- Elasticsearch and Kibana must be installed with matching version numbers.
- This lab uses CentOS Linux release 7.9.2009 (Core) with Docker 20.10.14; both Elasticsearch and Kibana are version 8.2.0.

## 1. What is Elasticsearch?

Elasticsearch is a distributed, RESTful search and analytics engine capable of addressing a growing number of use cases. As the heart of the Elastic Stack, it centrally stores your data and helps you discover the expected and uncover the unexpected.

## 2. What is Kibana?

Kibana is an open-source analytics and visualization platform designed to work with Elasticsearch. With Kibana you can search, view, and interact with data stored in Elasticsearch indices, easily perform advanced data analysis, and visualize data in a variety of charts, tables, and maps.

## 3. Deploying Elasticsearch with Docker

Create a dedicated network:

```shell
docker network create xybnet
docker network ls
docker inspect xybnet
```

Output:

```shell
[root@docker ~]# docker network create xybnet
b4562c006813576d161c84f729c1a6aebf0eecb1ced954159ba02f32cd6ee656
[root@docker ~]# docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
b2ac7dc0d1c0   bridge    bridge    local
9fd62dbfb07f   host      host      local
27700772b8f7   none      null      local
b4562c006813   xybnet    bridge    local
[root@docker ~]# docker inspect xybnet
[
    {
        "Name": "xybnet",
        "Id": "b4562c006813576d161c84f729c1a6aebf0eecb1ced954159ba02f32cd6ee656",
        "Created": "2022-05-13T23:03:55.546299236+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]
```

Pull the elasticsearch image:

```shell
docker search elasticsearch
docker pull elasticsearch:8.2.0
docker images
```

Output:

```shell
[root@docker ~]# docker search elasticsearch
[root@docker ~]# docker pull elasticsearch:8.2.0
8.2.0: Pulling from library/elasticsearch
e0b25ef51634: Already exists
860caabdf263: Already exists
9fbe6bc43ac5: Already exists
9d4f6737f430: Already exists
10f01841fd3e: Already exists
dae1e3bba098: Already exists
0a3767e40ef9: Already exists
7d786dfd085d: Already exists
7ce904f28ed3: Already exists
Digest: sha256:6bd33a35f529d349d8d385856b138d73241555abf2851287c055665494680b8d
Status: Downloaded newer image for elasticsearch:8.2.0
docker.io/library/elasticsearch:8.2.0
[root@docker ~]# docker images
REPOSITORY      TAG            IMAGE ID       CREATED       SIZE
redis           6-alpine3.15   6d12d0de5a46   2 weeks ago   32.4MB
elasticsearch   8.2.0          f75ee9faf718   3 weeks ago   1.21GB
tomcat          latest         fb5657adc892   4 months ago  680MB
elasticsearch   latest         5acf0e8da90b   3 years ago   486MB
[root@docker ~]# docker inspect elasticsearch:8.2.0
[
    {
        "Id": "sha256:f75ee9faf7183b931afb70d416647824c9b344e83905bbe7f70062b5eab91e43",
        "RepoTags": [
            "elasticsearch:8.2.0"
        ],
        "RepoDigests": [
****** remaining output omitted ******
```

Create and start the Elasticsearch container:

```shell
# Variant that caps memory and mounts a data volume
docker run -d --name xybes --net xybnet -p 9200:9200 -p 9300:9300 \
  -e "discovery.type=single-node" -e ES_JAVA_OPTS="-Xms1024m -Xmx2048m" \
  -p 5601:5601 -v /mydata/elasticsearch/data:/usr/share/elasticsearch/data \
  elasticsearch:8.2.0

# Variant without the custom network
docker run -d --name xybes -p 9200:9200 -p 9300:9300 -p 5601:5601 \
  -e "discovery.type=single-node" elasticsearch:8.2.0

# The command actually used in this lab
docker run -d --name xybes --net xybnet -p 9200:9200 -p 9300:9300 -p 5601:5601 \
  -e "discovery.type=single-node" elasticsearch:8.2.0
docker ps
```

```shell
# Flag reference
# -d                                   run in the background
# --name xybes                         give the container a unique, manageable name
# --net xybnet                         attach to the custom network
# -p 9200:9200 -p 9300:9300            map container ports to the host
# -e "discovery.type=single-node"      run in single-node mode
# -v /mydata/elasticsearch/data:/usr/share/elasticsearch/data   persist data
# -e ES_JAVA_OPTS="-Xms1024m -Xmx2048m"   set the JVM heap size
# elasticsearch:8.2.0                  image name and tag
```

Output:

```shell
[root@docker ~]# docker run -d --name xybes --net xybnet -p 9200:9200 -p 9300:9300 -p 5601:5601 -e "discovery.type=single-node" elasticsearch:8.2.0
900c8e4dbe11c4460543859b8c887d1fbb21b33071474e079de430e087fdb92f
[root@docker ~]# docker ps
CONTAINER ID   IMAGE                 COMMAND                  CREATED          STATUS          PORTS                                                                                                                   NAMES
900c8e4dbe11   elasticsearch:8.2.0   "/bin/tini -- /usr/l…"   17 seconds ago   Up 16 seconds   0.0.0.0:5601->5601/tcp, :::5601->5601/tcp, 0.0.0.0:9200->9200/tcp, :::9200->9200/tcp, 0.0.0.0:9300->9300/tcp, :::9300->9300/tcp   xybes
[root@docker ~]# docker inspect 900c8e4dbe11
[
    {
        "Id": "900c8e4dbe11c4460543859b8c887d1fbb21b33071474e079de430e087fdb92f",
        "Created": "2022-05-14T12:56:51.502325109Z",
        "Path": "/bin/tini",
        "Args": [
            "--",
            "/usr/local/bin/docker-entrypoint.sh",
            "eswrapper"
        ],
        "State": {
            "Status": "running",
            "Running": true,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 7605,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2022-05-14T12:56:52.189429009Z",
            "FinishedAt": "0001-01-01T00:00:00Z"
        },
****** remaining output omitted ******
```

Enter the container and set the passwords for the reserved users:

```shell
# Enter the xybes container
[root@docker ~]# docker exec -it xybes /bin/bash
cluster.name: "docker-cluster"

# List the installation directory
elasticsearch@900c8e4dbe11:~$ ls
LICENSE.txt  NOTICE.txt  README.asciidoc  bin  config  data  jdk  lib  logs  modules  plugins

# Show help for the password setup tool
elasticsearch@900c8e4dbe11:~$ ./bin/elasticsearch-setup-passwords -h
Sets the passwords for reserved users

Commands
--------
auto - Uses randomly generated passwords
interactive - Uses passwords entered by a user

Non-option arguments:
command

Option             Description
------             -----------
-E <KeyValuePair>  Configure a setting
-h, --help         Show help
-s, --silent       Show minimal output
-v, --verbose      Show verbose output

# Set the passwords interactively
elasticsearch@900c8e4dbe11:~$ ./bin/elasticsearch-setup-passwords interactive
******************************************************************************
Note: The 'elasticsearch-setup-passwords' tool has been deprecated. This
command will be removed in a future release.
******************************************************************************
Initiating the setup of passwords for reserved users elastic,apm_system,kibana,kibana_system,logstash_system,beats_system,remote_monitoring_user.
You will be prompted to enter passwords as the process progresses.
```
Please confirm that you would like to continue [y/N]y Enter password for [elastic]: Reenter password for [elastic]: Enter password for [apm_system]: Reenter password for [apm_system]: Enter password for [kibana_system]: Reenter password for [kibana_system]: Enter password for [logstash_system]: Reenter password for [logstash_system]: Enter password for [beats_system]: Reenter password for [beats_system]: Enter password for [remote_monitoring_user]: Reenter password for [remote_monitoring_user]: Changed password for user [apm_system] Changed password for user [kibana_system] Changed password for user [kibana] Changed password for user [logstash_system] Changed password for user [beats_system] Changed password for user [remote_monitoring_user] Changed password for user [elastic] elasticsearch@900c8e4dbe11:~$ ls LICENSE.txt NOTICE.txt README.asciidoc bin config data jdk lib logs modules plugins # 进入elasticsearch.yml配置文件,关闭SSL(即修改此命令xpack.security.enabled: false) elasticsearch@900c8e4dbe11:~$ vi config/elasticsearch.yml elasticsearch@900c8e4dbe11:~$ cat config/elasticsearch.yml cluster.name: "docker-cluster" network.host: 0.0.0.0 #----------------------- BEGIN SECURITY AUTO CONFIGURATION ----------------------- # # The following settings, TLS certificates, and keys have been automatically # generated to configure Elasticsearch security features on 14-05-2022 12:56:54 # # -------------------------------------------------------------------------------- # Enable security features xpack.security.enabled: false xpack.security.enrollment.enabled: true # Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents xpack.security.http.ssl: enabled: true keystore.path: certs/http.p12 # Enable encryption and mutual authentication between cluster nodes xpack.security.transport.ssl: enabled: true verification_mode: certificate keystore.path: certs/transport.p12 truststore.path: certs/transport.p12 #----------------------- END SECURITY AUTO CONFIGURATION 
------------------------- # 退出容器 elasticsearch@900c8e4dbe11:~$ exit exit # 系统重新加载 [root@docker ~]# systemctl daemon-reload # 重启xybes容器 [root@docker ~]# docker restart xybes xybes # 测试访问 [root@docker ~]# curl http://localhost:9200 curl: (56) Recv failure: Connection reset by peer # 使用IP:9200访问成功 [root@docker ~]# curl http://192.168.200.66:9200 { "name" : "900c8e4dbe11", "cluster_name" : "docker-cluster", "cluster_uuid" : "wDwmop88TiO1Rkf1fecHvg", "version" : { "number" : "8.2.0", "build_flavor" : "default", "build_type" : "docker", "build_hash" : "b174af62e8dd9f4ac4d25875e9381ffe2b9282c5", "build_date" : "2022-04-20T10:35:10.180408517Z", "build_snapshot" : false, "lucene_version" : "9.1.0", "minimum_wire_compatibility_version" : "7.17.0", "minimum_index_compatibility_version" : "7.0.0" }, "tagline" : "You Know, for Search" } [root@docker ~]## 以SSL安全模式访问。 [root@docker ~]# docker cp xybes:/usr/share/elasticsearch/config/certs/http_ca.crt ./ # 访问成功 [root@docker ~]# curl --cacert http_ca.crt -u elastic https://localhost:9200 Enter host password for user 'elastic': { "name" : "900c8e4dbe11", "cluster_name" : "docker-cluster", "cluster_uuid" : "wDwmop88TiO1Rkf1fecHvg", "version" : { "number" : "8.2.0", "build_flavor" : "default", "build_type" : "docker", "build_hash" : "b174af62e8dd9f4ac4d25875e9381ffe2b9282c5", "build_date" : "2022-04-20T10:35:10.180408517Z", "build_snapshot" : false, "lucene_version" : "9.1.0", "minimum_wire_compatibility_version" : "7.17.0", "minimum_index_compatibility_version" : "7.0.0" }, "tagline" : "You Know, for Search" } [root@docker ~]#四、使用docker安装部署Kibana下载kibana镜像(注意对应版本)docker search kibana docker pull kibana:8.2.0 docker images执行命令结果[root@docker ~]# docker pull kibana:8.2.0 8.2.0: Pulling from library/kibana e0b25ef51634: Already exists 16168a059524: Pull complete a7c5b97fb1b3: Pull complete b4997d90f2a3: Pull complete 08edfcb77367: Pull complete 162b89073472: Pull complete c27ed485e628: Pull complete c8ec5118d07e: Pull complete 
3098c58d1611: Pull complete f0cd89b25439: Pull complete 90247f6ea1db: Pull complete 3bdda07522a2: Pull complete 36a9ec86c178: Pull complete Digest: sha256:0ba5d3d3ddab3212eadd15bcc701c24a2baafe2f8bd7ced9d2a750cf227b8a06 Status: Downloaded newer image for kibana:8.2.0 docker.io/library/kibana:8.2.0 [root@docker ~]# docker images REPOSITORY TAG IMAGE ID CREATED SIZE redis 6-alpine3.15 6d12d0de5a46 2 weeks ago 32.4MB kibana 8.2.0 58a692253df4 3 weeks ago 752MB elasticsearch 8.2.0 f75ee9faf718 3 weeks ago 1.21GB tomcat latest fb5657adc892 4 months ago 680MB elasticsearch 7.7.0 7ec4f35ab452 2 years ago 757MB elasticsearch latest 5acf0e8da90b 3 years ago 486MB [root@docker ~]#创建并启动kibana容器# 本实验执行此命令 docker run -it -d --name kibana --network=container:xybes kibana:8.2.0 docker run -it -d -e ELASTICSEARCH_URL=http://127.0.0.1:9200 --name kibana --network=container:xybes -v /data/kibana/config:/usr/share/kibana/config kibana:8.2.0执行命令结果[root@docker ~]# docker run -it -d --name kibana --network=container:xybes kibana:8.2.0 88969a52ec18c84fa7950a80f0211fc645c8de2df49b2b70ee8847e8903e026c [root@docker ~]# docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 88969a52ec18 kibana:8.2.0 "/bin/tini -- /usr/l…" 50 seconds ago Up 49 seconds kibana 900c8e4dbe11 elasticsearch:8.2.0 "/bin/tini -- /usr/l…" 4 hours ago Up 3 hours 0.0.0.0:5601->5601/tcp, :::5601->5601/tcp, 0.0.0.0:9200->9200/tcp, :::9200->9200/tcp, 0.0.0.0:9300->9300/tcp, :::9300->9300/tcp xybes [root@docker ~]# [root@docker ~]# docker inspect 88969a52ec18 [ { "Id": "88969a52ec18c84fa7950a80f0211fc645c8de2df49b2b70ee8847e8903e026c", "Created": "2022-05-14T17:08:38.738984864Z", "Path": "/bin/tini", "Args": [ "--", "/usr/local/bin/kibana-docker" ], "State": { "Status": "running", "Running": true, "Paused": false, "Restarting": false, "OOMKilled": false, "Dead": false, "Pid": 27371, "ExitCode": 0, "Error": "", "StartedAt": "2022-05-14T17:08:38.908540948Z", "FinishedAt": "0001-01-01T00:00:00Z" }, ******以下输出内容省略******
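The in-container edit made earlier (setting `xpack.security.enabled: false`) can be sketched offline. This is a minimal illustration on a scratch copy of the config: the temp file and abbreviated contents stand in for the container's real /usr/share/elasticsearch/config/elasticsearch.yml.

```shell
# A minimal offline sketch of the security toggle edited inside the container
# above. The config below is an abbreviated stand-in written to a temp file,
# not the real /usr/share/elasticsearch/config/elasticsearch.yml.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
cluster.name: "docker-cluster"
network.host: 0.0.0.0
xpack.security.enabled: true
xpack.security.enrollment.enabled: true
EOF
# Turn security off so plain http://host:9200 works without TLS or passwords
sed -i 's/^xpack.security.enabled: true$/xpack.security.enabled: false/' "$cfg"
grep '^xpack.security.enabled' "$cfg"
```

After the same change inside the container, a `docker restart xybes` is still required for Elasticsearch to pick it up, as shown above.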
二、OceanBase介绍OceanBase是由蚂蚁集团完全自主研发的金融级分布式关系数据库,始创于2010年。OceanBase具有数据强一致、高可用、高性能、在线扩展、高度兼容SQL标准和主流关系数据库、低成本等特点。OceanBase 社区版是一款开源分布式 HTAP(Hybrid Transactional/Analytical Processing)数据库管理系统,具有原生分布式架构,支持金融级高可用、透明水平扩展、分布式事务、多租户和语法兼容等企业级特性。OceanBase 内核通过大规模商用场景的考验,已服务众多行业客户,现面向未来持续构建内核技术竞争力。三、OceanBase安装操作本实验基于CentOS 7.9系统进行演示操作[root@oceanbase ~]# cat /etc/redhat-release CentOS Linux release 7.9.2009 (Core)安装前期准备本实验采用单机模式 的部署方式,在同一台机器上安装服务端和客户端进行测试。需要内存大小8GB 以上;(本实验内存大小 10 GB)磁盘空间大小65GB以上;(本实验磁盘大小 95 GB)1、通过 YUM 软件源下载并安装 OBD执行以下三种命令。# yum install -y yum-utils # yum-config-manager --add-repo https://mirrors.aliyun.com/oceanbase/OceanBase.repo # yum install -y ob-deploy[root@obd ~]# yum install -y yum-utils Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile Package yum-utils-1.1.31-54.el7_8.noarch already installed and latest version Nothing to do [root@obd ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/oceanbase/OceanBase.repo Loaded plugins: fastestmirror adding repo from: https://mirrors.aliyun.com/oceanbase/OceanBase.repo grabbing file https://mirrors.aliyun.com/oceanbase/OceanBase.repo to /etc/yum.repos.d/OceanBase.repo repo saved to /etc/yum.repos.d/OceanBase.repo [root@obd ~]# yum install -y ob-deploy Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile Package ob-deploy-1.2.1-9.el7.x86_64 already installed and latest version Nothing to do [root@obd ~]#或者离线安装 OBD 1. 下载 OBD 离线 RPM 安装包。 2. 运行以下命令安装 OBD。 # yum install -y ob-deploy-1.0.0-1.el7.x86_64.rpm # source /etc/profile.d/obd.sh2、下载 OceanBase 数据库配置文件模板从 Github 上下载对应的配置文件模板。本实验采用的是mini-local-example.yaml 配置文件Gitee下载链接:example/mini-local-example.yaml · OceanBase/obdeploy - Gitee.comoceanbase-ce: servers: # Please don't use hostname, only IP can be supported - 127.0.0.1 global: # The working directory for OceanBase Database. OceanBase Database is started under this directory. This is a required field. home_path: /xyb/observer # The directory for data storage. 
The default value is $home_path/store. # data_dir: /data # The directory for clog, ilog, and slog. The default value is the same as the data_dir value. # redo_dir: /redo # Please set devname as the network adaptor's name whose ip is in the setting of severs. # if set severs as "127.0.0.1", please set devname as "lo" # if current ip is 192.168.1.10, and the ip's network adaptor's name is "eth0", please use "eth0" devname: lo mysql_port: 2881 # External port for OceanBase Database. The default value is 2881. DO NOT change this value after the cluster is started. rpc_port: 2882 # Internal port for OceanBase Database. The default value is 2882. DO NOT change this value after the cluster is started. zone: zone1 cluster_id: 1 # please set memory limit to a suitable value which is matching resource. memory_limit: 8G # The maximum running memory for an observer system_memory: 4G # The reserved system memory. system_memory is reserved for general tenants. The default value is 30G. stack_size: 512K cpu_count: 16 cache_wash_threshold: 1G __min_full_resource_pool_memory: 268435456 workers_per_cpu_quota: 10 schema_history_expire_time: 1d # The value of net_thread_count had better be same as cpu's core number. net_thread_count: 4 sys_bkgd_migration_retry_num: 3 minor_freeze_times: 10 enable_separate_sys_clog: 0 enable_merge_by_turn: FALSE datafile_disk_percentage: 20 # The percentage of the data_dir space to the total disk space. This value takes effect only when datafile_size is 0. The default value is 90. syslog_level: INFO # System log level. The default value is INFO. enable_syslog_wf: false # Print system logs whose levels are higher than WARNING to a separate log file. The default value is true. enable_syslog_recycle: true # Enable auto system log recycling or not. The default value is false. max_syslog_file_count: 4 # The maximum number of reserved log files before enabling auto recycling. The default value is 0. 
# root_password: # root user password, can be empty3、部署 OceanBase 数据库运行以下命令部署集群obd cluster deploy <deploy_name> -c <deploy_config_file> -A[root@obs ~]# obd cluster deploy xybobs -c mini-local-example.yaml Update OceanBase-community-stable-el7 ok Update OceanBase-development-kit-el7 ok Download oceanbase-ce-3.1.2-10000392021123010.el7.x86_64.rpm (46.45 M): 100% [##############] Time: 0:00:08 5.74 MB/s Package oceanbase-ce-3.1.2 is available. install oceanbase-ce-3.1.2 for local ok +-------------------------------------------------------------------------------------------+ | Packages | +--------------+---------+-----------------------+------------------------------------------+ | Repository | Version | Release | Md5 | +--------------+---------+-----------------------+------------------------------------------+ | oceanbase-ce | 3.1.2 | 10000392021123010.el7 | 7fafba0fac1e90cbd1b5b7ae5fa129b64dc63aed | +--------------+---------+-----------------------+------------------------------------------+ Repository integrity check ok Parameter check ok Open ssh connection ok Remote oceanbase-ce-3.1.2-7fafba0fac1e90cbd1b5b7ae5fa129b64dc63aed repository install ok Remote oceanbase-ce-3.1.2-7fafba0fac1e90cbd1b5b7ae5fa129b64dc63aed repository lib check !! [WARN] 127.0.0.1 oceanbase-ce-3.1.2-7fafba0fac1e90cbd1b5b7ae5fa129b64dc63aed require: libmariadb.so.3 Try to get lib-repository Download oceanbase-ce-libs-3.1.2-10000392021123010.el7.x86_64.rpm (155.22 K): 100% [########] Time: 0:00:00 6.76 MB/s Package oceanbase-ce-libs-3.1.2 is available. 
install oceanbase-ce-libs-3.1.2 for local ok Use oceanbase-ce-libs-3.1.2-94fff0ab31de053051dba66039e3185fa390cad5 for oceanbase-ce-3.1.2-7fafba0fac1e90cbd1b5b7ae5fa129b64dc63aed Remote oceanbase-ce-libs-3.1.2-94fff0ab31de053051dba66039e3185fa390cad5 repository install ok Remote oceanbase-ce-3.1.2-7fafba0fac1e90cbd1b5b7ae5fa129b64dc63aed repository lib check ok Cluster status check ok Initializes observer work home ok xybobs deployed [root@obs ~]# 4、启动 OceanBase 数据库[root@obs ~]# obd cluster start xybobs Get local repositories and plugins ok Open ssh connection ok Load cluster param plugin ok Check before start observer ok [WARN] (127.0.0.1) clog and data use the same disk (/) Start observer ok observer program health check ok Connect to observer ok Initialize cluster Cluster bootstrap ok Wait for observer init ok +---------------------------------------------+ | observer | +-----------+---------+------+-------+--------+ | ip | version | port | zone | status | +-----------+---------+------+-------+--------+ | 127.0.0.1 | 3.1.2 | 2881 | zone1 | active | +-----------+---------+------+-------+--------+ xybobs running [root@obs ~]#5、连接OceanBase数据库安装OceanBase数据库客户端 OBClient# yum install -y obclient[root@obs ~]# yum install -y obclient Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile Resolving Dependencies --> Running transaction check ---> Package obclient.x86_64 0:2.0.0-2.el7 will be installed --> Processing Dependency: libobclient >= 2.0.0 for package: obclient-2.0.0-2.el7.x86_64 --> Running transaction check ---> Package libobclient.x86_64 0:2.0.0-2.el7 will be installed --> Finished Dependency Resolution Dependencies Resolved ====================================================================================================================================== Package Arch Version Repository Size ====================================================================================================================================== Installing: 
obclient x86_64 2.0.0-2.el7 oceanbase.community.stable 40 M Installing for dependencies: libobclient x86_64 2.0.0-2.el7 oceanbase.community.stable 643 k Transaction Summary ====================================================================================================================================== Install 1 Package (+1 Dependent package) Total download size: 41 M Installed size: 188 M Downloading packages: (1/2): libobclient-2.0.0-2.el7.x86_64.rpm | 643 kB 00:00:00 (2/2): obclient-2.0.0-2.el7.x86_64.rpm | 40 MB 00:00:06 -------------------------------------------------------------------------------------------------------------------------------------- Total 6.6 MB/s | 41 MB 00:00:06 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : libobclient-2.0.0-2.el7.x86_64 1/2 Installing : obclient-2.0.0-2.el7.x86_64 2/2 Verifying : libobclient-2.0.0-2.el7.x86_64 1/2 Verifying : obclient-2.0.0-2.el7.x86_64 2/2 Installed: obclient.x86_64 0:2.0.0-2.el7 Dependency Installed: libobclient.x86_64 0:2.0.0-2.el7 Complete! [root@obs ~]#使用Root用户登录 OceanBase 数据库[root@obs ~]# obclient -h192.168.200.88 -P2881 -uroot Welcome to the OceanBase. Commands end with ; or \g. Your MySQL connection id is 3221487658 Server version: 5.7.25 OceanBase 3.1.2 (r10000392021123010-d4ace121deae5b81d8f0b40afbc4c02705b7fc1d) (Built Dec 30 2021 02:47:29) Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others. Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. MySQL [(none)]> show databases; +--------------------+ | Database | +--------------------+ | oceanbase | | information_schema | | mysql | | SYS | | LBACSYS | | ORAAUDITOR | | test | +--------------------+ 7 rows in set (0.007 sec) MySQL [(none)]> exit Bye切换使用obs用户登录 OceanBase 数据库[root@obs ~]# su obs [obs@obs root]$ obclient -h192.168.200.88 -P2881 -uroot Welcome to the OceanBase. Commands end with ; or \g. 
Your MySQL connection id is 3221487837 Server version: 5.7.25 OceanBase 3.1.2 (r10000392021123010-d4ace121deae5b81d8f0b40afbc4c02705b7fc1d) (Built Dec 30 2021 02:47:29) Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others. Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. # 查看数据库 MySQL [(none)]> show databases; +--------------------+ | Database | +--------------------+ | oceanbase | | information_schema | | mysql | | SYS | | LBACSYS | | ORAAUDITOR | | test | +--------------------+ 7 rows in set (0.002 sec) MySQL [(none)]>6、OceanBase 数据库 常用命令# obs帮助命令 [root@obs ~]# obd -h Usage: obd <command> [options] Available commands: cluster Deploy and manage a cluster. mirror Manage a component repository for OBD. repo Manage local repository for OBD. test Run test for a running deployment. update Update OBD. Options: --version show program's version number and exit -h, --help Show help and exit. -v, --verbose Activate verbose output. # 查看obd管理的集群列表 [root@obs ~]# obd cluster list +------------------------------------------------------+ | Cluster List | +--------+---------------------------+-----------------+ | Name | Configuration Path | Status (Cached) | +--------+---------------------------+-----------------+ | xybobs | /root/.obd/cluster/xybobs | running | +--------+---------------------------+-----------------+ # 查看集群状态 [root@obs ~]# obd cluster display xybobs Get local repositories and plugins ok Open ssh connection ok Cluster status check ok Connect to observer ok Wait for observer init ok +---------------------------------------------+ | observer | +-----------+---------+------+-------+--------+ | ip | version | port | zone | status | +-----------+---------+------+-------+--------+ | 127.0.0.1 | 3.1.2 | 2881 | zone1 | active | +-----------+---------+------+-------+--------+ 四、安装过程中的报错信息磁盘空间不足,至少所需65G磁盘空间大小。[root@obs ~]# vim mini-local-example.yaml [root@obs ~]# obd cluster autodeploy xyb -c mini-local-example.yaml Update 
OceanBase-community-stable-el7 ok Update OceanBase-development-kit-el7 ok Download oceanbase-ce-3.1.2-10000392021123010.el7.x86_64.rpm (46.45 M): 100% [####] Time: 0:00:06 7.88 MB/s Package oceanbase-ce-3.1.2 is available. install oceanbase-ce-3.1.2 for local ok Cluster param config check ok Open ssh connection ok Generate observer configuration x [ERROR] (127.0.0.1) / not enough disk space. (Avail: 14.9G, Need: 64.1G). Use `redo_dir` to set other disk for clog网卡名称错误❌,本机安装使用的IP是127.0.0.1,对应的网络名称是lo。销毁集群后重新部署。[root@obs ~]# obd cluster start xybobs Get local repositories and plugins ok Open ssh connection ok Load cluster param plugin ok Check before start observer x [WARN] (127.0.0.1) clog and data use the same disk (/) [ERROR] 127.0.0.1 ens32 fail to ping 127.0.0.1. Please check configuration `devname` [root@obs ~]#提示所需的系统内存不足,需要提升内存大小。(推荐内存大小在16GB以上)[root@obs ~]# obd cluster start xybobs Get local repositories and plugins ok Open ssh connection ok Load cluster param plugin ok Check before start observer x [ERROR] (127.0.0.1) not enough memory. (Free: 7.3G, Need: 8.0G) [WARN] (127.0.0.1) clog and data use the same disk (/) [root@obs ~]#
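The three deployment errors above (disk, devname, memory) can all be caught before running `obd cluster deploy`. Below is a sketch of such a pre-flight check; the YAML is an abbreviated, illustrative copy of mini-local-example.yaml, not the full template.

```shell
# Sketch of a pre-flight check on the deploy config, catching two of the
# errors hit above (devname not matching 127.0.0.1, and an oversized
# memory_limit). The YAML is an abbreviated, illustrative copy.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
oceanbase-ce:
  servers:
    - 127.0.0.1
  global:
    home_path: /xyb/observer
    devname: lo
    mysql_port: 2881
    memory_limit: 8G
EOF
# 127.0.0.1 must pair with the loopback device "lo"
grep -q 'devname: lo' "$cfg" && echo "devname ok"
# obd refuses to start the observer if free memory is below memory_limit
mem_gb=$(sed -n 's/^ *memory_limit: \([0-9]*\)G$/\1/p' "$cfg")
echo "memory_limit: ${mem_gb}G"
```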
SNMP: Simple Network Management Protocol
Three kinds of operations: read (get, getnext), write (set), and trap
Ports: 161/udp and 162/udp
SNMP is used to monitor network devices such as switches and routers
MIB: Management Information Base
OID: Object ID
1、Download and install the SNMP packages
[root@zabbix-server ~]# yum install net-snmp net-snmp-utils
2、Edit the snmpd.conf configuration file
The configuration file defines the ACL (access control):
[root@zabbix-server ~]# vim /etc/snmp/snmpd.conf
# Map the community name "public" to a "security name"
#       sec.name       source        community
com2sec notConfigUser  default       public
# Map the security name to a group name
#       groupName      securityModel securityName
group   notConfigGroup v1            notConfigUser
group   notConfigGroup v2c           notConfigUser
# Create a view the group will be granted access to
view    systemview     included      .1.3.6.1.2.1.1
view    systemview     included      .1.3.6.1.2.1.2      # network-interface data
view    systemview     included      .1.3.6.1.4.1.2021   # system load: memory, disk I/O, CPU load
view    systemview     included      .1.3.6.1.2.1.25
Common OIDs under the prefix .1.3.6.1.2.1.:
1.1.0: system description, sysDescr
1.3.0: uptime, sysUptime
1.5.0: hostname, sysName
1.7.0: services provided by the host, sysServices
2.1.0: number of network interfaces
2.2.1.2: interface description
2.2.1.3: interface type
# Finally, grant the group read-only access to the systemview view.
#       group          context sec.model sec.level prefix read       write notif
access  notConfigGroup ""      any       noauth    exact  systemview none  none
3、Start the SNMP services
[root@zabbix-server ~]# systemctl start snmpd       # service on the monitored host
[root@zabbix-server ~]# systemctl enable snmpd
[root@zabbix-server ~]# systemctl start snmptrapd   # service on the monitoring host (enable it when monitored hosts are allowed to send traps actively)
[root@zabbix-server ~]# systemctl enable snmptrapd
[root@zabbix-server ~]# systemctl status snmptrapd
4、Verify that SNMP monitoring works
# Test from the Zabbix server
[root@zabbix-server ~]# snmpget -v 2c -c public 192.168.200.60 .1.3.6.1.2.1.1.3.0
DISMAN-EVENT-MIB::sysUpTimeInstance = Timeticks: (30223) 0:05:02.23
[root@zabbix-server ~]# snmpget -v 2c -c public 192.168.200.60 .1.3.6.1.2.1.1.5.0
SNMPv2-MIB::sysName.0 = STRING: zabbix-server
5、In the monitoring UI, add an SNMP interface and template to the corresponding host
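The snmpd.conf ACL forms a chain (community → group → view → access) that can be sketched and inspected on its own. The stanza below is a trimmed, illustrative copy written to a scratch file; the community string and OID subtrees follow the article.

```shell
# Sketch: the snmpd.conf ACL chain (community -> group -> view -> access)
# written to a scratch file and inspected. Trimmed to its essentials.
acl=$(mktemp)
cat > "$acl" <<'EOF'
com2sec notConfigUser default public
group notConfigGroup v1 notConfigUser
group notConfigGroup v2c notConfigUser
view systemview included .1.3.6.1.2.1.1
view systemview included .1.3.6.1.2.1.25
access notConfigGroup "" any noauth exact systemview none none
EOF
# List the OID subtrees the read-only view exposes
awk '$1 == "view" { print $4 }' "$acl"
```

Any OID queried with `snmpget -c public` must fall under one of the listed `view` subtrees, or snmpd answers with "No Such Object".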
二、PostgreSQL介绍PostgreSQL是一种特性非常齐全的自由软件的对象-关系型数据库管理系统(ORDBMS),是以加州大学计算机系开发的POSTGRES,4.2版本为基础的对象关系型数据库管理系统。POSTGRES的许多领先概念只是在比较迟的时候才出现在商业网站数据库中。PostgreSQL支持大部分的SQL标准并且提供了很多其他现代特性,如复杂查询、外键、触发器、视图、事务完整性、多版本并发控制等。同样,PostgreSQL也可以用许多方法扩展,例如通过增加新的数据类型、函数、操作符、聚集函数、索引方法、过程语言等。另外,因为许可证的灵活,任何人都可以以任何目的免费使用、修改和分发PostgreSQL。(——PostgreSQL_百度百科)三、PostgreSQL安装本实验基于CentOS 7.9系统进行演示操作[root@postgresql ~]# cat /etc/redhat-release CentOS Linux release 7.9.2009 (Core)安装准备修改主机名 # hostnamectl set-hostname prostgresql 关闭防火墙 # systemctl stop firewalld # systemctl disable firewalld 关闭SELinux安全模式 # setenforce 0 # getenforce 配置网络信息并测试连通性 vim /etc/sysconfig/network-scripts/ifcfg-ens32 主要修改如下参数信息即可。 BOOTPROTO=static ONBOOT=yes IPADDR=192.168.200.25 PREFIX=24 GATEWAY=192.168.200.1 DNS1=192.168.200.1 按:wq保存退出。 重启网卡 # systemctl restart network # ping bing.com 配置阿里云CentOS YUM源,加快镜像访问下载 参考链接:https://blog.csdn.net/qq_45392321/article/details/121450443 # yum clean all # yum makecache # yum repolist 升级系统🆙 # yum update 检查postgresql是否安装 # rpm -qa | grep postgre 检查PostgreSQL 安装位置 # rpm -qal | grep postgres 新增postgres用户组 # groupadd postgres 新增postgres用户并且设置这个postgres用户属于创建的postgres用户组 # useradd -g postgres postgres 修改postgres用户密码 [root@postgresql ~]# passwd postgres Changing password for user postgres. New password: BAD PASSWORD: The password is a palindrome Retype new password: passwd: all authentication tokens updated successfully. [root@postgresql ~]# 重启系统 reboot1、查询并安装postgresql-serveryum list | grep postgresql-server yum install -y postgresql-server.x86_642、初始化postgresql-server数据库service postgresql initdb# service postgresql initdb Hint: the preferred way to do this is now "postgresql-setup initdb" Initializing database ... OK3、启动postgresql服务并设置开机自启动systemctl start postgresql systemctl enable postgresql4、查看postgresql服务状态systemctl status postgresql5、查看服务进程信息[root@postgresql ~]# ps -ef | grep postgres postgres 1405 1 0 16:05 ? 
00:00:00 /usr/bin/postgres -D /var/lib/pgsql/data -p 5432 postgres 1406 1405 0 16:05 ? 00:00:00 postgres: logger process postgres 1408 1405 0 16:05 ? 00:00:00 postgres: checkpointer process postgres 1409 1405 0 16:05 ? 00:00:00 postgres: writer process postgres 1410 1405 0 16:05 ? 00:00:00 postgres: wal writer process postgres 1411 1405 0 16:05 ? 00:00:00 postgres: autovacuum launcher process postgres 1412 1405 0 16:05 ? 00:00:00 postgres: stats collector process root 1440 1131 0 16:07 pts/0 00:00:00 grep --color=auto postgres [root@postgresql ~]# 6、查看postgresql服务端口是否开启# ss -tunpl | grep postgres tcp LISTEN 0 128 127.0.0.1:5432 *:* users:(("postgres",pid=1349,fd=4)) tcp LISTEN 0 128 [::1]:5432 [::]:* users:(("postgres",pid=1349,fd=3)) [root@postgresql ~]# # netstat -tunpl | grep 5432 tcp 0 0 127.0.0.1:5432 0.0.0.0:* LISTEN 1349/postgres tcp6 0 0 ::1:5432 :::* LISTEN 1349/postgres四、测试连接1、切换postgres用户[root@postgresql ~]# su postgres [postgres@postgresql root]$2、连接数据库[root@postgresql ~]# su postgres [postgres@postgresql root]$ psql -U postgres could not change directory to "/root" psql (9.2.24) Type "help" for help. postgres=# # 使用 \l 用于查看已经存在的数据库: postgres=# \l List of databases Name | Owner | Encoding | Collate | Ctype | Access privileges -----------+----------+-----------+---------+-------+----------------------- postgres | postgres | SQL_ASCII | C | C | template0 | postgres | SQL_ASCII | C | C | =c/postgres + | | | | | postgres=CTc/postgres template1 | postgres | SQL_ASCII | C | C | =c/postgres + | | | | | postgres=CTc/postgres (3 rows) postgres=# # 进入命令行工具,可以使用 \help 来查看各个命令的语法 postgres-# \help3、创建数据库# 创建一个 runoobdb 的数据库 postgres=# CREATE DATABASE xybdiy; CREATE DATABASE postgres=# # 使用 \c + 数据库名 来进入数据库 postgres=# \c xybdiy You are now connected to database "xybdiy" as user "postgres". 
xybdiy=# 4、创建表格# 创建了一个表,表名为 COMPANY 表格,主键为 ID,NOT NULL 表示字段不允许包含 NULL 值 xybdiy=# CREATE TABLE COMPANY( xybdiy(# ID INT PRIMARY KEY NOT NULL, xybdiy(# NAME TEXT NOT NULL, xybdiy(# AGE INT NOT NULL, xybdiy(# ADDRESS CHAR(50), xybdiy(# SALARY REAL xybdiy(# ); NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "company_pkey" for table "company" CREATE TABLE # 使用 \d 命令来查看表格是否创建成功 xybdiy=# \d List of relations Schema | Name | Type | Owner --------+---------+-------+---------- public | company | table | postgres (1 row) xybdiy=# CREATE TABLE DEPARTMENT( xybdiy(# ID INT PRIMARY KEY NOT NULL, xybdiy(# DEPT CHAR(50) NOT NULL, xybdiy(# EMP_ID INT NOT NULL xybdiy(# ); NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "department_pkey" for table "department" CREATE TABLE xybdiy=# \d List of relations Schema | Name | Type | Owner --------+------------+-------+---------- public | company | table | postgres public | department | table | postgres (2 rows) xybdiy=# 五、修改配置文件1、修改postgresql的配置文件# vim /var/lib/pgsql/data/postgresql.conf # 修改监听IP listen_addresses = '*' # 打开日志采集器 logging_collector = on # 设置日志目录 log_directory = 'pg_log'2、修改 pg_hba.conf 服务连接配置文件# vim /var/lib/pgsql/data/pg_hba.conf 77 # TYPE DATABASE USER ADDRESS METHOD 78 79 # "local" is for Unix domain socket connections only 80 local all all trust 81 # IPv4 local connections: 82 host all all 127.0.0.1/32 trust 83 host all all 0.0.0.0/0 trust 84 # IPv6 local connections: 85 host all all ::1/128 md53、重启postgresql服务# systemctl restart postgresql五、测试远程连接测试连接测试成功后,连接连接成功至此,安装PostgreSQL数据库完成。
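The two configuration edits in section five can be sketched against scratch copies of the files. The real paths are /var/lib/pgsql/data/postgresql.conf and /var/lib/pgsql/data/pg_hba.conf; the contents below are abbreviated.

```shell
# Sketch: the postgresql.conf and pg_hba.conf edits from section five,
# applied to scratch copies (contents abbreviated).
pgconf=$(mktemp); hba=$(mktemp)
printf "#listen_addresses = 'localhost'\nlogging_collector = off\n" > "$pgconf"
# Listen on all interfaces and turn the log collector on
sed -i "s/^#listen_addresses = 'localhost'/listen_addresses = '*'/" "$pgconf"
sed -i 's/^logging_collector = off$/logging_collector = on/' "$pgconf"
# Accept IPv4 connections from anywhere without a password (trust), as in the
# article; md5 would be the safer choice outside a lab environment
echo 'host    all    all    0.0.0.0/0    trust' >> "$hba"
grep '^listen_addresses' "$pgconf"
```

As in the article, the real files only take effect after `systemctl restart postgresql`.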
二、About Oracle Linux
Oracle Linux, in full Oracle Enterprise Linux (OEL), is a Linux distribution whose first version Oracle released in early 2006; it is known for its strong support of Oracle software and hardware. Oracle takes Red Hat Linux as its starting point, removes Red Hat's trademarks, and adds Linux bug fixes. Oracle Enterprise Linux is, and aims to remain, fully compatible with Red Hat Enterprise Linux.
三、Installing Oracle Linux
Select the install option and press Enter. Choose the language to use during installation. Partition the disk and set the root password. When everything is ready, begin the installation and wait for it to finish, then reboot the system. After the reboot, accept the license agreement, create a user account, and set its password. With that, the Oracle Linux installation is complete.
[root@centos ~]# cat /etc/redhat-release
CentOS Linux release 7.9.2009 (Core)
二、About Remi
The Remi repository is a Linux package source maintained by Remi that carries the latest PHP and MySQL packages. With this repository configured, installing or updating PHP, MySQL, phpMyAdmin, and other server software through YUM becomes very convenient.
三、Configuring Remi
1、Download the release package
Download link🔊: Remi's RPM repository
wget https://mirrors.aliyun.com/remi/enterprise/remi-release-7.rpm
or
curl -O https://mirrors.aliyun.com/remi/enterprise/remi-release-7.rpm
2、Install the package
rpm -Uvh remi-release-7.rpm
3、Confirm the installation
[root@centos ~]# ls /etc/yum.repos.d/
bak CentOS-Sources.repo remi-glpi92.repo remi-php71.repo remi.repo CentOS-Base.repo CentOS-Vault.repo remi-glpi93.repo remi-php72.repo remi-safe.repo CentOS-CR.repo CentOS-x86_64-kernel.repo remi-glpi94.repo remi-php73.repo zabbix.repo CentOS-Debuginfo.repo epel.repo remi-modular.repo remi-php74.repo CentOS-fasttrack.repo epel-testing.repo remi-php54.repo remi-php80.repo CentOS-Media.repo remi-glpi91.repo remi-php70.repo remi-php81.repo
[root@centos ~]#
4、Clear and rebuild the local YUM cache
yum clean all
yum makecache
yum repolist
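Note that after remi-release is installed, typically only the remi-safe section ships enabled; the version-specific PHP repos stay off until you enable one. The sketch below parses an abbreviated, illustrative .repo file to list the active sections.

```shell
# Sketch: list which sections of a .repo file are enabled. The file contents
# are abbreviated and illustrative, not the full remi .repo set.
repo=$(mktemp)
cat > "$repo" <<'EOF'
[remi-safe]
name=Safe Remi RPM repository
enabled=1
[remi-php74]
name=Remi PHP 7.4 repository
enabled=0
EOF
awk -F'[][]' '/^\[/ { s = $2 } /^enabled=1$/ { print s }' "$repo"
```

A disabled section such as remi-php74 can later be switched on with `yum-config-manager --enable remi-php74` (from the yum-utils package).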
1、Run the JDK installer jdk-8u181-windows-x64.
2、The setup wizard opens; click Next.
3、Choose the JDK install directory and click Next.
4、The installation begins; wait for it to proceed.
5、Click OK to continue the installation.
6、Keep the default selections and click Next.
7、Wait for the installation to finish.
8、Installation complete; click Close. Next, configure the environment variables.
9、Configure the environment variables: open Control Panel and go to System and Security > System > Advanced system settings.
10、In the dialog that opens, click Environment Variables.
11、Under System variables, click New to add a variable.
12、Create JAVA_HOME, whose value is the JDK install path:
Variable name: JAVA_HOME
Variable value: C:\Program Files\Java\jdk1.8.0_181   (adjust to your actual install path)
13、Create another system variable named CLASSPATH with the following value:
Variable name: CLASSPATH
Variable value: .;%JAVA_HOME%\lib\dt.jar;%JAVA_HOME%\lib\tools.jar;
14、Find the existing system variable named Path and append the following to its value, then confirm and save:
%JAVA_HOME%\bin;%JAVA_HOME%\jre\bin;
15、Verify the configuration: open a new command prompt and run java -version; if it prints the JDK version, the setup is complete.
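For reference, the same three variables on a Linux/macOS host are plain shell exports. This is a sketch only; the install path below is a placeholder, not the Windows path used above.

```shell
# The Windows environment-variable setup above, restated as the equivalent
# POSIX-shell exports. The install path is a placeholder.
JAVA_HOME="/opt/jdk1.8.0_181"
CLASSPATH=".:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar"
PATH="$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH"
export JAVA_HOME CLASSPATH PATH
# The JDK's bin directory now resolves first on PATH
echo "$PATH" | cut -d: -f1
```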
2. About Telnet

Telnet is an application-layer protocol used on the Internet and in local area networks. Through a virtual terminal it provides a bidirectional, text-oriented command-line interface. It is part of the TCP/IP protocol suite and is the standard protocol for remote login on the Internet, commonly used for remote server administration: it lets a user on a local host carry out work on a remote host.

3. Installing and configuring Telnet

This lab is demonstrated on CentOS 7.9.

```
[root@master ~]# cat /etc/redhat-release
CentOS Linux release 7.9.2009 (Core)
```

1) Check whether the telnet service components are installed

Check whether telnet-server, the telnet client, and xinetd are installed. The queries below show that xinetd, telnet, and telnet-server are not yet installed.

```
[root@master ~]# rpm -qa | grep telnet
[root@master ~]# rpm -qa | grep xinetd
[root@master ~]# yum list | grep telnet
telnet.x86_64                 1:0.17-66.el7        updates
telnet-server.x86_64          1:0.17-66.el7        updates
[root@master ~]# yum list | grep xinetd
xinetd.x86_64                 2:2.3.15-14.el7      base
```

xinetd is the Linux "super-server" daemon: it stays resident in the background, listens for incoming network requests, and starts the corresponding service on demand. telnet is one of the services managed by xinetd.

2) Check whether the configured YUM repositories provide the telnet packages

```
yum provides telnet telnet-server xinetd
```

```
[root@master ~]# yum list | grep telnet && yum list | grep xinetd
telnet.x86_64                 1:0.17-66.el7        updates
telnet-server.x86_64          1:0.17-66.el7        updates
xinetd.x86_64                 2:2.3.15-14.el7      base
[root@master ~]#
```

3) Install the telnet service components (telnet, telnet-server, and xinetd)

```
yum install -y xinetd telnet telnet-server
```

4) Verify the installation

```
# Entries marked with @ are installed.
[root@master ~]# yum list | grep telnet && yum list | grep xinetd
telnet.x86_64                 1:0.17-66.el7        @updates
telnet-server.x86_64          1:0.17-66.el7        @updates
xinetd.x86_64                 2:2.3.15-14.el7      @base
```

Reference: https://www.cnblogs.com/gengbo/p/15913541.html

```
# List the installed packages
[root@master ~]# rpm -qa telnet telnet-server xinetd
xinetd-2.3.15-14.el7.x86_64
telnet-server-0.17-66.el7.x86_64
telnet-0.17-66.el7.x86_64

# Show detailed package information
[root@master ~]# rpm -qi telnet-server
Name        : telnet-server
Epoch       : 1
Version     : 0.17
Release     : 66.el7
Architecture: x86_64
Install Date: Tue 22 Feb 2022 11:34:33 AM CST
Group       : System Environment/Daemons
Size        : 56361
License     : BSD
Signature   : RSA/SHA256, Wed 18 Nov 2020 10:20:43 PM CST, Key ID 24c6a8a7f4a80eb5
Source RPM  : telnet-0.17-66.el7.src.rpm
Build Date  : Tue 17 Nov 2020 12:44:28 AM CST
Build Host  : x86-01.bsys.centos.org
Relocations : (not relocatable)
Packager    : CentOS BuildSystem <http://bugs.centos.org>
Vendor      : CentOS
URL         : http://web.archive.org/web/20070819111735/www.hcs.harvard.edu/~dholland/computers/old-netkit.html
Summary     : The server program for the Telnet remote login protocol
Description :
Telnet is a popular protocol for logging into remote systems over the
Internet. The package includes a daemon that supports Telnet remote
logins into the host machine. The daemon is disabled by default.
You may enable the daemon by editing /etc/xinetd.d/telnet

# List the files installed by each package
[root@master ~]# rpm -ql telnet telnet-server xinetd
/usr/bin/telnet
/usr/share/doc/telnet-0.17
/usr/share/doc/telnet-0.17/README
/usr/share/man/man1/telnet.1.gz
/usr/lib/systemd/system/telnet.socket
/usr/lib/systemd/system/telnet@.service
/usr/sbin/in.telnetd
/usr/share/man/man5/issue.net.5.gz
/usr/share/man/man8/in.telnetd.8.gz
/usr/share/man/man8/telnetd.8.gz
/etc/sysconfig/xinetd
/etc/xinetd.conf
/etc/xinetd.d/chargen-dgram
/etc/xinetd.d/chargen-stream
/etc/xinetd.d/daytime-dgram
/etc/xinetd.d/daytime-stream
/etc/xinetd.d/discard-dgram
/etc/xinetd.d/discard-stream
/etc/xinetd.d/echo-dgram
/etc/xinetd.d/echo-stream
/etc/xinetd.d/tcpmux-server
/etc/xinetd.d/time-dgram
/etc/xinetd.d/time-stream
/usr/lib/systemd/system/xinetd.service
/usr/sbin/xinetd
/usr/share/doc/xinetd-2.3.15
/usr/share/doc/xinetd-2.3.15/CHANGELOG
/usr/share/doc/xinetd-2.3.15/COPYRIGHT
/usr/share/doc/xinetd-2.3.15/README
/usr/share/doc/xinetd-2.3.15/empty.conf
/usr/share/doc/xinetd-2.3.15/sample.conf
/usr/share/man/man5/xinetd.conf.5.gz
/usr/share/man/man5/xinetd.log.5.gz
/usr/share/man/man8/xinetd.8.gz
```

5) Start the telnet services

Start the services and enable them at boot:

```
systemctl start telnet.socket xinetd
systemctl enable telnet.socket xinetd
systemctl status telnet.socket xinetd
```

6) Check the listening port

```
[root@master ~]# netstat -tnl | grep 23
tcp6       0      0 :::23                   :::*                    LISTEN
[root@master ~]# ss -tunpl | grep 23
tcp    LISTEN     0      128       [::]:23      [::]:*      users:(("systemd",pid=1,fd=33))
[root@master ~]#
```

4. Telnet remote login test

1) Create the user xybdiy

```
[root@master ~]# useradd xybdiy
[root@master ~]# passwd xybdiy
Changing password for user xybdiy.
New password:
BAD PASSWORD: The password is a palindrome
Retype new password:
passwd: all authentication tokens updated successfully.
```

2) Log in over telnet as xybdiy

```
telnet 192.168.200.11

Kernel 3.10.0-1160.53.1.el7.x86_64 on an x86_64
master login: xybdiy
Password:
[xybdiy@master ~]$ su -
Password:
Last login: Tue Feb 22 12:39:35 CST 2022 on pts/1
Last failed login: Tue Feb 22 12:52:02 CST 2022 on pts/2
There was 1 failed login attempt since the last successful login.
[root@master ~]#
```

3) Allow the root user to log in over Telnet

By default, Linux does not allow root to log in over telnet. Reference: [telnet允许root用户登录 - 规格严格-功夫到家 - 博客园](https://www.cnblogs.com/diyunpeng/p/8403534.html). The restriction on remote logins in Red Hat-style systems comes from /etc/pam.d/login; commenting out the relevant line disables it. Either of the following changes allows root to log in:

```
# 1) Edit the PAM login file
vim /etc/pam.d/login
# Comment out this line, then save with :wq
#account required pam_nologin.so

# 2) Or move the securetty file aside
mv /etc/securetty /etc/securetty.bak
```

```
C:\Users\xybdiy>telnet 192.168.200.11

Kernel 3.10.0-1160.53.1.el7.x86_64 on an x86_64
master login: root
Password:
Last failed login: Tue Feb 22 13:59:24 CST 2022 from ::ffff:192.168.200.2 on pts/1
There was 1 failed login attempt since the last successful login.
Last login: Tue Feb 22 13:45:55 on pts/2
[root@master ~]#

[root@master ~]# telnet localhost
Trying ::1...
Connected to localhost.
Escape character is '^]'.

Kernel 3.10.0-1160.53.1.el7.x86_64 on an x86_64
master login: root
Password:
Last login: Tue Feb 22 14:11:49 from ::ffff:192.168.200.2
[root@master ~]#
```
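Once the daemon is listening on port 23, reachability can also be checked from any machine with a plain TCP connection. A minimal Python sketch (the host and port values below are just this lab's examples):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_open("192.168.200.11", 23) should be True after step 5 above.
```

This only proves the port accepts connections; the actual login still goes through the telnet client.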
2. What lrzsz does

lrzsz is a program for Linux that can replace FTP for uploads and downloads. With it, a single command is enough to move files between a Windows client and a Linux server, which is very convenient.

Note: the tool is suited to small files; files larger than 4 GB cannot be transferred.

3. Installation and usage

1) Linux side

This lab is based on CentOS 7.9.

```
[root@centos ~]# cat /etc/redhat-release
CentOS Linux release 7.9.2009 (Core)
```

Find the lrzsz package:

```
[root@centos ~]# yum provides lrzsz
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * epel: ftp-stud.hs-esslingen.de
lrzsz-0.12.20-36.el7.x86_64 : The lrz and lsz modem communications programs
Repo        : base

lrzsz-0.12.20-36.el7.x86_64 : The lrz and lsz modem communications programs
Repo        : @base
[root@centos ~]#
```

Install the lrzsz package:

```
[root@centos ~]# yum install -y lrzsz
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * epel: ftp-stud.hs-esslingen.de
Package lrzsz-0.12.20-36.el7.x86_64 already installed and latest version
Nothing to do
```

2) Windows side

Note: lrzsz works through terminal emulators such as Xshell or SecureCRT connected to the Linux server over ssh/telnet.

3) Testing

- `sz` downloads a file from the Linux side to the Windows side.
- `rz` uploads a file from the Windows side to the Linux side.
- For details, run `man sz` / `man rz`.

Download a file from Linux to Windows:

```
# On the Linux side, create a file named xybdiy
# vim xybdiy
# Add the following content, then save with :wq
Hello World! xybdiy

# Run the following command to download the file to the Windows side
# sz xybdiy
```

A "Browse For Folder" dialog appears; choose where to save the downloaded file. When the download finishes, open the file on the Windows side to check its contents.

Upload a file from Windows to Linux: on the Windows side, rename the file just downloaded and add some content. On the Linux side, run `rz` and pick the file to upload from Windows. After the transfer succeeds, check the file xybdiy-windows on the Linux side:

```
[root@centos ~]# ll
total 16
-rw-------. 1 root root 1531 Feb  9 11:55 anaconda-ks.cfg
-rw-r--r--  1 root root  219 Feb 15 17:17 cook
-rw-r--r--  1 root root   20 Feb 19 14:55 xybdiy
-rw-r--r--  1 root root   66 Feb 19 15:01 xybdiy-windows
[root@centos ~]# cat xybdiy-windows
Hello World! xybdiy
# Windows端上传至Linux端
Hello xybDIY!
[root@centos ~]#
```

At this point the lrzsz test is complete.
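Since transfers over the ~4 GB limit noted above simply fail, it can save time to check a file's size before running `sz`. A tiny sketch (the function name and threshold constant are mine; the limit reflects the article's note):

```python
import os

ZMODEM_LIMIT = 4 * 1024**3  # ~4 GiB, the practical limit noted above

def ok_for_lrzsz(path: str) -> bool:
    """Return True if the file is small enough to attempt an sz/rz transfer."""
    return os.path.getsize(path) < ZMODEM_LIMIT
```

For anything larger, scp/sftp or rsync remain the usual alternatives.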
2. Problem description 💛

After installing CentOS 7.9 in a virtual machine and configuring it, I found that connecting to the server over SSH always took tens of seconds before the password prompt appeared, instead of showing immediately after pressing Enter. Searching online turned up the cause. Two things were responsible:

- reverse DNS lookups by the server
- GSSAPI authentication

3. Fixes 💙

Ⅰ. Fix slow SSH logins 📌

1) Check the system version

```
[root@zabbix-server ~]# cat /etc/redhat-release
CentOS Linux release 7.9.2009 (Core)
```

2) Edit /etc/ssh/sshd_config

```
# vim /etc/ssh/sshd_config
# Change the following, then save with :wq
GSSAPIAuthentication no    # disable GSSAPI authentication
UseDNS no                  # disable reverse DNS lookups
```

GSSAPI (Generic Security Services Application Programming Interface) is a generic network security interface similar to Kerberos 5. It wraps different client/server security mechanisms behind one interface to hide their differences and reduce programming effort, but it causes delays when the target machine has no working name resolution. It is enabled by default, so it has to be turned off manually.

3) Restart the SSH service

```
# systemctl restart sshd
# systemctl status sshd
```

Ⅱ. Fix SSH connections timing out 📌

1) Edit /etc/ssh/sshd_config

- ClientAliveInterval is the interval at which the server sends keep-alive requests to the client. The default is 0: none are sent.
- ClientAliveInterval 60 sends one per minute; the client responds, which keeps the connection in a long-lived state so the SSH session is not dropped.
- ClientAliveCountMax is the number of unanswered keep-alive requests after which the server disconnects. Under normal conditions the client always responds, so the default is fine.

```
# vim /etc/ssh/sshd_config
# Change the following, then save with :wq
ClientAliveInterval 60
ClientAliveCountMax 5
```

From the manual: `ClientAliveInterval n` — if no data is received from the client within n seconds, send a message through the encrypted channel (see ClientAliveCountMax); the default 0 means no messages are sent. `ClientAliveCountMax n` — the number of client-alive messages sshd may send without receiving a response before it disconnects the client; the default is 3.

2) Restart the SSH service

```
# systemctl restart sshd
# systemctl status sshd
```

Ⅲ. SSH connection test 📌

The configuration took effect; connections are now fast.

```
C:\Users\xybdiy>ssh root@192.168.200.60
root@192.168.200.60's password:
Last login: Fri Feb 18 13:16:08 2022 from 192.168.200.2
[root@zabbix-server ~]#
```
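To double-check that the options above actually ended up in the config, the file can be parsed mechanically. A naive sketch (it ignores `Match` blocks and case-insensitivity, which real sshd handles):

```python
def parse_sshd_config(text: str) -> dict:
    """Naively collect 'Keyword value' pairs from sshd_config, skipping comments."""
    opts = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        parts = line.split(None, 1)
        if len(parts) == 2:
            opts[parts[0]] = parts[1]
    return opts

# The settings applied in sections Ⅰ and Ⅱ above:
sample = """
GSSAPIAuthentication no
UseDNS no
ClientAliveInterval 60
ClientAliveCountMax 5
"""
print(parse_sshd_config(sample))
```

In practice `sshd -T` is the authoritative way to dump the effective configuration; this sketch is only a quick sanity check on the file text.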
2. About Grafana

What is Grafana

Grafana is a visualization dashboard tool with attractive charts and layouts, a full-featured metrics dashboard, and a graph editor. It supports Graphite, Zabbix, InfluxDB, Prometheus, and OpenTSDB as data sources.

Grafana features

1. Grafana has fast, flexible client-side graphs. Panel plugins offer many ways to visualize metrics and logs, and the official library has a rich set of dashboard panels (heatmaps, line charts, tables, and more) to present complex data elegantly.
2. Grafana supports many time-series storage backends as data sources, each with its own query editor. Officially supported: Graphite, InfluxDB, OpenTSDB, Prometheus, Elasticsearch, CloudWatch. Query languages and capabilities differ markedly between data sources. You can combine data from multiple sources on one dashboard, but each panel is bound to a single data source belonging to a specific organization.
3. Alerting in Grafana lets you attach rules to dashboard panels. When a dashboard is saved, Grafana extracts the alert rules into a separate rule store and schedules them for evaluation. Alert notifications can also be pushed to mobile via DingTalk, email, and so on. At the time of writing, Grafana only supports alerting on graph panels.
4. Grafana annotates graphs with rich events from different data sources; hovering over an event shows its full metadata and tags.
5. Grafana's ad-hoc filters allow dynamically created key/value filters that are applied automatically to all queries using that data source.

3. Grafana installation steps

This lab is deployed on CentOS 7.9.

1) Base environment

Set the hostname:

```
[root@localhost ~]# hostnamectl set-hostname grafana
[root@localhost ~]# bash
[root@grafana ~]# hostnamectl
   Static hostname: grafana
         Icon name: computer-vm
           Chassis: vm
        Machine ID: db3692199b194e6b9ac9f92ef24f9c6e
           Boot ID: 56bd71938e91499ca3106ce091c032ef
    Virtualization: vmware
  Operating System: CentOS Linux 7 (Core)
       CPE OS Name: cpe:/o:centos:centos:7
            Kernel: Linux 3.10.0-1160.el7.x86_64
      Architecture: x86-64
```

Disable the firewall and SELinux:

```
systemctl stop firewalld
systemctl disable firewalld
[root@grafana ~]# setenforce 0
[root@grafana ~]# getenforce
Permissive
```

Configure the network interface:

```
[root@grafana ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens32
[root@grafana ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens32
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens32
DEVICE=ens32
ONBOOT=yes
IPADDR=192.168.200.100
PREFIX=24
GATEWAY=192.168.200.1
DNS1=114.114.114.114
DNS2=192.168.200.1
```

Configure the Aliyun CentOS mirror:

```
[root@grafana yum.repos.d]# curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  2523  100  2523    0     0  14587      0 --:--:-- --:--:-- --:--:-- 14668
[root@grafana yum.repos.d]# ll
total 4
drwxr-xr-x. 2 root root  220 Feb 11 12:27 bak
-rw-r--r--. 1 root root 2523 Feb 11 12:27 CentOS-Base.repo
[root@grafana yum.repos.d]# yum clean all
[root@grafana yum.repos.d]# yum makecache
[root@grafana yum.repos.d]# yum repolist
```

Update the system:

```
[root@grafana ~]# yum update
```

2) Grafana download

Create /etc/yum.repos.d/grafana.repo with the following content, then save with :wq:

```
[grafana]
name=grafana
baseurl=https://mirrors.aliyun.com/grafana/yum/rpm
repo_gpgcheck=0
enabled=1
gpgcheck=0
```

```
# Rebuild the YUM cache
yum makecache
# List the repositories
yum repolist
```

3) Install Grafana

```
[root@grafana ~]# yum install grafana-enterprise-8.3.4-1.x86_64.rpm
[root@grafana ~]# rpm -qa | grep grafana
grafana-enterprise-8.3.4-1.x86_64
```

4) Start the Grafana service

```
[root@grafana ~]# systemctl daemon-reload
[root@grafana ~]# systemctl start grafana-server
[root@grafana ~]# systemctl enable grafana-server
[root@grafana ~]# systemctl status grafana-server
```

5) Access the Grafana web panel

Open a browser at http://192.168.200.100:3000 (Grafana listens on port 3000). The initial username and password are both admin, and you must change the password on first login. Set a new password and log in. At this point the Grafana installation is complete.
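When this setup has to be repeated on several hosts, the repo file from step 2 is easy to drop in from a script. A small sketch (the content matches the grafana.repo shown above; the function name and default path are just illustrative):

```python
GRAFANA_REPO = """\
[grafana]
name=grafana
baseurl=https://mirrors.aliyun.com/grafana/yum/rpm
repo_gpgcheck=0
enabled=1
gpgcheck=0
"""

def write_repo(path="/etc/yum.repos.d/grafana.repo"):
    """Write the Aliyun Grafana repo definition to the YUM repos directory."""
    with open(path, "w") as f:
        f.write(GRAFANA_REPO)

# write_repo()  # run as root, then: yum makecache && yum repolist
```

The same pattern works for the CentOS-Base.repo replacement earlier in the section.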
2. About OpenWRT

OpenWRT (formerly known as LEDE) is an embedded operating system widely used on routers. The mirror site provides a mirror of opkg, OpenWRT's package manager, to speed up access from within China.

3. Installing OpenWRT in a VMware virtual machine

1) Download the OpenWRT .img image

Download address: https://mirrors.aliyun.com/openwrt

2) Convert the .img image into a .vmdk virtual disk file

Required tool: StarWind V2V Image Converter.

(1) Open StarWind V2V Image Converter.
(2) Choose where the converted image will be stored (select "Local file").
(3) Select the source image file.
(4) Choose the target format.
(5) Choose the disk format.
(6) Confirm the settings and start the conversion.
(7) Conversion complete.

3) Create the OpenWRT virtual machine in VMware

Key steps:

① Choose the guest operating system.
② Use NAT mode for networking.
③ For the disk, select the .vmdk file converted above.
④ After creation, click "Power on this virtual machine".
⑤ Log in successfully.

4) Edit the network configuration and connect over SSH

```
root@OpenWrt:~# vi /etc/config/network
root@OpenWrt:~# cat /etc/config/network

config interface 'loopback'
	option device 'lo'
	option proto 'static'
	option ipaddr '127.0.0.1'
	option netmask '255.0.0.0'

config globals 'globals'
	option ula_prefix 'fd33:8f52:e9fd::/48'

config device
	option name 'br-lan'
	option type 'bridge'
	list ports 'eth0'

config interface 'lan'
	option device 'br-lan'
	option proto 'static'
	option ipaddr '192.168.200.50'
	option netmask '255.255.255.0'
	option ip6assign '60'

root@OpenWrt:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master br-lan state UP qlen 1000
    link/ether 00:0c:29:b4:2f:04 brd ff:ff:ff:ff:ff:ff
5: br-lan: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
    link/ether 00:0c:29:b4:2f:04 brd ff:ff:ff:ff:ff:ff
    inet 192.168.200.50/24 brd 192.168.200.255 scope global br-lan
       valid_lft forever preferred_lft forever
    inet6 fd33:8f52:e9fd::1/60 scope global noprefixroute
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:feb4:2f04/64 scope link
       valid_lft forever preferred_lft forever
root@OpenWrt:~#
```

```
C:\Users\xybdiy>ssh root@192.168.200.50
The authenticity of host '192.168.200.50 (192.168.200.50)' can't be established.
ED25519 key fingerprint is SHA256:uINCvTddAyG9bGGRCD/5R2b7DSmUoxLDcyNe4Pcr9OA.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '192.168.200.50' (ED25519) to the list of known hosts.


BusyBox v1.33.1 (2021-10-24 09:01:35 UTC) built-in shell (ash)

  _______                     ________        __
 |       |.-----.-----.-----.|  |  |  |.----.|  |_
 |   -   ||  _  |  -__|     ||  |  |  ||   _||   _|
 |_______||   __|_____|__|__||________||__|  |____|
          |__| W I R E L E S S   F R E E D O M
 -----------------------------------------------------
 OpenWrt 21.02.1, r16325-88151b8303
 -----------------------------------------------------
=== WARNING! =====================================
There is no root password defined on this device!
Use the "passwd" command to set up a new password
in order to prevent unauthorized SSH logins.
--------------------------------------------------
root@OpenWrt:~#
```

5) Access the OpenWRT web UI by entering the configured IP address in a browser.

4. Switching to the Aliyun OpenWRT mirror

Manual replacement: log in to the router and edit /etc/opkg/distfeeds.conf, replacing downloads.openwrt.org with mirrors.aliyun.com/openwrt.

Quick replacement: run the following command to do it automatically:

```
sed -i 's_downloads.openwrt.org_mirrors.aliyun.com/openwrt_' /etc/opkg/distfeeds.conf
```

The process looks like this:

```
root@OpenWrt:~# cat /etc/opkg/distfeeds.conf
src/gz openwrt_core https://downloads.openwrt.org/releases/21.02.1/targets/x86/64/packages
src/gz openwrt_base https://downloads.openwrt.org/releases/21.02.1/packages/x86_64/base
src/gz openwrt_luci https://downloads.openwrt.org/releases/21.02.1/packages/x86_64/luci
src/gz openwrt_packages https://downloads.openwrt.org/releases/21.02.1/packages/x86_64/packages
src/gz openwrt_routing https://downloads.openwrt.org/releases/21.02.1/packages/x86_64/routing
src/gz openwrt_telephony https://downloads.openwrt.org/releases/21.02.1/packages/x86_64/telephony
root@OpenWrt:~# sed -i 's_downloads.openwrt.org_mirrors.aliyun.com/openwrt_' /etc/opkg/distfeeds.conf
root@OpenWrt:~# cat /etc/opkg/distfeeds.conf
src/gz openwrt_core https://mirrors.aliyun.com/openwrt/releases/21.02.1/targets/x86/64/packages
src/gz openwrt_base https://mirrors.aliyun.com/openwrt/releases/21.02.1/packages/x86_64/base
src/gz openwrt_luci https://mirrors.aliyun.com/openwrt/releases/21.02.1/packages/x86_64/luci
src/gz openwrt_packages https://mirrors.aliyun.com/openwrt/releases/21.02.1/packages/x86_64/packages
src/gz openwrt_routing https://mirrors.aliyun.com/openwrt/releases/21.02.1/packages/x86_64/routing
src/gz openwrt_telephony https://mirrors.aliyun.com/openwrt/releases/21.02.1/packages/x86_64/telephony
root@OpenWrt:~#
```

5. Open issue

Unresolved: the OpenWRT system created in the VMware VM cannot reach the external network. Multiple approaches have been tried without success; the problem is left here pending a solution.

```
root@OpenWrt:~# ping qq.com
ping: bad address 'qq.com'
root@OpenWrt:~#
```
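The `sed` one-liner above is a plain substring substitution; the `_` characters are just an alternative delimiter for sed's `s` command so the `/` in the replacement does not need escaping. The same transformation, sketched in Python for clarity:

```python
def switch_mirror(text: str) -> str:
    """Equivalent of: sed 's_downloads.openwrt.org_mirrors.aliyun.com/openwrt_'"""
    return text.replace("downloads.openwrt.org", "mirrors.aliyun.com/openwrt")

line = "src/gz openwrt_core https://downloads.openwrt.org/releases/21.02.1/targets/x86/64/packages"
print(switch_mirror(line))
```

Note that sed without the `g` flag replaces only the first match per line, which is sufficient here since each feed line contains the host once.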
1. Download the Microsoft VS Code installer. Official download address: https://code.visualstudio.com/Download
2. Open the installer and select "I accept the agreement".
3. Choose the installation location.
4. Choose the Start Menu folder.
5. ★ Choose additional tasks ★
   - "Add 'Open with Code' action to Windows Explorer file context menu" / "...directory context menu": check both, so you can right-click a file or folder and open it with VS Code.
   - "Register Code as an editor for supported file types": not recommended; VS Code would become the default editor for those file types and their icons would change.
   - "Add to PATH (available after restart)": recommended, so you can launch VS Code from a console.
6. Confirm the settings and click "Install".
7. Finish the Microsoft VS Code installation.
8. Open Microsoft VS Code.
9. Switch the Microsoft VS Code interface language to Chinese.
Creating a scheduled task to periodically restart the JAR services on a server

1. Writing the JAR restart script

Requirements:
(1) Stop the running JAR, then start it again.
(2) The script should restart several JARs in one run, staggering the restarts in time rather than restarting everything at the same moment, to avoid load spikes that could stall the services and interrupt the business.

```bat
@echo off

rem Kill whatever process is listening on port 8911, then restart its JAR
set port=8911
for /f "tokens=1-5" %%i in ('netstat -ano^|findstr ":%port%"') do (
    echo kill the process %%m who uses the port
    taskkill /pid %%m /t /f
    goto start1
)
:start1
START "xxxxxx.jar 8911" java -jar -Dfile.encoding=utf-8 xxxxxx.jar
rem Use ping as a ~5 second delay before handling the next JAR
ping localhost -n 5

rem Kill whatever process is listening on port 8001, then restart its JAR
set port=8001
for /f "tokens=1-5" %%i in ('netstat -ano^|findstr ":%port%"') do (
    echo kill the process %%m who uses the port
    taskkill /pid %%m /t /f
    goto start2
)
:start2
START "xxxxxx.jar 8001" java -jar -Dfile.encoding=utf-8 xxxxxx.jar
ping localhost -n 10
pause
```

(Each `goto` needs its own label — `start1` and `start2` — since duplicate labels in a batch file would always jump to the first one.)

2. Create the task in Task Scheduler

Settings overview:
- General: run with highest privileges; configure for Windows Server 2012 / 2016 / 2019.
- Triggers: begin the task on a schedule; weekly, recurring every week on Sunday and Wednesday; status set to Enabled.
- Actions: "Start a program". Program/script: browse to the script to run. Add arguments (optional). Start in (optional): put the JARs and the restart script together in the same directory and fill in that directory here.
- Settings: for "If the task is already running, then the following rule applies", choose "Stop the existing instance".

Steps:
1) Open Control Panel, select System and Security, and click Administrative Tools.
2) Select "Task Scheduler".
3) Click "Create Task" and begin the configuration:
   (1) Create Task: set the task name.
   (2) On the "Triggers" tab, click "New".
   (3) Configure the new trigger's parameters.
   (4) Create a new action.
   (5) Set the required conditions.
   (6) Under Settings, choose "Stop the existing instance".
4) Configuration complete; verify that it runs.

5. Problem encountered

After the script ran, the JAR startup hung and only continued after pressing Enter.
Fix:
1) Open a cmd window, right-click the window frame, and choose "Properties".
2) Uncheck "QuickEdit Mode".
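The batch script's core trick is parsing `netstat -ano` to find the PID bound to a port. The same parsing step, sketched in Python against a canned output line (the sample line and PID are made up for illustration):

```python
def pid_on_port(netstat_output: str, port: int):
    """Extract the owning PID for a local TCP port from `netstat -ano` output."""
    for line in netstat_output.splitlines():
        parts = line.split()
        # netstat -ano TCP lines: proto, local addr, foreign addr, state, PID
        if len(parts) >= 5 and parts[0] == "TCP" and parts[1].endswith(f":{port}"):
            return int(parts[-1])
    return None

sample = "TCP    0.0.0.0:8911    0.0.0.0:0    LISTENING    4321"
print(pid_on_port(sample, 8911))
```

This mirrors what the `for /f ... findstr ":%port%"` loop does before handing the PID to `taskkill`.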
1. Preparation

1) Download the Windows 11 image

The official Microsoft image is recommended: Windows 11镜像下载官方链接. Scroll down to "Download Windows 11 Disk Image (ISO)", choose the Windows 11 edition, select "Simplified Chinese" as the system language, click "Confirm", then click "64-bit Download".

2) Virtual machine hardware allocation

- RAM: 8 GB or more recommended
- Disk: 64 GB or more recommended

3) Notes for installing in a virtual machine

(1) UEFI mode must be enabled;
(2) a Trusted Platform Module (TPM) must be added.

2. Installation steps

1. Open VMware Workstation 16 Pro.
2. Choose "Create a New Virtual Machine".
3. Choose the "Custom" configuration type.
4. Choose the virtual machine hardware compatibility.
5. Guest OS installation: choose "I will install the operating system later".
6. Select the guest operating system.
7. Name the virtual machine as you like and choose where it will be stored.
8. Firmware type: choose UEFI; depending on the guest OS you can also enable UEFI Secure Boot. Secure Boot protects the boot process by refusing to load drivers and OS loaders that are not signed with an acceptable digital signature. (Reference: 配置固件类型.) Here, choose UEFI and tick "Secure Boot".
9. Processor configuration: set it according to your host machine.
10. Virtual machine memory: scale to your host's RAM; at least 4 GB is recommended.
11. Network type: "Use bridged networking", which puts the VM on the same network as the host machine.
12. I/O controller type: keep the recommended default.
13. Choose the disk type.
14. Disk: choose "Create a new virtual disk".
15. Disk capacity: at least 64 GB is recommended.
16. Specify the disk file.
17. Review the virtual machine's settings.
18. Choose "Edit virtual machine settings".
19. Mount the ISO image.
20. ★ Access control encryption ★: open "Access Control", click "Encrypt", enter an encryption password, and confirm.
21. ★ Add the Trusted Platform Module ★. Note: "Access Control" encryption must be set up first before the TPM can be added, and once the TPM device exists the encryption cannot be removed. From then on, when you reopen VMware Workstation and select the Windows 11 VM, the password must be entered before the VM can be powered on.
22. Click "Power on this virtual machine".
23. Choose the language to install Windows in.
24. Click "Install now".
25. Choose "I don't have a product key".
26. Select the operating system edition to install.
27. Accept the license terms.
28. Choose the custom installation of Windows.
29. Disk partitioning: the disk Windows 11 is installed to must not be smaller than 52 GB, otherwise the installation cannot proceed.
30. Wait for the installation to finish and for the reboot.
31. Initial system setup. You can also pick "Sign-in options" and choose offline use. Create a PIN. Once the initial setup completes, the system enters the desktop after a few minutes.
32. Install VMware Tools: run the installer and simply click "Next" through the steps. The system reboots after VMware Tools is installed. The Windows 11 installation is complete.

3. Summary and reflections

Summary

Installing Windows 11 is broadly similar to installing earlier Windows versions. Keep the following in mind:
1. In the VM hardware settings, access control must be enabled, i.e. a password must be set.
2. The Trusted Platform Module (TPM) must be added, which requires setting the access-control password first.
3. The system disk Windows 11 is installed to must not be smaller than 52 GB, or the installation will fail.

Windows 11 has just been released and opinions on it are divided; anything new needs time to be polished and tested. Despite its shortcomings, you can try it in a virtual machine first or upgrade directly to get a taste 🆙. Either is fine; do what suits your needs.

Reflections

This upgrade from Windows 10 to Windows 11 reminded me of upgrading from Windows 7 to Windows 10 years ago. I upgraded eagerly at the time: Windows 7 had run so smoothly on my desktop, and while Windows 10 felt fresh and novel back then, it was noticeably laggier to use, and I somewhat regretted it. I knew nothing at the time. Looking back, it was probably some mix of compatibility issues with the new system, too little RAM, and heavier resource consumption. Comparing then with now, Windows 10 feels perfectly smooth on today's hardware. That is my personal experience and takeaway, for reference only.
1. Start the installer: select "Install EulerOS V2.0SP5" and press Enter to load the configuration.
2. Choose the installer language: select "中文" (Chinese) and continue.
3. On the installation summary screen, set the installation parameters:
(1) Software selection: choose "Server with GUI", click "Done", and return to the main installation screen.
(2) Select the target disk, choose "I will configure partitioning", click "Done", and continue.
(3) Choose LVM and click "Click here to create them automatically".
(4) The system completes the partitioning automatically; click "Done".
(5) Choose "Accept Changes".
4. Click "Begin Installation".
5. Set the root password; when done, click "Done" and wait for the installation to finish.
6. Wait for the installation to complete, then restart the system by clicking "Reboot".
7. Accept the license agreement.
8. Configure the network and hostname.
9. Click "Finish configuration".
10. Complete the initial setup steps and start using EulerOS.
11. Install VMware Tools: click "Install VMware Tools", open a terminal, change into the directory containing the VMware Tools archive, copy the archive to /usr/local/bin, change into /usr/local/bin and extract it, check with ls, then run ./vmware-install.pl to install, pressing Enter or typing yes at the prompts to confirm each step.
4. Testing mkcert

The certificates generated by default are in PEM (Privacy Enhanced Mail) format, and any program that accepts PEM certificates can use them, such as Apache or Nginx. Here we demonstrate the certificate's effect with Python's built-in simple HTTP server (code adapted from https://gist.github.com/dergachev/7028596).

Prerequisite: a local Python installation is needed to run the script.
Download: https://www.python.org/downloads/windows/
Python setup reference: https://blog.csdn.net/u012106306/article/details/100040680

Python 2 version:

```python
#!/usr/bin/env python2
import BaseHTTPServer, SimpleHTTPServer
import ssl

httpd = BaseHTTPServer.HTTPServer(('0.0.0.0', 443),
                                  SimpleHTTPServer.SimpleHTTPRequestHandler)
httpd.socket = ssl.wrap_socket(httpd.socket,
                               certfile='./localhost+2.pem',
                               keyfile='./localhost+2-key.pem',
                               server_side=True,
                               ssl_version=ssl.PROTOCOL_TLSv1_2)
httpd.serve_forever()
```

Python 3 version:

```python
#!/usr/bin/env python3
import http.server
import ssl

httpd = http.server.HTTPServer(('0.0.0.0', 443),
                               http.server.SimpleHTTPRequestHandler)
# Note: ssl.wrap_socket is deprecated and removed in Python 3.12;
# on current versions use ssl.SSLContext.wrap_socket instead.
httpd.socket = ssl.wrap_socket(httpd.socket,
                               certfile='./localhost+2.pem',
                               keyfile='./localhost+2-key.pem',
                               server_side=True,
                               ssl_version=ssl.PROTOCOL_TLSv1_2)
httpd.serve_forever()
```

Double-click the simple-https-server.py script to run it. Open a browser at https://192.168.2.5:8000, and it reports that the connection is secure. Accessing the machine locally via https://192.168.31.170 is verified as trusted too.

Next, the CA certificate has to be handed out to other users on the LAN. The CAROOT directory contains two files, rootCA-key.pem and rootCA.pem; users need to trust rootCA.pem. Make a copy of rootCA.pem named rootCA.crt (Windows does not recognize the .pem extension, and Ubuntu also will not treat a .pem file as a CA certificate), distribute rootCA.crt to the other users, and have them import it manually.

```
C:\>mkcert-v1.4.3-windows-amd64.exe -CAROOT
C:\Users\Administrator\AppData\Local\mkcert
```

Windows demonstration

- Click "Install Certificate", then "Next". The Windows import method is to double-click the file and, in the Certificate Import Wizard, import it into "Trusted Root Certification Authorities".
- Click "Finish", then "Yes".
- Open the certificate again: it has now been added as trusted.
- Verify in a browser: entering https://192.168.2.25:8000 shows the site is trusted.

Linux demonstration

```
[root@server ~]# ifconfig
ens32: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.2.115  netmask 255.255.255.0  broadcast 192.168.2.255
        inet6 fe80::5ccf:c1e4:1339:b7b6  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:5b:bd:72  txqueuelen 1000  (Ethernet)
        RX packets 22455  bytes 19633664 (18.7 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 6252  bytes 693732 (677.4 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 87  bytes 9353 (9.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 87  bytes 9353 (9.1 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

[root@server ~]# ls -l
total 8
-rw-------. 1 root root 1532 Jul  9 05:13 anaconda-ks.cfg
-rw-r--r--  1 root root 1793 Aug 12 23:22 rootCA.pem
[root@server ~]# cp -a rootCA.pem /etc/pki/ca-trust/source/anchors/   # put the CA certificate here
[root@server ~]# /bin/update-ca-trust                                 # run this to update the trust store
[root@server ~]#
[root@server ~]# curl -I https://192.168.2.25:8000
HTTP/1.0 200 OK
Server: SimpleHTTP/0.6 Python/3.9.6
Date: Fri, 13 Aug 2021 06:51:54 GMT
Content-type: text/html; charset=utf-8
Content-Length: 1536

[root@server ~]# curl -Iv https://192.168.2.25:8000   # with -v the output also confirms the certificate is trusted
* About to connect() to 192.168.2.25 port 8000 (#0)
*   Trying 192.168.2.25...
* Connected to 192.168.2.25 (192.168.2.25) port 8000 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
*   CAfile: /etc/pki/tls/certs/ca-bundle.crt
  CApath: none
* SSL connection using TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
* Server certificate:
* 	subject: OU=PC-20201120MNLV\\Administrator@PC-20201120MNLV,O=mkcert development certificate
* 	start date: Aug 13 03:41:36 2021 GMT
* 	expire date: Nov 13 03:41:36 2023 GMT
* 	common name: (nil)
* 	issuer: CN=mkcert PC-20201120MNLV\\Administrator@PC-20201120MNLV,OU=PC-20201120MNLV\\Administrator@PC-20201120MNLV,O=mkcert development CA
> HEAD / HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 192.168.2.25:8000
> Accept: */*
>
* HTTP 1.0, assume close after body
< HTTP/1.0 200 OK
< Server: SimpleHTTP/0.6 Python/3.9.6
< Date: Fri, 13 Aug 2021 07:05:13 GMT
< Content-type: text/html; charset=utf-8
< Content-Length: 1536
<
* Closing connection 0
```

5. Advanced mkcert settings

Run `mkcert-v1.4.3-windows-amd64.exe -help` to see the help, which reveals many advanced usages:

- `-cert-file FILE`, `-key-file FILE`, `-p12-file FILE` customize the output file names.
- `-client` produces a client-authentication certificate for two-way (mutual) TLS. An earlier article covered the openssl-script approach (Nginx SSL快速双向认证配置); the two can be compared.
- `-pkcs12` produces a PKCS#12-format certificate. Java programs usually do not accept PEM certificates but do accept PKCS#12, so this makes it easy to generate certificates Java applications can use directly.

```
mkcert 127.0.0.1 localhost              # more domains/IPs can follow, space-separated; PEM format by default
mkcert -pkcs12 192.168.10.123           # p12 format usable by IIS; default password "changeit"
mkcert -client 192.168.10.123           # client certificate, PEM format by default
mkcert -pkcs12 -client 192.168.10.123   # p12 client certificate, directly importable on Windows; default password "changeit"
```

```
C:\>mkcert-v1.4.3-windows-amd64.exe -help
Usage of mkcert:

	$ mkcert -install
	Install the local CA in the system trust store.

	$ mkcert example.org
	Generate "example.org.pem" and "example.org-key.pem".

	$ mkcert example.com myapp.dev localhost 127.0.0.1 ::1
	Generate "example.com+4.pem" and "example.com+4-key.pem".

	$ mkcert "*.example.it"
	Generate "_wildcard.example.it.pem" and "_wildcard.example.it-key.pem".

	$ mkcert -uninstall
	Uninstall the local CA (but do not delete it).

Advanced options:

	-cert-file FILE, -key-file FILE, -p12-file FILE
	    Customize the output paths.

	-client
	    Generate a certificate for client authentication.

	-ecdsa
	    Generate a certificate with an ECDSA key.

	-pkcs12
	    Generate a ".p12" PKCS #12 file, also know as a ".pfx" file,
	    containing certificate and key for legacy applications.

	-csr CSR
	    Generate a certificate based on the supplied CSR. Conflicts with
	    all other flags and arguments except -install and -cert-file.

	-CAROOT
	    Print the CA certificate and key storage location.

	$CAROOT (environment variable)
	    Set the CA certificate and key storage location. (This allows
	    maintaining multiple local CAs in parallel.)

	$TRUST_STORES (environment variable)
	    A comma-separated list of trust stores to install the local
	    root CA into. Options are: "system", "java" and "nss" (includes
	    Firefox). Autodetected by default.

C:\>
```
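The demo scripts above rely on `ssl.wrap_socket`, which was deprecated and finally removed in Python 3.12. A sketch of the same HTTPS server using `ssl.SSLContext` instead; the certificate file names follow the article's `localhost+2.pem` example and are assumptions about your mkcert output:

```python
import http.server
import ssl

def build_tls_context(certfile=None, keyfile=None) -> ssl.SSLContext:
    """Server-side TLS context; loads the mkcert cert/key pair when given."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    if certfile:
        ctx.load_cert_chain(certfile=certfile, keyfile=keyfile)
    return ctx

def serve(certfile, keyfile, port=8000):
    """Serve the current directory over HTTPS, like the demo scripts above."""
    httpd = http.server.HTTPServer(("0.0.0.0", port),
                                   http.server.SimpleHTTPRequestHandler)
    httpd.socket = build_tls_context(certfile, keyfile).wrap_socket(
        httpd.socket, server_side=True)
    httpd.serve_forever()

# e.g. serve("./localhost+2.pem", "./localhost+2-key.pem")
```

`PROTOCOL_TLS_SERVER` negotiates the highest TLS version both sides support, so pinning `PROTOCOL_TLSv1_2` as in the old script is no longer needed.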
1. About mkcert

mkcert is a simple tool for making locally-trusted development certificates. It requires no configuration. It removes the complexity of setting up a local https environment: there is no need to wrestle with openssl to produce a self-signed certificate; this small program signs the certificates for you and automatically trusts the CA on the local machine, which is very convenient.

Using certificates from real certificate authorities (CAs) for development can be dangerous or impossible (for hosts like example.test, localhost, or 127.0.0.1), but self-signed certificates cause trust errors. Managing your own CA is the best solution, but it usually involves arcane commands, specialized knowledge, and manual steps. mkcert automatically creates and installs a local CA in the system root store and generates locally-trusted certificates. mkcert does not automatically configure your servers to use the certificates, though; that part is up to you.

2. Downloading mkcert

This lab is demonstrated on Windows 10. mkcert also supports installation and use on other platforms; simply download the matching build.

3. Installing and configuring mkcert

(1) Type CMD to open a command prompt.

(2) First-time installation

Run `mkcert-v1.4.3-windows-amd64.exe -install` to install mkcert. This adds the CA certificate to the local trusted CAs: the command adds the root certificate mkcert uses to the local trust store, after which certificates issued by that CA are trusted locally. (The corresponding uninstall command is `mkcert-v1.4.3-windows-amd64.exe -uninstall`.)

On success, it reports that a new local CA was created and that the local CA is now installed in the system trust store.

(3) Check that mkcert is installed correctly

```
C:\>mkcert-v1.4.3-windows-amd64.exe --help
Usage of mkcert:

	$ mkcert -install
	Install the local CA in the system trust store.

	$ mkcert example.org
	Generate "example.org.pem" and "example.org-key.pem".

	$ mkcert example.com myapp.dev localhost 127.0.0.1 ::1
	Generate "example.com+4.pem" and "example.com+4-key.pem".

	$ mkcert "*.example.it"
	Generate "_wildcard.example.it.pem" and "_wildcard.example.it-key.pem".

	$ mkcert -uninstall
	Uninstall the local CA (but do not delete it).

Advanced options:

	-cert-file FILE, -key-file FILE, -p12-file FILE
	    Customize the output paths.

	-client
	    Generate a certificate for client authentication.

	-ecdsa
	    Generate a certificate with an ECDSA key.

	-pkcs12
	    Generate a ".p12" PKCS #12 file, also know as a ".pfx" file,
	    containing certificate and key for legacy applications.

	-csr CSR
	    Generate a certificate based on the supplied CSR. Conflicts with
	    all other flags and arguments except -install and -cert-file.

	-CAROOT
	    Print the CA certificate and key storage location.

	$CAROOT (environment variable)
	    Set the CA certificate and key storage location. (This allows
	    maintaining multiple local CAs in parallel.)

	$TRUST_STORES (environment variable)
	    A comma-separated list of trust stores to install the local
	    root CA into. Options are: "system", "java" and "nss" (includes
	    Firefox). Autodetected by default.

C:\>
```

(4) Find where the CA certificate is stored

Run `mkcert-v1.4.3-windows-amd64.exe -CAROOT`. Press Windows+R to open the Run dialog and enter certmgr.msc to open the certificate console.

(5) Generate a self-signed certificate that other hosts on the LAN can use

Simply list the domain names or IPs to sign. For example, a certificate for local-only access is reachable via 127.0.0.1, localhost, and the IPv6 address ::1.

For https applications tested inside a LAN, the environment may not be exposed to the outside, so free-certificate schemes like Let's Encrypt cannot issue a trusted certificate for the LAN, and Let's Encrypt does not certify IPs in any case. A certificate is trusted when three things hold:

- it was issued by a trusted CA;
- the address being visited matches an address the certificate certifies;
- it is within its validity period.

A self-signed certificate for LAN use must satisfy all three. It trivially satisfies the validity period, so the other two must be ensured: the certificate we issue must match the browser's address bar (the LAN IP or domain name), and the CA must be trusted. Proceed as follows.

Issue the certificate, including the LAN IP address:

```
C:\>mkcert-v1.4.3-windows-amd64.exe localhost 127.0.0.1 ::1 192.168.2.25
Note: the local CA is not installed in the Java trust store. ⚠️
Run "mkcert -install" for certificates to be trusted automatically

Created a new certificate valid for the following names 📜
 - "localhost"
 - "127.0.0.1"
 - "::1"
 - "192.168.2.25"

The certificate is at "./localhost+3.pem" and the key at "./localhost+3-key.pem" ✅

It will expire on 13 November 2023 🗓
```

The self-signed certificate is generated in the same directory as the mkcert executable. As the output shows, the certificate file localhost+3.pem and the private-key file localhost+3-key.pem were created successfully; just use these two files in your web server.
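As the output above shows, mkcert names its output files after the first name plus the number of additional names (four names gave `localhost+3.pem`), and wildcards become a `_wildcard.` prefix. A small helper mirroring that convention (the function is mine, purely for predicting file names, not part of mkcert):

```python
def predicted_cert_files(names):
    """Predict mkcert's default output file names for a list of certificate names."""
    first = names[0].replace("*", "_wildcard")
    stem = first if len(names) == 1 else f"{first}+{len(names) - 1}"
    return f"{stem}.pem", f"{stem}-key.pem"

print(predicted_cert_files(["localhost", "127.0.0.1", "::1", "192.168.2.25"]))
```

This is handy in scripts that generate a certificate and then need to feed the resulting file names to a web server config.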
一、安装BIND[root@server ~]# yum clean all [root@server ~]# yum repolist [root@server ~]# yum list | grep '^bind\.' [root@server ~]# yum -y install bind*二、配置主配置文件备份需配置的文件,防止配置当中出错。[root@server ~]# cp /etc/named.conf /etc/named.conf.backup配置named.conf主配置文件主要修改这两处信息。其余信息根据情况自行修改设置。listen-on port 53 { any; };allow-query { any; };按:wq保存退出// // named.conf // // Provided by Red Hat bind package to configure the ISC BIND named(8) DNS // server as a caching only nameserver (as a localhost DNS resolver only). // // See /usr/share/doc/bind*/sample/ for example named configuration files. // // See the BIND Administrator's Reference Manual (ARM) for details about the // configuration located in /usr/share/doc/bind-{version}/Bv9ARM.html options { listen-on port 53 { any; }; #允许所有IP地址监听53号端口 #listen-on-v6 port 53 { ::1; }; directory "/var/named"; dump-file "/var/named/data/cache_dump.db"; statistics-file "/var/named/data/named_stats.txt"; memstatistics-file "/var/named/data/named_mem_stats.txt"; recursing-file "/var/named/data/named.recursing"; secroots-file "/var/named/data/named.secroots"; allow-query { any; }; #允许所有使用本解析服务的网段 /* - If you are building an AUTHORITATIVE DNS server, do NOT enable recursion. - If you are building a RECURSIVE (caching) DNS server, you need to enable recursion. - If your recursive DNS server has a public IP address, you MUST enable access control to limit queries to your legitimate users. Failing to do so will cause your server to become part of large scale DNS amplification attacks. Implementing BCP38 within your network would greatly reduce such attack surface */ recursion yes; dnssec-enable yes; dnssec-validation yes; /* Path to ISC DLV key */ bindkeys-file "/etc/named.root.key"; managed-keys-directory "/var/named/dynamic"; pid-file "/run/named/named.pid"; session-keyfile "/run/named/session.key"; }; logging { channel default_debug { file "data/named.run"; severity dynamic; }; }; zone "." 
IN { type hint; file "named.ca"; }; include "/etc/named.rfc1912.zones"; include "/etc/named.root.key";三、配置区域配置文件。添加正向解析配置。在末尾添加如下配置。vim /etc/named.rfc1912.zoneszone “xybdns.com” IN {type master;file “xybdns.com.zone”;allow-update { none; };按:wq保存退出// named.rfc1912.zones: // // Provided by Red Hat caching-nameserver package // // ISC BIND named zone configuration for zones recommended by // RFC 1912 section 4.1 : localhost TLDs and address zones // and http://www.ietf.org/internet-drafts/draft-ietf-dnsop-default-local-zones-02.txt // (c)2007 R W Franks // // See /usr/share/doc/bind*/sample/ for example named configuration files. // zone "localhost.localdomain" IN { type master; file "named.localhost"; allow-update { none; }; }; zone "localhost" IN { type master; file "named.localhost"; allow-update { none; }; }; zone "1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa" IN { type master; file "named.loopback"; allow-update { none; }; }; zone "1.0.0.127.in-addr.arpa" IN { type master; file "named.loopback"; allow-update { none; }; }; zone "0.in-addr.arpa" IN { type master; file "named.empty"; allow-update { none; }; }; zone "xybdns.com" IN { #正向解析为“pakho.com” type master; #类型:主缓存为master file "xybdns.com.zone"; #指定区域数据文件为xybdns.com.zone allow-update { none; }; };四、配置正向区域数据文件拷贝主配置文件,保留源文件的权限和属主的属性复制cp -a named.localhost xybdns.com.zone[root@server ~]# cd /var/named/ [root@server named]# cp -a named.localhost xybdns.com.zone [root@server named]# ll total 28 drwxr-x--- 7 root named 61 Jul 9 05:18 chroot drwxrwx--- 2 named named 49 Jul 20 03:11 data -rw-r----- 1 root named 259 Jul 14 03:42 dnsdiy.com.zone drwxrwx--- 2 named named 31 Jul 20 01:25 dynamic -rw-r----- 1 root named 2253 Apr 5 2018 named.ca -rw-r----- 1 root named 152 Dec 15 2009 named.empty -rw-r----- 1 root named 152 Jun 21 2007 named.localhost -rw-r----- 1 root named 168 Dec 15 2009 named.loopback drwxrwx--- 2 named named 6 Apr 29 10:05 slaves -rw-r----- 1 root named 515 Jul 20 04:26 
xybdns.com.zone -rw-r----- 1 root named 538 Jul 14 03:53 xybdns.com.zone.bakup配置正向区域数据文件注意:“.”的书写格式,其代替了@,别遗漏[root@server named]# vim xybdns.com.zone #进入配置文件 [root@server named]# cat xybdns.com.zone #查看配置文件 $TTL 1D #有效解析记录的生成周期 @ IN SOA xybdns.com. root.xybdns.com. ( #@表示当前的DNS区域名表示这个域名 SOA表示授权信息开启 后面表示邮件地址因为@有特殊含义 所以使用.代替 0 ; serial #更新序列号,可以是10以内的整数 1D ; refresh #刷新时间,重新下载地址数据的间隔 1H ; retry #重试延迟,下载失败后的重试延迟 1W ; expire #失效时间,超过该时间仍无法下载则放弃 3H ) ; minimum #无效解析记录的生存周期 IN NS server.xybdns.com. #记录当前区域DNS服务器的名称 IN MX 10 server.xybdns.com. #MX为邮件服务器 10表示优先级 数字越大优先级越低 server IN A 192.168.200.115 #记录正向解析域名对应的IP,即将域名与IP绑捆 web IN A 192.168.200.115 vsan7 IN A 192.168.200.118修改主机名[root@server ~]# hostnamectl set-hostname server.xybdns.com [root@server ~]# bash [root@server ~]# hostname server.xybdns.com配置文件语法检查工具named-checkconf -z /etc/named.conf仅检查语法不检查逻辑关系。当显示的全为0时表示没有语法错误[root@server ~]# named-checkconf -z /etc/named.conf zone localhost.localdomain/IN: loaded serial 0 zone localhost/IN: loaded serial 0 zone 1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa/IN: loaded serial 0 zone 1.0.0.127.in-addr.arpa/IN: loaded serial 0 zone 0.in-addr.arpa/IN: loaded serial 0 zone xybdns.com/IN: loaded serial 0五、启动DNS服务启动前,检查防火墙、SELINUX安全模式是否是关闭或允许状态关闭防火墙并设置开机不自启动防火墙[root@server ~]# systemctl stop firewalld && systemctl disable firewalld [root@server ~]# systemctl status firewalld ● firewalld.service - firewalld - dynamic firewall daemon Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled) Active: inactive (dead) Docs: man:firewalld(1)关闭SELINUX安全模式[root@server ~]# cat /etc/selinux/config # This file controls the state of SELinux on the system. # SELINUX= can take one of these three values: # enforcing - SELinux security policy is enforced. # permissive - SELinux prints warnings instead of enforcing. # disabled - No SELinux policy is loaded. 
SELINUX=disabled #修改为disabled保存退出 # SELINUXTYPE= can take one of three values: # targeted - Targeted processes are protected, # minimum - Modification of targeted policy. Only selected processes are protected. # mls - Multi Level Security protection. SELINUXTYPE=targeted [root@server ~]# getenforce #重启生效 Disabled启动dns服务systemctl start namedsystemctl enable named[root@server ~]# systemctl start named [root@server ~]# systemctl enable named [root@server ~]# systemctl status named查看53号监听端口是否开启若执行不了netstat命令,请先输入yum install -y net-tools命令安装net-tools工具netstat -anpt | grep 53[root@server ~]# netstat -anpt | grep 53 tcp 0 0 127.0.0.1:953 0.0.0.0:* LISTEN 2416/named tcp 0 0 192.168.200.115:53 0.0.0.0:* LISTEN 2416/named tcp 0 0 127.0.0.1:53 0.0.0.0:* LISTEN 2416/named tcp6 0 0 ::1:953 :::* LISTEN 2416/named六、测试DNS服务器在Windows 10环境下测试设置所在网络配置,添加DNS服务器地址、默认网关等信息。如图所示。在linux环境下测试设置dnsDNS=192.168.200.115按:wq保存退出[root@test ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens32 TYPE=Ethernet PROXY_METHOD=none BROWSER_ONLY=no BOOTPROTO=static DEFROUTE=yes IPV4_FAILURE_FATAL=no NAME=ens32 UUID=db4e154b-6cc7-420c-a43c-e5a27af7749d DEVICE=ens32 ONBOOT=yes IPADDR=192.168.200.120 NETMASK=255.255.255.0 GATEWAY=192.168.200.1 DNS=192.168.200.115安装nslookupyum provides nslookupyum install -y bind-utils[root@test ~]# yum provides nslookup Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile 32:bind-utils-9.11.4-9.P2.el7.x86_64 : Utilities for querying DNS name servers Repo : centos Matched from: Filename : /usr/bin/nslookup 32:bind-utils-9.11.4-9.P2.el7.x86_64 : Utilities for querying DNS name servers Repo : @centos Matched from: Filename : /usr/bin/nslookup [root@test ~]# yum install -y bind-utils Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile Package 32:bind-utils-9.11.4-9.P2.el7.x86_64 already installed and latest version Nothing to do[root@test ~]# ping baidu.com PING baidu.com (220.181.38.148) 56(84) bytes of data. 
64 bytes from 220.181.38.148 (220.181.38.148): icmp_seq=1 ttl=128 time=45.0 ms ^C --- baidu.com ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 45.080/45.080/45.080/0.000 ms [root@test ~]# ping server.xybdns.com PING server.xybdns.com (192.168.200.115) 56(84) bytes of data. 64 bytes from 192.168.200.115 (192.168.200.115): icmp_seq=1 ttl=64 time=0.148 ms 64 bytes from 192.168.200.115 (192.168.200.115): icmp_seq=2 ttl=64 time=0.330 ms ^C --- server.xybdns.com ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 1002ms rtt min/avg/max/mdev = 0.148/0.239/0.330/0.091 ms [root@test ~]# nslookup www.baidu.com Server: 192.168.200.115 Address: 192.168.200.115#53 Non-authoritative answer: www.baidu.com canonical name = www.a.shifen.com. Name: www.a.shifen.com Address: 180.101.49.11 Name: www.a.shifen.com Address: 180.101.49.12 [root@test ~]# nslookup server.xybdns.com Server: 192.168.200.115 Address: 192.168.200.115#53 Name: server.xybdns.com Address: 192.168.200.115
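除 netstat 外,较新的发行版自带的 ss 命令(iproute2 软件包)也能完成同样的端口检查,无需额外安装 net-tools。以下为示例写法(named 服务名为本文实验环境的假设):

```shell
# ss 是 netstat 的现代替代:-l 列出监听端口,-n 不做名称解析,-t/-u 指定TCP/UDP,-p 显示进程
ss -lntup | grep named || true   # 未运行 named 时 grep 无匹配会返回非0,加 || true 避免中断脚本
```

输出格式与 netstat 略有差异,但同样能看到 53、953 端口及对应的 named 进程。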
DNS服务器搭建(使用Windows Server 2016环境演示)
本实验使用虚拟机做演示。在VMware Workstation软件上安装一台Windows Server 2016的服务器,搭建DNS服务器。Windows Server 2016服务器安装过程省略。
1、按Windows键,点击服务器管理器。
2、点击“添加角色和功能”,进行DNS配置。
3、直接点击“下一步”。
4、默认选择,点击“下一步”。
5、默认选择,点击“下一步”。
6、勾选“DNS服务器”。
7、点击“添加功能”。
8、点击“下一步”。
9、默认,点击“下一步”。
10、点击“下一步”。
11、选择“安装”。
12、等待安装完成。
13、安装完成,点击“关闭”。
14、在工具中,点击“DNS”。
15、右击“正向查找区域”,选择“新建区域”。
16、单击“下一步”。
17、选择“主要区域”,单击“下一步”。
18、设置区域名称。例如,xybnetlab.com。
19、默认设置,单击“下一步”。
20、单击“下一步”。
21、确认配置信息,点击“完成”即可。
22、右击选择“新建主机”,如图所示。
23、填写名称(可不填)、IP地址,点击“添加主机”。
24、正向查找添加完成。
25、右击“反向查找区域”,选择“新建区域”。
26、单击“下一步”。
27、默认选项,点击“下一步”。
28、选择“IPv4 反向查找区域”。
29、填写网络ID,单击“下一步”。
30、单击“下一步”。
31、单击“下一步”。
32、确认配置信息,单击“完成”即可。
33、右击选择“新建指针”。
34、设置主机名,点击“确定”即可。
35、反向查找设置完成。
36、右击选择“启动nslookup”。
37、在虚拟机中进行测试。
38、在本机上进行测试。
在虚拟机上设置IPv4 DNS服务器,即虚拟机的IP地址。在本机上设置VMware Network Adapter VMnet8的网络参数(因为虚拟机连接NAT网络)。如图所示。
打开cmd命令窗口,输入“nslookup xybnetlab.com”、“nslookup 192.168.200.128”,测试成功。正向、反向解析测试成功。
ping此虚拟机的IP地址、域名地址。
IP地址:192.168.200.128
域名地址:xybnetlab.com
若ping时出现请求超时的问题:首先检查网络配置信息、DNS配置信息是否正确无误;再检查虚拟机防火墙对应的服务是否启用。可以先尝试关闭防火墙,若能ping通,则是防火墙设置问题;若ping不通,则是其他问题造成的。
若需要开启防火墙,则按如下所示操作,开启对应的服务即可:点击“高级设置”,点击“入站规则”,找到“文件和打印机共享(回显请求 - ICMPv4-In)”,勾选“已启用”,单击“确定”。重新进行ping测试,测试成功。
【Windows网络连接问题】无法连接到这个网络
问题:连接此网络无法正常连接上网。
解决方法尝试:
1、排查是否电脑网卡问题:连接其他无线网,发现能正常连接并正常上网;已经重新启动电脑,还是不能正常连接此网络。
2、通过疑难解答,还未正常修复。
3、网上搜索问题解决:查看本地端相关网络服务是否正常开启。按“Ctrl+R”键调出运行框,输入services.msc,打开服务。打开或重启以下服务:
WLAN AutoConfig
Wired AutoConfig
重新连接此网络,依旧不行。
4、重启本地电脑,解决问题。有点懵。
【网络共享解决】Internet连接共享访问被启用时,出现了一个错误:无法启用Internet连接共享。为LAN连接配置的IP地址需要使用自动IP寻址。
报错截图
解决办法
原因:共享网络使用的IP地址192.168.137.1被其他网络连接占用。
解决办法:将占用该地址的其他网络连接修改为别的IP地址即可。
使用mkcert工具生成受信任的本地SSL证书官方文档:https://github.com/FiloSottile/mkcert#mkcert参考文章链接:本地https快速解决方案——mkcertmkcert工具下载链接:https://github.com/FiloSottile/mkcert/releases本实验使用的是windows系统,下载对应的版本即可。mkcert 是一个简单的工具,用于制作本地信任的开发证书。不需要配置。以管理员身份运行命令提示符mkcert安装及使用指南cd C:/ ——进入工具存放的目录下输入mkcert-v1.4.3-windows-amd64.exe -install命令进行安装将CA证书加入本地可信CA,使用此命令,就能帮助我们将mkcert使用的根证书加入了本地可信CA中,以后由该CA签发的证书在本地都是可信的。输入mkcert-v1.4.3-windows-amd64.exe,查询是否安装成功输入mkcert-v1.4.3-windows-amd64.exe -help,查看帮助指南输入mkcert-v1.4.3-windows-amd64.exe -CAROOT命令,列出CA证书的存放路径生成SSL自签证书签发本地访问的证书直接跟多个要签发的域名或ip,比如签发一个仅本机访问的证书(可以通过127.0.0.1和localhost,以及ipv6地址::1访问)mkcert-v1.4.3-windows-amd64.exe localhost 127.0.0.1 ::1使用本地IP地址生成证书mkcert-v1.4.3-windows-amd64.exe 192.168.2.25生成的SSL证书存放在当前运行目录下其中192.168.2.25.pem为公钥,192.168.2.25-key.pem为私钥将公钥.pem格式改为.crt格式安装证书输入certmgr.msc命令,打开证书查询使用chrome浏览器进行验证查看是否生效
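mkcert 生成的证书是标准的 PEM 格式,可以用 openssl 命令行核对其使用者和有效期。下面的示例先用 openssl 自签一张临时证书来演示(cert.pem/key.pem 文件名为演示假设;查看命令同样适用于 mkcert 生成的 192.168.2.25.pem):

```shell
# 演示:自签一张一天有效期的证书(mkcert 用户可跳过这步,直接查看自己的 .pem 文件)
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem -days 1 -subj "/CN=localhost"
# 查看证书的使用者(subject)与有效期(notBefore/notAfter)
openssl x509 -in cert.pem -noout -subject -dates
```

若浏览器提示证书不受信任,可先用该命令确认证书里的 CN/SAN 与实际访问的地址是否一致。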
Windows系统下MD5,SHA1或者SHA256三种校验值查询方法打开cmd,进入需要校验的文件的绝对路径下。格式:certutil -hashfile 绝对路径下文件 校验值certutil -hashfile ventoy-1.0.45-windows.zip SHA256certutil -hashfile ventoy-1.0.45-windows.zip SHA1certutil -hashfile ventoy-1.0.45-windows.zip MD5举例:
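顺带一提,Linux/macOS 下无需 certutil,系统自带的 sha256sum/md5sum(macOS 为 shasum/md5)即可完成同样的校验。示例(demo.txt 为演示用的临时文件,实际使用时替换为待校验的文件名,如 ventoy-1.0.45-windows.zip):

```shell
# 生成一个内容固定的演示文件并计算其校验值
printf 'hello' > demo.txt
sha256sum demo.txt   # 输出:SHA256值 文件名
md5sum demo.txt      # 输出:MD5值 文件名
```

把输出的哈希值与官网公布的校验值逐字符比对,一致即说明文件未被篡改或损坏。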
在 VMware Workstation 中禁用虚拟机的挂起功能
禁用虚拟机挂起,执行以下操作:
1、关闭虚拟机。
2、找到虚拟机文件夹。
3、在文本编辑器中打开 .vmx 文件以进行编辑。
4、将下行添加到 .vmx 文件中:
suspend.disabled = "TRUE"
5、保存并关闭 .vmx 文件。
6、重新启动虚拟机。
要启用挂起功能,请从 .vmx 文件中移除下行:
suspend.disabled = "TRUE"
编辑 .vmx 文件,请执行以下操作:
1、关闭虚拟机。
2、找到虚拟机的文件。
3、在文本编辑器中打开虚拟机的配置文件(.vmx)。
4、根据需要添加或编辑行。行以不特定的顺序显示。
5、完成后,使用文本编辑器中的保存选项保存更改。
6、退出文本编辑器。
注意事项:
对 .vmx 文件所做的更改在下次打开 VMware Workstation 或 VMware Player 之前不会生效。如果应用程序当前处于打开状态,请退出并重新打开以使更改生效。或者,双击 .vmx 文件以应用更改并立即打开虚拟机。
Windows操作系统默认情况下隐藏文件扩展名。建议启用文件扩展名以确保正在编辑的文件正确。
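上述手工编辑步骤也可以在 Linux/macOS 宿主机上用一小段 shell 脚本自动完成。以下为示意写法(文件名 demo.vmx 为演示假设,键名 suspend.disabled 来自上文;macOS 的 BSD sed 需把 -i 写成 -i ''):

```shell
VMX=demo.vmx
# 演示:先造一个最小的 .vmx 文件(实际使用时跳过这步,直接把 VMX 指向真实文件)
printf 'config.version = "8"\n' > "$VMX"
# 已存在该键则原地更新,否则追加到文件末尾(幂等,可重复执行)
if grep -q '^suspend.disabled' "$VMX"; then
  sed -i 's/^suspend.disabled.*/suspend.disabled = "TRUE"/' "$VMX"
else
  echo 'suspend.disabled = "TRUE"' >> "$VMX"
fi
```

脚本执行完后,重新打开 VMware Workstation 即可看到挂起选项已被禁用。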
四、实验注意事项:
1、云服务器安全组端口号是否被允许放行
需要用到的端口号记得在云服务器安全组中设置允许放行,如客户端连接的端口、服务端的端口、web访问的端口等。如图所示。
2、web登录时提示账号密码错误问题
当输入正确的Web管理账号和密码时,依旧提示账号密码错误。如图所示。解决办法如下:执行./nps uninstall,然后删除/etc/nps目录,重新安装配置。
3、使用远程桌面连接出现如图报错
解决办法如图所示。
4、注意填写的最大端口号为65535
5、客户端需开启允许远程桌面访问
问题:远程桌面关闭,不允许远程连接到此电脑。解决:开启客户端远程桌面。如图所示。
三、内网穿透步骤流程操作
【1】服务端配置
1、远程连接云服务器。如图所示。
2、检查防火墙和SELINUX安全模式是否关闭
关闭防火墙命令:
systemctl stop firewalld
systemctl disable firewalld
systemctl status firewalld
修改SELINUX安全模式:
vim /etc/selinux/config
将此选项修改为SELINUX=disabled
按“esc”后,按“:wq”保存退出
reboot重启生效
3、新建nps文件夹,上传nps服务端的压缩包并解压。如图所示。
mkdir nps
tar -zxvf linux_arm64_server.tar.gz
4、进入conf文件夹,对nps.conf配置文件进行编辑修改。如图所示。
cd nps/conf/
vim nps.conf
5、修改nps.conf配置文件的内容
修改部分:
#web
web_host=a.o.com —— 云服务器公网IP地址
web_username=admin ——web控制台账户设置
web_password=123 ——web控制台密码设置
web_port = 8080
web_ip=0.0.0.0
web_base_url=
web_open_ssl=false
web_cert_file=conf/server.pem
web_key_file=conf/server.key
nps.conf配置内容如下:【根据需要自行修改】
appname = nps
#Boot mode(dev|pro)
runmode = dev
#HTTP(S) proxy port, no startup if empty
http_proxy_ip=0.0.0.0
http_proxy_port=80
https_proxy_port=443
https_just_proxy=true
#default https certificate setting
https_default_cert_file=conf/server.pem
https_default_key_file=conf/server.key
##bridge
bridge_type=tcp —— 连接协议为TCP协议
bridge_port=8024 —— 连接端口为8024,可以自行修改
bridge_ip=0.0.0.0
# Public password, which clients can use to connect to the server
# After the connection, the server will be able to open relevant ports and parse related domain names according to its own configuration file.
public_vkey=123
#Traffic data persistence interval(minute)
#Ignorance means no persistence
#flow_store_interval=1
# log level LevelEmergency->0 LevelAlert->1 LevelCritical->2 LevelError->3 LevelWarning->4 LevelNotice->5 LevelInformational->6 LevelDebug->7
log_level=7
#log_path=nps.log
#Whether to restrict IP access, true or false or ignore
#ip_limit=true
#p2p
#p2p_ip=127.0.0.1
#p2p_port=6000
#web
web_host=a.o.com —— 云服务器公网IP地址
web_username=admin ——web控制台账户设置
web_password=123 ——web控制台密码设置
web_port = 8080
web_ip=0.0.0.0
web_base_url=
web_open_ssl=false
web_cert_file=conf/server.pem
web_key_file=conf/server.key
# if web under proxy use sub path. like http://host/nps need this.
#web_base_url=/nps
#Web API unauthenticated IP address(the len of auth_crypt_key must be 16)
#Remove comments if needed
#auth_key=test
auth_crypt_key =1234567812345678
#allow_ports=9001-9009,10001,11000-12000
#Web management multi-user login
allow_user_login=false
allow_user_register=false
allow_user_change_username=false
#extension
allow_flow_limit=false
allow_rate_limit=false
allow_tunnel_num_limit=false
allow_local_proxy=false
allow_connection_num_limit=false
allow_multi_ip=false
system_info_display=true
#cache
http_cache=false
http_cache_length=100
#get origin ip
http_add_origin_header=false
#pprof debug options
#pprof_ip=0.0.0.0
#pprof_port=9999
#client disconnect timeout
disconnect_timeout=60
修改完成后返回上一级目录,安装并开启nps服务。
cd ..
./nps install
./nps start
6、登录 nps web后台管理
输入“公网IP:端口号”。注意:此端口需要在云服务器中的安全组设置被允许访问。
7、添加客户端信息
8、设置TCP隧道信息
注意:设置的服务器端口需要在云服务器安全组中允许通过放行。
【2】客户端配置
1、解压nps客户端压缩包。如图所示。
2、安装Git并启用
参考链接:Windows系统Git安装教程(详解Git安装过程)
启用方式:
① 直接打开Git Bash应用,输入cd A:/桌面文档/proxy/nps【注意:斜杠“/”的书写方式】,进入nps压缩包解压后的路径(或目录)下;
② 找到此目录,在此目录下右击打开即可。
3、运行客户端命令,连接服务器端
命令格式:./npc.exe -server=xx.xx.xx.xx:端口号 -vkey=密钥 -type=tcp
4、查看并验证是否成功远程连接
至此,安装验证完成。
一、NPS简介和实现原理
1、NPS简介
nps是一款轻量级、高性能、功能强大的内网穿透代理服务器。目前支持tcp、udp流量转发,可支持任何tcp、udp上层协议(访问内网网站、本地支付接口调试、ssh访问、远程桌面、内网dns解析等等……),此外还支持内网http代理、内网socks5代理、p2p等,并带有功能强大的web管理端。
2、NPS实现功能
1、做微信公众号开发、小程序开发等——域名代理模式
2、想在外网通过ssh连接内网的机器,做云服务器到内网服务器端口的映射——tcp代理模式(本实验搭建,实现此功能)
3、在非内网环境下使用内网dns,或者需要通过udp访问内网机器等——udp代理模式
4、在外网使用HTTP代理访问内网站点——http代理模式
5、搭建一个内网穿透ss,在外网如同使用内网vpn一样访问内网资源或者设备——socks5代理模式
3、nps特点
全面的协议支持,与几乎所有常用协议兼容,例如tcp、udp、http(s)、socks5、p2p、http代理等;
全面的平台兼容性(Linux、Windows、MacOS、群晖等),支持作为系统服务进行安装;
全面控制,允许客户端和服务器控制;
Https集成,支持将后端代理和Web服务转换为https,并支持多个证书;
只需在Web ui上进行简单配置即可完成大多数要求;
完整的信息显示,例如流量、系统信息、实时带宽、客户端版本等;
强大的扩展功能,一切可用(缓存、压缩、加密、流量限制、带宽限制、端口重用等);
域名解析具有诸如自定义header、404页面配置、host修改、站点保护、URL路由和泛解析之类的功能;
服务器上的多用户和用户注册支持。
二、使用NPS安装前期准备
1、提前下载好nps压缩包
下载链接:https://github.com/ehang-io/nps/releases
本实验的服务端使用的是centos 8 linux系统,所以下载对应的nps服务端的压缩包:linux_arm64_server.tar.gz;客户端使用的是Windows操作系统,对应下载nps客户端的压缩包:windows_amd64_client.tar.gz。使用者也可以根据自己实际使用场景进行相应压缩包的下载与安装搭建。
2、购买云服务器并查询其公网IP地址
查询购买云服务器的公网IP地址,这里选用的是华为鲲鹏云服务器,也可以购买阿里云、腾讯云等其他厂商的云服务器进行安装与搭建。
3、下载并安装好远程连接工具
这里使用的是xshell与xftp远程连接工具,也可以选择CRT、putty、MobaXterm等其他远程连接工具。
按“Win+E”键出现【找不到应用程序】或【explorer.exe找不到】的解决方法
问题描述
按“win+e”键无法打开此电脑。
解决步骤
步骤1 按“win+r”键,调出运行框,输入“regedit”确定,打开注册表。
步骤2 找到以下目录:
计算机\HKEY_CLASSES_ROOT\CLSID\{52205fd8-5dfb-447d-801a-d0b52f2e83e1}\shell\OpenNewWindow\command
步骤3 右击“command”,选择【权限】,点击【高级】。
步骤4 在command高级安全设置中,点击【更改】,修改权限所有者。
步骤5 选择【高级】,点击【立即查找】,选择输入对象的名称,单击“确定”。
步骤6 点击需要修改的“组或用户名”,修改Users的权限,单击“确定”。
步骤7 单击command,在右边出现的窗格中删除“DelegateExecute”项。
步骤8 双击“(默认)”这一项,将数值数据设置为:explorer.exe ::{20D04FE0-3AEA-1069-A2D8-08002B30309D},单击“确定”。
最后,使用快捷键“win+e”成功打开此电脑。
参考链接
报错信息如下:
[root@rabbitmq3 rabbitmq]# rabbitmqctl stop_app
Stopping node rabbit@rabbitmq3 ...
Error: unable to connect to node rabbit@rabbitmq3: nodedown
DIAGNOSTICS
===========
attempted to contact: [rabbit@rabbitmq3]
rabbit@rabbitmq3:
* connected to epmd (port 4369) on rabbitmq3
* epmd reports node 'rabbit' running on port 25672
* TCP connection succeeded but Erlang distribution failed
* suggestion: hostname mismatch?
* suggestion: is the cookie set correctly?
current node details:
- node name: rabbitmqctl1704@rabbitmq3
- home dir: /var/lib/rabbitmq
- cookie hash: t+Or9UGpg+M4TGJbQMie7w==
解决步骤:
step1 查询mq的进程
ps -ef | grep rabbitmq
step2 杀掉mq进程
ps -ef | grep rabbitmq | grep -v grep | awk '{print $2}' | xargs kill -9
step3 启动mq
rabbitmq-server -detached
step4 再查询mq的状态
rabbitmqctl status
[root@rabbitmq2 rabbitmq]# ps -ef | grep rabbitmq ——查询mq的进程
root 1303 1273 0 01:02 pts/0 00:00:00 ping rabbitmq1
root 1304 1273 0 01:02 pts/0 00:00:00 ping rabbitmq3
rabbitmq 1408 1 1 01:16 ? 00:00:18 /usr/lib64/erlang/erts-5.10.4/bin/beam -W w -K true -A30 -P 1048576 -- -root /usr/lib64/erlang -progname erl -- -home /var/lib/rabbitmq -- -pa /usr/lib/rabbitmq/lib/rabbitmq_server-3.3.5/sbin/../ebin -noshell -noinput -s rabbit boot -sname rabbit@rabbitmq2 -boot start_sasl -config /etc/rabbitmq/rabbitmq -kernel inet_default_connect_options [{nodelay,true}] -sasl errlog_type error -sasl sasl_error_logger false -rabbit error_logger {file,"/var/log/rabbitmq/rabbit@rabbitmq2.log"} -rabbit sasl_error_logger {file,"/var/log/rabbitmq/rabbit@rabbitmq2-sasl.log"} -rabbit enabled_plugins_file "/etc/rabbitmq/enabled_plugins" -rabbit plugins_dir "/usr/lib/rabbitmq/lib/rabbitmq_server-3.3.5/sbin/../plugins" -rabbit plugins_expand_dir "/var/lib/rabbitmq/mnesia/rabbit@rabbitmq2-plugins-expand" -os_mon start_cpu_sup false -os_mon start_disksup false -os_mon start_memsup false -mnesia dir "/var/lib/rabbitmq/mnesia/rabbit@rabbitmq2" -kernel inet_dist_listen_min 25672 -kernel inet_dist_listen_max 25672
rabbitmq
1423 1 0 01:16 ? 00:00:00 /usr/lib64/erlang/erts-5.10.4/bin/epmd -daemon rabbitmq 1480 1408 0 01:16 ? 00:00:00 inet_gethost 4 rabbitmq 1481 1480 0 01:16 ? 00:00:00 inet_gethost 4 root 11655 1273 0 01:45 pts/0 00:00:00 grep --color=auto rabbitmq [root@rabbitmq2 rabbitmq]# ps -ef | grep rabbitmq | grep -v grep | awk '{print $2}' | xargs kill -9 ——杀掉mq进程 [1]- Killed ping rabbitmq1 (wd: ~) (wd now: /var/lib/rabbitmq) [2]+ Killed ping rabbitmq3 (wd: ~) (wd now: /var/lib/rabbitmq) [root@rabbitmq2 rabbitmq]# rabbitmq-server -detached ——启动mq Warning: PID file not written; -detached was passed. [root@rabbitmq2 rabbitmq]# rabbitmqctl status ——查询mq的状态 Status of node rabbit@rabbitmq2 ... [{pid,11738}, {running_applications,[{os_mon,"CPO CXC 138 46","2.2.14"}, {xmerl,"XML parser","1.3.6"}, {mnesia,"MNESIA CXC 138 12","4.11"}, {sasl,"SASL CXC 138 11","2.3.4"}, {stdlib,"ERTS CXC 138 10","1.19.4"}, {kernel,"ERTS CXC 138 10","2.16.4"}]}, {os,{unix,linux}}, {erlang_version,"Erlang R16B03-1 (erts-5.10.4) [source] [64-bit] [async-threads:30] [hipe] [kernel-poll:true]\n"}, {memory,[{total,39108744}, {connection_procs,0}, {queue_procs,0}, {plugins,0}, {other_proc,16463696}, {mnesia,55032}, {mgmt_db,0}, {msg_index,0}, {other_ets,720656}, {binary,1090360}, {code,16704494}, {atom,602729}, {other_system,3471777}]}, {alarms,[]}, {listeners,[]}, {vm_memory_high_watermark,0.4}, {vm_memory_limit,771307929}, {disk_free_limit,50000000}, {disk_free,20360318976}, {file_descriptors,[{total_limit,924}, {total_used,0}, {sockets_limit,829}, {sockets_used,0}]}, {processes,[{limit,1048576},{used,86}]}, {run_queue,1}, {uptime,12}] ...done. [root@rabbitmq2 rabbitmq]# rabbitmqctl stop_app Stopping node rabbit@rabbitmq2 ... ...done. [root@rabbitmq2 rabbitmq]# [root@rabbitmq3 ~]# rabbitmqctl stop_app ——执行成功 Stopping node rabbit@rabbitmq3 ... ...done. [root@rabbitmq3 ~]# [root@rabbitmq3 ~]# rabbitmqctl join_cluster --ram rabbit@rabbitmq1 Clustering node rabbit@rabbitmq3 with rabbit@rabbitmq1 ... ...done. 
[root@rabbitmq3 ~]# rabbitmqctl start_app Starting node rabbit@rabbitmq3 ... ...done. [root@rabbitmq3 ~]#
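上面 ps | grep | awk | xargs 的组合也可以用 pkill 一条命令替代(pkill 属 procps 工具集,按进程名或完整命令行匹配并发送信号),示例如下:

```shell
# -9 发送SIGKILL强制结束,-f 对完整命令行做匹配(等价于上文的 grep rabbitmq)
# 无匹配进程时 pkill 返回非0,加 || true 便于在脚本中继续执行后续步骤
pkill -9 -f rabbitmq || true
```

注意 -f 按整条命令行匹配,模式写得太宽可能误杀其他进程,实际使用前可先用 pgrep -af 同样的模式确认匹配结果。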
实战案例——Ansible部署高可用OpenStack平台
案例描述
1、了解高可用OpenStack平台架构
2、了解Ansible部署工具的使用
3、使用Ansible工具部署OpenStack平台
案例目标
1、部署架构
Dashboard访问采用负载均衡方式,提供VIP地址,平台访问通过VIP地址进行访问,当其中一台控制节点异常时,另一台控制节点可以继续正常工作;MariaDB数据库采用集群式部署,控制节点间数据库相互进行同步。
2、节点规划
Ansible部署高可用OpenStack平台节点规划:
IP地址 | 主机名 | 节点
172.30.14.10 | controller01 | 控制节点1
172.30.14.11 | controller02 | 控制节点2
172.30.14.12 | compute01 | 计算节点1
172.30.14.13 | compute02 | 计算节点2
192.168.1.109 | server | Ansible节点
3、【前期准备】解压提供的server_bak.zip,通过VMware Workstation软件打开server_bak虚拟机,其作为Ansible节点;手动最小化安装4台CentOS 7.2系统的服务器,作为OpenStack节点。
实施步骤
1、基础环境配置
【IP地址配置】
server_bak节点的IP地址
[root@server ~]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.30.14.20 netmask 255.255.255.0 broadcast 172.30.14.255
inet6 fe80::20c:29ff:fe7e:4486 prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:7e:44:86 txqueuelen 1000 (Ethernet)
RX packets 391 bytes 29646 (28.9 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 138 bytes 14205 (13.8 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.1.117 netmask 255.255.255.0 broadcast 192.168.1.255
inet6 fe80::20c:29ff:fe7e:4490 prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:7e:44:90 txqueuelen 1000 (Ethernet)
RX packets 152 bytes 14224 (13.8 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 28 bytes 2602 (2.5 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.1.0.253 netmask 255.255.255.0 broadcast 10.1.0.255
inet6 fe80::20c:29ff:fe7e:449a prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:7e:44:9a txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 12 bytes 888 (888.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 0 (Local Loopback)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0
frame 0 TX packets 0 bytes 0 (0.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 [root@server ~]# ping -c 5 baidu.com PING baidu.com (220.181.38.148) 56(84) bytes of data. 64 bytes from 220.181.38.148: icmp_seq=1 ttl=49 time=26.6 ms 64 bytes from 220.181.38.148: icmp_seq=2 ttl=49 time=26.4 ms 64 bytes from 220.181.38.148: icmp_seq=3 ttl=49 time=27.0 ms 64 bytes from 220.181.38.148: icmp_seq=4 ttl=49 time=26.7 ms 64 bytes from 220.181.38.148: icmp_seq=5 ttl=49 time=27.0 ms --- baidu.com ping statistics --- 5 packets transmitted, 5 received, 0% packet loss, time 4051ms rtt min/avg/max/mdev = 26.433/26.782/27.084/0.275 ms [root@server ~]# controller01节点IP地址[root@controller01 ~]# ip a 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 00:0c:29:19:55:0d brd ff:ff:ff:ff:ff:ff inet 172.30.14.10/24 brd 172.30.14.255 scope global eth0 valid_lft forever preferred_lft forever inet6 fe80::20c:29ff:fe19:550d/64 scope link valid_lft forever preferred_lft forever [root@controller01 ~]# controller02节点的IP地址[root@controller02 ~]# ip a 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 00:0c:29:93:a2:40 brd ff:ff:ff:ff:ff:ff inet 172.30.14.11/24 brd 172.30.14.255 scope global eth0 valid_lft forever preferred_lft forever inet6 fe80::20c:29ff:fe93:a240/64 scope link valid_lft forever preferred_lft forever [root@controller02 ~]# compute01节点的IP地址[root@compute01 ~]# ip a 1: 
lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 00:0c:29:60:aa:8e brd ff:ff:ff:ff:ff:ff inet 172.30.14.12/24 brd 172.30.14.255 scope global eth0 valid_lft forever preferred_lft forever inet6 fe80::20c:29ff:fe60:aa8e/64 scope link valid_lft forever preferred_lft forever [root@compute01 ~]# compute02节点的IP地址[root@compute02 ~]# ip a 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 00:0c:29:87:88:b4 brd ff:ff:ff:ff:ff:ff inet 172.30.14.13/24 brd 172.30.14.255 scope global eth0 valid_lft forever preferred_lft forever inet6 fe80::20c:29ff:fe87:88b4/64 scope link valid_lft forever preferred_lft forever [root@compute02 ~]# 2、Ansible安装OpenStack平台【修改Ansible环境配置】[root@server ~]# cd /opt/xd-cloud-simple/ [root@server xd-cloud-simple]# ll total 44 -rwxr-xr-x 1 root root 5964 Sep 28 2019 add_compute_node.sh -rwxr-xr-x 1 root root 1648 Oct 18 22:31 configuration.cfg -rwxr-xr-x 1 root root 158 Jun 30 2017 hosts -rwxr-xr-x 1 root root 0 Jun 20 2017 hosts_ansible -rwxr-xr-x 1 root root 9740 Sep 27 2019 install.sh drwxr-xr-x 5 root root 4096 Oct 19 15:52 module -rwxr-xr-x 1 root root 173 Jun 19 2017 passwd -rwxr-xr-x 1 root root 1512 Sep 28 2019 roach.sh -rwxr-xr-x 1 root root 201 Sep 27 2019 test.sh [root@server xd-cloud-simple]# 编辑configuration.cfg 环境配置文件,根据实际地址和参数进行修改变量。[root@server xd-cloud-simple]# vim configuration.cfg# Xiandian Cloud Platform Installation Script # taicai. 
#-----------------------------------------------
# Basic Authentication
#-----------------------------------------------
REGION_NAME=xiandian
DOMAIN_NAME=domain
MGMT_NET_CIDR=172.30.14.0/24
DATA_NET_CIDR=172.30.14.0/24
#-----------------------------------------------
# System Config
# Controller Node
#-----------------------------------------------
CON_IS_HA=yes
CON_VIP_IP=172.30.14.100
CON_HOST_NAME=controller01,controller02
CON_MGMT_DEV_NAME=eth0
CON_MGMT_DEV_IP=172.30.14.10,172.30.14.11
CON_DATA_DEV_NAME=eth0
CON_DATA_DEV_IP=172.30.14.10,172.30.14.11
#-----------------------------------------------
# Compute Node
#-----------------------------------------------
COM_MGMT_DEV_NAME=eth0
COM_MGMT_DEV_IP=172.30.14.12,172.30.14.13
COM_HOST_NAME=compute01,compute02
COM_DATA_DEV_NAME=eth0
COM_DATA_DEV_IP=172.30.14.12,172.30.14.13
COM_PRI_DEV_NAME=eth1
COM_EXT_DEV_NAME=eth1
NEUTRON_MIN_VLAN_NAME=114
NEUTRON_MAX_VLAN_NAME=120
#-----------------------------------------------
# Storage Node
#-----------------------------------------------
#STORAGE_MGMT_DEV_NAME=enp9s0
#STORAGE_MGMT_DEV_IP=10.0.1.1,10.0.1.2,10.0.1.3,10.0.1.4
#STORAGE_HOST_NAME=node-1,node-2,node-3,node-4
#STORAGE_DISK_NAME="/dev/sda /dev/sdb"
#STORAGE_DATA_DEV_NAME=enp10s0
#STORAGE_DATA_DEV_IP=10.0.1.1,10.0.1.2,10.0.1.3,10.0.1.4
#-----------------------------------------------
# yum repo config
#-----------------------------------------------
NAME1=centos7
URL1=ftp://172.30.14.20/centos7.2/
NAME2=iaas
URL2=ftp://172.30.14.20/iaas/iaas-repo/
ALL_SERVER_ROOT_PASSWORD=000000
在Ansible脚本目录下执行test.sh脚本,清空原始文件。
[root@server xd-cloud-simple]# ./test.sh
removed ‘/root/.ssh/id_rsa’
removed ‘/root/.ssh/id_rsa.pub’
removed ‘/root/.ssh/known_hosts’
removed ‘/root/.ssh/authorized_keys’
removed directory: ‘/root/.ssh’
[root@server xd-cloud-simple]#
【一键安装平台】
[root@server xd-cloud-simple]# ./install.sh
【查询登录名、密码】
[root@server module]# pwd
/opt/xd-cloud-simple/module
[root@server module]# cat passwd
OPENSTACK_SERVICE_NAME_PASS=tINfSr5aTz7kgukAfIF7 OPENSTACK_SERVICE_PASS=rilgrqK6eEJilk3HSUDs OPENSTACK_METADATA_KEY_PASS=4gvjRuWQy2F6zbPIZ1OR OPENSTACK_KEYSTONE_TOKEN_PASS=L2qIYZaKQPWgvrwEqYM1 DATABASE_PASS=RyEgk2voacCHVzzZRXCv ADMINISTRATOR_NAME=admin ADMINISTRATOR_PASS=cAUk6Pv9WZKTw5a3x2Lg REGION_NAME=xiandian DOMAIN_NAME=domain MGMT_NET_CIDR=172.30.14.0/24 DATA_NET_CIDR=172.30.14.0/24 NEUTRON_MIN_VLAN_NAME=114 NEUTRON_MAX_VLAN_NAME=120 NAME1=centos7 URL1=ftp://172.30.14.20/centos7.2/ NAME2=iaas URL2=ftp://172.30.14.20/iaas/iaas-repo/ ALL_SERVER_ROOT_PASSWORD=000000 CONTROLLER_VIP_IP=172.30.14.100 CONTROLLER_NUM=2 CONTROLLER_VIP_IP=172.30.14.100 CONTROLLER_NODE1_IP=172.30.14.10 CONTROLLER_NODE1_NAME=controller01 CONTROLLER_NODE2_IP=172.30.14.11 CONTROLLER_NODE2_NAME=controller02 COMPUTE_NUM=2 COMPUTE_NODE1_IP=172.30.14.12 COMPUTE_NODE1_NAME=compute01 COMPUTE_NODE2_IP=172.30.14.13 COMPUTE_NODE2_NAME=compute02 MGMT_DEV_NAME=br-mgmt DATA_DEV_NAME=br-storage PRI_DEV_NAME=br-prv EXT_DEV_NAME=br-ex CON_MGMT_DEV_NAME_1=eth0 CON_MGMT_DEV_IP_NODE_1=172.30.14.10 CON_MGMT_DEV_IP_NODE_2=172.30.14.11 CON_DATA_DEV_NAME_1=eth0 CON_DATA_DEV_IP_NODE_1=172.30.14.10 CON_DATA_DEV_IP_NODE_2=172.30.14.11 COM_MGMT_DEV_NAME_1=eth0 COM_MGMT_DEV_IP_NODE_1=172.30.14.12 COM_MGMT_DEV_IP_NODE_2=172.30.14.13 COM_DATA_DEV_NAME_1=eth0 COM_DATA_DEV_IP_NODE_1=172.30.14.12 COM_DATA_DEV_IP_NODE_2=172.30.14.13 COM_PRI_DEV_NAME_1=eth1 COM_EXT_DEV_NAME_1=eth1 [root@server module]# 【查看控制节点1的地址】[root@controller01 ~]# ip a 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovs-system state UP qlen 1000 link/ether 00:0c:29:19:55:0d brd ff:ff:ff:ff:ff:ff inet6 fe80::20c:29ff:fe19:550d/64 scope link valid_lft forever preferred_lft 
forever 3: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN link/ether 1a:81:15:8f:e0:50 brd ff:ff:ff:ff:ff:ff 4: br-mgmt: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN link/ether 00:0c:29:19:55:0d brd ff:ff:ff:ff:ff:ff inet 172.30.14.10/24 brd 172.30.14.255 scope global br-mgmt valid_lft forever preferred_lft forever inet 172.30.14.100/32 scope global br-mgmt valid_lft forever preferred_lft forever inet6 fe80::20c:29ff:fe19:550d/64 scope link valid_lft forever preferred_lft forever 5: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN link/ether 5a:69:a0:d2:76:49 brd ff:ff:ff:ff:ff:ff [root@controller01 ~]# 【查看控制节点2的地址】[root@controller02 ~]# ip a 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovs-system state UP qlen 1000 link/ether 00:0c:29:93:a2:40 brd ff:ff:ff:ff:ff:ff inet6 fe80::20c:29ff:fe93:a240/64 scope link valid_lft forever preferred_lft forever 3: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN link/ether be:e0:c3:ba:35:3a brd ff:ff:ff:ff:ff:ff 4: br-mgmt: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN link/ether 00:0c:29:93:a2:40 brd ff:ff:ff:ff:ff:ff inet 172.30.14.11/24 brd 172.30.14.255 scope global br-mgmt valid_lft forever preferred_lft forever inet6 fe80::20c:29ff:fe93:a240/64 scope link valid_lft forever preferred_lft forever 5: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN link/ether 26:9b:7b:4f:e3:44 brd ff:ff:ff:ff:ff:ff [root@controller02 ~]# 当controller01节点异常时或者关闭,VIP将自动切换至controller02节点,实现HA控制节点HA服务。现在,关闭controller01,查看web界面是否还能正常访问。[root@controller01 ~]# shutdown -h now【controller01节点异常或者关闭后,controller02节点的IP地址变化情况】[root@controller02 ~]# ip a 1: lo: 
<LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovs-system state UP qlen 1000
link/ether 00:0c:29:93:a2:40 brd ff:ff:ff:ff:ff:ff
inet6 fe80::20c:29ff:fe93:a240/64 scope link
valid_lft forever preferred_lft forever
3: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
link/ether be:e0:c3:ba:35:3a brd ff:ff:ff:ff:ff:ff
4: br-mgmt: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
link/ether 00:0c:29:93:a2:40 brd ff:ff:ff:ff:ff:ff
inet 172.30.14.11/24 brd 172.30.14.255 scope global br-mgmt
valid_lft forever preferred_lft forever
inet 172.30.14.100/32 scope global br-mgmt
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe93:a240/64 scope link
valid_lft forever preferred_lft forever
5: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
link/ether 26:9b:7b:4f:e3:44 brd ff:ff:ff:ff:ff:ff
【重新登录dashboard,查看是否能够正常访问】
内网穿透原理与实践
前期准备
局域网:Windows 10 主机,IP地址:192.168.1.103/24
在局域网里,通过在Windows 10 主机上的CRT远程连接工具,连接局域网里的CentOS服务器,配置好相关网络参数,使其可以正常访问外网。如图所示。
初次使用,前往官网,注册账号,登录控制台添加映射规则。具体操作步骤如下图所示。
方法一:在网页控制台里添加映射规则
登录https://www.kingdriod.cn/此网站,注册账号。
方法二:在电脑客户端内添加映射规则
在centos服务器安装相关服务。
1、打开终端输入:yum install gcc gcc-c++ wget -y
2、创建一个目录
mkdir /usr/local/shenzhuo
cd /usr/local/shenzhuo
3、使用wget命令从指定的URL下载文件。
wget http://neiwangchuantou.oss-cn-shanghai.aliyuncs.com/linux/neiwangchuantou/linux/3.0.2/shenzhuo
4、赋予运行权限(chmod +x shenzhuo)并启动软件:带上自己之前注册过的账号和密码。例如:账号 18800000000,密码 123456,如下图:
./shenzhuo 18800000000 123456
VMware Workstation 在此主机上不支持嵌套虚拟化。模块“MonitorMode”启动失败。未能启动虚拟机。
所遇问题:如图所示。
原因分析:
得知VMware Workstation Pro 升级至15.5.6版本后,可以与Hyper-V兼容起来了,于是升级了。升级之后可以正常开启虚拟机。前几天,想试一试Windows沙盒,于是也安装起来了,测试安装成功。今天又再次打开了虚拟机,就报如上错误:“VMware Workstation 在此主机上不支持嵌套虚拟化。”一提示到这个,我就想起来可能是Hyper-V开启捣的鬼,百度了一大堆,有人说是windows电脑系统升级到2004版本的问题,需要重装系统之类的,也有人说是VMware版本没有卸除干净,需要重装。额…怎么说呢,不想这么折腾。于是,下意识想到了:每次为什么要在虚拟机设置的处理器选项中,开启虚拟化 Intel VT-x/RVI(V)呢?是不是现在windows兼容了,不需要在vm里开启虚拟化了?抱着试一试的心态,取消勾选后再开机,成功了。
uos系统如何设置开发者模式并获取root权限方法一:在线激活进入UOS系统后,依次选择“控制中心–开发者模式模块–进入开发者模式–在线模式”方法二:离线激活1、机器信息获取在控制中心-开发者模式模块–进入开发者模式-选择离线模式,导出机器信息;2、上传机器信息上传导出的机器信息文件,后缀为.json3、下载离线证书点击下载离线证书按钮,下载离线证书;4、导入离线证书在控制中心-开发者模式模块–进入开发者模式-选择离线模式页面,导入证书;系统获取到证书后进入开发者模式。如图所示,已开启开发者模式。鼠标右击“在终端中打开”,打开终端,输入“sudo -i”命令,输入密码,即刻进入root管理员视图,拥有管理员root的权限。
在Ubuntu下使用几行命令打造好莱坞电影特效
效果图展示
操作步骤:
此特效使用的是一个工具,叫Hollywood。打开终端执行以下命令。
$ sudo apt-add-repository ppa:hollywood/ppa
$ sudo apt-get update
$ sudo apt-get install hollywood
$ sudo apt-get install byobu
$ hollywood    # 执行hollywood命令,实现特效
使用Windows远程桌面工具来远程连接控制Ubuntu系统
所需软件及系统:Windows远程连接工具、Ubuntu系统。
Windows的远程桌面使用的协议为RDP,接下来需要在Ubuntu的操作系统中安装xrdp。
1、测试网络连通性。
2、安装xrdp。打开Ubuntu的终端命令窗口,输入下面的指令进行安装。安装xrdp需要tightvncserver组件,这里一起将其相关的组件安装完成。命令如下。
#sudo apt-get install tightvncserver xrdp
3、完成安装后,进入Windows的操作系统,这里为Windows 10操作系统,打开Windows的远程桌面工具,输入Ubuntu系统的IP地址,就可以进行远程桌面的连接。
(1)查看Ubuntu的IP地址。
(2)使用Windows远程连接工具进行连接。如图所示。输入用户名和密码。远程连接成功!
注意事项:
1、查看Ubuntu是否开启屏幕共享。鼠标右击选择“设置”。
2、查看防火墙是否关闭或者对应端口是否开放。
(1)如果是“不活动”的话,可以不需要任何设置。
(2)开启对应端口。如果需要开启防火墙的话,需要放行xrdp监听的3389端口(其VNC后端使用5900端口)。
$ sudo ufw allow 3389
$ sudo ufw allow 5900
$ sudo ufw reload
$ sudo ufw status
3、设置root账户登录密码。
4、设置完以上参数记得重启一下系统。
$ sudo reboot
使用Windows Server 2012 R2创建DHCP地址池
操作步骤:
一、安装DHCP服务器角色
1、打开服务器管理器,点击“添加角色和功能”。
2、默认,下一步
3、默认,下一步
4、默认,下一步
5、选择“DHCP服务器”,单击“添加功能”
6、单击下一步
7、默认,下一步
8、默认,下一步
9、开始安装操作
10、完成安装
二、配置DHCP作用域(DHCP地址池)
1、选择DHCP服务,单击“DHCP管理器”
2、新建作用域
3、单击“下一步”
4、输入作用域的名称,单击“下一步”
5、设置IP地址范围
6、添加排除的IP地址范围,须在上一步设置的IP地址范围内的IP地址进行排除。单击“下一步”。
7、设置DHCP租用天数,默认为“8”
8、默认,下一步
9、设置路由器(默认网关)
10、默认,下一步
11、默认,下一步
12、默认,下一步
13、单击“完成”
14、查看结果
15、使用另一台PC,设置在同一网络模式下(如NAT模式),执行ipconfig /release命令,清空原有的IP地址,再执行ipconfig /renew命令,重新获取IP地址。结果如下。
16、在DHCP服务器中,查看是否已租用
使用Kali Linux虚拟机破解WiFi密码的一波三折
声明:此篇使用自己的WiFi作为学习和测试。私自破解他人WiFi属于违法行为!仅供参考学习~望周知!
波折一
波折二
波折三
实验操作步骤
1、连接无线网卡;
2、检查网卡是否处于监听模式;
3、开启无线网卡监听模式;
4、使用ifconfig命令查看无线网卡名称;
5、扫描环境周围的WiFi网络信号;
6、抓取握手包;
7、查看抓包情况;
8、破解WiFi密码,进行跑包
使用Kali Linux 暴力破解wifi密码详细步骤
所谓暴力破解就是穷举法,将密码字典中每一个密码依次去与握手包中的密码进行匹配,直到匹配成功。所以能否成功破解wifi密码取决于密码字典本身是否包含了这个密码。破解的时间取决于CPU的运算速度以及密码本身的复杂程度。如果WiFi密码设得足够复杂,即使有一个好的密码字典,要想破解成功花上个几天几十天甚至更久都是有可能的。
为了测试这个实验,我前期准备了好久,真是“一波三折”。
第一波:修复VMware Workstation 软件中某个服务(VMware USB Arbitration Service)未能正常启动的bug。
1、出现的问题:可移动设备无法正常显示出来。
2、原因分析:是因为windows服务中无法正常启动VMware USB Arbitration Service 所导致的。
3、解决方法:打开“控制面板”——>点击“卸载程序”——>找到VMware Workstation程序——>右击,选择“更改”。注意:更改此应用程序的时候,需要关闭此软件。等待修复完成后,重新启动即可。
第二波:未能正确购买到Kali Linux所支持的无线网卡做测试。
温馨提示:使用Kali linux,先准备好一个适合Kali系统的USB外置无线网卡,注意内置网卡并不适合做渗透测试。用于抓取无线数据包稳定和兼容比较好的三款芯片:
1.RT3070(L)/RT5572:即插即用,支持混杂模式、注入模式、AP模式。
2.RT8187(L):即插即用,支持混杂模式、注入模式、AP模式。
3.AR9271:即插即用,支持混杂模式、注入模式、AP模式。
对于Linux(像Ubuntu、Kali、Debian、Centos等等),这三款芯片即插即用,无需再手动安装驱动。
第三波:Kali Linux 外接无线网卡显示不出来的问题。
通过演示动画,连接无线网卡设置。连接以后,在虚机的右下角部分可以看到类似USB接口的图案显亮出来,即表示连接成功。再次使用 airmon-ng 命令进行查看,检查网卡是否支持监听模式。结果如图所示。
以上就是我做实验前的“一波三折”,希望能够给你们提供一些帮助与解答。接下来进入正式实验环节。
实验操作步骤:
1、连接无线网卡(在波折三中有提及,此处略过……)
2、使用airmon-ng命令检查网卡是否支持监听模式。
执行命令:airmon-ng
3、开启无线网卡的监听模式。如图所示。
执行命令:airmon-ng start wlan0
4、开启监听模式之后,无线接口wlan0变成了wlan0mon,使用ifconfig命令进行查看。如图所示。
执行命令:ifconfig -a
5、执行命令扫描环境中的WiFi网络。如图所示。
执行命令:airodump-ng wlan0mon
啊哦~报错了,暂时不知道咋整的,所以断开网卡重新连接,重新进行操作即可。
使用 airodump-ng 命令列出无线网卡扫描到的WiFi热点详细信息,包括信号强度、加密类型、信道等。这里我们记下要破解WiFi的BSSID和信道,如图中我用红色标记出来的。当搜索到我们想要破解的WiFi热点时可以Ctrl+C 停止搜索。
6、抓取握手包
使用网卡的监听模式抓取周围的无线网络数据包,其中我们需要用到的数据包是包含了WiFi密码的握手包,当有新用户连接WiFi时会发送握手包。
root@xyb:/home/xyb/桌面# airodump-ng -c 10 --bssid 94:63:72:9F:C6:C7 -w vivo wlan0mon
参数解释:
-c 指定信道,上面已经标记目标热点的信道
--bssid 指定目标路由器的BSSID,就是上面标记的BSSID
-w 指定抓取的数据包保存的文件名前缀
7、抓包,如图所示
执行命令:airodump-ng -c 13 --bssid 94:63:72:9F:C6:C7 -w vivo wlan0mon
抓包结束
8、破解WiFi密码(跑包),如图所示。
执行命令:aircrack-ng -a2 -b 94:63:72:9F:C6:C7 -w shuzi.txt vivo-01.cap
跑包结束,wifi密码为:11111111
至此,实验完成!
Connecting to a layer-3 switch through the console port

Cable required: USB-to-RJ-45 console cable
Remote connection tool: SecureCRT

First check which port the computer is connected to. Open SecureCRT, create a new session, and configure the relevant parameters. The console connection then succeeds.

PS:
1. If you forget the login password: reboot the device and, while the countdown screen is showing, quickly press Ctrl+B to enter the bootloader management menu. The default bootloader password on a Huawei S5700 is Admin@huawei. Choose the menu options that clear the console password in turn, then reboot. After logging in, set your own username and password.
2. You can use the Xmodem send function in SecureCRT to transfer a system or configuration file to the device, or use an FTP/TFTP server to upload files to the switch or download the system configuration file from it.
How to lock the drive letter of an external hard disk

Introduction: my laptop's mechanical hard disk had been feeling slow, and its space was limited, so I swapped in an SSD of the same size and turned the original HDD into an external drive used mainly for storage. I then noticed the external drive's letter kept changing, which felt wrong, so today I'll pin the drive letters down. Details below. The drive letters before the change are shown in the figure below.

Steps:
Step 1: press Win+R to open Run, type cmd and press Enter.
Step 2: type diskpart and press Enter; a new console window pops up.
Step 3: in the new window, type list volume and press Enter. Here my external disk is volumes 5, 6 and 7.
Step 4: type select volume 5 and press Enter.
Step 5: type assign letter=D (D being the letter to lock in). Volume 5 (formerly E:) is now assigned D:. Repeat the same steps for volumes 6 and 7.
The drive letters after the change are shown below. All changed successfully! Try it yourself if you need to.
PS: you can also press Win+X, choose Disk Management, and change the letters of the drives there.
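The interactive diskpart session above can also be replayed non-interactively: diskpart accepts a script file via `diskpart /s script.txt`. A minimal sketch that generates such a script (the volume-to-letter mapping below is an example, not necessarily your layout):

```python
def diskpart_script(mapping):
    # Emit one select/assign pair per volume, in volume order, in the
    # same form as the interactive session in steps 4-5.
    lines = []
    for volume, letter in sorted(mapping.items()):
        lines.append(f"select volume {volume}")
        lines.append(f"assign letter={letter}")
    return "\n".join(lines) + "\n"

# Example mapping: volumes 5, 6, 7 pinned to D:, F:, G:
script = diskpart_script({5: "D", 6: "F", 7: "G"})
print(script)
```

Saving the output to a file and running `diskpart /s` on it (as administrator) applies all the assignments in one pass instead of typing them volume by volume.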
DNS Servers

DNS overview
DNS (Domain Name Service) is the service that resolves the mapping between domain names and IP addresses. Simply put, it takes the domain name or IP address a user enters and automatically looks up the matching IP address or domain name: resolving a domain name to an IP address is forward resolution, and resolving an IP address to a domain name is reverse resolution. Thanks to DNS, people only need to type a domain name into the browser to open the site they want. Forward resolution is by far the most commonly used mode of operation.

DNS components
The DNS system consists of three parts: the DNS name space, DNS servers and resolvers.
1. DNS name space: the hierarchy of domains used to organize names. The root domain sits at the top; below it are several top-level domains, each of which can be further divided into second-level domains; second-level domains divide into subdomains, and below a subdomain there may be hosts or further subdomains, all the way down to the final host.
2. DNS servers: programs that hold and maintain the data of the name space. Because the name service is distributed, each DNS server holds the complete information for its own portion of the name space; its area of control is called a zone. Queries for the local zone are answered by the DNS server responsible for it; queries for other zones are forwarded by the local server to the servers responsible for those zones.
(1) Primary server: provides the main zone for client resolution; if the primary DNS server goes down, the secondary DNS server takes over.
(2) Secondary server: if the primary DNS server stays unresponsive for too long, the secondary also stops serving. Zone synchronization between primary and secondary uses periodic checks plus notification: the secondary periodically checks the records on the primary and synchronizes as soon as it detects a change, and the primary immediately notifies the secondaries to update their records when its data is modified.
(3) Caching server: a type of DNS server that neither maintains zone data nor performs authoritative resolution. It stores frequently used name-to-address resolution records locally to speed up subsequent lookups.
3. Resolvers: simple programs or subroutines that extract information from servers in response to queries about hosts in the name space.

DNS queries
DNS queries are either recursive or iterative.
1. Recursive query: used by a client querying a DNS server.
2. Iterative query: used by a DNS server querying other DNS servers.

BIND and its configuration
BIND overview
BIND (Berkeley Internet Name Daemon) is the most widely used name-resolution program on the global Internet, providing secure, reliable and efficient service. BIND also supports the chroot (change root) jail mechanism, which restricts the bind service to operating only on its own configuration files, protecting the security of the server as a whole.
The BIND name service program is called named, and it has three key configuration files:
Main configuration file (/etc/named.conf): only 58 lines; excluding comments and blank lines, only about 30 effective parameters remain, and they define how the bind service runs.
Zone configuration file (/etc/named.rfc1912.zones): stores the locations of all the name-to-address mappings, like a book's table of contents.
Data configuration files (/var/named): the files holding the actual name-to-address mapping data.

2. Security-related configuration in BIND
(1) BIND's four built-in ACLs:
none: no hosts;
any: any host;
localhost: the local host;
localnets: the networks that the local host's IP addresses belong to;
(2) Access-control directives:
allow-query: which hosts may query;
allow-transfer: which hosts may request zone transfers (the default is all hosts);
allow-recursion: which hosts may send recursive queries to this DNS server;
allow-update: which hosts may dynamically update records in the zone database file, mainly used for DDNS.

DNS debugging tools and commands
The common tools are host, nslookup and dig.
1. The dig command tests the DNS system and does not consult the hosts file. Usage: dig [@dns-server] name [type]
2. The host command is used much like dig. Usage: host name [dns-server]
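Forward and reverse resolution as defined above can be exercised straight from the standard library, without dig or host. A minimal sketch using Python's socket module; it resolves localhost, so no external DNS server is needed:

```python
import socket

# Forward resolution: domain name -> IP address
ip = socket.gethostbyname("localhost")
print("forward:", ip)  # typically 127.0.0.1

# Reverse resolution: IP address -> domain name
name, aliases, addrs = socket.gethostbyaddr("127.0.0.1")
print("reverse:", name)
```

In practice these calls go through the system resolver, which may consult the hosts file as well as DNS; that is exactly the difference the article notes about dig, which queries DNS servers directly.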
Creating a virtual machine with the KVM service

Contents
(1) Install the KVM components
(2) Write and use the NAT startup script
(3) Start the virtual machine in NAT mode

Preparation
Install CentOS 7.2 in VMware Workstation using the provided CentOS-7-x86_64-DVD-1511.iso image; disable the firewall, configure the SELinux security rules, and set an IP address. Use the provided kvm_yum folder as the YUM repository.

Steps
1. Install KVM. Upload the provided kvm_yum folder to /opt, configure it as a local YUM repository, and install the packages:
[root@localhost ~]# yum install -y qemu-kvm openssl libvirt
Start the libvirt service:
[root@localhost ~]# systemctl start libvirtd
Link /usr/libexec/qemu-kvm to /usr/bin/qemu-kvm:
[root@localhost ~]# ln -s /usr/libexec/qemu-kvm /usr/bin/qemu-kvm
2. Create a NAT-mode KVM virtual machine. Upload the cirros-0.3.3-x86_64-disk.img image and the qemu-ifup-NAT script to /root and make the script executable. Start the KVM virtual machine with the qemu-kvm command:
[root@localhost ~]# qemu-kvm -m 1024 -drive file=/root/cirros-0.3.3-x86_64-disk.img,if=virtio -net nic,model=virtio -net tap,script=/root/qemu-ifup-NAT.txt -nographic -vnc :1
Warning: dnsmasq is already running. No need to run it again.
/root/qemu-ifup-NAT.txt: line 69: ifconfig: command not found
[ 0.000000] Initializing cgroup subsys cpuset
[ 0.000000] Initializing cgroup subsys cpu
[ 0.000000] Linux version 3.2.0-80-virtual (buildd@batsu) (gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) ) #116-Ubuntu SMP Mon Mar 23 17:28:52 UTC 2015
(kernel boot messages, device probing and the cirros first-boot network output omitted; the guest eventually prints its login banner)

login as 'cirros' user. default password: 'cubswin:)'. use 'sudo' for root.
cirros login: cirros
Password:
$ ip addr list
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 52:54:00:12:34:56 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe12:3456/64 scope link
       valid_lft forever preferred_lft forever
The virtual machine is now created. Log in with the cirros user (username cirros, password cubswin:)), then run the ip addr list command to view the IP address, and finally run route -n to inspect the routing table.