Changing the Way of Continuous Delivery with Docker (Part 1)

Summary: Market demands and products in the internet industry change constantly, forcing organizations to adapt by continuously delivering updates to their production environments.


Introduction

This post is the first part of the series "Changing the Way of Continuous Delivery with Docker." It discusses the background and challenges of traditional continuous delivery and how Docker reshapes the delivery process. In the second part of the series, we will explore how to use Docker and walk through its delivery processes.

Background

Market demands and products in the internet industry change constantly, forcing organizations to adapt by continuously delivering updates to their production environments. This approach to development is known as Continuous Integration (CI) and Continuous Delivery (CD), which turns development, production, and delivery into a single cyclical process.

However, this approach introduces several problems in the long run. One such problem is handing over poorly documented production environments to successors who have little experience with them.

Furthermore, debugging in the production environment before the final launch is another obstacle: the environment is not only hard to maintain but also requires constant updates because demands and products keep changing.

Traditional CD Processes: Overview and Challenges

Traditional development solutions are built around Continuous Integration (CI) and Continuous Delivery (CD). These practices turn development, production, and delivery into a cyclical process.


  • Integration (combining two things together): Upon submission of code, code is integrated with code; upon compiling, code is integrated with logic; upon testing, code is integrated with features; and from build to deployment and release, code is integrated with systems, and systems with other systems. Every time two things are combined, integration occurs.
  • Continuous testing: Just like a physical examination for your system, testing helps you rapidly discover and eliminate faults before the final launch, preventing untested features, missing features, or defective code segments from being combined into a release. In general, this process should be continuous.
  • Feedback loops: Each step in a continuous integration pipeline includes a feedback loop. This allows for rapid feedback, simplifying problem identification and rectification.

For example, after submitting code, a developer should first make sure that it does not conflict with other code. The developer then checks that the code compiles successfully and passes the unit tests. Only after meeting all these requirements can the developer move on to the next item. If, however, the submitted code does not conflict but feedback shows that it breaks compilation of the whole system, the developer needs to fix the code.

Feedback must be promptly traced back to the point of development so that developers know what to do next. Additionally, unit tests should be kept separate from system function or integration tests, because these tests run at different speeds and generate different types of feedback.
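
To make these fast-feedback stages concrete, here is a minimal sketch of a commit-stage script. It assumes a Maven-based Java project; the commands and the integration-test profile name are illustrative assumptions, not details from this article.

```bash
#!/usr/bin/env bash
# Minimal sketch of the fast feedback stages described above.
# Assumes a Maven-based Java project; commands are illustrative.
set -e                 # stop at the first failure so feedback is immediate

mvn -q compile         # stage 1: does the change still compile?
mvn -q test            # stage 2: fast unit tests, run on every commit

# Slower system/integration tests belong to a separate, later stage so
# they do not delay unit-test feedback, for example:
# mvn -q verify -Pintegration-tests
```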

Building a CD Process: Environment Requirements

Generally, a company or a project requires multiple environments. Typically, the production environment of an enterprise is placed in a public cloud, while development takes place in an offline environment. The public cloud environment may be inconsistent with the offline development environment, resulting in problems during the final product launch.

CD Processes: Problems Encountered

Problems can still occur even if a complete continuous integration environment has been built systematically. Developers may rely on different language environments or packages, causing conflicts in the build environment and making it difficult to maintain.


Origin of Problems

A majority of problems arise because developers deliver only the code and its related dependencies, while operations, in reality, also needs the operating environment, an environment description, dependencies, databases, and caches.


Docker: Transforming the Way of Software Delivery


With Docker, all information required to generate the environment is included in the delivery. The code is accompanied by a description file that specifies the environment as well as its dependencies, caches, configurations, variables, containers, and jar packages.

This approach is analogous to packing the code and all other required components into a container and delivering the container to the operations team. The core feature of this container is its portability: since it includes all environmental dependencies, it produces the same result regardless of the operating environment.
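
As an illustration of such a description file, here is a minimal Dockerfile sketch for a Java service. The base image, file paths, jar name, and port are assumptions made for the example, not details from this article.

```dockerfile
# Minimal sketch of a description file for the delivery discussed above.
# The base image, paths, jar name, and port are illustrative assumptions.

# Operating environment
FROM openjdk:8-jre

# Configuration and variables
ENV APP_ENV=production

# Environment description, configuration, and the delivered jar package
COPY config/app.properties /app/
COPY target/app.jar /app/app.jar

# Port the service listens on and the command to run
EXPOSE 8080
CMD ["java", "-jar", "/app/app.jar"]
```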

Competency of Docker

Before Docker came into being, creating containers involved many specification constraints. Docker is not only a piece of software but also a new approach to implementing containers.

  • Environment-describing capability: a Dockerfile can describe the whole environment required by a piece of software.
  • Hierarchical file system: Docker images provide a form of package management in which each operation is recorded as a layer under version management (see the sketch after this list).
  • Separation of OS: Docker shields the application from differences between operating systems at runtime.
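
As a small illustration of the layered file system, the commands below list the layers produced by each Dockerfile instruction for an already-built image. They assume Docker is installed; the image name myapp:1.0 is hypothetical.

```bash
# Each instruction in a Dockerfile becomes a read-only layer.
# "myapp:1.0" is a hypothetical image name used only for illustration.
docker history myapp:1.0   # lists the layers and the instruction that created each one

# Show the layer digests that make up the image's file system
docker image inspect myapp:1.0 --format '{{.RootFS.Layers}}'
```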

Docker is a Container Technology

In traditional virtualization technology, hardware and software are virtualized into virtual machines. Each virtual machine runs a complete operating system that is well isolated but may take several minutes to start.

The major difference between container technology and traditional virtualization is that containers do not carry a complete operating system layer; instead, they share the OS kernel of the host machine.

The prime advantage of container technology is that containers start in seconds, as they do not need to boot a full virtual machine. Additionally, since containers have lower overhead than virtual machines, users can deploy more containers on a single server.
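
A quick way to observe this kernel sharing, assuming Docker is installed on a Linux host and the public alpine image can be pulled:

```bash
uname -r                          # kernel version reported by the host
docker run --rm alpine uname -r   # the container reports the same kernel version
```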

Three Steps to Software Delivery with Docker

  1. Build: a description file specifies the operating system foundation, the environment, the port to expose, and the scripts to run. Docker builds this description into an image saved in local storage.
  2. Ship: pushes the image to a remote Docker registry.
  3. Run: pulls the image from the registry when it is time to run. Because the container carries its complete environment description as a single unit, it renders the same result in whatever environment it runs (see the sketch below).
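
A minimal sketch of the three steps as Docker CLI commands; the registry address, image name, tag, and port are illustrative assumptions.

```bash
# 1. Build: turn the Dockerfile in the current directory into an image
docker build -t registry.example.com/team/myapp:1.0 .

# 2. Ship: push the image to a remote registry
docker push registry.example.com/team/myapp:1.0

# 3. Run: pull the image (if not already present) and start a container from it
docker run -d -p 8080:8080 registry.example.com/team/myapp:1.0
```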

Case Study: BBC News

BBC News is a global news website company with over 500 developers distributed around the world. It has more than ten CI environments, as it uses different languages in different areas of the world. BBC News had to figure out how to unify the coding processes and manage the CI environments uniformly. The existing jobs took up to 60 minutes to schedule and run, and they were run sequentially. With Docker, the jobs are now run in parallel, significantly speeding up the process. Furthermore, by using containers, the developers do not have to worry about the CI environments. Visit Docker to learn more about BBC's success story.

Conclusion

This post introduced Docker and its role in Continuous Integration and Continuous Delivery. Continuous Delivery with Docker mainly focuses on reducing application risks while delivering value faster through reliable software production in shorter iterations.

In the next part of this series, we will examine how to use Docker and describe its build and unit-test (UT) environments.
