Getting Started with Knative on Alibaba Cloud Container Service Kubernetes


Overview

Knative Serving builds on Kubernetes and Istio to support deploying and serving of serverless applications and functions. Serving is easy to get started with and scales to support advanced scenarios.
This article describes how to quickly install Knative Serving on Alibaba Cloud Container Service Kubernetes and have it scale applications automatically.

Installing Knative Serving

Prepare Kubernetes

Alibaba Cloud Container Service now supports Kubernetes 1.12.6, and you can easily create Kubernetes clusters through the Container Service console; see Create a Kubernetes cluster.

Installing Istio

Knative depends on Istio for traffic routing and ingress. Alibaba Cloud Container Service Kubernetes provides a quick one-click deployment to install and configure Istio; see Deploy Istio.
Log on to the Container Service console. Under Kubernetes, click Clusters in the left-side navigation pane. On the right of the target cluster, choose More > Deploy Istio.

You can view your deployment results in the following way:

  • In the left-side navigation pane, choose Application > Pods, and select the cluster and namespace in which Istio is deployed. You can see the pods created by the Istio deployment. You can also verify this from the command line, as shown below.

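If you have kubectl configured for this cluster, you can run a quick check from the command line. This assumes Istio was deployed into the default istio-system namespace:

kubectl get pods -n istio-system

All Istio control-plane pods should be in the Running state.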

Installing Istio Ingress gateway

In the left-side navigation pane, choose Store > App Catalog. Click the ack-istio-ingressgateway chart.

Click the Values tab and set the parameters. You can customize the configuration, for example to enable specific ports, or to choose between an intranet SLB and an Internet SLB by setting the serviceAnnotations parameter.
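
For example, a values fragment along the following lines would request an intranet SLB instead of an Internet-facing one. This is only an illustrative sketch: the serviceAnnotations parameter name comes from the chart, but the annotation key shown here is the commonly used Alibaba Cloud load balancer address-type annotation and may vary with your cluster version:

# Illustrative values fragment for ack-istio-ingressgateway (not the chart's full schema).
# The annotation requests an intranet SLB; omit it to get an Internet SLB.
serviceAnnotations:
  service.beta.kubernetes.io/alicloud-loadbalancer-address-type: "intranet"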

In the left-side navigation pane, choose Application > Pods. Select the target cluster and the istio-system namespace to view the pods of the deployed Istio ingress gateway.
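
In addition to checking the pod, you can confirm that the gateway Service has been assigned an SLB address (this assumes the chart installs it as istio-ingressgateway in the istio-system namespace, which the later steps in this article also rely on):

kubectl get svc istio-ingressgateway -n istio-system

The EXTERNAL-IP column should show the address of the SLB instance.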

Installing Knative CRD

Log on to the Container Service console. In the left-side navigation pane, choose Store > App Catalog. Click the ack-knative-init chart.

In the Deploy area on the right, select the target Cluster from the drop-down list, and then click DEPLOY.
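
Once the chart has been deployed, you can verify that the Knative custom resource definitions (CRDs) were registered. This is a quick sketch; the exact CRD list depends on the Knative version shipped with the chart:

kubectl get crd | grep knative.dev

You should see CRDs such as services.serving.knative.dev, configurations.serving.knative.dev, revisions.serving.knative.dev, and routes.serving.knative.dev.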

Installing Knative Serving

Log on to the Container Service console. In the left-side navigation pane, choose Store > App Catalog. Click the ack-knative-serving chart.

Click the Values tab and set the parameters. You can customize the values or use the default Istio IngressGateway configuration. Then click DEPLOY.

At this point, all four Helm charts required for Knative Serving have been installed.
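
If you prefer the command line, you can also list the installed releases with Helm (assuming the Helm CLI on your machine is configured against this cluster):

helm list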

Getting Started with Knative

Deploy the autoscale sample

To deploy the sample Knative Service, run the following command:

kubectl create -f autoscale.yaml

Example of the autoscale.yaml:

apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: autoscale-go
  namespace: default
spec:
  runLatest:
    configuration:
      revisionTemplate:
        metadata:
          annotations:
            # Target 10 in-flight-requests per pod.
            autoscaling.knative.dev/target: "10"
            autoscaling.knative.dev/class:  kpa.autoscaling.knative.dev
        spec:
          container:
            image: registry.cn-beijing.aliyuncs.com/wangxining/autoscale-go:0.1
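
After the Service is created, you can check its status and the domain that Knative assigned to it. This assumes the Knative Serving CRDs register the ksvc short name, as upstream Knative does:

kubectl get ksvc autoscale-go -n default

Wait until the READY column reports True before sending traffic.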

Load the autoscale service

Obtain the IP address of the istio-ingressgateway service in the istio-system namespace, and export it as the IP_ADDRESS environment variable.

export IP_ADDRESS=`kubectl get svc istio-ingressgateway --namespace istio-system --output jsonpath="{.status.loadBalancer.ingress[*].ip}"`

Make a request to the autoscale app to see it consume some resources.

curl --header "Host: autoscale-go.default.{domain.name}" "http://${IP_ADDRESS?}?sleep=100&prime=10000&bloat=5"

Note: replace {domain.name} with your own domain name. The domain name used in this sample is aliyun.com.

curl --header "Host: autoscale-go.default.aliyun.com" "http://${IP_ADDRESS?}?sleep=100&prime=10000&bloat=5"
Allocated 5 Mb of memory.
The largest prime less than 10000 is 9973.
Slept for 100.16 milliseconds.
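
The domain suffix that Knative appends to services (aliyun.com in this sample) is configured in the config-domain ConfigMap in the knative-serving namespace. A minimal sketch of switching it to your own domain, assuming the ack-knative-serving chart keeps the upstream ConfigMap layout:

kubectl edit configmap config-domain -n knative-serving
# Under data, map your domain to an empty selector, for example:
#   data:
#     mydomain.example: ""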

Install the hey load generator:

go get -u github.com/rakyll/hey

Send 30 seconds of traffic maintaining 50 in-flight requests:

hey -z 30s -c 50 \
  -host "autoscale-go.default.aliyun.com" \
  "http://${IP_ADDRESS?}?sleep=100&prime=10000&bloat=5" \
  && kubectl get pods

While the traffic runs for 30 seconds, Knative Serving scales out automatically as the number of requests increases.

Summary:
  Total:    30.1126 secs
  Slowest:    2.8528 secs
  Fastest:    0.1066 secs
  Average:    0.1216 secs
  Requests/sec:    410.3270

  Total data:    1235134 bytes
  Size/request:    99 bytes

Response time histogram:
  0.107 [1]    |
  0.381 [12305]    |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.656 [0]    |
  0.930 [0]    |
  1.205 [0]    |
  1.480 [0]    |
  1.754 [0]    |
  2.029 [0]    |
  2.304 [0]    |
  2.578 [27]    |
  2.853 [23]    |


Latency distribution:
  10% in 0.1089 secs
  25% in 0.1096 secs
  50% in 0.1107 secs
  75% in 0.1122 secs
  90% in 0.1148 secs
  95% in 0.1178 secs
  99% in 0.1318 secs

Details (average, fastest, slowest):
  DNS+dialup:    0.0001 secs, 0.1066 secs, 2.8528 secs
  DNS-lookup:    0.0000 secs, 0.0000 secs, 0.0000 secs
  req write:    0.0000 secs, 0.0000 secs, 0.0023 secs
  resp wait:    0.1214 secs, 0.1065 secs, 2.8356 secs
  resp read:    0.0001 secs, 0.0000 secs, 0.0012 secs

Status code distribution:
  [200]    12356 responses



NAME                                             READY   STATUS        RESTARTS   AGE
autoscale-go-00001-deployment-5fb497488b-2r76v   2/2     Running       0          29s
autoscale-go-00001-deployment-5fb497488b-6bshv   2/2     Running       0          2m
autoscale-go-00001-deployment-5fb497488b-fb2vb   2/2     Running       0          29s
autoscale-go-00001-deployment-5fb497488b-kbmmk   2/2     Running       0          29s
autoscale-go-00001-deployment-5fb497488b-l4j9q   1/2     Terminating   0          4m
autoscale-go-00001-deployment-5fb497488b-xfv8v   2/2     Running       0          29s
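
After the load stops, Knative scales the deployment back down again; with the default KPA settings it eventually scales to zero when no more requests arrive. You can observe this from the command line:

kubectl get pods --watch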

Conclusion

With Alibaba Cloud Container Service Kubernetes, you can quickly install Knative Serving and have it scale applications automatically. You are welcome to install Knative on Alibaba Cloud Container Service and integrate it into your own products.
