Getting Started with Knative on Alibaba Cloud Container Service Kubernetes

Overview

Knative Serving builds on Kubernetes and Istio to support deploying and serving of serverless applications and functions. Serving is easy to get started with and scales to support advanced scenarios.
This article introduces how to quickly install Knative Serving on Alibaba Cloud Container Service for Kubernetes and try out its automatic scaling.

Installing Knative Serving

Prepare Kubernetes

Alibaba Cloud Container Service now supports Kubernetes 1.12.6, and you can easily create a Kubernetes cluster through the Container Service console; see Create a Kubernetes cluster.
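
After the cluster is created, you can run a quick sanity check from your workstation; this assumes kubectl is already configured with the cluster's kubeconfig:

# All worker nodes should be listed in the Ready state
kubectl get nodes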

Installing Istio

Knative depends on Istio for traffic routing and ingress. Alibaba Cloud Container Service for Kubernetes provides a quick one-click deployment to install and configure Istio; see Deploy Istio.
Log on to the Container Service console. Under Kubernetes, click Clusters in the left-side navigation pane. On the right of the target cluster, choose More > Deploy Istio.

You can view your deployment results in the following way:

  • In the left-side navigation pane, choose Application > Pods, select the cluster and the namespace in which Istio is deployed, and you can see the Istio pods that have been deployed.

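You can also verify the deployment from the command line; a minimal check, assuming kubectl is connected to the cluster and Istio was installed into the default istio-system namespace:

# All Istio control-plane pods should be in the Running state
kubectl get pods -n istio-system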

Installing Istio Ingress gateway

In the left-side navigation pane, choose Store > App Catalog, and then click ack-istio-ingressgateway.

Click the Values tab, and then set the parameters. You can customize the parameters, for example, to enable or disable specific ports, or to choose between an intranet SLB and an Internet SLB by setting the serviceAnnotations parameter.
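
For reference, here is a hedged sketch of what the Values might contain if you want an intranet SLB. The exact keys depend on the chart version; the annotation shown is the standard Alibaba Cloud load balancer address-type annotation:

# Hypothetical excerpt of the ack-istio-ingressgateway Values; verify the key names against your chart version
serviceAnnotations:
  # "intranet" requests an internal SLB; use "internet" (or omit the annotation) for an Internet-facing SLB
  service.beta.kubernetes.io/alicloud-loadbalancer-address-type: intranet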

In the left-side navigation pane, choose Application > Pods. Select the target cluster and the istio-system namespace to view the pod in which the Istio ingress gateway is running.
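
The same check can be done with kubectl; the label selector below assumes the chart's default istio=ingressgateway label:

# The gateway pod should be Running, and its Service should have been assigned an SLB address
kubectl get pods -n istio-system -l istio=ingressgateway
kubectl get svc istio-ingressgateway -n istio-system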

Installing Knative CRD

Log on to the Container Service console. In the left-side navigation pane, choose Store > App Catalog, and then click ack-knative-init.

In the Deploy area on the right, select the target Cluster from the drop-down list, and then click DEPLOY.
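
Once the chart is deployed, you can confirm from the command line that the Knative custom resource definitions have been registered:

# Expect the Knative Serving CRDs (services, configurations, revisions, routes) among the results
kubectl get crd | grep knative.dev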

Installing Knative Serving

Log on to the Container Service console. In the left-side navigation pane, choose Store > App Catalog, and then click ack-knative-serving.

Click the Values tab and set the parameters. You can customize the values, or keep the default Istio IngressGateway settings. Then click DEPLOY.

At this point, all four Helm charts required for Knative Serving have been installed.
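
You can double-check the result from the command line as well, assuming the charts install the Knative Serving components into the standard knative-serving namespace:

# The controller, autoscaler, activator, and webhook pods should all be Running
kubectl get pods -n knative-serving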

Getting Started with Knative

Deploy the autoscale sample

To deploy the sample Knative Service, run the following command:

kubectl create -f autoscale.yaml

The autoscale.yaml used in this example is as follows:

apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: autoscale-go
  namespace: default
spec:
  runLatest:
    configuration:
      revisionTemplate:
        metadata:
          annotations:
            # Target 10 in-flight-requests per pod.
            autoscaling.knative.dev/target: "10"
            autoscaling.knative.dev/class:  kpa.autoscaling.knative.dev
        spec:
          container:
            image: registry.cn-beijing.aliyuncs.com/wangxining/autoscale-go:0.1
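
After the Service is created, you can wait for it to become ready. A quick check using the full CRD resource name (the ksvc short name is also commonly available):

# The Service is ready once its latest created revision is also its latest ready revision
kubectl get services.serving.knative.dev autoscale-go -n default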

Load the autoscale service

Obtain the IP address of the istio-ingressgateway service in the istio-system namespace, and export it into the IP_ADDRESS environment variable.

export IP_ADDRESS=`kubectl get svc istio-ingressgateway --namespace istio-system --output jsonpath="{.status.loadBalancer.ingress[*].ip}"`
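
If your load balancer is exposed through a hostname rather than an IP address (both fields are part of the standard Kubernetes LoadBalancer status), the same jsonpath approach works with the hostname field:

export IP_ADDRESS=`kubectl get svc istio-ingressgateway --namespace istio-system --output jsonpath="{.status.loadBalancer.ingress[*].hostname}"`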

Make a request to the autoscale app to see it consume some resources. The query parameters drive the load: bloat allocates the given number of MB of memory, prime computes the largest prime below the given number, and sleep pauses for the given number of milliseconds.

curl --header "Host: autoscale-go.default.{domain.name}" "http://${IP_ADDRESS?}?sleep=100&prime=10000&bloat=5"

Note: replace {domain.name} with your own domain. The domain used in this sample is aliyun.com.

curl --header "Host: autoscale-go.default.aliyun.com" "http://${IP_ADDRESS?}?sleep=100&prime=10000&bloat=5"
Allocated 5 Mb of memory.
The largest prime less than 10000 is 9973.
Slept for 100.16 milliseconds.

Install the hey load generator:

go get -u github.com/rakyll/hey

Send 30 seconds of traffic maintaining 50 in-flight requests:

hey -z 30s -c 50 \
  -host "autoscale-go.default.aliyun.com" \
  "http://${IP_ADDRESS?}?sleep=100&prime=10000&bloat=5" \
  && kubectl get pods

While the 30 seconds of traffic is running, Knative Serving scales out automatically as the number of requests increases.

Summary:
  Total:    30.1126 secs
  Slowest:    2.8528 secs
  Fastest:    0.1066 secs
  Average:    0.1216 secs
  Requests/sec:    410.3270

  Total data:    1235134 bytes
  Size/request:    99 bytes

Response time histogram:
  0.107 [1]    |
  0.381 [12305]    |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.656 [0]    |
  0.930 [0]    |
  1.205 [0]    |
  1.480 [0]    |
  1.754 [0]    |
  2.029 [0]    |
  2.304 [0]    |
  2.578 [27]    |
  2.853 [23]    |


Latency distribution:
  10% in 0.1089 secs
  25% in 0.1096 secs
  50% in 0.1107 secs
  75% in 0.1122 secs
  90% in 0.1148 secs
  95% in 0.1178 secs
  99% in 0.1318 secs

Details (average, fastest, slowest):
  DNS+dialup:    0.0001 secs, 0.1066 secs, 2.8528 secs
  DNS-lookup:    0.0000 secs, 0.0000 secs, 0.0000 secs
  req write:    0.0000 secs, 0.0000 secs, 0.0023 secs
  resp wait:    0.1214 secs, 0.1065 secs, 2.8356 secs
  resp read:    0.0001 secs, 0.0000 secs, 0.0012 secs

Status code distribution:
  [200]    12356 responses



NAME                                             READY   STATUS        RESTARTS   AGE
autoscale-go-00001-deployment-5fb497488b-2r76v   2/2     Running       0          29s
autoscale-go-00001-deployment-5fb497488b-6bshv   2/2     Running       0          2m
autoscale-go-00001-deployment-5fb497488b-fb2vb   2/2     Running       0          29s
autoscale-go-00001-deployment-5fb497488b-kbmmk   2/2     Running       0          29s
autoscale-go-00001-deployment-5fb497488b-l4j9q   1/2     Terminating   0          4m
autoscale-go-00001-deployment-5fb497488b-xfv8v   2/2     Running       0          29s
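
The pod count matches the autoscaling annotation: with a target of 10 in-flight requests per pod and hey holding 50 concurrent requests, the KPA scales the deployment to roughly 50 / 10 = 5 pods, which is the number of Running pods shown above.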

Conclusion

With Alibaba Cloud Container Service for Kubernetes, you can quickly install Knative Serving and take advantage of automatic scaling. You are welcome to try Knative on Alibaba Cloud Container Service and integrate it into your own products.
