Table of contents
1. Image scanning: ImagePolicyWebhook
2. Detecting pods with sysdig
3. clusterrole
4. AppArmor
5. PodSecurityPolicy
6. Network policy
7. Dockerfile checks and yaml file issues
8. Pod security
9. Creating a SA
10. Scanning image security with trivy
11. Creating a secret
12. kube-bench
13. gVisor
14. Auditing
15. Default network policy
16. falco detection output log format
Kubernetes exam in action: exam information
2 hours
15-20 questions
Booking works the same as for CKA; results come out 32 hours later
The maximum score is under 100 (87 or 93), and 67 points is a pass
Practice environment
4 environments, 1 console
NAT subnet 192.168.26.0
Mock exam questions
1. Image scanning: ImagePolicyWebhook
Switch cluster: kubectl config use-context k8s
Context
A container image scanner is set up on the cluster, but it's not yet fully integrated into the cluster's configuration. When complete, the container image scanner shall scan for and reject the use of vulnerable images.
Task:
You have to complete the entire task on the cluster's master node, where all services and files have been prepared and placed.
Given an incomplete configuration in the directory /etc/kubernetes/aa and a functional container image scanner with HTTPS endpoint http://192.168.26.60:1323/image_policy:
1. Enable the necessary plugins to create an image policy.
2. Validate the control configuration and change it to an implicit deny.
3. Edit the configuration to point to the provided HTTPS endpoint correctly.
Finally, test if the configuration is working by trying to deploy the vulnerable resource /cks/1/web1.yaml.
Solution
ImagePolicyWebhook
Keywords: image_policy, deny
1. Switch cluster, identify the master node, ssh to the master.
2. ls /etc/kubernetes/xxx
3. vi /etc/kubernetes/xxx/xxx.yaml and change true to false; set the https address in the same directory's kubeconf; the config directory must be mounted into the api-server as a volume.
4. Enable ImagePolicyWebhook and add - --admission-control-config-file=
5. systemctl restart kubelet
6. kubectl run pod1 --image=nginx
Example:
Edit /etc/kubernetes/manifests/kube-apiserver.yaml
Add the ImagePolicyWebhook-related settings
Restart the api-server: systemctl restart kubelet
Verify that creating a pod from an image now fails
Change the policy in /etc/kubernetes/admission/admission_config.yaml to defaultAllow: true
Verify that creating a pod from an image works again
$ ls /etc/kubernetes/aa/
admission_config.yaml  apiserver-client-cert.pem  apiserver-client-key.pem  external-cert.pem  external-key.pem  kubeconf
$ cd /etc/kubernetes/aa
$ cat kubeconf
apiVersion: v1
kind: Config
# clusters refers to the remote service.
clusters:
- cluster:
    certificate-authority: /etc/kubernetes/aa/external-cert.pem  # CA for verifying the remote service.
    server: http://192.168.26.60:1323/image_policy  # URL of remote service to query. Must use 'https'.
  name: image-checker
contexts:
- context:
    cluster: image-checker
    user: api-server
  name: image-checker
current-context: image-checker
preferences: {}
# users refers to the API server's webhook configuration.
users:
- name: api-server
  user:
    client-certificate: /etc/kubernetes/aa/apiserver-client-cert.pem  # cert for the webhook admission controller to use
    client-key: /etc/kubernetes/aa/apiserver-client-key.pem  # key matching the cert
$ cat admission_config.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: ImagePolicyWebhook
  configuration:
    imagePolicy:
      kubeConfigFile: /etc/kubernetes/aa/kubeconf
      allowTTL: 50
      denyTTL: 50
      retryBackoff: 500
      defaultAllow: false

# Modify the api-server configuration
$ cat /etc/kubernetes/manifests/kube-apiserver.yaml
...............
  - command:
    - kube-apiserver
    - --admission-control-config-file=/etc/kubernetes/aa/admission_config.yaml  # add this line
    - --advertise-address=192.168.211.40
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction,ImagePolicyWebhook  # modify this line
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
...........
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
    - mountPath: /etc/kubernetes/aa  # add this line
      name: k8s-admission  # add this line
      readOnly: true  # add this line
..............
  - hostPath:  # add this line
      path: /etc/kubernetes/aa  # add this line
      type: DirectoryOrCreate  # add this line
    name: k8s-admission  # add this line
  - hostPath:
      path: /usr/local/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-local-share-ca-certificates
  - hostPath:
      path: /usr/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-share-ca-certificates
status: {}

$ k get nodes
NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   9d    v1.20.1
node1    Ready    <none>                 9d    v1.20.1
node2    Ready    <none>                 9d    v1.20.1

# Creating a pod fails
$ k run test --image=nginx
Error from server (Forbidden): pods "test" is forbidden: Post "https://external-service:1234/check-image?timeout=30s": dial tcp: lookup external-service on 8.8.8.8:53: no such host

# Modify the admission_config.yaml configuration
$ vim /etc/kubernetes/aa/admission_config.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: ImagePolicyWebhook
  configuration:
    imagePolicy:
      kubeConfigFile: /etc/kubernetes/aa/kubeconf
      allowTTL: 50
      denyTTL: 50
      retryBackoff: 500
      defaultAllow: true  # change this line to true

# Restart the api-server
$ ps -ef | grep api
root  78871  39023  0 20:17 pts/3  00:00:00 grep --color=auto api
$ mv ../kube-apiserver.yaml .  # move the manifest back so the static pod restarts

# Creating the pod now succeeds
$ k run test --image=nginx
pod/test created
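In the exam itself, the final validation uses the provided vulnerable resource rather than a plain nginx pod. A sketch of that check (the exact rejection message depends on the scanner backend):

$ kubectl apply -f /cks/1/web1.yaml   # with defaultAllow: false and a reachable scanner, expect the vulnerable image to be rejected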
2. Detecting pods with sysdig
Switch cluster: kubectl config use-context k8s
You may use your browser to open one additional tab to access sysdig's documentation or Falco's documentation.
Task:
Use runtime detection tools to detect anomalous processes spawning and executing frequently in the single container belonging to Pod redis.
Two tools are available to use:
sysdig
falco
The tools are pre-installed on the cluster's worker node only; they are not available on the base system or the master node.
Using the tool of your choice (including any non pre-installed tool), analyse the container's behaviour for at least 30 seconds, using filters that detect newly spawning and executing processes. Store an incident file at /opt/2/report, containing the detected incidents, one per line, in the following format:
[timestamp],[uid],[processName]
Solution
Keywords: sysdig
0. Remember to use sysdig -l | grep to look up the relevant output fields.
1. Switch cluster, locate the pod, then ssh to the node it runs on.
2. Run sysdig, paying attention to the required format and duration, and redirect the result to the target file.
3. sysdig -M 30 -p "*%evt.time,%user.uid,%proc.name" container.id=<container id> > /opt/2/report
Example
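A minimal sketch of the whole workflow; the node name node1 and the container id abc123def456 are placeholders for illustration, not values from the exam:

$ kubectl get pod redis -o wide        # find which worker node runs the pod
$ ssh node1
$ crictl ps | grep redis               # look up the container id (use docker ps on Docker-based nodes)
$ sysdig -l | grep time                # confirm the field names used in the output format
$ sysdig -M 30 -p "*%evt.time,%user.uid,%proc.name" container.id=abc123def456 > /opt/2/report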
3. clusterrole
Switch cluster: kubectl config use-context k8s
Context
A Role bound to a Pod's serviceAccount grants overly permissive permissions.
Complete the following tasks to reduce the set of permissions.
Task
Given an existing Pod named web-pod running in the namespace monitoring, edit the Role bound to the Pod's serviceAccount sa-dev-1 to only allow performing list operations, only on resources of type Endpoints.
Create a new Role named role-2 in the namespace monitoring which only allows performing update operations, only on resources of type persistentvolumeclaims.
Create a new RoleBinding named role-2-bindding, binding the newly created Role to the Pod's serviceAccount.
Solution
RBAC
Keywords: role, rolebinding
1. Find the Role referenced by the RoleBinding and restrict it to list on endpoints:
$ kubectl edit role role-1 -n monitoring
2. Remember: --verb is the permission, --resource is the object:
$ kubectl create role role-2 --verb=update --resource=persistentvolumeclaims -n monitoring
3. Create the binding to the serviceAccount:
$ kubectl create rolebinding role-2-bindding --role=role-2 --serviceaccount=monitoring:sa-dev-1 -n monitoring
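A quick sanity check, not required by the task: kubectl auth can-i accepts an --as flag to impersonate the serviceAccount:

$ kubectl auth can-i list endpoints -n monitoring --as=system:serviceaccount:monitoring:sa-dev-1                  # expect yes
$ kubectl auth can-i update persistentvolumeclaims -n monitoring --as=system:serviceaccount:monitoring:sa-dev-1   # expect yes
$ kubectl auth can-i delete endpoints -n monitoring --as=system:serviceaccount:monitoring:sa-dev-1                # expect no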
4. AppArmor
Switch cluster: kubectl config use-context k8s
Context
AppArmor is enabled on the cluster's worker node. An AppArmor profile is prepared, but not enforced yet. You may use your browser to open one additional tab to access the AppArmor documentation.
Task
On the cluster's worker node, enforce the prepared AppArmor profile located at /etc/apparmor.d/nginx_apparmor. Edit the prepared manifest file located at /cks/4/pod1.yaml to apply the AppArmor profile. Finally, apply the manifest file and create the Pod specified in it.
Solution
apparmor
Keywords: apparmor
1. Switch cluster, check the nodes, ssh to the worker node.
2. Inspect the profile file and its profile name:
$ cd /etc/apparmor.d
$ vi nginx_apparmor
$ apparmor_status | grep nginx-profile-3   # no match means the profile is not loaded
$ apparmor_parser -q nginx_apparmor        # load and enable the profile
3. Edit the yaml to apply the profile; copy the example from the docs and adjust the container name and the local profile name:
$ vi /cks/4/pod1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-apparmor
  annotations:
    container.apparmor.security.beta.kubernetes.io/hello: localhost/nginx-profile-3
spec:
  containers:
  - name: hello
    image: busybox
    command: [ "sh", "-c", "echo 'Hello AppArmor!' && sleep 1h" ]
4. Create the pod from the edited manifest:
$ kubectl apply -f /cks/4/pod1.yaml
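An optional check that the profile was applied (the pod name comes from the manifest above):

$ kubectl exec hello-apparmor -- cat /proc/1/attr/current   # should print nginx-profile-3 (enforce)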
5. PodSecurityPolicy
Switch cluster: kubectl config use-context k8s63
Context
A PodSecurityPolicy shall prevent the creation of privileged Pods in a specific namespace.
Task
Create a new PodSecurityPolicy named prevent-psp-policy, which prevents the creation of privileged Pods.
Create a new ClusterRole named restrict-access-role, which uses the newly created PodSecurityPolicy prevent-psp-policy.
Create a new serviceAccount named psp-denial-sa in the existing namespace development.
Finally, create a new ClusterRoleBinding named dany-access-bind, which binds the newly created ClusterRole restrict-access-role to the newly created serviceAccount.
Solution
PodSecurityPolicy
Keywords: psp, policy, privileged
0. Switch cluster and make sure the admission plugin is enabled:
$ vi /etc/kubernetes/manifests/kube-apiserver.yaml
- --enable-admission-plugins=NodeRestriction,PodSecurityPolicy
$ systemctl restart kubelet
1. Copy a psp example from the docs and change it to deny privileged pods:
$ cat psp.yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: prevent-psp-policy
spec:
  privileged: false
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'
$ kubectl create -f psp.yaml
2. Create the corresponding ClusterRole:
$ kubectl create clusterrole restrict-access-role --verb=use --resource=podsecuritypolicy --resource-name=prevent-psp-policy
3. Create the sa in the required namespace:
$ kubectl create sa psp-denial-sa -n development
4. Create the binding:
$ kubectl create clusterrolebinding dany-access-bind --clusterrole=restrict-access-role --serviceaccount=development:psp-denial-sa
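An optional check that the binding works, again using impersonation:

$ kubectl auth can-i use podsecuritypolicy/prevent-psp-policy --as=system:serviceaccount:development:psp-denial-sa   # expect yes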
6. Network policy
Switch cluster: kubectl config use-context k8s
Create a NetworkPolicy named pod-access to restrict access to Pod products-service running in namespace development. Only allow the following Pods to connect to Pod products-service:
Pods in the namespace testing
Pods with label environment: staging, in any namespace
Make sure to apply the NetworkPolicy. You can find a skeleton manifest file at /cks/6/p1.yaml.
Solution
NetworkPolicy
Keywords: NetworkPolicy
1. First check the labels of the target pod:
$ kubectl get pod -n development --show-labels
2. Check the label on the target namespace; set one if it is missing:
$ kubectl label ns testing name=testing
3. Write the networkpolicy:
$ cat /cks/6/p1.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: pod-access
  namespace: development
spec:
  podSelector:
    matchLabels:
      environment: staging   # the labels of the products-service pod found in step 1
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: testing
  - from:
    - namespaceSelector:
        matchLabels: {}
      podSelector:
        matchLabels:
          environment: staging
$ kubectl create -f /cks/6/p1.yaml
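A rough way to test the policy afterwards; <pod-ip> stands for the IP looked up in the first command, and a connection timing out from a non-matching namespace is the expected result:

$ kubectl get pod products-service -n development -o wide                              # note the pod IP
$ kubectl run tmp --rm -it --image=busybox -n testing -- wget -O- -T 2 <pod-ip>        # should connect
$ kubectl run tmp --rm -it --image=busybox -n default -- wget -O- -T 2 <pod-ip>        # should time out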
7. Dockerfile checks and yaml file issues
Switch cluster: kubectl config use-context k8s
Task
Analyze and edit the given Dockerfile (based on the ubuntu:16.04 image) /cks/7/Dockerfile, fixing two instructions present in the file that are prominent security/best-practice issues.
Analyze and edit the given manifest file /cks/7/deployment.yaml, fixing two fields present in the file that are prominent security/best-practice issues.
Solution
Keywords: Dockerfile, issues
1. Pay attention to how many errors the task says the Dockerfile has; comment out USER root.
2. yaml issues: check the api version, privileged settings, and the image version; again, check how many errors the task mentions.
Example:
Dockerfile
# build container stage 1
FROM ubuntu:20.04
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y golang-go=2:1.13~1ubuntu2
COPY app.go .
RUN pwd
RUN CGO_ENABLED=0 go build app.go

# app container stage 2
FROM alpine:3.12.0
RUN addgroup -S appgroup && adduser -S appuser -G appgroup -h /home/appuser
RUN rm -rf /bin/*
COPY --from=0 /app /home/appuser/
USER appuser
CMD ["/home/appuser/app"]
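The deployment.yaml half of the task is not reproduced here; below is a hypothetical sketch of what the two typical fixes look like (the names, image, and field values are assumptions for illustration, not the actual exam file):

# Hypothetical sketch -- the actual /cks/7/deployment.yaml differs.
apiVersion: apps/v1          # fix 1: a current apiVersion instead of a deprecated one
kind: Deployment
metadata:
  name: deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.21    # pin a concrete tag rather than :latest
        securityContext:
          privileged: false  # fix 2: do not run the container privileged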
8. Pod security
Switch cluster: kubectl config use-context k8s
Context
It is best practice to design containers to be stateless and immutable.
Task
Inspect Pods running in namespace testing and delete any Pod that is either not stateless or not immutable. Use the following strict interpretation of stateless and immutable:
Pods being able to store data inside containers must be treated as not stateless. You don't have to worry whether data is actually stored inside containers or not already.
Pods being configured to be privileged in any way must be treated as potentially not stateless and not immutable.
Solution
Keywords: stateless, immutable
1. Get all pods in the namespace.
2. Check each pod for privileged settings (privi*).
3. Check each pod for volumes.
4. Delete every pod that is privileged or mounts a volume:
$ kubectl get pod pod1 -n testing -o jsonpath={.spec.volumes} | jq
$ kubectl get pod sso -n testing -o yaml | grep "privi.*: true"
$ kubectl delete pod xxxxx -n testing
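To avoid checking pods one by one, a small loop covers the whole namespace; nothing here is assumed beyond the namespace name from the task:

$ for p in $(kubectl get pods -n testing -o name); do
    echo "== $p"
    kubectl get $p -n testing -o jsonpath='{.spec.volumes}'; echo
    kubectl get $p -n testing -o yaml | grep -i "privileged: true"
  done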