Access the k8s cluster through its RESTful API to create, read, update, and delete resources, for example when building a custom UI on top of the cluster.
The access-control configuration (credentials and authorization) has to be created in advance.
Some of the APIs are listed below.
curl -u admin:admin "https://localhost:6443/api/v1" -k
curl -u admin:admin "https://localhost:6443/api/v1/pods" -k
curl -u admin:admin "https://localhost:6443/api/v1/namespaces/{namespace}/pods" -k
curl -u admin:admin "https://localhost:6443/api/v1/namespaces/default/pods" -k
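Since the goal is to build a UI on top of these endpoints, here is a minimal Python sketch of the same call (not from the original article). It assumes the admin:admin basic-auth credentials used above, uses the requests library, and skips TLS verification to mirror curl -k; adapt all of these for a real deployment.
import requests

API = "https://localhost:6443"   # or https://{k8s_master_ip}:6443
AUTH = ("admin", "admin")        # basic-auth user from the curl examples

def list_pods(namespace="default"):
    # Same endpoint as the curl command above; verify=False mirrors curl -k.
    resp = requests.get(f"{API}/api/v1/namespaces/{namespace}/pods",
                        auth=AUTH, verify=False)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    for item in list_pods()["items"]:
        print(item["metadata"]["name"], item["status"]["phase"])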
Get node information
curl -u admin:admin "https://localhost:6443/api/v1/nodes/{nodename}" -k
curl -u admin:admin "https://localhost:6443/api/v1/nodes/tensorflow1" -k
...
"status": {
"capacity": {
"cpu": "4",
"memory": "7970316Ki",
"pods": "110"
},
"allocatable": {
"cpu": "4",
"memory": "7867916Ki",
"pods": "110"
},
...
Get namespace information
curl -u admin:admin "https://localhost:6443/api/v1/namespaces/{namespace}" -k
curl -u admin:admin "https://localhost:6443/api/v1/namespaces/default" -k
Get quota information
curl -u admin:admin "https://localhost:6443/api/v1/namespaces/{namespace}/resourcequotas/" -k
curl -u admin:admin "https://localhost:6443/api/v1/namespaces/default/resourcequotas/" -k
Practice
k8s_master_ip: 192.168.1.138
username: differs per user
password: differs per user
namespace: differs per user
After the cluster was upgraded to v1.10 the link above could no longer be found; changing v1.9 to v1.10 in the URL makes it reachable again.
List pods
curl -u {username}:{password} "https://{k8s_master_ip}:6443/api/v1/namespaces/{namespace}/pods/" -k
curl -u admin:admin "https://192.168.1.138:6443/api/v1/namespaces/default/pods/" -k
This appears to return all pods, both live and dead.
There is a lot of information, but no resource usage data.
"phase": "Running"
This is a pod that is currently running.
"phase": "Failed"
"reason": "Evicted"
This one has been removed: its status is Failed and the reason is that it was evicted.
Add the continue parameter to fetch the running pods:
curl -u {username}:{password} "https://{k8s_master_ip}:6443/api/v1/namespaces/{namespace}/pods?continue" -k
curl -u admin:admin "https://192.168.1.138:6443/api/v1/namespaces/default/pods?continue" -k
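Note that in the Kubernetes list API, continue is a pagination token for chunked list responses rather than a filter. If the trick above does not behave as expected, an alternative (not from the original article) is to pass a fieldSelector on status.phase, or simply to filter client-side; a Python sketch of both, under the same admin:admin / self-signed-certificate assumptions:
import requests

API = "https://192.168.1.138:6443"
AUTH = ("admin", "admin")

def running_pods(namespace="default"):
    # Server-side filter: only pods whose status.phase is Running are returned.
    resp = requests.get(f"{API}/api/v1/namespaces/{namespace}/pods",
                        params={"fieldSelector": "status.phase=Running"},
                        auth=AUTH, verify=False)
    resp.raise_for_status()
    return [p["metadata"]["name"] for p in resp.json()["items"]]

def running_pods_client_side(namespace="default"):
    # Client-side filter: fetch everything (as the curl above does) and drop
    # anything that is not Running, e.g. Failed/Evicted pods.
    resp = requests.get(f"{API}/api/v1/namespaces/{namespace}/pods",
                        auth=AUTH, verify=False)
    resp.raise_for_status()
    return [p["metadata"]["name"] for p in resp.json()["items"]
            if p["status"].get("phase") == "Running"]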
List replicationcontrollers
curl -u admin:admin "https://192.168.1.138:6443/api/v1/namespaces/user1/replicationcontrollers/" -k
List services
curl -u admin:admin "https://192.168.1.138:6443/api/v1/namespaces/user1/services/" -k
View the resource overview (resourcequotas)
curl -u {username}:{password} "https://{k8s_master_ip}:6443/api/v1/namespaces/{namespace}/resourcequotas/" -k
[root@tensorflow1 info]# curl -u admin:admin "https://localhost:6443/api/v1/namespaces/default/resourcequotas/" -k
...
"status": {
"hard": {
"limits.cpu": "2",
"limits.memory": "6Gi",
"pods": "20",
"requests.cpu": "1",
"requests.memory": "1Gi"
},
"used": {
"limits.cpu": "400m",
"limits.memory": "1Gi",
"pods": "2",
"requests.cpu": "200m",
"requests.memory": "512Mi"
}
}
...
hard is the quota; used is what has currently been claimed against it.
The difference between limits and requests: limits is an upper bound that cannot be exceeded but is not guaranteed to be available; requests is a lower bound that is guaranteed. For example, a container with requests.memory 512Mi and limits.memory 1Gi: when host memory usage is high, 512Mi is still reserved for this container, but it may not get the full 1Gi; when host memory usage is low, the container still cannot exceed 1Gi.
The difference between Gi and G: Gi is base 1024, G is base 1000, and the same goes for Mi and M. It is like a USB stick sold as 8G that only shows 7.45G usable (that usable figure is actually in Gi).
pods is the number of pods (a count).
CPU units: m means one thousandth of a CPU, so 200m is 0.2 CPU. This is an absolute quantity, not a relative one: 0.1 CPU means exactly 0.1 of a CPU core, whether the machine has one core or many.
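To show these numbers in a UI they have to be normalized first. A small Python sketch (not from the original article; it only handles the suffixes that actually appear in this output, not the full Kubernetes quantity grammar) that converts the quantity strings and computes used/hard percentages:
# Normalize quantity strings from the resourcequotas output above.
BINARY = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3}
DECIMAL = {"K": 1000, "M": 1000**2, "G": 1000**3}

def parse_cpu(q):
    """'200m' -> 0.2 cores, '2' -> 2.0 cores."""
    return int(q[:-1]) / 1000 if q.endswith("m") else float(q)

def parse_memory(q):
    """'512Mi', '6Gi', '7970316Ki' -> bytes."""
    for suffix, factor in {**BINARY, **DECIMAL}.items():
        if q.endswith(suffix):
            return int(q[: -len(suffix)]) * factor
    return int(q)  # plain bytes

if __name__ == "__main__":
    # Values taken from the resourcequota status shown above.
    hard = {"limits.cpu": "2", "limits.memory": "6Gi", "pods": "20"}
    used = {"limits.cpu": "400m", "limits.memory": "1Gi", "pods": "2"}
    print("cpu    %.0f%%" % (100 * parse_cpu(used["limits.cpu"]) / parse_cpu(hard["limits.cpu"])))
    print("memory %.0f%%" % (100 * parse_memory(used["limits.memory"]) / parse_memory(hard["limits.memory"])))
    print("pods   %.0f%%" % (100 * int(used["pods"]) / int(hard["pods"])))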
Real-time data
https://github.com/kubernetes/metrics
https://github.com/kubernetes-incubator/metrics-server
Download the metrics-server release archive.
Pull the googlecontainer/metrics-server-amd64:v0.2.0 image.
cd metrics-server-0.2.1/deploy
Edit metrics-server-deployment.yaml: update the image field and set imagePullPolicy: IfNotPresent.
kubectl create -f .
Get node metrics
curl -u {username}:{password} "https://{k8s_master_ip}:6443/apis/metrics.k8s.io/v1beta1/nodes" -k
curl -u admin:admin "https://192.168.1.138:6443/apis/metrics.k8s.io/v1beta1/nodes" -k
{
    "kind": "NodeMetricsList",
    "apiVersion": "metrics.k8s.io/v1beta1",
    "metadata": {
        "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes"
    },
    "items": [
        ...
        {
            "metadata": {
                "name": "tensorflow1",
                "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/tensorflow1",
                "creationTimestamp": "2018-04-09T08:44:17Z"
            },
            "timestamp": "2018-04-09T08:44:00Z",
            "window": "1m0s",
            "usage": {
                "cpu": "265m",
                "memory": "3448228Ki"
            }
        }
        ...
    ]
}
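For a dashboard, the usage above can be combined with the allocatable values from /api/v1/nodes to get per-node utilization. A sketch under the same assumptions (quantity parsing only covers the formats visible in this article's output: whole cores or m for CPU, Ki or plain bytes for memory):
import requests

API = "https://192.168.1.138:6443"
AUTH = ("admin", "admin")

def get(path):
    resp = requests.get(API + path, auth=AUTH, verify=False)
    resp.raise_for_status()
    return resp.json()

def cpu_millicores(q):
    return int(q[:-1]) if q.endswith("m") else int(q) * 1000

def memory_ki(q):
    return int(q[:-2]) if q.endswith("Ki") else int(q) // 1024

def node_utilization():
    allocatable = {n["metadata"]["name"]: n["status"]["allocatable"]
                   for n in get("/api/v1/nodes")["items"]}
    for m in get("/apis/metrics.k8s.io/v1beta1/nodes")["items"]:
        name = m["metadata"]["name"]
        alloc = allocatable[name]
        cpu_pct = 100 * cpu_millicores(m["usage"]["cpu"]) / cpu_millicores(alloc["cpu"])
        mem_pct = 100 * memory_ki(m["usage"]["memory"]) / memory_ki(alloc["memory"])
        print(f"{name}: cpu {cpu_pct:.0f}%  memory {mem_pct:.0f}%")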
Get pod metrics
curl -u {username}:{password} "https://{k8s_master_ip}:6443/apis/metrics.k8s.io/v1beta1/namespaces/{namespace}/pods" -k
curl -u admin:admin "https://192.168.1.138:6443/apis/metrics.k8s.io/v1beta1/namespaces/default/pods" -k
{
    "kind": "PodMetricsList",
    "apiVersion": "metrics.k8s.io/v1beta1",
    "metadata": {
        "selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/default/pods"
    },
    "items": [
        ...
        {
            "metadata": {
                "name": "tensorflow-worker-rc-998wf",
                "namespace": "default",
                "selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/default/pods/tensorflow-worker-rc-998wf",
                "creationTimestamp": "2018-04-09T08:52:38Z"
            },
            "timestamp": "2018-04-09T08:52:00Z",
            "window": "1m0s",
            "containers": [
                {
                    "name": "worker",
                    "usage": {
                        "cpu": "0",
                        "memory": "39964Ki"
                    }
                }
            ]
        }
        ...
    ]
}
Get namespace metrics
I could not find a URL for this; summing the usage of all the pods retrieved above gives the namespace usage (see the sketch below).
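A sketch of that aggregation (not from the original article), summing the per-container usage in the PodMetricsList; it assumes CPU comes back as plain cores or with an n/u/m suffix and memory in Ki or plain bytes:
import requests

API = "https://192.168.1.138:6443"
AUTH = ("admin", "admin")

CPU_FACTORS = {"n": 1e-9, "u": 1e-6, "m": 1e-3}

def cpu_cores(q):
    return int(q[:-1]) * CPU_FACTORS[q[-1]] if q[-1] in CPU_FACTORS else float(q)

def memory_ki(q):
    return int(q[:-2]) if q.endswith("Ki") else int(q) // 1024

def namespace_usage(namespace="default"):
    resp = requests.get(
        f"{API}/apis/metrics.k8s.io/v1beta1/namespaces/{namespace}/pods",
        auth=AUTH, verify=False)
    resp.raise_for_status()
    cpu = mem = 0
    for pod in resp.json()["items"]:
        for c in pod["containers"]:
            cpu += cpu_cores(c["usage"]["cpu"])
            mem += memory_ki(c["usage"]["memory"])
    return cpu, mem

if __name__ == "__main__":
    cores, ki = namespace_usage()
    print(f"default namespace: {cores:.3f} cores, {ki} Ki")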
Metrics API documentation
I could not find documentation online; the only reference is the help text of the kubectl top command.
[root@tensorflow1 ~]# kubectl top
Display Resource (CPU/Memory/Storage) usage.
The top command allows you to see the resource consumption for nodes or pods.
This command requires Heapster to be correctly configured and working on the server.
Available Commands:
node Display Resource (CPU/Memory/Storage) usage of nodes
pod Display Resource (CPU/Memory/Storage) usage of pods
Usage:
kubectl top [options]
Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).
[root@tensorflow1 ~]# kubectl top pod --help
Display Resource (CPU/Memory/Storage) usage of pods.
The 'top pod' command allows you to see the resource consumption of pods.
Due to the metrics pipeline delay, they may be unavailable for a few minutes since pod creation.
Aliases:
pod, pods, po
Examples:
# Show metrics for all pods in the default namespace
kubectl top pod
# Show metrics for all pods in the given namespace
kubectl top pod --namespace=NAMESPACE
# Show metrics for a given pod and its containers
kubectl top pod POD_NAME --containers
# Show metrics for the pods defined by label name=myLabel
kubectl top pod -l name=myLabel
Options:
--all-namespaces=false: If present, list the requested object(s) across all namespaces. Namespace in current
context is ignored even if specified with --namespace.
--containers=false: If present, print usage of containers within a pod.
--heapster-namespace='kube-system': Namespace Heapster service is located in
--heapster-port='': Port name in service to use
--heapster-scheme='http': Scheme (http or https) to connect to Heapster as
--heapster-service='heapster': Name of Heapster service
-l, --selector='': Selector (label query) to filter on, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2)
Usage:
kubectl top pod [NAME | -l label] [options]
Use "kubectl options" for a list of global command-line options (applies to all commands).
Deprecated data retrieval
Reference: https://jimmysong.io/posts/using-heapster-to-get-object-metrics/
Official API docs (now deprecated): https://github.com/kubernetes/heapster/blob/master/docs/model.md
Reading values from the deprecated API: https://blog.csdn.net/mofiu/article/details/77126848
Get the Heapster URL
[root@tensorflow1 influxdb]# kubectl cluster-info
Kubernetes master is running at https://192.168.1.138:6443
Heapster is running at https://192.168.1.138:6443/api/v1/namespaces/kube-system/services/heapster/proxy
KubeDNS is running at https://192.168.1.138:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
monitoring-grafana is running at https://192.168.1.138:6443/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
monitoring-influxdb is running at https://192.168.1.138:6443/api/v1/namespaces/kube-system/services/monitoring-influxdb/proxy
curl -u admin:admin "https://192.168.1.138:6443/api/v1/namespaces/kube-system/services/heapster/proxy/api/v1/model/namespaces/" -k
curl -u admin:admin "https://192.168.1.138:6443/api/v1/namespaces/kube-system/services/heapster/proxy/api/v1/model/namespaces/default/metrics" -k
[
    "memory/request",
    "memory/limit",
    "cpu/usage_rate",
    "memory/usage",
    "cpu/request",
    "cpu/limit"
]
[root@tensorflow1 influxdb]# curl -u admin:admin "https://192.168.1.138:6443/api/v1/namespaces/kube-system/services/heapster/proxy/api/v1/model/namespaces/default/metrics/memory/usage" -k
{
    "metrics": [
        ...
        {
            "timestamp": "2018-04-09T07:45:00Z",
            "value": 81121280
        },
        {
            "timestamp": "2018-04-09T07:46:00Z",
            "value": 81121280
        }
        ...
    ],
    "latestTimestamp": "2018-04-09T07:46:00Z"
}
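A sketch for reading one of these deprecated Heapster series programmatically and picking the value at latestTimestamp, under the same admin:admin / proxy-URL assumptions as above:
import requests

API = "https://192.168.1.138:6443"
AUTH = ("admin", "admin")
HEAPSTER = "/api/v1/namespaces/kube-system/services/heapster/proxy/api/v1/model"

def latest_value(namespace, metric):
    resp = requests.get(
        f"{API}{HEAPSTER}/namespaces/{namespace}/metrics/{metric}",
        auth=AUTH, verify=False)
    resp.raise_for_status()
    data = resp.json()
    latest = data["latestTimestamp"]
    return next(p["value"] for p in data["metrics"] if p["timestamp"] == latest)

if __name__ == "__main__":
    # e.g. 81121280 bytes for the output shown above
    print(latest_value("default", "memory/usage"))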
This article is reposted from the CSDN post "k8s restful api 访问".