  • Windows performance counters provide a number of IO counters for each logical disk, including but not limited to the following:

    1. Disk Reads/sec: the total number of read operations from the disk per second;
    2. Disk Writes/sec: the total number of write operations to the disk per second;
    3. Disk Read Bytes/sec: the total number of bytes read from the disk per second;
    4. Disk Write Bytes/sec: the total number of bytes written to the disk per second;
    5. Average Disk sec/Read: the average time, in seconds, of each disk read operation;
    6. Average Disk sec/Write: the average time, in seconds, of each disk write operation.

    These counters let you track a logical disk's IO activity, monitor the system's disk performance, and spot potential bottlenecks; a small sampling sketch follows.
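
    For illustration, here is a minimal Python sketch, assuming the `psutil` package is available; it reads OS-level cumulative per-disk totals (physical disks on Windows, so only an approximation of the logical-disk PDH counters) and samples twice to estimate per-second rates:

    ```python
    import time

    import psutil  # assumption: psutil is installed; it exposes cumulative IO totals, not PDH counters

    INTERVAL = 1.0  # seconds between the two samples

    before = psutil.disk_io_counters(perdisk=True)
    time.sleep(INTERVAL)
    after = psutil.disk_io_counters(perdisk=True)

    for disk, b in before.items():
        a = after[disk]
        # Differences of the cumulative totals divided by the interval give approximate rates.
        print(f"{disk}: "
              f"reads/sec={(a.read_count - b.read_count) / INTERVAL:.1f}, "
              f"writes/sec={(a.write_count - b.write_count) / INTERVAL:.1f}, "
              f"read B/sec={(a.read_bytes - b.read_bytes) / INTERVAL:.0f}, "
              f"write B/sec={(a.write_bytes - b.write_bytes) / INTERVAL:.0f}")
    ```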

  • Windows performance counters — the IO counters defined for each logical disk

  • To optimize the code, I suggest refactoring the `get_execute_data` method. The refactored code is as follows:

    ```python
    def get_execute_data(item: Item, num_sync_interface: int, num_async_interface: int,
                         num_task: int, num_runner: int, num_proxy: int,
                         num_image_cache: int) -> str:
        registry = "https://dockerimg.lenztechretail.com/brain"
        repo_info = {
            "codehost_name": "gitlab",
            "repo_namespace": "ai-python",
            "repo_name": "BRAIN",
            "branch": item.branch_name
        }

        inputs_data = [
            {"key": "namespace", "value": item.target_env},
            {"key": "projects", "value": item.project_name},
            {"key": "projects_abbr", "value": item.projects_abbr},
            {"key": "force_pull", "value": str(item.force_pull)},
            {"key": "num_sync_interface", "value": str(num_sync_interface)},
            {"key": "num_async_interface", "value": str(num_async_interface)},
            {"key": "num_task", "value": str(num_task)},
            {"key": "num_runner", "value": str(num_runner)},
            {"key": "num_proxy", "value": str(num_proxy)},
            {"key": "num_image_cache", "value": str(num_image_cache)},
            {"key": "num_per_model", "value": str(item.num_per_model)},
            {"key": "custom_model_num", "value": str(item.custom_model_num) if item.custom_model_num else ""},
            {"key": "router", "value": str(item.deploy_router)},
            {"key": "acceleration_plan_desc", "value": "" if item.is_aliyun else "腾讯云tiacc"},
            {"key": "redis_choice_desc", "value": "cloud"},
            {"key": "is_aliyun", "value": str(item.is_aliyun)}
        ]

        service_module = "brain"
        service_name = "brain"

        data = {
            "project_name": item.project_name,
            "workflow_name": "brain-workflow-prod",
            "inputs": [
                {
                    "job_name": "build",
                    "job_type": "zadig-build",
                    "parameters": {
                        "registry": registry,
                        "service_list": [
                            {
                                "service_module": service_module,
                                "service_name": service_name,
                                "repo_info": [repo_info],
                                "inputs": [
                                    {"key": "ismodel", "value": "False"},
                                    {"key": "gpu_type", "value": "v100" if not item.is_aliyun else "t4"},
                                    {"key": "router", "value": str(item.deploy_router)}
                                ]
                            }
                        ]
                    }
                },
                {
                    "job_name": "deploy",
                    "job_type": "zadig-build",
                    "parameters": {
                        "registry": registry,
                        "service_list": [
                            {
                                "service_module": service_module,
                                "service_name": service_name,
                                "repo_info": [repo_info],
                                "inputs": inputs_data
                            }
                        ]
                    }
                }
            ]
        }

        return json.dumps(data, ensure_ascii=False)
    ```

    In this refactored `get_execute_data`, the construction of `data` has been moved into a single dedicated method.

    As you can see, the `get_execute_data` method now receives `item` and the individual `num_***` values; the idea is to pass the data through end to end. Alternatively, we could:

    1. In an outer method, call `get_pod_num` to obtain the count for each pod.
    2. Pass those pod counts as arguments when calling `get_execute_data`.

    That way, `get_execute_data` is only responsible for constructing the data, which is more in line with the single-responsibility principle; a sketch of such a caller follows. The concrete implementation still depends on the code architecture, the exact input and output formats, and the actual data-processing needs.
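
    For illustration, a minimal sketch of that calling pattern, assuming the `get_pod_num(namespace, deployment_name)` helper and the component names used elsewhere in this thread (the wrapper name `build_payload` is hypothetical):

    ```python
    # Sketch only: gather the pod counts first, then hand them to get_execute_data,
    # which is left with nothing but payload assembly.
    def build_payload(item: Item) -> str:
        abbr = item.projects_abbr.lower()
        counts = {
            name: get_pod_num(item.target_env, f"{abbr}-{name}")
            for name in ("interface-sync", "interface-async", "task",
                         "runner", "proxy", "image-cache")
        }
        return get_execute_data(
            item,
            num_sync_interface=counts["interface-sync"],
            num_async_interface=counts["interface-async"],
            num_task=counts["task"],
            num_runner=counts["runner"],
            num_proxy=counts["proxy"],
            num_image_cache=counts["image-cache"],
        )
    ```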

  • ```python
    import json
    import time
    from typing import Optional

    import requests
    from fastapi import APIRouter
    from pydantic import BaseModel

    from libs.k8s.get_pod import get_pod_num

    router = APIRouter()


    class Item(BaseModel):
        branch_name: Optional[str] = "master"
        target_env: Optional[str] = ""
        project_name: Optional[str] = ""
        projects_abbr: Optional[str] = ""
        force_pull: bool = False
        num_per_model: Optional[int] = 1
        custom_model_num: Optional[str] = ""
        deploy_router: bool = False
        is_aliyun: bool = False


    token = ',,,'


    def get_execute_data(branch_name, target_env, project_name, projects_abbr, force_pull,
                         num_sync_interface, num_async_interface, num_task, num_runner,
                         num_proxy, num_image_cache, num_per_model,
                         custom_model_num, deploy_router, is_aliyun):
        if not is_aliyun:
            gpu_type = 'v100'
            acceleration_plan_desc = '腾讯云tiacc'
        else:
            gpu_type = 't4'
            acceleration_plan_desc = ''
        data = {
            "project_name": "ai-brain",
            "workflow_name": f"brain-workflow-prod",
            "inputs": [
                {
                    "job_name": "build",
                    "job_type": "zadig-build",
                    "parameters": {
                        "registry": "https://dockerimg.lenztechretail.com/brain",
                        "service_list": [
                            {
                                "service_module": "brain",
                                "service_name": "brain",
                                "repo_info": [{
                                    "codehost_name": "gitlab",
                                    "repo_namespace": "ai-python",
                                    "repo_name": "BRAIN",
                                    "branch": f"{branch_name}"
                                }],
                                "inputs": [
                                    {"key": "ismodel", "value": False},
                                    {"key": "gpu_type", "value": gpu_type},
                                    {"key": "router", "value": deploy_router}
                                ]
                            }
                        ]
                    }
                },
                {
                    "job_name": "deploy",
                    "job_type": "zadig-build",
                    "parameters": {
                        "registry": "https://dockerimg.lenztechretail.com/brain",
                        "service_list": [
                            {
                                "service_module": "brain",
                                "service_name": "brain",
                                "repo_info": [{
                                    "codehost_name": "gitlab",
                                    "repo_namespace": "ai-python",
                                    "repo_name": "BRAIN",
                                    "branch": f"{branch_name}"
                                }],
                                "inputs": [
                                    {"key": "namespace", "value": f"{target_env}"},
                                    {"key": "projects", "value": f"{project_name}"},
                                    {"key": "projects_abbr", "value": f"{projects_abbr}"},
                                    {"key": "force_pull", "value": f"{force_pull}"},
                                    {"key": "num_sync_interface", "value": f"{num_sync_interface}"},
                                    {"key": "num_async_interface", "value": f"{num_async_interface}"},
                                    {"key": "num_task", "value": f"{num_task}"},
                                    {"key": "num_runner", "value": f"{num_runner}"},
                                    {"key": "num_proxy", "value": f"{num_proxy}"},
                                    {"key": "num_image_cache", "value": f"{num_image_cache}"},
                                    {"key": "gpu_type", "value": f"{gpu_type}"},
                                    {"key": "num_per_model", "value": f"{num_per_model}"},
                                    {"key": "custom_model_num", "value": f"{custom_model_num}"},
                                    {"key": "router", "value": f"{deploy_router}"},
                                    {"key": "acceleration_plan_desc", "value": f"{acceleration_plan_desc}"},
                                    {"key": "redis_choice_desc", "value": "cloud"},
                                    {"key": "is_aliyun", "value": f"{is_aliyun}"}
                                ]
                            }
                        ]
                    }
                }
            ]
        }
        # print(data)
        return json.dumps(data, ensure_ascii=False)


    def execute_req(data):
        url = 'http://zadig.langjtech.com/openapi/workflows/custom/task'
        header = {
            "Content-type": "application/json",
            "charset": "utf-8",
            "Authorization": f'Bearer {token}'
        }
        data_json = json.loads(data)
        res = requests.post(url, json=data_json, headers=header)
        print(res)
        res_json = json.loads(res.text)
        print(res_json)
        return res_json


    def get_execute_req(task_id, pipeline_name):
        print(task_id, pipeline_name)
        url = 'http://zadig.langjtech.com/openapi/workflows/custom/task'
        data = {
            "task_id": task_id,
            "workflow_name": f"{pipeline_name}"
        }
        header = {
            "Content-type": "application/json",
            "charset": "utf-8",
            "Authorization": f'Bearer {token}'
        }
        res = requests.get(url, json=data, headers=header)
        res_json = json.loads(res.text)
        print(res_json)
        print(res_json.get('status'))
        return res_json.get('status')


    @router.post('/brain/ai/pipline/')
    async def req(item: Item):
        num_sync_interface = get_pod_num(item.target_env, f'{item.projects_abbr.lower()}-interface-sync')
        num_async_interface = get_pod_num(item.target_env, f'{item.projects_abbr.lower()}-interface-async')
        num_task = get_pod_num(item.target_env, f'{item.projects_abbr.lower()}-task')
        num_runner = get_pod_num(item.target_env, f'{item.projects_abbr.lower()}-runner')
        num_proxy = get_pod_num(item.target_env, f'{item.projects_abbr.lower()}-proxy')
        num_image_cache = get_pod_num(item.target_env, f'{item.projects_abbr.lower()}-image-cache')
        data = get_execute_data(item.branch_name, item.target_env, item.project_name, item.projects_abbr,
                                item.force_pull, num_sync_interface, num_async_interface, num_task,
                                num_runner, num_proxy, num_image_cache, item.num_per_model,
                                item.custom_model_num, item.deploy_router, item.is_aliyun)
        print(data)
        result = execute_req(data)
        task_id = result.get('task_id')
        pipeline_name = result.get('workflow_name')
        while True:
            time.sleep(3)
            if get_execute_req(task_id, pipeline_name) == 'passed':
                return {
                    "pipeline_name": pipeline_name,
                    "task_id": task_id,
                    "message": "release success"
                }
            elif get_execute_req(task_id, pipeline_name) == 'failed':
                return {
                    "pipeline_name": pipeline_name,
                    "task_id": task_id,
                    "message": "release failed"
                }
    ```

    Provide an optimized version of this code.

  • You can use Python's `kubernetes` module to get the replica count of a given Deployment and fall back to a default value when the Deployment does not exist. Below is example code that uses a different default depending on the Deployment name:

    ```python
    from kubernetes import client, config

    # Load the Kubernetes config file
    config.load_kube_config()

    # Create an apps/v1 API client
    apps_v1_client = client.AppsV1Api()

    # Name of the Deployment
    deployment_name = "my-deployment"

    # Different default values for different Deployment names
    default_replicas = {
        "my-deployment": 3,
        "my-other-deployment": 5,
        "default": 1
    }.get(deployment_name, 1)

    try:
        # Look up the Deployment object
        deployment = apps_v1_client.read_namespaced_deployment(name=deployment_name,
                                                               namespace="default")

        # Read the Deployment's replica count
        replicas = deployment.spec.replicas
    except client.rest.ApiException as e:
        if e.status == 404:  # the Deployment does not exist
            replicas = default_replicas
        else:
            raise e

    print(f"Deployment {deployment_name} has {replicas} replicas")
    ```

    Explanation:

    First, we load the Kubernetes config file. Next, we create an `AppsV1Api` client. Then we set the Deployment name to `my-deployment`. The dictionary behind `default_replicas` maps each Deployment name to its own default value; names not found in the dictionary fall back to a default of 1.

    Next, a try...except block looks up the Deployment object. If the Deployment exists, we read its replica count; otherwise we fall back to `default_replicas`. Any exception other than a 404 is re-raised.

    Finally, we print the Deployment's name and its replica count.

  • Using Python's kubernetes module, get the number of replicas of a deployment; if the deployment does not exist, use a different default value depending on the deployment name. Provide the code.

  • You can do this with the `client` module and its exception classes from the `kubernetes` package. The code is as follows:

    ```python
    from kubernetes import client, config
    from kubernetes.client.rest import ApiException

    # Load the Kubernetes configuration
    config.load_kube_config()

    # Create the Kubernetes client
    apps_v1 = client.AppsV1Api()

    deployment_name = "your-deployment-name"
    namespace_name = "your-namespace"

    try:
        # Fetch the Deployment object
        deployment = apps_v1.read_namespaced_deployment(
            name=deployment_name, namespace=namespace_name)

        # Read the Deployment's replica count
        replicas = deployment.spec.replicas
    except ApiException as e:
        # If the Deployment does not exist, default the replica count to 1
        if e.status == 404:
            replicas = 1
        else:
            raise

    print("Deployment replicas: ", replicas)
    ```
    Here, `deployment_name` and `namespace_name` need to be replaced with real values. If the Deployment exists, its replica count is read; otherwise the replica count defaults to 1. A sketch of packaging this as a reusable helper follows.
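
    For illustration, here is a minimal sketch of wrapping this logic into a helper in the style of the `get_pod_num(namespace, deployment_name)` function imported earlier in this thread (the exact signature and the `default` parameter are assumptions):

    ```python
    from kubernetes import client, config
    from kubernetes.client.rest import ApiException


    def get_pod_num(namespace: str, deployment_name: str, default: int = 1) -> int:
        """Return a Deployment's replica count, or `default` if it does not exist."""
        config.load_kube_config()  # assumption: a kubeconfig is available on this host
        apps_v1 = client.AppsV1Api()
        try:
            deployment = apps_v1.read_namespaced_deployment(
                name=deployment_name, namespace=namespace)
        except ApiException as e:
            if e.status == 404:  # Deployment not found: fall back to the default
                return default
            raise
        replicas = deployment.spec.replicas
        return replicas if replicas is not None else default
    ```

    For example, `get_pod_num("default", "my-deployment")` would return 1 when that Deployment is missing.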

  • Using Python's kubernetes module, get the number of replicas of a deployment; if the deployment does not exist, default to 1. Provide the code.

  • You can use the Kubernetes client library `pykube` to get a Deployment's replica count, as shown below:

    ```python
    from pykube import Deployment, HTTPClient, KubeConfig

    # Create an HTTP client for accessing the Kubernetes API
    api = HTTPClient(KubeConfig.from_file("~/.kube/config"))

    # Name of the Deployment
    deployment_name = "my-deployment"

    # Look up the Deployment object
    deployment = Deployment.objects(api).filter(namespace="default", name=deployment_name).first()

    # Read the Deployment's replica count, defaulting to 1 if it does not exist
    replicas = deployment.obj['spec']['replicas'] if deployment else 1

    print(f"Deployment {deployment_name} has {replicas} replicas")
    ```

    Explanation:

    First, we create an HTTP client for accessing the Kubernetes API. Next, we set the Deployment name to "my-deployment". Then we use the pykube library to query the "my-deployment" Deployment object.

    If the Deployment does not exist, `deployment` will be None; the conditional expression checks for that case and defaults the replica count to 1.

    Finally, we print the Deployment's name and replica count.

  • Using Python, get the number of replicas of a deployment; if the deployment does not exist, default to 1. Provide the code.
