Sglang Quick Deployment Tutorial - Qwen3

    fushinn
    #1

    Prerequisites

    1. Make sure the NVIDIA driver is already installed
    2. A Linux server
    3. Docker is installed
    4. Debian 12 (the distribution assumed in this guide)
    5. Unrestricted international network access, for pulling images and model weights (see the quick checks after the references below)

    References:

    NVIDIA vGPU 18.0+ GRID / CloudGame Driver License Patch Tutorial
    Tesla T10 PVE Server Configuration Guide
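
    Before moving on, you can quickly verify the prerequisites. A minimal sketch of the checks, assuming a Debian 12 host:

    # Driver: should print the GPU model plus the driver and CUDA versions
    nvidia-smi

    # Docker: should print client and server versions without errors
    docker version

    # Distribution: should report Debian GNU/Linux 12
    grep PRETTY_NAME /etc/os-release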

    Install the NVIDIA Container Toolkit

    curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
      && curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
        sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
        sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
    
    sudo apt-get update
    
    export NVIDIA_CONTAINER_TOOLKIT_VERSION=1.17.8-1
    sudo apt-get install -y \
          nvidia-container-toolkit=${NVIDIA_CONTAINER_TOOLKIT_VERSION} \
          nvidia-container-toolkit-base=${NVIDIA_CONTAINER_TOOLKIT_VERSION} \
          libnvidia-container-tools=${NVIDIA_CONTAINER_TOOLKIT_VERSION} \
          libnvidia-container1=${NVIDIA_CONTAINER_TOOLKIT_VERSION}
    
    sudo nvidia-ctk runtime configure --runtime=docker
    
    sudo systemctl restart docker
    
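    After Docker restarts, confirm that containers can actually see the GPU. A quick check (the CUDA image tag below is only an example; any recent nvidia/cuda base image will do):

    sudo docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi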

    Configure docker compose (adjust the settings below for your own environment)

    services:
      sglang:
        image: lmsysorg/sglang:latest
        container_name: sglang
        volumes:
          #- /home/fushinn/sglang/cache/huggingface:/root/.cache/huggingface
          # If you use ModelScope, you need to mount this directory
          - ${HOME}/.cache/modelscope:/root/.cache/modelscope
          - ./jinja:/root/jinja
        restart: always
        network_mode: host # required by RDMA
        privileged: true # required by RDMA
        # Or you can publish only port 30000 instead of using host networking
        # ports:
        #   - 30000:30000
        environment:
          HF_TOKEN: "<your_hf_token>" # use your own Hugging Face token; never publish a real token
          # if you use ModelScope to download the model, you need to set this variable
          SGLANG_USE_MODELSCOPE: "true"
        entrypoint: python3 -m sglang.launch_server
        command: --model-path Qwen/Qwen3-4B
          --host 127.0.0.1
          --port 30000
          --tool-call-parser qwen25
          --reasoning-parser qwen3
          #--chat-template /root/jinja/qwen3_nonthinking.jinja
        ulimits:
          memlock: -1
          stack: 67108864
        ipc: host
        healthcheck:
          test: ["CMD-SHELL", "curl -f http://localhost:30000/health || exit 1"]
        deploy:
          resources:
            reservations:
              devices:
                - driver: nvidia
                  device_ids: ["0"]
                  capabilities: [gpu]
    
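    Before bringing the stack up, it is worth validating the file and pre-pulling the fairly large image. Assuming the compose file above is saved as docker-compose.yml in the current directory:

    # Print the resolved configuration, or a parse error if the YAML is malformed
    docker compose config

    # Pre-pull the sglang image so the first start does not block on the download
    docker compose pull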

    Configure the jinja template (optional)

    This is the Qwen3 non-thinking chat template that the commented --chat-template line in the compose file points to.

    {%- if tools %}
        {{- '<|im_start|>system\n' }}
        {%- if messages[0].role == 'system' %}
            {{- messages[0].content + '\n\n' }}
        {%- endif %}
        {{- "# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }}
        {%- for tool in tools %}
            {{- "\n" }}
            {{- tool | tojson }}
        {%- endfor %}
        {{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }}
    {%- else %}
        {%- if messages[0].role == 'system' %}
            {{- '<|im_start|>system\n' + messages[0].content + '<|im_end|>\n' }}
        {%- endif %}
    {%- endif %}
    {%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %}
    {%- for message in messages[::-1] %}
        {%- set index = (messages|length - 1) - loop.index0 %}
        {%- if ns.multi_step_tool and message.role == "user" and message.content is string and not(message.content.startswith('<tool_response>') and message.content.endswith('</tool_response>')) %}
            {%- set ns.multi_step_tool = false %}
            {%- set ns.last_query_index = index %}
        {%- endif %}
    {%- endfor %}
    {%- for message in messages %}
        {%- if message.content is string %}
            {%- set content = message.content %}
        {%- else %}
            {%- set content = '' %}
        {%- endif %}
        {%- if (message.role == "user") or (message.role == "system" and not loop.first) %}
            {{- '<|im_start|>' + message.role + '\n' + content + '<|im_end|>' + '\n' }}
        {%- elif message.role == "assistant" %}
            {%- set reasoning_content = '' %}
            {%- if message.reasoning_content is string %}
                {%- set reasoning_content = message.reasoning_content %}
            {%- else %}
                {%- if '</think>' in content %}
                    {%- set reasoning_content = content.split('</think>')[0].rstrip('\n').split('<think>')[-1].lstrip('\n') %}
                    {%- set content = content.split('</think>')[-1].lstrip('\n') %}
                {%- endif %}
            {%- endif %}
            {%- if loop.index0 > ns.last_query_index %}
                {%- if loop.last or (not loop.last and reasoning_content) %}
                    {{- '<|im_start|>' + message.role + '\n<think>\n' + reasoning_content.strip('\n') + '\n</think>\n\n' + content.lstrip('\n') }}
                {%- else %}
                    {{- '<|im_start|>' + message.role + '\n' + content }}
                {%- endif %}
            {%- else %}
                {{- '<|im_start|>' + message.role + '\n' + content }}
            {%- endif %}
            {%- if message.tool_calls %}
                {%- for tool_call in message.tool_calls %}
                    {%- if (loop.first and content) or (not loop.first) %}
                        {{- '\n' }}
                    {%- endif %}
                    {%- if tool_call.function %}
                        {%- set tool_call = tool_call.function %}
                    {%- endif %}
                    {{- '<tool_call>\n{"name": "' }}
                    {{- tool_call.name }}
                    {{- '", "arguments": ' }}
                    {%- if tool_call.arguments is string %}
                        {{- tool_call.arguments }}
                    {%- else %}
                        {{- tool_call.arguments | tojson }}
                    {%- endif %}
                    {{- '}\n</tool_call>' }}
                {%- endfor %}
            {%- endif %}
            {{- '<|im_end|>\n' }}
        {%- elif message.role == "tool" %}
            {%- if loop.first or (messages[loop.index0 - 1].role != "tool") %}
                {{- '<|im_start|>user' }}
            {%- endif %}
            {{- '\n<tool_response>\n' }}
            {{- content }}
            {{- '\n</tool_response>' }}
            {%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
                {{- '<|im_end|>\n' }}
            {%- endif %}
        {%- endif %}
    {%- endfor %}
    {%- if add_generation_prompt %}
        {{- '<|im_start|>assistant\n<think>\n\n</think>\n\n' }}
    {%- endif %}
    
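    To use it, save the template under the ./jinja directory that the compose file mounts into the container, then uncomment the --chat-template line in the command section. A sketch, assuming the compose file sits in the current directory:

    mkdir -p ./jinja
    # paste the template above into this file
    vi ./jinja/qwen3_nonthinking.jinja
    # then enable it in the compose file:
    #   --chat-template /root/jinja/qwen3_nonthinking.jinja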

    Start Compose

    docker compose up -d
    
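    The first start downloads the model weights, which can take a while. You can follow the logs and wait until the health check reports the container as healthy:

    # Follow the server log until it reports that it is listening on port 30000
    docker compose logs -f sglang

    # The STATUS column shows "healthy" once /health starts responding
    docker ps --filter name=sglang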

    Test

    # List the served models; the response should include Qwen/Qwen3-4B
    curl http://127.0.0.1:30000/v1/models
    
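    If the model list comes back, you can send an actual request through the OpenAI-compatible chat endpoint. A minimal example against the Qwen/Qwen3-4B model started above:

    curl http://127.0.0.1:30000/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{
        "model": "Qwen/Qwen3-4B",
        "messages": [{"role": "user", "content": "Hello, who are you?"}],
        "max_tokens": 128
      }'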