
Conversation


@Sunny-bot1 Sunny-bot1 commented Jan 22, 2026

Motivation

  1. Pre-processing must read seq_len_this_time to obtain the actual total token_num, which is an unavoidable DtoH copy. When consecutive decode batches are processed, the previous batch's token_num can be reused for the launch, avoiding the synchronization overhead. This PR prepares for that token_num reuse.
  2. The FA3 pre-processing also contains a DtoH copy; in the decode stage this copy can be skipped so that the pre-processing can be captured in a CUDA graph.
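The token_num reuse idea in point 1 can be sketched as follows. This is a hypothetical illustration (the name TokenNumCache is invented here, and a plain Python sum stands in for the device-to-host tensor copy), not the PR's actual implementation:

```python
class TokenNumCache:
    """Hypothetical sketch: cache the host-side token count across
    consecutive decode batches to skip the blocking DtoH copy."""

    def __init__(self):
        self.token_num_cpu = None  # last token count synced to the host

    def get_token_num(self, seq_lens_this_time, is_decode_batch):
        # Consecutive decode batches over an unchanged request set yield
        # the same total token count, so the cached value can be reused
        # and the synchronizing device-to-host copy skipped.
        if is_decode_batch and self.token_num_cpu is not None:
            return self.token_num_cpu
        # Prefill (or the first batch): pay the DtoH sync once and cache it.
        # (sum() over a Python list stands in for the real tensor copy.)
        self.token_num_cpu = int(sum(seq_lens_this_time))
        return self.token_num_cpu
```

In a real runner the cache would also need invalidation when requests join or leave the batch; the sketch only shows the launch-time reuse.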

Modifications

  • Prepare for deferred updates of token_num_cpu
  • Move the FA3 pre-processing into the CUDA graph

Usage or Command

Accuracy Tests

Checklist

  • Add at least a tag in the PR title.
    • Tag list: [[FDConfig],[APIServer],[Engine], [Scheduler], [PD Disaggregation], [Executor], [Graph Optimization], [Speculative Decoding], [RL], [Models], [Quantization], [Loader], [OP], [KVCache], [DataProcessor], [BugFix], [Docs], [CI], [Optimization], [Feature], [Benchmark], [Others], [XPU], [HPU], [GCU], [DCU], [Iluvatar], [Metax]]
    • You can add new tags based on the PR content, but the semantics must be clear.
  • Format your code, run pre-commit before commit.
  • Add unit tests. Please write the reason in this PR if no unit tests.
  • Provide accuracy results.
  • If the current PR is submitting to the release branch, make sure the PR has been submitted to the develop branch, then cherry-pick it to the release branch with the [Cherry-Pick] PR tag.


paddle-bot bot commented Jan 22, 2026

Thanks for your contribution!

@Sunny-bot1 Sunny-bot1 changed the title [Optimization] Prepare token count and move FA3 initialization into the graph [Model Runner] Prepare token count and move FA3 initialization into the graph Jan 23, 2026
output_cum_offsets,
output_padding_offset,
) = pre_process(
token_num_cpu,
Collaborator

The other call sites of pre_process( need to be updated in the same way.

@codecov-commenter

Codecov Report

❌ Patch coverage is 16.66667% with 5 lines in your changes missing coverage. Please review.
⚠️ Please upload report for BASE (develop@3cd0ffe). Learn more about missing BASE report.

Files with missing lines                               Patch %   Lines
...el_executor/layers/attention/flash_attn_backend.py  0.00%     4 Missing ⚠️
fastdeploy/worker/gpu_model_runner.py                  0.00%     1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             develop    #6170   +/-   ##
==========================================
  Coverage           ?   66.35%           
==========================================
  Files              ?      383           
  Lines              ?    50519           
  Branches           ?     7894           
==========================================
  Hits               ?    33523           
  Misses             ?    14543           
  Partials           ?     2453           
Flag   Coverage Δ
GPU    66.35% <16.66%> (?)

Flags with carried forward coverage won't be shown. Click here to find out more.


{token_num_data}, paddle::DataType::INT64, input_ids.place());
auto batch_id_per_token = paddle::empty(
{token_num_data}, paddle::DataType::INT32, input_ids.place());
auto x_remove_padding = paddle::full(
Collaborator

What is the reason for changing this to 2?

Collaborator Author

What is the reason for changing this to 2?

It's just to provide a valid initial value for now.
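The reply above may reflect a common pattern when capturing work into a CUDA graph: at capture time a buffer only needs some valid size, because captured kernels keep referencing the same storage and only the first token_num entries matter at replay. A minimal stand-in sketch in plain Python (MAX_TOKENS and CaptureBuffers are invented for illustration, not FastDeploy code):

```python
MAX_TOKENS = 8192  # assumed capacity; real code would size this from config

class CaptureBuffers:
    """Hypothetical sketch: allocate once at a fixed capacity so captured
    kernels always see the same storage; any valid initial length works."""

    def __init__(self):
        # A Python list stands in for a device tensor of shape [MAX_TOKENS].
        self.batch_id_per_token = [0] * MAX_TOKENS

    def view(self, token_num):
        # At replay, only the first `token_num` entries are meaningful.
        return self.batch_id_per_token[:token_num]
```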


@EmmonsCurse EmmonsCurse left a comment


LGTM for skip-coverage~

@EmmonsCurse EmmonsCurse merged commit adc69c1 into PaddlePaddle:develop Jan 26, 2026
29 of 36 checks passed


5 participants