[Bugfix] add missing function params to rocm_aiter_mla.py
#17911
base: main
Conversation
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default; only a limited subset of checks runs automatically. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. 🚀
This pull request has merge conflicts that must be resolved before it can be merged.
@qli88 @SageMoore @mgoin I see #17864 was reverted, but leaving this PR as draft in case it helps. I'm also happy to incorporate the original changes with this to get it working as expected.
…ject#17864) Signed-off-by: Qiang Li <[email protected]> Signed-off-by: David Xia <[email protected]>
Looks like vllm-project#17864 had an outdated branch. So its [merge commit][1] caused `qo_indptr` and `max_seqlen_qo` to go into the function signature of `aiter_mla_decode_fwd()` where they're not used and into the body of `mla_decode_fwd_impl()` where they aren't defined. This PR fixes the discrepancies and call-sites. [1]: vllm-project@9f64e93#diff-88fd09f50e8cfc77678ade87483ab9a89ce58904203578f8816882763bd577c2 Signed-off-by: David Xia <[email protected]>
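For illustration, a minimal sketch of the shape of the fix, assuming the parameter names mentioned in this PR; the bodies, argument order, and exact signatures below are placeholders, not the actual `rocm_aiter_mla.py` code:

```python
from typing import Optional

import torch


def mla_decode_fwd_impl(
    q: torch.Tensor,
    kv_buffer: torch.Tensor,
    o: torch.Tensor,
    qo_indptr: torch.Tensor,
    max_seqlen_qo: int,
    kv_indptr: Optional[torch.Tensor] = None,
    kv_indices: Optional[torch.Tensor] = None,
    kv_last_page_lens: Optional[torch.Tensor] = None,
    sm_scale: float = 1.0,
) -> None:
    # Placeholder body: the real implementation dispatches to the aiter MLA
    # decode kernel. The point is that qo_indptr and max_seqlen_qo are now
    # defined here because they are parameters, not free names.
    ...


def aiter_mla_decode_fwd(
    q: torch.Tensor,
    kv_buffer: torch.Tensor,
    o: torch.Tensor,
    qo_indptr: torch.Tensor,
    max_seqlen_qo: int,
    kv_indptr: Optional[torch.Tensor] = None,
    kv_indices: Optional[torch.Tensor] = None,
    kv_last_page_lens: Optional[torch.Tensor] = None,
    sm_scale: float = 1.0,
) -> None:
    # Forward every accepted parameter so nothing in the signature is
    # silently dropped on the way to the implementation.
    mla_decode_fwd_impl(
        q,
        kv_buffer,
        o,
        qo_indptr,
        max_seqlen_qo,
        kv_indptr,
        kv_indices,
        kv_last_page_lens,
        sm_scale,
    )
```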
I added the original commit.
attn_metadata.qo_indptr,
attn_metadata.max_query_len,
There's a type check warning because these can be `None`, but the function doesn't allow `None`. Do we add more `assert`s above to ensure they're not `None`?
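One option, sketched below, is to narrow the `Optional` fields explicitly right before the kernel call; the metadata class and helper here are stand-ins for illustration, not the actual vLLM code:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

import torch


@dataclass
class _DecodeMetadata:
    # Stand-in for the real attention metadata; only the two fields
    # discussed in this thread are modeled.
    qo_indptr: Optional[torch.Tensor] = None
    max_query_len: Optional[int] = None


def _checked_decode_args(md: _DecodeMetadata) -> Tuple[torch.Tensor, int]:
    # Narrow the Optional fields to concrete values before the call; this
    # satisfies the type checker and fails loudly if a code path never
    # populated them.
    assert md.qo_indptr is not None, "qo_indptr must be set for MLA decode"
    assert md.max_query_len is not None, (
        "max_query_len must be set for MLA decode")
    return md.qo_indptr, md.max_query_len
```

The call site would then pass the narrowed values instead of the possibly-`None` attributes.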
attn_metadata.qo_indptr,
attn_metadata.max_query_len,
How should we create these attributes?
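If it helps, the usual varlen-attention convention (an assumption here, not necessarily what the vLLM metadata builder does) is a zero-prefixed cumulative sum of the per-request query lengths, with `max_query_len` as the largest entry:

```python
from typing import Tuple

import torch


def build_qo_indptr(query_lens: torch.Tensor) -> Tuple[torch.Tensor, int]:
    # qo_indptr[i] is the start offset of request i's query tokens, so it is
    # a zero-prefixed cumulative sum of the per-request query lengths.
    qo_indptr = torch.zeros(query_lens.numel() + 1, dtype=torch.int32,
                            device=query_lens.device)
    qo_indptr[1:] = torch.cumsum(query_lens, dim=0)
    max_query_len = int(query_lens.max().item())
    return qo_indptr, max_query_len


# Example: three decode requests with one query token each.
lens = torch.tensor([1, 1, 1], dtype=torch.int32)
indptr, max_len = build_qo_indptr(lens)  # indptr == [0, 1, 2, 3], max_len == 1
```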
kv_indptr: Optional[torch.Tensor] = None,
kv_indices: Optional[torch.Tensor] = None,
kv_last_page_lens: Optional[torch.Tensor] = None,
sm_scale: float = 1.0,
Moved down and added a default value for consistency.
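As a toy illustration of the consistency point (not the actual vLLM signature), trailing defaults let existing call sites keep working while newer callers opt in by keyword:

```python
from typing import Optional

import torch


def decode_stub(q: torch.Tensor,
                kv_indptr: Optional[torch.Tensor] = None,
                sm_scale: float = 1.0) -> float:
    # All trailing parameters default to something sensible, so older call
    # sites that only pass `q` keep working unchanged.
    return sm_scale


q = torch.zeros(4)
decode_stub(q)                # existing call site, still valid
decode_stub(q, sm_scale=0.5)  # newer call site opts in by keyword
```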