Conversation

@gcunhase (Contributor) commented Jan 16, 2026

What does this PR do?

Type of change: Bug fix

Overview: Fixed the output precision of ConstantOfShape layers in models with custom ops.

Usage

$ python -m modelopt.onnx.quantization --onnx_path=$MODEL_NAME.onnx

Testing

See bug 5763424.

Before your PR is "Ready for review"

  • Make sure you read and follow Contributor guidelines and your commits are signed.
  • Is this change backward compatible?: Yes
  • Did you write any new necessary tests?: No
  • Did you add or update any necessary documentation?: No
  • Did you update Changelog?: No

Additional Information

This issue only affects models with custom ops.

Summary by CodeRabbit

  • Bug Fixes
    • Improved type propagation handling for ConstantOfShape operations in ONNX autocast, ensuring correct precision type conversion across related operations.


@gcunhase requested a review from a team as a code owner January 16, 2026 01:15
@gcunhase requested a review from ajrasane January 16, 2026 01:15
@coderabbitai bot commented Jan 16, 2026

📝 Walkthrough

A single-line conditional branch is added to handle the ConstantOfShape operation type in the type propagation logic of the ONNX precision converter, returning the dtype of its value attribute to enable correct lower-precision type propagation.

Changes

  • ConstantOfShape Type Propagation — modelopt/onnx/autocast/precisionconverter.py: Added a conditional branch in _get_np_type to extract and return the dtype from the ConstantOfShape node's value attribute, enabling proper type inference for this operation alongside the existing Cast, DequantizeLinear, and QuantizeLinear handlers.

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~3 minutes

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning — Docstring coverage is 50.00%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)
  • Description Check ✅ Passed — Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed — The title clearly and specifically identifies the main change: fixing ConstantOfShape layer output precision in ONNX autocast.




📜 Recent review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e6e4efd and ba52d57.

📒 Files selected for processing (1)
  • modelopt/onnx/autocast/precisionconverter.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
  • GitHub Check: linux
  • GitHub Check: wait-checks / wait
  • GitHub Check: wait-checks / wait
  • GitHub Check: build-docs
  • GitHub Check: code-quality




@coderabbitai bot left a comment
Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
modelopt/onnx/autocast/precisionconverter.py (1)

343-352: Add guard for optional ConstantOfShape value attribute.

The value attribute of ConstantOfShape is optional per the ONNX specification; when omitted, the output defaults to a scalar 0 of type float32. Direct access via node.attrs["value"] raises a KeyError when the attribute is missing, breaking type propagation for otherwise valid ONNX models.

Use node.attrs.get("value") with a fallback to np.float32, and handle both gs.Constant and ndarray value types:

Proposed fix:

```diff
 elif node.op == "ConstantOfShape":
-    return node.attrs["value"].dtype
+    value = node.attrs.get("value")
+    if value is None:
+        return np.float32  # ONNX spec default when 'value' is omitted
+    if isinstance(value, gs.Constant):
+        return value.values.dtype
+    if hasattr(value, "dtype"):
+        return value.dtype
+    return None
```
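The guarded lookup in the proposed fix can be exercised on its own. The sketch below is stdlib-only and hypothetical: GsConstant mimics an onnx-graphsurgeon gs.Constant (tensor stored under .values), RawTensor mimics a bare ndarray-like value carrying a .dtype, and string dtypes stand in for NumPy types.

```python
# Stdlib-only mock of the guarded ConstantOfShape dtype lookup.
# GsConstant mimics gs.Constant; RawTensor mimics an ndarray-like value.

class GsConstant:
    def __init__(self, values):
        self.values = values  # tensor payload carrying the dtype


class RawTensor:
    def __init__(self, dtype):
        self.dtype = dtype


def constant_of_shape_dtype(attrs):
    value = attrs.get("value")
    if value is None:
        return "float32"  # ONNX default: scalar float32 zero
    if isinstance(value, GsConstant):
        return value.values.dtype  # dtype lives on the wrapped tensor
    if hasattr(value, "dtype"):
        return value.dtype  # plain tensor-like value
    return None  # unrecognized attribute type


print(constant_of_shape_dtype({"value": GsConstant(RawTensor("float16"))}))  # float16
print(constant_of_shape_dtype({"value": RawTensor("int64")}))                # int64
print(constant_of_shape_dtype({}))                                           # float32
```

The ordering matters: the None check must come first so that a model omitting the attribute falls back to the spec default instead of raising, which is exactly the failure mode the review comment flags.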

@codecov bot commented Jan 16, 2026

Codecov Report

❌ Patch coverage is 0% with 2 lines in your changes missing coverage. Please review.
✅ Project coverage is 74.24%. Comparing base (406c18c) to head (ba52d57).
⚠️ Report is 4 commits behind head on main.

Files with missing lines Patch % Lines
modelopt/onnx/autocast/precisionconverter.py 0.00% 2 Missing ⚠️
Additional details and impacted files
@@           Coverage Diff           @@
##             main     #789   +/-   ##
=======================================
  Coverage   74.23%   74.24%           
=======================================
  Files         192      192           
  Lines       19033    19035    +2     
=======================================
+ Hits        14129    14132    +3     
+ Misses       4904     4903    -1     

@ajrasane ajrasane merged commit c1956b8 into NVIDIA:main Jan 16, 2026
36 checks passed
kevalmorabia97 pushed a commit that referenced this pull request Jan 19, 2026
…789)

Signed-off-by: gcunhase <[email protected]>