Fix HFFT tests to use complex input tensors #1533
alinpahontu2912 wants to merge 2 commits into main
Conversation
PyTorch's fft.hfft/hfft2/hfftn require complex input tensors. The tests were passing real-valued tensors (Float32/Float64), which causes 'NYI' errors in newer PyTorch versions. Fixed by using complex64/complex128 input types and ihfft for inverse verification. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Pull request overview
This PR fixes the HFFT test suite to use complex input tensors as required by PyTorch's hfft/hfft2/hfftn functions. These functions expect Hermitian symmetric complex inputs and produce real-valued outputs. The tests were incorrectly using real-valued Float32/Float64 input tensors, which causes 'NYI' (Not Yet Implemented) errors in newer PyTorch versions.
Changes:
- Updated input tensor creation in HFFT tests to use complex64/complex128 instead of Float32/Float64 scalar types
- Changed inverse verification from ifft to ihfft (the correct inverse of hfft)
- Fixed output dtype assertions to explicitly check for Float32/Float64 instead of comparing with input dtype
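In TorchSharp terms, the corrected pattern is roughly the following sketch (variable names are illustrative; the dtypes, shapes, and API calls mirror the assertions in the diffs below):

// hfft takes a complex tensor interpreted as one half of a Hermitian-symmetric
// signal and returns a real tensor; with default sizes, a length-n input
// produces an output of length 2*(n-1).
var input = torch.arange(4, complex64);      // ComplexFloat32, length 4
var spectrum = fft.hfft(input);              // Float32, length 2*(4-1) = 6

// ihfft (not ifft) is the matching inverse: it maps the real output back to a
// complex half-signal of length 6/2 + 1 = 4.
var recovered = fft.ihfft(spectrum);         // ComplexFloat32, length 4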
@copilot fix the comment
@alinpahontu2912 I've opened a new pull request, #1539, to work on those changes. Once the pull request is ready, I'll request review from you.
The test method was incorrectly calling fft.hfft2() instead of fft.hfftn(), and using ihfft2() instead of ihfftn() for verification. Updated the TestOf attribute to match the correct function being tested. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Pull request overview
Copilot reviewed 1 out of 1 changed files in this pull request and generated 2 comments.
 var input = torch.arange(4, complex64);
 var output = fft.hfft(input);
 Assert.Equal(6, output.shape[0]);
 Assert.Equal(ScalarType.Float32, output.dtype);

-var inverted = fft.ifft(output);
+var inverted = fft.ihfft(output);
 Assert.Equal(ScalarType.ComplexFloat32, inverted.dtype);
Float32HFFT now uses complex input and is functionally identical to the existing ComplexFloat32HFFT test (and similarly Float64HFFT duplicates ComplexFloat64HFFT later in this file). This introduces redundant coverage and makes the intent of the “Float32/Float64” naming unclear. Consider removing the duplicate tests or renaming/adjusting them so each case covers a distinct scenario (e.g., only keep the Complex* variants, or have one set validate round-trip/value behavior).
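For illustration, a value-level round-trip check along the lines suggested above could look like the hypothetical test below. The test name and tolerances are made up, and torch.allclose is assumed to behave as in PyTorch; the arange input has purely real entries, so the Hermitian constraints on its first and last elements hold and the round trip should recover it up to floating-point error.

[Fact]
[TestOf(nameof(fft.hfft))]
public void HFFTRoundTrip()                // hypothetical test, not part of this PR
{
    var input = torch.arange(4, complex64);
    var spectrum = fft.hfft(input);        // Float32, length 6
    var roundTrip = fft.ihfft(spectrum);   // ComplexFloat32, length 4
    // Compare values, not just shapes and dtypes; rtol/atol are illustrative.
    Assert.True(torch.allclose(input, roundTrip, 1e-5, 1e-5));
}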
@@ -7156,26 +7156,26 @@ public void Float32HFFT2()
 [TestOf(nameof(fft.hfft2))]
 public void Float64HFFT2()
 {
-    var input = torch.rand(new long[] { 5, 5, 5, 5 }, float64);
+    var input = torch.rand(new long[] { 5, 5, 5, 5 }, complex128);
     var output = fft.hfft2(input);
     Assert.Equal(new long[] { 5, 5, 5, 8 }, output.shape);
-    Assert.Equal(input.dtype, output.dtype);
+    Assert.Equal(ScalarType.Float64, output.dtype);

     var inverted = fft.ihfft2(output);
     Assert.Equal(new long[] { 5, 5, 5, 5 }, inverted.shape);
     Assert.Equal(ScalarType.ComplexFloat64, inverted.dtype);
 }

 [Fact]
-[TestOf(nameof(fft.hfft2))]
+[TestOf(nameof(fft.hfftn))]
 public void Float32HFFTN()
 {
-    var input = torch.rand(new long[] { 5, 5, 5, 5 });
-    var output = fft.hfft2(input);
+    var input = torch.rand(new long[] { 5, 5, 5, 5 }, complex64);
+    var output = fft.hfftn(input);
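The diff is truncated here. Based on the second commit message, the remainder of Float32HFFTN presumably switches the inverse verification to ihfftn, roughly along these lines (assertion values inferred from the Float64HFFT2 test above, so the actual PR lines may differ):

Assert.Equal(new long[] { 5, 5, 5, 8 }, output.shape);     // only the last transformed dim changes: 2*(5-1) = 8
Assert.Equal(ScalarType.Float32, output.dtype);            // real output from complex64 input

var inverted = fft.ihfftn(output);                         // ihfftn, not ihfft2, per the second commit
Assert.Equal(new long[] { 5, 5, 5, 5 }, inverted.shape);   // last dim back to 8/2 + 1 = 5
Assert.Equal(ScalarType.ComplexFloat32, inverted.dtype);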
These tests (Float32HFFT2/Float64HFFT2/Float32HFFTN/Float64HFFTN) now pass complex inputs (complex64/complex128). In this file, other FFT tests use the “Float32/Float64” prefix to indicate the input dtype (e.g., Float32FFT2, Float32RFFTN). To avoid confusion, consider renaming these to reflect the complex input type (or otherwise aligning the naming scheme).