
Conversation

@nothingmuch (Collaborator)

This ensures that the OHTTP relay will not be able to distinguish v1 from v2 responses to the receiver.

@coveralls (Collaborator)

Pull Request Test Coverage Report for Build 17336830025

Details

  • 0 of 0 changed or added relevant lines in 0 files are covered.
  • No unchanged relevant lines lost coverage.
  • Overall coverage remained the same at 85.916%

Totals:
  • Change from base Build 17332008623: 0.0%
  • Covered Lines: 8168
  • Relevant Lines: 9507

💛 - Coveralls

@nothingmuch requested a review from DanGould on August 30, 2025 01:26
@DanGould (Contributor) left a comment

I understand the desire to reduce the size, but I'm not sure 7168 is the right number. This is major bikeshedding anyhow.

Unless this is enforced by sender clients, does this really help? Yes, backwards-compatible v2 receivers from other implementations should respond with a standard size; ours use ohttp_encapsulate, which already pads to the same 8192-byte length when responding to v1.

I did confirm that PjV2MsgA/PjV2MsgB are both 7168, so this payload would be the same size, but I'm not sure it would produce the same-sized OHTTP request. I think v1 buffers up 8104 bytes to be the same size as BHTTP requests, so the underlying HTTP content would need to be that minus whatever control/info/header information BHTTP encapsulates.

See:
let mut bhttp_req = [0u8; PADDED_BHTTP_REQ_BYTES];
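
As a rough illustration of the arithmetic being discussed, here is a minimal sketch. The constant name, its 8192-byte value, and the overhead parameter are assumptions for illustration only, not the crate's actual definitions.

```rust
// Illustrative sketch only: the padded size below is assumed from the
// 8192-byte figure mentioned above, not taken from the crate.
const ASSUMED_PADDED_PLAINTEXT_BYTES: usize = 8192; // OHTTP plaintext padded length (assumed)

/// Bytes left for the inner HTTP content once BHTTP framing
/// (request control data, header section, length prefixes) is subtracted.
fn max_http_content(bhttp_framing_overhead: usize) -> usize {
    ASSUMED_PADDED_PLAINTEXT_BYTES.saturating_sub(bhttp_framing_overhead)
}
```

The open question in the comment above is exactly what that framing overhead is for a given request, which is why equal inner payload sizes do not automatically yield equal-sized OHTTP requests.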

@nothingmuch (Collaborator, Author) commented Sep 15, 2025

I should have rewritten the commit message; it predates my verifying that the directory pads responses, details below. (Originally I was going to open another PR for that, then realized I didn't need to, but forgot to update this one.)

If the sender is v1 and sends a too-large request, the directory will truncate it when responding to the receiver's GET request:

bhttp_bytes.resize(BHTTP_REQ_BYTES, 0);

Even if it weren't truncated, the receiver would still not be able to reply, because the response will almost certainly be larger (if the sender's inputs have very large witness data, e.g. labitbus, then this might not be the case #fixthefilters).

Anyway, for this reason it's better to reject these requests earlier, giving an error to the sender, instead of handing the receiver a truncated request. A truncated request will generate a content length mismatch error in the receiver's state machine, and that's not a replyable error.
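
For concreteness, a minimal sketch of that early-rejection idea follows; the function, error type, and limit are hypothetical stand-ins, not the directory's actual code.

```rust
// Hypothetical sketch: validate the request body size up front and return an
// error to the sender rather than padding/truncating the request later.
const ASSUMED_MAX_REQUEST_BODY_BYTES: usize = 7168; // illustrative limit

#[derive(Debug)]
enum RequestError {
    PayloadTooLarge { len: usize, max: usize },
}

fn check_request_size(body: &[u8]) -> Result<(), RequestError> {
    if body.len() > ASSUMED_MAX_REQUEST_BODY_BYTES {
        // Fail fast: the sender gets an explicit error (e.g. HTTP 413) instead
        // of the receiver later seeing a silently truncated request.
        return Err(RequestError::PayloadTooLarge {
            len: body.len(),
            max: ASSUMED_MAX_REQUEST_BODY_BYTES,
        });
    }
    Ok(())
}
```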

As for the actual size, I chose that for simplicity and consistency with the requests. Technically it could be allowed to be up to BHTTP_REQ_BYTES minus the response overhead (status line and empty headers?) and still be representable without truncation, but because the receiver needs to reply to it with a proposal request, I don't think we can put a precise number on that, given that the receiver adds its own inputs with arbitrary weight.

Setting it to BHTTP_REQ_BYTES minus the overhead seems less conservative as far as facilitating a payjoin, but I guess it ensures that the receiver can at least receive the request in full and choose to broadcast the fallback if it fails to construct a response that fits.

@nothingmuch (Collaborator, Author) commented Sep 15, 2025

Hmm, actually it might not even raise a content length mismatch, since that's set by the directory; so I think it's just silently truncated, PSBT parsing might fail, and trailing query params may be silently omitted without error?

Ah no, I remembered correctly: first the full body is written, and then the serialized BHTTP response is possibly truncated:

.write_bhttp(bhttp::Mode::KnownLength, &mut bhttp_bytes)
so I think it will generate a content length mismatch.
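
To illustrate why truncating a known-length serialization surfaces as a content length mismatch, here is a simplified stand-in that uses a plain 4-byte length prefix instead of real BHTTP framing; everything in it is illustrative, not the directory's code.

```rust
use std::convert::TryInto;

// Simplified stand-in for known-length framing: 4-byte length prefix + body.
// Real BHTTP framing differs, but the failure mode is analogous: the declared
// length survives truncation while part of the body does not.
fn write_known_length(body: &[u8], out: &mut Vec<u8>) {
    out.extend_from_slice(&(body.len() as u32).to_be_bytes());
    out.extend_from_slice(body);
}

fn read_known_length(buf: &[u8]) -> Result<&[u8], &'static str> {
    let declared = u32::from_be_bytes(buf[..4].try_into().unwrap()) as usize;
    buf.get(4..4 + declared)
        .ok_or("content length mismatch: body shorter than declared length")
}

fn main() {
    let mut framed = Vec::new();
    write_known_length(&[0xAB; 9000], &mut framed); // full body written first...
    framed.resize(8192, 0);                         // ...then truncated to the fixed buffer size
    assert!(read_known_length(&framed).is_err());   // the parser sees a length mismatch
}
```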

@nothingmuch mentioned this pull request on Sep 17, 2025