webrtc: Support FIN/FIN_ACK handshake for substream shutdown #4
Conversation
Force-pushed from 90d927a to 2fc4a2c
Force-pushed from 2fc4a2c to 6de9d33
…f inbound to self
    let current_state = *self.state.lock();

    match current_state {
I think the correct way to keep the lock during the match is to replace

    let current_state = *self.state.lock();
    match current_state {

with

    let current_state = self.state.lock();
    match *current_state {

Otherwise, the lock is released while you are reading the state, which could lead to a race condition.
The state gets updated for certain matched variants, so the mutex needs to be locked again later anyway. It's actually fine if the lock isn't held during the match, since the state would never get updated by another thread: `poll_shutdown` is never called concurrently.
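To illustrate the pattern being discussed, here is a minimal self-contained sketch; the `Substream` type, the `std::sync::Mutex`, and `shutdown_step` are stand-ins, not the actual PR code:

```rust
use std::sync::Mutex;

#[derive(Clone, Copy, Debug, PartialEq)]
enum State {
    Open,
    Closing,
    FinSent,
    FinAcked,
}

struct Substream {
    state: Mutex<State>,
}

impl Substream {
    // The guard returned by lock() is dropped at the end of the first statement,
    // so the match only sees a copy of the state. Arms that transition re-acquire
    // the lock to write the new value; holding the guard across that write would
    // deadlock a non-reentrant mutex. Reading a copy is only safe here because
    // this is never called concurrently, as noted above.
    fn shutdown_step(&self) {
        let current_state = *self.state.lock().unwrap();
        match current_state {
            State::Open | State::Closing => {
                // ... write the FIN flag here, then record the transition ...
                *self.state.lock().unwrap() = State::FinSent;
            }
            State::FinSent | State::FinAcked => {
                // nothing to do in this sketch
            }
        }
    }
}
```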
When testing this with a libp2p dialer and a litep2p listener, I noticed the following:

- Apparently libp2p does not send FIN_ACK; it doesn't even exist in their flag enum. Just an observation, shutdown still works due to the timeout.
- litep2p does not use the `Substream` type for the noise handshake datachannel, and therefore the flag handling code added in this PR is not executed for it. Additionally, right after sending the last handshake message, litep2p closes the datachannel directly. I tried commenting out this line, but that caused a panic in `WebRtcConnection::on_inbound_data()`. The spec just says "On success of the authentication handshake, the used datachannel is closed". It seems more logical to me that the dialer closes the channel. But given that connection establishment works, there is only ever one handshake channel per connection, and changing this on the litep2p side seems to be a bit involved, I think this is a low-priority issue.
- I added a call to close the perf substream in libp2p here, as well as logging calls (`warn`) for when libp2p sends the `FIN` flag, when litep2p receives it, and when litep2p sends `FIN_ACK` back. I see the first message, but not the latter two. I changed this log from `trace` to `warn` in order to see all flags of incoming messages, and the flag always seems to be `None`. (A sketch of the expected flag handling follows below.)
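For context, roughly what the flag handling under test is expected to do: on receiving `FIN` the peer replies with `FIN_ACK`, and on receiving `FIN_ACK` it completes its own pending shutdown. This is a hand-written sketch; `Flag`, `Message`, and `on_inbound_message` are illustrative names, not the actual libp2p or litep2p types.

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
enum Flag {
    Fin,         // sender will write no more data
    StopSending,
    ResetStream,
    FinAck,      // acknowledges a previously received FIN
}

struct Message {
    flag: Option<Flag>,
    payload: Vec<u8>,
}

// send_flag stands in for whatever writes a flag-only message back on the channel.
fn on_inbound_message(msg: &Message, send_flag: &mut dyn FnMut(Flag)) {
    match msg.flag {
        // Remote finished writing: acknowledge so its poll_shutdown can complete.
        Some(Flag::Fin) => send_flag(Flag::FinAck),
        // Remote acknowledged our FIN: our pending shutdown can now resolve.
        Some(Flag::FinAck) => { /* wake the shutdown future, not shown */ }
        // Other flags and plain data messages are handled elsewhere.
        _ => {}
    }
}
```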
Full dialer log:
$ SSLKEYLOGFILE=key.log RUST_LOG="info" cargo run --bin libp2p-perf -- client --transport-layer "webrtc" --server-address "/ip4/127.0.0.1/udp/8888/webrtc-direct/certhash/uEiCFl4FpV9gPv_LDA9lVLLQ4QM71y7jPXdH9qPGYBPnKVQ/p2p/12D3KooWBpZHDZu7YSbvPaPXKhkRNJvR7MkTJMQQAVBKx9mCqz3q" --upload-bytes 1024 --download-bytes 1024
Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.19s
Running `target/debug/libp2p-perf client --transport-layer webrtc --server-address /ip4/127.0.0.1/udp/8888/webrtc-direct/certhash/uEiCFl4FpV9gPv_LDA9lVLLQ4QM71y7jPXdH9qPGYBPnKVQ/p2p/12D3KooWBpZHDZu7YSbvPaPXKhkRNJvR7MkTJMQQAVBKx9mCqz3q --upload-bytes 1024 --download-bytes 1024`
2026-01-13T11:20:44.825444Z INFO libp2p_swarm: local_peer_id=12D3KooW9s5y8JzwVMYB4Mw13d2ZLuG8cYcjP5AfDdcphhYoWAid
2026-01-13T11:20:44.828496Z INFO libp2p_perf: Event: Some(NewListenAddr { listener_id: ListenerId(1), address: /ip4/127.0.0.1/udp/51413/webrtc-direct/certhash/uEiCNhSA3RD-4v77WuvuKWE7Spa6C4cPln-mWv0M0pSWMSg })
2026-01-13T11:20:44.828655Z INFO libp2p_perf: Event: Some(NewListenAddr { listener_id: ListenerId(1), address: /ip4/192.168.1.146/udp/51413/webrtc-direct/certhash/uEiCNhSA3RD-4v77WuvuKWE7Spa6C4cPln-mWv0M0pSWMSg })
2026-01-13T11:20:44.828683Z INFO libp2p_perf: Event: Some(NewListenAddr { listener_id: ListenerId(1), address: /ip4/192.168.139.3/udp/51413/webrtc-direct/certhash/uEiCNhSA3RD-4v77WuvuKWE7Spa6C4cPln-mWv0M0pSWMSg })
2026-01-13T11:20:44.828709Z INFO libp2p_perf: Event: Some(NewListenAddr { listener_id: ListenerId(1), address: /ip4/169.254.133.87/udp/51413/webrtc-direct/certhash/uEiCNhSA3RD-4v77WuvuKWE7Spa6C4cPln-mWv0M0pSWMSg })
2026-01-13T11:20:44.831052Z INFO webrtc_ice::mdns: mDNS is using 0.0.0.0:5353 as dest_addr
2026-01-13T11:20:44.832012Z INFO webrtc_mdns::conn: Looping and listening Ok(0.0.0.0:5353)
2026-01-13T11:20:44.832532Z INFO webrtc::peer_connection: signaling state changed to have-local-offer
2026-01-13T11:20:44.833122Z INFO webrtc::peer_connection: signaling state changed to stable
2026-01-13T11:20:44.833685Z INFO webrtc_ice::agent::agent_internal: [controlling]: Setting new connection state: Checking
2026-01-13T11:20:44.833711Z INFO webrtc::peer_connection: ICE connection state changed: checking
2026-01-13T11:20:44.833723Z INFO webrtc::peer_connection: peer connection state changed: connecting
2026-01-13T11:20:45.041060Z INFO webrtc_ice::agent::agent_internal: [controlling]: Setting new connection state: Connected
2026-01-13T11:20:45.041359Z INFO webrtc::peer_connection: ICE connection state changed: connected
2026-01-13T11:20:45.075581Z INFO webrtc::peer_connection: peer connection state changed: connected
2026-01-13T11:20:45.287422Z WARN libp2p_webrtc_utils::stream: sending FIN flag to close stream
2026-01-13T11:20:45.288343Z INFO libp2p_perf: Event: Some(ConnectionEstablished { peer_id: PeerId("12D3KooWBpZHDZu7YSbvPaPXKhkRNJvR7MkTJMQQAVBKx9mCqz3q"), connection_id: ConnectionId(1), endpoint: Dialer { address: /ip4/127.0.0.1/udp/8888/webrtc-direct/certhash/uEiCFl4FpV9gPv_LDA9lVLLQ4QM71y7jPXdH9qPGYBPnKVQ/p2p/12D3KooWBpZHDZu7YSbvPaPXKhkRNJvR7MkTJMQQAVBKx9mCqz3q, role_override: Dialer, port_use: Reuse }, num_established: 1, concurrent_dial_errors: Some([]), established_in: 459.91925ms })
2026-01-13T11:20:45.293837Z INFO litep2p-perf: Uploaded 1.00 KiB bytes in 0.0000s bandwidth 546.67 Mbit/s
2026-01-13T11:20:45.297546Z INFO litep2p-perf: Downloaded 1.00 KiB bytes in 0.0036s bandwidth 2.20 Mbit/s
2026-01-13T11:20:45.297613Z WARN libp2p_webrtc_utils::stream: sending FIN flag to close stream
2026-01-13T11:20:45.297851Z INFO libp2p_perf: Event: Some(Behaviour(Event { id: 1, result: Ok(()) }))
2026-01-13T11:20:45.298186Z WARN webrtc_sctp::association: [] failed to write packets on net_conn: io error: io error: oneshot canceled
Full listener log:
$ RUST_LOG="info" cargo run --bin litep2p-perf -- server --listen-address "/ip4/127.0.0.1/udp/8888/webrtc-direct" --node-key "secret" --transport-layer webrtc
Compiling litep2p v0.12.3 (/Users/haiko/code/polkadot/litep2p)
Compiling litep2p-perf v0.1.0 (/Users/haiko/code/polkadot/litep2p-perf/litep2p)
Finished `dev` profile [unoptimized + debuginfo] target(s) in 3.75s
Running `target/debug/litep2p-perf server --listen-address /ip4/127.0.0.1/udp/8888/webrtc-direct --node-key secret --transport-layer webrtc`
2026-01-13T11:20:35.158021Z INFO litep2p_perf: Using WebRTC transport layer
2026-01-13T11:20:35.161763Z INFO litep2p::webrtc: start webrtc transport listen_addresses=["/ip4/127.0.0.1/udp/8888/webrtc-direct"]
2026-01-13T11:20:35.173753Z INFO litep2p_perf: Server listening on address: ["/ip4/127.0.0.1/udp/8888/webrtc-direct/certhash/uEiCFl4FpV9gPv_LDA9lVLLQ4QM71y7jPXdH9qPGYBPnKVQ/p2p/12D3KooWBpZHDZu7YSbvPaPXKhkRNJvR7MkTJMQQAVBKx9mCqz3q"]
2026-01-13T11:20:45.085174Z INFO litep2p_perf: Event: ConnectionEstablished { peer: PeerId("12D3KooW9s5y8JzwVMYB4Mw13d2ZLuG8cYcjP5AfDdcphhYoWAid"), endpoint: Listener { address: "/ip4/127.0.0.1/udp/51413/webrtc-direct/certhash/uEiCNhSA3RD-4v77WuvuKWE7Spa6C4cPln-mWv0M0pSWMSg/p2p/12D3KooW9s5y8JzwVMYB4Mw13d2ZLuG8cYcjP5AfDdcphhYoWAid", connection_id: ConnectionId(1) } }
2026-01-13T11:20:45.289427Z WARN str0m: Drop ChannelData event for id: 0
2026-01-13T11:20:45.293199Z WARN str0m: Drop BufferedAmountLow for id: 0
2026-01-13T11:20:45.295765Z WARN litep2p::webrtc::connection: handle inbound message peer=PeerId("12D3KooW9s5y8JzwVMYB4Mw13d2ZLuG8cYcjP5AfDdcphhYoWAid") channel_id=ChannelId(1) data_len=8
2026-01-13T11:20:45.295947Z WARN litep2p::webrtc::connection: handle inbound message peer=PeerId("12D3KooW9s5y8JzwVMYB4Mw13d2ZLuG8cYcjP5AfDdcphhYoWAid") channel_id=ChannelId(1) data_len=1024
2026-01-13T11:20:45.296132Z WARN litep2p::webrtc::connection: handle inbound message peer=PeerId("12D3KooW9s5y8JzwVMYB4Mw13d2ZLuG8cYcjP5AfDdcphhYoWAid") channel_id=ChannelId(1) data_len=8
After moving the substream closure to before receiving the download bytes (right after this line), I see the flag and the log messages show up on the litep2p side.
Full dialer log:
Full listener log:
I'm not sure what to make of this. 🤔
Force-pushed from b413ef4 to 8578487
Co-authored-by: Copilot <[email protected]>
Force-pushed from d9d9225 to a4effe2
Force-pushed from 1127166 to 73f76ec
Force-pushed from 2d78d7e to 2440797
Upstream PR has been opened. Closing this to avoid email spam.
Summary
Implements proper FIN/FIN_ACK handshake for WebRTC substream shutdown according to the libp2p WebRTC-direct specification, with timeout handling to prevent indefinite waiting.
Closes paritytech#476
Changes
1. Proper FIN/FIN_ACK Handshake
- `poll_shutdown` now waits for the FIN_ACK acknowledgment before completing

2. Timeout Protection
3. Shutdown Requirements Fulfilled
- New writes are blocked once shutdown begins (`Closing` state)

4. Code Refactoring
- Extended the `Message` event variant to include an optional flag parameter (instead of a separate `MessageWithFlags` variant)
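Roughly what that refactoring looks like in shape (the enum and field names below are illustrative, not the actual litep2p definitions):

```rust
// Before: flagged messages needed their own event variant.
enum SubstreamEvent {
    Message { payload: Vec<u8> },
    MessageWithFlags { payload: Vec<u8>, flag: i32 },
}

// After: a single Message variant carries an optional flag;
// plain data messages simply set flag to None.
enum SubstreamEventRefactored {
    Message { payload: Vec<u8>, flag: Option<i32> },
}
```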
Implementation Details

The shutdown process now follows this state machine:
- `Open` → `Closing` (blocks new writes)
- `Closing` → `FinSent`
- `FinSent` → `FinAcked`

The timeout task is spawned immediately when entering the `FinSent` state and wakes the shutdown future after 5 seconds if no FIN_ACK is received.
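As a rough illustration of that flow, here is a self-contained sketch of the state machine and the FIN_ACK/timeout wake-ups; the names and structure are simplified stand-ins rather than the actual implementation:

```rust
use std::task::{Context, Poll, Waker};

#[derive(Clone, Copy, Debug, PartialEq)]
enum State {
    Open,     // reads and writes allowed
    Closing,  // new writes are blocked
    FinSent,  // FIN written, waiting for FIN_ACK (or the 5-second timeout)
    FinAcked, // shutdown complete
}

struct Shutdown {
    state: State,
    waker: Option<Waker>,
    timed_out: bool,
}

impl Shutdown {
    // Called from the I/O side when a message with the FIN_ACK flag arrives.
    fn on_fin_ack(&mut self) {
        self.state = State::FinAcked;
        if let Some(waker) = self.waker.take() {
            waker.wake();
        }
    }

    // Called by the timeout task if 5 seconds pass without a FIN_ACK.
    fn on_timeout(&mut self) {
        self.timed_out = true;
        if let Some(waker) = self.waker.take() {
            waker.wake();
        }
    }

    // Drives Open -> Closing -> FinSent, then parks until FIN_ACK or timeout.
    fn poll_shutdown(&mut self, cx: &mut Context<'_>) -> Poll<()> {
        loop {
            match self.state {
                State::Open => self.state = State::Closing,
                State::Closing => {
                    // ... write the FIN flag to the datachannel here ...
                    self.state = State::FinSent;
                    // ... spawn a task that calls on_timeout() after 5 seconds ...
                }
                State::FinSent if self.timed_out => return Poll::Ready(()),
                State::FinSent => {
                    self.waker = Some(cx.waker().clone());
                    return Poll::Pending;
                }
                State::FinAcked => return Poll::Ready(()),
            }
        }
    }
}
```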
Testing

Compatibility