
Conversation

@timwu20
Collaborator

@timwu20 timwu20 commented Jan 9, 2026

Summary

Implements a proper FIN/FIN_ACK handshake for WebRTC substream shutdown, as specified by the libp2p WebRTC-direct spec, with timeout handling to prevent indefinite waiting.

Closes paritytech#476

Changes

1. Proper FIN/FIN_ACK Handshake

  • poll_shutdown now waits for FIN_ACK acknowledgment before completing
  • Added waker mechanism to wake shutdown task when FIN_ACK is received
  • Ensures graceful shutdown with delivery guarantees

2. Timeout Protection

  • Added 5-second timeout for FIN_ACK (matching go-libp2p's behavior)
  • Prevents indefinite waiting if remote never sends FIN_ACK
  • Automatically completes shutdown after timeout to avoid resource leaks

3. Shutdown Requirements Fulfilled

  • ✅ Stops accepting new writes during shutdown (transitions through Closing state)
  • ✅ Flushes pending data before sending FIN
  • ✅ Sends FIN flag with empty payload
  • ✅ Waits for FIN_ACK acknowledgment from remote
  • ✅ Triggers data channel closure and resource cleanup

4. Code Refactoring

  • Unified Message event variant to include optional flag parameter (instead of separate MessageWithFlags variant)
  • Unified encode functions into single function with optional flag parameter
  • Fixed flag handling bug: changed from bitwise operations to equality checks
  • Renamed plural "flags" to singular "flag" throughout codebase for consistency with protobuf schema
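
The flag-handling fix in the last section can be illustrated with a small sketch (hypothetical names, not the PR's actual code). In the WebRTC-direct protobuf schema, `Flag` is an enum, not a bitmask — `FIN` has value 0, so a bitwise test like `flag & FIN != 0` can never match it; messages carry at most one flag and must be compared by equality:

```rust
// Flag values as in the libp2p WebRTC-direct message schema.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum Flag {
    Fin = 0,
    StopSending = 1,
    ResetStream = 2,
    FinAck = 3,
}

// Equality check, not bitwise: the discriminants are enum values,
// not bit positions, and `Fin` is 0.
fn handle_flag(flag: Option<Flag>) -> &'static str {
    match flag {
        Some(Flag::Fin) => "remote closed its write side; reply with FIN_ACK",
        Some(Flag::FinAck) => "our FIN was acknowledged",
        Some(Flag::StopSending) => "remote asked us to stop writing",
        Some(Flag::ResetStream) => "remote reset the stream",
        None => "plain data message",
    }
}
```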

Implementation Details

The shutdown process now follows this state machine:

  1. Open → Closing (blocks new writes)
  2. Flush pending data
  3. Send FIN flag
  4. Closing → FinSent
  5. Wait for FIN_ACK or timeout (5 seconds)
  6. FinSent → FinAcked
  7. Drop substream → channel closes → data channel cleanup

The timeout task is spawned immediately when entering FinSent state and wakes the shutdown future after 5 seconds if no FIN_ACK is received.
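
Step 5 can be modeled roughly like this (a simplified synchronous sketch with hypothetical names, not litep2p's actual waker-based implementation): after sending FIN, wait for the FIN_ACK signal, but complete the shutdown anyway once the timeout elapses so a remote that never acknowledges cannot stall shutdown:

```rust
use std::sync::mpsc;
use std::time::Duration;

#[derive(Debug, PartialEq)]
enum ShutdownResult {
    FinAcked,
    TimedOut,
}

// Wait for a FIN_ACK notification on `rx`, or give up after `timeout`.
// Either way the caller proceeds with shutdown, mirroring the 5-second
// fallback described above.
fn wait_for_fin_ack(rx: mpsc::Receiver<()>, timeout: Duration) -> ShutdownResult {
    match rx.recv_timeout(timeout) {
        Ok(()) => ShutdownResult::FinAcked,
        Err(_) => ShutdownResult::TimedOut, // complete shutdown anyway
    }
}
```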

Testing

  • Added comprehensive tests for FIN/FIN_ACK handshake
  • Added test for timeout behavior when FIN_ACK is not received

Compatibility

@timwu20 timwu20 changed the title Tim/support flags webrtc: Support FIN/FIN_ACK handshake for substream shutdown Jan 12, 2026
@timwu20 timwu20 marked this pull request as ready for review January 12, 2026 18:53
@timwu20 timwu20 requested a review from dimartiro January 12, 2026 22:22
Comment on lines +306 to +308
let current_state = *self.state.lock();

match current_state {

@dimartiro dimartiro Jan 12, 2026

I think the correct way to keep the lock during the match is:

Suggested change
let current_state = *self.state.lock();
match current_state {
let current_state = self.state.lock();
match *current_state {

Otherwise, the lock will be released while you are reading the state, which could lead to a race condition

Collaborator Author

The state gets updated for certain matched variants, so the lock would need to be released and re-acquired later anyway. It's actually fine if the lock isn't held across the match: the state is never updated by another thread, because poll_shutdown is never called concurrently.
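
The two locking styles under discussion can be sketched like this (illustrative, hypothetical names): copying the value out drops the guard at the end of the statement, so the update re-acquires the lock — which, per the reply above, is safe when no other thread mutates the state concurrently:

```rust
use std::sync::Mutex;

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum State {
    Open,
    FinSent,
}

fn poll_shutdown_step(state: &Mutex<State>) -> State {
    // The guard returned by `lock()` is a temporary here, released as soon
    // as the value is copied out. There is a window where another thread
    // could change the state before the re-lock below — fine only because
    // poll_shutdown is never called concurrently.
    let current = *state.lock().unwrap();
    match current {
        State::Open => {
            // Re-acquire the lock to perform the update.
            *state.lock().unwrap() = State::FinSent;
            State::FinSent
        }
        other => other,
    }
}
```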

@haikoschol haikoschol left a comment

When testing this with a libp2p dialer and a litep2p listener, I noticed the following:

  1. Apparently libp2p does not send FIN_ACK. It doesn't even exist in their flag enum. Just an observation, still works due to the timeout.

  2. litep2p does not use the Substream type for the noise handshake datachannel, and therefore the flag handling code added in this PR is not executed. Additionally, right after sending the last handshake message, litep2p closes the datachannel directly. I tried commenting out this line, but that caused a panic in WebRtcConnection::on_inbound_data(). The spec just says "On success of the authentication handshake, the used datachannel is closed"; it seems more logical to me that the dialer closes the channel. But given that connection establishment works, that there is only ever one handshake channel per connection, and that changing this on the litep2p side seems a bit involved, I think this is a low-priority issue.

  3. I added a call to close the perf substream in libp2p here and logging calls (warn) when libp2p sends the FIN flag and when litep2p receives it, as well as when it sends FIN_ACK back. I see the first message, but not the latter two. I changed this log from trace to warn in order to see all flags of incoming messages and it seems to be always None.

Full dialer log:

$ SSLKEYLOGFILE=key.log RUST_LOG="info" cargo run --bin libp2p-perf -- client --transport-layer "webrtc" --server-address "/ip4/127.0.0.1/udp/8888/webrtc-direct/certhash/uEiCFl4FpV9gPv_LDA9lVLLQ4QM71y7jPXdH9qPGYBPnKVQ/p2p/12D3KooWBpZHDZu7YSbvPaPXKhkRNJvR7MkTJMQQAVBKx9mCqz3q" --upload-bytes 1024 --download-bytes 1024
    Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.19s
     Running `target/debug/libp2p-perf client --transport-layer webrtc --server-address /ip4/127.0.0.1/udp/8888/webrtc-direct/certhash/uEiCFl4FpV9gPv_LDA9lVLLQ4QM71y7jPXdH9qPGYBPnKVQ/p2p/12D3KooWBpZHDZu7YSbvPaPXKhkRNJvR7MkTJMQQAVBKx9mCqz3q --upload-bytes 1024 --download-bytes 1024`
2026-01-13T11:20:44.825444Z  INFO libp2p_swarm: local_peer_id=12D3KooW9s5y8JzwVMYB4Mw13d2ZLuG8cYcjP5AfDdcphhYoWAid
2026-01-13T11:20:44.828496Z  INFO libp2p_perf: Event: Some(NewListenAddr { listener_id: ListenerId(1), address: /ip4/127.0.0.1/udp/51413/webrtc-direct/certhash/uEiCNhSA3RD-4v77WuvuKWE7Spa6C4cPln-mWv0M0pSWMSg })
2026-01-13T11:20:44.828655Z  INFO libp2p_perf: Event: Some(NewListenAddr { listener_id: ListenerId(1), address: /ip4/192.168.1.146/udp/51413/webrtc-direct/certhash/uEiCNhSA3RD-4v77WuvuKWE7Spa6C4cPln-mWv0M0pSWMSg })
2026-01-13T11:20:44.828683Z  INFO libp2p_perf: Event: Some(NewListenAddr { listener_id: ListenerId(1), address: /ip4/192.168.139.3/udp/51413/webrtc-direct/certhash/uEiCNhSA3RD-4v77WuvuKWE7Spa6C4cPln-mWv0M0pSWMSg })
2026-01-13T11:20:44.828709Z  INFO libp2p_perf: Event: Some(NewListenAddr { listener_id: ListenerId(1), address: /ip4/169.254.133.87/udp/51413/webrtc-direct/certhash/uEiCNhSA3RD-4v77WuvuKWE7Spa6C4cPln-mWv0M0pSWMSg })
2026-01-13T11:20:44.831052Z  INFO webrtc_ice::mdns: mDNS is using 0.0.0.0:5353 as dest_addr
2026-01-13T11:20:44.832012Z  INFO webrtc_mdns::conn: Looping and listening Ok(0.0.0.0:5353)
2026-01-13T11:20:44.832532Z  INFO webrtc::peer_connection: signaling state changed to have-local-offer
2026-01-13T11:20:44.833122Z  INFO webrtc::peer_connection: signaling state changed to stable
2026-01-13T11:20:44.833685Z  INFO webrtc_ice::agent::agent_internal: [controlling]: Setting new connection state: Checking
2026-01-13T11:20:44.833711Z  INFO webrtc::peer_connection: ICE connection state changed: checking
2026-01-13T11:20:44.833723Z  INFO webrtc::peer_connection: peer connection state changed: connecting
2026-01-13T11:20:45.041060Z  INFO webrtc_ice::agent::agent_internal: [controlling]: Setting new connection state: Connected
2026-01-13T11:20:45.041359Z  INFO webrtc::peer_connection: ICE connection state changed: connected
2026-01-13T11:20:45.075581Z  INFO webrtc::peer_connection: peer connection state changed: connected
2026-01-13T11:20:45.287422Z  WARN libp2p_webrtc_utils::stream: sending FIN flag to close stream
2026-01-13T11:20:45.288343Z  INFO libp2p_perf: Event: Some(ConnectionEstablished { peer_id: PeerId("12D3KooWBpZHDZu7YSbvPaPXKhkRNJvR7MkTJMQQAVBKx9mCqz3q"), connection_id: ConnectionId(1), endpoint: Dialer { address: /ip4/127.0.0.1/udp/8888/webrtc-direct/certhash/uEiCFl4FpV9gPv_LDA9lVLLQ4QM71y7jPXdH9qPGYBPnKVQ/p2p/12D3KooWBpZHDZu7YSbvPaPXKhkRNJvR7MkTJMQQAVBKx9mCqz3q, role_override: Dialer, port_use: Reuse }, num_established: 1, concurrent_dial_errors: Some([]), established_in: 459.91925ms })
2026-01-13T11:20:45.293837Z  INFO litep2p-perf: Uploaded 1.00 KiB bytes in 0.0000s bandwidth 546.67 Mbit/s
2026-01-13T11:20:45.297546Z  INFO litep2p-perf: Downloaded 1.00 KiB bytes in 0.0036s bandwidth 2.20 Mbit/s
2026-01-13T11:20:45.297613Z  WARN libp2p_webrtc_utils::stream: sending FIN flag to close stream
2026-01-13T11:20:45.297851Z  INFO libp2p_perf: Event: Some(Behaviour(Event { id: 1, result: Ok(()) }))
2026-01-13T11:20:45.298186Z  WARN webrtc_sctp::association: [] failed to write packets on net_conn: io error: io error: oneshot canceled

Full listener log:

$ RUST_LOG="info" cargo run --bin litep2p-perf -- server --listen-address "/ip4/127.0.0.1/udp/8888/webrtc-direct" --node-key "secret" --transport-layer webrtc
   Compiling litep2p v0.12.3 (/Users/haiko/code/polkadot/litep2p)
   Compiling litep2p-perf v0.1.0 (/Users/haiko/code/polkadot/litep2p-perf/litep2p)
    Finished `dev` profile [unoptimized + debuginfo] target(s) in 3.75s
     Running `target/debug/litep2p-perf server --listen-address /ip4/127.0.0.1/udp/8888/webrtc-direct --node-key secret --transport-layer webrtc`
2026-01-13T11:20:35.158021Z  INFO litep2p_perf: Using WebRTC transport layer
2026-01-13T11:20:35.161763Z  INFO litep2p::webrtc: start webrtc transport listen_addresses=["/ip4/127.0.0.1/udp/8888/webrtc-direct"]
2026-01-13T11:20:35.173753Z  INFO litep2p_perf: Server listening on address: ["/ip4/127.0.0.1/udp/8888/webrtc-direct/certhash/uEiCFl4FpV9gPv_LDA9lVLLQ4QM71y7jPXdH9qPGYBPnKVQ/p2p/12D3KooWBpZHDZu7YSbvPaPXKhkRNJvR7MkTJMQQAVBKx9mCqz3q"]
2026-01-13T11:20:45.085174Z  INFO litep2p_perf: Event: ConnectionEstablished { peer: PeerId("12D3KooW9s5y8JzwVMYB4Mw13d2ZLuG8cYcjP5AfDdcphhYoWAid"), endpoint: Listener { address: "/ip4/127.0.0.1/udp/51413/webrtc-direct/certhash/uEiCNhSA3RD-4v77WuvuKWE7Spa6C4cPln-mWv0M0pSWMSg/p2p/12D3KooW9s5y8JzwVMYB4Mw13d2ZLuG8cYcjP5AfDdcphhYoWAid", connection_id: ConnectionId(1) } }
2026-01-13T11:20:45.289427Z  WARN str0m: Drop ChannelData event for id: 0
2026-01-13T11:20:45.293199Z  WARN str0m: Drop BufferedAmountLow for id: 0
2026-01-13T11:20:45.295765Z  WARN litep2p::webrtc::connection: handle inbound message peer=PeerId("12D3KooW9s5y8JzwVMYB4Mw13d2ZLuG8cYcjP5AfDdcphhYoWAid") channel_id=ChannelId(1) data_len=8
2026-01-13T11:20:45.295947Z  WARN litep2p::webrtc::connection: handle inbound message peer=PeerId("12D3KooW9s5y8JzwVMYB4Mw13d2ZLuG8cYcjP5AfDdcphhYoWAid") channel_id=ChannelId(1) data_len=1024
2026-01-13T11:20:45.296132Z  WARN litep2p::webrtc::connection: handle inbound message peer=PeerId("12D3KooW9s5y8JzwVMYB4Mw13d2ZLuG8cYcjP5AfDdcphhYoWAid") channel_id=ChannelId(1) data_len=8

@haikoschol

  1. I added a call to close the perf substream in libp2p [...]

After moving the substream closure before receiving the download bytes (right after this line), I see the flag and log message show up on the litep2p side.

Full dialer log:

$ SSLKEYLOGFILE=key.log RUST_LOG="info" cargo run --bin libp2p-perf -- client --transport-layer "webrtc" --server-address "/ip4/127.0.0.1/udp/8888/webrtc-direct/certhash/uEiDkiVIC23hLqgGIRwSfB3MVbR8U0NjCAauOpeMAq2hU_Q/p2p/12D3KooWBpZHDZu7YSbvPaPXKhkRNJvR7MkTJMQQAVBKx9mCqz3q" --upload-bytes 1024 --download-bytes 1024
   Compiling libp2p-perf v0.1.0 (/Users/haiko/code/polkadot/litep2p-perf/libp2p)
    Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.90s
     Running `target/debug/libp2p-perf client --transport-layer webrtc --server-address /ip4/127.0.0.1/udp/8888/webrtc-direct/certhash/uEiDkiVIC23hLqgGIRwSfB3MVbR8U0NjCAauOpeMAq2hU_Q/p2p/12D3KooWBpZHDZu7YSbvPaPXKhkRNJvR7MkTJMQQAVBKx9mCqz3q --upload-bytes 1024 --download-bytes 1024`
2026-01-13T11:29:41.420941Z  INFO libp2p_swarm: local_peer_id=12D3KooWRXvHDkUXKhRGqgyRBZF8GUv155TW2deKrq2tqvxsryin
2026-01-13T11:29:41.423770Z  INFO libp2p_perf: Event: Some(NewListenAddr { listener_id: ListenerId(1), address: /ip4/127.0.0.1/udp/56354/webrtc-direct/certhash/uEiAR1PJvzYZKDzUfIXaNlgi_PCzUf8FKOSIJRnEzw6l-mQ })
2026-01-13T11:29:41.424653Z  INFO libp2p_perf: Event: Some(NewListenAddr { listener_id: ListenerId(1), address: /ip4/192.168.1.146/udp/56354/webrtc-direct/certhash/uEiAR1PJvzYZKDzUfIXaNlgi_PCzUf8FKOSIJRnEzw6l-mQ })
2026-01-13T11:29:41.424692Z  INFO libp2p_perf: Event: Some(NewListenAddr { listener_id: ListenerId(1), address: /ip4/192.168.139.3/udp/56354/webrtc-direct/certhash/uEiAR1PJvzYZKDzUfIXaNlgi_PCzUf8FKOSIJRnEzw6l-mQ })
2026-01-13T11:29:41.424722Z  INFO libp2p_perf: Event: Some(NewListenAddr { listener_id: ListenerId(1), address: /ip4/169.254.133.87/udp/56354/webrtc-direct/certhash/uEiAR1PJvzYZKDzUfIXaNlgi_PCzUf8FKOSIJRnEzw6l-mQ })
2026-01-13T11:29:41.425939Z  INFO webrtc_ice::mdns: mDNS is using 0.0.0.0:5353 as dest_addr
2026-01-13T11:29:41.426905Z  INFO webrtc_mdns::conn: Looping and listening Ok(0.0.0.0:5353)
2026-01-13T11:29:41.427432Z  INFO webrtc::peer_connection: signaling state changed to have-local-offer
2026-01-13T11:29:41.427960Z  INFO webrtc::peer_connection: signaling state changed to stable
2026-01-13T11:29:41.428344Z  INFO webrtc_ice::agent::agent_internal: [controlling]: Setting new connection state: Checking
2026-01-13T11:29:41.429508Z  INFO webrtc::peer_connection: ICE connection state changed: checking
2026-01-13T11:29:41.429520Z  INFO webrtc::peer_connection: peer connection state changed: connecting
2026-01-13T11:29:41.634736Z  INFO webrtc_ice::agent::agent_internal: [controlling]: Setting new connection state: Connected
2026-01-13T11:29:41.634866Z  INFO webrtc::peer_connection: ICE connection state changed: connected
2026-01-13T11:29:41.659586Z  INFO webrtc::peer_connection: peer connection state changed: connected
2026-01-13T11:29:41.872556Z  WARN libp2p_webrtc_utils::stream: sending FIN flag to close stream
2026-01-13T11:29:41.873559Z  INFO libp2p_perf: Event: Some(ConnectionEstablished { peer_id: PeerId("12D3KooWBpZHDZu7YSbvPaPXKhkRNJvR7MkTJMQQAVBKx9mCqz3q"), connection_id: ConnectionId(1), endpoint: Dialer { address: /ip4/127.0.0.1/udp/8888/webrtc-direct/certhash/uEiDkiVIC23hLqgGIRwSfB3MVbR8U0NjCAauOpeMAq2hU_Q/p2p/12D3KooWBpZHDZu7YSbvPaPXKhkRNJvR7MkTJMQQAVBKx9mCqz3q, role_override: Dialer, port_use: Reuse }, num_established: 1, concurrent_dial_errors: Some([]), established_in: 449.820083ms })
2026-01-13T11:29:41.878442Z  INFO litep2p-perf: Uploaded 1.00 KiB bytes in 0.0000s bandwidth 549.83 Mbit/s
2026-01-13T11:29:41.878515Z  WARN libp2p_perf::perf: closing the substream before receiving the download bytes
2026-01-13T11:29:41.878532Z  WARN libp2p_webrtc_utils::stream: sending FIN flag to close stream
2026-01-13T11:29:41.882684Z  INFO litep2p-perf: Downloaded 1.00 KiB bytes in 0.0041s bandwidth 1.92 Mbit/s
2026-01-13T11:29:41.883598Z  INFO libp2p_perf: Event: Some(Behaviour(Event { id: 1, result: Ok(()) }))

Full listener log:

$ RUST_LOG="info" cargo run --bin litep2p-perf -- server --listen-address "/ip4/127.0.0.1/udp/8888/webrtc-direct" --node-key "secret" --transport-layer webrtc
    Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.17s
     Running `target/debug/litep2p-perf server --listen-address /ip4/127.0.0.1/udp/8888/webrtc-direct --node-key secret --transport-layer webrtc`
2026-01-13T11:29:27.377482Z  INFO litep2p_perf: Using WebRTC transport layer
2026-01-13T11:29:27.379770Z  INFO litep2p::webrtc: start webrtc transport listen_addresses=["/ip4/127.0.0.1/udp/8888/webrtc-direct"]
2026-01-13T11:29:27.388211Z  INFO litep2p_perf: Server listening on address: ["/ip4/127.0.0.1/udp/8888/webrtc-direct/certhash/uEiDkiVIC23hLqgGIRwSfB3MVbR8U0NjCAauOpeMAq2hU_Q/p2p/12D3KooWBpZHDZu7YSbvPaPXKhkRNJvR7MkTJMQQAVBKx9mCqz3q"]
2026-01-13T11:29:41.669696Z  INFO litep2p_perf: Event: ConnectionEstablished { peer: PeerId("12D3KooWRXvHDkUXKhRGqgyRBZF8GUv155TW2deKrq2tqvxsryin"), endpoint: Listener { address: "/ip4/127.0.0.1/udp/56354/webrtc-direct/certhash/uEiAR1PJvzYZKDzUfIXaNlgi_PCzUf8FKOSIJRnEzw6l-mQ/p2p/12D3KooWRXvHDkUXKhRGqgyRBZF8GUv155TW2deKrq2tqvxsryin", connection_id: ConnectionId(1) } }
2026-01-13T11:29:41.874576Z  WARN str0m: Drop ChannelData event for id: 0
2026-01-13T11:29:41.877167Z  WARN str0m: Drop BufferedAmountLow for id: 0
2026-01-13T11:29:41.880297Z  WARN litep2p::webrtc::connection: handle inbound message peer=PeerId("12D3KooWRXvHDkUXKhRGqgyRBZF8GUv155TW2deKrq2tqvxsryin") channel_id=ChannelId(1) data_len=8
2026-01-13T11:29:41.880492Z  WARN litep2p::webrtc::connection: handle inbound message peer=PeerId("12D3KooWRXvHDkUXKhRGqgyRBZF8GUv155TW2deKrq2tqvxsryin") channel_id=ChannelId(1) data_len=1024
2026-01-13T11:29:41.880575Z  WARN litep2p::webrtc::connection: handle inbound message peer=PeerId("12D3KooWRXvHDkUXKhRGqgyRBZF8GUv155TW2deKrq2tqvxsryin") channel_id=ChannelId(1) data_len=8
2026-01-13T11:29:41.880795Z  WARN litep2p::webrtc::connection: handle inbound message peer=PeerId("12D3KooWRXvHDkUXKhRGqgyRBZF8GUv155TW2deKrq2tqvxsryin") channel_id=ChannelId(1) flag=0 data_len=0
2026-01-13T11:29:41.880914Z  WARN litep2p::transport::webrtc::substream: received FIN flag, sending FIN_ACK

I'm not sure what to make of this. 🤔

@timwu20
Collaborator Author

timwu20 commented Jan 14, 2026

When testing this with a libp2p dialer and a litep2p listener, I noticed the following:

  1. Apparently libp2p does not send FIN_ACK. It doesn't even exist in their flag enum. Just an observation, still works due to the timeout.
  2. litep2p does not use the Substream type for the noise handshake datachannel and therefore the flag handling code added in this PR is not executed. Additionally, right after sending the last handshake message, litep2p closes the datachannel directly. I tried commenting out this line, but that caused a panic in WebRtcConnection::on_inbound_data(). The spec just says "On success of the authentication handshake, the used datachannel is closed". It seems more logical to me that the dialer closes the channel. But given that connection establishment works, there is only ever one handshake channel per connection and changing this on the litep2p side seems to be a bit involved, I think this is a low priority issue.
  3. I added a call to close the perf substream in libp2p here and logging calls (warn) when libp2p sends the FIN flag and when litep2p receives it, as well as when it sends FIN_ACK back. I see the first message, but not the latter two. I changed this log from trace to warn in order to see all flags of incoming messages and it seems to be always None.

  1. This is good to know. That means we're relying on the timeout logic to function properly.

  2. I'll revisit this later. I think if we revamp how we do the message framing and utilize the same negotiation protocol as the other transports, that would be the ideal solution.

  3. We need to explicitly call substream.close() on both the server side and the client side, given it's a four-way handshake. Each side sends FIN to close its own writing side, but its read side stays open until the remote does the same. I made the changes on my fork of litep2p-perf here. The sequence diagram for closing a channel can be found in the "Closing an RTCChannel" section of the WebRTC spec.
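
The four-way close in point 3 can be sketched as a tiny state model (hypothetical types, not litep2p's actual code): each peer's FIN only half-closes the stream, so the stream is fully closed only after both the local close() and the remote's FIN have happened:

```rust
// Each direction of the stream closes independently: our FIN covers only
// our write half, and the remote's FIN covers theirs.
#[derive(Debug, Default, PartialEq, Eq)]
struct StreamState {
    local_write_closed: bool,  // we sent FIN (and, with this PR, got FIN_ACK)
    remote_write_closed: bool, // we received the remote's FIN
}

impl StreamState {
    // Called when we close() the substream: send FIN for our write half.
    fn local_close(&mut self) {
        self.local_write_closed = true;
    }
    // Called when the remote's FIN arrives: its write half is done;
    // we reply with FIN_ACK but can still write until we close too.
    fn on_remote_fin(&mut self) {
        self.remote_write_closed = true;
    }
    fn fully_closed(&self) -> bool {
        self.local_write_closed && self.remote_write_closed
    }
}
```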

@haikoschol

Upstream PR has been opened. Closing this to avoid email spam.

@haikoschol haikoschol closed this Jan 23, 2026

Development

Successfully merging this pull request may close these issues.

webrtc: Gracefully close substreams
