Make TCPMux IPv4/IPv6 aware #424

Merged: Sean-Der merged 1 commit into master from tcpmux-ip-version-aware on Feb 21, 2022

Conversation

@Sean-Der (Member) commented Feb 20, 2022

TCPMux before would create one internal connection per ufrag. This could
cause remote IPv6 traffic to be dispatched to a local IPv4 handler (or
the inverse). The ice.Agent would then discard the traffic since a
candidate pair must be the same IP version.

This commit now creates two connections per ufrag. When requesting a
connection for a ufrag the user must specify if they want IPv4 or IPv6.

Relates to pion/webrtc#2125
Relates to pion/webrtc#1356
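
In other words, the mux now keys its internal connections by the pair (ufrag, IP version) instead of by ufrag alone. Below is a minimal sketch of that idea in Go; the names here (tcpMux, muxedConn, ipVersion) are hypothetical stand-ins for illustration, not the actual tcp_mux.go types:

```go
package main

import (
	"fmt"
	"sync"
)

// ipVersion distinguishes the two address families a ufrag can use.
type ipVersion int

const (
	ipv4 ipVersion = iota
	ipv6
)

// muxedConn stands in for the per-ufrag packet connection.
type muxedConn struct{ name string }

// tcpMux keys connections by (ufrag, IP version) instead of by ufrag
// alone, so remote IPv6 traffic can never reach an IPv4 handler.
type tcpMux struct {
	mu    sync.Mutex
	conns map[string]map[ipVersion]*muxedConn
}

// getConn returns the connection for a ufrag and IP version, creating
// it on demand; before this change there was a single conn per ufrag.
func (m *tcpMux) getConn(ufrag string, v ipVersion) *muxedConn {
	m.mu.Lock()
	defer m.mu.Unlock()
	if m.conns == nil {
		m.conns = make(map[string]map[ipVersion]*muxedConn)
	}
	if m.conns[ufrag] == nil {
		m.conns[ufrag] = make(map[ipVersion]*muxedConn)
	}
	if m.conns[ufrag][v] == nil {
		m.conns[ufrag][v] = &muxedConn{name: fmt.Sprintf("%s/v%d", ufrag, v)}
	}
	return m.conns[ufrag][v]
}

func main() {
	mux := &tcpMux{}
	// Same ufrag, different IP versions: two distinct connections.
	fmt.Println(mux.getConn("abcd", ipv4) != mux.getConn("abcd", ipv6)) // true
}
```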

@codecov bot commented Feb 20, 2022

Codecov Report

Merging #424 (5d993ad) into master (2d70ec8) will decrease coverage by 0.16%.
The diff coverage is 59.52%.


@@            Coverage Diff             @@
##           master     #424      +/-   ##
==========================================
- Coverage   78.98%   78.82%   -0.17%     
==========================================
  Files          34       34              
  Lines        3860     3882      +22     
==========================================
+ Hits         3049     3060      +11     
- Misses        630      636       +6     
- Partials      181      186       +5     
Flag   Coverage Δ
go     78.82% <59.52%> (-0.17%) ⬇️
wasm   25.16% <42.85%> (-0.02%) ⬇️

Flags with carried forward coverage won't be shown.

Impacted Files     Coverage Δ
tcp_mux.go         69.45% <58.53%> (-6.24%) ⬇️
gather.go          66.80% <100.00%> (ø)
candidate_base.go  86.06% <0.00%> (+1.21%) ⬆️
udp_mux.go         81.97% <0.00%> (+1.74%) ⬆️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@Sean-Der (Member, Author) commented

@jech Hopefully this fixes the issues you are seeing! Sorry it took so long to understand/reproduce the issue.

(A review comment on tcp_mux.go was marked outdated and resolved.)
@jech (Member) commented Feb 20, 2022

Thanks a lot, Sean, I'll test it during the night (when nobody is using the sample server).

FWIW, I've seen similar symptoms with UDPMux, see #518. I'm wondering if it's the same issue?

@jech (Member) commented Feb 20, 2022

Perhaps I'm doing something wrong, but it still doesn't work for me. I've put a TCP listener on port 8444, enabled NetworkTypeTCP4 and NetworkTypeTCP6, then did the following on the client (I'm running TURN on port 1194; see the configuration sketch after this comment):

iptables -I INPUT -s 51.210.14.2 -p udp -j DROP
ip6tables -I INPUT -p udp -j DROP
iptables -I INPUT -p tcp --sport 1194 -j DROP
ip6tables -I INPUT -p tcp --sport 1194 -j DROP

The client is the offerer and the media sender. It connects to the TCP listener:

16:17:43.149235 IP 192.168.0.27.43936 > 51.210.14.2.8444: Flags [SEW], seq 1304065160, win 64240, options [mss 1460,sackOK,TS val 3863861007 ecr 0,nop,wscale 7], length 0
16:17:43.156764 IP 51.210.14.2.8444 > 192.168.0.27.43936: Flags [S.E], seq 24426563, ack 1304065161, win 65160, options [mss 1460,sackOK,TS val 4145864461 ecr 3863861007,nop,wscale 7], length 0
16:17:43.156800 IP 192.168.0.27.43936 > 51.210.14.2.8444: Flags [.], ack 1, win 502, options [nop,nop,TS val 3863861015 ecr 4145864461], length 0

It then sends some data:

16:17:44.168249 IP 192.168.0.27.43936 > 51.210.14.2.8444: Flags [P.], seq 115:229, ack 1, win 502, options [nop,nop,TS val 3863862026 ecr 4145864799], length 114
16:17:44.175367 IP 51.210.14.2.8444 > 192.168.0.27.43936: Flags [.], ack 229, win 509, options [nop,nop,TS val 4145865479 ecr 3863862026], length 0

However, I see no traffic in the opposite direction. After a while, the connection drops:

16:18:28.705050 IP 192.168.0.27.43938 > 51.210.14.2.8444: Flags [P.], seq 2509:2623, ack 1, win 502, options [nop,nop,TS val 3863906563 ecr 4145909334], length 114
16:18:28.712831 IP 51.210.14.2.8444 > 192.168.0.27.43938: Flags [.], ack 2623, win 509, options [nop,nop,TS val 4145910016 ecr 3863906563], length 0
16:18:28.753946 IP 192.168.0.27.43938 > 51.210.14.2.8444: Flags [F.], seq 2623, ack 1, win 502, options [nop,nop,TS val 3863906612 ecr 4145910016], length 0

At that point, ICE gets into failed state, which triggers an ICE restart, and the cycle starts again.
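
The server-side setup described above (a TCP listener on port 8444 with NetworkTypeTCP4 and NetworkTypeTCP6) can be wired up roughly as follows. This is a sketch modeled on the pion/webrtc v3 ice-tcp example; the port and the omitted PeerConnection wiring are illustrative:

```go
package main

import (
	"net"

	"github.com/pion/webrtc/v3"
)

func main() {
	// Dual-stack TCP listener for ICE-TCP; ":8444" binds both IPv4 and
	// IPv6 on most systems.
	tcpListener, err := net.Listen("tcp", ":8444")
	if err != nil {
		panic(err)
	}

	// Wrap the listener in the ICE TCP mux (nil logger, read buffer of
	// 8 packets, matching the upstream ice-tcp example).
	tcpMux := webrtc.NewICETCPMux(nil, tcpListener, 8)

	se := webrtc.SettingEngine{}
	se.SetICETCPMux(tcpMux)
	se.SetNetworkTypes([]webrtc.NetworkType{
		webrtc.NetworkTypeTCP4,
		webrtc.NetworkTypeTCP6,
	})

	// PeerConnections created from this API will accept ICE-TCP.
	api := webrtc.NewAPI(webrtc.WithSettingEngine(se))
	_ = api
}
```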

@boks1971 (Contributor) left a comment

lgtm!

@jech (Member) commented Feb 20, 2022

Interestingly, I can only reproduce the issue if both TCP/IPv4 and TCP/IPv6 are unblocked. If I additionally block TCP/IPv6 (ip6tables -I INPUT -p tcp -j DROP), then everything works fine.

@davidzhao (Member) commented

> FWIW, I've seen similar symptoms with UDPMux, see pion/ice#518. I'm wondering if it's the same issue?

Fairly certain UDPMux is also not IPv6-aware. If this works, we'll port the same fix to UDPMux.

@davidzhao (Member) left a comment

lg! I don't have a repro to validate this, and will defer to @jech to confirm that it resolves the issue.

@jech (Member) commented Feb 20, 2022

The behaviour is definitely better than it was: I now get an ICE failure consistently across browsers (tested with Brave and Firefox), which is something we all already have to handle gracefully, while previously I was getting erratic behaviour that I didn't know how to detect.

Hence, unless Sean has any further ideas, I recommend merging this patch while keeping the bug report open.

@maeln commented Feb 21, 2022

Hi!

Because our use case relies on two PRs that are not merged yet (pion/webrtc#1812 & pion/mediadevices#338), it is a bit difficult to test this PR against our own issue in a timely manner.

So don't wait for my feedback; I will work with my colleague to organize our own fork/PR to test this ASAP.

Thanks a lot!

@Sean-Der (Member, Author) commented

@jech I still have ice-tcp issues as well! With Firefox I am not able to connect when Pion is the controlled agent.

  • Firefox sends a Binding Request with USE-CANDIDATE. Pion creates a peer-reflexive candidate and a candidate pair.
  • Pion sends a Binding Request (to validate the candidate).
  • Firefox sends another Binding Request with USE-CANDIDATE. Pion hasn't gotten a response to its own Binding Request yet, so it ignores the nomination (the candidate pair isn't valid yet).
  • Firefox responds with Binding Success. Pion marks the candidate pair as valid.

If Firefox sent another Binding Request, everything would go to connected; not sure why it doesn't. I have attached the pcap here. You can reproduce at http://18.216.219.191:8080/; it is just running https://github.com/pion/webrtc/tree/master/examples/ice-tcp

firefox-tcp-client.pcap.zip
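
For context, the check Firefox nominates with is a STUN Binding Request carrying the USE-CANDIDATE attribute (type 0x0025). Below is a sketch of such a message using pion/stun, assuming its RawAttribute setter for the unregistered attribute type; the ufrag/password strings are placeholders, and a real ICE check would also carry PRIORITY and ICE-CONTROLLING, omitted here:

```go
package main

import (
	"fmt"

	"github.com/pion/stun"
)

func main() {
	// USE-CANDIDATE is attribute type 0x0025 and carries no value.
	useCandidate := stun.RawAttribute{Type: stun.AttrType(0x0025)}

	// MESSAGE-INTEGRITY must follow the other attributes, and
	// FINGERPRINT comes last.
	msg := stun.MustBuild(
		stun.TransactionID,
		stun.BindingRequest,
		stun.NewUsername("remoteUfrag:localUfrag"),
		useCandidate,
		stun.NewShortTermIntegrity("remotePassword"),
		stun.Fingerprint,
	)
	fmt.Printf("binding request with USE-CANDIDATE: % x\n", msg.Raw)
}
```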

@Sean-Der (Member, Author) commented

I am going to merge this, but will just mark it as a 'Relates to' on the two tickets. It fixes at least one problem.

@Sean-Der merged commit fb4760c into master on Feb 21, 2022
@Sean-Der deleted the tcpmux-ip-version-aware branch on February 21, 2022 at 13:55