Tangled in the Threads

Jon Udell, November 29, 2000

How peers connect, continued

Readers respond to the P2P/NAT debate

Jon weathers the Slashdot effect, and emerges with an argument for IPv6 and identity-based firewalling

A few weeks ago, I reported on a lengthy newsgroup thread about relayed vs. direct peer-to-peer connections, and the possibility of connection-splicing. That column drew the attention of the thundering herd when it was mentioned on Slashdot last week.

Elsewhere in the newsgroups, someone remarked: "I sure hope that the Byte people don't feel threatened by this free online mag [Slashdot]." On the contrary. (And by the way, BYTE.com is free too, of course.) I greatly enjoy being the target of the Slashdot effect. It's a rough ride, to be sure. But for everyone who'll condemn you as a "know-nothing fsck-wit" there are ten others who will enlarge your understanding of the issue under discussion.

The feedback I've received on that column, by way of email, my own newsgroups, and Slashdot, has been extremely helpful. One colleague noted that the column raised an important issue, but didn't suggest any concrete solutions. That's true. I was just asking questions; I didn't have clear answers in mind. One of the great privileges of writing this column is that I can, from time to time, raise a lightning rod to attract sparks. Here are some sparks that column attracted.

Eric Hopper (email)

See http://slashdot.org/comments.pl?sid=00/11/24/2359231&cid=127.

I think I've come up with a way to make a very simple modification to NATs that would preserve most of the 'no uninvited connection' properties a NAT normally provides while still allowing brokered peer-to-peer filesharing connections.

Eric's scheme goes like this:

TCP allows a connection to be established if both sides simultaneously send each other a SYN packet. This method requires a little NAT cooperation, but only a little. Here's how it could work:

  1. Side1 binds its TCP socket to a particular port.
  2. Side1 tries to connect to Broker on an agreed-upon port.
  3. Broker replies with an RST when it receives the expected SYN, and records the source IP and source port.
  4. Side2 binds its TCP socket to a particular port.
  5. Side2 tries to connect to Broker on an agreed-upon port.
  6. Broker replies with an RST when it receives the expected SYN, and records the source IP and source port.
  7. Broker informs Side1 of Side2's source port and source IP.
  8. Broker informs Side2 of Side1's source port and source IP.
  9. Side1 uses the same socket it originally bound to connect to Side2's IP and port.
  10. Side2 uses the same socket it originally bound to connect to Side1's IP and port.
  11. Voila! They connect via an exchange of simultaneous SYNs.

This requires cooperation between the two sides and their NATs. Specifically, it requires these three things from a NAT:

  1. The NAT should keep the same outgoing port for the same TCP port on a client for a period of time. This is very similar to how a NAT handles UDP.
  2. A NAT must not reply to SYNs it receives on a bound TCP port when they don't originate from the IP the client connected to. Normally it would reply with an RST.
  3. A NAT must change the IP it's expecting TCP packets from if the client sends out a new SYN to a different IP and port combo.

It's a hack, but I think it'd work.
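
For concreteness, here's a rough sketch of steps 9 through 11 from one peer's point of view -- in Python, which is my addition, with hypothetical addresses and ports -- assuming the broker exchange (steps 2 through 8) has already happened and the NAT behaves as Eric describes:

    # Sketch of the simultaneous-open step; addresses, ports, and the
    # retry loop are illustrative, not part of Eric's proposal.
    import socket
    import time

    LOCAL_PORT = 5555                    # hypothetical port this peer bound in step 1
    PEER_ADDR = ("203.0.113.7", 5555)    # hypothetical public IP/port learned from the broker

    def simultaneous_open(local_port, peer_addr, attempts=5):
        """Connect out from a fixed local port; if the other peer does the
        same at roughly the same time, the crossing SYNs establish one TCP
        connection (RFC 793 'simultaneous open')."""
        for _ in range(attempts):
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            s.bind(("", local_port))     # always dial from the same source port
            s.settimeout(3)
            try:
                s.connect(peer_addr)     # our SYN should cross the peer's SYN in flight
                return s                 # connected -- hand the socket to the application
            except OSError:
                s.close()
                time.sleep(1)            # missed; try again
        return None

    if __name__ == "__main__":
        conn = simultaneous_open(LOCAL_PORT, PEER_ADDR)
        print("connected" if conn else "no luck -- the NATs didn't cooperate")

Each side keeps dialing out from the same source port, hoping its SYN crosses the other's in flight; the retry loop is only there to give the timing some slack.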

Will it work? I'm no TCP guru, so I can't say for sure. If you are a TCP guru, by all means drop into the newsgroup and enlighten us.

"Of course, it's a hack..."

A different question is: "Should we even contemplate such schemes?" I've looked at a lot of these ideas lately, and the phrase "of course, it's a hack" crops up repeatedly:

lazzaro (Slashdot)

This IETF I-D (Internet-Draft) includes a novel hack (see part 3) using https.

The referenced document proposes that SIP (the Session Initiation Protocol) can use an HTTPS tunnel to connect SIP callers and callees, through unmodified firewalls and NATs, by way of proxies. As such, it does not address the question of enabling direct connections. But the language of the draft indicates the intense frustration surrounding this issue:

...the solutions are not pretty. However, these NATs and firewalls are a reality, and SIP deployment is being hampered by the lack of support for SIP ALGs [application level gateways] in these boxes. A solution MUST be found...

...the fundamental semantic of the IP address, that it is a globally reachable point for communications, has been violated by NATs....

...our solution is a horrible, horrible hack. We propose that a specific contact hostname value be reserved...we propose that this host name be "jibufobutbmpu". This name is "I hate NATS a lot" with each letter incremented by one....

If we believe that peer-to-peer computing will be pervasive, then do we really want to rely on hackery? NAT, in and of itself, is a major hack. If we hack NAT, do two wrongs make a right?

Here's an approach that, while administratively complex, works without changing any fundamental rules. Recall that Napster doesn't relay (or proxy) because it can't afford to carry all the traffic that otherwise-blocked peers would generate. If such proxying occurs on the client's local network, though, it's supportable.

maccroz (Slashdot)

For each network of computers behind a firewall sharing an IP address, there can be one computer that has access to incoming requests. Linksys refers to this as the DMZ (Demilitarized Zone). This one connection can be the representative for the entire network.

I know my apartment is behind a Linksys router, and we have four connections; however, we have one computer that is the dedicated incoming access server. This doesn't really help the other computers on the network, but it is a partial solution to the problem.

The Monster (Slashdot)

Well, it helps a bunch. The entire idea of NAT/masquerade firewalling is to deny all incoming connections to the firewall. So set up in the DMZ either Napster or, better yet, a middleman to connect the two TCP streams (each of which originates from one of the peers).

At first blush this looks like a nonstarter because establishing a DMZ is way beyond the interest and ability of a home user. But then so are many other things which, in due course, are commoditized. Perhaps a near-term solution will, in fact, be to ship access devices that encompass and commoditize the DMZ architecture.
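
To see what that middleman amounts to, here's a minimal sketch -- a few lines of Python, my addition, with a hypothetical port number -- of a relay on the DMZ host that accepts one outbound connection from each peer and splices the two streams together. A real relay would also have to pair up the right peers (presumably with the broker's help) and handle errors:

    # Minimal DMZ relay: both peers connect *out* to this host, and it
    # shuttles bytes between them.
    import socket
    import threading

    RELAY_PORT = 7000   # hypothetical well-known port both peers dial in to

    def pump(src, dst):
        """Copy bytes one way until the source closes."""
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
        dst.close()

    def main():
        listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        listener.bind(("", RELAY_PORT))
        listener.listen(2)
        a, _ = listener.accept()    # first peer connects out to the DMZ host
        b, _ = listener.accept()    # second peer connects out to the DMZ host
        # Splice the two streams: whatever one peer sends, the other receives.
        threading.Thread(target=pump, args=(a, b)).start()
        threading.Thread(target=pump, args=(b, a)).start()

    if __name__ == "__main__":
        main()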

Com2Kid (email)

I simply tell my software firewall to accept connections on port XXX, and then when I am done with that application, I close the port again. I look at the port as a doorway, to be opened when needed. Your article, though, sees ports as either permanently open or shut.

An excellent point. Arguably, though, most users can't be bothered to make decisions on a per-session basis. So making NATs as flexibly configurable as software firewalls won't gain much -- because nobody (well, hardly anybody) will exercise that discretion even if available. Still, the notion of human-controlled discretionary access is an important piece of the puzzle.
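
That said, the doorway pattern can be automated by the application itself. Here's a minimal sketch, assuming a Linux machine running iptables (Com2Kid doesn't name his firewall, and the port number is hypothetical), that opens the port for the duration of a session and closes it again afterward:

    # Open a firewall "doorway" for one session, then shut it.
    # Needs root privileges to modify iptables rules.
    import subprocess

    PORT = "6699"    # hypothetical port the P2P application listens on
    RULE = ["INPUT", "-p", "tcp", "--dport", PORT, "-j", "ACCEPT"]

    def open_door():
        subprocess.run(["iptables", "-I"] + RULE, check=True)   # allow inbound connections

    def close_door():
        subprocess.run(["iptables", "-D"] + RULE, check=True)   # remove the rule again

    if __name__ == "__main__":
        open_door()
        try:
            input("Port open -- press Enter when the session is done... ")
        finally:
            close_door()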

Kryffpi (Slashdot)

Now that we have a "short-list" of acceptable ports to use in applications, all applications will use them. That means that, in order to remain effective, firewalls will have to install application-level filters, which (a) should be impossible, since modern secure protocols are man-in-the-middle proof, and (b) makes my life (as a writer of said applications) harder, since I have to either cooperate with the firewall in some way or fudge my protocol to make it pass the application-level filter meant for some other, more common application.

What happens if I write some new P2P application that your users would find useful, or just plain fun -- or perhaps even dangerous? Do you ban the use of this application? How?

It's not what you trust, it's whom you trust

Exactly. The notion of a trusted port is ultimately incoherent, as is the notion of a trusted application. We trust or distrust people, not ports or applications. Here's the exchange that finally made the light bulb go on for me:

crt (Slashdot)

Coming from the gaming industry (which was doing P2P LONG before Napster), I can safely say that proxies and NATs are the bane of a network programmer's existence.

However, there is a solution in the works called Realm-Specific IP; here's the IETF working group that's working on it: http://www.ietf.org/html.charters/nat-charter.html.

It allows a client behind a NAT to reserve a port on the NAT and have all traffic arriving at that port forwarded to the client. So different clients can open up different listening ports on the NAT, and it will forward incoming connections to them.

I pray every day that this protocol will get finalized quickly and be implemented in all NAT products. See also: http://www.ietf.org/internet-drafts/draft-ietf-nat-rsip-framework-05.txt.
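
RSIP defines its own negotiation protocol, which I won't try to reproduce here. But to illustrate the idea, here's a toy sketch -- the lease service, addresses, and message format are all hypothetical, not the RSIP wire protocol -- of a client asking its NAT box to map a public port back to it, then listening for the forwarded connections:

    # Toy illustration of NAT port reservation; NOT the actual RSIP protocol.
    import json
    import socket

    NAT_LEASE_ADDR = ("192.168.1.1", 9999)   # hypothetical lease service on the NAT box
    LOCAL_PORT = 6346                        # hypothetical local port we'll accept on

    def lease_public_port():
        """Ask the NAT to map one of its public ports to our LOCAL_PORT."""
        with socket.create_connection(NAT_LEASE_ADDR) as s:
            s.sendall(json.dumps({"op": "lease", "local_port": LOCAL_PORT}).encode())
            reply = json.loads(s.recv(4096).decode())
        return reply["public_ip"], reply["public_port"]

    if __name__ == "__main__":
        public_ip, public_port = lease_public_port()
        # Advertise public_ip:public_port to the broker or directory service;
        # the NAT (by assumption) forwards inbound connections to LOCAL_PORT.
        server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        server.bind(("", LOCAL_PORT))
        server.listen(5)
        print("advertise %s:%s and accept forwarded connections" % (public_ip, public_port))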

sedawkgrep (Slashdot)

Yikes.

As a security freak, this scares the s**t out of me. The last thing I want is for any random user to be able to allow inbound connections through the firewall. No thanks -- they can suffer with filtering, proxying and NAT.

Before you knew it, you'd have everybody under the sun using netcat to allow themselves back into a protected network from the outside...just because it is convenient.

Ugh. Please somebody take us to PKI and IPv6.

PTrumpet (Slashdot)

It's generally recognized that NATs are a hack to get around the failings of the current IPv4 network. Even though I wrote one of the first NATs, before they became popular, it is widely agreed within the IETF that NATs present insurmountable problems. In addition to the inability to establish peer-to-peer TCP connections, there are also significant problems for network security, because you can't directly tie the end user's IP address to the security protocol.

Another issue I have with NATs, from an ISP's point of view (we run one also), is that it is quite difficult to trace a rogue user on the inside of a NAT. A NAT will usually hide the identity of the customer, so you have to resort to other means to determine the identity of a user who may be launching an attack.

Time to roll out IPv6.

Peter Tattam, author of Trumpet Winsock and PetrOS

Yes. It's going to take a lot of activation energy to get IPv6 over the threshold. Perhaps peer-to-peer computing will generate the necessary push. But the comment about "PKI and IPv6" resonates even more strongly for me. As does the notion that firewalling remains crucial:

Anonymous Coward (Slashdot)

The proliferation of firewalls in home use does not mean that peer to peer is dead. On the contrary! Firewalling is an absolutely critical enabling technology that will allow peer to peer to happen SAFELY. Without which (safety and real firewalling) peer-to-peer has NO future.

I agree. The P2P/NAT brouhaha exposes problems that were latent all along, but the idea of firewalling is necessary and right. Here's where I think all these threads converge:

In a world of many clients and few servers, the NAT compromise was tolerable -- an address-space-expanding hack with some convenient security properties. In a world of peers, it isn't just NAT that starts to fall apart, it's the very notion of firewalling based on conventions about applications and ports.

We put a man on the moon, we broke the 640K barrier, and we survived Y2K; I'd like to think we can get to an Internet that uniquely identifies both people and devices, and enables people to control user/device interactions in ways that make sense in human terms. In the long run, there's really no choice.


Jon Udell (http://udell.roninhouse.com/) was BYTE Magazine's executive editor for new media, the architect of the original www.byte.com, and author of BYTE's Web Project column. He's now an independent Web/Internet consultant, and is the author of Practical Internet Groupware, from O'Reilly and Associates. His recent BYTE.com columns are archived at http://www.byte.com/index/threads

This work is licensed under a Creative Commons License.