gogoNET

IPv6 & Networking the Internet of Things

Although they may be temporary, it looks like large-scale NATs will be
deployed as a bridge between v4 and v6 in ISP and mobile networks. What
are your thoughts, opinions, and advice for operators contemplating
their deployment?


Replies to This Discussion

The beautiful thing about site-local addresses is that you can use a NAT to tie them one-to-one to public addresses when you get them.  You can also use programs like faith to connect IPv6-only machines to the IPv4 Internet.

This solution cannot work. In fact, anyone currently using (IPv4-only) mobile Internet access can already see that solutions based on ISP-side NATs, or proxies, cannot work: the ISP is completely unable to sustain a session for more than 30 seconds of apparent idle time. This means that interactive applications on today's 3G mobile networks are already suffering from excessive session loss, due to server load overhead.

 

Large-scale NAT principally suffers from PORT HUNGER: today we already need about 700 ports per user (1000 recommended). This number is still increasing with the use of Web 2.0 applications, with lots of interactions between lots of users, or with various data sources. Add to this some restrictions on port numbers, and a "large-scale NAT" will not accept more than about 30 *users* per IPv4 address: in other words, no more than existing NAT solutions at home or in small offices.
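
As a rough sanity check of that "about 30 users" figure, here is a minimal back-of-envelope calculation in Python; the usable-port count is an assumption (port restrictions leave roughly half of the theoretical 16-bit space usable):

    # Back-of-envelope: how many users fit behind one public IPv4 address?
    USABLE_PORTS = 32767       # assumption: roughly half of the 65535
                               # theoretical ports survive port restrictions
    PORTS_PER_USER = 1000      # recommended allocation per subscriber

    users = USABLE_PORTS // PORTS_PER_USER
    print(f"~{users} users per public IPv4 address")   # ~32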

 

The only way to avoid this problem is to give not just one address per user, but ideally one address per user and per web application, on each physical interface the user may use at one time (e.g. on their fixed-line access or on their mobile access with a smartphone). Only IPv6 allows that, by routing not just one address but a full /64 block in which users can configure various Internet-connected devices, or virtual interfaces for specific applications, all of them benefiting from IPv6's native support for autoconfiguration of these interfaces.
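
To make that scale concrete, here is a minimal sketch (the prefix and application names are hypothetical; 2001:db8::/32 is the reserved documentation range) of how one routed /64 can dedicate an address to every application:

    import ipaddress

    # Hypothetical delegated prefix (2001:db8::/32 is reserved for documentation).
    prefix = ipaddress.IPv6Network("2001:db8:1234:5678::/64")

    # A /64 holds 2**64 interface identifiers, so giving each application
    # (or device, or virtual interface) its own address costs nothing.
    for i, app in enumerate(["mail", "voip", "game"], start=1):
        print(app, "->", prefix.network_address + i)

    print("addresses in one /64:", prefix.num_addresses)   # 18446744073709551616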

 

With 4G networks coming, users will not understand why all those NATs and proxies, shared between independent users, are impacted by the inability of ISPs to sustain the combined traffic (and to avoid port-number hunger, even when using only TCP with HTTP/1.1 and explicit session closes).

 

Already, the servers on the web are suffering from the problem: to avoid it, they have needed complex routers and front-end proxies for load balancing across a large block of IPv4 addresses. As it becomes almost impossible to get new IPv4 addresses for these servers, costly alternatives are being used (notably expensive CDNs for static content). This will become even more critical given the exponential growth of video usage, which almost doubles every year and will account for about 60-70% of the total bandwidth of the Internet in 2013, in long sessions or with users constantly zapping from one piece of content to another (YouTube, DailyMotion, and even web radios, online games...)

 

For 4G (LTE) deployment, there is simply NO solution at all with IPv4. IPv6 is the only viable alternative (at reasonable cost, but also because of the existing lack of IPv4 addresses).

 

If ISPs are deploying large-scale NAT, it is only in order to deliver the static content of the web via caches, mostly for HTTP, and to free up lots of existing IPv4 addresses for reuse with video content. But even in this case it will not be sufficient. There is strong evidence that IPv4 will soon die completely for ALL interactive content and services (including HTTPS for secured personal content, such as cloud computing and cloud storage accessed from home or from roaming mobile connections).

 

IPv4 will then remain only for the static, cacheable content of the web. Interactive applications that use personal data are the biggest promoters of accelerated adoption and deployment of IPv6 by ISPs (Google for Gmail, Buzz, Google+; Facebook; Twitter; Apple for iTunes; all online game platforms...).

 

But ISPs still want to keep their hands on their subscribers. They are ready to sacrifice the services available, and in fact are battling against the openness of the Internet (restricting protocols, using unfair traffic-shaping practices, using lying DNS servers to redirect users to THEIR OWN online services that they want to sell at higher prices...)

 

Thankfully, Freenet6 is demonstrating that ISPs are just liars. IPv6 is perfectly possible today, and in fact some large ISPs have offered IPv6 for a long time (e.g. since 2002 for the French ISP "Nerim").

 

Other liars include the large providers of hardware solutions for ISPs: Cisco, for example (promoting non-working or very slow L2TP tunnels).

 

Don't trust Cisco... its "CGv6" solution in fact DOES NOT work. I have proof of that. It won't help users, and it won't help ISPs make the right choice and realize the urgency of a large-scale solution really based on native IPv6.

 

The killer apps already exist: they are all the interactive personal communication apps on smartphones (e.g. dating applications like Meetic, and VOD applications): when they are used over the emerging 4G (LTE) networks there will be no choice (and already on 3G networks, users are experiencing too many session losses, with applications hanging or terminating abruptly, due to too-frequent changes of IPv4 address).

 

LSN cannot work with the growing demand for security and privacy: how will you support HTTPS, or IPsec in general, for sessions lasting more than a few seconds?

There's a need for something else: I think the way to go would be to use the "Reliable UDP" protocol as a convergence technology, allowing the number of available ports to be extended (using the source and destination IPv6 addresses as an additional port number for session dispatch).

Ideas about this have been developed for a long time: see for example what LimeWire developed in peer-to-peer apps; there was a 128-bit GUID that identified hosts, much like a 128-bit IPv6 address, and this worked using a "Reliable UDP" session that could traverse any number of NATs, including NAT+PAT.

The only problem with the Gnutella protocol (supplemented with the LimeWire extensions) was the lack of a registry for maintaining routing info in a stable way (i.e. the global system of ASNs and BGP announcements, only partly emulated by unstable GUID routing tables maintained locally between hops, later improved with a Kademlia-based route-discovery mechanism). But all the principles were there. It was completely possible to create a reliable protocol and implement TCP on top of it, and it offered excellent performance, without IPv6.

If LimeWire had not been killed, we would already have it working as an excellent convergence mechanism, supporting connectivity to IPv4-only sites from IPv6-only clients; it could have implemented BGP as well, using Gnutella as the base transport, plus an alternative tunneling protocol compatible with both IPv4 and IPv6...

"Reliable UDP"???  UDP was never intended to be reliable.  We have TCP for reliable data transfer.  Also, wasn't LimeWire a P2P service?  And Gnutella?  It sounds like you're trying to come up with some hacks to avoid the move to IPv6.

Please note that I quoted the expression completely, and with a capital on "Reliable". This means it is an unbreakable expression. I have NOT said that UDP alone is a reliable protocol. But really, you don't know what you're talking about. Even the gogoClient uses such a "Reliable UDP" (RUDP) protocol for NAT traversal over IPv4. Basically it means that TCP is implemented on top of UDP instead of directly on top of IP.

But the major improvement offered by an RUDP protocol is seen on the server side rather than the client side: the standard TCP protocol requires one port number per client. Unfortunately, TCP port numbers are a scarce resource, and for front-end proxies or load balancers near servers, which must be able to handle lots of connections from many clients, with each client using many sessions simultaneously, this is a problem.

Alternatives to standard TCP require being able to multiplex as many sessions as possible over the same target port number on the link from the proxy to the server: this is possible using UDP.

In that case, the RUDP protocol implementation no longer needs to be limited by the scarce 16-bit port number (with part of that space being restricted): the port number can be a number of arbitrary size, so it can effectively be a 128-bit entity such as the remote IPv6 address of the client connected at the other end.
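
Here is a minimal sketch of that dispatch idea (the port number and framing are hypothetical, not the gogoClient's actual wire format): every client shares one server-side UDP port, and sessions are demultiplexed on the full 128-bit source IPv6 address plus 16-bit source port, instead of on a per-client local port:

    import socket

    # One local UDP port for ALL clients; the (source IPv6, source port) pair
    # -- effectively a 144-bit session key -- replaces per-client port allocation.
    sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
    sock.bind(("::", 4500))            # hypothetical service port

    sessions = {}                      # (address, port) -> per-session state

    while True:
        data, peer = sock.recvfrom(2048)
        key = peer[:2]                 # (IPv6 address, source port)
        state = sessions.setdefault(key, {"rx_bytes": 0})
        state["rx_bytes"] += len(data)
        # A real RUDP layer would add sequence numbers, ACKs and
        # retransmissions here to rebuild TCP-like reliability over UDP.
        sock.sendto(b"ack", peer)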

Google's SPDY protocol is not an RUDP implementation, because it still uses TCP, but it attempts to multiplex sessions from the client side. It is another technique being used because it greatly improves performance, also saving network resources on the server side.

And finally, your assertion about my intent ("It sounds like you're trying to come up with some hacks to avoid the move to IPv6.") is completely FALSE. That is not my intent. In fact I too campaign for rapid IPv6 adoption and deployment. But time has already run out, and we WILL need IPv4 for a long time yet, even after massive adoption of IPv6. Given the scarcity of IPv4 addresses, we will still need such a multiplexing technique to avoid a complete collapse of services: the world is extremely late in adopting IPv6 on the server side (much more so than on the client side), and most web hosting providers cannot offer more than a single IPv4 address for their hosted web services (quite often there is simply NO public IPv4 address dedicated to a server, which is instead reached via a proxy or firewalling NAT). Web hosting companies are already experiencing this problem: not enough IPv4 addresses to support the shared proxies and offer decent visibility for the web services hosted behind these shared front-end proxies.

Even IPv6 forgot to extend the width of the port numbers used by TCP and UDP over IPv6. This problem has been underestimated. There is a future for such an "RUDP" protocol, or a similar multiplexing protocol.

Some NATs, or more specifically PAT, have been using session tracking for years.  They reuse the same TCP ports on the NAT device by tracking both the source IP and port and the destination IP and port.  Tracking these combinations when doing the translation can actually enable 65535 × 65535 / 700 = 4294836225 / 700 ≈ 6135480 different users.  Six million certainly looks like a large number of users and should fall under a true LSN definition.
Your basic math is wrong. Most sites cannot support 65535 IP addresses (most often not more than 16, and it will be impossible to get more IPv4 addresses for everyone), and absolutely none of them can support 65535 ports (most often 32767 at most, due to port restrictions).
Typically, on a mobile cell, you have just 1-3 IPv4 addresses per cell.
Take 3 × 32767 / 700 and you get a maximum of 140 mobile users served by that cell. With the increasing use of personal data in web applications, HTTPS is needed, and you absolutely need many more ports per user than before, now that mobile devices run concurrent applications updating personal data in the background against many sites. The proxies installed on these mobile accesses already have trouble maintaining enough ports per user, so what mobile providers do is drastically reduce the lifetime of "idle" sessions, so much that after 30 seconds of apparent idle time the session is aggressively closed, even if it was in fact NOT idle but simply could not transfer due to shared bandwidth usage and delays.
And 140 users within a mobile cell is really not exceptional (the typical number in a city is about 2000 users... mobile service providers have difficulty reducing this number because it would require installing new cell antennas and reducing their power to concentrate them into smaller cells). In my area, the cell covers a radius of about 2 km, reaching about 5000 subscribers, and I see very frequent disconnections there: the proxy has been set up so that all sessions are forcibly closed after less than 30 seconds of apparent idle time (and mobile applications try to stay connected by pinging their server at most once every minute... but this does not work, for example when visiting a web page that references a collection of thumbnail images, such as instant-messaging applications showing thumbnail photos of users: it is impossible to get enough port numbers on the proxy, even if the application uses only 4 sessions: one of those sessions will fail and will be closed by the proxy while waiting for a response or even just a SYN connection).
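
Putting the two capacity estimates from this exchange side by side (a sketch using only the figures quoted in this thread):

    PORTS_PER_USER = 700

    # Optimistic view: track (source port, destination IP/port) pairs
    # across the whole 16-bit port space.
    optimistic = 65535 * 65535 // PORTS_PER_USER    # 6135480 users

    # Per-cell reality claimed above: 1-3 public IPv4 addresses,
    # each with at most ~32767 usable ports.
    per_cell = 3 * 32767 // PORTS_PER_USER          # 140 users

    print(f"optimistic LSN capacity: {optimistic:,} users")
    print(f"per-cell capacity:       {per_cell} users")
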
I hate LSN. It does not work in practice, and mobile Internet access is an excellent demonstration of this: mobile access providers cannot even install enough proxies or allocate them enough IPv4 resources. Only native end-to-end IPv6 access, without any tunneling through IPv4, will allow transmitting enough traffic without having to pass through those legacy IPv4 proxies. But we will also have to wait for web servers to get IPv6 connectivity, and in fact this is where IPv6 deployment is the least advanced: we have the backbone, and IPv6 access is starting to be deployed on the client side, but web hosting providers are VERY late, and many web services simply have no strategy for remaining usable by IPv6-only users. They think there will be 6to4 or Teredo tunnels for everyone, but this will not work: we do not globally have enough IPv4 resources to give enough capacity and bandwidth to support billions of users, and if web services suppose that the LSN offered by ISPs to their subscribers will be enough, they are clearly wrong: those web services will soon start to have dramatic reachability problems.

I believe my math lines up with the actual truth.  Actually, most NAT devices track both source and destination IP addresses.  With the introduction of windows, any TCP/UDP ports are fair game for NAT, but if we use two IP addresses on average per cellular site, we get back to about 65535 TCP and 65535 UDP available ports.  When tracking both source and destination IP/port combinations, we can track 65535 source ports and 65535 destination IP/ports.  This allows for 65535 source ports times 65535 target IP/port combinations, or about 4 billion possible NAT entries.  If we assume that users will use 700 ports each, that leaves about 6 million users.

Now, you mentioned some other reasons why you think LSN might struggle, such as bandwidth.  Bandwidth is only tied to NAT through the CPU speed of the NAT appliance.  If the appliance CPU runs at 2 GHz, takes 4 cycles to swap the source IP/port and 2 cycles to track the destination IP/port, then it can run at 2048 MHz / 6 cycles / 5 bytes / 2 addresses ≈ 34 Mb/s.  Most cellular towers only pull in a 1.5 Mb/s connection per 1000 users, which is the real bottleneck.  Put three people on YouTube and it can kill a connection like this.

However, tracking this in RAM means that we need about 32 GB per appliance.
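
Reproducing those back-of-envelope figures (a sketch; the cycle counts, the 5-byte rewrite size and the 8-byte table entry are assumptions taken from or fitted to the numbers above):

    CPU_HZ = 2048e6        # 2 GHz appliance CPU
    CYCLES = 4 + 2         # swap source IP/port (4) + track destination (2)
    BYTES = 5              # bytes rewritten per operation (figure quoted above)
    ADDRESSES = 2          # source and destination rewrites

    mbps = CPU_HZ / CYCLES / BYTES / ADDRESSES / 1e6
    print(f"~{mbps:.0f} Mb/s of NAT throughput")             # ~34 Mb/s

    # State table: ~65535 * 65535 possible entries at ~8 bytes each
    # is roughly the 32 GB quoted above.
    print(f"~{65535 * 65535 * 8 / 2**30:.0f} GB of state")   # ~32 GB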

The old 2G GSM and CDMA systems used DS1s (T1s) for network connections, but the new LTE networks use fibre or high-bandwidth microwave* for this now.  I am currently working on an LTE rollout project for one of the major Canadian cell companies.  The cell sites run Ethernet over fibre, with some sites connected to others via microwave.  The Ethernet switches are all capable of 1 Gb/s and are connected to the other equipment with fibre or CAT6 copper patch cords.  So there's a lot more than 1.5 Mb/s available.  Also, not all 65K ports are available: "well known" ports are not likely to be used, as they have to remain available for new incoming connections.


*Another carrier I did some work for, a couple of years ago, ran 400 Mb/s microwave links between sites.


The microwave will help a ton with data speeds; however, the NAT equipment needs to be upgraded as well.

 

As for the "Well known" ports, I was taking that into account.  Philippe has mentioned that there were one to three IP addresses per tower.  When I take the average, it is two IP's per tower.  He also mentioned that we get about 32767 ports per IP.  When I add 32767 for one IP and 32768 for the other IP, I get 65535.  Please note that this is only an average and it can really range from 32767 to 98302 available ports when adding all IP addresses together.

I wonder where he gets that figure of 1-3 addresses per tower?  I have seen nothing to support it.  Generally, there's an Ethernet link from the radio equipment that goes back further into the network, where the routing, DHCP, NAT, etc. are done.

You don't need to be an expert to get such metrics. The fact that there's a DHCP or NAT router behind the tower does not hide the other fact that we constantly see the same few IPs being used on the Internet, traceable to a single tower, independently of which user is connected to it with their smartphone.

Google is already using this to get a fast alternative geolocation of smartphone users whose devices don't include a GPS (it also uses the geolocation of users connected to an open Wi-Fi access point, but that is much less reliable, as most of these hotspots are connected to a DSL/cable/FTTH/FTTB access via a tunnel whose IPv4 address is assigned much more temporarily by the upstream ISP, in an address block that covers a much larger metropolitan area). The public IPv4 address used provides the mapping, and Google can then correlate this data with collected GPS coordinates to compute the location of the cell tower and match it with good enough precision. These IPv4 addresses are extremely stable, and it is not the DHCP or NAT routing that will hide them.
