This solution cannot work. In fact, anyone currently using (IPv4-only) mobile Internet access can already see that solutions based on ISP-side NATs or proxies do not work: the ISP is completely unable to sustain a session through more than about 30 seconds of apparent idle time. This means that interactive applications on today's 3G mobile networks are already suffering from excessive session loss, due to server-side load overhead.
Large-scale NAT suffers principally from PORT HUNGER: today we already need about 700 ports per user (1000 recommended). This number keeps increasing with Web 2.0 applications, with their many interactions between many users and various data sources. Add some restrictions on usable port numbers, and a "large-scale NAT" will not accept more than about 30 *users* per IPv4 address: in other words, no more than existing NAT solutions at home or in small offices.
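A quick back-of-the-envelope check of that "about 30 users" figure. The 700-ports-per-user number is the one quoted above; the amount of port space lost to carrier-side restrictions is an assumption chosen for illustration, not a measured value:

```python
# Rough check of the "about 30 users per IPv4" claim.
# Assumptions (hypothetical): of the 65535 ports on a shared
# address, only ~20000 survive carrier-side restrictions, and
# each user needs ~700 ports (1000 recommended).
usable_ports = 20000          # assumed after carrier-side restrictions
ports_per_user = 700

users_per_ipv4 = usable_ports // ports_per_user
print(users_per_ipv4)         # -> 28, i.e. roughly 30 users per address
```

With the recommended 1000 ports per user, the same budget drops to about 20 users per shared address, which is why the figure is so sensitive to the restrictions applied.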
The only way to avoid this problem is to give each user not just one address, but ideally one address per user and per web application, on each physical interface the user may use at any one time (e.g. on their fixed-line access or on their mobile access with their smartphone). Only IPv6 allows that, by routing not just one address but a full /64 block in which users can configure various Internet-connected devices, or virtual interfaces for specific applications, all of them benefiting from IPv6's native support for interface autoconfiguration.
With 4G networks coming, users will not understand why all those NATs and proxies, shared between independent users, are crippled by the ISPs' inability to sustain the combined traffic (and to avoid the HUNGER for port numbers, even when using only TCP with HTTP/1.1 and explicit session closes).
Web servers are already suffering from the problem: to avoid it, they have needed complex routers and frontal proxies for load balancing across a large block of IPv4 addresses. As it becomes almost impossible to get new IPv4 addresses for these servers, costly alternatives are being used (notably expensive CDNs for static content). This will become even more critical given the exponential growth of video, which nearly doubles every year and will account for about 60-70% of total Internet bandwidth in 2013, in long sessions or with users constantly zapping from one piece of content to another (YouTube, DailyMotion, even web radios, online games...).
For 4G (LTE) deployment, there is simply NO solution at all with IPv4. IPv6 is the only viable alternative (at reasonable cost, but also because of the existing shortage of IPv4 addresses).
If ISPs are deploying large-scale NAT, it is only to deliver the static content of the web via caches, mostly for HTTP, and to free up large numbers of existing IPv4 addresses for reuse with video content. But even in this case it will not be sufficient. There is strong evidence that IPv4 will soon die completely for ALL interactive content and services (including HTTPS for secured personal content, such as Cloud Computing and Cloud Storage accessed from home or from roaming mobile connections).
IPv4 will then remain only for the static, cacheable content of the web. Interactive applications that use personal data are the biggest promoters of accelerated adoption and deployment of IPv6 by ISPs (Google for Gmail, Buzz, Google+; Facebook, Twitter; Apple for iTunes; all online gaming platforms...).
But ISPs still want to keep their grip on their subscribers. They are ready to sacrifice the available services, and are in fact battling against the openness of the Internet (restricting protocols, using unfair traffic-shaping practices, using lying DNS servers to redirect users to THEIR online services, which they want to sell at higher prices...).
Thankfully, Freenet6 is demonstrating that those ISPs are simply liars. IPv6 is perfectly possible today, and in fact some large ISPs have offered IPv6 for a long time (e.g. since 2002 for the French ISP "Nerim").
Other liars include large providers of hardware solutions for ISPs: Cisco (promoting non-working or very slow L2TP tunnels), for example.
Don't trust Cisco... Its "CGv6" solution in fact DOES NOT work. I have proof of that. It won't help users, and it won't help ISPs make the right choice and recognize the urgency of a large-scale solution genuinely based on native IPv6.
The killer apps already exist: all the interactive personal communication apps on smartphones (e.g. dating applications like Meetic, and VOD applications). When these are used over the emerging 4G (LTE) networks there will be no choice, and already on 3G networks users are experiencing too many session losses, with applications hanging or terminating abruptly, due to too-frequent changes of IPv4 address.
LSN cannot work with the growing demand for security and privacy: how will you support HTTPS, or IPsec in general, for sessions lasting more than a few seconds?
There's a need for something else. I think the way to go would be to use a "Reliable UDP" protocol as a convergence technology that extends the number of available ports (using the source and destination IPv6 addresses as an additional port number for session dispatch).
Ideas along these lines have been around for a long time: see for example what LimeWire developed for its peer-to-peer apps. A 128-bit GUID identified hosts, much like a 128-bit IPv6 address, and it worked over a "Reliable UDP" session that could traverse any number of NATs, including NAT+PAT.
The only problem with the Gnutella protocol (supplemented by the LimeWire extensions) was the lack of a registry for maintaining routing information in a stable way (i.e. the equivalent of the global system of ASNs and BGP announcements was only partly implemented, with unstable GUID routing tables maintained locally between hops, later improved by a Kademlia-based route-discovery mechanism). But all the principles were there. It was entirely possible to create a reliable protocol, implement TCP on top of it, and get excellent performance, all without IPv6.
If LimeWire had not been killed, we would already have it working as an excellent convergence mechanism for connecting IPv6-only clients to IPv4-only sites. It would have implemented BGP as well, using Gnutella as the base transport, plus an alternative tunneling protocol compatible with both IPv4 and IPv6...
Please note that I quoted the expression in full and capitalized "Reliable": it is a fixed term. I have NOT said that UDP alone is a reliable protocol; but really, you don't know what you're talking about. Even the gogoClient uses such a "Reliable UDP" (RUDP) protocol for NAT traversal over IPv4. Basically it means that TCP is implemented on top of UDP instead of on top of IP.
But the major improvement offered by an RUDP protocol is on the server side rather than the client side: standard TCP requires one port number per client. Unfortunately, TCP port numbers are a scarce resource, and for frontal proxies or load balancers near servers, which must handle connections from many clients, each using many sessions simultaneously, this is a problem.
Alternatives to standard TCP require the ability to multiplex as many sessions as possible over the same target port number on the link from the proxy to the server: this is possible using UDP.
In that case, the RUDP protocol implementation is no longer limited by the scarce 16-bit port number (part of which is restricted): the port number can be a value of arbitrary size, so it can effectively be a 128-bit entity such as the remote IPv6 address of the client connected at the other end.
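A minimal sketch of that dispatch idea, under stated assumptions: a server receives every session on one UDP port and keys per-session state on the peer's (address, port) pair, so the peer's 128-bit IPv6 address effectively widens the 16-bit port space. The reliability layer itself (sequencing, ACKs, retransmission) is deliberately omitted, and all names are illustrative:

```python
# Sketch only: demultiplexing many "Reliable UDP" sessions over a
# single server-side UDP port, keyed by the peer's (address, port)
# pair. The reliability machinery (ACKs, retransmits) is omitted.
import socket

sessions = {}  # (peer_address, peer_port) -> per-session state


def dispatch(sock):
    """Receive one datagram and route it to its session's state."""
    data, peer = sock.recvfrom(65535)
    key = peer[:2]  # the 128-bit IPv6 address extends the session space
    state = sessions.setdefault(key, {"rx_bytes": 0})
    state["rx_bytes"] += len(data)
    return key, state


# One UDP port serves every client session (port 0: OS picks a free
# port for this demo; a real deployment would use a well-known port).
server = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
server.bind(("::1", 0))
```

Because the dispatch key includes the full source address, two clients that happen to reuse the same source port number still get distinct sessions, which is exactly the multiplexing gain described above.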
Google's SPDY protocol is not an RUDP implementation, because it still uses TCP but attempts to multiplex sessions from the client side. It is another technique in use, also because it greatly improves performance while saving network resources on the server side.
And finally, your assertion about my intent ("It sounds like you're trying to come up with some hacks to avoid the move to IPv6.") is completely FALSE. That is not my intent. In fact I also campaign for rapid IPv6 adoption and deployment. But time has already run out, and we WILL need IPv4 for a long time yet, even after massive adoption of IPv6. Because of the scarcity of IPv4 addresses, we will still need such multiplexing techniques to avoid a complete collapse of services: the world is in fact extremely late in adopting IPv6 on the server side (far more than on the client side), and most web hosting providers cannot even offer more than a single IPv4 address for their hosted web services (quite often there is simply NO public IPv4 address dedicated to a server, which is instead reached via a proxy or firewalling NAT). Web hosting companies are already experiencing this problem: not enough IPv4 addresses to support the shared proxies and give decent visibility to the web services hosted behind these shared frontal proxies.
Even IPv6 neglected to extend the width of the port numbers used by TCP and UDP. This problem has been underestimated. There is a future for such an "RUDP" protocol, or a similar multiplexing protocol.
I believe my math lines up with the actual truth. Most NAT devices track both source and destination IP addresses. With the introduction of Windows, any TCP/UDP port is fair game for NAT, but if we use two IP addresses on average per cellular site, we can get back to about 65535 available TCP ports and 65535 UDP ports. When tracking both source and destination IP/port combinations, we can only track 65535 source ports and 65535 destination IP/ports. That allows 65535 source ports times 65535 target IP addresses, or about 4.3 billion possible NAT entries. If we assume that each user consumes 700 ports, that leaves room for about 6 million users.
Now, you mentioned some other reasons why you think LSN might struggle, such as bandwidth. Bandwidth is tied to NAT only through the CPU speed of the NAT appliance. If the appliance CPU runs at 2 GHz, takes 4 cycles to swap the source IP/port and 2 cycles to track the destination IP/port, then it can run at about 2048 MHz / 6 cycles / 5 bytes / 2 addresses = 34 Mb/s. Most cellular towers only pull in a 1.5 Mb/s connection per 1000 users, which is the real bottleneck. Put three people on YouTube and it can kill a connection like this.
However, tracking this in RAM means we need about 32 GB per appliance.
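The capacity and memory arithmetic above, spelled out. The inputs are the rough figures from this post (65535 usable source ports, 65535 tracked destination IP/port combinations, 700 ports per user); the ~8 bytes of state per NAT entry is an assumption that makes the 32 GB figure come out:

```python
# Reproducing the NAT-capacity arithmetic from the post above.
# Assumption: ~8 bytes of translation state per NAT entry.
entries = 65535 * 65535          # possible NAT entries
users = entries // 700           # users at 700 ports each
state_gib = entries * 8 / 2**30  # translation state, in GiB

print(entries)           # 4294836225  (~4.3 billion entries)
print(users)             # 6135480     (~6.1 million users)
print(round(state_gib))  # 32          (~32 GiB of RAM)
```

Note how sensitive the user count is to the per-user port figure: at the recommended 1000 ports per user, the same table supports only about 4.3 million users.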
The old 2G GSM and CDMA systems used DS1s (T1s) for network connections, but the new LTE networks use fibre or high-bandwidth microwave* instead. I am currently working on an LTE roll-out project for one of the major Canadian cell companies. The cell sites run Ethernet over fibre, with some sites connected to others via microwave. The Ethernet switches are all capable of 1 Gb/s and are connected to the other equipment with fibre or CAT6 copper patch cords. So there's a lot more than 1.5 Mb/s available. Also, not all 65K ports are usable: "well known" ports are unlikely to be used, as they have to stay available for new incoming connections.
*Another carrier I did some work for, a couple of years ago, ran 400 Mb/s microwave links between sites.
The microwave will help a ton with data speeds; however, the NAT equipment needs to be upgraded as well.
As for the "well known" ports, I was taking that into account. Philippe mentioned that there were one to three IP addresses per tower; taking the average, that's two IPs per tower. He also mentioned that we get about 32767 usable ports per IP. Adding 32767 for one IP and 32768 for the other, I get 65535. Note that this is only an average: the total can really range from 32767 to 98302 available ports when all the IP addresses are added together.
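Spelling that per-tower budget out (figures from this thread: one to three public IPv4 addresses per tower, roughly 32767 usable ports each once the well-known range is reserved; the ~700-ports-per-user figure comes from earlier in the thread, so the user count is an extrapolation, not a claim from this post):

```python
# Rough per-tower port budget, one to three public IPv4 addresses.
ports_per_ip = 32767                  # after reserving well-known ports
budget = {n: n * ports_per_ip for n in (1, 2, 3)}
print(budget)            # {1: 32767, 2: 65534, 3: 98301}

# At ~700 ports per user, a two-address tower supports roughly:
print(budget[2] // 700)  # 93 concurrent users
```

(The post's 65535 and 98302 figures differ by one because it rounds one address up to 32768 ports; the order of magnitude is the same.)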
You don't need to be an expert to gather such metrics. The fact that there's a DHCP or NAT router on the tower does not hide the other fact: we constantly see the same few IPs being used on the Internet, traceable to a single tower, independently of which user is connected to it with their smartphone.
Google is already using this to get fast alternative geolocation of smartphone users whose devices lack a GPS receiver (it also uses the geolocation of users connected to an open Wi-Fi access point, but that is much less reliable, as most of these hotspots sit behind a DSL/cable/FTTH/FTTB access via a tunnel whose IPv4 address is assigned much more temporarily by the upstream ISP, from an address block covering a much larger metropolitan area). The public IPv4 address provides the mapping: Google can correlate this data with collected GPS coordinates, compute the location of the cell tower, and match locations with good enough precision. These tower IPv4 addresses are extremely stable, and it is not DHCP or NAT routing that will hide them.