The article states that offloading work from routers to clients is useless today because memory and CPU are cheaper thanks to Moore's law. Well, guess what: traffic has grown as well.
What you must think about is: where are the bottlenecks? When you're connecting to a client on the other side of a large network (e.g. the internet) and you're not getting the bandwidth your last-mile connection should provide, you have to ask yourself: what's keeping the speed down?
Turns out that router processing is still a bottleneck. By delegating the mundane router work of handling packet fragments and validating checksums to the end hosts, we get a much more efficient network than with IPv4. IPv6 headers are also much simpler (a fixed 40-byte base header, no per-hop checksum), making them easier and faster to process in ASICs.
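To make the checksum point concrete, here's a rough sketch (in Python, purely illustrative) of the RFC 1071 one's-complement checksum that every IPv4 router has to re-verify, and recompute after decrementing the TTL, on every single packet. IPv6 dropped the header checksum entirely, so routers skip this work. The header field values below are made up for the example.

```python
import struct

def internet_checksum(data: bytes) -> int:
    """RFC 1071 one's-complement sum over 16-bit words."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    # Fold any carries back into the low 16 bits
    while total > 0xFFFF:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# A minimal 20-byte IPv4 header; checksum field (bytes 10-11) zeroed for now.
header = struct.pack(
    "!BBHHHBBH4s4s",
    0x45, 0, 20,   # version/IHL, TOS, total length
    0, 0,          # identification, flags/fragment offset
    64, 6, 0,      # TTL, protocol (TCP), checksum placeholder
    bytes([192, 168, 0, 1]),   # example source address
    bytes([10, 0, 0, 1]),      # example destination address
)
checksum = internet_checksum(header)

# Verification: checksumming the header WITH the checksum filled in
# must yield 0 -- this is what every IPv4 hop repeats per packet.
filled = header[:10] + struct.pack("!H", checksum) + header[12:]
assert internet_checksum(filled) == 0
```

Multiply that loop by every packet on every hop and the savings from removing it in IPv6 stop looking trivial.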
Stating that reducing work is useless because hardware is much faster now than it was back then is not a good argument.