I haven't seen a single network stack that doesn't limit the size of the trailing payload, or the packet in general (MTUs ;)). Go try to push 65507 bytes of payload into the message and tell me how it goes.
In any case, a DNS tunnel offers you both TCP and UDP tunneling at much higher throughput. I'll take a look at your code when I have the time and see how it compares to ptunnel or ICMP shell.
"A correctly-formed ping packet is typically 56 bytes in size, or 84 bytes when the Internet Protocol header is considered. However, any IPv4 packet (including pings) may be as large as 65,535 bytes."
1500 is the MTU of Ethernet, and it's often not sustainable on an end-to-end connection (especially when you add frame overheads). If you use that high a payload size you'll get considerably worse performance than, say, limiting it to around 500 bytes; you are welcome to try it.
Also, with how global traffic is managed, smaller packets tend to get priority since they can be queued quicker. Backbone connectivity uses much bigger frames than Ethernet, so more often than not generating more, smaller packets will increase your overall throughput (up to a limit) unless you are on a very controlled network.
What do you mean by "it's often not sustainable"? Throughput on a server is higher at higher packet size, so if you're doing a download I'd expect the server to send 1500 byte packets. It's pretty easy to saturate a link with 1500 byte packets, and it's much harder to do so at lower packet sizes (from the server's perspective) since the per-packet processing costs start to dominate over the per-byte costs. Admittedly my knowledge of this sort of stuff is mostly intra-DC; is there some other factor that you're referring to that supersedes this on the web?
I'm not aware of prioritizing smaller packets on the backbone, sounds like something that would be targeted at small flows (i.e. first N packets in a flow get a priority bump)? More info on that would be appreciated.
It doesn't matter; the MTU setting on your end is meaningless for WAN and ISP / interlink-grade networks, as they don't use Ethernet. The FDDI frame size is 4500 bytes (ATM is about double that), minus whatever overhead, so usually 4200 and change. ISP/WAN routers don't care about how many Mbit/s they transfer but about how many packets they route per given unit of time; as packets get packed into a single frame, the smaller the packet, the more packets they can transfer in each frame.
Also, from a more high-level point of view, if you think about it the small packets are the most critical ones, at least as far as responsiveness goes. DNS is limited to 512 bytes over UDP, and the TCP 3-way handshake packets are tiny; those are the packets that need to get to and back from their destination as fast as possible. Delays in data transfers mean slower speeds; delays in handshakes mean that your application can fail or hang.
Other important traffic such as VOIP[0] also uses very small packet sizes for this same reason: most critical services need to transfer very little data (per given unit of time) but need to update that data as frequently as possible to provide the illusion of real time and to mask the latency. The same goes for other things like online/multiplayer gaming, and so on.
Pretty much, if you want your service to be as responsive as possible, limit your packet size to the smallest size possible and increase your PPS; this will ensure that your packets get to their destination quicker.
[0]VOIP Packet Sizes http://www.cisco.com/c/en/us/support/docs/voice/voice-qualit...
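To put rough numbers on the VoIP case (a sketch using the commonly cited G.729 figures; exact sizes depend on the codec and packetization interval):

    # G.729 at a 20 ms packetization interval, no cRTP, no L2 overhead counted.
    voice_payload = 20           # bytes of audio per packet
    rtp, udp, ip = 12, 8, 20     # header bytes
    packet = voice_payload + rtp + udp + ip
    pps = 1000 // 20             # one packet every 20 ms -> 50 packets/second

    print(packet)                # 60-byte packets
    print(packet * pps * 8)      # 24000 bit/s of IP bandwidth per call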
The only time you would want to use large packets is pretty much when you can have a buffer. This means you need to handle fewer packets per second, which lowers CPU consumption (across the entire path), so video streaming and the like can use pretty much as large an MTU as they want, unless they start getting fragmented.
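To illustrate the per-packet-cost point with a rough sketch (assuming a 1 Gbit/s stream and ignoring framing/header overhead):

    # Packets per second needed to sustain 1 Gbit/s at different packet sizes.
    rate_bps = 1_000_000_000
    for size in (500, 1500, 9000):
        pps = rate_bps / 8 / size
        print(f"{size:>5}-byte packets: ~{pps:,.0f} packets/s")
    # Larger packets mean far fewer packets to process for the same bitrate.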
> It doesn't matter; the MTU setting on your end is meaningless for WAN and ISP / interlink-grade networks, as they don't use Ethernet...
It absolutely does matter and is quite meaningful. :)
If you set your edge router's Internet-facing MTU to 9k, and the upstream equipment's MTU is smaller than that, then either your packets will be dropped, or PMTU Discovery will try to figure out the MTU of the path. (Better hope everyone along the path is correctly handling ICMP! :) )
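As a minimal sketch of poking at that from an end host (Linux-specific; the constants are the <linux/in.h> values, and example.com plus the discard port are just placeholders):

    import socket

    IP_MTU_DISCOVER = 10   # not always exposed by Python's socket module
    IP_PMTUDISC_DO = 2     # always set DF; never fragment locally
    IP_MTU = 14            # getsockopt: path MTU currently cached for this destination

    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)
    s.connect(("example.com", 9))      # we only care about the route, not the service

    try:
        s.send(b"\x00" * 8972)         # a jumbo-sized datagram with DF set
    except OSError as e:
        print("send failed (likely EMSGSIZE):", e)

    print("kernel's path MTU estimate:", s.getsockopt(socket.IPPROTO_IP, IP_MTU))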
> The only time you would want to use large packets is pretty much when you can have a buffer...
Or if you have high volumes of data to move and want to dramatically increase the data:Ethernet_frame_boilerplate ratio. :)
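As a rough sketch of that ratio (assuming 18 bytes of Ethernet header+FCS plus 20 bytes of preamble/SFD/inter-frame gap per frame):

    # Fraction of wire bits that are payload at different MTUs.
    OVERHEAD = 18 + 20
    for mtu in (500, 1500, 9000):
        print(f"MTU {mtu:>5}: {mtu / (mtu + OVERHEAD):.1%} payload on the wire")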
> Also ... if you think about it the small packets are the most critical ones... [because they need to be dispatched as quickly as possible.]
Yes, but a larger MTU shouldn't affect this. Set whatever socket options are required to get those packets on their way as soon as they're created, and your system shouldn't wait to fill an Ethernet frame before sending that packet.
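On the TCP side, the usual knob for that is disabling Nagle's algorithm; a minimal sketch (host and port are placeholders):

    import socket

    s = socket.create_connection(("example.com", 80))
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)   # don't coalesce small writes
    s.sendall(b"small latency-sensitive message")             # goes out immediately
    s.close()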
Poor choice of words on my part. If you configure jumbo frames on your uplink you are going to kill your network stack; if you limit it too much you'll have a huge overhead.
The point is that for transferring data, especially when responsiveness is important if not paramount, utilizing the maximum potential frame size you can push without fragmentation will generally yield a poorer result in real-world applications.
> If you configure jumbo frames on your uplink you are going to kill your network stack...
I can't agree with that statement. If upstream devices support a larger-than-1500-byte MTU, OR PMTU discovery works correctly, then you are absolutely not going to "kill your network stack". At worst (in the PMTU discovery phase), you'll see poor performance for a few moments while the MTU for the path is worked out, and then nothing but smooth sailing from then on.
> The point is that for transferring data, especially when responsiveness is important if not paramount, utilizing the maximum potential frame size you can push without fragmentation will generally yield a poorer result in real-world applications.
I'm not sure what you're saying here. Are you saying:
"If you configure your networking equipment to always wait to fill up a full L2 frame before sending it off, you'll harm perf on latency-sensitive applications."?
If you're not, would you be so kind as to rephrase your statement? I may be particularly dense today. :)
However, if you are, then that statement is pretty obvious. I expect that few people configure their networks to do that. However, I don't see what that has to do with the link's MTU. Just because you have a 9k+ MTU, doesn't mean that you have to transmit 9k of data at a time. :)
I work for an ISP and it is all Ethernet on the interior, both for residential and commercial customers. The small amount of frame relay and similar services that customers request is carried on the Ethernet network from edge to edge.
It's the de-facto MTU of much of The Internet. Baby Jumbo (MTU >1500 but <9k), Jumbo (MTU ~9k), and Super Jumbo (MTU substantially larger than 9k) frames exist, and are supported by many (but -sadly- not all) Ethernet devices.
Edit:
> Also, with how global traffic is managed, smaller packets tend to get priority...
Do you have a reliable citation for this? I would expect that core and near-core devices would handle so much traffic, that they all would be using MTUs far higher than 1500 bytes per frame.
It's a pretty standard QoS measure; network schedulers, especially for multiplexed/aggregated networks, will have a bias for small packets. You should be able to find performance statistics for various token-bucket configurations that demonstrate that.
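As a toy illustration of the mechanism (not a citation; the rate is an arbitrary example):

    # With a byte-based token bucket, a small packet needs far fewer tokens than a
    # full-size one, so when the bucket runs near empty small packets get out sooner.
    RATE = 125_000   # token refill rate in bytes/second (~1 Mbit/s policer)

    for size in (64, 512, 1500):
        wait = size / RATE               # time to accumulate enough tokens from empty
        print(f"{size:>4}-byte packet: ~{wait * 1000:.2f} ms until it can be sent")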
Do you have a cite for that? :) I know that CoDel doesn't bias for small packets; it treats all flows equally and tracks traffic on a bytes-transferred (rather than packets-transferred) basis.
Can you please clarify this comment? It sounds like you're saying Ethernet cannot maintain a line-rate transfer at the maximum MTU. But that can't be what you're saying; anyone could run iperf/netperf or even a large crafted-packet transfer and prove this wrong.
If the layer above (i.e. IPv4) can create fragments, you can send up to the maximum payload of the L3 protocol. You can send up to a 64KiB IPv4 packet over 1500-byte Ethernet.
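A rough sketch of that fragmentation math (assuming a 20-byte IPv4 header with no options):

    import math

    MTU = 1500
    IP_HDR = 20
    max_l4_payload = 65535 - IP_HDR          # 65515 bytes of data in one IPv4 datagram

    per_fragment = (MTU - IP_HDR) // 8 * 8   # fragment payload is a multiple of 8 bytes
    print(per_fragment)                      # 1480 bytes of payload per fragment
    print(math.ceil(max_l4_payload / per_fragment))   # ~45 frames on the wire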
I haven't tried comparing both; I don't have many resources. All I can say is that when using icmptunnel, one couldn't tell whether traffic was going over the tunnel or directly over the Internet. Hence ICMP tunneling was very fast.
Yes. In my opinion they should restrict the payload size of an ICMP message. Blocking all echo/reply can have an adverse impact on other applications as well.
If you're going to do that, set the maximum length to 128 bytes. Different ping tools use different sized payloads - I know of some common ones that generate packets by default that would be blocked with that limit.
Also, instead of using the plain limit match, check out hashlimit. It can apply a rate limit on a per sender, destination, or sender+destination basis. The recent match may also be of interest.
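As a sketch of what that could look like (driving iptables from a script; the 156-byte cutoff is 128 bytes of payload + 8 bytes of ICMP header + 20 bytes of IP header, and the rate and rule name are just examples):

    import subprocess

    rules = [
        # Accept echo-requests up to 156 bytes total, at most 5/s per source IP.
        ["iptables", "-A", "INPUT", "-p", "icmp", "--icmp-type", "echo-request",
         "-m", "length", "--length", "0:156",
         "-m", "hashlimit", "--hashlimit-upto", "5/second",
         "--hashlimit-mode", "srcip", "--hashlimit-name", "ping-limit",
         "-j", "ACCEPT"],
        # Drop any other echo-request (oversized or over the rate).
        ["iptables", "-A", "INPUT", "-p", "icmp", "--icmp-type", "echo-request",
         "-j", "DROP"],
    ]

    for rule in rules:
        subprocess.run(rule, check=True)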
A couple of million small packets in a short timeframe will still eat up your resources. If an application needs ICMP echo to pass transparently through your firewall, then you should probably review your need for that application; you're one step away from becoming a partner in someone else's amplification attack.
ICMP echo isn't amplification, as long as you don't respond to multicast/broadcast addresses. It's still 1:1 reflection, so you probably want to rate limit if it's simple (FreeBSD and Linux come out of the box with sane default rate limits).
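On Linux the relevant knobs live under /proc/sys/net/ipv4; a quick sketch for checking them (icmp_ratelimit is the minimum gap in milliseconds between limited replies, and icmp_ratemask selects which ICMP types the limit applies to):

    # Inspect the kernel's built-in ICMP rate-limit settings (Linux).
    for name in ("icmp_ratelimit", "icmp_ratemask", "icmp_echo_ignore_broadcasts"):
        with open(f"/proc/sys/net/ipv4/{name}") as f:
            print(name, "=", f.read().strip())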
It is amplification if you allow the packets through transparently because all the hosts behind your firewall will respond if you send an echo request to the broadcast address.
So you're going to have to do a little bit more configuration than just allowing a maximum packet size. If you're going to allow ICMP to transit at all, you should also limit the allowed set of addresses (you should do that regardless, but echo can be used for amplification by virtue of the broadcast feature of the IP protocol). Hence the 'one step away'.
This was known as the 'smurf' attack. Fortunately this is now mostly a thing of the past. But poking holes in your firewall for ICMP is a delicate affair.
Well the question is then what's the point other than a personal exercise?
There is plenty of ICMP / multi-protocol tunneling software out there for both Linux and Windows, and much of it doesn't require administrative privileges.
Also, ptunnel comes standard with some Linux distros these days (Ubuntu, and probably most of its derivatives). As far as raw performance goes, ptunnel is also the highest-performing one, capable of achieving about 150 kbps, which isn't that bad considering the sheer number of packets and the overhead involved.
> Well the question is then what's the point other than a personal exercise?
What's your question? Is it "What's the point of blocking ICMP?"? Or is it the opposite question?
If it's the former, then there are sysadmins out there who cargo-cult their network configuration and listen to folks like Gibson Research Corporation who've been giving really bad advice [0] for the past decade+.
[0] Specifically, they strongly recommend dropping all traffic to ports that don't have listening services, along with all ICMP, rather than rejecting said traffic and allowing all non-problematic ICMP. They also have a "handy" tool [1] to make it look like doing anything else is "DANGEROUS": (The tool reports [2] if your site responds to ICMP echo requests.)
[2] Ping Reply: RECEIVED (FAILED) — Your system REPLIED to our Ping (ICMP Echo) requests, making it visible on the Internet. Most personal firewalls can be configured to block, drop, and ignore such ping requests in order to better hide systems from hackers. This is highly recommended since "Ping" is among the oldest and most common methods used to locate systems prior to further exploitation.
I tried using some but couldn't get them to work, probably because many were developed a long time ago. There have been many recent changes in the kernel.
Any major captive portal re-routes DNS requests to its login IP and blocks any IP traffic leaving the local network. That essentially prohibits any ICMP request to the outside world.