TCP Fast Open: Is it useful?

I was looking at this option and am currently experimenting with it.
I found this document, explaining how to enable it in the OS, and this document, explaining how to compile unbound with TCP Fast Open support.

A Microsoft Agent / Moderator says:

TCP Fast Open will reduce the traffic back and forth between the client and the server, improving page load time by 10% to 40%.

I'm curious about your thoughts on enabling this by compiling unbound with the additional directives.

Also wondering what has changed in Edge; several articles (example) claim you can enable this in the browser using about:flags, but the option is missing in the current version.

Thanks for your time, effort and quick response to the new dnsmasq release.

The first article you linked to has a reference to a Google/ICSI research paper:

The relevant sections are:

Today’s TCP standard permits the exchange of data only after the client and server perform a handshake to establish a connection. This handshake introduces one RTT [<-- round trip-time] of delay for each connection. For short transfers of the sort that are common today on the web, this additional RTT is a significant portion of the web flows’ network latency

[...]

TCP Fast Open (TFO) that enables data to be exchanged safely during TCP’s initial handshake. At the core of TFO is a security cookie that is used by the server to authenticate a client that is initiating a TFO connection.

Their conclusion is:

Based upon traffic analysis and network emulation, we show that TCP Fast Open would decrease average HTTP transaction network latency by 15% and whole-page load time over 10% on average, and in some cases up to 40%.

I will summarize only very briefly what the technical implications of this standard are; you will have to go to the specification yourself if you want to understand it in its full depth.

In the first connection, a TCP cookie (for later fast authentication) is requested. Hence, the first TCPFO connection is not accelerated. However, subsequent ones are, even though the payload size itself is notably increased due to the transmitted TFO cookie.

After reading the standard, it becomes clear that TCPFO shines when two requirements are met:

  1. There is content that can be served without effort; this is true for HTTP requests where (static) content can be served based solely on the requested path.
  2. The distance between the two parties is large enough to cause notable latency of at least a few milliseconds between the involved hosts.

For DNS (assuming a local unbound is used, as this was the question):

  1. Typically fulfilled. The query can be initiated one RTT earlier as the very first TCP packet can already transport the DNS question. For typical TCP connections, the question is only in the third packet.
  2. Typically fulfilled with a distant DNS server. When the one-way delay to your DNS server is on the order of 40 msec, then TCPFO allows your DNS question to reach the server 2 × 40 = 80 msec earlier. This is the advantage and the motivation for TCPFO. It requires sufficiently fast networks (which can carry the larger TCP packets without penalty), but we can generally assume this these days.

However, mind that local delays are negligible: they may be in the low microsecond regime. This voids argument 2 entirely, making TCPFO not advantageous in any sense when everything happens purely locally. Hence, my conclusion is: it will not be truly beneficial for a local connection to unbound; however, it will still be beneficial for your DNS clients connecting to your Pi-hole in case they

  1. are using TCP connections (not many do), and
  2. are sufficiently far away (e.g., through a VPN tunnel).
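To make the arithmetic in point 2 concrete, here is a tiny Python sketch (the helper names are my own, purely for illustration, not from any library). Without TFO the question rides in the third packet of the handshake (SYN, SYN-ACK, ACK+query); with TFO it rides in the very first packet (SYN+query):

```python
def query_arrival_ms(one_way_delay_ms: float, tfo: bool) -> float:
    """Time until the DNS question reaches the server.

    Without TFO: SYN (d), SYN-ACK (d), ACK+query (d) -> query arrives at 3*d.
    With TFO: the query is carried in the SYN -> query arrives at d.
    """
    return one_way_delay_ms if tfo else 3 * one_way_delay_ms


def tfo_saving_ms(one_way_delay_ms: float) -> float:
    """TFO delivers the question one full RTT (= 2 * one-way delay) earlier."""
    return (query_arrival_ms(one_way_delay_ms, tfo=False)
            - query_arrival_ms(one_way_delay_ms, tfo=True))


# With a 40 msec one-way delay, the question arrives 2 x 40 = 80 msec earlier:
print(tfo_saving_ms(40))  # 80
```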

I am on Ubuntu 18 with stubby as a forwarder, which also supports TCP Fast Open.
Do I need to do anything to enable the support in Pi-hole?

the dnsmasq changelog says:

Support TCP-fastopen (RFC-7413) on both incoming and
outgoing TCP connections, if supported and enabled in the OS.

I cannot find anything in the dnsmasq man page that indicates you need to enable it, so I assume it is automatically enabled.

If you're running Pi-hole beta5 and have updated to the latest version (pihole -up), you'll notice a message in the Pi-hole log: started, version pi-hole-2.81. You could also use dig chaos txt version.bind +short @127.0.0.1, as mentioned by DanSchaper in this topic. For the command to work, ensure version.bind is not blocked (see here).

Supported != enabled. Check whether the settings in this article are active on your system; the document also mentions some commands which should confirm you're using TCP Fast Open (or use Wireshark to confirm).
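One way to confirm TFO is actually being used on Linux is to look at the kernel's TCPFastOpen* counters in /proc/net/netstat. Here is a small Python sketch (the parsing helper is my own; the counter names, e.g. TCPFastOpenActive and TCPFastOpenPassive, are the real Linux ones):

```python
def tfo_counters(netstat_text: str) -> dict:
    """Extract the TCPFastOpen* counters from /proc/net/netstat content.

    The file consists of pairs of lines: a header line with field names
    followed by a value line, both prefixed with the protocol group
    (e.g. 'TcpExt:').
    """
    counters = {}
    lines = netstat_text.splitlines()
    for header, values in zip(lines[::2], lines[1::2]):
        names = header.split()
        vals = values.split()
        if names[0] != vals[0]:  # skip malformed pairs
            continue
        for name, val in zip(names[1:], vals[1:]):
            if name.startswith("TCPFastOpen"):
                counters[name] = int(val)
    return counters


# On a live Linux system:
# with open("/proc/net/netstat") as f:
#     print(tfo_counters(f.read()))
```

Non-zero TCPFastOpenActive (outgoing) or TCPFastOpenPassive (incoming) counters mean TFO connections have actually been established.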

I wonder what is required to enable tcp-fastopen for IPv6...

edit

After reading this article, I even wonder if it is wise to use tcp-fastopen.
TL;DR
It seems to boil down to “don't enable TCP Fast Open on any service not protected by TLS or IPsec”.

and the warning in this configuration file.
TL;DR
beware some firewalls do not like TFO! (kernel > 3.7)

/edit

That might explain why I woke up to some non working sites.

When I checked, nothing seemed amiss: my forwarder was still forwarding and Pi-hole was doing its job.
I had to reboot the Ubuntu box and my WRT box. I will keep this post updated with my situation.

Very abnormal for me. Normally this setup has been solid, with months of uptime.

I've been using TFO on port 853 for some time now. That port uses TLS, as I am sure you know.

It turns out that my dns provider is experiencing some sort of issue with its blocklist.

I'm using a UniFi Dream Machine, and I've noticed since upgrading to 5.0 I'm getting a lot of "High TCP Latency" errors. Does this relate to TCP Fast Open? I'm not sure whether it's enabled or disabled by default; I need some clarification.

Yes.

The current dnsmasq always requests TCPFO for incoming and outgoing TCP connections. The kernel will automatically do the rest (it may even ignore this request if TCPFO is not enabled, see below). The use of TCPFO cookies is transparent to the application level: the TCPFO cookie is automatically generated during the first TCP conversation between the client and server, and then automatically reused in subsequent conversations if TCPFO is enabled.

To prevent resource-exhaustion attacks, one of the potential issues with TCPFO, dnsmasq limits the queue of TCPFO requests that have not yet completed the three-way handshake to 5.
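For illustration, here is a minimal Python sketch of what dnsmasq does (in C) on its listening sockets: requesting TFO with a pending-handshake queue limit of 5 via the TCP_FASTOPEN socket option. This is Linux-only (socket.TCP_FASTOPEN is available in Python 3.6+), and the port choice here is just for demonstration:

```python
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 0))  # ephemeral port, demonstration only
# Request TFO on this listener; the value is the maximum number of
# TFO requests that have not yet completed the three-way handshake:
srv.setsockopt(socket.IPPROTO_TCP, socket.TCP_FASTOPEN, 5)
srv.listen()
srv.close()
```

The kernel handles the cookie generation and validation; the application only opts in with this one socket option.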

You can check by issuing cat /proc/sys/net/ipv4/tcp_fastopen

If this value is 0, then TCPFO is disabled.
If it is 1, then TCPFO is in client mode (outgoing), i.e., you can use TCPFO to connect to other servers which support it (technically: you can read a TCPFO cookie).
If it is 2, then TCPFO is in server mode (incoming), i.e., others can connect to your server using TCPFO (technically: you can generate TCPFO cookies for others).
If the value is 3, then TCPFO is fully enabled, and you can use both incoming and outgoing.
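The values above form a bitmask (0x1 = client support, 0x2 = server support), which a small helper (my own, purely for illustration) can decode:

```python
def tfo_mode(value: int) -> str:
    """Decode /proc/sys/net/ipv4/tcp_fastopen: bit 0x1 = client, bit 0x2 = server."""
    client = bool(value & 0x1)
    server = bool(value & 0x2)
    if client and server:
        return "fully enabled (incoming and outgoing)"
    if client:
        return "client mode (outgoing)"
    if server:
        return "server mode (incoming)"
    return "disabled"


# On a live Linux system:
# with open("/proc/sys/net/ipv4/tcp_fastopen") as f:
#     print(tfo_mode(int(f.read())))
```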

You can set the value yourself using, e.g.,

echo "3" | sudo tee /proc/sys/net/ipv4/tcp_fastopen

to fully enable TCPFO.

According to the kernel sysctl ip documentation, the default is enabled client support (value 1).
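Note that writing to /proc only lasts until the next reboot. To make the setting persistent, the usual approach is a drop-in file under /etc/sysctl.d (the filename below is just a convention I chose):

```
# /etc/sysctl.d/30-tcp-fastopen.conf
net.ipv4.tcp_fastopen = 3
```

Apply it immediately with sudo sysctl --system.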

Nothing. The kernel code uses the net/ipv4/tcp_fastopen value for all TCP connections, regardless of the IP version. See the corresponding kernel code if you are interested (this subroutine is used from ../ipv6/tcp_ipv6.c as well).

I'm aware that this is somewhat misleading. TCPFO-IPv6 support was added in a later kernel version than TCPFO-IPv4 support, so I assume the IPv6 implementer simply did a minimal-effort implementation, not wanting to touch the existing settings (which would have been a bigger deal). Plus, there is no obvious reason to have different behavior for TCP over IPv4 and IPv6.

This conclusion seems a bit extreme to me; I would not go that far.

The "danger" is that the source address could be spoofed. Well, this can be done much more easily with UDP. I do not see this as a danger at all for DNS servers. As dnsmasq has limits both on concurrent TCP connections and on the queue of TCPFO requests, I can hardly see any room for a DDoS attack here (which is what you typically want to achieve with source-IP spoofing).

@DL6ER Thanks for the info.