Thanks for bringing your paper up (actually the first link is timing out
for me right now - so I have only read the second). My personal viewpoint
is that algorithms that trade bandwidth for latency have a lot of value in
short-lifetime web scenarios - so I think it's interesting work. I've got
some concerns, but please view them in that bigger-picture light.
Quick question - you seem to be using a speculative tone in this email
thread about a drop-in for getaddrinfo(), but the paper indicates this
experiment was actually executed with a local Firefox build.. is this how it
was done? That seems like a reasonable approach, but I want to understand
whether we're speculating or talking about results.
If that's the case, I am (pleasantly?) surprised you saw such an impact in
page load metrics. I'm not especially surprised that you can do better on
any particular query, but much of the time our page load time isn't
actually serialized on DNS lookup latency because of the speculative
queries we do. Maybe it's just a manifestation of a huge number of
sub-origins, or maybe your test methodology effectively bypassed that logic
by not finding URLs organically. (That would mean telemetry of average
browsing behavior would show less of an impact than the lab study.) We've
got some additional code coming soon that will link subdomains of origins
to your history, so that when you revisit an origin the subdomain DNS
queries will be done in parallel with the origin lookup - I would expect
that to mitigate some of the gains you see in real life as well.
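For the sake of illustration, that history-driven prefetch idea looks roughly like the following Python sketch. Everything here is made up for the example (HISTORY, names_to_resolve, prefetch_on_revisit are hypothetical names, and the toy dict stands in for real browsing history); the actual work would live inside the browser's host resolver, not a script.

```python
import concurrent.futures
import socket

# Hypothetical per-origin history: subdomains whose DNS names were
# observed on previous visits to the origin.
HISTORY = {"example.com": ["img.example.com", "api.example.com"]}

def names_to_resolve(origin, history=HISTORY):
    """Origin first, then any subdomains remembered from past visits."""
    return [origin] + history.get(origin, [])

def prefetch_on_revisit(origin):
    """Resolve the origin and its remembered subdomains concurrently,
    so subdomain lookups are not serialized behind the origin lookup."""
    names = names_to_resolve(origin)
    with concurrent.futures.ThreadPoolExecutor(max_workers=len(names)) as pool:
        futures = {pool.submit(socket.gethostbyname, n): n for n in names}
        results = {}
        for fut in concurrent.futures.as_completed(futures):
            name = futures[fut]
            try:
                results[name] = fut.result()
            except OSError:
                results[name] = None  # lookup failed; real code would just move on
        return results
```

The point of the sketch is only the shape: the subdomain lookups start at the same time as the origin lookup instead of waiting for URLs to be discovered in the page.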
There are two obvious scenarios where you see improvement - one is just
identifying a faster path, but the other is having a parallel query
already in flight when one query encounters a drop and has to retry.. just
a few of those retries could seriously change your mean. Do you have data
to tease these things apart? Retries could conceivably also be addressed
with aggressive retransmission timers against a single server, without
replicating the query.
It's also concerning that the sum of the data seems to be based on the
comparison of one particular DSL connection and one particular (unnamed?)
ISP recursive resolver as the baseline. Do I have that right? How do we
tell whether that's representative or anecdotal? It would be really
interesting to graph savings % against RTT to the origin.
One of my concerns is that, while I wish it weren't true, there really is
more than one DNS root on the Internet, and the host resolver doesn't
necessarily have insight into that - corporate split-horizon DNS is a
definite thing. So silently adding more resolvers to that list will result
in inconsistent views.
Also, :biesi's concerns are fair to consider.. this is a place where Mozilla
operating a distributed public service on behalf of its clients might be a
reasonable thing to consider if it showed reproducible, widespread gains (a
mighty big if).. any use of third-party servers (which would include
Mozilla-operated services) also comes with tracking and security concerns
which might not be surmountable. All interesting stuff to consider -
certainly before any code is integrated.
On Fri, Dec 5, 2014 at 11:48 AM, Vulimiri, Ashish wrote:
> I’m a grad student at the U of Illinois, and I’ve been looking into a
> technique for improving DNS lookup latency, involving replicating DNS
> requests to multiple DNS servers in parallel. We’re seeing a significant
> reduction in latency when we try this: 25-60% better raw DNS latency and,
> in initial experiments, 6-15% better total browser page load times.
> Raw DNS performance: sec 3.2 in
> Impact on web page load times: http://arxiv.org/abs/1306.3534
> Would there be any interest in incorporating something like this into the
> Firefox code?
> dev-tech-network mailing list