On 04/11/2017 02:05 PM, Jan de Mooij wrote:
> On Tue, Apr 11, 2017 at 9:34 PM, Jason Orendorff wrote:
>> - Bigger projects, like Waldo's work on parsing and djvj et al's work on GC
>> scheduling, are undertaken when we have stuff that has been showing up on
>> profiles "forever". This kind of work isn't driven by any one particular
>> measurement, like a benchmark.
> I think telemetry is useful to drive the GC work. We know how many of our
> GCs are non-incremental (and for what reason). Same for full (all zone)
> GCs. We know how long our GC slices are. There may be data that's not
> captured by telemetry, but the data we do have seems pretty valuable anyway.
Yeah, telemetry is good (especially max pause times) since it tells us
how much we're hurting, and something about why -- the longest phase of
GC is also reported via telemetry. Where GC is concerned, crash-stats is
also a big driver, since sadly that's how GC problems tend to manifest.
(As do lots of non-GC problems, and it's not always easy or even
possible to tell the difference.)
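To make the metrics concrete: the summaries these probes feed boil down to a few aggregates over GC slice records. Here's a toy sketch with made-up slice data (the real telemetry pipeline and probe names are not shown here):

```python
# Each record: (slice duration in ms, was this GC incremental?, longest phase).
# The values below are invented for illustration only.
slices = [
    (4.0, True, "mark"),
    (11.5, True, "sweep"),
    (38.0, False, "mark"),   # a non-incremental GC shows up as one long pause
    (6.2, True, "compact"),
]

# Max pause time: the single worst slice a user felt.
max_pause = max(ms for ms, _, _ in slices)

# Fraction of GCs that were non-incremental (and thus fully blocking).
non_incremental_fraction = sum(1 for _, inc, _ in slices if not inc) / len(slices)

# Which phase dominated the worst slice.
worst_phase = max(slices, key=lambda s: s[0])[2]

print(max_pause, non_incremental_fraction, worst_phase)
```

The point being that even this handful of numbers is enough to prioritize: a high non-incremental fraction and a consistent worst phase point at specific scheduling or marking work.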
Then there's some amount of ongoing hazard analysis work, driven by code
churn and by whatever new features we start making use of that the
analysis has to learn about.
>> Generally, I think we're working on stuff that makes sense (and have been
>> all along), but it's still not guaranteed to be representative of the web
>> as users see it. Is that fair? What else should we be doing?
> On the JIT/IC side, evilpie's IC logger/analyzer has been very useful. Over
> the past months we've fixed tons of (often easy to fix) perf bugs that
> showed up on actual websites our users visit.
The juxtaposition of telemetry and the IC logger makes me wonder -- is
it, or could it be, lightweight enough to report via telemetry? It would
be pretty cool to be able to drive work off of what the bulk of our
users are actually hitting.