On Tue, Apr 11, 2017 at 12:34 PM, Jason Orendorff wrote:
> I have no plans to type in my notes from the JS meeting. If you want them,
> ping me on IRC. But one thing I want to think about is how we decide what
> to work on, especially performance work. Today, it's like this:
> - If you're a volunteer, of course you decide what to pick up—we're just
> glad you're here!
> - A lot of us profile benchmarks and look for useful work in the profiles.
> - Sometimes we do the same thing with random web sites.
> - Bigger projects, like Waldo's work on parsing and djvj et al's work on GC
> scheduling, are undertaken when we have stuff that has been showing up on
> profiles "forever". This kind of work isn't driven by any one particular
> measurement, like a benchmark.
> Generally, I think we're working on stuff that makes sense (and have been
> all along), but it's still not guaranteed to be representative of the web
> as users see it. Is that fair? What else should we be doing?
For performance work, "representative of the web" changes over time, and
when we're working on things that many people feel to be unrepresentative,
it's most likely because we're profiling the wrong or outdated benchmarks.
One thing we've never really done is talk to at least our in-house web
devs to find out what kind of JS they think is representative of the web.
We might have been slow to realize how big a deal React-style code was;
perhaps they saw it coming. OTOH this is a more PM-y type of job than an
engineering one.
For completeness, there's also non-perf feature work. The decision process
for that is easy: Stage >=3 features get implemented, prioritized by a
handwavy notion of how excited the community is about each feature.
> dev-tech-js-engine-internals mailing list