On 2017-03-13 9:32 AM, Boris Zbarsky wrote:
> On 3/12/17 8:44 PM, Daniel Veditz wrote:
>> This does add overhead when applied to developers who are, in fact,
>> trusted. On the surface, as noted earlier in this thread, that seems
>> insane. But at the top of this thread, the worry was "what if a trusted
>> committer's credentials get compromised?" Currently the answer is "We're
>> screwed". We can hope the sheriffs would notice an odd patch during
>> uplift, but the sheriffs are busy and any attacker who was any good
>> would make sure the patch looked totally normal on the surface.
> I feel like there's an important point to be made here. If I had
> compromised a developer's credentials and were trying to get an attack
> past a "review right before landing" system, I would do it as follows:
> 1) In one patch, introduce some code that relies on an invariant that
> currently holds.
> 2) In another, non-conflicting, patch, introduce a violation of that
> invariant and fix up the other places depending on it, but not the one
> from item 1.
> 3) Have different people review the two patches.
> 4) Make sure I have both reviews before landing either patch, so the
> reviewers can't catch the problem. There are various ways of doing
> this, like making both patches depend on yet a third patch that I make
> sure gets reviewed slowly.
> This is a somewhat higher bar than just "add the malicious code and land
> it", of course. But this is also what I came up with after about 30
> seconds' worth of thought...
I'm really struggling to imagine what kind of actor we are trying to
protect against. There is a real cost to going through the pain of
compromising a trusted developer's account, authoring a patch without
their knowledge, getting it landed, getting it correctly approved for
uplift and actually uplifted to our release branches, waiting for the
vulnerability to be deployed, and only then profiting. This puts a price
tag on the attack vector, both financially and logistically, and the
attacker risks detection from the very first step: breaking into a
machine used by a Mozilla developer.
What kind of actor would choose to actually go through all of this,
versus, say, buying a Firefox 0-day from a vulnerability vendor (which
would perhaps cost a bit more financially but a lot less logistically),
or just combing through our oceans of unsafe C++ code fishing for fresh
0-days? (Honest question, not just rhetorical.)
Also, in all honesty, I feel like our review process is pretty
ineffective against this attack vector at any rate. I don't know about
other reviewers, but I'll admit that most of the time, when I'm
reviewing code from trusted Mozilla colleagues I have worked with for
many years, I implicitly trust the code to be non-malicious, whereas
when I review code from contributors I don't recognize, I usually try
to treat it as potentially containing disastrous security-sensitive
mistakes. :/