[libre-riscv-dev] web-of-trust for code reviews to manage trusting dependencies
programmerjake at gmail.com
Tue Aug 27 12:17:04 BST 2019
On Tue, Aug 27, 2019, 03:55 Luke Kenneth Casson Leighton <lkcl at lkcl.net> wrote:
> On Tue, Aug 27, 2019 at 11:05 AM Jacob Lifshay <programmerjake at gmail.com> wrote:
> > the idea is more that each individual developer would verify through the
> > people they directly trust. every developer ends up with their own web of
> > trust.
> ah. that's actually completely different. the "local" trust system
> was the focus of research that i did into Advogato's Trust Metrics.
> basically i added a feature to an experimental version which allowed
> the *user* to be the "central seed" (instead of an external set).
> *for certain scenarios* this turned out to be incredibly useful.
> > > > for crev, the signing keys are only on the developer's computer (or
> > > > device used for signing), not the git server.
> > >
> > > how are the signing keys independently audited as trustable?
> > >
> > > *are* the signing keys independently audited as trustable?
> > >
> > not sure.
> (now that i understand what crev was designed for - code reviews - it
> no longer matters... *except* if someone tries to start using crev for
> code *distribution*. then it *really* matters).
> > > GPG has key-signature revocation. here is the procedure:
> > > https://security.ias.edu/how-revoke-gnupgpgp-signature-key
> > revocation is not the same as negative trust. revocation is where one
> > person says they no longer trust someone (with the trust level reverting
> > to no association). negative trust is where one person says that someone
> > is not to be trusted, and that is transitive. so, for example, if i saw
> > fred (made up person) do something untrustworthy, I could sign a negative
> > record, where if you trusted me, then you would transitively distrust
> > fred as well.
> this was also the subject of "hot debate" in Advogato discussions, for
> many years. it came down to the fact that "Negative" Certs - in a
> social context - are so "loaded" that they were very deliberately *not*
> included.
negative trust (of a particular code version) is appropriate in the case of
malicious code. it may also be appropriate in the case that someone
knowingly published malicious code where people would expect non-malicious
code.
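to make the transitive-distrust idea concrete, here is a minimal sketch in python. the data structures and function name are made up for illustration - this is not crev's actual proof format, just the propagation rule: anyone flagged by someone i (transitively) trust becomes distrusted, and distrust overrides trust.

```python
def effective_trust(me, trust_edges, negative_records):
    """Compute trust as seen from 'me' (illustrative sketch only).

    trust_edges: dict mapping person -> set of people they directly trust.
    negative_records: dict mapping signer -> set of people they flagged.
    Returns (trusted, distrusted) sets from 'me's point of view.
    """
    trusted, distrusted = set(), set()
    frontier = [me]
    seen = {me}
    while frontier:
        person = frontier.pop()
        # negative records signed by anyone i transitively trust propagate
        distrusted |= negative_records.get(person, set())
        for peer in trust_edges.get(person, set()):
            if peer not in seen:
                seen.add(peer)
                trusted.add(peer)
                frontier.append(peer)
    # a negative record overrides any positive trust path
    return trusted - distrusted, distrusted

# example from the text: i trust you; you signed a negative record
# about fred; therefore i transitively distrust fred.
trusted, distrusted = effective_trust(
    "me", {"me": {"you"}}, {"you": {"fred"}})
# trusted == {"you"}, distrusted == {"fred"}
```

note this sketch makes distrust fully transitive through the positive-trust graph, which is exactly the property that made negative certs so "loaded" in the advogato debates.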
> it sounds basically, to me, like they've created a system that's
> suitable for use in, say... slashdot. or even facebook.
> at some point, they're going to realise that even for a single person,
> evaluating the Certs (positive and negative), is beyond any one
> individual to calculate.
> they'll need an algorithm to do that, and the best one to use is a
> "gas flow" algorithm (modified to be "breadth-first" rather than
> "depth-first").
I believe they already have an algorithm, though I do not know which kind.
> the Ford-Fulkerson "gas flow" algorithm was the core of Advogato's
> Trust Metrics system. a depth-first variant unfortunately consumed
> more computational resources than necessary.
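for reference, the breadth-first modification luke describes is essentially Edmonds-Karp: Ford-Fulkerson with BFS-chosen augmenting paths. below is a minimal generic sketch, not advogato's actual code - the toy trust graph and capacities at the end are made up. the max flow from a "seed" to a candidate account counts independent capacity-limited trust paths.

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp: Ford-Fulkerson using breadth-first augmenting paths.

    capacity: dict of dicts, capacity[u][v] = capacity of edge u -> v.
    Returns the maximum flow value from source to sink.
    """
    # build a residual graph, adding zero-capacity reverse edges
    residual = {u: dict(vs) for u, vs in capacity.items()}
    for u, vs in capacity.items():
        for v in vs:
            residual.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for the shortest augmenting path (this is the
        # "breadth-first rather than depth-first" modification)
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow
        # recover the path, find its bottleneck, and push flow along it
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck

# toy trust graph: seed -> two intermediaries -> candidate account.
# a flow of 2 reaching "acct" means two independent trust paths.
cap = {"seed": {"a": 2, "b": 1}, "a": {"acct": 1},
       "b": {"acct": 1}, "acct": {}}
# max_flow(cap, "seed", "acct") == 2
```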
> > > when people start talking about making it mandatory, the problems with
> > > "just gpg signatures" will start to show up.
> > >
> > so, I guess the conclusion is that crev has some good ideas, but needs
> > serious protocol overhaul (which they might be willing to do; it's still
> > in alpha releases, I think).
> right. ok. now that i know (finally) that it's for code *review*
> purposes, then i can bring down the "Defcon" level by three to four
> notches.
> code *review* purposes, being optional and a "social and manual
> decision-making aid", is completely different, and the consequences of
> mistakes are nowhere near as drastically bad as for code
> *distribution* processes/code.
> if however, as that online article starts suggesting, crev starts
> getting proposed - or even used - for code *distribution* purposes,
> the "Defcon" level absolutely has to get cranked back up to the
> absolute max, and, from what you've told me and from what i've seen,
> crev is in *no way* suitable for the task.
> it would be better to start again, by doing the research properly,
> doing a comparative analysis of:
> * GNU guix
> * Mozilla's failed B2G projects' reliance on SSL Certificates, and the
> * FreeBSD ports
> * archlinux's pacman and their failure to include GPG-signing on packages
> * suse/novell and redhat RPMs, and the vulnerabilities associated with
> their failure to sign some of the critical information (i forget what
> it is)
> * debian and ubuntu's system, and why ubuntu is vulnerable when debian
> is not, despite the same code and procedures being used for both
> * npm, rubygems and others, and why relying on HTTPS does not work.
> and many many more. that analysis *will* need to go into social
> aspects, threat-scenario analysis, where no "insane" scenario shall be
> considered "too insane" to be on the table for mitigation.
> they also need to be warned - in advance - that only a handful of
> people in the world have the mindset to cope with such a task, because
> most people, putting it bluntly, are simply not mentally equipped
> to take the "smallest, most implausible" threat seriously.
> the way that i illustrate this to people is to see what happens to the
> probabilities in the following equation:
> N = 10: pow(1-1/N, N) ==> 0.3487
> N = 100: pow(1-1/N, N) ==> 0.3660
> and it converges to 1/e, approximately 0.3679.
> now change that base, 1-1/N, by a small fraction. subtract 0.1, for
> example: the effect amplifies so massively that by the time you get
> to pow(0.9, 10000000) you're at near-zero. even with
> pow(0.99999991, 100000000), the cumulative effect of the *tiniest*
> change results in an answer of approximately 0.00012.
> in simple "words": the more people involved (the more eyes), the more
> important even the tiniest security flaws become, because there
> are simply more people to spot - and exploit - the flaws.
> i.e. by the time you get to 1-10 million downloads a month, what was
> formerly not even remotely worth taking seriously instead has to be
> considered a high potential threat.
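the arithmetic above checks out directly in python: (1 - 1/N)**N converges to 1/e ≈ 0.3679, and a tiny deficit in the base, amplified by a large exponent, collapses the result toward zero.

```python
import math

def p(base, n):
    """Raise base to the power n (just pow, named for readability)."""
    return base ** n

# (1 - 1/N)**N converges to 1/e as N grows
print(p(1 - 1/1000000, 1000000))   # very close to 1/e ~= 0.367879

# a small deficit in the base, amplified by a huge exponent:
# 0.9 ** 10_000_000 underflows to 0.0 in double precision
print(p(0.9, 10_000_000))

# even a deficit of 9e-8, raised to 1e8, is exp(-9) ~= 0.00012
print(round(p(1 - 9e-8, 10**8), 5))
```

the last line is the "even the *tiniest* change" case: the base differs from 1.0 by less than one part in ten million, yet the result is four orders of magnitude below 1.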
> libre-riscv-dev mailing list
> libre-riscv-dev at lists.libre-riscv.org