[libre-riscv-dev] web-of-trust for code reviews to manage trusting dependencies
Luke Kenneth Casson Leighton
lkcl at lkcl.net
Tue Aug 27 10:17:19 BST 2019
On Tue, Aug 27, 2019 at 9:18 AM Jacob Lifshay <programmerjake at gmail.com> wrote:
have a look at why advogato was created.
i did read about advogato a bit.
the interesting bit is that it didn't fail because the trust metric system
didn't work: it did. it failed because people rejected the ideas put
forward by people who *had* been Certified as "trusted".
Raph actually received public demands for the removal of articles (or the
De-Certification of the people who had posted them). he had to point out to
them that, unfortunately, if he did so it would completely undermine his
impartiality and thus the credibility of the entire system, but more than
that, the people being "attacked" (me) had over 150 "Master" Certs from
well-known community individuals who were themselves trusted.
he had to tell them that if they wanted my Cert "removed", they needed to
contact all 150 of those people and demand that the person remove their
"declaration of trust". Raph wasn't going to do it for them.
the lessons we learn here are that there has to be a strong
positive-reinforcement "incentive" for people to do what they're actually
"trusted" to do, and that there needs to be an "expiration", "renewal" and
"revocation" process, none of which existed in the original Advogato code.
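a cert lifecycle with the missing expiration/renewal/revocation steps can be sketched in a few lines. everything here (the Cert type, the method names, the six-month default lifetime) is illustrative, not Advogato's or crev's actual data model:

```python
# Hypothetical sketch of a trust certification with the lifecycle
# the original Advogato code lacked: expiration, renewal, revocation.
from dataclasses import dataclass

@dataclass
class Cert:
    issuer: str
    subject: str
    level: str                      # e.g. "Apprentice", "Journeyer", "Master"
    issued_at: float                # unix timestamp
    lifetime: float = 180 * 86400   # illustrative: certs lapse after ~6 months
    revoked: bool = False

    def is_valid(self, now: float) -> bool:
        # a cert counts only if it was neither revoked nor left to expire
        return not self.revoked and now < self.issued_at + self.lifetime

    def renew(self, now: float) -> None:
        # renewal forces the issuer to periodically re-affirm their trust
        self.issued_at = now

    def revoke(self) -> None:
        self.revoked = True

cert = Cert("raph", "lkcl", "Master", issued_at=0.0)
print(cert.is_valid(now=86400))         # within lifetime
print(cert.is_valid(now=400 * 86400))   # lapsed: issuer never re-affirmed
cert.renew(now=400 * 86400)
print(cert.is_valid(now=401 * 86400))   # valid again after renewal
cert.revoke()
print(cert.is_valid(now=401 * 86400))   # revocation wins over everything
```

the point of the sketch is that trust decays to "untrusted" by default unless actively renewed, which is the positive-reinforcement incentive described above.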
"Will this web of trust model work? I don’t know."
then why are you rushing to use it?
Because it sounds like a good idea and is similar enough to PGP's web
of trust that I think it will have mostly the same security properties.
not good enough. "mostly" the same security properties is simply not acceptable.
again, that would be more thoroughly verified when the security audit is done.
you'll see in a followup that i point out that this entirely misses a
critical step. a security audit is *different* from a cryptographic and
engineering *design* review.
the crev team have entirely failed to provide any demonstrative proof that
they have done that work.
again to give an example: we had to go through such a review process for the
NLNet funding application.
NLNet *specifically* requested that we perform a comparative analysis of
alternative projects, providing insights not only into what was "wrong" with
those alternatives but also into what was *missing* from them.
this then allowed them to make two separate and distinct decisions:
(1) whether the donation would be money well spent (as opposed to
simply duplicating something that already existed)
(2) whether we actually knew - and could express - what we were talking about.
the crev team have *failed* this utterly essential step, and gone straight to implementation.
that makes the proof repo a high-value target for attackers, and social
engineering, or simply plain lack of attention, can easily result in keys
being stolen.
I think you are missing that the proof repo and the private keys are stored
completely separately: the keys would be on their owner's computer or
hardware key and the repo would (usually) be on a public server somewhere.
you'll see in the follow-up that that is *precisely* the definition of a high-value target.
the server is not a very high-value target because even if an attacker
could put whatever they wanted on it, the only parts that other people
would trust are the parts that are also signed by the owner's crypto keys,
which the attacker doesn't have.
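the model jacob describes above - server holds only signed records, key never leaves the owner's machine - can be sketched as follows. HMAC stands in here for a real public-key signature (cargo-crev uses public-key signatures, so a real verifier would hold only the public half, not the secret); the shape of the check is the point:

```python
# Sketch: a public repo stores only *signed* proofs, so an attacker who
# controls the server but not the owner's key cannot forge records that
# verifiers will accept.  HMAC is a stand-in for a real signature scheme.
import hmac, hashlib

OWNER_KEY = b"kept-on-the-owner's-machine"   # never uploaded to the repo

def sign(record: bytes, key: bytes) -> bytes:
    return hmac.new(key, record, hashlib.sha256).digest()

def verify(record: bytes, sig: bytes, key: bytes) -> bool:
    # constant-time comparison avoids leaking where the mismatch occurs
    return hmac.compare_digest(sign(record, key), sig)

record = b"package: foo 1.2.3, rating: positive"
sig = sign(record, OWNER_KEY)

# an attacker with full control of the server tampers with the record...
tampered = b"package: foo 1.2.3-evil, rating: positive"

print(verify(record, sig, OWNER_KEY))    # genuine proof: accepted
print(verify(tampered, sig, OWNER_KEY))  # tampered proof: rejected
```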
which they can obtain. underestimating that is a serious mistake.
there's an extremely funny mathematical formula for this, which i saw on
David Leblanc's wall at ISS when i worked there. it read (in words):
"Security tends to zero as the number of idiots goes from zero to infinity".
it looks hilarious when written out in actual mathematical notation.
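rendered in notation, the quip might look something like this (a reconstruction, not the exact formula from the wall):

```latex
% "Security tends to zero as the number of idiots goes from zero to infinity"
\lim_{n \to \infty} S(n) = 0 \qquad \text{where } n = \text{number of idiots}
```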
social engineering or lack of attention is generally not enough to get the
key owner to give out their private key, just like they wouldn't just pass
out their password to whoever asked.
this is a fundamentally mistaken assumption, based on lack of experience in the security field.
look up the lengths to which the Israeli Government went to get stuxnet onto Iran's air-gapped centrifuge-control systems.
the higher the value of the target, the greater the lengths that attackers
will go to.
the current version of cargo-crev encrypts the private keys by default,
requiring the owner to type in their passphrase to decrypt them in order to
sign trust records.
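the encrypt-at-rest scheme described above can be sketched as follows. the KDF parameters and the toy XOR "cipher" are purely illustrative, not what cargo-crev actually does:

```python
# Sketch: the private key on disk is useless without the passphrase,
# because a symmetric key is derived from it with a memory-hard KDF.
# The XOR step is a toy stand-in for real authenticated encryption.
import hashlib, os

def derive_key(passphrase: str, salt: bytes) -> bytes:
    # scrypt is deliberately expensive, slowing down passphrase guessing
    return hashlib.scrypt(passphrase.encode(), salt=salt,
                          n=2**14, r=8, p=1, dklen=32)

def xor(data: bytes, key: bytes) -> bytes:
    # illustration only -- NOT real encryption
    return bytes(d ^ key[i % len(key)] for i, d in enumerate(data))

salt = os.urandom(16)
private_key = b"this-stands-in-for-a-signing-key"
stored = xor(private_key, derive_key("correct horse battery", salt))

# a stolen key file plus the wrong passphrase yields garbage, not the key
wrong = xor(stored, derive_key("password123", salt))
right = xor(stored, derive_key("correct horse battery", salt))
print(right == private_key)
print(wrong == private_key)
```

note that this only protects the key *file*; it does nothing against the smartcard and air-gap scenarios raised above, where the key material must never reach an online machine at all.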
so what? worse: is that a *hard requirement*?? what if i have a smartcard
for my GPG key? what if i have an "offline" computer that is never connected
to the internet?
this is what some people in debian actually do: they *NEVER*, under
*ANY CIRCUMSTANCES*, permit the computer that is used to perform GPG-signing
to be connected to the internet.
have a look at how debian's distribution chain actually works - and
why it works. i describe it here:
I'm well aware of how GPG is used in Debian and how everything is
recursively based on the web-of-trust.
good. that person that you linked to clearly isn't.
I think crev is a good idea precisely because it is based on the same kind
of web-of-trust and similar signing mechanism, though, because it is
currently like an after-market addon for cargo, cargo doesn't currently
check everything that way when it's originally downloaded (since cargo-crev
isn't able to hook the download mechanism); instead, all the verification
and review-checking is done as a separate command (cargo crev verify, i think).
that's a good enough first step, however do not be fooled into thinking that
just because the "verify" command exists, that it can be trusted.
debian's procedures are designed for a single organization,
they're absolutely not. the *code* is designed for a single organisation,
as the procedures are implemented - hard-coded - into apt and dpkg.
which is not as
practical for cases where there is no centralized place to store signatures
or Releases files.
?? where do you think those signatures and the Release files are stored??
they're *offline* distributable! it is nonsense to claim that they depend
on a centralised server.
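a sketch of why debian's metadata is offline-distributable: one signed Release file pins the hash of the Packages index, which in turn pins the hash of every .deb, so the whole chain verifies with no central server involved. file contents here are invented for illustration:

```python
# Sketch of the Debian-style hash chain: Release -> Packages -> .deb.
# In the real system the Release file itself carries a GPG signature;
# here we only show the hash-pinning step.
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# the Packages index lists per-package hashes (contents invented)
packages_index = b"Package: hello\nSHA256: <hash of hello.deb>\n"

# the (signed) Release file pins the hash of the Packages index
release_file = f"Packages-SHA256: {sha256(packages_index)}\n"

def verify_index(release: str, index: bytes) -> bool:
    pinned = release.split("Packages-SHA256: ")[1].strip()
    return pinned == sha256(index)

print(verify_index(release_file, packages_index))          # intact chain
print(verify_index(release_file, packages_index + b"x"))   # tampered index
```

copy the Release file, its signature and the indices onto a USB stick and the chain verifies just the same - which is what makes the "no centralized place to store signatures" objection miss the mark.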
crev is similar, but more decentralized -- it doesn't
need a centralized package repository.
that sounds like a critical security design flaw.
crev can handle those cases (though maybe not the current implementation).
crev should even be able to handle cases like, for example (currently only
works for rust crates and (maybe) npm packages), Khronos releasing a new
version of Vulkan, where Khronos is totally unaware of crev's existence.
people can still review a particular release of the Vulkan spec and publish
a signed review using crev, such that other people who download that
version of the spec can see that it has been reviewed by people that they
transitively trust, and crev will check that the cryptographic hash matches.
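the hash-pinning check described above can be sketched like this; the review structure is invented for illustration and is not crev's actual proof format:

```python
# Sketch: a review records the hash of a specific release; anyone who
# downloads that release can recompute the hash and compare, even though
# the upstream (Khronos, in the example) has never heard of crev.
import hashlib

spec_tarball = b"...contents of a vulkan spec release..."

review = {
    "artifact": "vulkan-spec",
    "digest": hashlib.sha256(spec_tarball).hexdigest(),
    "rating": "positive",
    # in real crev the whole record is also signed by the reviewer
}

def matches(review: dict, downloaded: bytes) -> bool:
    return review["digest"] == hashlib.sha256(downloaded).hexdigest()

print(matches(review, spec_tarball))          # download matches the review
print(matches(review, spec_tarball + b"!"))   # any modification is caught
```

which is exactly where the next question bites: the hash check is only as good as the process by which the reviewed artifact and the review itself were obtained in the first place.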
and who - or what - checks that the process by which crev obtains its
reviews has not been compromised?
this is the kind of thing that should have - would have - been picked up if
the crev team had done their research into existing systems (and their
failure modes) first.
*then* you design the interface around the basic fundamental concepts.
if the interface happens to be easy, that's *great*... but it's
absolutely essential that simplicity *not* be the driving "priority one" consideration.
that's true, however, any system that's supposed to improve security needs
to be sufficiently easy to use that most people will actually use it.
no: the users need to be educated and told that under no circumstances
should they violate these procedures. or if they do, they get everything
that they deserve.
i have friends contacting me to say that they downloaded a random dpkg
off the internet and that they got a trojan installed as a result.
they bypassed the procedures: they get what they deserve.
specifically *designing* a system that is insecure by design *FOR EVERYBODY*
just to make the API "simple to use" is a really, *really* bad thing to do.
if it is "too complicated" for people who are inherently incapable of
understanding it (and understanding the need) then at least the majority
will not be affected.
For me, at least, Debian's system is not easy enough to use for publishing
code, simply because I don't have a GPG key and am not likely to be able to
get it signed by anyone in the Debian web of trust in the near future, due
to not having met anyone in person (that will probably change once I attend
some conferences).
and that's a *good* thing! and you won't be allowed to become a Debian
Developer unless you've gone through the correct procedures.
however, you've fundamentally misunderstood (conflated) a number of things:
(1) debian maintainers are NOT the same as "upstream code writers".
they *can* be, but do not have to be
(2) it is possible to submit a package for inclusion. you raise an "RFP"
(3) that the RFP procedures exist does *NOT*, repeat *NOT* mean that "Debian
procedures are a failure when it comes to general-purpose adoption".
like many people who do not fully understand the processes and procedures
that took debian 20 years to refine and prove, you are conflating a huge
number of really quite separate and distinct parts of the process.
to be perfectly clear: i'm not saying that the person who wrote the wiki
page understood debian correctly or anything like that (they appear to only
be tangentially related to the crev project anyway, though I could be
wrong), I'm saying I think a lot of the ideas that went into crev are worth
having in whatever system ends up becoming standard for cargo, whether crev
or something else.
they're only worth having when their value has been demonstrated by people
who fully understand *all* the various package-distribution systems.
putting a diamond ring on a pig does not make the pig pretty.