[libre-riscv-dev] Spectre mitigation strategies
Luke Kenneth Casson Leighton
lkcl at lkcl.net
Thu Jan 10 07:51:47 GMT 2019
On Thu, Jan 10, 2019 at 1:17 AM Jacob Lifshay <programmerjake at gmail.com> wrote:
> While we are designing the GPU, we should keep in mind that one way to
> avoid spectre-style vulns is to design every part so that an
> instruction can't affect the latency/issuability of any earlier
> instruction. This will prevent some kinds of instruction timing leaks.
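(for context, the kind of thing being referred to is the classic
spectre-v1 "bounds check bypass" gadget. this is a minimal sketch in c,
purely illustrative - array1, array2, temp and victim are invented
names, not from any real codebase:)

#include <stdint.h>
#include <stddef.h>

uint8_t array1[16];
size_t  array1_size = 16;
uint8_t array2[256 * 512];  /* one probe slot per possible byte value */
uint8_t temp;               /* sink so the compiler keeps the loads */

void victim(size_t x)
{
    if (x < array1_size) {
        /* the branch predictor can be trained so this body runs
         * speculatively even when x is out of bounds.  array1[x]
         * then reads secret memory, and the dependent load below
         * leaves a cache footprint at an address that encodes the
         * secret byte - recoverable afterwards by timing accesses
         * to array2. */
        temp &= array2[array1[x] * 512];
    }
}

the recovery step (flush array2, call victim with an out-of-bounds x,
then time accesses to each slot) is where the timing leak actually
shows up; the gadget itself is squashed and "never happened"
architecturally.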
ooo, that's gonna be a looot of work to research, and the
micro-architecture is... well, getting to the point where i'm having
to keep an eye on my "fear / achievability" antennae :)
it may surprise you that, despite having a background in security, i'm
*really annoyed* by the paranoia surrounding spectre. it absolutely
matters for Virtualisation / Hypervisor Server scenarios; however, it
doesn't matter a damn for a personal machine.
i.e.: if you've given someone physical access to a personal machine
(desktop, tablet, laptop), all bets are off in ways that make spectre
etc. completely irrelevant.
if we were going to put Hypervisor Mode in it, and if it were intended
as a server-class chip, i would *absolutely* be concerned, and agree
100%. the first target market for this chip - even for industrial
embedded - is totally different as far as the threat / risk analysis
is concerned.
that having been said: if it's _really easy_ to avoid those kinds of
vulnerabilities, then yes, let's do it.
l.