[libre-riscv-dev] Spectre mitigation strategies

Luke Kenneth Casson Leighton lkcl at lkcl.net
Thu Jan 10 13:07:58 GMT 2019


On Thu, Jan 10, 2019 at 12:47 PM Jacob Lifshay <programmerjake at gmail.com> wrote:

> On Thu, Jan 10, 2019, 04:04 Luke Kenneth Casson Leighton <lkcl at lkcl.net
> wrote:
>
> > On Thu, Jan 10, 2019 at 1:17 AM Jacob Lifshay <programmerjake at gmail.com>
> > wrote:
> >
> > > While we are designing the GPU, we should keep in mind that one way to
> > > avoid spectre-style vulns is to design every part so that any instruction
> > > following an earlier instruction can't affect the latency/issuability of
> > > any earlier instruction. This will prevent some kinds of instruction
> > timing
> > > leaks.
> >
> >  ooo, that's gonna be a looot of work to research, and the
> > micro-architecture is... well, getting to the point where i'm having
> > to keep an eye on my "fear / achievability" antennae :)
> >
> > it may surprise you that, despite having a background in security, i'm
> > *really annoyed* by the paranoia surrounding spectre.  it absolutely
> > matters for Virtualisation / Hypervisor Server scenarios, however it
> > doesn't matter a damn for a personal machine.
> >
> I disagree, it matters a lot for cases like web browsers where you are
> running potentially malicious code (javascript from ads for example) and
> you don't want it to be able to steal other important info.
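[editor's note: Jacob's design rule quoted above, that no later instruction may affect the latency or issuability of an earlier one, can be expressed as a prefix property: the completion times of the first N instructions must be identical whether or not more instructions follow. A minimal sketch of that property, using a hypothetical in-order single-issue model (the function name `issue_times` and the model itself are illustrative, not from any libre-riscv code):]

```python
def issue_times(latencies):
    """Toy in-order, single-issue pipeline model: return the cycle at
    which each instruction completes.  Each instruction issues one
    cycle after its predecessor, and its completion time depends only
    on itself and earlier instructions -- never on later ones."""
    t = 0
    done = []
    for lat in latencies:
        done.append(t + lat)  # completion = issue cycle + own latency
        t += 1                # next instruction issues next cycle
    return done

# The non-interference rule is the prefix property: appending extra
# instructions leaves every earlier completion time unchanged.
prog = [3, 1, 2]
extended = prog + [5, 7]
assert issue_times(extended)[:len(prog)] == issue_times(prog)
```

[a real out-of-order or speculative machine would need the same property enforced in the scheduler, which is far harder; the sketch only states what must hold, not how to build it.]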

 darn-it, it's that bad, is it?

 ok.  well, in one of the other messages you mentioned it's quite
simple: just make sure that the ALUs are reset back to a known
(identical) state after use, so that there is never any variation
in the time taken.  is that basically it?  because if so, that's
quite simple.
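
[editor's note: a software analogy of the "identical state, identical time" principle is the constant-time comparison. This sketch is illustrative only, not libre-riscv code; Python's standard library provides `hmac.compare_digest` for real use:]

```python
def leaky_eq(a: bytes, b: bytes) -> bool:
    # Early exit: runtime depends on where the first mismatch is,
    # so an attacker timing this learns information about the data.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def ct_eq(a: bytes, b: bytes) -> bool:
    # Does identical work for any two inputs of the same length --
    # the analogue of an ALU that always takes the same number of
    # cycles and is returned to an identical state afterwards.
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y
    return diff == 0
```

[in hardware the same idea means: no data-dependent early-out paths, and no residual state that changes the latency of the next operation.]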

l.
