[libre-riscv-dev] System-Wide Complexity Rant (was store computation unit)

Samuel Falvo II sam.falvo at gmail.com
Wed Jun 5 16:53:31 BST 2019

On Wed, Jun 5, 2019 at 12:03 AM Luke Kenneth Casson Leighton
<lkcl at lkcl.net> wrote:
>  http://linux-sunxi.org/EGON
> Follow the trail. Next page is BROM.

Yup, and if you notice, I cover all these scenarios in the 3 ROM
configuration options I listed in my original rant message.

Even Amiga had multi-stage boot-loaders[1]; I'm not suggesting they're
completely unnecessary.  But, I simply believe that it is possible for
the system hardware to abstract away a lot of the details of the
underlying hardware in a way that supports rather than impedes simpler
software stacks.  I guess this is where you and I disagree (or, at
least, I take it as disagreement).  I reject the idea that, because
history evolved a certain way, it is the only way it *could* have
happened.  SIGs exist to help sell their products, and they'll
evolve their standards to support that goal.  Their next priority is
impeding competition, so as to protect their investment.  Helping
others is entirely tertiary to those motivations.

> Basically all this is down to the fact that the DRAM is utterly dead until
> its controller interface is initialised, you are in single core seriously

Right; I get this.  My point is that this initialization can (and,
arguably, should) be the domain of the system hardware in which it's
mounted -- the RAM controller core should take it upon *itself* to
provide at least a *minimally functional* SDRAM configuration at reset
time, so that the system firmware doesn't have to deal with it.
(The vastly simplified interface to HyperRAM seems to suggest I'm not
the only one to think along these lines.)
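For reference, the standard SDR SDRAM power-up sequence is small enough
that a controller could easily run it autonomously at reset.  Here's a
Python sketch of that sequence; the command names and the mode-register
settings are illustrative of a typical datasheet sequence, not any
particular controller's interface:

```python
def sdram_init_commands():
    """Yield the canonical SDR SDRAM power-up command sequence.

    This is the sequence a hardware RAM controller could run on its
    own at reset, so firmware never has to see it.  Timing and mode
    values are conservative, datasheet-style placeholders.
    """
    yield ("WAIT_US", 100)         # let clock and power stabilize
    yield ("PRECHARGE_ALL", None)  # close any open rows
    for _ in range(2):             # datasheets require >= 2 refreshes
        yield ("AUTO_REFRESH", None)
    # A minimal, conservative mode: CAS latency 2, burst length 1.
    yield ("LOAD_MODE_REGISTER", {"cas_latency": 2, "burst_length": 1})
    yield ("READY", None)

seq = [cmd for cmd, _ in sdram_init_commands()]
```

After "READY", software could still reprogram the mode register for
speed -- the point is that it doesn't *have* to just to get working RAM.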

Should I have to rewrite/recompile my entire bootstrap and/or kernel
just because SDRAM is replaced by WhizBang RAM five years from now?  I
think that's total bullshido, and much of the philosophy of the
Kestrel's hardware development reflects this.  It's why the KIA core
has a hardware-resident FIFO (so it can be useful in the absence of
interrupts), why the SIA blocks the CPU when transmitting too fast
(so the same debug print routines work in emulation and on live
hardware), and why the SIA cores come out of reset with baud rates set
according to their intended use (so the terminal port defaults to 9600
bps, while the block storage port defaults to 115,200 bps), etc.
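To make the "sane defaults out of reset" idea concrete, here is a tiny
Python sketch of that design principle; the register names, FIFO depth,
and structure here are hypothetical illustrations, not the actual
KIA/SIA register map:

```python
from dataclasses import dataclass

@dataclass
class SiaResetState:
    """Hypothetical reset-time state for a serial interface adapter.

    The point: firmware can print debug output or talk to block
    storage immediately after reset, with zero configuration code.
    """
    baud: int
    fifo_depth: int       # hypothetical depth, for illustration only
    tx_blocks_cpu: bool   # writes stall rather than drop bytes

# Defaults chosen per intended role, as described above:
TERMINAL = SiaResetState(baud=9600, fifo_depth=16, tx_blocks_cpu=True)
STORAGE = SiaResetState(baud=115200, fifo_depth=16, tx_blocks_cpu=True)
```

Software remains free to reprogram the baud rate later; the defaults
just guarantee a working console from the first instruction.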

I think that there is a minimum level of necessary complexity that
software has to contend with.  I *don't* think that the contemporary
market sits at that level; I feel firmly that each succeeding standard
that gets published is aimed at least as much at preventing
competitors from competing as at solving some obscure technical
issue.  Groups of companies that back these standards form marketing
blocs (e.g., USB vs. FireWire).  One need only consider USB's
recent ratification of *charger DRM* to see the deleterious effects
these blocs can have.[2]

If we just focused on the problem solving, things might be better;
but, it doesn't happen that way as far as I can see.  So we get
ultra-complicated standards like SDRAM initialization protocol, SD
card initialization protocol (which actually exposes voltages and
currents to a software layer that has no business dealing with such
matters), USB and its entire driver stack architecture, ...
I could go on.

> Even once the DRAM is first initialised you STILL have to go through some
> checks to get it to run faster, all the things you could not do because
> there is just not enough space in only 16k to do that much DRAM
> initialisation.

And, to me, that's fine.  This is what auto-config is for, but
auto-config needs RAM resources to function.  So, in my mind, having a
minimum viable computer in place at cold-boot time is absolutely
paramount.  If you then want to apply tweaks *post-boot* to optimize
the system as a whole, that's perfectly acceptable to me.

What I explicitly don't want is an $800 paperweight if I botch the
system firmware, and I sure as hell don't want another $1500
motherboard to brick itself because I tried booting into what it
thought was an unsupported Linux distribution too many times.  (Yes,
this happened to me.)

> Intel is absolutely no different here, they too are restricted to around 16
> to 32k SRAM until the DRAM is brought up in the absolute minimal way, again
> you just simply do not witness the process unless you are a coreboot
> developer.

I've had the extremely negative experience of having to deal with
initializing pseudo-static RAM (basically SDRAM core but with an SRAM
facade on it) on the Digilent Nexys 2 board.  It very nearly brought
the entire Kestrel project, and everything it stands for, to the brink
of complete failure, and I just about walked away from the project.
It took me about 18 months to finally recover from that incident.

I could not get this chip to work no matter what, not in SRAM mode,
and not in SDRAM mode (and in either mode, the chip behaved in a
different way).  As far as I could tell, the chip was full-on damaged;
however, the Nexys 2's BIST bitstream indicated that the chip was
working fine.  The problem is that the BIST code doesn't use a dedicated
pseudo-SDRAM controller; rather, it bit-bangs the RAM from software.

To this day, I still think the chip was defective; and had they used a
hardware RAM controller instead of a software RAM controller, the BIST
would have reported accurate results.  16MB of software bit-banged RAM
disk is useless to me.  I needed 16MB of usable, online RAM.
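For contrast, even a minimal hardware BIST typically runs a march-style
pattern over the full array through the real controller.  A Python
sketch of the idea (the fault model, the `read`/`write` interface, and
the stuck-bit example are illustrative, not the Nexys 2's actual BIST):

```python
def march_test(read, write, size):
    """Simplified march-style memory test: catches stuck-at faults.

    `read(addr)` / `write(addr, value)` abstract the RAM interface,
    which is exactly where a hardware vs. bit-banged controller would
    differ.  Returns the sorted list of failing addresses.
    """
    failures = []
    # Element 1: write 0 everywhere (ascending).
    for a in range(size):
        write(a, 0x00)
    # Element 2: read 0, then write all-ones (ascending).
    for a in range(size):
        if read(a) != 0x00:
            failures.append(a)
        write(a, 0xFF)
    # Element 3: read all-ones back (descending).
    for a in reversed(range(size)):
        if read(a) != 0xFF:
            failures.append(a)
    return sorted(set(failures))

# A simulated 64-byte RAM with one stuck-at-0 bit at address 5:
ram = [0] * 64
def write(a, v): ram[a] = v & (0x7F if a == 5 else 0xFF)
def read(a): return ram[a]
bad = march_test(read, write, 64)
```

Run against a controller-backed array, a test like this reports the
stuck bit; run bit-banged from software, it may be testing the
bit-banging code as much as the chip.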

The Kestrel-3 would have been done two and a half years ago had it not
been for that RAM chip.  OTOH, that also gave me the time needed to
discover and learn about Yosys and the more open-source-friendly
development boards out there, to find libre-riscv, and to learn about
scoreboards and OoO techniques, so maybe a mixed blessing?

1.  The Amiga 1000 needed *two* boot disks, because the kernel ROMs
were not ready before manufacture and shipping.  In fact, this is how
the OS ROM-resident kernel image came to get its name.  The boot ROMs
had to read in the "kickstart" image from floppy into a special 256KB
chunk of RAM which was then write-protected until reset.  Once this
happened, the normal "Workbench" boot process commenced.  This
literally is a 3-stage boot: slurp in the Kickstart image, read the
boot sector of the Workbench, and finally, load the rest of the
Workbench image, in that order.  Starting with the Amiga 2000 and 500,
Kickstart was burned into ROM, reducing the number of boot stages to
two, and the number of boot disks to one.

2.  https://www.androidpolice.com/2019/01/02/usb-type-c-authentication-program-gets-started-sounds-like-its-effectively-drm-for-type-c-devices/

Samuel A. Falvo II
