[libre-riscv-dev] pipeline stages controlling delays

Luke Kenneth Casson Leighton lkcl at lkcl.net
Sun Apr 7 04:45:56 BST 2019


i found a simpler case, involving just a single-length pipeline, and
with buffering switched *off*.  d_ready and d_valid are still
staggered; however, it turns out not to matter what the latency
between them is.

what matters is whether the "input ready" signal is asserted HI
permanently (which always succeeds), or set to a random value (which
fails).
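
(to illustrate the difference: a rough sketch, *not* the actual test
harness, of how the two runs are driven.  the only change is whether
the "input ready" stimulus is tied permanently HI or set to random
each cycle; the generator style follows nmigen's pysim, and the signal
name, n_i_ready, is the one discussed below)

import random

def ready_stimulus(n_i_ready, always_hi):
    # yield-based stimulus process (nmigen pysim style, hypothetical)
    while True:
        if always_hi:
            yield n_i_ready.eq(1)                     # "success": always HI
        else:
            yield n_i_ready.eq(random.randint(0, 1))  # "fail": random
        yield                                         # wait one clock edge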

you can see in the attached waveforms (top is "success", bottom is
"fail") that the n_i_ready signal is driven in both cases; however, it
is *only* when it combines with d_ready that an *actual* "ready"
signal (n_i_rdy_data) is produced.  in the "success" case, n_i_ready
is always HI, making d_ready effectively "the" n_i_ready signal, and
thus giving us the OLD (successful) behaviour.

it's when d_ready combines with a randomised n_i_ready that the
problems start, which makes no sense.
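
(in nmigen terms the combination looks something like this: a minimal
sketch, *not* the actual singlepipe code, using the signal names from
above)

from nmigen import Module, Signal

m = Module()
n_i_ready    = Signal()   # "input ready" coming in from the next stage
d_ready      = Signal()   # internal ready (data register free)
n_i_rdy_data = Signal()   # the *actual* ready that gets used

# the combined ready: when n_i_ready is tied HI this collapses to
# n_i_rdy_data == d_ready, i.e. the old (working) behaviour
m.d.comb += n_i_rdy_data.eq(n_i_ready & d_ready)

with n_i_ready permanently HI the AND is transparent, which matches
the "success" trace; with a random n_i_ready it is the combined term
that the "fail" trace shows.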

l.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: 2019-04-07_04-01.png
Type: image/png
Size: 78046 bytes
Desc: not available
URL: <http://lists.libre-riscv.org/pipermail/libre-riscv-dev/attachments/20190407/047abf86/attachment-0001.png>

