[libre-riscv-dev] buffered pipeline
Luke Kenneth Casson Leighton
lkcl at lkcl.net
Fri Mar 15 01:03:40 GMT 2019
On Thu, Mar 14, 2019 at 6:28 PM Jacob Lifshay <programmerjake at gmail.com> wrote:
>
> On Thu, Mar 14, 2019, 06:52 Luke Kenneth Casson Leighton <lkcl at lkcl.net>
> wrote:
>
> >
> > https://git.libre-riscv.org/?p=ieee754fpu.git;a=blob;f=src/add/test_buf_pipe.py;hb=HEAD#l89
> >
> > yay, that works! split out send from receive, on their own separate clocked
> > co-routines. Simulation() takes care of multiplexing / synchronising them,
> > based on the clock "yield" timing.
> >
> > without such an approach it's almost impossible to manually interleave the
> > send/receives (or other interactions, of which there could be potentially
> > dozens) in a way that can be interpreted and understood by looking at the
> > code.
> >
> > so, for example, jacob, i notice that in the unit test that you wrote there
> > are two yield statements: one on "set data" and the other on "clear data
> > valid":
> >
> > # set data
> > yield test_stage.input.data.eq(0xAAAA)
> > yield test_stage.input.sending.eq(1)
> > yield test_stage.output.accepting.eq(1)
> > yield
> > yield delay_into_cycle
> > # clear data_valid
> > yield test_stage.input.sending.eq(0)
> > yield test_stage.output.accepting.eq(1)
> > yield
> > yield delay_into_cycle
> >
> > what that means is: the data is only sent every *two* cycles, which means
> > that the pipeline is only 50% utilised... in turn, that means that
> > potential conditions where it could fail (lose data, or data corruption)
> > are not being tested.
> >
> they are still being tested since the next clock cycle optionally fills the
> simple pipeline stage, then the cycle after is the cycle where all the
> combinations are covered.
so, what's not being tested is the case where the filling of the pipeline
stage and those input combinations happen in the same clock cycle.
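to make that concrete, here is a rough sketch (not the actual
test_buf_pipe.py code: apart from i_p_stb and i_n_busy, the signal
names and the exact handshake timing are assumptions) of what separate
sender and receiver co-routines look like:

import random

def sender(dut, values):
    # producer side: offer a new value on (potentially) every clock.
    # i_data and o_p_busy are assumed names; exactly when busy must be
    # sampled depends on how the stage registers it.
    for v in values:
        yield dut.i_p_stb.eq(1)        # STB: "data is valid"
        yield dut.i_data.eq(v)
        yield                          # advance one clock
        while (yield dut.o_p_busy):    # stalled: hold STB and data steady
            yield
    yield dut.i_p_stb.eq(0)

def receiver(dut, expected):
    # consumer side: randomly stall, collect and check everything that
    # comes out (the pipeline preserves ordering).
    results = []
    while len(results) < len(expected):
        stall = random.randint(0, 3) == 0
        yield dut.i_n_busy.eq(1 if stall else 0)   # BUSY: "cannot accept"
        yield                          # advance one clock
        if not stall and (yield dut.o_n_stb):      # o_n_stb: assumed name
            results.append((yield dut.o_data))     # o_data: assumed name
    assert results == expected

the two generators are then handed to the simulator together (with
nmigen's compat run_simulation, or whichever Simulator wrapper the test
file uses), and it interleaves them on the clock for us.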
> > a split sender co-routine as separate and distinct from the receiver
> > co-routine would allow the sending to potentially occur on every clock.
> >
> > i like the idea of comprehensive coverage of data valid, sending and
> > accepting: my only concern is that they're regularly scheduled. as in: the
> > possible permutations of the *order* in which the combinations of
> > data-valid / sending / accepting can occur (be set and unset) have not
> > been covered, leaving it unconfirmed whether some of those combinations
> > might fail.
> >
> I can add another test that dumps random data through the pipeline (using an
> LFSR or something to obtain reproducible results).
did you see what Test3 does? that's exactly what it does. i would
prefer that code not be duplicated. plus, the two types of pipeline
should chain together. that in turn means conforming to a common API
so that time is not wasted and confusion does not result from the
names being different.
if you recall, i mentioned that there are signal-naming conventions, STB
and BUSY, which we need to conform to because they are industry-standard.
if we use names that differ from those conventions, the code will be
rejected by people working in the industry, and we will end up isolated
and forced to do extra work, instead of receiving assistance from people
whose expertise we really need.
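as a sketch of what i mean by a common API (class and attribute names
here are illustrative, not the actual ieee754fpu code), every stage
would expose the same two sets of signals, named after the STB/BUSY
convention:

from nmigen import Signal

class PrevControl:
    # producer-facing side of a stage: STB comes in, BUSY goes out
    def __init__(self, width=16):
        self.i_p_stb  = Signal()       # producer asserts: "data is valid"
        self.o_p_busy = Signal()       # stage asserts: "cannot accept"
        self.i_data   = Signal(width)  # data offered by the producer

class NextControl:
    # consumer-facing side of a stage: STB goes out, BUSY comes in
    def __init__(self, width=16):
        self.o_n_stb  = Signal()       # stage asserts: "data is valid"
        self.i_n_busy = Signal()       # consumer asserts: "cannot accept"
        self.o_data   = Signal(width)  # data offered to the consumer

with identical names on both the simple and the buffered pipeline, the
same unit test code can drive either one, and chaining two stages is
just a matter of wiring one stage's NextControl to the next stage's
PrevControl.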
> > that was why i used a random setting of the accept and send conditions, but
> > also varied the distribution of time that each could spend set / unset (see
> > send_range and stall_range).
> >
> > what that does is: sets up between 1 and 10 cycles where (potentially) the
> > sender is mostly idle and the receiver is mostly accepting, and then on the
> > next 1-10 cycle group, it could be the other way round. then, also,
> > sometimes, there's no stalling and the receiver is never busy.
> >
> > repeat for 10,000 values and we have a high degree of confidence that the
> > buffered pipeline (and a 2-stage chain) works as advertised.
> >
> ok, though the test I have already tests all combinations of internal state
> and external inputs (except that the input data doesn't cover all 2^16
> values).
the values themselves are not so important (except being correct, of
course): what matters is covering the combinations of the control signals.
> I can add tests that test a longer pipeline if you like.
i think it's essential to demonstrate that the pipeline can operate
at full capacity. there could be a situation, not covered by the
current test, where running at full capacity drops data under certain
circumstances (dan showed some circumstances where a naive pipeline
can *DROP* (lose) data).
in the screenshot below you can see the results from the Test3 class:
input data is presented on every clock [where i_p_stb is asserted], and
output data appears on every clock [where i_n_busy is LOW].
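the send_range / stall_range idea quoted above boils down to something
like the following sketch (illustrative names, not the actual Test3
code): per group of 1-10 cycles, pick how often to send and how often
to stall, including groups where sending happens on every single clock:

import random

def control_pattern(n_values=10000):
    # yields one (do_send, do_stall) pair per clock cycle
    sent = 0
    while sent < n_values:
        group = random.randint(1, 10)        # 1-10 cycles per group
        send_range = random.randint(0, 3)    # 0 => send on every clock
        stall_range = random.randint(0, 3)   # 0 => never stall
        for _ in range(group):
            do_send = send_range == 0 or random.randint(0, send_range) != 0
            do_stall = stall_range != 0 and random.randint(0, stall_range) == 0
            yield do_send, do_stall
            if do_send:
                sent += 1

each (do_send, do_stall) pair then drives i_p_stb and i_n_busy for one
clock in the sender / receiver co-routines, so the pipeline sees
everything from flat-out full-capacity runs through to long stalls.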
it would be preferable to work towards a common API, using the same
unit test code.
l.