[libre-riscv-dev] [isa-dev] Re: FP transcendentals (trigonometry, root/exp/log) proposal

lkcl luke.leighton at gmail.com
Sat Aug 10 18:16:45 BST 2019



On Friday, August 9, 2019 at 8:03:10 PM UTC+1, Allen Baum wrote:
>
>
>
> On Fri, Aug 9, 2019 at 1:46 AM lkcl <luke.l... at gmail.com> 
> wrote:
>
>>
>>
>> > If there is a measurable and significant improvement on some large
>> > body of code, such as SPEC for example, then that would be grounds for
>> > considering inclusion in a RISC-V Foundation standard extension.
>>
>> It took Jeff Bush on Nyuzi about... 2 years to get to the point of being 
>> able to do that level of assessment.
>>
>
> OK - but what's your point?
> Or, rather, why do you expect that you can or should be able to skip 2 
> years of work?
>

if you read the follow-ups it will become clear.  the amount of work that 
went into the Vulkan / OpenCL standard is... staggering.

* repeating that quantitative evaluation is completely pointless; and, further,
* DEVIATING from the quantitative evaluation that resulted in the selection 
of those opcodes is also pointless; and, further,
* REMOVING opcodes from a list that has Industry-wide adoption 
results in performance penalties (if not done carefully).

i illustrated in a cross-over post that i've *already* rejected requests 
for the addition of POSITs, for example [and didn't bother anyone here with 
them].

i also described that we've already begun the process of evaluating 
"equivalence" (LOG(x) = LOGP1(x-1), etc.), thanks to input from Mitch on 
that one.  i think we've already saved at least four opcodes from being 
needed.
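to make the idea concrete, here's a minimal numerical sketch (python, 
purely illustrative) of the kind of identity involved; the expm1 line is an 
analogous example added for illustration only, not a claim about which 
specific opcodes were dropped:

    # if LOGP1 (log1p) is provided in hardware, a separate LOG opcode
    # becomes redundant, because log(x) == log1p(x - 1) up to the
    # rounding of the (x - 1) step.  the expm1 line shows the same
    # pattern (illustrative assumption, not part of the original post).
    import math

    for x in (0.5, 1.0, 2.0, 10.0, 100.0):
        assert math.isclose(math.log(x), math.log1p(x - 1.0), rel_tol=1e-12)
        assert math.isclose(math.exp(x) - 1.0, math.expm1(x), rel_tol=1e-12)
    print("identities hold to within floating-point rounding")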


> Or, why should you expect anyone to make a major adoption of a standard 
> that might turn out to be fatally flawed because it was rushed to 
> ratification without the requisite homework?
>
>
we don't!  we expect everyone to help do a proper evaluation!  it's 
everyone's community, and we *all* win by contributing to it!  why is that 
so hard to understand or follow through on, in a fashion that's respectful 
and doesn't make people feel completely unwelcome??

we do *NOT* expect people to act in a hostile fashion, trying to gain "one 
up" over anyone else (directly or indirectly).  we're keenly aware that 
this is for a wider benefit, so it's up to *all* of us to take 
responsibility and work together.
 

> If you want to create a standard (make no mistake, that's what you're 
> doing) that will be widely adopted, there is a lot of heavy lifting that 
> can't be swept under the rug.
>

well if someone actually bothered to ****** well tell us what the process 
was - which we've requested six, seven, eight, nine, ten ******* times - we'd 
not be in this ******* mess, _would_ we!

as "outsiders", because we are excluded from membership by the NDA terms 
and the conflict of interest with our business objectives, with everyone 
else *knowing* what the process is, of *course* they're going to get 
frustrated with us.

*and that's just not good enough*.  the RISC-V Founders set up this stupid 
ITU-style Standards process - without consulting anyone or listening to 
feedback, so don't blame *us* if it doesn't work out!

where is the documentation on how to submit proposals?

why has nobody answered our requests on what the process is?

if it is on the closed / secretive RISC-V wiki, *why* is such critical 
documentation closed and secretive?

why are *we* being penalised for the RISC-V Foundation failing to take 
responsibility for something as fundamental and basic as providing 
easily-accessible written procedures on how to submit extensions?

does written documentation on such procedures even *exist*??


> I'm sorry to say that your team may have the resources to design something 
> pretty nifty - but not big enough to handle the other part of that, which 
> is to demonstrate it is the right nifty thing that others will adopt (at 
> the expense of adopting someone else's nifty standard).
>

well... i'm not a fan of that approach ["winning" over someone else's 
ideas].  if there's an alternative standard that would save us time and 
effort, that's GREAT.

the problem comes when - as is the case with how both RVV and BitManip have 
been developed (both of which we need to complete a GPU) - we're *EXCLUDED* 
from contributing to, and participating in, the innovation.

so we've been FORCED into a position of creating an entire new 
vectorisation standard.

we're not happy about it.

 

> You do have an advantage - SW developers will prefer an open source 
> solution - but not at the expense of a flawed open source solution, and it 
> is unfortunately up to you to show that it isn't flawed (to clear- I'm not 
> saying it is flawed - just that it needs evidence to back it up).
>

i understand, as it would be completely insane to drop USD $10m onto a 
silicon roll-out only to have it fail because it hadn't been designed 
properly in the first place.

*we know this just as well as everyone else*.  [i had a mentor for 12 years 
who worked for LSI Logic, and was the head of Samsung's R&D]

the approach that i take is that i actually don't trust "myself".  drawing 
on the software engineering and reverse-engineering that i've done, i find 
some independent way to "prove" that the approach is correct.

in the case of the IEEE754 FP Unit, that involved running hundreds of 
thousands of conformance tests against softfloat-3.  why?  because 
softfloat-3 is "proven".
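for context, this is roughly what that looks like in practice.  a minimal 
sketch (python, purely illustrative): "fpu_add" (the unit under test) and 
"ref_add" (a wrapper around softfloat-3's f32_add, e.g. via ctypes) are 
hypothetical placeholder names, not real APIs of this project:

    # feed identical random 32-bit patterns to the unit under test and
    # to the softfloat-3 reference, and insist on bit-exact agreement.
    # both callables are hypothetical placeholders for illustration.
    import random

    def compare_random_f32_adds(fpu_add, ref_add, n=100_000):
        for _ in range(n):
            a = random.getrandbits(32)
            b = random.getrandbits(32)
            got, want = fpu_add(a, b), ref_add(a, b)
            assert got == want, (
                f"mismatch on {a:#010x} + {b:#010x}: "
                f"{got:#010x} != {want:#010x}")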


in the case of the proposed Transcendentals and Trigonometric functions, 
they're *DIRECTLY* lifted from the OpenCL SPIR-V extended instruction set:

https://www.khronos.org/registry/spir-v/specs/unified1/OpenCL.ExtendedInstructionSet.100.html

the Khronos Group has *really* big names behind it.  replicating the 
N+ years of effort that went into selecting those particular opcodes 
is a genuinely stupid thing to do [likewise, deviating from that list is 
pretty pointless, as there's no API call in Vulkan that will use the 
deviation].
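as a rough illustration (and emphatically not the actual proposal's 
encodings), keeping a scalar opcode list in lock-step with the SPIR-V 
extended instruction set is essentially a table-maintenance exercise.  the 
SPIR-V names below are a small real subset of OpenCL.std; the RISC-V-style 
mnemonics are hypothetical placeholders:

    # map a handful of OpenCL.std extended instructions to proposed
    # scalar opcodes.  the mnemonics are HYPOTHETICAL placeholders.
    SPIRV_TO_SCALAR = {
        "Cos":   "fcos.s",
        "Sin":   "fsin.s",
        "Tan":   "ftan.s",
        "Exp":   "fexp.s",
        "Log":   "flog.s",
        "Sqrt":  "fsqrt.s",   # already covered by the base F extension
        "Rsqrt": "frsqrt.s",
    }

    def uncovered(spirv_names):
        """list any extended instructions with no scalar mapping yet."""
        return [n for n in spirv_names if n not in SPIRV_TO_SCALAR]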

there is therefore *absolutely no point* trying to do "quantitative 
analysis" of work that's *already* been proven.

let's try it.

let's take COS.

how should we "quantitatively analyse" whether COS should be included?

um... let's do some research.  how many Vulkan application writers use 
that function?  well, we don't know, because it's an API function call.

i know, let's go onto the khronos forums and ask, "hey cool 3D and OpenCL 
dudes, how's it hangin'?  we wanna like put COS into RISC-V.  can you like, 
give us access to your proprietary source code and algorithms, and all the 
proprietary shader models of your multi-million-dollar games, so we can do 
a quantitative analysis of whether to put COS into hardware?"

after a suitably stunned silence, they'll just burst out laughing, won't 
they?  not so much at the requests for access to proprietary trade secrets, 
but for the sheer banality of the question in the first place, and the lack 
of appreciation of why and how COS went into Vulkan.


about the only other possible way it could be done is with some academic 
research.  and you know what?  i bet that that academic research will 
basically be along the lines of:

"well, the Vulkan API has wide adoption, and, well, y'know: all the 
Transcendentals and Trigonometric functions seem to be heavily used by a 
wide and diverse range of applications, world-wide, y'know, cos, well, it's 
a proven standard.   um, yeah, that's about all we can say, apart from this 
has been such a boring paper to write, with such a blindingly-obvious 
conclusion that we actually can't get it published in any academic 
journals, as it's just not original or interesting enough".

to me it's so frickin obvious that you stick - to the letter - to the 
Vulkan OpenCL Opcodes that i'm having severe difficulty understanding why 
this is not blindingly ****** obvious to other people.


the other aspect that i'm particularly pissed off about is that the requests 
for us, as a Libre / Open team, to take on this additional work *are not 
coming with sponsorship offers and funding attached to them*.

it's completely unacceptable for *us* to be expected to foot the bill for 
advancing RISC-V's adoption in such an important strategic area as 
3D and OpenCL, especially given that billion-dollar Corporations will end 
up - YET AGAIN - sponging off of Libre initiatives.

this is something that's actually in the RISC-V Membership Agreement.  
Members are *REQUIRED* to fund Libre/Open developers when it advances 
RISC-V.

sorry, Allen - you can probably tell, i'm really not very happy or 
impressed.

particularly that, as it stands, i can't possibly put this thread in front 
of potential members of the upcoming Open 3D Alliance.  they'd take one 
look at it, and, completely horrified, would respond "you *seriously* think 
we are going to trust our business prospects to the RISC-V Foundation if 
that's how they respond to Libre/Open innovators??"

as i said in the cross-over post: of course i get why a comprehensive 
evaluation is necessary.  as a Certification Mark Holder i've spent *six 
years* developing a Standard, so i *know* how it works.

it's just that in this particular case, the Transcendentals and 
Trigonometrics are *already* proven.  if they weren't, AMD, NVIDIA, Intel 
and other *massive* Corporations behind the Khronos Group would never have 
signed off on them, nor put them into their GPU Hardware, would they?

sometimes there are ways *other* than "quantitative analysis" to 
demonstrate that something's acceptable for a Standard.  one of them is to 
adopt *someone else's* Industry-Grade Standard.

l.


