[libre-riscv-dev] Request for input and technical expertise for Systèmes Libres Amazon Alexa IOT Pitch 10-JUN-2020

Staf Verhaegen staf at fibraservi.eu
Mon Jun 8 11:17:05 BST 2020

Jacob Lifshay wrote on Sun 07-06-2020 at 17:29 [-0700]:
> On Sun, Jun 7, 2020, 16:56 Luke Kenneth Casson Leighton <lkcl at lkcl.net> wrote:
> > On Mon, Jun 8, 2020 at 12:42 AM Cole Poirier <colepoirier at gmail.com> wrote:
> > > Hi Libre-SOC Developers,
> > > I have been working on a first draft of the script/contents of a presentation pitching Libre-SOC to Amazon Alexa's IOT division. I have put what I've been able to come up with on my own on a wiki page here:
> > 
> > https://libre-soc.org/Systemes_Libres_Amazon_Alexa_IOT_Pitch_10-JUN-2020/.
> > (i moved it to here, sorry!
> > https://libre-soc.org/systemes_libre/Systemes_Libres_Amazon_Alexa_IOT_Pitch_10-JUN-2020/
> > )
> > > If you can, please review what I have written and edit, rearrange, and add relevant points as you see fit. I would appreciate help specifically in filling in or correcting technical details or false claims that I have made.
> I added some notes.

OK, some comments from a guy coming out of the hardware development world. I may sound skeptical, but this is to help you prepare for a possibly skeptical audience.

I understand the advantage regarding the RTOS and RPC needed between CPU and GPU; we will have to see how important your audience finds this.
Playing devil's advocate, I would even wonder whether IoT devices typically need display (and thus GPU) functionality at all. Or, if they do, can't they just live with a simple framebuffer, possibly with some 2D acceleration?
Will the Libre-SOC GPU instructions have an advantage for machine learning/artificial intelligence?

A big part of your pitch is based on power advantages. As Jacob also indicated, I would be very prudent about making power claims without real numbers to back them up.
Modern design techniques and recent (proprietary) EDA tools are good at achieving virtually no power consumption when a circuit is not doing anything. So I don't see a reason why your combined CPU-GPU would be so much better in power consumption than a chip with a separate CPU and GPU. Given that you are currently targeting the POWER instruction set, I expect you will spend more power decoding it, so even if the GPU part causes less overhead, would you still be better than ARM + Mali or RISC-V + some GPU?
As always I like to be proven wrong by real numbers.

Also, to get there, several man-years of development of the open source EDA tool chain will be needed: proper parasitic extraction of interconnects, timing-driven P&R, support for DVFS (dynamic voltage and frequency scaling), clock gating, power gating, ...

If there are ASIC hardware developers in your audience, they will be very skeptical when you present power consumption numbers based on simulation, and will want to see the details of how you derived the numbers; many will simply dismiss such power claims on principle. The reason is that in smaller nodes power consumption is mainly driven by the capacitive load of the interconnects, so making power claims without an actual implementation in the targeted node is considered just guessing.
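To make the interconnect point concrete, here is a minimal sketch of the textbook first-order CMOS dynamic power model, P = alpha * C * V^2 * f. All numeric values are invented placeholders for illustration only, not measurements of any real node or of Libre-SOC; the point is simply that the wire-capacitance term, which is unknown before place-and-route in the target node, can dominate the result.

```python
# First-order CMOS switching power: P = alpha * C * V^2 * f.
# Every number below is an assumed placeholder, NOT real data.

def dynamic_power(activity, capacitance_f, vdd, freq_hz):
    """First-order dynamic (switching) power in watts."""
    return activity * capacitance_f * vdd ** 2 * freq_hz

# Hypothetical split of total switched capacitance into gates vs. wires.
gate_cap = 0.4e-9   # 0.4 nF of switched gate capacitance (assumed)
wire_cap = 1.2e-9   # 1.2 nF of switched wire capacitance (assumed)

p_gates = dynamic_power(0.1, gate_cap, 0.8, 1e9)  # alpha=0.1, 0.8 V, 1 GHz
p_wires = dynamic_power(0.1, wire_cap, 0.8, 1e9)

print(f"gates: {p_gates*1e3:.1f} mW, wires: {p_wires*1e3:.1f} mW")
# Before an actual implementation in the target node, wire_cap is a guess,
# so any total derived this way is a guess too.
```

With these assumed numbers the wires account for three quarters of the dynamic power, which is why simulation-only claims that omit extracted interconnect parasitics get dismissed.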

I don't understand the claimed BOM advantage, since CPU and GPU are already integrated in one chip in existing products, e.g. ARM Cortex + Mali GPU.

