The SoC design crisis: is big data the answer?

Rupert Baines

Ed Sperling’s excellent recent article about the challenges of gathering and processing data for system-level design really got me thinking.

Ed summarizes some very relevant and pithy thoughts from a number of different parts of our industry – including EDA vendors, design houses, and IP providers. Rightly, in my opinion, he identifies the challenges of collecting relevant data throughout the development process (although without mentioning that data gathering might extend into the SoC’s real-life deployment in an end product). And I’d certainly agree that another challenge is to process that data to give engineers a dashboard view of what’s going on inside their SoCs.

But somewhere along the line, the argument breaks down: is the solution to the SoC development crisis REALLY to acquire masses of data and throw it at “big iron” servers (albeit these days virtualized ones) in the hope that something useful will emerge? Surely we need to work smarter than that?

At UltraSoC, when we talk to potential customers, they’re not thinking it would be nice to identify “unknown unknowns”. They have enough “known unknowns” to be going on with. They know that at the system level their chips will perform differently from how the constituent blocks behaved when validated in isolation. They know that DMA transfers will (unpredictably) take longer than expected. They know that the CPU will (sometimes) deliver far less performance than it was designed to deliver. They know that deadlock conditions and contention will lead to intermittent, hard-to-reproduce stalls and hangs.
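To make that last failure mode concrete, here is a minimal sketch in plain C with pthreads – purely illustrative, with made-up names, and nothing to do with UltraSoC’s products. Two tasks take the same two locks in opposite order: most runs complete, but an unlucky interleaving hangs, and nothing in either task’s local view explains why.

```c
/* Intermittent deadlock sketch: two threads, two locks, opposite order.
 * Most runs finish; occasionally the interleaving deadlocks.
 * Build: cc deadlock.c -lpthread   (names are illustrative only) */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t bus_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t dma_lock = PTHREAD_MUTEX_INITIALIZER;

static void *cpu_task(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&bus_lock);   /* order: bus, then dma */
        pthread_mutex_lock(&dma_lock);
        pthread_mutex_unlock(&dma_lock);
        pthread_mutex_unlock(&bus_lock);
    }
    return NULL;
}

static void *dma_task(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&dma_lock);   /* order: dma, then bus -- inverted */
        pthread_mutex_lock(&bus_lock);
        pthread_mutex_unlock(&bus_lock);
        pthread_mutex_unlock(&dma_lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, cpu_task, NULL);
    pthread_create(&b, NULL, dma_task, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    puts("completed without deadlock (this run)");
    return 0;
}
```

Run it a few times and it behaves; run it enough times and it hangs. That is exactly the “known unknown” that is so painful to reproduce on a real chip.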

And they also know that the complicated set of interactions inside a SoC will vary. Adding “debug code” or software monitors is not only wasteful (consuming cycles that could be used more productively); it actually changes the performance, so the debug code itself means problems occur in different ways and in different places.
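Here is a minimal sketch of that probe effect, again in plain C and purely illustrative: the same loop is timed bare and with in-band trace output, and the “debug code” itself measurably changes the timing it is supposed to observe.

```c
/* Probe-effect sketch: the same workload, measured bare and with
 * in-band "debug code". The trace output consumes cycles and perturbs
 * caches, so the instrumented timing no longer reflects the system
 * under test. Workload and names are illustrative only. */
#include <stdio.h>
#include <time.h>

#define N 1000000
static volatile unsigned acc;

static double run(int instrumented)
{
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (unsigned i = 0; i < N; i++) {
        acc += i;
        if (instrumented && (i % 10000 == 0))
            fprintf(stderr, "trace: i=%u acc=%u\n", i, acc); /* the probe */
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void)
{
    printf("bare:         %.6f s\n", run(0));
    printf("instrumented: %.6f s\n", run(1));
    return 0;
}
```

The two timings differ, and the gap is the observer changing the observed. Hardware monitors that watch from outside the execution path avoid paying that cost in the first place.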

So what they need first is a way of homing in on those “known unknowns”, at “system speed” and in a way that is non-intrusive.

That’s what UltraSoC is dedicated to providing: a way of collecting the right data based on the knowledge and intelligence of the developer, and of turning that into actionable information.

Our CTO Gadge Panesar wrote about this in a recent blog post, explaining why SoC design needs a shake-up: a system-level perspective powered by intelligent analytics, enabling development based on sound engineering principles.

So really the question is not “what should I do with all this data?” It’s “what do I need to know, and how can I get the chip itself to tell me that?” That’s the dimension the “big data” argument misses.

You can read the full text of Ed Sperling’s article here.

Gadge Panesar’s blog post can be found here.