Monday, May 24, 2010

NextOp's assertion synthesis and our recent FIFO experience

Following up on our DVCon 2010 paper on SystemVerilog Assertions 2009 (see www.cvcblr.com --> Publications), we recently ran our FIFO model through NextOp's BugScope tool. It produced some interesting results. The one I liked most is:

pop |-> full;

This is an eye-opening property, as this should never be the case! Yet BugScope indicated that we were missing it, either as an assert or as a cover. Obviously it is not a good assert, so when we analyzed it more deeply, it turned out to be "valid coverage" given the RTL as written. Details at:

http://verificationguild.com/modules.php?name=Forums&file=viewtopic&p=18073#18073

So essentially we did have a coverage hole, and when that hole was analyzed it exposed a design error/bug! What an interesting roundabout way of detecting bugs; but who cares, as long as the bug detection is automatic, it is good!
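
For readers who would like to try this pattern on their own FIFOs, here is a minimal SVA sketch of how such a finding can be captured. The module and signal names (clk, rst_n, pop, full, empty) are assumptions for illustration only; this is neither our actual FIFO model nor BugScope's output.

// Minimal sketch only: hypothetical FIFO control signals.
module fifo_props_sketch (
  input logic clk,
  input logic rst_n,
  input logic pop,
  input logic full,
  input logic empty
);
  // SVA-2009 defaults keep the individual properties terse.
  default clocking cb @(posedge clk); endclocking
  default disable iff (!rst_n);

  // A classic FIFO safety assertion: never pop an empty FIFO.
  a_no_pop_when_empty : assert property (pop |-> !empty);

  // The BugScope-style suggestion, kept as coverage rather than assert
  // (as concluded above): has a pop ever been attempted while the FIFO
  // was full? Covering the conjunction directly avoids any question of
  // vacuous passes that an implication inside a cover would raise.
  c_pop_when_full : cover property (pop && full);
endmodule

In our case it was the hole around this second, coverage-style property, not an assertion failure, that pointed us to the RTL issue.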

Ajeetha,

http://www.cvcblr.com/blog/?p=163

Monday, May 17, 2010

Welcome the next generation Verification Methodology – UVM

For all the SystemVerilog geeks, lovers and followers out there, here is a sigh of BIG relief – at last we have a UNIVERSAL Verification Methodology that all three major EDA vendors will openly support (and hopefully promote as well). As we speak, the UVM-EA (Early Adopter) release is now available. Take a look at it on the Accellera site.

CVC (www.cvcblr.com) has been closely following this release and is about to roll out fresh trainings on UVM. After all, it is based on OVM 2.0.*, on which we have delivered successful trainings to several customers locally – the most recent one just over the last weekend! (Yes, we do have weekend classes as well.)

So, what are you waiting for? Go ahead and ask about our upcoming UVM class via training@cvcblr.com or call us at +91-9620209226.

Talk to you soon on UVM!

CVC Team

www.cvcblr.com

Saturday, April 24, 2010

A glimpse of our DVAudit – what goes on @CVC’s TDG

Many have asked us the following:

  • Is CVC a training company? I see: www.cvcblr.com/trainings
  • Do you work on “live” projects?
  • What is your TDG really doing?

and more. Usually these questions come more from students/RCGs (Recent College Graduates) than from the experienced lot, as the experienced folks are well networked with the CVC founders (www.linkedin.com/in/svenka3, www.linkedin.com/in/ajeetha) and hence already know us well.

Our honest answer to such questions is “all of the above” :-) That is: YES, we ARE proud to be a training company (www.cvcblr.com/trainings), simply b’cos we know why we are doing it. We do work on “live” projects, and we constantly upgrade to next-generation technology, the most recent being SystemVerilog VMM/OVM, low power etc.

What makes us really different and keeps us constantly innovating is the thirst for “doing better”. This is the core of our PDG, the Product Division Group (yet to be formally announced on our website), where we look for ways to enhance productivity. For instance, when there is a customer deliverable of verification code, “Team CVC” spends quality time together doing thorough reviews, code walkthroughs, custom lints etc. Here is our latest weekend-edition, un-moderated, live-from-the-board glimpse of a DVAudit review done on a customer deliverable.

DVAudit

 

For the experienced lot reading this entry (BTW, thanks for getting this far :-)), this is a very common part of your tech life. For the uninitiated, this is how the industry works: “writing a piece of code” is just one part – there is a lot more to making it customer-ready/production-ready.

Now, what’s innovative about the above “glimpse”? If you read it carefully you can see that we are creating a thorough checklist, an “executable” process for Design-Verification Audit. It is something CVC has been doing for its corporate customers behind the scenes for many years. Now it is slowly taking shape as part of our PDG – stay tuned for more.

Wednesday, March 17, 2010

A modern approach to SoC level verification

 

Verifying an SoC is fun and tedious. With so many buzzwords flying around, it is quite easy to get lost in the maze and miss the goal. In the end one may feel that the plain old wisdom of a whiteboard-based testcase review/plan is/was a lot more controllable & observable. We did that back in 2000 at Realchip Communications and yes, it worked really well. But with shrinking schedules and mounting complexity, is that really fast enough? Before I hear “constrained-random”, pause for a moment – how random do you want your end-to-end data flow in and out of the ASIC/SoC to be?

We at CVC (www.cvcblr.com) take pride in partnering with all major EDA vendors (http://www.cvcblr.com/partners), big & small, to look for the best possible solution to each problem rather than suggesting a “one-size-fits-all” solution.

Here is a relevant thread @Vguild: http://www.verificationguild.com/modules.php?name=Forums&file=viewtopic&p=17615#17615

I am due to start work on an ASIC, and am wondering about a suitable verification strategy. The ASIC consists of a data path, with continuous data input from ADCs and continuous output to DACs, and a couple of embedded processors utilising external flash and SRAM.

So the interfaces to the ASIC are pretty much:
(1) parallel data bus in
(2) parallel data bus out
(3) external memory interface for CPUs

And here is our own experience/view of an emerging approach to this problem – we don’t claim to have solved it completely, but we seem to be making good progress towards handling it in a methodical and controllable (yet scalable) manner.

 

Hi Siskin,
Good question/topic. While the value of OVM/VMM is very profound at the block level, their usage at the SoC level, where end-to-end data flow is being checked, is not very well reported (yet) in the literature. Needless to say, they are far better than inventing your own. Especially if you have block-to-system reuse of these VIP components, they come in very handy. Virtual sequences/multi-stream scenarios do assist, but IMHO they involve heavy workouts. Instead, what we promote to our customers here and have been prototyping with at CVC is the solution from Breker, called Trek. It can work on top of any existing TB - Verilog/VHDL/TCL/VMM/OVM, you name it.
The idea is to reuse the block-level components to do what they do best and build tests at a higher level - in this case using graphs, nodes etc. I tend to like this, as I liked Petri nets during my post-graduation days (though I didn't follow up on that interest afterwards).
My first impression was to use Trek simply as a testcase-creation engine, but slowly I'm getting convinced it is useful as a "checker" as well - especially for the end-to-end checks.
You are absolutely right - use assertions at the IP interface level and some sort of higher-level stimulus. Where I see Trek useful in SoC verification is the ability to describe the "flow of data through the SoC" as a graph and let the tool generate tests for you. I even jokingly say one could use a palmtop/PDA to draw these graphs while travelling, convert them to a Trek graph (somehow, haven't chased that dream yet) and have tests ready while on the move - flight/train/bus, whatever it may be! On a serious note, this is quite similar to how we used to discuss our testplans on a whiteboard during our Realchip (a communication startup in 2000-2001) days, now becoming "executable" :-)
See ST's usage of Trek @
http://www10.edacafe.com/nbc/articles/view_article.php?articleid=787856
Feel free to contact me offline if you need further assistance with Trek. We have our 2nd successful Trek-based project finishing up, though these are small/medium-scale ones.
My 2 cents!
Srini
www.cvcblr.com
_________________
Srinivasan Venkataramanan
Chief Technology Officer, CVC www.cvcblr.com
A Pragmatic Approach to VMM Adoption
SystemVerilog Assertions Handbook
Using PSL/SUGAR 2nd Edition.
Contributor: The functional verification of electronic systems

Saturday, March 13, 2010

NextOp’s Assertion Synthesis – expanding ABV applications?

 

In case you missed it, read a user report on NextOp’s technology at: http://www.deepchip.com/items/0484-01.html 

In the next couple of blog entries, I will share my reading of and reflections on this detailed report.

To start with, this technology seems to address some of the “points to ponder” being discussed at: http://www.cvcblr.com/blog/?p=146 

As there is no whitepaper/material available on this technology, I base my reflections solely on the ESNUG report. The first thing that strikes me is that it seems to help in identifying “what assertions to write”. But it takes a radically different approach to this problem, at least compared with what has been attempted so far by other EDA vendors. The single biggest difference is that it takes the RTL + testbench as the guide to create assertions/properties. From the report:

BugScope takes in our RTL design and testbench as inputs and generates properties (which we then categorize as assertions or coverages) that help identify bugs and coverage holes during simulation. In contrast, Mentor's 0-in assertion synthesis does not use our testbench;



This is certainly a new idea, though I’m a little sceptical about the value of late-in-the-cycle assertions.



The next interesting inference from this report concerns the “coverage property” generation:



When we began our BugScope eval, we only cared about assertion properties it generated -- we didn't initially see any value of BugScope's coverage properties.


From what I read in that report, its USP seems to be the “coverage holes” it can identify. In that case it may be adding more work for the whole project than it removes – true, it helps with better quality, and folks like Nusym would be glad to have more to cover, but again it is too early to comment in detail. The example given in that report looks a little strange, as that case may be due to insufficient testcase run time, weak random generation, over-constrained stimulus etc. Also, nowadays with RAL-like automation (VMM-RAL, www.vmmcentral.org), all registers can be captured in a more controlled fashion from the spec. So, at least with the example provided in the report, I fail to see the value. But since the user says he has been using it in production for 2 years or so, there must be some credit due to this “niche technology”.



Perhaps NextOp is expanding the traditional ABV applications to include “verification closure requirements” by identifying what is not yet covered. That would be an interesting application of ABV!



More on this report later.

ABV – points to ponder on its slow adoption

Efforts have been ongoing for several years to get ABV (Assertion Based Verification) more widely deployed via OVL, PSL, SVA etc. Though the concept of assertions is not really new to the industry, widespread usage has not been what was expected, at least by the EDA vendors and promoters (among which I count CVC, www.cvcblr.com).

Prior to the PSL/SVA days, 0-in came up with the idea of assertion identification, a checker library etc. It did catch on with early adopters but suffered from being a proprietary solution and from the inherent limitations of any auto-generated code. This was followed by other EDA vendors developing “auto-generated assertions” for designs – there was some good traction for a few quarters and then the initial enthusiasm faded away, perhaps because the signal-to-noise ratio (SNR) was way too low.

The development of OVL and other vendor-specific assertion libraries looked promising, but IMHO these were not marketed well enough. Also, they all fell short of the good old 0-in checker elements when it comes to ease of use, verbosity etc. We dealt with this very topic in good detail in our recent SVA Handbook 2nd edition (www.systemverilog.us/sva_info.html) and also touched upon it in our DVCon 2010 paper (see the downloads page at www.cvcblr.com; code, paper + slides are available on request).

As we at CVC have walked through these developments in the industry, we continue to debate what is preventing ABV from being more widely used. We have identified several items; a non-exhaustive list is below:

  • Who will add these tiny little monsters to start with? Is it the RTL designers or the verification engineers?
    • The answer seems to be both.
  • There is a myth that RTL folks don’t want to learn a new language – be it SVA/PSL etc.
    • I call it a myth b’cos, at least in this part of the world, young engineers are always open to new languages and technologies to keep themselves ahead and beat the recession!
    • True, the full PSL/SVA is more than the average RTL designer can consume – but then the kind of properties RTL folks would write are also simple and don’t require the full language capabilities (see the sketch after this list).
    • We at CVC have carefully extracted what RTL designers require to become productive with ABV – we offer it as a 1-day (or even half-a-day if really needed) workshop on “ABV for RTL designers”; see www.cvcblr.com/trainings or contact us via training@cvcblr.com for details.
  • The checker libraries are very handy for RTL folks, but as I said earlier, many are not even aware of their potential. They need more marketing.
    • Some complain about the verbosity, especially those who have used 0-in or OVA (inlined) in the past (see AMD’s presentation to the Accellera OVL-TC, www.accellera.org, a few years back).
      • The recently released SVA-2009 LRM addresses this well with inherited clocks, default clocking etc. See www.systemverilog.us for more.
      • Also look at the checker...endchecker construct in SVA-2009 (illustrated in the sketch after this list).
    • Many users may indeed benefit from a simple “drag-n-drop” style such as ZazzOVL being developed by Zocalo (www.zocalo-tech.com). We at CVC have done an initial eval and the results look very promising. True, they have some way to go before satisfying every possible user, but it is a good first step, I must say!
  • In my design, what assertions can I add?
    • This seems to be a much more prevalent question than the myth I mentioned earlier. There is a good element of truth in this concern – only with experience does one learn to “identify” quality assertions.
    • There are tools emerging in this space, such as NextOp’s assertion synthesis (http://www.deepchip.com/items/0484-01.html) and Zocalo’s “Zazz bird dog” (www.zocalo-tech.com).
  • How do I know whether my assertions themselves are correct?
    • See http://www.cvcblr.com/blog/?p=132 for a lively discussion on this topic; Jasper’s ActiveDesign seems to address this well, along with offerings from other EDA vendors.
    • Also, tools like VCS, Verdi etc. allow assertion evaluation based on a given dump file – say VPD, FSDB etc. This is yet another useful feature that is barely marketed, if at all. Look in the tool documentation, contact your vendor, or email us at info@cvcblr.com for more on this.
  • How do I know my assertions really fired?
  • How many assertions are enough for my design?
    • Excellent/best question perhaps, so NO ANSWER :-)
    • More pragmatically though, there is some research going on at IIT-Kharagpur on this topic, see: http://www.smdp.iitkgp.ernet.in/publications.htm
    • 0-in addressed this with MSD – Minimum Sequential Distance; look in their documentation for more.
    • VCS reports statistics on “assertion density” – some indication at least.
    • If you are a Masters graduate or a PhD student – an excellent topic to work on!
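
To make a couple of the points above concrete, here is a small, hedged sketch of the kind of “simple” inline properties an RTL designer might write, using the SVA-2009 default clocking/disable and the checker...endchecker construct mentioned above. All names (arbiter_rtl, req, gnt, the 4-cycle bound) are hypothetical, purely for illustration.

// Hedged sketch only: hypothetical arbiter signals and a 4-cycle bound.

// SVA-2009 checker: bundles clock, reset and property into a reusable,
// less verbose unit that can be instantiated like a module.
checker req_gets_gnt (logic clk, logic rst_n, logic req, logic gnt);
  a_req_gnt : assert property (@(posedge clk) disable iff (!rst_n)
                               req |-> ##[1:4] gnt);
endchecker

module arbiter_rtl (
  input  logic clk, rst_n,
  input  logic req,
  output logic gnt
);
  // ... RTL body elided ...

  // SVA-2009 defaults: declare the clock and reset once ...
  default clocking cb @(posedge clk); endclocking
  default disable iff (!rst_n);

  // ... so a designer-level assertion stays a one-liner:
  a_no_gnt_wo_req : assert property (gnt |-> req);  // no grant without a request

  // The checker is simply instantiated inside the RTL.
  req_gets_gnt chk_req_gnt (clk, rst_n, req, gnt);
endmodule

Even this much, with no methodology layer on top, gives the RTL designer useful bug-catching hooks; the full PSL/SVA feature set is simply not needed at this level.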

Twitter of RTL design – welcome to Behavioral Indexing!

Srinivasan Venkataramanan, CVC Pvt. Ltd. www.cvcblr.com

Ajeetha Kumari, CVC Pvt. Ltd. www.cvcblr.com

If you haven’t heard of Twitter, you are perhaps living in an internet vacuum :-) On a positive note, the reach and impact of SNS (Social Networking Sites) on our internet life is hard to ignore – whether it is Twitter, Facebook, LinkedIn etc. To me, a successful SNS tries to capture “what is going on in your mind right now”. A similar approach can be applied to RTL design – when a designer makes an assumption about output latency or FIFO size etc., it hardly ever gets captured in a repeatable, executable format. True, at the end of a design phase documentation is (usually) written that attempts to capture these, but by then it is too late for them to be “active comments”.

From a language perspective, SystemVerilog allows assertions & functional coverage (covergroups) inline with the RTL code, which can help to some extent (see the sketch after the list below). However, they capture only the “specification” part. A lot more “information” gets lost in that translation, such as:

  • “Show me a proof/witness/waveform” for such an occurrence
  • Can we optimize the latency to, say, 5?
  • What if I change the FIFO size to 32 here? etc.
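
As a small, hedged illustration of the point above: the “specification” part can indeed be written inline, but the surrounding intent cannot. The block below is purely hypothetical (the names, the 4-cycle latency and the depth-32 FIFO are assumptions for illustration, not a real design).

// Hypothetical datapath stage: names, latency and FIFO depth are assumptions.
module dsp_stage (
  input  logic       clk, rst_n,
  input  logic       in_valid,
  output logic       out_valid,
  output logic [5:0] fifo_level   // occupancy of a depth-32 FIFO
);
  // ... RTL body elided ...

  // The designer's latency assumption, captured as an executable assertion.
  a_latency_4 : assert property (@(posedge clk) disable iff (!rst_n)
                                 in_valid |-> ##4 out_valid);

  // Functional coverage on the FIFO occupancy the design was sized for.
  covergroup cg_fifo_level @(posedge clk);
    cp_level : coverpoint fifo_level {
      bins empty  = {0};
      bins mid[4] = {[1:31]};
      bins full   = {32};
    }
  endgroup
  cg_fifo_level cov_fifo = new();

  // What is still NOT captured here: the waveform that justified "##4",
  // or what happens if the FIFO depth is changed later -- exactly the
  // "information" that gets lost.
endmodule

Everything beyond the assertion and the covergroup – the proof/witness, the what-ifs – is precisely what Behavioral Indexing, described next, tries to keep attached to the RTL.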

Jasper’s recently announced ActiveDesign technology has a significant component aimed at this “design process”. It is called “Behavioral Indexing”: you “index” the behavior with facts, assumptions, traces, bugs etc., all in a comprehensive database kept alongside your RTL. So when a designer (or another designer who inherits or reviews the code) looks at the code again (via the ActiveDesign database, of course), he/she gets not only the assumptions (which would be similar to SVA) but also real traces, the potential impact of changing the FIFO size, etc. In a generic sense, the indexing captures the designer’s state of mind “at that point in time” as a snapshot and keeps it reproducible throughout the lifetime of the RTL code! Good thinking indeed; this is why I like to call it the “Twitter of RTL design”.

There is more to Behavioral Indexing than this; I will talk about it next time around, so stay tuned!