Sunday, November 8, 2009

SV: implication constraint and its implication/effect

SystemVerilog has a nice implication constraint feature to guard constraint expressions for applicability. Last week, during our SystemVerilog + methodology workshop, one of the attendees faced an interesting issue. She was creating a min-VIP for APB as part of our 10-day SystemVerilog workshop (see details at: http://www.cvcblr.com/trng_profiles/CVC_VSV_WK_profile.pdf ).

She wrote APB scenario code intended to create a sequence of transactions with varying address, kind, etc. Here is a code snippet:

constraint cst_xactn_kind {
  if (this.scenario_kind == this.sc_id)
    this.length == 10;
  foreach (items[i])
  {
    (i==0) -> items[i].apb_op_kind == APB_WR; items[i].addr == 'b01; items[i].wdata == 'd11;

    (i==1) -> items[i].apb_op_kind == APB_WR; items[i].addr == 'b11; items[i].wdata == 'd12;
  }
}

Spot anything wrong in the above code? Perhaps not, to the unsuspecting eye. The intent of the code was to keep:

0th transaction KIND == WRITE, address == 01, data == 11;

1st transaction KIND == WRITE, address == 3, data == 12;

Read the code again – it seems to say just that, doesn't it? Let's run it.

Here is what Questa says:

###########################################################################                    
#                     WELCOME !!!
#                      APB PROJECT USING VMM
#                     DONE BY PRIYA @ CVC
#                     DATE:21stOctober2009
############################################################################
# Normal[NOTE] on APB_PROGRAM(0) at                    0:
#     APB PROJECT:       Start of APB Random test!    
# ****************************************************************************
# Normal[NOTE] on APB_ENV(0) at              0.00 ns:
#     APB PROJECT: Sim shall run for 10 number of transactions
# Normal[NOTE] on APB_ENV(0) at              0.00 ns:
#                     Reset!!!!!!!!!               
# Normal[NOTE] on APB_ENV(0) at            230.00 ns:
#                    Reset Release!
# ****************************************************************************
# *FATAL*[FAILURE] on APB Generator Scenario Generator(APB_GENERATOR) at            730.00 ns:
#     Cannot randomize scenario descriptor #0

Puzzled? What is wrong? The code author reviewed it herself a few times and didn't spot anything (bias towards one's own code?).

Time to seek expert assistance. Questa has a simple flag to bring up the solver debugger: vsim -solvefaildebug. Let's try that now..

 

# ../tb_src_scenario/apb_scenario_gen.sv(1): randomize() failed due to conflicts between the following constraints:
#     ../tb_src_scenario/apb_scenario_gen.sv(25): the_scenario.cst_xactn_kind { (the_scenario.items[0].addr == 32'h00000001); }
#     ../tb_src_scenario/apb_scenario_gen.sv(1): the_scenario.repetition { (the_scenario.repeated == 32'h00000000); }
#     ../tb_src_scenario/apb_scenario_gen.sv(25): the_scenario.cst_xactn_kind { (the_scenario.items[0].apb_op_kind == APB_WR); }
#     ../tb_src_scenario/apb_scenario_gen.sv(26): the_scenario.cst_xactn_kind { (the_scenario.items[0].addr == 32'h00000003); }
#     ../tb_src_scenario/apb_scenario_gen.sv(26): the_scenario.cst_xactn_kind { (the_scenario.items[0].wdata == 32'h0000000c); }
#     ../tb_src_scenario/apb_scenario_gen.sv(26): the_scenario.cst_xactn_kind { (the_scenario.items[1].apb_op_kind == APB_WR); }
#     ../tb_src_scenario/apb_scenario_gen.sv(26): the_scenario.cst_xactn_kind { (the_scenario.items[1].addr == 32'h00000003); }
#     ../tb_src_scenario/apb_scenario_gen.sv(26): the_scenario.cst_xactn_kind { (the_scenario.items[1].wdata == 32'h0000000c); }
#     ../tb_src_scenario/apb_scenario_gen.sv(26): the_scenario.cst_xactn_kind { (the_scenario.items[2].addr == 32'h00000003); }
#     ../tb_src_scenario/apb_scenario_gen.sv(26): the_scenario.cst_xactn_kind { (the_scenario.items[2].wdata == 32'h0000000c); }
#     ../tb_src_scenario/apb_scenario_gen.sv(26): the_scenario.cst_xactn_kind { (the_scenario.items[3].addr == 32'h00000003); }
#     ../tb_src_scenario/apb_scenario_gen.sv(26): the_scenario.cst_xactn_kind { (the_scenario.items[3].wdata == 32'h0000000c); }
#     ../tb_src_scenario/apb_scenario_gen.sv(26): the_scenario.cst_xactn_kind { (the_scenario.items[4].addr == 32'h00000003); }
#     ../tb_src_scenario/apb_scenario_gen.sv(26): the_scenario.cst_xactn_kind { (the_scenario.items[4].wdata == 32'h0000000c); }
#     ../tb_src_scenario/apb_scenario_gen.sv(26): the_scenario.cst_xactn_kind { (the_scenario.items[5].addr == 32'h00000003); }
#     ../tb_src_scenario/apb_scenario_gen.sv(26): the_scenario.cst_xactn_kind { (the_scenario.items[5].wdata == 32'h0000000c); }

Smell something wrong? Why are the constraints on addr and data being applied to scenario items 2, 3, 4, 5, etc. – beyond the items 0 and 1 that the "implication" was supposed to guard? Look again at the constraint code:

          (i==0) -> items[i].apb_op_kind == APB_WR;items[i].addr == 'b01; items[i].wdata == 'd11;

Found it? Not yet? The devil lies in the details – here, in that SEMICOLON ";". In Verilog/SystemVerilog a semicolon ends one constraint expression and begins the next. So the implication guards only the first expression (the one on apb_op_kind); the addr and wdata expressions become unconditional constraints on every item. As the solver report shows, line 25 of the file forces addr == 1 while line 26 forces addr == 3 on the same items – hence the contradiction!
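To make the parsing explicit, here is a sketch of how the solver effectively reads the (i==0) line – the member names are from the snippet above, and the comments are my interpretation of the semicolon semantics:

```systemverilog
// Each semicolon ends one constraint expression and starts the next,
// so this single source line contributes THREE separate constraints:
(i==0) -> items[i].apb_op_kind == APB_WR;  // guarded by (i==0)
items[i].addr  == 'b01;   // NOT guarded: applies to every i
items[i].wdata == 'd11;   // NOT guarded: applies to every i
```

The (i==1) line contributes the same way, so the solver ends up with two unconditional, conflicting constraints on addr (== 1 and == 3) for every item.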

The fix is to use && so that the guard applies to all three variables – kind && addr && data.

Instead of:

(i==0) -> items[i].apb_op_kind == APB_WR; items[i].addr == 'b01; items[i].wdata == 'd11;

Use:

(i==0) -> (items[i].apb_op_kind == APB_WR) && (items[i].addr == 'b01) && (items[i].wdata == 'd11);

The moral of the debug session: be careful when using implication constraints with more than a single expression on the right-hand side :-)
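For completeness, here is a sketch of the repaired constraint block – same member names as in the original snippet, with the enclosing class omitted:

```systemverilog
constraint cst_xactn_kind {
  if (this.scenario_kind == this.sc_id)
    this.length == 10;
  foreach (items[i])
  {
    // '&&' keeps all three expressions under the implication guard
    (i==0) -> (items[i].apb_op_kind == APB_WR) &&
              (items[i].addr == 'b01) && (items[i].wdata == 'd11);
    (i==1) -> (items[i].apb_op_kind == APB_WR) &&
              (items[i].addr == 'b11) && (items[i].wdata == 'd12);
  }
}
```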

Tuesday, July 7, 2009

Certificate course on SystemVerilog Assertions …Language + Lab + Mini-project

Certificate course on SystemVerilog Assertions

…Language + Lab + Mini-project

CVC is announcing a new session of its popular 2-day certificate course on SystemVerilog Assertions (ABV_SVA), covering SystemVerilog Assertions in depth. Broadly, it covers the following topics:

  • ABV Introduction
  • SystemVerilog Assertions (SVA)
  • Project – develop a real life Protocol IP (PIP) with SVA

Course contents: http://www.cvcblr.com/trng_profiles/CVC_LG_SVA_profile.pdf

Duration

Here is a detailed breakdown of the course with durations. Note that a "mini project" is tightly embedded in the course, helping to master the topics learned up to that point. This is on top of the regular labs that are part of the training.

Topic                      Duration    Start      End
SystemVerilog Assertions   1.5 days    July 13    July 14
Mini Project II            0.5 day     July 14    July 14

Schedule:

July 13, 14 at Bangalore

To attend this class, confirm your registration by sending an email to training @ cvcblr.com

Ph: +91-9916176014, +91-80-42134156

Please include the following details in your email:

Name:

Company Name:

Contact Email ID:

Contact Number:

Sunday, June 14, 2009

Multi-threaded Verification MtV - taxonomy challenge

This evening I was speaking to a friend of mine about how to effectively utilize the two 4-core machines he has at work for functional verification. He is exploring adding more simulator licenses (did I forget to mention which vendor? :-) ) and was curious whether he needs to add more machines or can better utilize the existing ones.

During the discussion it became apparent that we differed on various terms and their meanings. After the phone call I was casually browsing edn.com and found a seemingly relevant article:

http://www.edn.com/article/CA6662624.html?industryid=47037

But on a quick read it didn't address what I initially thought it would. IMHO, while Multi-threaded Verification (MtV) is becoming mainstream (albeit slowly), it is a good time to agree upon a taxonomy for this new paradigm.

More on this later..

Saturday, March 14, 2009

CCD, My read of Certess technology and positioning

 

With due respect to the technology behind Certess's tool, I have some discomfort with the way it is being positioned – at least in the article below:

http://www.edadesignline.com/howto/215600203;jsessionid=TP12OA3IF1X3UQSNDLOSKHSCJUNN2JVN?pgno=2

Before I talk about my discomfort, let me state the positives: not often do we get to read such a well-written, all-encompassing technical article. Kudos to Mark Hampton – he touches on every aspect of functional verification, which is uncommon in an EDA product "promotional" article (to which category this article unfortunately belongs, IMHO). Having said that, I personally believe Certess should position the technology "along with" existing ones rather than challenging or trying to replace time-tested, well-adopted methodologies such as code coverage and functional coverage. Not that I differ from his views on the shortcomings of these technologies; rather, I go by what Pradip Thakcker said at DVM 08 (http://vlsi-india.org/vsi/activities/2008/dvm-blr-apr08/program.html):

“Code coverage and functional coverage are useful techniques with their own strengths and weaknesses. Rather than worrying about their weaknesses, focus on the positives and use them today”..Pradip, during his “Holistic Verification: Myth or The Magic Bullet?”

I would be very glad if Certess focused on its real strength – exposing the lack of checkers in a verification environment – rather than trying to "eat" into the well-established market of code/functional coverage tools. Another rationale: both coverage and qualification are compute-intensive, and given the amount of EDA investment that has gone into stabilizing and optimizing these features, it would be irrational to try to replace them with "functional qualification" (no offense meant – I have great respect for Mark, given his excellent article and of course the product). With SpringSoft acquiring Certess, hopefully their customer base and reach will increase, and that will throw up more success stories in the coming months and quarters. So good times ahead!

ITG, CCD & ACC - Emerging Verification technologies

Well, it is not the overly hyped *V acronyms here – CRV, CDV, ABV – we at CVC (www.noveldv.com) consider those yesterday's ones, making room for next-generation technologies such as:

  • ACC - Automatic Coverage Closure 
  • ITG - Intelligent Test Generation (such as Graph based)
  • CCD - Covered & Checked implies Done (such as Certess/SpringSoft)

Of these, let me spend more time on the last two, as ACC has already been discussed for a while now (at least more than the other two).

ITG - Intelligent Test Generation (such as Graph based)

ITG is still in its early days. Two tools seem to be addressing this space today:

  1. inFact from Mentor is one big name.
  2. The other, very promising one is Breker Systems, with a very high-profile team behind it. These folks know what they are talking about – their CTO, Adnan, holds 15 patents in test case generation and synthesis.

We at CVC are yet to get our hands dirty with these tools, but they are certainly worth watching. From our early analysis, this technology will help capture more and more system-level tests easily by raising the level of abstraction of testcase specification. This will be fun indeed!

CCD - Covered & Checked implies Done 

Coming to the other category, CCD (yet to find a better name) – this is a topic that has been haunting us for at least a decade now. Ever since I started using functional coverage (early 2000s), we have had this problem of "I got it covered, but did I get it checked too?" During verification of a monster Ethernet switch/router at Intel we hit this problem at least half a dozen times, and those corridor discussions still ring in my ears. The design (read: RTL) manager (Sutapa Chandra) made fun of us, asking "are we taping out the RTL or the testbench?", as we seemed to be finding missing checkers every now and then. Most of these situations were cases of bugs going undetected at block/cluster level and (luckily) getting caught later at full-chip level – then we would do a rigorous review of our block-level environment and find that we indeed had coverage points for those scenarios; we just didn't have enough checkers! Shame, but true. A technology such as Certess's Testbench Qualification was exactly what was needed! A very detailed read on the Certess technology is at: http://www.edadesignline.com/howto/215600203;jsessionid=TP12OA3IF1X3UQSNDLOSKHSCJUNN2JVN?pgno=2

Random testing for VHDL based designs

With so much buzz around CRV (Constrained Random Verification), it is hard to imagine that VHDL-based design teams are staying away from this approach. From our experience at CVC, an SV testbench with a VHDL DUT is not that hard to get working, so except for some additional tool cost (maybe?), it is very much a feasible approach. Meanwhile, with a great push and dedication from Jim Lewis, VHDL itself is fast catching up. See the recent Aldec seminar, for instance: http://www.aldec.com/Events/Event.aspx?companyeventid=74

Implementing Constrained Random Verification with VHDL

Interesting...

Tuesday, February 10, 2009

Specman's compiled vs. interpreted mode




Revisiting the grand old topic of "compiled" vs. "interpreted" simulations, I found the following post interesting with reference to Specman and the e language.

http://www.cadence.com/Community/blogs/fv/archive/2009/02/06/tech-tip-double-wall-clock-performance-with-one-easy-step.aspx

I couldn't agree more with that post – it is really beneficial to explore what can be pushed into compiled code. Some more data points from my own experience during my Intel days:

1. We got a 3-4x gain in compiled mode, though this is 5+ year old data.

2. There are (were?) some restrictions on the use of computed macros across such compiled .so files/partitions. Not sure if they are still around; I don't recall all the details off the top of my head, but if someone is interested I can try to recollect.

3. A very *important* aspect is the debuggability of compiled code. Line stepping gets disabled for compiled portions, so if you need to debug with line stepping, go back to loaded mode.

4. Note that "function"-level breakpoints are still possible in compiled mode.

5. If there is a null object access, compiled mode won't reveal many details, while interpreted/loaded mode will point to the exact issue. We used this facility so often that we in fact automated the process via a script – i.e., on a null object access, a separate process spawns off with the same test/seed but in loaded mode. This was part of the automation we presented at DesignCon (http://www.iec.org/events/2004/designcon_east/pdf/3-wp1.pdf), though this specific trick was left undocumented there.

6. Lastly, we did see a few corner-case scenarios wherein random generation differed between the two modes. Verisity knew about it back then (around 2003?) and said they would fix it sometime in the future. Not sure if it is still an issue. It was not easy to reproduce and was a true corner case, so don't ask me for a "testcase" now :-)

Srini, CVC
www.noveldv.com

Monday, February 9, 2009

Upcoming language updates to IEEE 1647

As I revisit my good times with the "fun" verification language (it is not just "functional" verification – it is truly a fun-to-use language, in terms of usage, application, benefits, etc.), I attended the IEEE 1647 eWG (e Working Group) meeting last evening. Some cool updates are coming in the new language extensions on coverage, constraints, and macros. I am personally interested in coverage and constraints, and I would love to work with active e users in India to see who would like to get involved.

Please do drop me an email if you are interested in contributing to this effort. See: www.ieee1647.org for details.

So good times ahead indeed!
Srini
www.noveldv.com