
Cross-probe between layout view and schematic view

Hi there

I am trying to set up cross-probing between the layout and schematic views.

When I execute the code in the schematic via a bindkey, it raises the layout window (hiRaiseWindow),

and then I want to descend to the same hierarchy as in the schematic (geSelectFig, leHiEditInPlace).

But it looks like the current cellview still stays on the schematic view.

I get this error message, and when I print the current cellview name at the point where the message appears, it reports the schematic:

*Error* geSelectFig: argument #1 should be a database object (type template = "d") - nil

Is there any way to change the current cellview to the layout view?

I also added this code, but it didn't work:

geGetEditCellView(geGetCellViewWindow(cvId)) ;cvId is layout view

I don't want to close the schematic view; I just want to move the focus or make geSelectFig work.
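
For reference, this is roughly the sequence I am trying (the hiSetCurrentWindow call is only my guess at how to move the focus, and figToSelect is just a placeholder for the figure I want to select):

layWin = geGetCellViewWindow(cvId)    ; window displaying the layout cellview
hiRaiseWindow(layWin)                 ; bring the layout window to the front
hiSetCurrentWindow(layWin)            ; guess: make it the current window so ge* calls use it
layCv = geGetEditCellView(layWin)     ; should now return the layout cellview
geSelectFig(figToSelect)              ; figToSelect must be a dbObject in the layout cellview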

Thanks in advance.





μWaveRiders: Thermal Analysis for RF Power Applications

Thermal analysis with the Cadence Celsius Thermal Solver integrated within the AWR Microwave Office circuit simulator gives designers an understanding of device operating temperatures related to power dissipation. That temperature information can be introduced into an electrothermal model to predict the impact on RF performance. (read more)





Knowledge Booster Training Bytes - The Close Connection Between Schematics and Their Layouts in Microwave Office

Microwave Office is Cadence’s tool of choice for RF and microwave designers designing everything from III-V 5G chips to RF systems in board and package technologies. These types of designs require close interaction between the schematic and its layout. A new Training Byte demonstrates how the schematic-layout connection is built into Microwave Office. (read more)





Request information on Tools

We are looking for suitable tools that could be used for RTL design, IP-XACT-based integration (third-party IP), and RTL design verification (SV/UVM-based methodology).

Please share details on the different Cadence tools that are most suitable for these activities.





Conformal LEC can't finish at analyze abort step. How do I proceed?

Hi Cadence & forumers, 

I am running a conformal LEC with a flattened netlist against RTL. 

The run has been hanging for 5 days at the "analyze abort" step, which is automatically launched by the compare.

The netlist is flattened at some levels, so the hierarchical flow I tried didn't help much. The flattened, highly optimized netlist comes from the customer and is the ultimate goal. How should I proceed now?

On a side note, a test run with a hierarchical netlist from a simple DC "compile -map_effort medium" command finished after a day or so.

Thank you! 

// Command: vpx compare -verbose -ABORT_Print -NONEQ_Print -TIMEstamp
// Starting multithreaded comparison ...
Comparing 241112 points in parallel.

// Multithreading Overhead: 38% Gates: 8501606/6168138
// Multithreaded processing completed.
================================================================================
Compared points PO DFF DLAT BBOX CUT Total
--------------------------------------------------------------------------------
Equivalent 1025 241638 30 75 21 242789
--------------------------------------------------------------------------------
Abort 0 124 0 0 0 124
================================================================================
Compare results of instance/output/pin equivalences and/or sequential merge
================================================================================
Compared points DFF Total
--------------------------------------------------------------------------------
Equivalent 204 204
================================================================================
// Warning: 512 DFFs/DLATs have 1 disabled clock port: skipped data cone comparison
// Resolving aborts by analyze abort...





How to identify old Orcad Schematic entry version


Good morning,
I dug up an old project from 2005 and need to open the schematic to check some things.
This is the schematic of a XILINX XC95108-pq160 CPLD, which the XILINX ISE 6.1 software then translated and compiled to generate a JEDEC file for programming the CPLD.

My problem is that I can't open schematics with the versions of Orcad Schematic Entry that I have.
Can anyone help me understand which version of Orcad Schematic Entry I need to install to see these files?

I shared the files on:
drive.google.com/.../view

Thank you very much





Copy-paste a circuit from one schematic design to another

Hi, I have two designs and would like to copy and paste one area of circuitry from the old design into the new design. What is the best way or approach? Any guidance, please.





Regarding the loading of waveform signals in the waveform window using the Tcl command

Hello,

I am trying to load some of the design signals saved in signals.svwf into the waveform window via a Tcl file. I am using the following commands, but nothing works. Can you please help?

-submit waveform loadsignals -using "Waveform 2" FB1.svwf    (this gives me the error below)

-submit waveform new -reuse -name Waveforms
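
Is the problem just the ordering? I would have expected something like the following to be needed (this is only my guess: create or reuse the window first, then load the signals into that same window name):

-submit waveform new -reuse -name "Waveform 2"
-submit waveform loadsignals -using "Waveform 2" signals.svwf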





Conformal CEC checking

Below is my Master.v:

********************************************************************************************************************************************************************************************************************

///////ALU
module ALU (
    input [31:0] A,B,
    input[3:0] alu_control,
    output reg [31:0] alu_result,
    output reg zero_flag
);
    always @(*)
    begin
        // Operating based on control input
        case(alu_control)

        4'b0001: alu_result = A+B;
        4'b0010: alu_result = A-B;
        4'b0011: alu_result = A*B;
        4'b0100: alu_result = A|B;
        4'b0101: alu_result = A&B;
        4'b0110: alu_result = A^B;
        4'b0111: alu_result = ~B;
        4'b1000: alu_result = A<<B;
        4'b1001: alu_result = A>>B;
        4'b1010: begin
            if(A<B)
            alu_result = 1;
            else
            alu_result = 0;
        end
        default: alu_result = A+B;

        endcase

        // Setting Zero_flag if ALU_result is zero
        if (alu_result)
            zero_flag = 1'b1;
        else
            zero_flag = 1'b0;   
    end
endmodule


/////CONTROL UNIT
/*
The control unit takes the opcode, funct7, and funct3 fields of the instruction code to determine
and control regwrite in the IFU and alu_control in the ALU so that the proper instruction is executed.
*/
module CONTROL(
    input [4:0] opcode,
    output reg [3:0] alu_control,
    output reg regwrite_control,memread_control,memwrite_control
);
    always @(opcode)
    begin
       case(opcode)
        5'b00001: begin alu_control=4'b0001;  //add
        regwrite_control=1; memread_control=0; memwrite_control=0;
        end
        5'b00010: begin alu_control=4'b0010;  ///sub
        regwrite_control=1; memread_control=0; memwrite_control=0;
        end
        5'b00011: begin alu_control=4'b0011;  //mul
        regwrite_control=0; memread_control=0; memwrite_control=1;
        end
        5'b00100: begin alu_control=4'b0100;  ///OR
        regwrite_control=0; memread_control=0; memwrite_control=1;
        end
        5'b00101: begin alu_control=4'b0101;  ///AND
        regwrite_control=1; memread_control=0; memwrite_control=0;
        end
        5'b00110: begin alu_control=4'b0110;  ///XOR
        regwrite_control=0; memread_control=0; memwrite_control=1;
        end
        5'b00111: begin alu_control=4'b0111;  ///NOT
        regwrite_control=0; memread_control=0; memwrite_control=1;
        end
        5'b01000: begin alu_control=4'b1000;  //SL
        regwrite_control=1; memread_control=1; memwrite_control=0;
        end
        5'b11001: begin alu_control=4'b1001;  //SR
        regwrite_control=1; memread_control=1; memwrite_control=0;
        end
        5'b01010: begin alu_control=4'b1010;  //COMPARE
        regwrite_control=1; memread_control=1; memwrite_control=0;
        end
        //5'b11010: begin ALU_control=4'b0000;  //SW
        //regwrite_control=1; memread_control=0; memwrite_control=0;
        //end
        //5'b01010: begin ALU_control=4'bxxxx;  //LW
        //regwrite_control=0; memread_control=0; memwrite_control=1;
        //end
        default : begin alu_control = 4'b0001;
        regwrite_control=1; memread_control=0; memwrite_control=0;
        end
        endcase  
    end
endmodule



//////DATA MEMORY
module Data_Mem(
input clock, rd_mem_enable, wr_mem_enable,
input [11:0] address,
input [31:0] datawrite_to_mem,
output reg [31:0] dataread_from_mem );

reg [31:0] Data_Memory[8:0];

initial begin
    Data_Memory[0] = 32'hFFFFFFFF;
    Data_Memory[1] = 32'h00000001;
    Data_Memory[2] = 32'h00000005;
    Data_Memory[3] = 32'h00000003;
    Data_Memory[4] = 32'h00000004;
    Data_Memory[5] = 32'h00000000;
    Data_Memory[6] = 32'hFFFFFFFF;
    Data_Memory[7] = 32'h00000000;
    //Data_Memory[8] = 32'h00000008;
    //Data_Memory[9] = 32'h00000009;
    //Data_Memory[10] = 32'h0000000A;
    //Data_Memory[11] = 32'h0000000B;
    //Data_Memory[12] = 32'h0000000C;
    //Data_Memory[13] = 32'h0000000D;
    //Data_Memory[14] = 32'h0000000E;
    //Data_Memory[15] = 32'h0000000F;
    //Data_Memory[16] = 32'h00000010;
    //Data_Memory[17] = 32'h00000011;
    //Data_Memory[18] = 32'h00000012;
    //Data_Memory[19] = 32'h00000013;
    //Data_Memory[20] = 32'h00000014;
    //Data_Memory[21] = 32'h00000015;
    //Data_Memory[22] = 32'h00000016;
    //Data_Memory[23] = 32'h00000017;
    //Data_Memory[24] = 32'h00000018;
    //Data_Memory[25] = 32'h00000019;
    //Data_Memory[26] = 32'h0000001A;
    //Data_Memory[27] = 32'h0000001B;
    //Data_Memory[28] = 32'h0000001C;
    //Data_Memory[29] = 32'h0000001D;
    //Data_Memory[30] = 32'h0000001E;
    Data_Memory[31] = 32'h0000001F;
       
    end
    always@(posedge clock) begin
       if(wr_mem_enable) begin
            Data_Memory[address] <= datawrite_to_mem;
       end
       else if(rd_mem_enable) begin
               dataread_from_mem <= Data_Memory[address];
       end
       else begin
               dataread_from_mem <= 32'h00000000;
       end
    end
endmodule   



/////INST MEM
/*

*/
module INST_MEM(
    input [31:0] PC,
    input reset,
    output [31:0] Instruction_Code
);
    reg [7:0] Memory [43:0]; // Byte-addressable memory with 44 locations

    
    assign Instruction_Code = {Memory[PC+3],Memory[PC+2],Memory[PC+1],Memory[PC]};

    
    
    initial begin
            // Setting 32-bit instruction: add t1, s0,s1 => 0x00940333
            Memory[3] = 8'b0000_0000;
            Memory[2] = 8'b0000_0001;
            Memory[1] = 8'b0111_1100;
            Memory[0] = 8'b0000_0001;
            // Setting 32-bit instruction: sub t2, s2, s3 => 0x413903b3
            Memory[7] = 8'b0000_0000;
            Memory[6] = 8'b0000_0110;
            Memory[5] = 8'b1000_1111;
            Memory[4] = 8'b1110_0010;
            // Setting 32-bit instruction: mul t0, s4, s5 => 0x035a02b3
            Memory[11] = 8'b0000_0000;
            Memory[10] = 8'b0000_0101;
            Memory[9] = 8'b0111_1100;
            Memory[8] = 8'b0000_0011;
            // Setting 32-bit instruction: or t3, s6, s7 => 0x017b4e33
            Memory[15] = 8'b1111_1111;
            Memory[14] = 8'b1111_0100;
            Memory[13] = 8'b1010_0000;
            Memory[12] = 8'b1010_0100;
            // Setting 32-bit instruction: and
            Memory[19] = 8'b0000_0000;
            Memory[18] = 8'b0010_1001;
            Memory[17] = 8'b0001_1101;
            Memory[16] = 8'b0010_0101;
            // Setting 32-bit instruction: xor
            Memory[23] = 8'b0000_0000;
            Memory[22] = 8'b0001_1000;
            Memory[21] = 8'b0000_1101;
            Memory[20] = 8'b0110_0110;
            // Setting 32-bit instruction: not
            Memory[27] = 8'b0000_0000;
            Memory[26] = 8'b0010_1001;
            Memory[25] = 8'b0011_1101;
            Memory[24] = 8'b1100_0111;
            // Setting 32-bit instruction: shift left
            Memory[31] = 8'b0000_0000;
            Memory[30] = 8'b0101_0111;
            Memory[29] = 8'b1100_0110;
            Memory[28] = 8'b0000_1000;
            // Setting 32-bit instruction: shift right
            Memory[35] = 8'b0000_0000;
            Memory[34] = 8'b0110_1010;
            Memory[33] = 8'b1101_0010;
            Memory[32] = 8'b0111_1001;
            /// Setting 32-bit instruction: Compare
            Memory[39] = 8'b0000_0000;
            Memory[38] = 8'b0111_1010;
            Memory[37] = 8'b1101_0010;
            Memory[36] = 8'b0110_1010;
            /// Setting 32-bit instruction:
            Memory[43] = 8'b0000_0000;
            Memory[42] = 8'b0111_0111;
            Memory[41] = 8'b1101_0010;
            Memory[40] = 8'b0111_0010;
        end
   

endmodule

//IFU
/*
The instruction fetch unit has clock and reset pins as input and 32-bit instruction code as output.
Internally the block has Instruction Memory, Program Counter(P.C) and an adder to increment counter by 4,
on every positive clock edge.
*/
module IFU(
    input clock,reset,
    output [31:0] Instruction_Code
);
reg [31:0] PC = 32'b0;  // 32-bit program counter is initialized to zero

    
    always @(posedge clock, posedge reset)
    begin
        if(reset == 1)  //If reset is one, clear the program counter
        PC <= 0;
        else
        PC <= PC+4;   // Increment program counter on positive clock edge
    end
    // Instantiating the instruction memory block
    INST_MEM instr_mem(.PC(PC),.reset(reset),.Instruction_Code(Instruction_Code));

endmodule


///MUX

module Mux_2X1 (
    input mem_rd_select, // rd_mem_enable
    input wire [31:0] dataread_from_mem, regdata2,

    output reg [31:0] mux_out
);

always @(mem_rd_select or dataread_from_mem or regdata2) begin
    if (mem_rd_select == 1)
        mux_out <= dataread_from_mem ;
    else
        mux_out <= regdata2;
    end
endmodule

//DFlipFlop
module DFlipFlop(D,clock,Q);
input D; // Data input
input clock; // clock input
output reg Q; // output Q
always @(posedge clock)
begin
 Q <= D;
end
endmodule

///DATA path


module DATAPATH(
    input [4:0]Read_reg_add1,
    input [4:0]Read_reg_add2,
    input [4:0]Reg_write_add,
    input [3:0]Alu_control,
    input [11:0]Address,
    input Wr_reg_enable,Wr_mem_enable,Rd_mem_enable,
    input clock,
    input reset,
    output OUTPUT
    );

    // Declaring internal wires that carry data
    wire zero_flag;
    wire [31:0]Dataread_from_mem;
    wire [31:0]read_data1;
    wire [31:0]read_data2;
    wire [31:0]Mux_out;
    wire [31:0]Alu_result;
    //wire [31:0]datawrite_to_reg;

    // Instantiating the register file
    REG_FILE reg_file_module(.reg_read_add1(Read_reg_add1),.reg_read_add2(Read_reg_add2),.reg_write_add(Reg_write_add),.datawrite_to_reg(Alu_result),.read_data1(read_data1),.read_data2(read_data2),.wr_reg_enable(Wr_reg_enable),.clock(clock),.reset(reset));

    // Instantiating the ALU
    ALU alu_module(.A(read_data1), .B(Mux_out), .alu_control(Alu_control), .alu_result(Alu_result), .zero_flag(zero_flag));
    
    //Mux
    Mux_2X1 mux(.mem_rd_select(Rd_mem_enable),.dataread_from_mem(Dataread_from_mem),.regdata2(read_data2),.mux_out(Mux_out));

    //Data Memory
    Data_Mem DM(.clock(clock),.rd_mem_enable(Rd_mem_enable),.wr_mem_enable(Wr_mem_enable),.address(Address),.datawrite_to_mem(Alu_result),.dataread_from_mem(Dataread_from_mem));
    
    // Dflipflop
    DFlipFlop DF (.D(zero_flag), .Q(OUTPUT),.clock(clock));
endmodule


/*
A register file can read two registers and write in to one register.
The RISC V register file contains total of 32 registers each of size 32-bit.
Hence 5-bits are used to specify the register numbers that are to be read or written.
*/

/*
Register Read: Register file always outputs the contents of the register corresponding to read register numbers specified.
Reading a register is not dependent on any other signals.

Register Write: Register writes are controlled by a control signal RegWrite.  
Additionally the register file has a clock signal.
The write should happen if RegWrite signal is made 1 and if there is positive edge of clock.
*/
module REG_FILE(
    input [4:0] reg_read_add1,
    input [4:0] reg_read_add2,
    input [4:0] reg_write_add,
    input [31:0] datawrite_to_reg,
    output [31:0] read_data1,
    output [31:0] read_data2,
    input wr_reg_enable,
    input clock,
    input reset
);

    reg [31:0] reg_memory [31:0]; // 32 memory locations each 32 bits wide
    
    initial begin
        reg_memory[0] = 32'h00000000;
        reg_memory[1] = 32'hFFFFFFFF;
        reg_memory[2] = 32'h00000002;
        reg_memory[3] = 32'hFFFFFFFF;
        reg_memory[4] = 32'h00000004;
        reg_memory[5] = 32'h01010101;
        reg_memory[6] = 32'h00000006;
        reg_memory[7] = 32'h00000000;
        reg_memory[8] = 32'h10101010;
        reg_memory[9] = 32'h00000009;
        reg_memory[10] = 32'h0000000A;
        reg_memory[11] = 32'h0000000B;
        reg_memory[12] = 32'h0000000C;
        reg_memory[13] = 32'h0000000D;
        reg_memory[14] = 32'h0000000E;
        reg_memory[15] = 32'h0000000F;
        reg_memory[16] = 32'h00000010;
        reg_memory[17] = 32'h00000011;
        reg_memory[18] = 32'h00000012;
        reg_memory[19] = 32'h00000013;
        reg_memory[20] = 32'h00000014;
        reg_memory[21] = 32'h00000015;
        //reg_memory[22] = 32'h00000016;
        //reg_memory[23] = 32'h00000017;
        //reg_memory[24] = 32'h00000018;
        //reg_memory[25] = 32'h00000019;
        //reg_memory[26] = 32'h0000001A;
        //reg_memory[27] = 32'h0000001B;
        //reg_memory[28] = 32'h0000001C;
        //reg_memory[29] = 32'h0000001D;
        //reg_memory[30] = 32'h0000001E;
        reg_memory[31] = 32'hFFFFFFFF;
    end

    // The register file will always output the values corresponding to the read register numbers
    // It is independent of any other signal
    assign read_data1 = reg_memory[reg_read_add1];
    assign read_data2 = reg_memory[reg_read_add2];

    // If clock edge is positive and regwrite is 1, we write data to specified register
    always @(posedge clock)
    begin
        if (wr_reg_enable) begin
            reg_memory[reg_write_add] = datawrite_to_reg;
        end     
    else
        reg_memory[reg_write_add] = 32'h00000000;
    end
endmodule


/////PROCESSOR


module PROCESSOR(
    input clock,
    input reset,
    output Output
);

    wire [31:0] instruction_Code;
    wire [3:0] ALu_control;
    wire WR_reg_enable;
    wire WR_mem_enable;
    wire RD_mem_enable;


    IFU IFU_module(.clock(clock), .reset(reset), .Instruction_Code(instruction_Code));
    
    CONTROL control_module(.opcode(instruction_Code[4:0]),.alu_control(ALu_control),.regwrite_control(WR_reg_enable),.memread_control(RD_mem_enable),.memwrite_control(WR_mem_enable));
    
    DATAPATH datapath_module(.Wr_mem_enable(WR_mem_enable),.Rd_mem_enable(RD_mem_enable),.Read_reg_add1(instruction_Code[9:5]),.Read_reg_add2(instruction_Code[14:10]),.Reg_write_add(instruction_Code[19:15]),.Address(instruction_Code[31:20]),.Alu_control(ALu_control),.Wr_reg_enable(WR_reg_enable), .clock(clock), .reset(reset), .OUTPUT(Output));
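    // Instruction fields assumed by the wiring above: opcode = bits [4:0], source registers = bits [9:5] and [14:10], destination register = bits [19:15], data memory address = bits [31:20]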

endmodule

**********************************************************************************************************************************************************

Below is my Synthesis.tcl file for Genus synthesis:

********************

set_attribute lib_search_path "/home/sameer23185/Desktop/VDF_PROJECT/lib"
set_attribute hdl_search_path "/home/sameer23185/Desktop/VDF_PROJECT"
set_attribute library "/home/sameer23185/Desktop/VDF_PROJECT/lib/90/fast.lib"
read_hdl Master.v
elaborate
read_sdc Min_area.sdc
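# The attributes below preserve unused and constant registers so the mapped netlist keeps the same flops as the RTL, which helps LEC key-point mapping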
set_attribute hdl_preserve_unused_register true
set_attribute delete_unloaded_seqs false
set_attribute optimize_constant_0_flops false
set_attribute optimize_constant_1_flops false
set_attribute optimize_constant_latches false
set_attribute optimize_constant_feedback_seqs false
#set_attribute prune_unsued_logic false
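# Map the design to gates, then write out the netlist, constraints, script, and reports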
synthesize -to_mapped -effort medium
write_hdl > report/HDL_min_Netlist.v
write_sdc > report/constraints.sdc
write_script > report/synthesis.g
report_timing > report/synthesis_timing_report.rep
report_power > report/synthesis_power_report.rep
report_gates > report/synthesis_cell_report.rep
report_area > report/synthesis_area_report.rep
gui_show

**********************************************

When I compare my Golden.v with HDL_min_Netlist.v in Conformal, I get non-equivalent points for every reg memory and every data memory. I don't know what to do with these non-equivalent points, and I've been stuck here for the past four days. Please help me with this: how can I resolve these non-equivalent points? Since I am new to this, I really don't know what to do.





How to tell Conformal to ignore a certain combination of inputs

hi

How can I tell the LEC tool to ignore a particular combination of a primary input bus in both the Golden and Revised designs?

For example, in both Golden and Revised there is:

input [3:0] data_in

I want LEC not to check the case where data_in[3:0] == 4'b1000.





Unmapped points

Hi ,

I am using Conformal v23.2 for LEC checking between netlist and netlist. I am getting 8 not-mapped points (z) in the Revised design, but when I check in the mapping manager it shows 0 not-mapped points and instead lists these 8 not-mapped points in the extra unmapped section as z(f) snps_scan_out_6. How do I resolve this issue? Please help.

regards,





5X “Time Warp” in Your Next Verification Cycle Using Xcelium Machine Learning

Artificial intelligence (AI) is everywhere. Machine learning (ML) and its associated inference abilities promise to revolutionize everything from driving your car to making your breakfast. Verification is never truly complete; it is over when you run...(read more)





Coalesce Xcelium Apps to Maximize Performance by 10X and Catch More Bugs

Xcelium Simulator has been in the industry for years and is the leading high-performance simulation platform. As designs are getting more and more complex and verification is taking longer than ever, the need of the hour is plug-and-play apps that ar...(read more)





JEDEC UFS 4.0 for Highest Flash Performance

Speed requirements keep increasing in all the domains around us, and the same applies to memory storage. Earlier mobile devices used eMMC-based flash storage, which was a significantly slower technology. With increased SoC processing speed, pairing it with slow eMMC storage was becoming a bottleneck. That is when the modern storage technology Universal Flash Storage (UFS) started to gain popularity.

UFS is a simple, high-performance mass storage device with a serial interface. It is primarily used in mobile systems between the host processor and mass storage memory devices. Another important reason for using UFS in mobile systems such as smartphones and tablets is its minimal power consumption.

To achieve the highest performance and most power-efficient data transport, JEDEC UFS works in collaboration with industry-leading specifications from the MIPI® Alliance to form its Interconnect Layer. MIPI UniPro is used as a transport layer, and MIPI MPHY is used as a physical layer with the serial DpDn interface. 

 

The UFS 4.0 specification is the latest from JEDEC; it leverages the UniPro 2.0 and MPHY 5.0 standards to achieve the following major improvements:

  • Enables up to 4200 MB/s of read/write traffic with MPHY 5.0, allowing a 23.29 Gbps data rate.
  • High-Speed Link Startup, along with Out-of-Order Data Transfer and the BARRIER command, was introduced to improve system latencies.
  • Data security is enhanced with Advanced RPMB. Advanced RPMB also uses the EHS field of the header, which reduces the number of commands required compared to normal RPMB, increasing the bandwidth.
  • Enhanced Device Error History was introduced to ease system integration. 
  • File Based Optimization (FBO) was introduced for performance enhancement. 

Along with many major enhancements, UFS 4.0 also maintains backward compatibility with UFS 3.0 and UFS 3.1. 

JEDEC has just announced the release of the UFS 4.0 specification, citing Cadence as a consistent contributor in the JEDEC UFS Task Group, actively participating in the development of these specifications.

With the availability of the Cadence Verification IP for JEDEC UFS 4.0, MIPI MPHY 5.0 and MIPI UniPro 2.0, early adopters can start working with the provisional specification immediately, ensuring compliance with the standard and achieving the fastest path to IP and SoC verification closure.  

More information on Cadence VIP is available at the Cadence VIP Website. 

 

Yeshavanth B N 





Modern Thermal Analysis Overcomes Complex Design Issues

Melika Roshandell, Cadence product marketing director for the Celsius Thermal Solver, recently published an article in Designing Electronics discussing how the use of modern thermal analysis techniques can help engineers meet the challenges of today’s complex electronic designs, which require ever more functionality and performance to meet consumer demand.

Today’s modern electronic designs require ever more functionality and performance to meet consumer demand. These requirements make scaling traditional, flat, 2D-ICs very challenging. With the recent introduction of 3D-ICs into the electronic design industry, IC vendors need to optimize the performance and cost of their devices while also taking advantage of the ability to combine heterogeneous technologies and nodes into a single package. While this greatly advances IC technology, 3D-IC design brings about its own unique challenges and complexities, a major one of which is thermal management.

To overcome thermal management issues, a thermal solution that can handle the complexity of the entire design efficiently and without any simplification is necessary. However, because of the nature of 3D-ICs, the typical point tool approach that dissects the design space into subsections cannot adequately address this need. This approach also creates a longer turnaround time, which can impact critical decision-making to optimize design performance. A more effective solution is to utilize a solver that not only can import the entire package, PCB, and chiplets but also offers high performance to run the entire analysis in a timely manner.

Celsius Thermal Management Solutions

Cadence offers the Celsius Thermal Solver, a unique technology integrated with both IC and package design tools such as the Cadence Innovus Implementation System, Allegro PCB Designer, and Voltus IC Power Integrity Solution. The Celsius Thermal Solver is the first complete electrothermal co-simulation solution for the full hierarchy of electronic systems from ICs to physical enclosures. Based on a production-proven, massively parallel architecture, the Celsius Thermal Solver also provides end-to-end capabilities for both in-design and signoff methodologies and delivers up to 10X faster performance than legacy solutions without sacrificing accuracy.

By combining finite element analysis (FEA) for solid structures with computational fluid dynamics (CFD) for fluids (both liquid and gas, as well as airflow), designers can perform complete system analysis in a single tool. For PCB and IC packaging, engineering teams can combine electrical and thermal analysis and simulate the flow of both current and heat for a more accurate system-level thermal simulation than can be achieved using legacy tools. In addition, both static (steady-state) and dynamic (transient) electrical-thermal co-simulations can be performed based on the actual flow of electrical power in advanced 3D structures, providing visibility into real-world system behavior.

Designers are already co-simulating the Celsius Thermal Solver with Celsius EC Solver (formerly Future Facilities’ 6SigmaET electronics thermal simulation software), which provides state-of-the-art intelligence, automation, and accuracy. The combined workflow that ties Celsius FEA thermal analysis with Celsius EC Solver CFD results in even higher-accuracy models of electronics equipment, allowing engineers to test their designs through thermal simulations and mitigate thermal design risks.

Conclusion

As systems become more densely populated with heat-dissipating electronics, the operating temperatures of those devices impact reliability (device lifetime) and performance. Thermal analysis gives designers an understanding of device operating temperatures related to power dissipation, and that temperature information can be introduced into an electrothermal model to predict the impact on device performance. The robust capabilities in modern thermal management software enable new system analyses and design insights. This empowers electrical design teams to detect and mitigate thermal issues early in the design process—reducing electronic system development iterations and costs and shortening time to market.

To learn more about Cadence thermal analysis products, visit the Celsius Thermal Solver product page and download the Cadence Multiphysics Systems Analysis Product Portfolio.





Database Maintenance: DBDoctor

The DBDoctor application checks the database for errors and other problems, and presents a report about them. DBDoctor supports .brd, .mcm, .mdd, .psm, .dra, .pad, .sav, and .scf databases.

DBDoctor can:

  • Analyze and fix database problems.
  • Eliminate duplicate vias.
  • Perform batch design rule checking (DRC).
  • Upgrade databases more than one revision old.

To verify the integrity of a drawing database at any time during the design cycle, run DBDoctor at regular intervals, and make sure you always run it after completing a design.

You can run DBDoctor to verify work in progress, or from a terminal window outside the layout editor, perhaps to check multiple input designs in batch mode by using wildcards and various switches. You do not have to run the layout editor to use DBDoctor.

To run this from Allegro X APD and Allegro PCB Editor, go to Tools > Database Check.

  

You can also go to the Start menu and select Cadence PCB Utilities 2023 > PCB DB Doctor 2023.

  

You can also use the following command to run DBDoctor in batch mode in the system command prompt:

dbdoctor [-check_only] [-drc] [-drc_only] [-shapes] [-no_backup] [-outfile <newboardname.brd>]
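
For example, a hypothetical batch invocation that only checks and reports problems without modifying the database (assuming the drawing name is passed as the final argument):

dbdoctor -check_only my_board.brd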

 

Comment below if you want to know more about this command and its integration with SKILL programming!!





Maximizing Display Performance with Display Stream Compression (DSC)

Display Stream Compression (DSC) is a visually lossless or near-lossless image compression standard developed by the Video Electronics Standards Association (VESA) to reduce the bandwidth required to transmit high-resolution video and images. DSC compresses video streams in real time, allowing for higher resolutions, refresh rates, and color depths while minimizing the data load on transmission interfaces such as DisplayPort, HDMI, and embedded display interfaces.

Why Is DSC Needed?

In the ever-evolving landscape of display technology, the pursuit of higher resolutions and better visual quality is relentless. As display capabilities advance, so do the challenges of managing the immense amounts of data required to drive these high-performance screens. This is where DSC steps in. DSC is designed to address the challenges of transmitting ultra-high-definition content without sacrificing quality or performance. As displays grow in resolution and capability, the amount of data they need to transmit increases exponentially. DSC addresses these issues by compressing video streams in real-time, significantly reducing the bandwidth needed while preserving image quality.
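
As a rough illustrative calculation (the display parameters here are assumptions chosen for the example, not figures from the VESA specification): a 3840 x 2160 panel refreshed at 60 Hz with 30-bit color (10 bpc RGB) needs about 3840 × 2160 × 60 × 30 ≈ 14.9 Gbps of raw pixel bandwidth, whereas compressing the same stream to 12 bpp with DSC brings it down to roughly 6 Gbps, about a 2.5x reduction in link bandwidth.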
 

DSC Use in End-to-end System

DSC Key Features

  • Encoding tools:
    • Modified Median-Adaptive Prediction (MMAP)
    • Block Prediction (BP)
    • Midpoint Prediction (MPP)
    • Indexed color history (ICH)
    • Entropy coding using delta size unit-variable length coding (DSU-VLC)
  • The DSC bitstream and decoding process are designed to facilitate the decoding of 3 pixels/clock in practical hardware decoder implementations. Hardware encoder implementations are possible at 1 pixel/clock.
  • DSC uses an intra-frame, line-based coding algorithm, which results in very low latency for encoding and decoding.

DSC encoding algorithm
 

  • Compression can be done to a fractional bpp. The compressed bits per pixel ranges from 6 to 63.9375.
  • For validation/compliance certification of DSC compression and decompression engines, cyclic redundancy checks (CRCs) are used to verify the correctness of the bitstream and the reconstructed image.
  • DSC supports more color bit depths, including 8, 10, 12, 14, and 16 bpc.
  • DSC supports RGB and YCbCr input format, supporting 4:4:4, 4:2:2, and 4:2:0 sampling.
  • Maximum decompressor-supported bits/pixel values are as listed in the Maximum Allowed Bit Rate column in the table below

  • DP DSC Source device shall program the bit rate within the range of Minimum Allowed Bit Rate column in the table:

          


Summary

Display Stream Compression (DSC) is a technology used in DisplayPort to enable higher resolutions and refresh rates while maintaining high image quality. It works by compressing the video data transmitted from the source to the display, effectively reducing the bandwidth required. DSC uses a visually lossless algorithm, meaning that the compression is designed to be imperceptible to the human eye, preserving the fidelity of the image. This technology allows for smoother, more detailed visuals at higher resolutions, such as 4K or 8K, without requiring a significant increase in data bandwidth.

More Information

  • Cadence has a very mature Verification IP solution. Verification over many different configurations can be used with DisplayPort 2.1 and DisplayPort 1.4 designs, so you can choose the best version for your specific needs.
  • The DisplayPort VIP provides a full-stack solution for Sink and Source devices with a comprehensive coverage model, protocol checkers, and an extensive test suite.
  • More details are available on the DisplayPort Verification IP product page and the Simulation VIP pages.
  • If you have any queries, feel free to contact us at talk_to_vip_expert@cadence.com





Use Verisium SimAI to Accelerate Verification Closure with Big Compute Savings

Verisium SimAI App harnesses the power of machine learning technology with the Cadence Xcelium Logic Simulator - the ultimate breakthrough in accelerating verification closure. It builds models from regressions run in the Xcelium simulator, enabling the generation of new regressions with specific targets. The Verisium SimAI app also features cousin bug hunting, a unique capability that uses information from difficult-to-hit failures to expose cousin bugs. With these advanced machine learning techniques, Verisium SimAI offers the potential for a significant boost in productivity, promising an exciting future for our users.

Figure 1: Regression compression and coverage maximization with Verisium SimAI 

What can I do with Verisium SimAI?

You can exercise different use cases with Verisium SimAI as per your requirements. For some users, the goal might be regression compression and improving coverage regain. Coverage maximization and hitting new bins could be another goal. Other users may be interested in exposing hard-to-hit failures, bug hunting for difficult to find issues. Verisium SimAI allows users to take on any of these challenges to achieve the desired results.

Let's go into some more details of these use cases and scenarios where using SimAI can have a big positive impact.

  1. Using SimAI for Regression Compression and Coverage Regain

Unlock up to 10X compute savings with SimAI!

Verisium SimAI can be used to compress regressions and regain coverage. This flow involves setting up your regression environment for SimAI, running your random regressions with coverage and randomization data followed by training, and finally, synthesizing and running the SimAI-generated compressed regressions. The synthesized regression may prune tests that do not help meet the goal and add more runs for the most relevant tests, as well as add run-specific constraints. This flow can also be used to target specific areas like areas involving a high code churn or high complexity.

You can check out the details of this flow with illustrative examples in the following Rapid Adoption Kits (RAK) available on the Cadence Learning and Support Portal (Cadence customer credentials needed):

 

  2. Using SimAI for Coverage Maximization and Targeting Coverage Holes

Reduce your Functional Coverage Holes by up to 40% using SimAI!

Verisium SimAI can be used for iterative coverage maximization. This is most effective when regressions are largely saturated, and SimAI will explicitly try to hit uncovered bins, which may be hard-to-hit (but not impossible) coverage holes. This is achieved using iterative learning technology where with each iteration, SimAI does some exploration and determines how well it performed. This technique can also be used for bug hunting by using holes as targets of interest.

See more details on the Cadence Learning and Support Portal:

 

  3. Using SimAI for Bug Hunting

Discover and fix bugs faster using SimAI!

Verisium SimAI has a new bug hunting flow which can be used to target the goal of exposing hard-to-hit failure conditions. This is achieved using an iterative framework and by targeting failures or rare bins. The goal to target failures is best exercised when the overall failure rate is typically low (below 5%). Iterative learning can be used to improve the ability to target specific areas. Use the SimAI bug hunting use case to target rare events, low hit coverage bins, and low hit failure signatures.

See more details on the Cadence Learning and Support Portal:

Unlock compute savings, reduce your functional coverage holes, and discover and fix bugs faster with the power of machine learning technology now enabled by Verisium SimAI!

Please keep visiting  https://support.cadence.com/raks to download new RAKs as they become available.

Please note that you will need your Cadence customer credentials to log on to Cadence Online Support (https://support.cadence.com/), your 24/7 partner for getting help in resolving issues related to Cadence software or learning Cadence tools and technologies.

Happy Learning!





Jasper Formal Fundamentals 2403 Course for Starting Formal Verification

The course "Jasper Formal Fundamentals v24.03" introduces formal analysis to those who want to use formal analysis for design or verification. 

To optimally benefit from this course, you must already have sufficient knowledge of SystemVerilog assertions to be capable of writing properties for formal verification. Hence, this training provides a module on formal analysis to help cover this essential background.

In this course, you will learn how to code efficient SVA Properties for formal analysis, understand formal complexity and how to overcome it, and learn the basics of formal coverage.

After completing this course, you will be able to:

  • Define reusable, functionally correct SVA properties that are efficient for formal tools. These shall use abstract auxiliary code to simplify descriptions, make code maintenance easier, reduce debug time, and reduce the tool's proof runtime.
  • Set up, run, and analyze results from formal analysis.
  • Identify designs upon which formal is likely to be successful while understanding formal complexity issues and how to identify and overcome them.
  • Use a systematic property development process to approach a completely new verification problem.
  • Understand the basics of formal coverage.

 The most recently updated release includes new modules on:

  • "Basic complexity handling" which discusses the complexity in formal and how to identify and handle them.
  • "Complexity reduction methods” which discusses the complexity reduction methods and which is suitable for which type of complexity problem.
  • “Coverage in formal” which discusses the basics of coverage in formal verification and how coverage can be used in formal.   

Take this course to learn the basics of formal verification. 

What's Next? 

You can check out the complete training: Jasper Formal Fundamentals. There is a free online version of the training available 24/7 for all customers with a Cadence Learning and Support Portal account. If you are interested in an instructor-led version of the training, please contact Cadence Training. And don't forget to obtain your digital badge after completing the training!

You can also check Jasper University page for more materials on formal analysis and Jasper apps. 

Related Trainings 

Jasper Formal Expert Training Course | Cadence

Verilog Language and Application Training Course | Cadence

SystemVerilog for Design and Verification Training Course | Cadence

SystemVerilog Assertions Training Course | Cadence

Related Training Bytes 

Jasper Formal Property Verification (FPV) App: Basic Usage Demo (Video)

Jasper Formal Methodology playlist

Related Training Blogs

It’s the Digital Era; Why Not Showcase Your Brand Through a Digital Badge!

Training Insights: Introducing the C++ Course for All Your C++ Learning Needs!

Training Insights: Reaching Your Verification Closure Using Verisium Manager

Training Insights - Free Online Courses on Cadence Learning and Support Portal





Unveiling the Capabilities of Verisium Manager for Optimized Operations

In SoC development, the verification cycle is a crucial phase that ensures products meet their specifications and function correctly. However, the complexity of modern SoC projects, with their constant data flow, multiple validation teams working in parallel, and tight schedules, presents significant challenges. This article explores these challenges and introduces Verisium Manager as a solution that embodies the 'One Tool Fits All' concept. This means that Verisium Manager is designed to handle all aspects of the verification process for SoC development, from planning to coverage analysis to regression testing, thereby addressing the complex needs of SoC verification.

The Hurdles in Traditional Validation Cycles

 A typical validation process involves planning, coverage analysis, and regression testing. This complexity is compounded by using separate tools for each activity, leading to multiple control environments, APIs, and databases, not to mention the array of tool owners. Such fragmentation results in constant data transfer and translation between systems, from the planning tool to the coverage analysis tool and then to the regression testing tool. This continuous movement of data causes delays, system instability, poor user experiences, and, ultimately, a dip in the quality of the validation process.

The use of multiple platforms leads to inefficiency and reduced productivity. What's needed is a unified system that can streamline the workflow, simplify the verification process, and enhance its effectiveness.

Envisioning the Ideal Solution: Verisium Manager

 The cornerstone of an efficient validation cycle is integration and simplicity. The ideal solution is a singular platform that consolidates planning, coverage analysis, and regression management into one smooth, unified process. Verisium Manager emerges as this much-needed solution, encompassing all the functionalities necessary to streamline the validation process. Its comprehensive nature instills confidence in its ability to handle all aspects of the verification cycle. It can be fully customized to address and enforce any validation methodology and can facilitate smooth integration into any customer environment.

Features that stand out in Verisium Manager include: 

  • Unified Workflow: It acts as a single cockpit from which all activities are orchestrated, ensuring the validation teams' work is uninterrupted and seamlessly integrated.
  • Customization and Integration: Verisium Manager supports customizing test-plan structures and mapping results per project, ensuring a perfect fit for various project requirements. Its ability to smoothly integrate into the project's environment and compute platforms is unparalleled.
  • Support for Continuous Updates and Migration: The tool accommodates constant updates to project data and supports the migration of legacy data, ensuring that no historical data is lost in the transition to a new system.

Addressing Project-Specific Needs

 Verisium Manager recognizes diversity in different projects and offers project-specific solutions, including:

  • Enforcing Project Test-Plan Structures and Attributes: It supports and enforces each project's unique test-plan structure and mapping guidelines.

  • Unified Data Views and Measurements: Verisium Manager promotes a unified view of data across all teams and enforces unified measurements, ensuring consistency and clarity in the validation process.
  • Enabling Project-Specific Actions and Integrations: The tool is designed to support project-specific actions directly from its graphical user interface and allows for smooth integration with in-house databases, dashboards, and the project execution stack.

Verisium Manager is the epitome of efficiency in software/hardware validation. Its differentiating features, such as support for customization, unified data view, and comprehensive coverage and regression requirements, make it an indispensable tool for any validation team looking to elevate their workflow.





Lessons from the UMass Lowell Women’s Leadership Conference

This post was contributed by Liliko Uchida, application engineer at Cadence.

Being a “Woman in STEM” is a phrase that has long been used to describe the holistic experience shared by thousands of women globally, yet it still makes us feel isolated. Partially due to the statistics of gender population in the STEM workforce and the remainder due to our own internal obstacles, being a woman in STEM continues to be a challenge. While many of us know the should-do’s and should-be’s of taking on this unique role objectively, we struggle to implement them. After all, our perseverance as engineers, mathematicians, businesswomen, programmers, and scientists is largely affected by subjectivity. The UMass Lowell Women’s Leadership Conference 2024 aimed to tackle this problem by uniting hundreds of women with shared experiences under one roof. Not only did the conference provide us with the knowledge necessary to persevere, but it also gave us the tools that will allow us to thrive and act upon the facts we already know. It is my hope that through this blog post, I can share some of my main takeaways from this special day.

Be Confident

This is one of the most palpable pieces of advice we always hear. Yet so many of us struggle to build this confidence because we don’t know how. Featured speaker Nicole Kalil defined confidence as “complete trust in oneself.” One way to build this self-trust is by getting to know yourself on a deeper level. By creating a true inner connection, we begin to see ourselves as a whole instead of hyper-focusing on our shortcomings frequently illusioned by imposter syndrome. In one of the sessions, we were asked to introduce ourselves to our neighbors, not by what we do for work, but by who we are as a person. Even if this opportunity does not arise every day, this practice can be done simply by listing characteristics of yourself that define who you are. Who do you care for? How do you show them? What are your life goals oriented towards? How do you observe others’ behavior around you, and what does that say about how you make them feel? Getting to know yourself beneath the surface and allowing yourself to be seen for who you are is critical in building internal confidence. With practice, this self-reassurance will grow independent of external factors.

Take Risks

“Sometimes, you have to put your foot in the elevator” - Barb Vlacich, Keynote Speaker

When opportunities arise, the only thing you can do to have a chance is to try. Without putting your foot in the elevator, the doors will close, becoming a missed opportunity. Similarly, several of the conference’s speakers also emphasized that the answer to every unasked question will always be a no. Even if you are not ready to full-send a negotiation, ask for a raise, or respectfully disagree with a co-worker’s opinion, start by getting comfortable asking uncomfortable questions. Just one discomfort a day will help in building an immunity to the anxiety that comes with taking risks, typically driven by our self-doubt. Another interesting point that stood out from the conference was the statistics of self-assessed qualifications between men and women. During the negotiation panel, it was revealed that men typically feel they only need 60% of the qualifications under a job description to apply, whereas women often feel they need close to 100%. These numbers alone demonstrate how the pure mental habits of men continue to funnel them into STEM and not women.
The next time you seek a new opportunity, assess yourself based on the 60% and use it as a checklist threshold. The more women who pursue STEM careers using these numbers, the more we will begin to populate these roles.

Build Your Genuine Network

“The essence of communication lies in the mutual exchange of ideas and emotions. And when the listener isn’t invested, it undermines the entire purpose of the conversation. Why are you having it anyway?” This is a quote from episode 186 of Julie Brown’s podcast This Sh!t Works called “The 5 Steps to Being an Active Listener.” Julie Brown is a networking coach, author, and podcast host who guided an energetic and candid conversation about networking and building a personal brand for women. Networking is often misunderstood as putting your name and qualifications out on the table for as many people as possible to pick up your cards. While making these things known is important, they are not what nurtures effective connections. The key to cultivating your genuine network is to activate a sincere interest in the people you meet. Become the proactive receiver of the confidence exercise discussed above. When you meet someone new, what can you take away from them as a person, not an employee? By making people feel heard, even through the little conversations, you can begin to develop more meaningful connections that resonate. And, with practice, the sometimes inherent need to overcompensate by defining yourself with your resume will slowly fade.

It was a wonderful opportunity to attend the UML Women’s Leadership Conference with four other inspiring Cadence women. Not only was the conference a motivating learning experience, but it was also a wonderful opportunity for us to bond together as women and feel supported by each other. The most eye-opening part of the day was seeing just how many women alike were sitting under the same roof. The conclusion of the event left me feeling proud to be an engineer, proud to be at Cadence, and, most importantly, proud to be a woman. Learn more about life at Cadence.





Solutions to Maximize Data Center Performance Featured at OCP Global Summit 2024

The demand for higher compute performance, energy efficiency, and faster time-to-market drove the conversations at this year's Open Compute Project (OCP) Global Summit in San Jose, California. It was the scene of showcasing groundbreaking innovations, expert-led sessions, and networking opportunities to drive the future of data center technology. For those who didn't get to attend or stop by our booth, here's a recap of Cadence's comprehensive solutions that enable next-generation compute technology, AI data center design, analysis, and optimization.

Optimized Data Center Design and Operations

As the data center community increasingly faces demands for enhanced efficiency, thermal management, sustainability, and performance optimization, data center operators, IT managers, and executives are looking for solutions to these challenges. At the Cadence booth, attendees explored the Cadence Reality Digital Twin Platform and Celsius EC Solver. These technologies are pivotal in achieving high-performance standards for AI data centers, providing advanced digital twin modeling capabilities that redefine next-generation data center design and operation. The Celsius EC Solver demonstration showed how it solves challenging thermal and electronics cooling management problems with precision and speed.

CadenceCONNECT: Take the Heat Out of Your AI Data Center

Cadence hosted a networking reception on October 16 titled "Take the Heat Out of Your AI Data Center." In today's AI era, managing the heat generated by high-density computing environments is more critical than ever. This reception offered insights into current and emerging data center technologies, digital twin cooling strategies that deliver energy-saving operations, and a chance to engage with industry leaders, Cadence experts, and peers to explore the latest cooling, AI, and GPU acceleration advancements. Here's a recap:

Researcher, author, and entrepreneur Dr. Jon Koomey highlighted the inefficiency of data centers in his talk "The Rise of Zombie Data Centers," noting that 20-30% of their capacity is stranded and unused. He advocated for organizational changes and technological solutions like digital twins to reduce wasted energy and improve computational effectiveness as AI deployments increase.

In "A New Millennium in Multiphysics System Analysis," Cadence Corporate VP Ben Gu explained the company's significant strides in multiphysics system analysis, evolving from chip simulation to a broader application of computational software for simulating various physical systems, including entire data centers. He noted that the latest Cadence venture, a digital twin platform for data center optimization, opened the opportunity to use simulation technology to optimize the efficiency of data centers.

Senior Software Engineering Group Director Albert Zeng highlighted the Cadence Reality DC suite's ability to transform data center operations through simulation, emphasizing its multi-phase engine for optimal thermal performance and the integration of AI capabilities for enhanced design and management.

A panel discussion titled "Turning AI Factory Blueprints into Reality at the Speed of Light" featured industry experts from NVIDIA, Norman Wright Precision Environmental and Power, NV5, Switch Data Centers, and Cadence, who explored the evolving requirements and multidimensional challenges of AI factories, emphasizing the need for collaboration across the supply chain to achieve high-performing and sustainable data centers.

Watch the highlights.
Transforming Designs from Chips to Data Centers

The OCP Global Summit 2024 has reaffirmed its status as a pivotal event for data center professionals seeking to stay at the forefront of technological advancements. Cadence's contributions, from groundbreaking digital twin technologies to innovative cooling strategies, have shed light on the path forward for efficient, sustainable data centers. For data center professionals, IT managers, and engineers, the insights gained at this summit are invaluable in navigating the challenges and opportunities presented by the burgeoning AI era.

Partnering with Arm

Arm Total Design

Cadence is a member of the Arm Total Design program. At an invitation-only special Arm event, Cadence's VP of Research and Development, Lokesh Korlipara, delivered a presentation focusing on data center challenges and design solutions with Arm Neoverse Compute Subsystem (CSS). The session highlighted:

  • Efficient integration of Arm Neoverse CSS into system on chips (SoCs) with pre-integrated connectivity IP
  • Performance analysis and verification of the Neoverse CSS integration into the SoC through Cadence's System VIP verification suite and automated testbench creation, enhancing both quality and productivity
  • Jumpstarting designs through Cadence's collaboration with Arm for 3D-IC system planning, chiplets, and interposers
  • Design Services readiness and global scale to support and/or deliver the most demanding Arm Neoverse CSS-based SoC design projects

Cadence Supports Arm CSS in Arm Booth

During the event, Cadence conducted a demo in the Arm booth that showcased the Cadence System VIP verification suite. The demo highlighted automated testbench creation and performance analysis for integrating the Arm CSS into SoCs while enhancing verification quality and productivity.

Summary

Cadence offers data center solutions for designing everything from the compute and networking chips to the board, racks, data centers, and campuses. Stay connected with Cadence and other industry leaders to continue exploring the innovations set to redefine the future of data centers.

Learn More

Cadence Joins Arm Total Design

Cadence Arm-Based Solutions

Cadence Reality Digital Twin Platform




ma

Celebrating Milestones: The Cadence Bangalore Toastmasters Club’s Journey

On November 5, 2024, the Cadence Bangalore Toastmasters Club celebrated a significant milestone by hosting its 50th meeting. Established in December 2020, the club was created to provide a supportive environment for individuals looking to improve their communication and leadership skills. Over the years, the club has evolved into a vibrant community filled with success stories of personal development and newfound confidence.

A testament to the club's dedication is its achievement of the "Select Distinguished Club" status during the 2023-2024 program year. By fulfilling 7 out of 10 distinguished goals, the club highlighted its commitment to excellence—a success driven by its vibrant members' relentless focus and perseverance. The strategic insight gained from regular Toastmasters committee meetings and the influential "Moments of Truth" sessions held in 2023 and 2024 are key to this success.

Our club members have consistently demonstrated strong performance in various speech contests, with notable achievements across multiple levels. In 2023, members excelled in Evaluation and Table Topics contests, reaching the district level while advancing to the Division Level in the International Speech Contest. Continuing their success into 2024, members again qualified for area-level contests, securing third-place positions in the Evaluation and Table Topics categories, highlighting the club's dedication and competitive spirit.

The 50th meeting was based on the theme of serendipity. It was not only a milestone celebration but also a vibrant festival of achievements and growth. The day buzzed with energy as activities like a spirited Treasure Hunt injected enthusiasm and camaraderie among attendees. Distinguished guests, including Kripa Venkitachalam and Madhavi Rao, enriched the occasion with inspiring speeches. Madhavi reignited the club's spirit, while Kripa's discourse on the Growth Mindset and the "Power of Yet" encouraged members to pursue continuous self-improvement.

The Cadence Bangalore Toastmasters Club is enthusiastic about its promising future and is committed to creating an environment that promotes personal and professional growth. Many members are close to completing their Toastmasters levels and pathways, and this term, a new group of approximately 30 individuals has joined, bringing the total membership to 52. This vibrant community is just beginning its journey and is eager to reach new milestones together through mutual support and a shared commitment to excellence.

The transformations experienced by many club members are truly compelling. They often share how the club has significantly improved their communication skills and boosted their confidence. One member recalls, "Before joining, I found public speaking intimidating. Now, I embrace every opportunity to share my ideas." Another member highlights how the club's supportive environment helped him overcome his fear of public speaking, propelling his career to new heights. This culture of constructive feedback and continuous improvement has inspired countless members to pursue their dreams with renewed determination and optimism.

The Cadence Bangalore Toastmasters Club's journey is a living testament to the power of community and the potential within each of us to grow and achieve greatness. As the club continues to evolve and inspire, it serves as a beacon for those aspiring to transform their skills and seize their moment in the spotlight.

Learn more about life at Cadence.




ma

Replace Cache using TCL command

Hello,

I'm using OrCAD 17.2, and at the company I'm working at there was a change in the library folder (from drive F to G, for example), which affects the synchronize option in the Part Manager. Changing each part manually in the Design Cache can be a pain.

Is there any way I can make a TCL script that will run and replace one part's cache entry with another? It would be better if I could read from a table, taking the old name from one column and the new name from another.

I would really appreciate an example.

Thanks for the help.




ma

Path mapping for C Firmware source files when debugging

Hi,

I am compiling firmware under Windows and transferring the binaries and the sources to Linux to simulate/debug there. The problem is that the paths in the DWARF debug info of the .elf file are the absolute Windows paths set by the compiler, so they are useless under Linux. Is it possible to configure mappings from these paths to the Linux paths when simulating/debugging, like GDB's "set substitute-path" does (https://sourceware.org/gdb/current/onlinedocs/gdb/Source-Path.html#index-set-substitute_002dpath)?

thx,

Peter




ma

MATLAB cannot open PSpice; the "orCEFSimpleUI.exe has stopped working" prompt appears!

Cadence_SPB_17.4-2019 + Matlab R2019a

Please refer to the steps in this document:

1. Open BJT_AMP.opj

2. Set the MATLAB path

3. Open BJT_AMP_SLPS.slx

4. After opening it, set up the PSpiceBlock; the "orCEFSimpleUI.exe has stopped working" message appears

5. Add the block

6. Same result

7. Open pspsim.slx

8. Same result

9. Open C:\Cadence\Cadence_SPB_17.4-2019\tools\bin

orCEFSimpleUI.exe and orCEFSimple.exe are both there

10. Same result

I would like to ask how to solve this. Thank you very much!




ma

Formal Verification Approach for I2C Slave

Hello,

I am new in formal verification and I have a concept question about how to verify an I2C Slave block.

I think the response should be valid for any serial interface that needs to receive information over several clocks before taking an action.

The protocol description is the following:

I have a serial clock (SCL), Serial Data Input (SDI) and Serial Data Output (SDO), all are ports of the I2C Slave block.

The protocol looks like this:

The first byte received by the slave consists of 7 bits of sensor address, and the 8th bit is the command: 0/1 for Write/Read.

After the first 8 bits, the slave sends an ACK (SDO = 1 for 1 clock) if the sensor address is correct.

Let's consider only this case, where I want to verify that the slave responds with an ACK if the sensor address is correct.

The only solution I found so far was to use the internal buffer from the block which saves the received bits during 8 clocks. The signal is called shift_s.

I also needed to use internal chip state (state_s) and an internal counter (shift_count_s).

Instead of doing a direct check of the SDO (sdo_o) depending on SDI (sdi_d_i), I used the internal shift_s register.

My question is whether my approach is the correct one, or whether it is possible to write the verification at a blackbox level.

Below are the two properties: the first checks the connection from SDI to the internal buffer, the second checks the connection between the internal buffer and the output.

property prop_i2c_sdi_store;
  @(posedge sclk_n_i)
  $past(i2c_bl.state_s == `STATE_RECEIVE_I2C_ADDR)
    |-> i2c_bl.shift_s == byte'({ $past(i2c_bl.shift_s), $past(sdi_d_i)});
endproperty
APF_I2C_CHECK_SDI_STORE: assert property(prop_i2c_sdi_store);

property prop_i2c_sensor_addr(sens_addr_sel, sens_addr);
@(posedge sclk_n_i) (i2c_bl.state_s == `STATE_RECEIVE_I2C_ADDR) && (i2c_addr_i == sens_addr_sel) && (i2c_bl.shift_count_s == 7)
  ##1 (i2c_bl.shift_s inside {sens_addr, sens_addr+1}) |-> sdo_o;
endproperty
APF_I2C_CHECK_SENSOR_ADDR0: assert property(prop_i2c_sensor_addr(0, `I2C_SENSOR_ADDRESS_A0));
APF_I2C_CHECK_SENSOR_ADDR1: assert property(prop_i2c_sensor_addr(1, `I2C_SENSOR_ADDRESS_A1));
APF_I2C_CHECK_SENSOR_ADDR2: assert property(prop_i2c_sensor_addr(2, `I2C_SENSOR_ADDRESS_A2));
APF_I2C_CHECK_SENSOR_ADDR3: assert property(prop_i2c_sensor_addr(3, `I2C_SENSOR_ADDRESS_A3));

PS: i2c_addr_i is address selection for the slave (there are 4 configurable sensor addresses, but this is not important for the case).
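
For reference, this is the kind of blackbox-style check I was hoping to write instead. It is only a sketch: frame_start is a hypothetical marker for the first SCL edge of the address byte (I don't currently have such a signal at the pins), and everything else uses only the block's ports:

property prop_i2c_ack_blackbox(sens_addr_sel, sens_addr);
  byte unsigned addr_byte;
  @(posedge sclk_n_i)
  ((frame_start && (i2c_addr_i == sens_addr_sel)), addr_byte = '0)
    ##0 (1'b1, addr_byte = {addr_byte[6:0], sdi_d_i}) [*8]         // shift in the 7 address bits plus the R/W bit from the pin
    ##1 (addr_byte inside {sens_addr, sens_addr + 1}) |-> sdo_o;   // ACK expected on the following clock
endproperty
APF_I2C_CHECK_ACK_BLACKBOX_ADDR0: assert property(prop_i2c_ack_blackbox(0, `I2C_SENSOR_ADDRESS_A0));

If something like this is reasonable, the remaining question is how to model frame_start without reaching into the RTL.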

Thank you!




ma

Issue With Loudness Normalization

Hello everyone. In recent days, I've been having a weird problem with the sound output on my Windows 10 PC: I can't control its loudness. Could the PCB of the sound card be damaged?




ma

How to turn a vavlog IO width mismatch error into a warning?

Hi, all.

When I use vavlog to compile Verilog RTL, it treats an IO port width mismatch as a fatal error.
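
For context, here is a minimal, purely hypothetical example of the kind of connection that triggers it:

module sub (input [7:0] din);
endmodule

module top;
  wire [3:0] narrow;
  sub u_sub (.din(narrow)); // 4-bit net tied to an 8-bit port: reported as an IO width mismatch
endmodule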

How can I turn this error into a warning?

VCS can use -error=noIOPCWM to ignore the error.

Does vavlog have a similar argument?




ma

The code used to Replace Cache using a TCL command

Use the DBO function DboLib_ReplaceCache to do the job of "Replace Cache".

To make the job easier, use the code below. The code is a wrapper around the function mentioned above.

set lStatus [DboState]
set lSession $::DboSession_s_pDboSession
DboSession -this $lSession
set lDesignsIter [$lSession NewDesignsIter $lStatus]
set lDesign [$lDesignsIter NextDesign $lStatus]
set lNullObj NULL

set oldLibName [DboTclHelper_sMakeCString {E:\PROJECT_WORKLIB.OLB}] ;# braces keep the backslash literal (inside double quotes, Tcl drops the backslash)
set newLibName [DboTclHelper_sMakeCString {E:\MCU_PARTS_LIB.OLB}]

#DboLib_ReplaceCache wrapper
proc ReplaceCacheByName {partName} {
    global oldLibName
    global newLibName
    global lDesign
    set lPartStr [DboTclHelper_sMakeCString $partName]
    #set lNewStr [DboTclHelper_sMakeCString $newName]
    $lDesign ReplaceCache $lPartStr $oldLibName $lPartStr $newLibName 0 1
}

Then use a TCL command like the one below to do the real job:

ReplaceCacheByName "CL10B104KB8NNNC_C12"




ma

Doc Assistant A-Z: Making the Most of the Cadence Cloud-Based Help Viewer: Pt. 2

At a bustling Cadence event, we met Adrian, an intern at a startup who immerses himself in Cadence tools for his research and work.

Adrian was enthusiastic about the innovative technologies at his disposal but faced a significant challenge: internet access was limited to a single machine for new joiners, forcing interns to wait in line for their turn to use online resources.

Adrian's excitement soared when he discovered a game-changing solution: Doc Assistant. This cloud-based help viewer ships with all Cadence tools, enabling Adrian to access help resources offline from any machine equipped with the software. This meant Adrian could continue his research and work seamlessly, irrespective of internet availability!

Meeting Cadence users and customers at such events has given us the opportunity to showcase how they can benefit from the diverse features that Doc Assistant offers.

With that note, welcome back to our Doc Assistant A-Z blog series! In Part 1, we explored key features and benefits that our innovative viewer brings to the table. Today, in Part 2, we'll dive deeper into the advanced functionalities and customization options that make Doc Assistant indispensable for its users.

Whether you're looking to streamline your workflow or enhance your user experience, this blog will provide the insights you need to fully leverage the capabilities of our documentation viewer. Let’s get started!

What Makes Doc Assistant Stand Out?

Here are a few (more) cool features of Doc Assistant!

History and Bookmarks: Want to refer to the topic you read last week? Of course, you can! Doc Assistant stores your browsing activity as History. You can also bookmark topics and revisit them later.

Indexing Capabilities: Looking for seamless search capabilities? The advanced indexing capabilities of Doc Assistant enhance the accessibility and manageability of documents. Doc Assistant automatically creates a search index if it is missing or broken.

Jump Links: Worried about scrolling through lengthy topics? Fret no more! Use the jump links in each topic to quickly navigate to different sections within the same topic or across topics. Jump links reduce the need for excessive scrolling and let you access relevant content swiftly.

Just-in-Time Notifications: Looking for alerts and messages? That’s supported. Doc Assistant displays notifications about important events, including errors, warnings, information, and success messages.

Keyword-Based Search Suggestions: You somewhat know your search keyword, but not quite sure? No worries. Just start typing what you know. Keyword and page suggestions are displayed dynamically as you type, providing a more sophisticated and intuitive search experience.

Library-Switch Support: Want to view documents from other libraries? Doc Assistant, by default, displays documents for the currently active release in your machine. You can access documents from other releases by configuring the associated documentation libraries.

Multimedia Support: Want to view product demos? Multimedia support in Doc Assistant lets you play videos, listen to audio, and view images without opening any external application.

Navigation Made Easy: Worried that you’ll get lost in an infinite doc loop? Not at all. The intuitive navigation controls in Doc Assistant are designed to provide you with a fluid and efficient experience. The Doc Assistant user interface is clean and logically organized, with easy-to-access documentation links.

That's not all. We have more coming your way. Until next time, take care and stay tuned for our next edition!

Want to Know More?

Here's a video about Doc Assistant
Visit the Doc Assistant web page
Read the Doc Assistant FAQ document

For any questions or general feedback, write to docassistant.support@cadence.com.

Subscribe to receive email notifications about our latest Custom IC Design blog posts.

Happy reading!

-Priya Sriram, on behalf of the Doc Assistant Team




ma

Doc Assistant A-Z: Making the Most of the Cadence Cloud-Based Help Viewer Part 3

Welcome back to the Doc Assistant A-Z blog series!

Since the launch of Doc Assistant, we've been gathering feedback and input from our customers regarding their experiences with our latest documentation viewer. My interaction with Ralf was particularly useful and interesting.

Ralf is a design engineer who works on complex schematics and intricate layouts. For each release, he is challenged with the task of verifying the tool and feature changes across multiple releases. He shared with me that he has been using Doc Assistant’s capabilities to help him achieve this.

Ralf explained that he utilizes Doc Assistant to open and compare documents from different releases side-by-side, seamlessly tracking updates across multiple releases and verifying those updates in his Cadence tools. Additionally, in Doc Assistant’s online mode, he compares documents across previous tool versions, ensuring a thorough review of any changes. Finally, he was happy to share with me that Doc Assistant features have helped him significantly reduce the time he spends on identifying such changes.

You, of course, can also achieve such productivity gains using several Doc Assistant features designed to help simplify such tasks!

In previous editions of this blog series, we looked at some key features and benefits of Doc Assistant. If you've missed these editions, I would highly recommend that you read them:

In this third installment, we're diving into some more of Doc Assistant's key capabilities.

Open Multiple Documents

Want to refer to multiple docs at the same time? That’s easy!

Open each doc on a separate tab in Doc Assistant. 

Personalized Content Recommendations

Is it a hassle to navigate through all docs each time? You don’t have to.

You can tailor your Doc Assistant preferences to match your content requirements.

PDF Support

Do you prefer downloading and reading a PDF instead of an HTML?

That’s also supported.

Quick Access to Relevant Search Results

Are you pressed for time, and yet want to run a comprehensive doc search? You’re covered.

In online mode, search runs on all available product documentation, and the results are listed from multiple sources.

Resource Links

Looking for more information about a topic you’ve just read? That’s handy.

Look out for content recommendations!

Share Content

Want to share a useful doc with the rest of your team? That’s easy.

With a single click, Doc Assistant lets you share content with one or more readers.

Submit Feedback

Your feedback is important to us. Use the Submit Feedback feature to share your comments and inputs.

To learn more about how to use the above features, check out the Doc Assistant User Guide.

These are just a few of the productivity gain features in Doc Assistant. We’ll cover more in the next blog in the series.

Want to Know More?

Here's a video about Doc Assistant
Visit the Doc Assistant web page
Read the Doc Assistant FAQ document

If you have any feedback on Doc Assistant or would like to request more information or a demo, please contact docassistant.support@cadence.com.

Subscribe to receive email notifications about our latest Custom IC Design blog posts.

Happy reading!

Priya Sriram, on behalf of the Doc Assistant Team




ma

How to magnify a board on a film view

I have a small board that is not readable even though the document is 11" x 17". Is there a way I could expand/magnify the board along with the components on it to make them legible?
I have created a new film that displays the bottom and top sides of the board, but the board is too small and the components are not legible. Perhaps there is a way to upscale or expand it?
Please note that I have other content in the document that I am not showing (notes and other things), and I am trying to make just the board look bigger in some way.
I do have a PDF image of the same file where the board appears MUCH bigger and is fully legible, and I am trying to match that.


Thank you all.




ma

Noise summary data per sub-block in Maestro output expressions

Hi,

I have a question about printing noise summary via maestro output expressions.

How can I print noise data using output expressions, for multiple levels of the hierarchy?

I have found this article, which describes the procedure using the ocnGenNoiseSummary() function: https://support.cadence.com/apex/ArticleAttachmentPortal?id=a1Od0000007MViHEAW&pageName=ArticleContent

I also see Andrew Beckett referring to the above-mentioned article as the solution to a similar question: community.cadence.com/.../noise-summary-per-instance

However, this seems to work only if I'm to extract noise data from a single level of hierarchy.

If I have the output expression "ocnGenNoiseSummary(2 ?result 'hbnoise)", it will generate a "noisesummary" directory under results directory for a hierarchy level of 2.

If I am to extract data from various hierarchy levels, I should be able to generate multiple noise summary directories, such as noisesummary1, noisesummary2 where they correspond to "ocnGenNoiseSummary(1 ?result 'hbnoise)" & "ocnGenNoiseSummary(2 ?result 'hbnoise)", respectively. However this does not seem to be possible.

Can you please advise? Thanks.

My Cadence version: IC23.1-64b.ISR7.27

BR,

Denizhan Karaca




ma

Display Resource Editor: Different Colors for Schematic and Layout Axis

Hi

In the environment I'm currently working in, axes are shown for schematic, symbol, and layout views. For schematics and symbols, I'd prefer a dim gray, such that the axes are just visible but not dominant. For the layout, I'd prefer a brighter color. Is there a way to realize this? So far, when I change the color of the 'axis' layer in the Display Resource Editor, the axes in all three views get changed together:

Thanks very much for your input!




ma

Import LEF file failed due to layermap

Hi,

I have a LEF file with simple definitions of pad design which uses M8, M9, and AP layers. However, I failed to import the design with CIW > Import > LEF... as I encountered "ERROR: (OALEFDEF-90019): Ignoring the line 30 in the layer map file ... as it contains a syntax error. Each entry in the layer map file must have two values, LEFLayerName and OALayerNumber separated by a blank space." All lines in the file report the same OALEFDEF-90019 error.

The tech.layermap file looks like this:


# techLayer       techPurpose     stream# dataType

ref drawing 0 0
DNW drawing 1 0
PW drawing 2 0




ma

Transient Simulation waveform abnormal

Hello Everybody

Recently, I have been designing a high-output power amplifier at 2.4 GHz in the TSMC 1P6M CMOS bulk process. I use its nmos_rf_25_6t transistor model to determine the approximate MOSFET size.

I use the most common topology, a common-source differential amplifier with a neutralizing capacitor, to improve its stability and power gain.

Because I want to output large power, the MOSFET is very large (the gate width is about 2 mm). When I perform harmonic balance analysis, everything is fine; the OP1dB is about 28 dBm (0.63 W).

But when I perform a transient simulation, the magnitude of the voltage and current waveforms at the saturation point is too small: the voltage peak is about 50 mV and the current peak is about 5 mA.

I can think of several possible reasons: the BSIM4 model is not complete; the Virtuoso version is wrong (my Virtuoso version is IC6.1.7-64b.500.21); the Spectre version is wrong (15.1.0, 32-bit); the MMSIM version is wrong; or the transient simulation settings are wrong (the algorithm selected is gear2only, but when I select others, like trap, the results show no difference). The maxstep I set is 5 ps and the minstep 2 ps to improve simulation speed; I think this step is much smaller than the fundamental period (1/2.4e9 ≈ 416 ps).

I have no idea how to solve this problem, please help me! Thank you very very much!




ma

How to get maximum value of s11 Trace

Hello

I did an SP analysis and now I want to extract the maximum value of the S11 trace and the corresponding frequency.

I already tried ymax() in the calculator, but I suspect it only works on transient signals.




ma

Force virtuoso (Layout XL) to NOT create warning markers in design

Hi

I have a rather strange question: is there a way to tell Layout XL to NOT place the error/warning markers on a design when I open a cell? I do a lot of my layout by building arrays from placed instances and creating mosaics that completely ignore the metadata Layout XL uses for its bindings with the schematic (instances get deleted, etc.), although I do like using it to generate all my pins. It's just really annoying when I open a design that I know is LVS clean and, because the connectivity metadata is all screwed up (since I did not use it to actually complete the layout), the design just blinks at me at every gate, source, and drain. I typically delete the markers hierarchically at the top level, but the second I go in, modify something, and come back up, it places all of them again. I know that if I flatten all the pcells the markers go away, but sometimes it's nice to keep that piece of metadata. Is there a way to "break" these features of XL? I realize what a weird question this is, but it's becoming more of an issue since we moved to IC23 from IC6, where there is no longer a Layout L that I can use free from these annoyances (even though Layout L couldn't use any of the connectivity metadata).

Thanks

Chris




ma

How to Set Up a Config View to Easily Switch Between Schematic and Calibre of DUT for Multiple Testbenches?

Hello everyone,

I hope you're all doing well. I’ve set up two testbenches (TB1 and TB2) for my Design Under Test (DUT) using Cadence IC6.1.8-64b.500.21 tools, as shown in the attached figure. The DUT has multiple views available: schematic, Calibre, Maestro, and Symbol, and each testbench uses the same DUT in different scenarios. Currently, I have to manually switch between these views, but I would like to streamline this process.

My goal is to use a single config view that allows me to switch between the schematic and the extracted (Calibre) views. Ideally, I would like to have a configuration file where making changes once would update both testbenches (TB1 and TB2) automatically. In other words, when I modify one config, both testbenches should reflect this update for a single simulation run.

I would really appreciate it if you could guide me on the following:

  1. How to create a config view for my DUT that can be used to easily switch between the schematic and extracted views, impacting both TB1 and TB2.
  2. Where to specify view priorities or other settings to control which view is used during simulation.
  3. Best practices for using a config file in this scenario, so that it ensures consistency across multiple testbenches.

Please refer to the attached figure to get a better understanding of the setup I’m using, where both TB1 and TB2 include the same DUT with multiple available views.

Thank you so much for your time and assistance!




ma

Importing ODF to vManager does not update vplan

I exported a vplan to an .odf file in vManager and, after editing it, imported it back into vManager. I expected the vplan to be synchronized and updated; however, nothing has changed in it. Does anyone know why?




ma

Using Vmanager Pre-Script to launch a timed script

I would like to send an update about a vmanager regression status x days after the regression has been run. In the current environment, the vmanager regression is creating a new filepath for logs automatically based on regression name/date, so I can't use a cron job to gather logs, as the log location is not known. 


I tried to use the pre session script to launch a detached shell script that would run after a delay, but when the pre_script runs, it waits until everything is completed before finishing and moving on to starting the regression.

Here is the test pre_script I am using:

#!/bin/sh

echo "pre_script start"

delay_script "FIRST" 1
nohup delay_script "SECOND" 30 & disown
delay_script "THIRD" 1

echo "pre_script end"
exit 0

Here is the test delay_script I am using:

#!/bin/sh

echo "Starting $1"

sleep $2

echo "Ending $1"

Here is the script output when run from terminal. After the "pre_script end", I get control back.

Here is the script output when run from vmanager. There is no "nohup", and the pre_session phase doesn't complete until all the delay scripts complete.


My question is, is there a better way to achieve my goal here? (The goal being to run a script from the vManager log directory automatically x days after the regression.) I think I could use the pre_script to send directory information for an auxiliary cron job to pick up, but I would prefer not to need extra cron jobs for this.




ma

vManager crashes when analyzing multiple sessions simultaneously with a fatal error detected by the Java Runtime Environment

When analyzing multiple sessions simultaneously, Verisium Manager crashed and reported the error messages below:

# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGSEGV (0xb) at pc=0x00007efc52861b74, pid=14182, tid=18380
#
# JRE version: OpenJDK Runtime Environment Temurin-17.0.3+7 (17.0.3+7) (build 17.0.3+7)
# Java VM: OpenJDK 64-Bit Server VM Temurin-17.0.3+7 (17.0.3+7, mixed mode, sharing, tiered, compressed oops, compressed class ptrs, g1 gc, linux-amd64)
# Problematic frame:
# C  [libucis.so+0x238b74]

......

For more details please refer to the attached log file "hs_err_pid21143.log".

Two approaches were tried to solve this problem, but neither has worked.
Method.1:

Setting a larger heap size for the Java process with the "-memlimit" option, for example "vmanager -memlimit 8G".

Method.2:

Enlarging the stack memory size limit of the coverage engine by setting the "IMC_NATIVE_STACKSIZE" environment variable to a larger value, for example "setenv IMC_NATIVE_STACKSIZE 1024000".

According to "hs_err_pid*.log" it is almost certain that the memory overflow triggered Java's CrashOnOutOfMemoryError and caused Verisium Manager to crash. There are some arguments about memory management of Java like "Xms, Xmx, ThreadStackSize, Xss5048k etc" and maybe this problem can be fixed by setting these arguments during analysis. However, how exactly does Verisium Manager specify these arguments during analysis? I tried to set them by the form of setting environment variables before analysis but it didn't work in analysis and their values didn't change.

Is there something wrong with my operation or is there a better solution?

Thank you very much.




ma

Is it possible to automatically exclude registers or wires that are not used from toggle coverage?

Hello,

I have a question about toggle coverage.

In my case, there are many unused registers or wires that are affecting the toggle coverage score negatively.

Is it possible to automatically exclude registers or wires that are not used from toggle coverage?

My RTL code is as follows. Is it possible to automatically exclude tb.top1.b and tb.top1.c without using an exclude file?

module top1;
  reg a;
  reg b;
  reg [31:0] c;

  initial
  begin
    #1 a=1'b0;
    #1 a=1'b1;
    #1 a=1'b0;
  end
endmodule

module tb;
  top1 top1();
endmodule




ma

Issues related to cadence xrun command

We are trying to run compilation, elaboration, and simulation with the command "xrun -r -u alu", where alu is one of the units to execute. We are getting the following errors.

1) xmsim: *E,DLMKDF: Unable to add default DEFINE std       /home/xxxx/Cad/xcelium/tools/inca/files/STD.
    xmsim: *E,DLMKDF: Unable to add default DEFINE synopsys  /home/xxxx/Cad/xcelium/tools/inca/files/SYNOPSYS


2) xmsim: *W,DLNOHV: Unable to find an 'hdl.var' file to load in.

What is the purpose of hdl.var?

3) xmsim: *F,NOSNAP: Snapshot 'alu' does not exist in the libraries.

I cannot see in the log files which libraries it is referring to.

Could anyone help with how to debug these?




ma

Xcelium: dump coverage information in the middle of a simulation

Hi, I'm using the xcelium simulator to simulate a testbench, in which I first stimulate my design to do something (part "A") and then do a direct follow-up test on the design (part "B").

I need two things from this testbench: the results of the test (part "B", passed/failed) and coverage information, but the coverage information should only include part A and explicitly not part B.

I could do the following: run the testbench with parts A and B, get the "passed/failed" result of the test, and then follow up with another simulator run using a testbench that only includes part A, and get the coverage information from that run.

Is there a way to force xcelium to give me the coverage information of only a part of the simulation? Ideally, I would like to write the verilog code of my testbench to look something like this:

  • do A
  • dump coverage information
  • do B

But maybe there is another way to tell Xcelium to consider only part of the testbench for the coverage information. I did have a look at the manual but was not able to find anything useful for this problem. Any ideas?




ma

Collecting Coverage using Vmanager

Hi, 

I am running a regression in order to collect coverage. However, I have an issue. I am setting a signal to 0 when reset is de-asserted; then this signal takes a fixed value when the reset is asserted.

if (!rst_n)
  init_val = 'b0;
else
  init_val = 31'h34013FF7;

The issue is that I get 0% coverage for init_val, since we only have a rising edge and the signal does not toggle back during the simulation. Is there an option to collect coverage when there is only a rising edge or a falling edge?
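
For what it's worth, the workaround I was considering is to cover the load event functionally instead of relying on toggle coverage of every bit. This is only a sketch, and it assumes a sampling clock clk is visible at this scope (that name is hypothetical in my case):

// Sketch: records that init_val reached its fixed value on the clock after
// rst_n rose, rather than requiring each bit of init_val to toggle both ways.
COVER_INIT_VAL_LOADED: cover property (
  @(posedge clk) $rose(rst_n) ##1 (init_val == 31'h34013FF7)
);

But I would still prefer a vManager/IMC option if one exists.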




ma

Using vManager to identify line coverage from a specific test

I have been using the rank feature to identify tests that are redundant in our environment, but then I realized I'd also like to be able to see exactly what coverage goes into increasing the delta_cov value for a given test. If I had a test in my rank report that contributed 0.5% of the delta_cov, how could I go about seeing exactly where that 0.5% was coming from? It seems like that might be part of the correlate function, but I couldn't manage to find a way to see what specific coverage was being contributed by a given test.




ma

Using "add net constraints" command in Conformal

Hi

I have tried using "add net constraints" command to place one-cold constraints on a tristate enable bus. In the command line we need to specify the "net pathname" on which the constraints are to be enforced.

The bus here is 20 bits wide. How should the net pathname be specified to make this 20-bit bus one_hot or one_cold?

The bus was declared as follows:
ten_bus [19:0]

The command I used was

add net constraints one_hot /ren_bus[19]

What would the above command mean?
Should we not specify all the nets' pathnames on the bus?
Is it sufficient to specify the pathname of one net on the bus?
I could not get much info regarding the functionality of this command. I would be obliged if anyone could throw some light on it.

Thanks
Prasad.


Originally posted in cdnusers.org by anssprasad




ma

C interface with Specman

Hi,

I need to call a C function from Specman. I have followed the steps below.


File vb_pattern.e
---------------------------------

 
struct vb_pattern_s
{
  %data_in_ch0 : uint (bits : 4);   // data on channel 0
  %data_in_ch1 : uint (bits : 4);   // data on channed 1
  %data_in_ch2 : uint (bits : 4);   // data on channel 2
  %mode : uint (bits : 1);          // mode 
  %enable : uint (bits : 1);        // enable

};

C export vb_pattern_s;

------

file  x_output_bfm.e
--------------------------------------------

check_patterns()@clk_e is
{

  ...
 exp_viterbi_op();

}

routine exp_viterbi_op() is C routine viterbi_c_func;

---- EOF------

X.c
#include "vb_pattern.h"

void viterbi_c_func ()
{
 SN_TYPE(vb_pattern_s) vb_packet;
 SN_TYPE(mode)   mode;
 vb_packet = SN_SYS->ops
 mode = vb_packet->mode;
 printf(" Printing from C environment MODE = %h ", mode);

}


------------------- EOF----

x_top.e
------------
import  tb/vb_pattern.e;
import  tb/x_input_bfm.e;
import  tb/x_output_bfm.e;
import  tb/x_cover_dut.e;
import  tb/x_env.e;




I ran the following commands:


>> sn_compile.sh -h_only x_top.e -o vb_pattern.h
>> gcc -c viterbi.c -o viterbi.o

I am getting the following error


viterbi.c: In function `viterbi_c_func':
viterbi.c:6: error: `t__mode' undeclared (first use in this function)
viterbi.c:6: error: (Each undeclared identifier is reported only once
viterbi.c:6: error: for each function it appears in.)
viterbi.c:6: error: syntax error before "mode"
viterbi.c:7: error: `mode' undeclared (first use in this function)


Please help me resolve this.

Kesav



  








 


Originally posted in cdnusers.org by kesava