
Bengaluru Landlord Asks Rs 5 Lakh Deposit for Rs 40,000 Rent: "Extortion"

The post has sparked a heated debate about Bengaluru's rising rental prices and the need for a cap on deposits.





"No Talking...": Employee Shares Strict Workplace Rules, Calls It A "Jail"

The post details a highly restrictive environment where employees are forbidden from basic actions like looking away from their screens or using their phones.





Salesforce and other tech giants invest $24M in IFTTT to help it expand in enterprise IoT

IFTTT (If This Then That), a web-based service that automates and connects over 600 online services and apps, raised a $24M Series C led by Salesforce. Other investors include IBM, the Chamberlain Group, and Fenox Venture Capital.

New apps and devices that made their way to IFTTT

The latest round brings IFTTT’s total funding to $63M, and it will use the proceeds to build integrations for enterprise and IoT services and to hire. On IFTTT’s platform, applets are the trigger-action automations users deploy to integrate two or more services (such as connecting Google Drive with Twitter or Facebook).

“IFTTT is at the forefront of establishing a more connected ecosystem for devices and services. They see IFTTT as an important business, ecosystem, and partner in the industry,” said IFTTT CEO Linden Tibbets.

The investment in IFTTT shows that Salesforce is consolidating its presence in the enterprise IoT space. Salesforce also acquired MuleSoft, an integration platform that rivals Microsoft’s BizTalk.

IBM’s investment in IFTTT is also noteworthy, as the former is pushing its IBM Watson IoT platform. The following statement shows its keen interest in IFTTT.

“IBM and IFTTT are working together to realize the potential of today’s connected world. By bringing together IBM’s Watson IoT Platform and Watson Assistant Solutions with consumer-facing services, we can help clients to create powerful and open solutions for their users that work with everything in the Internet of Things,” said Bret Greenstein, VP, Watson Internet of Things, IBM.

Other recent investments in IoT companies include Armis’s $30M Series B and Myriota’s $15M round for its satellite-based IoT connectivity platform.

For the latest IoT funding and product news, please visit our IoT news section.





Hitman Wanted By Police for Attacking Twin Brothers

[SAPS] Office of the Provincial Commissioner KwaZulu-Natal





Navigating Chiplet-Based Automotive Electronics Design with Advanced Tools and Flows

In the rapidly evolving landscape of automotive electronics, traditional monolithic design approaches are giving way to something more flexible and powerful—chiplets. These modular microchips, each implementing part of a larger silicon system, offer unparalleled potential for improving system performance, reducing manufacturing costs, and accelerating time-to-market in the automotive sector. However, the transition to working with chiplets in automotive electronics is not without its challenges.

Designers must now grapple with a new set of considerations, such as die-to-die interconnect standards, complex packaging and assembly processes, and the integration of diverse IPs. Advanced toolsets and standardized design approaches are required to meet these challenges head-on and elevate the potential of chiplets in automotive innovation. In the following discourse, we will explore in detail the significance of chiplets in the context of automotive electronics, the obstacles designers face when working with this paradigm, and how Cadence's comprehensive suite of IP, tools, and flows is pioneering solutions to streamline the chiplet design process.

Unveiling Chiplets in Automotive Electronics

For automotive electronics, chiplets offer a methodology to modularize complex functionalities, integrate different chiplets into a package, and significantly enhance scalability and manufacturability. By breaking down semiconductor designs into a collection of chiplets, each fulfilling specific functions, automotive manufacturers can mix and match chiplets to rapidly prototype new designs, update existing ones, and specialize for the myriad of use cases found in vehicles today.

The increasing significance of chiplets in automotive electronics comes as a response to several industry-impacting phenomena. The most obvious among these is the physical restriction of Moore's Law, as large die sizes lead to poor yields and escalating production costs. Chiplets with localized process specialization can offer superior functionality at a more digestible cost, maintaining a growth trajectory where monolithic designs cannot. Furthermore, chiplets support the assembly of disparate technologies onto a single subsystem, providing a comprehensive yet adaptive solution to the diverse demands present in modern vehicles, such as central computing units, advanced driver-assistance systems (ADAS), infotainment units, and in-vehicle networks. This chiplet-based approach to functional integration in automotive electronics necessitates intricate design, optimization, and validation strategies across multiple domains.

The Complexity Within Chiplets

Yet, with the promise of chiplets comes a series of intricate design challenges. Chiplets necessitate working across multiple substrates and technologies, rendering the once-familiar 2-dimensional design space into the complex reality of multi-layered, sometimes even three-dimensional domains. The intricacies embedded within this design modality mandate devoting considerable attention to partitioning trade-offs, signal integrity across multiple substrates, thermal behavior of stacked dies, and the emergence of new assembly design kits to complement process design kits (PDKs).

To effectively address these complexities, designers must wield sophisticated tools that facilitate co-design, co-analysis, and the creation of a robust virtual platform for architectural exploration. Standards like the Universal Chiplet Interconnect Express (UCIe) have been influential, providing a die-to-die interconnect foundation for chiplets that is both standardized and automotive-ready. The availability of UCIe PHY and controller IP from Cadence and other leading developers further eases the integration of chiplets in automotive designs.

The Role of Foundries and Packaging in Chiplets

Foundries have also pivoted their services to become a vital part of the chiplet process, providing specialized design kits that cater to the unique requirements of chiplets. In tandem, packaging has morphed from being a mere logistical afterthought to a value-added aspect of chiplets. Organizations now look to packaging to deliver enhanced performance, reduced power consumption, and the integrity required by the diverse range of technologies encompassed in a single chip or package. This shift requires advanced multiscale design and analysis strategies that resonate across a spectrum of design domains.

Tooling Up for Chiplets with Cadence

Cadence exemplifies the rise of comprehensive tooling and workflows to facilitate chiplet-based automotive electronics design. Its integrated tools and flows address the challenges that chiplet-based SoCs present, ensuring a seamless design process from the initial concept to production. The Cadence suite of tools is tailored to work across design domains, ensuring coherence and efficiency at every step of the chiplet integration process.

For instance, Cadence Virtuoso RF subflows have become critical in navigating radio frequency (RF) challenges within the chiplets, while tools such as the Integrity 3D-IC Platform and the Allegro Advanced Multi-Die Package Design Solution have surfaced to enable comprehensive multi-die package designs. The Integrity Signal Planner extends its capabilities into the chiplet ecosystem, providing a centralized platform where system-wide signal integrity can be proactively managed. Sigrity and Celsius, on the other hand, offer universally applicable solutions that take on the challenges of chiplets in signal integrity and thermal considerations, irrespective of the design domain. Each of these integrated analysis solutions underscores the intricate symphony between technology, design, and packaging essential in unlocking the potential of chiplets for automotive electronics.

Cadence's portfolio includes solutions for system analysis, optimization, and signoff to complement these domain-specific tools, ensuring that the challenges of chiplet designs don't halt progress toward innovative automotive electronics. Cadence enables designers to engage in power- and thermal-aware design practices through its toolset, a necessity as automotive systems become increasingly sophisticated and must grow more power-efficient.

A Standardized Approach to Success with Chiplets

Cadence’s support for UCIe underscores the criticality of standardized approaches to heterogeneous integration; the UCIe standard is backed by numerous industry stakeholders. By co-chairing the UCIe Automotive working group, Cadence helps ensure that automotive designs have a universal, standardized die-to-die (D2D) high-speed interface through which chiplets can intercommunicate, unleashing the true potential of modular design.

Furthermore, Cadence champions the utilization of virtual platforms by providing transaction-level models (TLMs) for its UCIe D2D IP to simulate the interaction between chiplets at a higher level of abstraction. Moreover, individual chiplets can be simulated within a chiplet-based SoC context by leveraging virtual platforms. Used with UVM or SCE-MI methodologies, TLMs and virtual platforms serve as the first line of defense in identifying and addressing issues early in the design process, before physical silicon even enters the picture.

Navigating With the Right Tools

The road to chiplet-driven automotive electronics is one paved with complexity, but with a commitment to standards, it is a path that promises significant rewards. By leveraging Cadence UCIe Design and Verification IP, tools, and methodologies, automotive designers are empowered to chart a course toward chiplets and help to establish a chiplet ecosystem. With challenges ranging from die-to-die interconnect to standardization, heterogeneous integration, and advanced packaging, the need for a seamless integrated flow and highly automated design approaches has never been more apparent. Companies like Cadence are tackling these challenges, providing the key technology for automotive designers seeking to utilize chiplets for the next-generation E/E architecture of vehicular technology.

In summary, chiplets have the potential to revolutionize the automotive electronics industry, breathing new life into the way vehicles are designed, manufactured, and operated. By understanding the significance of chiplets and addressing the challenges they present, automotive electronics is poised for a paradigm shift—one that combines the art of human ingenuity with the power of modular and scalable microchips to shape a future that is not only efficient but truly intelligent.

Learn more about how Cadence can help to enable automakers and OEMs with various aspects of automotive design.





How Cadence Is Revolutionizing Automotive Sensor Fusion

The automotive industry is currently on the cusp of a radical evolution, steering towards a future where cars are not just vehicles but sophisticated, software-defined vehicles (SDVs). This shift is marked by an increased reliance on automation and a significant increase in the use of sensors to improve safety and reliability. However, the increasing number of sensors has led to higher compute demands and poses challenges in managing a wide variety of data. The traditional method of using separate processors to manage each sensor's data is becoming obsolete. Current trends necessitate a unified processing system that can deal with multimodal sensor data, utilizing traditional Digital Signal Processing (DSP) and AI-driven algorithms. This approach allows for more efficient and reliable sensor fusion, significantly enhancing vehicle perception. Developers often face difficulties adhering to stringent power, performance, area, and cost (PPAC) and timing constraints while designing automotive SoCs.

Cadence, with its groundbreaking products and AI-powered processors, is enabling designers and automotive manufacturers to meet the future sensor fusion demands within the automotive sector. At the recent CadenceLive Silicon Valley 2024, Amol Borkar, product marketing director at Cadence, showcased the company's dedication and forward-thinking solutions in a captivating presentation titled "Addressing Tomorrow’s Sensor Fusion Needs in Automotive Computing with Cadence." This blog aims to encapsulate the pivotal takeaways from the presentation. If you missed the chance to watch this presentation live, please click here to watch it.

Significant Trends in the Automotive Market – Industry Landscape

We are witnessing a revolution in automotive technology. Innovations like occupant and driver monitoring systems (OMS, DMS), 4D radar imaging, LiDAR technology, and 360-degree view are pushing the boundaries of what's possible, leading us into an era of remarkable autonomy levels—ranging from no feet or hands required to eventually no eyes needed on the road.

Sensor Fusion and Increasing Processing Demands—Sensor fusion effectively integrates data from different sensors to help vehicles understand their surroundings better. Its main benefit is overcoming the limitations of individual sensors. For example, cameras provide detailed visual information but struggle in low light or bad weather. On the other hand, radar is excellent at detecting objects in these conditions but lacks the detail that cameras provide. By combining the data from multiple sensors, automotive computing can take advantage of their strengths while compensating for their weaknesses, resulting in a more reliable and robust system overall.

 

One thing to note is that the increased number of sensors produces various data types, leading to more pre-processing.

On-Device Processing—As the industry moves towards autonomy, there is an increasing need for on-device data processing instead of cloud computing to enable vehicles to make informed decisions. Embracing on-device processing is a significant advancement for facilitating real-time decisions and avoiding round-trip latency.

AI Adoption—AI has become integral to automotive applications, driving safety, efficiency, and user experience advancements. AI models offer superior performance and adaptability, making future-proofing a crucial consideration for automotive manufacturers. AI significantly enhances sensor fusion algorithms, offering scalability and adaptability beyond traditional rule-based approaches. Neural networks enable various fusion techniques, such as early fusion, late fusion, and mid-fusion, to optimize the integration and processing of sensor data.
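
As a rough, generic illustration of the late-fusion idea mentioned above, the short C sketch below combines independent per-sensor detection confidences for the same candidate object into one fused score. The sensor weights, values, and function names are hypothetical and chosen only for illustration; this is not Cadence code.

/* Minimal late-fusion sketch (illustrative only, not a Cadence API):
 * each sensor pipeline reports an independent detection confidence for the
 * same candidate object, and the fused confidence is a weighted average.
 * The weights are hypothetical values chosen for illustration. */
#include <stdio.h>

static double fuse_confidences(const double conf[], const double weight[], int n)
{
    double num = 0.0, den = 0.0;
    for (int i = 0; i < n; i++) {
        num += weight[i] * conf[i];
        den += weight[i];
    }
    return den > 0.0 ? num / den : 0.0;
}

int main(void)
{
    /* camera, radar, lidar confidences for one detected object */
    double conf[]   = {0.90, 0.60, 0.75};
    /* hypothetical per-sensor trust weights (e.g., radar weighted up in fog) */
    double weight[] = {0.5, 0.3, 0.2};

    printf("fused confidence: %.2f\n", fuse_confidences(conf, weight, 3));  /* prints 0.78 */
    return 0;
}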

Future Sensor Fusion Needs

Automotive architectures are continually evolving. With current trends and AI integration into radar and sensor fusion applications, SoCs should be modular, flexible, and programmable to meet market demands.

Heterogeneous Architecture- Today's vehicles are loaded with various sensors, each with a unique processing requirement. Running the application on the most suitable processor is essential to achieve the best PPA. To meet such requirements, modern automotive solutions require a heterogeneous compute approach, integrating domain-specific digital signal processors (DSPs), neural processing units (NPUs), central processing unit (CPU) clusters, graphics processing unit (GPU) clusters, and hardware accelerator blocks. A balanced heterogeneous architecture gives the best PPA solution.

Flexibility and Programmability- The industry has come a long way from using computer vision algorithms such as HOG (Histogram of Oriented Gradients) to detect people and objects and Haar cascade classifiers to detect faces, to CNN- and LSTM-based AI, and on to Transformer models and graph neural networks (GNNs). AI has evolved tremendously over the last ten years and continues to evolve. To keep up with the pace of AI's evolution, SoC designs must be flexible and programmable so they can be updated if needed in the future.

Addressing the Sensor Fusion Needs with Cadence

Cadence offers a complete suite of hardware and software products to address the increasing compute requirements in automotive. The comprehensive portfolio of Tensilica products built on the robust 32-bit RISC architecture caters to various automotive CPU and AI needs. What makes them particularly appealing is their scalability, flexibility, and configurability, offering many options to meet diverse needs.

 

The Xtensa family of products offers high-quality, power-efficient CPUs. The Tensilica family also includes AI processors such as the Neo NPUs, delivering the best power, performance, and area (PPA) for AI inference on devices or in more extensive applications. Cadence also offers domain-specific DSP products such as HiFi DSPs, specialized DSPs and accelerators for radar and vision-based processing, and a general-purpose family of products for floating-point applications.

The ConnX family offers a wide range of DSPs, from compact and low-power to high-performance, optimized for radar, lidar, and communications applications in ADAS, autonomous driving, V2X, 5G/LTE/4G, wireless communications, drones, and robotics. Tensilica's ISO26262 certification ensures compliance with automotive safety standards, making it a trusted partner for advanced automotive solutions. The Cadence NeuroWeave Software Development Kit (SDK) provides customers with a uniform, scalable, and configurable ML interface and tooling that significantly improves time to market and better prepares them for a continuously evolving AI market. Cadence Tensilica offers an entire ecosystem of software frameworks and compilers for all programming styles.

Tensilica's comprehensive software stack supports programming for DSPs, NPUs, and accelerators using C++, OpenCL, Halide, and various neural network approaches. Middleware libraries facilitate applications such as SLAM and radar processing, and Eigen library support provides a robust foundation for automotive software development.

Conclusion

Cadence’s Tensilica products offer a development toolchain and various IPs tailored for the automotive industry, covering audio, vision, radar, unified DSPs, and NPUs. With ISO 26262 certification and a robust partner ecosystem, Tensilica solutions are designed to meet the future needs of automotive computing, ensuring safety, efficiency, and innovation.

Learn More

 

 





The Future of Driving: How Advanced DSP is Shaping Car Infotainment Systems

As vehicles transition into interconnected ecosystems, artificial intelligence and advanced technologies become increasingly crucial. Infotainment systems have evolved beyond mere music players to become central hubs for connectivity, entertainment, and navigation. With global demand for comfort, convenience, and safety rising, the automotive infotainment market is experiencing significant growth. Valued at USD14.99 billion in 2023, it is projected to grow at a compound annual growth rate (CAGR) of 9.9% from 2024 to 2030.

To keep pace with this evolution, infotainment systems must accommodate a range of workloads, including audio, voice, AI, and vision technologies. This requires a flexible, scalable Digital Signal Processor (DSP) solution that acts as an offload engine for the main application processor. Integrating a single DSP for varied functions offers a cost-effective solution for high-performance, low-power processing, which aligns well with the needs of Electric Vehicles (EVs).

If you missed the detailed presentation by Casey Ng, Product Marketing Director at Cadence, at CadenceLIVE 2024, register at the CadenceLIVE On-Demand site to access it and other insightful presentations. Stay ahead of the curve and explore the future of innovative electronics with us.

Cadence Infotainment Solution: Leading the Charge

Cadence Tensilica HiFi DSPs play a crucial role in enhancing audio capabilities in vehicle infotainment systems. They support applications like voice recognition and hands-free calling and deliver immersive audio experiences. This technology is also paramount for features such as active noise control, which reduces road and cabin noise, and acoustic event detection for identifying unusual sounds like broken glass. One notable innovation is the "audio bubble," which enables personalized audio zones within the vehicle, ensuring passengers enjoy distinct audio settings.

Cadence HiFi DSP technology enriches the driving experience for electric vehicles by mimicking traditional engine sounds, while its advanced audio processing ensures optimal performance across various digital radio standards. It also contributes significantly to noise reduction, improving the cabin experience. The integration of a double-precision floating-point unit (FPU) stands out, as it improves audio performance and signal-to-noise ratio (SNR) through efficient 64-bit processing, allowing control of numerous speakers without hitches.

These advancements distinguish the DSP as an essential tool in evolving infotainment systems, offering unmatched performance and adaptability. Tensilica HiFi processors, crucial to advanced infotainment SoCs, serve as efficient offload processors, augmenting real-time execution and energy efficiency. Cadence’s ecosystem, with over 200 codecs and software partnerships, propels the evolution of innovative infotainment systems. Introducing the HiFi 5s DSP marks a new era in connected car experiences, setting the stage for groundbreaking advancements.

Exploring Tomorrow with HiFi 5s DSP Technology

The HiFi 5s represents the apex of audio and AI digital signal processing performance. Built on the Xtensa LX8 platform, it introduces capabilities like auto-vectorization, which allows standard C code to be automatically optimized for performance. This synergy of hardware and software co-design marks a significant step forward in DSP technology. By leveraging its extended Single Instruction, Multiple Data (SIMD) capabilities alongside features like a double-precision floating-point unit (DP_FPU), the HiFi 5s delivers unparalleled precision and speed improvements in signal and audio processing tasks. Equally notable are its branch prediction and L2 cache enhancements, which optimize system performance by refining control code execution and improving codec efficiency. Such enhancements are particularly beneficial in real-world scenarios.
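
To illustrate what auto-vectorization means in practice, here is a generic C loop of the kind a vectorizing compiler can map onto SIMD instructions without hand-written intrinsics. This is a hypothetical, standalone example, not HiFi 5s library code.

/* Illustrative only: a plain scalar gain loop. With auto-vectorization, a
 * vectorizing compiler can turn these independent iterations into SIMD
 * operations automatically; no intrinsics or assembly are required. */
void apply_gain(float *out, const float *in, float gain, int n)
{
    for (int i = 0; i < n; i++) {
        out[i] = in[i] * gain;   /* independent iterations: vectorizable */
    }
}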

AI-Powered Audio

Cadence's focus on AI integration with the HiFi 5s demonstrates significant improvements in audio clarity through AI-powered solutions.

  • AI models learn from real-world data and adapt dynamically, while classic DSP algorithms rely on fixed rules.
  • AI can be fine-tuned for specific scenarios, whereas classic DSP lacks flexibility.
  • AI handles extreme and marginal noise patterns better, generalizes well across different environments, and is robust against varying noise characteristics.

Cadence's dedication to artificial intelligence marks a pivotal shift in audio processing. Traditional DSP algorithms, bound by rigid rules, are eclipsed by AI's ability to learn dynamically from real-world data. This adaptability equips AI models to tackle challenging noise patterns and offer unmatched clarity even in noisy environments, making them ideal for automotive and consumer audio applications.

Real-Time AI-Optimized Speech Enhancements by OmniSpeech and ai|coustics

OmniSpeech

Our partner OmniSpeech has developed advanced AI-based audio processing that enhances the performance of audio software, specifically for omnidirectional and dipole microphones. Impressively, their technology operates with less than 32 MHz of compute and requires only 418 kB of memory.

Test results show that background noise is significantly reduced when AI is used with a single omnidirectional microphone, outperforming non-AI solutions. Additionally, when using a dipole microphone with AI, there is a 3.5X improvement in the weighted signal-to-noise ratio (SNR) and more than a 28% increase in the Global Mean Opinion Score (GMOS) across various background noise conditions.

ai|coustics

ai|coustics, a Cadence partner specializing in advanced audio technologies, utilizes real-time AI-optimized speech enhancement algorithms. They leverage an extensive speech-quality dataset containing thousands of hours of audio across 100 languages to transform low-quality audio into studio-grade audio. Their process includes:

  • De-reverb, which eliminates room resonances, echoes, and reflections
  • Removing artifacts from downsampling and codec compression
  • Dynamic and adaptive background noise removal
  • Reviving audio materials with analog and digital distortions
  • Providing support for all languages, accents, and a variety of speakers

Applications include:

  • Automotive: Enhances clarity of navigation commands and communication for driver safety
  • Consumer audio: Improves voice clarity for better dialogue understanding in TV programs. Optimizes speech intelligibility in communication for both uplink and downlink audio streams
  • Smart IoT: Boosts voice command detection and response quality

Performance Enhancements

The advancements in branch prediction and L2 cache integration have significantly boosted performance metrics across various systems. With the HiFi 5s, branch prediction increases codec efficiency by an average of 5%, reaching up to 16% in optimal conditions. L2 cache improvements have drastically enhanced system-level performance, evidenced by a 2.3X boost in EVS decoder efficiency. Adding MACs and an imaging ISA has also led to substantial advancements in imaging use cases: comparing the HiFi 5s to the HiFi 5, the imaging ISA delivers average performance improvements of more than 60%.

The Crescendo of the Future

As Cadence continues to blaze trails in DSP technology, the HiFi 5s emerges as the quintessential solution for consumer and automotive audio use cases. With a robust framework for auto-vectorization, an unmatched double-precision FPU, AI-driven audio solutions, and comprehensive system enhancements, Cadence is orchestrating the next era of audio processing, where every note is clearer, every sound richer, and every experience more engaging. It is not just the future of audio—it's the future of how we experience the world around us.

 Discover how Cadence Automotive Solutions can transform your business today!





what is "cell with Zero maximum clock transition time" ?

Does anyone know what a "cell with zero maximum clock transition time" is?

It's not zero transition and not maximum transition; it is "zero maximum clock transition time".

Does it mean an X0 cell (drive strength)?

Can you explain?

thanks :-)





Tempus ECO initial setup summary not matching timing report results

We are currently setting up the Tempus flow and have run into some mismatched data regarding ECO and timing reports. I generated a timing report before running ECO and saw six total setup violations. When running opt_signoff -setup, the initial setup summary printed in the shell only showed one violation. I can see that violation from the initial setup summary in my pre-ECO timing report, and it is not the worst path. Upon further investigation, I forced the tool to try to fix setup on one of the other five violations from the timing report using the opt_signoff_select_setup_endpoints attribute, and the tool said that the endpoint had positive slack and would be ignored.

Has anyone experienced something like this before?





UPF 3.1 / Genus - Cannot find any instance for scope

Hi, I'm using Genus (version 21.14-s082_1) to synthesize a VHDL design with multiple power domains. After reading the power intent file and calling 'apply_power_intent', I get the following warning:

Warning : Potential problem while applying power intent of 1801 file. [1801-99]
: Cannot find any instance for scope '/:CHIP_TOP'. Rest of commands in this scope will be skipped (set_scope:../../upf/CHIP_TOP.upf:2).
: Check the power intent. If the scenario is expected, this message can be ignored.

The first two lines of CHIP_TOP.upf:

upf_version 3.1
set_scope :CHIP_TOP

I simulated the same UPF and VHDL files with Xcelium and was able to verify all the IEEE 1801/UPF aspects I need without any problems. I don't know why Genus is having a problem with the 'scope'.
In Genus, after getting the warning, running 'set_db power_domain:CHIP_TOP/BLOCK_A/PD_CORE_D .library_domain PD0V5' returns the following error:

Error : <Start> word is not recognized. [TUI-182] [set_db]
: 'power_domain:CHIP_TOP/BLOCK/PD_CORE_D' is not a recognized object/attribute. Type 'help root:' to get a list of all supported objects and attributes.
: Check if the given <Start> word is a valid object_type, object or attribute.

Running 'commit_power_intent' gives me:

Started inserting low power cells...
====================================
Info : Command 'commit_power_intent' cannot proceed as there are no power domains present. [CPI-507]
: Design with no power domains is 'design:CHIP_TOP'.
Completed inserting low power cells (runtime 0.00).
====================================================

I suspect that the problem lies in 'set_scope' and VHDL; I never had such problems with Verilog. I have tried every way to reference the hierarchy in the code, and now I'm at my wit's end and need your help o/
How do I set the scope with 'set_scope' in UPF 3.1 to the top level of a VHDL design so that Genus accepts it? Or is the problem caused by something else?

Best,

Iqbal





How to allow hand-made waveform plot into Viva from Assembler?

Hi! I've made some 1-point waveform "markers" that I want to overlay in my plots to aid visualization (with the added advantage, w.r.t. normal Viva markers, that they update location automatically upon refreshing simulation data).

For example, the plot below shows a spectrum along with two of these markers, which I create with the function "singlePointWave"; the Assembler output definitions are also shown below.

The problem is: as currently created and defined, Assembler is unable to plot these elements. I can send their expressions to the calculator, and plotting works from there, BUT ONLY after first enabling the "Allow Any Units" option in the target Viva subwindow.

Thus, I suspect Assembler is failing to plot my markers because they "lack" other information like axes units and so on. How could I add whatever is missing, so that these markers can plot automatically from Assembler?

Thanks in advance for any help!

Jorge.

P.S. I also don't know why, but nothing works without those "ymax()" calls in the output definitions--I suspect they are somehow converting the arguments to the right data type expected by singlePointWave(). Ideas on how to fix that are also welcome! ^^

procedure( singlePointWave(xVal yVal)
    let( (xVect yVect wave)
        ; build one-element X and Y vectors of type double
        xVect = drCreateVec('double list(xVal))
        yVect = drCreateVec('double list(yVal))
        ; combine them into a single-point waveform object; let() returns
        ; this value, so the procedure returns the waveform
        wave = drCreateWaveform(xVect yVect)
    )
)





Is there a skill command for "Assign Layout Instance terminals"?

Is there a SKILL command for "Assign Layout Instance Terminals"? This form appears when I click on Define Device Correspondence and bind the devices.

Also,

Problem statement: I have a schematic with a couple of transistor symbols, and I also have a corresponding layout view with the respective layout transistors, but they are all inside a pCell (created by me), i.e., the layout transistors are instantiated inside a custom pCell. So I have multiple symbols in the schematic view and a single instance (the pCell) in the layout view.
Is there a way I can bind these schematic symbols to the layout transistors inside the custom pCell? Even if I have to use cph commands, I'm fine with it. I need help here.

The idea here is to establish XL connectivity between the schematic symbols and the corresponding layout transistors (inside the pCell).

Thanks,

Shankar





Virtuoso Fluid Guard Ring Layout error "do_something=nil"

Hello,

When I draw a Fluid Guard Ring in Virtuoso, the layout is not visible, and instead, "do_something=nil" appears.

When I check the details with Q, it shows the same information as a regular NFGR guard ring, and Ctrl+F also displays the instance name, just like with a regular NFGR. 

Additionally, the Pcells of Fluid Guard Rings from previous projects appear broken. 

The version I’m currently using is not different from the one used in the past. Even when I access the same version as the one used during the project, the Pcells still appear broken.

These two issues are occurring, and I’m not sure what to check. I would greatly appreciate it if you could assist me in resolving this issue.

Update:

Reinstalling the PDK resolved the issue!

I’m not exactly sure what the problem was, but I suspect there might have been an internal issue with permissions or the PDK path.





Destructive form of "cons" - efficiently prepending an item to a procedure's argument which is a list

Hello,

I was looking to destructively and efficiently modify a list that was passed in as an argument to a procedure, by prepending an item to the list.

I noticed that cons lets you do this efficiently, but the operation is non-destructive. Hence this wouldn't work if you are trying to modify a function's list parameter in place.

Here is an example of trying to add "0" to the front of a list:

procedure( attempt_to_prepend_list(l elem)
    l = cons(elem l)
)

a = list(1 2 3)
==> (1 2 3)
attempt_to_prepend_list(a 0)
==> (0 1 2 3)
a
==> (1 2 3)

As we can see, the original list is not prepended.

Here, though, is a function which achieves the desired result while being efficient. Namely, the following function does not create any new lists and only uses fast operations like cons, rplacd, and rplaca:
procedure( prepend_list(l elem)
    ; cons(car(l) cdr(l)) results in a new list with the car(l) duplicated
    ; we then replace the cdr of l so that we are now pointing to this new list
    rplacd(l cons(car(l) cdr(l)))

    ; we replace the previously duplicated car(l) with the element we want
    rplaca(l elem)
)

a = list(1 2 3)
==> (1 2 3)
prepend_list(a 0)
==> (0 1 2 3)
a
==> (0 1 2 3)

This works for me, but I find it surprising that there is no built-in function to do this. Am I perhaps overlooking something in the documentation? I know that tconc is an efficient and destructive way to append items to the end of a list, but is there no equivalent for the front of a list?





Unlock Your RF Engineering Potential with a Cadence AWR Free Academic Trial!

Are you ready to revolutionize your RF design experience? Look no further! Cadence AWR software is your gateway to mastering the intricacies of Radio Frequency (RF) circuit design, and now, you can explore its power with our exclusive Free Academic Trial.





Instance of standard cell does not have layout?

Hi,

I have synthesized a Verilog design. When performing PnR in Innovus, it shows the error "Instance g5891__718 (similar for others) of the cell AND2_X6 has no physical library or has wrong dimension values (<=0). Check your design setup to make sure the physical library is loaded in and attributes specified in the library are correct."

When importing the synthesized netlist into Virtuoso, it says "Module AND2_X6, instantiated in the top module decoder, is not defined. Therefore the top module decoder will be imported as functional."

Please help what's going on here? 





Stream in gds to virtuoso from directory other than where cds.lib exists

I am scripting gds streamin using 'strmin', which works fine so far.

But as it apparently doesn't have an option to specify where the cds.lib file is, I have to run it from the directory where the cds.lib file is, or I guess I could create a dummy one that sources the real one.

Is there a way to tell strmin where the cds.lib file is?





How to generate "Sheet Name" column in a pin report?

Hi everyone, 

Is there any method to generate a "Sheet Name" column for a pin report like the table below? The "Name.Pin" and "Signal" columns can be generated easily, but I have no idea how to generate the "Sheet Name" column.

The software used here is Allegro Design Entry HDL, OrCAD Capture, and Allegro PCB Editor. Can these three tools generate the "Sheet Name" data?

Name.Pin Signal Sheet Name
C1_1.1 N301321 SITE1_1
C1_1.2 GND_ANA_1 SITE1_1
C1_2.1 N180243 SITE2_1
C1_2.2 GND_ANA_2 SITE2_1

Thank you. 





copy paste circuit from one schematic design to another

Hi, I have two designs and would like to copy and paste one area of circuitry from the old design to the new design. What is the best way/approach? Any guidance please.





Automotive Revolution with Ethernet Base-T1

The automotive industry has revolutionized the definition of a vehicle in terms of safety, comfort, enhanced autonomy, and internet connectivity. With this trend, the industry has rapidly adopted automotive Ethernet variants such as 10Base-T1, 100Base-T1, and, in some cases, 1000Base-T1.

Faster speeds (than CAN-FD), scalability, embedded security protocols (like MACsec), cost and energy efficiency, and a simple yet redundant network have made Ethernet an obvious choice over CAN(-FD) and FlexRay.

      

Ethernet 10Base-T1 

10BASE-T1S is defined by IEEE 802.3cg. The "S" in 10BASE-T1S stands for short distance. 10BASE-T1S uses a multidrop topology, where each node connects to a single cable. The multidrop topology eliminates the need for switches and, as a result, requires fewer cables and lowers cost. The primary goal of 10BASE-T1S is deterministic transmission on a collision-free multidrop network. 10BASE-T1S cabling uses a single pair of twisted wires. As per IEEE 802.3cg, at least eight nodes can connect to each mixing segment, but more connections are feasible.

The Physical Layer Collision Avoidance (PLCA) protocol ensures that the entire 10 Mbps bandwidth is used. In 10BASE-T1S, the Reconciliation Sublayer provides optional PLCA capability among participating stations. Using PLCA-enabled Physical Layers in CSMA/CD half-duplex shared-medium networks can provide enhanced bandwidth and improved access latency under heavily loaded traffic conditions. The working principle of PLCA is that transmit opportunities on a mixing segment are granted in sequence based on a node ID unique to the local collision domain (set by the management entity). 10BASE-T1S also supports an arbitration scheme that guarantees consistent node access to the media within a predefined time.
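
As a rough sketch of the PLCA principle described above, the example below rotates a transmit opportunity through the node IDs of a mixing segment; a node transmits only during its own opportunity and otherwise yields. This illustrates the round-robin idea only, not the IEEE 802.3cg PLCA state machine, and the helper function is hypothetical.

/* Simplified PLCA-style round robin (illustrative only):
 * transmit opportunities rotate through node IDs 0..NODE_COUNT-1; a node
 * transmits during its own opportunity and otherwise yields immediately. */
#include <stdbool.h>
#include <stdio.h>

#define NODE_COUNT 8

/* hypothetical helper: does this node currently have a frame queued? */
static bool has_pending_frame(int node_id) { return node_id % 3 == 0; }

int main(void)
{
    /* one full PLCA cycle: each node gets exactly one transmit opportunity */
    for (int to = 0; to < NODE_COUNT; to++) {
        if (has_pending_frame(to))
            printf("node %d transmits during its opportunity\n", to);
        else
            printf("node %d yields; the opportunity passes on\n", to);
    }
    return 0;
}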

The 10BASE-T1S PHY is intended to cover low-speed/low-cost applications in industrial and automotive environments. The large number of pins (16) required by the MII interface is one of the significant cost factors that must be addressed to fulfill this objective. The 10BASE-T1S "transceiver" solution is suited for embedded systems where the digital portion of the PHY is fully integrated, e.g., into an MCU or an Ethernet switch core, leaving only the analog portion (the transceiver) in a separate IC.

Ethernet 100Base-T1/1000Base-T1 

100Base-T1 and 1000Base-T1 can be used for audio/video information. With their higher bandwidth capacity, 100Base-T1/1000Base-T1 paired with AVB (Audio Video Bridging) can be used for car infotainment systems. 100Base-T1/1000Base-T1 paired with time-sensitive networking (TSN) protocols can be used to fulfill the automotive industry's mission-critical, time-sensitive, and deterministic latency needs.

PTP over MACsec

In today's automotive networks, all the connected Electronic Control Units require timing accuracy and network synchronization. Precision Time Protocol (PTP), defined in IEEE 1588, provides synchronized clocks throughout a network. While maintaining timing accuracy for mission-critical applications, protecting the vehicle network from threats is mandatory, and PTP over MACsec provides a consolidated solution.

With the availability of the Cadence Verification IP for 10/100/1000Base-T1 and TSN, adopters can start working with these specifications immediately, ensuring compliance with the standard and achieving the fastest path to IP and SoC verification closure. The 10/100/1000Base-T1 and TSN VIP provides a full-stack solution, including support for the PHY, MAC, and TSN layers with a comprehensive coverage model and protocol checkers. The Ethernet Base-T1 and TSN VIP covers all features required for complete coverage verification closure. More details are available in the Ethernet Verification IP portfolio.

Krunal 





TSN-PTP: A Real-Time Network Clock Synchronizing Protocol

In a network containing multiple nodes, synchronization between the various nodes is not just essential but also a highly complex process. It becomes even trickier when we synchronize the clocks between a Manager and a Peripheral. In a real-time network, some nodes behave as Managers while others behave as Peripherals. For communication to run smoothly, the local clocks of these nodes must be synchronized.

The problem with this synchronization is that the Manager has its own running clock. If we simply send the value of the Manager's clock to the Peripheral, synchronization doesn't happen, because the message experiences propagation delay, along with the propagation delay of the electronic circuits of the Manager and the Peripheral.

The cherry on the cake is that these electronic circuit propagation delays are not random; they remain constant, so we can add a time offset to compensate and match the clocks. To tackle this challenge, IEEE has come up with a protocol named "Precision Time Protocol."

 

Operation of PTP: 

To synchronize the clocks, a Sync message is sent by the Manager to the Peripheral, which timestamps the time at which it receives it. Following this, a 'Follow_Up' message is issued by the Manager stating the timestamp at which the Sync message was sent.

The Peripheral then finds the difference between the two values and adds this to its current time. After this, the time difference between the Manager and the Peripheral narrows down to only the propagation delay of the messages.  

To overcome this, the Peripheral issues a 'Delay Request' to the Manager, and the Manager, in turn, issues a 'Delay Response.' Both messages carry the timestamp of when they were issued, and the time at which each is received is noted. Since two messages are sent, one from the Peripheral and one from the Manager, the measured round trip contains two propagation delays; half of this value is the one-way propagation delay.

The Peripheral then adds this propagation delay to its clock, and hence the clock gets synchronized. 
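
The standard PTP arithmetic behind this exchange can be written out with four timestamps: t1 (Sync sent by the Manager), t2 (Sync received by the Peripheral), t3 (Delay Request sent by the Peripheral), and t4 (Delay Request received by the Manager). The short C example below works through the usual offset and delay formulas; the timestamp values are made up purely for illustration.

/* Worked example of the standard PTP offset/delay arithmetic (illustrative;
 * the timestamps are invented). All values are in nanoseconds. */
#include <stdio.h>

int main(void)
{
    long long t1 = 1000;  /* Sync sent by Manager              */
    long long t2 = 1900;  /* Sync received by Peripheral       */
    long long t3 = 2500;  /* Delay Request sent by Peripheral  */
    long long t4 = 2900;  /* Delay Request received by Manager */

    /* mean one-way path delay: half of the combined residual */
    long long delay  = ((t2 - t1) + (t4 - t3)) / 2;   /* = 650 ns */
    /* offset of the Peripheral clock relative to the Manager clock */
    long long offset = ((t2 - t1) - (t4 - t3)) / 2;   /* = 250 ns */

    printf("path delay = %lld ns, offset = %lld ns\n", delay, offset);
    return 0;
}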

Advantages of PTP: 

  1. It provides accurate time stamping. 
  2. It is a well-known clock synchronization protocol. 
  3. It provides intensified security inside the premises. 
  4. It provides the possibility of setting coordinated actions and synchronized communication. 

There are various versions of PTP that have been developed over time, namely PTPv1, PTPv2, PTPv2_1, and the latest PTP-AS. 

Cadence Verification IP for Ethernet is available to support the newer versions of PTP, allowing simulation of the device for efficient IP, SoC, and system-level design verification. Semiconductor companies can start using it to fully verify their controller designs and achieve functional verification closure in no time.





What is Allegro X Advanced Package Designer and why do I not see Allegro Package Designer Plus (APD+) in 23.1?

Starting SPB 23.1, Allegro Package Designer Plus (APD+) has been rebranded as Allegro X Advanced Package Designer (Allegro X APD).

The splash screen for Allegro X APD will appear as shown below, instead of showing APD+ 2023:

In the Windows Start menu in 23.1, it will display as Allegro X APD 2023 instead of APD+ 2023, as shown below:

23.1 Start menu 

In the Product Choices window for 23.1, you will see Allegro X Advanced Package Designer in the place of Allegro Package Designer +, as shown below: 

23.1 product title





Allegro X APD : Tip of the Week: ‘Auto-blank other rats’ feature

When working on a complex design, it is common to have a very large number of net ratlines; quantities like 1,000 ratlines are possible. This can result in a cluttered view while routing. Therefore, it is useful to make all other ratlines invisible while routing interactively, and then make all ratlines visible again when each route action is completed.

You can easily do this by enabling the Auto-blank other rats option during routing. When enabled, all rats other than the primary ones are suppressed during the Add Connect command.





Training Webinar: Protium X2: Using Save/Restart for Debugging

Cadence Protium prototyping platforms rapidly bring up an SoC or system prototype and provide a pre-silicon platform for early software development, SoC verification, system validation, and hardware regressions. In this Training Webinar, we will explore debugging using Save/Restart on Protium X2. This feature saves execution time and lets you focus on actual debugging: the system state can be saved before the bug appears and restarted directly from there, without spending time in initial execution. We'll cover key concepts and applications, explore Save/Restart performance metrics, and provide examples to help you understand the concepts.

Agenda:

  • The key concepts of debugging using Save/Restart
  • Capabilities, limitations, and performance metrics
  • Examples of enabling and using Save/Restart on the Protium X2 system

Date and Time

Thursday, November 7, 2024
07:00 PST San Jose / 10:00 EST New York / 15:00 GMT London / 16:00 CET Munich / 17:00 IST Jerusalem / 20:30 IST Bangalore / 23:00 CST Beijing

REGISTER

To register for this webinar, sign in with your Cadence Support account (email ID and password) to log in to the Learning and Support System*. Then select Enrol to register for the session. Once registered, you'll receive a confirmation email containing all login details.

A quick reminder: If you haven't received a registration confirmation within 1 hour of registering, please check your spam folder and ensure your pop-up blockers are off and cookies are enabled. For issues with registration or other inquiries, reach out to eur_training_webinars@cadence.com.

Want to See More Webinars?

You can find recordings of all past webinars here.

Like This Topic?

Take this opportunity and register for the free online course related to this webinar topic: Protium Introduction Training. The course includes slides with audio and downloadable lab exercises designed to emphasize the topics covered in the lecture. There is also a Digital Badge available for the training.

Want to share this and other great Cadence learning opportunities with someone else? Tell them to subscribe.

Hungry for Training?

Choose the Cadence Training Menu that's right for you. To view our complete training offerings, visit the Cadence Training website.

Related Courses

  • Protium Introduction Training Course | Cadence
  • Palladium Introduction Training Course | Cadence

Related Blogs

  • Training Insights – A New Free Online Course on the Protium System for Beginner and Advanced Users
  • Training Insights – Palladium Emulation Course for Beginner and Advanced Users

Related Training Bytes

  • Protium Flow Steps for Running Design on Protium System
  • ICE and IXCOM mode comparison
  • ICE compile flow
  • IXCOM compile flow
  • PATH settings for using Protium System

Please see the course learning maps for a visual representation of courses and course relationships. Regional course catalogs may be viewed here.





MATLAB cannot open PSpice; orCEFSimpleUI.exe prompts that it has stopped working!

Cadence_SPB_17.4-2019 + Matlab R2019a

Please follow the steps in this document:

1. Open BJT_AMP.opj

2. Set the MATLAB path

3. Open BJT_AMP_SLPS.slx

4. After opening it, configure the PSpiceBlock; the "orCEFSimpleUI.exe has stopped working" error appears

5. Add the block

6. Same result

7. Open pspsim.slx

8. Same result

9. Open C:\Cadence\Cadence_SPB_17.4-2019\tools\bin

orCEFSimpleUI.exe and orCEFSimple.exe

10. Same result

I would like to ask how to resolve this. Thank you very much!





Here Is Why the Indian Voter Is Saddled With Bad Economics

This is the 15th installment of The Rationalist, my column for the Times of India.

It’s election season, and promises are raining down on voters like rose petals on naïve newlyweds. Earlier this week, the Congress party announced a minimum income guarantee for the poor. This Friday, the Modi government released a budget full of sops. As the days go by, the promises will get bolder, and you might feel important that so much attention is being given to you. Well, the joke is on you.

Every election, HL Mencken once said, is “an advance auction sale of stolen goods.” A bunch of competing mafias fight to rule over you for the next five years. You decide who wins, on the basis of who can bribe you better with your own money. This is an absurd situation, which I tried to express in a limerick I wrote for this page a couple of years ago:

POLITICS: A neta who loves currency notes/ Told me what his line of work denotes./ ‘It is kind of funny./ We steal people’s money/And use some of it to buy their votes.’

We’re the dupes here, and we pay far more to keep this circus going than this circus costs. It would be okay if the parties, once they came to power, provided good governance. But voters have given up on that, and now only want patronage and handouts. That leads to one of the biggest problems in Indian politics: We are stuck in an equilibrium where all good politics is bad economics, and vice versa.

For example, the minimum guarantee for the poor is good politics, because the optics are great. It’s basically Garibi Hatao: that slogan made Indira Gandhi a political juggernaut in the 1970s, at the same time that she unleashed a series of economic policies that kept millions of people in garibi for decades longer than they should have been.

This time, the Congress has released no details, and keeping it vague makes sense because I find it hard to see how it can make economic sense. Depending on how they define ‘poor’, how much income they offer and what the cost is, the plan will either be ineffective or unworkable.

The Modi government’s interim budget announced a handout for poor farmers that seemed rather pointless. Given our agricultural distress, offering a poor farmer 500 bucks a month seems almost like mockery.

Such condescending handouts solve nothing. The poor want jobs and opportunities. Those come with growth, which requires structural reforms. Structural reforms don’t sound sexy as election promises. Handouts do.

A classic example is farm loan waivers. We have reached a stage in our politics where every party has to promise them to assuage farmers, who are a strong vote bank everywhere. You can’t blame farmers for wanting them – they are a necessary anaesthetic. But no government has yet made a serious attempt at tackling the root causes of our agricultural crisis.

Why is it that Good Politics in India is always Bad Economics? Let me put forth some possible reasons. One, voters tend to think in zero-sum ways, as if the pie is fixed, and the only way to bring people out of poverty is to redistribute. The truth is that trade is a positive-sum game, and nations can only be lifted out of poverty when the whole pie grows. But this is unintuitive.

Two, Indian politics revolves around identity and patronage. The spoils of power are limited – that is indeed a zero-sum game – so you’re likely to vote for whoever can look after the interests of your in-group rather than care about the economy as a whole.

Three, voters tend to stay uninformed for good reasons, because of what Public Choice economists call Rational Ignorance. A single vote is unlikely to make a difference in an election, so why put in the effort to understand the nuances of economics and governance? Just ask, what is in it for me, and go with whatever seems to be the best answer.

Four, politicians have a short-term horizon, geared towards winning the next election. A good policy that may take years to play out is unattractive. A policy that will win them votes in the short term is preferable.

Sadly, no Indian party has shown a willingness to aim for the long term. The Congress has produced new Gandhis, but not new ideas. And while the BJP did make some solid promises in 2014, they did not walk that talk, and have proved to be, as Arun Shourie once called them, UPA + Cow. Even the Congress is adopting the cow, in fact, so maybe the BJP will add Temple to that mix?

Benjamin Franklin once said, “Democracy is two wolves and a lamb voting on what to have for lunch.” This election season, my friends, the people of India are on the menu. You have been deveined and deboned, marinated with rhetoric, seasoned with narrative – now enter the oven and vote.

The India Uncut Blog © 2010 Amit Varma. All rights reserved.
Follow me on Twitter.





India’s Problem is Poverty, Not Inequality

This is the 16th installment of The Rationalist, my column for the Times of India.

Steven Pinker, in his book Enlightenment Now, relates an old Russian joke about two peasants named Boris and Igor. They are both poor. Boris has a goat. Igor does not. One day, Igor is granted a wish by a visiting fairy. What will he wish for?

“I wish,” he says, “that Boris’s goat should die.”

The joke ends there, revealing as much about human nature as about economics. Consider the three things that happen if the fairy grants the wish. One, Boris becomes poorer. Two, Igor stays poor. Three, inequality reduces. Is any of them a good outcome?

I feel exasperated when I hear intellectuals and columnists talking about economic inequality. It is my contention that India’s problem is poverty – and that poverty and inequality are two very different things that often do not coincide.

To illustrate this, I sometimes ask this question: In which of the following countries would you rather be poor: USA or Bangladesh? The obvious answer is USA, where the poor are much better off than the poor of Bangladesh. And yet, while Bangladesh has greater poverty, the USA has higher inequality.

Indeed, take a look at the countries of the world measured by the Gini Index, which is the standard metric used to measure inequality, and you will find that USA, Hong Kong, Singapore and the United Kingdom all have greater inequality than Bangladesh, Liberia, Pakistan and Sierra Leone, which are much poorer. And yet, while the poor of Bangladesh would love to migrate to unequal USA, I don’t hear of too many people wishing to go in the opposite direction.

Indeed, people vote with their feet when it comes to choosing between poverty and inequality. All of human history is a story of migration from rural areas to cities – which have greater inequality.

If poverty and inequality are so different, why do people conflate the two? A key reason is that we tend to think of the world in zero-sum ways. For someone to win, someone else must lose. If the rich get richer, the poor must be getting poorer, and the presence of poverty must be proof of inequality.

But that’s not how the world works. The pie is not fixed. Economic growth is a positive-sum game and leads to an expansion of the pie, and everybody benefits. In absolute terms, the rich get richer, and so do the poor, often enough to come out of poverty. And so, in any growing economy, as poverty reduces, inequality tends to increase. (This is counter-intuitive, I know, so used are we to zero-sum thinking.) This is exactly what has happened in India since we liberalised parts of our economy in 1991.

Most people who complain about inequality in India are using the wrong word, and are really worried about poverty. Put a millionaire in a room with a billionaire, and no one will complain about the inequality in that room. But put a starving beggar in there, and the situation is morally objectionable. It is the poverty that makes it a problem, not the inequality.

You might think that this is just semantics, but words matter. Poverty and inequality are different phenomena with opposite solutions. You can solve for inequality by making everyone equally poor. Or you could solve for it by redistributing from the rich to the poor, as if the pie was fixed. The problem with this, as any economist will tell you, is that there is a trade-off between redistribution and growth. All redistribution comes at the cost of growing the pie – and only growth can solve the problem of poverty in a country like ours.

It has been estimated that in India, for every one percent rise in GDP, two million people come out of poverty. That is a stunning statistic. When millions of Indians don’t have enough money to eat properly or sleep with a roof over their heads, it is our moral imperative to help them rise out of poverty. The policies that will make this possible – allowing free markets, incentivising investment and job creation, removing state oppression – are likely to lead to greater inequality. So what? It is more urgent to make sure that every Indian has enough to fulfil his basic needs – what the philosopher Harry Frankfurt, in his fine book On Inequality, called the Doctrine of Sufficiency.

The elite in their air-conditioned drawing rooms, and those who live in rich countries, can follow the fashions of the West and talk compassionately about inequality. India does not have that luxury.





ot

To Escalate or Not? This Is Modi’s Zugzwang Moment

This is the 17th installment of The Rationalist, my column for the Times of India.

One of my favourite English words comes from chess. If it is your turn to move, but any move you make makes your position worse, you are in ‘Zugzwang’. Narendra Modi was in zugzwang after the Pulwama attacks a few days ago—as any Indian prime minister in his place would have been.

An Indian PM, after an attack for which Pakistan is held responsible, has only unsavoury choices in front of him. He is pulled in two opposite directions. One, strategy dictates that he must not escalate. Two, politics dictates that he must.

Let’s unpack that. First, consider the strategic imperatives. Ever since both India and Pakistan became nuclear powers, a conventional war has become next to impossible because of the threat of a nuclear war. If India escalates beyond a point, Pakistan might bring its nuclear weapons into play. Even a limited nuclear war could cause millions of casualties and devastate our economy. Thus, no matter what the provocation, India needs to calibrate its response so that Pakistan doesn’t take it all the way.

It’s impossible to predict what actions Pakistan might view as sufficient provocation, so India has tended to play it safe. Don’t capture territory, don’t attack military assets, don’t kill civilians. In other words, surgical strikes on alleged terrorist camps are the most we can do.

Given that Pakistan knows that it is irrational for India to react, and our leaders tend to be rational, they can ‘bleed us with a thousand cuts’, as their doctrine states, with impunity. Both in 2001, when our parliament was attacked and the BJP’s Atal Bihari Vajpayee was PM, and in 2008, when Mumbai was attacked and the Congress’s Manmohan Singh was PM, our leaders considered all the options on the table—but were forced to do nothing.

But is doing nothing an option in an election year?

Leave strategy aside and turn to politics. India has been attacked. Forty soldiers have been killed, and the nation is traumatised and baying for blood. It is now politically impossible to not retaliate—especially for a PM who has criticised his predecessor for being weak, and portrayed himself as a 56-inch-chested man of action.

I have no doubt that Modi is a rational man, and knows the possible consequences of escalation. But he also knows the possible consequences of not escalating—he could dilute his brand and lose the elections. Thus, he is forced to act. And after he acts, his Pakistan counterpart will face the same domestic pressure to retaliate, and will have to attack back. And so on till my home in Versova is swallowed up by a nuclear crater, right?

Well, not exactly. There is a way to resolve this paradox. India and Pakistan can both escalate, not via military actions, but via optics.

Modi and Imran Khan, who you’d expect to feel like the loneliest men on earth right now, can find sweet company in each other. Their incentives are aligned. Neither man wants this to turn into a full-fledged war. Both men want to appear macho in front of their domestic constituencies. Both men are masters at building narratives, and have a pliant media that will help them.

Thus, India can carry out a surgical strike and claim it destroyed a camp, killed terrorists, and forced Pakistan to return a braveheart prisoner of war. Pakistan can say India merely destroyed two trees plus a rock, and claim the high moral ground by returning the prisoner after giving him good masala tea. A benign military equilibrium is maintained, and both men come out looking like strong leaders: a win-win game for the PMs that avoids a lose-lose game for their nations. They can give themselves a high-five in private when they meet next, and Imran can whisper to Modi, “You’re a good spinner, bro.”

There is one problem here, though: what if the optics don’t work?

If Modi feels that his public is too sceptical and he needs to do more, he might feel forced to resort to actual military escalation. The fog of politics might obscure the possible consequences. If the resultant Indian military action causes serious damage, Pakistan will have to respond in kind. In the chain of events that then begins, with body bags piling up, neither man may be able to back down. They could end up as prisoners of circumstance—and so could we.

***

Also check out:

Why Modi Must Learn to Play the Game of Chicken With Pakistan—Amit Varma
The Two Pakistans—Episode 79 of The Seen and the Unseen
India in the Nuclear Age—Episode 80 of The Seen and the Unseen





ot

Population Is Not a Problem, but Our Greatest Strength

This is the 21st installment of The Rationalist, my column for the Times of India.

When all political parties agree on something, you know you might have a problem. Giriraj Singh, a minister in Narendra Modi’s new cabinet, tweeted this week that our population control law should become a “movement.” This is something that would find bipartisan support – we are taught from school onwards that India’s population is a big problem, and we need to control it.

This is wrong. Contrary to popular belief, our population is not a problem. It is our greatest strength.

The notion that we should worry about a growing population is an intuitive one. The world has limited resources. People keep increasing. Something’s gotta give.

Robert Malthus made just this point in his 1798 book, An Essay on the Principle of Population. He was worried that our population would grow exponentially while resources would grow arithmetically. As more people entered the workforce, wages would fall and goods would become scarce. Calamity was inevitable.

Malthus’s rationale was so influential that this mode of thinking was soon called ‘Malthusian.’ (It is a pejorative today.) A 20th-century follower of his, Harrison Brown, came up with one of my favourite images on this subject, arguing that a growing population would lead to the earth being “covered completely and to a considerable depth with a writhing mass of human beings, much as a dead cow is covered with a pulsating mass of maggots.”

Another Malthusian, Paul Ehrlich, published a book called The Population Bomb in 1968, which began with the stirring lines, “The battle to feed all of humanity is over. In the 1970s hundreds of millions of people will starve to death in spite of any crash programs embarked upon now.” Ehrlich was, as you’d guess, a big supporter of India’s coercive family planning programs. “I don’t see,” he wrote, “how India could possibly feed two hundred million more people by 1980.”

None of these fears have come true. A 2007 study by Nicholas Eberstadt called ‘Too Many People?’ found no correlation between population density and poverty. The greater the density of people, the more you’d expect them to fight for resources – and yet, Monaco, which has 40 times the population density of Bangladesh, is doing well for itself. So is Bahrain, which has three times the population density of India.

Not only does population not cause poverty, it makes us more prosperous. The economist Julian Simon pointed out in a 1981 book that through history, whenever there has been a spurt in population, it has coincided with a spurt in productivity. Such as, for example, between Malthus’s time and now. There were around a billion people on earth in 1798, and there are around 7.7 billion today. As you read these words, consider that you are better off than the richest person on the planet then.

Why is this? The answer lies in the title of Simon’s book: The Ultimate Resource. When we speak of resources, we forget that human beings are the finest resource of all. There is no limit to our ingenuity. And we interact with each other in positive-sum ways – every voluntary interaction leaves both people better off, and the amount of value in the world goes up. This is why we want to be part of economic networks that are as large, and as dense, as possible. This is why most people migrate to cities rather than away from them – and why cities are so much richer than towns or villages.

If Malthusians were right, essential commodities like wheat, maize and rice would become relatively scarcer over time, and thus more expensive – but they have actually become much cheaper in real terms. This is thanks to the productivity and creativity of humans, who, in Eberstadt’s words, are “in practice always renewable and in theory entirely inexhaustible.”

The error made by Malthus, Brown and Ehrlich is the same error that our politicians make today, and not just in the context of population: zero-sum thinking. If our population grows and resources stay the same, of course there will be scarcity. But this is never the case. All we need to do to learn this lesson is look at our cities!

This mistaken thinking has had savage humanitarian consequences in India. Think of the unborn millions over the decades because of our brutal family planning policies. How many Tendulkars, Rahmans and Satyajit Rays have we lost? Think of the immoral coercion still carried out on poor people across the country. And finally, think of the condescension of our politicians, asserting that people are India’s problem – but always other people, never themselves.

This arrogance is India’s greatest problem, not our people.





ot

Launch footprint editor from Capture or PCB Editor?

I'd like to be able to edit a footprint for a part in my design without needing to find the footprint filepath and directly open that file in PCB Editor. I see that I can view footprints from Capture, and that doing so shows me the footprint path, but I can't find any way to launch the editor. Is there any way to go directly from a part in a Capture schematic or a placed part in a PCB Editor board design to editing that part's footprint?




ot

Orcad PCB (allegro) not using GPU over USB

Hi,

I have a monitor plugged into my laptop using an HDMI-to-USB adapter. When I use this adapter, Allegro runs very slowly. It seems that it is not using my video card.

Is this a known issue with a workaround I can try?

Thanks,

Michael




ot

Allegro part of DPI does not support scaling above 150%

Allegro part of DPI does not support scaling above 150%




ot

"net logic" question

hello:

I use the command "net logic" to change/assign the net name of pins, but the command can be used for only one pin at a time.

Is there a way to change/assign the same net name on 100 pins all at once?

I have a daisy-chain design, so I need to assign one net name to hundreds of pins.

========

Thank you David, I was able to do it.

I am writing this here because I cannot reply to your comment.




ot

Cannot access individual noise contributions using SpectreMDL

I have tried replicating the setup described in a previous post (here), with the proposed solution.

 

The MDL measurements return a value of 0 for all exported results but the first.

Using Viva I can actually see the correct value for each contribution.

I am using :
- Spectre 23.1.0.538.isr10
- Viva IC23.1-64b.ISR8.40

What should I do differently?

Thanks!

***** test.scs *****
r1 (1 0) res_model l=10e-6 w=2e-6
r2 (2 1) res_model l=15e-6 w=2e-6
vr (2 0) vsource dc=1.0 mag=1
model res_model resistor rsh=100 kf=1e-20*exp(dkf)
parameters dkf=0
statistics {
  process {
    vary dkf dist=gauss std=0.5
  }
}

noi (1 0) noise freq=1

/***** test.mdl *****/
alias measurement noi_test {
  run noi;
  export real noi_total=noi_test:out;
  export real r1_total=r1:total;
  export real r1_flicker=r1:fn;
  export real r1_thermal=r1:rn;
  export real r2_total=r2:total;
  export real r2_flicker=r2:fn;
  export real r2_thermal=r2:rn;
}

run noi_test

**** test.measure ****

Measurement Name   :  noi_test
Analysis Type      :  noise
noi_total             =  6.9282e-06
r1_flicker            =  0
r1_thermal            =  0
r1_total              =  0
r2_flicker            =  0
r2_thermal            =  0
r2_total              =  0




ot

Force virtuoso (Layout XL) to NOT create warning markers in design

Hi

I have a rather strange question - is there a way to tell Layout XL to NOT place the error/warning markers on a design when I open a cell? I do a lot of my layout by using arrays from placed instances and creating mosaics that completely ignore the metadata Layout XL uses for its bindings with the schematic (instances get deleted, etc., but I do like using it to generate all my pins). It's just really annoying when I open a design that I know is LVS clean and, since the connectivity metadata is all screwed up (because I did not use it to actually complete the layout), I have a design that's just blinking at me at every gate, source and drain. I typically delete the markers at the top level hierarchically, but the second I go in and modify something and come back up, it places all of them again. I know that if I flatten all the pcells the markers go away, but sometimes it's nice to have that piece of metadata. Is there a way to "break" these features of XL? I realize what a weird question this is, but it's becoming more of an issue since we moved to IC 23 from IC 6, where there is no longer a Layout L that I can use, free from these annoyances, because it can't use any of the connectivity metadata.

Thanks

Chris




ot

AMS simvision cannot load big psf.trn

Hello all,

I have run a simulation with a lot of extracted instances, and the psf.trn file is >= 200 GB. I tried to load it with SimVision and it just breaks.

I would like to ask if there is a way to open this file, e.g. by reading only a time window, say from 10us to 15us.

getVersion(t)
"sub-version  ICADVM20.1-64b.500.34 "

XCELIUMMAIN23.03.001

thank you in advance




ot

UVM Adapter for Pipelined protocols like AHB, AXI etc

Hello,

I have been running this `uvm_reg_hw_reset_seq` sequence for the AHB protocol. My UVM Adapter looks like:


Issue: When I use a basic reg.write, my write accesses work well, as that is managed by the driver, i.e. once the adapter gives the packet to the driver, the driver supplies the address and control signals to the DUT on the first clock cycle and then the write data on the next clock cycle. But when I perform a read operation, the UVM adapter somehow reads the data on the same clock cycle where the read address + controls are supplied, and this triggers read-failure messages from the `uvm_reg_hw_reset_seq` sequence. What should I modify in the driver/sequencer/adapter so that the UVM adapter reads the data on the next cycle instead of the same clock cycle?

Just FYI: the waveforms of the read operation are correct; the issue is only with the adapter and the `uvm_reg_hw_reset_seq`. The AHB driver + AHB monitor are fully proven and verified to be working correctly.
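One common pattern for pipelined buses, offered here only as a minimal sketch under assumed names (ahb_item, ahb_if, hrdata, etc. are illustrative and not taken from the post or from any specific VIP), is to have the driver complete the data phase and copy the read data back into the request item before calling item_done(), so that the adapter's bus2reg() sees valid data. If the adapter sets provides_responses = 1 instead, the driver would return a response item at that same point.

// Minimal illustrative sketch with assumed names (ahb_if, ahb_item, hrdata, ...);
// not the poster's VIP code or any production AHB driver.
import uvm_pkg::*;
`include "uvm_macros.svh"

interface ahb_if (input bit hclk);
  logic [31:0] haddr, hwdata, hrdata;
  logic        hwrite;
endinterface

class ahb_item extends uvm_sequence_item;
  rand bit [31:0] addr;
  rand bit [31:0] data;    // write data, or read data filled in by the driver
  rand bit        write;
  `uvm_object_utils(ahb_item)
  function new(string name = "ahb_item"); super.new(name); endfunction
endclass

class ahb_adapter extends uvm_reg_adapter;
  `uvm_object_utils(ahb_adapter)
  function new(string name = "ahb_adapter"); super.new(name); endfunction
  virtual function uvm_sequence_item reg2bus(const ref uvm_reg_bus_op rw);
    ahb_item item = ahb_item::type_id::create("item");
    item.addr  = rw.addr;
    item.data  = rw.data;
    item.write = (rw.kind == UVM_WRITE);
    return item;
  endfunction
  virtual function void bus2reg(uvm_sequence_item bus_item, ref uvm_reg_bus_op rw);
    ahb_item item;
    if (!$cast(item, bus_item)) begin
      `uvm_fatal("ADAPT", "Wrong item type passed to bus2reg()")
      return;
    end
    rw.addr   = item.addr;
    rw.data   = item.data;   // valid only because the driver filled it in before item_done()
    rw.kind   = item.write ? UVM_WRITE : UVM_READ;
    rw.status = UVM_IS_OK;
  endfunction
endclass

class ahb_driver extends uvm_driver #(ahb_item);
  `uvm_component_utils(ahb_driver)
  virtual ahb_if vif;   // assumed interface handle, set from the testbench
  function new(string name, uvm_component parent); super.new(name, parent); endfunction
  task run_phase(uvm_phase phase);
    forever begin
      seq_item_port.get_next_item(req);
      @(posedge vif.hclk);            // address phase
      vif.haddr  <= req.addr;
      vif.hwrite <= req.write;
      @(posedge vif.hclk);            // data phase starts one cycle later
      if (req.write)
        vif.hwdata <= req.data;
      else begin
        @(posedge vif.hclk);          // end of data phase: HRDATA is now valid
        req.data = vif.hrdata;        // copy read data back into the request item
      end
      seq_item_port.item_done();      // only now hand the item back to the register layer
    end
  endtask
endclass

The key point is the read branch: the register layer extracts the read value only after item_done(), so the driver must not hand the item back until HRDATA has actually been sampled at the end of the data phase.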




ot

Importing ODF to vManager does not update vplan

I exported a vplan to an .odf file in vManager and, after editing it, imported it back into vManager. I expected the vplan to be synchronized and updated; however, nothing in it has changed. Does anyone know why?




ot

"How to disable toggle coverage of unused logic"

I'm currently working on coverage analysis. In my design, certain register bits remain unused, which can lower toggle coverage. Specifically, I'd like to know how to disable coverage for specific unused register bits within a 32-bit register. For instance, I want to deactivate coverage for bit 17 and bit 20 in a 32-bit register to optimize toggle coverage. Could you please provide guidance on how to accomplish this?




ot

Is it possible to automatically exclude registers or wires that are not used from toggle coverage?

Hello,

I have a question about toggle coverage.

In my case, there are many unused registers or wires that are affecting the toggle coverage score negatively.

Is it possible to automatically exclude registers or wires that are not used from toggle coverage?

My RTL code is as follows. Is it possible to automatically disable tb.top1.b and tb.top1.c without using an exclude file?

module top1;
  reg a;
  reg b;
  reg [31:0] c;

  initial
  begin
    #1 a=1'b0;
    #1 a=1'b1;
    #1 a=1'b0;
  end
endmodule

module tb;
  top1 top1();
endmodule




ot

Using "add net constraints" command in Conformal

Hi

I have tried using the "add net constraints" command to place one-cold constraints on a tristate enable bus. On the command line, we need to specify the "net pathname" on which the constraints are to be enforced.

The bus here is 20 bits wide. How should the net pathname be specified to make the signals of this 20-bit bus one_hot or one_cold?

The bus was declared as follows:
ten_bus [19:0]

The command I used was

add net constraints one_hot /ren_bus[19]

What would the above command mean?
Should we not specify all the nets' pathnames on the bus?
Is it sufficient to specify the pathname of one net on the bus?
I could not get much info regarding the functionality of this command. I would be obliged if anyone can throw some light.
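As a side note, here is a minimal SystemVerilog sketch of what a one-hot or one-cold condition on a 20-bit bus means. The signal name is taken from the declaration above, the sampling clock is assumed, and this only illustrates the intended semantics; it is not the Conformal "add net constraints" syntax.

// Illustration only: intended one-hot / one-cold semantics on a 20-bit bus.
module one_hot_cold_check (
  input logic        clk,     // assumed sampling clock
  input logic [19:0] ten_bus  // name taken from the declaration in the post
);
  // One-hot: exactly one bit is 1 on every sampled cycle.
  ONE_HOT:  assert property (@(posedge clk) $onehot(ten_bus));
  // One-cold alternative: exactly one bit is 0, i.e. the complement is one-hot.
  // (Use one or the other, depending on the intended constraint.)
  ONE_COLD: assert property (@(posedge clk) $onehot(~ten_bus));
endmodule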

Thanks
Prasad.


Originally posted in cdnusers.org by anssprasad




ot

X-FAB's Innovative Communication and Automotive Designs: Powered by Cadence EMX Planar 3D Solver

Using the EMX solver, X-FAB design engineers can efficiently develop next-generation RF technology for the latest communication standards (including sub-6GHz 5G, mmWave, UWB, etc.), which are enabling technologies for communications and electric vehicle (EV) wireless applications.





ot

Plot on Smith Chart from HB Simulation

Dear All,

To design an outphasing combiner, I need to extract the input impedances when the circuit is driven by two sources concurrently with a varying phase-shift and plot them on a Smith Chart. However, I can't find a way to display the simulated S-parameters on a Smith Chart.

The testbench, shown below, consists of two sources set to 50 Ohm with variable phase (PORT0: theta, PORT1: -theta) swept from -90° to 90°. The NPORTs are couplers used to isolate the forward and reflected power at each source, i.e. the S-parameters are:

  • S13 = S31 = 1; through path
  • S21 = S41 = 1; forward and reflected power
  • All other are zero

The testbench is simulated with an "hb" analysis instead of "sp", as the two sources have to be excited simultaneously with varying phase-shift to see their load-modulation effect. The sweep is set up in the "Choosing Analysis" window. The powers of the forward (pXa) and reflected (pXb) waves are found through the "Direct Plot" window, e.g. pvr('hb "/p1a" 0 50.0 '(1)) as the power for p1a, and named P1A_Watt. The S-parameters are then calculated as the reflected power w.r.t. the forward power P1B_Watt/P1A_Watt. This approach is based on a Hot S-parameter testbench from ADS.
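For reference, a minimal worked relation, under the assumption (taken from the description above) that p1a and p1b carry the forward and reflected waves at port 1:

\[
\Gamma_1 = \frac{b_1}{a_1}, \qquad
\frac{P_{1B\_Watt}}{P_{1A\_Watt}} = \frac{|b_1|^2}{|a_1|^2} = |\Gamma_1|^2, \qquad
Z_{\mathrm{in},1} = Z_0\,\frac{1+\Gamma_1}{1-\Gamma_1}.
\]

Since a point on the Smith chart is the complex reflection coefficient (magnitude and phase), a ratio of powers alone fixes only the magnitude of Gamma; the phase has to come from the complex wave (or voltage/current) quantities of the HB solution.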

At this point I would like to display these S-parameters on a Smith Chart. However, this seemed more challenging than expected. One straightforward method would seem to be creating an empty Smith Chart window in the Display Window and dragging the S-parameters from the rectangular plot onto it, but this just deletes them from the Display Window. Hence my question:

How can I display these S-parameters on a Smith Chart?




ot

VAR("") does not work within some expressions

Hi,

My Virtuoso and Spectre Version: ICADVM20.1-64b.NYISR30.2

I have an expression where the EvalType is "sweeps". Here is the expression (I also attached the snapshot):

(peakToPeak(leafValue(swapSweep(delay(?wf1 clip((VT("/clk0") - VT("/clk180")) (VAR("mt_stop") - (4.0 / VAR("datarate"))) VAR("mt_stop")) ?value1 0 ?edge1 "rising" ?nth1 1 ?td1 0 ?tol1 nil ?wf2 clip((VT("/tx_padp") - VT("/tx_padn")) (VAR("mt_stop") - (4.0 / VAR("datarate"))) VAR("mt_stop")) ?value2 0 ?edge2 "rising" ?nth2 1 ?tol2 nil ?td2 nil ?stop nil ?multiple nil) "VDD_FIXED_NOISE") "VREGLN_cmode" 0.85 "VREGDRV_novn" 0.4 "datarate" 1.658e+10) ?overall t) / 10.0)

What this expression does is that it compares the delay between the output data with respect to a reference clock. I then get this information for two conditions (VDD_FIXED_NOISE = 0 or 10mV) to get the effect of the supply-induced jitter. In the expression, I need to give the value of each parameter in different modes to distinguish them from each other. Now I want to sweep the base supply values and see the supply variation effects. For example, I want to change VREGLN_cmode from 0.85 to 0.81 and see how my supply-induced jitter changes. For that, the hard way is to copy the expression and change that value accordingly (e.g. "VREGLN_cmode" 0.81). I'm looking for an easier way to use a variable in the expression. Something like VAR("VREGLN_Sweep"). But I see it doesn't work in my expression and it gives an eVal error. I tested this before in other expressions (not sweep type) and it always worked. I have only one test and these variables are all Design Variables and not Global variables.
I want to know what mistake I am making here, and whether there is a way to make this work. Sorry if I could not explain my inquiry better. Thank you.








ot

PSS Shooting - High Q crystal oscillator - Simulator by mistake detects a frequency divider

Hi *,

 

I am simulating a 32kHz high Q crystal oscillator with a pulse shaping circuit. I set up a PSS analysis using the Shooting Newton engine. I set a beat frequency of 32k and used the crystal output and ground as reference nodes. After the initial transient the amplitude growth was already pretty much settled such that the shooting iterations could continue the job.

 

My problem is: In 5...10% of my PVT runs the simulator detects a frequency divider in the initial transient simulation. The output log says:

 

Frequency divided by 3 at node <xxx>

The Estimated oscillating frequency from Tstab Tran is = 11.0193 kHz .

 

However, the mentioned node is only part of the control logic and is always constant (but it has some ripples and glitches which are all less than 30uV). These glitches spoil my fundamental frequency (11kHz instead of 32kHz). Sometimes the simulator detects a frequency division by 2 or 3 and the mentioned node <xxx> is different depending on PVT - but the node is always a genuine high or low signal inside my control logic.

 

How can I tell the simulator that there is no frequency divider and it should only observe the given node pair in the PSS analysis setup to estimate the fundamental frequency? I have tried the following workarounds but none of them worked reliably:

 

- extended/reduced the initial transient simulation time

- decreased accuracy

- preset override with Euler integration method for the initial transient to damp glitches

- tried different initial conditions

- specified various oscillator nodes in the analysis setup form

By the way, I am using Spectre X (version 21.1.0.389.ISR8) with CX accuracy.

 

Thanks for your support and best regards

Stephan




ot

Figures missing in the RF Design Blogs article of "Measuring Fmax for MOS Transistors"

Hi, I noticed that some figures from old posts in the Cadence blogs are missing.

I think this problem happened before and Andrew Beckett asked the original author to fix the issue:

 Figures missing in the RF Design Blogs article of "Measuring Fmax for MOS Transistors" 

Some of these posts are quite valuable, and it would be nice to have access to the figures, which are a very important part of them.

Thanks

Leandro




ot

Loading Footprints keep getting DB Doctor message

Loading a new netlist into 23.1. Apparently it does not like many of the specified footprints or padstacks. I have to open the footprint in 23.1, save the padstack, then save the footprint. This is very time-consuming and frustrating, to say the least.

I also get the following message

WARNING(SPMHNI-194): Symbol 'SMD_SOD123_ANODE_PIN1' used by RefDes D30 for device 'DIODE_0_SMD_SOD123_ANODE_PIN1_1N4148W-7-F' not found.

The symbol either does not exist in the library path (PSMPATH) or is an old symbol from a previous release.  
Set the correct library path if not set or use dbdo
     The current version of software is unable to open design smd_sod123_anode_pin1.
The design was last saved using version 16.5 and must be updated using DB Doctor. [help]


Going to DB Doctor does nothing, no option to update a footprint?

Tom





ot

scaling a footprint

hello

 

is there a way to scale a footprint of a symbol?

 

scaling a footprint means moving all the pad locations of the symbol by +x% or -x%. 

 

regards

masa




ot

Allegro PCB Router quit unexpectedly with an exit code of -1073741701. Also, nothing is logged in the log file.

Has anyone experienced the same situation?