signal

The focal adhesion protein kindlin-2 controls mitotic spindle assembly by inhibiting histone deacetylase 6 and maintaining α-tubulin acetylation [Signal Transduction]

Kindlins are focal adhesion proteins that regulate integrin activation and outside-in signaling. The kindlin family consists of three members, kindlin-1, -2, and -3. Kindlin-2 is widely expressed in multiple cell types, except those from the hematopoietic lineage. A previous study has reported that the Drosophila Fit1 protein (an ortholog of kindlin-2) prevents abnormal spindle assembly; however, the mechanism remains unknown. Here, we show that kindlin-2 maintains spindle integrity in mitotic human cells. The human neuroblastoma SH-SY5Y cell line expresses only kindlin-2, and we found that when SH-SY5Y cells are depleted of kindlin-2, they exhibit pronounced spindle abnormalities and delayed mitosis. Of note, acetylation of α-tubulin, which maintains microtubule flexibility and stability, was diminished in the kindlin-2–depleted cells. Mechanistically, we found that kindlin-2 maintains α-tubulin acetylation by inhibiting the microtubule-associated deacetylase histone deacetylase 6 (HDAC6) via a signaling pathway involving AKT Ser/Thr kinase (AKT)/glycogen synthase kinase 3β (GSK3β) or paxillin. We also provide evidence that prolonged hypoxia down-regulates kindlin-2 expression, leading to spindle abnormalities not only in the SH-SY5Y cell line, but also cell lines derived from colon and breast tissues. The findings of our study highlight that kindlin-2 regulates mitotic spindle assembly and that this process is perturbed in cancer cells in a hypoxic environment.




signal

Endorepellin evokes an angiostatic stress signaling cascade in endothelial cells [Glycobiology and Extracellular Matrices]

Endorepellin, the C-terminal fragment of the heparan sulfate proteoglycan perlecan, influences various signaling pathways in endothelial cells by binding to VEGFR2. In this study, we discovered that soluble endorepellin activates the canonical stress signaling pathway consisting of PERK, eIF2α, ATF4, and GADD45α. Specifically, endorepellin evoked transient activation of VEGFR2, which, in turn, phosphorylated PERK at Thr980. Subsequently, PERK phosphorylated eIF2α at Ser51, upregulating its downstream effector proteins ATF4 and GADD45α. RNAi-mediated knockdown of PERK or eIF2α abrogated the endorepellin-mediated up-regulation of GADD45α, the ultimate effector protein of this stress signaling cascade. To functionally validate these findings, we utilized an ex vivo model of angiogenesis. Exposure of the aortic rings embedded in 3D fibrillar collagen to recombinant endorepellin for 2–4 h activated PERK and induced GADD45α vis à vis vehicle-treated counterparts. Similar effects were obtained with the established cellular stress inducer tunicamycin. Notably, chronic exposure of aortic rings to endorepellin for 7–9 days markedly suppressed vessel sprouting, an angiostatic effect that was rescued by blocking PERK kinase activity. Our findings unravel a mechanism by which an extracellular matrix protein evokes stress signaling in endothelial cells, which leads to angiostasis.




signal

Structural basis of cell-surface signaling by a conserved sigma regulator in Gram-negative bacteria [Molecular Biophysics]

Cell-surface signaling (CSS) in Gram-negative bacteria involves highly conserved regulatory pathways that optimize gene expression by transducing extracellular environmental signals to the cytoplasm via inner-membrane sigma regulators. The molecular details of ferric siderophore-mediated activation of the iron import machinery through a sigma regulator are unclear. Here, we present the 1.56 Å resolution structure of the periplasmic complex of the C-terminal CSS domain (CCSSD) of PupR, the sigma regulator in the Pseudomonas capeferrum pseudobactin BN7/8 transport system, and the N-terminal signaling domain (NTSD) of PupB, an outer-membrane TonB-dependent transducer. The structure revealed that the CCSSD consists of two subdomains: a juxta-membrane subdomain, which has a novel all-β-fold, followed by a secretin/TonB short N-terminal subdomain at the C terminus of the CCSSD, a previously unobserved topological arrangement of this domain. Using affinity pulldown assays, isothermal titration calorimetry, and thermal denaturation CD spectroscopy, we show that both subdomains are required for binding the NTSD with micromolar affinity and that NTSD binding improves CCSSD stability. Our findings prompt us to present a revised model of CSS wherein the CCSSD:NTSD complex forms prior to ferric-siderophore binding. Upon siderophore binding, conformational changes in the CCSSD enable regulated intramembrane proteolysis of the sigma regulator, ultimately resulting in transcriptional regulation.




signal

Tacrolimus-Induced BMP/SMAD Signaling Associates With Metabolic Stress-Activated FOXO1 to Trigger β-Cell Failure

Active maintenance of β-cell identity through fine-tuned regulation of key transcription factors ensures β-cell function. Tacrolimus, a widely used immunosuppressant, accelerates onset of diabetes after organ transplantation, but underlying molecular mechanisms are unclear. Here we show that tacrolimus induces loss of human β-cell maturity and β-cell failure through activation of the BMP/SMAD signaling pathway when administered under mild metabolic stress conditions. Tacrolimus-induced phosphorylated SMAD1/5 acts in synergy with metabolic stress–activated FOXO1 through formation of a complex. This interaction is associated with reduced expression of the key β-cell transcription factor MAFA and abolished insulin secretion, both in vitro in primary human islets and in vivo in human islets transplanted into high-fat diet–fed mice. Pharmacological inhibition of BMP signaling protects human β-cells from tacrolimus-induced β-cell dysfunction in vitro. Furthermore, we confirm that BMP/SMAD signaling is activated in protocol pancreas allograft biopsies from recipients on tacrolimus. To conclude, we propose a novel mechanism underlying the diabetogenicity of tacrolimus in primary human β-cells. This insight could lead to new treatment strategies for new-onset diabetes and may have implications for other forms of diabetes.




signal

HB-EGF Signaling Is Required for Glucose-Induced Pancreatic β-Cell Proliferation in Rats

The molecular mechanisms of β-cell compensation to metabolic stress are poorly understood. We previously observed that nutrient-induced β-cell proliferation in rats is dependent on epidermal growth factor receptor (EGFR) signaling. The aim of this study was to determine the role of the EGFR ligand heparin-binding EGF-like growth factor (HB-EGF) in the β-cell proliferative response to glucose, a β-cell mitogen and key regulator of β-cell mass in response to increased insulin demand. We show that exposure of isolated rat and human islets to HB-EGF stimulates β-cell proliferation. In rat islets, inhibition of EGFR or HB-EGF blocks the proliferative response not only to HB-EGF but also to glucose. Furthermore, knockdown of HB-EGF in rat islets blocks β-cell proliferation in response to glucose ex vivo and in vivo in transplanted glucose-infused rats. Mechanistically, we demonstrate that HB-EGF mRNA levels are increased in β-cells in response to glucose in a carbohydrate-response element–binding protein (ChREBP)–dependent manner. In addition, chromatin immunoprecipitation studies identified ChREBP binding sites in proximity to the HB-EGF gene. Finally, inhibition of Src family kinases, known to be involved in HB-EGF processing, abrogated glucose-induced β-cell proliferation. Our findings identify a novel glucose/HB-EGF/EGFR axis implicated in β-cell compensation to increased metabolic demand.




signal

Signals from the NIHR

If you've been keeping up to date with The BMJ - online or in print - you might have noticed that we've got a new type of article - NIHR Signals - and they are here to give busy clinicians a quick overview of practice-changing research that has come out of the UK's National Institute for Health Research. Tara Lamont, director of the NIHR...




signal

Are the β-Cell Signaling Molecules Malonyl-CoA and Cytosolic Long-Chain Acyl-CoA Implicated in Multiple Tissue Defects of Obesity and NIDDM?

Marc Prentki
Mar 1, 1996; 45:273-283
Original Article




signal

Free fatty acid-induced insulin resistance is associated with activation of protein kinase C theta and alterations in the insulin signaling cascade

ME Griffin
Jun 1, 1999; 48:1270-1274
Articles




signal

De Novo Mutations in EIF2B1 Affecting eIF2 Signaling Cause Neonatal/Early-Onset Diabetes and Transient Hepatic Dysfunction

Permanent neonatal diabetes mellitus (PNDM) is caused by reduced β-cell number or impaired β-cell function. Understanding of the genetic basis of this disorder highlights fundamental β-cell mechanisms. We performed trio genome sequencing for 44 patients with PNDM and their unaffected parents to identify causative de novo variants. Replication studies were performed in 188 patients diagnosed with diabetes before 2 years of age without a genetic diagnosis. EIF2B1 (encoding the eIF2B complex α subunit) was the only gene with novel de novo variants (all missense) in at least three patients. Replication studies identified two further patients with de novo EIF2B1 variants. In addition to having diabetes, four of five patients had hepatitis-like episodes in childhood. The EIF2B1 de novo mutations were found to map to the same protein surface. We propose that these variants render the eIF2B complex insensitive to eIF2 phosphorylation, which occurs under stress conditions and triggers expression of stress response genes. Failure of eIF2B to sense eIF2 phosphorylation likely leads to unregulated unfolded protein response and cell death. Our results establish de novo EIF2B1 mutations as a novel cause of permanent diabetes and liver dysfunction. These findings confirm the importance of cell stress regulation for β-cells and highlight EIF2B1’s fundamental role within this pathway.




signal

Longitudinal Metabolome-Wide Signals Prior to the Appearance of a First Islet Autoantibody in Children Participating in the TEDDY Study

Children at increased genetic risk for type 1 diabetes (T1D) after environmental exposures may develop pancreatic islet autoantibodies (IA) at a very young age. Metabolic profile changes over time may imply responses to exposures and signal development of the first IA. Our present research in The Environmental Determinants of Diabetes in the Young (TEDDY) study aimed to identify metabolome-wide signals preceding the first IA against GAD (GADA-first) or against insulin (IAA-first). We profiled metabolomes by mass spectrometry from children’s plasma at 3-month intervals after birth until appearance of the first IA. Trajectory analysis revealed that each first IA was preceded by reduced levels of the amino acid proline and of branched-chain amino acids (BCAAs), respectively. With independent time point analysis following birth, we discovered dehydroascorbic acid (DHAA) contributing to the risk of each first IA, and γ-aminobutyric acid (GABA) associated with the first autoantibody against insulin (IAA-first). Methionine and alanine, compounds produced in BCAA metabolism and fatty acids, also preceded IA at different time points. Unsaturated triglycerides and phosphatidylethanolamines decreased in abundance before appearance of either autoantibody. Our findings suggest that IAA-first and GADA-first are heralded by different patterns of DHAA, GABA, multiple amino acids, and fatty acids, which may be important to primary prevention of T1D.




signal

L-Cell Differentiation Is Induced by Bile Acids Through GPBAR1 and Paracrine GLP-1 and Serotonin Signaling

Glucagon-like peptide 1 (GLP-1) mimetics are effective drugs for treatment of type 2 diabetes, and there is consequently extensive interest in increasing endogenous GLP-1 secretion and L-cell abundance. Here we identify G-protein–coupled bile acid receptor 1 (GPBAR1) as a selective regulator of intestinal L-cell differentiation. Lithocholic acid and the synthetic GPBAR1 agonist, L3740, selectively increased L-cell density in mouse and human intestinal organoids and elevated GLP-1 secretory capacity. L3740 induced expression of Gcg and transcription factors Ngn3 and NeuroD1. L3740 also increased the L-cell number and GLP-1 levels and improved glucose tolerance in vivo. Further mechanistic examination revealed that the effect of L3740 on L cells required intact GLP-1 receptor and serotonin 5-hydroxytryptamine receptor 4 (5-HT4) signaling. Importantly, serotonin signaling through 5-HT4 mimicked the effects of L3740, acting downstream of GLP-1. Thus, GPBAR1 agonists and other powerful GLP-1 secretagogues facilitate L-cell differentiation through a paracrine GLP-1–dependent and serotonin-mediated mechanism.




signal

Inhibition of NFAT Signaling Restores Microvascular Endothelial Function in Diabetic Mice

Central to the development of diabetic macro- and microvascular disease is endothelial dysfunction, which appears well before any clinical sign but, importantly, is potentially reversible. We previously demonstrated that hyperglycemia activates nuclear factor of activated T cells (NFAT) in conduit and medium-sized resistance arteries and that NFAT blockade abolishes diabetes-driven aggravation of atherosclerosis. In this study, we test whether NFAT plays a role in the development of endothelial dysfunction in diabetes. NFAT-dependent transcriptional activity was elevated in skin microvessels of diabetic Akita (Ins2+/–) mice when compared with nondiabetic littermates. Treatment of diabetic mice with the NFAT blocker A-285222 reduced NFATc3 nuclear accumulation and NFAT-luciferase transcriptional activity in skin microvessels, resulting in improved microvascular function, as assessed by laser Doppler imaging and iontophoresis of acetylcholine and localized heating. This improvement was abolished by pretreatment with the nitric oxide (NO) synthase inhibitor l-NG-nitro-l-arginine methyl ester, while iontophoresis of the NO donor sodium nitroprusside eliminated the observed differences. A-285222 treatment enhanced dermis endothelial NO synthase expression and plasma NO levels of diabetic mice. It also prevented induction of inflammatory cytokines interleukin-6 and osteopontin, lowered plasma endothelin-1 and blood pressure, and improved mouse survival without affecting blood glucose. In vivo inhibition of NFAT may represent a novel therapeutic modality to preserve endothelial function in diabetes.




signal

β-Cell Stress Shapes CTL Immune Recognition of Preproinsulin Signal Peptide by Posttranscriptional Regulation of Endoplasmic Reticulum Aminopeptidase 1

The signal peptide of preproinsulin is a major source for HLA class I autoantigen epitopes implicated in CD8 T cell (CTL)–mediated β-cell destruction in type 1 diabetes (T1D). Among them, the 10-mer epitope located at the C-terminal end of the signal peptide was found to be the most prevalent in patients with recent-onset T1D. While the combined action of signal peptide peptidase and endoplasmic reticulum (ER) aminopeptidase 1 (ERAP1) is required for processing of the signal peptide, the mechanisms controlling signal peptide trimming and the contribution of the T1D inflammatory milieu on these mechanisms are unknown. Here, we show in human β-cells that ER stress regulates ERAP1 gene expression at posttranscriptional level via the IRE1α/miR-17-5p axis and demonstrate that inhibition of the IRE1α activity impairs processing of preproinsulin signal peptide antigen and its recognition by specific autoreactive CTLs during inflammation. These results underscore the impact of ER stress in the increased visibility of β-cells to the immune system and position the IRE1α/miR-17 pathway as a central component in β-cell destruction processes and as a potential target for the treatment of autoimmune T1D.




signal

microRNA-21/PDCD4 Proapoptotic Signaling From Circulating CD34+ Cells to Vascular Endothelial Cells: A Potential Contributor to Adverse Cardiovascular Outcomes in Patients With Critical Limb Ischemia

OBJECTIVE

In patients with type 2 diabetes (T2D) and critical limb ischemia (CLI), migration of circulating CD34+ cells predicted cardiovascular mortality at 18 months after revascularization. This study aimed to provide long-term validation and mechanistic understanding of the biomarker.

RESEARCH DESIGN AND METHODS

The association between CD34+ cell migration and cardiovascular mortality was reassessed at 6 years after revascularization. In a new series of T2D-CLI and control subjects, immuno-sorted bone marrow CD34+ cells were profiled for miRNA expression and assessed for apoptosis and angiogenesis activity. The differentially regulated miRNA-21 and its proapoptotic target, PDCD4, were titrated to verify their contribution in transferring damaging signals from CD34+ cells to endothelial cells.

RESULTS

Multivariable regression analysis confirmed that CD34+ cell migration forecasts long-term cardiovascular mortality. CD34+ cells from T2D-CLI patients were more apoptotic and less proangiogenic than control subjects and featured miRNA-21 downregulation, modulation of several long noncoding RNAs acting as miRNA-21 sponges, and upregulation of the miRNA-21 proapoptotic target PDCD4. Silencing miR-21 in control subject CD34+ cells phenocopied the T2D-CLI cell behavior. In coculture, T2D-CLI CD34+ cells imprinted naïve endothelial cells, increasing apoptosis, reducing network formation, and modulating the TUG1 sponge/miRNA-21/PDCD4 axis. Silencing PDCD4 or scavenging reactive oxygen species protected endothelial cells from the negative influence of T2D-CLI CD34+ cells.

CONCLUSIONS

Migration of CD34+ cells predicts long-term cardiovascular mortality in T2D-CLI patients. An altered paracrine signaling conveys antiangiogenic and proapoptotic features from CD34+ cells to the endothelium. This damaging interaction may increase the risk for life-threatening complications.




signal

On polyhedral estimation of signals via indirect observations

Anatoli Juditsky, Arkadi Nemirovski.

Source: Electronic Journal of Statistics, Volume 14, Number 1, 458--502.

Abstract:
We consider the problem of recovering a linear image of an unknown signal belonging to a given convex compact signal set from a noisy observation of another linear image of the signal. We develop a simple, generic, efficiently computable “polyhedral” estimate that is nonlinear in the observations, along with computation-friendly techniques for its design and risk analysis. We demonstrate that under favorable circumstances the resulting estimate is provably near-optimal in the minimax sense, the “favorable circumstances” being less restrictive than the weakest assumptions known so far that ensure near-optimality of estimates which are linear in the observations.
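
For readers less familiar with this setting, the indirect-observation model behind the abstract can be sketched as follows (a hedged paraphrase in our own notation, not the authors' exact formulation): the unknown signal x lies in a convex compact set, we observe a noisy linear image of it, and we wish to recover another linear image Bx.

\[
\omega = Ax + \sigma\xi, \qquad x \in \mathcal{X} \subset \mathbb{R}^n, \qquad \xi \sim \mathcal{N}(0, I_m),
\]
\[
\widehat{w}(\omega) \approx Bx, \qquad \mathrm{Risk}(\widehat{w}) = \sup_{x \in \mathcal{X}} \mathbb{E}\,\lVert \widehat{w}(\omega) - Bx \rVert.
\]

Roughly speaking, a polyhedral estimate aggregates the observation through a finite collection of linear forms and then picks a feasible signal consistent with them, which is what makes the estimate nonlinear in ω even though each building block is linear.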




signal

Nonparametric false discovery rate control for identifying simultaneous signals

Sihai Dave Zhao, Yet Tien Nguyen.

Source: Electronic Journal of Statistics, Volume 14, Number 1, 110--142.

Abstract:
It is frequently of interest to identify simultaneous signals, defined as features that exhibit statistical significance across each of several independent experiments. For example, genes that are consistently differentially expressed across experiments in different animal species can reveal evolutionarily conserved biological mechanisms. However, in some problems the test statistics corresponding to these features can have complicated or unknown null distributions. This paper proposes a novel nonparametric false discovery rate control procedure that can identify simultaneous signals even without knowing these null distributions. The method is shown, theoretically and in simulations, to asymptotically control the false discovery rate. It was also used to identify genes that were both differentially expressed and proximal to differentially accessible chromatin in the brains of mice exposed to a conspecific intruder. The proposed method is available in the R package at github.com/sdzhao/ssa.
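
As a hedged sketch (our notation, not necessarily the paper's): for feature j = 1, ..., p measured in K independent experiments, let θ_{jk} = 1 if feature j is non-null in experiment k and 0 otherwise. A simultaneous signal is a feature that is non-null in every experiment, and the error rate being controlled is the usual false discovery rate over the set D of features declared simultaneous signals:

\[
H_{0j}:\ \prod_{k=1}^{K}\theta_{jk} = 0
\quad\text{vs.}\quad
H_{1j}:\ \prod_{k=1}^{K}\theta_{jk} = 1,
\qquad
\mathrm{FDR} = \mathbb{E}\!\left[\frac{\#\{\, j \in \mathcal{D} : \prod_{k}\theta_{jk} = 0 \,\}}{\max(\#\mathcal{D},\,1)}\right].
\]

The contribution described above is controlling this quantity without having to specify the null distributions of the per-experiment test statistics.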




signal

Estimating piecewise monotone signals

Kentaro Minami.

Source: Electronic Journal of Statistics, Volume 14, Number 1, 1508--1576.

Abstract:
We study the problem of estimating piecewise monotone vectors. This problem can be seen as a generalization of the isotonic regression that allows a small number of order-violating changepoints. We focus mainly on the performance of the nearly-isotonic regression proposed by Tibshirani et al. (2011). We derive risk bounds for the nearly-isotonic regression estimators that are adaptive to piecewise monotone signals. The estimator achieves a near minimax convergence rate over certain classes of piecewise monotone signals under a weak assumption. Furthermore, we present an algorithm that can be applied to the nearly-isotonic type estimators on general weighted graphs. The simulation results suggest that the nearly-isotonic regression performs as well as the ideal estimator that knows the true positions of changepoints.
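
For context, the nearly-isotonic regression estimator referred to above is usually written as the following penalized least-squares problem (our transcription of the standard formulation from Tibshirani et al. (2011)):

\[
\widehat{\beta}_\lambda = \arg\min_{\beta \in \mathbb{R}^n}\ \frac{1}{2}\sum_{i=1}^{n}(y_i - \beta_i)^2 \;+\; \lambda\sum_{i=1}^{n-1}(\beta_i - \beta_{i+1})_+,
\qquad (x)_+ = \max(x, 0).
\]

The penalty charges only for downward, order-violating steps, so a finite λ tolerates a small number of changepoints, while λ → ∞ recovers ordinary isotonic regression.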




signal

Noise Accumulation in High Dimensional Classification and Total Signal Index

Great attention has been paid to Big Data in recent years. Such data hold promise for scientific discoveries but also pose challenges to analyses. One potential challenge is noise accumulation. In this paper, we explore noise accumulation in high dimensional two-group classification. First, we revisit a previous assessment of noise accumulation with principal component analyses, which yields a different threshold for discriminative ability than originally identified. Then we extend our scope to its impact on classifiers developed with three common machine learning approaches---random forest, support vector machine, and boosted classification trees. We simulate four scenarios with differing amounts of signal strength to evaluate each method. After determining that noise accumulation may affect the performance of these classifiers, we assess factors that impact it. We conduct simulations by varying sample size, signal strength, signal strength proportional to the number of predictors, and signal magnitude with random forest classifiers. These simulations suggest that noise accumulation affects the discriminative ability of high-dimensional classifiers developed using common machine learning methods, which can be modified by sample size, signal strength, and signal magnitude. We developed the measure total signal index (TSI) to track the trends of total signal and noise accumulation.
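
As a small, hedged illustration of the phenomenon (this is not the authors' simulation design; the data-generating choices below are ours), one can watch a random forest lose discriminative ability as pure-noise features are appended to a fixed low-dimensional signal:

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hedged sketch: a few informative features with a mean shift between groups,
# padded with an increasing number of pure-noise features.
rng = np.random.default_rng(0)
n, n_signal, delta = 400, 10, 0.6   # sample size, informative features, mean shift

def simulate(n_noise):
    """Two-group data: n_signal shifted features plus n_noise N(0,1) features."""
    y = rng.integers(0, 2, size=n)
    signal = rng.normal(loc=np.outer(y, np.full(n_signal, delta)), scale=1.0)
    noise = rng.normal(size=(n, n_noise))
    return np.hstack([signal, noise]), y

for n_noise in [0, 100, 1000, 10000]:
    X, y = simulate(n_noise)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    acc = accuracy_score(y_te, clf.predict(X_te))
    print(f"noise features = {n_noise:>6d}   test accuracy = {acc:.3f}")

With the sample size held fixed, test accuracy typically drifts toward chance as the noise dimension grows, which is the qualitative behavior the abstract describes.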




signal

A unified treatment for non-asymptotic and asymptotic approaches to minimax signal detection

Clément Marteau, Theofanis Sapatinas.

Source: Statistics Surveys, Volume 9, 253--297.

Abstract:
We are concerned with minimax signal detection. In this setting, we discuss non-asymptotic and asymptotic approaches through a unified treatment. In particular, we consider a Gaussian sequence model that contains classical models as special cases, such as direct, well-posed inverse and ill-posed inverse problems. Working with certain ellipsoids in the space of squared-summable sequences of real numbers, with a ball of positive radius removed, we compare the construction of lower and upper bounds for the minimax separation radius (non-asymptotic approach) and the minimax separation rate (asymptotic approach) that have been proposed in the literature. Some additional contributions, bringing to light links between non-asymptotic and asymptotic approaches to minimax signal detection, are also presented. An example of a mildly ill-posed inverse problem is used for illustrative purposes. In particular, it is shown that tools used to derive ‘asymptotic’ results can be exploited to draw ‘non-asymptotic’ conclusions, and vice-versa. In order to enhance our understanding of these two minimax signal detection paradigms, we bring to light hitherto unknown similarities and links between non-asymptotic and asymptotic approaches.
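
For orientation, the Gaussian sequence model and the detection problem discussed above are typically set up as follows (a hedged sketch in our notation):

\[
y_j = \theta_j + \varepsilon\,\xi_j, \qquad \xi_j \overset{\text{iid}}{\sim} \mathcal{N}(0,1), \qquad j = 1, 2, \dots,
\]
\[
H_0:\ \theta = 0
\quad\text{vs.}\quad
H_1:\ \theta \in \Theta,\ \lVert\theta\rVert \ge r,
\qquad
\Theta = \Bigl\{\theta \in \ell^2 : \sum_j a_j^2\theta_j^2 \le R^2\Bigr\},
\]

where the sequence (a_j) encodes the smoothness class and, for inverse problems, the degree of ill-posedness. The minimax separation radius is then the smallest r for which some test keeps the sum of the type I error and the worst-case type II error below a prescribed level, and the separation rate describes how that radius behaves as ε → 0.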




signal

Capturing and Explaining Trajectory Singularities using Composite Signal Neural Networks. (arXiv:2003.10810v2 [cs.LG] UPDATED)

Spatial trajectories are ubiquitous and complex signals. Their analysis is crucial in many research fields, from urban planning to neuroscience. Several approaches have been proposed to cluster trajectories. They rely on hand-crafted features, which struggle to capture the spatio-temporal complexity of the signal, or on Artificial Neural Networks (ANNs) which can be more efficient but less interpretable. In this paper, we present a novel ANN architecture designed to capture the spatio-temporal patterns characteristic of a set of trajectories, while taking into account the demographics of the navigators. Hence, our model extracts markers linked to both behaviour and demographics. We propose a composite signal analyser (CompSNN) combining three simple ANN modules. Each of these modules uses different signal representations of the trajectory while remaining interpretable. Our CompSNN performs significantly better than its modules taken in isolation and allows us to visualise which parts of the signal were most useful to discriminate the trajectories.
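
The paper's exact architecture is not reproduced here; purely as a hedged illustration of the general idea of combining simple modules that each see a different representation of the same trajectory, a minimal PyTorch sketch might look like this (module choices, dimensions, and names are ours, not the CompSNN design):

import torch
import torch.nn as nn

class CompositeTrajectoryNet(nn.Module):
    """Three small branches, each fed a different view of one trajectory."""
    def __init__(self, seq_len=128, hidden=32, n_classes=4):
        super().__init__()
        # Branch 1: raw (x, y) coordinates.
        self.raw_net = nn.Sequential(nn.Flatten(), nn.Linear(2 * seq_len, hidden), nn.ReLU())
        # Branch 2: step-wise displacements (a crude speed/heading view).
        self.diff_net = nn.Sequential(nn.Flatten(), nn.Linear(2 * (seq_len - 1), hidden), nn.ReLU())
        # Branch 3: magnitude spectrum of each coordinate (a frequency-domain view).
        self.freq_net = nn.Sequential(nn.Flatten(), nn.Linear(2 * (seq_len // 2 + 1), hidden), nn.ReLU())
        self.head = nn.Linear(3 * hidden, n_classes)

    def forward(self, traj):                      # traj: (batch, seq_len, 2)
        raw = self.raw_net(traj)
        diffs = self.diff_net(traj[:, 1:, :] - traj[:, :-1, :])
        freq = self.freq_net(torch.fft.rfft(traj, dim=1).abs())
        return self.head(torch.cat([raw, diffs, freq], dim=1))

model = CompositeTrajectoryNet()
logits = model(torch.randn(8, 128, 2))            # 8 dummy trajectories
print(logits.shape)                               # torch.Size([8, 4])

Keeping each branch small and tied to one interpretable view of the signal is the spirit of the interpretability argument made in the abstract.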




signal

Tumor microenvironment: signaling pathways.

9783030355821 (electronic bk.)




signal

Microbial cyclic di-nucleotide signaling

9783030333089




signal

Encyclopedia of signaling molecules

9781461464389 (electronic bk.)




signal

Calcium signaling

9783030124571 (electronic bk.)




signal

Objective Bayes model selection of Gaussian interventional essential graphs for the identification of signaling pathways

Federico Castelletti, Guido Consonni.

Source: The Annals of Applied Statistics, Volume 13, Number 4, 2289--2311.

Abstract:
A signalling pathway is a sequence of chemical reactions initiated by a stimulus which in turn affects a receptor, and then through some intermediate steps cascades down to the final cell response. Based on the technique of flow cytometry, samples of cell-by-cell measurements are collected under each experimental condition, resulting in a collection of interventional data (assuming no latent variables are involved). Usually several external interventions are applied at different points of the pathway, the ultimate aim being the structural recovery of the underlying signalling network which we model as a causal Directed Acyclic Graph (DAG) using intervention calculus. The advantage of using interventional data, rather than purely observational data, is that identifiability of the true data generating DAG is enhanced. More technically, a Markov equivalence class of DAGs, whose members are statistically indistinguishable based on observational data alone, can be further decomposed, using additional interventional data, into smaller distinct Interventional Markov equivalence classes. We present a Bayesian methodology for structural learning of Interventional Markov equivalence classes based on observational and interventional samples of multivariate Gaussian observations. Our approach is objective, meaning that it is based on default parameter priors requiring no personal elicitation; some flexibility is however allowed through a tuning parameter which regulates sparsity in the prior on model space. Based on an analytical expression for the marginal likelihood of a given Interventional Essential Graph, and a suitable MCMC scheme, our analysis produces an approximate posterior distribution on the space of Interventional Markov equivalence classes, which can be used to provide uncertainty quantification for features of substantive scientific interest, such as the posterior probability of inclusion of selected edges, or paths.
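
For readers new to intervention calculus, the key object can be sketched with the usual truncated factorization (our notation, under the no-latent-variables assumption stated above). For a DAG D over variables X_1, ..., X_q and an intervention fixing the variables in a target set I:

\[
f\bigl(x_1,\dots,x_q \mid \mathrm{do}(X_I = \tilde{x}_I)\bigr)
= \prod_{j \notin I} f\bigl(x_j \mid x_{\mathrm{pa}_{\mathcal{D}}(j)}\bigr)\Big|_{x_I = \tilde{x}_I}.
\]

Two DAGs that are indistinguishable from observational data alone may imply different interventional distributions of this form, which is why a family of intervention targets splits an observational Markov equivalence class into the smaller interventional classes that the method searches over.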




signal

Correction: Sequerra, Goyal et al., "NMDA Receptor Signaling Is Important for Neural Tube Formation and for Preventing Antiepileptic Drug-Induced Neural Tube Defects"




signal

Nitric Oxide Signaling Strengthens Inhibitory Synapses of Cerebellar Molecular Layer Interneurons through a GABARAP-Dependent Mechanism

Nitric oxide (NO) is an important signaling molecule that fulfills diverse functional roles as a neurotransmitter or diffusible second messenger in the developing and adult CNS. Although the impact of NO on different behaviors such as movement, sleep, learning, and memory has been well documented, the identity of its molecular and cellular targets is still an area of ongoing investigation. Here, we identify a novel role for NO in strengthening inhibitory GABAA receptor-mediated transmission in molecular layer interneurons of the mouse cerebellum. NO levels are elevated by the activity of neuronal NO synthase (nNOS) following Ca2+ entry through extrasynaptic NMDA-type ionotropic glutamate receptors (NMDARs). NO activates protein kinase G with the subsequent production of cGMP, which prompts the stimulation of NADPH oxidase and protein kinase C (PKC). The activation of PKC promotes the selective strengthening of α3-containing GABAARs synapses through a GABA receptor-associated protein-dependent mechanism. Given the widespread but cell type-specific expression of the NMDAR/nNOS complex in the mammalian brain, our data suggest that NMDARs may uniquely strengthen inhibitory GABAergic transmission in these cells through a novel NO-mediated pathway.

SIGNIFICANCE STATEMENT Long-term changes in the efficacy of GABAergic transmission is mediated by multiple presynaptic and postsynaptic mechanisms. A prominent pathway involves crosstalk between excitatory and inhibitory synapses whereby Ca2+-entering through postsynaptic NMDARs promotes the recruitment and strengthening of GABAA receptor synapses via Ca2+/calmodulin-dependent protein kinase II. Although Ca2+ transport by NMDARs is also tightly coupled to nNOS activity and NO production, it has yet to be determined whether this pathway affects inhibitory synapses. Here, we show that activation of NMDARs trigger a NO-dependent pathway that strengthens inhibitory GABAergic synapses of cerebellar molecular layer interneurons. Given the widespread expression of NMDARs and nNOS in the mammalian brain, we speculate that NO control of GABAergic synapse efficacy may be more widespread than has been appreciated.




signal

The Correlation of Neuronal Signals with Behavior at Different Levels of Visual Cortex and Their Relative Reliability for Behavioral Decisions

Behavior can be guided by neuronal activity in visual, auditory, or somatosensory cerebral cortex, depending on task requirements. In contrast to this flexible access of cortical signals, several observations suggest that behaviors depend more on neurons in later areas of visual cortex than those in earlier areas, although neurons in earlier areas would provide more reliable signals for many tasks. We recorded from neurons in different levels of visual cortex of 2 male rhesus monkeys while the animals did a visual discrimination task and examined trial-to-trial correlations between neuronal and behavioral responses. These correlations became stronger in primary visual cortex as neuronal signals in that area became more reliable relative to the other areas. The results suggest that the mechanisms that read signals from cortex might access any cortical area depending on the relative value of those signals for the task at hand.

SIGNIFICANCE STATEMENT Information is encoded by the action potentials of neurons in various cortical areas in a hierarchical manner such that increasingly complex stimulus features are encoded in successive stages. The brain must extract information from the response of appropriate neurons to drive optimal behavior. A widely held view of this decoding process is that the brain relies on the output of later cortical areas to make decisions, although neurons in earlier areas can provide more reliable signals. We examined correlations between perceptual decisions and the responses of neurons in different levels of monkey visual cortex. The results suggest that the brain may access signals in any cortical area depending on the relative value of those signals for the task at hand.




signal

Molecular Mechanisms of Non-ionotropic NMDA Receptor Signaling in Dendritic Spine Shrinkage

Structural plasticity of dendritic spines is a key component of the refinement of synaptic connections during learning. Recent studies highlight a novel role for the NMDA receptor (NMDAR), independent of ion flow, in driving spine shrinkage and LTD. Yet little is known about the molecular mechanisms that link conformational changes in the NMDAR to changes in spine size and synaptic strength. Here, using two-photon glutamate uncaging to induce plasticity at individual dendritic spines on hippocampal CA1 neurons from mice and rats of both sexes, we demonstrate that p38 MAPK is generally required downstream of non-ionotropic NMDAR signaling to drive both spine shrinkage and LTD. In a series of pharmacological and molecular genetic experiments, we identify key components of the non-ionotropic NMDAR signaling pathway driving dendritic spine shrinkage, including the interaction between NOS1AP (nitric oxide synthase 1 adaptor protein) and neuronal nitric oxide synthase (nNOS), nNOS enzymatic activity, activation of MK2 (MAPK-activated protein kinase 2) and cofilin, and signaling through CaMKII. Our results represent a large step forward in delineating the molecular mechanisms of non-ionotropic NMDAR signaling that can drive shrinkage and elimination of dendritic spines during synaptic plasticity.

SIGNIFICANCE STATEMENT Signaling through the NMDA receptor (NMDAR) is vitally important for the synaptic plasticity that underlies learning. Recent studies highlight a novel role for the NMDAR, independent of ion flow, in driving synaptic weakening and dendritic spine shrinkage during synaptic plasticity. Here, we delineate several key components of the molecular pathway that links conformational signaling through the NMDAR to dendritic spine shrinkage during synaptic plasticity.




signal

10 Ways to Boost Your Wi-Fi Signal

Check out these quick tips to boost your wireless signal from your router, extend and optimize your Wi-Fi coverage, and speed up your surfing.




signal

Grants for New Assessment Systems Signal the End of the Big Test

The Assessment for Learning Project, a partnership between the Center for Innovation in Education and Next Generation Learning Challenges, awarded twelve grants totaling $2 million for rethinking assessment.




signal

DJI Drones to Warn They're Near by Sending Wi-Fi Signals to Phones

The leading drone vendor developed the system to address safety and privacy concerns. The US Federal Aviation Administration is also drafting a rule that'll require all consumer drones to offer 'remote identification,' or what's basically an electronic license plate.




signal

TRAFFIC ALERT - Closure of North Broad Street for Railroad Crossing Maintenance and Signal Upgrades

Middletown --

Location: North Broad Street (Railroad Crossing) between Cedar Lane Road and Middletown Warwick Road/Summit Bridge Road, Middletown.

Times and Dates: 9:00 p.m. on Wednesday, March 18, 2020 until 5:00 a.m. on Thursday, April 2, 2020

Traffic Information: DelDOT announces to motorists that Delmarva Central Railroad will be replacing the railroad crossing, performing railroad signal upgrades, and doing general maintenance on their railroad crossing on North Broad Street. DelDOT's crews will be reconstructing the existing traffic signals. [More]




signal

TRAFFIC ALERT - Signal Work Will Require Daytime and Nighttime Lane Closures at US 13 and Voshells Mill Road

Camden-Wyoming --

Location: US 13 northbound/southbound at Voshells Mill Road, Camden-Wyoming.

Times and Dates: 9:00 a.m. until 4:00 p.m. Weekdays and 9:00 p.m. until 5:00 a.m. [More]




signal

Delhi Violence: Police fumbling, NSA Ajit Doval steps in, signals PM message

Political observers see the fielding of Doval as the Prime Minister’s assertion on the issue of developments linked to changes in the citizenship law which have triggered protests across the country.




signal

Trump Says He’s ‘Torn’ on China Deal as Advisers Signal Harmony on Trade

The president’s comments, coming just hours after advisers said the agreement was on track, indicate an increasingly unstable relationship.




signal

stretching LOW pulse signal for extra 100ns

Hello, I have a logic output from a D flip-flop which generates a reset signal with a variable pulse width. I want to stretch this LOW pulse by adding an extra 100 ns to the original pulse width, digitally. Is there any way to do that?




signal

How to Set Up and Plot Large-Signal S Parameters?

Large-signal S-parameters (LSSPs) are an extension of small-signal S-parameters and are defined as the ratio of reflected (or transmitted) waves to incident waves. (read more)
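
As a quick refresher (standard two-port definitions, not specific to any particular tool), with incident waves a_1, a_2 and reflected/transmitted waves b_1, b_2:

\[
S_{11} = \frac{b_1}{a_1}\Big|_{a_2=0}, \quad
S_{21} = \frac{b_2}{a_1}\Big|_{a_2=0}, \quad
S_{12} = \frac{b_1}{a_2}\Big|_{a_1=0}, \quad
S_{22} = \frac{b_2}{a_2}\Big|_{a_1=0}.
\]

For large-signal S-parameters these ratios are typically extracted from a nonlinear (e.g., harmonic-balance) analysis at a specified drive level, so unlike their small-signal counterparts they depend on input power as well as frequency.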




signal

New Rapid Adoption Kit (RAK) Enables Productive Mixed-Signal, Low Power Structural Verification

All engineers can enhance their mixed-signal, low-power structural verification productivity by learning while doing with a PIEA RAK (Power Intent Export Assistant Rapid Adoption Kit). They can verify the mixed-signal chip by automatically generating a macromodel for their analog block and running it through Conformal Low Power (CLP) to perform a low-power structural check.

The power structure integrity of a mixed-signal, low-power block is verified via Conformal Low Power integrated into the Virtuoso Schematic Editor Power Intent Export Assistant (VSE-PIEA). Here is the flow.

 

Applying the flow iteratively from lower to higher levels can verify the power structure.

Cadence customers can learn more in a Rapid Adoption Kit (RAK) titled IC 6.1.5 Virtuoso Schematic Editor XL PIEA, Conformal Low Power: Mixed-Signal Low Power Structural Verification.

The RAK includes a demo design (instructions are provided on how to set up the user environment). It introduces the Power Intent Export Assistant (PIEA) feature implemented in the Virtuoso IC615 release. The extracted power intent is then verified by calling Conformal Low Power (CLP) inside the Virtuoso environment.

  • Last Update: 11/15/2012.
  • Validated with IC 6.1.5 and CLP 11.1

The RAK uses a sample test case to go through PIEA + CLP flow as follows:

  • Setup for PIEA
  • Perform power intent extraction
  • CPF import: it is recommended to import macro CPF, as opposed to design CPF for sub-blocks. If you choose to import design CPF files, please make sure the design CPF file has power domain information for all the top-level boundary ports
  • Generate macro CPF and design CPF
  • Perform low power verification by running CLP

It is also recommended to go through older RAKs as prerequisites.

  • Conformal Low Power, RTL Compiler and Incisive: Low Power Verification for Beginners
  • Conformal Low Power: CPF Macro Models
  • Conformal Low Power and RTL Compiler: Low Power Verification for Advanced Users

To access all these RAKs, visit our RAK Home Page for the Synthesis, Test, and Verification flows.

Note: To access the above documents, use your Cadence credentials to log on to the Cadence Online Support (COS) website. Cadence Online Support (https://support.cadence.com/) is your 24/7 partner for getting help and resolving issues related to Cadence software. If you are signed up for e-mail notifications, you can receive new solutions, Application Notes (Technical Papers), Videos, Manuals, and more.

You can send us your feedback by adding a comment below or using the feedback box on Cadence Online Support.

Sumeet Aggarwal




signal

Mixed-signal and Low-power Demo -- Cadence Booth at DAC

DAC is right around the corner! On the demo floor at Cadence® Booth #2214, we will demonstrate how to use the Cadence mixed-signal and low-power solution to design, verify, and implement a microcontroller-based mixed-signal design. The demo design architecture is very similar to practical designs of many applications like power management ICs, automotive controllers, and the Internet of Things (IoT). Cadence tools demonstrated in this design include Virtuoso® Schematic Editor, Virtuoso Analog Design Environment, Virtuoso AMS Designer, Virtuoso Schematic Model Generator, Virtuoso Power Intent Assistant, Incisive® Enterprise Simulator with DMS option, Virtuoso Digital Implementation, Virtuoso Layout Suite, Encounter® RTL Compiler, Encounter Test, and Conformal Low Power. An extended version of this demo will also be shown at the ARM® Connected Community Pavilion Booth #921.

For additional highlights on Cadence mixed-signal and low-power solutions, stop by our booth for:

  • The popular book, Mixed-signal Methodology Guide, which will be on sale during DAC week!
  • A sneak preview of the eBook version of the Mixed-signal Methodology Guide
  • Customer presentations at the Cadence DAC Theater
    • 9am, Tuesday, June 4  ARM  Low-Power Verification of A15 Hard Macro Using CLP 
    • 10:30am, Tuesday, June 4  Silicon Labs  Power Mode Verification in Mixed-Signal Chip
    • 12:00pm, Tuesday, June 4  IBM  An Interoperable Flow with Unified OA and QRC Technology Files
    • 9am, Wednesday, June 5  Marvell  Low-Power Verification Using CLP
    • 4pm, Wednesday, June 5  Texas Instruments  An Inter-Operable Flow with Unified OA and QRC Technology Files
  • Partner presentations at the Cadence DAC Theater
    • 10am, Monday, June 3  X-Fab  Rapid Adoption of Advanced Cadence Design Flows Using X-FAB's AMS Reference Kit
    • 3:30pm, Monday, June 3  TSMC TSMC Custom Reference Flow for 20nm -  Cadence Track
    • 9:30am,Tuesday, June 4  TowerJazz   Substrate Noise Isolation Extraction/Model Using Cadence Analog Flow
    • 12:30pm, Wednesday, June 5  GLOBALFOUNDRIES  20nm/14nm Analog/Mixed-signal Flow
    • 2:30pm, Wednesday, June 5  ARM  Cortex®-M0 and Cortex-M0+: Tiny, Easy, and Energy-efficient Processors for Mixed-signal Applications
  • Technology sessions at suites
    • 10am, Monday, June 3    Low-power Verification of Mixed-signal Designs
    • 2pm, Monday, June 3      Advanced Implementation Techniques for Mixed-signal Designs
    • 2pm, Monday, June 3      LP Simulation: Are You Really Done?
    • 4pm, Monday, June 3      Power Format Update: Latest on CPF and IEEE 1801  
    • 11am, Wednesday, June 5   Mixed-signal Verification
    • 11am, Wednesday, June 5   LP Simulation: Are You Really Done?
    • 4pm, Wednesday, June 5   Successful RTL-to-GDSII Low-Power Design (FULL)
    • 5pm, Wednesday, June 5   Custom/AMS Design at Advanced Nodes

We will also have three presentations at the Si2 booth (#1427):

  • 10:30am, Monday, June 3   An Interoperable Implementation Solution for Mixed-signal Design
  • 11:30am, Tuesday, June 4   Low-power Verification for Mixed-signal Designs Using CPF
  • 10:30am, Wednesday, June 5   System-level Low-power Verification Using Palladium

 

We have a great program at DAC. Click the link for complete Cadence DAC Theater and Technology Sessions. Look forward to seeing you at DAC!     




signal

Simvision - Signal loading

Hi all 

Good day.

Can anyone tell me whether it is possible to view the signals once they are modified from their previous values without closing the SimVision window? If possible, kindly let me know the command for it (Linux).

Is it possible to view the schematic for the code written? Kindly instruct me.

 Thanks all.

S K S 




signal

Library Characterization Tidbits: Exploring Intuitive Means to Characterize Large Mixed-Signal Blocks

Let’s review a key feature of Cadence Liberate AMS Mixed-Signal Characterization that offers ease of use along with many other benefits, such as automated creation of standard Liberty models and throughput improvements of up to 20X. (read more)




signal

The Elephant in the Room: Mixed-Signal Models

Key Findings: Nearly 100% of SoCs are mixed-signal to some extent. Every one of these could benefit from the use of a metrics-driven unified verification methodology for mixed-signal (MD-UVM-MS), but the modeling step is the biggest hurdle to overcome. Without the magical models, the process either breaks down for lack of performance or leaves holes in the chip verification.

In the last installment of The Low Road, we were at the mixed-signal verification party. While no one talked about it, we all saw it: The party was raging and everyone was having a great time, but they were all dancing around that big elephant right in the middle of the room. For mixed-signal verification, that elephant is named Modeling.

To get to a fully verified SoC, the analog portions of the design have to run orders of magnitude faster than the speediest SPICE engine available. That means an abstraction of the behavior must be created. It puts a lot of people off when you tell them they have to do something extra to get done with something sooner. Guess what, it couldn’t be more true. If you want to keep dancing around like the elephant isn’t there, then enjoy your day. If you want to see about clearing the pachyderm from the dance floor, you’ll want to read on a little more….

Figure 1: The elephant in the room: who’s going to create the model?

 Whose job is it?

Modeling analog/mixed-signal behavior for use in SoC verification seems like the ultimate hot potato.  The analog team that creates the IP blocks says it doesn't have the expertise in digital verification to create a high-performance model. The digital designers say they don’t understand anything but ones and zeroes. The verification team, usually digitally-centric by background, are stuck in the middle (and have historically said “I just use the collateral from the design teams to do my job; I don’t create it”).

If there is an SoC verification team, then ensuring that the entire chip is verified ultimately rests upon their shoulders, whether or not they get all of the models they need from the various design teams for the project. That means that if a chip does not work because of a modeling error, it ought to point back to the verification team. If not, is it just a “systemic error” not accounted for in the methodology? That seems like a bad answer.

That all makes the most valuable guy in the room the engineer, whose knowledge spans the three worlds of analog, digital, and verification. There are a growing number of “mixed-signal verification engineers” found on SoC verification teams. Having a specialist appears to be the best approach to getting the job done, and done right.

So, my vote is for the verification team to step up and incorporate the expertise required to do a complete job of SoC verification, analog included. (I know my popularity probably did not soar with the attendees of DVCON with that statement, but the job has to get done).

It’s a game of trade-offs

The difference in computations required for continuous time versus discrete time behavior is orders of magnitude (as seen in Figure 2 below). The essential detail versus runtime tradeoff is a key enabler of verification techniques like software-driven testbenches. Abstraction is a lossy process, so care must be taken to fully understand the loss and test those elements in the appropriate domain (continuous time, frequency, etc.).
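
To make the orders-of-magnitude claim concrete, here is a toy, hedged comparison (the circuit, step sizes, and numbers are illustrative only, not a real simulator): the same RC low-pass solved "analog style" with a fine fixed time step versus a discrete-time behavioral model evaluated once per sample.

import numpy as np

R, C = 1e3, 1e-9                  # 1 kOhm, 1 nF  ->  tau = 1 us
tau = R * C
t_end = 1e-3                      # simulate 1 ms
dt_analog = 1e-9                  # 1 ns step for the "SPICE-like" solve
f_sample = 1e6                    # behavioral model evaluated at 1 MHz

def vin(t):                       # 10 kHz square-wave stimulus
    return 1.0 if (t * 1e4) % 1.0 < 0.5 else 0.0

# "Analog": explicit Euler on dv/dt = (vin - v) / tau with a tiny time step.
n_analog = int(t_end / dt_analog)
v = 0.0
for k in range(n_analog):
    v += dt_analog * (vin(k * dt_analog) - v) / tau

# "Behavioral": exact one-pole update applied once per sample period.
n_behav = int(t_end * f_sample)
alpha = 1.0 - np.exp(-1.0 / (f_sample * tau))
vb = 0.0
for k in range(n_behav):
    vb += alpha * (vin(k / f_sample) - vb)

print(f"analog-style evaluations: {n_analog:,}")   # 1,000,000
print(f"behavioral evaluations:   {n_behav:,}")    # 1,000
print(f"final outputs: analog = {v:.3f} V, behavioral = {vb:.3f} V")

Even in this trivial case the fine-timestep solve needs roughly a thousand times more evaluations for comparable end-to-end behavior, which is the runtime gap that behavioral abstraction is meant to close.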

Figure 2: Modeling is required for performance

 

AFE for instance

The traditional separation of baseband and analog front-end (AFE) chips has shifted for the past several years. Advances in process technology, analog-to-digital converters, and the desire for cost reduction have driven both a re-architecting and re-partitioning of the long-standing baseband/AFE solution. By moving more digital processing to the AFE, lower cost architectures can be created, as well as reducing those 130 or so PCB traces between the chips.

There is lots of good scholarly work from a few years back on this subject, such as Digital Compensation of Dynamic Acquisition Errors at the Front-End of ADCs and Digital Compensation for Analog Front-Ends: A New Approach to Wireless Transceiver Design.


Figure 3: AFE evolution from first reference (Parastoo)

The digital calibration and compensation can be achieved by the introduction of a programmable solution. This is in fact the most popular approach amongst the mobile crowd today. By using a microcontroller, the software algorithms become adaptable to process-related issues and modifications to protocol standards.

However, for the SoC verification team, their job just got a whole lot harder. To determine if the interplay of the digital control and the analog function is working correctly, the software algorithms must be simulated on the combination of the two. That is, here is a classic case of inseparable mixed-signal verification.

So, what needs to be in the model is the big question. And the answer is, a lot. For this example, the main sources of dynamic error at the front-end of ADCs are critical for the non-linear digital filtering that is highly frequency dependent. The correction scheme must be verified to show that the nonlinearities are cancelled across the entire bandwidth of the ADC. 

This all means lots of simulation. It means that the right level of detail must be retained to ensure the integrity of the verification process. This means that domain experience must be added to the list of expertise of that mixed-signal verification engineer.
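
As a toy, hedged illustration of the modeling idea (the front-end model and the correction scheme below are ours, and far simpler than the dynamic, frequency-dependent compensation discussed above):

import numpy as np

rng = np.random.default_rng(1)

def afe(v):
    """Stand-in 'analog' front-end: gain error plus mild cubic compression."""
    return 0.98 * v - 0.05 * v**3

# "Calibration": fit a digital inverse polynomial on a known ramp stimulus.
ramp = np.linspace(-1.0, 1.0, 1001)
coeffs = np.polyfit(afe(ramp), ramp, deg=5)    # maps distorted output back toward the ideal input

# Verification-style check across the full-scale range on random test inputs.
test = rng.uniform(-1.0, 1.0, size=10_000)
raw_err = np.max(np.abs(afe(test) - test))
corr_err = np.max(np.abs(np.polyval(coeffs, afe(test)) - test))
print(f"worst-case error before correction: {raw_err:.4f}")
print(f"worst-case error after correction:  {corr_err:.6f}")

A real flow would exercise the actual digital correction algorithm against a behavioral model that preserves the relevant dynamic nonlinearities across the ADC bandwidth; the point here is only that the analog behavior must be abstracted into something fast enough to co-simulate with the digital control.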

Back to the pachyderm

There is a lot more to say on this subject, and lots will be said in future posts. The important starting point is the recognition that the potential flaw in the system needs to be examined. It needs to be examined by a specialist.  Maybe a second opinion from the application domain is needed too.

So, put that cute little elephant on your desk as a reminder that the beast can be tamed.

 

 

Steve Carlson

Related stories

It’s Late, But the Party is Just Getting Started




signal

Five Reasons I'm Excited About Mixed-Signal Verification in 2015

Key Findings: Many more design teams will be reaching the mixed-signal methodology tipping point in 2015. That means you need to have a (verification) plan, and measure and execute against it.

As 2014 draws to a close, it is time to look ahead to the coming years and make a plan. While the macro view of the chip design world shows that it has been a mixed-signal world for a long time, it has been primarily the digital teams that have rapidly evolved their design and verification practices over the past decade. Well, I claim that is about to change. 2015 will be a watershed year for many more design teams because of the following factors:

  • 85% of designs are mixed signal, and it is going to stay that way (there is no turning back)
  • Advanced node drives new techniques, but they will be applied on all nodes
  • Equilibrium of mixed-signal designs being challenged, complexity raises risk level
  • Tipping point signs are evident and pervasive, things are going to change
  • The convergence of “big A” and “big D” demands true mixed-signal practices

Reason 1: Mixed-signal is dominant

To begin the examination of what is going to change and why, let’s start with what is not changing. IBS reports that mixed signal accounts for over 85% of chip design starts in 2014, and that percentage will rise, and hold steady at 85% in the coming years. It is a mixed-signal world and there is no turning back!

 

Figure 1. IBS: Mixed-signal design starts as percent of total

The foundational nature of mixed-signal designs in the semiconductor industry is well established. The reason it is exciting is that a stable foundation provides a platform for driving change. (It’s hard to drive on crumbling infrastructure.  If you’re from California, you know what I mean, between the potholes on the highways and the earthquakes and everything.)

Reason 2: Innovation in many directions, mostly mixed-signal applications

While the challenges being felt at the advanced nodes, such as double patterning and adoption of FinFET devices, have slowed some from following on to nodes past 28nm, innovation has just turned in different directions. Applications for the Internet of Things, automotive, and medical all have strong mixed-signal elements in their semiconductor content value proposition. What is critical to recognize is that many of the design techniques that were initially driven by advanced-node programs have merit across the spectrum of active semiconductor process technologies. For example, digitally controlled, calibrated, and compensated analog IP, along with power-reducing multi-supply domains, power shut-off, and state retention are being applied in many programs on “legacy” nodes.

Another graph from IBS shows that the design starts at 45nm and below will continue to grow at a healthy pace.  The data also shows that nodes from 65nm and larger will continue to comprise a strong majority of the overall starts. 


Figure 2.  IBS: Design starts per process node

TSMC made a comprehensive announcement in September related to “wearables” and the Internet of Things. From their press release:

TSMC’s ultra-low power process lineup expands from the existing 0.18-micron extremely low leakage (0.18eLL) and 90-nanometer ultra low leakage (90uLL) nodes, and 16-nanometer FinFET technology, to new offerings of 55-nanometer ultra-low power (55ULP), 40ULP and 28ULP, which support processing speeds of up to 1.2GHz. The wide spectrum of ultra-low power processes from 0.18-micron to 16-nanometer FinFET is ideally suited for a variety of smart and power-efficient applications in the IoT and wearable device markets. Radio frequency and embedded Flash memory capabilities are also available in 0.18um to 40nm ultra-low power technologies, enabling system level integration for smaller form factors as well as facilitating wireless connections among IoT products.

Compared with their previous low-power generations, TSMC’s ultra-low power processes can further reduce operating voltages by 20% to 30% to lower both active power and standby power consumption and enable significant increases in battery life—by 2X to 10X—when much smaller batteries are demanded in IoT/wearable applications.

The focus on power is quite evident and this means that all of the power management and reduction techniques used in advanced node designs will be coming to legacy nodes soon.

Integration and miniaturization are being pursued from the system-level in, as well as from the process side. Techniques for power reduction and system energy efficiency are central to innovations under way.  For mixed-signal program teams, this means there is an added dimension of complexity in the verification task. If this dimension is not methodologically addressed, the level of risk adds a new dimension as well.

Reason 3: Trends are pushing the limits of established design practices

Risk is the bane of every engineer, but without risk there is no progress. And, sometimes the amount of risk is not something that can be controlled. Figure 3 shows some of the forces at work that cause design teams to undertake more risk than they would ideally like. With price and form factor as primary value elements in many growing markets, integration of analog front-end (AFE) with digital processing is becoming commonplace.  

 

Figure 3.  Trends pushing mixed-signal out of equilibrium

The move to the sweet spot of manufacturing at 28nm enables more integration, while providing excellent power and performance parameters with the best cost per transistor. Variation becomes greater and harder to control. For analog design, this means more digital assistance for calibration and compensation. For greatest flexibility and resiliency, many will opt for embedding a microcontroller to perform the analog control functions in software. Finally, the first wave of leaders has already crossed the methodology bridge into true mixed-signal design and verification; those who do not follow are destined to fall farther behind.

Reason 4: The tipping point accelerants are catching fire

The factors cited in Reason 3 all have a technical grounding that serves to create pain in the chip-development process. The more factors that are present, the harder it is to ignore the pain and get the relief afforded by adopting known best practices for truly mixed-signal design (versus divide-and-conquer design along analog and digital lines).

In the past design performance was measured in MHz with simple static timing and power analysis. Design flows were conveniently partitioned, literally and figuratively, along analog and digital boundaries. Today, however, there are gigahertz digital signals that interact at the package and board level in analog-like ways. New, dynamic power analysis methods enabled by advanced library characterization must be melded into new design flows. These flows comprehend the growing amount of feedback between analog and digital functions that are becoming so interlocked as to be inseparable. This interlock necessitates design flows that include metrics-driven and software-driven testbenches, cross fabric analysis, electrically aware design, and database interoperability across analog and digital design environments.


Figure 4.  Tipping point indicators

Energy efficiency is a universal driver at this point. Be it cost of ownership in the data center or battery life in a cell phone or wearable device, using less power creates more value in end products. However, layering multiple energy management and optimization techniques on top of complex mixed-signal designs adds yet more complexity, demanding adoption of “modern” mixed-signal design practices.

Reason 5: Convergence of analog and digital design

Divide and conquer is always a powerful tool for complexity management. However, as the number of interactions across the divide increases, the sub-optimality of those boundaries becomes more evident. Convergence is the name of the game. Just as the analog and digital elements of chips are converging, so will the industry practices associated with dealing with the converged world.


Figure 5. Convergence drivers

Truly mixed-signal design is a discipline that unites the analog and digital domains. That means that there is a common/shared data set (versus forcing a single cockpit or user model on everyone). 

In verification, the modern saying is “start with the end in mind.” That means creating a formal plan of what will be tested, how it will be tested, and the metrics for success of those tests. Organizing the mechanics of testbench development using the Universal Verification Methodology (UVM) has proven benefits, and the mixed-signal elements of SoC verification are not exempt from them.
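As a rough illustration of those mechanics, the skeleton below registers a hypothetical UVM environment and test and launches them with run_test(); the class names are placeholders, and a real mixed-signal bench would add agents, sequences, a scoreboard, and the coverage model on top of this minimal frame.

  // Minimal UVM skeleton; class names (ams_env, ams_base_test) are hypothetical.
  // In a real project these classes would live in a package; they are inlined here for brevity.
  import uvm_pkg::*;
  `include "uvm_macros.svh"

  class ams_env extends uvm_env;
    `uvm_component_utils(ams_env)
    function new(string name, uvm_component parent);
      super.new(name, parent);
    endfunction
    // build_phase would create agents for the digital and (modeled) analog interfaces.
  endclass

  class ams_base_test extends uvm_test;
    `uvm_component_utils(ams_base_test)
    ams_env env;
    function new(string name, uvm_component parent);
      super.new(name, parent);
    endfunction
    virtual function void build_phase(uvm_phase phase);
      super.build_phase(phase);
      env = ams_env::type_id::create("env", this);
    endfunction
  endclass

  module tb_top;
    initial run_test("ams_base_test");  // or select the test with +UVM_TESTNAME
  endmodule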

Competition for semiconductor design teams is growing fiercer worldwide. Not being equipped with the best-known practices creates a competitive deficit that is hard to overcome with hard work alone. As the landscape of IC content moves toward a more energy-efficient, mixed-signal nature, the mounting risk posed by old methodologies may cause casualties in the coming year. Better to move forward with haste and create a position of strength from which differentiation and excellence in execution can be forged.

Summary

2015 is going to be a banner year for mixed-signal design and verification methodologies. Those that have forged ahead are in a position of execution advantage. Those that have not will be scrambling to catch up, but with the benefits of following a path that has been proven by many market leaders.



  • uvm
  • mixed signal design
  • Metric-Driven-Verification
  • Mixed Signal Verification
  • MDV-UVM-MS

signal

Top 5 Issues that Make Things Go Wrong in Mixed-Signal Verification

Key Findings: A host of issues can arise in mixed-signal verification. As discussed in earlier blogs, industry trends indicate that teams need to prepare themselves for a more mixed world. The good news is that these top five pitfalls are all avoidable.

It’s always interesting to study the human condition.  Watching the world through the lens of mixed-signal verification brings an interesting microcosm into focus.  The top 5 items that I regularly see vexing teams are:

  1. When there’s a bug, whose problem is it?
  2. Verification team is the lightning rod
  3. Three (conflicting) points of view
  4. Wait, there’s more… software
  5. There’s a whole new language

Reason 1: When there’s a bug, whose problem is it?

It actually turns out to be a good thing when a bug is found during the design process.  Much, much better than when the silicon arrives back from the foundry of course. Whether by sheer luck, or a structured approach to verification, sometimes a bug gets discovered. The trouble in mixed-signal design occurs when that bug is near the boundary of an analog and a digital domain.


Figure 1.   Whose bug is it?

Typically designers are a diligent sort and make sure that their block works as desired. However, when things go wrong during integration, it is usually also project crunch time. So, it has to be the other guy’s bug, right?

A step in the right direction is to have a third party, a mixed-signal verification expert, apply rigorous methods to the mixed-signal verification task.  But, that leads to number 2 on my list.

 

Reason 2: Verification team is the lightning rod

Having a dedicated verification team with mixed-signal expertise is a great start, but that team is typically hampered by the lack of a fast-executing model of the analog behavior (best practice today being a SystemVerilog real number model, or SV-RNM). That model is critical because it enables orders of magnitude more tests to be run against the design in the same timeframe.
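For readers who have not worked with one, here is a deliberately simplified sketch of what an SV-RNM block can look like, using a hypothetical LDO-style regulator; the module name, parameters, and first-order settling approximation are illustrative assumptions, not a characterized model of any real circuit.

  // Minimal SV-RNM sketch of a hypothetical LDO-style regulator. Electrical behavior
  // is reduced to a discrete-time, real-valued approximation so that large digital
  // regressions can run at near-RTL speed.
  module ldo_rnm #(
    parameter real VOUT_NOM = 1.8,   // nominal output voltage (V)
    parameter real TAU_NS   = 200.0  // first-order settling time constant (ns)
  ) (
    input  logic enable,
    input  real  vin,    // supply voltage from the board/package model (V)
    output real  vout    // regulated output consumed by other RNM blocks (V)
  );
    real target;

    // Regulate only when enabled and there is enough headroom above the nominal output.
    always_comb target = (enable && vin > VOUT_NOM + 0.2) ? VOUT_NOM : 0.0;

    // Crude first-order step response, updated on a fixed 1 ns timestep.
    initial vout = 0.0;
    always #1ns vout += (target - vout) * (1.0 / TAU_NS);
  endmodule

Because the model is just discrete-time SystemVerilog, it runs in the same digital simulation as the RTL, which is where the orders-of-magnitude speed-up comes from.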

Without that model, there will be a testing deficit. So, when the bugs come in, it is easy for everyone to point their finger at the verification team.


Figure 2.  It’s the verification team’s fault

Yes, the model creates a new task – the model itself must be validated – but the speed-up it enables more than compensates in terms of functional coverage and schedule.

The postscript on this finger-pointing is the institutionalization of SV-RNM. And, of course, the verification team gets its turn.


Figure 3.  Verification team’s revenge

 

Reason 3: Three (conflicting) points of view

The third common issue arises when the finger-pointing settles down. There is still a delineation of responsibility that is often not easy to achieve when designs of a truly mixed-signal nature are being undertaken.  


Figure 4.  Points of view and roles

Figure 4 outlines some of the delegated responsibility, but notice that everyone is still potentially on the hook to create a model. It is questions of purpose, expertise, bandwidth, and convention that go into the decision about who will “own” each model. It is not uncommon for the modeling task to be a collaborative effort where the expertise on analog behavior comes from the analog team, while the verification team ensures that the model is constructed in such a manner that it will fit seamlessly into the overall chip verification. Less commonly, the digital design team does the modeling simply to enable the verification of their own work.

Reason 4: Wait, there’s more… software

As if verifying the function of a chip were not hard enough, there is a clear trend toward product offerings that include software along with the chip. In the mixed-signal realm, this software often provides functions such as calibration and compensation, a flexible way of guarding against parameter drift. When the combination of the chip and the software is the product, they need to be verified together. This puts an enormous premium on a fast-executing SV-RNM.
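As a hedged sketch of what co-verifying that kind of calibration code against a fast analog model might look like, the loop below sweeps a trim register until a modeled residual offset falls inside a target window; the register name, the dac_rnm stand-in function, and the 5 mV threshold are all hypothetical.

  // Illustrative offset-calibration loop co-simulated against an RNM stand-in.
  // Register names, the dac_rnm function, and thresholds are hypothetical.
  module calib_tb;
    logic [3:0] trim_code;   // register that the "firmware" writes
    real        residual_v;  // analog quantity produced by the RNM stand-in (V)

    // Stand-in for an SV-RNM block: residual offset shrinks as the trim code increases.
    function automatic real dac_rnm(input logic [3:0] code);
      return 0.080 - 0.010 * code;
    endfunction

    // The calibration routine as the embedded software would run it: sweep the trim
    // register until the measured offset is inside the +/-5 mV target window.
    initial begin
      for (trim_code = 0; trim_code < 15; trim_code++) begin
        residual_v = dac_rnm(trim_code);
        #10ns;  // allow the modeled analog to settle
        if (residual_v < 0.005 && residual_v > -0.005) break;
      end
      $display("Calibration converged at trim code %0d (residual offset %g V)",
               trim_code, residual_v);
    end
  endmodule

Running the same algorithm against the model in simulation is what allows the chip-plus-software combination to be regressed long before silicon is available.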

 


Figure 5.  There’s software, analog, and digital

While the added dimension of software to the verification task creates new heights of complexity, it also serves as a very strong driver to get everyone aligned and motivated to adopt best known practices for mixed-signal verification.  This is an opportunity to show superior ability!


Figure 6.  Change in perspective, with the right methodology

 

Reason 5: There’s a whole new language

Communication is of vital importance in a multi-faceted, multi-team program.  Time zones, cultures, and personalities aside, mixed-signal verification needs to be a collaborative effort.  Terminology can be a big stumbling block in getting to a common understanding. If we take a look at the key areas where significant improvement can usually be made, we can start to see the breadth of knowledge that is required to “get” the entirety of the picture:

  • Structure – Verification planning and management
  • Methodology – UVM (Universal Verification Methodology – Accellera Standard)
  • Measure – MDV (Metrics-driven verification)
  • Multi-engine – Software, emulation, FPGA proto, formal, static, VIP
  • Modeling – SystemVerilog (discrete time) down to SPICE (continuous time)
  • Languages – SystemVerilog, Verilog, Verilog-AMS, VHDL, SPICE, PSL, CPF, UPF

Each of these areas has its own jumble of terminology and acronyms. It never hurts to create a team glossary to start with. Heck, I often get my LDO, IFV, and UDT all mixed up myself.

Summary

Yes, there are a lot of things that make it hard for the humans involved in the process of mixed-signal design and verification, but there is a lot that can be improved once the pain is felt (no pain, no gain is akin to no bugs, no verification methodology change). If we take a look at the key areas from the previous section, we can put a different lens on them and describe the value that they bring:

  • Structure – Uniformly organized, auditable, predictable, transparency
  • Methodology – Reusable, productive, portable, industry standard
  • Measure – Quantified progress, risk/quality management, precise goals
  • Multi-engine – Faster execution, improved schedule, enables new quality level
  • Modeling – Enabler, flexible, adaptable for diverse applications/design styles
  • Languages – Flexible, complete, robust, standard, scalability to best practices

With all of this value firmly in hand, we can turn our thoughts to happier words:

…  stay tuned for more!

 

 Steve Carlson




signal

Verifying Power Intent in Analog and Mixed-Signal Designs Using Formal Methods

Analog and mixed-signal (AMS) designs increasingly use active power management to minimize power consumption. A typical mixed-signal design uses several power domains and operates in a dozen or more power modes, including multiple functional, standby, and test modes. To save power, parts of the design that are not active in a given mode are shut down, or may operate at a reduced supply voltage when high performance is not required. These and other low-power techniques are applied to both the analog and digital parts of the design. Digital designers capture power intent in standard formats such as the Common Power Format (CPF), IEEE 1801 (also known as the Unified Power Format, or UPF), or Liberty, and apply it top-down throughout the design, verification, and implementation flows. Analog parts are often designed bottom-up in schematics, without power intent defined up front. Verifying that low-power intent is implemented correctly in a mixed-signal design is therefore very challenging. If not discovered early, errors such as wrongly connected power nets, missing level shifters, or missing isolation cells can cause costly rework or even a silicon re-spin.

Mixed-signal designers rely on simulation for functional verification. Although simulation is still necessary for electrical and performance verification, running it across so many power modes is not an effective way to discover low-power errors. It would be nice to augment simulation with formal low-power verification, but a specification of power intent for analog/mixed-signal blocks is missing. So how do we obtain it? Can we “extract” it from an already built analog circuit? Fortunately, yes we can, and we will describe an automated way to do so!

Virtuoso Power Manager is a new tool, released in the Virtuoso IC6.1.8 platform, that manages power intent in an analog/mixed-signal design captured in the Virtuoso Schematic Editor. In the setup phase, the user identifies power and ground nets and registers special devices such as level shifters and isolation cells. The user also has the option to import power intent in IEEE 1801 format, applicable to the top level or to any of the blocks in the design. Virtuoso Power Manager uses this information to traverse the schematic and extract the complete power intent for the entire design. In the final stage, it exports the power intent in IEEE 1801 format as an input to the formal verification tool (Cadence Conformal-LP) for static verification of the power intent.

Cadence and Infineon have been collaborating on the requirements and validation of the Virtuoso Power Manager tool and the low-power verification solution on real designs. A summary of the collaboration results was presented at the DVCon conference in Munich in October 2018; please look for the paper in the conference proceedings for more details. Alternatively, you can view our Cadence webinar on Verifying Low-Power Intent in Mixed-Signal Design Using Formal Method for more information.




signal

SETI Has Observed A Strong Signal From A Sun-Like Star





signal

Market Forces Signal Clean Energy’s Watershed Moment

Business leaders have an important decision to make this year: to continue operating under the status quo or to join the list of successful companies creating a more sustainable future by contracting or investing in renewable energy and making a positive impact on their brand, customers, employees and bottom line.




signal

Clinton’s Visit to Pacific Islands Forum Signals Renewed U.S. Engagement

By Charles E. Morrison

(Note: This commentary originally appeared in the Honolulu Star-Advertiser on Sept. 12, 2012)

It may not compare to APEC or the G-20 for global economic weight, but for the Pacific island nations, the annual Pacific Islands Forum summit is the premier regional meeting. It brings together heads of the island nations (including Australia and New Zealand) with representatives of international organizations and “dialogue partners,” including the United States, China, Japan and many others. For the Cook Islands, with less than 15,000 residents, hosting last week’s PIF was a rare event made especially significant by Secretary of State Hillary Clinton’s unprecedented stop to attend the post-meeting partner dialogue – the highest-level U.S. participation ever.