robust Letters: Trump keeps campaign promises by building a robust economy By rssfeeds.indystar.com Published On :: Thu, 06 Feb 2020 10:00:22 +0000 Keeping him in office prevents the left from destroying America with their socialistic ideology, a letter to the editor says. Full Article
robust Letters: Robust health care system needed to combat coronavirus threat By rssfeeds.indystar.com Published On :: Thu, 30 Apr 2020 10:00:10 +0000 Until we have a vaccine, the road to opening is through a health care system which can handle the infection, a letter to the editor says. Full Article
robust The Russian challenge demands a more robust Western strategy By feedproxy.google.com Published On :: Fri, 10 Jul 2015 09:23:28 +0000 4 June 2015 Photo: AP Photo/Alexander Zemlianichenko It is now clear that President Putin’s ‘new model Russia’ cannot be constructively accommodated into the international system. The war in Ukraine, in part the result of the West's laissez-faire approach to Russia, demonstrates the need for a new Western strategy towards Russia. The Russian Challenge - a major new report by six authors from the Russia and Eurasia Programme at Chatham House - argues that a new strategy must recognise that:
- The decline of the Russian economy, the costs of confrontation and the rise of China mean that the Putin regime is now facing the most serious challenge of its 15 years in power.
- The West has neither the wish nor the means to promote regime change in Russia. But Western countries need to consider the possible consequences of a chaotic end to the Putin system.
- A critical element in the new geo-economic competition between the West and Russia is the extent of Western support for Ukraine, whose reconstruction as an effective sovereign state, capable of standing up for itself, is crucial. This will require much greater resources than have been invested up until now.
- Russia has rapidly developed its armed forces and information warfare capabilities since the war in Georgia in 2008. The West must invest in defensive strategic communications and media support to counter the Kremlin’s false narratives and restore its conventional deterrent capabilities as a matter of urgency. In particular, NATO needs to demonstrate that the response to ‘ambiguous’ or ‘hybrid’ war will be robust.
- Sanctions are exerting economic pressure on the Russian leadership and should remain in place until Ukraine’s territorial integrity is properly restored. In particular, it is self-defeating to link the lifting of sanctions solely to implementation of the poorly crafted and inherently fragile Minsk accords.
- While deterrence and constraint are essential in the short term, the West must also prepare for an eventual change of leadership in Russia. There is a reasonable chance that current pressures will incline a future Russian leadership to want to re-engage with the West.
James Nixey, Head of the Russia and Eurasia Programme at Chatham House, said: 'Pursuing these goals and achieving these objectives will ensure that the West is better prepared for any further deterioration in relations with Russia. The events of the last 18 months have demonstrated conclusively that when dealing with Russia, optimism is not a strategy.' Editor's notes: Read the report The Russian Challenge from the Russia and Eurasia Programme, Chatham House. This report will be launched at an event at Chatham House on Friday 5 June. Full Article
robust Robust summarization and inference in proteome-wide label-free quantification By feedproxy.google.com Published On :: 2020-04-22 Adriaan Sticker. Apr 22, 2020; 0:RA119.001624v1-mcp.RA119.001624. Research. Full Article
robust Mass Spectrometry Based Immunopeptidomics Leads to Robust Predictions of Phosphorylated HLA Class I Ligands [Technological Innovation and Resources] By feedproxy.google.com Published On :: 2020-02-01T00:05:30-08:00 The presentation of peptides on class I human leukocyte antigen (HLA-I) molecules plays a central role in immune recognition of infected or malignant cells. In cancer, non-self HLA-I ligands can arise from many different alterations, including non-synonymous mutations, gene fusion, cancer-specific alternative mRNA splicing or aberrant post-translational modifications. Identifying HLA-I ligands remains a challenging task that requires either heavy experimental work for in vivo identification or optimized bioinformatics tools for accurate predictions. To date, no HLA-I ligand predictor includes post-translational modifications. To fill this gap, we curated phosphorylated HLA-I ligands from several immunopeptidomics studies (including six newly measured samples) covering 72 HLA-I alleles and retrieved a total of 2,066 unique phosphorylated peptides. We then expanded our motif deconvolution tool to identify precise binding motifs of phosphorylated HLA-I ligands. Our results reveal a clear enrichment of phosphorylated peptides among HLA-C ligands and demonstrate a prevalent role of both HLA-I motifs and kinase motifs on the presentation of phosphorylated peptides. These data further enabled us to develop and validate the first predictor of interactions between HLA-I molecules and phosphorylated peptides. Full Article
robust Robust summarization and inference in proteome-wide label-free quantification [Research] By feedproxy.google.com Published On :: 2020-04-22T13:36:37-07:00 Label-Free Quantitative mass spectrometry based workflows for differential expression (DE) analysis of proteins impose important challenges on the data analysis due to peptide-specific effects and context dependent missingness of peptide intensities. Peptide-based workflows, like MSqRob, test for DE directly from peptide intensities and outperform summarization methods which first aggregate MS1 peptide intensities to protein intensities before DE analysis. However, these methods are computationally expensive, often hard to understand for the non-specialised end-user, and do not provide protein summaries, which are important for visualisation or downstream processing. In this work, we therefore evaluate state-of-the-art summarization strategies using a benchmark spike-in dataset and discuss why and when these fail compared to the state-of-the-art peptide based model, MSqRob. Based on this evaluation, we propose a novel summarization strategy, MSqRobSum, which estimates MSqRob’s model parameters in a two-stage procedure circumventing the drawbacks of peptide-based workflows. MSqRobSum maintains MSqRob’s superior performance, while providing useful protein expression summaries for plotting and downstream analysis. Summarising peptide to protein intensities considerably reduces the computational complexity, the memory footprint and the model complexity, and makes it easier to disseminate DE inferred on protein summaries. Moreover, MSqRobSum provides a highly modular analysis framework, which provides researchers with full flexibility to develop data analysis workflows tailored towards their specific applications. Full Article
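The summarization idea above is easy to demonstrate. Below is a minimal sketch, on invented data, of collapsing a peptide-by-sample matrix of log2 intensities to per-sample protein summaries with Tukey's median polish, a stand-in for robust summarization in general; MSqRobSum itself estimates its summaries through MSqRob's robust regression model, which is not reproduced here.

```python
# Minimal sketch: robustly summarize a peptide-by-sample matrix of log2
# intensities into one protein-level value per sample. Median polish
# stands in for robust summarization; MSqRobSum's own two-stage
# robust-regression procedure is not reproduced. Data are invented.
import numpy as np

def median_polish(Y, iters=10):
    """Y: peptides x samples. Returns the per-sample protein summary
    (overall effect + sample effects) from Tukey's median polish."""
    Y = Y.copy()
    row = np.zeros(Y.shape[0])
    col = np.zeros(Y.shape[1])
    for _ in range(iters):
        rmed = np.median(Y, axis=1); Y -= rmed[:, None]; row += rmed
        cmed = np.median(Y, axis=0); Y -= cmed[None, :]; col += cmed
    return np.median(row) + col

rng = np.random.default_rng(0)
sample_effect = np.array([0., 0., 0., 1., 1., 1.])    # true 2-fold change
Y = 20 + sample_effect + rng.normal(0, 0.2, (8, 6))   # 8 peptides, 6 samples
Y[0, :3] -= 6.0                 # one peptide misquantified in one condition

print(Y.mean(axis=0).round(2))        # naive mean: fold change distorted
print(median_polish(Y).round(2))      # robust summary: ~[20 20 20 21 21 21]
```

The single aberrant peptide drags the naive mean in half the samples but leaves the median-based summary essentially untouched, which is the kind of behaviour the abstract's benchmark evaluates.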
robust Isolation of INS-1-derived cell lines with robust ATP-sensitive K+ channel-dependent and -independent glucose-stimulated insulin secretion By diabetes.diabetesjournals.org Published On :: 2000-03-01 HE Hohmeier. Mar 1, 2000; 49:424-430. Articles. Full Article
robust Robustness and Locke's Wingless Gentleman By decisions-and-info-gaps.blogspot.com Published On :: Tue, 20 Sep 2011 07:49:00 +0000 Our ancestors have made decisions under uncertainty ever since they had to stand and fight or run away, eat this root or that berry, sleep in this cave or under that bush. Our species is distinguished by the extent of deliberate thought preceding decision. Nonetheless, the ability to decide in the face of the unknown was born from primal necessity. Betting is one of the oldest ways of deciding under uncertainty. But you bet you that 'bet' is a subtler concept than one might think. We all know what it means to make a bet, but just to make sure let's quote the Oxford English Dictionary: "To stake or wager (a sum of money, etc.) in support of an affirmation or on the issue of a forecast." The word has been around for quite a while. Shakespeare used the verb in 1600: "Iohn a Gaunt loued him well, and betted much money on his head." (Henry IV, Pt. 2 iii. ii. 44). Drayton used the noun in 1627 (and he wasn't the first): "For a long while it was an euen bet ... Whether proud Warwick, or the Queene should win." An even bet is a 50-50 chance, an equal probability of each outcome. But betting is not always a matter of chance. Sometimes the meaning is just the opposite. According to the OED, 'You bet' or 'You bet you' are slang expressions meaning 'be assured, certainly'. For instance: "'Can you handle this outfit?' 'You bet,' said the scout." (D. L. Sayers, Lord Peter Views Body, iv. 68). Mark Twain wrote "'I'll get you there on time' - and you bet you he did, too." (Roughing It, xx. 152). So 'bet' is one of those words whose meaning stretches from one idea all the way to its opposite. Drayton's "even bet" between Warwick and the Queen means that he has no idea who will win. In contrast, Twain's "you bet you" is a statement of certainty. In Twain's or Sayers' usage, it's as though uncertainty combines with moral conviction to produce a definite resolution. This is a dialectic in which doubt and determination form decisiveness. John Locke may have had something like this in mind when he wrote: "If we will disbelieve everything, because we cannot certainly know all things; we shall do muchwhat as wisely as he, who would not use his legs, but sit still and perish, because he had no wings to fly." (An Essay Concerning Human Understanding, 1706, I.i.5) The absurdity of Locke's wingless gentleman starving in his chair leads us to believe, and to act, despite our doubts. The moral imperative of survival sweeps aside the paralysis of uncertainty. The consequence of unabated doubt - paralysis - induces doubt's opposite: decisiveness. But rational creatures must have some method for reasoning around their uncertainties. Locke does not intend for us to simply ignore our ignorance. But if we have no way to place bets - if the odds simply are unknown - then what are we to do? We cannot "sit still and perish". This is where the strategy of robustness comes in. 'Robust' means 'Strong and hardy; sturdy; healthy'. By implication, something that is robust is 'not easily damaged or broken, resilient'. A statistical test is robust if it yields 'approximately correct results despite the falsity of certain of the assumptions underlying it' or despite errors in the data. (OED) A decision is robust if its outcome is satisfactory despite error in the information and understanding which justified or motivated the decision.
A robust decision is resilient to surprise, immune to ignorance. It is no coincidence that the colloquial use of the word 'bet' includes concepts of both chance and certainty. A good bet can tolerate large deviation from certainty, large error of information. A good bet is robust to surprise. 'You bet you' does not mean that the world is certain. It means that the outcome is certain to be acceptable, regardless of how the world turns out. The scout will handle the outfit even if there is a rogue in the ranks; Twain will get there on time despite snags and surprises. A good bet is robust to the unknown. You bet you! An extended and more formal discussion of these issues can be found elsewhere. Full Article betting robustness
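The essay's closing definition invites a one-line quantification. The sketch below, with an invented linear payoff model (not from the original post, whose formal counterpart is info-gap robustness), measures a bet's quality as the largest estimation error it can absorb while the outcome stays acceptable.

```python
# A one-line quantification of the essay's definition: how much can the
# estimate behind a bet err before the outcome stops being acceptable?
# The linear payoff model and the numbers are invented for illustration.

def robustness(estimated_payoff, required_payoff):
    """Largest fractional error h such that a payoff shrunk to
    estimated_payoff * (1 - h) still meets the requirement."""
    if estimated_payoff <= required_payoff:
        return 0.0                      # no room for error at all
    return 1.0 - required_payoff / estimated_payoff

# Same requirement, two bets: the one with the bigger cushion is the
# better bet, in exactly the "you bet you" sense of the essay.
print(robustness(estimated_payoff=10.0, required_payoff=8.0))  # 0.2
print(robustness(estimated_payoff=16.0, required_payoff=8.0))  # 0.5
```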
robust Squirrels and Stock Brokers, Or: Innovation Dilemmas, Robustness and Probability By decisions-and-info-gaps.blogspot.com Published On :: Sun, 09 Oct 2011 11:51:00 +0000 Decisions are made in order to achieve desirable outcomes. An innovation dilemma arises when a seemingly more attractive option is also more uncertain than other options. In this essay we explore the relation between the innovation dilemma and the robustness of a decision, and the relation between robustness and probability. A decision is robust to uncertainty if it achieves required outcomes despite adverse surprises. A robust decision may differ from the seemingly best option. Furthermore, robust decisions are not based on knowledge of probabilities, but can still be the most likely to succeed. Squirrels, Stock-Brokers and Their Dilemmas. Decision problems. Imagine a squirrel nibbling acorns under an oak tree. They're pretty good acorns, though a bit dry. The good ones have already been taken. Over in the distance is a large stand of fine oaks. The acorns there are probably better. But then, other squirrels can also see those trees, and predators can too. The squirrel doesn't need to get fat, but a critical caloric intake is necessary before moving on to other activities. How long should the squirrel forage at this patch before moving to the more promising patch, if at all? Imagine a hedge fund manager investing in South African diamonds, Australian uranium, Norwegian kroner and Singapore semi-conductors. The returns have been steady and good, but not very exciting. A new hi-tech start-up venture has just turned up. It looks promising, has solid backing, and could be very interesting. The manager doesn't need to earn boundless returns, but it is necessary to earn at least a tad more than the competition (who are also prowling around). How long should the manager hold the current portfolio before changing at least some of its components? These are decision problems, and like many other examples, they share three traits: critical needs must be met; the current situation may or may not be adequate; other alternatives look much better but are much more uncertain. To change, or not to change? What strategy to use in making a decision? What choice is the best bet? Betting is a surprising concept, as we have seen before; can we bet without knowing probabilities? Solution strategies. The decision is easy in either of two extreme situations, and their analysis will reveal general conclusions. One extreme is that the status quo is clearly insufficient. For the squirrel this means that these crinkled rotten acorns won't fill anybody's belly even if one nibbled here all day long. Survival requires trying the other patch regardless of the fact that there may be many other squirrels already there and predators just waiting to swoop down. Similarly, for the hedge fund manager, if other funds are making fantastic profits, then something has to change or the competition will attract all the business. The other extreme is that the status quo is just fine, thank you. For the squirrel, just a little more nibbling and these acorns will get us through the night, so why run over to unfamiliar oak trees? For the hedge fund manager, profits are better than those of any credible competitor, so uncertain change is not called for. From these two extremes we draw an important general conclusion: the right answer depends on what you need. To change, or not to change, depends on what is critical for survival.
There is no universal answer, like, "Always try to improve" or "If it's working, don't fix it". This is a very general property of decisions under uncertainty, and we will call it preference reversal. The agent's preference between alternatives depends on what the agent needs in order to "survive". The decision strategy that we have described is attuned to the needs of the agent. The strategy attempts to satisfy the agent's critical requirements. If the status quo would reliably do that, then stay put; if not, then move. Following the work of Nobel Laureate Herbert Simon, we will call this a satisficing decision strategy: one which satisfies a critical requirement. "Prediction is always difficult, especially of the future." - Robert Storm Petersen. Now let's consider a different decision strategy that squirrels and hedge fund managers might be tempted to use. The agent has obtained information about the two alternatives by signals from the environment. (The squirrel sees grand verdant oaks in the distance, the fund manager hears of a new start-up.) Given this information, a prediction can be made (though the squirrel may make this prediction based on instincts and without being aware of making it). Given the best available information, the agent predicts which alternative would yield the better outcome. Using this prediction, the decision strategy is to choose the alternative whose predicted outcome is best. We will call this decision strategy best-model optimization. Note that this decision strategy yields a single universal answer to the question facing the agent. This strategy uses the best information to find the choice that - if that information is correct - will yield the best outcome. Best-model optimization (usually) gives a single "best" decision, unlike the satisficing strategy that returns different answers depending on the agent's needs. There is an attractive logic - and even perhaps a moral imperative - to use the best information to make the best choice. One should always try to do one's best. But the catch in the argument for best-model optimization is that the best information may actually be grievously wrong. Those fine oak trees might be swarming with insects who've devoured the acorns. Best-model optimization ignores the agent's central dilemma: stay with the relatively well known but modest alternative, or go for the more promising but more uncertain alternative. "Tsk, tsk, tsk," says our hedge fund manager. "My information already accounts for the uncertainty. I have used a probabilistic asset pricing model to predict the likelihood that my profits will beat the competition for each of the two alternatives." Probabilistic asset pricing models are good to have. And the squirrel similarly has evolved instincts that reflect likelihoods. But a best-probabilistic-model optimization is simply one type of best-model optimization, and is subject to the same vulnerability to error. The world is full of surprises. The probability functions that are used are quite likely wrong, especially in predicting the rare events that the manager is most concerned to avoid. Robustness and Probability. Now we come to the truly amazing part of the story. The satisficing strategy does not use any probabilistic information. Nonetheless, in many situations, the satisficing strategy is actually a better bet (or at least not a worse bet), probabilistically speaking, than any other strategy, including best-probabilistic-model optimization.
We have no probabilistic information in these situations, but we can still maximize the probability of success (though we won't know the value of this maximum). When the satisficing decision strategy is the best bet, this is, in part, because it is more robust to uncertainty than any other strategy. A decision is robust to uncertainty if it achieves required outcomes even if adverse surprises occur. In many important situations (though not invariably), more robustness to uncertainty is equivalent to being more likely to succeed or survive. When this is true we say that robustness is a proxy for probability. A thorough analysis of the proxy property is rather technical. However, we can understand the gist of the idea by considering a simple special case. Let's continue with the squirrel and hedge fund examples. Suppose we are completely confident about the future value (in calories or dollars) of not making any change (staying put). In contrast, the future value of moving is apparently better though uncertain. If staying put would satisfy our critical requirement, then we are absolutely certain of survival if we do not change. Staying put is completely robust to surprises, so the probability of success equals 1 if we stay put, regardless of what happens with the other option. Likewise, if staying put would not satisfy our critical requirement, then we are absolutely certain of failure if we do not change; the probability of success equals 0 if we stay, and moving cannot be worse. Regardless of what probability distribution describes future outcomes if we move, we can always choose the option whose likelihood of success is greater (or at least not worse). This is because staying put is either sure to succeed or sure to fail, and we know which. This argument can be extended to the more realistic case where the outcome of staying put is uncertain and the outcome of moving, while seemingly better than staying, is much more uncertain. The agent can know which option is more robust to uncertainty, without having to know probability distributions. This implies, in many situations, that the agent can choose the option that is a better bet for survival. Wrapping Up. The skillful decision maker not only knows a lot, but is also able to deal with conflicting information. We have discussed the innovation dilemma: when choosing between two alternatives, the seemingly better one is also more uncertain. Animals, people, organizations and societies have developed mechanisms for dealing with the innovation dilemma. The response hinges on tuning the decision to the agent's needs, and robustifying the choice against uncertainty. This choice may or may not coincide with the putative best choice. But what seems best depends on the available - though uncertain - information. The commendable tendency to do one's best - and to demand the same of others - can lead to putatively optimal decisions that may be more vulnerable to surprise than other decisions that would have been satisfactory. In contrast, the strategy of robustly satisfying critical needs can be a better bet for survival. Consider the design of critical infrastructure: flood protection, nuclear power, communication networks, and so on. The design of such systems is based on vast knowledge and understanding, but also confronts bewildering uncertainties and endless surprises. We must continue to improve our knowledge and understanding, while also improving our ability to manage the uncertainties resulting from the expanding horizon of our efforts.
We must identify the critical goals and seek responses that are immune to surprise. Full Article betting innovation dilemma probability proxy property robustness
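The special case argued above (a certain payoff for staying versus a better-looking but uncertain payoff for moving) can be checked by simulation. The numbers below are invented; the point is that the satisficer's success probability is 1 by construction whenever staying meets the critical requirement, while the optimizer's is not, even though the optimizer follows the higher predicted payoff.

```python
# Simulation of the post's special case: staying yields a known payoff,
# moving looks better on average but is uncertain. All numbers invented.
import random

random.seed(0)
STAY_PAYOFF = 10.0      # known with certainty
MOVE_PREDICTED = 15.0   # best-model prediction for moving
CRITICAL = 8.0          # the survival requirement

trials = 100_000
satisficer_ok = optimizer_ok = 0
for _ in range(trials):
    realized_move = random.gauss(MOVE_PREDICTED, 6.0)  # truth differs from model
    # Satisficer stays: staying is *certain* to meet the requirement here.
    satisficer_ok += STAY_PAYOFF >= CRITICAL
    # Best-model optimizer moves: the predicted payoff is higher.
    optimizer_ok += realized_move >= CRITICAL

print(f"satisficing success rate: {satisficer_ok / trials:.3f}")  # 1.000
print(f"optimizing success rate:  {optimizer_ok / trials:.3f}")   # ~0.88
```

Raising CRITICAL above 10 flips the choice: staying is then certain to fail, so the satisficer moves too. That is the preference reversal described in the post.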
robust Path-Based Spectral Clustering: Guarantees, Robustness to Outliers, and Fast Algorithms By Published On :: 2020 We consider the problem of clustering with the longest-leg path distance (LLPD) metric, which is informative for elongated and irregularly shaped clusters. We prove finite-sample guarantees on the performance of clustering with respect to this metric when random samples are drawn from multiple intrinsically low-dimensional clusters in high-dimensional space, in the presence of a large number of high-dimensional outliers. By combining these results with spectral clustering with respect to LLPD, we provide conditions under which the Laplacian eigengap statistic correctly determines the number of clusters for a large class of data sets, and prove guarantees on the labeling accuracy of the proposed algorithm. Our methods are quite general and provide performance guarantees for spectral clustering with any ultrametric. We also introduce an efficient, easy to implement approximation algorithm for the LLPD based on a multiscale analysis of adjacency graphs, which allows for the runtime of LLPD spectral clustering to be quasilinear in the number of data points. Full Article
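For intuition about the metric in this abstract: the LLPD between two points is the smallest possible longest hop over all paths connecting them through the data. The cubic-time closure below is a naive way to compute it (the paper's contribution includes a much faster multiscale approximation, not shown); the example data are invented.

```python
# Naive O(n^3) computation of the longest-leg path distance (LLPD): the
# minimax edge over all paths in the complete graph on the sample. The
# paper's fast multiscale approximation is not shown; data are invented.
import numpy as np

def llpd_matrix(X):
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    L = D.copy()
    for k in range(len(X)):  # min-max (bottleneck) Floyd-Warshall
        L = np.minimum(L, np.maximum(L[:, k:k + 1], L[k:k + 1, :]))
    return L

# Two elongated strands: LLPD is small along a strand, large across.
rng = np.random.default_rng(1)
s1 = np.c_[np.linspace(0, 5, 50), rng.normal(0, 0.05, 50)]
s2 = np.c_[np.linspace(0, 5, 50), 2 + rng.normal(0, 0.05, 50)]
L = llpd_matrix(np.vstack([s1, s2]))
print(L[:50, :50].max())   # ~0.15: within-strand longest leg stays short
print(L[:50, 50:].min())   # ~1.9: crossing the gap needs one long leg
```

Feeding this matrix (through a kernel) into spectral clustering is what makes elongated clusters like these separable.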
robust Provably robust estimation of modulo 1 samples of a smooth function with applications to phase unwrapping By Published On :: 2020 Consider an unknown smooth function $f: [0,1]^d \rightarrow \mathbb{R}$, and assume we are given $n$ noisy mod 1 samples of $f$, i.e., $y_i = (f(x_i) + \eta_i) \bmod 1$, for $x_i \in [0,1]^d$, where $\eta_i$ denotes the noise. Given the samples $(x_i,y_i)_{i=1}^{n}$, our goal is to recover smooth, robust estimates of the clean samples $f(x_i) \bmod 1$. We formulate a natural approach for solving this problem, which works with angular embeddings of the noisy mod 1 samples over the unit circle, inspired by the angular synchronization framework. This amounts to solving a smoothness regularized least-squares problem -- a quadratically constrained quadratic program (QCQP) -- where the variables are constrained to lie on the unit circle. Our proposed approach is based on solving its relaxation, which is a trust-region sub-problem and hence solvable efficiently. We provide theoretical guarantees demonstrating its robustness to noise for adversarial, as well as random Gaussian and Bernoulli noise models. To the best of our knowledge, these are the first such theoretical results for this problem. We demonstrate the robustness and efficiency of our proposed approach via extensive numerical simulations on synthetic data, along with a simple least-squares based solution for the unwrapping stage, that recovers the original samples of $f$ (up to a global shift). It is shown to perform well at high levels of noise, when taking as input the denoised modulo $1$ samples. Finally, we also consider two other approaches for denoising the modulo 1 samples that leverage tools from Riemannian optimization on manifolds, including a Burer-Monteiro approach for a semidefinite programming relaxation of our formulation. For the two-dimensional version of the problem, which has applications in synthetic aperture radar interferometry (InSAR), we are able to solve instances of real-world data with a million sample points in under 10 seconds, on a personal laptop. Full Article
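The angular embedding at the heart of this formulation is easy to sketch. The snippet below lifts noisy mod-1 samples onto the unit circle and smooths there; note that plain local averaging stands in for the paper's smoothness-regularized QCQP, and the test function and noise level are invented.

```python
# Sketch of the angular embedding: lift noisy mod-1 samples onto the unit
# circle, smooth there, read denoised mod-1 values back off. A moving
# average stands in for the paper's regularized QCQP; f is invented.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 500)
f = 2 * np.sin(2 * np.pi * x) + x               # unknown smooth function
y = (f + rng.normal(0, 0.1, x.size)) % 1.0      # noisy mod-1 samples

z = np.exp(2j * np.pi * y)                      # angular embedding
z_smooth = np.convolve(z, np.ones(7) / 7, mode="same")
y_hat = (np.angle(z_smooth) / (2 * np.pi)) % 1.0

def circ_err(a, b):  # circular distance between mod-1 values
    return np.abs(np.angle(np.exp(2j * np.pi * (a - b)))) / (2 * np.pi)

print(circ_err(y, f % 1).mean())      # raw error (~0.08)
print(circ_err(y_hat, f % 1).mean())  # after smoothing (~0.03)
```

Averaging on the circle rather than on the raw values is what avoids the wrap-around artifacts near 0 and 1.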
robust Robust Asynchronous Stochastic Gradient-Push: Asymptotically Optimal and Network-Independent Performance for Strongly Convex Functions By Published On :: 2020 We consider the standard model of distributed optimization of a sum of functions $F(\mathbf{z}) = \sum_{i=1}^n f_i(\mathbf{z})$, where node $i$ in a network holds the function $f_i(\mathbf{z})$. We allow for a harsh network model characterized by asynchronous updates, message delays, unpredictable message losses, and directed communication among nodes. In this setting, we analyze a modification of the Gradient-Push method for distributed optimization, assuming that (i) node $i$ is capable of generating gradients of its function $f_i(\mathbf{z})$ corrupted by zero-mean bounded-support additive noise at each step, (ii) $F(\mathbf{z})$ is strongly convex, and (iii) each $f_i(\mathbf{z})$ has Lipschitz gradients. We show that our proposed method asymptotically performs as well as the best bounds on centralized gradient descent that takes steps in the direction of the sum of the noisy gradients of all the functions $f_1(\mathbf{z}), \ldots, f_n(\mathbf{z})$ at each step. Full Article
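For a feel of the underlying gradient-push mechanics, here is a minimal synchronous sketch (the paper's setting additionally has asynchrony, delays, and message loss, none of which is modeled here); the graph, objectives, and step sizes are illustrative.

```python
# Minimal synchronous gradient-push on a directed ring (the paper treats
# the harsher asynchronous, delayed, lossy setting). Each node mixes with
# column-stochastic weights, de-biases by the push-sum weight, and takes a
# local gradient step; everyone converges to the minimizer of sum_i f_i.
import numpy as np

n = 5
A = np.zeros((n, n))
for j in range(n):                      # node j sends to itself and j+1
    A[j, j] = A[(j + 1) % n, j] = 0.5   # columns sum to 1

b = np.array([1.0, 4.0, 2.0, 8.0, 5.0])   # f_i(z) = 0.5 * (z - b_i)^2
x = np.zeros(n)                            # push-sum numerators
y = np.ones(n)                             # push-sum weights

for t in range(1, 5001):
    x, y = A @ x, A @ y
    z = x / y                              # de-biased local estimates
    x -= (1.0 / t) * (z - b)               # local gradient step

print(z.round(3))       # all entries near b.mean() == 4.0
```

The push-sum ratio x/y is what corrects for the asymmetry of directed communication; without it the iterates would converge to a weighted, rather than the true, average.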
robust Exact Guarantees on the Absence of Spurious Local Minima for Non-negative Rank-1 Robust Principal Component Analysis By Published On :: 2020 This work is concerned with the non-negative rank-1 robust principal component analysis (RPCA), where the goal is to recover the dominant non-negative principal components of a data matrix precisely, where a number of measurements could be grossly corrupted with sparse and arbitrarily large noise. Most of the known techniques for solving the RPCA rely on convex relaxation methods by lifting the problem to a higher dimension, which significantly increases the number of variables. As an alternative, the well-known Burer-Monteiro approach can be used to cast the RPCA as a non-convex and non-smooth $\ell_1$ optimization problem with a significantly smaller number of variables. In this work, we show that the low-dimensional formulation of the symmetric and asymmetric positive rank-1 RPCA based on the Burer-Monteiro approach has benign landscape, i.e., 1) it does not have any spurious local solution, 2) has a unique global solution, and 3) its unique global solution coincides with the true components. An implication of this result is that simple local search algorithms are guaranteed to achieve a zero global optimality gap when directly applied to the low-dimensional formulation. Furthermore, we provide strong deterministic and probabilistic guarantees for the exact recovery of the true principal components. In particular, it is shown that a constant fraction of the measurements could be grossly corrupted and yet they would not create any spurious local solution. Full Article
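A minimal sketch of the low-dimensional formulation for the symmetric non-negative rank-1 case appears below: minimize $\|uu^T - M\|_1$ over $u \ge 0$ by projected subgradient descent. Dimensions, corruption level, and step schedule are invented; the benign-landscape result above is what justifies trusting plain local search here.

```python
# Burer-Monteiro sketch for symmetric non-negative rank-1 RPCA:
# minimize ||u u^T - M||_1 over u >= 0 with projected, normalized
# subgradient steps. Sizes, corruption, and steps are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 60
u_true = rng.uniform(0.5, 1.5, n)
M = np.outer(u_true, u_true)
mask = rng.random((n, n)) < 0.05                 # 5% gross corruptions
M[mask] += rng.uniform(-20.0, 20.0, mask.sum())

u = np.ones(n)
for t in range(3000):
    R = np.outer(u, u) - M
    g = np.sign(R) @ u + np.sign(R).T @ u        # subgradient of the l1 loss
    u = u - 0.5 / np.sqrt(t + 1.0) * g / np.linalg.norm(g)
    u = np.maximum(u, 0.0)                       # project onto u >= 0

print(np.linalg.norm(u - u_true) / np.linalg.norm(u_true))  # small
```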
robust A note on the “L-logistic regression models: Prior sensitivity analysis, robustness to outliers and applications” By projecteuclid.org Published On :: Mon, 03 Feb 2020 04:00 EST Saralees Nadarajah, Yuancheng Si. Source: Brazilian Journal of Probability and Statistics, Volume 34, Number 1, 183--187.Abstract: Da Paz, Balakrishnan and Bazan [Braz. J. Probab. Stat. 33 (2019), 455–479] introduced the L-logistic distribution, studied its properties including estimation issues and illustrated a data application. This note derives a closed form expression for moment properties of the distribution. Some computational issues are discussed. Full Article
robust Robust Bayesian model selection for heavy-tailed linear regression using finite mixtures By projecteuclid.org Published On :: Mon, 03 Feb 2020 04:00 EST Flávio B. Gonçalves, Marcos O. Prates, Victor Hugo Lachos. Source: Brazilian Journal of Probability and Statistics, Volume 34, Number 1, 51--70.Abstract: In this paper, we present a novel methodology to perform Bayesian model selection in linear models with heavy-tailed distributions. We consider a finite mixture of distributions to model a latent variable where each component of the mixture corresponds to one possible model within the symmetrical class of normal independent distributions. Naturally, the Gaussian model is one of the possibilities. This allows for a simultaneous analysis based on the posterior probability of each model. Inference is performed via Markov chain Monte Carlo—a Gibbs sampler with Metropolis–Hastings steps for a class of parameters. Simulated examples highlight the advantages of this approach compared to a segregated analysis based on arbitrarily chosen model selection criteria. Examples with real data are presented and an extension to censored linear regression is introduced and discussed. Full Article
robust L-Logistic regression models: Prior sensitivity analysis, robustness to outliers and applications By projecteuclid.org Published On :: Mon, 10 Jun 2019 04:04 EDT Rosineide F. da Paz, Narayanaswamy Balakrishnan, Jorge Luis Bazán. Source: Brazilian Journal of Probability and Statistics, Volume 33, Number 3, 455--479. Abstract: Tadikamalla and Johnson [Biometrika 69 (1982) 461–465] developed the $L_{B}$ distribution for variables with bounded support by considering a transformation of the standard logistic distribution. In this manuscript, a convenient parametrization of this distribution is proposed in order to develop regression models. This distribution, referred to here as the L-logistic distribution, provides great flexibility and includes the uniform distribution as a particular case. Several properties of this distribution are studied, and a Bayesian approach is adopted for the parameter estimation. Simulation studies considering prior sensitivity analysis, recovery of parameters, comparison of algorithms, and robustness to outliers are all discussed, showing that the results are insensitive to the choice of priors, that the adopted MCMC algorithm is efficient, and that the model is robust when compared with the beta distribution. Applications to estimate the vulnerability to poverty and to explain anxiety are performed. The results of the applications show that the L-logistic regression models provide a better fit than the corresponding beta regression models. Full Article
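The bounded-support construction described above can be sketched directly. In the snippet below, the (m, b) parametrization (median m, shape b) is my reading of the paper, so treat it as an assumption; with m = 0.5 and b = 1 it reduces to the uniform distribution, matching the special case noted in the abstract.

```python
# Sketch of the bounded-support construction: Y = inv-logit(Z) with Z
# logistic. The (m, b) parametrization (median m, shape b = 1/scale) is
# my reading of the paper; m = 0.5, b = 1 recovers the uniform case.
import numpy as np

def rllogistic(size, m, b, rng):
    mu, sigma = np.log(m / (1.0 - m)), 1.0 / b
    z = rng.logistic(loc=mu, scale=sigma, size=size)
    return 1.0 / (1.0 + np.exp(-z))          # inverse logit: (0, 1) support

rng = np.random.default_rng(0)
print(np.median(rllogistic(100_000, m=0.3, b=2.0, rng=rng)))  # ~0.3
u = rllogistic(100_000, m=0.5, b=1.0, rng=rng)                # uniform case
print(np.histogram(u, bins=4, range=(0, 1))[0])               # ~equal counts
```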
robust Failure rate of Birnbaum–Saunders distributions: Shape, change-point, estimation and robustness By projecteuclid.org Published On :: Mon, 04 Mar 2019 04:00 EST Emilia Athayde, Assis Azevedo, Michelli Barros, Víctor Leiva. Source: Brazilian Journal of Probability and Statistics, Volume 33, Number 2, 301--328.Abstract: The Birnbaum–Saunders (BS) distribution has been largely studied and applied. A random variable with BS distribution is a transformation of another random variable with standard normal distribution. Generalized BS distributions are obtained when the normally distributed random variable is replaced by another symmetrically distributed random variable. This allows us to obtain a wide class of positively skewed models with lighter and heavier tails than the BS model. Its failure rate admits several shapes, including the unimodal case, with its change-point being able to be used for different purposes. For example, to establish the reduction in a dose, and then in the cost of the medical treatment. We analyze the failure rates of generalized BS distributions obtained by the logistic, normal and Student-t distributions, considering their shape and change-point, estimating them, evaluating their robustness, assessing their performance by simulations, and applying the results to real data from different areas. Full Article
robust Bayesian robustness to outliers in linear regression and ratio estimation By projecteuclid.org Published On :: Mon, 04 Mar 2019 04:00 EST Alain Desgagné, Philippe Gagnon. Source: Brazilian Journal of Probability and Statistics, Volume 33, Number 2, 205--221.Abstract: Whole robustness is a nice property to have for statistical models. It implies that the impact of outliers gradually vanishes as they approach plus or minus infinity. So far, the Bayesian literature provides results that ensure whole robustness for the location-scale model. In this paper, we make two contributions. First, we generalise the results to attain whole robustness in simple linear regression through the origin, which is a necessary step towards results for general linear regression models. We allow the variance of the error term to depend on the explanatory variable. This flexibility leads to the second contribution: we provide a simple Bayesian approach to robustly estimate finite population means and ratios. The strategy to attain whole robustness is simple since it lies in replacing the traditional normal assumption on the error term by a super heavy-tailed distribution assumption. As a result, users can estimate the parameters as usual, using the posterior distribution. Full Article
robust A Distributionally Robust Area Under Curve Maximization Model. (arXiv:2002.07345v2 [math.OC] UPDATED) By arxiv.org Published On :: Area under ROC curve (AUC) is a widely used performance measure for classification models. We propose two new distributionally robust AUC maximization models (DR-AUC) that rely on the Kantorovich metric and approximate the AUC with the hinge loss function. We consider the two cases with respectively fixed and variable support for the worst-case distribution. We use duality theory to reformulate the DR-AUC models and derive tractable convex optimization problems. The numerical experiments show that the proposed DR-AUC models -- benchmarked with the standard deterministic AUC and the support vector machine models - perform better in general and in particular improve the worst-case out-of-sample performance over the majority of the considered datasets, thereby showing their robustness. The results are particularly encouraging since our numerical experiments are conducted with training sets of small size which have been known to be conducive to low out-of-sample performance. Full Article
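The hinge approximation of the AUC mentioned above works pairwise: the AUC is the fraction of positive-negative pairs ranked correctly, and the hinge replaces the 0-1 pair loss with a convex surrogate. The sketch below fits a linear scorer this way, without the distributionally robust layer; data and step size are invented.

```python
# Pairwise hinge surrogate for the AUC (without the distributionally
# robust layer): minimize mean(max(0, 1 - (score_pos - score_neg))) over
# all positive/negative pairs for a linear scorer. Data are invented.
import numpy as np

rng = np.random.default_rng(0)
Xp = rng.normal(+0.7, 1.0, (80, 5))    # positive class
Xn = rng.normal(-0.7, 1.0, (120, 5))   # negative class

w = np.zeros(5)
for _ in range(500):
    m = (Xp @ w)[:, None] - (Xn @ w)[None, :]   # margins of all pairs
    act = m < 1.0                                # pairs inside the hinge
    # gradient of the mean hinge: -(x_pos - x_neg) summed over active pairs
    grad = (act.sum(1)[:, None] * Xp).sum(0) - (act.sum(0)[:, None] * Xn).sum(0)
    w += 0.05 * grad / act.size                  # descend the hinge loss

auc = ((Xp @ w)[:, None] > (Xn @ w)[None, :]).mean()
print(f"empirical AUC: {auc:.3f}")               # ~0.98 here
```

The DR-AUC models then replace the empirical distribution behind this mean with the worst case in a Kantorovich ball, which is the part the paper reformulates via duality.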
robust Robust location estimators in regression models with covariates and responses missing at random. (arXiv:2005.03511v1 [stat.ME]) By arxiv.org Published On :: This paper deals with robust marginal estimation under a general regression model when missing data occur in the response and also in some of the covariates. The target is a marginal location parameter which is given through an $M$-functional. To obtain robust Fisher-consistent estimators, properly defined marginal distribution function estimators are considered. These estimators avoid the bias due to missing values by assuming a missing at random condition. Three methods are considered to estimate the marginal distribution function which allows one to obtain the $M$-location of interest: the well-known inverse probability weighting, a convolution-based method that makes use of the regression model and an augmented inverse probability weighting procedure that prevents against misspecification. The proposed robust estimators and the classical ones are compared through a numerical study under different missing models including clean and contaminated samples. We illustrate the estimators' behaviour under a nonlinear model. A real data set is also analysed. Full Article
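The inverse probability weighting method listed above is the simplest of the three to sketch. Below, responses are missing at random given a covariate with a known propensity (in practice the propensity is estimated, and the abstract's estimators target an $M$-location rather than the mean); the weighting removes the selection bias of the naive average.

```python
# Inverse probability weighting with responses missing at random given x.
# The propensity is taken as known for clarity; the paper's estimators
# also handle missing covariates and target M-locations, not just means.
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
x = rng.normal(0, 1, n)
y = 2 * x + rng.normal(0, 1, n)                  # true marginal mean is 0
prop = 1 / (1 + np.exp(-(-0.5 + 1.5 * x)))       # P(y observed | x)
obs = rng.random(n) < prop                       # MAR missingness

naive = y[obs].mean()                            # biased toward large x
ipw = np.sum(y[obs] / prop[obs]) / np.sum(1 / prop[obs])   # Hajek IPW
print(f"naive: {naive:.3f}, IPW: {ipw:.3f}, truth: 0")
```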
robust Distributional Robustness of K-class Estimators and the PULSE. (arXiv:2005.03353v1 [econ.EM]) By arxiv.org Published On :: In causal settings, such as instrumental variable settings, it is well known that estimators based on ordinary least squares (OLS) can yield biased and inconsistent estimates of the causal parameters. This is partially overcome by two-stage least squares (TSLS) estimators. These are, under weak assumptions, consistent but do not have desirable finite sample properties: in many models, for example, they do not have finite moments. The set of K-class estimators can be seen as a non-linear interpolation between OLS and TSLS and are known to have improved finite sample properties. Recently, in causal discovery, invariance properties such as the moment criterion which TSLS estimators leverage have been exploited for causal structure learning: e.g., in cases where the causal parameter is not identifiable, some structure of the non-zero components may be identified, and coverage guarantees are available. Subsequently, anchor regression has been proposed to trade off invariance and predictability. The resulting estimator is shown to have optimal predictive performance under bounded shift interventions. In this paper, we show that the concepts of anchor regression and K-class estimators are closely related. Establishing this connection comes with two benefits: (1) it enables us to prove robustness properties for existing K-class estimators when considering distributional shifts; and (2) we propose a novel estimator in instrumental variable settings by minimizing the mean squared prediction error subject to the constraint that the estimator lies in an asymptotically valid confidence region of the causal parameter. We call this estimator PULSE (p-uncorrelated least squares estimator) and show that it can be computed efficiently, even though the underlying optimization problem is non-convex. We further prove that it is consistent. Full Article
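For readers unfamiliar with the K-class family interpolating OLS and TSLS, a compact numerical sketch follows. The closed form below is the standard one; PULSE itself picks its estimate by the constrained optimization described above, which is not reproduced here, and the simulated data are invented.

```python
# The K-class family: beta(k) = (X'(I - k M_Z)X)^{-1} X'(I - k M_Z)y with
# M_Z the residual-maker of the instruments Z. k = 0 is OLS, k = 1 is TSLS.
# PULSE's data-driven choice of estimator is not reproduced; data invented.
import numpy as np

def k_class(X, Z, y, k):
    n = len(y)
    Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)       # projection onto Z
    W = np.eye(n) - k * (np.eye(n) - Pz)         # I - k * M_Z
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

rng = np.random.default_rng(0)
n = 1000
z = rng.normal(0, 1, (n, 1))                     # instrument
h = rng.normal(0, 1, n)                          # hidden confounder
x = (z[:, 0] + h + rng.normal(0, 1, n)).reshape(-1, 1)
y = 1.0 * x[:, 0] + h + rng.normal(0, 1, n)      # true causal effect 1.0

for k in (0.0, 0.5, 1.0):
    print(k, k_class(x, z, y, k))                # bias shrinks as k -> 1
```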
robust Towards Frequency-Based Explanation for Robust CNN. (arXiv:2005.03141v1 [cs.LG]) By arxiv.org Published On :: Current explanation techniques toward transparent Convolutional Neural Networks (CNNs) mainly focus on building connections between the human-understandable input features and the model's prediction, overlooking an alternative representation of the input: its decomposition into frequency components. In this work, we present an analysis of the connection between the distribution of frequency components in the input dataset and the reasoning process the model learns from the data. We further provide a quantitative analysis of the contribution of different frequency components to the model's prediction. We show that the model's vulnerability to tiny distortions results from its reliance on high-frequency features, the target features of (black- and white-box) adversarial attackers, to make predictions. We further show that if the model develops a stronger association between the low-frequency components and the true labels, the model is more robust, which explains why adversarially trained models are more robust to tiny distortions. Full Article
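The frequency decomposition this analysis relies on can be reproduced in a few lines: split inputs into low- and high-frequency parts with an FFT mask and probe a model with each part separately. The sketch below shows only the decomposition step; the mask radius is an illustrative knob.

```python
# FFT split of inputs into low- and high-frequency parts; feeding each
# part to a trained model separately (not shown) reveals which band its
# predictions rely on. The mask radius is an illustrative knob.
import numpy as np

def split_frequencies(images, radius):
    """images: (N, H, W). Returns (low, high) with low + high == images."""
    _, H, W = images.shape
    fy = np.fft.fftfreq(H)[:, None]
    fx = np.fft.fftfreq(W)[None, :]
    low_mask = np.sqrt(fy ** 2 + fx ** 2) <= radius
    F = np.fft.fft2(images)
    low = np.real(np.fft.ifft2(F * low_mask))
    high = np.real(np.fft.ifft2(F * ~low_mask))
    return low, high

imgs = np.random.default_rng(0).random((4, 32, 32))
low, high = split_frequencies(imgs, radius=0.1)
print(np.allclose(low + high, imgs))   # True: an exact decomposition
```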
robust Robust sparse covariance estimation by thresholding Tyler’s M-estimator By projecteuclid.org Published On :: Mon, 17 Feb 2020 04:02 EST John Goes, Gilad Lerman, Boaz Nadler. Source: The Annals of Statistics, Volume 48, Number 1, 86--110. Abstract: Estimating a high-dimensional sparse covariance matrix from a limited number of samples is a fundamental task in contemporary data analysis. Most proposals to date, however, are not robust to outliers or heavy tails. Toward bridging this gap, in this work we consider estimating a sparse shape matrix from $n$ samples following a possibly heavy-tailed elliptical distribution. We propose estimators based on thresholding either Tyler’s M-estimator or its regularized variant. We prove that in the joint limit as the dimension $p$ and the sample size $n$ tend to infinity with $p/n \to \gamma > 0$, our estimators are minimax rate optimal. Results on simulated data support our theoretical analysis. Full Article
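The two-step estimator in this abstract is concrete enough to sketch: iterate Tyler's fixed-point equation for a robust shape matrix, then hard-threshold small off-diagonal entries. Sample sizes, the heavy-tail mechanism, and the threshold below are illustrative choices, not the paper's tuning.

```python
# Two-step sketch: Tyler's fixed-point iteration for a robust shape
# matrix, then hard thresholding of off-diagonal entries. The sample,
# heavy-tail mechanism, and threshold are illustrative choices.
import numpy as np

def tyler(X, iters=100):
    n, d = X.shape
    S = np.eye(d)
    for _ in range(iters):
        Si = np.linalg.inv(S)
        w = d / np.einsum("ij,jk,ik->i", X, Si, X)   # d / (x_i' S^-1 x_i)
        S = (X * w[:, None]).T @ X / n
        S *= d / np.trace(S)                          # fix the free scale
    return S

rng = np.random.default_rng(0)
d, n = 20, 2000
Sigma = np.eye(d); Sigma[0, 1] = Sigma[1, 0] = 0.5
g = rng.multivariate_normal(np.zeros(d), Sigma, size=n)
X = g / rng.chisquare(2, size=(n, 1))                 # very heavy tails

S = tyler(X)
S_sparse = np.where(np.abs(S) >= 0.1, S, 0.0)         # hard threshold
np.fill_diagonal(S_sparse, np.diag(S))                # keep the diagonal
print(S_sparse[:3, :3].round(2))   # ~[[1, .5, 0], [.5, 1, 0], [0, 0, 1]]
```

Tyler's estimator depends only on the directions of the samples, which is why the radial scaling that makes the tails heavy here does not disturb it.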
robust Robust elastic net estimators for variable selection and identification of proteomic biomarkers By projecteuclid.org Published On :: Wed, 27 Nov 2019 22:01 EST Gabriela V. Cohen Freue, David Kepplinger, Matías Salibián-Barrera, Ezequiel Smucler. Source: The Annals of Applied Statistics, Volume 13, Number 4, 2065--2090.Abstract: In large-scale quantitative proteomic studies, scientists measure the abundance of thousands of proteins from the human proteome in search of novel biomarkers for a given disease. Penalized regression estimators can be used to identify potential biomarkers among a large set of molecular features measured. Yet, the performance and statistical properties of these estimators depend on the loss and penalty functions used to define them. Motivated by a real plasma proteomic biomarkers study, we propose a new class of penalized robust estimators based on the elastic net penalty, which can be tuned to keep groups of correlated variables together in the selected model and maintain robustness against possible outliers. We also propose an efficient algorithm to compute our robust penalized estimators and derive a data-driven method to select the penalty term. Our robust penalized estimators have very good robustness properties and are also consistent under certain regularity conditions. Numerical results show that our robust estimators compare favorably to other robust penalized estimators. Using our proposed methodology for the analysis of the proteomics data, we identify new potentially relevant biomarkers of cardiac allograft vasculopathy that are not found with nonrobust alternatives. The selected model is validated in a new set of 52 test samples and achieves an area under the receiver operating characteristic (AUC) of 0.85. Full Article
robust Robust regression via multivariate regression depth By projecteuclid.org Published On :: Fri, 31 Jan 2020 04:06 EST Chao Gao. Source: Bernoulli, Volume 26, Number 2, 1139--1170. Abstract: This paper studies robust regression in the settings of Huber’s $\epsilon$-contamination models. We consider estimators that are maximizers of multivariate regression depth functions. These estimators are shown to achieve minimax rates in the settings of $\epsilon$-contamination models for various regression problems including nonparametric regression, sparse linear regression, reduced rank regression, etc. We also discuss a general notion of depth function for linear operators that has potential applications in robust functional linear regression. Full Article
robust Robust estimation of mixing measures in finite mixture models By projecteuclid.org Published On :: Fri, 31 Jan 2020 04:06 EST Nhat Ho, XuanLong Nguyen, Ya’acov Ritov. Source: Bernoulli, Volume 26, Number 2, 828--857.Abstract: In finite mixture models, apart from underlying mixing measure, true kernel density function of each subpopulation in the data is, in many scenarios, unknown. Perhaps the most popular approach is to choose some kernel functions that we empirically believe our data are generated from and use these kernels to fit our models. Nevertheless, as long as the chosen kernel and the true kernel are different, statistical inference of mixing measure under this setting will be highly unstable. To overcome this challenge, we propose flexible and efficient robust estimators of the mixing measure in these models, which are inspired by the idea of minimum Hellinger distance estimator, model selection criteria, and superefficiency phenomenon. We demonstrate that our estimators consistently recover the true number of components and achieve the optimal convergence rates of parameter estimation under both the well- and misspecified kernel settings for any fixed bandwidth. These desirable asymptotic properties are illustrated via careful simulation studies with both synthetic and real data. Full Article
robust Robust modifications of U-statistics and applications to covariance estimation problems By projecteuclid.org Published On :: Tue, 26 Nov 2019 04:00 EST Stanislav Minsker, Xiaohan Wei. Source: Bernoulli, Volume 26, Number 1, 694--727. Abstract: Let $Y$ be a $d$-dimensional random vector with unknown mean $\mu$ and covariance matrix $\Sigma$. This paper is motivated by the problem of designing an estimator of $\Sigma$ that admits exponential deviation bounds in the operator norm under minimal assumptions on the underlying distribution, such as existence of only 4th moments of the coordinates of $Y$. To address this problem, we propose robust modifications of the operator-valued U-statistics, obtain non-asymptotic guarantees for their performance, and demonstrate the implications of these results to the covariance estimation problem under various structural assumptions. Full Article
robust Needles and straw in a haystack: Robust confidence for possibly sparse sequences By projecteuclid.org Published On :: Tue, 26 Nov 2019 04:00 EST Eduard Belitser, Nurzhan Nurushev. Source: Bernoulli, Volume 26, Number 1, 191--225. Abstract: In the general signal$+$noise (allowing non-normal, non-independent observations) model, we construct an empirical Bayes posterior which we then use for uncertainty quantification for the unknown, possibly sparse, signal. We introduce a novel excessive bias restriction (EBR) condition, which gives rise to a new slicing of the entire space that is suitable for uncertainty quantification. Under EBR and some mild exchangeable exponential moment condition on the noise, we establish the local (oracle) optimality of the proposed confidence ball. Without EBR, we propose another confidence ball of full coverage, but its radius contains an additional $\sigma n^{1/4}$-term. In passing, we also get the local optimal results for estimation, posterior contraction problems, and the problem of weak recovery of sparsity structure. Adaptive minimax results (also for the estimation and posterior contraction problems) over various sparsity classes follow from our local results. Full Article
robust A New Bayesian Approach to Robustness Against Outliers in Linear Regression By projecteuclid.org Published On :: Thu, 19 Mar 2020 22:02 EDT Philippe Gagnon, Alain Desgagné, Mylène Bédard. Source: Bayesian Analysis, Volume 15, Number 2, 389--414.Abstract: Linear regression is ubiquitous in statistical analysis. It is well understood that conflicting sources of information may contaminate the inference when the classical normality of errors is assumed. The contamination caused by the light normal tails follows from an undesirable effect: the posterior concentrates in an area in between the different sources with a large enough scaling to incorporate them all. The theory of conflict resolution in Bayesian statistics (O’Hagan and Pericchi (2012)) recommends to address this problem by limiting the impact of outliers to obtain conclusions consistent with the bulk of the data. In this paper, we propose a model with super heavy-tailed errors to achieve this. We prove that it is wholly robust, meaning that the impact of outliers gradually vanishes as they move further and further away from the general trend. The super heavy-tailed density is similar to the normal outside of the tails, which gives rise to an efficient estimation procedure. In addition, estimates are easily computed. This is highlighted via a detailed user guide, where all steps are explained through a simulated case study. The performance is shown using simulation. All required code is given. Full Article
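The whole-robustness behaviour described above is easy to see numerically. In the sketch below a Student-t error model stands in for the paper's super heavy-tailed density (an assumption, not the paper's exact model): as a single outlier moves away, its pull on the heavy-tailed fit fades while the normal fit chases it.

```python
# Numerical illustration of whole robustness: as one outlier moves away,
# the normal-error fit chases it while a heavy-tailed fit lets it go.
# Student-t errors stand in for the paper's super heavy-tailed density.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm, t as student_t

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 30)
y = 1.0 + 2.0 * x + rng.normal(0, 0.1, 30)      # true slope 2.0

def fit_slope(yc, logpdf):
    nll = lambda b: -logpdf(yc - b[0] - b[1] * x).sum()
    res = minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead",
                   options={"maxiter": 2000, "xatol": 1e-6, "fatol": 1e-9})
    return res.x[1]

for outlier in (5.0, 50.0, 500.0):
    yc = y.copy(); yc[-1] = outlier
    s_norm = fit_slope(yc, lambda r: norm.logpdf(r, scale=0.1))
    s_t = fit_slope(yc, lambda r: student_t.logpdf(r, df=1, scale=0.1))
    print(f"outlier {outlier:>5}: normal slope {s_norm:8.2f}, t slope {s_t:5.2f}")
```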
robust Hierarchical Normalized Completely Random Measures for Robust Graphical Modeling By projecteuclid.org Published On :: Thu, 19 Dec 2019 22:10 EST Andrea Cremaschi, Raffaele Argiento, Katherine Shoemaker, Christine Peterson, Marina Vannucci. Source: Bayesian Analysis, Volume 14, Number 4, 1271--1301. Abstract: Gaussian graphical models are useful tools for exploring network structures in multivariate normal data. In this paper we are interested in situations where data show departures from Gaussianity, therefore requiring alternative modeling distributions. The multivariate $t$-distribution, obtained by dividing each component of the data vector by a gamma random variable, is a straightforward generalization to accommodate deviations from normality such as heavy tails. Since different groups of variables may be contaminated to a different extent, Finegold and Drton (2014) introduced the Dirichlet $t$-distribution, where the divisors are clustered using a Dirichlet process. In this work, we consider a more general class of nonparametric distributions as the prior on the divisor terms, namely the class of normalized completely random measures (NormCRMs). To improve the effectiveness of the clustering, we propose modeling the dependence among the divisors through a nonparametric hierarchical structure, which allows for the sharing of parameters across the samples in the data set. This desirable feature enables us to cluster together different components of multivariate data in a parsimonious way. We demonstrate through simulations that this approach provides accurate graphical model inference, and apply it to a case study examining the dependence structure in radiomics data derived from The Cancer Imaging Atlas. Full Article
robust Jointly Robust Prior for Gaussian Stochastic Process in Emulation, Calibration and Variable Selection By projecteuclid.org Published On :: Tue, 11 Jun 2019 04:00 EDT Mengyang Gu. Source: Bayesian Analysis, Volume 14, Number 3, 877--905.Abstract: Gaussian stochastic process (GaSP) has been widely used in two fundamental problems in uncertainty quantification, namely the emulation and calibration of mathematical models. Some objective priors, such as the reference prior, are studied in the context of emulating (approximating) computationally expensive mathematical models. In this work, we introduce a new class of priors, called the jointly robust prior, for both the emulation and calibration. This prior is designed to maintain various advantages from the reference prior. In emulation, the jointly robust prior has an appropriate tail decay rate as the reference prior, and is computationally simpler than the reference prior in parameter estimation. Moreover, the marginal posterior mode estimation with the jointly robust prior can separate the influential and inert inputs in mathematical models, while the reference prior does not have this property. We establish the posterior propriety for a large class of priors in calibration, including the reference prior and jointly robust prior in general scenarios, but the jointly robust prior is preferred because the calibrated mathematical model typically predicts the reality well. The jointly robust prior is used as the default prior in two new R packages, called “RobustGaSP” and “RobustCalibration”, available on CRAN for emulation and calibration, respectively. Full Article
robust Robust Delaware watermelon season begins By news.delaware.gov Published On :: Mon, 29 Jul 2013 14:03:17 +0000 A strong Delaware watermelon season is now under way, with First State melons now reaching customers in grocery stores and markets along the East Coast, from New England to Florida. This season is featuring good yields and excellent quality for Delaware watermelon growers, said Secretary of Agriculture Ed Kee. The First State produces both seeded and seedless watermelon. Full Article Department of Agriculture News
robust Robustness in an Ultrasensitive Motor By mbio.asm.org Published On :: 2020-03-03T01:30:27-08:00 ABSTRACT In Escherichia coli, the chemotaxis response regulator CheY-P binds to FliM, a component of the switch complex at the base of the bacterial flagellar motor, to modulate the direction of motor rotation. The bacterial flagellar motor is ultrasensitive to the concentration of unbound CheY-P in the cytoplasm. CheY-P binds to FliM molecules both in the cytoplasm and on the motor. As the concentration of FliM unavoidably varies from cell to cell, leading to a variation of unbound CheY-P concentration in the cytoplasm, this raises the question whether the flagellar motor is robust against this variation, that is, whether the rotational bias of the motor is more or less constant as the concentration of FliM varies. Here, we showed that the motor is robust against variations of the concentration of FliM. We identified adaptive remodeling of the motor as the mechanism for this robustness. As the level of FliM molecules changes, resulting in different amounts of the unbound CheY-P molecules, the motor adaptively changes the composition of its switch complex to compensate for this effect. IMPORTANCE The bacterial flagellar motor is an ultrasensitive motor. Its output, the probability of the motor turning clockwise, depends sensitively on the occupancy of the protein FliM (a component on the switch complex of the motor) by the input CheY-P molecules. With a limited cellular pool of CheY-P molecules, cell-to-cell variation of the FliM level would lead to large unwanted variation of the motor output if not compensated. Here, we showed that the motor output is robust against the variation of FliM level and identified the adaptive remodeling of the motor switch complex as the mechanism for this robustness. Full Article
robust Avoiding Drug Resistance by Substrate Envelope-Guided Design: Toward Potent and Robust HCV NS3/4A Protease Inhibitors By mbio.asm.org Published On :: 2020-03-31T01:30:58-07:00 ABSTRACT Hepatitis C virus (HCV) infects millions of people worldwide, causing chronic liver disease that can lead to cirrhosis, hepatocellular carcinoma, and liver transplant. In the last several years, the advent of direct-acting antivirals, including NS3/4A protease inhibitors (PIs), has remarkably improved treatment outcomes of HCV-infected patients. However, selection of resistance-associated substitutions and polymorphisms among genotypes can lead to drug resistance and in some cases treatment failure. A proactive strategy to combat resistance is to constrain PIs within evolutionarily conserved regions in the protease active site. Designing PIs using the substrate envelope is a rational strategy to decrease the susceptibility to resistance by using the constraints of substrate recognition. We successfully designed two series of HCV NS3/4A PIs to leverage unexploited areas in the substrate envelope to improve potency, specifically against resistance-associated substitutions at D168. Our design strategy achieved better resistance profiles over both the FDA-approved NS3/4A PI grazoprevir and the parent compound against the clinically relevant D168A substitution. Crystallographic structural analysis and inhibition assays confirmed that optimally filling the substrate envelope is critical to improve inhibitor potency while avoiding resistance. Specifically, inhibitors that enhanced hydrophobic packing in the S4 pocket and avoided an energetically frustrated pocket performed the best. Thus, the HCV substrate envelope proved to be a powerful tool to design robust PIs, offering a strategy that can be translated to other targets for rational design of inhibitors with improved potency and resistance profiles. IMPORTANCE Despite significant progress, hepatitis C virus (HCV) continues to be a major health problem with millions of people infected worldwide and thousands dying annually due to resulting complications. Recent antiviral combinations can achieve >95% cure, but late diagnosis, low access to treatment, and treatment failure due to drug resistance continue to be roadblocks against eradication of the virus. We report the rational design of two series of HCV NS3/4A protease inhibitors with improved resistance profiles by exploiting evolutionarily constrained regions of the active site using the substrate envelope model. Optimally filling the S4 pocket is critical to avoid resistance and improve potency. Our results provide drug design strategies to avoid resistance that are applicable to other quickly evolving viral drug targets. Full Article
robust A Simple, Cost-Effective, and Robust Method for rRNA Depletion in RNA-Sequencing Studies By mbio.asm.org Published On :: 2020-04-21T01:31:26-07:00 ABSTRACT The profiling of gene expression by RNA sequencing (RNA-seq) has enabled powerful studies of global transcriptional patterns in all organisms, including bacteria. Because the vast majority of RNA in bacteria is rRNA, it is standard practice to deplete the rRNA from a total RNA sample so that the reads in an RNA-seq experiment derive predominantly from mRNA. One of the most commonly used commercial kits for rRNA depletion, the Ribo-Zero kit from Illumina, was recently and abruptly discontinued, and remained unavailable for an extended period of time. Here, we report the development of a simple, cost-effective, and robust method for depleting rRNA that can be easily implemented by any lab or facility. We first developed an algorithm for designing biotinylated oligonucleotides that will hybridize tightly and specifically to the 23S, 16S, and 5S rRNAs from any species of interest. Precipitation of these oligonucleotides bound to rRNA by magnetic streptavidin-coated beads then depletes rRNA from a complex total RNA sample such that ~75 to 80% of reads in a typical RNA-seq experiment derive from mRNA. Importantly, we demonstrate a high correlation of RNA abundance or fold-change measurements in RNA-seq experiments between our method and the Ribo-Zero kit. Complete details on the methodology are provided, including open-source software for designing oligonucleotides optimized for any bacterial species or community of interest. IMPORTANCE The ability to examine global patterns of gene expression in microbes through RNA sequencing has fundamentally transformed microbiology. However, RNA-seq depends critically on the removal of rRNA from total RNA samples. Otherwise, rRNA would comprise upward of 90% of the reads in a typical RNA-seq experiment, limiting the reads coming from mRNA or requiring high total read depth. A commonly used kit for rRNA subtraction from Illumina was recently unavailable for an extended period of time, disrupting routine rRNA depletion. Here, we report the development of a "do-it-yourself" kit for rapid, cost-effective, and robust depletion of rRNA from total RNA. We present an algorithm for designing biotinylated oligonucleotides that will hybridize to the rRNAs from a target set of species. We then demonstrate that the designed oligonucleotides enable sufficient rRNA depletion to produce RNA-seq data with 75 to 80% of reads coming from mRNA. The methodology presented should enable RNA-seq studies on any species or metagenomic sample of interest. Full Article
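The probe-design step summarized above reduces to a tiling problem, sketched below under stated assumptions: walk along each target rRNA in fixed-length windows, keep windows whose GC content falls in a workable hybridization range, and emit the reverse complement of each kept window as the antisense capture oligo to be synthesized with a 5' biotin. The probe length, step size, and GC window are assumptions; the authors' open-source software additionally optimizes hybridization temperature and specificity, which this sketch omits.

```python
# Simplified sketch of capture-oligo design for rRNA depletion.
# A real tool would also check melting temperature, cross-hybridization to
# mRNA, and coverage of all of 23S/16S/5S; this only does GC-filtered tiling.

COMP = str.maketrans("ACGT", "TGCA")

def revcomp(seq):
    """Reverse complement: the antisense oligo that hybridizes to `seq`."""
    return seq.translate(COMP)[::-1]

def gc_fraction(seq):
    return (seq.count("G") + seq.count("C")) / len(seq)

def design_probes(rrna, length=40, step=40, gc_min=0.40, gc_max=0.65):
    """Tile `rrna` into windows and return biotinylatable antisense probes."""
    probes = []
    for i in range(0, len(rrna) - length + 1, step):
        window = rrna[i:i + length]
        if gc_min <= gc_fraction(window) <= gc_max:
            probes.append(revcomp(window))
    return probes

# Toy input: a repeated fragment standing in for a real 16S rRNA sequence.
toy_16s = "AGAGTTTGATCCTGGCTCAGGACGAACGCTGGCGGCGTGCTTAACACATGCAAGTCGAACG" * 10
probes = design_probes(toy_16s)
print(f"{len(probes)} probes designed; first probe: {probes[0]}")
```

Capturing the probe-rRNA duplexes on streptavidin beads then removes the tiled rRNA regions from the sample, which is the depletion step the abstract describes.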
robust Bioprocess: Robustness with Respect to Mycoplasma Species By journal.pda.org Published On :: 2020-04-09T09:40:03-07:00 Capture bioprocessing unit operations were previously shown to clear or kill several log10 of a model mycoplasma, Acholeplasma laidlawii, in lab-scale spike/removal studies. Here, we confirm this observation with two additional mollicute species relevant to biotechnology products for human use: Mycoplasma orale and Mycoplasma arginini. Clearance of M. orale and M. arginini from protein A column purification was similar to that seen with A. laidlawii, though some between-cycle carryover was evident, especially for M. orale. However, on-resin growth studies for all three species revealed that residual mycoplasmas in a column slowly die off over time rather than expanding further. Solvent/detergent exposure completely inactivated M. arginini, though detectable levels of M. orale remained. A small-scale model of a commercial low-pH hold step did inactivate live M. orale, but this inactivation required a lower pH set point and occurred with slower kinetics than previously seen with A. laidlawii. Additionally, ultraviolet-C irradiation was shown to be effective for A. laidlawii and M. orale inactivation, whereas virus-retentive filters for upstream and downstream processes, as expected, cleared A. laidlawii. These data argue that M. orale and M. arginini would, overall, be largely cleared by early bioprocessing steps, as shown previously for A. laidlawii, and that barrier technologies can effectively reduce the risk from media components. For some unit operations, M. orale and M. arginini may be hardier and require more stringent processing or equipment-cleaning conditions to assure effective mycoplasma reduction. By exploring how commercial antibody manufacturing processes can still eliminate mycoplasma burden even under some failure modes, we demonstrate that required best practices assure biotechnology products will be safe for patients. Full Article
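A useful way to read the "several log10" clearance claims above is that log-reduction values (LRVs) from independent unit operations add in log space, so overall clearance is the sum of the per-step exponents. The sketch below makes that arithmetic explicit; the step list and the LRV numbers are illustrative assumptions, not values reported in the study.

```python
# Cumulative mycoplasma clearance across a hypothetical purification train.
# LRV = log10(load before step / load after step); independent steps add.
# All step names and LRVs below are assumed for illustration only.

steps = {
    "protein A capture":          4.0,
    "low-pH hold":                3.0,
    "UV-C irradiation":           4.0,
    "virus-retentive filtration": 6.0,
}

total_lrv = sum(steps.values())
for name, lrv in steps.items():
    print(f"{name:28s} {lrv:>4.1f} log10")
print(f"{'cumulative clearance':28s} {total_lrv:>4.1f} log10 "
      f"(a 10^{total_lrv:.0f}-fold reduction)")
```

The additive structure is also why a hardier species at one step (such as M. orale in the low-pH hold) lowers the total only by that step's contribution, rather than defeating the whole train.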
robust Yokogawa Releases AI-enabled Versions of SMARTDAC+ Paperless Recorders and Data Logging Software, and Environmentally Robust AI-enabled e-RT3 Plus Edge Computing Platform for Industry Applications By www.yokogawa.com Published On :: 2020-04-07T16:00:00+09:00 Yokogawa Electric Corporation (TOKYO: 6841) announces the release of artificial intelligence (AI)-enabled versions of the GX series panel-mount type paperless recorders, GP series portable paperless recorders, and GA10 data logging software, which are components of the highly operable and expandable SMARTDAC+ data acquisition and control system. This new AI functionality includes the future pen, a function developed by Yokogawa that enables the drawing of predicted waveforms. Yokogawa is also releasing a new CPU module for the e-RT3 Plus edge computing platform that is environmentally robust and Python-compatible. The GX/GP and e-RT3 release is set for April 8, and the GA10 software will be released on May 13. The SMARTDAC+ system is a product in the OpreX Data Acquisition family, and the e-RT3 Plus is part of the OpreX Control Devices family. Full Article
robust Robust job gains and a continued rebound in labor force participation By webfeeds.brookings.edu Published On :: Fri, 04 Mar 2016 11:43:00 -0500 The latest BLS jobs report shows little sign employers are worried about the future strength of the recovery. Both the employer and household surveys suggest U.S. employers have an undiminished appetite for new hires. Nonfarm payrolls surged by 242,000 in February, and upward revisions to BLS employment estimates for January added almost 21,000 to estimated payroll gains in that month. The household survey shows even bigger job gains in recent months. An additional 530,000 respondents said they were employed in February compared with January. This follows reported employment gains of 485,000 and 615,000 in December and January. Over the past year the household survey showed employment gains averaging 237,000 per month. In comparison, the employer survey reported payroll gains averaging 223,000 a month. These monthly gains are about three times the job growth needed to keep the unemployment rate from climbing. As a result, the unemployment rate has fallen over the past year, reaching 4.9 percent in January. The jobless rate remained unchanged in February because of a continued influx of adults into the workforce. An additional 555,000 people entered the labor force, capping a three-month period that saw the labor force grow by over 500,000 a month. The labor force participation rate continued to inch up, rising 0.2 percentage points in February compared with the previous month. Since reaching a 38-year low in September 2015, the labor force participation rate has risen 0.5 points. More than half the decline in the participation rate between the onset of the Great Recession and today is traceable to the aging of the adult population. A growing share of Americans are in late middle age or past 65, ages at which we expect participation rates to decline. If we focus on the population between 25 and 54, the participation rate stopped declining in 2013 and has edged up 0.6 percentage points since hitting its low point. The employment-to-population rate of 25-54 year-olds has increased 3.0 percentage points since reaching a low in 2009 and 2010. Using the employment rate of 25-54 year-olds as an indicator of labor market tightness, we have recovered about 60 percent of the employment-rate drop that occurred in the Great Recession. Eliminating the rest of the decline will require a further increase in prime-age labor force participation. Two other indicators suggest the job market remains some distance from a full recovery. More than a quarter of the 7.8 million unemployed have been jobless six months or longer. The number of long-term unemployed is about 70 percent higher than was the case just before the Great Recession. Nearly 6 million Americans who hold part-time jobs indicate they want to work on full-time schedules. They cannot do so because they have been assigned part-time hours or can only find a part-time job. The number of workers in this position is more than one-third higher than the comparable number back in 2007. Nonetheless, nearly all indicators of labor market tightness have displayed continued improvement in recent months. February's surge in employment growth and labor force participation was accompanied by an unexpected drop in nominal wages. Average hourly pay fell from $25.38 to $25.35 per hour. Compared with average earnings 12 months ago, workers saw a 2.2 percent rise in nominal hourly earnings. Because inflation is low, this probably translates into a real wage gain of about 1 percent. While employers may have an undiminished appetite for new hires, they show little inclination to boost the pace of wage increases. Authors Gary Burtless Full Article
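The real-wage step in the Brookings summary above is a one-line deflation calculation, shown explicitly below. The 1.2 percent inflation rate is an assumption consistent with the statement that inflation was low; it is not a figure taken from the report.

```python
# Converting nominal wage growth to approximate real wage growth.
nominal_growth = 0.022   # 2.2% rise in nominal hourly earnings (from the text)
inflation = 0.012        # assumed year-over-year inflation ("inflation is low")

real_growth = (1 + nominal_growth) / (1 + inflation) - 1
print(f"approximate real wage gain: {real_growth:.1%}")   # ~1.0%
```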
robust Powell says the economy will likely need more support from the Fed for the recovery to be 'robust' By www.cnbc.com Published On :: Wed, 29 Apr 2020 19:40:32 GMT Federal Reserve Chairman Jerome Powell said more stimulus is needed to ensure a robust economic recovery from the coronavirus crisis. Full Article
robust Top Testing Suite: Robust Testing Platform Forever! By feedproxy.google.com Published On :: I was genuinely confused by deployment testing services and testing scenarios, but a friend advised me to use the Computaris "top testing suite". It was the... Full Article
robust Housing, financial and capital taxation policies to ensure robust growth in Sweden By dx.doi.org Published On :: Thu, 21 Feb 2013 09:00:00 GMT Extensive structural reforms since the early 1990s have strengthened the resilience of the Swedish economy to shocks. Full Article
robust Comparing the robustness of PAYG pension schemes By dx.doi.org Published On :: Tue, 15 Jul 2014 09:00:00 GMT This paper provides a framework for comparing defined benefit (DB) and defined contribution (DC) point schemes, both of which are pay-as-you-go (PAYG) financed. Full Article
robust Global economy urgently needs a stronger and more coherent policy response to promote robust and inclusive growth By www.oecd.org Published On :: Wed, 24 Feb 2016 17:55:00 GMT Policymakers need to deploy broad-based reform plans that incorporate monetary, fiscal, and structural policies to stimulate persistently weak demand, re-launch productivity growth, create jobs and build a more inclusive global economy, according to the OECD’s annual Going for Growth report. Full Article
robust Latvia: Maintain robust expansion and continue reforms to achieve income convergence and more inclusive growth By www.oecd.org Published On :: Fri, 15 Sep 2017 16:23:00 GMT Successful implementation of economic reforms has boosted the Latvian economy, leading to strong growth, rising wages and solid public finances. Further policy action is now needed to accelerate productivity growth, create jobs, drive down poverty, improve living standards and ensure that everyone benefits from more inclusive growth, according to a new report from the OECD. Full Article
robust Multi-level governance and robust water allocation regimes needed to secure Brazil’s future water needs By www.oecd.org Published On :: Wed, 02 Sep 2015 19:00:00 GMT The recent droughts in Brazil’s Rio de Janeiro and São Paulo states have exposed the need to shift from crisis management to effective risk governance of the country’s water resources, according to a new OECD report. Full Article
robust Chevron to slash spending further despite robust first quarter By www.ft.com Published On :: Fri, 01 May 2020 13:11:11 GMT Earnings of $3.6bn exceeded expectations ahead of collapse in global oil demand Full Article
robust UK’s NHS COVID-19 app lacks robust legal safeguards against data misuse, warns committee By techcrunch.com Published On :: Thu, 07 May 2020 13:05:36 +0000 A UK parliamentary committee that focuses on human rights issues has called for primary legislation to be put in place to ensure that legal protections wrap around the national coronavirus contact tracing app. The app, called NHS COVID-19, is being fast-tracked for public use — with a test ongoing this week in the Isle […] Full Article
robust Blackburn boss Tony Mowbray asks Jack Rodwell to be more 'robust' By Published On :: Fri, 24 Aug 2018 11:16:47 +0100 Rodwell, who played his last game for Sunderland ten months ago, has signed a deal with the Championship club until the end of the season in a bid to get his career back on track. Full Article