gradient

A Compact Quadrupole-Orbitrap Mass Spectrometer with FAIMS Interface Improves Proteome Coverage in Short LC Gradients [Technological Innovation and Resources]

Dorte B. Bekker-Jensen
Apr 1, 2020; 19:716-729

State-of-the-art proteomics-grade mass spectrometers can measure peptide precursors and their fragments with ppm mass accuracy at sequencing speeds of tens of peptides per second with attomolar sensitivity. Here we describe a compact and robust quadrupole-orbitrap mass spectrometer equipped with a front-end High Field Asymmetric Waveform Ion Mobility Spectrometry (FAIMS) Interface. The performance of the Orbitrap Exploris 480 mass spectrometer is evaluated in data-dependent acquisition (DDA) and data-independent acquisition (DIA) modes in combination with FAIMS. We demonstrate that different compensation voltages (CVs) for FAIMS are optimal for DDA and DIA, respectively. Combining DIA with FAIMS using single CVs, the instrument surpasses 2500 peptides identified per minute. This enables quantification of >5000 proteins with short online LC gradients delivered by the Evosep One LC system, allowing acquisition of 60 samples per day. The raw sensitivity of the instrument is evaluated by analyzing 5 ng of a HeLa digest from which >1000 proteins were reproducibly identified with 5 min LC gradients using DIA-FAIMS. To demonstrate the versatility of the instrument, we recorded an organ-wide map of proteome expression across 12 rat tissues quantified by tandem mass tags and label-free quantification using DIA with FAIMS to a depth of >10,000 proteins.




gradient

Convergences of Regularized Algorithms and Stochastic Gradient Methods with Random Projections

We study the least-squares regression problem over a Hilbert space, covering nonparametric regression over a reproducing kernel Hilbert space as a special case. We first investigate regularized algorithms adapted to a projection operator on a closed subspace of the Hilbert space. We prove convergence results with respect to variants of norms, under a capacity assumption on the hypothesis space and a regularity condition on the target function. As a result, we obtain optimal rates for regularized algorithms with randomized sketches, provided that the sketch dimension is proportional to the effective dimension up to a logarithmic factor. As a byproduct, we obtain similar results for Nyström regularized algorithms. Our results provide optimal, distribution-dependent rates that do not have any saturation effect for sketched/Nyström regularized algorithms, considering both the attainable and non-attainable cases, in well-conditioned regimes. We then study stochastic gradient methods with projection over the subspace, allowing multiple passes over the data and minibatches, and we derive similar optimal statistical convergence results.
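
The projection idea is concrete enough to sketch. Below is a minimal, illustrative Python implementation of kernel ridge regression restricted to the span of m randomly sampled landmark points (a plain Nyström sketch); the RBF kernel, the uniform sampling scheme, and all names are our assumptions, not code from the paper.

```python
# Minimal sketch: Nystroem-sketched kernel ridge regression. The RBF
# kernel, uniform landmark sampling, and regularisation value are
# illustrative assumptions.
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of X and Y.
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def nystroem_krr(X, y, m, lam=1e-2, gamma=1.0, seed=0):
    """Least-squares regression restricted to span{k(., x_j)} for m landmarks.

    Solves (K_nm^T K_nm / n + lam * K_mm) alpha = K_nm^T y / n, the normal
    equations of ridge regression projected onto the landmark subspace.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    idx = rng.choice(n, size=m, replace=False)   # sketch: m random landmarks
    K_nm = rbf_kernel(X, X[idx], gamma)          # n x m cross-kernel
    K_mm = rbf_kernel(X[idx], X[idx], gamma)     # m x m landmark kernel
    A = K_nm.T @ K_nm / n + lam * K_mm
    b = K_nm.T @ y / n
    alpha = np.linalg.solve(A + 1e-10 * np.eye(m), b)
    landmarks = X[idx]
    return lambda Z: rbf_kernel(Z, landmarks, gamma) @ alpha

# Hypothetical usage: m plays the role of the sketch dimension, which the
# result above says can scale with the effective dimension.
X = np.random.randn(500, 3)
y = np.sin(X[:, 0]) + 0.1 * np.random.randn(500)
predict = nystroem_krr(X, y, m=50)
```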




gradient

Graph-Dependent Implicit Regularisation for Distributed Stochastic Subgradient Descent

We propose graph-dependent implicit regularisation strategies for synchronised distributed stochastic subgradient descent (Distributed SGD) for convex problems in multi-agent learning. Under the standard assumptions of convexity, Lipschitz continuity, and smoothness, we establish statistical learning rates that retain, up to logarithmic terms, single-machine serial statistical guarantees through implicit regularisation (step size tuning and early stopping) with appropriate dependence on the graph topology. Our approach avoids the need for explicit regularisation in decentralised learning problems, such as adding constraints to the empirical risk minimisation rule. Particularly for distributed methods, the use of implicit regularisation allows the algorithm to remain simple, without projections or dual methods. To prove our results, we establish graph-independent generalisation bounds for Distributed SGD that match the single-machine serial SGD setting (using algorithmic stability), and we establish graph-dependent optimisation bounds that are of independent interest. We present numerical experiments to show that the qualitative nature of the upper bounds we derive can be representative of real behaviours.
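
As a rough illustration of the algorithmic simplicity being claimed, here is a minimal Python sketch of synchronised Distributed SGD with gossip averaging over a fixed graph; the ring topology, step-size schedule, and all names are our illustrative assumptions, not the paper's code.

```python
# Minimal sketch: synchronised distributed SGD with gossip averaging.
# The ring topology and step-size schedule are illustrative assumptions;
# tuning eta0 and the stopping time T is the implicit regularisation.
import numpy as np

def ring_gossip_matrix(n):
    # Doubly stochastic mixing matrix for a ring graph.
    W = np.eye(n) / 2
    for i in range(n):
        W[i, (i - 1) % n] += 0.25
        W[i, (i + 1) % n] += 0.25
    return W

def distributed_sgd(stoch_grads, n_agents, dim, T, eta0=0.1):
    """stoch_grads[i](x) returns a stochastic subgradient of agent i's risk."""
    W = ring_gossip_matrix(n_agents)
    X = np.zeros((n_agents, dim))        # one parameter vector per agent
    for t in range(1, T + 1):            # early stopping: exactly T rounds
        G = np.stack([stoch_grads[i](X[i]) for i in range(n_agents)])
        X = W @ (X - eta0 / np.sqrt(t) * G)   # local step, then one gossip round
    return X.mean(axis=0)                # average of the agents' iterates
```

Note that there are no projections or dual variables: regularisation enters only through the step size eta0 and the stopping time T.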




gradient

Expected Policy Gradients for Reinforcement Learning

We propose expected policy gradients (EPG), which unify stochastic policy gradients (SPG) and deterministic policy gradients (DPG) for reinforcement learning. Inspired by expected Sarsa, EPG integrates (or sums) across actions when estimating the gradient, instead of relying only on the action in the sampled trajectory. For continuous action spaces, we first derive a practical result for Gaussian policies and quadratic critics and then extend it to a universal analytical method, covering a broad class of actors and critics, including Gaussian, exponential families, and policies with bounded support. For Gaussian policies, we introduce an exploration method that uses covariance proportional to the matrix exponential of the scaled Hessian of the critic with respect to the actions. For discrete action spaces, we derive a variant of EPG based on softmax policies. We also establish a new general policy gradient theorem, of which the stochastic and deterministic policy gradient theorems are special cases. Furthermore, we prove that EPG reduces the variance of the gradient estimates without requiring deterministic policies and with little computational overhead. Finally, we provide an extensive experimental evaluation of EPG and show that it outperforms existing approaches on multiple challenging control domains.
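
For intuition, the discrete-action case admits a very short sketch: with a softmax policy, the expected policy gradient sums the critic over all actions, so no sampling over actions is needed. The tabular, single-state setup below is an illustrative assumption, not the paper's implementation.

```python
# Minimal sketch: expected policy gradient for a softmax policy over a
# finite action set, single state, tabular critic Q. Illustrative only.
import numpy as np

def softmax(z):
    z = z - z.max()
    p = np.exp(z)
    return p / p.sum()

def epg_gradient(theta, Q):
    """Exact gradient of J(theta) = sum_a pi_theta(a) Q(a) w.r.t. the logits.

    Using d pi(a)/d theta_b = pi(a) (1[a=b] - pi(b)) and summing over a
    gives pi(b) * (Q(b) - V), with V = sum_a pi(a) Q(a). No action noise.
    """
    pi = softmax(theta)
    V = pi @ Q
    return pi * (Q - V)

theta = np.zeros(4)
Q = np.array([1.0, 0.5, -0.2, 0.1])        # hypothetical critic values
for _ in range(200):
    theta += 0.5 * epg_gradient(theta, Q)  # deterministic ascent on E[Q]
```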




gradient

Conjugate Gradients for Kernel Machines

Regularized least-squares (kernel-ridge / Gaussian process) regression is a fundamental algorithm of statistics and machine learning. Because generic algorithms for the exact solution have cubic complexity in the number of datapoints, large datasets require resorting to approximations. In this work, the computation of the least-squares prediction is itself treated as a probabilistic inference problem. We propose a structured Gaussian regression model on the kernel function that uses projections of the kernel matrix to obtain a low-rank approximation of the kernel and the matrix. A central result is an enhanced way to use the method of conjugate gradients for the specific setting of least-squares regression as encountered in machine learning.
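
For reference, the deterministic iteration being reinterpreted here is standard conjugate gradients on the regularised kernel system (K + lam*I) alpha = y; the sketch below shows that baseline, with a toy PSD matrix standing in for a real kernel. It is not the probabilistic model proposed above.

```python
# Minimal sketch: plain conjugate gradients for (K + lam*I) alpha = y,
# given only matrix-vector products. The toy PSD matrix is illustrative.
import numpy as np

def conjugate_gradients(A_mv, b, tol=1e-8, max_iter=None):
    """Solve A x = b for symmetric positive definite A via CG."""
    x = np.zeros_like(b)
    r = b - A_mv(x)                  # residual
    p = r.copy()                     # search direction
    rs = r @ r
    for _ in range(max_iter or len(b)):
        Ap = A_mv(p)
        step = rs / (p @ Ap)
        x += step * p
        r -= step * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:    # residual small enough: stop early
            break
        p = r + (rs_new / rs) * p    # keep directions A-conjugate
        rs = rs_new
    return x

n, lam = 200, 1e-2
M = np.random.randn(n, 5)
K = M @ M.T                          # toy low-rank PSD "kernel" matrix
y = np.random.randn(n)
alpha = conjugate_gradients(lambda v: K @ v + lam * v, y)
```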




gradient

Robust Asynchronous Stochastic Gradient-Push: Asymptotically Optimal and Network-Independent Performance for Strongly Convex Functions

We consider the standard model of distributed optimization of a sum of functions $F(\mathbf{z}) = \sum_{i=1}^n f_i(\mathbf{z})$, where node $i$ in a network holds the function $f_i(\mathbf{z})$. We allow for a harsh network model characterized by asynchronous updates, message delays, unpredictable message losses, and directed communication among nodes. In this setting, we analyze a modification of the Gradient-Push method for distributed optimization, assuming that (i) node $i$ is capable of generating gradients of its function $f_i(\mathbf{z})$ corrupted by zero-mean bounded-support additive noise at each step, (ii) $F(\mathbf{z})$ is strongly convex, and (iii) each $f_i(\mathbf{z})$ has Lipschitz gradients. We show that our proposed method asymptotically performs as well as the best bounds on centralized gradient descent that takes steps in the direction of the sum of the noisy gradients of all the functions $f_1(\mathbf{z}), \ldots, f_n(\mathbf{z})$ at each step.
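
A simplified, synchronous version of the underlying Gradient-Push idea can be sketched as follows: column-stochastic mixing plus a push-sum weight that corrects the resulting bias. The fixed graph, synchronous rounds, and absence of delays and losses are simplifying assumptions relative to the harsh network model analyzed above.

```python
# Minimal sketch: synchronous subgradient-push on a fixed directed graph.
# Delays, message losses, and asynchrony from the paper's model are omitted.
import numpy as np

def gradient_push(out_neighbors, noisy_grads, dim, T, eta0=0.1):
    """noisy_grads[i](z) returns a noisy gradient of f_i at z."""
    n = len(out_neighbors)
    # Column-stochastic mixing: node j splits its mass equally among
    # its out-neighbors and itself.
    A = np.zeros((n, n))
    for j, outs in enumerate(out_neighbors):
        dests = list(outs) + [j]
        for i in dests:
            A[i, j] = 1.0 / len(dests)
    x = np.zeros((n, dim))
    y = np.ones(n)                        # push-sum weights
    z = x.copy()
    for t in range(1, T + 1):
        w = A @ x                         # push values along directed edges
        y = A @ y                         # push weights the same way
        z = w / y[:, None]                # de-biased local estimates
        g = np.stack([noisy_grads[i](z[i]) for i in range(n)])
        x = w - (eta0 / t) * g            # noisy gradient step at each node
    return z.mean(axis=0)
```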




gradient

On Stationary-Point Hitting Time and Ergodicity of Stochastic Gradient Langevin Dynamics

Stochastic gradient Langevin dynamics (SGLD) is a fundamental algorithm in stochastic optimization. Recent work by Zhang et al. (2017) presents an analysis of the hitting time of SGLD for first- and second-order stationary points. The proof in Zhang et al. (2017) is a two-stage procedure based on bounding Cheeger's constant, which is rather complicated and leads to loose bounds. In this paper, using intuitions from stochastic differential equations, we provide a direct analysis of the hitting times of SGLD to first- and second-order stationary points. Our analysis is straightforward, relying only on basic tools from linear algebra and probability theory. It also leads to tighter bounds than Zhang et al. (2017) and shows the explicit dependence of the hitting time on different factors, including dimensionality, smoothness, noise strength, and step size. Under suitable conditions, we show that the hitting time of SGLD to first-order stationary points can be dimension-independent. Moreover, we apply our analysis to study several important online estimation problems in machine learning, including linear regression, matrix factorization, and online PCA.
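
The iteration under analysis is short enough to state in code. Below is a minimal SGLD sketch in which "hitting" a first-order stationary point is read off as the gradient norm falling below a tolerance; the online least-squares example and all constants are illustrative assumptions.

```python
# Minimal sketch: SGLD with a first-order stationarity stopping test.
# theta_{t+1} = theta_t - eta * g_t + sqrt(2 * eta / beta) * N(0, I).
import numpy as np

def sgld(grad_est, theta0, eta=1e-3, beta=1e4, T=10000, tol=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    theta = theta0.copy()
    for t in range(T):
        g = grad_est(theta)
        if np.linalg.norm(g) < tol:       # hit a first-order stationary point
            return theta, t
        noise = rng.standard_normal(theta.shape)
        theta = theta - eta * g + np.sqrt(2.0 * eta / beta) * noise
    return theta, T

# Hypothetical online linear regression: one fresh sample per step.
d = 10
w_star = np.ones(d)
data_rng = np.random.default_rng(1)

def grad_est(w):
    x = data_rng.standard_normal(d)
    y = x @ w_star + 0.1 * data_rng.standard_normal()
    return (x @ w - y) * x                # stochastic gradient of squared loss

theta, t_hit = sgld(grad_est, np.zeros(d))
```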




gradient

MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation. (arXiv:2005.03161v1 [stat.ML])

Model Stealing (MS) attacks allow an adversary with black-box access to a Machine Learning model to replicate its functionality, compromising the confidentiality of the model. Such attacks train a clone model by using the predictions of the target model for different inputs. The effectiveness of such attacks relies heavily on the availability of data necessary to query the target model. Existing attacks either assume partial access to the dataset of the target model or availability of an alternate dataset with semantic similarities.

This paper proposes MAZE -- a data-free model stealing attack using zeroth-order gradient estimation. In contrast to prior works, MAZE does not require any data and instead creates synthetic data using a generative model. Inspired by recent works in data-free Knowledge Distillation (KD), we train the generative model using a disagreement objective to produce inputs that maximize disagreement between the clone and the target model. However, unlike the white-box setting of KD, where the gradient information is available, training a generator for model stealing requires performing black-box optimization, as it involves accessing the target model under attack. MAZE relies on zeroth-order gradient estimation to perform this optimization and enables a highly accurate MS attack.

Our evaluation with four datasets shows that MAZE provides a normalized clone accuracy in the range of 0.91x to 0.99x and outperforms even recent attacks that rely on partial data (JBDA, clone accuracy 0.13x to 0.69x) or surrogate data (KnockoffNets, clone accuracy 0.52x to 0.97x). We also study an extension of MAZE in the partial-data setting and develop MAZE-PD, which generates synthetic data closer to the target distribution. MAZE-PD further improves the clone accuracy (0.97x to 1.0x) and reduces the queries required for the attack by 2x-24x.
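
The black-box optimization primitive MAZE builds on can be sketched compactly: estimate gradients from queries alone by finite differences along random directions. The quadratic test objective, smoothing radius, and direction count below are illustrative assumptions, not MAZE's actual attack loop.

```python
# Minimal sketch: zeroth-order gradient estimation from queries only.
import numpy as np

def zo_gradient(f, x, n_dirs=20, mu=1e-3, seed=None):
    """Estimate grad f(x) using only evaluations of f.

    Averages d * (f(x + mu*u) - f(x)) / mu * u over random unit directions
    u; this is an unbiased estimator of the gradient of a mu-smoothed f.
    """
    rng = np.random.default_rng(seed)
    d = x.size
    fx = f(x)
    g = np.zeros(d)
    for _ in range(n_dirs):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)            # uniform direction on the sphere
        g += (f(x + mu * u) - fx) / mu * u
    return g * d / n_dirs

# Hypothetical black-box objective: only queries f(x) are allowed.
f = lambda x: ((x - 1.0) ** 2).sum()
x = np.zeros(5)
for _ in range(300):
    x -= 0.05 * zo_gradient(f, x)         # descend on estimated gradients
```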




gradient

Statistical inference for model parameters in stochastic gradient descent

Xi Chen, Jason D. Lee, Xin T. Tong, Yichen Zhang.

Source: The Annals of Statistics, Volume 48, Number 1, 251-273.

Abstract:
The stochastic gradient descent (SGD) algorithm has been widely used in statistical estimation for large-scale data due to its computational and memory efficiency. While most existing works focus on the convergence of the objective function or the error of the obtained solution, we investigate the problem of statistical inference of true model parameters based on SGD when the population loss function is strongly convex and satisfies certain smoothness conditions. Our main contributions are twofold. First, in the fixed dimension setup, we propose two consistent estimators of the asymptotic covariance of the average iterate from SGD: (1) a plug-in estimator, and (2) a batch-means estimator, which is computationally more efficient and only uses the iterates from SGD. Both proposed estimators allow us to construct asymptotically exact confidence intervals and hypothesis tests. Second, for high-dimensional linear regression, using a variant of the SGD algorithm, we construct a debiased estimator of each regression coefficient that is asymptotically normal. This gives a one-pass algorithm for computing both the sparse regression coefficients and confidence intervals, which is computationally attractive and applicable to online data.
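
The batch-means construction lends itself to a short sketch: split the SGD trajectory into batches and let the spread of the batch means estimate the long-run covariance of the averaged iterate. The equal-length batches below are a simplification; the paper's estimator uses a specific batch schedule.

```python
# Minimal sketch: batch-means covariance estimate from SGD iterates only.
# Equal-length batches are a simplifying assumption.
import numpy as np

def batch_means_cov(iterates, n_batches):
    """iterates: (T, d) array of SGD iterates. Returns a (d, d) estimate of
    the long-run covariance Sigma in sqrt(T) * (x_bar - x*) -> N(0, Sigma)."""
    T, d = iterates.shape
    B = T // n_batches                        # batch length
    batches = iterates[: n_batches * B].reshape(n_batches, B, d)
    means = batches.mean(axis=1)              # one mean per batch
    centered = means - means.mean(axis=0)
    # Each batch mean fluctuates like N(x*, Sigma / B), so rescale by B.
    return B * (centered.T @ centered) / (n_batches - 1)

# A 95% confidence interval for coordinate j of the averaged iterate:
# x_bar[j] +/- 1.96 * sqrt(Sigma_hat[j, j] / T).
```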




gradient

The Bayesian Update: Variational Formulations and Gradient Flows

Nicolas Garcia Trillos, Daniel Sanz-Alonso.

Source: Bayesian Analysis, Volume 15, Number 1, 29-56.

Abstract:
The Bayesian update can be viewed as a variational problem by characterizing the posterior as the minimizer of a functional. The variational viewpoint is far from new and is at the heart of popular methods for posterior approximation. However, some of its consequences seem largely unexplored. We focus on the following one: defining the posterior as the minimizer of a functional gives a natural path towards the posterior by moving in the direction of steepest descent of the functional. This idea is made precise through the theory of gradient flows, which brings new tools to the study of Bayesian models and algorithms. Since the posterior may be characterized as the minimizer of different functionals, several variational formulations may be considered. We study three of them and their three associated gradient flows. We show that, in all cases, the rate of convergence of the flows to the posterior can be bounded in terms of the geodesic convexity of the functional being minimized. Each gradient flow naturally suggests a nonlinear diffusion with the posterior as its invariant distribution. These diffusions may be discretized to build proposals for Markov chain Monte Carlo (MCMC) algorithms. By construction, the diffusions are guaranteed to satisfy a certain optimality condition, and their rates of convergence are given by the convexity of the functionals. We use this observation to propose a criterion for the choice of metric in Riemannian MCMC methods.
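
One concrete instance of this picture (standard in the gradient-flow literature; the notation here is ours, not necessarily the paper's) takes the functional to be the KL divergence to the posterior, whose Wasserstein gradient flow is the Fokker-Planck equation of a Langevin diffusion:

```latex
% Posterior as minimizer: with prior pi_0 and negative log-likelihood Phi,
% pi(u) \propto exp(-Phi(u)) pi_0(u), one admissible functional is the
% KL divergence to pi.
\[
  \mathcal{J}(\nu) = \mathrm{KL}(\nu \,\|\, \pi),
  \qquad \pi = \operatorname*{arg\,min}_{\nu} \mathcal{J}(\nu).
\]
% Its gradient flow in the Wasserstein metric is the Fokker-Planck equation
% of the Langevin diffusion, which has pi as invariant distribution:
\[
  \mathrm{d}U_t = \nabla \log \pi(U_t)\,\mathrm{d}t + \sqrt{2}\,\mathrm{d}W_t .
\]
% If J is lambda-geodesically convex (e.g. pi is lambda-log-concave), the
% flow converges to the posterior at an explicit exponential rate:
\[
  \mathrm{KL}(\nu_t \,\|\, \pi) \;\le\; e^{-2\lambda t}\, \mathrm{KL}(\nu_0 \,\|\, \pi).
\]
```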




gradient

A Bayesian Conjugate Gradient Method (with Discussion)

Jon Cockayne, Chris J. Oates, Ilse C.F. Ipsen, Mark Girolami.

Source: Bayesian Analysis, Volume 14, Number 3, 937-1012.

Abstract:
A fundamental task in numerical computation is the solution of large linear systems. The conjugate gradient method is an iterative method which offers rapid convergence to the solution, particularly when an effective preconditioner is employed. However, for more challenging systems a substantial error can be present even after many iterations have been performed. The estimates obtained in this case are of little value unless further information can be provided about, for example, the magnitude of the error. In this paper we propose a novel statistical model for this error, set in a Bayesian framework. Our approach is a strict generalisation of the conjugate gradient method, which is recovered as the posterior mean for a particular choice of prior. The estimates obtained are analysed with Krylov subspace methods and a contraction result for the posterior is presented. The method is then analysed in a simulation study as well as being applied to a challenging problem in medical imaging.
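
The Gaussian conditioning that underlies this kind of probabilistic linear solver is compact enough to sketch: put a prior on the solution, observe projections of the right-hand side, and condition. The random search directions and identity prior below are illustrative stand-ins; recovering the exact conjugate gradient iterate as the posterior mean requires the paper's particular choice of prior and directions.

```python
# Minimal sketch: Bayesian view of a linear solver via exact Gaussian
# conditioning on projected observations S^T b = (A S)^T x. Illustrative only.
import numpy as np

def bayes_linear_solve(A, b, x0, S0, S):
    """Prior x ~ N(x0, S0) for the solution of A x = b; condition on the
    m linear observations S^T b. Returns the posterior mean and covariance."""
    R = A @ S                                  # (n, m): observation operator
    G = R.T @ S0 @ R                           # Gram matrix of observations
    gain = S0 @ R @ np.linalg.inv(G)
    mean = x0 + gain @ (S.T @ (b - A @ x0))    # posterior mean estimate of x
    cov = S0 - gain @ R.T @ S0                 # quantifies the remaining error
    return mean, cov

n, m = 20, 5
M = np.random.randn(n, n)
A = M @ M.T + n * np.eye(n)                    # SPD test system
b = np.random.randn(n)
S = np.random.randn(n, m)                      # m search directions
mean, cov = bayes_linear_solve(A, b, np.zeros(n), np.eye(n), S)
# trace(cov) shrinks as m grows; at m = n the mean solves A x = b exactly.
```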




gradient

Enrichment of Fully Packaged Virions in Column-Purified Recombinant Adeno-Associated Virus (rAAV) Preparations by Iodixanol Gradient Centrifugation Followed by Anion-Exchange Column Chromatography

This rapid and efficient method for preparing highly purified recombinant adeno-associated viruses (rAAVs) is based on the pH-dependent binding of negatively charged rAAV capsids to an anion-exchange resin.




gradient

Purification of Recombinant Adeno-Associated Viruses (rAAVs) by Iodixanol Gradient Centrifugation

This is a simple method for rapid preparation of recombinant adeno-associated virus (rAAV) stocks, which can be used for in vivo gene delivery. The purity of these vectors is considerably lower than that obtained by either CsCl gradient centrifugation or iodixanol gradient ultracentrifugation followed by column chromatography.




gradient

Changes in Child Mortality Over Time Across the Wealth Gradient in Less-Developed Countries

In developed countries, child health disparities across wealth gradients are commonly widening; at the same time, child mortality in low- and middle-income countries is declining. Whether these declines are associated with widening or narrowing disparities is unknown.

A systematic analysis of the evidence on child mortality gradients by wealth in less-developed countries shows that mortality is declining fastest among the poorest in most countries, leading to declining disparities in this important indicator of child health.




gradient

Species Distribution and Comparison between EUCAST and Gradient Concentration Strips Methods for Antifungal Susceptibility Testing of 112 Aspergillus Section Nigri Isolates [Susceptibility]

Aspergillus niger, the third most common species responsible for invasive aspergillosis, was considered a homogeneous species until DNA-based identification uncovered many cryptic species. These species have recently been reclassified into the Aspergillus section Nigri. However, little is yet known about the species distribution and the antifungal susceptibility pattern of each cryptic species within the section Nigri. A total of 112 clinical isolates collected from 5 teaching hospitals in France and phenotypically identified as A. niger were analyzed. Identification to the species level was carried out by nucleotide sequence analysis. The minimum inhibitory concentrations (MICs) of itraconazole, voriconazole, posaconazole, isavuconazole, and amphotericin B were determined by both the EUCAST and gradient concentration strips methods. Aspergillus tubingensis (n=51, 45.5%) and A. welwitschiae (n=50, 44.6%) were the most common species, while A. niger accounted for only 6.3% (n=7). The MICs of azole drugs were higher for A. tubingensis than for A. welwitschiae. The MIC of amphotericin B was 2 mg/L or less for all isolates. Importantly, MICs determined by EUCAST showed no correlation with those determined by the gradient concentration strips method, the latter being lower than the former (Spearman's rank correlation coefficients ranging, depending on the antifungal agent, from 0.01 to 0.25; p>0.4). In conclusion, A. niger should be considered a minority species in the section Nigri. The differences in MICs between species for different azoles underline the importance of accurate identification. Significant divergences in MIC determination between the EUCAST and gradient concentration strips methods require further investigation.
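
For readers unfamiliar with the statistic used, the method comparison above amounts to the kind of computation sketched below: Spearman's rank correlation between paired MICs from the two methods. The MIC values are hypothetical placeholders, not data from this study.

```python
# Minimal sketch: Spearman rank correlation between paired MICs from two
# susceptibility-testing methods. Values are hypothetical placeholders.
from scipy.stats import spearmanr

eucast_mic = [0.25, 0.5, 1.0, 2.0, 0.5, 1.0, 4.0, 0.25]       # mg/L
strip_mic = [0.19, 0.125, 0.38, 0.25, 0.75, 0.094, 0.5, 1.5]  # mg/L
rho, p = spearmanr(eucast_mic, strip_mic)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")  # rho near 0: poor agreement
```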




gradient

[ASAP] Accelerated Protein Biomarker Discovery from FFPE Tissue Samples Using Single-Shot, Short Gradient Microflow SWATH MS

Journal of Proteome Research
DOI: 10.1021/acs.jproteome.9b00671




gradient

Renewable energy from the oceans: from wave, tidal and gradient systems to offshore wind and solar / edited by Domenico Coiro and Tonio Sant

Online Resource




gradient

Higher gradient materials and related generalized continua / Holm Altenbach, Wolfgang H. Müller, Bilen Emek Abali, editors

Online Resource




gradient

Morse theory of gradient flows, concavity and complexity on manifolds with boundary / Gabriel Katz

Dewey Library - QA614.7.K37 2020




gradient

[ASAP] Advanced Liquid Chromatography of Polyolefins Using Simultaneous Solvent and Temperature Gradients

Analytical Chemistry
DOI: 10.1021/acs.analchem.0c01095




gradient

[ASAP] Counterflow Gradient Focusing in Free-Flow Electrophoresis for Protein Fractionation

Analytical Chemistry
DOI: 10.1021/acs.analchem.0c01024




gradient

The influence of structural gradients in large pore organosilica materials on the capabilities for hosting cellular communities

RSC Adv., 2020, 10, 17327-17335
DOI: 10.1039/D0RA00927J, Paper
Open Access
This article is licensed under a Creative Commons Attribution 3.0 Unported Licence.
Hannah Bronner, Anna-Katharina Holzer, Alexander Finke, Marius Kunkel, Andreas Marx, Marcel Leist, Sebastian Polarz
Chemical and structural gradients in biofunctionalized organosilica–polymer nanocomposites control cell adhesion properties and open perspectives for artificial cellular community systems.




gradient

Harnessing salinity gradient energy in coastal stormwater runoff to reduce pathogen loading

Environ. Sci.: Water Res. Technol., 2020, Advance Article
DOI: 10.1039/C9EW01137D, Communication
Kristian L. Dubrawski, Wan Wang, Jianqiao Xu, Craig S. Criddle
First demonstration of the capture of salinity gradient energy from stormwater runoff to the ocean, used to power UV-LED disinfection.




gradient

[ASAP] A Portable and Accurate Phosphate Sensor Using a Gradient Fabry–Pérot Array

ACS Sensors
DOI: 10.1021/acssensors.0c00090




gradient

Realization of high-quality optical nanoporous gradient-index filters by optimal combination of anodization conditions

Nanoscale, 2020, 12, 9404-9415
DOI: 10.1039/C9NR10526C, Paper
Cheryl Suwen Law, Siew Yee Lim, Lina Liu, Andrew D. Abell, Lluis F. Marsal, Abel Santos
High-quality nanoporous anodic alumina gradient-index filters are realized by sinusoidal pulse anodization under optimized anodization conditions.




gradient

Bottom-up and top-down effects on insect herbivores along a natural salinity gradient in a Florida salt marsh




gradient

Storm-influenced sediment transport gradients on a nourished beach




gradient

RKEM implementation for strain gradient theory in multiple dimensions




gradient

Evaluating dissolved oxygen regimes along a gradient of human disturbance for lotic systems in west-central Florida




gradient

Bird communities of isolated cypress wetlands along an urban gradient in Hillsborough County, Florida




gradient

Human-wildlife conflict across urbanization gradients




gradient

Effects of a shallow-water hydrothermal vent gradient on benthic calcifiers, Tutum Bay, Ambitle Island, Papua New Guinea




gradient

A survey of Coleopteran species richness, diversity and abundance in habitats along a disturbance gradient




gradient

Behavioral changes of the Slate-throated Redstart (Myioborus miniatus) and the Collared Redstart (Myioborus torquatus) along an altitudinal gradient in the Monteverde Cloud Forest Preserve




gradient

The San Luis River continuum: a look at the chemical and biological changes along a longitudinal pristine river gradient




gradient

Population abundance, sexual expression, and gender ratios of Marchantia sp. along an elevational gradient in Monteverde, Costa Rica




gradient

Survival and reproductive fitness of two species of Pleurothallid orchids along a changing climatic gradient




gradient

Herpetofauna distribution and species richness along an elevational gradient




gradient

Community composition and diversity of lichens along a disturbance gradient in San Luis, Costa Rica




gradient

Bee and wasp diversity and abundance along an elevational gradient, Monteverde, Costa Rica




gradient

Coleopteran diversity on an elevational gradient in Monteverde, Puntarenas, Costa Rica




gradient

Macroinvertebrate diversity in Heliconia tortuosa along an elevational gradient




gradient

Vesicular-Arbuscular Mycorrhizae (VAM) spore abundance and soil characteristics along a neotropical premontane forest successional gradient




gradient

Foliar changes in Saurauia montana along an altitudinal and soil fertility gradient




gradient

Lichen richness along an air pollution gradient in Monteverde, Costa Rica




gradient

A comparison of moth diversity and abundance along an altitudinal gradient in Monteverde, Costa Rica




gradient

Flower color and shape variation on an elevational gradient




gradient

Distribution and abundance of juvenile fishes along a salinity gradient in the Anclote River Estuary, Tarpon Springs, Florida




gradient

Rolled-leaf hispine herbivory of Heliconia spp. (Heliconiaceae) over an altitudinal gradient