
Essai sur la méningite en plaque ou scléreuse limitée à la base de l’encéphale / par Emile Labarriere.

Paris : V.A. Delahaye, 1878.





Essai sur la syringomyelie / par le Docteur Critzman.

Paris : G. Steinheil, 1892.





Essai sur l'application de la chimie à l'étude physiologique du sang de l'homme : et à l'étude physiologico-pathologique, hygiénique et thérapeutique des maladies de cette humeur / par P.S. Denis.

Paris : Béchet jeune, 1838.





Essai sur le rétrécissement tricuspidien / Robert Leudet.

Paris : Steinheil, 1888.





Essai sur le typhus, ou sur les fièvres dites malignes, putrides, bilieuses, muqueuses, jaune, la peste. Exposition analytique et expérimentale de la nature des fièvres en général ... / par J.F. Hernandez.

Paris : chez Mequignon-Marvis, 1816.





Constantia is reunited with her father, the emperor Tiberius II Constantinus, and given as bride to Ælla, king of Northumbria. Stipple engraving by F. Bartolozzi, 1799, after J.F. Rigaud.

London (Poets Gallery Fleet Street) : Publish'd ... by Thos. Macklin, Novr. 30. 1799.





Les oeuures du R. P. Gabriel de Castaigne, tant medicinales que chymiques, : diuisées en quatre principaux traitez. I. Le paradis terrestre. II. Le grand miracle de la nature metallique. III. L'or potable. IV. Le thresor philosophique de la medec

A Paris : Chez Iean Dhourry, au bout du Pont-Neuf, près les Augustins, à l'Image S. Iean, M. DC. LXI. [1661]





Louis Pasteur. Photogravure by Goupil & Cie, 1886, after A.G.A. Edelfelt, 1885.

Paris ; Londres ; La Haye : Imprimé & publié par Boussod, Valadon & Cie, éditeurs successeurs de Goupil & Cie ; Berlin : Verlag von Boussod, Valadon & Co. ; New-York : Published by M. Knoedler & Co., Le 1er Juin 1886.





A Moroccan horseman setting off with a rifle to perform at an equestrian display (fantasia, Tbourida). Etching and drypoint by L.A. Lecouteux after H. Regnault, 1870.





Aeneas carrying his father Anchises on his shoulders as he, his son Ascanius and his wife Creusa flee from the sack of Troy. Engraving by R. Guidi after Agostino Carracci after F. Barocci.





Europa (right), grieving after her rape by Jupiter, is consoled by Venus and Cupid (centre); Jupiter disguised as a bull looks on from the left background. Engraving by T. Cook and R. Pollard, 1797, after B. West, 1772.

London (Braynes-Row, Spa-Fields) : Publish'd ... by R. Pollard printseller, Jany: 30th; 1797.





Perseus, dismounted from Pegasus, after rescuing Andromeda. Engraving by P.F. Tardieu after A.F. Oeser after P.P. Rubens.

[Dresden?], [1754?]





King Charles I on horseback outside the city walls of Hull: the Parliamentarians inside, led by Sir John Hotham, refuse to surrender the city. Engraving by N. Tardieu after C. Parrocel.

London : Printed and sold by Thos. and John Bowles, printsellers, [1728]





The birth of Henri IV at the castle of Pau. Etching by E.J. Ramus after Eugène-François-Marie-Joseph Devéria.





Étude hygiénique sur la profession de mouleur en cuivre : pour servir à l'histoire des professions exposées aux poussières inorganiques / par Ambroise Tardieu.

Paris, [France] : J.B. Baillière, Libraire de l'Académie Impériale de Médecine, 1854.





Étude hygiénique et médico-légale sur la fabrication et l'emploi des allumettes chimiques / Ambroise Tardieu.

Paris, [France] : J.B. Baillière, Libraire de l'Académie Impériale de Médecine, 1856.





Étude médico-légale sur l'avortement / Ambroise Tardieu.

Paris, [France] : J.B. Baillière, Libraire de l'Académie Impériale de Médecine, 1855.





Mémoire sur la mort par suffocation / Ambroise Tardieu.

Paris, [France] : J.B. Baillière, Libraire de l'Académie Impériale de Médecine, 1855.





Mémoire sur l'empoisonnement par la strychnine : contenant la relation médico-légale complète de l'affaire Palmer / Ambroise Tardieu.

Paris, [France] : J.B. Baillière, Libraire de l'Académie Impériale de Médecine, 1857.





Parce que, travestis et transgenres, notre regard sur le monde et les autres se veut teinté de respect et de douceur / Hommefleur.

Châtillon, France : Association Hommefleur, [date of publication not identified]





burnt out zine ~ how to cope with autistic burnout // autism, asd, aspergers, neurodivergent

2019





Resilient & resisting : our stories / Hackney Museum





Resilient & resisting : love & loss / Hackney Museum





Tierische Drogen im 18. Jahrhundert im Spiegel offizineller und nicht offizineller Literatur und ihre Bedeutung in der Gegenwart / Katja Susanne Moosmann ; mit einem Geleitwort von Christoph Friedrich.

Stuttgart : In Kommission: Wissenschaftliche Verlagsgesellschaft, 2019.





Neuroscience methods in drug abuse research / editors, Roger M. Brown, David P. Friedman, Yuth Nimit.

Rockville, Maryland : National Institute on Drug Abuse, 1985.





Relapse and recovery in drug abuse / editors, Frank M. Tims, Carl G. Leukefeld.

Rockville, Maryland : National Institute on Drug Abuse, 1986.





Neurobiology of behavioral control in drug abuse / editor, Stephen I. Szara.

Rockville, Maryland : National Institute on Drug Abuse, 1986.





The role of neuroplasticity in the response to drugs / editors, David P. Friedman, Doris H. Clouet.

Rockville, Maryland : National Institute on Drug Abuse, 1987.





Compulsory treatment of drug abuse : research and clinical practice / editors, Carl G. Leukefeld, Frank M. Tims.

Rockville, Maryland : National Institute on Drug Abuse, 1988.





The therapeutic community : study of effectiveness : social and psychological adjustment of 400 dropouts and 100 graduates from the Phoenix House Therapeutic Community / by George De Leon.

Rockville, Maryland : National Institute on Drug Abuse, 1984.





Jeu instructif des peuples, 1815 / Paul-André Basset





Echelet picumne and echelet grimpeur, male / by Jean Gabriel Prêtre, 1824





Target Propagation in Recurrent Neural Networks

Recurrent Neural Networks have been widely used to process sequence data, but have long been criticized for their biological implausibility and training difficulties related to vanishing and exploding gradients. This paper presents a novel algorithm for training recurrent networks, target propagation through time (TPTT), that outperforms standard backpropagation through time (BPTT) on four out of the five problems used for testing. The proposed algorithm is initially tested and compared to BPTT on four synthetic time lag tasks, and its performance is also measured using the sequential MNIST data set. In addition, as TPTT uses target propagation, it allows for discrete nonlinearities and could potentially mitigate the credit assignment problem in more complex recurrent architectures.
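
The abstract stops short of the mechanics, so the following is a minimal, hedged PyTorch sketch of the generic recipe TPTT builds on: difference target propagation unrolled through time on a vanilla RNN. A gradient step on the task loss sets the target for the last hidden state, a learned approximate inverse propagates targets backward through time, and each transition is then fitted with a purely local loss. The module shapes, the linear inverse, and the target step size are illustrative assumptions, not the paper's exact algorithm.

```python
# Hedged sketch: difference target propagation through time on a vanilla RNN.
# Not the paper's exact TPTT; shapes, the inverse g, and step sizes are assumed.
import torch
import torch.nn as nn

T, B, D_in, D_h = 10, 32, 8, 16
f = nn.RNNCell(D_in, D_h)                  # forward step: h_t = f(x_t, h_{t-1})
g = nn.Linear(D_h + D_in, D_h)             # learned approximate inverse of f
readout = nn.Linear(D_h, 1)
opt = torch.optim.Adam([*f.parameters(), *g.parameters(), *readout.parameters()], lr=1e-3)
x, y = torch.randn(T, B, D_in), torch.randn(B, 1)

# forward pass, keeping every hidden state
hs = [torch.zeros(B, D_h)]
for t in range(T):
    hs.append(f(x[t], hs[-1]))

# target for the last state: one gradient step on the task loss
loss_task = ((readout(hs[-1]) - y) ** 2).mean()
gT = torch.autograd.grad(loss_task, hs[-1], retain_graph=True)[0]
targets = {T: hs[T] - 0.1 * gT}

# propagate targets backward with the difference correction
with torch.no_grad():
    for t in range(T, 0, -1):
        inv = lambda h: g(torch.cat([h, x[t - 1]], dim=1))
        targets[t - 1] = hs[t - 1] + inv(targets[t]) - inv(hs[t])

# local losses: refit each transition toward its target (inputs detached),
# train g to invert f on noisy states, and fit the readout on detached features
loss_local = sum(((f(x[t - 1], hs[t - 1].detach()) - targets[t].detach()) ** 2).mean()
                 for t in range(1, T + 1))
with torch.no_grad():
    h_noisy = [h + 0.1 * torch.randn_like(h) for h in hs[:-1]]
    h_next = [f(x[t], h_noisy[t]) for t in range(T)]
loss_inv = sum(((g(torch.cat([h_next[t], x[t]], dim=1)) - h_noisy[t]) ** 2).mean()
               for t in range(T))
loss_out = ((readout(hs[-1].detach()) - y) ** 2).mean()

opt.zero_grad()
(loss_local + loss_inv + loss_out).backward()
opt.step()
```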





Branch and Bound for Piecewise Linear Neural Network Verification

The success of Deep Learning and its potential use in many safety-critical applications has motivated research on formal verification of Neural Network (NN) models. In this context, verification involves proving or disproving that an NN model satisfies certain input-output properties. Despite the reputation of learned NN models as black boxes, and the theoretical hardness of proving useful properties about them, researchers have been successful in verifying some classes of models by exploiting their piecewise linear structure and taking insights from formal methods such as Satisfiability Modulo Theory. However, these methods are still far from scaling to realistic neural networks. To facilitate progress in this crucial area, we exploit the Mixed Integer Linear Programming (MIP) formulation of verification to propose a family of algorithms based on Branch-and-Bound (BaB). We show that our family contains previous verification methods as special cases. With the help of the BaB framework, we make three key contributions. Firstly, we identify new methods that combine the strengths of multiple existing approaches, accomplishing significant performance improvements over the previous state of the art. Secondly, we introduce an effective branching strategy on ReLU non-linearities. This branching strategy allows us to efficiently and successfully deal with high input-dimensional problems with convolutional network architectures, on which previous methods fail frequently. Finally, we propose comprehensive test data sets and benchmarks, which include a collection of previously released test cases. We use the data sets to conduct a thorough experimental comparison of existing and new algorithms and to provide an inclusive analysis of the factors impacting the hardness of verification problems.
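
To make the branch-and-bound framing concrete, here is a small, self-contained toy sketch, not the paper's algorithm: interval arithmetic bounds the output of a tiny randomly weighted ReLU network over an input box, and when the bounds cannot prove the property the verifier branches on an unstable ReLU, fixing its phase in each child subproblem. A real verifier would use much tighter bounding (LP or MIP relaxations) and would also split the input domain; the network, the property f(x) > 0, and the box are all illustrative.

```python
# Toy branch-and-bound verifier: interval bounds plus branching on ReLU phases.
# Everything here (network, property, box) is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 2)), rng.normal(size=4)   # toy 2-4-1 ReLU net
W2, b2 = rng.normal(size=(1, 4)), rng.normal(size=1)

def interval_affine(W, b, lo, hi):
    """Exact interval bounds of W x + b over the box [lo, hi]."""
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def output_bounds(lo, hi, mask):
    """mask[i] in {None, 0, 1}: ReLU i free / forced inactive / forced active."""
    zlo, zhi = interval_affine(W1, b1, lo, hi)
    alo, ahi = np.maximum(zlo, 0), np.maximum(zhi, 0)
    for i, m in enumerate(mask):
        if m == 0:
            alo[i] = ahi[i] = 0.0                 # inactive phase: output is 0
        elif m == 1:
            alo[i], ahi[i] = zlo[i], zhi[i]       # active phase: identity (sound superset)
    olo, ohi = interval_affine(W2, b2, alo, ahi)
    return olo[0], ohi[0], zlo, zhi

def proves(lo, hi, mask=None):
    """Try to prove the property f(x) > 0 on the box via branch-and-bound."""
    mask = mask if mask is not None else [None] * 4
    olo, ohi, zlo, zhi = output_bounds(lo, hi, mask)
    if olo > 0:
        return True                               # proven on this subproblem
    unstable = [i for i, m in enumerate(mask) if m is None and zlo[i] < 0 < zhi[i]]
    if not unstable:
        return False                              # bounds inconclusive; give up here
    i = max(unstable, key=lambda j: zhi[j] - zlo[j])   # branch on the widest ReLU
    return all(proves(lo, hi, mask[:i] + [p] + mask[i + 1:]) for p in (0, 1))

print(proves(np.array([-1.0, -1.0]), np.array([1.0, 1.0])))
```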





Option pricing with bivariate risk-neutral density via copula and heteroscedastic model: A Bayesian approach

Lucas Pereira Lopes, Vicente Garibay Cancho, Francisco Louzada.

Source: Brazilian Journal of Probability and Statistics, Volume 33, Number 4, 801–825.

Abstract:
Multivariate options are adequate tools for multi-asset risk management. The pricing models derived from the pioneering Black-Scholes method in the multivariate case assume that the underlying asset prices follow geometric Brownian motion. However, such methods impose some unrealistic constraints on the computation of the fair option price, such as constant volatility over the time to maturity and linear correlation between the assets. Therefore, this paper aims to price and analyze the fair-price behavior of the (bivariate) call-on-max option using marginal heteroscedastic models with a dependence structure modeled via copulas. For inference, we adopt a Bayesian perspective and computationally intensive Markov chain Monte Carlo (MCMC) methods. A simulation study examines the bias and the root mean squared error of the posterior means of the parameters. Real stock prices of Brazilian banks illustrate the approach. For the proposed method, we examine the effects of the strike and the dependence structure on the fair price of the option. The results show that the prices obtained by our approach, with heteroscedastic marginals and copulas, differ substantially from those obtained under the Black-Scholes model. Empirical results are presented to demonstrate the advantages of our strategy.
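
As a concrete illustration of the copula mechanics, here is a hedged Monte Carlo sketch that prices a call-on-max by joining two marginals with a Gaussian copula. The paper couples GARCH-type heteroscedastic marginals and fits everything by MCMC; here the marginals are plain risk-neutral lognormals and every number (spots, vols, rho, strike) is an illustrative stand-in.

```python
# Hedged sketch: Monte Carlo pricing of a European call-on-max with a
# Gaussian copula. Marginals and all parameter values are illustrative.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
S0, sigma = np.array([100.0, 95.0]), np.array([0.2, 0.3])
r, T, K, rho, n = 0.05, 1.0, 100.0, 0.5, 200_000

# 1) sample the Gaussian copula: correlated normals -> uniform marginals
L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
z = rng.standard_normal((n, 2)) @ L.T
u = norm.cdf(z)

# 2) push the uniforms through each marginal's quantile function; for the
# lognormal marginals used here this is norm.ppf, but any fitted marginal
# (e.g. a simulated GARCH terminal distribution) could be substituted
eps = norm.ppf(u)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * eps)

# 3) discounted expected payoff of the call on the maximum
payoff = np.maximum(ST.max(axis=1) - K, 0.0)
price = np.exp(-r * T) * payoff.mean()
stderr = np.exp(-r * T) * payoff.std(ddof=1) / np.sqrt(n)
print(f"call-on-max price ~ {price:.3f} +/- {1.96 * stderr:.3f}")
```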





Stochastic monotonicity from an Eulerian viewpoint

Davide Gabrielli, Ida Germana Minelli.

Source: Brazilian Journal of Probability and Statistics, Volume 33, Number 3, 558–585.

Abstract:
Stochastic monotonicity is a well-known partial order relation between probability measures defined on the same partially ordered set. Strassen's theorem establishes the equivalence between stochastic monotonicity and the existence of a coupling compatible with the partial order. We consider the case of a countable set and introduce the class of finitely decomposable flows on a directed acyclic graph associated with the partial order. We show that one probability measure stochastically dominates another if and only if there exists a finitely decomposable flow whose divergence is the difference of the two measures. We illustrate the result with some examples.
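
For a totally ordered finite set, a special case of the setting above, both sides of Strassen's equivalence are easy to exhibit numerically: dominance is a pointwise CDF inequality, and the quantile construction produces a coupling that respects the order. A small illustrative sketch (the two distributions are arbitrary examples, not from the paper):

```python
# Hedged sketch: stochastic dominance and an explicit Strassen coupling on the
# totally ordered set {0,...,4}; the two distributions are arbitrary examples.
import numpy as np

mu = np.array([0.10, 0.20, 0.30, 0.20, 0.20])
nu = np.array([0.05, 0.10, 0.25, 0.30, 0.30])   # nu dominates mu

def dominates(nu, mu):
    """nu >= mu in the stochastic order iff F_nu <= F_mu pointwise."""
    return bool(np.all(np.cumsum(nu) <= np.cumsum(mu) + 1e-12))

def quantile_coupling(mu, nu, n=10):
    """Monotone coupling (X, Y) via inverse CDFs at common quantile levels."""
    u = (np.arange(n) + 0.5) / n
    X = np.searchsorted(np.cumsum(mu), u)
    Y = np.searchsorted(np.cumsum(nu), u)
    return X, Y

print(dominates(nu, mu))            # True: F_nu stays below F_mu
X, Y = quantile_coupling(mu, nu)
print(bool(np.all(X <= Y)))         # the coupling respects the order
```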





Odysseus asleep : uncollected sequences, 1994-2019

Sanger, Peter, 1943- author.
9781554472048





Can a powerful neural network be a teacher for a weaker neural network? (arXiv:2005.00393v2 [cs.LG] UPDATED)

Transfer learning is widely used to take what has been learned in one context and apply it to another, i.e. the capacity to apply acquired knowledge and skills to new situations. But is it possible to transfer the learning from a deep neural network to a weaker neural network? Is it possible to improve the performance of a weak neural network using the knowledge acquired by a more powerful neural network? In this work, during the training process of a weak network, we add a loss function that minimizes the distance between the features previously learned by a strong neural network and the features that the weak network must try to learn. To demonstrate the effectiveness and robustness of our approach, we conducted a large number of experiments using three well-known datasets and demonstrated that a weak neural network can increase its performance if its learning process is driven by a more powerful neural network.
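
A hedged sketch of the loss described above: while training a weak network, a frozen stronger network provides target features, and a squared distance between (projected) weak features and strong features is added to the task loss. The architectures, the linear projection used to match feature widths, and the weight lambda are illustrative assumptions, not the paper's configuration.

```python
# Hedged sketch: add a feature-matching term to a weak network's loss, with a
# frozen stronger network as teacher. Sizes, projection, and lambda are assumed.
import torch
import torch.nn as nn

strong_body = nn.Sequential(nn.Linear(32, 256), nn.ReLU())   # frozen teacher
weak_body = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
weak_head = nn.Linear(64, 10)
proj = nn.Linear(64, 256)        # match feature widths for the distance term
for p in strong_body.parameters():
    p.requires_grad_(False)

opt = torch.optim.Adam([*weak_body.parameters(), *weak_head.parameters(),
                        *proj.parameters()], lr=1e-3)
ce, lam = nn.CrossEntropyLoss(), 0.1

x = torch.randn(64, 32)                    # a mini-batch (random stand-in)
y = torch.randint(0, 10, (64,))
feat_strong = strong_body(x)               # teacher features
feat_weak = weak_body(x)
loss = ce(weak_head(feat_weak), y) + lam * ((proj(feat_weak) - feat_strong) ** 2).mean()
opt.zero_grad(); loss.backward(); opt.step()
```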





Capturing and Explaining Trajectory Singularities using Composite Signal Neural Networks. (arXiv:2003.10810v2 [cs.LG] UPDATED)

Spatial trajectories are ubiquitous and complex signals. Their analysis is crucial in many research fields, from urban planning to neuroscience. Several approaches have been proposed to cluster trajectories. They rely either on hand-crafted features, which struggle to capture the spatio-temporal complexity of the signal, or on Artificial Neural Networks (ANNs), which can be more efficient but are less interpretable. In this paper we present a novel ANN architecture designed to capture the spatio-temporal patterns characteristic of a set of trajectories while taking into account the demographics of the navigators. Hence, our model extracts markers linked to both behaviour and demographics. We propose a composite signal analyser (CompSNN) combining three simple ANN modules. Each of these modules uses a different signal representation of the trajectory while remaining interpretable. Our CompSNN performs significantly better than its modules taken in isolation and allows us to visualise which parts of the signal were most useful to discriminate the trajectories.
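
A hedged sketch of the compositional idea: three small modules, each consuming a different representation of the same trajectory (raw coordinates, speed profile, turning angles), with their embeddings concatenated for classification. The specific representations and layer sizes are illustrative; the paper's modules may differ.

```python
# Hedged sketch of a composite trajectory classifier; representations and
# sizes are assumptions, not the paper's exact CompSNN modules.
import torch
import torch.nn as nn

class CompositeTrajNet(nn.Module):
    def __init__(self, n_classes=4, T=100):
        super().__init__()
        self.xy = nn.Sequential(nn.Flatten(), nn.Linear(2 * T, 32), nn.ReLU())
        self.speed = nn.Sequential(nn.Linear(T - 1, 16), nn.ReLU())
        self.turn = nn.Sequential(nn.Linear(T - 2, 16), nn.ReLU())
        self.head = nn.Linear(32 + 16 + 16, n_classes)

    def forward(self, traj):                      # traj: (B, T, 2)
        d = traj.diff(dim=1)                      # step vectors
        speed = d.norm(dim=-1)                    # (B, T-1) speed profile
        ang = torch.atan2(d[..., 1], d[..., 0])
        turn = ang.diff(dim=1)                    # (B, T-2) turning angles
        z = torch.cat([self.xy(traj), self.speed(speed), self.turn(turn)], dim=1)
        return self.head(z)

model = CompositeTrajNet()
logits = model(torch.randn(8, 100, 2))
print(logits.shape)   # torch.Size([8, 4])
```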





A priori generalization error for two-layer ReLU neural network through minimum norm solution. (arXiv:1912.03011v3 [cs.LG] UPDATED)

We focus on estimating the a priori generalization error of two-layer ReLU neural networks (NNs) trained by mean squared error, which depends only on the initial parameters and the target function, through the following research line. We first estimate the a priori generalization error of a finite-width two-layer ReLU NN under the constraint of a minimal-norm solution, which is proved in [zhang2019type] to be an equivalent solution of a linearized (w.r.t. parameters) finite-width two-layer NN. As the width goes to infinity, the linearized NN converges to the NN in the Neural Tangent Kernel (NTK) regime [jacot2018neural]. Thus, we can derive the a priori generalization error of two-layer ReLU NNs in the NTK regime. The distance between an NN in the NTK regime and a finite-width NN trained by gradient descent is estimated in [arora2019exact]. Based on the results in [arora2019exact], our work proves an a priori generalization error bound for two-layer ReLU NNs. This estimate uses the intrinsic implicit bias of the minimum-norm solution without requiring extra regularity of the loss function. The estimate also implies that the NN does not suffer from the curse of dimensionality, and that a small generalization error can be achieved without an exponentially large number of neurons. In addition, the research line proposed in this paper can also be used to study other properties of the finite-width network, such as the posterior generalization error.
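
For orientation, the central objects have a standard shape in this literature; the display below is a hedged paraphrase of that generic shape in NTK notation, not the paper's exact statement or constants.

```latex
% Minimum-norm / NTK-regime predictor on data (x_i, y_i)_{i=1}^n:
\[
  f_{\mathrm{ntk}}(x) = K(x, X)\, K(X, X)^{-1} y,
\]
% where K is the NTK of the two-layer ReLU network. A priori bounds obtained
% along this research line control the population risk by a data- and
% target-dependent quantity of the rough form
\[
  \mathcal{L}(f_{\mathrm{ntk}}) \lesssim
  \sqrt{\frac{y^{\top} K(X, X)^{-1} y}{n}} + \text{lower-order terms},
\]
% with no exponential dependence on the input dimension, which is the sense
% in which such estimates avoid the curse of dimensionality.
```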





Differentiable Sparsification for Deep Neural Networks. (arXiv:1910.03201v2 [cs.LG] UPDATED)

A deep neural network has relieved the burden of feature engineering by human experts, but comparable effort is instead required to determine an effective architecture. On the other hand, as network sizes have grown, considerable resources are also invested in reducing their size. These problems can be addressed by sparsification of an over-complete model, which removes redundant parameters or connections by pruning them away after training or encouraging them to become zero during training. In general, however, these approaches are not fully differentiable and interrupt end-to-end training with stochastic gradient descent, in that they require either a parameter-selection or a soft-thresholding step. In this paper, we propose a fully differentiable sparsification method for deep neural networks which allows parameters to be exactly zero during training, and thus can learn the sparsified structure and the weights of the network simultaneously using stochastic gradient descent. We apply the proposed method to various popular models in order to show its effectiveness.
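
One common way to realize the property described above, offered as a hedged sketch rather than the paper's exact parameterization: reparameterize each weight through a trainable soft threshold, so weights can sit exactly at zero during training while the whole model remains trainable end to end with SGD.

```python
# Hedged sketch: differentiable sparsification via a soft-threshold
# reparameterization. One common construction, not necessarily the paper's.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseLinear(nn.Module):
    def __init__(self, d_in, d_out):
        super().__init__()
        self.v = nn.Parameter(torch.randn(d_out, d_in) * 0.1)
        self.s = nn.Parameter(torch.full((d_out, d_in), -3.0))  # threshold logits
        self.b = nn.Parameter(torch.zeros(d_out))

    def weight(self):
        thr = F.softplus(self.s)                                 # positive threshold
        return torch.sign(self.v) * F.relu(self.v.abs() - thr)   # exact zeros

    def forward(self, x):
        return F.linear(x, self.weight(), self.b)

layer = SparseLinear(20, 5)
opt = torch.optim.SGD(layer.parameters(), lr=0.1)
x, y = torch.randn(128, 20), torch.randn(128, 5)
for _ in range(100):
    # L1 term on the effective weights encourages thresholds to prune entries
    loss = F.mse_loss(layer(x), y) + 1e-3 * layer.weight().abs().sum()
    opt.zero_grad(); loss.backward(); opt.step()
print("fraction of exactly-zero weights:", (layer.weight() == 0).float().mean().item())
```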





FNNC: Achieving Fairness through Neural Networks. (arXiv:1811.00247v3 [cs.LG] UPDATED)

In classification models, fairness can be ensured by solving a constrained optimization problem. We focus on fairness constraints like Disparate Impact, Demographic Parity, and Equalized Odds, which are non-decomposable and non-convex. Researchers define convex surrogates of the constraints and then apply convex optimization frameworks to obtain fair classifiers. Surrogates serve only as an upper bound to the actual constraints, and convexifying fairness constraints can be challenging.

We propose a neural network-based framework, FNNC, to achieve fairness while maintaining high accuracy in classification. The above fairness constraints are included in the loss using Lagrangian multipliers. We prove bounds on the generalization error for the constrained losses, which asymptotically go to zero. The network is optimized using two-step mini-batch stochastic gradient descent. Our experiments show that FNNC performs as well as the state of the art, if not better. The experimental evidence supplements our theoretical guarantees. In summary, we have an automated solution to achieve fairness in classification which is easily extendable to many fairness constraints.
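
A hedged sketch of one training step with the Lagrangian treatment the abstract describes, using a demographic-parity gap as the constraint; the network, the tolerance eps, the step sizes, and the simultaneous (rather than two-step) update are illustrative choices, not FNNC's exact procedure.

```python
# Hedged sketch: fairness-constrained training via a Lagrangian multiplier.
# Constraint, network, and step sizes are illustrative, not FNNC's exact setup.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
lam = torch.zeros((), requires_grad=True)          # Lagrange multiplier
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
bce, eps = nn.BCEWithLogitsLoss(), 0.05            # eps: allowed parity gap

x = torch.randn(256, 16)                           # a mini-batch (random stand-in)
y = torch.randint(0, 2, (256,)).float()
a = torch.randint(0, 2, (256,))                    # protected attribute

logits = net(x).squeeze(1)
p = torch.sigmoid(logits)
dp_gap = (p[a == 0].mean() - p[a == 1].mean()).abs()
lagrangian = bce(logits, y) + lam * (dp_gap - eps)

opt.zero_grad()
lagrangian.backward()
opt.step()                                         # descend in the weights
with torch.no_grad():
    lam += 1e-2 * lam.grad                         # ascend in the multiplier
    lam.clamp_(min=0.0)
    lam.grad.zero_()
```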





Physics-informed neural network for ultrasound nondestructive quantification of surface breaking cracks. (arXiv:2005.03596v1 [cs.LG])

We introduce an optimized physics-informed neural network (PINN) trained to solve the problem of identifying and characterizing a surface-breaking crack in a metal plate. PINNs are neural networks that can combine data and physics in the learning process by adding the residuals of a system of partial differential equations to the loss function. Our PINN is supervised with realistic ultrasonic surface acoustic wave data acquired at a frequency of 5 MHz. The ultrasonic surface wave data are represented as a surface deformation on the top surface of a metal plate, measured by laser vibrometry. The PINN is physically informed by the acoustic wave equation, and its convergence is sped up using adaptive activation functions. The adaptive activation function introduces a scalable hyperparameter into the activation function, which is optimized to achieve the best network performance, as it dynamically changes the topology of the loss function involved in the optimization process. The use of adaptive activation functions significantly improves convergence, as observed in the current study. We use PINNs to estimate the speed of sound of the metal plate, which we do with an error of 1%, and then, by allowing the speed of sound to be space-dependent, we identify and characterize the crack as the positions where the speed of sound has decreased. Our study also shows the effect of sub-sampling the data on the sensitivity of the sound speed estimates. More broadly, the resulting model is a promising deep neural network model for ill-posed inverse problems.
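
A hedged sketch of the ingredients: a network u(x, y, t) built from adaptive activations tanh(a·x) with a trainable slope a, a second network for a space-dependent speed of sound c(x, y), and a loss combining a data misfit with the residual of the 2D acoustic wave equation u_tt = c²(u_xx + u_yy). Geometry, data, network sizes, and loss weights are illustrative stand-ins for the paper's setup.

```python
# Hedged sketch of a PINN with adaptive activations and a learnable,
# space-dependent sound speed. All data and sizes are illustrative.
import torch
import torch.nn as nn

class AdaptiveTanh(nn.Module):
    def __init__(self):
        super().__init__()
        self.a = nn.Parameter(torch.tensor(1.0))  # trainable activation slope
    def forward(self, x):
        return torch.tanh(self.a * x)

u_net = nn.Sequential(nn.Linear(3, 64), AdaptiveTanh(),
                      nn.Linear(64, 64), AdaptiveTanh(), nn.Linear(64, 1))
c_net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))  # c(x, y)

def grad(out, var):
    return torch.autograd.grad(out, var, torch.ones_like(out), create_graph=True)[0]

def pde_residual(xyt):
    xyt = xyt.requires_grad_(True)
    u = u_net(xyt)
    g = grad(u, xyt)                       # (u_x, u_y, u_t)
    u_xx = grad(g[:, 0:1], xyt)[:, 0:1]
    u_yy = grad(g[:, 1:2], xyt)[:, 1:2]
    u_tt = grad(g[:, 2:3], xyt)[:, 2:3]
    c = c_net(xyt[:, :2])
    return u_tt - c**2 * (u_xx + u_yy)     # wave-equation residual

# random stand-ins for measured surface displacements and collocation points
xyt_data, u_data = torch.rand(512, 3), torch.zeros(512, 1)
xyt_col = torch.rand(2048, 3)

opt = torch.optim.Adam([*u_net.parameters(), *c_net.parameters()], lr=1e-3)
loss = ((u_net(xyt_data) - u_data) ** 2).mean() + (pde_residual(xyt_col) ** 2).mean()
opt.zero_grad(); loss.backward(); opt.step()
```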





Reducing Communication in Graph Neural Network Training. (arXiv:2005.03300v1 [cs.LG])

Graph Neural Networks (GNNs) are powerful and flexible neural networks that use the naturally sparse connectivity information of the data. GNNs represent this connectivity as sparse matrices, which have lower arithmetic intensity and thus higher communication costs compared to dense matrices, making GNNs harder to scale to high concurrencies than convolutional or fully-connected neural networks.

We present a family of parallel algorithms for training GNNs. These algorithms are based on their counterparts in dense and sparse linear algebra, but they had not been previously applied to GNN training. We show that they can asymptotically reduce communication compared to existing parallel GNN training methods. We implement a promising and practical version that is based on 2D sparse-dense matrix multiplication using torch.distributed. Our implementation parallelizes over GPU-equipped clusters. We train GNNs on up to a hundred GPUs on datasets that include a protein network with over a billion edges.
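
The block algebra behind such partitioned training can be shown serially; the hedged sketch below splits A and H into a 2-way block structure and verifies that summing the per-block sparse-dense products reproduces the full SpMM. In the actual parallel algorithm each block lives on its own GPU, and the H blocks and partial sums are what get communicated (broadcasts and reductions over the process grid); sizes and the grid are illustrative, not the paper's implementation.

```python
# Serial sketch of block-partitioned SpMM, the kernel behind parallel GNN
# training; the 2x2 grid and all sizes are illustrative assumptions.
import torch

n, d, p = 8, 4, 2                      # nodes, features, grid dimension
A = (torch.rand(n, n) < 0.3).float().to_sparse()
H = torch.randn(n, d)

# block views: A block (i, j) would live on process (i, j), H block j on j
rows = torch.chunk(torch.arange(n), p)
A_blk = [[A.to_dense()[r][:, c].to_sparse() for c in rows] for r in rows]
H_blk = list(torch.chunk(H, p))

# each "process" (i, j) multiplies its local A block by the H block it
# receives along its grid column; partials are then reduced along rows
out_blk = [sum(torch.sparse.mm(A_blk[i][j], H_blk[j]) for j in range(p))
           for i in range(p)]
out = torch.cat(out_blk)
assert torch.allclose(out, torch.sparse.mm(A, H))  # matches unpartitioned SpMM
```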





An Empirical Study of Incremental Learning in Neural Network with Noisy Training Set. (arXiv:2005.03266v1 [cs.LG])

The notion of incremental learning is to train an ANN algorithm in stages, as and when newer training data arrive. Incremental learning has become widespread in recent times with the advent of deep learning. Noise in the training data reduces the accuracy of the algorithm. In this paper, we make an empirical study of the effect of noise in the training phase. We numerically show that the accuracy of the algorithm depends more on the location of the error than on the percentage of error. Using a perceptron, a feed-forward neural network and a radial basis function neural network, we show that for the same percentage of error, the accuracy of the algorithm varies significantly with the location of the error. Furthermore, our results show that this dependence of accuracy on the location of error is independent of the algorithm. However, the slope of the degradation curve decreases with more sophisticated algorithms.
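
A hedged sketch of the experimental protocol the abstract describes: train incrementally in stages, inject the same fraction of label noise into an early stage or a late stage, and compare test accuracy. The dataset, the MLP, and the noise fraction are illustrative stand-ins for the paper's models and data.

```python
# Hedged sketch: same noise fraction, different location in the incremental
# schedule. Dataset, model, and noise level are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=6000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
stages = np.array_split(np.arange(len(y_tr)), 5)   # five incremental batches
rng = np.random.default_rng(0)

def run(noisy_stage, frac=0.4):
    clf = MLPClassifier(hidden_layer_sizes=(32,), random_state=0)
    for s, idx in enumerate(stages):
        yy = y_tr[idx].copy()
        if s == noisy_stage:                       # flip labels in one stage only
            flip = rng.random(len(yy)) < frac
            yy[flip] = 1 - yy[flip]
        clf.partial_fit(X_tr[idx], yy, classes=[0, 1])
    return clf.score(X_te, y_te)

print("noise in first stage:", run(0))
print("noise in last stage: ", run(4))
```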





Classification of pediatric pneumonia using chest X-rays by functional regression. (arXiv:2005.03243v1 [stat.AP])

An accurate and prompt diagnosis of pediatric pneumonia is imperative for successful treatment intervention. One approach to diagnosing pneumonia cases is using radiographic data. In this article, we propose a novel parsimonious scalar-on-image classification model adopting the ideas of functional data analysis. Our main idea is to treat images as functional measurements and exploit the underlying covariance structure to select basis functions; these bases are then used to approximate both the image profiles and the corresponding regression coefficient. We re-express the regression model as a standard generalized linear model in which the functional principal component scores are treated as covariates. We apply the method to (1) classify pneumonia patients against healthy subjects and viral against bacterial pneumonia, and (2) test the null hypothesis of no association between images and responses. Extensive simulation studies show excellent numerical performance in terms of classification, hypothesis testing, and computational efficiency.
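
A hedged sketch of the pipeline, with random arrays standing in for chest X-rays: vectorize each image as a functional observation, take principal component scores as the basis expansion, and fit a standard GLM (here logistic regression) on the scores. The number of components and all data are illustrative assumptions.

```python
# Hedged sketch: scalar-on-image classification via PC scores and a GLM.
# Random arrays stand in for X-rays; component count is an assumption.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

n, h, w = 300, 32, 32
rng = np.random.default_rng(0)
images = rng.normal(size=(n, h, w))          # stand-ins for chest X-rays
labels = rng.integers(0, 2, size=n)          # e.g. pneumonia vs. healthy

X = images.reshape(n, -1)                    # vectorize the image "functions"
model = make_pipeline(PCA(n_components=20),  # functional PC scores as covariates
                      LogisticRegression(max_iter=1000))
model.fit(X, labels)
scores = model.named_steps["pca"].transform(X)   # the PC scores themselves
print("train accuracy:", model.score(X, labels))
```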





Efficient Characterization of Dynamic Response Variation Using Multi-Fidelity Data Fusion through Composite Neural Network. (arXiv:2005.03213v1 [stat.ML])

Uncertainties in a structure are inevitable and generally lead to variation in dynamic response predictions. For a complex structure, brute-force Monte Carlo simulation for response variation analysis is infeasible, since a single run may already be computationally costly. Data-driven meta-modeling approaches have thus been explored to facilitate efficient emulation and statistical inference. The performance of a meta-model hinges on both the quality and the quantity of the training dataset. In actual practice, however, high-fidelity data acquired from high-dimensional finite element simulation or experiment are generally scarce, which poses a significant challenge to meta-model establishment. In this research, we take advantage of the multi-level response prediction opportunity in structural dynamic analysis, i.e., rapidly acquiring a large amount of low-fidelity data from reduced-order modeling and accurately acquiring a small amount of high-fidelity data from full-scale finite element analysis. Specifically, we formulate a composite neural network fusion approach that can fully utilize the multi-level, heterogeneous datasets obtained. It implicitly identifies the correlation between the low- and high-fidelity datasets, which yields improved accuracy compared with the state of the art. Comprehensive investigations using frequency response variation characterization as a case example are carried out to demonstrate the performance.
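
A hedged sketch of the fusion idea on a toy one-dimensional problem: a surrogate is first fitted to plentiful low-fidelity responses, and a second network then maps the input together with the low-fidelity prediction to the scarce high-fidelity response, so it only has to learn the correlation between fidelities. The response functions, sample sizes, and architectures are illustrative, not the paper's structural models.

```python
# Hedged sketch: composite multi-fidelity fusion on a toy 1D response.
# Response functions, sample sizes, and architectures are assumptions.
import torch
import torch.nn as nn

def f_lo(x): return torch.sin(3 * x)               # cheap reduced-order model
def f_hi(x): return torch.sin(3 * x) + 0.3 * x**2  # scarce full-order truth

net_lo = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
net_hi = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))
mse = nn.MSELoss()

# stage 1: fit the low-fidelity surrogate on many samples
x_lo = torch.rand(2000, 1) * 2 - 1
opt = torch.optim.Adam(net_lo.parameters(), lr=1e-3)
for _ in range(500):
    opt.zero_grad(); mse(net_lo(x_lo), f_lo(x_lo)).backward(); opt.step()

# stage 2: the high-fidelity net sees the input *and* the LF prediction,
# so it only has to learn the simpler cross-fidelity correlation
x_hi = torch.rand(30, 1) * 2 - 1
opt = torch.optim.Adam(net_hi.parameters(), lr=1e-3)
for _ in range(500):
    inp = torch.cat([x_hi, net_lo(x_hi).detach()], dim=1)
    opt.zero_grad(); mse(net_hi(inp), f_hi(x_hi)).backward(); opt.step()

x = torch.linspace(-1, 1, 5).unsqueeze(1)
pred = net_hi(torch.cat([x, net_lo(x)], dim=1))
print(torch.cat([pred, f_hi(x)], dim=1))           # prediction vs. truth
```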





Model Reduction and Neural Networks for Parametric PDEs. (arXiv:2005.03180v1 [math.NA])

We develop a general framework for data-driven approximation of input-output maps between infinite-dimensional spaces. The proposed approach is motivated by the recent successes of neural networks and deep learning, in combination with ideas from model reduction. This combination results in a neural network approximation which, in principle, is defined on infinite-dimensional spaces and, in practice, is robust to the dimension of finite-dimensional approximations of these spaces required for computation. For a class of input-output maps, and suitably chosen probability measures on the inputs, we prove convergence of the proposed approximation methodology. Numerically we demonstrate the effectiveness of the method on a class of parametric elliptic PDE problems, showing convergence and robustness of the approximation scheme with respect to the size of the discretization, and compare our method with existing algorithms from the literature.
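
A hedged sketch of the recipe on a toy parametric family standing in for PDE solves: compress solution snapshots with POD (an SVD of the snapshot matrix), then learn the map from the parameter to the reduced coefficients with a neural network and lift predictions back through the basis. Everything here (the "solutions", mode count, regressor) is an illustrative assumption, not the paper's framework.

```python
# Hedged sketch: POD compression plus a learned parameter-to-coefficients map.
# The toy "solutions" and all hyperparameters are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

grid = np.linspace(0, 1, 200)
params = np.random.default_rng(0).uniform(0.5, 3.0, size=(400, 1))
snapshots = np.sin(np.pi * params * grid)          # (400, 200) toy "PDE solves"

# POD basis from the SVD of the centered snapshot matrix; keep r modes
mean = snapshots.mean(0)
U, S, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
r = 10
basis = Vt[:r]                                     # (r, 200) reduced basis
coeffs = (snapshots - mean) @ basis.T              # (400, r) POD coefficients

# NN maps parameter -> reduced coefficients; reconstruction lifts back
nn_map = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
nn_map.fit(params, coeffs)

mu_test = np.array([[1.7]])
u_pred = nn_map.predict(mu_test) @ basis + mean
u_true = np.sin(np.pi * mu_test * grid)
print("relative error:", np.linalg.norm(u_pred - u_true) / np.linalg.norm(u_true))
```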





Plague in Italy and Europe during the 17th century

The next seminar in the 2017–18 History of Pre-Modern Medicine seminar series takes place on Tuesday 30 January. Speaker: Professor Guido Alfani (Bocconi University, Milan), 'Plague in Italy and Europe during the 17th century: epidemiology and impact'. Abstract: After many years of relative…





Translational neuroscience of speech and language disorders

9783030356873 (electronic bk.)