mac

Digest of researches and criticisms bearing on the revision of the British pharmacopoeia, 1898 : 1899 to 1902 inclusive / prepared for the Pharmacopoeia Committee of the General Council of Medical Education and Registration of the United Kingdom by W. Cha

London : printed for the Council by Spottiswoode, 1903.




mac

Diphtheria and croup : what are they? / by Sir John Rose Cormack.

Edinburgh : Oliver and Boyd, 1876.




mac

Diphtheria : its nature and treatment, varieties and local expressions / by Morell MacKenzie.

London : J. & A. Churchill, 1879.




mac

Directions for preparing aerated medicinal waters, by means of the improved glass machines made at Leith Glass-Works.

Edinburgh : printed for William Creech, 1787.




mac

Diseases and remedies : a concise survey of the most modern methods of medicine / written expressly for the drug trade by physicians and pharmacists.

London : Chemist and Druggist, 1898.




mac

Dispensatorium universale ad tempora nostra accommodatum, et ad formam lexici chimico-pharmaceutici redactum / Christ. Frider. Reuss.

Argentorati : Sumtibus Amandi Koenig, 1786-1787.




mac

A dissertation on the best mode of treating spasmodic cholera ; with a view of its history and progress, from its origin in India, in 1817 down to the present time ; together with an appendix, containing a review of Dr McCormac's pamphlet, &c / by

London : Longman, Rees, Orme, Brown, and Green, 1834.




mac

Documents inédits sur la grande peste de 1348 (Consultation de la Faculté de Paris, consultation d'un praticien de Montpellier, description de Guillaume de Machaut) / publiés avec une introduction et des notes par L.-A. Joseph Michon.

Londres : Paris, 1860.




mac

Dr Pereira's elements of materia medica and therapeutics : abridged and adapted for the use of medical and pharmaceutical practitioners and students, and comprising all the medicines of the British pharmacopoeia, with such others as are frequently ord

London : Longmans, Green, 1872.




mac

Dreaming / by A.W. MacFarlane.

Edinburgh : printed by Oliver and Boyd, 1891.




mac

The Edinburgh new dispensatory : Containing I. The elements of pharmaceutical chemistry. II. The materia medica; or, The natural, pharmaceutical and medical history, of the substances employed in medicine. III. The pharmaceutical preparations and composit

Edinburgh : Bell & Bradfute, 1813.




mac

Eine neue Methode der Asepsis : welche im Gegensatz zu den bisherigen Methoden eine absolute Keimfreiheit bei Operationen verbürgt und Wasserdampf- sowie Wasser-Sterilisatoren entbehrlich macht / von Otto Jhle.

Stuttgart : F. Enke, 1895.




mac

An elementary treatise on kinematics and dynamics / by James Gordon MacGregor.

London : Macmillan, 1902.




mac

Elements of materia medica : containing the chemistry and natural history of drugs, their effects, doses, and adulterations : with observations on all the new remedies recently introduced into practice, and on the preparations of the British Pharmacopoeia

London : J. Churchill, 1864.




mac

Elements of pharmacology / by Oswald Schmiedeberg ; translated under the author's supervision by Thomas Dixson.

Edinburgh : Young J. Pentland, 1887.




mac

Elements of pharmacy, materia medica, and therapeutics / by William Whitla.

London : H. Renshaw, 1898.




mac

Elements of pharmacy, materia medica, and therapeutics / by Sir William Whitla.

London : Bailliere, Tindall and Cox, 1910.




mac

William Macready in character as Werner in the play Werner by Lord Byron. Engraving by C.W. Sharpe after D. Maclise.

[London?]




mac

Administrative control of the purity of food in England / A. W. J. MacFadden.

England : Society of Medical Officers of Health in England, [192-?]




mac

Cocaine : pharmacology, effects, and treatment of abuse / editor, John Grabowski.

Rockville, Maryland : National Institute on Drug Abuse, 1984.




mac

O problema do abuso de drogas : prevenção através investigação, pesquisa e educação / Murillo de Macedo Pereira, Vera Kühn de Macedo Pereira.

São Paulo : Governo do Estado de São Paulo, Secretaria da Segurança Pública, 1975.




mac

Coca : cocaína / Murillo de Macedo Pereira.

São Paulo : Serviço Gráfico da Secretaria da Segurança Pública, 1976.




mac

A pesquisa sobre o problema do abuso de drogas / Murillo de Macedo Pereira.

São Paulo : Serviço Gráfico da Secretaria da Segurança Pública, 1976.




mac

Lachlan Macquarie land grant to John Laurie




mac

Wedding photographs of William Thomas Cadell and Anne Macansh set in Harriet Scott graphic




mac

On lp-Support Vector Machines and Multidimensional Kernels

In this paper, we extend the methodology developed for Support Vector Machines (SVM) using the $\ell_2$-norm ($\ell_2$-SVM) to the more general case of $\ell_p$-norms with $p>1$ ($\ell_p$-SVM). We derive second-order cone formulations for the resulting primal and dual problems. The concept of a kernel function, widely applied in $\ell_2$-SVM, is extended to the more general case of $\ell_p$-norms with $p>1$ by defining a new operator called the multidimensional kernel. This object gives rise to reformulations of the dual problems, in a transformed space of the original data, in which the dependence on the original data always appears as homogeneous polynomials. We adapt known solution algorithms to solve the resulting primal and dual problems efficiently, and computational experiments on real-world datasets show good behavior in terms of the accuracy of $\ell_p$-SVM with $p>1$.
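
As a concrete illustration of the $\ell_p$-regularized objective described in this abstract, the following minimal sketch trains a linear soft-margin classifier with an $\ell_p$-norm penalty ($p>1$) on synthetic data. The cvxpy-based primal formulation, the toy data, and the hyperparameters are assumptions of this example; the paper's second-order cone and multidimensional-kernel machinery is not reproduced here.

```python
# Minimal primal l_p-SVM sketch: hinge loss plus an l_p-norm penalty with p > 1.
# This only illustrates the l_p-regularized objective discussed in the abstract; it is
# not the paper's SOCP / multidimensional-kernel formulation. Data are synthetic.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                               # toy features (assumption)
w_true = np.array([1.0, -2.0, 0.5, 0.0, 1.5])
y = np.sign(X @ w_true + 0.1 * rng.normal(size=100))        # labels in {-1, +1}

p, C = 1.5, 1.0                                             # l_p exponent and soft-margin weight
w, b = cp.Variable(5), cp.Variable()
margins = cp.multiply(y, X @ w + b)
objective = cp.norm(w, p) + C * cp.sum(cp.pos(1 - margins))  # ||w||_p + C * hinge loss
cp.Problem(cp.Minimize(objective)).solve()

preds = np.sign(X @ w.value + b.value)
print("training accuracy:", np.mean(preds == y))
```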




mac

Conjugate Gradients for Kernel Machines

Regularized least-squares (kernel-ridge / Gaussian process) regression is a fundamental algorithm of statistics and machine learning. Because generic algorithms for the exact solution have cubic complexity in the number of datapoints, large datasets require resorting to approximations. In this work, the computation of the least-squares prediction is itself treated as a probabilistic inference problem. We propose a structured Gaussian regression model on the kernel function that uses projections of the kernel matrix to obtain a low-rank approximation of the kernel and the matrix. A central result is an enhanced way to use the method of conjugate gradients for the specific setting of least-squares regression as encountered in machine learning.
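
For reference, the sketch below shows the generic conjugate-gradients baseline this abstract builds on: kernel ridge regression in which the linear system $(K + \sigma^2 I)\alpha = y$ is solved iteratively rather than by a cubic-cost factorization. The kernel choice, data, and hyperparameters are illustrative assumptions; the paper's probabilistic low-rank treatment is not reproduced.

```python
# Kernel ridge regression solved with plain conjugate gradients (CG), i.e. the generic
# iterative baseline, not the paper's structured Gaussian regression model.
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    # Squared-exponential kernel matrix between rows of A and rows of B.
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-0.5 * d2 / lengthscale**2)

def conjugate_gradients(A, b, tol=1e-8, max_iter=500):
    # Standard CG for a symmetric positive-definite system A x = b.
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

sigma2 = 0.01                                                 # noise variance (assumption)
K = rbf_kernel(X, X)
weights = conjugate_gradients(K + sigma2 * np.eye(200), y)    # representer weights
X_test = np.linspace(-3, 3, 5)[:, None]
print(rbf_kernel(X_test, X) @ weights)                        # predictive mean at test points
```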




mac

GADMM: Fast and Communication Efficient Framework for Distributed Machine Learning

When data are distributed across multiple servers, lowering the communication cost between the servers (or workers) while solving the distributed learning problem is important and is the focus of this paper. In particular, we propose a fast and communication-efficient decentralized framework to solve the distributed machine learning (DML) problem. The proposed algorithm, Group Alternating Direction Method of Multipliers (GADMM), is based on the Alternating Direction Method of Multipliers (ADMM) framework. The key novelty in GADMM is that it solves the problem in a decentralized topology where at most half of the workers are competing for the limited communication resources at any given time. Moreover, each worker exchanges the locally trained model only with two neighboring workers, thereby training a global model with less communication overhead in each exchange. We prove that GADMM converges to the optimal solution for convex loss functions, and numerically show that it converges faster and is more communication-efficient than state-of-the-art communication-efficient algorithms such as the Lazily Aggregated Gradient (LAG) and dual averaging, in linear and logistic regression tasks on synthetic and real datasets. Furthermore, we propose Dynamic GADMM (D-GADMM), a variant of GADMM, and prove its convergence under a time-varying network topology of the workers.
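
To make the ADMM building block concrete, here is a plain consensus-ADMM sketch for distributed least squares in which every worker holds a data shard and all workers agree on a shared model. This is only the classical starting point that GADMM modifies; GADMM's chain topology with head and tail worker groups and its reduced communication pattern are not reproduced. Data and parameters are synthetic assumptions.

```python
# Consensus ADMM for distributed least squares: each worker k updates a local model
# theta_k, all workers agree on a consensus model z. This is the generic ADMM baseline,
# not GADMM itself.
import numpy as np

rng = np.random.default_rng(0)
theta_true = np.array([2.0, -1.0, 0.5])
shards = []
for _ in range(4):                                   # 4 workers, each with a local shard
    Xk = rng.normal(size=(50, 3))
    yk = Xk @ theta_true + 0.1 * rng.normal(size=50)
    shards.append((Xk, yk))

rho = 1.0                                            # ADMM penalty parameter (assumption)
z = np.zeros(3)                                      # consensus (global) model
thetas = [np.zeros(3) for _ in shards]               # local models
us = [np.zeros(3) for _ in shards]                   # scaled dual variables

for _ in range(100):
    # Local step: theta_k = argmin 0.5||X_k t - y_k||^2 + (rho/2)||t - z + u_k||^2
    for k, (Xk, yk) in enumerate(shards):
        A = Xk.T @ Xk + rho * np.eye(3)
        thetas[k] = np.linalg.solve(A, Xk.T @ yk + rho * (z - us[k]))
    # Consensus step and dual update (the communication rounds).
    z = np.mean([t + u for t, u in zip(thetas, us)], axis=0)
    us = [u + t - z for t, u in zip(thetas, us)]

print("consensus estimate:", np.round(z, 3))
```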




mac

Cyclic Boosting -- an explainable supervised machine learning algorithm. (arXiv:2002.03425v2 [cs.LG] UPDATED)

Supervised machine learning algorithms have seen spectacular advances and surpassed human-level performance in a wide range of specific applications. However, using complex ensemble or deep learning algorithms typically results in black-box models, where the path leading to individual predictions cannot be followed in detail. In order to address this issue, we propose the novel "Cyclic Boosting" machine learning algorithm, which makes it possible to perform accurate regression and classification tasks efficiently while at the same time allowing a detailed understanding of how each individual prediction was made.




mac

Relevance Vector Machine with Weakly Informative Hyperprior and Extended Predictive Information Criterion. (arXiv:2005.03419v1 [stat.ML])

In the variational relevance vector machine, the gamma distribution is the representative choice of hyperprior over the noise precision of the automatic relevance determination prior. Instead of the gamma hyperprior, we propose to use the inverse gamma hyperprior with a shape parameter close to zero and a scale parameter not necessarily close to zero. This hyperprior is associated with the concept of a weakly informative prior. The effect of this hyperprior is investigated through regression on non-homogeneous data. Because it is difficult to capture the structure of such data with a single kernel function, we apply the multiple kernel method, in which multiple kernel functions with different widths are arranged for the input data. We confirm that the degrees of freedom of the model are controlled by adjusting the scale parameter while keeping the shape parameter close to zero. A candidate for selecting the scale parameter is the predictive information criterion. However, the model estimated using this criterion seems to over-fit. This is because the multiple kernel method puts the model in a situation where its dimension is larger than the data size. To select an appropriate scale parameter even in such a situation, we also propose an extended predictive information criterion. It is confirmed that a multiple kernel relevance vector regression model with good predictive accuracy can be obtained by selecting the scale parameter that minimizes the extended predictive information criterion.
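
For orientation, the display below writes out the inverse gamma hyperprior referred to in this abstract in the usual shape-scale parameterization; the paper's exact notation may differ, so this is only a hedged sketch.

```latex
% Inverse gamma hyperprior in shape-scale form (assumed parameterization).
\[
  p(\alpha \mid a, b) \;=\; \frac{b^{a}}{\Gamma(a)}\,\alpha^{-(a+1)}\,e^{-b/\alpha},
  \qquad \alpha > 0 .
\]
% Shape $a$, scale $b$. The abstract's proposal corresponds to taking the shape $a$
% close to zero while keeping the scale $b$ away from zero, so that $b$ tunes the
% effective degrees of freedom of the model.
```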




mac

Training and Classification using a Restricted Boltzmann Machine on the D-Wave 2000Q. (arXiv:2005.03247v1 [cs.LG])

The Restricted Boltzmann Machine (RBM) is an energy-based, undirected graphical model. It is commonly used for unsupervised and supervised machine learning. Typically, an RBM is trained using contrastive divergence (CD). However, training with CD is slow and does not estimate the exact gradient of the log-likelihood cost function. In this work, the model expectation in the gradient of RBM learning has been calculated using a quantum annealer (D-Wave 2000Q), which is much faster than the Markov chain Monte Carlo (MCMC) sampling used in CD. Training and classification results are compared with CD. The classification accuracy results indicate similar performance of both methods. Image reconstruction as well as log-likelihood calculations are used to compare the performance of the quantum and classical algorithms for RBM training. It is shown that the samples obtained from the quantum annealer can be used to train an RBM on a 64-bit `bars and stripes' data set with classification performance similar to an RBM trained with CD. Though training based on CD showed improved learning performance, training using a quantum annealer eliminates the computationally expensive MCMC steps of CD.
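
The sketch below shows the classical CD-1 update for a binary RBM, i.e. the contrastive divergence baseline this abstract compares against; the quantum-annealer estimation of the model expectation is not reproduced. Layer sizes, data, and learning rate are illustrative assumptions.

```python
# CD-1 training of a binary RBM: one step of Gibbs sampling provides the "model" term
# of the approximate log-likelihood gradient. Data and hyperparameters are toy values.
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden, lr = 16, 8, 0.05
W = 0.01 * rng.normal(size=(n_visible, n_hidden))   # weights
b_v = np.zeros(n_visible)                           # visible biases
b_h = np.zeros(n_hidden)                            # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy binary training data (random patterns standing in for a real data set).
data = (rng.random((500, n_visible)) < 0.5).astype(float)

for epoch in range(20):
    for v0 in data:
        # Positive phase: hidden activations given the data.
        ph0 = sigmoid(v0 @ W + b_h)
        h0 = (rng.random(n_hidden) < ph0).astype(float)
        # Negative phase: one Gibbs step (the "1" in CD-1).
        pv1 = sigmoid(h0 @ W.T + b_v)
        v1 = (rng.random(n_visible) < pv1).astype(float)
        ph1 = sigmoid(v1 @ W + b_h)
        # Approximate gradient: data term minus model term.
        W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
        b_v += lr * (v0 - v1)
        b_h += lr * (ph0 - ph1)

print("learned weight matrix shape:", W.shape)
```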




mac

The behavioral ecology of the Tibetan macaque

9783030279202 (electronic bk.)




mac

Medical pharmacology at a glance

Neal, M. J., author.
9781119548096 (epub)




mac

Machine learning in medicine : a complete overview

Cleophas, Ton J. M., author
9783030339708 (electronic bk.)




mac

Machine learning in aquaculture : hunger classification of Lates calcarifer

Mohd Razman, Mohd Azraai, author
9789811522376 (electronic bk.)




mac

Encyclopedia of molecular pharmacology

9783030215736 (electronic bk.)




mac

Development of biopharmaceutical drug-device products

9783030314156 (electronic bk.)




mac

100 cases in clinical pharmacology, therapeutics and prescribing

Layne, Kerry, author.
9780429624537 (electronic book)




mac

Exact lower bounds for the agnostic probably-approximately-correct (PAC) machine learning model

Aryeh Kontorovich, Iosif Pinelis.

Source: The Annals of Statistics, Volume 47, Number 5, 2822--2854.

Abstract:
We provide an exact nonasymptotic lower bound on the minimax expected excess risk (EER) in the agnostic probably-approximately-correct (PAC) machine learning classification model and identify minimax learning algorithms as certain maximally symmetric and minimally randomized “voting” procedures. Based on this result, an exact asymptotic lower bound on the minimax EER is provided. This bound is of the simple form $c_{\infty}/\sqrt{\nu}$ as $\nu\to\infty$, where $c_{\infty}=0.16997\dots$ is a universal constant, $\nu=m/d$, $m$ is the size of the training sample and $d$ is the Vapnik–Chervonenkis dimension of the hypothesis class. It is shown that the differences between these asymptotic and nonasymptotic bounds, as well as the differences between these two bounds and the maximum EER of any learning algorithms that minimize the empirical risk, are asymptotically negligible, and all these differences are due to ties in the mentioned “voting” procedures. A few easy-to-compute nonasymptotic lower bounds on the minimax EER are also obtained, which are shown to be close to the exact asymptotic lower bound $c_{\infty}/\sqrt{\nu}$ even for rather small values of the ratio $\nu=m/d$. As an application of these results, we substantially improve existing lower bounds on the tail probability of the excess risk. Among the tools used are Bayes estimation and apparently new identities and inequalities for binomial distributions.
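
A small worked instance of the asymptotic bound quoted above, with the sample size and VC dimension chosen purely for illustration:

```latex
\[
  \nu = \frac{m}{d}, \qquad
  \text{minimax EER} \;\gtrsim\; \frac{c_{\infty}}{\sqrt{\nu}},
  \qquad c_{\infty} = 0.16997\dots
\]
% For example, a training sample of size $m = 1000$ and a hypothesis class of VC
% dimension $d = 10$ give $\nu = 100$, so the asymptotic lower bound on the minimax
% expected excess risk is about $0.16997/\sqrt{100} \approx 0.017$.
```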




mac

Phase transition in the spiked random tensor with Rademacher prior

Wei-Kuo Chen.

Source: The Annals of Statistics, Volume 47, Number 5, 2734--2756.

Abstract:
We consider the problem of detecting a deformation from a symmetric Gaussian random $p$-tensor $(p\geq 3)$ with a rank-one spike sampled from the Rademacher prior. Recently, in Lesieur et al. (Barbier, Krzakala, Macris, Miolane and Zdeborová (2017)), it was proved that there exists a critical threshold $\beta_{p}$ so that when the signal-to-noise ratio exceeds $\beta_{p}$, one can distinguish the spiked and unspiked tensors and weakly recover the prior via the minimal mean-square-error method. On the other side, Perry, Wein and Bandeira (Perry, Wein and Bandeira (2017)) proved that there exists a $\beta_{p}'<\beta_{p}$ such that any statistical hypothesis test cannot distinguish these two tensors, in the sense that their total variation distance asymptotically vanishes, when the signal-to-noise ratio is less than $\beta_{p}'$. In this work, we show that $\beta_{p}$ is indeed the critical threshold that strictly separates the distinguishability and indistinguishability between the two tensors under the total variation distance. Our approach is based on a subtle analysis of the high temperature behavior of the pure $p$-spin model with Ising spin, arising initially from the field of spin glasses. In particular, we identify the signal-to-noise criticality $\beta_{p}$ as the critical temperature, distinguishing the high and low temperature behavior, of the Ising pure $p$-spin mean-field spin glass model.
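
For readers unfamiliar with the setting, the display below writes out the spiked tensor observation model in one common normalization; the paper's exact scaling and notation may differ, so this is only a hedged sketch of the detection problem.

```latex
% Spiked tensor model in a common (assumed) normalization.
\[
  \mathbf{Y} \;=\; \beta\,\mathbf{x}^{\otimes p} + \mathbf{W},
  \qquad x_i \overset{\mathrm{iid}}{\sim} \operatorname{Unif}\{-N^{-1/2},\,+N^{-1/2}\},
\]
% Here $\mathbf{W}$ is a symmetric Gaussian noise $p$-tensor, $\beta \geq 0$ plays the
% role of the signal-to-noise ratio, and the Rademacher prior refers to the i.i.d. signs
% of the spike $\mathbf{x}$. The result summarized above says that distinguishing the
% spiked ($\beta > 0$) from the unspiked ($\beta = 0$) tensor in total variation is
% possible exactly when $\beta$ exceeds the critical threshold $\beta_p$.
```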




mac

Gaussianization Machines for Non-Gaussian Function Estimation Models

T. Tony Cai.

Source: Statistical Science, Volume 34, Number 4, 635--656.

Abstract:
A wide range of nonparametric function estimation models have been studied individually in the literature. Among them the homoscedastic nonparametric Gaussian regression is arguably the best known and understood. Inspired by the asymptotic equivalence theory, Brown, Cai and Zhou (Ann. Statist. 36 (2008) 2055–2084; Ann. Statist. 38 (2010) 2005–2046) and Brown et al. (Probab. Theory Related Fields 146 (2010) 401–433) developed a unified approach to turn a collection of non-Gaussian function estimation models into a standard Gaussian regression, so that any good Gaussian nonparametric regression method can then be used. These Gaussianization Machines have two key components: binning and transformation. When combined with BlockJS, a wavelet thresholding procedure for Gaussian regression, the procedures are computationally efficient with strong theoretical guarantees. Technical analysis given in Brown, Cai and Zhou (Ann. Statist. 36 (2008) 2055–2084; Ann. Statist. 38 (2010) 2005–2046) and Brown et al. (Probab. Theory Related Fields 146 (2010) 401–433) shows that the estimators attain the optimal rate of convergence adaptively over a large set of Besov spaces and across a collection of non-Gaussian function estimation models, including robust nonparametric regression, density estimation, and nonparametric regression in exponential families. The estimators are also spatially adaptive. The Gaussianization Machines significantly extend the flexibility and scope of the theories and methodologies originally developed for the conventional nonparametric Gaussian regression. This article aims to provide a concise account of the Gaussianization Machines developed in Brown, Cai and Zhou (Ann. Statist. 36 (2008) 2055–2084; Ann. Statist. 38 (2010) 2005–2046) and Brown et al. (Probab. Theory Related Fields 146 (2010) 401–433).
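
To make the binning-and-transformation idea concrete, here is a small sketch for count-type (Poisson-like) observations: pool the data into bins and apply a square-root transform so the binned values are approximately Gaussian with nearly constant variance. The exact transforms in the cited papers are model-specific; the +1/4 offset and the toy intensity function used here are assumptions of this illustration.

```python
# Binning + square-root transform of Poisson counts, a simple stand-in for the
# "binning and transformation" steps described above (not the papers' exact procedure).
import numpy as np

rng = np.random.default_rng(0)
n, n_bins = 4096, 256
x = np.linspace(0, 1, n)
rates = 5.0 + 4.0 * np.sin(2 * np.pi * x)           # smooth intensity function (toy)
counts = rng.poisson(rates)                         # non-Gaussian (Poisson) observations

# Binning: sum counts within each bin.
binned = counts.reshape(n_bins, n // n_bins).sum(axis=1)

# Transformation: square root with a small offset, approximately Gaussian output
# with (nearly) constant variance.
gaussianized = np.sqrt(binned + 0.25)

print("bin counts (first 5):", binned[:5])
print("transformed (first 5):", np.round(gaussianized[:5], 3))
# Any good Gaussian nonparametric regression method can now be applied to `gaussianized`.
```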




mac

Macy's Insane Cyber Monday Sale Ends in a Few Hours - Here Are the Best Deals

You've got exactly four hours left to take advantage of these heavily discounted prices.




mac

Neural Mechanisms of Visual Working Memory in Prefrontal Cortex of the Macaque

Earl K. Miller
Aug 15, 1996; 16:5154-5167
Articles




mac

Effects of Attention on Orientation-Tuning Functions of Single Neurons in Macaque Cortical Area V4

Carrie J. McAdams
Jan 1, 1999; 19:431-441
Articles




mac

Linearity and Normalization in Simple Cells of the Macaque Primary Visual Cortex

Matteo Carandini
Nov 1, 1997; 17:8621-8644
Articles




mac

Requisitos de divulgación para el Tercer Pilar - marco actualizado

Spanish translation of "Pillar 3 disclosure requirements - updated framework", December 2018




mac

it says mac-10, cool - :lol:




mac

Neural Correlates of Strategy Switching in the Macaque Orbital Prefrontal Cortex

We can adapt flexibly to environmental changes and search for the rule most appropriate to a context. The orbital prefrontal cortex (PFo) has been associated with decision making, rule generation and maintenance, and, more generally, has been considered important for behavioral flexibility. To better understand the neural mechanisms underlying this flexible behavior, we studied the ability to generate a switching signal in monkey PFo when a strategy is changed. In the strategy task, we used a visual cue to instruct two male rhesus monkeys either to repeat their most recent choice (i.e., stay strategy) or to change it (i.e., shift strategy). To identify the strategy-switching-related signal, we compared nonswitch and switch trials, which cued the same or a different strategy from the previous trial, respectively. We found that the switching-related signal emerged during the cue presentation and was combined with the strategy signal in a subpopulation of cells. Moreover, the error analysis showed that the activity of the switch-related cells reflected whether or not the monkeys erroneously switched strategy, rather than what was required for that trial. The function of the switching signal could be to prompt the use of different strategies when older strategies are no longer appropriate, conferring the ability to adapt flexibly to environmental changes. In our task, the switching signal might contribute to the implementation of the cued strategy, overcoming potential interference effects from the strategy previously cued. Our results support the idea that ascribes to PFo an important role in behavioral flexibility.

SIGNIFICANCE STATEMENT We can flexibly adapt our behavior to a changing environment. One of the prefrontal areas traditionally associated with the ability to adapt to new contingencies is the orbital prefrontal cortex (PFo). We analyzed switching-related activity using a strategy task in which two rhesus monkeys were instructed by a visual cue either to repeat or to change their most recent choice, using a stay or a shift strategy, respectively. We found that PFo neurons were modulated by the strategy-switching signal, pointing to the importance of PFo for behavioral flexibility through the control it exerts over the switching of strategies.




mac

Neural Evidence for the Prediction of Animacy Features during Language Comprehension: Evidence from MEG and EEG Representational Similarity Analysis

It has been proposed that people can generate probabilistic predictions at multiple levels of representation during language comprehension. We used magnetoencephalography (MEG) and electroencephalography (EEG), in combination with representational similarity analysis, to seek neural evidence for the prediction of animacy features. In two studies, MEG and EEG activity was measured as human participants (both sexes) read three-sentence scenarios. Verbs in the final sentences constrained for either animate or inanimate semantic features of upcoming nouns, and the broader discourse context constrained for either a specific noun or for multiple nouns belonging to the same animacy category. We quantified the similarity between spatial patterns of brain activity following the verbs until just before the presentation of the nouns. The MEG and EEG datasets revealed converging evidence that the similarity between spatial patterns of neural activity following animate-constraining verbs was greater than following inanimate-constraining verbs. This effect could not be explained by lexical-semantic processing of the verbs themselves. We therefore suggest that it reflected the inherent difference in the semantic similarity structure of the predicted animate and inanimate nouns. Moreover, the effect was present regardless of whether a specific word could be predicted, providing strong evidence for the prediction of coarse-grained semantic features that goes beyond the prediction of individual words.

SIGNIFICANCE STATEMENT Language inputs unfold very quickly during real-time communication. By predicting ahead, we can give our brains a "head start," so that language comprehension is faster and more efficient. Although most contexts do not constrain strongly for a specific word, they do allow us to predict some upcoming information. For example, following the context of "they cautioned the...," we can predict that the next word will be animate rather than inanimate (we can caution a person, but not an object). Here, we used EEG and MEG techniques to show that the brain is able to use these contextual constraints to predict the animacy of upcoming words during sentence comprehension, and that these predictions are associated with specific spatial patterns of neural activity.




mac

Tony Rudd - Machadaynu (remix by The Freelance Hairdresser) - *Official Video* [3m15s]


Original audio available as an mp3 from: http://soundcloud.com/the-freelance-hairdresser - also visit http://www.soundhog.co.uk for the rest [...]