Fault Tree Analysis: Identifying Maximum Probability Minimal Cut Sets with MaxSAT. (arXiv:2005.03003v1 [cs.AI]) By arxiv.org. In this paper, we present a novel MaxSAT-based technique to compute Maximum Probability Minimal Cut Sets (MPMCSs) in fault trees. We model the MPMCS problem as a Weighted Partial MaxSAT problem and solve it using a parallel SAT-solving architecture. The results obtained with our open source tool indicate that the approach is effective and efficient.
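A minimal sketch of the encoding idea for a toy fault tree TOP = (A AND B) OR C, assuming the third-party python-sat package; the tree, probabilities, and integer weight scaling are illustrative choices, not the paper's tool or encoding.

```python
# Toy fault tree TOP = (A AND B) OR C as Weighted Partial MaxSAT
# (pip install python-sat; variables: A=1, B=2, C=3).
import math
from pysat.formula import WCNF
from pysat.examples.rc2 import RC2

probs = {1: 0.1, 2: 0.2, 3: 0.05}  # basic-event probabilities

wcnf = WCNF()
# Hard clauses: the selected events must trigger the top event;
# (A or C) and (B or C) is the CNF of (A and B) or C.
wcnf.append([1, 3])
wcnf.append([2, 3])
# Soft clauses: prefer NOT selecting each event; violating one costs
# -log(p) (scaled to an integer), so minimum cost = maximum probability.
for var, p in probs.items():
    wcnf.append([-var], weight=round(-1000 * math.log(p)))

solver = RC2(wcnf)
model = solver.compute()
cut_set = sorted(v for v in model if v > 0)
print("MPMCS:", cut_set)  # -> [3], i.e. {C}: 0.05 beats {A, B}: 0.1 * 0.2
solver.delete()
```

Minimizing the total weight of violated soft clauses maximizes the product of the selected events' probabilities, and since every selection carries a positive cost, the optimum is naturally a minimal cut set.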
Computing-in-Memory for Performance and Energy Efficient Homomorphic Encryption. (arXiv:2005.03002v1 [cs.CR]) By arxiv.org. Homomorphic encryption (HE) allows direct computations on encrypted data. Despite numerous research efforts, the practicality of HE schemes remains to be demonstrated. In this regard, the enormous size of ciphertexts involved in HE computations degrades computational efficiency. Near-memory Processing (NMP) and Computing-in-memory (CiM), paradigms where computation is done within the memory boundaries, represent architectural solutions for reducing latency and energy associated with data transfers in data-intensive applications such as HE. This paper introduces CiM-HE, a Computing-in-memory (CiM) architecture that can support operations for the B/FV scheme, a somewhat homomorphic encryption scheme for general computation. CiM-HE hardware consists of customized peripherals such as sense amplifiers, adders, bit-shifters, and sequencing circuits. The peripherals are based on CMOS technology and could support computations with memory cells of different technologies. Circuit-level simulations are used to evaluate our CiM-HE framework assuming a 6T-SRAM memory. We compare our CiM-HE implementation against (i) two optimized CPU HE implementations, and (ii) an FPGA-based HE accelerator implementation. When compared to a CPU solution, CiM-HE obtains speedups between 4.6x and 9.1x, and energy savings between 266.4x and 532.8x, for homomorphic multiplications (the most expensive HE operation). Also, a set of four end-to-end tasks, i.e., mean, variance, linear regression, and inference, are up to 1.1x, 7.7x, 7.1x, and 7.5x faster (and 301.1x, 404.6x, 532.3x, and 532.8x more energy efficient). Compared to CPU-based HE in a previous work, CiM-HE obtains a 14.3x speed-up and >2600x energy savings. Finally, our design offers a 2.2x speed-up with 88.1x energy savings compared to a state-of-the-art FPGA-based accelerator.
Cluster Sound releases RX7 free drum pack for Ableton Live. By rekkerd.org, Fri, 01 May 2020. Cluster Sound has announced the release of RX7, a free Ableton Live drum pack based on the vintage RX7 drum machine. The 12-bit drum machine from Yamaha comes equipped with 100 PCM samples. It was used by artists such as Future Sound of London, Massive Attack, Bjork and Nine Inch Nails. Released in 1988 by […]
Save up to 50% on Inphonik's RX950 Classic AD/DA Converter and RYM2612 Iconic FM Synthesizer. By rekkerd.org, Fri, 08 May 2020. Plugin Boutique has launched a sale on Inphonik, offering discounts of up to 50% on its plugins for a limited time only. The RX950 Classic AD/DA Converter effect plugin is designed to perfectly mimic the whole AD/DA conversion process of the Akai S950 in order to give your music this vintage, warm and crunchy […]
arXiv.org python developer (Ithaca NY). By jobs.metafilter.com, Wed, 23 Oct 2019. Cornell University seeks a Backend Python Developer to join a distributed team building arXiv's next generation ("NG") system and maintaining the service's daily operations. arXiv is the premier open access platform serving scientists in physics, mathematics, computer science, and other disciplines. For over 25 years, arXiv has enabled scientists to rapidly disseminate their papers within their scientific communities. Around the world, arXiv is recognized as an essential resource for the scientists that it serves. As a member of a broader team that is passionate about arXiv's mission and legacy, the incumbent will also be part of a supportive work culture that places high value on inclusivity, teamwork, collegiality and work-life balance. As a Backend Python Developer, you will be responsible for designing, coding, testing, documenting, and debugging highly complex applications and APIs (mostly implemented in Python/Flask), including but not limited to those that control the infrastructure and configurations that form the backbone of the arXiv platform. You will collaborate closely with team members on the design and implementation of applications, configurations, and workflows to test, deploy, monitor, and scale the arXiv system, and participate in code review, planning, and retrospectives. A strong orientation towards site security and data protection is a big plus.
Man who allegedly stole Subaru WRX with owner's 5yo son inside faces court. By www.abc.net.au, Mon, 14 Oct 2019. Police say the 32-year-old man stole the Subaru WRX during a test drive while the owner's five-year-old son was still inside the vehicle. Nicholas Trent Hallett, 32, has been remanded in custody to appear at a later date.
LawArXiv Papers | Analysis of the NHSX Contact Tracing App 'Isle of Wight' Data Protection Impact Assessment. By osf.io, 2020-05-09. This note examines the published data protection impact assessment (DPIA) released by NHSX in relation to their contact tracing/proximity tracing app. It highlights a range of significant issues which leave the app falling short of data protection legislation. It does so in order that these issues can be remedied before the next DPIA is published.
Angry Mob Music Group Signs Exclusive Worldwide Co-Publishing Deal With LA-Based Songwriting/Production Team Schmarx & Savvy. By feedproxy.google.com. The deal covers all new works from the versatile power duo, whose successes include the #1 iTunes electronic hit "Touch" by 3LAU featuring Carly Paige.
Sol RX Beach Volleyball Tournament Held. By bernews.com, Sun, 25 Aug 2019. The Bermuda Volleyball Association hosted their Sol RX Beach Tournament at the Horseshoe Bay Beach. Allison Settle and Allison Lacoursiere won the Women's 2's Division, while 3 Nickles & a Dime won the Women's 4 Division and 5 Toed Beach Slothes won the Men's 4's Division.
Mature myelin maintenance requires Qki to coactivate PPARβ-RXRα–mediated lipid metabolism. By www.jci.org. Lipid-rich myelin forms electrically insulating, axon-wrapping multilayers that are essential for neural function, and mature myelin is traditionally considered metabolically inert. Surprisingly, we discovered that mature myelin lipids undergo rapid turnover, and quaking (Qki) is a major regulator of myelin lipid homeostasis. Oligodendrocyte-specific Qki depletion, without affecting oligodendrocyte survival, resulted in rapid demyelination within 1 week and gradual neurological deficits in adult mice. Myelin lipids, especially the monounsaturated fatty acids and very-long-chain fatty acids, were dramatically reduced by Qki depletion, whereas the major myelin proteins remained intact, and the demyelinating phenotypes of Qki-depleted mice were alleviated by a high-fat diet. Mechanistically, Qki serves as a coactivator of the PPARβ-RXRα complex, which controls the transcription of lipid-metabolism genes, particularly those involved in fatty acid desaturation and elongation. Treatment of Qki-depleted mice with PPARβ/RXR agonists significantly alleviated neurological disability and extended survival durations. Furthermore, a subset of lesions from patients with primary progressive multiple sclerosis were characterized by preferential reductions in myelin lipid contents, activities of various lipid metabolism pathways, and expression levels of QKI-5 in human oligodendrocytes. Together, our results demonstrate that continuous lipid synthesis is indispensable for mature myelin maintenance and highlight an underappreciated role of lipid metabolism in demyelinating diseases.
HyperX Teams up with Ducky and Launches HyperX x Ducky One 2 Mini Mechanical Gaming Keyboard. By feedproxy.google.com, Wed, 06 May 2020. The HyperX x Ducky One 2 Mini mechanical gaming keyboard features HyperX red linear mechanical switches built for performance, longevity and an 80 million lifetime click rating per switch.
Is the Supply of Charitable Donations Fixed? Evidence from Deadly Tornadoes -- by Tatyana Deryugina, Benjamin M. Marx. By www.nber.org. Do new societal needs increase charitable giving or simply reallocate a fixed supply of donations? We study this question using IRS datasets and the natural experiment of deadly tornadoes. Among ZIP Codes located more than 20 miles away from a tornado's path, donations by households increase by over $1 million per tornado fatality. We find no negative effects on charities located in these ZIP Codes, with a bootstrapped confidence interval that rejects substitution rates above 16 percent. The results imply that giving to one cause need not come at the expense of another.
Islam and the State: Religious Education in the Age of Mass Schooling -- by Samuel Bazzi, Benjamin Marx, Masyhur Hilmy. By www.nber.org. Public schooling systems are an essential feature of modern states. These systems often developed at the expense of religious schools, which undertook the bulk of education historically and still cater to large student populations worldwide. This paper examines how Indonesia's long-standing Islamic school system responded to the construction of 61,000 public elementary schools in the mid-1970s. The policy was designed in part to foster nation building and to curb religious influence in society. We are the first to study the market response to these ideological objectives. Using novel data on Islamic school construction and curriculum, we identify both short-run effects on exposed cohorts as well as dynamic, long-run effects on education markets. While primary enrollment shifted towards state schools, religious education increased on net as Islamic secondary schools absorbed the increased demand for continued education. The Islamic sector not only entered new markets to compete with the state but also increased religious curriculum at newly created schools. Our results suggest that the Islamic sector response increased religiosity at the expense of a secular national identity. Overall, this ideological competition in education undermined the nation-building impacts of mass schooling.
Social limits needed through summer, Birx says, as some states ease coronavirus restrictions. By www.latimes.com, Sun, 26 Apr 2020. Social distancing should continue through the summer, White House advisor Deborah Birx said Sunday, and other experts warned against states' moves to lift restrictions.
We tuned in for the White House briefings but stayed for Dr. Birx's glorious scarves. By www.latimes.com, Thu, 23 Apr 2020. Dr. Deborah Birx's scarves became this moment's fashion inspiration. Here's how to get her look.
Column: Is it time for Drs. Fauci and Birx to quit on principle? By www.latimes.com, Wed, 6 May 2020. Fauci and Birx could storm out and publicly speak their minds, but then they'd lose any influence they have on President Trump.
strataconf: Today's the last day to get best price discounts on #StrataRx Conf. Register by 11:59pmET http://t.co/cy4SudVIHZ #healthdata. By twitter.com, Thu, 06 Jun 2013.
strataconf: Moving to the open healthcare graph http://t.co/YYTUDN3Vzn Achieving the triple aim in healthcare: better, cheaper, safer #stratarx. By twitter.com, Sat, 08 Jun 2013.
strataconf: A detailed agenda for #StrataRx 2013 is now posted: workshops, sessions, speakers + more http://t.co/RtaRpQroaN #healthdata. By twitter.com, Mon, 10 Jun 2013.
strataconf: Ways to put the patient first when collecting health data http://t.co/iACckzJjAW @praxagora #stratarx #healthit. By twitter.com, Tue, 11 Jun 2013.
velocityconf: New #velocityconf CA program preview is up: http://t.co/rKjf91RXdD @ariyahidayat on End-to-End JS Quality Analysis. By feedproxy.google.com, Thu, 23 May 2013.
Diatribe anatomico-physiologica de structura atque vita venarum a medicorum ordine Heidelbergensi praemio proposito ornata / autore Henrico Marx. [An anatomical-physiological treatise on the structure and life of the veins, awarded the prize offered by the Heidelberg faculty of medicine.] By feedproxy.google.com. Carlsruhae : D.R. Marx, 1819.
Die Bedeutung des Herzschlages für die Athmung : eine neue Theorie der Respiration dargestellt für Physiologen und Ärzte / von Ernst Fleischl v. Marxow. [The significance of the heartbeat for respiration: a new theory of respiration, presented for physiologists and physicians.] By feedproxy.google.com. Stuttgart : F. Enke, 1887.
Die Lehre von den Giften, in medizinischer, gerichtlicher und polizeylicher Hinsicht / von K.F.H. Marx. [The doctrine of poisons, from medical, forensic and police perspectives.] By feedproxy.google.com. Gottingen : Dieterich, 1827-
Rx: 3 x/week LAAM : alternative to methadone / editors, Jack D. Blaine, Pierre F. Renault. By search.wellcomelibrary.org. Rockville, Maryland : The National Institute on Drug Abuse, 1976.
Arctic Amplification of Anthropogenic Forcing: A Vector Autoregressive Analysis. (arXiv:2005.02535v1 [econ.EM] CROSS LISTED) By arxiv.org. Arctic sea ice extent (SIE) in September 2019 ranked second-to-lowest in history and is trending downward. The understanding of how internal variability amplifies the effects of external $\text{CO}_2$ forcing is still limited. We propose the VARCTIC, which is a Vector Autoregression (VAR) designed to capture and extrapolate Arctic feedback loops. VARs are dynamic simultaneous systems of equations, routinely estimated to predict and understand the interactions of multiple macroeconomic time series. Hence, the VARCTIC is a parsimonious compromise between full-blown climate models and purely statistical approaches that usually offer little explanation of the underlying mechanism. Our "business as usual" completely unconditional forecast has SIE hitting 0 in September by the 2060s. Impulse response functions reveal that anthropogenic $\text{CO}_2$ emission shocks have a permanent effect on SIE, a property shared by no other shock. Further, we find Albedo- and Thickness-based feedbacks to be the main amplification channels through which $\text{CO}_2$ anomalies impact SIE in the short/medium run. Conditional forecast analyses reveal that the future path of SIE crucially depends on the evolution of $\text{CO}_2$ emissions, with outcomes ranging from recovering SIE to it reaching 0 in the 2050s. Finally, Albedo and Thickness feedbacks are shown to play an important role in accelerating the speed at which predicted SIE is heading towards 0.
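A hedged sketch of the VAR machinery described above, using statsmodels on synthetic stand-ins for the paper's Arctic series; the variable names, lag order, and horizon are assumptions, not the authors' specification.

```python
# Fit a small VAR on stand-in series and inspect the impulse response of
# sea-ice extent (sie) to a CO2 shock.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
n = 200
co2 = np.cumsum(rng.normal(0.1, 1.0, n))     # stand-in CO2 anomaly series
sie = -0.5 * co2 + rng.normal(0.0, 1.0, n)   # stand-in sea-ice extent
data = pd.DataFrame({"co2": co2, "sie": sie})

results = VAR(data).fit(maxlags=4, ic="aic")  # lag order chosen by AIC
irf = results.irf(24)                         # 24-period impulse responses
print(irf.irfs[:, 1, 0])                      # response of sie to a co2 shock
```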
Unsupervised Pre-trained Models from Healthy ADLs Improve Parkinson's Disease Classification of Gait Patterns. (arXiv:2005.02589v2 [cs.LG] UPDATED) By arxiv.org. Application and use of deep learning algorithms for different healthcare applications is gaining interest at a steady pace. However, use of such algorithms can prove to be challenging as they require large amounts of training data that capture different possible variations. This makes it difficult to use them in a clinical setting since in most health applications researchers often have to work with limited data. Less data can cause the deep learning model to over-fit. In this paper, we ask how can we use data from a different environment, different use-case, with widely differing data distributions. We exemplify this use case by using single-sensor accelerometer data from healthy subjects performing activities of daily living - ADLs (source dataset), to extract features relevant to multi-sensor accelerometer gait data (target dataset) for Parkinson's disease classification. We train the pre-trained model using the source dataset and use it as a feature extractor. We show that the features extracted for the target dataset can be used to train an effective classification model. Our pre-trained source model consists of a convolutional autoencoder, and the target classification model is a simple multi-layer perceptron model. We explore two different pre-trained source models, trained using different activity groups, and analyze the influence the choice of pre-trained model has over the task of Parkinson's disease classification.
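A hedged PyTorch sketch of the two-stage idea as described: pre-train a 1D convolutional autoencoder on source ADL windows, then freeze the encoder and train a small classifier on its features. Channel counts, window length, and layer sizes are assumptions, not the authors' architecture.

```python
# Stage 1: autoencoder on source (ADL) windows; Stage 2: frozen encoder
# feeds an MLP classifier on target (gait) windows.
import torch
import torch.nn as nn

encoder = nn.Sequential(
    nn.Conv1d(3, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
    nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
)
decoder = nn.Sequential(
    nn.Upsample(scale_factor=2), nn.Conv1d(32, 16, 5, padding=2), nn.ReLU(),
    nn.Upsample(scale_factor=2), nn.Conv1d(16, 3, 5, padding=2),
)

x_source = torch.randn(8, 3, 128)       # ADL windows: (batch, axes, samples)
recon = decoder(encoder(x_source))      # pre-train with MSE(recon, x_source)

for p in encoder.parameters():          # freeze: encoder is a feature extractor
    p.requires_grad = False

classifier = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 64),
                           nn.ReLU(), nn.Linear(64, 2))  # PD vs. control
x_target = torch.randn(8, 3, 128)       # gait windows from the target dataset
logits = classifier(encoder(x_target))
```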
Statistical errors in Monte Carlo-based inference for random elements. (arXiv:2005.02532v2 [math.ST] UPDATED) By arxiv.org. Monte Carlo simulation is useful to compute or estimate expected functionals of random elements if those random samples can be generated from the true distribution. However, when the distribution has some unknown parameters, the samples must be generated from an estimated distribution with the parameters replaced by some estimators, which causes a statistical error in Monte Carlo estimation. This paper considers such statistical errors and investigates the asymptotic distributions of Monte Carlo-based estimators when the random elements are not only real-valued, but also function-valued random variables. We also investigate expected functionals for semimartingales in detail. The consideration indicates that Monte Carlo estimation can get worse when a semimartingale has a jump part with unremovable unknown parameters.
Generating Thermal Image Data Samples using 3D Facial Modelling Techniques and Deep Learning Methodologies. (arXiv:2005.01923v2 [cs.CV] UPDATED) By arxiv.org. Methods for generating synthetic data have become increasingly important for building the large datasets required by Convolutional Neural Network (CNN) based deep learning techniques across a wide range of computer vision applications. In this work, we extend existing methodologies to show how 2D thermal facial data can be mapped to provide 3D facial models. For the proposed research work we have used the Tufts datasets for generating 3D varying face poses from a single frontal face pose. The system works by refining the existing image quality with fusion-based image preprocessing operations. The refined outputs have better contrast adjustment, decreased noise levels and better exposure of the dark regions. This makes the facial landmarks and temperature patterns on the human face more discernible and visible when compared to the original raw data. Different image quality metrics are used to compare the refined version of images with the original images. In the next phase of the proposed study, the refined version of images is used to create 3D facial geometry structures by using Convolutional Neural Networks (CNN). The generated outputs are then imported into Blender software to finally extract the 3D thermal facial outputs of both males and females. The same technique is also applied to our thermal face data acquired using a prototype thermal camera (developed under the Heliaus EU project) in an indoor lab environment, which is then used for generating synthetic 3D face data along with varying yaw face angles, and lastly a facial depth map is generated.
Interpreting Rate-Distortion of Variational Autoencoder and Using Model Uncertainty for Anomaly Detection. (arXiv:2005.01889v2 [cs.LG] UPDATED) By arxiv.org. Building a scalable machine learning system for unsupervised anomaly detection via representation learning is highly desirable. One of the prevalent methods is using a reconstruction error from a variational autoencoder (VAE) obtained by maximizing the evidence lower bound. We revisit VAE from the perspective of information theory to provide some theoretical foundations on using the reconstruction error, and finally arrive at a simpler and more effective model for anomaly detection. In addition, to enhance the effectiveness of detecting anomalies, we incorporate a practical model uncertainty measure into the metric. We show empirically the competitive performance of our approach on benchmark datasets.
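A minimal sketch of the scoring idea as I read the abstract (not the authors' code): score each sample by its reconstruction error under a trained VAE, plus a simple uncertainty term from the spread across stochastic forward passes. The `vae` callable is a placeholder.

```python
# Anomaly score = mean reconstruction error over several stochastic passes,
# plus the spread across passes as a crude model-uncertainty term.
import torch

def anomaly_score(vae, x, n_samples=16):
    """`vae(x)` is assumed to return a reconstruction of x, with the
    encoder's latent sampling making each forward pass stochastic."""
    recons = torch.stack([vae(x) for _ in range(n_samples)])  # (n, batch, ...)
    err = ((recons - x) ** 2).flatten(2).mean(-1)             # per-pass error
    return err.mean(0) + err.std(0)                           # error + uncertainty
```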
How many modes can a constrained Gaussian mixture have? (arXiv:2005.01580v2 [math.ST] UPDATED) By arxiv.org. We show, by an explicit construction, that a mixture of univariate Gaussians with variance 1 and means in $[-A,A]$ can have $\Omega(A^2)$ modes. This disproves a recent conjecture of Dytso, Yagli, Poor and Shamai [IEEE Trans. Inform. Theory, Apr. 2020], who showed that such a mixture can have at most $O(A^2)$ modes and surmised that the upper bound could be improved to $O(A)$. Our result holds even if an additional variance constraint is imposed on the mixing distribution. Extending the result to higher dimensions, we exhibit a mixture of Gaussians in $\mathbb{R}^d$, with identity covariances and means inside $[-A,A]^d$, that has $\Omega(A^{2d})$ modes.
Is the NUTS algorithm correct? (arXiv:2005.01336v2 [stat.CO] UPDATED) By arxiv.org. This paper is devoted to investigating whether the popular No U-turn (NUTS) sampling algorithm is correct, i.e. whether the target probability distribution is exactly conserved by the algorithm. It turns out that one of the Gibbs substeps used in the algorithm cannot always be guaranteed to be correct.
Can a powerful neural network be a teacher for a weaker neural network? (arXiv:2005.00393v2 [cs.LG] UPDATED) By arxiv.org. The transfer learning technique is widely used to learn in one context and apply it to another, i.e. the capacity to apply acquired knowledge and skills to new situations. But is it possible to transfer the learning from a deep neural network to a weaker neural network? Is it possible to improve the performance of a weak neural network using the knowledge acquired by a more powerful neural network? In this work, during the training process of a weak network, we add a loss function that minimizes the distance between the features previously learned by a strong neural network and the features that the weak network must try to learn. To demonstrate the effectiveness and robustness of our approach, we conducted a large number of experiments using three known datasets and demonstrated that a weak neural network can increase its performance if its learning process is driven by a more powerful neural network.
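A hedged sketch of such a training objective, a feature-matching term in the spirit of knowledge distillation; the exact distance, weighting, and layer choice are assumptions.

```python
# Weak-network loss = task loss + a term pulling its features toward the
# (frozen) strong network's features for the same inputs.
import torch
import torch.nn.functional as F

def total_loss(weak_logits, labels, weak_feats, strong_feats, beta=0.5):
    task = F.cross_entropy(weak_logits, labels)
    match = F.mse_loss(weak_feats, strong_feats.detach())  # teacher is frozen
    return task + beta * match
```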
Data-Space Inversion Using a Recurrent Autoencoder for Time-Series Parameterization. (arXiv:2005.00061v2 [stat.ML] UPDATED) By arxiv.org. Data-space inversion (DSI) and related procedures represent a family of methods applicable for data assimilation in subsurface flow settings. These methods differ from model-based techniques in that they provide only posterior predictions for quantities (time series) of interest, not posterior models with calibrated parameters. DSI methods require a large number of flow simulations to first be performed on prior geological realizations. Given observed data, posterior predictions can then be generated directly. DSI operates in a Bayesian setting and provides posterior samples of the data vector. In this work we develop and evaluate a new approach for data parameterization in DSI. Parameterization reduces the number of variables to determine in the inversion, and it maintains the physical character of the data variables. The new parameterization uses a recurrent autoencoder (RAE) for dimension reduction, and a long-short-term memory (LSTM) network to represent flow-rate time series. The RAE-based parameterization is combined with an ensemble smoother with multiple data assimilation (ESMDA) for posterior generation. Results are presented for two- and three-phase flow in a 2D channelized system and a 3D multi-Gaussian model. The RAE procedure, along with existing DSI treatments, are assessed through comparison to reference rejection sampling (RS) results. The new DSI methodology is shown to consistently outperform existing approaches, in terms of statistical agreement with RS results. The method is also shown to accurately capture derived quantities, which are computed from variables considered directly in DSI. This requires correlation and covariance between variables to be properly captured, and accuracy in these relationships is demonstrated. The RAE-based parameterization developed here is clearly useful in DSI, and it may also find application in other subsurface flow problems.
Short-term forecasts of COVID-19 spread across Indian states until 1 May 2020. (arXiv:2004.13538v2 [q-bio.PE] UPDATED) By arxiv.org. The very first case of coronavirus illness was recorded on 30 January 2020 in India, and the number of infected cases, including the death toll, continues to rise. In this paper, we present short-term forecasts of COVID-19 for 28 Indian states and five union territories using real-time data from 30 January to 21 April 2020. Applying Holt's second-order exponential smoothing method and the autoregressive integrated moving average (ARIMA) model, we generate 10-day-ahead forecasts of the likely number of infected cases and deaths in India for 22 April to 1 May 2020. Our results show that the number of cumulative cases in India will rise to 36335.63 [PI 95% (30884.56, 42918.87)], while the number of deaths may increase to 1099.38 [PI 95% (959.77, 1553.76)] by 1 May 2020. Further, we have divided the country into severity zones based on the cumulative cases. According to this analysis, Maharashtra is likely to be the most affected state, with around 9787.24 [PI 95% (6949.81, 13757.06)] cumulative cases by 1 May 2020. However, Kerala and Karnataka are likely to shift from the red zone (i.e. highly affected) to the lesser affected region. On the other hand, Gujarat and Madhya Pradesh will move to the red zone. These results mark the states where lockdown, by 3 May 2020, can be loosened.
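A sketch of the two forecasting tools named above, applied to a toy cumulative-case series with statsmodels; the numbers and the ARIMA order are illustrative, not the paper's data or specification.

```python
# Holt's (trend) exponential smoothing and ARIMA on a toy cumulative series.
import numpy as np
from statsmodels.tsa.holtwinters import Holt
from statsmodels.tsa.arima.model import ARIMA

cases = np.array([3, 5, 28, 30, 31, 34, 39, 43, 56, 62, 73, 82, 102,
                  113, 119, 142, 156, 194, 244, 330, 396, 499], dtype=float)

holt = Holt(cases).fit()                      # smoothing with a trend component
arima = ARIMA(cases, order=(1, 2, 1)).fit()   # order is an illustrative choice

print(holt.forecast(10))                      # 10-day-ahead point forecasts
print(arima.get_forecast(10).conf_int())      # with 95% prediction intervals
```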
A bimodal gamma distribution: Properties, regression model and applications. (arXiv:2004.12491v2 [stat.ME] UPDATED) By arxiv.org. In this paper we propose a bimodal gamma distribution using a quadratic transformation based on the alpha-skew-normal model. We discuss several properties of this distribution such as mean, variance, moments, hazard rate and entropy measures. Further, we propose a new regression model with censored data based on the bimodal gamma distribution. This regression model can be very useful in the analysis of real data and could give more realistic fits than other special regression models. Monte Carlo simulations were performed to check the bias in the maximum likelihood estimation. The proposed models are applied to two real data sets found in the literature.
A Global Benchmark of Algorithms for Segmenting Late Gadolinium-Enhanced Cardiac Magnetic Resonance Imaging. (arXiv:2004.12314v3 [cs.CV] UPDATED) By arxiv.org. Segmentation of cardiac images, particularly late gadolinium-enhanced magnetic resonance imaging (LGE-MRI) widely used for visualizing diseased cardiac structures, is a crucial first step for clinical diagnosis and treatment. However, direct segmentation of LGE-MRIs is challenging due to their attenuated contrast. Since most clinical studies have relied on manual and labor-intensive approaches, automatic methods are of high interest, particularly optimized machine learning approaches. To address this, we organized the "2018 Left Atrium Segmentation Challenge" using 154 3D LGE-MRIs, currently the world's largest cardiac LGE-MRI dataset, and associated labels of the left atrium segmented by three medical experts, ultimately attracting the participation of 27 international teams. In this paper, extensive analysis of the submitted algorithms using technical and biological metrics was performed, including subgroup analysis and hyper-parameter analysis, offering an overall picture of the major design choices of convolutional neural networks (CNNs) and practical considerations for achieving state-of-the-art left atrium segmentation. Results show the top method achieved a Dice score of 93.2% and a mean surface-to-surface distance of 0.7 mm, significantly outperforming the prior state of the art. In particular, our analysis demonstrated that double, sequentially used CNNs, in which a first CNN is used for automatic region-of-interest localization and a subsequent CNN is used for refined regional segmentation, achieved far superior results than traditional methods and pipelines containing single CNNs. This large-scale benchmarking study makes a significant step towards much-improved segmentation methods for cardiac LGE-MRIs, and will serve as an important benchmark for evaluating and comparing future works in the field.
Excess registered deaths in England and Wales during the COVID-19 pandemic, March 2020 and April 2020. (arXiv:2004.11355v4 [stat.AP] UPDATED) By arxiv.org. Official counts of COVID-19 deaths have been criticized for potentially including people who did not die of COVID-19 but merely died with COVID-19. I address that critique by fitting a generalized additive model to weekly counts of all registered deaths in England and Wales during the 2010s. The model produces baseline rates of death registrations expected in the absence of the COVID-19 pandemic, and comparing those baselines to recent counts of registered deaths exposes the emergence of excess deaths late in March 2020. Among adults aged 45+, about 38,700 excess deaths were registered in the 5 weeks comprising 21 March through 24 April (612 $\pm$ 416 from 21-27 March, 5675 $\pm$ 439 from 28 March through 3 April, then 9183 $\pm$ 468, 12,712 $\pm$ 589, and 10,511 $\pm$ 567 in April's next 3 weeks). Both the Office for National Statistics's respective count of 26,891 death certificates which mention COVID-19, and the Department of Health and Social Care's hospital-focused count of 21,222 deaths, are appreciably less, implying that their counting methods have underestimated rather than overestimated the pandemic's true death toll. If underreporting rates have held steady, about 45,900 direct and indirect COVID-19 deaths might have been registered by April's end but not yet publicly reported in full.
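An illustrative sketch of the baseline idea (not the author's exact model): fit a GAM with seasonal and trend terms to most of a decade of weekly counts, then compare later observed counts against the predicted baseline. It assumes the third-party pygam package; the synthetic data and term choices are placeholders.

```python
# GAM baseline (seasonality + long-run trend) on weekly death counts,
# then excess = observed minus baseline in the held-out weeks.
import numpy as np
from pygam import PoissonGAM, s

rng = np.random.default_rng(2)
weeks = np.arange(520)                            # ten years of weekly data
X = np.column_stack([weeks % 52, weeks])          # season, long-run trend
y = rng.poisson(10000 + 1500 * np.cos(2 * np.pi * X[:, 0] / 52))

gam = PoissonGAM(s(0) + s(1)).fit(X[:510], y[:510])
baseline = gam.predict(X[510:])                   # expected registrations
excess = y[510:] - baseline                       # observed minus baseline
print(excess.round())
```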
On a phase transition in general order spline regression. (arXiv:2004.10922v2 [math.ST] UPDATED) By arxiv.org. In the Gaussian sequence model $Y = \theta_0 + \varepsilon$ in $\mathbb{R}^n$, we study the fundamental limit of approximating the signal $\theta_0$ by a class $\Theta(d,d_0,k)$ of (generalized) splines with free knots. Here $d$ is the degree of the spline, $d_0$ is the order of differentiability at each inner knot, and $k$ is the maximal number of pieces. We show that, given any integer $d \geq 0$ and $d_0 \in \{-1,0,\ldots,d-1\}$, the minimax rate of estimation over $\Theta(d,d_0,k)$ exhibits the following phase transition: $$\inf_{\widetilde{\theta}} \sup_{\theta \in \Theta(d,d_0,k)} \mathbb{E}_\theta \|\widetilde{\theta} - \theta\|^2 \asymp_d \begin{cases} k \log\log(16n/k), & 2 \leq k \leq k_0, \\ k \log(en/k), & k \geq k_0 + 1. \end{cases}$$ The transition boundary $k_0$, which takes the form $\lfloor (d+1)/(d-d_0) \rfloor + 1$, demonstrates the critical role of the regularity parameter $d_0$ in the separation between a faster $\log\log(16n)$ and a slower $\log(en)$ rate. We further show that, once an additional '$d$-monotonicity' shape constraint is encouraged (including monotonicity for $d = 0$ and convexity for $d = 1$), the above phase transition is eliminated and the faster $k \log\log(16n/k)$ rate can be achieved for all $k$. These results provide theoretical support for developing $\ell_0$-penalized (shape-constrained) spline regression procedures as useful alternatives to $\ell_1$- and $\ell_2$-penalized ones.
A Critical Overview of Privacy-Preserving Approaches for Collaborative Forecasting. (arXiv:2004.09612v3 [cs.LG] UPDATED) By arxiv.org. Cooperation between different data owners may lead to an improvement in forecast quality, for instance by benefiting from spatial-temporal dependencies in geographically distributed time series. Due to competitive business factors and personal data protection questions, said data owners might be unwilling to share their data, which increases the interest in collaborative privacy-preserving forecasting. This paper analyses the state of the art and unveils several shortcomings of existing methods in guaranteeing data privacy when employing Vector Autoregressive (VAR) models. The paper also provides mathematical proofs and numerical analysis to evaluate existing privacy-preserving methods, dividing them into three groups: data transformation, secure multi-party computations, and decomposition methods. The analysis shows that state-of-the-art techniques have limitations in preserving data privacy, such as a trade-off between privacy and forecasting accuracy; moreover, in iterative model-fitting processes where intermediate results are shared, the original data can be inferred after some iterations.
Deep transfer learning for improving single-EEG arousal detection. (arXiv:2004.05111v2 [cs.CV] UPDATED) By arxiv.org. Datasets in sleep science present challenges for machine learning algorithms due to differences in recording setups across clinics. We investigate two deep transfer learning strategies for overcoming the channel mismatch problem for cases where two datasets do not contain exactly the same setup, leading to degraded performance in single-EEG models. Specifically, we train a baseline model on multivariate polysomnography data and subsequently replace the first two layers to prepare the architecture for single-channel electroencephalography data. Using a fine-tuning strategy, our model yields similar performance to the baseline model (F1=0.682 and F1=0.694, respectively), and was significantly better than a comparable single-channel model. Our results are promising for researchers working with small databases who wish to use deep learning models pre-trained on larger databases.
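A hedged PyTorch sketch of the channel-mismatch strategy described above; the stand-in architecture, channel counts, and window length are assumptions, not the authors' model.

```python
# Swap the input stem of a pretrained multi-channel model for 1-channel EEG,
# then fine-tune the whole network at a low learning rate.
import torch
import torch.nn as nn

model = nn.Sequential(                  # stand-in for the pretrained PSG model
    nn.Conv1d(13, 32, kernel_size=7, padding=3), nn.ReLU(),
    nn.Conv1d(32, 64, kernel_size=7, padding=3), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(64, 2),
)
# ... load pretrained multivariate-PSG weights here ...

# Replace the first layers so the stem accepts a single EEG channel.
model[0] = nn.Conv1d(1, 32, kernel_size=7, padding=3)

# Fine-tuning strategy: train everything, new stem included, at a low LR.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
logits = model(torch.randn(4, 1, 3000))  # e.g. 30 s windows at 100 Hz (assumed)
```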
Strong Converse for Testing Against Independence over a Noisy channel. (arXiv:2004.00775v2 [cs.IT] UPDATED) By arxiv.org. A distributed binary hypothesis testing (HT) problem over a noisy (discrete and memoryless) channel, studied previously by the authors, is investigated from the perspective of the strong converse property. It was shown by Ahlswede and Csiszár that a strong converse holds in the above setting when the channel is rate-limited and noiseless. Motivated by this observation, we show that the strong converse continues to hold in the noisy channel setting for a special case of HT known as testing against independence (TAI), under the assumption that the channel transition matrix has non-zero elements. The proof utilizes the blowing-up lemma and the recent change-of-measure technique of Tyagi and Watanabe as the key tools.
Capturing and Explaining Trajectory Singularities using Composite Signal Neural Networks. (arXiv:2003.10810v2 [cs.LG] UPDATED) By arxiv.org. Spatial trajectories are ubiquitous and complex signals. Their analysis is crucial in many research fields, from urban planning to neuroscience. Several approaches have been proposed to cluster trajectories. They rely on hand-crafted features, which struggle to capture the spatio-temporal complexity of the signal, or on Artificial Neural Networks (ANNs) which can be more efficient but less interpretable. In this paper we present a novel ANN architecture designed to capture the spatio-temporal patterns characteristic of a set of trajectories, while taking into account the demographics of the navigators. Hence, our model extracts markers linked to both behaviour and demographics. We propose a composite signal analyser (CompSNN) combining three simple ANN modules. Each of these modules uses different signal representations of the trajectory while remaining interpretable. Our CompSNN performs significantly better than its modules taken in isolation and allows to visualise which parts of the signal were most useful to discriminate the trajectories.
Risk-Aware Energy Scheduling for Edge Computing with Microgrid: A Multi-Agent Deep Reinforcement Learning Approach. (arXiv:2003.02157v2 [physics.soc-ph] UPDATED) By arxiv.org. In recent years, multi-access edge computing (MEC) is a key enabler for handling the massive expansion of Internet of Things (IoT) applications and services. However, energy consumption of a MEC network depends on volatile tasks that induces risk for energy demand estimations. As an energy supplier, a microgrid can facilitate seamless energy supply. However, the risk associated with energy supply is also increased due to unpredictable energy generation from renewable and non-renewable sources. Especially, the risk of energy shortfall is involved with uncertainties in both energy consumption and generation. In this paper, we study a risk-aware energy scheduling problem for a microgrid-powered MEC network. First, we formulate an optimization problem considering the conditional value-at-risk (CVaR) measurement for both energy consumption and generation, where the objective is to minimize the loss of energy shortfall of the MEC networks, and we show this problem is an NP-hard problem. Second, we analyze our formulated problem using a multi-agent stochastic game that ensures the joint policy Nash equilibrium, and show the convergence of the proposed model. Third, we derive the solution by applying a multi-agent deep reinforcement learning (MADRL)-based asynchronous advantage actor-critic (A3C) algorithm with shared neural networks. This method mitigates the curse of dimensionality of the state space and chooses the best policy among the agents for the proposed problem. Finally, the experimental results establish a significant performance gain by considering CVaR for high accuracy energy scheduling of the proposed model than both the single and random agent models.
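For concreteness, a quick numerical illustration of the conditional value-at-risk measure used above, following its standard definition (mean loss in the worst tail) rather than the paper's code.

```python
# CVaR at level alpha: the expected loss in the worst (1 - alpha) tail.
import numpy as np

def cvar(losses, alpha=0.95):
    losses = np.sort(np.asarray(losses))
    var = np.quantile(losses, alpha)     # value-at-risk threshold
    return losses[losses >= var].mean()  # mean loss beyond VaR

shortfall = np.random.default_rng(1).gamma(2.0, 5.0, 10_000)  # toy shortfalls
print(cvar(shortfall, alpha=0.95))
```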
Mnemonics Training: Multi-Class Incremental Learning without Forgetting. (arXiv:2002.10211v3 [cs.CV] UPDATED) By arxiv.org. Multi-Class Incremental Learning (MCIL) aims to learn new concepts by incrementally updating a model trained on previous concepts. However, there is an inherent trade-off to effectively learning new concepts without catastrophic forgetting of previous ones. To alleviate this issue, it has been proposed to keep around a few examples of the previous concepts, but the effectiveness of this approach heavily depends on the representativeness of these examples. This paper proposes a novel and automatic framework we call mnemonics, where we parameterize exemplars and make them optimizable in an end-to-end manner. We train the framework through bilevel optimizations, i.e., model-level and exemplar-level. We conduct extensive experiments on three MCIL benchmarks, CIFAR-100, ImageNet-Subset and ImageNet, and show that using mnemonics exemplars can surpass the state-of-the-art by a large margin. Interestingly and quite intriguingly, the mnemonics exemplars tend to be on the boundaries between different classes.
A Distributionally Robust Area Under Curve Maximization Model. (arXiv:2002.07345v2 [math.OC] UPDATED) By arxiv.org. Area under the ROC curve (AUC) is a widely used performance measure for classification models. We propose two new distributionally robust AUC maximization models (DR-AUC) that rely on the Kantorovich metric and approximate the AUC with the hinge loss function. We consider the two cases with respectively fixed and variable support for the worst-case distribution. We use duality theory to reformulate the DR-AUC models and derive tractable convex optimization problems. The numerical experiments show that the proposed DR-AUC models, benchmarked against the standard deterministic AUC and the support vector machine models, perform better in general and in particular improve the worst-case out-of-sample performance over the majority of the considered datasets, thereby showing their robustness. The results are particularly encouraging since our numerical experiments are conducted with training sets of small size, which have been known to be conducive to low out-of-sample performance.
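A sketch of the hinge-loss AUC surrogate mentioned above, in its standard pairwise form; the distributionally robust reformulation itself is not reproduced here.

```python
# Pairwise hinge surrogate for AUC: penalize positive/negative score pairs
# that are not separated by at least the margin.
import numpy as np

def hinge_auc_loss(scores_pos, scores_neg, margin=1.0):
    diffs = scores_pos[:, None] - scores_neg[None, :]  # all pos/neg pairs
    return np.maximum(0.0, margin - diffs).mean()

pos = np.array([2.1, 1.4, 0.9])   # classifier scores on positive examples
neg = np.array([0.3, -0.5, 1.1, 0.0])
print(hinge_auc_loss(pos, neg))
```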
Statistical aspects of nuclear mass models. (arXiv:2002.04151v3 [nucl-th] UPDATED) By arxiv.org. We study the information content of nuclear masses from the perspective of global models of nuclear binding energies. To this end, we employ a number of statistical methods and diagnostic tools, including Bayesian calibration, Bayesian model averaging, chi-square correlation analysis, principal component analysis, and empirical coverage probability. Using a Bayesian framework, we investigate the structure of the 4-parameter Liquid Drop Model by considering discrepant mass domains for calibration. We then use the chi-square correlation framework to analyze the 14-parameter Skyrme energy density functional calibrated using homogeneous and heterogeneous datasets. We show that a quite dramatic parameter reduction can be achieved in both cases. The advantage of Bayesian model averaging for improving uncertainty quantification is demonstrated. The statistical approaches used are pedagogically described; in this context this work can serve as a guide for future applications.
Cyclic Boosting -- an explainable supervised machine learning algorithm. (arXiv:2002.03425v2 [cs.LG] UPDATED) By arxiv.org. Supervised machine learning algorithms have seen spectacular advances and surpassed human-level performance in a wide range of specific applications. However, using complex ensemble or deep learning algorithms typically results in black-box models, where the path leading to individual predictions cannot be followed in detail. In order to address this issue, we propose the novel "Cyclic Boosting" machine learning algorithm, which allows one to efficiently perform accurate regression and classification tasks while at the same time allowing a detailed understanding of how each individual prediction was made.
On the impact of selected modern deep-learning techniques to the performance and celerity of classification models in an experimental high-energy physics use case. (arXiv:2002.01427v3 [physics.data-an] UPDATED) By arxiv.org. Beginning from a basic neural-network architecture, we test the potential benefits offered by a range of advanced techniques for machine learning, in particular deep learning, in the context of a typical classification problem encountered in the domain of high-energy physics, using a well-studied dataset: the 2014 Higgs ML Kaggle dataset. The advantages are evaluated in terms of both performance metrics and the time required to train and apply the resulting models. Techniques examined include domain-specific data augmentation, learning rate and momentum scheduling, (advanced) ensembling in both model space and weight space, and alternative architectures and connection methods. Following the investigation, we arrive at a model which achieves equal performance to the winning solution of the original Kaggle challenge, whilst being significantly quicker to train and apply, and being suitable for use with both GPU and CPU hardware setups. These reductions in timing and hardware requirements potentially allow the use of more powerful algorithms in HEP analyses, where models must be retrained frequently, sometimes at short notice, by small groups of researchers with limited hardware resources. Additionally, a new wrapper library for PyTorch called LUMIN is presented, which incorporates all of the techniques studied.
Restricting the Flow: Information Bottlenecks for Attribution. (arXiv:2001.00396v3 [stat.ML] UPDATED) By arxiv.org. Attribution methods provide insights into the decision-making of machine learning models like artificial neural networks. For a given input sample, they assign a relevance score to each individual input variable, such as the pixels of an image. In this work we adapt the information bottleneck concept for attribution. By adding noise to intermediate feature maps, we restrict the flow of information and can quantify (in bits) how much information image regions provide. We compare our method against ten baselines using three different metrics on VGG-16 and ResNet-50, and find that our method outperforms all baselines in five out of six settings. The method's information-theoretic foundation provides an absolute frame of reference for attribution values (bits) and a guarantee that regions scored close to zero are not necessary for the network's decision. Reviews: https://openreview.net/forum?id=S1xWh1rYwB Code: https://github.com/BioroboticsLab/IBA
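A hedged sketch of the core operation described above: interpolate an intermediate feature map toward noise, so a mixing coefficient controls (and lets one quantify) how much information passes. This is a simplification; see the authors' code link above for the actual method.

```python
# Interpolate a feature map toward matched noise: lam = 1 passes the signal
# through untouched, lam = 0 replaces it entirely with noise.
import torch

def bottleneck(feat, lam, mean, std):
    eps = torch.randn_like(feat) * std + mean   # noise matching feature stats
    return lam * feat + (1 - lam) * eps
```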