
Regression Forest-Based Atlas Localization and Direction Specific Atlas Generation for Pancreas Segmentation. (arXiv:2005.03345v1 [cs.CV])

This paper proposes a fully automated atlas-based pancreas segmentation method for CT volumes, utilizing atlas localization by regression forest and atlas generation using blood vessel information. Previous probabilistic atlas-based pancreas segmentation methods cannot deal well with the spatial variations that are commonly found in the pancreas, and shape variations are not represented by an averaged atlas. We propose a fully automated pancreas segmentation method that deals with both types of variation. The position and size of the pancreas are estimated using a regression forest technique. After localization, a patient-specific probabilistic atlas is generated based on a new image similarity that reflects the position and direction of the blood vessels around the pancreas. We segment the pancreas using the EM algorithm with the atlas as the prior, followed by graph cut. In an evaluation using 147 CT volumes, the Jaccard index and the Dice overlap of the proposed method were 62.1% and 75.1%, respectively. Although all of the segmentation processes are automated, the segmentation results were superior to other state-of-the-art methods in terms of Dice overlap.
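
As a rough illustration of the localization stage, the sketch below fits a regression forest that maps volume-level features to a bounding box (centre and size). The features, data, and box parametrization are invented stand-ins, not the authors' actual pipeline.

```python
# Sketch: regression-forest localization of an organ bounding box.
# Features and data are synthetic stand-ins; the paper's actual
# features and training protocol are not specified in the abstract.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

n_volumes = 200
features = rng.normal(size=(n_volumes, 64))   # e.g. downsampled CT intensities
boxes = rng.normal(size=(n_volumes, 6))       # (cx, cy, cz, w, h, d) per volume

forest = RandomForestRegressor(n_estimators=100, random_state=0)
forest.fit(features, boxes)                   # learn features -> box regression

new_volume = rng.normal(size=(1, 64))
cx, cy, cz, w, h, d = forest.predict(new_volume)[0]
print(f"estimated centre=({cx:.2f},{cy:.2f},{cz:.2f}), size=({w:.2f},{h:.2f},{d:.2f})")
```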





Wavelet Integrated CNNs for Noise-Robust Image Classification. (arXiv:2005.03337v1 [cs.CV])

Convolutional Neural Networks (CNNs) are generally vulnerable to image noise: small perturbations can cause drastic changes in the output. To suppress the effect of noise on the final prediction, we enhance CNNs by replacing max-pooling, strided convolution, and average-pooling with the Discrete Wavelet Transform (DWT). We present general DWT and Inverse DWT (IDWT) layers applicable to various wavelets, such as Haar, Daubechies, and Cohen wavelets, and design wavelet integrated CNNs (WaveCNets) using these layers for image classification. In WaveCNets, feature maps are decomposed into low-frequency and high-frequency components during downsampling. The low-frequency component stores the main information, including the basic object structures, and is transmitted to the subsequent layers to extract robust high-level features. The high-frequency components, containing most of the data noise, are dropped during inference to improve the noise-robustness of the WaveCNets. Our experimental results on ImageNet and ImageNet-C (the noisy version of ImageNet) show that WaveCNets, the wavelet integrated versions of VGG, ResNets, and DenseNet, achieve higher accuracy and better noise-robustness than their vanilla versions.
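
The following sketch shows the core downsampling idea with a hand-rolled one-level 2-D Haar DWT: the half-resolution low-frequency band is kept for subsequent layers, while the detail bands, which carry most of the pixel noise, can be dropped. It is a minimal NumPy illustration, not the paper's DWT/IDWT layer implementation.

```python
# Sketch: one level of 2-D Haar DWT used as a pooling substitute.
# Keeping only the low-frequency band halves the spatial size while
# discarding the high-frequency bands, where most pixel noise lives.
import numpy as np

def haar_dwt2(x):
    """Split a (H, W) map into low- and high-frequency half-size bands."""
    a = x[0::2, 0::2]; b = x[0::2, 1::2]
    c = x[1::2, 0::2]; d = x[1::2, 1::2]
    ll = (a + b + c + d) / 2.0          # low-low: smoothed structure
    lh = (a - b + c - d) / 2.0          # horizontal detail
    hl = (a + b - c - d) / 2.0          # vertical detail
    hh = (a - b - c + d) / 2.0          # diagonal detail
    return ll, (lh, hl, hh)

feature_map = np.random.rand(8, 8)
noisy = feature_map + 0.1 * np.random.randn(8, 8)

ll, (lh, hl, hh) = haar_dwt2(noisy)
print(ll.shape)                                # (4, 4): half-resolution map kept
print(np.abs(np.stack([lh, hl, hh])).mean())   # detail bands: mostly noise here
```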





Causal Paths in Temporal Networks of Face-to-Face Human Interactions. (arXiv:2005.03333v1 [cs.SI])

In a temporal network, causal paths are characterized by the fact that links from a source to a target must respect the chronological order. In this article, we study the causal path structure in temporal networks of human face-to-face interactions in different social contexts. In a static network, paths are transitive, i.e., the existence of a link from $a$ to $b$ and from $b$ to $c$ implies the existence of a path from $a$ to $c$ via $b$. In a temporal network, the chronological constraint introduces time correlations that affect transitivity. A probabilistic model based on higher-order Markov chains shows that correlations that can invalidate transitivity are present only when the time gap between consecutive events is larger than the average value, and are negligible below that value. The comparison between the densities of the temporal and static accessibility matrices shows that the static representation can be used with good approximation. Moreover, we quantify the extent of the causally connected region of the networks over time.
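
A toy example of the chronology constraint: below, the links a -> b and b -> c both exist, so a static path a -> c exists, but no causal path does, because the b -> c event precedes the a -> b event. The event list and check are illustrative only.

```python
# Sketch: transitivity failure in a temporal network.
# A causal (time-respecting) path a -> b -> c requires the b -> c
# event to happen no earlier than the a -> b event.
events = [("a", "b", 5), ("b", "c", 3)]   # (source, target, timestamp)

def has_causal_path(events, src, mid, dst):
    """Check for a two-hop time-respecting path src -> mid -> dst."""
    first = [t for u, v, t in events if (u, v) == (src, mid)]
    second = [t for u, v, t in events if (u, v) == (mid, dst)]
    return any(t2 >= t1 for t1 in first for t2 in second)

# Statically a path a -> b -> c exists, but chronologically it does not:
print(has_causal_path(events, "a", "b", "c"))   # False: 3 < 5
```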





Crop Aggregating for short utterances speaker verification using raw waveforms. (arXiv:2005.03329v1 [eess.AS])

Most studies on speaker verification systems focus on long-duration utterances, which contain sufficient phonetic information. However, the performance of these systems is known to degrade on short-duration utterances, due to their lack of phonetic information compared to long utterances. In this paper, we propose a method that compensates for the performance degradation of speaker verification on short utterances, referred to as "crop aggregating". The proposed method adopts an ensemble-based design to improve the stability and accuracy of speaker verification systems. It segments an input utterance into several short utterances and then aggregates the segment embeddings extracted from the segmented inputs to compose a speaker embedding. The method then simultaneously trains the segment embeddings and the aggregated speaker embedding. In addition, we also modify the teacher-student learning method for the proposed approach. Experimental results on different input durations using the VoxCeleb1 test set demonstrate that the proposed technique improves speaker verification performance by about 45.37% relative to the baseline system under the 1-second test utterance condition.
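
A minimal sketch of the crop-and-aggregate idea, assuming mean pooling of unit-normalized segment embeddings (the abstract does not specify the aggregation rule); the embed() function is a placeholder for the trained raw-waveform encoder.

```python
# Sketch: segment an utterance into short crops, embed each crop, and
# aggregate the segment embeddings into one speaker embedding.
import numpy as np

def embed(segment):
    """Hypothetical segment encoder; a real system uses a trained network."""
    rng = np.random.default_rng(abs(hash(segment.tobytes())) % (2**32))
    return rng.normal(size=128)

def crop_aggregate(waveform, sr=16000, crop_sec=1.0):
    crop = int(sr * crop_sec)
    segments = [waveform[i:i + crop] for i in range(0, len(waveform) - crop + 1, crop)]
    seg_embs = np.stack([embed(s) for s in segments])
    speaker_emb = seg_embs.mean(axis=0)               # simple mean aggregation
    return speaker_emb / np.linalg.norm(speaker_emb)  # unit norm for cosine scoring

utterance = np.random.randn(16000 * 3)   # 3 s of audio at 16 kHz
print(crop_aggregate(utterance).shape)   # (128,)
```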





Specification and Automated Analysis of Inter-Parameter Dependencies in Web APIs. (arXiv:2005.03320v1 [cs.SE])

Web services often impose inter-parameter dependencies that restrict the way in which two or more input parameters can be combined to form valid calls to the service. Unfortunately, current specification languages for web services like the OpenAPI Specification (OAS) provide no support for the formal description of such dependencies, which makes it hard to automatically discover and interact with services without human intervention. In this article, we present an approach for the specification and automated analysis of inter-parameter dependencies in web APIs. We first present a domain-specific language, called the Inter-parameter Dependency Language (IDL), for the specification of dependencies among input parameters in web services. Then, we propose a mapping to translate an IDL document into a constraint satisfaction problem (CSP), enabling the automated analysis of IDL specifications using standard CSP-based reasoning operations. Specifically, we present a catalogue of nine analysis operations on IDL documents that allow computing, for example, whether a given request satisfies all the dependencies of the service. Finally, we present a tool suite including an editor, a parser, an OAS extension, a constraint-programming-aided library, and a test suite supporting IDL specifications and their analyses. Together, these contributions pave the way for a new range of specification-driven applications in areas such as code generation and testing.
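
As a hedged illustration of the IDL-to-CSP idea (the concrete encoding below is ours, not the paper's), one can model parameter presence as Boolean CSP variables and a dependency as a constraint, then answer a request-validity query by solving. This uses the python-constraint package.

```python
# Sketch: an inter-parameter dependency compiled to a CSP, in the spirit
# of the IDL -> CSP mapping (this toy encoding is illustrative only).
# Example dependency: "if 'filter' is set, 'sort' must also be set".
# Requires the python-constraint package (pip install python-constraint).
from constraint import Problem

problem = Problem()
# Model each parameter by whether it is present in the request (0/1).
problem.addVariable("filter_set", [0, 1])
problem.addVariable("sort_set", [0, 1])
# IDL-style dependency: filter => sort.
problem.addConstraint(lambda f, s: (not f) or s, ("filter_set", "sort_set"))

# Analysis operation: is a given request valid w.r.t. the dependencies?
request = {"filter_set": 1, "sort_set": 0}
valid = any(all(sol[k] == v for k, v in request.items())
            for sol in problem.getSolutions())
print(valid)   # False: 'filter' without 'sort' violates the dependency
```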





Knowledge Enhanced Neural Fashion Trend Forecasting. (arXiv:2005.03297v1 [cs.IR])

Fashion trend forecasting is a crucial task for both academia and industry. Although some efforts have been devoted to tackling this challenging task, they only studied limited fashion elements with highly seasonal or simple patterns, which could hardly reveal the real fashion trends. Towards insightful fashion trend forecasting, this work focuses on investigating fine-grained fashion element trends for specific user groups. We first contribute a large-scale fashion trend dataset (FIT) collected from Instagram with extracted time-series fashion element records and user information. Furthermore, to effectively model the time-series data of fashion elements with rather complex patterns, we propose a Knowledge Enhanced Recurrent Network model (KERN) which takes advantage of the capability of deep recurrent neural networks in modeling time-series data. Moreover, it leverages internal and external knowledge in the fashion domain that affects the time-series patterns of fashion element trends. Such incorporation of domain knowledge further enhances the deep learning model in capturing the patterns of specific fashion elements and predicting future trends. Extensive experiments demonstrate that the proposed KERN model can effectively capture the complicated patterns of objective fashion elements, thereby producing accurate fashion trend forecasts.





Expressing Accountability Patterns using Structural Causal Models. (arXiv:2005.03294v1 [cs.SE])

While the exact definition and implementation of accountability depend on the specific context, at its core accountability describes a mechanism that will make decisions transparent and often provides means to sanction "bad" decisions. As such, accountability is specifically relevant for Cyber-Physical Systems, such as robots or drones, that embed themselves into a human society, take decisions and might cause lasting harm. Without a notion of accountability, such systems could behave with impunity and would not fit into society. Despite its relevance, there is currently no agreement on its meaning and, more importantly, no way to express accountability properties for these systems. As a solution we propose to express the accountability properties of systems using Structural Causal Models. They can be represented as human-readable graphical models while also offering mathematical tools to analyze and reason over them. Our central contribution is to show how Structural Causal Models can be used to express and analyze the accountability properties of systems and that this approach allows us to identify accountability patterns. These accountability patterns can be catalogued and used to improve systems and their architectures.





Deep Learning based Person Re-identification. (arXiv:2005.03293v1 [cs.CV])

Automated person re-identification in a multi-camera surveillance setup is very important for effective tracking and monitoring of crowd movement. In recent years, a few deep learning based re-identification approaches have been developed which are quite accurate but time-intensive, and hence not very suitable for practical purposes. In this paper, we propose an efficient hierarchical re-identification approach in which a color histogram based comparison is first employed to find the closest matches in the gallery set, and deep feature based comparison is next carried out using a Siamese network. Reduction of the search space after the first level of matching helps achieve a fast response time and also improves the accuracy of prediction by the Siamese network by eliminating vastly dissimilar elements. A silhouette part-based feature extraction scheme is adopted at each level of the hierarchy to preserve the relative locations of the different body structures and make the appearance descriptors more discriminating. The proposed approach has been evaluated on five public data sets as well as a new data set captured by our team in our laboratory. Results reveal that it outperforms most state-of-the-art approaches in terms of overall accuracy.
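
A minimal sketch of the two-level hierarchy: a cheap colour-histogram distance prunes the gallery, and only the shortlist is ranked by the expensive comparison. The deep_distance() placeholder stands in for the Siamese network, and the shortlist size is an arbitrary choice.

```python
# Sketch: two-level matching. A cheap colour-histogram comparison first
# prunes the gallery; only the survivors go through the expensive deep
# comparison (stood in for here by a placeholder distance).
import numpy as np

def colour_hist(img, bins=16):
    """Per-channel intensity histogram, concatenated and normalised."""
    h = np.concatenate([np.histogram(img[..., c], bins=bins, range=(0, 1))[0]
                        for c in range(3)]).astype(float)
    return h / h.sum()

def deep_distance(a, b):
    """Placeholder for the Siamese-network distance on image pairs."""
    return np.linalg.norm(a - b)   # stand-in only

rng = np.random.default_rng(0)
gallery = [rng.random((32, 16, 3)) for _ in range(100)]
query = rng.random((32, 16, 3))

# Level 1: keep the 10 gallery entries with the closest histograms.
qh = colour_hist(query)
hist_d = [np.abs(colour_hist(g) - qh).sum() for g in gallery]
shortlist = np.argsort(hist_d)[:10]

# Level 2: rank the shortlist with the (expensive) deep comparison.
best = min(shortlist, key=lambda i: deep_distance(gallery[i], query))
print("best match index:", best)
```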





YANG2UML: Bijective Transformation and Simplification of YANG to UML. (arXiv:2005.03292v1 [cs.SE])

Software Defined Networking is currently revolutionizing computer networking by decoupling the network control (control plane) from the forwarding functions (data plane), enabling the network control to become directly programmable and the underlying infrastructure to be abstracted for applications and network services. Next to the well-known OpenFlow protocol, the XML-based NETCONF protocol is also an important means for exchanging configuration information with a management platform and is nowadays even part of OpenFlow. In combination with NETCONF, YANG is the corresponding language that defines the associated data structures, supporting virtually all network configuration protocols. YANG itself is a semantically rich language which, in order to facilitate familiarization with the relevant subject, is often visualized to involve other experts or developers and to support them in their daily work (writing applications that make use of YANG). To support this process, this paper presents a novel approach to optimize and simplify YANG data models, assisting further discussions with management and implementations (especially of interfaces) by reducing complexity. To this end, we have defined a bidirectional mapping of YANG to UML and developed a tool that renders the created UML diagrams. This combines the benefits of using the formal language YANG with automatically maintained UML diagrams to involve other experts or developers, closing the gap between technically improved data models and their human readability.





Multi-view data capture using edge-synchronised mobiles. (arXiv:2005.03286v1 [cs.MM])

Multi-view data capture permits free-viewpoint video (FVV) content creation. To this end, several users must capture video streams, calibrated in both time and pose, framing the same object/scene, from different viewpoints. New-generation network architectures (e.g. 5G) promise lower latency and larger bandwidth connections supported by powerful edge computing, properties that seem ideal for reliable FVV capture. We have explored this possibility, aiming to remove the need for bespoke synchronisation hardware when capturing a scene from multiple viewpoints, making it possible through off-the-shelf mobiles. We propose a novel and scalable data capture architecture that exploits edge resources to synchronise and harvest frame captures. We have designed an edge computing unit that supervises the relaying of timing triggers to and from multiple mobiles, in addition to synchronising frame harvesting. We empirically show the benefits of our edge computing unit by analysing latencies and show the quality of 3D reconstruction outputs against an alternative and popular centralised solution based on Unity3D.





Continuous maximal covering location problems with interconnected facilities. (arXiv:2005.03274v1 [math.OC])

In this paper we analyze a continuous version of the maximal covering location problem, in which the facilities are required to be interconnected by means of a graph structure where two facilities are allowed to be linked if a given distance is not exceeded. We provide a mathematical programming framework for the problem and different resolution strategies. First, we propose a Mixed Integer Non-Linear Programming formulation and derive properties of the problem that allow us to project out the continuous variables, avoiding the nonlinear constraints and resulting in an equivalent pure integer programming formulation. Since the number of constraints in the integer programming formulation is large and the constraints are, in general, difficult to handle, we propose two branch-&-cut approaches that avoid the complete enumeration of the constraints, resulting in more efficient procedures. We report the results of an extensive battery of computational experiments comparing the performance of the different approaches.





RNN-T Models Fail to Generalize to Out-of-Domain Audio: Causes and Solutions. (arXiv:2005.03271v1 [eess.AS])

In recent years, all-neural end-to-end approaches have obtained state-of-the-art results on several challenging automatic speech recognition (ASR) tasks. However, most existing works focus on building ASR models where train and test data are drawn from the same domain. This results in poor generalization on mismatched domains: e.g., end-to-end models trained on short segments perform poorly when evaluated on longer utterances. In this work, we analyze the generalization properties of streaming and non-streaming recurrent neural network transducer (RNN-T) based end-to-end models in order to identify model components that negatively affect generalization performance. We propose two solutions: combining multiple regularization techniques during training, and using dynamic overlapping inference. On a long-form YouTube test set, when the non-streaming RNN-T model is trained with shorter segments of data, the proposed combination improves word error rate (WER) from 22.3% to 14.8%; when the streaming RNN-T model is trained on short Search queries, the proposed techniques improve WER on the YouTube set from 67.0% to 25.3%. Finally, when trained on Librispeech, we find that dynamic overlapping inference improves WER on YouTube from 99.8% to 33.0%.
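
The sketch below illustrates overlapping inference in its simplest form: decode overlapping audio windows and stitch the hypotheses, here with a naive drop-the-overlap merge rule rather than the paper's matching procedure; recognize() is a placeholder for the RNN-T decoder.

```python
# Sketch: overlapping inference for long-form audio. The model is run on
# overlapping windows and the window hypotheses are stitched together;
# the merge rule here (drop the overlapped prefix) is a simplification.
import numpy as np

def recognize(window):
    """Placeholder for an RNN-T decode of one audio window."""
    return [f"tok{int(abs(window.sum())) % 7}" for _ in range(3)]   # stand-in

def overlapping_inference(audio, sr=16000, win_sec=8.0, hop_sec=6.0):
    win, hop = int(sr * win_sec), int(sr * hop_sec)
    hypothesis = []
    for start in range(0, max(len(audio) - win, 0) + 1, hop):
        tokens = recognize(audio[start:start + win])
        # Keep only tokens past the overlap for every window after the first.
        hypothesis.extend(tokens if start == 0 else tokens[1:])
    return hypothesis

audio = np.random.randn(16000 * 20)   # 20 s of audio
print(overlapping_inference(audio))
```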





Adaptive Feature Selection Guided Deep Forest for COVID-19 Classification with Chest CT. (arXiv:2005.03264v1 [eess.IV])

Chest computed tomography (CT) has become an effective tool to assist the diagnosis of coronavirus disease 2019 (COVID-19). Due to the worldwide outbreak of COVID-19, using computer-aided diagnosis techniques for COVID-19 classification based on CT images could largely alleviate the burden on clinicians. In this paper, we propose an Adaptive Feature Selection guided Deep Forest (AFS-DF) for COVID-19 classification based on chest CT images. Specifically, we first extract location-specific features from the CT images. Then, in order to capture a high-level representation of these features with relatively small-scale data, we leverage a deep forest model. Moreover, we propose a feature selection method based on the trained deep forest model to reduce feature redundancy, where the feature selection can be adaptively incorporated with the COVID-19 classification model. We evaluated the proposed AFS-DF on a COVID-19 dataset with 1495 COVID-19 patients and 1027 patients with community acquired pneumonia (CAP). The accuracy (ACC), sensitivity (SEN), specificity (SPE) and AUC achieved by our method are 91.79%, 93.05%, 89.95% and 96.35%, respectively. Experimental results on the COVID-19 dataset suggest that the proposed AFS-DF achieves superior performance in COVID-19 vs. CAP classification, compared with four widely used machine learning methods.





DFSeer: A Visual Analytics Approach to Facilitate Model Selection for Demand Forecasting. (arXiv:2005.03244v1 [cs.HC])

Selecting an appropriate model to forecast product demand is critical to the manufacturing industry. However, due to the data complexity, market uncertainty and users' demanding requirements for the model, it is challenging for demand analysts to select a proper model. Although existing model selection methods can reduce the manual burden to some extent, they often fail to present model performance details on individual products and reveal the potential risk of the selected model. This paper presents DFSeer, an interactive visualization system to conduct reliable model selection for demand forecasting based on the products with similar historical demand. It supports model comparison and selection with different levels of details. Besides, it shows the difference in model performance on similar products to reveal the risk of model selection and increase users' confidence in choosing a forecasting model. Two case studies and interviews with domain experts demonstrate the effectiveness and usability of DFSeer.





Multi-Target Deep Learning for Algal Detection and Classification. (arXiv:2005.03232v1 [cs.CV])

Water quality has a direct impact on industry, agriculture, and public health. Algae species are common indicators of water quality, because algal communities are sensitive to changes in their habitats and thus give valuable knowledge on variations in water quality. However, water quality analysis requires professional inspection for algal detection and classification under microscopes, which is very time-consuming and tedious. In this paper, we propose a novel multi-target deep learning framework for algal detection and classification. Extensive experiments were carried out on a large-scale colored microscopic algal dataset. Experimental results demonstrate that the proposed method leads to promising performance on algal detection, class identification and genus identification.





Hierarchical Predictive Coding Models in a Deep-Learning Framework. (arXiv:2005.03230v1 [cs.CV])

Bayesian predictive coding is a putative neuromorphic method for acquiring higher-level neural representations to account for sensory input. Although originating in the neuroscience community, there are also efforts in the machine learning community to study these models. This paper reviews some of the more well known models. Our review analyzes module connectivity and patterns of information transfer, seeking to find general principles used across the models. We also survey some recent attempts to cast these models within a deep learning framework. A defining feature of Bayesian predictive coding is that it uses top-down, reconstructive mechanisms to predict incoming sensory inputs or their lower-level representations. Discrepancies between the predicted and the actual inputs, known as prediction errors, then give rise to future learning that refines and improves the predictive accuracy of learned higher-level representations. Predictive coding models intended to describe computations in the neocortex emerged prior to the development of deep learning and used a communication structure between modules that we name the Rao-Ballard protocol. This protocol was derived from a Bayesian generative model with some rather strong statistical assumptions. The RB protocol provides a rubric to assess the fidelity of deep learning models that claim to implement predictive coding.
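
A minimal linear sketch of the top-down/bottom-up loop: the higher level predicts the input through generative weights, and the prediction error drives both inference (updating the representation) and learning (updating the weights). Dimensions, learning rates, and the linear model are arbitrary illustrations, not any specific model from the review.

```python
# Sketch: one predictive-coding loop in the Rao-Ballard spirit.
# A higher level holds a representation r; its top-down prediction of the
# input is U @ r, and the prediction error drives updates of r (inference)
# and U (learning).
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=16)            # sensory input (lower level)
U = rng.normal(size=(16, 4)) * 0.1 # top-down generative weights
r = np.zeros(4)                    # higher-level representation

for _ in range(100):
    error = x - U @ r              # bottom-up prediction error
    r += 0.1 * U.T @ error         # inference: refine the representation
    U += 0.01 * np.outer(error, r) # learning: refine the generative model

print(f"residual error: {np.linalg.norm(x - U @ r):.4f}")
```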





End-to-End Domain Adaptive Attention Network for Cross-Domain Person Re-Identification. (arXiv:2005.03222v1 [cs.CV])

Person re-identification (re-ID) remains challenging in a real-world scenario, as it requires a trained network to generalise to totally unseen target data in the presence of variations across domains. Recently, generative adversarial models have been widely adopted to enhance the diversity of training data. These approaches, however, often fail to generalise to other domains, as existing generative person re-identification models have a disconnect between the generative component and the discriminative feature learning stage. To address the on-going challenges regarding model generalisation, we propose an end-to-end domain adaptive attention network to jointly translate images between domains and learn discriminative re-id features in a single framework. To address the domain gap challenge, we introduce an attention module for image translation from source to target domains without affecting the identity of a person. More specifically, attention is directed to the background instead of the entire image of the person, ensuring identifying characteristics of the subject are preserved. The proposed joint learning network results in a significant performance improvement over state-of-the-art methods on several benchmark datasets.





Hierarchical Attention Network for Action Segmentation. (arXiv:2005.03209v1 [cs.CV])

The temporal segmentation of events is an essential task and a precursor for the automatic recognition of human actions in video. Several attempts have been made to capture frame-level salient aspects through attention, but they lack the capacity to effectively map the temporal relationships between the frames, as they only capture a limited span of temporal dependencies. To this end, we propose a complete end-to-end supervised learning approach that can better learn relationships between actions over time, thus improving the overall segmentation performance. The proposed hierarchical recurrent attention framework analyses the input video at multiple temporal scales to form frame-level and segment-level embeddings and perform fine-grained action segmentation. This yields a simple, lightweight, yet extremely effective architecture for segmenting continuous video streams, with multiple application domains. We evaluate our system on multiple challenging public benchmark datasets, including the MERL Shopping, 50 Salads, and Georgia Tech Egocentric datasets, where it achieves state-of-the-art performance. The evaluated datasets encompass numerous video capture settings, including static overhead camera views and dynamic, egocentric head-mounted camera views, demonstrating the direct applicability of the proposed framework in a variety of settings.





Distributed Stabilization by Probability Control for Deterministic-Stochastic Large Scale Systems : Dissipativity Approach. (arXiv:2005.03193v1 [eess.SY])

Using a dissipativity approach, we establish a stability condition for the feedback connection of a deterministic dynamical system $\Sigma$ and a stochastic memoryless map $\Psi$. We then extend the result to a class of large-scale systems in which $\Sigma$ consists of many sub-systems and $\Psi$ consists of many "stochastic actuators" and "probability controllers" that control the actuators' output events. We demonstrate the proposed approach by showing design procedures that globally stabilize manufacturing systems while locally balancing stock levels in any production process.





A Dynamical Perspective on Point Cloud Registration. (arXiv:2005.03190v1 [cs.CV])

We provide a dynamical perspective on the classical problem of 3D point cloud registration with correspondences. A point cloud is considered as a rigid body consisting of particles. The problem of registering two point clouds is formulated as a dynamical system, where the dynamic model point cloud translates and rotates in a viscous environment towards the static scene point cloud, under forces and torques induced by virtual springs placed between each pair of corresponding points. We first show that the potential energy of the system recovers the objective function of the maximum likelihood estimation. We then adopt Lyapunov analysis, particularly the invariant set theorem, to analyze the rigid body dynamics and show that the system globally asymptotically tends towards the set of equilibrium points, in which the globally optimal registration solution lies. We conjecture that, besides the globally optimal equilibrium point, the system has either three or infinitely many "spurious" equilibrium points, and that these spurious equilibria are all locally unstable. The case of three spurious equilibria corresponds to a generic shape of the point cloud, while the case of infinitely many spurious equilibria arises when the point cloud exhibits symmetry. Therefore, simulating the dynamics with random perturbations is guaranteed to obtain the globally optimal registration solution. Numerical experiments support our analysis and conjecture.
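
For intuition, the 2-D sketch below follows the overdamped (viscous) limit of the spring dynamics: the pose flows down the gradient of the potential energy, the sum of squared spring extensions, which is exactly the maximum-likelihood objective. The step size and the restriction to 2-D are simplifications of the paper's 3-D rigid-body analysis.

```python
# Sketch: overdamped spring dynamics for 2-D registration with known
# correspondences. The potential energy sum ||R p_i + t - q_i||^2 is the
# likelihood objective; following its gradient mimics viscous motion.
import numpy as np

rng = np.random.default_rng(0)
P = rng.normal(size=(30, 2))                     # model point cloud
theta_true, t_true = 0.7, np.array([1.0, -2.0])
R_true = np.array([[np.cos(theta_true), -np.sin(theta_true)],
                   [np.sin(theta_true),  np.cos(theta_true)]])
Q = P @ R_true.T + t_true                        # scene point cloud

theta, t = 0.0, np.zeros(2)                      # initial pose
lr = 0.05
for _ in range(500):
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    residual = P @ R.T + t - Q                   # spring extensions
    # Gradients of the potential energy w.r.t. translation and angle.
    grad_t = 2 * residual.sum(axis=0)
    dR = np.array([[-s, -c], [c, -s]])           # dR/dtheta
    grad_theta = 2 * np.sum(residual * (P @ dR.T))
    t -= lr * grad_t / len(P)                    # viscous (first-order) flow
    theta -= lr * grad_theta / len(P)

print(f"recovered theta={theta:.3f} (true {theta_true}), t={t.round(3)}")
```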





Determinantal Point Processes in Randomized Numerical Linear Algebra. (arXiv:2005.03185v1 [cs.DS])

Randomized Numerical Linear Algebra (RandNLA) uses randomness to develop improved algorithms for matrix problems that arise in scientific computing, data science, machine learning, etc. Determinantal Point Processes (DPPs), a seemingly unrelated topic in pure and applied mathematics, are a class of stochastic point processes with probability distribution characterized by sub-determinants of a kernel matrix. Recent work has uncovered deep and fruitful connections between DPPs and RandNLA which lead to new guarantees and improved algorithms that are of interest to both areas. We provide an overview of this exciting new line of research, including brief introductions to RandNLA and DPPs, as well as applications of DPPs to classical linear algebra tasks such as least squares regression, low-rank approximation and the Nyström method. For example, random sampling with a DPP leads to new kinds of unbiased estimators for least squares, enabling more refined statistical and inferential understanding of these algorithms; a DPP is, in some sense, an optimal randomized algorithm for the Nyström method; and a RandNLA technique called leverage score sampling can be derived as the marginal distribution of a DPP. We also discuss recent algorithmic developments, illustrating that, while not quite as efficient as standard RandNLA techniques, DPP-based algorithms are only moderately more expensive.
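
As a concrete taste of the RandNLA side, the sketch below computes row leverage scores from a thin SVD and samples rows i.i.d. proportionally to them; a DPP would instead sample a diverse subset jointly, with these scores as its marginals.

```python
# Sketch: classical leverage score sampling, the RandNLA primitive that
# the survey relates to DPP marginals. Scores come from the thin SVD.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(1000, 10))

U, _, _ = np.linalg.svd(A, full_matrices=False)
leverage = (U ** 2).sum(axis=1)          # row leverage scores, sum to rank(A)
probs = leverage / leverage.sum()

# Sample rows i.i.d. proportionally to leverage (a DPP would sample a
# diverse set jointly; here the marginals alone guide the sampling).
idx = rng.choice(len(A), size=50, replace=True, p=probs)
sketch = A[idx] / np.sqrt(50 * probs[idx, None])   # reweight for unbiasedness
print(sketch.shape)
```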





On Optimal Control of Discounted Cost Infinite-Horizon Markov Decision Processes Under Local State Information Structures. (arXiv:2005.03169v1 [eess.SY])

This paper investigates a class of optimal control problems associated with Markov processes under local state information. The decision-maker has only local access to a subset of the state vector information, as often encountered in decentralized control problems in multi-agent systems. Under this information structure, part of the state vector cannot be observed. We leverage ab initio principles and derive a new form of the Bellman equations to characterize the optimal policies of the control problem under local information structures. The dynamic programming solutions feature a mixture of dynamics associated with the unobservable state components and the local state-feedback policy based on the observable local information. We further characterize the optimal local-state feedback policy using linear programming methods. To reduce the computational complexity of the optimal policy, we propose an approximate algorithm based on virtual beliefs to find a sub-optimal policy. We show performance bounds on the sub-optimal solution and corroborate the results with numerical case studies.





A Gentle Introduction to Quantum Computing Algorithms with Applications to Universal Prediction. (arXiv:2005.03137v1 [quant-ph])

In this technical report we give an elementary introduction to quantum computing for non-physicists. We describe in detail some of the foundational quantum algorithms, including the Deutsch-Jozsa algorithm, Shor's algorithm, Grover's search, and the quantum counting algorithm, and briefly the Harrow-Lloyd algorithm. Additionally, we give an introduction to Solomonoff induction, a theoretically optimal method for prediction. We then attempt to use quantum computing to find better algorithms for the approximation of Solomonoff induction. This is done by using techniques from other quantum computing algorithms to achieve a speedup in computing the speed prior, which is an approximation of Solomonoff's prior and a key part of Solomonoff induction. The major limiting factor is that the probabilities being computed are often so small that, without a sufficient (often large) number of trials, the error may be larger than the result. If a substantial speedup in the computation of an approximation of Solomonoff induction can be achieved through quantum computing, then this can be applied to the field of intelligent agents as a key part of an approximation of the agent AIXI.





Catch Me If You Can: Using Power Analysis to Identify HPC Activity. (arXiv:2005.03135v1 [cs.CR])

Monitoring users on large computing platforms such as high performance computing (HPC) and cloud computing systems is non-trivial. Utilities such as process viewers provide limited insight into what users are running due to granularity limitations, and other sources of data, such as system call tracing, can impose significant operational overhead. However, despite technical and procedural measures, instances of users abusing valuable HPC resources for personal gain have been documented in the past \cite{hpcbitmine}, and systems that are open to large numbers of loosely-verified users from around the world are at risk of abuse. In this paper, we show how electrical power consumption data from an HPC platform can be used to identify what programs are executed. The intuition is that during execution, programs exhibit various patterns of CPU and memory activity. These patterns are reflected in the power consumption of the system and can be used to identify the programs running. We test our approach on an HPC rack at Lawrence Berkeley National Laboratory using a variety of scientific benchmarks. Among other interesting observations, our results show that by monitoring the power consumption of an HPC rack, it is possible to identify whether particular programs are running, with precision and recall of up to 95% even in noisy scenarios.





Evaluation, Tuning and Interpretation of Neural Networks for Meteorological Applications. (arXiv:2005.03126v1 [physics.ao-ph])

Neural networks have opened up many new opportunities to utilize remotely sensed images in meteorology. Common applications include image classification, e.g., to determine whether an image contains a tropical cyclone, and image translation, e.g., to emulate radar imagery for satellites that only have passive channels. However, there are yet many open questions regarding the use of neural networks in meteorology, such as best practices for evaluation, tuning and interpretation. This article highlights several strategies and practical considerations for neural network development that have not yet received much attention in the meteorological community, such as the concept of effective receptive fields, underutilized meteorological performance measures, and methods for NN interpretation, such as synthetic experiments and layer-wise relevance propagation. We also consider the process of neural network interpretation as a whole, recognizing it as an iterative scientist-driven discovery process, and breaking it down into individual steps that researchers can take. Finally, while most work on neural network interpretation in meteorology has so far focused on networks for image classification tasks, we expand the focus to also include networks for image translation.





Strong replica symmetry in high-dimensional optimal Bayesian inference. (arXiv:2005.03115v1 [math.PR])

We consider generic optimal Bayesian inference, namely, models of signal reconstruction where the posterior distribution and all hyperparameters are known. Under a standard assumption on the concentration of the free energy, we show how replica symmetry in the strong sense of concentration of all multioverlaps can be established as a consequence of the Franz-de Sanctis identities; the identities themselves in the current setting are obtained via a novel perturbation of the prior distribution of the signal. Concentration of multioverlaps means that asymptotically the posterior distribution has a particularly simple structure encoded by a random probability measure (or, in the case of binary signal, a non-random probability measure). We believe that such strong control of the model should be key in the study of inference problems with underlying sparse graphical structure (error correcting codes, block models, etc) and, in particular, in the derivation of replica symmetric formulas for the free energy and mutual information in this context.





Constrained de Bruijn Codes: Properties, Enumeration, Constructions, and Applications. (arXiv:2005.03102v1 [cs.IT])

The de Bruijn graph, its sequences, and their various generalizations have found many applications in information theory, including many new ones in the last decade. In this paper, motivated by a coding problem for emerging memory technologies, a set of sequences which generalize sequences in the de Bruijn graph is defined. These sequences can also be defined and viewed as constrained sequences. Hence, they will be called constrained de Bruijn sequences, and a set of such sequences will be called a constrained de Bruijn code. Several properties and alternative definitions for such codes are examined, and they are analyzed as generalized sequences in the de Bruijn graph (and its generalization) and as constrained sequences. Various enumeration techniques are used to compute the total number of sequences for any given set of parameters. A construction method for such codes from the theory of shift-register sequences is proposed. Finally, we show how these constrained de Bruijn sequences and codes can be applied in constructions of codes for correcting synchronization errors in the $\ell$-symbol read channel and in the racetrack memory channel. For this purpose, these codes are superior in size to previously known codes.





Scale-Equalizing Pyramid Convolution for Object Detection. (arXiv:2005.03101v1 [cs.CV])

Feature pyramids have been an efficient method to extract features at different scales. Development of this method has mainly focused on aggregating contextual information at different levels, while seldom touching the inter-level correlation in the feature pyramid. Early computer vision methods extracted scale-invariant features by locating feature extrema in both the spatial and scale dimensions. Inspired by this, a convolution across the pyramid levels is proposed in this study, which is termed pyramid convolution and is a modified 3-D convolution. Stacked pyramid convolutions directly extract 3-D (scale and spatial) features and outperform other meticulously designed feature fusion modules. Based on the viewpoint of 3-D convolution, an integrated batch normalization that collects statistics from the whole feature pyramid is naturally inserted after the pyramid convolution. Furthermore, we also show that the naive pyramid convolution, together with the design of the RetinaNet head, actually best applies to extracting features from a Gaussian pyramid, whose properties can hardly be satisfied by a feature pyramid. In order to alleviate this discrepancy, we build a scale-equalizing pyramid convolution (SEPC) that aligns the shared pyramid convolution kernel only at high-level feature maps. Being computationally efficient and compatible with the head design of most single-stage object detectors, the SEPC module brings significant performance improvement ($>4$ AP increase on the MS-COCO2017 dataset) in state-of-the-art one-stage object detectors, and a light version of SEPC also has a $\sim 3.5$ AP gain with only around 7% more inference time. The pyramid convolution also functions well as a stand-alone module in two-stage object detectors and is able to improve performance by $\sim 2$ AP. The source code can be found at https://github.com/jshilong/SEPC.





Optimal Location of Cellular Base Station via Convex Optimization. (arXiv:2005.03099v1 [cs.IT])

An optimal base station (BS) location depends on the traffic (user) distribution, propagation pathloss and many system parameters, which renders its analytical study difficult so that numerical algorithms are widely used instead. In this paper, the problem is studied analytically. First, it is formulated as a convex optimization problem to minimize the total BS transmit power subject to quality-of-service (QoS) constraints, which also account for fairness among users. Due to its convex nature, Karush-Kuhn-Tucker (KKT) conditions are used to characterize a globally-optimum location as a convex combination of user locations, where convex weights depend on user parameters, pathloss exponent and overall geometry of the problem. Based on this characterization, a number of closed-form solutions are obtained. In particular, the optimum BS location is the mean of user locations in the case of free-space propagation and identical user parameters. If the user set is symmetric (as defined in the paper), the optimal BS location is independent of pathloss exponent, which is not the case in general. The analytical results show the impact of propagation conditions as well as system and user parameters on optimal BS location and can be used to develop design guidelines.
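
A small numerical illustration, under strong simplifications (identical user parameters, no explicit QoS constraints): minimizing total power proportional to the sum of distances raised to the pathloss exponent, via a Weiszfeld-style fixed-point iteration. Consistent with the abstract, the free-space case (exponent 2) returns the mean of the user locations.

```python
# Sketch: locating a base station by minimizing total transmit power
# ~ sum_i d_i^nu (identical user parameters), via a fixed-point iteration
# derived from the stationarity condition. For free space (nu = 2) the
# optimum is the mean of the user locations, as the abstract notes.
import numpy as np

def optimal_bs_location(users, nu=3.5, iters=200):
    x = users.mean(axis=0)                       # start at the centroid
    for _ in range(iters):
        d = np.linalg.norm(users - x, axis=1)
        w = d ** (nu - 2)                        # stationarity weights
        x = (w[:, None] * users).sum(axis=0) / w.sum()
    return x

rng = np.random.default_rng(0)
users = rng.uniform(0, 10, size=(20, 2))
print("nu=2  :", optimal_bs_location(users, nu=2).round(3))  # = centroid
print("nu=3.5:", optimal_bs_location(users, nu=3.5).round(3))
```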





Inference with Choice Functions Made Practical. (arXiv:2005.03098v1 [cs.AI])

We study how to infer new choices from previous choices in a conservative manner. To make such inferences, we use the theory of choice functions: a unifying mathematical framework for conservative decision making that allows one to impose axioms directly on the represented decisions. We here adopt the coherence axioms of De Bock and De Cooman (2019). We show how to naturally extend any given choice assessment to such a coherent choice function, whenever possible, and use this natural extension to make new choices. We present a practical algorithm to compute this natural extension and provide several methods that can be used to improve its scalability.





Heterogeneous Facility Location Games. (arXiv:2005.03095v1 [cs.GT])

We study heterogeneous $k$-facility location games. In this model there are $k$ facilities, each serving a different purpose; thus, the preferences of the agents over the facilities can vary arbitrarily. Our goal is to design strategyproof mechanisms that place the facilities in a way that maximizes the minimum utility among the agents. For $k=1$, if the agents' locations are known, we prove that the mechanism that places the facility on an optimal location is strategyproof. For $k \geq 2$, we prove that there is no optimal strategyproof mechanism, deterministic or randomized, even when $k=2$, there are only two agents with known locations, and the facilities have to be placed on a line segment. We derive inapproximability bounds for deterministic and randomized strategyproof mechanisms. Finally, we focus on the line segment and provide strategyproof mechanisms that achieve constant approximation. All of our mechanisms are simple and communication-efficient. As a byproduct, we show that some of our mechanisms can be used to achieve constant-factor approximations for other objectives, such as social welfare and happiness.





Line Artefact Quantification in Lung Ultrasound Images of COVID-19 Patients via Non-Convex Regularisation. (arXiv:2005.03080v1 [eess.IV])

In this paper, we present a novel method for line artefact quantification in lung ultrasound (LUS) images of COVID-19 patients. We formulate this as a non-convex regularisation problem involving a sparsity-enforcing, Cauchy-based penalty function and the inverse Radon transform. We employ a simple local maxima detection technique in the Radon transform domain, associated with known clinical definitions of line artefacts. Despite being non-convex, the proposed method has guaranteed convergence via a proximal splitting algorithm and accurately identifies both horizontal and vertical line artefacts in LUS images. In order to reduce the number of false and missed detections, our method includes a two-stage validation mechanism, performed in both the Radon and image domains. We evaluate the performance of the proposed method in comparison to the current state-of-the-art B-line identification method and show a considerable performance gain, with 87% correctly detected B-lines in LUS images of nine COVID-19 patients. In addition, owing to its fast convergence, which takes around 12 seconds per frame, our proposed method is readily applicable for processing LUS image sequences.
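
The sketch below shows only the local-maxima-in-Radon-domain ingredient on a synthetic frame with one bright line, using scikit-image's radon transform; the paper's Cauchy-penalised inverse problem and two-stage validation are omitted.

```python
# Sketch: detecting a line artefact as a peak in the Radon domain,
# the core idea behind the quantification (greatly simplified here).
# Requires scikit-image (pip install scikit-image).
import numpy as np
from skimage.transform import radon

# Synthetic frame with one bright vertical line standing in for an artefact.
image = np.zeros((128, 128))
image[:, 64] = 1.0

theta = np.arange(180.0)
sinogram = radon(image, theta=theta, circle=False)  # rows: offsets, cols: angles

# A straight line in the image becomes a peak at its (offset, angle) pair.
offset, angle_idx = np.unravel_index(np.argmax(sinogram), sinogram.shape)
print(f"peak at angle {theta[angle_idx]:.0f} deg, projection offset {offset}")
```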





Categorical Vector Space Semantics for Lambek Calculus with a Relevant Modality. (arXiv:2005.03074v1 [cs.CL])

We develop a categorical compositional distributional semantics for Lambek Calculus with a Relevant Modality, !L*, which has a limited version of the contraction and permutation rules. The categorical part of the semantics is a monoidal biclosed category with a coalgebra modality, very similar to the structure of a Differential Category. We instantiate this category to finite dimensional vector spaces and linear maps via "quantisation" functors and work with three concrete interpretations of the coalgebra modality. We apply the model to construct categorical and concrete semantic interpretations for the motivating example of !L*: the derivation of a phrase with a parasitic gap. The effectiveness of the concrete interpretations is evaluated via a disambiguation task on an extension of a sentence disambiguation dataset to parasitic gap phrases, using BERT, Word2Vec, and FastText vectors and relational tensors.





I Always Feel Like Somebody's Sensing Me! A Framework to Detect, Identify, and Localize Clandestine Wireless Sensors. (arXiv:2005.03068v1 [cs.CR])

The increasing ubiquity of low-cost wireless sensors in smart homes and buildings has enabled users to easily deploy systems to remotely monitor and control their environments. However, this raises privacy concerns for third-party occupants, such as a hotel room guest who may be unaware of deployed clandestine sensors. Previous methods focused on specific modalities, such as detecting cameras, and do not provide a generalizable and comprehensive way to capture arbitrary sensors which may be "spying" on a user. In this work, we seek to determine whether one can walk into a room and detect any wireless sensor monitoring an individual. To this end, we propose SnoopDog, a framework to not only detect wireless sensors that are actively monitoring a user, but also classify and localize each device. SnoopDog works by establishing causality between patterns in observable wireless traffic and a trusted sensor in the same space, e.g., an inertial measurement unit (IMU) that captures a user's movement. Once causality is established, SnoopDog performs packet inspection to inform the user about the monitoring device. Finally, SnoopDog localizes the clandestine device in a 2D plane using a novel trial-based localization technique. We evaluated SnoopDog across several devices and various modalities and were able to detect causality 96.6% of the time, classify suspicious devices with 100% accuracy, and localize devices to a sufficiently reduced sub-space.





Learning, transferring, and recommending performance knowledge with Monte Carlo tree search and neural networks. (arXiv:2005.03063v1 [cs.LG])

Making changes to a program to optimize its performance is an unscalable task that relies entirely upon human intuition and experience. In addition, companies operating at large scale have reached a point where no single individual understands all the code controlling their systems, and for this reason, making changes to improve performance can become intractably difficult. In this paper, a learning system is introduced that provides AI assistance for finding recommended changes to a program. Specifically, it is shown how the evaluative-feedback, delayed-reward performance programming domain can be effectively formulated via the Monte Carlo tree search (MCTS) framework. It is then shown that established methods from computational games for using learning to expedite tree-search computation can be adapted to speed up computing recommended program alterations. Estimates of expected utility from MCTS trees built for previous problems are used to learn a sampling policy that remains effective across new problems, thus demonstrating transferability of optimization knowledge. This formulation is applied to the Apache Spark distributed computing environment, and a preliminary result is observed: the time required to build a search tree for finding recommendations is reduced by a factor of up to 10.
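
For readers unfamiliar with the framing, here is a minimal UCT-style MCTS over a toy space of binary "program alterations" with a noisy stand-in reward. It shows the selection/expansion/backpropagation skeleton only, not the paper's learned sampling policy or Spark integration.

```python
# Sketch: a minimal UCT search over a toy "program alteration" space.
# Each depth picks one of two alterations; the leaf reward is a stand-in
# for a measured program performance improvement.
import math, random

DEPTH, ACTIONS = 4, (0, 1)

def reward(path):
    """Stand-in performance measurement for a sequence of alterations."""
    return sum(path) / DEPTH + random.gauss(0, 0.05)   # noisy evaluation

class Node:
    def __init__(self):
        self.children, self.visits, self.value = {}, 0, 0.0

def uct(parent, child, c=1.4):
    if child.visits == 0:
        return float("inf")
    return (child.value / child.visits
            + c * math.sqrt(math.log(parent.visits) / child.visits))

root = Node()
for _ in range(2000):
    node, path, trail = root, [], [root]
    while len(path) < DEPTH:                      # selection / expansion
        for a in ACTIONS:
            node.children.setdefault(a, Node())
        a = max(ACTIONS, key=lambda a: uct(node, node.children[a]))
        node = node.children[a]
        path.append(a); trail.append(node)
    r = reward(path)                              # simulation (leaf eval)
    for n in trail:                               # backpropagation
        n.visits += 1; n.value += r

best = max(ACTIONS, key=lambda a: root.children[a].visits)
print("most-visited first alteration:", best)
```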





Overview of Surgical Simulation. (arXiv:2005.03011v1 [cs.HC])

Motivated by the current demand of clinical governance, surgical simulation is now a well-established modality for basic skills training and assessment. The practical deployment of the technique is a multi-disciplinary venture encompassing areas in engineering, medicine and psychology. This paper provides an overview of the key topics involved in surgical simulation and associated technical challenges. The paper discusses the clinical motivation for surgical simulation, the use of virtual environments for surgical training, model acquisition and simplification, deformable models, collision detection, tissue property measurement, haptic rendering and image synthesis. Additional topics include surgical skill training and assessment metrics as well as challenges facing the incorporation of surgical simulation into medical education curricula.





When Retired Soccer Star Briana Scurry Knew Her Career Was Over

After several weeks of not playing because of a concussion, and then failing several baseline tests, Briana Scurry became very worried.





The Hit That Ended Briana Scurry's Soccer Career

"I knew I was in trouble ... I didn't know how much trouble," says retired soccer star Briana Scurry.





What “Friday Night Tykes” Can Teach Us About Youth Football

Why do some parents and coaches think it's okay to let 9-year-old kids get hit in the head over and over in football practices and games?





How Personalized Landing Pages Can Make Your Site More Profitable

Personalization is one of the most effective marketing techniques to connect with customers online. While the exact methods are different for every business, adding personalized elements to landing pages is a proven method of driving conversions on your site. But why is it so successful? The simple answer is that personalization shows customers that you […]






Category Page Design Examples: 6 Category Page Inspirations

Dozens of people find your business when looking for a type of product but aren’t sure which product fits their needs best. With a well-designed and organized category page, you’ll help people browse products easier and find what they want. To help you get inspired, let’s take a look at some excellent category page design […]






Website Statistics for 2020: 10 Critical Stats to Know for Web Design

Are you looking to start 2020 with a fresh web design for your business? If so, you must know what you need to do in 2020 to have a website that drives success for your business. With website statistics for 2020, you can see what to do and what to avoid, which will help you […]






How Biofuels Can Cool Our Climate and Strengthen Our Ecosystems

By Evan H. DeLucia, courtesy of EOS. Critics of biofuels like ethanol argue they are an unsustainable use of land. But with careful management, next-generation grass-based biofuels can net climate savings and improve their ecosystems. As the world seeks strategies …





Closure of Diablo Canyon Nuclear Plant

By Lauren McCauley, Common Dreams. In a landmark agreement, California's last remaining nuclear plant will be replaced by greenhouse-gas-free energy sources. A plan to shutter the last remaining nuclear power plant in California and replace it with renewable energy is being …





Syncing Local Alexa Skills JSON Files With Alexa Developer Console Settings

In the world of Alexa Skills development with the Node.js ASK SDK, the Alexa Skills Kit (ASK) Command-Line Interface (CLI) is one of the most overlooked tools.

Boosting Developer Productivity

With proper use, one could really increase productivity when developing Alexa Skills. This is especially so if you are creating many Alexa Skills, either because you are in the learning process or you are just managing multiple Alexa Skills projects for yourself or your clients.





What’s New With Node? Interview With Bethany Griggs, Node.js Technical Steering Committee

Node.js 14 is available now. We wanted to get more context and details about the state of Node, and why developers should care about Node.js 14. We talked with Bethany Griggs, Node.js Technical Steering Committee member and Open-source Engineer at IBM, to find out more. 

Bethany has been a Node Core Collaborator for over two years. She contributes to the open-source Node.js runtime and is a member of the Node.js Release Working Group where she is involved with auditing commits for the long-term support (LTS) release lines and the creation of releases. 





A toker's musical guide through pop history

The Cannabis Issue People have been enjoying cannabis for recreational purposes for centuries, including in the United States since the early 1900s. That means weed was in America a good 50 years or so before the invention of rock 'n' roll.…





Weed can help your anxiety - or make it a ton worse

The Cannabis Issue Times are stressful, what with a virus rampaging, people dying, hospitals being overloaded, the economy imploding and unemployment soaring.…





In Washington's rural pot shops, the effects of the coronavirus scare can be dramatic

The Cannabis Issue During normal times, I-90 Green House is like a destination resort for marijuana lovers.…





Melt your problems away with this cannabutter ice cream

The Cannabis Issue As we look ahead to sunnier days, few things are as satisfying as a scoop of nice, cold ice cream.…