A Quantum Algorithm To Locate Unknown Hashes For Known N-Grams Within A Large Malware Corpus. (arXiv:2005.02911v2 [quant-ph] UPDATED) Quantum computing has evolved quickly in recent years and is showing significant benefits in a variety of fields. Malware analysis is one of those fields that could take advantage of quantum computing. Combining software that locates the most frequent hashes and $n$-grams between benign and malicious software (KiloGram) with a quantum search algorithm could be beneficial: the table of hashes and $n$-grams is loaded into a quantum computer, thereby speeding up the process of mapping $n$-grams to their hashes. The first phase is to use KiloGram to find the top-$k$ hashes and $n$-grams for a large malware corpus. The resulting hash table is then loaded into a quantum machine, and a quantum search algorithm is used to search among every permutation of the entangled key-value pairs to find the desired hash value. This prevents one from having to re-compute hashes for a set of $n$-grams, which can take on average $O(MN)$ time, whereas the quantum algorithm could take $O(\sqrt{N})$ table lookups to find the desired hash values.
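The $O(\sqrt{N})$ figure above refers to Grover-style quantum search over an unstructured table. As a rough, purely classical illustration of that scaling (not the paper's implementation), the sketch below compares the expected number of probes of a linear scan with the standard $\lfloor (\pi/4)\sqrt{N} \rfloor$ Grover iteration count for a few hypothetical table sizes.

```python
import math

def grover_iterations(num_entries: int) -> int:
    """Standard Grover iteration count for one marked item among N candidates:
    roughly (pi/4) * sqrt(N)."""
    return math.floor(math.pi / 4 * math.sqrt(num_entries))

def classical_expected_probes(num_entries: int) -> float:
    """Expected probes of an unordered classical scan for one marked item."""
    return num_entries / 2

if __name__ == "__main__":
    for n in (2**10, 2**20, 2**30):  # hypothetical hash-table sizes
        print(f"N = {n:>13,}: classical ~{classical_expected_probes(n):15,.0f} probes, "
              f"Grover ~{grover_iterations(n):7,} iterations")
```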
The Cascade Transformer: an Application for Efficient Answer Sentence Selection. (arXiv:2005.02534v2 [cs.CL] UPDATED) Large transformer-based language models have been shown to be very effective in many classification tasks. However, their computational complexity prevents their use in applications requiring the classification of a large set of candidates. While previous works have investigated approaches to reduce model size, relatively little attention has been paid to techniques to improve batch throughput during inference. In this paper, we introduce the Cascade Transformer, a simple yet effective technique to adapt transformer-based models into a cascade of rankers. Each ranker is used to prune a subset of candidates in a batch, thus dramatically increasing throughput at inference time. Partial encodings from the transformer model are shared among rerankers, providing further speed-up. When compared to a state-of-the-art transformer model, our approach reduces computation by 37% with almost no impact on accuracy, as measured on two English Question Answering datasets.
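To make the cascading idea concrete, here is a minimal sketch of batch pruning with a chain of successively more expensive rankers. The stub scoring functions and the fixed keep ratio are illustrative assumptions; in the paper the rankers are partial transformer encoders that share computation, which this toy code does not model.

```python
import random

def make_stub_ranker(noise: float):
    """Stand-in for one ranker stage: scores a (question, candidate) pair.
    Noisier stubs play the role of cheaper, less accurate early stages."""
    def score(question: str, candidate: str) -> float:
        overlap = len(set(question.split()) & set(candidate.split()))
        return overlap + random.random() * noise
    return score

def cascade_select(question, candidates, rankers, keep_ratio=0.5):
    """Run rankers from cheapest to most expensive, pruning the batch at each
    stage, and return the top candidate according to the final ranker."""
    pool = list(candidates)
    for ranker in rankers[:-1]:
        pool.sort(key=lambda c: ranker(question, c), reverse=True)
        pool = pool[:max(1, int(len(pool) * keep_ratio))]  # drop low-scoring candidates
    return max(pool, key=lambda c: rankers[-1](question, c))

if __name__ == "__main__":
    random.seed(0)
    rankers = [make_stub_ranker(noise) for noise in (2.0, 1.0, 0.1)]  # cheap -> expensive
    candidates = ["the sky appears blue due to rayleigh scattering",
                  "bananas are yellow", "water boils at 100 degrees"]
    print(cascade_select("why is the sky blue", candidates, rankers))
```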
On-board Deep-learning-based Unmanned Aerial Vehicle Fault Cause Detection and Identification. (arXiv:2005.00336v2 [eess.SP] UPDATED) With the increasing use of Unmanned Aerial Vehicles (UAVs)/drones, it is important to detect and identify causes of failure in real time, for proper recovery from a potential crash-like scenario or for post-incident forensic analysis. The cause of a crash could be a fault in the sensor/actuator system, physical damage/attack, or a cyber attack on the drone's software. In this paper, we propose novel architectures based on deep Convolutional and Long Short-Term Memory Neural Networks (CNNs and LSTMs) to detect (via an Autoencoder) and classify drone mis-operations based on sensor data. The proposed architectures are able to learn high-level features automatically from the raw sensor data and to capture the spatial and temporal dynamics in the sensor data. We validate the proposed deep-learning architectures via simulations and experiments on a real drone. Empirical results show that our solution detects drone mis-operations with over 90% accuracy and classifies their various types with about 99% accuracy on simulation data and up to 88% accuracy on experimental data.
Teaching Cameras to Feel: Estimating Tactile Physical Properties of Surfaces From Images. (arXiv:2004.14487v2 [cs.CV] UPDATED) The connection between visual input and tactile sensing is critical for object manipulation tasks such as grasping and pushing. In this work, we introduce the challenging task of estimating a set of tactile physical properties from visual information. We aim to build a model that learns the complex mapping between visual information and tactile physical properties. We construct a first of its kind image-tactile dataset with over 400 multiview image sequences and the corresponding tactile properties. A total of fifteen tactile physical properties across categories including friction, compliance, adhesion, texture, and thermal conductance are measured and then estimated by our models. We develop a cross-modal framework comprised of an adversarial objective and a novel visuo-tactile joint classification loss. Additionally, we develop a neural architecture search framework capable of selecting optimal combinations of viewing angles for estimating a given physical property.
Improved RawNet with Feature Map Scaling for Text-independent Speaker Verification using Raw Waveforms. (arXiv:2004.00526v2 [eess.AS] UPDATED) Recent advances in deep learning have facilitated the design of speaker verification systems that directly input raw waveforms. For example, RawNet extracts speaker embeddings from raw waveforms, which simplifies the process pipeline and demonstrates competitive performance. In this study, we improve RawNet by scaling feature maps using various methods. The proposed mechanism utilizes a scale vector that adopts a sigmoid non-linear function. It refers to a vector with dimensionality equal to the number of filters in a given feature map. Using a scale vector, we propose to scale the feature map multiplicatively, additively, or both. In addition, we investigate replacing the first convolution layer with the sinc-convolution layer of SincNet. Experiments performed on the VoxCeleb1 evaluation dataset demonstrate the effectiveness of the proposed methods, and the best performing system reduces the equal error rate by half compared to the original RawNet. Expanded evaluation results obtained using the VoxCeleb1-E and VoxCeleb-H protocols marginally outperform existing state-of-the-art systems.
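A minimal numpy sketch of the scaling idea described above: a per-filter scale vector is obtained through a sigmoid, and the feature map is scaled multiplicatively, additively, or both. How the scale vector is derived from the feature map (here, an affine map of globally averaged activations) is an assumption made for illustration, not necessarily the paper's exact design.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def scale_feature_map(feature_map, w, b, mode="both"):
    """feature_map: (filters, time) activations for one utterance.
    A scale vector with one entry per filter is computed as a sigmoid of an
    affine map of globally averaged activations (illustrative choice)."""
    pooled = feature_map.mean(axis=1)              # (filters,)
    scale = sigmoid(w @ pooled + b)                # (filters,), values in (0, 1)
    s = scale[:, None]
    if mode == "mul":
        return feature_map * s
    if mode == "add":
        return feature_map + s
    return feature_map * s + s                     # "both": multiplicative and additive

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fmap = rng.standard_normal((128, 400))         # 128 filters, 400 time steps
    w, b = rng.standard_normal((128, 128)) * 0.1, np.zeros(128)
    print(scale_feature_map(fmap, w, b, mode="both").shape)
```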
Mathematical Formulae in Wikimedia Projects 2020. (arXiv:2003.09417v2 [cs.DL] UPDATED) This poster summarizes our contributions to Wikimedia's processing pipeline for mathematical formulae. We describe how we have supported the transition from rendering formulae as coarse-grained PNG images in 2001 to providing modern semantically enriched language-independent MathML formulae in 2020. Additionally, we describe our plans to further improve the accessibility and discoverability of mathematical knowledge in Wikimedia projects.
Hierarchical Neural Architecture Search for Single Image Super-Resolution. (arXiv:2003.04619v2 [cs.CV] UPDATED) Deep neural networks have exhibited promising performance in image super-resolution (SR). Most SR models follow a hierarchical architecture that contains both the cell-level design of computational blocks and the network-level design of the positions of upsampling blocks. However, designing SR models heavily relies on human expertise and is very labor-intensive. More critically, these SR models often contain a huge number of parameters and may not meet the requirements of computation resources in real-world applications. To address the above issues, we propose a Hierarchical Neural Architecture Search (HNAS) method to automatically design promising architectures with different requirements of computation cost. To this end, we design a hierarchical SR search space and propose a hierarchical controller for architecture search. Such a hierarchical controller is able to simultaneously find promising cell-level blocks and network-level positions of upsampling layers. Moreover, to design compact architectures with promising performance, we build a joint reward by considering both the performance and computation cost to guide the search process. Extensive experiments on five benchmark datasets demonstrate the superiority of our method over existing methods.
Hardware Implementation of Neural Self-Interference Cancellation. (arXiv:2001.04543v2 [eess.SP] UPDATED) In-band full-duplex systems can transmit and receive information simultaneously on the same frequency band. However, due to the strong self-interference caused by the transmitter to its own receiver, the use of non-linear digital self-interference cancellation is essential. In this work, we describe a hardware architecture for a neural network-based non-linear self-interference (SI) canceller and we compare it with our own hardware implementation of a conventional polynomial based SI canceller. In particular, we present implementation results for a shallow and a deep neural network SI canceller as well as for a polynomial SI canceller. Our results show that the deep neural network canceller achieves a hardware efficiency of up to $312.8$ Msamples/s/mm$^2$ and an energy efficiency of up to $0.9$ nJ/sample, which is $2.1\times$ and $2\times$ better than the polynomial SI canceller, respectively. These results show that NN-based methods applied to communications are not only useful from a performance perspective, but can also be a very effective means to reduce the implementation complexity.
Games Where You Can Play Optimally with Arena-Independent Finite Memory. (arXiv:2001.03894v2 [cs.GT] UPDATED) For decades, two-player (antagonistic) games on graphs have been a framework of choice for many important problems in theoretical computer science. A notorious one is controller synthesis, which can be rephrased through the game-theoretic metaphor as the quest for a winning strategy of the system in a game against its antagonistic environment. Depending on the specification, optimal strategies might be simple or quite complex, for example having to use (possibly infinite) memory. Hence, research strives to understand which settings allow for simple strategies. In 2005, Gimbert and Zielonka provided a complete characterization of preference relations (a formal framework to model specifications and game objectives) that admit memoryless optimal strategies for both players. In the last fifteen years however, practical applications have driven the community toward games with complex or multiple objectives, where memory -- finite or infinite -- is almost always required. Despite much effort, the exact frontiers of the class of preference relations that admit finite-memory optimal strategies still elude us. In this work, we establish a complete characterization of preference relations that admit optimal strategies using arena-independent finite memory, generalizing the work of Gimbert and Zielonka to the finite-memory case. We also prove an equivalent to their celebrated corollary of great practical interest: if both players have optimal (arena-independent-)finite-memory strategies in all one-player games, then it is also the case in all two-player games. Finally, we pinpoint the boundaries of our results with regard to the literature: our work completely covers the case of arena-independent memory (e.g., multiple parity objectives, lower- and upper-bounded energy objectives), and paves the way to the arena-dependent case (e.g., multiple lower-bounded energy objectives).
Safe non-smooth black-box optimization with application to policy search. (arXiv:1912.09466v3 [math.OC] UPDATED) For safety-critical black-box optimization tasks, observations of the constraints and the objective are often noisy and available only for the feasible points. We propose an approach based on log barriers to find a local solution of a non-convex non-smooth black-box optimization problem $\min f^0(x)$ subject to $f^i(x) \leq 0$, $i = 1, \ldots, m$, at the same time guaranteeing constraint satisfaction while learning an optimal solution with high probability. Our proposed algorithm exploits noisy observations to iteratively improve on an initial safe point until convergence. We derive the convergence rate and prove safety of our algorithm. We demonstrate its performance in an application to an iterative control design problem.
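To make the log-barrier idea concrete, the sketch below minimizes the standard barrier surrogate $f^0(x) - \mu \sum_i \log(-f^i(x))$, which becomes infinite outside the feasible set and therefore keeps iterates that start at a safe point feasible. The toy objective, constraint, barrier weight, and plain gradient-descent step are assumptions for illustration; the paper's algorithm instead works from noisy zeroth-order observations with proven safety and convergence guarantees.

```python
import numpy as np

def log_barrier_objective(x, f0, constraints, mu):
    """Barrier surrogate: f0(x) - mu * sum(log(-f_i(x))). Infinite outside the
    feasible region, so descent started at a safe point stays feasible."""
    vals = np.array([f(x) for f in constraints])
    if np.any(vals >= 0):
        return np.inf
    return f0(x) - mu * np.sum(np.log(-vals))

def numeric_grad(fun, x, eps=1e-6):
    """Central finite-difference gradient (stand-in for noisy zeroth-order estimates)."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (fun(x + e) - fun(x - e)) / (2 * eps)
    return g

if __name__ == "__main__":
    f0 = lambda x: (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2   # toy objective
    cons = [lambda x: x[0] + x[1] - 1.0]                    # constraint: x0 + x1 <= 1
    x, mu, step = np.array([0.0, 0.0]), 0.1, 0.05           # safe initial point
    surrogate = lambda z: log_barrier_objective(z, f0, cons, mu)
    for _ in range(200):
        x = x - step * numeric_grad(surrogate, x)
    print(x, f0(x), cons[0](x))                             # iterate remains feasible
```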
SCAttNet: Semantic Segmentation Network with Spatial and Channel Attention Mechanism for High-Resolution Remote Sensing Images. (arXiv:1912.09121v2 [cs.CV] UPDATED) High-resolution remote sensing images (HRRSIs) contain substantial ground object information, such as texture, shape, and spatial location. Semantic segmentation, which is an important task for element extraction, has been widely used in processing mass HRRSIs. However, HRRSIs often exhibit large intraclass variance and small interclass variance due to the diversity and complexity of ground objects, thereby bringing great challenges to a semantic segmentation task. In this paper, we propose a new end-to-end semantic segmentation network, which integrates lightweight spatial and channel attention modules that can refine features adaptively. We compare our method with several classic methods on the ISPRS Vaihingen and Potsdam datasets. Experimental results show that our method can achieve better semantic segmentation results. The source codes are available at https://github.com/lehaifeng/SCAttNet.
Multi-group Multicast Beamforming: Optimal Structure and Efficient Algorithms. (arXiv:1911.08925v2 [eess.SP] UPDATED) This paper considers the multi-group multicast beamforming optimization problem, for which the optimal solution has been unknown due to the non-convex and NP-hard nature of the problem. By utilizing the successive convex approximation numerical method and Lagrangian duality, we obtain the optimal multicast beamforming solution structure for both the quality-of-service (QoS) problem and the max-min fair (MMF) problem. The optimal structure brings valuable insights into multicast beamforming: We show that the notion of uplink-downlink duality can be generalized to the multicast beamforming problem. The optimal multicast beamformer is a weighted MMSE filter based on a group-channel direction: a generalized version of the optimal downlink multi-user unicast beamformer. We also show that there is an inherent low-dimensional structure in the optimal multicast beamforming solution independent of the number of transmit antennas, leading to efficient numerical algorithm design, especially for systems with large antenna arrays. We propose efficient algorithms to compute the multicast beamformer based on the optimal beamforming structure. Through asymptotic analysis, we characterize the asymptotic behavior of the multicast beamformers as the number of antennas grows, and in turn, provide simple closed-form approximate multicast beamformers for both the QoS and MMF problems. This approximation offers practical multicast beamforming solutions with a near-optimal performance at very low computational complexity for large-scale antenna systems.
Box Covers and Domain Orderings for Beyond Worst-Case Join Processing. (arXiv:1909.12102v2 [cs.DB] UPDATED) Recent beyond worst-case optimal join algorithms Minesweeper and its generalization Tetris have brought the theory of indexing and join processing together by developing a geometric framework for joins. These algorithms take as input an index $\mathcal{B}$, referred to as a box cover, that stores output gaps that can be inferred from traditional indexes, such as B+ trees or tries, on the input relations. The performances of these algorithms highly depend on the certificate of $\mathcal{B}$, which is the smallest subset of gaps in $\mathcal{B}$ whose union covers all of the gaps in the output space of a query $Q$. We study how to generate box covers that contain small size certificates to guarantee efficient runtimes for these algorithms. First, given a query $Q$ over a set of relations of size $N$ and a fixed set of domain orderings for the attributes, we give a $\tilde{O}(N)$-time algorithm called GAMB which generates a box cover for $Q$ that is guaranteed to contain the smallest size certificate across any box cover for $Q$. Second, we show that finding a domain ordering to minimize the box cover size and certificate is NP-hard through a reduction from the 2 consecutive block minimization problem on boolean matrices. Our third contribution is a $\tilde{O}(N)$-time approximation algorithm called ADORA to compute domain orderings, under which one can compute a box cover of size $\tilde{O}(K^r)$, where $K$ is the minimum box cover for $Q$ under any domain ordering and $r$ is the maximum arity of any relation. This guarantees certificates of size $\tilde{O}(K^r)$. We combine ADORA and GAMB with Tetris to form a new algorithm we call TetrisReordered, which provides several new beyond worst-case bounds. On infinite families of queries, TetrisReordered's runtimes are unboundedly better than the bounds stated in prior work.
Global Locality in Biomedical Relation and Event Extraction. (arXiv:1909.04822v2 [cs.CL] UPDATED) Due to the exponential growth of biomedical literature, event and relation extraction are important tasks in biomedical text mining. Most work focuses only on relation extraction and detects a single entity-pair mention within a short span of text, which is not ideal given the long sentences that appear in biomedical contexts. We propose an approach to both relation and event extraction that simultaneously predicts relationships between all mention pairs in a text. We also perform an empirical study to discuss different network setups for this purpose. The best performing model includes a set of multi-head attentions and convolutions, an adaptation of the transformer architecture, which offers self-attention the ability to strengthen dependencies among related elements and models the interaction between features extracted by multiple attention heads. Experiment results demonstrate that our approach outperforms the state of the art on a set of benchmark biomedical corpora including the BioNLP 2009, 2011, and 2013 and BioCreative 2017 shared tasks.
The Mapillary Traffic Sign Dataset for Detection and Classification on a Global Scale. (arXiv:1909.04422v2 [cs.CV] UPDATED) Traffic signs are essential map features globally in the era of autonomous driving and smart cities. To develop accurate and robust algorithms for traffic sign detection and classification, a large-scale and diverse benchmark dataset is required. In this paper, we introduce a traffic sign benchmark dataset of 100K street-level images from around the world that encapsulates diverse scenes, wide coverage of geographical locations, and varying weather and lighting conditions, and covers more than 300 manually annotated traffic sign classes. The dataset includes 52K images that are fully annotated and 48K images that are partially annotated. This is the largest and most diverse traffic sign dataset consisting of images from all over the world with fine-grained annotations of traffic sign classes. We have run extensive experiments to establish strong baselines for both the detection and the classification tasks. In addition, we have verified that the diversity of this dataset enables effective transfer learning for existing large-scale benchmark datasets on traffic sign detection and classification. The dataset is freely available for academic research: https://www.mapillary.com/dataset/trafficsign.
Over-the-Air Computation Systems: Optimization, Analysis and Scaling Laws. (arXiv:1909.00329v2 [cs.IT] UPDATED) For future Internet of Things (IoT)-based Big Data applications (e.g., smart cities/transportation), wireless data collection from ubiquitous massive smart sensors with limited spectrum bandwidth is very challenging. On the other hand, to interpret the meaning behind the collected data, it is also challenging for edge fusion centers running computing tasks over large data sets with limited computation capacity. To tackle these challenges, by exploiting the superposition property of a multiple-access channel and the functional decomposition properties, the recently proposed technique, over-the-air computation (AirComp), enables an effective joint data collection and computation from concurrent sensor transmissions. In this paper, we focus on a single-antenna AirComp system consisting of $K$ sensors and one receiver (i.e., the fusion center). We consider an optimization problem to minimize the computation mean-squared error (MSE) of the $K$ sensors' signals at the receiver by optimizing the transmitting-receiving (Tx-Rx) policy, under the peak power constraint of each sensor. Although the problem is not convex, we derive the computation-optimal policy in closed form. Also, we comprehensively investigate the ergodic performance of AirComp systems in terms of the average computation MSE and the average power consumption under Rayleigh fading channels with different Tx-Rx policies. For the computation-optimal policy, we prove that its average computation MSE has a decay rate of $O(1/\sqrt{K})$, and our numerical results illustrate that the policy also has a vanishing average power consumption with the increasing $K$, which jointly show the computation effectiveness and the energy efficiency of the policy with a large number of sensors.
Numerical study on the effect of geometric approximation error in the numerical solution of PDEs using a high-order curvilinear mesh. (arXiv:1908.09917v2 [math.NA] UPDATED) When time-dependent partial differential equations (PDEs) are solved numerically in a domain with a curved boundary or on a curved surface, mesh error and geometric approximation error, caused by the inaccurate location of vertices and other interior grid points respectively, can be the main source of inaccuracy and instability in the numerical solutions of PDEs. The role of these geometric errors in deteriorating the stability and particularly the conservation properties is largely unknown, which seems to necessitate very fine meshes, especially to remove the geometric approximation error. This paper aims to investigate the effect of geometric approximation error by using a high-order mesh with negligible geometric approximation error, even for a high polynomial order $p$. To achieve this goal, the high-order mesh generator from CAD geometry called NekMesh is adapted for surface mesh generation, in comparison to traditional meshes with non-negligible geometric approximation error. Two types of numerical tests are considered. Firstly, the accuracy of differential operators is compared for various $p$ on a curved element of the sphere. Secondly, by applying the method of moving frames, four different time-dependent PDEs on the sphere are numerically solved to investigate the impact of geometric approximation error on the accuracy and conservation properties of high-order numerical schemes for PDEs on the sphere.
A Fast and Accurate Algorithm for Spherical Harmonic Analysis on HEALPix Grids with Applications to the Cosmic Microwave Background Radiation. (arXiv:1904.10514v4 [math.NA] UPDATED) The Hierarchical Equal Area isoLatitude Pixelation (HEALPix) scheme is used extensively in astrophysics for data collection and analysis on the sphere. The scheme was originally designed for studying the Cosmic Microwave Background (CMB) radiation, which represents the first light to travel during the early stages of the universe's development and gives the strongest evidence for the Big Bang theory to date. Refined analysis of the CMB angular power spectrum can lead to revolutionary developments in understanding the nature of dark matter and dark energy. In this paper, we present a new method for performing spherical harmonic analysis for HEALPix data, which is a central component to computing and analyzing the angular power spectrum of the massive CMB data sets. The method uses a novel combination of a non-uniform fast Fourier transform, the double Fourier sphere method, and Slevinsky's fast spherical harmonic transform (Slevinsky, 2019). For a HEALPix grid with $N$ pixels (points), the computational complexity of the method is $\mathcal{O}(N \log^2 N)$, with an initial set-up cost of $\mathcal{O}(N^{3/2} \log N)$. This compares favorably with the $\mathcal{O}(N^{3/2})$ runtime complexity of the current methods available in the HEALPix software when multiple maps need to be analyzed at the same time. Using numerical experiments, we demonstrate that the new method also appears to provide better accuracy over the entire angular power spectrum of synthetic data when compared to the current methods, with a convergence rate at least two times higher.
Constrained Restless Bandits for Dynamic Scheduling in Cyber-Physical Systems. (arXiv:1904.08962v3 [cs.SY] UPDATED) Restless multi-armed bandits are a class of discrete-time stochastic control problems which involve sequential decision making with a finite set of actions (set of arms). This paper studies a class of constrained restless multi-armed bandits (CRMAB). The constraints are in the form of time varying set of actions (set of available arms). This variation can be either stochastic or semi-deterministic. Given a set of arms, a fixed number of them can be chosen to be played in each decision interval. The play of each arm yields a state dependent reward. The current states of arms are partially observable through binary feedback signals from arms that are played. The current availability of arms is fully observable. The objective is to maximize long term cumulative reward. The uncertainty about future availability of arms along with partial state information makes this objective challenging. Applications for CRMAB abound in the domain of cyber-physical systems. This optimization problem is analyzed using Whittle's index policy. To this end, a constrained restless single-armed bandit is studied. It is shown to admit a threshold-type optimal policy, and is also indexable. An algorithm to compute Whittle's index is presented. Further, upper bounds on the value function are derived in order to estimate the degree of sub-optimality of various solutions. The simulation study compares the performance of Whittle's index, modified Whittle's index and myopic policies.
Keeping out the Masses: Understanding the Popularity and Implications of Internet Paywalls. (arXiv:1903.01406v4 [cs.CY] UPDATED) Funding the production of quality online content is a pressing problem for content producers. The most common funding method, online advertising, is rife with well-known performance and privacy harms, and an intractable subject-agent conflict: many users do not want to see advertisements, depriving the site of needed funding. Because of these negative aspects of advertisement-based funding, paywalls are an increasingly popular alternative for websites. This shift to a "pay-for-access" web is one that has potentially huge implications for the web and society. Instead of a system where information (nominally) flows freely, paywalls create a web where high quality information is available to fewer and fewer people, leaving the rest of the web's users with less information, which might also be less accurate and of lower quality. Despite the potential significance of a move from an "advertising-but-open" web to a "paywalled" web, we find this issue understudied. This work addresses this gap in our understanding by measuring how widely paywalls have been adopted, what kinds of sites use paywalls, and the distribution of policies enforced by paywalls. A partial list of our findings includes that (i) paywall use is accelerating (2x more paywalls every 6 months), (ii) paywall adoption differs by country (e.g., 18.75% in the US, 12.69% in Australia), (iii) paywalls change how users interact with sites (e.g., higher bounce rates, fewer incoming links), (iv) the median cost of annual paywall access is $108 per site, and (v) paywalls are in general trivial to circumvent. Finally, we present the design of a novel, automated system for detecting whether a site uses a paywall, through the combination of runtime browser instrumentation and repeated programmatic interactions with the site. We intend this classifier to augment future, longitudinal measurements of paywall use and behavior.
Machine learning topological phases in real space. (arXiv:1901.01963v4 [cond-mat.mes-hall] UPDATED) We develop a supervised machine learning algorithm that is able to learn topological phases for finite condensed matter systems from bulk data in real lattice space. The algorithm employs diagonalization in real space together with any supervised learning algorithm to learn topological phases through an eigenvector ensembling procedure. We combine our algorithm with decision trees and random forests to successfully recover topological phase diagrams of Su-Schrieffer-Heeger (SSH) models from bulk lattice data in real space and show how the Shannon information entropy of ensembles of lattice eigenvectors can be used to retrieve a signal detailing how topological information is distributed in the bulk. The discovery of Shannon information entropy signals associated with topological phase transitions from the analysis of data from several thousand SSH systems illustrates how model explainability in machine learning can advance the research of exotic quantum materials with properties that may power future technological applications such as qubit engineering for quantum computing.
Identifying Compromised Accounts on Social Media Using Statistical Text Analysis. (arXiv:1804.07247v3 [cs.SI] UPDATED) Compromised accounts on social networks are regular user accounts that have been taken over by an entity with malicious intent. Since the adversary exploits the already established trust of a compromised account, it is crucial to detect these accounts to limit the damage they can cause. We propose a novel general framework for discovering compromised accounts by semantic analysis of text messages coming from an account. Our framework is built on the observation that normal users will use language that is measurably different from the language that an adversary would use when the account is compromised. We use our framework to develop specific algorithms that use the difference of language models of users and adversaries as features in a supervised learning setup. Evaluation results show that the proposed framework is effective for discovering compromised accounts on social networks and that a KL-divergence-based language model feature works best.
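As one concrete, deliberately simplified instance of a language-model-difference feature, the sketch below fits add-one smoothed unigram language models to a user's historical messages and to a batch of recent messages, and computes the KL divergence between them as a candidate input to a supervised classifier. The smoothing scheme and toy messages are illustrative assumptions, not necessarily the paper's exact setup, which compares user and adversary language models.

```python
from collections import Counter
import math

def unigram_lm(texts, vocab, alpha=1.0):
    """Add-alpha smoothed unigram language model over a fixed vocabulary."""
    counts = Counter(w for t in texts for w in t.lower().split())
    total = sum(counts[w] for w in vocab) + alpha * len(vocab)
    return {w: (counts[w] + alpha) / total for w in vocab}

def kl_divergence(p, q):
    """KL(p || q) over a shared vocabulary."""
    return sum(p[w] * math.log(p[w] / q[w]) for w in p)

if __name__ == "__main__":
    history = ["meeting at noon tomorrow", "see you at the game tonight"]
    recent = ["click this link to win free money", "free money click now"]
    vocab = set(w for t in history + recent for w in t.lower().split())
    p, q = unigram_lm(history, vocab), unigram_lm(recent, vocab)
    print(f"KL(history || recent) = {kl_divergence(p, q):.3f}")  # candidate feature
```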
Erdős-Pósa property of chordless cycles and its applications. (arXiv:1711.00667v3 [math.CO] UPDATED) A chordless cycle, or equivalently a hole, in a graph $G$ is an induced subgraph of $G$ which is a cycle of length at least $4$. We prove that the Erdős-Pósa property holds for chordless cycles, which resolves the major open question concerning the Erdős-Pósa property. Our proof for chordless cycles is constructive: in polynomial time, one can find either $k+1$ vertex-disjoint chordless cycles, or $c_1 k^2 \log k + c_2$ vertices hitting every chordless cycle for some constants $c_1$ and $c_2$. It immediately implies an approximation algorithm of factor $\mathcal{O}(\mathsf{opt} \log \mathsf{opt})$ for Chordal Vertex Deletion. We complement our main result by showing that chordless cycles of length at least $\ell$ for any fixed $\ell \ge 5$ do not have the Erdős-Pósa property.
Using hierarchical matrices in the solution of the time-fractional heat equation by multigrid waveform relaxation. (arXiv:1706.07632v3 [math.NA] UPDATED) This work deals with the efficient numerical solution of the time-fractional heat equation discretized on non-uniform temporal meshes. Non-uniform grids are essential to capture the singularities of "typical" solutions of time-fractional problems. We propose an efficient space-time multigrid method based on the waveform relaxation technique, which accounts for the nonlocal character of the fractional differential operator. To maintain an optimal complexity, which can be obtained for the case of uniform grids, we approximate the coefficient matrix corresponding to the temporal discretization by its hierarchical matrix ($\mathcal{H}$-matrix) representation. In particular, the proposed method has a computational cost of $\mathcal{O}(k N M \log(M))$, where $M$ is the number of time steps, $N$ is the number of spatial grid points, and $k$ is a parameter which controls the accuracy of the $\mathcal{H}$-matrix approximation. The efficiency and the good convergence of the algorithm, which can be theoretically justified by a semi-algebraic mode analysis, are demonstrated through numerical experiments in both one- and two-dimensional spaces.
Compression, inversion, and approximate PCA of dense kernel matrices at near-linear computational complexity. (arXiv:1706.02205v4 [math.NA] UPDATED) Dense kernel matrices $\Theta \in \mathbb{R}^{N \times N}$ obtained from point evaluations of a covariance function $G$ at locations $\{ x_i \}_{1 \leq i \leq N} \subset \mathbb{R}^{d}$ arise in statistics, machine learning, and numerical analysis. For covariance functions that are Green's functions of elliptic boundary value problems and homogeneously-distributed sampling points, we show how to identify a subset $S \subset \{ 1, \dots, N \}^2$, with $\# S = O( N \log(N) \log^{d}( N/\epsilon ) )$, such that the zero fill-in incomplete Cholesky factorisation of the sparse matrix $\Theta_{ij} 1_{(i,j) \in S}$ is an $\epsilon$-approximation of $\Theta$. This factorisation can provably be obtained in complexity $O( N \log(N) \log^{d}( N/\epsilon ) )$ in space and $O( N \log^{2}(N) \log^{2d}( N/\epsilon ) )$ in time, improving upon the state of the art for general elliptic operators; we further present numerical evidence that $d$ can be taken to be the intrinsic dimension of the data set rather than that of the ambient space. The algorithm only needs to know the spatial configuration of the $x_i$ and does not require an analytic representation of $G$. Furthermore, this factorization straightforwardly provides an approximate sparse PCA with optimal rate of convergence in the operator norm. Hence, by using only subsampling and the incomplete Cholesky factorization, we obtain, at nearly linear complexity, the compression, inversion and approximate PCA of a large class of covariance matrices. By inverting the order of the Cholesky factorization we also obtain a solver for elliptic PDE with complexity $O( N \log^{d}( N/\epsilon ) )$ in space and $O( N \log^{2d}( N/\epsilon ) )$ in time, improving upon the state of the art for general elliptic operators.
Seismic Shot Gather Noise Localization Using a Multi-Scale Feature-Fusion-Based Neural Network. (arXiv:2005.03626v1 [cs.CV]) Deep learning-based models, such as convolutional neural networks, have advanced various segments of computer vision. However, this technology is rarely applied to the seismic shot-gather noise localization problem. This letter presents an investigation of the effectiveness of a multi-scale feature-fusion-based network for seismic shot-gather noise localization. Herein, we describe the following: (1) the construction of a real-world dataset for seismic noise localization based on 6,500 seismograms; (2) a multi-scale feature-fusion-based detector that uses MobileNet combined with the Feature Pyramid Net as the backbone; and (3) the Single Shot multi-box detector for box classification/regression. Additionally, we propose the use of the Focal Loss function, which improves the detector's prediction accuracy. The proposed detector achieves an AP@0.5 of 78.67% in our empirical evaluation.
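For reference, here is a minimal numpy sketch of the binary focal loss, $\mathrm{FL}(p_t) = -\alpha_t (1 - p_t)^{\gamma} \log(p_t)$, which down-weights well-classified examples relative to ordinary cross-entropy. The $\gamma = 2$ and $\alpha = 0.25$ values are the commonly used defaults, assumed here for illustration rather than taken from the letter.

```python
import numpy as np

def focal_loss(probs, labels, gamma=2.0, alpha=0.25, eps=1e-7):
    """Binary focal loss. probs: predicted probability of the positive class,
    labels: 0/1 ground truth. Easy examples contribute little to the mean loss."""
    probs = np.clip(probs, eps, 1.0 - eps)
    p_t = np.where(labels == 1, probs, 1.0 - probs)       # probability of the true class
    alpha_t = np.where(labels == 1, alpha, 1.0 - alpha)   # class-balancing weight
    return np.mean(-alpha_t * (1.0 - p_t) ** gamma * np.log(p_t))

if __name__ == "__main__":
    probs = np.array([0.95, 0.10, 0.60, 0.30])
    labels = np.array([1, 0, 1, 0])
    print("focal loss:", focal_loss(probs, labels))
    # The confident, correct predictions (0.95 for label 1, 0.10 for label 0)
    # are strongly down-weighted by the (1 - p_t)^gamma factor.
```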
Technical Report of "Deductive Joint Support for Rational Unrestricted Rebuttal". (arXiv:2005.03620v1 [cs.AI]) In ASPIC-style structured argumentation an argument can rebut another argument by attacking its conclusion. Two ways of formalizing rebuttal have been proposed: In restricted rebuttal, the attacked conclusion must have been arrived at with a defeasible rule, whereas in unrestricted rebuttal, it may have been arrived at with a strict rule, as long as at least one of the antecedents of this strict rule was already defeasible. One systematic way of choosing between various possible definitions of a framework for structured argumentation is to study what rationality postulates are satisfied by which definition, for example whether the closure postulate holds, i.e. whether the accepted conclusions are closed under strict rules. While having some benefits, the proposal to use unrestricted rebuttal faces the problem that the closure postulate only holds for the grounded semantics but fails when other argumentation semantics are applied, whereas with restricted rebuttal the closure postulate always holds. In this paper we propose that ASPIC-style argumentation can benefit from keeping track not only of the attack relation between arguments, but also the relation of deductive joint support that holds between a set of arguments and an argument that was constructed from that set using a strict rule. By taking this deductive joint support relation into account while determining the extensions, the closure postulate holds with unrestricted rebuttal under all admissibility-based semantics. We define the semantics of deductive joint support through the flattening method.
A Local Spectral Exterior Calculus for the Sphere and Application to the Shallow Water Equations. (arXiv:2005.03598v1 [math.NA]) We introduce $\Psi\mathrm{ec}$, a local spectral exterior calculus for the two-sphere $S^2$. $\Psi\mathrm{ec}$ provides a discretization of Cartan's exterior calculus on $S^2$ formed by spherical differential $r$-form wavelets. These are well localized in space and frequency and provide (Stevenson) frames for the homogeneous Sobolev spaces $\dot{H}^{-r+1}( \Omega_{\nu}^{r}, S^2 )$ of differential $r$-forms. At the same time, they satisfy important properties of the exterior calculus, such as the de Rham complex and the Hodge-Helmholtz decomposition. Through this, $\Psi\mathrm{ec}$ is tailored towards structure-preserving discretizations that can adapt to solutions with varying regularity. The construction of $\Psi\mathrm{ec}$ is based on a novel spherical wavelet frame for $L_2(S^2)$ that we obtain by introducing scalable reproducing kernel frames. These extend scalable frames to weighted sampling expansions and provide an alternative to quadrature rules for the discretization of needlet-like scale-discrete wavelets. We verify the practicality of $\Psi\mathrm{ec}$ for numerical computations using the rotating shallow water equations. Our numerical results demonstrate that a $\Psi\mathrm{ec}$-based discretization of the equations attains accuracy comparable to that of spectral methods while using a representation that is well localized in space and frequency.
Efficient Exact Verification of Binarized Neural Networks. (arXiv:2005.03597v1 [cs.AI]) We present a new system, EEV, for verifying binarized neural networks (BNNs). We formulate BNN verification as a Boolean satisfiability problem (SAT) with reified cardinality constraints of the form $y = (x_1 + \cdots + x_n \le b)$, where $x_i$ and $y$ are Boolean variables possibly with negation and $b$ is an integer constant. We also identify two properties, specifically balanced weight sparsity and lower cardinality bounds, that reduce the verification complexity of BNNs. EEV contains both a SAT solver enhanced to handle reified cardinality constraints natively and novel training strategies designed to reduce verification complexity by delivering networks with improved sparsity properties and cardinality bounds. We demonstrate the effectiveness of EEV by presenting the first exact verification results for $\ell_{\infty}$-bounded adversarial robustness of nontrivial convolutional BNNs on the MNIST and CIFAR10 datasets. Our results also show that, depending on the dataset and network architecture, our techniques verify BNNs between a factor of ten to ten thousand times faster than the best previous exact verification techniques for either binarized or real-valued networks.
A Tale of Two Perplexities: Sensitivity of Neural Language Models to Lexical Retrieval Deficits in Dementia of the Alzheimer's Type. (arXiv:2005.03593v1 [cs.CL]) In recent years there has been a burgeoning interest in the use of computational methods to distinguish between elicited speech samples produced by patients with dementia, and those from healthy controls. The difference between perplexity estimates from two neural language models (LMs) - one trained on transcripts of speech produced by healthy participants and the other trained on transcripts from patients with dementia - as a single feature for diagnostic classification of unseen transcripts has been shown to produce state-of-the-art performance. However, little is known about why this approach is effective, and on account of the lack of case/control matching in the most widely-used evaluation set of transcripts (DementiaBank), it is unclear if these approaches are truly diagnostic, or are sensitive to other variables. In this paper, we interrogate neural LMs trained on participants with and without dementia using synthetic narratives previously developed to simulate progressive semantic dementia by manipulating lexical frequency. We find that perplexity of neural LMs is strongly and differentially associated with lexical frequency, and that a mixture model resulting from interpolating control and dementia LMs improves upon the current state-of-the-art for models trained on transcript text exclusively.
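The single feature described above is just the difference of two perplexities. The toy sketch below reproduces the shape of that computation with add-one smoothed unigram models standing in for the neural LMs; the miniature "transcripts" are invented placeholders, so only the structure (perplexity under a control-trained model minus perplexity under a dementia-trained model) carries over.

```python
from collections import Counter
import math

def train_unigram(texts, vocab, alpha=1.0):
    """Add-alpha smoothed unigram model over a shared vocabulary."""
    counts = Counter(w for t in texts for w in t.lower().split())
    total = sum(counts.values()) + alpha * len(vocab)
    return {w: (counts[w] + alpha) / total for w in vocab}

def perplexity(lm, text):
    """Per-word perplexity of `text` under unigram model `lm`."""
    words = text.lower().split()
    log_prob = sum(math.log(lm[w]) for w in words)
    return math.exp(-log_prob / len(words))

if __name__ == "__main__":
    control_texts = ["the boy is reaching for the cookie jar"]   # placeholder transcript
    dementia_texts = ["the the boy thing is the thing there"]    # placeholder transcript
    vocab = set(w for t in control_texts + dementia_texts for w in t.lower().split())
    lm_control = train_unigram(control_texts, vocab)
    lm_dementia = train_unigram(dementia_texts, vocab)
    sample = "the boy is reaching for the jar"
    feature = perplexity(lm_control, sample) - perplexity(lm_dementia, sample)
    print("perplexity-difference feature:", round(feature, 3))
```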
GeoLogic -- Graphical interactive theorem prover for Euclidean geometry. (arXiv:2005.03586v1 [cs.LO]) The domain of mathematical logic in computers is dominated by automated theorem provers (ATP) and interactive theorem provers (ITP). Both of these are hard to access by AI from the human-imitation approach: ATPs often use human-unfriendly logical foundations, while ITPs are meant for formalizing existing proofs rather than problem solving. We aim to create a simple human-friendly logical system for mathematical problem solving. We picked the case study of Euclidean geometry as it can be easily visualized, has simple logic, and yet potentially offers many high-school problems of various difficulty levels. To make the environment user friendly, we abandoned the strict logic required by ITPs, allowing topological facts to be inferred from pictures. We present our system for Euclidean geometry, together with a graphical application GeoLogic, similar to GeoGebra, which allows users to interactively study and prove properties about the geometrical setup.
Checking Qualitative Liveness Properties of Replicated Systems with Stochastic Scheduling. (arXiv:2005.03555v1 [cs.LO]) We present a sound and complete method for the verification of qualitative liveness properties of replicated systems under stochastic scheduling. These are systems consisting of a finite-state program, executed by an unknown number of indistinguishable agents, where the next agent to make a move is determined by the result of a random experiment. We show that if a property of such a system holds, then there is always a witness in the shape of a Presburger stage graph: a finite graph whose nodes are Presburger-definable sets of configurations. Due to the high complexity of the verification problem (non-elementary), we introduce an incomplete procedure for the construction of Presburger stage graphs, and implement it on top of an SMT solver. The procedure makes extensive use of the theory of well-quasi-orders, and of the structural theory of Petri nets and vector addition systems. We apply our results to a set of benchmarks, in particular to a large collection of population protocols, a model of distributed computation extensively studied by the distributed computing community.
Credulous Users and Fake News: a Real Case Study on the Propagation in Twitter. (arXiv:2005.03550v1 [cs.SI]) Recent studies have confirmed a growing trend, especially among youngsters, of using Online Social Media as their favourite information platform at the expense of traditional mass media. Indeed, these media can easily reach a wide audience at high speed; but exactly because of this they are the preferred medium for influencing public opinion via so-called fake news. Moreover, there is general agreement that the main vehicle of fake news is malicious software robots (bots) that automatically interact with human users. In previous work we have considered the problem of tagging human users in Online Social Networks as credulous users. Specifically, we have considered credulous those users with a relatively high number of bot friends compared to the total number of their social friends. We consider this group of users worth attention because they might have a higher exposure to malicious activities and may contribute to the spreading of fake information by sharing dubious content. In this work, starting from a dataset of fake news, we investigate the behaviour and the degree of involvement of credulous users in fake news diffusion. The study aims to: (i) fight fake news by considering the content diffused by credulous users; (ii) highlight the relationship between credulous users and fake news spreading; (iii) target fake news detection by focusing on the analysis of specific accounts more exposed to malicious activities of bots. Our first results demonstrate a strong involvement of credulous users in fake news diffusion. These findings call for tools that, by streaming data on credulous users' actions, enable targeted fact-checking.
p for political: Participation Without Agency Is Not Enough. (arXiv:2005.03534v1 [cs.HC]) Participatory Design's vision of democratic participation assumes participants' feelings of agency in envisioning a collective future. But this assumption may be leaky when dealing with vulnerable populations. We reflect on the results of a series of activities aimed at supporting agentic-future-envisionment with a group of sex-trafficking survivors in Nepal. We observed a growing sense among the survivors that they could play a role in bringing about change in their families. They also became aware of how they could interact with available institutional resources. Reflecting on these observations, we argue that building participant agency through small and personal interactions is necessary before demanding larger Political participation. In particular, a value of PD, especially for vulnerable populations, can lie in the process itself if it helps participants position themselves as actors in the larger world.
Practical Perspectives on Quality Estimation for Machine Translation. (arXiv:2005.03519v1 [cs.CL]) Sentence level quality estimation (QE) for machine translation (MT) attempts to predict the translation edit rate (TER) cost of post-editing work required to correct MT output. We describe our view on sentence-level QE as dictated by several practical setups encountered in the industry. We find consumers of MT output, whether human or algorithmic ones, to be primarily interested in a binary quality metric: is the translated sentence adequate as-is or does it need post-editing? Motivated by this we propose a quality classification (QC) view on sentence-level QE whereby we focus on maximizing recall at precision above a given threshold. We demonstrate that, while classical QE regression models fare poorly on this task, they can be re-purposed by replacing the output regression layer with a binary classification one, achieving 50-60% recall at 90% precision. For a high-quality MT system producing 75-80% correct translations, this promises a significant reduction in post-editing work indeed.
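The "recall at precision above a threshold" criterion can be computed with a simple threshold sweep, as in the sketch below: among all score thresholds whose precision is at least 0.9, report the largest recall. The synthetic scores and labels are placeholders for a real QC model's outputs.

```python
import numpy as np

def recall_at_precision(scores, labels, min_precision=0.9):
    """Maximum recall achievable by any score threshold whose precision is at
    least `min_precision`. Returns (recall, threshold), or (0.0, None) if no
    threshold meets the precision requirement."""
    best = (0.0, None)
    positives = labels.sum()
    for t in np.unique(scores):
        pred = scores >= t
        if pred.sum() == 0 or positives == 0:
            continue
        tp = np.sum(pred & (labels == 1))
        precision = tp / pred.sum()
        recall = tp / positives
        if precision >= min_precision and recall > best[0]:
            best = (recall, t)
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    labels = (rng.random(1000) < 0.75).astype(int)    # ~75% "adequate" translations
    scores = labels * rng.normal(0.7, 0.2, 1000) + (1 - labels) * rng.normal(0.3, 0.2, 1000)
    print(recall_at_precision(scores, labels, min_precision=0.9))
```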
An asynchronous distributed and scalable generalized Nash equilibrium seeking algorithm for strongly monotone games. (arXiv:2005.03507v1 [cs.GT]) In this paper, we present three distributed algorithms to solve a class of generalized Nash equilibrium (GNE) seeking problems in strongly monotone games. The first one (SD-GENO) is based on synchronous updates of the agents, while the second and the third (AD-GEED and AD-GENO) represent asynchronous solutions that are robust to communication delays. AD-GENO can be seen as a refinement of AD-GEED, since it only requires node auxiliary variables, enhancing the scalability of the algorithm. Our main contribution is to prove convergence to a variational GNE of the game via an operator-theoretic approach. Finally, we apply the algorithms to network Cournot games and show how different activation sequences and delays affect convergence. We also compare the proposed algorithms to the only other algorithm in the literature (ADAGNES), and observe that AD-GENO outperforms the alternative.
Heidelberg Colorectal Data Set for Surgical Data Science in the Sensor Operating Room. (arXiv:2005.03501v1 [cs.CV]) Image-based tracking of medical instruments is an integral part of many surgical data science applications. Previous research has addressed the tasks of detecting, segmenting and tracking medical instruments based on laparoscopic video data. However, the methods proposed still tend to fail when applied to challenging images and do not generalize well to data they have not been trained on. This paper introduces the Heidelberg Colorectal (HeiCo) data set - the first publicly available data set enabling comprehensive benchmarking of medical instrument detection and segmentation algorithms with a specific emphasis on robustness and generalization capabilities of the methods. Our data set comprises 30 laparoscopic videos and corresponding sensor data from medical devices in the operating room for three different types of laparoscopic surgery. Annotations include surgical phase labels for all frames in the videos as well as instance-wise segmentation masks for surgical instruments in more than 10,000 individual frames. The data has successfully been used to organize international competitions in the scope of the Endoscopic Vision Challenges (EndoVis) 2017 and 2019.
Computing with bricks and mortar: Classification of waveforms with doped concrete blocks. (arXiv:2005.03498v1 [cs.ET]) We present results showing the capability of a concrete-based information processing substrate in the signal classification task, in accordance with the in materio computing paradigm. As Reservoir Computing is a suitable model for describing embedded in materio computation, we propose that the presented basic construction unit can be used as a source for the "reservoir of states" necessary for simple tuning of the readout layer. In that perspective, buildings constructed from computing concrete could function as a highly parallel information processor for smart architecture. We present an electrical characterization of a set of samples with different additive concentrations, followed by a dynamical analysis of selected specimens showing fingerprints of memfractive properties. Moreover, on the basis of the obtained parameters, classification of signal waveform shapes can be performed in scenarios explicitly tuned for a given device terminal.
How Can CNNs Use Image Position for Segmentation?. (arXiv:2005.03463v1 [eess.IV]) Convolution is an equivariant operation, and image position does not affect its result. A recent study shows that the zero-padding employed in convolutional layers of CNNs provides position information to the CNNs. The study further claims that the position information enables accurate inference for several tasks, such as object recognition, segmentation, etc. However, there is a technical issue with the design of the experiments of the study, and thus the correctness of the claim is yet to be verified. Moreover, the absolute image position may not be essential for the segmentation of natural images, in which target objects will appear at any image position. In this study, we investigate how positional information is and can be utilized for segmentation tasks. Toward this end, we consider positional encoding (PE), which adds channels embedding image position to the input images, and compare PE with several padding methods. Considering the above nature of natural images, we choose medical image segmentation tasks, in which the absolute position appears to be relatively important, as the same organs (of different patients) are captured in similar sizes and positions. We draw a mixed conclusion from the experimental results; the positional encoding certainly works in some cases, but the absolute image position may not be so important for segmentation tasks as we think.
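A common way to realize the positional encoding mentioned above is to concatenate coordinate channels to the input, as sketched below with numpy. The normalization of row and column indices to [-1, 1] is a conventional choice assumed here for illustration; the paper's exact encoding may differ.

```python
import numpy as np

def add_coordinate_channels(image):
    """image: (H, W, C) array. Returns (H, W, C + 2) with normalized row and
    column coordinate channels appended, giving the network explicit access
    to absolute image position."""
    h, w = image.shape[:2]
    ys = np.linspace(-1.0, 1.0, h)[:, None].repeat(w, axis=1)   # row coordinate
    xs = np.linspace(-1.0, 1.0, w)[None, :].repeat(h, axis=0)   # column coordinate
    return np.concatenate([image, ys[..., None], xs[..., None]], axis=-1)

if __name__ == "__main__":
    img = np.zeros((256, 320, 1), dtype=np.float32)  # e.g., a one-channel medical slice
    print(add_coordinate_channels(img).shape)         # (256, 320, 3)
```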
A combination of 'pooling' with a prediction model can reduce by 73% the number of COVID-19 (Corona-virus) tests. (arXiv:2005.03453v1 [cs.LG]) We show that by combining a prediction model (based on neural networks) with a new method of test pooling called 'Grid' (better than the original Dorfman method and better than double-pooling), we can reduce the number of COVID-19 tests by 73%.
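For intuition about why pooling saves tests at all, the sketch below evaluates the classic Dorfman baseline, whose expected cost is $1/s + 1 - (1-p)^s$ tests per person for pool size $s$ and prevalence $p$. The 2% prevalence is a hypothetical input, and the paper's 'Grid' pooling combined with its neural prediction model is reported to improve further on this kind of baseline.

```python
def dorfman_tests_per_person(prevalence: float, pool_size: int) -> float:
    """Expected tests per person under classic Dorfman pooling: one pooled test
    shared by `pool_size` people, plus individual retests if the pool is positive."""
    p_pool_positive = 1.0 - (1.0 - prevalence) ** pool_size
    return 1.0 / pool_size + p_pool_positive

if __name__ == "__main__":
    prevalence = 0.02                      # hypothetical 2% infection rate
    for s in (4, 8, 16, 32):
        t = dorfman_tests_per_person(prevalence, s)
        print(f"pool size {s:>2}: {t:.3f} tests/person ({(1 - t) * 100:.1f}% saved)")
```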
Detection and Feeder Identification of the High Impedance Fault at Distribution Networks Based on Synchronous Waveform Distortions. (arXiv:2005.03411v1 [eess.SY]) Diagnosis of high impedance faults (HIFs) is a challenge for today's distribution network protection. The fault current of a HIF is much lower than that of a normal load, and the fault features are significantly affected by the fault scenario. A detection and feeder identification algorithm for HIFs is proposed in this paper, based on high-resolution and synchronous waveform data. In the algorithm, an interval slope is defined to describe the waveform distortions, which guarantees a uniform feature description under various HIF nonlinearities and noise interferences. For three typical types of network neutrals, i.e., isolated neutral, resonant neutral, and low-resistor-earthed neutral, the differences of the distorted components between the zero-sequence currents of healthy and faulty feeders are mathematically deduced, respectively. As a result, the proposed criterion, which is based on the distortion relationships between the zero-sequence currents of feeders and the zero-sequence voltage at the substation, is theoretically supported. 28 HIFs grounded to various materials are tested in a 10 kV distribution network with three neutral types and are utilized to verify the effectiveness of the proposed algorithm.
Joint Prediction and Time Estimation of COVID-19 Developing Severe Symptoms using Chest CT Scan. (arXiv:2005.03405v1 [eess.IV]) With the rapid worldwide spread of Coronavirus disease (COVID-19), it is of great importance to conduct early diagnosis of COVID-19 and predict the time at which patients might convert to the severe stage, for designing effective treatment plans and reducing clinicians' workloads. In this study, we propose a joint classification and regression method to determine whether the patient would develop severe symptoms at a later time and, if so, to predict the conversion time the patient would take to reach the severe stage. To do this, the proposed method takes into account 1) the weight for each sample, to reduce the influence of outliers and address the imbalanced-classification problem, and 2) the weight for each feature, via a sparsity regularization term, to remove the redundant features of high-dimensional data and learn the information shared across the classification and regression tasks. To our knowledge, this study is the first work to predict both the disease progression and the conversion time, which could help clinicians deal with potential severe cases in time or even save patients' lives. Experimental analysis was conducted on a real data set from two hospitals with 422 chest computed tomography (CT) scans, in which 52 cases converted to severe after 5.64 days on average and 34 cases were severe at admission. Results show that our method achieves the best classification (e.g., 85.91% accuracy) and regression (e.g., 0.462 correlation coefficient) performance compared to all comparison methods. Moreover, our proposed method yields 76.97% accuracy for predicting the severe cases, a 0.524 correlation coefficient, and a 0.55-day difference for the conversion time.
ca A LiDAR-based real-time capable 3D Perception System for Automated Driving in Urban Domains. (arXiv:2005.03404v1 [cs.RO]) By arxiv.org Published On :: We present a LiDAR-based and real-time capable 3D perception system for automated driving in urban domains. The hierarchical system design is able to model stationary and movable parts of the environment simultaneously and under real-time conditions. Our approach extends the state of the art with innovative, detailed enhancements for perceiving road users and drivable corridors even in the case of non-flat ground surfaces and overhanging or protruding elements. We describe a runtime-efficient point cloud processing pipeline, consisting of adaptive ground surface estimation, 3D clustering, and motion classification stages. Based on the pipeline's output, the stationary environment is represented in a multi-feature mapping and fusion approach. Movable elements are represented in an object tracking system capable of using multiple reference points to account for viewpoint changes. We further enhance the tracking system by explicitly considering occlusion and ambiguity cases. Our system is evaluated using a subset of the TUBS Road User Dataset. We enhance common performance metrics by considering application-driven aspects of real-world traffic scenarios. The perception system shows strong results and is able to cope with the addressed scenarios while preserving real-time capability. Full Article
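The pipeline stages named above (ground surface estimation, 3D clustering, motion classification) can be illustrated with a greatly simplified stand-in; the sketch below uses a single RANSAC ground-plane fit and Euclidean clustering, whereas the paper uses an adaptive ground surface estimate, and all parameters are assumptions for illustration only:

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.linear_model import RANSACRegressor

# Greatly simplified stand-in for the first pipeline stages: fit a single
# ground plane z ~ a*x + b*y + c with RANSAC (the paper's ground estimation is
# adaptive), drop the ground inliers, and cluster the remaining obstacle
# points. eps / min_samples / residual_threshold are illustrative assumptions.

def ground_and_clusters(points: np.ndarray, eps=0.5, min_samples=10):
    xy, z = points[:, :2], points[:, 2]
    ransac = RANSACRegressor(residual_threshold=0.15).fit(xy, z)
    ground_mask = ransac.inlier_mask_
    obstacles = points[~ground_mask]
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(obstacles)
    return ground_mask, obstacles, labels
```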
ca Does Multi-Encoder Help? A Case Study on Context-Aware Neural Machine Translation. (arXiv:2005.03393v1 [cs.CL]) By arxiv.org Published On :: In encoder-decoder neural models, multiple encoders are generally used to represent contextual information in addition to the individual sentence. In this paper, we investigate multi-encoder approaches in document-level neural machine translation (NMT). Surprisingly, we find that the context encoder not only encodes the surrounding sentences but also behaves as a noise generator. This makes us rethink the real benefits of the multi-encoder approach in context-aware translation: some of the improvements come from robust training. We compare several methods that introduce noise and/or a well-tuned dropout setup into the training of these encoders. Experimental results show that noisy training plays an important role in multi-encoder-based NMT, especially when the training data is small. We also establish a new state of the art on the IWSLT Fr-En task through careful use of noise generation and dropout methods. Full Article
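The "context encoder as noise generator" observation can be made concrete with a small PyTorch sketch: during training, the context representation is replaced by Gaussian noise with some probability, which acts as a regularizer when document-level data is scarce. Module names, shapes, and probabilities below are assumptions, not the authors' implementation:

```python
import torch
import torch.nn as nn

# Illustrative sketch of the noisy-training idea discussed above: with some
# probability the context representation is swapped for Gaussian noise during
# training. Hyper-parameters and module structure are assumptions.

class NoisyContextEncoder(nn.Module):
    def __init__(self, d_model=512, noise_prob=0.3, noise_std=1.0):
        super().__init__()
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=d_model, nhead=8, batch_first=True),
            num_layers=2,
        )
        self.noise_prob, self.noise_std = noise_prob, noise_std

    def forward(self, ctx_embeddings):                  # (batch, ctx_len, d_model)
        h = self.encoder(ctx_embeddings)
        if self.training and torch.rand(()) < self.noise_prob:
            h = torch.randn_like(h) * self.noise_std    # pure noise instead of context
        return h
```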
ca Semantic Signatures for Large-scale Visual Localization. (arXiv:2005.03388v1 [cs.CV]) By arxiv.org Published On :: Visual localization is a useful alternative to standard localization techniques. It works by utilizing camera imagery: in a typical scenario, features are extracted from captured images and compared with geo-referenced databases, and location information is then inferred from the matching results. Conventional schemes mainly use low-level visual features; these approaches offer good accuracy but suffer from scalability issues. In order to assist localization in large urban areas, this work explores a different path by utilizing high-level semantic information. It is found that object information in a street view can facilitate localization. A novel descriptor scheme called "semantic signature" is proposed to summarize this information. A semantic signature consists of the type and angle information of visible objects at a spatial location. Several metrics and protocols are proposed for signature comparison and retrieval; they illustrate different trade-offs between accuracy and complexity. Extensive simulation results confirm the potential of the proposed scheme in large-scale applications. This paper is an extended version of a conference paper in CBMI'18: a more efficient retrieval protocol is presented, together with additional experimental results. Full Article
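A semantic signature as described above is essentially a list of (object type, viewing angle) pairs per location. The sketch below shows one such encoding together with one simple comparison metric (type match within an angular tolerance); the paper proposes several metrics with different accuracy/complexity trade-offs, so this is only an illustrative possibility:

```python
from dataclasses import dataclass
from typing import List

# Illustrative encoding of a "semantic signature" and one simple similarity
# metric. Object types, the angular tolerance, and the matching rule are
# assumptions for illustration, not the paper's exact metrics.

@dataclass
class SignatureEntry:
    obj_type: str        # e.g. "traffic_light", "street_lamp"
    angle_deg: float     # bearing of the object as seen from the location

def signature_similarity(a: List[SignatureEntry], b: List[SignatureEntry],
                         angle_tol: float = 15.0) -> float:
    matched, used = 0, set()
    for ea in a:
        for j, eb in enumerate(b):
            if j in used or ea.obj_type != eb.obj_type:
                continue
            diff = abs(ea.angle_deg - eb.angle_deg) % 360.0
            if min(diff, 360.0 - diff) <= angle_tol:
                used.add(j)
                matched += 1
                break
    return matched / max(len(a), len(b), 1)
```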
ca Vid2Curve: Simultaneous Camera Motion Estimation and Thin Structure Reconstruction from an RGB Video. (arXiv:2005.03372v1 [cs.GR]) By arxiv.org Published On :: Thin structures, such as wire-frame sculptures, fences, cables, power lines, and tree branches, are common in the real world. It is extremely challenging to acquire their 3D digital models using traditional image-based or depth-based reconstruction methods, because thin structures often lack distinct point features and have severe self-occlusion. We propose the first approach that simultaneously estimates camera motion and reconstructs the geometry of complex 3D thin structures in high quality from a color video captured by a handheld camera. Specifically, we present a new curve-based approach to estimate accurate camera poses by establishing correspondences between featureless thin objects in the foreground in consecutive video frames, without requiring visual texture in the background scene to lock on. Enabled by this effective curve-based camera pose estimation strategy, we develop an iterative optimization method with tailored measures on geometry, topology, and self-occlusion handling for reconstructing 3D thin structures. Extensive validation on a variety of thin structures shows that our method achieves accurate camera pose estimation and faithful reconstruction of 3D thin structures with complex shape and topology, at a level that has not been attained by other existing reconstruction methods. Full Article
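The curve-based pose estimation described above can be caricatured as minimizing the image-space distance between projected 3D curve points and the detected 2D curve pixels. The sketch below shows that residual under a pinhole camera model and a distance transform of the curve mask; it is a heavily simplified stand-in under assumed conventions, not the authors' method:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

# Simplified stand-in for curve-based pose refinement: residuals are the
# distances from projected 3D curve samples to the nearest detected curve
# pixel, read off a distance transform. Pose is an assumed 6-vector
# (rotation vector + translation); K is the 3x3 camera intrinsics.

def curve_residuals(pose, pts3d, dist_map, K):
    rvec, t = pose[:3], pose[3:]
    cam = pts3d @ Rotation.from_rotvec(rvec).as_matrix().T + t   # world -> camera
    uv = cam @ K.T
    uv = uv[:, :2] / uv[:, 2:3]                                  # perspective divide
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, dist_map.shape[1] - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, dist_map.shape[0] - 1)
    return dist_map[v, u]

# Usage (given a binary curve_mask, 3D samples pts3d, intrinsics K, initial pose0):
#   dist_map = distance_transform_edt(~curve_mask)   # distance to nearest curve pixel
#   result = least_squares(curve_residuals, pose0, args=(pts3d, dist_map, K))
```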
ca Scoring Root Necrosis in Cassava Using Semantic Segmentation. (arXiv:2005.03367v1 [eess.IV]) By arxiv.org Published On :: Cassava, a major food crop in many parts of Africa, has been heavily affected by Cassava Brown Streak Disease (CBSD). The disease affects tuberous roots and presents symptoms that include a yellow/brown, dry, corky necrosis within the starch-bearing tissues. Cassava breeders currently depend on visual inspection to score necrosis in roots, based on a qualitative score that is quite subjective. In this paper we present an approach to automate root necrosis scoring using deep convolutional neural networks with semantic segmentation. Our experiments show that the UNet model performs this task with high accuracy, achieving a mean Intersection over Union (IoU) of 0.90 on the test set. This method provides a quantitative measure for necrosis scoring on root cross-sections: it segments and classifies the necrotized and non-necrotized pixels of cassava root cross-sections without any additional feature engineering. Full Article
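The reported IoU metric and the resulting quantitative necrosis score are straightforward to compute from a predicted segmentation mask. A minimal sketch, assuming binary masks (1 = necrotized pixel) and a separate root-region mask; the mask names are illustrative:

```python
import numpy as np

# Sketch of the evaluation metric (IoU) and a quantitative necrosis score
# derived from a predicted segmentation mask. Masks are assumed binary;
# root_mask marks the root cross-section, necrosis_mask the necrotized pixels.

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0

def necrosis_percentage(necrosis_mask: np.ndarray, root_mask: np.ndarray) -> float:
    root_pixels = root_mask.sum()
    if not root_pixels:
        return 0.0
    return 100.0 * np.logical_and(necrosis_mask, root_mask).sum() / root_pixels
```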
ca Soft Interference Cancellation for Random Coding in Massive Gaussian Multiple-Access. (arXiv:2005.03364v1 [cs.IT]) By arxiv.org Published On :: We utilize recent results on the exact block error probability of Gaussian random codes in additive white Gaussian noise to analyze Gaussian random coding for massive multiple-access at finite message length. Soft iterative interference cancellation is found to closely approach the performance bounds recently found in [1]. The existence of two fundamentally different regimes in the trade-off between power and bandwidth efficiency reported in [2] is related to much older results in [3] on power optimization by linear programming. Furthermore, we tighten the achievability bounds of [1] in the low power regime and show that orthogonal constellations are very close to the theoretical limits for message lengths around 100 and above. Full Article
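The iterative soft interference cancellation loop itself can be illustrated with a toy multi-user simulation; the sketch below uses BPSK users with unequal receive powers and Gaussian-approximated residual interference, and is only meant to show the cancellation idea, not the paper's finite-length Gaussian random coding analysis or its bounds:

```python
import numpy as np

# Toy sketch of iterative soft interference cancellation for a K-user Gaussian
# multiple-access channel with BPSK signalling and unequal receive powers.
# The powers, noise level, and iteration count are illustrative assumptions.

rng = np.random.default_rng(0)
K, n, sigma = 4, 2000, 0.1
amp = 2.0 ** -np.arange(K)                          # unequal powers so users are separable
x = rng.choice([-1.0, 1.0], size=(K, n))
y = (amp[:, None] * x).sum(axis=0) + sigma * rng.standard_normal(n)

soft = np.zeros((K, n))                             # soft symbol estimates E[x_k]
for _ in range(10):                                 # cancellation iterations
    for k in range(K):
        others = [j for j in range(K) if j != k]
        residual = y - sum(amp[j] * soft[j] for j in others)        # cancel other users
        var = sigma**2 + sum(amp[j]**2 * (1.0 - soft[j]**2) for j in others)
        soft[k] = np.tanh(amp[k] * residual / var)                  # posterior mean of x_k
print("bit error rate:", np.mean(np.sign(soft) != x))
```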
ca DramaQA: Character-Centered Video Story Understanding with Hierarchical QA. (arXiv:2005.03356v1 [cs.CL]) By arxiv.org Published On :: Despite recent progress in computer vision and natural language processing, developing video understanding intelligence is still hard to achieve due to the intrinsic difficulty of stories in video. Moreover, there is no theoretical metric for evaluating the degree of video understanding. In this paper, we propose a novel video question answering (Video QA) task, DramaQA, for comprehensive understanding of video stories. DramaQA focuses on two perspectives: 1) hierarchical QAs as an evaluation metric based on the cognitive developmental stages of human intelligence, and 2) character-centered video annotations to model the local coherence of the story. Our dataset is built upon the TV drama "Another Miss Oh" and contains 16,191 QA pairs from 23,928 video clips of various lengths, with each QA pair belonging to one of four difficulty levels. We provide 217,308 annotated images with rich character-centered annotations, including visual bounding boxes, behaviors, and emotions of main characters, as well as coreference-resolved scripts. Additionally, we provide analyses of the dataset as well as a Dual Matching Multistream model that effectively learns character-centered representations of video to answer questions about the video. We plan to release our dataset and model publicly for research purposes and expect that our work will provide a new perspective on video story understanding research. Full Article
ca Error estimates for the Cahn--Hilliard equation with dynamic boundary conditions. (arXiv:2005.03349v1 [math.NA]) By arxiv.org Published On :: A proof of convergence is given for bulk--surface finite element semi-discretisation of the Cahn--Hilliard equation with Cahn--Hilliard-type dynamic boundary conditions in a smooth domain. The semi-discretisation is studied in the weak formulation as a second order system. Optimal-order uniform-in-time error estimates are shown in the $L^2$ and $H^1$ norms. The error estimates are based on a consistency and stability analysis. The proof of stability is performed in an abstract framework, based on energy estimates exploiting the anti-symmetric structure of the second order system. Numerical experiments illustrate the theoretical results. Full Article
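The abstract does not restate the model; for orientation, one common form of the Cahn--Hilliard system with Cahn--Hilliard-type dynamic boundary conditions (as used, e.g., in the Goldstein--Miranville--Schimperna class of models) can be written as below. The paper's exact scaling, mobilities, and potentials may differ.

```latex
% One common form of the Cahn--Hilliard system with Cahn--Hilliard-type
% dynamic boundary conditions; u is the phase field, mu the bulk chemical
% potential, mu_Gamma its surface counterpart, W and W_Gamma double-well
% potentials, Delta_Gamma the Laplace--Beltrami operator, and
% partial_nu the outward normal derivative. The paper's model may differ
% in constants and potentials.
\begin{align*}
  \partial_t u &= \Delta \mu, &
  \mu &= -\Delta u + W'(u) && \text{in } \Omega, \\
  \partial_t u &= \Delta_\Gamma \mu_\Gamma - \partial_\nu \mu, &
  \mu_\Gamma &= -\Delta_\Gamma u + W_\Gamma'(u) + \partial_\nu u && \text{on } \Gamma = \partial\Omega .
\end{align*}
```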