A Shift Selection Strategy for Parallel Shift-Invert Spectrum Slicing in Symmetric Self-Consistent Eigenvalue Computation. (arXiv:1908.06043v2 [math.NA] UPDATED)

The central importance of large scale eigenvalue problems in scientific computation necessitates the development of massively parallel algorithms for their solution. Recent advances in dense numerical linear algebra have enabled the routine treatment of eigenvalue problems with dimensions on the order of hundreds of thousands on the world's largest supercomputers. In cases where dense treatments are not feasible, Krylov subspace methods offer an attractive alternative due to the fact that they do not require storage of the problem matrices. However, demonstration of scalability of either of these classes of eigenvalue algorithms on computing architectures capable of expressing massive parallelism is non-trivial due to communication requirements and serial bottlenecks, respectively. In this work, we introduce the SISLICE method: a parallel shift-invert algorithm for the solution of the symmetric self-consistent field (SCF) eigenvalue problem. The SISLICE method drastically reduces the communication requirement of current parallel shift-invert eigenvalue algorithms through various shift selection and migration techniques based on density of states estimation and k-means clustering, respectively. This work demonstrates the robustness and parallel performance of the SISLICE method on a representative set of SCF eigenvalue problems and outlines research directions which will be explored in future work.
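
Although the paper's shift-selection procedure is more involved, the k-means idea can be sketched simply: cluster a sample of approximate eigenvalues (e.g., drawn from a density-of-states estimate) and use the cluster centers as shift-invert shifts. The numpy-only sketch below is ours and purely illustrative; select_shifts and its parameters are hypothetical names, not the SISLICE API.

```python
import numpy as np

def select_shifts(eig_samples, n_shifts, n_iter=50, seed=0):
    """Pick shifts as 1-D k-means centers of approximate eigenvalue samples."""
    rng = np.random.default_rng(seed)
    shifts = rng.choice(eig_samples, size=n_shifts, replace=False)
    for _ in range(n_iter):
        # Assign every sample to its nearest current shift.
        labels = np.argmin(np.abs(eig_samples[:, None] - shifts[None, :]), axis=1)
        # Move each shift to the mean of its assigned samples.
        for j in range(n_shifts):
            members = eig_samples[labels == j]
            if members.size:
                shifts[j] = members.mean()
    return np.sort(shifts)

# Example: crude eigenvalue estimates concentrated in two spectral clusters.
rng = np.random.default_rng(1)
samples = np.concatenate([rng.normal(-2.0, 0.3, 500), rng.normal(1.5, 0.5, 500)])
print(select_shifts(samples, n_shifts=4))
```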




Single use register automata for data words. (arXiv:1907.10504v2 [cs.FL] UPDATED)

Our starting point is register automata for data words, in the style of Kaminski and Francez. We study the effects of the single-use restriction, which says that a register is emptied immediately after being used. We show that under the single-use restriction, the theory of automata for data words becomes much more robust. The main results are: (a) five different machine models are equivalent as language acceptors, including one-way and two-way single-use register automata; (b) one can recover some of the algebraic theory of languages over finite alphabets, including a version of the Krohn-Rhodes Theorem; (c) there is also a robust theory of transducers, with four equivalent models, including two-way single-use transducers and a variant of streaming string transducers for data words. These results are in contrast with automata for data words without the single-use restriction, where essentially all models are pairwise non-equivalent.




Establishing the Quantum Supremacy Frontier with a 281 Pflop/s Simulation. (arXiv:1905.00444v2 [quant-ph] UPDATED)

Noisy Intermediate-Scale Quantum (NISQ) computers are entering an era in which they can perform computational tasks beyond the capabilities of the most powerful classical computers, thereby achieving "Quantum Supremacy", a major milestone in quantum computing. NISQ Supremacy requires comparison with a state-of-the-art classical simulator. We report HPC simulations of hard random quantum circuits (RQC), which have recently been used as a benchmark for the first experimental demonstration of Quantum Supremacy, sustaining an average performance of 281 Pflop/s (true single precision) on Summit, currently the fastest supercomputer in the world. These simulations were carried out using qFlex, a tensor-network-based classical high-performance simulator of RQCs. Our results show an advantage of many orders of magnitude in energy consumption of NISQ devices over classical supercomputers. In addition, we propose a standard benchmark for NISQ computers based on qFlex.




Parameterised Counting in Logspace. (arXiv:1904.12156v3 [cs.LO] UPDATED)

Stockhusen and Tantau (IPEC 2013) defined the operators paraW and paraBeta for parameterised space complexity classes by allowing bounded nondeterminism with multiple read and read-once access, respectively. Using these operators, they obtained characterisations for the complexity of many parameterisations of natural problems on graphs.

In this article, we study the counting versions of such operators and introduce variants based on tail-nondeterminism, paraW[1] and paraBetaTail, in the setting of parameterised logarithmic space. We examine closure properties of the new classes under the central reductions and arithmetic operations. We also identify a wide range of natural complete problems for our classes in the areas of walk counting in digraphs, first-order model-checking and graph homomorphisms. In doing so, we also see that the closure of #paraBetaTail-L under parameterised logspace parsimonious reductions coincides with #paraBeta-L. We show that the complexity of a parameterised variant of the determinant function is #paraBetaTail-L-hard and can be written as the difference of two functions in #paraBetaTail-L for (0,1)-matrices. Finally, we characterise the new complexity classes in terms of branching programs.




A Fast and Accurate Algorithm for Spherical Harmonic Analysis on HEALPix Grids with Applications to the Cosmic Microwave Background Radiation. (arXiv:1904.10514v4 [math.NA] UPDATED)

The Hierarchical Equal Area isoLatitude Pixelation (HEALPix) scheme is used extensively in astrophysics for data collection and analysis on the sphere. The scheme was originally designed for studying the Cosmic Microwave Background (CMB) radiation, which represents the first light to travel during the early stages of the universe's development and gives the strongest evidence for the Big Bang theory to date. Refined analysis of the CMB angular power spectrum can lead to revolutionary developments in understanding the nature of dark matter and dark energy. In this paper, we present a new method for performing spherical harmonic analysis for HEALPix data, which is a central component to computing and analyzing the angular power spectrum of the massive CMB data sets. The method uses a novel combination of a non-uniform fast Fourier transform, the double Fourier sphere method, and Slevinsky's fast spherical harmonic transform (Slevinsky, 2019). For a HEALPix grid with $N$ pixels (points), the computational complexity of the method is $\mathcal{O}(N\log^2 N)$, with an initial set-up cost of $\mathcal{O}(N^{3/2}\log N)$. This compares favorably with the $\mathcal{O}(N^{3/2})$ runtime complexity of the current methods available in the HEALPix software when multiple maps need to be analyzed at the same time. Using numerical experiments, we demonstrate that the new method also appears to provide better accuracy over the entire angular power spectrum of synthetic data when compared to the current methods, with a convergence rate at least two times higher.




Ranked List Loss for Deep Metric Learning. (arXiv:1903.03238v6 [cs.CV] UPDATED)

The objective of deep metric learning (DML) is to learn embeddings that can capture semantic similarity and dissimilarity information among data points. Existing pairwise or tripletwise loss functions used in DML are known to suffer from slow convergence due to a large proportion of trivial pairs or triplets as the model improves. To remedy this, ranking-motivated structured losses have recently been proposed to incorporate multiple examples and exploit the structured information among them. They converge faster and achieve state-of-the-art performance. In this work, we unveil two limitations of existing ranking-motivated structured losses and propose a novel ranked list loss to solve both of them. First, given a query, only a fraction of data points is incorporated to build the similarity structure. To address this, we propose to build a set-based similarity structure by exploiting all instances in the gallery. The learning setting can be interpreted as few-shot retrieval: given a mini-batch, every example is iteratively used as a query, and the remaining ones compose the gallery to search, i.e., the support set in the few-shot setting. The remaining examples are split into a positive set and a negative set. For every mini-batch, the learning objective of ranked list loss is to make the query closer to the positive set than to the negative set by a margin. Second, previous methods aim to pull positive pairs as close as possible in the embedding space. As a result, the intraclass data distribution tends to be extremely compressed. In contrast, we propose to learn a hypersphere for each class in order to preserve useful similarity structure inside it, which functions as regularisation. Extensive experiments demonstrate the superiority of our proposal by comparing with the state-of-the-art methods on the fine-grained image retrieval task.
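
To make the two design points concrete, here is a minimal numpy sketch (ours, not the authors' code; alpha and margin are illustrative parameter names of our choosing) of a ranked-list-style objective for a single query: negatives are pushed outside a boundary alpha, while positives only need to fall inside a hypersphere of radius alpha - margin rather than collapse onto a single point.

```python
import numpy as np

def ranked_list_loss(query, positives, negatives, alpha=1.2, margin=0.4):
    """Set-based loss for one query over its full positive and negative sets."""
    d_pos = np.linalg.norm(positives - query, axis=1)
    d_neg = np.linalg.norm(negatives - query, axis=1)
    # Positives only have to fall inside the hypersphere of radius alpha - margin,
    # which avoids compressing the intraclass distribution to a single point.
    pos_loss = np.maximum(d_pos - (alpha - margin), 0.0).sum()
    # Negatives are pushed outside the boundary alpha.
    neg_loss = np.maximum(alpha - d_neg, 0.0).sum()
    return pos_loss + neg_loss

rng = np.random.default_rng(0)
q = rng.normal(size=8)
print(ranked_list_loss(q, q + 0.1 * rng.normal(size=(5, 8)),
                       rng.normal(size=(20, 8))))
```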




Deterministic Sparse Fourier Transform with an $\ell_\infty$ Guarantee. (arXiv:1903.00995v3 [cs.DS] UPDATED)

In this paper we revisit the deterministic version of the Sparse Fourier Transform problem, which asks to read only a few entries of $x \in \mathbb{C}^n$ and design a recovery algorithm such that the output of the algorithm approximates $\hat x$, the Discrete Fourier Transform (DFT) of $x$. The randomized case has been well-understood, while the main work in the deterministic case is that of Merhi et al. (J. Fourier Anal. Appl. 2018), which obtains $O(k^2 \log^{-1} k \cdot \log^{5.5} n)$ samples and a similar runtime with the $\ell_2/\ell_1$ guarantee. We focus on the stronger $\ell_\infty/\ell_1$ guarantee and the closely related problem of incoherent matrices. We list our contributions as follows.

1. We find a deterministic collection of $O(k^2 \log n)$ samples for the $\ell_\infty/\ell_1$ recovery in time $O(nk \log^2 n)$, and a deterministic collection of $O(k^2 \log^2 n)$ samples for the $\ell_\infty/\ell_1$ sparse recovery in time $O(k^2 \log^3 n)$.

2. We give new deterministic constructions of incoherent matrices that are row-sampled submatrices of the DFT matrix, via a derandomization of Bernstein's inequality and bounds on exponential sums considered in analytic number theory. Our first construction matches a previous randomized construction of Nelson, Nguyen and Woodruff (RANDOM'12), where there was no constraint on the form of the incoherent matrix.

Our algorithms are nearly sample-optimal, since a lower bound of $\Omega(k^2 + k \log n)$ is known, even for the case where the sensing matrix can be arbitrarily designed. A similar lower bound of $\Omega(k^2 \log n / \log k)$ is known for incoherent matrices.
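
For concreteness, the $\ell_\infty/\ell_1$ guarantee discussed above is commonly stated in the following form (our paraphrase; the constant and normalization may differ from the paper's): the output $x'$ satisfies $$\|x' - \hat x\|_\infty \le \frac{C}{k}\, \min_{k\text{-sparse } z} \|\hat x - z\|_1,$$ i.e., every coordinate of $\hat x$ is recovered up to an error proportional to the $\ell_1$ mass of the tail outside the best $k$-term approximation.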




Identifying Compromised Accounts on Social Media Using Statistical Text Analysis. (arXiv:1804.07247v3 [cs.SI] UPDATED)

Compromised accounts on social networks are regular user accounts that have been taken over by an entity with malicious intent. Since the adversary exploits the already established trust of a compromised account, it is crucial to detect these accounts to limit the damage they can cause. We propose a novel general framework for discovering compromised accounts by semantic analysis of the text messages coming from an account. Our framework is built on the observation that normal users will use language that is measurably different from the language that an adversary would use when the account is compromised. We use our framework to develop specific algorithms that use the difference between the language models of users and adversaries as features in a supervised learning setup. Evaluation results show that the proposed framework is effective for discovering compromised accounts on social networks and that a KL-divergence-based language model feature works best.
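
As a toy illustration of the kind of feature the framework uses (our sketch, not the authors' implementation), one can fit smoothed unigram language models to an account's historical and recent messages and measure the KL divergence between them:

```python
from collections import Counter
import math

def unigram_model(tokens, vocab, eps=1e-6):
    """Additively smoothed unigram distribution over a shared vocabulary."""
    counts = Counter(tokens)
    total = len(tokens) + eps * len(vocab)
    return {w: (counts[w] + eps) / total for w in vocab}

def kl_divergence(p, q):
    return sum(p[w] * math.log(p[w] / q[w]) for w in p)

history = "going to the gym then grabbing coffee with friends".split()
recent = "click this link now to win free prizes click now".split()
vocab = set(history) | set(recent)
p, q = unigram_model(history, vocab), unigram_model(recent, vocab)
# A large divergence suggests the recent messages do not sound like the user.
print(f"KL(history || recent) = {kl_divergence(p, q):.3f}")
```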




Active Intent Disambiguation for Shared Control Robots. (arXiv:2005.03652v1 [cs.RO])

Assistive shared-control robots have the potential to transform the lives of millions of people afflicted with severe motor impairments. The usefulness of shared-control robots typically relies on the underlying autonomy's ability to infer the user's needs and intentions, and the ability to do so unambiguously is often a limiting factor for providing appropriate assistance confidently and accurately. The contributions of this paper are four-fold. First, we introduce the idea of intent disambiguation via control mode selection, and present a mathematical formalism for it. Second, we develop a control mode selection algorithm which selects the control mode in which the user-initiated motion helps the autonomy to maximally disambiguate user intent. Third, we present a pilot study with eight subjects to evaluate the efficacy of the disambiguation algorithm. Our results suggest that the disambiguation system (a) helps to significantly reduce task effort, as measured by the number of button presses, and (b) is of greater utility for more limited control interfaces and more complex tasks. We also observe that (c) subjects demonstrated a wide range of disambiguation request behaviors, with the common thread of concentrating requests early in the execution. As our last contribution, we introduce a novel field-theoretic approach to intent inference inspired by dynamic field theory that works in tandem with the disambiguation scheme.




Where is Linked Data in Question Answering over Linked Data? (arXiv:2005.03640v1 [cs.CL])

We argue that "Question Answering with Knowledge Base" and "Question Answering over Linked Data" are currently two instances of the same problem, despite the fact that one explicitly declares that it deals with Linked Data. We point out the lack of existing methods to evaluate question answering on datasets which exploit external links to the rest of the cloud or share a common schema. To this end, we propose the creation of new evaluation settings to leverage the advantages of the Semantic Web to achieve AI-complete question answering.




Seismic Shot Gather Noise Localization Using a Multi-Scale Feature-Fusion-Based Neural Network. (arXiv:2005.03626v1 [cs.CV])

Deep learning-based models, such as convolutional neural networks, have advanced various segments of computer vision. However, this technology is rarely applied to the seismic shot gather noise localization problem. This letter presents an investigation into the effectiveness of a multi-scale feature-fusion-based network for seismic shot-gather noise localization. Herein, we describe the following: (1) the construction of a real-world dataset for seismic noise localization based on 6,500 seismograms; (2) a multi-scale feature-fusion-based detector that uses MobileNet combined with the Feature Pyramid Net as the backbone; and (3) the Single Shot multi-box detector for box classification/regression. Additionally, we propose the use of the Focal Loss function, which improves the detector's prediction accuracy. The proposed detector achieves an AP@0.5 of 78.67% in our empirical evaluation.




Real-Time Context-aware Detection of Unsafe Events in Robot-Assisted Surgery. (arXiv:2005.03611v1 [cs.RO])

Cyber-physical systems for robotic surgery have enabled minimally invasive procedures with increased precision and shorter hospitalization. However, with the increasing complexity and connectivity of software and the major involvement of human operators in the supervision of surgical robots, there remain significant challenges in ensuring patient safety. This paper presents a safety monitoring system that, given the knowledge of the surgical task being performed by the surgeon, can detect safety-critical events in real-time. Our approach integrates a surgical gesture classifier that infers the operational context from the time-series kinematics data of the robot with a library of erroneous gesture classifiers that, given a surgical gesture, can detect unsafe events. Our experiments using data from two surgical platforms show that the proposed system can detect unsafe events caused by accidental or malicious faults within an average reaction time window of 1,693 milliseconds with an F1 score of 0.88, and human errors within an average reaction time window of 57 milliseconds with an F1 score of 0.76.




Delayed approximate matrix assembly in multigrid with dynamic precisions. (arXiv:2005.03606v1 [cs.MS])

The accurate assembly of the system matrix is an important step in any code that solves partial differential equations on a mesh. We either explicitly set up a matrix, or we work in a matrix-free environment where we have to be able to quickly return matrix entries upon demand. Either way, the construction can become costly due to non-trivial material parameters entering the equations, multigrid codes requiring cascades of matrices that depend upon each other, or dynamic adaptive mesh refinement that necessitates the recomputation of matrix entries or the whole equation system throughout the solve. We propose that these constructions can be performed concurrently with the multigrid cycles. Initial geometric matrices and low accuracy integrations kickstart the multigrid, while improved assembly data is fed to the solver as and when it becomes available. The time to solution is improved as we eliminate an expensive preparation phase traditionally delaying the actual computation. We eliminate algorithmic latency. Furthermore, we desynchronise the assembly from the solution process. This anarchic increase of the concurrency level improves the scalability. Assembly routines are notoriously memory- and bandwidth-demanding. As we work with iteratively improving operator accuracies, we finally propose the use of a hierarchical, lossy compression scheme such that the memory footprint is brought down aggressively where the system matrix entries carry little information or are not yet available with high accuracy.




A Reduced Basis Method For Fractional Diffusion Operators II. (arXiv:2005.03574v1 [math.NA])

We present a novel numerical scheme to approximate the solution map $s \mapsto u(s) := \mathcal{L}^{-s}f$ for partial differential equations involving fractional elliptic operators. Reinterpreting $\mathcal{L}^{-s}$ as an interpolation operator allows us to derive an integral representation of $u(s)$ which includes solutions to parametrized reaction-diffusion problems. We propose a reduced basis strategy on top of a finite element method to approximate its integrand. Unlike prior works, we deduce the choice of snapshots for the reduced basis procedure analytically. Avoiding further discretization, the integral is interpreted in a spectral setting to evaluate the surrogate directly. Its computation boils down to a matrix approximation $L$ of the operator whose inverse is projected to a low-dimensional space, where explicit diagonalization is feasible. The universal character of the underlying $s$-independent reduced space allows the approximation of $(u(s))_{s \in (0,1)}$ in its entirety. We prove exponential convergence rates and confirm the analysis with a variety of numerical examples.
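
For context, a standard integral representation of this kind for $0 < s < 1$ is the Balakrishnan formula (the paper's precise representation may differ in form): $$u(s) = \mathcal{L}^{-s} f = \frac{\sin(\pi s)}{\pi} \int_0^\infty t^{-s} (tI + \mathcal{L})^{-1} f \, dt,$$ where each evaluation of the integrand, $(tI + \mathcal{L})^{-1} f$, is precisely the solution of a reaction-diffusion problem with reaction coefficient $t$.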

Further improvements are proposed in the second part of this investigation to avoid inversion of $L$. Instead, we directly project the matrix to the reduced space, where its negative fractional power is evaluated. A numerical comparison with the predecessor highlights its competitive performance.




MISA: Modality-Invariant and -Specific Representations for Multimodal Sentiment Analysis. (arXiv:2005.03545v1 [cs.CL])

Multimodal Sentiment Analysis is an active area of research that leverages multimodal signals for the affective understanding of user-generated videos. The predominant approach to this task has been to develop sophisticated fusion techniques. However, the heterogeneous nature of the signals creates distributional modality gaps that pose significant challenges. In this paper, we aim to learn effective modality representations to aid the process of fusion. We propose a novel framework, MISA, which projects each modality to two distinct subspaces. The first subspace is modality-invariant, where the representations across modalities learn their commonalities and reduce the modality gap. The second subspace is modality-specific, which is private to each modality and captures their characteristic features. These representations provide a holistic view of the multimodal data, which is used for fusion that leads to task predictions. Our experiments on popular sentiment analysis benchmarks, MOSI and MOSEI, demonstrate significant gains over state-of-the-art models. We also consider the task of Multimodal Humor Detection and experiment on the recently proposed UR_FUNNY dataset. Here too, our model fares better than strong baselines, establishing MISA as a useful multimodal framework.




p for political: Participation Without Agency Is Not Enough. (arXiv:2005.03534v1 [cs.HC])

Participatory Design's vision of democratic participation assumes participants' feelings of agency in envisioning a collective future. But this assumption may be leaky when dealing with vulnerable populations. We reflect on the results of a series of activities aimed at supporting agentic-future-envisionment with a group of sex-trafficking survivors in Nepal. We observed a growing sense among the survivors that they could play a role in bringing about change in their families. They also became aware of how they could interact with available institutional resources. Reflecting on these observations, we argue that building participant agency through small and personal interactions is necessary before demanding larger Political participation. In particular, a value of PD, especially for vulnerable populations, can lie in the process itself if it helps participants position themselves as actors in the larger world.




CounQER: A System for Discovering and Linking Count Information in Knowledge Bases. (arXiv:2005.03529v1 [cs.IR])

Predicate constraints of general-purpose knowledge bases (KBs) like Wikidata, DBpedia and Freebase are often limited to subproperty, domain and range constraints. In this demo we showcase CounQER, a system that illustrates the alignment of counting predicates, like staffSize, and enumerating predicates, like workInstitution^{-1}. In the demonstration session, attendees can inspect these alignments, and will learn about the importance of these alignments for KB question answering and curation. CounQER is available at https://counqer.mpi-inf.mpg.de/spo.




The Danish Gigaword Project. (arXiv:2005.03521v1 [cs.CL])

Danish is a North Germanic/Scandinavian language spoken primarily in Denmark, a country with a tradition of technological and scientific innovation. However, from a technological perspective, the Danish language has received relatively little attention and, as a result, Danish language technology is hard to develop, in part due to a lack of large or broad-coverage Danish corpora. This paper describes the Danish Gigaword project, which aims to construct a freely available one-billion-word corpus of Danish text that represents the breadth of the written language.




An asynchronous distributed and scalable generalized Nash equilibrium seeking algorithm for strongly monotone games. (arXiv:2005.03507v1 [cs.GT])

In this paper, we present three distributed algorithms to solve a class of generalized Nash equilibrium (GNE) seeking problems in strongly monotone games. The first one (SD-GENO) is based on synchronous updates of the agents, while the second and the third (AD-GEED and AD-GENO) represent asynchronous solutions that are robust to communication delays. AD-GENO can be seen as a refinement of AD-GEED, since it only requires node auxiliary variables, enhancing the scalability of the algorithm. Our main contribution is to prove convergence to a variational GNE of the game via an operator-theoretic approach. Finally, we apply the algorithms to network Cournot games and show how different activation sequences and delays affect convergence. We also compare the proposed algorithms to the only other algorithm in the literature (ADAGNES), and observe that AD-GENO outperforms the alternative.




Sunny Pointer: Designing a mouse pointer for people with peripheral vision loss. (arXiv:2005.03504v1 [cs.HC])

We present a new mouse cursor designed to facilitate the use of the mouse by people with peripheral vision loss. The pointer consists of a collection of converging straight lines covering the whole screen and following the position of the mouse cursor. We measured its positive effects with a group of participants with peripheral vision loss of different kinds and we found that it can reduce by a factor of 7 the time required to complete a targeting task using the mouse. Using eye tracking, we show that this system makes it possible to initiate the movement towards the target without having to precisely locate the mouse pointer. Using Fitts' Law, we compare these performances with those of full visual field users in order to understand the relation between the accuracy of the estimated mouse cursor position and the index of performance obtained with our tool.
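
For reference, Fitts' Law is commonly applied in its Shannon formulation, which relates movement time $MT$ to target distance $D$ and target width $W$: $$MT = a + b \log_2\left(\frac{D}{W} + 1\right),$$ where $a$ and $b$ are empirically fitted constants and the logarithmic term is the index of difficulty in bits. Comparing the fitted constants (and the resulting index of performance) across participant groups is what allows relating pointer-position uncertainty to pointing throughput.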




Anonymized GCN: A Novel Robust Graph Embedding Method via Hiding Node Position in Noise. (arXiv:2005.03482v1 [cs.LG])

Graph convolutional networks (GCNs) have achieved state-of-the-art performance on node prediction tasks over graph-structured data. However, as graph attack methods have diversified, research on the robustness of GCNs has lagged behind. In this paper, we design a robust GCN method for node prediction tasks. A graph contains two types of information: node information and connection information. Since attackers usually modify the connection information to interfere with node prediction results, we propose a method that hides the connection information inside a generator, named Anonymized GCN (AN-GCN). By hiding the connection information in the generator through adversarial training, accurate node prediction can be carried out using only a node's index rather than its specific position in the graph. Specifically, we first show that the key to determining the embedding of a specific node is the row of the eigenmatrix of the graph Laplacian corresponding to that node. By making this row the output of the generator, we design a method that hides the node index in noise: taking the corresponding noise as input, the generator produces the node's connection structure, so the structure never needs to be observed directly. The encoder and decoder are then combined in the discriminator, so that after adversarial training the generator and discriminator cooperate to encode and decode the graph and complete the node prediction. Finally, all node positions can be generated from noise at the same time; that is, the generator hides all the connection information of the graph structure. Our evaluation shows that node prediction requires only the initial features and indices of the nodes, and that accuracy does not decrease but instead increases by 0.0293.




Brain-like approaches to unsupervised learning of hidden representations -- a comparative study. (arXiv:2005.03476v1 [cs.NE])

Unsupervised learning of hidden representations has been one of the most vibrant research directions in machine learning in recent years. In this work we study the brain-like Bayesian Confidence Propagating Neural Network (BCPNN) model, recently extended to extract sparse distributed high-dimensional representations. The saliency and separability of the hidden representations when trained on the MNIST dataset are studied using an external classifier, and compared with other unsupervised learning methods, including restricted Boltzmann machines and autoencoders.




Predictions and algorithmic statistics for infinite sequence. (arXiv:2005.03467v1 [cs.IT])

Consider the following prediction problem. Assume that there is a black box that produces bits according to some unknown computable distribution on the binary tree. We know the first $n$ bits $x_1 x_2 \ldots x_n$. We want to know the probability that the next bit is equal to $1$. Solomonoff suggested using the universal semimeasure $m$ for this task. He proved that for every computable distribution $P$ and for every $b \in \{0,1\}$ the following holds: $$\sum_{n=1}^{\infty}\sum_{x:\, l(x)=n} P(x) \bigl(P(b \mid x) - m(b \mid x)\bigr)^2 < \infty.$$ However, Solomonoff's method has a negative aspect: Hutter and Muchnik proved that there are a universal semimeasure $m$, a computable distribution $P$ and a sequence $x_1 x_2 \ldots$ that is random in the Martin-Löf sense such that $P(x_{n+1} \mid x_1 \ldots x_n) - m(x_{n+1} \mid x_1 \ldots x_n) \not\to 0$ as $n \to \infty$. We suggest a new way of prediction: for every finite string $x$ we predict the next bit according to the best (in some sense) distribution for $x$. We prove an analogue of Solomonoff's theorem for our method of prediction, and we show that our method does not suffer from the negative aspect of Solomonoff's method described above.




AIBench: Scenario-distilling AI Benchmarking. (arXiv:2005.03459v1 [cs.PF])

Real-world application scenarios like modern Internet services consist of a diverse mix of AI and non-AI modules with very long and complex execution paths. Using component or micro AI benchmarks alone can lead to error-prone conclusions. This paper proposes a scenario-distilling AI benchmarking methodology. Instead of using real-world applications, we propose permutations of essential AI and non-AI tasks as a scenario-distilling benchmark. We consider scenario-distilling benchmarks, component benchmarks, and micro benchmarks to be three indispensable parts of a benchmark suite. Together with seventeen industry partners, we identify nine important real-world application scenarios. We design and implement a highly extensible, configurable, and flexible benchmark framework. On the basis of the framework, we propose a guideline for building scenario-distilling benchmarks, and present two Internet-service AI benchmarks. The preliminary evaluation shows the advantage of scenario-distilling AI benchmarking over using component or micro AI benchmarks alone. The specifications, source code, testbed, and results are publicly available from the web site this http URL.




Successfully Applying the Stabilized Lottery Ticket Hypothesis to the Transformer Architecture. (arXiv:2005.03454v1 [cs.LG])

Sparse models require less memory for storage and enable faster inference by reducing the necessary number of FLOPs. This is relevant both for time-critical and on-device computations using neural networks. The stabilized lottery ticket hypothesis states that networks can be pruned after none or few training iterations, using a mask computed based on the unpruned converged model. On the transformer architecture and the WMT 2014 English-to-German and English-to-French tasks, we show that stabilized lottery ticket pruning performs similarly to magnitude pruning for sparsity levels of up to 85%, and propose a new combination of pruning techniques that outperforms all other techniques for even higher levels of sparsity. Furthermore, we confirm that a parameter's initial sign, and not its specific value, is the primary factor for successful training, and show that magnitude pruning cannot be used to find winning lottery tickets.
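
A minimal numpy sketch of the masking step described above (our illustration, not the paper's code): the binary mask is computed from the magnitudes of the converged weights and then applied to the weights saved at or near initialisation before retraining.

```python
import numpy as np

def magnitude_mask(converged_weights, sparsity):
    """Keep the (1 - sparsity) fraction of weights with the largest magnitude."""
    threshold = np.quantile(np.abs(converged_weights).ravel(), sparsity)
    return (np.abs(converged_weights) > threshold).astype(converged_weights.dtype)

rng = np.random.default_rng(0)
w_init = rng.normal(scale=0.1, size=(512, 512))   # weights after 0 or few steps
w_conv = rng.normal(scale=0.1, size=(512, 512))   # weights after convergence
mask = magnitude_mask(w_conv, sparsity=0.85)
w_ticket = w_init * mask   # stabilized-lottery-ticket starting point
print(f"retained: {mask.mean():.2%} of weights")
```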




Fine-Grained Analysis of Cross-Linguistic Syntactic Divergences. (arXiv:2005.03436v1 [cs.CL])

The patterns in which the syntax of different languages converges and diverges are often used to inform work on cross-lingual transfer. Nevertheless, little empirical work has been done on quantifying the prevalence of different syntactic divergences across language pairs. We propose a framework for extracting divergence patterns for any language pair from a parallel corpus, building on Universal Dependencies. We show that our framework provides a detailed picture of cross-language divergences, generalizes previous approaches, and lends itself to full automation. We further present a novel dataset, a manually word-aligned subset of the Parallel UD corpus in five languages, and use it to perform a detailed corpus study. We demonstrate the usefulness of the resulting analysis by showing that it can help account for performance patterns of a cross-lingual parser.




The Perceptimatic English Benchmark for Speech Perception Models. (arXiv:2005.03418v1 [cs.CL])

We present the Perceptimatic English Benchmark, an open experimental benchmark for evaluating quantitative models of speech perception in English. The benchmark consists of ABX stimuli along with the responses of 91 American English-speaking listeners. The stimuli test discrimination of a large number of English and French phonemic contrasts. They are extracted directly from corpora of read speech, making them appropriate for evaluating statistical acoustic models (such as those used in automatic speech recognition) trained on typical speech data sets. We show that phone discrimination is correlated with several types of models, and give recommendations for researchers seeking easily calculated norms of acoustic distance on experimental stimuli. We show that DeepSpeech, a standard English speech recognizer, is more specialized for English phoneme discrimination than English listeners are, and is poorly correlated with their behaviour, even though it yields a low error on the decision task given to humans.
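
For readers unfamiliar with ABX testing, the sketch below (ours; real ABX pipelines typically compute DTW distances over frame sequences rather than distances between fixed-length embeddings) shows the basic scoring rule: a trial counts as correct when the stimulus X is closer to the same-category token A than to the other-category token B.

```python
import numpy as np

def abx_accuracy(a_set, b_set, x_set):
    """X belongs to A's category; a trial is correct if d(X, a) < d(X, b)."""
    correct = total = 0
    for a in a_set:
        for b in b_set:
            for x in x_set:
                correct += np.linalg.norm(x - a) < np.linalg.norm(x - b)
                total += 1
    return correct / total

rng = np.random.default_rng(0)
cat_a = rng.normal(0.0, 1.0, size=(5, 16))   # e.g. tokens of one phone category
cat_b = rng.normal(2.0, 1.0, size=(5, 16))   # e.g. tokens of the contrast phone
# Use held-out tokens of category A as the X stimuli.
print(abx_accuracy(cat_a[:3], cat_b, cat_a[3:]))
```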




Detection and Feeder Identification of the High Impedance Fault at Distribution Networks Based on Synchronous Waveform Distortions. (arXiv:2005.03411v1 [eess.SY])

The diagnosis of high impedance faults (HIFs) is a challenge for present-day distribution network protection. The fault current of a HIF is much lower than that of a normal load, and the fault features are significantly affected by the fault scenario. A detection and feeder identification algorithm for HIFs is proposed in this paper, based on high-resolution and synchronous waveform data. In the algorithm, an interval slope is defined to describe the waveform distortions, which guarantees a uniform feature description under various HIF nonlinearities and noise interferences. For three typical types of network neutrals, i.e., isolated neutral, resonant neutral, and low-resistor-earthed neutral, the differences of the distorted components between the zero-sequence currents of healthy and faulty feeders are mathematically deduced, respectively. As a result, the proposed criterion, which is based on the distortion relationships between the zero-sequence currents of feeders and the zero-sequence voltage at the substation, is theoretically supported. 28 HIFs grounded to various materials are tested in a 10 kV distribution network with three neutral types, and are utilized to verify the effectiveness of the proposed algorithm.




Semantic Signatures for Large-scale Visual Localization. (arXiv:2005.03388v1 [cs.CV])

Visual localization is a useful alternative to standard localization techniques. It works by utilizing cameras. In a typical scenario, features are extracted from captured images and compared with geo-referenced databases. Location information is then inferred from the matching results. Conventional schemes mainly use low-level visual features. These approaches offer good accuracy but suffer from scalability issues. In order to assist localization in large urban areas, this work explores a different path by utilizing high-level semantic information. It is found that object information in a street view can facilitate localization. A novel descriptor scheme called "semantic signature" is proposed to summarize this information. A semantic signature consists of type and angle information of visible objects at a spatial location. Several metrics and protocols are proposed for signature comparison and retrieval. They illustrate different trade-offs between accuracy and complexity. Extensive simulation results confirm the potential of the proposed scheme in large-scale applications. This paper is an extended version of a conference paper in CBMI'18. A more efficient retrieval protocol is presented with additional experiment results.
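
As an illustration of what comparing such signatures might look like (our sketch; the paper's actual metrics and protocols differ in detail), consider signatures stored as (object type, bearing angle) pairs and a simple greedy matching cost:

```python
def signature_distance(sig_a, sig_b, angle_weight=1.0 / 180.0):
    """Greedily match same-type objects; unmatched objects are penalised.
    Angle wrap-around is ignored here for brevity."""
    unmatched = list(sig_b)
    cost = 0.0
    for obj_type, angle in sig_a:
        candidates = [s for s in unmatched if s[0] == obj_type]
        if not candidates:
            cost += 1.0   # no object of this type at the candidate location
            continue
        best = min(candidates, key=lambda s: abs(s[1] - angle))
        cost += angle_weight * abs(best[1] - angle)
        unmatched.remove(best)
    return cost + len(unmatched)   # leftover objects are also penalised

query = [("traffic_light", 40.0), ("lamp_post", 250.0)]
reference = [("traffic_light", 35.0), ("lamp_post", 260.0), ("tree", 90.0)]
print(signature_distance(query, reference))
```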




Scoring Root Necrosis in Cassava Using Semantic Segmentation. (arXiv:2005.03367v1 [eess.IV])

Cassava, a major food crop in many parts of Africa, has been severely affected by Cassava Brown Streak Disease (CBSD). The disease affects tuberous roots and presents symptoms that include a yellow/brown, dry, corky necrosis within the starch-bearing tissues. Cassava breeders currently depend on visual inspection to score necrosis in roots based on a qualitative score, which is quite subjective. In this paper we present an approach to automate root necrosis scoring using deep convolutional neural networks with semantic segmentation. Our experiments show that the UNet model performs this task with high accuracy, achieving a mean Intersection over Union (IoU) of 0.90 on the test set. This method provides a means to use a quantitative measure for necrosis scoring on root cross-sections. This is done by segmenting and classifying the necrotized and non-necrotized pixels of cassava root cross-sections without any additional feature engineering.
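
The reported metric is standard; for completeness, here is a small numpy sketch (synthetic data, ours) of per-class IoU averaged over the two classes of a binary necrosis mask:

```python
import numpy as np

def mean_iou(pred, target, n_classes=2):
    """Mean of per-class intersection-over-union scores."""
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

rng = np.random.default_rng(0)
target = (rng.random((64, 64)) > 0.7).astype(int)   # 1 = necrotized pixel
pred = target.copy()
pred[:8] = 0   # simulate some segmentation errors in the top rows
print(f"mean IoU: {mean_iou(pred, target):.3f}")
```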




Probabilistic Hyperproperties of Markov Decision Processes. (arXiv:2005.03362v1 [cs.LO])

We study the specification and verification of hyperproperties for probabilistic systems represented as Markov decision processes (MDPs). Hyperproperties are system properties that describe the correctness of a system as a relation between multiple executions. Hyperproperties generalize trace properties and include information-flow security requirements, like noninterference, as well as requirements like symmetry, partial observation, robustness, and fault tolerance. We introduce the temporal logic PHL, which extends classic probabilistic logics with quantification over schedulers and traces. PHL can express a wide range of hyperproperties for probabilistic systems, including both classical applications, such as differential privacy, and novel applications in areas such as robotics and planning. While the model checking problem for PHL is in general undecidable, we provide methods both for proving and for refuting a class of probabilistic hyperproperties for MDPs.




Self-Supervised Human Depth Estimation from Monocular Videos. (arXiv:2005.03358v1 [cs.CV])

Previous methods on estimating detailed human depth often require supervised training with `ground truth' depth data. This paper presents a self-supervised method that can be trained on YouTube videos without known depth, which makes training data collection simple and improves the generalization of the learned network. The self-supervised learning is achieved by minimizing a photo-consistency loss, which is evaluated between a video frame and its neighboring frames warped according to the estimated depth and the 3D non-rigid motion of the human body. To solve this non-rigid motion, we first estimate a rough SMPL model at each video frame and compute the non-rigid body motion accordingly, which enables self-supervised learning on estimating the shape details. Experiments demonstrate that our method enjoys better generalization and performs much better on data in the wild.




Quantum correlation alignment for unsupervised domain adaptation. (arXiv:2005.03355v1 [quant-ph])

Correlation alignment (CORAL), a representative domain adaptation (DA) algorithm, decorrelates and aligns a labelled source domain dataset to an unlabelled target domain dataset to minimize the domain shift, such that a classifier trained on the source can be applied to predict the target domain labels. In this paper, we implement CORAL on quantum devices using two different methods. One method utilizes quantum basic linear algebra subroutines (QBLAS) to implement CORAL with exponential speedup in the number and dimension of the given data samples. The other method is achieved through a variational hybrid quantum-classical procedure. In addition, numerical experiments with three different types of data sets, namely synthetic data, synthetic-Iris data, and handwritten digit data, are presented to evaluate the performance of our work. The simulation results show that the variational quantum correlation alignment algorithm (VQCORAL) can achieve competitive performance compared with the classical CORAL.
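
For reference, the classical CORAL transformation that is being "quantized" here reduces to two covariance matrix powers: whiten the source features with the inverse square root of their covariance, then re-colour them with the square root of the target covariance. A compact numpy sketch (ours):

```python
import numpy as np

def coral(source, target, eps=1e-5):
    """Align source feature covariance to the target domain's covariance."""
    cs = np.cov(source, rowvar=False) + eps * np.eye(source.shape[1])
    ct = np.cov(target, rowvar=False) + eps * np.eye(target.shape[1])

    def mat_pow(m, p):
        # Matrix power via eigendecomposition (m is symmetric positive definite).
        vals, vecs = np.linalg.eigh(m)
        return vecs @ np.diag(vals ** p) @ vecs.T

    return source @ mat_pow(cs, -0.5) @ mat_pow(ct, 0.5)

rng = np.random.default_rng(0)
xs = rng.normal(size=(200, 4)) @ rng.normal(size=(4, 4))   # correlated source
xt = rng.normal(size=(300, 4))                              # target domain
xs_aligned = coral(xs, xt)
# After alignment the source covariance matches the target covariance.
print(np.allclose(np.cov(xs_aligned, rowvar=False),
                  np.cov(xt, rowvar=False), atol=1e-3))
```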




Wavelet Integrated CNNs for Noise-Robust Image Classification. (arXiv:2005.03337v1 [cs.CV])

Convolutional Neural Networks (CNNs) are generally prone to noise interference, i.e., small image noise can cause drastic changes in the output. To suppress the effect of noise on the final prediction, we enhance CNNs by replacing max-pooling, strided convolution, and average pooling with the Discrete Wavelet Transform (DWT). We present general DWT and Inverse DWT (IDWT) layers applicable to various wavelets, such as Haar, Daubechies, and Cohen wavelets, and design wavelet integrated CNNs (WaveCNets) using these layers for image classification. In WaveCNets, feature maps are decomposed into low-frequency and high-frequency components during downsampling. The low-frequency component stores the main information, including the basic object structures, and is transmitted into the subsequent layers to extract robust high-level features. The high-frequency components, which contain most of the data noise, are dropped during inference to improve the noise-robustness of the WaveCNets. Our experimental results on ImageNet and ImageNet-C (the noisy version of ImageNet) show that WaveCNets, the wavelet integrated versions of VGG, ResNets, and DenseNet, achieve higher accuracy and better noise-robustness than their vanilla versions.
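
A minimal numpy sketch of the underlying idea (ours, for the Haar case only): a 1-level 2-D Haar DWT splits a feature map into one low-frequency and three high-frequency bands; keeping only the low-frequency band as the downsampled map halves the spatial resolution, like pooling, while discarding the bands that carry most pixel noise.

```python
import numpy as np

def haar_dwt2(x):
    """1-level 2-D orthonormal Haar transform of an (H, W) array, H and W even."""
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    ll = (a + b + c + d) / 2.0   # low-low: kept, akin to average pooling
    lh = (a + b - c - d) / 2.0   # horizontal detail
    hl = (a - b + c - d) / 2.0   # vertical detail
    hh = (a - b - c + d) / 2.0   # diagonal detail: carries most pixel noise
    return ll, lh, hl, hh

x = np.arange(16.0).reshape(4, 4)
ll, lh, hl, hh = haar_dwt2(x)
print(ll.shape, hh.shape)   # (2, 2) (2, 2): spatial size halved
```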




Bitvector-aware Query Optimization for Decision Support Queries (extended version). (arXiv:2005.03328v1 [cs.DB])

Bitvector filtering is an important query processing technique that can significantly reduce the cost of execution, especially for complex decision support queries with multiple joins. Despite its wide application, however, its implications for query optimization are not well understood.

In this work, we study how bitvector filters impact query optimization. We show that naively incorporating bitvector filters into query optimization can increase the plan space complexity by an exponential factor in the number of relations in the query. We analyze the plans with bitvector filters for star and snowflake queries in the plan space of right deep trees without cross products. Surprisingly, with some simplifying assumptions, we prove that the minimal-cost plan with bitvector filters can be found among a number of plans that is linear in the number of relations in the query. This greatly reduces the plan space complexity for such queries from exponential to linear.

Motivated by our analysis, we propose an algorithm that accounts for the impact of bitvector filters in query optimization. Our algorithm optimizes the join order for an arbitrary decision support query by choosing from a number of candidate plans that is linear in the number of relations in the query. We implement our algorithm in Microsoft SQL Server as a transformation rule. Our evaluation on both industry-standard benchmarks and customer workloads shows that, compared with the original Microsoft SQL Server, our technique reduces the total CPU execution time by 22%-64% for these workloads, with up to two orders of magnitude reduction in CPU execution time for individual queries.
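
To ground the terminology: a bitvector filter lets the probe side of a hash join discard rows whose join keys cannot possibly match the build side, before the join proper runs. A toy Python illustration (ours, unrelated to the SQL Server implementation):

```python
def build_bitvector(keys, n_bits=1024):
    """Build side publishes a Bloom-style bitvector over join-key hashes."""
    bits = 0
    for k in keys:
        bits |= 1 << (hash(k) % n_bits)
    return bits

def probe_passes(bits, key, n_bits=1024):
    # False positives are possible, false negatives are not,
    # so prefiltering with the bitvector never drops a matching row.
    return bool(bits & (1 << (hash(key) % n_bits)))

dim_keys = [1, 5, 9]                              # filtered dimension table
fact_rows = [(k, f"row{k}") for k in range(20)]   # fact table (probe side)
bv = build_bitvector(dim_keys)
survivors = [r for r in fact_rows if probe_passes(bv, r[0])]
print(len(survivors), "of", len(fact_rows), "probe rows survive the filter")
```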




Global Distribution of Google Scholar Citations: A Size-independent Institution-based Analysis. (arXiv:2005.03324v1 [cs.DL])

Most currently available schemes for performance-based ranking of universities or research organizations, such as Quacquarelli Symonds (QS), Times Higher Education (THE), and the Shanghai-based Academic Ranking of World Universities (ARWU), use a variety of criteria that include productivity, citations, awards, reputation, etc., while Leiden and Scimago use only bibliometric indicators. The research performance evaluation in the aforesaid cases is based on bibliometric data from Web of Science or Scopus, which are commercially available priced databases. Their coverage includes peer-reviewed journals and conference proceedings. Google Scholar (GS), on the other hand, provides a free and open alternative for obtaining citations of papers available on the net (though it is not clear exactly which journals are covered). Citations are collected automatically from the net and also added to self-created individual author profiles under Google Scholar Citations (GSC). This data was used by the Webometrics Lab, Spain, to create a ranked list of 4000+ institutions in 2016, based on citations from only the top 10 individual GSC profiles in each organization. (GSC excludes the top paper for reasons explained in the text; the simple selection procedure makes the ranked list size-independent, as claimed by the Cybermetrics Lab.) Using this data (Transparent Ranking, TR, 2016), we find the regional and country-wise distribution of GS-TR citations. The size-independent ranked list is subdivided into deciles of 400 institutions each, and the number of institutions and citations of each country is obtained for each decile. We test for correlation between institutional ranks in GS-TR and the other ranking schemes for the top 20 institutions.




Specification and Automated Analysis of Inter-Parameter Dependencies in Web APIs. (arXiv:2005.03320v1 [cs.SE])

Web services often impose inter-parameter dependencies that restrict the way in which two or more input parameters can be combined to form valid calls to the service. Unfortunately, current specification languages for web services like the OpenAPI Specification (OAS) provide no support for the formal description of such dependencies, which makes it hardly possible to automatically discover and interact with services without human intervention. In this article, we present an approach for the specification and automated analysis of inter-parameter dependencies in web APIs. We first present a domain-specific language, called Inter-parameter Dependency Language (IDL), for the specification of dependencies among input parameters in web services. Then, we propose a mapping to translate an IDL document into a constraint satisfaction problem (CSP), enabling the automated analysis of IDL specifications using standard CSP-based reasoning operations. Specifically, we present a catalogue of nine analysis operations on IDL documents that allow computing, for example, whether a given request satisfies all the dependencies of the service. Finally, we present a tool suite including an editor, a parser, an OAS extension, a constraint-programming-aided library, and a test suite supporting IDL specifications and their analyses. Together, these contributions pave the way for a new range of specification-driven applications in areas such as code generation and testing.




A Review of Computer Vision Methods in Network Security. (arXiv:2005.03318v1 [cs.NI])

Network security has become an area of significant importance more than ever, as highlighted by the eye-opening numbers of data breaches, attacks on critical infrastructure, and malware/ransomware/cryptojacker attacks that are reported almost every day. Increasingly, we are relying on networked infrastructure, and with the advent of IoT, billions of devices will be connected to the internet, providing attackers with more opportunities to exploit. Traditional machine learning methods have been frequently used in the context of network security. However, such methods are based mostly on statistical features extracted from sources such as binaries, emails, and packet flows.

On the other hand, recent years have witnessed phenomenal growth in computer vision, mainly driven by advances in the area of convolutional neural networks. At a glance, it is not trivial to see how computer vision methods are related to network security. Nonetheless, there is a significant amount of work highlighting how methods from computer vision can be applied in network security for detecting attacks or building security solutions. In this paper, we provide a comprehensive survey of such work under three topics: (i) phishing attempt detection, (ii) malware detection, and (iii) traffic anomaly detection. Next, we review a set of commercial products for which public information is available and explore how computer vision methods are effectively used in those products. Finally, we discuss existing research gaps and future research directions, especially focusing on how the network security research community and the industry can leverage the exponential growth of computer vision methods to build much more secure networked systems.




Safe Data-Driven Distributed Coordination of Intersection Traffic. (arXiv:2005.03304v1 [math.OC])

This work addresses the problem of traffic management at and near an isolated un-signalized intersection for autonomous and networked vehicles through coordinated optimization of their trajectories. We decompose the trajectory of each vehicle into two phases: the provisional phase and the coordinated phase. A vehicle, upon entering the region of interest, initially operates in the provisional phase, in which the vehicle is allowed to optimize its trajectory but is constrained to guarantee in-lane safety and to not enter the intersection. Periodically, all the vehicles in their provisional phase switch to their coordinated phase, which is obtained by coordinated optimization of the schedule of the vehicles' intersection usage as well as their trajectories. For the coordinated phase, we propose a data-driven solution, in which the intersection usage order is obtained through a data-driven online "classification" and the trajectories are computed sequentially. This approach is computationally very efficient and does not compromise much on optimality. Moreover, it also allows for incorporation of "macro" information such as traffic arrival rates into the solution. We also discuss a distributed implementation of this proposed data-driven sequential algorithm. Finally, we compare the proposed algorithm and its two variants against traditional methods of intersection management and against some existing results in the literature by micro-simulations.




Multi-view data capture using edge-synchronised mobiles. (arXiv:2005.03286v1 [cs.MM])

Multi-view data capture permits free-viewpoint video (FVV) content creation. To this end, several users must capture video streams, calibrated in both time and pose, framing the same object/scene, from different viewpoints. New-generation network architectures (e.g. 5G) promise lower latency and larger bandwidth connections supported by powerful edge computing, properties that seem ideal for reliable FVV capture. We have explored this possibility, aiming to remove the need for bespoke synchronisation hardware when capturing a scene from multiple viewpoints, making it possible through off-the-shelf mobiles. We propose a novel and scalable data capture architecture that exploits edge resources to synchronise and harvest frame captures. We have designed an edge computing unit that supervises the relaying of timing triggers to and from multiple mobiles, in addition to synchronising frame harvesting. We empirically show the benefits of our edge computing unit by analysing latencies and show the quality of 3D reconstruction outputs against an alternative and popular centralised solution based on Unity3D.




Quda: Natural Language Queries for Visual Data Analytics. (arXiv:2005.03257v1 [cs.CL])

Visualization-oriented natural language interfaces (V-NLIs) have been explored and developed in recent years. One challenge faced by V-NLIs is the formation of effective design decisions, which usually requires a deep understanding of user queries. Learning-based approaches have shown potential in V-NLIs and reached state-of-the-art performance in various NLP tasks. However, because of the lack of sufficient training samples that cater to visual data analytics, cutting-edge techniques have rarely been employed to facilitate the development of V-NLIs. We present a new dataset, called Quda, to help V-NLIs understand free-form natural language. Our dataset contains 14,035 diverse user queries annotated with 10 low-level analytic tasks that assist in the deployment of state-of-the-art techniques for parsing complex human language. We achieve this goal by first gathering seed queries with data analysts, who are the target users of V-NLIs. Then we employ extensive crowdsourcing for paraphrase generation and validation. We demonstrate the usefulness of Quda in building V-NLIs by creating a prototype that makes effective design decisions for free-form user queries. We also show that Quda can be beneficial for a wide range of applications in the visualization community by analyzing the design tasks described in academic publications.




DFSeer: A Visual Analytics Approach to Facilitate Model Selection for Demand Forecasting. (arXiv:2005.03244v1 [cs.HC])

Selecting an appropriate model to forecast product demand is critical to the manufacturing industry. However, due to the data complexity, market uncertainty and users' demanding requirements for the model, it is challenging for demand analysts to select a proper model. Although existing model selection methods can reduce the manual burden to some extent, they often fail to present model performance details on individual products and reveal the potential risk of the selected model. This paper presents DFSeer, an interactive visualization system to conduct reliable model selection for demand forecasting based on the products with similar historical demand. It supports model comparison and selection with different levels of details. Besides, it shows the difference in model performance on similar products to reveal the risk of model selection and increase users' confidence in choosing a forecasting model. Two case studies and interviews with domain experts demonstrate the effectiveness and usability of DFSeer.




Mortar-based entropy-stable discontinuous Galerkin methods on non-conforming quadrilateral and hexahedral meshes. (arXiv:2005.03237v1 [math.NA])

High-order entropy-stable discontinuous Galerkin (DG) methods for nonlinear conservation laws reproduce a discrete entropy inequality by combining entropy conservative finite volume fluxes with summation-by-parts (SBP) discretization matrices. In the DG context, on tensor product (quadrilateral and hexahedral) elements, SBP matrices are typically constructed by collocating at Lobatto quadrature points. Recent work has extended the construction of entropy-stable DG schemes to collocation at more accurate Gauss quadrature points.

In this work, we extend entropy-stable Gauss collocation schemes to non-conforming meshes. Entropy-stable DG schemes require computing entropy conservative numerical fluxes between volume and surface quadrature nodes. On conforming tensor product meshes where volume and surface nodes are aligned, flux evaluations are required only between "lines" of nodes. However, on non-conforming meshes, volume and surface nodes are no longer aligned, resulting in a larger number of flux evaluations. We reduce this expense by introducing an entropy-stable mortar-based treatment of non-conforming interfaces via a face-local correction term, and provide necessary conditions for high-order accuracy. Numerical experiments in both two and three dimensions confirm the stability and accuracy of this approach.
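
For readers outside the DG community, the summation-by-parts property referenced above is the discrete analogue of integration by parts: in 1-D, a mass matrix $M$ and differentiation matrix $D$ form an SBP pair if $$M D + (M D)^{\mathsf{T}} = B, \qquad B = \operatorname{diag}(-1, 0, \ldots, 0, 1),$$ so that $u^{\mathsf{T}} M D v + v^{\mathsf{T}} M D u = u^{\mathsf{T}} B v$ mimics $\int_{-1}^{1} (u v' + u' v)\,dx = uv\,\big|_{-1}^{1}$. Entropy-stable schemes exploit this identity to carry entropy estimates from the continuous to the discrete level.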




Diagnosis of Coronavirus Disease 2019 (COVID-19) with Structured Latent Multi-View Representation Learning. (arXiv:2005.03227v1 [eess.IV])

Recently, the outbreak of Coronavirus Disease 2019 (COVID-19) has spread rapidly across the world. Due to the large number of affected patients and the heavy labor for doctors, computer-aided diagnosis with machine learning algorithms is urgently needed, and could largely reduce the efforts of clinicians and accelerate the diagnosis process. Chest computed tomography (CT) has been recognized as an informative tool for diagnosis of the disease. In this study, we propose to conduct the diagnosis of COVID-19 with a series of features extracted from CT images. To fully explore multiple features describing CT images from different views, a unified latent representation is learned which can completely encode information from different aspects of the features and is endowed with a promising class structure for separability. Specifically, completeness is guaranteed with a group of backward neural networks (one for each type of feature), while by using class labels the representation is enforced to be compact within COVID-19/community-acquired pneumonia (CAP) classes and a large margin is guaranteed between the different types of pneumonia. In this way, our model can well avoid overfitting compared to directly projecting high-dimensional features into classes. Extensive experimental results show that the proposed method outperforms all comparison methods, and rather stable performance is observed when varying the number of training samples.




Deeply Supervised Active Learning for Finger Bones Segmentation. (arXiv:2005.03225v1 [cs.CV])

Segmentation is a prerequisite yet challenging task for medical image analysis. In this paper, we introduce a novel deeply supervised active learning approach for finger bones segmentation. The proposed architecture is fine-tuned in an iterative and incremental learning manner. In each step, the deep supervision mechanism guides the learning process of the hidden layers and selects samples to be labeled. Extensive experiments demonstrate that our method achieves competitive segmentation results using fewer labeled samples compared with full annotation.




What comprises a good talking-head video generation?: A Survey and Benchmark. (arXiv:2005.03201v1 [cs.CV])

Over the years, performance evaluation has become essential in computer vision, enabling tangible progress in many sub-fields. While talking-head video generation has become an emerging research topic, existing evaluations of this topic present many limitations. For example, most approaches use human subjects (e.g., via Amazon MTurk) to evaluate their research claims directly. This subjective evaluation is cumbersome, unreproducible, and may impede the evolution of new research. In this work, we present a carefully-designed benchmark for evaluating talking-head video generation with standardized dataset pre-processing strategies. For evaluation, we either propose new metrics or select the most appropriate ones from what we consider the desired properties of a good talking-head video, namely identity preservation, lip synchronization, high video quality, and natural-spontaneous motion. By conducting a thoughtful analysis across several state-of-the-art talking-head generation approaches, we aim to uncover the merits and drawbacks of current methods and point out promising directions for future work. All the evaluation code is available at: https://github.com/lelechen63/talking-head-generation-survey.




Recognizing Exercises and Counting Repetitions in Real Time. (arXiv:2005.03194v1 [cs.CV])

Artificial intelligence technology has become indispensable in a variety of industries, including the fitness industry. Human pose estimation has been one of the most important research topics in computer vision over the last few years. In this project, pose estimation and deep machine learning techniques are combined to analyze performance and report feedback on the repetitions of performed exercises in real time. Involving machine learning technology in the fitness industry could help judges count repetitions of any exercise during weightlifting or CrossFit competitions.
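
A hedged sketch of what the counting step could look like (ours, not the project's pipeline): given a joint-angle signal derived from pose estimation, repetitions appear as periodic peaks that standard peak detection can count.

```python
import numpy as np
from scipy.signal import find_peaks

fps = 30
t = np.arange(0, 10, 1 / fps)                        # 10 s of video
elbow_angle = 90 + 60 * np.sin(2 * np.pi * 0.5 * t)  # ~0.5 Hz curls (synthetic)
elbow_angle += np.random.default_rng(0).normal(0, 3, t.size)  # pose jitter

# Require peaks to be prominent and at least ~1 s apart to ignore jitter.
peaks, _ = find_peaks(elbow_angle, prominence=20, distance=fps)
print(f"counted {len(peaks)} repetitions")   # expect ~5
```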




Distributed Stabilization by Probability Control for Deterministic-Stochastic Large Scale Systems : Dissipativity Approach. (arXiv:2005.03193v1 [eess.SY])

Using a dissipativity approach, we establish a stability condition for the feedback connection of a deterministic dynamical system $\Sigma$ and a stochastic memoryless map $\Psi$. We then extend the result to the class of large scale systems in which $\Sigma$ consists of many sub-systems, and $\Psi$ consists of many "stochastic actuators" and "probability controllers" that control the actuators' output events. We demonstrate the proposed approach by showing design procedures that globally stabilize manufacturing systems while locally balancing the stock levels in any production process.




A Dynamical Perspective on Point Cloud Registration. (arXiv:2005.03190v1 [cs.CV])

We provide a dynamical perspective on the classical problem of 3D point cloud registration with correspondences. A point cloud is considered as a rigid body consisting of particles. The problem of registering two point clouds is formulated as a dynamical system, where the dynamic model point cloud translates and rotates in a viscous environment towards the static scene point cloud, under forces and torques induced by virtual springs placed between each pair of corresponding points. We first show that the potential energy of the system recovers the objective function of maximum likelihood estimation. We then adopt Lyapunov analysis, in particular the invariant set theorem, to analyze the rigid body dynamics and show that the system globally asymptotically tends towards the set of equilibrium points, in which the globally optimal registration solution lies. We conjecture that, besides the globally optimal equilibrium point, the system has either three or infinitely many "spurious" equilibrium points, and that these spurious equilibria are all locally unstable. The case of three spurious equilibria corresponds to a generic shape of the point cloud, while the case of infinitely many spurious equilibria arises when the point cloud exhibits symmetry. Therefore, simulating the dynamics with random perturbations guarantees obtaining the globally optimal registration solution. Numerical experiments support our analysis and conjecture.




A Proposal for Intelligent Agents with Episodic Memory. (arXiv:2005.03182v1 [cs.AI])

In the future we can expect that artificial intelligent agents, once deployed, will be required to learn continually from their experience during their operational lifetime. Such agents will also need to communicate with humans and other agents regarding the content of their experience, in the context of passing along their learnings, for the purpose of explaining their actions in specific circumstances, or simply to relate more naturally to humans concerning experiences the agent acquires that are not necessarily related to their assigned tasks. We argue that, to support these goals, an agent would benefit from an episodic memory; that is, a memory that encodes the agent's experience in such a way that the agent can relive the experience, communicate about it and use its past experience, inclusive of the agent's own past actions, to learn more effective models and policies. In this short paper, we propose one potential approach to provide an AI agent with such capabilities. We draw upon the ever-growing body of work examining the function and operation of the Medial Temporal Lobe (MTL) in mammals to guide us in adding an episodic memory capability to an AI agent composed of artificial neural networks (ANNs). Based on that, we highlight important aspects to be considered in the memory organization, and we propose an architecture combining ANNs and standard Computer Science techniques for supporting storage and retrieval of episodic memories. Despite being initial work, we hope this short paper can spark discussions around the creation of intelligent agents with memory or, at least, provide a different point of view on the subject.