
Compression, inversion, and approximate PCA of dense kernel matrices at near-linear computational complexity. (arXiv:1706.02205v4 [math.NA] UPDATED)

Dense kernel matrices $\Theta \in \mathbb{R}^{N \times N}$ obtained from point evaluations of a covariance function $G$ at locations $\{ x_i \}_{1 \leq i \leq N} \subset \mathbb{R}^d$ arise in statistics, machine learning, and numerical analysis. For covariance functions that are Green's functions of elliptic boundary value problems and homogeneously-distributed sampling points, we show how to identify a subset $S \subset \{ 1, \dots, N \}^2$, with $\# S = O( N \log(N) \log^{d}( N/\epsilon ) )$, such that the zero fill-in incomplete Cholesky factorization of the sparse matrix $\Theta_{ij} 1_{(i,j) \in S}$ is an $\epsilon$-approximation of $\Theta$. This factorization can provably be obtained in complexity $O( N \log(N) \log^{d}( N/\epsilon ) )$ in space and $O( N \log^{2}(N) \log^{2d}( N/\epsilon ) )$ in time, improving upon the state of the art for general elliptic operators; we further present numerical evidence that $d$ can be taken to be the intrinsic dimension of the data set rather than that of the ambient space. The algorithm only needs to know the spatial configuration of the $x_i$ and does not require an analytic representation of $G$. Furthermore, this factorization straightforwardly provides an approximate sparse PCA with optimal rate of convergence in the operator norm. Hence, by using only subsampling and the incomplete Cholesky factorization, we obtain, at nearly linear complexity, the compression, inversion and approximate PCA of a large class of covariance matrices. By inverting the order of the Cholesky factorization we also obtain a solver for elliptic PDE with complexity $O( N \log^{d}( N/\epsilon ) )$ in space and $O( N \log^{2d}( N/\epsilon ) )$ in time, improving upon the state of the art for general elliptic operators.
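To make the factorization step concrete, here is a minimal dense sketch of a zero fill-in incomplete Cholesky restricted to a sparsity set $S$. It is only illustrative: it uses the natural elimination ordering and a naive distance-based pattern, not the multiresolution ordering and aggregation that the paper's complexity and accuracy guarantees rely on.

```python
import numpy as np

def ichol_pattern(theta, S):
    """Zero fill-in incomplete Cholesky: L is computed only on the
    prescribed lower-triangular pattern S (a set of (i, j) pairs);
    all off-pattern entries stay exactly zero."""
    n = theta.shape[0]
    L = np.zeros_like(theta)
    for j in range(n):
        # Diagonal entry: subtract contributions of previous columns.
        d = theta[j, j] - np.dot(L[j, :j], L[j, :j])
        L[j, j] = np.sqrt(max(d, 1e-12))  # guard against breakdown
        for i in range(j + 1, n):
            if (i, j) in S:  # off-pattern entries receive no fill-in
                L[i, j] = (theta[i, j] - np.dot(L[i, :j], L[j, :j])) / L[j, j]
    return L

# Toy example: exponential kernel on random points; S keeps nearby pairs only.
rng = np.random.default_rng(0)
x = rng.random((200, 2))
dist = np.linalg.norm(x[:, None] - x[None, :], axis=-1)
theta = np.exp(-10 * dist)
S = {(i, j) for i in range(200) for j in range(i) if dist[i, j] < 0.2}
L = ichol_pattern(theta, S)
err = np.linalg.norm(theta - L @ L.T, 2) / np.linalg.norm(theta, 2)
print(f"relative operator-norm error: {err:.3e}")
```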





Active Intent Disambiguation for Shared Control Robots. (arXiv:2005.03652v1 [cs.RO])

Assistive shared-control robots have the potential to transform the lives of millions of people afflicted with severe motor impairments. The usefulness of shared-control robots typically relies on the underlying autonomy's ability to infer the user's needs and intentions, and the ability to do so unambiguously is often a limiting factor for providing appropriate assistance confidently and accurately. The contributions of this paper are four-fold. First, we introduce the idea of intent disambiguation via control mode selection, and present a mathematical formalism for it. Second, we develop a control mode selection algorithm which selects the control mode in which the user-initiated motion helps the autonomy to maximally disambiguate user intent. Third, we present a pilot study with eight subjects to evaluate the efficacy of the disambiguation algorithm. Our results suggest that the disambiguation system (a) helps to significantly reduce task effort, as measured by the number of button presses, and (b) is of greater utility for more limited control interfaces and more complex tasks. We also observe that (c) subjects demonstrated a wide range of disambiguation request behaviors, with the common thread of concentrating requests early in the execution. As our last contribution, we introduce a novel field-theoretic approach to intent inference, inspired by dynamic field theory, that works in tandem with the disambiguation scheme.





Defending Hardware-based Malware Detectors against Adversarial Attacks. (arXiv:2005.03644v1 [cs.CR])

In the era of the Internet of Things (IoT), malware has been proliferating exponentially over the past decade. Traditional anti-virus software is ineffective against modern, complex malware. To address this challenge, researchers have proposed Hardware-assisted Malware Detection (HMD) using Hardware Performance Counters (HPCs). The HPCs are used to train a set of machine learning (ML) classifiers, which, in turn, are used to distinguish benign programs from malware. Recently, adversarial attacks have been designed that introduce perturbations in the HPC traces using an adversarial sample predictor to misclassify a program for specific HPCs. These attacks are designed under the basic assumption that the attacker is aware of the HPCs being used to detect malware. Since modern processors contain hundreds of HPCs, restricting detection to only a few of them aids the attacker. In this paper, we propose a Moving Target Defense (MTD) against this adversarial attack by designing multiple ML classifiers trained on different sets of HPCs. The MTD randomly selects a classifier, thus confusing the attacker about the HPCs or the number of classifiers applied. We have developed an analytical model which proves that the probability of an attacker guessing the perfect HPC-classifier combination for the MTD is extremely low (in the range of $10^{-1864}$ for a system with 20 HPCs). Our experimental results show that the proposed defense improves the classification accuracy of HPC traces that have been modified through an adversarial sample generator by up to 31.5%, for a near-perfect (99.4%) restoration of the original accuracy.
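A minimal sketch of the moving-target idea, assuming sklearn-style estimators with fit/predict; the analytical model and the HPC feature extraction are outside the scope of this snippet.

```python
import random
import numpy as np

class MovingTargetHMD:
    """Sketch of the moving-target defense: several ML detectors, each
    trained on a different random subset of HPCs, with one detector picked
    at random per query. `make_classifier` is an assumed factory returning
    any sklearn-style estimator."""

    def __init__(self, n_hpcs, n_classifiers, subset_size, make_classifier,
                 traces, labels, seed=0):
        self.rng = random.Random(seed)
        self.ensemble = []
        for _ in range(n_classifiers):
            cols = self.rng.sample(range(n_hpcs), subset_size)
            clf = make_classifier().fit(traces[:, cols], labels)
            self.ensemble.append((cols, clf))

    def classify(self, trace):
        # A fresh random draw per invocation: the attacker cannot predict
        # which HPC subset the current decision depends on.
        cols, clf = self.rng.choice(self.ensemble)
        return clf.predict(trace[cols].reshape(1, -1))[0]
```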





On Exposure Bias, Hallucination and Domain Shift in Neural Machine Translation. (arXiv:2005.03642v1 [cs.CL])

The standard training algorithm in neural machine translation (NMT) suffers from exposure bias, and alternative algorithms have been proposed to mitigate this. However, the practical impact of exposure bias is under debate. In this paper, we link exposure bias to another well-known problem in NMT, namely the tendency to generate hallucinations under domain shift. In experiments on three datasets with multiple test domains, we show that exposure bias is partially to blame for hallucinations, and that training with Minimum Risk Training, which avoids exposure bias, can mitigate this. Our analysis explains why exposure bias is more problematic under domain shift, and also links exposure bias to the beam search problem, i.e. performance deterioration with increasing beam size. Our results provide a new justification for methods that reduce exposure bias: even if they do not increase performance on in-domain test sets, they can increase model robustness to domain shift.





Where is Linked Data in Question Answering over Linked Data?. (arXiv:2005.03640v1 [cs.CL])

We argue that "Question Answering with Knowledge Base" and "Question Answering over Linked Data" are currently two instances of the same problem, even though one of them explicitly declares to deal with Linked Data. We point out the lack of existing methods for evaluating question answering on datasets that exploit external links to the rest of the cloud or share a common schema. To this end, we propose the creation of new evaluation settings that leverage the advantages of the Semantic Web to achieve AI-complete question answering.





Multi-task Learning with Alignment Loss for Far-field Small-Footprint Keyword Spotting. (arXiv:2005.03633v1 [eess.AS])

In this paper, we focus on the task of small-footprint keyword spotting under the far-field scenario. Far-field environments are commonly encountered in real-life speech applications, and they cause severe degradation of performance due to room reverberation and various kinds of noise. Our baseline system is built on a convolutional neural network trained on pooled data of both far-field and close-talking speech. To cope with the distortions, we adopt a multi-task learning scheme with an alignment loss to reduce the mismatch between the embedding features learned from different domains of data. Experimental results show that our proposed method maintains performance on close-talking speech and achieves significant improvement on the far-field test set.
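The alignment loss itself is not spelled out in the abstract; the sketch below assumes a simple L2 penalty between embeddings of paired far-field and close-talking utterances, which is one common way to realize such a term.

```python
import numpy as np

def alignment_loss(emb_far, emb_close):
    """Assumed alignment term: mean squared distance between embeddings of
    parallel far-field and close-talking utterances (shape: batch x dim)."""
    return np.mean(np.sum((emb_far - emb_close) ** 2, axis=1))

def total_loss(kws_loss_far, kws_loss_close, emb_far, emb_close, lam=0.1):
    # Multi-task objective: keyword-spotting loss on both domains plus an
    # alignment penalty pulling the two embedding spaces together.
    return kws_loss_far + kws_loss_close + lam * alignment_loss(emb_far, emb_close)
```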





The Zhou Ordinal of Labelled Markov Processes over Separable Spaces. (arXiv:2005.03630v1 [cs.LO])

There exist two notions of equivalence of behavior between states of a Labelled Markov Process (LMP): state bisimilarity and event bisimilarity. The first can be considered an appropriate generalization to continuous spaces of Larsen and Skou's probabilistic bisimilarity, while the second is characterized by a natural logic. C. Zhou expressed state bisimilarity as the greatest fixed point of an operator $\mathcal{O}$, and thus introduced an ordinal measure of the discrepancy between it and event bisimilarity. We call this ordinal the "Zhou ordinal" of $\mathbb{S}$, $\mathfrak{Z}(\mathbb{S})$. When $\mathfrak{Z}(\mathbb{S})=0$, $\mathbb{S}$ satisfies the Hennessy-Milner property. The second author proved the existence of an LMP $\mathbb{S}$ with $\mathfrak{Z}(\mathbb{S}) \geq 1$ and Zhou showed that there are LMPs having an infinite Zhou ordinal. In this paper we show that there are LMPs $\mathbb{S}$ over separable metrizable spaces having arbitrarily large countable $\mathfrak{Z}(\mathbb{S})$ and that it is consistent with the axioms of $\mathit{ZFC}$ that there is such a process with an uncountable Zhou ordinal.





Universal Coding and Prediction on Martin-Löf Random Points. (arXiv:2005.03627v1 [math.PR])

We perform an effectivization of classical results concerning universal coding and prediction for stationary ergodic processes over an arbitrary finite alphabet. That is, we lift the well-known almost sure statements to statements about Martin-Löf random sequences. Most of this work is quite mechanical, but along the way we complete a result of Ryabko from 2008 by showing that each universal probability measure in the sense of universal coding induces a universal predictor in the prequential sense. Surprisingly, the effectivization of this implication holds true provided the universal measure does not ascribe too low conditional probabilities to individual symbols. As an example, we show that the Prediction by Partial Matching (PPM) measure satisfies this requirement. In the almost sure setting, the requirement is superfluous.





Seismic Shot Gather Noise Localization Using a Multi-Scale Feature-Fusion-Based Neural Network. (arXiv:2005.03626v1 [cs.CV])

Deep learning-based models, such as convolutional neural networks, have advanced various segments of computer vision. However, this technology is rarely applied to the seismic shot-gather noise localization problem. This letter presents an investigation of the effectiveness of a multi-scale feature-fusion-based network for seismic shot-gather noise localization. Herein, we describe the following: (1) the construction of a real-world dataset for seismic noise localization based on 6,500 seismograms; (2) a multi-scale feature-fusion-based detector that uses MobileNet combined with the Feature Pyramid Network as the backbone; and (3) the Single Shot multi-box detector for box classification/regression. Additionally, we propose the use of the Focal Loss function, which improves the detector's prediction accuracy. The proposed detector achieves an AP@0.5 of 78.67% in our empirical evaluation.
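The Focal Loss mentioned above follows the standard formulation of Lin et al.; a NumPy sketch for the binary box-classification case:

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Focal loss for binary box classification: down-weights easy,
    well-classified examples so training focuses on hard boxes.
    p: predicted probability of the positive class; y: 0/1 labels."""
    p_t = np.where(y == 1, p, 1.0 - p)             # probability of the true class
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)  # class-balancing weight
    return -np.mean(alpha_t * (1.0 - p_t) ** gamma * np.log(p_t + 1e-12))

# Easy example (p_t = 0.95) contributes far less than a hard one (p_t = 0.3).
print(focal_loss(np.array([0.95, 0.3]), np.array([1, 1])))
```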





Learning Robust Models for e-Commerce Product Search. (arXiv:2005.03624v1 [cs.CL])

Showing items that do not match search query intent degrades customer experience in e-commerce. These mismatches result from counterfactual biases of the ranking algorithms toward noisy behavioral signals such as clicks and purchases in the search logs. Mitigating the problem requires a large labeled dataset, which is expensive and time-consuming to obtain. In this paper, we develop a deep, end-to-end model that learns to effectively classify mismatches and to generate hard mismatched examples to improve the classifier. We train the model end-to-end by introducing a latent variable into the cross-entropy loss that alternates between using the real and generated samples. This not only makes the classifier more robust but also boosts the overall ranking performance. Our model achieves a relative gain over baselines of more than 26% in F-score and over 17% in area under the PR curve. On live search traffic, our model shows significant improvement in multiple countries.





Technical Report of "Deductive Joint Support for Rational Unrestricted Rebuttal". (arXiv:2005.03620v1 [cs.AI])

In ASPIC-style structured argumentation an argument can rebut another argument by attacking its conclusion. Two ways of formalizing rebuttal have been proposed: In restricted rebuttal, the attacked conclusion must have been arrived at with a defeasible rule, whereas in unrestricted rebuttal, it may have been arrived at with a strict rule, as long as at least one of the antecedents of this strict rule was already defeasible. One systematic way of choosing between various possible definitions of a framework for structured argumentation is to study what rationality postulates are satisfied by which definition, for example whether the closure postulate holds, i.e. whether the accepted conclusions are closed under strict rules. While having some benefits, the proposal to use unrestricted rebuttal faces the problem that the closure postulate only holds for the grounded semantics but fails when other argumentation semantics are applied, whereas with restricted rebuttal the closure postulate always holds. In this paper we propose that ASPIC-style argumentation can benefit from keeping track not only of the attack relation between arguments, but also the relation of deductive joint support that holds between a set of arguments and an argument that was constructed from that set using a strict rule. By taking this deductive joint support relation into account while determining the extensions, the closure postulate holds with unrestricted rebuttal under all admissibility-based semantics. We define the semantics of deductive joint support through the flattening method.





Real-Time Context-aware Detection of Unsafe Events in Robot-Assisted Surgery. (arXiv:2005.03611v1 [cs.RO])

Cyber-physical systems for robotic surgery have enabled minimally invasive procedures with increased precision and shorter hospitalization. However, with the increasing complexity and connectivity of software and the major involvement of human operators in the supervision of surgical robots, there remain significant challenges in ensuring patient safety. This paper presents a safety monitoring system that, given the knowledge of the surgical task being performed by the surgeon, can detect safety-critical events in real time. Our approach integrates a surgical gesture classifier, which infers the operational context from the time-series kinematics data of the robot, with a library of erroneous gesture classifiers that, given a surgical gesture, can detect unsafe events. Our experiments using data from two surgical platforms show that the proposed system can detect unsafe events caused by accidental or malicious faults within an average reaction time window of 1,693 milliseconds with an F1 score of 0.88, and human errors within an average reaction time window of 57 milliseconds with an F1 score of 0.76.





Delayed approximate matrix assembly in multigrid with dynamic precisions. (arXiv:2005.03606v1 [cs.MS])

The accurate assembly of the system matrix is an important step in any code that solves partial differential equations on a mesh. We either explicitly set up a matrix, or we work in a matrix-free environment where we have to be able to quickly return matrix entries upon demand. Either way, the construction can become costly due to non-trivial material parameters entering the equations, multigrid codes requiring cascades of matrices that depend upon each other, or dynamic adaptive mesh refinement that necessitates the recomputation of matrix entries or the whole equation system throughout the solve. We propose that these constructions can be performed concurrently with the multigrid cycles. Initial geometric matrices and low accuracy integrations kickstart the multigrid, while improved assembly data is fed to the solver as and when it becomes available. The time to solution is improved as we eliminate an expensive preparation phase traditionally delaying the actual computation. We eliminate algorithmic latency. Furthermore, we desynchronise the assembly from the solution process. This anarchic increase of the concurrency level improves the scalability. Assembly routines are notoriously memory- and bandwidth-demanding. As we work with iteratively improving operator accuracies, we finally propose the use of a hierarchical, lossy compression scheme such that the memory footprint is brought down aggressively where the system matrix entries carry little information or are not yet available with high accuracy.





COVID-19 Contact-tracing Apps: A Survey on the Global Deployment and Challenges. (arXiv:2005.03599v1 [cs.CR])

In response to the coronavirus disease (COVID-19) outbreak, there is an ever-increasing number of national governments that are rolling out contact-tracing Apps to aid the containment of the virus. The first hugely contentious issue facing the Apps is the deployment framework, i.e. centralised or decentralised. Based on this, the debate branches out to the corresponding technologies that underpin these architectures, i.e. GPS, QR codes, and Bluetooth. This work conducts a pioneering review of the above scenarios and contributes a geolocation mapping of the current deployment. The vulnerabilities and the directions of research are identified, with a special focus on the Bluetooth-based decentralised scheme.





A Local Spectral Exterior Calculus for the Sphere and Application to the Shallow Water Equations. (arXiv:2005.03598v1 [math.NA])

We introduce $\Psi\mathrm{ec}$, a local spectral exterior calculus for the two-sphere $S^2$. $\Psi\mathrm{ec}$ provides a discretization of Cartan's exterior calculus on $S^2$ formed by spherical differential $r$-form wavelets. These are well localized in space and frequency and provide (Stevenson) frames for the homogeneous Sobolev spaces $\dot{H}^{-r+1}( \Omega_{\nu}^{r} , S^2 )$ of differential $r$-forms. At the same time, they satisfy important properties of the exterior calculus, such as the de Rham complex and the Hodge-Helmholtz decomposition. Through this, $\Psi\mathrm{ec}$ is tailored towards structure-preserving discretizations that can adapt to solutions with varying regularity. The construction of $\Psi\mathrm{ec}$ is based on a novel spherical wavelet frame for $L_2(S^2)$ that we obtain by introducing scalable reproducing kernel frames. These extend scalable frames to weighted sampling expansions and provide an alternative to quadrature rules for the discretization of needlet-like scale-discrete wavelets. We verify the practicality of $\Psi\mathrm{ec}$ for numerical computations using the rotating shallow water equations. Our numerical results demonstrate that a $\Psi\mathrm{ec}$-based discretization of the equations attains accuracy comparable to that of spectral methods while using a representation that is well localized in space and frequency.





Efficient Exact Verification of Binarized Neural Networks. (arXiv:2005.03597v1 [cs.AI])

We present a new system, EEV, for verifying binarized neural networks (BNNs). We formulate BNN verification as a Boolean satisfiability problem (SAT) with reified cardinality constraints of the form $y = (x_1 + \cdots + x_n \le b)$, where $x_i$ and $y$ are Boolean variables possibly with negation and $b$ is an integer constant. We also identify two properties, specifically balanced weight sparsity and lower cardinality bounds, that reduce the verification complexity of BNNs. EEV contains both a SAT solver enhanced to handle reified cardinality constraints natively and novel training strategies designed to reduce verification complexity by delivering networks with improved sparsity properties and cardinality bounds. We demonstrate the effectiveness of EEV by presenting the first exact verification results for $\ell_{\infty}$-bounded adversarial robustness of nontrivial convolutional BNNs on the MNIST and CIFAR10 datasets. Our results also show that, depending on the dataset and network architecture, our techniques verify BNNs between ten and ten thousand times faster than the best previous exact verification techniques for either binarized or real-valued networks.
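For readers unfamiliar with reified cardinality constraints, the snippet below merely spells out their semantics by brute force on a tiny instance; it says nothing about how EEV's solver handles them natively.

```python
from itertools import product

def holds(y, xs, b):
    """Semantics of the reified cardinality constraint y = (x1+...+xn <= b):
    y is true exactly when at most b of the literals are true."""
    return y == (sum(xs) <= b)

# Exhaustive check on a tiny instance: y <-> (x1 + x2 + x3 <= 1).
for bits in product([0, 1], repeat=4):
    y, xs = bits[0], bits[1:]
    print(bits, "satisfies" if holds(y, xs, 1) else "violates")
```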





A Tale of Two Perplexities: Sensitivity of Neural Language Models to Lexical Retrieval Deficits in Dementia of the Alzheimer's Type. (arXiv:2005.03593v1 [cs.CL])

In recent years there has been a burgeoning interest in the use of computational methods to distinguish between elicited speech samples produced by patients with dementia, and those from healthy controls. The difference between perplexity estimates from two neural language models (LMs) - one trained on transcripts of speech produced by healthy participants and the other trained on transcripts from patients with dementia - as a single feature for diagnostic classification of unseen transcripts has been shown to produce state-of-the-art performance. However, little is known about why this approach is effective, and on account of the lack of case/control matching in the most widely-used evaluation set of transcripts (DementiaBank), it is unclear if these approaches are truly diagnostic, or are sensitive to other variables. In this paper, we interrogate neural LMs trained on participants with and without dementia using synthetic narratives previously developed to simulate progressive semantic dementia by manipulating lexical frequency. We find that perplexity of neural LMs is strongly and differentially associated with lexical frequency, and that a mixture model resulting from interpolating control and dementia LMs improves upon the current state-of-the-art for models trained on transcript text exclusively.
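A sketch of the single perplexity-difference feature described above, assuming a per-token log-probability interface for the two LMs (the interface is an assumption for illustration, not the paper's code):

```python
import math

def perplexity(lm_logprob, tokens):
    """Perplexity of a token sequence under a language model, given a
    function lm_logprob(prefix, token) -> log p(token | prefix)."""
    lp = [lm_logprob(tokens[:i], tokens[i]) for i in range(len(tokens))]
    return math.exp(-sum(lp) / len(tokens))

def perplexity_difference(control_lm, dementia_lm, transcript):
    # Single diagnostic feature: P_control - P_dementia. Transcripts from
    # patients tend to look less surprising to the dementia-trained model,
    # pushing this difference up.
    return perplexity(control_lm, transcript) - perplexity(dementia_lm, transcript)
```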





VM placement over WDM-TDM AWGR PON Based Data Centre Architecture. (arXiv:2005.03590v1 [cs.NI])

Passive optical networks (PONs) can play a vital role in data centres and access fog solutions by providing scalable, cost- and energy-efficient architectures. This paper proposes a Mixed Integer Linear Programming (MILP) model to optimize the placement of virtual machines (VMs) over an energy-efficient WDM-TDM AWGR PON based data centre architecture. In this optimization, the VMs and their requirements affect the optimum number of servers utilized in the data centre when minimizing the power consumption while enabling more efficient utilization of servers. Two power consumption minimization objectives were examined for up to 20 VMs with different computing and networking requirements. The results indicate that considering the minimization of both the processing and the networking power consumption in the allocation of VMs in the WDM-TDM AWGR PON can reduce the networking power consumption by up to 70% compared to the minimization of the processing power consumption alone.





Learning Implicit Text Generation via Feature Matching. (arXiv:2005.03588v1 [cs.CL])

Generative feature matching network (GFMN) is an approach for training implicit generative models for images by performing moment matching on features from pre-trained neural networks. In this paper, we present new GFMN formulations that are effective for sequential data. Our experimental results show the effectiveness of the proposed method, SeqGFMN, for three distinct generation tasks in English: unconditional text generation, class-conditional text generation, and unsupervised text style transfer. SeqGFMN is stable to train and outperforms various adversarial approaches for text generation and text style transfer.





GeoLogic -- Graphical interactive theorem prover for Euclidean geometry. (arXiv:2005.03586v1 [cs.LO])

The domain of mathematical logic in computers is dominated by automated theorem provers (ATPs) and interactive theorem provers (ITPs). Both of these are hard to access by AI from the human-imitation approach: ATPs often use human-unfriendly logical foundations, while ITPs are meant for formalizing existing proofs rather than for problem solving. We aim to create a simple human-friendly logical system for mathematical problem solving. We picked the case study of Euclidean geometry as it can be easily visualized, has simple logic, and yet potentially offers many high-school problems of various difficulty levels. To make the environment user-friendly, we abandoned the strict logic required by ITPs, allowing users to infer topological facts from pictures. We present our system for Euclidean geometry, together with a graphical application GeoLogic, similar to GeoGebra, which allows users to interactively study and prove properties of the geometrical setup.





Simulating Population Protocols in Sub-Constant Time per Interaction. (arXiv:2005.03584v1 [cs.DS])

We consider the problem of efficiently simulating population protocols. In the population model, we are given a distributed system of $n$ agents modeled as identical finite-state machines. In each time step, a pair of agents is selected uniformly at random to interact. In an interaction, agents update their states according to a common transition function. We empirically and analytically analyze two classes of simulators for this model.

First, we consider sequential simulators executing one interaction after the other. Key to the performance of these simulators is the data structure storing the agents' states. For our analysis, we consider plain arrays, binary search trees, and a novel Dynamic Alias Table data structure.

Secondly, we consider batch processing to efficiently update the states of multiple independent agents in one step. For many protocols considered in the literature, our simulator requires amortized sub-constant time per interaction and is fast in practice: given a fixed time budget, the implementation of our batched simulator is able to simulate population protocols several orders of magnitude larger than its sequential competitors can, and can carry out $2^{50}$ interactions among the same number of agents in less than 400 s.
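As a reference point, a plain sequential simulator (the array-based baseline, not the Dynamic Alias Table or the batched variant) takes only a few lines:

```python
import random
from collections import Counter

def simulate(states, delta, steps, rng=random.Random(1)):
    """Sequential population-protocol simulator: one uniformly random
    ordered agent pair per step, states updated via the transition
    function `delta`. States are stored in a plain array (list)."""
    n = len(states)
    for _ in range(steps):
        i = rng.randrange(n)
        j = rng.randrange(n - 1)
        if j >= i:          # sample j != i uniformly
            j += 1
        states[i], states[j] = delta(states[i], states[j])
    return Counter(states)

# Toy epidemic protocol: an informed agent ("I") informs its partner.
delta = lambda a, b: ("I", "I") if "I" in (a, b) else (a, b)
print(simulate(["I"] + ["S"] * 999, delta, 20_000))
```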





A Reduced Basis Method For Fractional Diffusion Operators II. (arXiv:2005.03574v1 [math.NA])

We present a novel numerical scheme to approximate the solution map $s \mapsto u(s) := \mathcal{L}^{-s}f$ for partial differential equations involving fractional elliptic operators. Reinterpreting $\mathcal{L}^{-s}$ as an interpolation operator allows us to derive an integral representation of $u(s)$ which includes solutions to parametrized reaction-diffusion problems. We propose a reduced basis strategy on top of a finite element method to approximate its integrand. Unlike prior works, we deduce the choice of snapshots for the reduced basis procedure analytically. Avoiding further discretization, the integral is interpreted in a spectral setting to evaluate the surrogate directly. Its computation boils down to a matrix approximation $L$ of the operator whose inverse is projected to a low-dimensional space, where explicit diagonalization is feasible. The universal character of the underlying $s$-independent reduced space allows the approximation of $(u(s))_{s \in (0,1)}$ in its entirety. We prove exponential convergence rates and confirm the analysis with a variety of numerical examples.

Further improvements are proposed in the second part of this investigation to avoid inversion of $L$. Instead, we directly project the matrix to the reduced space, where its negative fractional power is evaluated. A numerical comparison with the predecessor highlights its competitive performance.





Enhancing Geometric Factors in Model Learning and Inference for Object Detection and Instance Segmentation. (arXiv:2005.03572v1 [cs.CV])

Deep learning-based object detection and instance segmentation have achieved unprecedented progress. In this paper, we propose Complete-IoU (CIoU) loss and Cluster-NMS for enhancing geometric factors in both bounding box regression and Non-Maximum Suppression (NMS), leading to notable gains of average precision (AP) and average recall (AR), without the sacrifice of inference efficiency. In particular, we consider three geometric factors, i.e., overlap area, normalized central point distance and aspect ratio, which are crucial for measuring bounding box regression in object detection and instance segmentation. The three geometric factors are then incorporated into CIoU loss for better distinguishing difficult regression cases. The training of deep models using CIoU loss results in consistent AP and AR improvements in comparison to widely adopted $\ell_n$-norm loss and IoU-based loss. Furthermore, we propose Cluster-NMS, where NMS during inference is done by implicitly clustering detected boxes, usually requiring fewer iterations. Cluster-NMS is very efficient due to its pure GPU implementation, and geometric factors can be incorporated to improve both AP and AR. In the experiments, CIoU loss and Cluster-NMS have been applied to state-of-the-art instance segmentation (e.g., YOLACT) and object detection (e.g., YOLO v3, SSD and Faster R-CNN) models. Taking YOLACT on MS COCO as an example, our method achieves performance gains of +1.7 AP and +6.2 AR$_{100}$ for object detection, and +0.9 AP and +3.5 AR$_{100}$ for instance segmentation, with 27.1 FPS on one NVIDIA GTX 1080Ti GPU. All the source code and trained models are available at https://github.com/Zzh-tju/CIoU
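The CIoU loss combines the three factors named above: IoU, the normalized center distance over the enclosing-box diagonal, and an aspect-ratio term $v$ with trade-off weight $\alpha$. A small sketch for single axis-aligned boxes in (x1, y1, x2, y2) format, following the published formula:

```python
import math

def ciou_loss(box, gt):
    """CIoU loss = 1 - IoU + rho^2/c^2 + alpha*v for one predicted box
    and one ground-truth box (both assumed non-degenerate)."""
    x1, y1, x2, y2 = box
    gx1, gy1, gx2, gy2 = gt
    # Overlap area factor (IoU).
    iw = max(0.0, min(x2, gx2) - max(x1, gx1))
    ih = max(0.0, min(y2, gy2) - max(y1, gy1))
    inter = iw * ih
    union = (x2 - x1) * (y2 - y1) + (gx2 - gx1) * (gy2 - gy1) - inter
    iou = inter / union
    # Normalized central-point distance: squared center gap over the
    # squared diagonal of the smallest enclosing box.
    cw = max(x2, gx2) - min(x1, gx1)
    ch = max(y2, gy2) - min(y1, gy1)
    rho2 = ((x1 + x2 - gx1 - gx2) ** 2 + (y1 + y2 - gy1 - gy2) ** 2) / 4
    # Aspect-ratio consistency term and its trade-off weight.
    v = (4 / math.pi ** 2) * (math.atan((gx2 - gx1) / (gy2 - gy1))
                              - math.atan((x2 - x1) / (y2 - y1))) ** 2
    alpha = v / (1 - iou + v + 1e-12)
    return 1 - iou + rho2 / (cw ** 2 + ch ** 2) + alpha * v

print(ciou_loss((0, 0, 2, 2), (1, 1, 3, 3)))  # partial overlap -> loss > 0
```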





QuickSync: A Quickly Synchronizing PoS-Based Blockchain Protocol. (arXiv:2005.03564v1 [cs.CR])

To implement a blockchain, we need a blockchain protocol for all the nodes to follow. To design a blockchain protocol, we need a block publisher selection mechanism and a chain selection rule. In Proof-of-Stake (PoS) based blockchain protocols, the block publisher selection mechanism selects the node to publish the next block based on the relative stake held by the node. However, PoS protocols may be vulnerable to fully adaptive corruptions. In the literature, researchers address this issue at the cost of performance.

In this paper, we propose a novel PoS-based blockchain protocol, QuickSync, to achieve security against fully adaptive corruptions without compromising on performance. We propose a metric called block power, a value defined for each block, derived from the output of the verifiable random function based on the digital signature of the block publisher. With this metric, we compute chain power, the sum of block powers of all the blocks comprising the chain, for all the valid chains. These metrics are a function of the block publisher's stake to enable the PoS aspect of the protocol. The chain selection rule selects the chain with the highest chain power as the one to extend. This chain selection rule hence determines the selected block publisher of the previous block. When we use metrics to define the chain selection rule, it may lead to vulnerabilities against Sybil attacks. QuickSync uses a Sybil attack resistant function implemented using histogram matching. We prove that QuickSync satisfies common prefix, chain growth, and chain quality properties and hence it is secure. We also show that it is resilient to different types of adversarial attack strategies. Our analysis demonstrates that QuickSync performs better than Bitcoin by an order of magnitude on both transactions per second and time to finality, and better than Ouroboros v1 by a factor of three on time to finality.
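A heavily simplified sketch of the chain selection rule; the block-power computation below stands in for the paper's VRF-and-histogram-matching construction with a hypothetical hash-based score and a simplified stake weighting, so treat it as pseudocode for the selection logic only.

```python
import hashlib

def block_power(signature, stake, total_stake):
    """Hypothetical stand-in for the VRF-derived block power: a
    pseudo-random score in [0, 1) from the publisher's signature,
    weighted (in a simplified way) by relative stake."""
    u = int.from_bytes(hashlib.sha256(signature).digest(), "big") / 2 ** 256
    return u * (stake / total_stake)

def select_chain(chains):
    # Chain selection rule: extend the valid chain with the highest chain
    # power, i.e. the sum of the block powers of its blocks.
    return max(chains, key=lambda chain: sum(b["power"] for b in chain))
```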





NH-HAZE: An Image Dehazing Benchmark with Non-Homogeneous Hazy and Haze-Free Images. (arXiv:2005.03560v1 [cs.CV])

Image dehazing is an ill-posed problem that has been extensively studied in recent years. The objective performance evaluation of dehazing methods is one of the major obstacles due to the lack of a reference dataset. While synthetic datasets have shown important limitations, the few realistic datasets introduced recently assume homogeneous haze over the entire scene. Since in many real cases haze is not uniformly distributed, we introduce NH-HAZE, a non-homogeneous realistic dataset with pairs of real hazy and corresponding haze-free images. This is the first non-homogeneous image dehazing dataset and it contains 55 outdoor scenes. The non-homogeneous haze has been introduced in the scenes using a professional haze generator that imitates the real conditions of hazy scenes. Additionally, this work presents an objective assessment of several state-of-the-art single-image dehazing methods that were evaluated using the NH-HAZE dataset.





Checking Qualitative Liveness Properties of Replicated Systems with Stochastic Scheduling. (arXiv:2005.03555v1 [cs.LO])

We present a sound and complete method for the verification of qualitative liveness properties of replicated systems under stochastic scheduling. These are systems consisting of a finite-state program, executed by an unknown number of indistinguishable agents, where the next agent to make a move is determined by the result of a random experiment. We show that if a property of such a system holds, then there is always a witness in the shape of a Presburger stage graph: a finite graph whose nodes are Presburger-definable sets of configurations. Due to the high complexity of the verification problem (non-elementary), we introduce an incomplete procedure for the construction of Presburger stage graphs, and implement it on top of an SMT solver. The procedure makes extensive use of the theory of well-quasi-orders, and of the structural theory of Petri nets and vector addition systems. We apply our results to a set of benchmarks, in particular to a large collection of population protocols, a model of distributed computation extensively studied by the distributed computing community.





Online Algorithms to Schedule a Proportionate Flexible Flow Shop of Batching Machines. (arXiv:2005.03552v1 [cs.DS])

This paper is the first to consider online algorithms to schedule a proportionate flexible flow shop of batching machines (PFFB). The scheduling model is motivated by manufacturing processes of individualized medicaments, which are used in modern medicine to treat some serious illnesses. We provide two different online algorithms, also proving lower bounds for the offline problem to compute their competitive ratios. The first algorithm is an easy-to-implement, general local scheduling heuristic. It is 2-competitive for PFFBs with an arbitrary number of stages and for several natural scheduling objectives. We also show that for total/average flow time, no deterministic algorithm with a better competitive ratio exists. For the special case with two stages and the makespan or total completion time objective, we describe an improved algorithm that achieves the best possible competitive ratio $\varphi=\frac{1+\sqrt{5}}{2}$, the golden ratio. All our results also hold for proportionate (non-flexible) flow shops of batching machines (PFB), for which this is also the first paper to study online algorithms.





Credulous Users and Fake News: a Real Case Study on the Propagation in Twitter. (arXiv:2005.03550v1 [cs.SI])

Recent studies have confirmed a growing trend, especially among youngsters, of using Online Social Media as the favourite information platform at the expense of traditional mass media. Indeed, such media can easily reach a wide audience at high speed; but exactly because of this, they are the preferred medium for influencing public opinion via so-called fake news. Moreover, there is general agreement that the main vehicle of fake news is malicious software robots (bots) that automatically interact with human users. In previous work we have considered the problem of tagging human users in Online Social Networks as credulous users. Specifically, we consider credulous those users with a relatively high number of bot friends when compared to the total number of their social friends. We consider this group of users worthy of attention because they might have a higher exposure to malicious activities and may contribute to the spreading of fake information by sharing dubious content. In this work, starting from a dataset of fake news, we investigate the behaviour and the degree of involvement of credulous users in fake news diffusion. The study aims to: (i) fight fake news by considering the content diffused by credulous users; (ii) highlight the relationship between credulous users and fake news spreading; (iii) target fake news detection by focusing on the analysis of specific accounts more exposed to the malicious activities of bots. Our first results demonstrate a strong involvement of credulous users in fake news diffusion. These findings call for tools that, by performing data streaming on credulous users' actions, enable us to perform targeted fact-checking.





MISA: Modality-Invariant and -Specific Representations for Multimodal Sentiment Analysis. (arXiv:2005.03545v1 [cs.CL])

Multimodal Sentiment Analysis is an active area of research that leverages multimodal signals for affective understanding of user-generated videos. The predominant approach to this task has been to develop sophisticated fusion techniques. However, the heterogeneous nature of the signals creates distributional modality gaps that pose significant challenges. In this paper, we aim to learn effective modality representations to aid the process of fusion. We propose a novel framework, MISA, which projects each modality to two distinct subspaces. The first subspace is modality-invariant, where the representations across modalities learn their commonalities and reduce the modality gap. The second subspace is modality-specific, which is private to each modality and captures their characteristic features. These representations provide a holistic view of the multimodal data, which is used for fusion that leads to task predictions. Our experiments on popular sentiment analysis benchmarks, MOSI and MOSEI, demonstrate significant gains over state-of-the-art models. We also consider the task of Multimodal Humor Detection and experiment on the recently proposed UR_FUNNY dataset. Here too, our model fares better than strong baselines, establishing MISA as a useful multimodal framework.





Collaborative Deanonymization. (arXiv:2005.03535v1 [cs.CR])

We propose protocols to resolve the tension between anonymity and accountability in a peer-to-peer manner. Law enforcement can adopt this approach to solve crimes involving cryptocurrency and anonymization techniques. We illustrate how the protocols could apply to Monero rings and CoinJoin transactions in Bitcoin.





p for political: Participation Without Agency Is Not Enough. (arXiv:2005.03534v1 [cs.HC])

Participatory Design's vision of democratic participation assumes participants' feelings of agency in envisioning a collective future. But this assumption may be leaky when dealing with vulnerable populations. We reflect on the results of a series of activities aimed at supporting agentic-future-envisionment with a group of sex-trafficking survivors in Nepal. We observed a growing sense among the survivors that they could play a role in bringing about change in their families. They also became aware of how they could interact with available institutional resources. Reflecting on the observations, we argue that building participant agency on small and personal interactions is necessary before demanding larger Political participation. In particular, a value of PD, especially for vulnerable populations, can lie in the process itself if it helps participants position themselves as actors in the larger world.





Faceted Search of Heterogeneous Geographic Information for Dynamic Map Projection. (arXiv:2005.03531v1 [cs.HC])

This paper proposes a faceted information exploration model that supports coarse-grained and fine-grained focusing of geographic maps by offering a graphical representation of data attributes within interactive widgets. The proposed approach enables (i) a multi-category projection of long-lasting geographic maps, based on the proposal of efficient facets for data exploration in sparse and noisy datasets, and (ii) an interactive representation of the search context based on widgets that support data visualization, faceted exploration, category-based information hiding and transparency of results at the same time. The integration of our model with a semantic representation of geographical knowledge supports the exploration of information retrieved from heterogeneous data sources, such as Public Open Data and OpenStreetMap. We evaluated our model with users in the OnToMap collaborative Web GIS. The experimental results show that, when working on geographic maps populated with multiple data categories, it outperforms simple category-based map projection and traditional faceted search tools, such as checkboxes, in both user performance and experience.





CounQER: A System for Discovering and Linking Count Information in Knowledge Bases. (arXiv:2005.03529v1 [cs.IR])

Predicate constraints of general-purpose knowledge bases (KBs) like Wikidata, DBpedia and Freebase are often limited to subproperty, domain and range constraints. In this demo we showcase CounQER, a system that illustrates the alignment of counting predicates, like staffSize, and enumerating predicates, like workInstitution^{-1}. In the demonstration session, attendees can inspect these alignments, and will learn about the importance of these alignments for KB question answering and curation. CounQER is available at https://counqer.mpi-inf.mpg.de/spo.





Linear Time LexDFS on Chordal Graphs. (arXiv:2005.03523v1 [cs.DM])

Lexicographic Depth First Search (LexDFS) is a special variant of Depth First Search (DFS), which was introduced by Corneil and Krueger in 2008. While this search has been used in various applications, in contrast to other graph searches, no general linear time implementation is known to date. In 2014, Köhler and Mouatadid achieved linear running time to compute some special LexDFS orders for cocomparability graphs. In this paper, we present a linear time implementation of LexDFS for chordal graphs. Our algorithm is able to find any LexDFS order for this graph class. To the best of our knowledge this is the first unrestricted linear time implementation of LexDFS on a non-trivial graph class. In the algorithm we use a search tree computed by Lexicographic Breadth First Search (LexBFS).
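For reference, LexDFS itself is easy to state (the contribution above is making it linear time); here is a generic, quadratic-time implementation of the Corneil-Krueger definition:

```python
def lexdfs(adj, start=0):
    """Generic LexDFS: O(n^2) reference implementation, not the paper's
    linear-time chordal-graph algorithm. At each step the unvisited vertex
    with the lexicographically largest label is chosen, and the current
    step index is *prepended* to its unvisited neighbours' labels
    (prepending is what distinguishes LexDFS from LexBFS)."""
    n = len(adj)
    label = {v: () for v in range(n)}
    label[start] = (n,)                  # force `start` to be chosen first
    order, unvisited = [], set(range(n))
    for step in range(1, n + 1):
        v = max(unvisited, key=lambda u: label[u])  # largest label wins
        unvisited.discard(v)
        order.append(v)
        for w in adj[v]:
            if w in unvisited:
                label[w] = (step,) + label[w]
    return order

# Chordal example: a triangle (0, 1, 2) with a pendant vertex 3.
print(lexdfs({0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}))
```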





The Danish Gigaword Project. (arXiv:2005.03521v1 [cs.CL])

Danish is a North Germanic/Scandinavian language spoken primarily in Denmark, a country with a tradition of technological and scientific innovation. However, from a technological perspective, the Danish language has received relatively little attention and, as a result, Danish language technology is hard to develop, in part due to a lack of large or broad-coverage Danish corpora. This paper describes the Danish Gigaword project, which aims to construct a freely-available one billion word corpus of Danish text that represents the breadth of the written language.





Practical Perspectives on Quality Estimation for Machine Translation. (arXiv:2005.03519v1 [cs.CL])

Sentence-level quality estimation (QE) for machine translation (MT) attempts to predict the translation edit rate (TER) cost of the post-editing work required to correct MT output. We describe our view on sentence-level QE as dictated by several practical setups encountered in the industry. We find consumers of MT output---whether human or algorithmic ones---to be primarily interested in a binary quality metric: is the translated sentence adequate as-is, or does it need post-editing? Motivated by this, we propose a quality classification (QC) view on sentence-level QE whereby we focus on maximizing recall at precision above a given threshold. We demonstrate that, while classical QE regression models fare poorly on this task, they can be re-purposed by replacing the output regression layer with a binary classification one, achieving 50-60% recall at 90% precision. For a high-quality MT system producing 75-80% correct translations, this promises a significant reduction in post-editing work indeed.
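A sketch of the proposed evaluation: sweep thresholds on a binary classifier's scores and report the best recall subject to a precision floor (the 90% operating point mentioned above).

```python
import numpy as np

def recall_at_precision(scores, labels, min_precision=0.9):
    """Best recall achievable while keeping precision >= min_precision,
    with 'adequate as-is' as the positive class (labels in {0, 1})."""
    order = np.argsort(-scores)          # descending score = shrinking threshold
    tp = np.cumsum(labels[order] == 1)
    fp = np.cumsum(labels[order] == 0)
    precision = tp / (tp + fp)
    recall = tp / max(1, (labels == 1).sum())
    ok = precision >= min_precision
    return recall[ok].max() if ok.any() else 0.0

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 1000)
s = y + rng.normal(0, 0.7, 1000)         # noisy scores correlated with labels
print(f"recall at 90% precision: {recall_at_precision(s, y):.2f}")
```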





Two Efficient Device Independent Quantum Dialogue Protocols. (arXiv:2005.03518v1 [quant-ph])

Quantum dialogue is a process of two-way secure and simultaneous communication using a single channel. Recently, a Measurement Device Independent Quantum Dialogue (MDI-QD) protocol has been proposed (Quantum Information Processing 16.12 (2017): 305). To make the protocol secure against information leakage, the authors discarded almost half of the qubits remaining after the error estimation phase. In this paper, we propose two modified versions of the MDI-QD protocol such that the number of discarded qubits is reduced to almost one-fourth of the qubits remaining after the error estimation phase. We use almost half of their discarded qubits along with their used qubits to make our protocol more efficient in qubit count. We show that both of our protocols are secure under the same adversarial model given in the MDI-QD protocol.





An asynchronous distributed and scalable generalized Nash equilibrium seeking algorithm for strongly monotone games. (arXiv:2005.03507v1 [cs.GT])

In this paper, we present three distributed algorithms to solve a class of generalized Nash equilibrium (GNE) seeking problems in strongly monotone games. The first one (SD-GENO) is based on synchronous updates of the agents, while the second and the third (AD-GEED and AD-GENO) represent asynchronous solutions that are robust to communication delays. AD-GENO can be seen as a refinement of AD-GEED, since it only requires node auxiliary variables, enhancing the scalability of the algorithm. Our main contribution is to prove convergence to a variational GNE of the game via an operator-theoretic approach. Finally, we apply the algorithms to network Cournot games and show how different activation sequences and delays affect convergence. We also compare the proposed algorithms to the only other algorithm in the literature (ADAGNES), and observe that AD-GENO outperforms the alternative.





Sunny Pointer: Designing a mouse pointer for people with peripheral vision loss. (arXiv:2005.03504v1 [cs.HC])

We present a new mouse cursor designed to facilitate the use of the mouse by people with peripheral vision loss. The pointer consists of a collection of converging straight lines covering the whole screen and following the position of the mouse cursor. We measured its positive effects with a group of participants with peripheral vision loss of different kinds and found that it can reduce the time required to complete a targeting task with the mouse by a factor of 7. Using eye tracking, we show that this system makes it possible to initiate the movement towards the target without having to precisely locate the mouse pointer. Using Fitts' Law, we compare these performances with those of full visual field users in order to understand the relation between the accuracy of the estimated mouse cursor position and the index of performance obtained with our tool.
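The Fitts' law quantities used in such comparisons are standard; a small helper computing the index of performance in the Shannon formulation:

```python
import math

def index_of_performance(distance, width, movement_time):
    """Shannon form of Fitts' law: ID = log2(D/W + 1) bits of task
    difficulty; IP = ID / MT (bits per second) summarizes pointing
    efficiency for a targeting task."""
    index_of_difficulty = math.log2(distance / width + 1)
    return index_of_difficulty / movement_time

# Example: 800 px of travel to a 40 px target, reached in 1.2 s.
print(f"IP = {index_of_performance(800, 40, 1.2):.2f} bits/s")
```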





Subtle Sensing: Detecting Differences in the Flexibility of Virtually Simulated Molecular Objects. (arXiv:2005.03503v1 [cs.HC])

During VR demos we have performed over the last few years, many participants (in the absence of any haptic feedback) have commented on their perceived ability to 'feel' differences between simulated molecular objects. The mechanisms for such 'feeling' are not entirely clear: observing from outside VR, one can see that there is nothing physical for participants to 'feel'. Here we outline exploratory user studies designed to evaluate the extent to which participants can distinguish quantitative differences in the flexibility of VR-simulated molecular objects. The results suggest that an individual's capacity to detect differences in molecular flexibility is enhanced when they can interact with and manipulate the molecules, as opposed to merely observing the same interaction. Building on these results, we intend to carry out further studies investigating humans' ability to sense quantitative properties of VR simulations without haptic technology.





Heidelberg Colorectal Data Set for Surgical Data Science in the Sensor Operating Room. (arXiv:2005.03501v1 [cs.CV])

Image-based tracking of medical instruments is an integral part of many surgical data science applications. Previous research has addressed the tasks of detecting, segmenting and tracking medical instruments based on laparoscopic video data. However, the methods proposed still tend to fail when applied to challenging images and do not generalize well to data they have not been trained on. This paper introduces the Heidelberg Colorectal (HeiCo) data set - the first publicly available data set enabling comprehensive benchmarking of medical instrument detection and segmentation algorithms with a specific emphasis on robustness and generalization capabilities of the methods. Our data set comprises 30 laparoscopic videos and corresponding sensor data from medical devices in the operating room for three different types of laparoscopic surgery. Annotations include surgical phase labels for all frames in the videos as well as instance-wise segmentation masks for surgical instruments in more than 10,000 individual frames. The data has successfully been used to organize international competitions in the scope of the Endoscopic Vision Challenges (EndoVis) 2017 and 2019.





Computing with bricks and mortar: Classification of waveforms with doped concrete blocks. (arXiv:2005.03498v1 [cs.ET])

We present results showing the capability of a concrete-based information processing substrate in the signal classification task, in accordance with the in materio computing paradigm. As Reservoir Computing is a suitable model for describing embedded in materio computation, we propose that this type of basic construction unit can be used as a source of the "reservoir of states" necessary for simple tuning of the readout layer. In that perspective, buildings constructed from computing concrete could function as highly parallel information processors for smart architecture. We present an electrical characterization of a set of samples with different additive concentrations, followed by a dynamical analysis of selected specimens showing fingerprints of memfractive properties. Moreover, on the basis of the obtained parameters, classification of signal waveform shapes can be performed in scenarios explicitly tuned for a given device terminal.





Subquadratic-Time Algorithms for Normal Bases. (arXiv:2005.03497v1 [cs.SC])

For any finite Galois field extension $\mathsf{K}/\mathsf{F}$, with Galois group $G = \mathrm{Gal}(\mathsf{K}/\mathsf{F})$, there exists an element $\alpha \in \mathsf{K}$ whose orbit $G \cdot \alpha$ forms an $\mathsf{F}$-basis of $\mathsf{K}$. Such an $\alpha$ is called a normal element and $G \cdot \alpha$ is a normal basis. We introduce a probabilistic algorithm for testing whether a given $\alpha \in \mathsf{K}$ is normal, when $G$ is either a finite abelian or a metacyclic group. The algorithm is based on the fact that deciding whether $\alpha$ is normal can be reduced to deciding whether $\sum_{g \in G} g(\alpha)g \in \mathsf{K}[G]$ is invertible; it requires a slightly subquadratic number of operations. Once we know that $\alpha$ is normal, we show how to perform conversions between the working basis of $\mathsf{K}/\mathsf{F}$ and the normal basis with the same asymptotic cost.





Text Recognition in the Wild: A Survey. (arXiv:2005.03492v1 [cs.CV])

The history of text can be traced back over thousands of years. Rich and precise semantic information carried by text is important in a wide range of vision-based application scenarios. Therefore, text recognition in natural scenes has been an active research field in computer vision and pattern recognition. In recent years, with the rise and development of deep learning, numerous methods have shown promise in terms of innovation, practicality, and efficiency. This paper aims to (1) summarize the fundamental problems and the state of the art associated with scene text recognition; (2) introduce new insights and ideas; (3) provide a comprehensive review of publicly available resources; and (4) point out directions for future work. In summary, this literature review attempts to present the entire picture of the field of scene text recognition. It provides a comprehensive reference for people entering this field, and could help to inspire future research. Related resources are available at our Github repository: https://github.com/HCIILAB/Scene-Text-Recognition.





Algorithmic Averaging for Studying Periodic Orbits of Planar Differential Systems. (arXiv:2005.03487v1 [cs.SC])

One of the main open problems in the qualitative theory of real planar differential systems is the study of limit cycles. In this article, we present an algorithmic approach for detecting how many limit cycles can bifurcate from the periodic orbits of a given polynomial differential center when it is perturbed inside a class of polynomial differential systems via the averaging method. We propose four symbolic algorithms to implement the averaging method. The first algorithm is based on the change to polar coordinates that allows one to transform a considered differential system into the normal form of averaging. The second algorithm is used to derive the solutions of certain differential systems associated with the unperturbed term of the normal form of averaging. The third algorithm exploits the partial Bell polynomials and allows one to compute the integral formula of the averaged functions at any order. The last algorithm is based on the aforementioned algorithms and determines the exact expressions of the averaged functions for the considered differential systems. The implementation of our algorithms is discussed and evaluated using several examples. The experimental results have extended the existing relevant results for certain classes of differential systems.
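For orientation, here is the standard first-order statement that algorithms of this kind build on (the paper's Bell-polynomial formula generalizes this to arbitrary order):

```latex
% After the change to polar coordinates, the system takes the normal form
%   dr/d\theta = \varepsilon F_1(\theta, r) + \varepsilon^2 F_2(\theta, r) + \cdots,
% and the first-order averaged function is
f_1(r) = \frac{1}{2\pi} \int_0^{2\pi} F_1(\theta, r)\, \mathrm{d}\theta .
% Each simple zero r^* of f_1 corresponds, for small \varepsilon, to a limit
% cycle bifurcating from the periodic orbit of radius r^* of the center.
```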





Anonymized GCN: A Novel Robust Graph Embedding Method via Hiding Node Position in Noise. (arXiv:2005.03482v1 [cs.LG])

Graph convolutional networks (GCNs) have achieved state-of-the-art performance in the task of node prediction on graph-structured data. However, while graph attack methods have multiplied, the robustness of GCNs has received little study. In this paper, we design a robust GCN method for node prediction tasks. A graph structure contains two types of information, node information and connection information, and attackers usually modify the connection information to interfere with the node prediction results. We therefore propose a method that hides the connection information inside a generator, named Anonymized GCN (AN-GCN). By hiding the connection information in the generator through adversarial training, accurate node prediction can be completed using only the node number rather than the node's specific position in the graph. Specifically, we first identify what determines the embedding of a specific node: the row of the eigenmatrix of the Laplacian matrix corresponding to that node. By targeting this row as the output of the generator, we design a method to hide the node number in the noise: taking the corresponding noise as input, we obtain the connection structure of the node instead of reading it directly from the graph. The encoder and decoder are then spliced together in the discriminator, so that after adversarial training the generator and discriminator can cooperate to encode and decode the graph and thus complete node prediction. Finally, all node positions can be generated from noise at the same time, that is, the generator hides all the connection information of the graph structure. Our evaluation shows that we only need the initial features and node numbers of the nodes to complete node prediction, and that accuracy does not decrease but instead increases by 0.0293.





Brain-like approaches to unsupervised learning of hidden representations -- a comparative study. (arXiv:2005.03476v1 [cs.NE])

Unsupervised learning of hidden representations has been one of the most vibrant research directions in machine learning in recent years. In this work we study the brain-like Bayesian Confidence Propagating Neural Network (BCPNN) model, recently extended to extract sparse distributed high-dimensional representations. The saliency and separability of the hidden representations when trained on the MNIST dataset are studied using an external classifier, and compared with other unsupervised learning methods, including restricted Boltzmann machines and autoencoders.





Bundle Recommendation with Graph Convolutional Networks. (arXiv:2005.03475v1 [cs.IR])

Bundle recommendation aims to recommend a bundle of items for a user to consume as a whole. Existing solutions integrate user-item interaction modeling into bundle recommendation by sharing model parameters or learning in a multi-task manner, which cannot explicitly model the affiliation between items and bundles, and fail to explore the user's decision-making when choosing bundles. In this work, we propose a graph neural network model named BGCN (short for \textit{\textbf{B}undle \textbf{G}raph \textbf{C}onvolutional \textbf{N}etwork}) for bundle recommendation. BGCN unifies user-item interaction, user-bundle interaction and bundle-item affiliation into a heterogeneous graph. With item nodes as the bridge, graph convolutional propagation between user and bundle nodes makes the learned representations capture the item-level semantics. Through training based on a hard-negative sampler, the user's fine-grained preferences for similar bundles are further distinguished. Empirical results on two real-world datasets demonstrate the strong performance gains of BGCN, which outperforms the state-of-the-art baselines by 10.77% to 23.18%.





Ensuring Fairness under Prior Probability Shifts. (arXiv:2005.03474v1 [cs.LG])

In this paper, we study the problem of fair classification in the presence of prior probability shifts, where the training set distribution differs from the test set. This phenomenon can be observed in the yearly records of several real-world datasets, such as recidivism records and medical expenditure surveys. If unaccounted for, such shifts can cause the predictions of a classifier to become unfair towards specific population subgroups. While the fairness notion called Proportional Equality (PE) accounts for such shifts, a procedure to ensure PE-fairness was unknown.

In this work, we propose a method, called CAPE, which provides a comprehensive solution to the aforementioned problem. CAPE makes novel use of prevalence estimation techniques, sampling and an ensemble of classifiers to ensure fair predictions under prior probability shifts. We introduce a metric, called prevalence difference (PD), which CAPE attempts to minimize in order to ensure PE-fairness. We theoretically establish that this metric exhibits several desirable properties.
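The abstract does not give the exact definition of PD; the sketch below assumes a natural reading, the gap between predicted and actual positive prevalence on the (shifted) test data:

```python
import numpy as np

def prevalence_difference(y_pred, y_true):
    """Assumed reading of the PD metric: absolute gap between the
    prevalence of positives in the predictions and in the test labels."""
    return abs(y_pred.mean() - y_true.mean())

# Under a prior probability shift, a classifier calibrated to last year's
# base rate may under-predict positives; prevalence estimation aims to
# drive this gap toward zero.
print(prevalence_difference(np.array([1, 0, 0, 0]), np.array([1, 1, 0, 0])))
```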

We evaluate the efficacy of CAPE via a thorough empirical evaluation on synthetic datasets. We also compare the performance of CAPE with several popular fair classifiers on real-world datasets like COMPAS (criminal risk assessment) and MEPS (medical expenditure panel survey). The results indicate that CAPE ensures PE-fair predictions, while performing well on other performance metrics.





Indexing Metric Spaces for Exact Similarity Search. (arXiv:2005.03468v1 [cs.DB])

With the continued digitalization of societal processes, we are seeing an explosion in available data. This is referred to as big data. In a research setting, three aspects of the data are often viewed as the main sources of challenges when attempting to enable value creation from big data: volume, velocity and variety. Many studies address volume or velocity, while far fewer concern variety. Metric spaces are ideal for addressing variety because they can accommodate any type of data as long as the associated distance notion satisfies the triangle inequality. To accelerate search in metric spaces, a collection of indexing techniques for metric data has been proposed. However, existing surveys each offer only narrow coverage, and no comprehensive empirical study of those techniques exists. We offer a survey of all existing metric indexes that support exact similarity search, by i) summarizing all the existing partitioning, pruning and validation techniques used for metric indexes, ii) providing time and storage complexity analyses of index construction, and iii) reporting on a comprehensive empirical comparison of their similarity query processing performance. Here, empirical comparisons are used to evaluate index performance during search, as it is hard to see the complexity-analysis differences in similarity query processing, and query performance depends on the pruning and validation abilities related to the data distribution. This article aims to reveal the strengths and weaknesses of different indexing techniques in order to offer guidance on selecting an appropriate indexing technique for a given setting, and to direct future research on metric indexes.
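A concrete instance of the pruning techniques surveyed is pivot-based filtering: precomputed object-to-pivot distances and the triangle inequality rule out objects without computing their exact distance to the query. A small NumPy sketch:

```python
import numpy as np

def pivot_filter(q, pivots, d, pivot_dists, radius):
    """Pivot-based candidate filtering: by the triangle inequality,
    |d(q,p) - d(o,p)| > r for any pivot p proves that object o cannot
    lie within radius r of the query q, so only survivors need an
    exact distance computation."""
    dq = np.array([d(q, p) for p in pivots])        # distances query -> pivots
    lower = np.abs(pivot_dists - dq).max(axis=1)    # best lower bound per object
    return np.nonzero(lower <= radius)[0]           # candidates to verify exactly

# Toy setup: Euclidean points, two pivots, precomputed object->pivot table.
rng = np.random.default_rng(0)
objs = rng.random((1000, 2))
pivots = objs[:2]
d = lambda a, b: float(np.linalg.norm(a - b))
table = np.array([[d(o, p) for p in pivots] for o in objs])
cand = pivot_filter(np.array([0.5, 0.5]), pivots, d, table, 0.05)
print(len(cand), "of", len(objs), "objects survive pivot filtering")
```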