
The Zhou Ordinal of Labelled Markov Processes over Separable Spaces. (arXiv:2005.03630v1 [cs.LO])

There exist two notions of equivalence of behavior between states of a Labelled Markov Process (LMP): state bisimilarity and event bisimilarity. The first one can be considered an appropriate generalization to continuous spaces of Larsen and Skou's probabilistic bisimilarity, while the second one is characterized by a natural logic. C. Zhou expressed state bisimilarity as the greatest fixed point of an operator $\mathcal{O}$, and thus introduced an ordinal measure of the discrepancy between it and event bisimilarity. We call this ordinal the "Zhou ordinal" of $\mathbb{S}$, $\mathfrak{Z}(\mathbb{S})$. When $\mathfrak{Z}(\mathbb{S})=0$, $\mathbb{S}$ satisfies the Hennessy-Milner property. The second author proved the existence of an LMP $\mathbb{S}$ with $\mathfrak{Z}(\mathbb{S}) \geq 1$ and Zhou showed that there are LMPs having an infinite Zhou ordinal. In this paper we show that there are LMPs $\mathbb{S}$ over separable metrizable spaces having arbitrarily large countable $\mathfrak{Z}(\mathbb{S})$ and that it is consistent with the axioms of $\mathit{ZFC}$ that there is such a process with an uncountable Zhou ordinal.
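Schematically, and modulo the precise definitions in the paper, the ordinal can be read off a transfinite iteration of $\mathcal{O}$ starting from event bisimilarity: $$R_0 = \text{event bisimilarity}, \qquad R_{\alpha+1} = \mathcal{O}(R_\alpha), \qquad R_\lambda = \bigcap_{\alpha<\lambda} R_\alpha,$$ with $\mathfrak{Z}(\mathbb{S})$ the least ordinal at which this sequence stabilizes at its fixed point, state bisimilarity.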





Universal Coding and Prediction on Martin-Löf Random Points. (arXiv:2005.03627v1 [math.PR])

We perform an effectivization of classical results concerning universal coding and prediction for stationary ergodic processes over an arbitrary finite alphabet. That is, we lift the well-known almost sure statements to statements about Martin-Löf random sequences. Most of this work is quite mechanical but, along the way, we complete a result of Ryabko from 2008 by showing that each universal probability measure in the sense of universal coding induces a universal predictor in the prequential sense. Surprisingly, the effectivization of this implication holds true provided that the universal measure does not ascribe too low conditional probabilities to individual symbols. As an example, we show that the Prediction by Partial Matching (PPM) measure satisfies this requirement. In the almost sure setting, the requirement is superfluous.





Seismic Shot Gather Noise Localization Using a Multi-Scale Feature-Fusion-Based Neural Network. (arXiv:2005.03626v1 [cs.CV])

Deep learning-based models, such as convolutional neural networks, have advanced various segments of computer vision. However, this technology is rarely applied to the seismic shot-gather noise localization problem. This letter presents an investigation of the effectiveness of a multi-scale feature-fusion-based network for seismic shot-gather noise localization. Herein, we describe the following: (1) the construction of a real-world dataset for seismic noise localization based on 6,500 seismograms; (2) a multi-scale feature-fusion-based detector that uses MobileNet combined with the Feature Pyramid Network as the backbone; and (3) the Single Shot multi-box detector for box classification/regression. Additionally, we propose the use of the focal loss function, which improves the detector's prediction accuracy. The proposed detector achieves an AP@0.5 of 78.67\% in our empirical evaluation.
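As a sketch of the loss mentioned above, a standard binary focal loss (shown here with the commonly used defaults $\alpha=0.25$, $\gamma=2$; the paper's exact variant and settings may differ) can be written in PyTorch as:

    import torch
    import torch.nn.functional as F

    def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
        # Focal loss down-weights well-classified examples so training
        # concentrates on hard ones: FL = -alpha_t * (1 - p_t)^gamma * log(p_t).
        ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
        p = torch.sigmoid(logits)
        p_t = p * targets + (1 - p) * (1 - targets)        # prob. of the true class
        alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
        return (alpha_t * (1.0 - p_t) ** gamma * ce).mean()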





Technical Report of "Deductive Joint Support for Rational Unrestricted Rebuttal". (arXiv:2005.03620v1 [cs.AI])

In ASPIC-style structured argumentation an argument can rebut another argument by attacking its conclusion. Two ways of formalizing rebuttal have been proposed: In restricted rebuttal, the attacked conclusion must have been arrived at with a defeasible rule, whereas in unrestricted rebuttal, it may have been arrived at with a strict rule, as long as at least one of the antecedents of this strict rule was already defeasible. One systematic way of choosing between various possible definitions of a framework for structured argumentation is to study what rationality postulates are satisfied by which definition, for example whether the closure postulate holds, i.e. whether the accepted conclusions are closed under strict rules. While having some benefits, the proposal to use unrestricted rebuttal faces the problem that the closure postulate only holds for the grounded semantics but fails when other argumentation semantics are applied, whereas with restricted rebuttal the closure postulate always holds. In this paper we propose that ASPIC-style argumentation can benefit from keeping track not only of the attack relation between arguments, but also the relation of deductive joint support that holds between a set of arguments and an argument that was constructed from that set using a strict rule. By taking this deductive joint support relation into account while determining the extensions, the closure postulate holds with unrestricted rebuttal under all admissibility-based semantics. We define the semantics of deductive joint support through the flattening method.
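For illustration, the closure postulate itself is easy to state operationally; the following minimal sketch (with made-up rule and conclusion names, not the paper's formalism) checks whether a set of accepted conclusions is closed under a set of strict rules:

    strict_rules = {("p", "q"): "r"}  # from {p, q} strictly infer r

    def closed_under_strict_rules(accepted, rules):
        # Closure holds iff every strict rule with all antecedents accepted
        # also has its conclusion accepted.
        return all(conclusion in accepted
                   for antecedents, conclusion in rules.items()
                   if set(antecedents) <= accepted)

    print(closed_under_strict_rules({"p", "q", "r"}, strict_rules))  # True
    print(closed_under_strict_rules({"p", "q"}, strict_rules))       # False: not closed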





Real-Time Context-aware Detection of Unsafe Events in Robot-Assisted Surgery. (arXiv:2005.03611v1 [cs.RO])

Cyber-physical systems for robotic surgery have enabled minimally invasive procedures with increased precision and shorter hospitalization. However, with the increasing complexity and connectivity of software and the major involvement of human operators in the supervision of surgical robots, there remain significant challenges in ensuring patient safety. This paper presents a safety monitoring system that, given the knowledge of the surgical task being performed by the surgeon, can detect safety-critical events in real time. Our approach integrates a surgical gesture classifier, which infers the operational context from the time-series kinematics data of the robot, with a library of erroneous gesture classifiers that, given a surgical gesture, can detect unsafe events. Our experiments using data from two surgical platforms show that the proposed system can detect unsafe events caused by accidental or malicious faults within an average reaction time window of 1,693 milliseconds and with an F1 score of 0.88, and human errors within an average reaction time window of 57 milliseconds and with an F1 score of 0.76.
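A minimal sketch of the two-stage pipeline described above, with placeholder classifier objects (the names and interfaces are illustrative, not the paper's code):

    def monitor(kinematics_window, gesture_classifier, error_detectors):
        # Stage 1: infer the operational context (the current surgical gesture)
        # from a window of time-series kinematics data.
        gesture = gesture_classifier.predict(kinematics_window)
        # Stage 2: apply the gesture-specific erroneous-gesture classifier
        # to decide whether the window contains an unsafe event.
        unsafe = error_detectors[gesture].is_unsafe(kinematics_window)
        return gesture, unsafe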





COVID-19 Contact-tracing Apps: A Survey on the Global Deployment and Challenges. (arXiv:2005.03599v1 [cs.CR])

In response to the coronavirus disease (COVID-19) outbreak, there is an ever-increasing number of national governments that are rolling out contact-tracing Apps to aid the containment of the virus. The first hugely contentious issue facing the Apps is the deployment framework, i.e. centralised or decentralised. Based on this, the debate branches out to the corresponding technologies that underpin these architectures, i.e. GPS, QR codes, and Bluetooth. This work conducts a pioneering review of the above scenarios and contributes a geolocation mapping of the current deployment. The vulnerabilities and the directions of research are identified, with a special focus on the Bluetooth-based decentralised scheme.





A Local Spectral Exterior Calculus for the Sphere and Application to the Shallow Water Equations. (arXiv:2005.03598v1 [math.NA])

We introduce $\Psi\mathrm{ec}$, a local spectral exterior calculus for the two-sphere $S^2$. $\Psi\mathrm{ec}$ provides a discretization of Cartan's exterior calculus on $S^2$ formed by spherical differential $r$-form wavelets. These are well localized in space and frequency and provide (Stevenson) frames for the homogeneous Sobolev spaces $\dot{H}^{-r+1}(\Omega_{\nu}^{r}, S^2)$ of differential $r$-forms. At the same time, they satisfy important properties of the exterior calculus, such as the de Rham complex and the Hodge-Helmholtz decomposition. Through this, $\Psi\mathrm{ec}$ is tailored towards structure-preserving discretizations that can adapt to solutions with varying regularity. The construction of $\Psi\mathrm{ec}$ is based on a novel spherical wavelet frame for $L_2(S^2)$ that we obtain by introducing scalable reproducing kernel frames. These extend scalable frames to weighted sampling expansions and provide an alternative to quadrature rules for the discretization of needlet-like scale-discrete wavelets. We verify the practicality of $\Psi\mathrm{ec}$ for numerical computations using the rotating shallow water equations. Our numerical results demonstrate that a $\Psi\mathrm{ec}$-based discretization of the equations attains accuracy comparable to that of spectral methods while using a representation that is well localized in space and frequency.
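For reference, on the two-sphere the Hodge-Helmholtz decomposition that these wavelets are required to respect takes a particularly simple form for $1$-forms: $$\omega = d\alpha + \delta\beta,$$ with no harmonic component, since the first de Rham cohomology of $S^2$ is trivial.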





Efficient Exact Verification of Binarized Neural Networks. (arXiv:2005.03597v1 [cs.AI])

We present a new system, EEV, for verifying binarized neural networks (BNNs). We formulate BNN verification as a Boolean satisfiability problem (SAT) with reified cardinality constraints of the form $y = (x_1 + \cdots + x_n \le b)$, where $x_i$ and $y$ are Boolean variables possibly with negation and $b$ is an integer constant. We also identify two properties, specifically balanced weight sparsity and lower cardinality bounds, that reduce the verification complexity of BNNs. EEV contains both a SAT solver enhanced to handle reified cardinality constraints natively and novel training strategies designed to reduce verification complexity by delivering networks with improved sparsity properties and cardinality bounds. We demonstrate the effectiveness of EEV by presenting the first exact verification results for $\ell_{\infty}$-bounded adversarial robustness of nontrivial convolutional BNNs on the MNIST and CIFAR10 datasets. Our results also show that, depending on the dataset and network architecture, our techniques verify BNNs ten to ten thousand times faster than the best previous exact verification techniques for either binarized or real-valued networks.
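A brute-force sketch (not the authors' EEV solver) of the semantics of a reified cardinality constraint, enumerating how $y$ is forced by $x_1 + \cdots + x_n \le b$ for $n=3$, $b=1$:

    from itertools import product

    def reified_atmost(xs, b):
        # y = 1 exactly when the at-most-b cardinality constraint holds.
        return int(sum(xs) <= b)

    for xs in product([0, 1], repeat=3):
        print(xs, "-> y =", reified_atmost(xs, 1))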





A Tale of Two Perplexities: Sensitivity of Neural Language Models to Lexical Retrieval Deficits in Dementia of the Alzheimer's Type. (arXiv:2005.03593v1 [cs.CL])

In recent years there has been a burgeoning interest in the use of computational methods to distinguish between elicited speech samples produced by patients with dementia, and those from healthy controls. The difference between perplexity estimates from two neural language models (LMs) - one trained on transcripts of speech produced by healthy participants and the other trained on transcripts from patients with dementia - as a single feature for diagnostic classification of unseen transcripts has been shown to produce state-of-the-art performance. However, little is known about why this approach is effective, and on account of the lack of case/control matching in the most widely-used evaluation set of transcripts (DementiaBank), it is unclear if these approaches are truly diagnostic, or are sensitive to other variables. In this paper, we interrogate neural LMs trained on participants with and without dementia using synthetic narratives previously developed to simulate progressive semantic dementia by manipulating lexical frequency. We find that perplexity of neural LMs is strongly and differentially associated with lexical frequency, and that a mixture model resulting from interpolating control and dementia LMs improves upon the current state-of-the-art for models trained on transcript text exclusively.
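A sketch of the paired-perplexity feature described above; lm_control and lm_dementia are placeholder language-model objects exposing a hypothetical logprob(token, context) method returning log-probabilities in nats:

    import math

    def perplexity(lm, tokens):
        # Perplexity = exp of the average negative log-probability per token.
        logp = sum(lm.logprob(tok, tokens[:i]) for i, tok in enumerate(tokens))
        return math.exp(-logp / len(tokens))

    def diagnostic_feature(tokens, lm_control, lm_dementia):
        # The single scalar used for classification: the difference between
        # the two models' perplexities on the same transcript.
        return perplexity(lm_control, tokens) - perplexity(lm_dementia, tokens)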





GeoLogic -- Graphical interactive theorem prover for Euclidean geometry. (arXiv:2005.03586v1 [cs.LO])

The domain of mathematical logic in computers is dominated by automated theorem provers (ATPs) and interactive theorem provers (ITPs). Both of these are hard for AI to access from the human-imitation approach: ATPs often use human-unfriendly logical foundations, while ITPs are meant for formalizing existing proofs rather than problem solving. We aim to create a simple human-friendly logical system for mathematical problem solving. We picked the case study of Euclidean geometry as it can be easily visualized, has simple logic, and yet potentially offers many high-school problems of various difficulty levels. To make the environment user friendly, we abandoned the strict logic required by ITPs, allowing topological facts to be inferred from pictures. We present our system for Euclidean geometry, together with a graphical application GeoLogic, similar to GeoGebra, which allows users to interactively study and prove properties about the geometrical setup.





A Reduced Basis Method For Fractional Diffusion Operators II. (arXiv:2005.03574v1 [math.NA])

We present a novel numerical scheme to approximate the solution map $s \mapsto u(s) := \mathcal{L}^{-s}f$ of partial differential equations involving fractional elliptic operators. Reinterpreting $\mathcal{L}^{-s}$ as an interpolation operator allows us to derive an integral representation of $u(s)$ which includes solutions to parametrized reaction-diffusion problems. We propose a reduced basis strategy on top of a finite element method to approximate its integrand. Unlike prior works, we deduce the choice of snapshots for the reduced basis procedure analytically. Avoiding further discretization, the integral is interpreted in a spectral setting to evaluate the surrogate directly. Its computation boils down to a matrix approximation $L$ of the operator whose inverse is projected to a low-dimensional space, where explicit diagonalization is feasible. The universal character of the underlying $s$-independent reduced space allows the approximation of $(u(s))_{s\in(0,1)}$ in its entirety. We prove exponential convergence rates and confirm the analysis with a variety of numerical examples.
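One standard integral representation of this kind (the Balakrishnan formula, valid for $0 < s < 1$; whether the paper uses exactly this variant is not stated here) is $$\mathcal{L}^{-s}f = \frac{\sin(\pi s)}{\pi} \int_0^\infty t^{-s} (tI + \mathcal{L})^{-1} f \, dt,$$ where each resolvent evaluation $(tI + \mathcal{L})^{-1}f$ is precisely the solution of a parametrized reaction-diffusion problem.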

Further improvements are proposed in the second part of this investigation to avoid inversion of $L$. Instead, we directly project the matrix to the reduced space, where its negative fractional power is evaluated. A numerical comparison with the predecessor highlights its competitive performance.





Checking Qualitative Liveness Properties of Replicated Systems with Stochastic Scheduling. (arXiv:2005.03555v1 [cs.LO])

We present a sound and complete method for the verification of qualitative liveness properties of replicated systems under stochastic scheduling. These are systems consisting of a finite-state program, executed by an unknown number of indistinguishable agents, where the next agent to make a move is determined by the result of a random experiment. We show that if a property of such a system holds, then there is always a witness in the shape of a Presburger stage graph: a finite graph whose nodes are Presburger-definable sets of configurations. Due to the high complexity of the verification problem (non-elementary), we introduce an incomplete procedure for the construction of Presburger stage graphs, and implement it on top of an SMT solver. The procedure makes extensive use of the theory of well-quasi-orders, and of the structural theory of Petri nets and vector addition systems. We apply our results to a set of benchmarks, in particular to a large collection of population protocols, a model of distributed computation extensively studied by the distributed computing community.





Online Algorithms to Schedule a Proportionate Flexible Flow Shop of Batching Machines. (arXiv:2005.03552v1 [cs.DS])

This paper is the first to consider online algorithms to schedule a proportionate flexible flow shop of batching machines (PFFB). The scheduling model is motivated by manufacturing processes of individualized medicaments, which are used in modern medicine to treat some serious illnesses. We provide two different online algorithms, also proving lower bounds for the offline problem to compute their competitive ratios. The first algorithm is an easy-to-implement, general local scheduling heuristic. It is 2-competitive for PFFBs with an arbitrary number of stages and for several natural scheduling objectives. We also show that for total/average flow time, no deterministic algorithm with a better competitive ratio exists. For the special case with two stages and the makespan or total completion time objective, we describe an improved algorithm that achieves the best possible competitive ratio $\varphi=\frac{1+\sqrt{5}}{2}$, the golden ratio. All our results also hold for proportionate (non-flexible) flow shops of batching machines (PFB), for which this is also the first paper to study online algorithms.
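For reference, $\varphi$ is the positive root of $$\varphi^2 = \varphi + 1, \qquad \varphi = \frac{1+\sqrt{5}}{2} \approx 1.618,$$ the identity typically exploited when showing that golden-ratio competitive bounds are tight.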





Credulous Users and Fake News: a Real Case Study on the Propagation in Twitter. (arXiv:2005.03550v1 [cs.SI])

Recent studies have confirmed a growing trend, especially among youngsters, of using Online Social Media as their favourite information platform at the expense of traditional mass media. Indeed, these platforms can easily reach a wide audience at high speed; but exactly because of this they are the preferred medium for influencing public opinion via so-called fake news. Moreover, there is a general agreement that the main vehicle of fake news is malicious software robots (bots) that automatically interact with human users. In previous work we considered the problem of tagging human users in Online Social Networks as credulous users. Specifically, we considered as credulous those users with a relatively high number of bot friends compared to the total number of their social friends. We consider this group of users worthy of attention because they might have a higher exposure to malicious activities and may contribute to the spreading of fake information by sharing dubious content. In this work, starting from a dataset of fake news, we investigate the behaviour and the degree of involvement of credulous users in fake news diffusion. The study aims to: (i) fight fake news by considering the content diffused by credulous users; (ii) highlight the relationship between credulous users and fake news spreading; (iii) target fake news detection by focusing on the analysis of specific accounts more exposed to the malicious activities of bots. Our first results demonstrate a strong involvement of credulous users in fake news diffusion. These findings call for tools that, by streaming data on credulous users' actions, enable targeted fact-checking.





MISA: Modality-Invariant and -Specific Representations for Multimodal Sentiment Analysis. (arXiv:2005.03545v1 [cs.CL])

Multimodal Sentiment Analysis is an active area of research that leverages multimodal signals for affective understanding of user-generated videos. The predominant approach to this task has been to develop sophisticated fusion techniques. However, the heterogeneous nature of the signals creates distributional modality gaps that pose significant challenges. In this paper, we aim to learn effective modality representations to aid the process of fusion. We propose a novel framework, MISA, which projects each modality to two distinct subspaces. The first subspace is modality-invariant, where the representations across modalities learn their commonalities and reduce the modality gap. The second subspace is modality-specific, which is private to each modality and captures their characteristic features. These representations provide a holistic view of the multimodal data, which is used for fusion, leading to task predictions. Our experiments on popular sentiment analysis benchmarks, MOSI and MOSEI, demonstrate significant gains over state-of-the-art models. We also consider the task of Multimodal Humor Detection and experiment on the recently proposed UR_FUNNY dataset. Here too, our model fares better than strong baselines, establishing MISA as a useful multimodal framework.
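A minimal sketch of the two-subspace idea (ours, not the authors' code; the dimensions and encoder choices are illustrative):

    import torch
    import torch.nn as nn

    class TwoSubspaceProjector(nn.Module):
        # Each modality is embedded, then projected twice: once by a shared
        # map (modality-invariant subspace) and once by its own private map
        # (modality-specific subspace).
        def __init__(self, in_dims, hidden=128):
            super().__init__()
            self.embed = nn.ModuleDict({m: nn.Linear(d, hidden)
                                        for m, d in in_dims.items()})
            self.shared = nn.Linear(hidden, hidden)
            self.private = nn.ModuleDict({m: nn.Linear(hidden, hidden)
                                          for m in in_dims})

        def forward(self, inputs):  # inputs: dict of modality -> (batch, dim)
            feats = {m: torch.relu(self.embed[m](x)) for m, x in inputs.items()}
            invariant = {m: self.shared(h) for m, h in feats.items()}
            specific = {m: self.private[m](h) for m, h in feats.items()}
            return invariant, specific

    proj = TwoSubspaceProjector({"text": 300, "audio": 74, "video": 35})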





p for political: Participation Without Agency Is Not Enough. (arXiv:2005.03534v1 [cs.HC])

Participatory Design's vision of democratic participation assumes participants' feelings of agency in envisioning a collective future. But this assumption may be leaky when dealing with vulnerable populations. We reflect on the results of a series of activities aimed at supporting agentic-future-envisionment with a group of sex-trafficking survivors in Nepal. We observed a growing sense among the survivors that they could play a role in bringing about change in their families. They also became aware of how they could interact with available institutional resources. Reflecting on these observations, we argue that building participant agency through small and personal interactions is necessary before demanding larger Political participation. In particular, a value of PD, especially for vulnerable populations, can lie in the process itself if it helps participants position themselves as actors in the larger world.





Linear Time LexDFS on Chordal Graphs. (arXiv:2005.03523v1 [cs.DM])

Lexicographic Depth First Search (LexDFS) is a special variant of a Depth First Search (DFS), which was introduced by Corneil and Krueger in 2008. While this search has been used in various applications, in contrast to other graph searches, no general linear time implementation is known to date. In 2014, Köhler and Mouatadid achieved linear running time to compute some special LexDFS orders for cocomparability graphs. In this paper, we present a linear time implementation of LexDFS for chordal graphs. Our algorithm is able to find any LexDFS order for this graph class. To the best of our knowledge this is the first unrestricted linear time implementation of LexDFS on a non-trivial graph class. In the algorithm we use a search tree computed by Lexicographic Breadth First Search (LexBFS).
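For context, a simple quadratic-time partition-refinement sketch of LexBFS (the search used here to compute the auxiliary tree; this is a generic illustration, not the paper's linear-time algorithm):

    def lex_bfs(adj):
        # Pick the first vertex of the first class; then split every class
        # into neighbours (kept in front) and non-neighbours.
        order, partitions = [], [list(adj)]
        while partitions:
            v = partitions[0].pop(0)
            if not partitions[0]:
                partitions.pop(0)
            order.append(v)
            refined = []
            for part in partitions:
                inside = [u for u in part if u in adj[v]]
                outside = [u for u in part if u not in adj[v]]
                refined += [p for p in (inside, outside) if p]
            partitions = refined
        return order

    adj = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
    print(lex_bfs(adj))  # one valid LexBFS order, here ['a', 'b', 'c', 'd']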





Practical Perspectives on Quality Estimation for Machine Translation. (arXiv:2005.03519v1 [cs.CL])

Sentence level quality estimation (QE) for machine translation (MT) attempts to predict the translation edit rate (TER) cost of post-editing work required to correct MT output. We describe our view on sentence-level QE as dictated by several practical setups encountered in the industry. We find consumers of MT output---whether human or algorithmic ones---to be primarily interested in a binary quality metric: is the translated sentence adequate as-is or does it need post-editing? Motivated by this we propose a quality classification (QC) view on sentence-level QE whereby we focus on maximizing recall at precision above a given threshold. We demonstrate that, while classical QE regression models fare poorly on this task, they can be re-purposed by replacing the output regression layer with a binary classification one, achieving 50-60\% recall at 90\% precision. For a high-quality MT system producing 75-80\% correct translations, this promises a significant reduction in post-editing work indeed.
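A sketch of the evaluation criterion described above, computed with scikit-learn on synthetic placeholder scores (not the paper's data):

    import numpy as np
    from sklearn.metrics import precision_recall_curve

    def recall_at_precision(y_true, scores, min_precision=0.90):
        # Best achievable recall among operating points whose precision
        # stays above the target threshold.
        precision, recall, _ = precision_recall_curve(y_true, scores)
        ok = precision >= min_precision
        return recall[ok].max() if ok.any() else 0.0

    rng = np.random.default_rng(0)
    y = rng.integers(0, 2, 1000)
    s = y + rng.normal(0, 0.8, 1000)  # noisy scores correlated with labels
    print(recall_at_precision(y, s))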





Two Efficient Device Independent Quantum Dialogue Protocols. (arXiv:2005.03518v1 [quant-ph])

Quantum dialogue is a process of two-way secure and simultaneous communication using a single channel. Recently, a Measurement Device Independent Quantum Dialogue (MDI-QD) protocol has been proposed (Quantum Information Processing 16.12 (2017): 305). To make the protocol secure against information leakage, the authors have discarded almost half of the qubits remaining after the error estimation phase. In this paper, we propose two modified versions of the MDI-QD protocol such that the number of discarded qubits is reduced to almost one-fourth of the qubits remaining after the error estimation phase. We use almost half of their discarded qubits, along with their used qubits, to make our protocol more efficient in qubit count. We show that both of our protocols are secure under the same adversarial model given in the MDI-QD protocol.





An asynchronous distributed and scalable generalized Nash equilibrium seeking algorithm for strongly monotone games. (arXiv:2005.03507v1 [cs.GT])

In this paper, we present three distributed algorithms to solve a class of generalized Nash equilibrium (GNE) seeking problems in strongly monotone games. The first one (SD-GENO) is based on synchronous updates of the agents, while the second and the third (AD-GEED and AD-GENO) represent asynchronous solutions that are robust to communication delays. AD-GENO can be seen as a refinement of AD-GEED, since it only requires node auxiliary variables, enhancing the scalability of the algorithm. Our main contribution is to prove convergence to a variational GNE of the game via an operator-theoretic approach. Finally, we apply the algorithms to network Cournot games and show how different activation sequences and delays affect convergence. We also compare the proposed algorithms to the only other algorithm in the literature (ADAGNES), and observe that AD-GENO outperforms the alternative.





Sunny Pointer: Designing a mouse pointer for people with peripheral vision loss. (arXiv:2005.03504v1 [cs.HC])

We present a new mouse cursor designed to facilitate the use of the mouse by people with peripheral vision loss. The pointer consists of a collection of converging straight lines covering the whole screen and following the position of the mouse cursor. We measured its positive effects with a group of participants with peripheral vision loss of different kinds and we found that it can reduce by a factor of 7 the time required to complete a targeting task using the mouse. Using eye tracking, we show that this system makes it possible to initiate the movement towards the target without having to precisely locate the mouse pointer. Using Fitts' Law, we compare these performances with those of full visual field users in order to understand the relation between the accuracy of the estimated mouse cursor position and the index of performance obtained with our tool.
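For reference, Fitts' law in its common Shannon formulation reads $$MT = a + b \log_2\left(\frac{D}{W} + 1\right),$$ where $MT$ is the movement time, $D$ the distance to the target, $W$ the target width, and $a, b$ empirically fitted constants; the index of performance mentioned above is the ratio of the index of difficulty $\log_2(D/W + 1)$ to the movement time.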





Subtle Sensing: Detecting Differences in the Flexibility of Virtually Simulated Molecular Objects. (arXiv:2005.03503v1 [cs.HC])

During VR demos we have performed over the last few years, many participants (in the absence of any haptic feedback) have commented on their perceived ability to 'feel' differences between simulated molecular objects. The mechanisms for such 'feeling' are not entirely clear: observing from outside VR, one can see that there is nothing physical for participants to 'feel'. Here we outline exploratory user studies designed to evaluate the extent to which participants can distinguish quantitative differences in the flexibility of VR-simulated molecular objects. The results suggest that an individual's capacity to detect differences in molecular flexibility is enhanced when they can interact with and manipulate the molecules, as opposed to merely observing the same interaction. Building on these results, we intend to carry out further studies investigating humans' ability to sense quantitative properties of VR simulations without haptic technology.





Heidelberg Colorectal Data Set for Surgical Data Science in the Sensor Operating Room. (arXiv:2005.03501v1 [cs.CV])

Image-based tracking of medical instruments is an integral part of many surgical data science applications. Previous research has addressed the tasks of detecting, segmenting and tracking medical instruments based on laparoscopic video data. However, the methods proposed still tend to fail when applied to challenging images and do not generalize well to data they have not been trained on. This paper introduces the Heidelberg Colorectal (HeiCo) data set - the first publicly available data set enabling comprehensive benchmarking of medical instrument detection and segmentation algorithms with a specific emphasis on robustness and generalization capabilities of the methods. Our data set comprises 30 laparoscopic videos and corresponding sensor data from medical devices in the operating room for three different types of laparoscopic surgery. Annotations include surgical phase labels for all frames in the videos as well as instance-wise segmentation masks for surgical instruments in more than 10,000 individual frames. The data has successfully been used to organize international competitions in the scope of the Endoscopic Vision Challenges (EndoVis) 2017 and 2019.





Subquadratic-Time Algorithms for Normal Bases. (arXiv:2005.03497v1 [cs.SC])

For any finite Galois field extension $\mathsf{K}/\mathsf{F}$, with Galois group $G = \mathrm{Gal}(\mathsf{K}/\mathsf{F})$, there exists an element $\alpha \in \mathsf{K}$ whose orbit $G\cdot\alpha$ forms an $\mathsf{F}$-basis of $\mathsf{K}$. Such an $\alpha$ is called a normal element and $G\cdot\alpha$ is a normal basis. We introduce a probabilistic algorithm for testing whether a given $\alpha \in \mathsf{K}$ is normal, when $G$ is either a finite abelian or a metacyclic group. The algorithm is based on the fact that deciding whether $\alpha$ is normal can be reduced to deciding whether $\sum_{g \in G} g(\alpha)g \in \mathsf{K}[G]$ is invertible; it requires a slightly subquadratic number of operations. Once we know that $\alpha$ is normal, we show how to perform conversions between the working basis of $\mathsf{K}/\mathsf{F}$ and the normal basis with the same asymptotic cost.
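As a toy example (ours, for concreteness): take $\mathsf{K} = \mathbb{F}_4 = \mathbb{F}_2(\omega)$ with $\omega^2 = \omega + 1$ and $G$ generated by the Frobenius map $x \mapsto x^2$. The orbit $G\cdot\omega = \{\omega, \omega^2\}$ is an $\mathbb{F}_2$-basis of $\mathbb{F}_4$, so $\omega$ is a normal element, whereas $1$ is not, since its orbit $\{1\}$ spans only $\mathbb{F}_2$.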





Algorithmic Averaging for Studying Periodic Orbits of Planar Differential Systems. (arXiv:2005.03487v1 [cs.SC])

One of the main open problems in the qualitative theory of real planar differential systems is the study of limit cycles. In this article, we present an algorithmic approach for detecting how many limit cycles can bifurcate from the periodic orbits of a given polynomial differential center when it is perturbed inside a class of polynomial differential systems via the averaging method. We propose four symbolic algorithms to implement the averaging method. The first algorithm is based on the change to polar coordinates that allows one to transform a considered differential system to the normal form of averaging. The second algorithm is used to derive the solutions of certain differential systems associated with the unperturbed term of the normal form of averaging. The third algorithm exploits the partial Bell polynomials and allows one to compute the integral formula of the averaged functions at any order. The last algorithm is based on the aforementioned algorithms and determines the exact expressions of the averaged functions for the considered differential systems. The implementation of our algorithms is discussed and evaluated using several examples. The experimental results have extended the existing relevant results for certain classes of differential systems.
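In the standard first-order setting, once the system is brought to the normal form $\frac{dr}{d\theta} = \varepsilon F_1(\theta, r) + O(\varepsilon^2)$ with $F_1$ being $2\pi$-periodic in $\theta$, the first averaged function is $$f_1(r) = \frac{1}{2\pi}\int_0^{2\pi} F_1(\theta, r)\, d\theta,$$ and each simple zero of $f_1$ yields a limit cycle bifurcating from the period annulus; the higher-order averaged functions computed by the algorithms above generalize this formula.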





Bundle Recommendation with Graph Convolutional Networks. (arXiv:2005.03475v1 [cs.IR])

Bundle recommendation aims to recommend a bundle of items for a user to consume as a whole. Existing solutions integrate user-item interaction modeling into bundle recommendation by sharing model parameters or learning in a multi-task manner, which cannot explicitly model the affiliation between items and bundles, and fail to explore the decision-making involved when a user chooses bundles. In this work, we propose a graph neural network model named BGCN (short for \textit{\textbf{B}undle \textbf{G}raph \textbf{C}onvolutional \textbf{N}etwork}) for bundle recommendation. BGCN unifies user-item interaction, user-bundle interaction and bundle-item affiliation into a heterogeneous graph. With item nodes as the bridge, graph convolutional propagation between user and bundle nodes makes the learned representations capture the item-level semantics. Through training based on a hard-negative sampler, the user's fine-grained preferences for similar bundles are further distinguished. Empirical results on two real-world datasets demonstrate the strong performance gains of BGCN, which outperforms the state-of-the-art baselines by 10.77\% to 23.18\%.





Predictions and algorithmic statistics for infinite sequence. (arXiv:2005.03467v1 [cs.IT])

Consider the following prediction problem. Assume that there is a black box that produces bits according to some unknown computable distribution on the binary tree. We know the first $n$ bits $x_1 x_2 \ldots x_n$. We want to know the probability of the event that the next bit is equal to $1$. Solomonoff suggested using the universal semimeasure $m$ for solving this task. He proved that for every computable distribution $P$ and for every $b \in \{0,1\}$ the following holds: $$\sum_{n=1}^{\infty}\sum_{x:\, l(x)=n} P(x) (P(b \mid x) - m(b \mid x))^2 < \infty.$$ However, Solomonoff's method has a negative aspect: Hutter and Muchnik proved that there are a universal semimeasure $m$, a computable distribution $P$ and a random (in the Martin-Löf sense) sequence $x_1 x_2\ldots$ such that $P(x_{n+1} \mid x_1\ldots x_n) - m(x_{n+1} \mid x_1\ldots x_n) \nrightarrow 0$ as $n \to \infty$. We suggest a new way of prediction. For every finite string $x$ we predict the new bit according to the best (in some sense) distribution for $x$. We prove a result for our way of prediction similar to Solomonoff's theorem. We also show that our method of prediction does not have the negative aspect of Solomonoff's method.





ExpDNN: Explainable Deep Neural Network. (arXiv:2005.03461v1 [cs.LG])

In recent years, deep neural networks have been applied to obtain high performance in prediction, classification, and pattern recognition. However, the weights in these deep neural networks are difficult to explain. Although a linear regression method can provide explainable results, the method is not suitable in the case of input interaction. Therefore, an explainable deep neural network (ExpDNN) with explainable layers is proposed to obtain explainable results in the case of input interaction. Three cases were given to evaluate the proposed ExpDNN, and the results showed that the absolute value of a weight in an explainable layer can be used to explain the weight of the corresponding input for feature extraction.





NTIRE 2020 Challenge on NonHomogeneous Dehazing. (arXiv:2005.03457v1 [cs.CV])

This paper reviews the NTIRE 2020 Challenge on NonHomogeneous Dehazing of images (restoration of rich details in hazy images). We focus on the proposed solutions and their results evaluated on NH-Haze, a novel dataset consisting of 55 pairs of real haze-free and nonhomogeneous hazy images recorded outdoors. NH-Haze is the first realistic nonhomogeneous haze dataset that provides ground-truth images. The nonhomogeneous haze has been produced using a professional haze generator that imitates the real conditions of haze scenes. 168 participants registered for the challenge and 27 teams competed in the final testing phase. The proposed solutions gauge the state-of-the-art in image dehazing.





An Experimental Study of Reduced-Voltage Operation in Modern FPGAs for Neural Network Acceleration. (arXiv:2005.03451v1 [cs.LG])

We empirically evaluate an undervolting technique, i.e., underscaling the circuit supply voltage below the nominal level, to improve the power-efficiency of Convolutional Neural Network (CNN) accelerators mapped to Field Programmable Gate Arrays (FPGAs). Undervolting below a safe voltage level can lead to timing faults due to excessive circuit latency increase. We evaluate the reliability-power trade-off for such accelerators. Specifically, we experimentally study the reduced-voltage operation of multiple components of real FPGAs, characterize the corresponding reliability behavior of CNN accelerators, propose techniques to minimize the drawbacks of reduced-voltage operation, and combine undervolting with architectural CNN optimization techniques, i.e., quantization and pruning. We investigate the effect of environmental temperature on the reliability-power trade-off of such accelerators. We perform experiments on three identical samples of modern Xilinx ZCU102 FPGA platforms with five state-of-the-art image classification CNN benchmarks. This approach allows us to study the effects of our undervolting technique under both software and hardware variability. We achieve more than 3X power-efficiency (GOPs/W) gain via undervolting. 2.6X of this gain is the result of eliminating the voltage guardband region, i.e., the safe voltage region below the nominal level that is set by the FPGA vendor to ensure correct functionality in worst-case environmental and circuit conditions. 43% of the power-efficiency gain is due to further undervolting below the guardband, which comes at the cost of accuracy loss in the CNN accelerator. We evaluate an effective frequency underscaling technique that prevents this accuracy loss, and find that it reduces the power-efficiency gain from 43% to 25%.





Fine-Grained Analysis of Cross-Linguistic Syntactic Divergences. (arXiv:2005.03436v1 [cs.CL])

The patterns in which the syntax of different languages converges and diverges are often used to inform work on cross-lingual transfer. Nevertheless, little empirical work has been done on quantifying the prevalence of different syntactic divergences across language pairs. We propose a framework for extracting divergence patterns for any language pair from a parallel corpus, building on Universal Dependencies. We show that our framework provides a detailed picture of cross-language divergences, generalizes previous approaches, and lends itself to full automation. We further present a novel dataset, a manually word-aligned subset of the Parallel UD corpus in five languages, and use it to perform a detailed corpus study. We demonstrate the usefulness of the resulting analysis by showing that it can help account for performance patterns of a cross-lingual parser.





Parametrized Universality Problems for One-Counter Nets. (arXiv:2005.03435v1 [cs.FL])

We study the language universality problem for One-Counter Nets, also known as 1-dimensional Vector Addition Systems with States (1-VASS), parameterized either with an initial counter value, or with an upper bound on the allowed counter value during runs. The language accepted by an OCN (defined by reaching a final control state) is monotone in both parameters. This yields two natural questions: 1) Does there exist an initial counter value that makes the language universal? 2) Does there exist a sufficiently high ceiling so that the bounded language is universal? Despite the fact that unparameterized universality is Ackermann-complete and that these problems seem to reduce to checking basic structural properties of the underlying automaton, we show that in fact both problems are undecidable. We also look into the complexities of the problems for several decidable subclasses, namely for unambiguous, and deterministic systems, and for those over a single-letter alphabet.





Dirichlet spectral-Galerkin approximation method for the simply supported vibrating plate eigenvalues. (arXiv:2005.03433v1 [math.NA])

In this paper, we analyze and implement the Dirichlet spectral-Galerkin method for approximating simply supported vibrating plate eigenvalues with variable coefficients. This is a Galerkin approximation whose approximation space is the span of finitely many Dirichlet eigenfunctions of the Laplacian. Convergence and error analysis for this method are presented for two and three dimensions. Here we assume that the domain has either a smooth or Lipschitz boundary with no reentrant corners. An important component of the error analysis is Weyl's law for the Dirichlet eigenvalues. Numerical examples for computing the simply supported vibrating plate eigenvalues for the unit disk and the unit square are presented. To test the accuracy of the approximation, we compare the spectral-Galerkin method to separation of variables for the unit disk, whereas for the unit square we numerically test the convergence rate for a variable-coefficient problem.
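In the constant-coefficient model problem, the simply supported plate eigenvalue problem referred to above reads $$\Delta^2 u = \lambda u \ \text{ in } \Omega, \qquad u = \Delta u = 0 \ \text{ on } \partial\Omega,$$ which is what makes an expansion in Dirichlet eigenfunctions of the Laplacian natural: formally, the bilaplacian then acts through squares of the Dirichlet eigenvalues.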





Kunster -- AR Art Video Maker -- Real time video neural style transfer on mobile devices. (arXiv:2005.03415v1 [cs.CV])

Neural style transfer is a well-known branch of deep learning research, with many interesting works and two major drawbacks: most of the works in the field are hard for non-expert users to use, and substantial hardware resources are required. In this work, we present a solution to both of these problems. We have applied neural style transfer to real-time video (over 25 frames per second), which is capable of running on mobile devices. We also review work on achieving temporal coherence and present the idea of fine-tuning already-trained models to achieve stable video. What is more, we analyze how the number of layers and filters in common deep neural network architectures affects performance on mobile devices. In the experimental section we present the results of our work on iOS devices and discuss the problems present in current Android devices as well as future possibilities. At the end we present qualitative results of stylization and quantitative results of performance tested on the iPhone 11 Pro and iPhone 6s. The presented work is incorporated in Kunster - AR Art Video Maker, an application available in Apple's App Store.





NTIRE 2020 Challenge on Spectral Reconstruction from an RGB Image. (arXiv:2005.03412v1 [eess.IV])

This paper reviews the second challenge on spectral reconstruction from RGB images, i.e., the recovery of whole-scene hyperspectral (HS) information from a 3-channel RGB image. As in the previous challenge, two tracks were provided: (i) a "Clean" track, where HS images are estimated from noise-free RGBs and the RGB images are themselves calculated numerically using the ground-truth HS images and supplied spectral sensitivity functions; and (ii) a "Real World" track, simulating capture by an uncalibrated and unknown camera, where the HS images are recovered from noisy JPEG-compressed RGB images. A new, larger-than-ever natural hyperspectral image data set is presented, containing a total of 510 HS images. The Clean and Real World tracks had 103 and 78 registered participants respectively, with 14 teams competing in the final testing phase. A description of the proposed methods, alongside their challenge scores and an extensive evaluation of top-performing methods, is also provided. They gauge the state-of-the-art in spectral reconstruction from an RGB image.





A LiDAR-based real-time capable 3D Perception System for Automated Driving in Urban Domains. (arXiv:2005.03404v1 [cs.RO])

We present a LiDAR-based and real-time capable 3D perception system for automated driving in urban domains. The hierarchical system design is able to model stationary and movable parts of the environment simultaneously and under real-time conditions. Our approach extends the state of the art by innovative in-detail enhancements for perceiving road users and drivable corridors even in case of non-flat ground surfaces and overhanging or protruding elements. We describe a runtime-efficient pointcloud processing pipeline, consisting of adaptive ground surface estimation, 3D clustering and motion classification stages. Based on the pipeline's output, the stationary environment is represented in a multi-feature mapping and fusion approach. Movable elements are represented in an object tracking system capable of using multiple reference points to account for viewpoint changes. We further enhance the tracking system by explicit consideration of occlusion and ambiguity cases. Our system is evaluated using a subset of the TUBS Road User Dataset. We enhance common performance metrics by considering application-driven aspects of real-world traffic scenarios. The perception system shows impressive results and is able to cope with the addressed scenarios while still preserving real-time capability.





Does Multi-Encoder Help? A Case Study on Context-Aware Neural Machine Translation. (arXiv:2005.03393v1 [cs.CL])

In encoder-decoder neural models, multiple encoders are generally used to represent contextual information in addition to the individual sentence. In this paper, we investigate multi-encoder approaches in document-level neural machine translation (NMT). Surprisingly, we find that the context encoder not only encodes the surrounding sentences but also behaves as a noise generator. This makes us rethink the real benefits of multi-encoder in context-aware translation - some of the improvements come from robust training. We compare several methods that introduce noise and/or a well-tuned dropout setup into the training of these encoders. Experimental results show that noisy training plays an important role in multi-encoder-based NMT, especially when the training data is small. Also, we establish a new state-of-the-art on the IWSLT Fr-En task by careful use of noise generation and dropout methods.





Semantic Signatures for Large-scale Visual Localization. (arXiv:2005.03388v1 [cs.CV])

Visual localization is a useful alternative to standard localization techniques that works by utilizing cameras. In a typical scenario, features are extracted from captured images and compared with geo-referenced databases, and location information is then inferred from the matching results. Conventional schemes mainly use low-level visual features; these approaches offer good accuracy but suffer from scalability issues. In order to assist localization in large urban areas, this work explores a different path by utilizing high-level semantic information. It is found that object information in a street view can facilitate localization. A novel descriptor scheme called "semantic signature" is proposed to summarize this information. A semantic signature consists of the type and angle information of visible objects at a spatial location. Several metrics and protocols are proposed for signature comparison and retrieval, illustrating different trade-offs between accuracy and complexity. Extensive simulation results confirm the potential of the proposed scheme in large-scale applications. This paper is an extended version of a conference paper in CBMI'18; a more efficient retrieval protocol is presented, with additional experimental results.
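A hypothetical sketch of such a signature and a simple comparison metric (the representation and the penalty are illustrative; the paper's metrics differ in detail):

    def angular_diff(a, b):
        # Smallest absolute difference between two bearings, in degrees.
        return abs((a - b + 180) % 360 - 180)

    def signature_distance(sig_a, sig_b, penalty=180.0):
        # A signature is a list of (object_type, bearing_in_degrees) pairs.
        # Greedily match same-type objects by angle; unmatched ones are penalized.
        remaining, cost = list(sig_b), 0.0
        for obj_type, angle in sig_a:
            candidates = [(angular_diff(angle, a2), i)
                          for i, (t2, a2) in enumerate(remaining) if t2 == obj_type]
            if candidates:
                d, i = min(candidates)
                cost += d
                remaining.pop(i)
            else:
                cost += penalty
        return cost + penalty * len(remaining)

    print(signature_distance([("lamp", 10), ("sign", 95)],
                             [("lamp", 14), ("sign", 90)]))  # 9.0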





Playing Minecraft with Behavioural Cloning. (arXiv:2005.03374v1 [cs.AI])

The MineRL 2019 competition challenged participants to train sample-efficient agents to play Minecraft, using a dataset of human gameplay and a limited number of environment steps. We approached this task with behavioural cloning by predicting what actions human players would take, and reached fifth place in the final ranking. Despite being a simple algorithm, we observed that the performance of such an approach can vary significantly based on when the training is stopped. In this paper, we detail our submission to the competition, run further experiments to study how performance varied over training, and study how different engineering decisions affected these results.





JASS: Japanese-specific Sequence to Sequence Pre-training for Neural Machine Translation. (arXiv:2005.03361v1 [cs.CL])

Neural machine translation (NMT) needs large parallel corpora for state-of-the-art translation quality. Low-resource NMT is typically addressed by transfer learning which leverages large monolingual or parallel corpora for pre-training. Monolingual pre-training approaches such as MASS (MAsked Sequence to Sequence) are extremely effective in boosting NMT quality for languages with small parallel corpora. However, they do not account for linguistic information obtained using syntactic analyzers which is known to be invaluable for several Natural Language Processing (NLP) tasks. To this end, we propose JASS, Japanese-specific Sequence to Sequence, as a novel pre-training alternative to MASS for NMT involving Japanese as the source or target language. JASS is joint BMASS (Bunsetsu MASS) and BRSS (Bunsetsu Reordering Sequence to Sequence) pre-training which focuses on Japanese linguistic units called bunsetsus. In our experiments on ASPEC Japanese--English and News Commentary Japanese--Russian translation we show that JASS can give results that are competitive with if not better than those given by MASS. Furthermore, we show for the first time that joint MASS and JASS pre-training gives results that significantly surpass the individual methods indicating their complementary nature. We will release our code, pre-trained models and bunsetsu annotated data as resources for researchers to use in their own NLP tasks.





Estimating Blood Pressure from Photoplethysmogram Signal and Demographic Features using Machine Learning Techniques. (arXiv:2005.03357v1 [eess.SP])

Hypertension is a potentially unsafe health ailment, which can be indicated directly by blood pressure (BP). Hypertension always leads to other health complications. Continuous monitoring of BP is very important; however, cuff-based BP measurements are discrete and uncomfortable for the user. To address this need, a cuff-less, continuous, and non-invasive BP measurement system is proposed using the Photoplethysmogram (PPG) signal and demographic features, together with machine learning (ML) algorithms. PPG signals were acquired from 219 subjects and underwent pre-processing and feature extraction steps. Time, frequency, and time-frequency domain features were extracted from the PPG and its derivative signals. Feature selection techniques were used to reduce the computational complexity and to decrease the chance of over-fitting the ML algorithms. The features were then used to train and evaluate ML algorithms. The best regression models were selected for Systolic BP (SBP) and Diastolic BP (DBP) estimation individually. Gaussian Process Regression (GPR) with the ReliefF feature selection algorithm outperforms the other algorithms, estimating SBP and DBP with root-mean-square errors (RMSE) of 6.74 and 3.59, respectively. This ML model can be implemented in hardware systems to continuously monitor BP and help avoid critical health conditions due to sudden changes.
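A sketch of the final modelling step with synthetic placeholder data (not the paper's PPG features, and omitting the ReliefF selection step):

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(0)
    X = rng.normal(size=(219, 10))                  # placeholder feature matrix
    sbp = 120 + X @ rng.normal(size=10) + rng.normal(0, 5, size=219)

    X_tr, X_te, y_tr, y_te = train_test_split(X, sbp, random_state=0)
    gpr = GaussianProcessRegressor().fit(X_tr, y_tr)  # one model per target (SBP here)
    rmse = np.sqrt(mean_squared_error(y_te, gpr.predict(X_te)))
    print(f"SBP RMSE: {rmse:.2f}")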





DramaQA: Character-Centered Video Story Understanding with Hierarchical QA. (arXiv:2005.03356v1 [cs.CL])

Despite recent progress in computer vision and natural language processing, developing video understanding intelligence is still hard to achieve due to the intrinsic difficulty of story in video. Moreover, there is no theoretical metric for evaluating the degree of video understanding. In this paper, we propose a novel video question answering (Video QA) task, DramaQA, for a comprehensive understanding of the video story. DramaQA focuses on two perspectives: 1) hierarchical QAs as an evaluation metric based on the cognitive developmental stages of human intelligence, and 2) character-centered video annotations to model the local coherence of the story. Our dataset is built upon the TV drama "Another Miss Oh" and contains 16,191 QA pairs from 23,928 video clips of various lengths, with each QA pair belonging to one of four difficulty levels. We provide 217,308 annotated images with rich character-centered annotations, including visual bounding boxes, behaviors, and emotions of main characters, and coreference-resolved scripts. Additionally, we provide analyses of the dataset as well as a Dual Matching Multistream model which effectively learns character-centered representations of video to answer questions about the video. We are planning to release our dataset and model publicly for research purposes and expect that our work will provide a new perspective on video story understanding research.





Quantum correlation alignment for unsupervised domain adaptation. (arXiv:2005.03355v1 [quant-ph])

Correlation alignment (CORAL), a representative domain adaptation (DA) algorithm, decorrelates and aligns a labelled source domain dataset to an unlabelled target domain dataset to minimize the domain shift, such that a classifier can be applied to predict the target domain labels. In this paper, we implement CORAL on quantum devices by two different methods. One method utilizes quantum basic linear algebra subroutines (QBLAS) to implement CORAL with exponential speedup in the number and dimension of the given data samples. The other method is achieved through a variational hybrid quantum-classical procedure. In addition, numerical experiments with three different types of data sets, namely synthetic data, synthetic-Iris data, and handwritten digit data, are presented to evaluate the performance of our work. The simulation results show that the variational quantum correlation alignment algorithm (VQCORAL) can achieve competitive performance compared with the classical CORAL.
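For reference, the classical CORAL transform that these quantum routines implement can be sketched in a few lines of NumPy: whiten the source features with the inverse square root of their covariance, then re-color them with the square root of the target covariance (the regularizer eps follows the usual convention of adding the identity):

    import numpy as np

    def coral(source, target, eps=1.0):
        cs = np.cov(source, rowvar=False) + eps * np.eye(source.shape[1])
        ct = np.cov(target, rowvar=False) + eps * np.eye(target.shape[1])

        def mat_pow(m, p):
            # Matrix power of a symmetric PSD matrix via eigendecomposition.
            w, v = np.linalg.eigh(m)
            return (v * w**p) @ v.T

        return source @ mat_pow(cs, -0.5) @ mat_pow(ct, 0.5)

    rng = np.random.default_rng(0)
    src = rng.normal(size=(100, 5)) @ rng.normal(size=(5, 5))
    tgt = rng.normal(size=(100, 5))
    aligned = coral(src, tgt)  # source features aligned to the target domain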





DMCP: Differentiable Markov Channel Pruning for Neural Networks. (arXiv:2005.03354v1 [cs.CV])

Recent works imply that channel pruning can be regarded as searching for an optimal sub-structure within unpruned networks. However, existing works based on this observation require training and evaluating a large number of structures, which limits their application. In this paper, we propose a novel differentiable method for channel pruning, named Differentiable Markov Channel Pruning (DMCP), to efficiently search for the optimal sub-structure. Our method is differentiable and can be directly optimized by gradient descent with respect to the standard task loss and a budget regularization (e.g. a FLOPs constraint). In DMCP, we model channel pruning as a Markov process, in which each state represents retaining the corresponding channel during pruning, and transitions between states denote the pruning process. In the end, our method is able to implicitly select the proper number of channels in each layer via the Markov process with optimized transitions. To validate the effectiveness of our method, we perform extensive experiments on ImageNet with ResNet and MobileNetV2. Results show that our method achieves consistent improvements over state-of-the-art pruning methods in various FLOPs settings. The code is available at https://github.com/zx55/dmcp
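An illustrative sketch of the Markov idea as we read it (not the authors' implementation): if channel $k+1$ can only be retained when channel $k$ is, the marginal probability of keeping channel $k$ is a product of learned transition probabilities, which yields a differentiable proxy for the layer's channel count (and hence for FLOPs):

    import torch

    def expected_channels(transition_logits):
        p = torch.sigmoid(transition_logits)  # P(keep channel k+1 | keep channel k)
        marginals = torch.cumprod(p, dim=0)   # P(keep channel k)
        return marginals.sum()                # expected number of retained channels

    logits = torch.zeros(32, requires_grad=True)  # one logit per transition
    budget_proxy = expected_channels(logits)      # differentiable budget term
    budget_proxy.backward()                       # usable with gradient descent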





Pricing under a multinomial logit model with non linear network effects. (arXiv:2005.03352v1 [cs.GT])

We study the problem of pricing under a Multinomial Logit model where we incorporate network effects over the consumers' decisions. We analyse two cases: when sellers compete and when they collaborate. In particular, we pay special attention to the overall expected revenue and how the behaviour of the no-purchase option is affected under variations of a network effect parameter. For example, we prove that the market share of the no-purchase option is decreasing in the value of the network effect, meaning that stronger communication among customers increases the expected amount of sales. We also analyse how the customers' utility is altered when network effects are incorporated into the market, comparing the cases where competitive and monopolistic prices are displayed. We use tools from stochastic approximation algorithms to prove that the probability of purchasing the available products converges to a unique stationary distribution. We model the sellers as using this stationary distribution to establish their strategies, finding that under these settings a pure Nash equilibrium characterises the pricing strategies in the case of competition, while an optimal (total-revenue-maximising) fixed price characterises the case of collaboration.
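For reference, in the standard MNL model the purchase probability of product $i$ with utility $u_i$, and the no-purchase probability, are $$P_i = \frac{e^{u_i}}{1 + \sum_j e^{u_j}}, \qquad P_0 = \frac{1}{1 + \sum_j e^{u_j}};$$ a network effect of the kind studied here makes the utilities $u_j$ depend (non-linearly) on the purchase probabilities themselves, which is what motivates the stationary-distribution analysis.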





Regression Forest-Based Atlas Localization and Direction Specific Atlas Generation for Pancreas Segmentation. (arXiv:2005.03345v1 [cs.CV])

This paper proposes a fully automated atlas-based pancreas segmentation method from CT volumes, utilizing atlas localization by regression forest and atlas generation using blood vessel information. Previous probabilistic atlas-based pancreas segmentation methods cannot deal well with the spatial variations that are commonly found in the pancreas, and shape variations are not represented by an averaged atlas. We propose a fully automated pancreas segmentation method that deals with both types of variation mentioned above. The position and size of the pancreas are estimated using a regression forest technique. After localization, a patient-specific probabilistic atlas is generated based on a new image similarity that reflects the blood vessel position and direction information around the pancreas. We segment the pancreas using the EM algorithm with the atlas as a prior, followed by a graph cut. In an evaluation using 147 CT volumes, the Jaccard index and the Dice overlap of the proposed method were 62.1% and 75.1%, respectively. Although we automated all of the segmentation processes, the segmentation results were superior to the other state-of-the-art methods in terms of Dice overlap.





Causal Paths in Temporal Networks of Face-to-Face Human Interactions. (arXiv:2005.03333v1 [cs.SI])

In a temporal network, causal paths are characterized by the fact that links from a source to a target must respect the chronological order. In this article we study the structure of causal paths in temporal networks of human face-to-face interactions in different social contexts. In a static network, paths are transitive, i.e., the existence of a link from $a$ to $b$ and from $b$ to $c$ implies the existence of a path from $a$ to $c$ via $b$. In a temporal network, the chronological constraint introduces time correlations that affect transitivity. A probabilistic model based on higher-order Markov chains shows that correlations that can invalidate transitivity are present only when the time gap between consecutive events is larger than the average value, and are negligible below such a value. The comparison between the densities of the temporal and static accessibility matrices shows that the static representation can be used with good approximation. Moreover, we quantify the extent of the causally connected region of the networks over time.
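A minimal sketch of the time-respecting path check that breaks transitivity (the edge data is illustrative):

    edges = [("a", "b", 5), ("b", "c", 3)]  # (source, target, timestamp)

    def is_causal_path(path_edges):
        # A causal path must be edge-chained AND have non-decreasing timestamps.
        chained = all(path_edges[i][1] == path_edges[i + 1][0]
                      for i in range(len(path_edges) - 1))
        times = [t for _, _, t in path_edges]
        return chained and times == sorted(times)

    print(is_causal_path(edges))  # False: b->c occurs before a->b, so a cannot reach c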





Global Distribution of Google Scholar Citations: A Size-independent Institution-based Analysis. (arXiv:2005.03324v1 [cs.DL])

Most currently available schemes for performance-based ranking of universities or research organizations, such as Quacquarelli Symonds (QS), Times Higher Education (THE), and the Shanghai-based Academic Ranking of World Universities (ARWU), use a variety of criteria that include productivity, citations, awards, reputation, etc., while Leiden and Scimago use only bibliometric indicators. The research performance evaluation in the aforesaid cases is based on bibliometric data from Web of Science or Scopus, which are commercially available priced databases. The coverage includes peer-reviewed journals and conference proceedings. Google Scholar (GS), on the other hand, provides a free and open alternative for obtaining citations of papers available on the net (though it is not clear exactly which journals are covered). Citations are collected automatically from the net and also added to self-created individual author profiles under Google Scholar Citations (GSC). This data was used by the Webometrics Lab, Spain to create a ranked list of 4000+ institutions in 2016, based on citations from only the top 10 individual GSC profiles in each organization. (GSC excludes the top paper for reasons explained in the text; the simple selection procedure makes the ranked list size-independent, as claimed by the Cybermetrics Lab.) Using this data (Transparent Ranking TR, 2016), we find the regional and country-wise distribution of GS-TR citations. The size-independent ranked list is subdivided into deciles of 400 institutions each, and the number of institutions and citations of each country is obtained for each decile. We test for correlation between institutional ranks in GS-TR and the other ranking schemes for the top 20 institutions.





Specification and Automated Analysis of Inter-Parameter Dependencies in Web APIs. (arXiv:2005.03320v1 [cs.SE])

Web services often impose inter-parameter dependencies that restrict the way in which two or more input parameters can be combined to form valid calls to the service. Unfortunately, current specification languages for web services like the OpenAPI Specification (OAS) provide no support for the formal description of such dependencies, which makes it hardly possible to automatically discover and interact with services without human intervention. In this article, we present an approach for the specification and automated analysis of inter-parameter dependencies in web APIs. We first present a domain-specific language, called Inter-parameter Dependency Language (IDL), for the specification of dependencies among input parameters in web services. Then, we propose a mapping to translate an IDL document into a constraint satisfaction problem (CSP), enabling the automated analysis of IDL specifications using standard CSP-based reasoning operations. Specifically, we present a catalogue of nine analysis operations on IDL documents allowing to compute, for example, whether a given request satisfies all the dependencies of the service. Finally, we present a tool suite including an editor, a parser, an OAS extension, a constraint programming-aided library, and a test suite supporting IDL specifications and their analyses. Together, these contributions pave the way for a new range of specification-driven applications in areas such as code generation and testing.
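A hypothetical example of an inter-parameter dependency of the kind IDL expresses, with a CSP-style check (the parameter names and the IF-THEN dependency are illustrative, not from the paper):

    # Dependency: 'radius' may be present only if shape == 'circle'.
    def satisfies_dependency(request):
        if "radius" in request and request.get("shape") != "circle":
            return False
        return True

    print(satisfies_dependency({"shape": "circle", "radius": 2}))  # True: valid call
    print(satisfies_dependency({"shape": "square", "radius": 2}))  # False: violates it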





Encoding in the Dark Grand Challenge: An Overview. (arXiv:2005.03315v1 [eess.IV])

A big part of the video content we consume from video providers consists of genres featuring low-light aesthetics. Low-light sequences have special characteristics, such as spatio-temporally varying acquisition noise and light flickering, that make the encoding process challenging. To deal with the spatio-temporally incoherent noise, higher bitrates are used to achieve high objective quality. Additionally, quality assessment metrics and methods have not been designed, trained or tested for this type of content. This has inspired us to trigger research in this area and propose a Grand Challenge on encoding low-light video sequences. In this paper, we present an overview of the proposed challenge and test the state-of-the-art methods that will serve as benchmarks in the assessment of the participants' deliverables. From this exploration, our results show that VVC already achieves high performance compared to simply denoising the video source prior to encoding. Moreover, the quality of the video streams can be further improved by employing a post-processing image enhancement method.