Determinantal Point Processes in Randomized Numerical Linear Algebra. (arXiv:2005.03185v1 [cs.DS])
Randomized Numerical Linear Algebra (RandNLA) uses randomness to develop improved algorithms for matrix problems that arise in scientific computing, data science, machine learning, etc. Determinantal Point Processes (DPPs), a seemingly unrelated topic in pure and applied mathematics, are a class of stochastic point processes with probability distributions characterized by sub-determinants of a kernel matrix. Recent work has uncovered deep and fruitful connections between DPPs and RandNLA that lead to new guarantees and improved algorithms of interest to both areas. We provide an overview of this exciting new line of research, including brief introductions to RandNLA and DPPs, as well as applications of DPPs to classical linear algebra tasks such as least squares regression, low-rank approximation and the Nyström method. For example, random sampling with a DPP leads to new kinds of unbiased estimators for least squares, enabling more refined statistical and inferential understanding of these algorithms; a DPP is, in some sense, an optimal randomized algorithm for the Nyström method; and a RandNLA technique called leverage score sampling can be derived as the marginal distribution of a DPP. We also discuss recent algorithmic developments, illustrating that, while not quite as efficient as standard RandNLA techniques, DPP-based algorithms are only moderately more expensive.

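Since leverage score sampling appears here as the marginal of a DPP, a concrete picture of those marginals may help. The sketch below is not from the survey; it is a standard way to compute row leverage scores of a tall matrix via a thin QR factorization and to sample rows proportionally to them (the function and variable names are my own).

```python
import numpy as np

def leverage_scores(A):
    """Row leverage scores of a tall matrix A (n x d, n >= d).

    The i-th score is the squared norm of the i-th row of Q from a
    thin QR factorization, i.e. l_i = a_i^T (A^T A)^{-1} a_i.
    """
    Q, _ = np.linalg.qr(A)          # thin QR: Q is n x d with orthonormal columns
    return np.sum(Q**2, axis=1)     # scores sum to rank(A) = d

def sample_rows(A, k, rng=None):
    """Sample k row indices i.i.d. with probability proportional to leverage."""
    rng = rng or np.random.default_rng()
    l = leverage_scores(A)
    return rng.choice(A.shape[0], size=k, replace=True, p=l / l.sum())

A = np.random.default_rng(0).standard_normal((100, 5))
print(sample_rows(A, 10))
```
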
A Proposal for Intelligent Agents with Episodic Memory. (arXiv:2005.03182v1 [cs.AI])
In the future we can expect that artificially intelligent agents, once deployed, will be required to learn continually from their experience during their operational lifetime. Such agents will also need to communicate with humans and other agents about the content of their experience, whether to pass along what they have learned, to explain their actions in specific circumstances, or simply to relate more naturally to humans about experiences that are not necessarily tied to their assigned tasks. We argue that to support these goals, an agent would benefit from an episodic memory; that is, a memory that encodes the agent's experience in such a way that the agent can relive the experience, communicate about it and use its past experience, including the agent's own past actions, to learn more effective models and policies. In this short paper, we propose one potential approach to provide an AI agent with such capabilities. We draw upon the ever-growing body of work examining the function and operation of the Medial Temporal Lobe (MTL) in mammals to guide us in adding an episodic memory capability to an AI agent composed of artificial neural networks (ANNs). On that basis, we highlight important aspects to be considered in the memory organization and we propose an architecture combining ANNs and standard Computer Science techniques for supporting storage and retrieval of episodic memories. Despite being initial work, we hope this short paper can spark discussions around the creation of intelligent agents with memory or, at least, provide a different point of view on the subject.

Evolutionary Multi-Objective Optimization Algorithm for Community Detection in Complex Social Networks. (arXiv:2005.03181v1 [cs.NE])
Most optimization-based community detection approaches formulate the problem in a single- or bi-objective framework. In this paper, we propose two variants of a three-objective formulation using a customized non-dominated sorting genetic algorithm III (NSGA-III) to find community structures in a network. In the first variant, named NSGA-III-KRM, we consider kernel k-means, Ratio cut, and Modularity as the three objectives, whereas the second variant, named NSGA-III-CCM, considers Community score, Community fitness and Modularity as the three objective functions. Experiments are conducted on four benchmark network datasets. Comparison with state-of-the-art approaches along with decomposition-based multi-objective evolutionary algorithm variants (MOEA/D-KRM and MOEA/D-CCM) indicates that the proposed variants yield comparable or better results. This is particularly significant because the addition of the third objective does not worsen the results of the other two objectives. We also propose a simple method to rank the Pareto solutions so obtained by proposing a new measure, namely the ratio of the hypervolume and inverted generational distance (IGD). The higher the ratio, the better the Pareto set. This strategy is particularly useful in the absence of the empirical attainment function in the multi-objective framework, where the number of objectives is more than two.

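The paper ranks Pareto sets by the ratio of hypervolume to inverted generational distance. A minimal sketch of that ratio for a bi-objective minimization problem is below; it is a generic illustration, not the authors' code, and the simple slab-sum hypervolume shown only works in two dimensions.

```python
import numpy as np

def igd(reference_front, front):
    """Mean distance from each reference point to its nearest obtained point."""
    d = np.linalg.norm(reference_front[:, None, :] - front[None, :, :], axis=2)
    return d.min(axis=1).mean()

def hypervolume_2d(front, ref_point):
    """Hypervolume of a 2-objective (minimization) front w.r.t. a reference point."""
    pts = front[np.argsort(front[:, 0])]      # sort by first objective
    hv, prev_f2 = 0.0, ref_point[1]
    for f1, f2 in pts:
        if f2 < prev_f2:                      # skip dominated points
            hv += (ref_point[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

front = np.array([[1.0, 3.0], [2.0, 1.0]])
ref_front = np.array([[0.8, 3.0], [1.8, 0.9]])
ratio = hypervolume_2d(front, ref_point=(4.0, 4.0)) / igd(ref_front, front)
print(ratio)   # higher ratio => better Pareto set, per the proposed measure
```
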
Lattice-based public key encryption with equality test in the standard model, revisited. (arXiv:2005.03178v1 [cs.CR])
Public key encryption with equality test (PKEET) allows testing whether two ciphertexts are generated by the same message or not. PKEET is a potential candidate for many practical applications like efficient data management on encrypted databases. The potential applicability of PKEET has led to intensive research since its first instantiation by Yang et al. (CT-RSA 2010). Most of the follow-up constructions are secure in the random oracle model. Moreover, the security of all the concrete constructions is based on number-theoretic hardness assumptions that are vulnerable in the post-quantum era. Recently, Lee et al. (ePrint 2016) proposed a generic construction of PKEET schemes in the standard model, which makes it possible to obtain the first instantiation of PKEET schemes based on lattices. Their method is to use a $2$-level hierarchical identity-based encryption (HIBE) scheme together with a one-time signature scheme. In this paper, we propose, for the first time, a direct construction of a PKEET scheme based on the hardness assumption of lattices in the standard model. More specifically, the security of the proposed scheme reduces to the hardness of the Learning With Errors problem.

A Parameterized Perspective on Attacking and Defending Elections. (arXiv:2005.03176v1 [cs.GT])
We consider the problem of protecting and manipulating elections by recounting and changing ballots, respectively. Our setting involves a plurality-based election held across multiple districts, and the problem formulations are based on the model proposed recently by [Elkind et al., IJCAI 2019]. It turns out that both the manipulation and protection problems are NP-complete even in fairly simple settings. We study these problems from a parameterized perspective with the goal of establishing a more detailed complexity landscape. The parameters we consider include the number of voters, and the budgets of the attacker and the defender. While we observe fixed-parameter tractability when parameterizing by the number of voters, our main contribution is a demonstration of parameterized hardness when working with the budgets of the attacker and the defender.

Fact-based Dialogue Generation with Convergent and Divergent Decoding. (arXiv:2005.03174v1 [cs.CL])
Fact-based dialogue generation is the task of generating a human-like response based on both dialogue context and factual texts. Various methods have been proposed to focus on generating informative words that contain facts effectively. However, previous works implicitly assume that a single topic is maintained throughout a dialogue and tend to converse passively; as a result, these systems have difficulty generating diverse responses that proactively provide meaningful information. This paper proposes an end-to-end fact-based dialogue system augmented with the ability of convergent and divergent thinking over both context and facts, which can converse about the current topic or introduce a new topic. Specifically, our model incorporates a novel convergent and divergent decoding scheme that can generate informative and diverse responses considering not only the given inputs (context and facts) but also inputs-related topics. Both automatic and human evaluation results on the DSTC7 dataset show that our model significantly outperforms state-of-the-art baselines, indicating that our model can generate more appropriate, informative, and diverse responses.

Nonlinear model reduction: a comparison between POD-Galerkin and POD-DEIM methods. (arXiv:2005.03173v1 [physics.comp-ph])
Several nonlinear model reduction techniques are compared for the three cases of the non-parallel version of the Kuramoto-Sivashinsky equation, the transient regime of flow past a cylinder at $Re=100$ and fully developed flow past a cylinder at the same Reynolds number. The linear terms of the governing equations are reduced by Galerkin projection onto a POD basis of the flow state, while the reduced nonlinear convection terms are obtained either by a Galerkin projection onto the same state basis, by a Galerkin projection onto a POD basis representing the nonlinearities, or by applying the Discrete Empirical Interpolation Method (DEIM) to a POD basis of the nonlinearities. The reduced-order models are assessed with respect to their stability, accuracy and robustness, and appropriate quantitative measures are introduced and compared. In particular, the properties of the reduced linear terms are compared to those of the full-scale terms, and the structure of the nonlinear quadratic terms is analyzed as to the conservation of kinetic energy. It is shown that all three reduction techniques provide excellent and similar results for the cases of the Kuramoto-Sivashinsky equation and the limit-cycle cylinder flow. For the case of the transient regime of flow past a cylinder, only the pure Galerkin techniques are successful, while the DEIM technique produces reduced-order models that diverge in finite time.

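As background for the Galerkin projections compared here, the following minimal sketch (my own illustration, not the paper's code) shows the standard way to extract a POD basis from a snapshot matrix via the SVD and to project a linear operator onto it.

```python
import numpy as np

def pod_basis(snapshots, r):
    """Leading r POD modes of a snapshot matrix (n_dof x n_snapshots)."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r], s[:r]

def galerkin_reduce(A, Ur):
    """Galerkin projection of a linear operator onto the POD subspace."""
    return Ur.T @ A @ Ur   # reduced r x r operator

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 40))        # toy snapshot matrix
Ur, s = pod_basis(X, r=10)
A_r = galerkin_reduce(rng.standard_normal((500, 500)), Ur)
print(A_r.shape)                          # (10, 10)
```
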
On Optimal Control of Discounted Cost Infinite-Horizon Markov Decision Processes Under Local State Information Structures. (arXiv:2005.03169v1 [eess.SY])
This paper investigates a class of optimal control problems associated with Markov processes with local state information. The decision-maker has only local access to a subset of the state vector information, as is often encountered in decentralized control problems in multi-agent systems. Under this information structure, part of the state vector cannot be observed. We leverage ab initio principles and find a new form of Bellman equations to characterize the optimal policies of the control problem under local information structures. The dynamic programming solutions feature a mixture of dynamics associated with unobservable state components and the local state-feedback policy based on the observable local information. We further characterize the optimal local-state feedback policy using linear programming methods. To reduce the computational complexity of the optimal policy, we propose an approximate algorithm based on virtual beliefs to find a sub-optimal policy. We show the performance bounds on the sub-optimal solution and corroborate the results with numerical case studies.

Avoiding 5/4-powers on the alphabet of nonnegative integers. (arXiv:2005.03158v1 [math.CO])
We identify the structure of the lexicographically least word avoiding 5/4-powers on the alphabet of nonnegative integers. Specifically, we show that this word has the form $p\,\tau(\varphi(z)\,\varphi^2(z)\cdots)$ where $p, z$ are finite words, $\varphi$ is a 6-uniform morphism, and $\tau$ is a coding. This description yields a recurrence for the $i$th letter, which we use to prove that the sequence of letters is 6-regular with rank 188. More generally, we prove $k$-regularity for a sequence satisfying a recurrence of the same type.

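To make the structure $p\,\tau(\varphi(z)\,\varphi^2(z)\cdots)$ concrete, here is a generic sketch of expanding such a word. The specific morphism, coding, and finite words below are placeholders for illustration only, not the actual 6-uniform morphism identified in the paper.

```python
def apply_morphism(word, phi):
    """Apply a uniform morphism given as a dict letter -> tuple of letters."""
    return [s for letter in word for s in phi[letter]]

def expand(p, z, phi, tau, blocks):
    """Prefix of the word p . tau(phi(z) phi^2(z) ... phi^blocks(z))."""
    out = list(p)
    image = list(z)
    for _ in range(blocks):
        image = apply_morphism(image, phi)     # next power phi^k(z)
        out.extend(tau[letter] for letter in image)
    return out

# Placeholder 2-uniform morphism and identity coding, purely for illustration.
phi = {0: (0, 1), 1: (1, 0)}
tau = {0: 0, 1: 1}
print(expand(p=[0], z=[0], phi=phi, tau=tau, blocks=3))
```
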
On the Learnability of Possibilistic Theories. (arXiv:2005.03157v1 [cs.LO])
We investigate learnability of possibilistic theories from entailments in light of Angluin's exact learning model. We consider cases in which only membership, only equivalence, and both kinds of queries can be posed by the learner. We then show that, for a large class of problems, polynomial time learnability results for classical logic can be transferred to the respective possibilistic extension. In particular, it follows from our results that the possibilistic extension of propositional Horn theories is exactly learnable in polynomial time. As polynomial time learnability in the exact model is transferable to the classical probably approximately correct model extended with membership queries, our work also establishes such results in this model.

Fast Mapping onto Census Blocks. (arXiv:2005.03156v1 [cs.DC])
Pandemic measures such as social distancing and contact tracing can be enhanced by rapidly integrating dynamic location data and demographic data. Projecting billions of longitude and latitude locations onto hundreds of thousands of highly irregular demographic census block polygons is computationally challenging in both research and deployment contexts. This paper describes two approaches labeled "simple" and "fast". The simple approach can be implemented in any scripting language (Matlab/Octave, Python, Julia, R) and is easily integrated and customized to a variety of research goals. This simple approach uses a novel combination of hierarchy, sparse bounding boxes, polygon crossing-number, vectorization, and parallel processing to achieve 100,000,000+ projections per second on 100 servers. The simple approach is compact, does not increase data storage requirements, and is applicable to any country or region. The fast approach exploits the thread, vector, and memory optimizations that are possible using a low-level language (C++) and achieves similar performance on a single server. This paper details these approaches with the goal of enabling the broader community to quickly integrate location and demographic data.

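The core geometric test behind the "simple" approach is the polygon crossing-number check with a bounding-box prefilter. Below is a minimal sketch of those two pieces; these are generic textbook versions, while the paper's vectorized, hierarchical implementation is more elaborate.

```python
def in_bbox(x, y, bbox):
    """Cheap prefilter: reject points outside the polygon's bounding box."""
    xmin, ymin, xmax, ymax = bbox
    return xmin <= x <= xmax and ymin <= y <= ymax

def in_polygon(x, y, poly):
    """Crossing-number test: a point is inside iff a horizontal ray from it
    crosses the polygon boundary an odd number of times."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):                        # edge straddles the ray's y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:                             # crossing to the right
                inside = not inside
    return inside

block = [(0, 0), (4, 0), (4, 3), (0, 3)]
print(in_bbox(1, 1, (0, 0, 4, 3)) and in_polygon(1, 1, block))  # True
```
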
NTIRE 2020 Challenge on Image Demoireing: Methods and Results. (arXiv:2005.03155v1 [cs.CV])
This paper reviews the Challenge on Image Demoireing that was part of the New Trends in Image Restoration and Enhancement (NTIRE) workshop, held in conjunction with CVPR 2020. Demoireing is a difficult task of removing moire patterns from an image to reveal an underlying clean image. The challenge was divided into two tracks. Track 1 targeted the single image demoireing problem, which seeks to remove moire patterns from a single image. Track 2 focused on the burst demoireing problem, where a set of degraded moire images of the same scene were provided as input, with the goal of producing a single demoired image as output. The methods were ranked in terms of their fidelity, measured using the peak signal-to-noise ratio (PSNR) between the ground truth clean images and the restored images produced by the participants' methods. The tracks had 142 and 99 registered participants, respectively, with a total of 14 and 6 submissions in the final testing stage. The entries span the current state-of-the-art in image and burst image demoireing problems.

Decentralized Adaptive Control for Collaborative Manipulation of Rigid Bodies. (arXiv:2005.03153v1 [cs.RO])
In this work, we consider a group of robots working together to manipulate a rigid object to track a desired trajectory in $SE(3)$. The robots have no explicit communication network among them, and they do not know the mass or friction properties of the object, or where they are attached to the object. However, we assume they share data from a common IMU placed arbitrarily on the object. To solve this problem, we propose a decentralized adaptive control scheme wherein each agent maintains and adapts its own estimate of the object parameters in order to track a reference trajectory. We present an analysis of the controller's behavior, and show that all closed-loop signals remain bounded, and that the system trajectory will almost always (except for initial conditions on a set of measure zero) converge to the desired trajectory. We study the proposed controller's performance using numerical simulations of a manipulation task in 3D, and with hardware experiments which demonstrate our algorithm on a planar manipulation task. These studies, taken together, demonstrate the effectiveness of the proposed controller even in the presence of numerous unmodelled effects, such as discretization errors and complex frictional interactions.

An augmented Lagrangian preconditioner for implicitly-constituted non-Newtonian incompressible flow. (arXiv:2005.03150v1 [math.NA])
We propose an augmented Lagrangian preconditioner for a three-field stress-velocity-pressure discretization of stationary non-Newtonian incompressible flow with an implicit constitutive relation of power-law type. The discretization employed makes use of the divergence-free Scott-Vogelius pair for the velocity and pressure. The preconditioner builds on the work [P. E. Farrell, L. Mitchell, and F. Wechsung, SIAM J. Sci. Comput., 41 (2019), pp. A3073-A3096], where a Reynolds-robust preconditioner for the three-dimensional Newtonian system was introduced. The preconditioner employs a specialized multigrid method for the stress-velocity block that involves a divergence-capturing space decomposition and a custom prolongation operator. The solver exhibits excellent robustness with respect to the parameters arising in the constitutive relation, allowing for the simulation of a wide range of materials.

Optimally Convergent Mixed Finite Element Methods for the Stochastic Stokes Equations. (arXiv:2005.03148v1 [math.NA])
We propose some new mixed finite element methods for the time dependent stochastic Stokes equations with multiplicative noise, which use the Helmholtz decomposition of the driving multiplicative noise. It is known [16] that the pressure solution has a low regularity, which manifests in sub-optimal convergence rates for well-known inf-sup stable mixed finite element methods in numerical simulations; see [10]. We show that eliminating this gradient part from the noise in the numerical scheme leads to optimally convergent mixed finite element methods, and that this conceptual idea may be used to retool numerical methods that are well-known in the deterministic setting, including pressure stabilization methods, so that their optimal convergence properties can still be maintained in the stochastic setting. Computational experiments are also provided to validate the theoretical results and to illustrate the conceptual usefulness of the proposed numerical approach.

A Separation Theorem for Joint Sensor and Actuator Scheduling with Guaranteed Performance Bounds. (arXiv:2005.03143v1 [eess.SY])
We study the problem of jointly designing a sparse sensor and actuator schedule for linear dynamical systems while guaranteeing a control/estimation performance that approximates the fully sensed/actuated setting. We further prove a separation principle, showing that the problem can be decomposed into finding sensor and actuator schedules separately. However, it is shown that this problem cannot be efficiently solved or approximated in polynomial, or even quasi-polynomial, time for time-invariant sensor/actuator schedules; instead, we develop deterministic polynomial-time algorithms for a time-varying sensor/actuator schedule with guaranteed approximation bounds. Our main result is to provide a polynomial-time joint actuator and sensor schedule that on average selects only a constant number of sensors and actuators at each time step, irrespective of the dimension of the system. The key idea is to sparsify the controllability and observability Gramians while providing approximation guarantees for Hankel singular values. This idea is inspired by recent results in theoretical computer science literature on sparsification.

A Gentle Introduction to Quantum Computing Algorithms with Applications to Universal Prediction. (arXiv:2005.03137v1 [quant-ph])
In this technical report we give an elementary introduction to Quantum Computing for non-physicists. In this introduction we describe in detail some of the foundational quantum algorithms including: the Deutsch-Jozsa Algorithm, Shor's Algorithm, Grover Search, and the Quantum Counting Algorithm, and briefly the Harrow-Lloyd Algorithm. Additionally we give an introduction to Solomonoff Induction, a theoretically optimal method for prediction. We then attempt to use quantum computing to find better algorithms for the approximation of Solomonoff Induction. This is done by using techniques from other quantum computing algorithms to achieve a speedup in computing the speed prior, which is an approximation of Solomonoff's prior, a key part of Solomonoff Induction. The major limiting factors are that the probabilities being computed are often so small that without a sufficient (often large) number of trials, the error may be larger than the result. If a substantial speedup in the computation of an approximation of Solomonoff Induction can be achieved through quantum computing, then this can be applied to the field of intelligent agents as a key part of an approximation of the agent AIXI.

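As a worked example of the query savings behind Grover search (a standard fact, not a claim specific to this report): for a database of $N$ items with a single marked item, each Grover iteration rotates the state by an angle $\theta$ with $\sin(\theta/2) = 1/\sqrt{N}$, so the marked item is found with high probability after roughly $(\pi/4)\sqrt{N}$ iterations rather than the classical $\Theta(N)$ queries:

\[
\sin\frac{\theta}{2} = \frac{1}{\sqrt{N}}, \qquad
k_{\mathrm{opt}} \approx \left\lfloor \frac{\pi}{4}\sqrt{N} \right\rfloor .
\]
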
Catch Me If You Can: Using Power Analysis to Identify HPC Activity. (arXiv:2005.03135v1 [cs.CR])
Monitoring users on large computing platforms such as high performance computing (HPC) and cloud computing systems is non-trivial. Utilities such as process viewers provide limited insight into what users are running, due to granularity limitations, and other sources of data, such as system call tracing, can impose significant operational overhead. However, despite technical and procedural measures, instances of users abusing valuable HPC resources for personal gains have been documented in the past [hpcbitmine], and systems that are open to large numbers of loosely-verified users from around the world are at risk of abuse. In this paper, we show how electrical power consumption data from an HPC platform can be used to identify what programs are executed. The intuition is that during execution, programs exhibit various patterns of CPU and memory activity. These patterns are reflected in the power consumption of the system and can be used to identify programs running. We test our approach on an HPC rack at Lawrence Berkeley National Laboratory using a variety of scientific benchmarks. Among other interesting observations, our results show that by monitoring the power consumption of an HPC rack, it is possible to identify if particular programs are running with precision and recall of up to 95% even in noisy scenarios.

Evaluation, Tuning and Interpretation of Neural Networks for Meteorological Applications. (arXiv:2005.03126v1 [physics.ao-ph])
Neural networks have opened up many new opportunities to utilize remotely sensed images in meteorology. Common applications include image classification, e.g., to determine whether an image contains a tropical cyclone, and image translation, e.g., to emulate radar imagery for satellites that only have passive channels. However, there are still many open questions regarding the use of neural networks in meteorology, such as best practices for evaluation, tuning and interpretation. This article highlights several strategies and practical considerations for neural network development that have not yet received much attention in the meteorological community, such as the concept of effective receptive fields, underutilized meteorological performance measures, and methods for neural network interpretation, such as synthetic experiments and layer-wise relevance propagation. We also consider the process of neural network interpretation as a whole, recognizing it as an iterative scientist-driven discovery process, and breaking it down into individual steps that researchers can take. Finally, while most work on neural network interpretation in meteorology has so far focused on networks for image classification tasks, we expand the focus to also include networks for image translation.

Rigid Matrices From Rectangular PCPs. (arXiv:2005.03123v1 [cs.CC])
We introduce a variant of PCPs, that we refer to as rectangular PCPs, wherein proofs are thought of as square matrices, and the random coins used by the verifier can be partitioned into two disjoint sets, one determining the row of each query and the other determining the *column*. We construct PCPs that are efficient, short, smooth and (almost-)rectangular. As a key application, we show that proofs for hard languages in NTIME$(2^n)$, when viewed as matrices, are rigid infinitely often. This strengthens and considerably simplifies a recent result of Alman and Chen [FOCS, 2019] constructing explicit rigid matrices in FNP. Namely, we prove the following theorem:
- There is a constant $\delta \in (0,1)$ such that there is an FNP-machine that, for infinitely many $N$, on input $1^N$ outputs $N \times N$ matrices with entries in $\mathbb{F}_2$ that are $\delta N^2$-far (in Hamming distance) from matrices of rank at most $2^{\log N/\Omega(\log \log N)}$.
Our construction of rectangular PCPs starts with an analysis of how randomness yields queries in the Reed-Muller-based outer PCP of Ben-Sasson, Goldreich, Harsha, Sudan and Vadhan [SICOMP, 2006; CCC, 2005]. We then show how to preserve rectangularity under PCP composition and a smoothness-inducing transformation. This warrants refined and stronger notions of rectangularity, which we prove for the outer PCP and its transforms.

Electricity-Aware Heat Unit Commitment: A Bid-Validity Approach. (arXiv:2005.03120v1 [eess.SY])
Coordinating the operation of combined heat and power plants (CHPs) and heat pumps (HPs) at the interface between heat and power systems is essential to achieve a cost-effective and efficient operation of the overall energy system. Indeed, in the current sequential market practice, the heat market has no insight into the impacts of heat dispatch on the electricity market. While preserving this sequential practice, this paper introduces an electricity-aware heat unit commitment model. Coordination is achieved through bid validity constraints, which embed the techno-economic linkage between heat and electricity outputs and costs of CHPs and HPs. This approach constitutes a novel market mechanism for the coordination of heat and power systems, defining heat bids conditionally on electricity market prices. The resulting model is a trilevel optimization problem, which we recast as a mixed-integer linear program using a lexicographic function. We use a realistic case study based on the Danish power and heat system, and show that the proposed model yields a 4.5% reduction in total operating cost of heat and power systems compared to a traditional decoupled unit commitment model, while reducing the financial losses of each CHP and HP due to invalid bids by up to 20.3 million euros.

Unsupervised Multimodal Neural Machine Translation with Pseudo Visual Pivoting. (arXiv:2005.03119v1 [cs.CL])
Unsupervised machine translation (MT) has recently achieved impressive results with monolingual corpora only. However, it is still challenging to associate source-target sentences in the latent space. As people who speak different languages share biologically similar visual systems, the potential of achieving better alignment through visual content is promising yet under-explored in unsupervised multimodal MT (MMT). In this paper, we investigate how to utilize visual content for disambiguation and promoting latent space alignment in unsupervised MMT. Our model employs multimodal back-translation and features pseudo visual pivoting in which we learn a shared multilingual visual-semantic embedding space and incorporate visually-pivoted captioning as additional weak supervision. The experimental results on the widely used Multi30K dataset show that the proposed model significantly improves over the state-of-the-art methods and generalizes well when the images are not available at the testing time.

Strong replica symmetry in high-dimensional optimal Bayesian inference. (arXiv:2005.03115v1 [math.PR])
We consider generic optimal Bayesian inference, namely, models of signal reconstruction where the posterior distribution and all hyperparameters are known. Under a standard assumption on the concentration of the free energy, we show how replica symmetry in the strong sense of concentration of all multioverlaps can be established as a consequence of the Franz-de Sanctis identities; the identities themselves in the current setting are obtained via a novel perturbation of the prior distribution of the signal. Concentration of multioverlaps means that asymptotically the posterior distribution has a particularly simple structure encoded by a random probability measure (or, in the case of a binary signal, a non-random probability measure). We believe that such strong control of the model should be key in the study of inference problems with underlying sparse graphical structure (error-correcting codes, block models, etc.) and, in particular, in the derivation of replica symmetric formulas for the free energy and mutual information in this context.

Deep Learning for Image-based Automatic Dial Meter Reading: Dataset and Baselines. (arXiv:2005.03106v1 [cs.CV])
Smart meters enable remote and automatic electricity, water and gas consumption reading and are being widely deployed in developed countries. Nonetheless, there is still a huge number of non-smart meters in operation. Image-based Automatic Meter Reading (AMR) focuses on dealing with this type of meter readings. We estimate that the Energy Company of Paraná (Copel), in Brazil, performs more than 850,000 readings of dial meters per month. Those meters are the focus of this work. Our main contributions are: (i) a public real-world dial meter dataset (shared upon request) called UFPR-ADMR; (ii) a deep learning-based recognition baseline on the proposed dataset; and (iii) a detailed error analysis of the main issues present in AMR for dial meters. To the best of our knowledge, this is the first work to introduce deep learning approaches to multi-dial meter reading, and perform experiments on unconstrained images. We achieved a 100.0% F1-score on the dial detection stage with both Faster R-CNN and YOLO, while the recognition rates reached 93.6% for dials and 75.25% for meters using Faster R-CNN (ResNext-101).

Constrained de Bruijn Codes: Properties, Enumeration, Constructions, and Applications. (arXiv:2005.03102v1 [cs.IT])
The de Bruijn graph, its sequences, and their various generalizations, have found many applications in information theory, including many new ones in the last decade. In this paper, motivated by a coding problem for emerging memory technologies, a set of sequences which generalize sequences in the de Bruijn graph are defined. These sequences can be also defined and viewed as constrained sequences. Hence, they will be called constrained de Bruijn sequences and a set of such sequences will be called a constrained de Bruijn code. Several properties and alternative definitions for such codes are examined and they are analyzed as generalized sequences in the de Bruijn graph (and its generalization) and as constrained sequences. Various enumeration techniques are used to compute the total number of sequences for any given set of parameters. A construction method of such codes from the theory of shift-register sequences is proposed. Finally, we show how these constrained de Bruijn sequences and codes can be applied in constructions of codes for correcting synchronization errors in the $\ell$-symbol read channel and in the racetrack memory channel. For this purpose, these codes are superior in size to previously known codes.

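For readers unfamiliar with the classical objects being generalized, the following sketch generates an ordinary de Bruijn sequence B(k, n) with the standard Lyndon-word (FKM) algorithm; it illustrates the unconstrained case only, not the constrained codes constructed in the paper.

```python
def de_bruijn(k, n):
    """de Bruijn sequence over alphabet {0..k-1} for subwords of length n,
    built by concatenating Lyndon words (the FKM algorithm)."""
    a = [0] * (k * n)
    seq = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return seq

print(de_bruijn(2, 3))   # [0, 0, 0, 1, 0, 1, 1, 1]: every 3-bit word appears cyclically
```
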
Scale-Equalizing Pyramid Convolution for Object Detection. (arXiv:2005.03101v1 [cs.CV])
Feature pyramid has been an efficient method to extract features at different scales. Development over this method mainly focuses on aggregating contextual information at different levels while seldom touching the inter-level correlation in the feature pyramid. Early computer vision methods extracted scale-invariant features by locating the feature extrema in both spatial and scale dimension. Inspired by this, a convolution across the pyramid level is proposed in this study, which is termed pyramid convolution and is a modified 3-D convolution. Stacked pyramid convolutions directly extract 3-D (scale and spatial) features and outperform other meticulously designed feature fusion modules. Based on the viewpoint of 3-D convolution, an integrated batch normalization that collects statistics from the whole feature pyramid is naturally inserted after the pyramid convolution. Furthermore, we also show that the naive pyramid convolution, together with the design of RetinaNet head, actually best applies for extracting features from a Gaussian pyramid, whose properties can hardly be satisfied by a feature pyramid. In order to alleviate this discrepancy, we build a scale-equalizing pyramid convolution (SEPC) that aligns the shared pyramid convolution kernel only at high-level feature maps. Being computationally efficient and compatible with the head design of most single-stage object detectors, the SEPC module brings significant performance improvement ($>4$ AP increase on the MS-COCO 2017 dataset) in state-of-the-art one-stage object detectors, and a light version of SEPC also has a $\sim 3.5$ AP gain with only around 7% inference time increase. The pyramid convolution also functions well as a stand-alone module in two-stage object detectors and is able to improve the performance by $\sim 2$ AP. The source code can be found at https://github.com/jshilong/SEPC.

Optimal Location of Cellular Base Station via Convex Optimization. (arXiv:2005.03099v1 [cs.IT])
An optimal base station (BS) location depends on the traffic (user) distribution, propagation pathloss and many system parameters, which renders its analytical study difficult, so that numerical algorithms are widely used instead. In this paper, the problem is studied analytically. First, it is formulated as a convex optimization problem to minimize the total BS transmit power subject to quality-of-service (QoS) constraints, which also account for fairness among users. Due to its convex nature, Karush-Kuhn-Tucker (KKT) conditions are used to characterize a globally-optimum location as a convex combination of user locations, where convex weights depend on user parameters, pathloss exponent and overall geometry of the problem. Based on this characterization, a number of closed-form solutions are obtained. In particular, the optimum BS location is the mean of user locations in the case of free-space propagation and identical user parameters. If the user set is symmetric (as defined in the paper), the optimal BS location is independent of the pathloss exponent, which is not the case in general. The analytical results show the impact of propagation conditions as well as system and user parameters on optimal BS location and can be used to develop design guidelines.

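Two of the closed-form cases are easy to state in code. The sketch below, my own illustration of the characterization rather than the authors' code, returns the optimal base-station location as a convex combination of user locations, which collapses to the plain mean under free-space propagation with identical user parameters.

```python
import numpy as np

def bs_location(user_locations, weights=None):
    """Optimal BS location as a convex combination of user locations.

    With free-space propagation and identical user parameters the optimal
    weights are uniform, i.e. the result is the mean of the user locations.
    """
    X = np.asarray(user_locations, dtype=float)    # shape (n_users, 2)
    if weights is None:                            # free-space, identical users
        return X.mean(axis=0)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                # enforce convex weights
    return w @ X

users = [(0.0, 0.0), (2.0, 0.0), (1.0, 3.0)]
print(bs_location(users))                          # centroid: [1. 1.]
```
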
Inference with Choice Functions Made Practical. (arXiv:2005.03098v1 [cs.AI])
We study how to infer new choices from previous choices in a conservative manner. To make such inferences, we use the theory of choice functions: a unifying mathematical framework for conservative decision making that allows one to impose axioms directly on the represented decisions. We here adopt the coherence axioms of De Bock and De Cooman (2019). We show how to naturally extend any given choice assessment to such a coherent choice function, whenever possible, and use this natural extension to make new choices. We present a practical algorithm to compute this natural extension and provide several methods that can be used to improve its scalability.

Near-optimal Detector for SWIPT-enabled Differential DF Relay Networks with SER Analysis. (arXiv:2005.03096v1 [cs.IT])
In this paper, we analyze the symbol error rate (SER) performance of simultaneous wireless information and power transfer (SWIPT) enabled three-node differential decode-and-forward (DDF) relay networks, which adopt the power splitting (PS) protocol at the relay. The use of non-coherent differential modulation eliminates the need for sending training symbols to estimate the instantaneous channel state information (CSI) at all network nodes, and therefore improves the power efficiency, as compared with coherent modulation. However, performance analysis results are not yet available for the state-of-the-art detectors such as the approximate maximum-likelihood detector. Existing works rely on Monte-Carlo simulation to show that there exists an optimal PS ratio that minimizes the overall SER. In this work, we propose a near-optimal detector with linear complexity with respect to the modulation size. We derive an accurate approximate SER expression, based on which the optimal PS ratio can be accurately estimated without requiring any Monte-Carlo simulation.

Heterogeneous Facility Location Games. (arXiv:2005.03095v1 [cs.GT])
We study heterogeneous $k$-facility location games. In this model there are $k$ facilities where each facility serves a different purpose. Thus, the preferences of the agents over the facilities can vary arbitrarily. Our goal is to design strategy proof mechanisms that place the facilities in a way to maximize the minimum utility among the agents. For $k=1$, if the agents' locations are known, we prove that the mechanism that places the facility on an optimal location is strategy proof. For $k \geq 2$, we prove that there is no optimal strategy proof mechanism, deterministic or randomized, even when $k=2$, there are only two agents with known locations, and the facilities have to be placed on a line segment. We derive inapproximability bounds for deterministic and randomized strategy proof mechanisms. Finally, we focus on the line segment and provide strategy proof mechanisms that achieve constant approximation. All of our mechanisms are simple and communication efficient. As a byproduct we show that some of our mechanisms can be used to achieve constant factor approximations for other objectives such as the social welfare and the happiness.

AIOps for a Cloud Object Storage Service. (arXiv:2005.03094v1 [cs.DC])
With the growing reliance on the ubiquitous availability of IT systems and services, these systems become more global, scaled, and complex to operate. To maintain business viability, IT service providers must put in place reliable and cost-efficient operations support. Artificial Intelligence for IT Operations (AIOps) is a promising technology for alleviating operational complexity of IT systems and services. AIOps platforms utilize big data, machine learning and other advanced analytics technologies to enhance IT operations with proactive actionable dynamic insight. In this paper we share our experience applying the AIOps approach to a production cloud object storage service to get actionable insights into the system's behavior and health. We describe a real-life production cloud scale service and its operational data, present the AIOps platform we have created, and show how it has helped us resolve operational pain points.

Eliminating NB-IoT Interference to LTE System: a Sparse Machine Learning Based Approach. (arXiv:2005.03092v1 [cs.IT])
Narrowband internet-of-things (NB-IoT) is a competitive 5G technology for massive machine-type communication scenarios, but it also introduces narrowband interference (NBI) to existing broadband transmission such as the long term evolution (LTE) systems in enhanced mobile broadband (eMBB) scenarios. In order to facilitate the harmonic and fair coexistence in wireless heterogeneous networks, it is important to eliminate NB-IoT interference to LTE systems. In this paper, a novel sparse machine learning based framework and a sparse combinatorial optimization problem is formulated for accurate NBI recovery, which can be efficiently solved using the proposed iterative sparse learning algorithm called sparse cross-entropy minimization (SCEM). To further improve the recovery accuracy and convergence rate, regularization is introduced to the loss function in the enhanced algorithm called regularized SCEM. Moreover, exploiting the spatial correlation of NBI, the framework is extended to multiple-input multiple-output systems. Simulation results demonstrate that the proposed methods are effective in eliminating NB-IoT interference to LTE systems, and significantly outperform the state-of-the-art methods.

Robust Trajectory and Transmit Power Optimization for Secure UAV-Enabled Cognitive Radio Networks. (arXiv:2005.03091v1 [cs.IT])
Cognitive radio is a promising technology to improve spectral efficiency. However, the secure performance of a secondary network achieved by using physical layer security techniques is limited by its transmit power and channel fading. In order to tackle this issue, a cognitive unmanned aerial vehicle (UAV) communication network is studied by exploiting the high flexibility of a UAV and the possibility of establishing line-of-sight links. The average secrecy rate of the secondary network is maximized by robustly optimizing the UAV's trajectory and transmit power. Our problem formulation takes into account two practical inaccurate location estimation cases, namely, the worst case and the outage-constrained case. In order to solve those challenging non-convex problems, an iterative algorithm based on the $\mathcal{S}$-procedure is proposed for the worst case while an iterative algorithm based on Bernstein-type inequalities is proposed for the outage-constrained case. The proposed algorithms can obtain effective suboptimal solutions of the corresponding problems. Our simulation results demonstrate that the algorithm under the outage-constrained case can achieve a higher average secrecy rate with a low computational complexity compared to that of the algorithm under the worst case. Moreover, the proposed schemes can improve the secure communication performance significantly compared to other benchmark schemes.

A Multifactorial Optimization Paradigm for Linkage Tree Genetic Algorithm. (arXiv:2005.03090v1 [cs.NE])
Linkage Tree Genetic Algorithm (LTGA) is an effective Evolutionary Algorithm (EA) to solve complex problems using the linkage information between problem variables. LTGA performs well in various kinds of single-task optimization and yields promising results in comparison with the canonical genetic algorithm. However, LTGA is an unsuitable method for dealing with multi-task optimization problems. On the other hand, Multifactorial Optimization (MFO) can simultaneously solve independent optimization problems, which are encoded in a unified representation to take advantage of the process of knowledge transfer. In this paper, we introduce Multifactorial Linkage Tree Genetic Algorithm (MF-LTGA) by combining the main features of both LTGA and MFO. MF-LTGA is able to tackle multiple optimization tasks at the same time; each task learns the dependency between problem variables from the shared representation. This knowledge serves to determine the high-quality partial solutions for supporting other tasks in exploring the search space. Moreover, MF-LTGA speeds up convergence because of knowledge transfer between relevant problems. We demonstrate the effectiveness of the proposed algorithm on two benchmark problems: the Clustered Shortest-Path Tree Problem and the Deceptive Trap Function. In comparison to LTGA and existing methods, MF-LTGA outperforms them in solution quality or in computation time.

Experiences from Exporting Major Proof Assistant Libraries. (arXiv:2005.03089v1 [cs.SE])
The interoperability of proof assistants and the integration of their libraries is a highly valued but elusive goal in the field of theorem proving. As a preparatory step, in previous work, we translated the libraries of multiple proof assistants, specifically the ones of Coq, HOL Light, IMPS, Isabelle, Mizar, and PVS into a universal format: OMDoc/MMT. Each translation presented tremendous theoretical, technical, and social challenges, some universal and some system-specific, some solvable and some still open. In this paper, we survey these challenges and compare and evaluate the solutions we chose. We believe similar library translations will be an essential part of any future system interoperability solution and our experiences will prove valuable to others undertaking such efforts.

Diagnosing the Environment Bias in Vision-and-Language Navigation. (arXiv:2005.03086v1 [cs.CL])
Vision-and-Language Navigation (VLN) requires an agent to follow natural-language instructions, explore the given environments, and reach the desired target locations. These step-by-step navigational instructions are crucial when the agent is navigating new environments about which it has no prior knowledge. Most recent works that study VLN observe a significant performance drop when tested on unseen environments (i.e., environments not used in training), indicating that the neural agent models are highly biased towards training environments. Although this issue is considered as one of the major challenges in VLN research, it is still under-studied and needs a clearer explanation. In this work, we design novel diagnosis experiments via environment re-splitting and feature replacement, looking into possible reasons for this environment bias. We observe that neither the language nor the underlying navigational graph, but the low-level visual appearance conveyed by ResNet features, directly affects the agent model and contributes to this environment bias in results. According to this observation, we explore several kinds of semantic representations that contain less low-level visual information, hence the agent learned with these features could be better generalized to unseen testing environments. Without modifying the baseline agent model and its training method, our explored semantic features significantly decrease the performance gaps between seen and unseen on multiple datasets (i.e., R2R, R4R, and CVDN) and achieve unseen results competitive with previous state-of-the-art models. Our code and features are available at: https://github.com/zhangybzbo/EnvBiasVLN

Beware the Normative Fallacy. (arXiv:2005.03084v1 [cs.SE])
Behavioral research can provide important insights for SE practices. But in performing it, many studies of SE are committing a normative fallacy: they misappropriate normative and prescriptive theories for descriptive purposes. The evidence from reviews of empirical studies of decision making in SE suggests that the normative fallacy may be common. This article draws on cognitive psychology and behavioral economics to explain this fallacy. Because data collection is framed by narrow and empirically invalid theories, flawed assumptions baked into those theories lead to misleading interpretations of observed behaviors and ultimately, to invalid conclusions and flawed recommendations. Researchers should be careful not to rely solely on engineering methods to explain what people do when they do engineering. Instead, insist that descriptive research be based on validated descriptive theories, listen carefully to skilled practitioners, and only rely on validated findings to prescribe what they should do.

Exploratory Analysis of Covid-19 Tweets using Topic Modeling, UMAP, and DiGraphs. (arXiv:2005.03082v1 [cs.SI])
This paper illustrates five different techniques to assess the distinctiveness of topics, key terms and features, speed of information dissemination, and network behaviors for Covid-19 tweets. First, we use pattern matching and second, topic modeling through Latent Dirichlet Allocation (LDA) to generate twenty different topics that discuss case spread, healthcare workers, and personal protective equipment (PPE). One topic specific to U.S. cases would start to uptick immediately after live White House Coronavirus Task Force briefings, implying that many Twitter users are paying attention to government announcements. We contribute machine learning methods not previously reported in the Covid-19 Twitter literature. This includes our third method, Uniform Manifold Approximation and Projection (UMAP), that identifies unique clustering behavior of distinct topics to improve our understanding of important themes in the corpus and help assess the quality of generated topics. Fourth, we calculated retweeting times to understand how fast information about Covid-19 propagates on Twitter. Our analysis indicates that the median retweeting time of Covid-19 for a sample corpus in March 2020 was 2.87 hours, approximately 50 minutes faster than repostings from Chinese social media about H7N9 in March 2013. Lastly, we sought to understand retweet cascades, by visualizing the connections of users over time from fast to slow retweeting. As the time to retweet increases, the density of connections also increases; in our sample, we found distinct users dominating the attention of Covid-19 retweeters. One of the simplest highlights of this analysis is that early-stage descriptive methods like regular expressions can successfully identify high-level themes which were consistently verified as important through every subsequent analysis.

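The retweet-latency statistic quoted above (median 2.87 hours) is simple to reproduce on one's own data. A minimal pandas sketch follows; the file name and column names are hypothetical placeholders, not the authors' dataset schema.

```python
import pandas as pd

# Hypothetical schema: one row per retweet, with the original tweet's
# timestamp and the retweet's timestamp.
df = pd.read_csv("covid_tweets_sample.csv",
                 parse_dates=["tweet_time", "retweet_time"])
latency_hours = (df["retweet_time"] - df["tweet_time"]).dt.total_seconds() / 3600.0
print(f"median retweeting time: {latency_hours.median():.2f} hours")
```
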
Line Artefact Quantification in Lung Ultrasound Images of COVID-19 Patients via Non-Convex Regularisation. (arXiv:2005.03080v1 [eess.IV])
In this paper, we present a novel method for line artefact quantification in lung ultrasound (LUS) images of COVID-19 patients. We formulate this as a non-convex regularisation problem involving a sparsity-enforcing, Cauchy-based penalty function, and the inverse Radon transform. We employ a simple local maxima detection technique in the Radon transform domain, associated with known clinical definitions of line artefacts. Despite being non-convex, the proposed method has guaranteed convergence via a proximal splitting algorithm and accurately identifies both horizontal and vertical line artefacts in LUS images. In order to reduce the number of false and missed detections, our method includes a two-stage validation mechanism, which is performed in both Radon and image domains. We evaluate the performance of the proposed method in comparison to the current state-of-the-art B-line identification method and show a considerable performance gain with 87% correctly detected B-lines in LUS images of nine COVID-19 patients. In addition, owing to its fast convergence, which takes around 12 seconds for a given frame, our proposed method is readily applicable for processing LUS image sequences.

AVAC: A Machine Learning based Adaptive RRAM Variability-Aware Controller for Edge Devices. (arXiv:2005.03077v1 [eess.SY])
Recently, the Edge Computing paradigm has gained significant popularity both in industry and academia. Researchers now increasingly target to improve performance and reduce energy consumption of such devices. Some recent efforts focus on using emerging RRAM technologies for improving energy efficiency, thanks to their no-leakage property and high integration density. As the complexity and dynamism of applications supported by such devices escalate, it has become difficult to maintain ideal performance by static RRAM controllers. Machine Learning provides a promising solution for this, and hence, this work focuses on extending such controllers to allow dynamic parameter updates. In this work we propose an Adaptive RRAM Variability-Aware Controller, AVAC, which periodically updates Wait Buffer and batch sizes using on-the-fly learning models and gradient ascent. AVAC allows Edge devices to adapt to different applications and their stages, to improve computation performance and reduce energy consumption. Simulations demonstrate that the proposed model can provide up to 29% increase in performance and 19% decrease in energy, compared to static controllers, using traces of real-life healthcare applications on a Raspberry-Pi based Edge deployment.

Guided Policy Search Model-based Reinforcement Learning for Urban Autonomous Driving. (arXiv:2005.03076v1 [cs.RO])
In this paper, we continue our prior work on using imitation learning (IL) and model-free reinforcement learning (RL) to learn driving policies for autonomous driving in urban scenarios, by introducing a model-based RL method to drive the autonomous vehicle in the Carla urban driving simulator. Although IL and model-free RL methods have been proved to be capable of solving lots of challenging tasks, including playing video games, robotics, and, in our prior work, urban driving, the low sample efficiency of such methods greatly limits their applications on actual autonomous driving. In this work, we developed a model-based RL algorithm of guided policy search (GPS) for urban driving tasks. The algorithm iteratively learns a parameterized dynamic model to approximate the complex and interactive driving task, and optimizes the driving policy under the nonlinear approximate dynamic model. As a model-based RL approach, when applied in urban autonomous driving, the GPS has the advantages of higher sample efficiency, better interpretability, and greater stability. We provide extensive experiments validating the effectiveness of the proposed method to learn robust driving policy for urban driving in Carla. We also compare the proposed method with other policy search and model-free RL baselines, showing 100x better sample efficiency of the GPS-based RL method, and also that the GPS-based method can learn policies for harder tasks that the baseline methods can hardly learn.

Categorical Vector Space Semantics for Lambek Calculus with a Relevant Modality. (arXiv:2005.03074v1 [cs.CL])
We develop a categorical compositional distributional semantics for Lambek Calculus with a Relevant Modality !L*, which has a limited version of the contraction and permutation rules. The categorical part of the semantics is a monoidal biclosed category with a coalgebra modality, very similar to the structure of a Differential Category. We instantiate this category to finite dimensional vector spaces and linear maps via "quantisation" functors and work with three concrete interpretations of the coalgebra modality. We apply the model to construct categorical and concrete semantic interpretations for the motivating example of !L*: the derivation of a phrase with a parasitic gap. The effectiveness of the concrete interpretations is evaluated via a disambiguation task, on an extension of a sentence disambiguation dataset to a parasitic gap phrase one, using BERT, Word2Vec, and FastText vectors and Relational tensors.

Two-Grid Deflated Krylov Methods for Linear Equations. (arXiv:2005.03070v1 [math.NA])
An approach is given for solving large linear systems that combines Krylov methods with use of two different grid levels. Eigenvectors are computed on the coarse grid and used to deflate eigenvalues on the fine grid. GMRES-type methods are first used on both the coarse and fine grids. Then another approach is given that has a restarted BiCGStab (or IDR) method on the fine grid. While BiCGStab is generally considered to be a non-restarted method, it works well in this context with deflating and restarting. Tests show this new approach can be very efficient for difficult linear systems.

rx I Always Feel Like Somebody's Sensing Me! A Framework to Detect, Identify, and Localize Clandestine Wireless Sensors. (arXiv:2005.03068v1 [cs.CR]) By arxiv.org Published On :: The increasing ubiquity of low-cost wireless sensors in smart homes and buildings has enabled users to easily deploy systems to remotely monitor and control their environments. However, this raises privacy concerns for third-party occupants, such as a hotel-room guest who may be unaware of deployed clandestine sensors. Previous methods focused on specific modalities, such as detecting cameras, but do not provide a generalizable and comprehensive method to capture arbitrary sensors that may be "spying" on a user. In this work, we seek to determine whether one can walk into a room and detect any wireless sensor monitoring an individual. To this end, we propose SnoopDog, a framework that not only detects wireless sensors actively monitoring a user, but also classifies and localizes each device. SnoopDog works by establishing causality between patterns in observable wireless traffic and a trusted sensor in the same space, e.g., an inertial measurement unit (IMU) that captures a user's movement. Once causality is established, SnoopDog performs packet inspection to inform the user about the monitoring device. Finally, SnoopDog localizes the clandestine device in a 2D plane using a novel trial-based localization technique. We evaluated SnoopDog across several devices and various modalities and were able to detect causality 96.6% of the time, classify suspicious devices with 100% accuracy, and localize devices to a sufficiently reduced sub-space. Full Article
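A toy version of the core causality test as we read the abstract: check whether a device's wireless-traffic volume tracks motion captured by the trusted IMU. The signal shapes, window length, and correlation threshold below are invented for illustration, not SnoopDog's actual statistics.

```python
import numpy as np

# Toy sketch of SnoopDog's core idea: a sensor "monitoring" the user should
# emit traffic that correlates with the trusted IMU's motion signal.
# All signal models and the threshold are illustrative assumptions.

rng = np.random.default_rng(7)
T = 600
motion = (rng.random(T) < 0.05).astype(float)           # IMU movement events
motion = np.convolve(motion, np.ones(10), mode="same")  # each lasts ~10 samples

camera_traffic = 5 + 40 * motion + rng.normal(scale=2, size=T)  # motion-activated
thermostat_traffic = 20 + rng.normal(scale=2, size=T)           # unrelated device

def correlates_with_motion(traffic, motion, threshold=0.5):
    """Pearson correlation between traffic volume and the trusted signal."""
    r = np.corrcoef(traffic, motion)[0, 1]
    return r, r > threshold

for name, traffic in [("camera", camera_traffic), ("thermostat", thermostat_traffic)]:
    r, suspicious = correlates_with_motion(traffic, motion)
    print(f"{name}: corr={r:.2f} -> {'monitoring user' if suspicious else 'benign'}")
```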
rx Weakly-Supervised Neural Response Selection from an Ensemble of Task-Specialised Dialogue Agents. (arXiv:2005.03066v1 [cs.CL]) By arxiv.org Published On :: Dialogue engines that incorporate different types of agents to converse with humans are popular. However, conversations are dynamic in the sense that a selected response will change the conversation on-the-fly, influencing the subsequent utterances, which makes response selection a challenging problem. We model the problem of selecting the best response from a set of responses generated by a heterogeneous set of dialogue agents by taking into account the conversational history, and propose a Neural Response Selection method. The proposed method is trained to predict a coherent set of responses within a single conversation, considering its own predictions via a curriculum training mechanism. Our experimental results show that the proposed method can accurately select the most appropriate responses, thereby significantly improving the user experience in dialogue systems. Full Article
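To make the setup concrete, here is a toy selector that scores each candidate response from an ensemble of agents against the conversation history and picks the best. The bag-of-words encoding and cosine score are stand-ins for the paper's trained neural scorer and curriculum mechanism.

```python
import numpy as np

# Toy stand-in for neural response selection: rank candidate responses from
# several task-specialised agents against the conversation history. The
# bag-of-words encoder and cosine score are illustrative only.

def bow(text, vocab):
    v = np.zeros(len(vocab))
    for w in text.lower().split():
        if w in vocab:
            v[vocab[w]] += 1.0
    return v

history = ["where can i find a good pizza place", "do you want delivery or dine in"]
candidates = {
    "restaurant_agent": "a good pizza place is two blocks away",
    "weather_agent": "it will be sunny tomorrow with a high of 20 degrees",
    "chitchat_agent": "i love pizza too",
}

all_text = list(candidates.values()) + history
vocab = {w: i for i, w in enumerate(sorted({w for t in all_text for w in t.lower().split()}))}
h = sum(bow(t, vocab) for t in history)  # history representation

def score(h, r):
    return float(h @ r / (np.linalg.norm(h) * np.linalg.norm(r) + 1e-9))

best = max(candidates.items(), key=lambda kv: score(h, bow(kv[1], vocab)))
print("selected:", best[0], "->", best[1])
```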
rx Learning, transferring, and recommending performance knowledge with Monte Carlo tree search and neural networks. (arXiv:2005.03063v1 [cs.LG]) By arxiv.org Published On :: Making changes to a program to optimize its performance is an unscalable task that relies entirely upon human intuition and experience. In addition, companies operating at large scale are at a stage where no single individual understands the code controlling their systems, and for this reason, making changes to improve performance can become intractably difficult. In this paper, a learning system is introduced that provides AI assistance for finding recommended changes to a program. Specifically, it is shown how the performance programming domain, with its evaluative feedback and delayed rewards, can be effectively formulated via the Monte Carlo tree search (MCTS) framework. It is then shown that established methods from computational games for using learning to expedite tree-search computation can be adapted to speed up computing recommended program alterations. Estimates of expected utility from MCTS trees built for previous problems are used to learn a sampling policy that remains effective across new problems, thus demonstrating transferability of optimization knowledge. This formulation is applied to the Apache Spark distributed computing environment, and a preliminary result is observed: the time required to build a search tree for finding recommendations is reduced by up to a factor of 10. Full Article
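A sketch of the transfer mechanism described above, in our own simplification: utility estimates learned from earlier search trees act as a prior that biases node selection in a new search. The toy action space, prior values, and reward are fabricated, not the paper's Apache Spark setting.

```python
import math, random

# Sketch of prior-guided MCTS-style selection (our simplification): a prior
# learned from previous problems biases which "program change" to expand.
# Actions, priors, and the reward function are illustrative placeholders.

ACTIONS = ["inline_fn", "batch_io", "cache_rdd", "repartition"]
prior = {"cache_rdd": 0.9, "batch_io": 0.6}  # learned from earlier trees

def reward(path):
    return 1.0 if "cache_rdd" in path else random.uniform(0.0, 0.3)

def uct(stats, parent_visits, action, c=1.4, p=0.5):
    n, w = stats.get(action, (0, 0.0))
    if n == 0:  # unvisited: prefer prior-backed actions, else explore
        return float("inf") if action not in prior else 10.0 * prior[action]
    exploit = w / n
    explore = c * math.sqrt(math.log(parent_visits) / n)
    return exploit + explore + p * prior.get(action, 0.0) / (1 + n)

stats, total = {}, 0
for _ in range(200):
    total += 1
    a = max(ACTIONS, key=lambda act: uct(stats, total, act))
    n, w = stats.get(a, (0, 0.0))
    stats[a] = (n + 1, w + reward([a]))

best = max(stats, key=lambda act: stats[act][1] / stats[act][0])
print("recommended change:", best)
```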
rx CovidCTNet: An Open-Source Deep Learning Approach to Identify Covid-19 Using CT Image. (arXiv:2005.03059v1 [eess.IV]) By arxiv.org Published On :: Coronavirus disease 2019 (Covid-19) is highly contagious with limited treatment options. Early and accurate diagnosis of Covid-19 is crucial in reducing the spread of the disease and its accompanying mortality. Currently, detection by reverse transcriptase polymerase chain reaction (RT-PCR) is the gold standard for outpatient and inpatient detection of Covid-19. RT-PCR is a rapid method; however, its accuracy in detection is only ~70-75%. Another approved strategy is computed tomography (CT) imaging. CT imaging has a much higher sensitivity of ~80-98%, but a similar accuracy of 70%. To enhance the accuracy of CT imaging detection, we developed an open-source set of algorithms called CovidCTNet that successfully differentiates Covid-19 from community-acquired pneumonia (CAP) and other lung diseases. CovidCTNet increases the accuracy of CT imaging detection to 90%, compared to 70% for radiologists. The model is designed to work with heterogeneous and small sample sizes, independent of the CT imaging hardware. In order to facilitate the detection of Covid-19 globally and assist radiologists and physicians in the screening process, we are releasing all algorithms and parametric details in an open-source format. Open-source sharing of CovidCTNet enables developers to rapidly improve and optimize services, while preserving user privacy and data ownership. Full Article
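The abstract gives no architecture details; purely for orientation, here is a minimal CT-slice classifier skeleton in PyTorch. The layer sizes, input shape, and three-class head (Covid-19 / CAP / other) are placeholder assumptions, not CovidCTNet's actual design, which is specified in the paper's open-source release.

```python
import torch
import torch.nn as nn

# Minimal skeleton of a CT-slice classifier (illustrative only; not the
# CovidCTNet architecture, preprocessing, or training pipeline).

class TinyCTNet(nn.Module):
    def __init__(self, n_classes: int = 3):  # e.g. Covid-19 / CAP / other
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes)
        )

    def forward(self, x):  # x: (batch, 1, H, W) single-channel CT slices
        return self.head(self.features(x))

model = TinyCTNet()
logits = model(torch.randn(4, 1, 128, 128))
print(logits.shape)  # torch.Size([4, 3])
```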
rx Extracting Headless MWEs from Dependency Parse Trees: Parsing, Tagging, and Joint Modeling Approaches. (arXiv:2005.03035v1 [cs.CL]) By arxiv.org Published On :: An interesting and frequent type of multi-word expression (MWE) is the headless MWE, for which there are no true internal syntactic dominance relations; examples include many named entities ("Wells Fargo") and dates ("July 5, 2020") as well as certain productive constructions ("blow for blow", "day after day"). Despite their special status and prevalence, current dependency-annotation schemes require treating such flat structures as if they had internal syntactic heads, and most current parsers handle them in the same fashion as headed constructions. Meanwhile, outside the context of parsing, taggers are typically used for identifying MWEs, but taggers might benefit from structural information. We empirically compare these two common strategies--parsing and tagging--for predicting flat MWEs. Additionally, we propose an efficient joint decoding algorithm that combines scores from both strategies. Experimental results on the MWE-Aware English Dependency Corpus and on six non-English dependency treebanks with frequent flat structures show that: (1) tagging is more accurate than parsing for identifying flat-structure MWEs, (2) our joint decoder reconciles the two different views and, for non-BERT features, leads to higher accuracies, and (3) most of the gains result from feature sharing between the parsers and taggers. Full Article
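A toy version of the score combination behind the joint decoder: mix a tagger's score for a candidate span with a parser's score for analyzing that span as a flat unit, then keep the best non-overlapping spans. The scores, mixing weight, and greedy selection are invented for illustration; the paper's decoder is exact and operates over full dependency structures.

```python
# Toy illustration of joint parsing+tagging decoding for flat MWEs.
# All scores and the weight alpha are fabricated for this sketch.

tokens = ["Wells", "Fargo", "opened", "on", "July", "5", ",", "2020"]

# (start, end) -> model scores; higher = more likely a flat MWE span.
tagger_score = {(0, 2): 2.1, (4, 8): 1.8, (2, 4): -1.0}
parser_score = {(0, 2): 1.5, (4, 8): 2.4, (2, 4): -0.5}

def joint_score(span, alpha=0.6):
    return alpha * tagger_score.get(span, 0.0) + (1 - alpha) * parser_score.get(span, 0.0)

# Greedy non-overlapping selection by combined score (a DP over token
# positions would make this exact; greedy keeps the sketch short).
chosen, used = [], set()
for span in sorted(tagger_score, key=joint_score, reverse=True):
    if joint_score(span) > 0 and not (set(range(*span)) & used):
        chosen.append(span)
        used |= set(range(*span))

for s, e in sorted(chosen):
    print("MWE:", " ".join(tokens[s:e]))
```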
rx Overview of Surgical Simulation. (arXiv:2005.03011v1 [cs.HC]) By arxiv.org Published On :: Motivated by the current demands of clinical governance, surgical simulation is now a well-established modality for basic skills training and assessment. The practical deployment of the technique is a multi-disciplinary venture encompassing areas in engineering, medicine and psychology. This paper provides an overview of the key topics involved in surgical simulation and the associated technical challenges. The paper discusses the clinical motivation for surgical simulation, the use of virtual environments for surgical training, model acquisition and simplification, deformable models, collision detection, tissue property measurement, haptic rendering and image synthesis. Additional topics include surgical skill training and assessment metrics, as well as challenges facing the incorporation of surgical simulation into medical education curricula. Full Article
rx Evaluating text coherence based on the graph of the consistency of phrases to identify symptoms of schizophrenia. (arXiv:2005.03008v1 [cs.CL]) By arxiv.org Published On :: Different state-of-the-art methods for detecting symptoms of schizophrenia based on estimates of text coherence are analyzed. We suggest analyzing a text at the level of phrases, and propose a method based on a graph of the consistency of phrases to evaluate the semantic coherence and cohesion of a text. The semantic coherence, cohesion, and other linguistic features (lexical diversity, lexical density) are combined into feature vectors for training a classifier. The classifier is trained on a set of English-language interviews, and the impact of each feature on the model's output is analyzed. The results indicate that the proposed method based on the graph of the consistency of phrases may be useful in other tasks of detecting mental illness. Full Article
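A simplified version of the graph-based coherence measure as we read the abstract: treat phrases as nodes, connect consecutive phrases, weight edges by embedding similarity, and take the mean edge weight as the coherence score. The phrase segmentation and random vectors below are stand-ins for the paper's linguistic pipeline and trained embeddings.

```python
import numpy as np

# Simplified sketch of a phrase-consistency-graph coherence score (our
# reading of the abstract; the vectors are stand-ins for trained embeddings):
# coherence = mean similarity along edges between consecutive phrases.

rng = np.random.default_rng(3)
embed = {}  # toy phrase embeddings, generated on demand

def phrase_vec(phrase):
    if phrase not in embed:
        embed[phrase] = rng.normal(size=16)
    return embed[phrase]

def coherence(phrases):
    sims = []
    for a, b in zip(phrases, phrases[1:]):
        va, vb = phrase_vec(a), phrase_vec(b)
        sims.append(float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb))))
    return sum(sims) / len(sims)

text_phrases = ["the patient described", "a recurring dream", "about flying",
                "over the city", "at night"]
print("coherence:", round(coherence(text_phrases), 3))
```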