app

A Note on Approximations of Fixed Points for Nonexpansive Mappings in Norm-attainable Classes. (arXiv:2005.03069v1 [math.FA])

Let $H$ be an infinite-dimensional, reflexive, separable Hilbert space and $NA(H)$ the class of all norm-attainable operators on $H.$ In this note, we study an implicit scheme for a canonical representation of nonexpansive contractions in norm-attainable classes.




app

Modeling nanoconfinement effects using active learning. (arXiv:2005.02587v2 [physics.app-ph] UPDATED)

Predicting the spatial configuration of gas molecules in nanopores of shale formations is crucial for fluid flow forecasting and hydrocarbon reserves estimation. The key challenge in these tight formations is that the majority of the pore sizes are less than 50 nm. At this scale, the fluid properties are affected by nanoconfinement effects due to the increased fluid-solid interactions. For instance, gas adsorption to the pore walls could account for up to 85% of the total hydrocarbon volume in a tight reservoir. Although there are analytical solutions that describe this phenomenon for simple geometries, they are not suitable for describing realistic pores, where surface roughness and geometric anisotropy play important roles. To describe these, molecular dynamics (MD) simulations are used since they consider fluid-solid and fluid-fluid interactions at the molecular level. However, MD simulations are computationally expensive, and are not able to simulate scales larger than a few connected nanopores. We present a method for building and training physics-based deep learning surrogate models to carry out fast and accurate predictions of molecular configurations of gas inside nanopores. Since training deep learning models requires extensive databases that are computationally expensive to create, we employ active learning (AL). AL reduces the overhead of creating comprehensive sets of high-fidelity data by determining where the model uncertainty is greatest, and running simulations on the fly to minimize it. The proposed workflow enables nanoconfinement effects to be rigorously considered at the mesoscale where complex connected sets of nanopores control key applications such as hydrocarbon recovery and CO2 sequestration.
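The abstract does not spell out the active-learning loop, but its core logic (fit a surrogate, locate the configurations where predictive uncertainty is largest, run new simulations there, retrain) can be sketched as below. The ensemble-variance uncertainty measure, the toy stand-in for the MD simulator, and all function names are illustrative assumptions, not the authors' implementation.

import numpy as np

def run_md_simulation(x):
    # Stand-in for an expensive MD simulation; returns a scalar property.
    return np.sin(3 * x).ravel()

def train_ensemble(X, y, n_models=5, rng=None):
    # Tiny ensemble of random-feature regressors; their disagreement is
    # used as a cheap proxy for model uncertainty.
    rng = rng or np.random.default_rng(0)
    models = []
    for _ in range(n_models):
        W = rng.normal(size=(X.shape[1], 64))
        b = rng.uniform(0, 2 * np.pi, 64)
        Phi = np.cos(X @ W + b)
        coef = np.linalg.lstsq(Phi, y, rcond=None)[0]
        models.append((W, b, coef))
    return models

def predict(models, X):
    preds = np.stack([np.cos(X @ W + b) @ c for W, b, c in models])
    return preds.mean(axis=0), preds.std(axis=0)

rng = np.random.default_rng(1)
X_pool = np.linspace(0, 2, 400).reshape(-1, 1)      # candidate configurations
X_train = rng.uniform(0, 2, size=(5, 1))            # small initial dataset
y_train = run_md_simulation(X_train)

for step in range(20):                               # active-learning loop
    models = train_ensemble(X_train, y_train)
    _, sigma = predict(models, X_pool)
    x_new = X_pool[[np.argmax(sigma)]]               # most uncertain point
    y_new = run_md_simulation(x_new)                 # simulate "on the fly"
    X_train = np.vstack([X_train, x_new])
    y_train = np.concatenate([y_train, y_new])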




app

The Cascade Transformer: an Application for Efficient Answer Sentence Selection. (arXiv:2005.02534v2 [cs.CL] UPDATED)

Large transformer-based language models have been shown to be very effective in many classification tasks. However, their computational complexity prevents their use in applications requiring the classification of a large set of candidates. While previous works have investigated approaches to reduce model size, relatively little attention has been paid to techniques to improve batch throughput during inference. In this paper, we introduce the Cascade Transformer, a simple yet effective technique to adapt transformer-based models into a cascade of rankers. Each ranker is used to prune a subset of candidates in a batch, thus dramatically increasing throughput at inference time. Partial encodings from the transformer model are shared among rerankers, providing further speed-up. When compared to a state-of-the-art transformer model, our approach reduces computation by 37% with almost no impact on accuracy, as measured on two English Question Answering datasets.
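A rough sketch of the cascading idea, batching candidates and letting each successively more expensive ranker prune the low-scoring ones, is given below; the scoring functions and keep fractions are made-up placeholders, and the sketch ignores the shared partial encodings that give the paper its additional speed-up.

import numpy as np

def cascade_rank(candidates, rankers, keep_fractions):
    # rankers: list of scoring functions, cheapest/earliest first.
    # keep_fractions: fraction of the current batch kept after each stage.
    batch = list(candidates)
    for ranker, keep in zip(rankers, keep_fractions):
        scores = np.array([ranker(c) for c in batch])
        k = max(1, int(len(batch) * keep))
        top = np.argsort(-scores)[:k]          # prune low-scoring candidates
        batch = [batch[i] for i in top]
    return batch

# Toy usage: three "rankers" of increasing cost (here just noisier or cleaner
# views of a hidden relevance score).
rng = np.random.default_rng(0)
relevance = rng.random(1000)
rankers = [lambda c, s=s: relevance[c] + rng.normal(0, s) for s in (0.3, 0.1, 0.0)]
survivors = cascade_rank(range(1000), rankers, keep_fractions=[0.3, 0.3, 0.01])
print(survivors)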




app

Toward Improving the Evaluation of Visual Attention Models: a Crowdsourcing Approach. (arXiv:2002.04407v2 [cs.CV] UPDATED)

Human visual attention is a complex phenomenon. A computational modeling of this phenomenon must take into account where people look in order to evaluate which are the salient locations (spatial distribution of the fixations), when they look in those locations to understand the temporal development of the exploration (temporal order of the fixations), and how they move from one location to another with respect to the dynamics of the scene and the mechanics of the eyes (dynamics). State-of-the-art models focus on learning saliency maps from human data, a process that only takes into account the spatial component of the phenomenon and ignores its temporal and dynamical counterparts. In this work we focus on the evaluation methodology of models of human visual attention. We underline the limits of the current metrics for saliency prediction and scanpath similarity, and we introduce a statistical measure for the evaluation of the dynamics of the simulated eye movements. While deep learning models achieve astonishing performance in saliency prediction, our analysis shows their limitations in capturing the dynamics of the process. We find that unsupervised gravitational models, despite their simplicity, outperform all competitors. Finally, exploiting a crowd-sourcing platform, we present a study aimed at evaluating how strongly the scanpaths generated with the unsupervised gravitational models appear plausible to naive and expert human observers.




app

A Real-Time Approach for Chance-Constrained Motion Planning with Dynamic Obstacles. (arXiv:2001.08012v2 [cs.RO] UPDATED)

Uncertain dynamic obstacles, such as pedestrians or vehicles, pose a major challenge for optimal robot navigation with safety guarantees. Previous work on motion planning has followed two main strategies to provide a safe bound on an obstacle's space: a polyhedron, such as a cuboid, or a nonlinear differentiable surface, such as an ellipsoid. The former approach relies on disjunctive programming, which has a relatively high computational cost that grows exponentially with the number of obstacles. The latter approach needs to be linearized locally to find a tractable evaluation of the chance constraints, which dramatically reduces the remaining free space and leads to over-conservative trajectories or even unfeasibility. In this work, we present a hybrid approach that eludes the pitfalls of both strategies while maintaining the original safety guarantees. The key idea consists in obtaining a safe differentiable approximation for the disjunctive chance constraints bounding the obstacles. The resulting nonlinear optimization problem is free of chance constraint linearization and disjunctive programming, and therefore, it can be efficiently solved to meet fast real-time requirements with multiple obstacles. We validate our approach through mathematical proof, simulation and real experiments with an aerial robot using nonlinear model predictive control to avoid pedestrians.




app

Safe non-smooth black-box optimization with application to policy search. (arXiv:1912.09466v3 [math.OC] UPDATED)

For safety-critical black-box optimization tasks, observations of the constraints and the objective are often noisy and available only for the feasible points. We propose an approach based on log barriers to find a local solution of a non-convex non-smooth black-box optimization problem $\min f^0(x)$ subject to $f^i(x) \leq 0,~ i = 1, \ldots, m$, while guaranteeing constraint satisfaction and learning an optimal solution with high probability. Our proposed algorithm exploits noisy observations to iteratively improve on an initial safe point until convergence. We derive the convergence rate and prove safety of our algorithm. We demonstrate its performance in an application to an iterative control design problem.
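For orientation, a generic log-barrier surrogate for such a problem (not necessarily the exact barrier analyzed in the paper) is $B_\eta(x) = f^0(x) - \eta \sum_{i=1}^{m} \log(-f^i(x))$ with $\eta > 0$; the barrier is finite only at strictly feasible points, so iterates that keep $B_\eta$ bounded while starting from a safe point remain feasible, which is the mechanism behind safety guarantees of this type.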




app

Numerical study on the effect of geometric approximation error in the numerical solution of PDEs using a high-order curvilinear mesh. (arXiv:1908.09917v2 [math.NA] UPDATED)

When time-dependent partial differential equations (PDEs) are solved numerically in a domain with curved boundary or on a curved surface, mesh error and geometric approximation error caused by the inaccurate location of vertices and other interior grid points, respectively, could be the main source of the inaccuracy and instability of the numerical solutions of PDEs. The role of these geometric errors in deteriorating the stability and particularly the conservation properties is largely unknown, which seems to necessitate very fine meshes especially to remove geometric approximation error. This paper aims to investigate the effect of geometric approximation error by using a high-order mesh with negligible geometric approximation error, even for a high polynomial order p. To achieve this goal, the high-order mesh generator from CAD geometry called NekMesh is adapted for surface mesh generation, and the results are compared with those on traditional meshes with non-negligible geometric approximation error. Two types of numerical tests are considered. Firstly, the accuracy of differential operators is compared for various p on a curved element of the sphere. Secondly, by applying the method of moving frames, four different time-dependent PDEs on the sphere are numerically solved to investigate the impact of geometric approximation error on the accuracy and conservation properties of high-order numerical schemes for PDEs on the sphere.




app

A Fast and Accurate Algorithm for Spherical Harmonic Analysis on HEALPix Grids with Applications to the Cosmic Microwave Background Radiation. (arXiv:1904.10514v4 [math.NA] UPDATED)

The Hierarchical Equal Area isoLatitude Pixelation (HEALPix) scheme is used extensively in astrophysics for data collection and analysis on the sphere. The scheme was originally designed for studying the Cosmic Microwave Background (CMB) radiation, which represents the first light to travel during the early stages of the universe's development and gives the strongest evidence for the Big Bang theory to date. Refined analysis of the CMB angular power spectrum can lead to revolutionary developments in understanding the nature of dark matter and dark energy. In this paper, we present a new method for performing spherical harmonic analysis for HEALPix data, which is a central component to computing and analyzing the angular power spectrum of the massive CMB data sets. The method uses a novel combination of a non-uniform fast Fourier transform, the double Fourier sphere method, and Slevinsky's fast spherical harmonic transform (Slevinsky, 2019). For a HEALPix grid with $N$ pixels (points), the computational complexity of the method is $\mathcal{O}(N \log^2 N)$, with an initial set-up cost of $\mathcal{O}(N^{3/2} \log N)$. This compares favorably with $\mathcal{O}(N^{3/2})$ runtime complexity of the current methods available in the HEALPix software when multiple maps need to be analyzed at the same time. Using numerical experiments, we demonstrate that the new method also appears to provide better accuracy over the entire angular power spectrum of synthetic data when compared to the current methods, with a convergence rate at least two times higher.




app

Fast Cross-validation in Harmonic Approximation. (arXiv:1903.10206v3 [math.NA] UPDATED)

Finding a good regularization parameter for Tikhonov regularization problems is a tough yet often asked question. One approach is to use leave-one-out cross-validation scores to indicate the goodness of fit. This utilizes only the noisy function values but, on the downside, comes with a high computational cost. In this paper we present a general approach to shift the main computations from the function in question to the node distribution and, making use of FFT and FFT-like algorithms, even reduce this cost tremendously to the cost of the Tikhonov regularization problem itself. We apply this technique in different settings on the torus, the unit interval, and the two-dimensional sphere. Given that the sampling points satisfy a quadrature rule, our algorithm computes the cross-validation scores in floating-point precision. In the case of arbitrarily scattered nodes we propose an approximating algorithm with the same complexity. Numerical experiments indicate the applicability of our algorithms.
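For reference, for a generic linear smoother $\hat f_\lambda = H_\lambda y$ (which covers Tikhonov regularization; this is the textbook identity, not necessarily the exact formulation used by the authors), the leave-one-out cross-validation score has the closed form $\mathrm{CV}(\lambda) = \frac{1}{n} \sum_{j=1}^{n} \bigl( (y_j - (H_\lambda y)_j) / (1 - (H_\lambda)_{jj}) \bigr)^2$, so the dominant cost is evaluating the diagonal of $H_\lambda$, which is the part that FFT and FFT-like algorithms can accelerate on structured node sets.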




app

Performance of the smallest-variance-first rule in appointment sequencing. (arXiv:1812.01467v4 [math.PR] UPDATED)

A classical problem in appointment scheduling, with applications in health care, concerns the determination of the patients' arrival times that minimize a cost function that is a weighted sum of mean waiting times and mean idle times. One aspect of this problem is the sequencing problem, which focuses on ordering the patients. We assess the performance of the smallest-variance-first (SVF) rule, which sequences patients in order of increasing variance of their service durations. While it was known that SVF is not always optimal, it has been widely observed that it performs well in practice and simulation. We provide a theoretical justification for this observation by proving, in various settings, quantitative worst-case bounds on the ratio between the cost incurred by the SVF rule and the minimum attainable cost. We also show that, in great generality, SVF is asymptotically optimal, i.e., the ratio approaches 1 as the number of patients grows large. While evaluating policies by considering an approximation ratio is a standard approach in many algorithmic settings, our results appear to be the first of this type in the appointment scheduling literature.
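In the notation used here for illustration (not taken from the paper), the quantity being bounded is the ratio $\rho_n = \mathrm{cost}(\mathrm{SVF}) / \mathrm{cost}(\mathrm{OPT}) \ge 1$ for $n$ patients; the worst-case results bound $\rho_n$ uniformly, and asymptotic optimality means $\rho_n \to 1$ as $n \to \infty$.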




app

Erdős-Pósa property of chordless cycles and its applications. (arXiv:1711.00667v3 [math.CO] UPDATED)

A chordless cycle, or equivalently a hole, in a graph $G$ is an induced subgraph of $G$ which is a cycle of length at least $4$. We prove that the Erdős-Pósa property holds for chordless cycles, which resolves the major open question concerning the Erdős-Pósa property. Our proof for chordless cycles is constructive: in polynomial time, one can find either $k+1$ vertex-disjoint chordless cycles, or $c_1 k^2 \log k + c_2$ vertices hitting every chordless cycle for some constants $c_1$ and $c_2$. It immediately implies an approximation algorithm of factor $\mathcal{O}(\mathsf{opt} \log \mathsf{opt})$ for Chordal Vertex Deletion. We complement our main result by showing that chordless cycles of length at least $\ell$ for any fixed $\ell \ge 5$ do not have the Erdős-Pósa property.




app

Compression, inversion, and approximate PCA of dense kernel matrices at near-linear computational complexity. (arXiv:1706.02205v4 [math.NA] UPDATED)

Dense kernel matrices $\Theta \in \mathbb{R}^{N \times N}$ obtained from point evaluations of a covariance function $G$ at locations $\{ x_{i} \}_{1 \leq i \leq N} \subset \mathbb{R}^{d}$ arise in statistics, machine learning, and numerical analysis. For covariance functions that are Green's functions of elliptic boundary value problems and homogeneously-distributed sampling points, we show how to identify a subset $S \subset \{ 1, \dots, N \}^2$, with $\# S = O( N \log(N) \log^{d}( N/\epsilon ) )$, such that the zero fill-in incomplete Cholesky factorisation of the sparse matrix $\Theta_{ij} 1_{(i,j) \in S}$ is an $\epsilon$-approximation of $\Theta$. This factorisation can provably be obtained in complexity $O( N \log(N) \log^{d}( N/\epsilon ) )$ in space and $O( N \log^{2}(N) \log^{2d}( N/\epsilon ) )$ in time, improving upon the state of the art for general elliptic operators; we further present numerical evidence that $d$ can be taken to be the intrinsic dimension of the data set rather than that of the ambient space. The algorithm only needs to know the spatial configuration of the $x_{i}$ and does not require an analytic representation of $G$. Furthermore, this factorization straightforwardly provides an approximate sparse PCA with optimal rate of convergence in the operator norm. Hence, by using only subsampling and the incomplete Cholesky factorization, we obtain, at nearly linear complexity, the compression, inversion and approximate PCA of a large class of covariance matrices. By inverting the order of the Cholesky factorization we also obtain a solver for elliptic PDE with complexity $O( N \log^{d}( N/\epsilon ) )$ in space and $O( N \log^{2d}( N/\epsilon ) )$ in time, improving upon the state of the art for general elliptic operators.
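The zero fill-in incomplete Cholesky factorisation the abstract builds on can be sketched generically as below; this is a dense textbook version meant only to show which entries get computed, with a made-up exponential-kernel example, and it says nothing about the ordering and aggregation choices that yield the paper's near-linear complexity.

import numpy as np

def incomplete_cholesky(theta, S):
    # Zero fill-in incomplete Cholesky of the sparsified matrix
    # theta_ij * 1[(i, j) in S]; entries outside S are never created.
    n = theta.shape[0]
    L = np.zeros_like(theta)
    for j in range(n):
        d = theta[j, j] - np.sum(L[j, :j] ** 2)
        L[j, j] = np.sqrt(max(d, 1e-12))   # guard against IC(0) breakdown
        for i in range(j + 1, n):
            if (i, j) not in S:            # zero fill-in: skip entries outside S
                continue
            L[i, j] = (theta[i, j] - L[i, :j] @ L[j, :j]) / L[j, j]
    return L

# Toy usage with a small SPD kernel matrix and a banded pattern S.
x = np.linspace(0, 1, 8)
theta = np.exp(-np.abs(x[:, None] - x[None, :]))       # exponential kernel
S = {(i, j) for i in range(8) for j in range(8) if abs(i - j) <= 2}
L = incomplete_cholesky(theta, S)
print(np.linalg.norm(theta - L @ L.T))                  # approximation error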




app

Delayed approximate matrix assembly in multigrid with dynamic precisions. (arXiv:2005.03606v1 [cs.MS])

The accurate assembly of the system matrix is an important step in any code that solves partial differential equations on a mesh. We either explicitly set up a matrix, or we work in a matrix-free environment where we have to be able to quickly return matrix entries upon demand. Either way, the construction can become costly due to non-trivial material parameters entering the equations, multigrid codes requiring cascades of matrices that depend upon each other, or dynamic adaptive mesh refinement that necessitates the recomputation of matrix entries or the whole equation system throughout the solve. We propose that these constructions can be performed concurrently with the multigrid cycles. Initial geometric matrices and low accuracy integrations kickstart the multigrid, while improved assembly data is fed to the solver as and when it becomes available. The time to solution is improved as we eliminate an expensive preparation phase traditionally delaying the actual computation. We eliminate algorithmic latency. Furthermore, we desynchronise the assembly from the solution process. This anarchic increase of the concurrency level improves the scalability. Assembly routines are notoriously memory- and bandwidth-demanding. As we work with iteratively improving operator accuracies, we finally propose the use of a hierarchical, lossy compression scheme such that the memory footprint is brought down aggressively where the system matrix entries carry little information or are not yet available with high accuracy.




app

COVID-19 Contact-tracing Apps: A Survey on the Global Deployment and Challenges. (arXiv:2005.03599v1 [cs.CR])

In response to the coronavirus disease (COVID-19) outbreak, there is an ever-increasing number of national governments that are rolling out contact-tracing Apps to aid the containment of the virus. The first hugely contentious issue facing the Apps is the deployment framework, i.e. centralised or decentralised. Based on this, the debate branches out to the corresponding technologies that underpin these architectures, i.e. GPS, QR codes, and Bluetooth. This work conducts a pioneering review of the above scenarios and contributes a geolocation mapping of the current deployment. The vulnerabilities and the directions of research are identified, with a special focus on the Bluetooth-based decentralised scheme.




app

A Local Spectral Exterior Calculus for the Sphere and Application to the Shallow Water Equations. (arXiv:2005.03598v1 [math.NA])

We introduce $\Psi\mathrm{ec}$, a local spectral exterior calculus for the two-sphere $S^2$. $\Psi\mathrm{ec}$ provides a discretization of Cartan's exterior calculus on $S^2$ formed by spherical differential $r$-form wavelets. These are well localized in space and frequency and provide (Stevenson) frames for the homogeneous Sobolev spaces $\dot{H}^{-r+1}( \Omega_{\nu}^{r}, S^2 )$ of differential $r$-forms. At the same time, they satisfy important properties of the exterior calculus, such as the de Rham complex and the Hodge-Helmholtz decomposition. Through this, $\Psi\mathrm{ec}$ is tailored towards structure preserving discretizations that can adapt to solutions with varying regularity. The construction of $\Psi\mathrm{ec}$ is based on a novel spherical wavelet frame for $L_2(S^2)$ that we obtain by introducing scalable reproducing kernel frames. These extend scalable frames to weighted sampling expansions and provide an alternative to quadrature rules for the discretization of needlet-like scale-discrete wavelets. We verify the practicality of $\Psi\mathrm{ec}$ for numerical computations using the rotating shallow water equations. Our numerical results demonstrate that a $\Psi\mathrm{ec}$-based discretization of the equations attains accuracy comparable to that of spectral methods while using a representation that is well localized in space and frequency.




app

Brain-like approaches to unsupervised learning of hidden representations -- a comparative study. (arXiv:2005.03476v1 [cs.NE])

Unsupervised learning of hidden representations has been one of the most vibrant research directions in machine learning in recent years. In this work we study the brain-like Bayesian Confidence Propagating Neural Network (BCPNN) model, recently extended to extract sparse distributed high-dimensional representations. The saliency and separability of the hidden representations when trained on the MNIST dataset are studied using an external classifier, and compared with other unsupervised learning methods that include restricted Boltzmann machines and autoencoders.




app

Successfully Applying the Stabilized Lottery Ticket Hypothesis to the Transformer Architecture. (arXiv:2005.03454v1 [cs.LG])

Sparse models require less memory for storage and enable a faster inference by reducing the necessary number of FLOPs. This is relevant both for time-critical and on-device computations using neural networks. The stabilized lottery ticket hypothesis states that networks can be pruned after none or few training iterations, using a mask computed based on the unpruned converged model. On the transformer architecture and the WMT 2014 English-to-German and English-to-French tasks, we show that stabilized lottery ticket pruning performs similarly to magnitude pruning for sparsity levels of up to 85%, and propose a new combination of pruning techniques that outperforms all other techniques for even higher levels of sparsity. Furthermore, we confirm that a parameter's initial sign, rather than its specific value, is the primary factor for successful training, and show that magnitude pruning cannot be used to find winning lottery tickets.
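A minimal numpy sketch of the magnitude-pruning mask used as the baseline comparison is given below; the array shapes, sparsity level, and the rewind-to-initialization step are illustrative assumptions rather than the paper's training pipeline.

import numpy as np

def magnitude_mask(weights, sparsity):
    # Keep the largest-magnitude weights; prune the rest.
    flat = np.abs(weights).ravel()
    k = int(round(sparsity * flat.size))
    threshold = np.partition(flat, k)[k] if k < flat.size else np.inf
    return np.abs(weights) >= threshold

rng = np.random.default_rng(0)
w_init = rng.normal(size=(512, 512))                          # weights at initialization
w_final = w_init + rng.normal(scale=0.1, size=w_init.shape)   # "converged" weights

mask = magnitude_mask(w_final, sparsity=0.85)   # mask computed from the converged model
w_ticket = mask * w_init                        # rewind surviving weights to initial values
print(1 - mask.mean())                          # achieved sparsity, ~0.85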




app

Dirichlet spectral-Galerkin approximation method for the simply supported vibrating plate eigenvalues. (arXiv:2005.03433v1 [math.NA])

In this paper, we analyze and implement the Dirichlet spectral-Galerkin method for approximating simply supported vibrating plate eigenvalues with variable coefficients. This is a Galerkin approximation that uses the approximation space that is the span of finitely many Dirichlet eigenfunctions for the Laplacian. Convergence and error analysis for this method is presented for two and three dimensions. Here we will assume that the domain has either a smooth or Lipschitz boundary with no reentrant corners. An important component of the error analysis is Weyl's law for the Dirichlet eigenvalues. Numerical examples for computing the simply supported vibrating plate eigenvalues for the unit disk and square are presented. In order to test the accuracy of the approximation, we compare the spectral-Galerkin method to the separation of variables for the unit disk, whereas for the unit square we numerically test the convergence rate for a variable-coefficient problem.
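For context, the form of Weyl's law invoked in such error analyses (stated here generically, not as the paper's exact statement) says that the Dirichlet eigenvalues of the Laplacian on a bounded domain $\Omega \subset \mathbb{R}^d$ satisfy $\lambda_k \sim 4\pi^2 k^{2/d} / (\omega_d |\Omega|)^{2/d}$ as $k \to \infty$, where $\omega_d$ is the volume of the unit ball in $\mathbb{R}^d$; this growth rate is the ingredient that enters the error analysis of the truncated eigenfunction basis.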




app

DFSeer: A Visual Analytics Approach to Facilitate Model Selection for Demand Forecasting. (arXiv:2005.03244v1 [cs.HC])

Selecting an appropriate model to forecast product demand is critical to the manufacturing industry. However, due to the data complexity, market uncertainty and users' demanding requirements for the model, it is challenging for demand analysts to select a proper model. Although existing model selection methods can reduce the manual burden to some extent, they often fail to present model performance details on individual products and reveal the potential risk of the selected model. This paper presents DFSeer, an interactive visualization system to conduct reliable model selection for demand forecasting based on the products with similar historical demand. It supports model comparison and selection with different levels of details. Besides, it shows the difference in model performance on similar products to reveal the risk of model selection and increase users' confidence in choosing a forecasting model. Two case studies and interviews with domain experts demonstrate the effectiveness and usability of DFSeer.




app

A Stochastic Geometry Approach to Doppler Characterization in a LEO Satellite Network. (arXiv:2005.03205v1 [cs.IT])

A Non-terrestrial Network (NTN) comprising Low Earth Orbit (LEO) satellites can enable connectivity to underserved areas, thus complementing existing telecom networks. The high-speed satellite motion poses several challenges at the physical layer such as large Doppler frequency shifts. In this paper, an analytical framework is developed for statistical characterization of Doppler shift in an NTN where LEO satellites provide communication services to terrestrial users. Using tools from stochastic geometry, the users within a cell are grouped into disjoint clusters to limit the differential Doppler across users. Under some simplifying assumptions, the cumulative distribution function (CDF) and the probability density function are derived for the Doppler shift magnitude at a random user within a cluster. The CDFs are also provided for the minimum and the maximum Doppler shift magnitude within a cluster. Leveraging the analytical results, the interplay between key system parameters such as the cluster size and satellite altitude is examined. Numerical results validate the insights obtained from the analysis.
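As a point of reference (a generic formula, not the paper's derivation), the instantaneous Doppler shift seen by a ground user is $f_D = (f_c / c)\, v_{\mathrm{sat}} \cos\psi$, where $f_c$ is the carrier frequency, $c$ the speed of light, $v_{\mathrm{sat}}$ the satellite speed relative to the user, and $\psi$ the angle between the satellite velocity vector and the satellite-user line of sight; the paper's contribution is the distribution of this quantity over the random user and satellite geometry.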




app

Distributed Stabilization by Probability Control for Deterministic-Stochastic Large Scale Systems: Dissipativity Approach. (arXiv:2005.03193v1 [eess.SY])

By using a dissipativity approach, we establish the stability condition for the feedback connection of a deterministic dynamical system $\Sigma$ and a stochastic memoryless map $\Psi$. After that, we extend the result to the class of large scale systems in which: $\Sigma$ consists of many sub-systems; and $\Psi$ consists of many "stochastic actuators" and "probability controllers" that control the actuator's output events. We will demonstrate the proposed approach by showing the design procedures to globally stabilize the manufacturing systems while locally balancing the stock levels in any production process.




app

Fast Mapping onto Census Blocks. (arXiv:2005.03156v1 [cs.DC])

Pandemic measures such as social distancing and contact tracing can be enhanced by rapidly integrating dynamic location data and demographic data. Projecting billions of longitude and latitude locations onto hundreds of thousands of highly irregular demographic census block polygons is computationally challenging in both research and deployment contexts. This paper describes two approaches labeled "simple" and "fast". The simple approach can be implemented in any scripting language (Matlab/Octave, Python, Julia, R) and is easily integrated and customized to a variety of research goals. This simple approach uses a novel combination of hierarchy, sparse bounding boxes, polygon crossing-number, vectorization, and parallel processing to achieve 100,000,000+ projections per second on 100 servers. The simple approach is compact, does not increase data storage requirements, and is applicable to any country or region. The fast approach exploits the thread, vector, and memory optimizations that are possible using a low-level language (C++) and achieves similar performance on a single server. This paper details these approaches with the goal of enabling the broader community to quickly integrate location and demographic data.
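The polygon crossing-number test mentioned as an ingredient of the simple approach can be sketched in a few lines; the bounding-box prefilter, data layout, and block identifiers below are illustrative choices, not the paper's code.

def point_in_polygon(lon, lat, polygon):
    # Crossing-number (even-odd) test: count edges crossed by a ray cast
    # to the right of the point; an odd count means the point is inside.
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > lat) != (y2 > lat):                      # edge straddles the ray
            x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > lon:
                inside = not inside
    return inside

def assign_block(lon, lat, blocks):
    # blocks: list of (block_id, bounding_box, polygon); the cheap bounding-box
    # check discards most candidates before the exact polygon test.
    for block_id, (xmin, ymin, xmax, ymax), polygon in blocks:
        if xmin <= lon <= xmax and ymin <= lat <= ymax and \
           point_in_polygon(lon, lat, polygon):
            return block_id
    return None

# Toy usage with one square "census block".
blocks = [("block-001", (0.0, 0.0, 1.0, 1.0),
           [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)])]
print(assign_block(0.25, 0.75, blocks))   # -> "block-001"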




app

A Gentle Introduction to Quantum Computing Algorithms with Applications to Universal Prediction. (arXiv:2005.03137v1 [quant-ph])

In this technical report we give an elementary introduction to Quantum Computing for non-physicists. In this introduction we describe in detail some of the foundational Quantum Algorithms including: the Deutsch-Jozsa Algorithm, Shor's Algorithm, Grover Search, and the Quantum Counting Algorithm, and briefly the Harrow-Lloyd Algorithm. Additionally we give an introduction to Solomonoff Induction, a theoretically optimal method for prediction. We then attempt to use Quantum computing to find better algorithms for the approximation of Solomonoff Induction. This is done by using techniques from other Quantum computing algorithms to achieve a speedup in computing the speed prior, which is an approximation of Solomonoff's prior, a key part of Solomonoff Induction. The major limiting factors are that the probabilities being computed are often so small that without a sufficient (often large) number of trials, the error may be larger than the result. If a substantial speedup in the computation of an approximation of Solomonoff Induction can be achieved through quantum computing, then this can be applied to the field of intelligent agents as a key part of an approximation of the agent AIXI.
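As a concrete companion to the Grover Search discussion, a classical state-vector simulation of Grover's algorithm (an illustration for intuition only, not one of the report's proposed algorithms) fits in a few lines:

import numpy as np

def grover_search(n_qubits, marked, n_iterations):
    # Classical state-vector simulation of Grover's algorithm for one
    # marked item; returns the success probability after n_iterations.
    N = 2 ** n_qubits
    state = np.full(N, 1 / np.sqrt(N))            # uniform superposition
    for _ in range(n_iterations):
        state[marked] *= -1                        # oracle: flip marked amplitude
        mean = state.mean()
        state = 2 * mean - state                   # diffusion: inversion about the mean
    return state[marked] ** 2

n = 10
optimal = int(round(np.pi / 4 * np.sqrt(2 ** n)))  # ~pi/4 * sqrt(N) iterations
print(optimal, grover_search(n, marked=123, n_iterations=optimal))  # probability ~1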




app

Evaluation, Tuning and Interpretation of Neural Networks for Meteorological Applications. (arXiv:2005.03126v1 [physics.ao-ph])

Neural networks have opened up many new opportunities to utilize remotely sensed images in meteorology. Common applications include image classification, e.g., to determine whether an image contains a tropical cyclone, and image translation, e.g., to emulate radar imagery for satellites that only have passive channels. However, there are yet many open questions regarding the use of neural networks in meteorology, such as best practices for evaluation, tuning and interpretation. This article highlights several strategies and practical considerations for neural network development that have not yet received much attention in the meteorological community, such as the concept of effective receptive fields, underutilized meteorological performance measures, and methods for NN interpretation, such as synthetic experiments and layer-wise relevance propagation. We also consider the process of neural network interpretation as a whole, recognizing it as an iterative scientist-driven discovery process, and breaking it down into individual steps that researchers can take. Finally, while most work on neural network interpretation in meteorology has so far focused on networks for image classification tasks, we expand the focus to also include networks for image translation.




app

Electricity-Aware Heat Unit Commitment: A Bid-Validity Approach. (arXiv:2005.03120v1 [eess.SY])

Coordinating the operation of combined heat and power plants (CHPs) and heat pumps (HPs) at the interface between heat and power systems is essential to achieve a cost-effective and efficient operation of the overall energy system. Indeed, in the current sequential market practice, the heat market has no insight into the impacts of heat dispatch on the electricity market. While preserving this sequential practice, this paper introduces an electricity-aware heat unit commitment model. Coordination is achieved through bid validity constraints, which embed the techno-economic linkage between heat and electricity outputs and costs of CHPs and HPs. This approach constitutes a novel market mechanism for the coordination of heat and power systems, defining heat bids conditionally on electricity market prices. The resulting model is a trilevel optimization problem, which we recast as a mixed-integer linear program using a lexicographic function. We use a realistic case study based on the Danish power and heat system, and show that the proposed model yields a 4.5% reduction in total operating cost of heat and power systems compared to a traditional decoupled unit commitment model, while reducing the financial losses of each CHP and HP due to invalid bids by up to 20.3 million euros.




app

Constrained de Bruijn Codes: Properties, Enumeration, Constructions, and Applications. (arXiv:2005.03102v1 [cs.IT])

The de Bruijn graph, its sequences, and their various generalizations, have found many applications in information theory, including many new ones in the last decade. In this paper, motivated by a coding problem for emerging memory technologies, a set of sequences which generalize sequences in the de Bruijn graph are defined. These sequences can also be defined and viewed as constrained sequences. Hence, they will be called constrained de Bruijn sequences and a set of such sequences will be called a constrained de Bruijn code. Several properties and alternative definitions for such codes are examined and they are analyzed as generalized sequences in the de Bruijn graph (and its generalization) and as constrained sequences. Various enumeration techniques are used to compute the total number of sequences for any given set of parameters. A construction method of such codes from the theory of shift-register sequences is proposed. Finally, we show how these constrained de Bruijn sequences and codes can be applied in constructions of codes for correcting synchronization errors in the $\ell$-symbol read channel and in the racetrack memory channel. For this purpose, these codes improve on the size of previously known codes.




app

Eliminating NB-IoT Interference to LTE System: a Sparse Machine Learning Based Approach. (arXiv:2005.03092v1 [cs.IT])

Narrowband internet-of-things (NB-IoT) is a competitive 5G technology for massive machine-type communication scenarios, but meanwhile introduces narrowband interference (NBI) to existing broadband transmission such as the long term evolution (LTE) systems in enhanced mobile broadband (eMBB) scenarios. In order to facilitate the harmonious and fair coexistence in wireless heterogeneous networks, it is important to eliminate NB-IoT interference to LTE systems. In this paper, a novel sparse machine learning based framework and a sparse combinatorial optimization problem are formulated for accurate NBI recovery, which can be efficiently solved using the proposed iterative sparse learning algorithm called sparse cross-entropy minimization (SCEM). To further improve the recovery accuracy and convergence rate, regularization is introduced to the loss function in the enhanced algorithm called regularized SCEM. Moreover, exploiting the spatial correlation of NBI, the framework is extended to multiple-input multiple-output systems. Simulation results demonstrate that the proposed methods are effective in eliminating NB-IoT interference to LTE systems, and significantly outperform the state-of-the-art methods.




app

CovidCTNet: An Open-Source Deep Learning Approach to Identify Covid-19 Using CT Image. (arXiv:2005.03059v1 [eess.IV])

Coronavirus disease 2019 (Covid-19) is highly contagious with limited treatment options. Early and accurate diagnosis of Covid-19 is crucial in reducing the spread of the disease and its accompanying mortality. Currently, detection by reverse transcriptase polymerase chain reaction (RT-PCR) is the gold standard of outpatient and inpatient detection of Covid-19. RT-PCR is a rapid method; however, its accuracy in detection is only ~70-75%. Another approved strategy is computed tomography (CT) imaging. CT imaging has a much higher sensitivity of ~80-98%, but similar accuracy of 70%. To enhance the accuracy of CT imaging detection, we developed an open-source set of algorithms called CovidCTNet that successfully differentiates Covid-19 from community-acquired pneumonia (CAP) and other lung diseases. CovidCTNet increases the accuracy of CT imaging detection to 90% compared to radiologists (70%). The model is designed to work with heterogeneous and small sample sizes independent of the CT imaging hardware. In order to facilitate the detection of Covid-19 globally and assist radiologists and physicians in the screening process, we are releasing all algorithms and parametric details in an open-source format. Open-source sharing of our CovidCTNet enables developers to rapidly improve and optimize services, while preserving user privacy and data ownership.




app

Extracting Headless MWEs from Dependency Parse Trees: Parsing, Tagging, and Joint Modeling Approaches. (arXiv:2005.03035v1 [cs.CL])

An interesting and frequent type of multi-word expression (MWE) is the headless MWE, for which there are no true internal syntactic dominance relations; examples include many named entities ("Wells Fargo") and dates ("July 5, 2020") as well as certain productive constructions ("blow for blow", "day after day"). Despite their special status and prevalence, current dependency-annotation schemes require treating such flat structures as if they had internal syntactic heads, and most current parsers handle them in the same fashion as headed constructions. Meanwhile, outside the context of parsing, taggers are typically used for identifying MWEs, but taggers might benefit from structural information. We empirically compare these two common strategies--parsing and tagging--for predicting flat MWEs. Additionally, we propose an efficient joint decoding algorithm that combines scores from both strategies. Experimental results on the MWE-Aware English Dependency Corpus and on six non-English dependency treebanks with frequent flat structures show that: (1) tagging is more accurate than parsing for identifying flat-structure MWEs, (2) our joint decoder reconciles the two different views and, for non-BERT features, leads to higher accuracies, and (3) most of the gains result from feature sharing between the parsers and taggers.




app

A Different Approach to Coding With React Hooks

React Hooks, introduced in React 16.8, present us with a fundamentally new approach to coding. Some may think of them as a replacement for lifecycles or classes; however, that would be wrong. Like trying to translate a word from another language, sometimes you’re facing a completely new entity, which seems identical on the surface but is very different semantically and can’t be treated as equivalent.

React not only changed the approach from OOP to Functional. The method of rendering has changed in principle. React is now fully built on functions instead of classes. And this has to be understood on a conceptual level. 




app

Privacy is disappearing faster than we realize, and the coronavirus isn't helping

The apps and devices you use are conducting surveillance with your every move. Sure, you lock your home, and you probably don't share your deepest secrets with random strangers.…




app

New music we love: Fiona Apple's thrilling Fetch the Bolt Cutters is a rush of lacerating lyrics and swirling sonics

You don't have to wander around the internet long before bumping into a rave review of Fiona Apple's new record Fetch the Bolt Cutters: It has inspired breathless acclaim, has already been labeled a masterwork and is notably the first new album in nearly a decade that Pitchfork has assigned a perfect 10/10 rating.…




app

Sneak Peek: Idaho’s DIY approach to COVID; Drink Local; mood music; Mother’s Day; and more!

The latest issue of the Inlander is hitting newsstands today. Find it at your local grocery store and hundreds of other locations; use this map to find a pickup point near you.…




app

Jill Ann Smith approaches her wide-ranging pursuits with passion and dedication

What do Arabian horses, women veterans, ceramics and the food industry have in common?…




app

Apparatus and method for selecting motion signifying artificial feeling

An apparatus for selecting a motion signifying artificial feeling is provided. The apparatus includes: a feeling expression setting unit configured to set probabilities of each feeling expression behavior performed for each expression element of a robot for each predetermined feeling; a behavior combination generation unit configured to generate at least one behavior combination combined by randomly extracting the feeling expression behaviors in each expression element one by one; and a behavior combination selection unit configured to calculate an average for the probabilities of the feeling expression behaviors included in each behavior combination for each feeling of a robot and select behavior combinations in which the average of the probabilities of the feeling expression behaviors most approximates the predetermined feeling value of a robot from each behavior combination.
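A loose sketch of the selection logic described above, with hypothetical behavior tables and probabilities standing in for the claimed units:

import random

# Hypothetical tables: for each expression element (e.g. face, arm, sound),
# each behavior has a probability of expressing the target feeling.
behavior_probs = {
    "face":  {"smile": 0.9, "frown": 0.1, "neutral": 0.4},
    "arm":   {"wave": 0.8, "drop": 0.2, "still": 0.5},
    "sound": {"laugh": 0.95, "sigh": 0.15, "silent": 0.5},
}

def random_combination():
    # One randomly extracted behavior per expression element.
    return {elem: random.choice(list(opts)) for elem, opts in behavior_probs.items()}

def select_combination(target, n_candidates=50):
    best, best_gap = None, float("inf")
    for _ in range(n_candidates):
        combo = random_combination()
        avg = sum(behavior_probs[e][b] for e, b in combo.items()) / len(combo)
        gap = abs(avg - target)              # distance to the target feeling value
        if gap < best_gap:
            best, best_gap = combo, gap
    return best

print(select_combination(target=0.85))       # e.g. {'face': 'smile', ...}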




app

Method for generating visual mapping of knowledge information from parsing of text inputs for subjects and predicates

A method for performing relational analysis of parsed input is employed to create a visual map of knowledge information. A title, header or subject line for an input item of information is parsed into syntactical components of at least a subject component and any predicate component(s) relationally linked as topic and subtopics. A search of topics and subtopics is carried out for each parsed component. If a match is found, then the parsed component is taken as a chosen topic/subtopic label. If no match is found, then the parsed component is formatted as a new entry in the knowledge map. A translation function for translating topics and subtopics from an original language into one or more target languages is enabled by user request or indicated user preference for display on a generated visual map of knowledge information.




app

Apparatus and method for recognizing representative user behavior based on recognition of unit behaviors

An apparatus for recognizing a representative user behavior includes a unit-data extracting unit configured to extract at least one unit data from sensor data, a feature-information extracting unit configured to extract feature information from each of the at least one unit data, a unit-behavior recognizing unit configured to recognize a respective unit behavior for each of the at least one unit data based on the feature information, and a representative-behavior recognizing unit configured to recognize at least one representative behavior based on the respective unit behavior recognized for each of the at least one unit data.




app

Method and apparatus for contextual content suggestion

An approach is provided for contextual content suggestion. A recommendation platform processes and/or facilitates a processing of contextual information associated with at least one device to determine one or more locations, one or more contextual parameter values, or a combination thereof. The recommendation platform also determines popularity data associated with one or more content items with respect to the one or more locations, the one or more contextual parameter values, or a combination. The popularity data is determined from one or more other devices sharing at least substantially the one or more locations, the one or more contextual parameter values, or a combination thereof. The recommendation platform then causes, at least in part, a recommendation of the one or more content items to the at least one device based, at least in part, on the popularity information.
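A toy sketch of the popularity-based recommendation step, with a hypothetical consumption log standing in for the data gathered from other devices sharing a location:

from collections import Counter

# Hypothetical consumption log from other devices: (location, content_id).
log = [("gym", "workout-mix"), ("gym", "workout-mix"), ("gym", "podcast-42"),
       ("office", "focus-playlist"), ("office", "podcast-42")]

def recommend(location, log, top_k=2):
    # Popularity is computed only from devices sharing the same location.
    counts = Counter(item for loc, item in log if loc == location)
    return [item for item, _ in counts.most_common(top_k)]

print(recommend("gym", log))   # -> ['workout-mix', 'podcast-42']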




app

Latent variable model estimation apparatus, and method

To provide a latent variable model estimation apparatus capable of implementing the model selection at high speed even if the number of model candidates increases exponentially as the latent state number and the kind of the observation probability increase. A variational probability calculating unit 71 calculates a variational probability by maximizing a reference value that is defined as a lower bound of an approximation amount, in which Laplace approximation of a marginalized log likelihood function is performed with respect to an estimator for a complete variable. A model estimation unit 72 estimates an optimum latent variable model by estimating the kind and a parameter of the observation probability with respect to each latent state. A convergence determination unit 73 determines whether a reference value, which is used by the variational probability calculating unit 71 to calculate the variational probability, converges.




app

Information providing apparatus for vehicle, and method therefor

An information providing apparatus for vehicle has a remaining capacity detecting section 110 that detects a remaining capacity of a battery; a power consumption amount detecting section 130 that detects a power consumption amount of the battery; a power consumption amount history generating section 130 that generates a power consumption amount history on the basis of the power consumption amount detected by the power consumption amount detecting section 130; a charge necessity judgment information generating section 130 that generates, on the basis of the power consumption amount history generated by the power consumption amount history generating section 130, charge necessity judgment information which is information for user's judgment about whether or not charging of the battery is necessary; and a providing section 150 that provides information of the remaining capacity of the battery and the charge necessity judgment information, correlated with each other, to the user. The information providing apparatus can properly provide the user with the information for judging whether or not charging of the battery is necessary.




app

Method and apparatus for declarative data warehouse definition for object-relational mapped objects

A data warehouse is constructed using the relational mapping of a transactional database without reconstructing the data relationships of the transactional database. First, an application programmer analyzes an object model in order to describe facts and dimensions using the objects, attributes, and paths of the object model. Each of the dimensions has an identifier that correlates an item in the transactional database to a dimension record in the data warehouse. The fact and dimension descriptions are saved to a description file. Second, a Data Warehouse Engine (DWE) then accesses the description file and uses the object model, fact and dimension descriptions, and object-relational mapping to map transactional data to the data warehouse.




app

Fast efficient vocabulary computation with hashed vocabularies applying hash functions to cluster centroids that determines most frequently used cluster centroid IDs

The disclosed embodiments describe a method, an apparatus, an application specific integrated circuit, and a server that provides a fast and efficient look up for data analysis. The apparatus and server may be configured to obtain data segments from a plurality of input devices. The data segments may be individual unique subsets of the entire data set obtained by a plurality of input devices. A hash function may be applied to an aggregated set of the data segments. A result of the hash function may be stored in a data structure. A codebook may be generated from the hash function results.
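A rough sketch of the described pipeline, assigning data segments to cluster centroids, keeping the most frequently used centroid IDs, and hashing the retained centroids into a codebook; the specific hashing and the frequency criterion below are assumptions made for illustration:

import numpy as np

def nearest_centroid_ids(segments, centroids):
    # Assign every vector in every segment to its nearest cluster centroid.
    data = np.vstack(segments)
    d2 = ((data[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

def build_codebook(segments, centroids, size):
    ids = nearest_centroid_ids(segments, centroids)
    counts = np.bincount(ids, minlength=len(centroids))
    frequent = np.argsort(-counts)[:size]          # most frequently used centroid IDs
    # Hash each retained centroid so its codebook slot can be looked up quickly.
    return {hash(centroids[i].tobytes()): code for code, i in enumerate(frequent)}

rng = np.random.default_rng(0)
centroids = rng.normal(size=(16, 8))
segments = [rng.normal(size=(100, 8)) for _ in range(4)]   # data from 4 input devices
codebook = build_codebook(segments, centroids, size=8)
print(len(codebook))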




app

Dicarboxylate-capped estolide compounds and methods of making and using the same

Described herein are dicarboxylate-capped estolide compounds and methods of making the same. Exemplary dicarboxylate-capped estolide compounds include those of the formula, wherein x is, independently for each occurrence, an integer selected from 0 to 20; y is, independently for each occurrence, an integer selected from 0 to 20; W is, independently for each occurrence, selected from —CH2— and —CH═CH—; z is an integer selected from 1 to 40; n is an integer equal to or greater than 0; R5 is selected from hydrogen, optionally substituted alkyl that is saturated or unsaturated, and branched or unbranched, and an estolide residue; and R2 is selected from hydrogen and optionally substituted alkyl that is saturated or unsaturated, and branched or unbranched, wherein each fatty acid chain residue of said at least one compound is independently optionally substituted.




app

Composite material for structural applications

Composite materials that contain epoxy resin which is toughened and strengthened with thermoplastic materials and a blend of insoluble particles. The uncured matrix resins include an epoxy resin component, a soluble thermoplastic component, a curing agent and an insoluble particulate component composed of elastic particles and rigid particles. The uncured resin matrix is combined with a fibrous reinforcement and cured/molded to form composite materials that may be used for structural applications, such as primary structures in aircraft.




app

Compatibilized polypropylene heterophasic copolymer and polylactic acid blends for injection molding applications

Injection molded articles and process of forming the same are described herein. The processes generally include providing a polyolefin including one or more propylene heterophasic copolymers, the polyolefin having an ethylene content of at least 10 wt. % based on the total weight of the polyolefin; contacting the polyolefin with a polylactic acid and a reactive modifier to form a compatibilized polymeric blend, wherein the reactive modifier is produced by contacting a polypropylene, a multifunctional acrylate comonomer, and an initiator under conditions suitable for the formation of a glycidyl methacrylate grafted polypropylene (PP-g-GMA) having a grafting yield in a range from 1 wt. % to 15 wt. %; and injection molding the compatibilized polymeric blend into an article.




app

Additive combination for sealants applications

The present invention pertains to an additive combination comprising at least two sterically hindered amines, at least one further stabilizer, a dispersing agent and a plasticizer. The present invention also pertains to a composition comprising an organic material susceptible to degradation by light, oxygen and/or heat, and the additive combination and to the use and the process for stabilizing organic material against degradation by light, oxygen and/or heat by the additive combination.




app

Method and apparatus for preparing fuel components from crude tall oil

A method for preparing fuel components from crude tall oil. Feedstock containing tall oil including unsaturated fatty acids is introduced to a catalytic hydrodeoxygenation to convert unsaturated fatty acids, rosin acids and sterols to fuel components. Crude tall oil is purified in a purification by washing the crude tall oil with washing liquid and separating the purified crude tall oil from the washing liquid. The purified crude tall oil is introduced directly to the catalytic hydrodeoxygenation as a purified crude tall oil feedstock. An additional feedstock may be supplied to the catalytic hydrodeoxygenation.




app

Methods and apparatuses for isomerization of paraffins

Embodiments of methods and apparatuses for isomerization of paraffins are provided. In one example, a method comprises the steps of separating an isomerization effluent into a product stream that comprises branched paraffins and a stabilizer vapor stream that comprises HCl, H2, and C6-hydrocarbons. C6-hydrocarbons are removed from the stabilizer overhead vapor stream to form a HCl and H2-rich stream. An isomerization catalyst is activated using at least a portion of the HCl and H2-rich stream to form a chloride-promoted isomerization catalyst. A paraffin feed stream is contacted with the chloride-promoted isomerization catalyst in the presence of hydrogen for isomerization of the paraffins.




app

Composite material, method for producing the same, and apparatus for producing the same

Disclosed is a composite material wherein adhesion between a silicon surface and a plating material is enhanced. A method and an apparatus for producing the composite material are also disclosed. The method for producing a composite material comprises a dispersion/allocation step wherein the surface of a silicon substrate (102), which is a matrix provided with a silicon layer at least as the outermost layer, is immersed into a first solution containing gold (Au) ions, so that particulate or island-shaped gold (Au) serving as a first metal and substituted with a part of the silicon layer are dispersed/allocated on the matrix surface, and a plating step wherein the silicon substrate (102) is immersed into a second solution (24), which contains a reducing agent to which gold (Au) exhibits catalyst activity and metal ions which can be reduced by the reducing agent, so that the surface of the silicon substrate (102) is covered with the metal or an alloy of the metal (108) which is formed by autocatalytic electroless plating using gold (Au) as a starting point.




app

Epoxy resin composition and light emitting apparatus

Disclosed are an epoxy resin composition and a light emitting apparatus. The epoxy resin composition includes a triazine derivative epoxy resin and an alicyclic epoxy resin.