mod

Godox’s new SL150/SL200 Mark II LED lights offer fanless “silent mode” operation

The Godox SL series LED lights have proven to be extremely popular due to their low cost. Two of the models in that range, the SL150 and SL200, have seen a Mark II update, according to an email that Godox has been sending out today. One of the features of the new SL150II and […]





mod

On the finiteness of ample models. (arXiv:2005.02613v2 [math.AG] UPDATED)

In this paper, we generalize the finiteness of models theorem in [BCHM06] to Kawamata log terminal pairs with fixed Kodaira dimension. As a consequence, we prove that a Kawamata log terminal pair with $\mathbb{R}$-boundary has a canonical model, and can be approximated by log pairs with $\mathbb{Q}$-boundary and the same canonical model.




mod

Some Quot schemes in tilted hearts and moduli spaces of stable pairs. (arXiv:2005.02202v2 [math.AG] UPDATED)

For a smooth projective variety $X$, we study analogs of Quot functors in hearts of non-standard $t$-structures of $D^b(\mathrm{Coh}(X))$. The technical framework is that of families of $t$-structures, as studied in arXiv:1902.08184. We provide several examples and suggest possible directions of further investigation, as we reinterpret moduli spaces of stable pairs, in the sense of Thaddeus (arXiv:alg-geom/9210007) and Huybrechts-Lehn (arXiv:alg-geom/9211001), as instances of Quot schemes.




mod

Finite dimensional simple modules of $(q, \mathbf{Q})$-current algebras. (arXiv:2004.11069v2 [math.RT] UPDATED)

The $(q, \mathbf{Q})$-current algebra associated with the general linear Lie algebra was introduced by the second author in the study of representation theory of cyclotomic $q$-Schur algebras. In this paper, we study the $(q, \mathbf{Q})$-current algebra $U_q(\mathfrak{sl}_n^{\langle \mathbf{Q} \rangle}[x])$ associated with the special linear Lie algebra $\mathfrak{sl}_n$. In particular, we classify finite dimensional simple $U_q(\mathfrak{sl}_n^{\langle \mathbf{Q} \rangle}[x])$-modules.




mod

New ${\cal N}{=}\,2$ superspace Calogero models. (arXiv:1912.05989v2 [hep-th] UPDATED)

Starting from the Hamiltonian formulation of ${\cal N}{=}\,2$ supersymmetric Calogero models associated with the classical $A_n, B_n, C_n$ and $D_n$ series and their hyperbolic/trigonometric cousins, we provide their superspace description. The key ingredients include $n$ bosonic and $2n(n{-}1)$ fermionic ${\cal N}{=}\,2$ superfields, the latter being subject to a nonlinear chirality constraint. This constraint has a universal form valid for all Calogero models. With its help we find more general supercharges (and a superspace Lagrangian), which provide the ${\cal N}{=}\,2$ supersymmetrization for bosonic potentials with arbitrary repulsive two-body interactions.




mod

Integrability of moduli and regularity of Denjoy counterexamples. (arXiv:1908.06568v4 [math.DS] UPDATED)

We study the regularity of exceptional actions of groups by $C^{1,\alpha}$ diffeomorphisms on the circle, i.e. ones which admit exceptional minimal sets, and whose elements have first derivatives that are continuous with concave modulus of continuity $\alpha$. Let $G$ be a finitely generated group admitting a $C^{1,\alpha}$ action $\rho$ with a free orbit on the circle, and such that the logarithms of derivatives of group elements are uniformly bounded at some point of the circle. We prove that if $G$ has spherical growth bounded by $c n^{d-1}$ and if the function $1/\alpha^d$ is integrable near zero, then under some mild technical assumptions on $\alpha$, there is a sequence of exceptional $C^{1,\alpha}$ actions of $G$ which converge to $\rho$ in the $C^1$ topology. As a consequence for a single diffeomorphism, we obtain that if the function $1/\alpha$ is integrable near zero, then there exists a $C^{1,\alpha}$ exceptional diffeomorphism of the circle. This corollary accounts for all previously known moduli of continuity for derivatives of exceptional diffeomorphisms. We also obtain a partial converse to our main result. For finitely generated free abelian groups, the existence of an exceptional action, together with some natural hypotheses on the derivatives of group elements, puts integrability restrictions on the modulus $\alpha$. These results are related to a long-standing question of D. McDuff concerning the length spectrum of exceptional $C^1$ diffeomorphisms of the circle.




mod

Mirror Symmetry for Non-Abelian Landau-Ginzburg Models. (arXiv:1812.06200v3 [math.AG] UPDATED)

We consider Landau-Ginzburg models stemming from groups that contain non-diagonal symmetries, and we describe a rule for the mirror LG model. In particular, we present the non-abelian dual group, which serves as the appropriate choice of group for the mirror LG model. We also describe an explicit mirror map between the A-model and the B-model state spaces for two examples. Further, we prove that this mirror map is an isomorphism between the untwisted broad sectors and the narrow diagonal sectors for Fermat type polynomials.




mod

On the rationality of cycle integrals of meromorphic modular forms. (arXiv:1810.00612v3 [math.NT] UPDATED)

We derive finite rational formulas for the traces of cycle integrals of certain meromorphic modular forms. Moreover, we prove the modularity of a completion of the generating function of such traces. The theoretical framework for these results is an extension of the Shintani theta lift to meromorphic modular forms of positive even weight.




mod

Local Moduli of Semisimple Frobenius Coalescent Structures. (arXiv:1712.08575v3 [math.DG] UPDATED)

We extend the analytic theory of Frobenius manifolds to semisimple points with coalescing eigenvalues of the operator of multiplication by the Euler vector field. We clarify which freedoms, ambiguities and mutual constraints are allowed in the definition of monodromy data, in view of their importance for conjectural relationships between Frobenius manifolds and derived categories. Detailed examples and applications are taken from singularity and quantum cohomology theories. We explicitly compute the monodromy data at points of the Maxwell Stratum of the A3-Frobenius manifold, as well as at the small quantum cohomology of the Grassmannian G(2,4). In the latter case, we analyse in detail the action of the braid group on the monodromy data. This proves that these data can be expressed in terms of characteristic classes of mutations of Kapranov's exceptional 5-block collection, as conjectured by one of the authors.




mod

Categorification via blocks of modular representations for sl(n). (arXiv:1612.06941v3 [math.RT] UPDATED)

Bernstein, Frenkel, and Khovanov have constructed a categorification of tensor products of the standard representation of $\mathfrak{sl}_2$, where they use singular blocks of category $\mathcal{O}$ for $\mathfrak{sl}_n$ and translation functors. Here we construct a positive characteristic analogue using blocks of representations of $\mathfrak{sl}_n$ over a field $\textbf{k}$ of characteristic $p$ with zero Frobenius character, and singular Harish-Chandra character. We show that the aforementioned categorification admits a Koszul graded lift, which is equivalent to a geometric categorification constructed by Cautis, Kamnitzer, and Licata using coherent sheaves on cotangent bundles to Grassmannians. In particular, the latter admits an abelian refinement. With respect to this abelian refinement, the stratified Mukai flop induces a perverse equivalence on the derived categories for complementary Grassmannians. This is part of a larger project to give a combinatorial approach to Lusztig's conjectures for representations of Lie algebras in positive characteristic.




mod

A Model for Optimal Human Navigation with Stochastic Effects. (arXiv:2005.03615v1 [math.OC])

We present a method for optimal path planning of human walking paths in mountainous terrain, using a control theoretic formulation and a Hamilton-Jacobi-Bellman equation. Previous models for human navigation were entirely deterministic, assuming perfect knowledge of the ambient elevation data and human walking velocity as a function of local slope of the terrain. Our model includes a stochastic component which can account for uncertainty in the problem, and thus includes a Hamilton-Jacobi-Bellman equation with viscosity. We discuss the model in the presence and absence of stochastic effects, and suggest numerical methods for simulating the model. We discuss two different notions of an optimal path when there is uncertainty in the problem. Finally, we compare the optimal paths suggested by the model at different levels of uncertainty, and observe that as the size of the uncertainty tends to zero (and thus the viscosity in the equation tends to zero), the optimal path tends toward the deterministic optimal path.
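As a rough sketch of the formulation described above (our notation, not the paper's), write $u(x)$ for the minimal expected travel time from $x$ to the target, $a$ for a unit-vector direction of travel, and $f(x,a)$ for the walking speed determined by the local slope; a stochastic perturbation of strength $\sigma$ then leads to an HJB equation with a viscosity term,

\[
\min_{\|a\|=1}\big\{ f(x,a)\, a\cdot\nabla u(x) \big\} \;+\; \tfrac{\sigma^{2}}{2}\,\Delta u(x) \;+\; 1 \;=\; 0,
\qquad u = 0 \text{ at the target},
\]

and setting $\sigma = 0$ recovers a deterministic, Eikonal-type equation of the kind used in the earlier models.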




mod

Groups up to congruence relation and from categorical groups to c-crossed modules. (arXiv:2005.03601v1 [math.CT])

We introduce a notion of c-group, which is a group up to congruence relation and consider the corresponding category. Extensions, actions and crossed modules (c-crossed modules) are defined in this category and the semi-direct product is constructed. We prove that each categorical group gives rise to c-groups and to a c-crossed module, which is a connected, special and strict c-crossed module in the sense defined by us. The results obtained here will be applied in the proof of an equivalence of the categories of categorical groups and connected, special and strict c-crossed modules.




mod

A reaction-diffusion system to better comprehend the unlockdown: Application of SEIR-type model with diffusion to the spatial spread of COVID-19 in France. (arXiv:2005.03499v1 [q-bio.PE])

A reaction-diffusion model was developed describing the spread of the COVID-19 virus considering the mean daily movement of susceptible, exposed and asymptomatic individuals. The model was calibrated using data on confirmed infections and deaths in France as well as their initial spatial distribution. First, the system of partial differential equations is studied, then the basic reproduction number, R0, is derived. Second, numerical simulations, based on a combination of level-set and finite differences, show the spatial spread of COVID-19 from March 16 to June 16. Finally, scenarios of unlockdown are compared according to variations in distancing or partial spatial lockdown.
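For orientation, a minimal SEIR-type reaction-diffusion system of the kind described above can be written schematically as follows (our notation; the paper's exact compartments, movement terms and parameters may differ):

\[
\begin{aligned}
\partial_t S &= d_S\,\Delta S - \beta\, S\,(I_a + I_s)/N,\\
\partial_t E &= d_E\,\Delta E + \beta\, S\,(I_a + I_s)/N - \delta E,\\
\partial_t I_a &= d_A\,\Delta I_a + (1-\eta)\,\delta E - \gamma I_a,\\
\partial_t I_s &= \eta\,\delta E - \gamma I_s,
\end{aligned}
\]

with diffusion acting only on the compartments that move (susceptible, exposed, asymptomatic), and R0 obtained from the next-generation matrix of the diffusion-free system.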




mod

On completion of unimodular rows over polynomial extension of finitely generated rings over $\mathbb{Z}$. (arXiv:2005.03485v1 [math.AC])

In this article, we prove that if $R$ is a finitely generated ring over $\mathbb{Z}$ of dimension $d$, $d\geq 2$, with $\frac{1}{d!}\in R$, then any unimodular row over $R[X]$ of length $d+1$ can be mapped to a factorial row by elementary transformations.




mod

A theory of stacks with twisted fields and resolution of moduli of genus two stable maps. (arXiv:2005.03384v1 [math.AG])

We construct a smooth moduli stack of tuples consisting of genus two nodal curves, line bundles, and twisted fields. It leads to a desingularization of the moduli of genus two stable maps to projective spaces. The construction of this new moduli is based on the systematic application of the theory of stacks with twisted fields (STF), whose prototype appeared in arXiv:1906.10527 and arXiv:1201.2427 and which is fully developed in this article. The results of this article are the second step of a series of works toward the resolutions of the moduli of stable maps of higher genera.




mod

A remark on the Laplacian flow and the modified Laplacian co-flow in G2-Geometry. (arXiv:2005.03332v1 [math.DG])

We observe that the DeTurck Laplacian flow of G2-structures introduced by Bryant and Xu as a gauge fixing of the Laplacian flow can be regarded as a flow of G2-structures (not necessarily closed) which fits in the general framework introduced by Hamilton in [4].




mod

Revised dynamics of the Belousov-Zhabotinsky reaction model. (arXiv:2005.03325v1 [nlin.CD])

The main aim of this paper is to detect dynamical properties of the Györgyi-Field model of the Belousov-Zhabotinsky chemical reaction. The corresponding three-variable model, given as a set of nonlinear ordinary differential equations, depends on one parameter, the flow rate. As certain values of this parameter can give rise to chaos, the analysis was performed in order to identify different dynamical regimes. Dynamical properties were qualified and quantified using classical as well as new techniques, namely phase portraits, bifurcation diagrams, Fourier spectral analysis, the 0-1 test for chaos, and approximate entropy. The correlation between approximate entropy and the 0-1 test for chaos was observed and described in detail. Moreover, the three-stage system of nested subintervals of flow rates, for which the 0-1 test for chaos and approximate entropy were computed at every level, shows the same pattern. The study leads to an open problem of whether the set of flow rate parameters has a Cantor-like structure.
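Since the 0-1 test for chaos is central to the analysis above, here is a minimal Python sketch of the standard Gottwald-Melbourne correlation version of the test (a generic implementation of the published algorithm, not the authors' code; in practice K is averaged over many random values of c):

import numpy as np

def zero_one_test(phi, c=None, n_cut=None, rng=None):
    # 0-1 test for chaos: K near 1 suggests chaos, K near 0 suggests regular dynamics.
    rng = np.random.default_rng() if rng is None else rng
    N = len(phi)
    c = rng.uniform(np.pi / 5, 4 * np.pi / 5) if c is None else c
    j = np.arange(1, N + 1)
    p = np.cumsum(phi * np.cos(j * c))          # translation variables
    q = np.cumsum(phi * np.sin(j * c))
    n_cut = N // 10 if n_cut is None else n_cut
    M = np.empty(n_cut)
    Ephi2 = np.mean(phi) ** 2
    for n in range(1, n_cut + 1):
        disp = (p[n:] - p[:-n]) ** 2 + (q[n:] - q[:-n]) ** 2
        # modified mean-square displacement removes an oscillatory term
        M[n - 1] = np.mean(disp) - Ephi2 * (1 - np.cos(n * c)) / (1 - np.cos(c))
    n_vals = np.arange(1, n_cut + 1)
    return np.corrcoef(n_vals, M)[0, 1]         # correlation method for K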




mod

The conjecture of Erdős--Straus is true for every $n\equiv 13 \textrm{ mod } 24$. (arXiv:2005.03273v1 [math.NT])

In this short note we give a proof of the famous conjecture of Erdős--Straus for the case $n\equiv 13 \textrm{ mod } 24$. The Erdős--Straus conjecture states that the equation $\frac{4}{n}=\frac{1}{x}+\frac{1}{y}+\frac{1}{z}$ has positive integer solutions $x,y,z$ for every $n\geq 2$. It is open for $n\equiv 1 \textrm{ mod } 12$. Indeed, in all of the other cases the solutions are always easy to find. We prove that the conjecture is true for every $n\equiv 13 \textrm{ mod } 24$. Therefore, to solve it completely, it remains to find solutions for every $n\equiv 1 \textrm{ mod } 24$.
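As a worked instance of the smallest case with $n\equiv 13 \textrm{ mod } 24$, one decomposition (a standard illustrative one, not necessarily the one produced by the construction in the note) is

\[
\frac{4}{13} = \frac{1}{4} + \frac{1}{18} + \frac{1}{468},
\qquad\text{since}\quad
\frac{1}{4}+\frac{1}{18} = \frac{11}{36}
\quad\text{and}\quad
\frac{4}{13}-\frac{11}{36} = \frac{144-143}{468} = \frac{1}{468}.
\]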




mod

Modeling nanoconfinement effects using active learning. (arXiv:2005.02587v2 [physics.app-ph] UPDATED)

Predicting the spatial configuration of gas molecules in nanopores of shale formations is crucial for fluid flow forecasting and hydrocarbon reserves estimation. The key challenge in these tight formations is that the majority of the pore sizes are less than 50 nm. At this scale, the fluid properties are affected by nanoconfinement effects due to the increased fluid-solid interactions. For instance, gas adsorption to the pore walls could account for up to 85% of the total hydrocarbon volume in a tight reservoir. Although there are analytical solutions that describe this phenomenon for simple geometries, they are not suitable for describing realistic pores, where surface roughness and geometric anisotropy play important roles. To describe these, molecular dynamics (MD) simulations are used since they consider fluid-solid and fluid-fluid interactions at the molecular level. However, MD simulations are computationally expensive, and are not able to simulate scales larger than a few connected nanopores. We present a method for building and training physics-based deep learning surrogate models to carry out fast and accurate predictions of molecular configurations of gas inside nanopores. Since training deep learning models requires extensive databases that are computationally expensive to create, we employ active learning (AL). AL reduces the overhead of creating comprehensive sets of high-fidelity data by determining where the model uncertainty is greatest, and running simulations on the fly to minimize it. The proposed workflow enables nanoconfinement effects to be rigorously considered at the mesoscale where complex connected sets of nanopores control key applications such as hydrocarbon recovery and CO2 sequestration.
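The active-learning loop described above can be sketched generically as follows (Python; run_md_simulation and ensemble are hypothetical stand-ins for the expensive MD simulator and the deep-learning surrogate with an uncertainty estimate, and are not from the paper):

import numpy as np

def active_learning_loop(candidate_pores, run_md_simulation, ensemble,
                         n_initial=10, n_rounds=20, batch_size=5):
    # run_md_simulation(pore) -> high-fidelity label; ensemble.fit / ensemble.predict
    # (returning a mean and an uncertainty estimate) are assumed interfaces, not real APIs.
    rng = np.random.default_rng(0)
    labeled = set(rng.choice(len(candidate_pores), n_initial, replace=False).tolist())
    X = [candidate_pores[i] for i in labeled]
    y = [run_md_simulation(x) for x in X]
    for _ in range(n_rounds):
        ensemble.fit(X, y)                              # train the surrogate
        _, std = ensemble.predict(candidate_pores)      # epistemic uncertainty proxy
        ranked = [i for i in np.argsort(-std) if i not in labeled]
        for i in ranked[:batch_size]:                   # query the most uncertain pores
            labeled.add(i)
            X.append(candidate_pores[i])
            y.append(run_md_simulation(candidate_pores[i]))  # simulate on the fly
    return ensemble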




mod

Temporal Event Segmentation using Attention-based Perceptual Prediction Model for Continual Learning. (arXiv:2005.02463v2 [cs.CV] UPDATED)

Temporal event segmentation of a long video into coherent events requires a high-level understanding of activities' temporal features. The event segmentation problem has been tackled by researchers in an offline training scheme, either by providing full or weak supervision through manually annotated labels or by self-supervised epoch-based training. In this work, we present a continual learning perceptual prediction framework (influenced by cognitive psychology) capable of temporal event segmentation through understanding of the underlying representation of objects within individual frames. Our framework also outputs attention maps which effectively localize and track event-causing objects in each frame. The model is tested on a wildlife monitoring dataset in a continual training manner, resulting in an $80\%$ recall rate at a $20\%$ false positive rate for frame-level segmentation. Activity-level testing has yielded an $80\%$ activity recall rate with one false activity detection every 50 minutes.




mod

The Sensitivity of Language Models and Humans to Winograd Schema Perturbations. (arXiv:2005.01348v2 [cs.CL] UPDATED)

Large-scale pretrained language models are the major driving force behind recent improvements in performance on the Winograd Schema Challenge, a widely employed test of common sense reasoning ability. We show, however, with a new diagnostic dataset, that these models are sensitive to linguistic perturbations of the Winograd examples that minimally affect human understanding. Our results highlight interesting differences between humans and language models: language models are more sensitive to number or gender alternations and synonym replacements than humans, and humans are more stable and consistent in their predictions, maintain a much higher absolute performance, and perform better on non-associative instances than associative ones. Overall, humans are correct more often than out-of-the-box models, and the models are sometimes right for the wrong reasons. Finally, we show that fine-tuning on a large, task-specific dataset can offer a solution to these issues.




mod

Recurrent Neural Network Language Models Always Learn English-Like Relative Clause Attachment. (arXiv:2005.00165v3 [cs.CL] UPDATED)

A standard approach to evaluating language models analyzes how models assign probabilities to valid versus invalid syntactic constructions (i.e. is a grammatical sentence more probable than an ungrammatical sentence). Our work uses ambiguous relative clause attachment to extend such evaluations to cases of multiple simultaneous valid interpretations, where stark grammaticality differences are absent. We compare model performance in English and Spanish to show that non-linguistic biases in RNN LMs advantageously overlap with syntactic structure in English but not Spanish. Thus, English models may appear to acquire human-like syntactic preferences, while models trained on Spanish fail to acquire comparable human-like preferences. We conclude by relating these results to broader concerns about the relationship between comprehension (i.e. typical language model use cases) and production (which generates the training data for language models), suggesting that necessary linguistic biases are not present in the training signal at all.




mod

Toward Improving the Evaluation of Visual Attention Models: a Crowdsourcing Approach. (arXiv:2002.04407v2 [cs.CV] UPDATED)

Human visual attention is a complex phenomenon. A computational modeling of this phenomenon must take into account where people look in order to evaluate which are the salient locations (spatial distribution of the fixations), when they look in those locations to understand the temporal development of the exploration (temporal order of the fixations), and how they move from one location to another with respect to the dynamics of the scene and the mechanics of the eyes (dynamics). State-of-the-art models focus on learning saliency maps from human data, a process that only takes into account the spatial component of the phenomenon and ignores its temporal and dynamical counterparts. In this work we focus on the evaluation methodology of models of human visual attention. We underline the limits of the current metrics for saliency prediction and scanpath similarity, and we introduce a statistical measure for the evaluation of the dynamics of the simulated eye movements. While deep learning models achieve astonishing performance in saliency prediction, our analysis shows their limitations in capturing the dynamics of the process. We find that unsupervised gravitational models, despite their simplicity, outperform all competitors. Finally, exploiting a crowdsourcing platform, we present a study aimed at evaluating how strongly the scanpaths generated with the unsupervised gravitational models appear plausible to naive and expert human observers.




mod

SetRank: Learning a Permutation-Invariant Ranking Model for Information Retrieval. (arXiv:1912.05891v2 [cs.IR] UPDATED)

In learning-to-rank for information retrieval, a ranking model is automatically learned from the data and then utilized to rank the sets of retrieved documents. Therefore, an ideal ranking model would be a mapping from a document set to a permutation on the set, and should satisfy two critical requirements: (1)~it should have the ability to model cross-document interactions so as to capture local context information in a query; (2)~it should be permutation-invariant, which means that any permutation of the inputted documents would not change the output ranking. Previous studies on learning-to-rank either design uni-variate scoring functions that score each document separately, and thus fail to model cross-document interactions, or construct multivariate scoring functions that score documents sequentially, which inevitably sacrifices the permutation-invariance requirement. In this paper, we propose a neural learning-to-rank model called SetRank which directly learns a permutation-invariant ranking model defined on document sets of any size. SetRank employs a stack of (induced) multi-head self attention blocks as its key component for learning the embeddings for all of the retrieved documents jointly. The self-attention mechanism not only helps SetRank capture the local context information from cross-document interactions, but also learn permutation-equivariant representations for the inputted documents, thereby achieving a permutation-invariant ranking model. Experimental results on three large scale benchmarks showed that SetRank significantly outperformed the baselines, including traditional learning-to-rank models and state-of-the-art neural IR models.
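To make the permutation argument concrete, here is a minimal PyTorch sketch of a self-attention set scorer in the spirit of SetRank (our simplification, not the authors' code; the real model uses induced multi-head attention blocks and additional embeddings):

import torch
import torch.nn as nn

class SetScorer(nn.Module):
    # Permutation-equivariant scoring of a document set: self-attention mixes
    # cross-document information, a shared linear head scores each document.
    def __init__(self, dim=128, heads=4, layers=2):
        super().__init__()
        self.blocks = nn.ModuleList(
            [nn.MultiheadAttention(dim, heads, batch_first=True) for _ in range(layers)])
        self.score = nn.Linear(dim, 1)

    def forward(self, doc_emb):              # doc_emb: (batch, n_docs, dim)
        h = doc_emb
        for attn in self.blocks:
            out, _ = attn(h, h, h)           # cross-document interactions
            h = h + out                      # residual connection keeps equivariance
        return self.score(h).squeeze(-1)     # (batch, n_docs); sort to obtain the ranking

scores = SetScorer()(torch.randn(2, 10, 128))  # permuting the docs permutes the scores identically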




mod

Learning Robust Models for e-Commerce Product Search. (arXiv:2005.03624v1 [cs.CL])

Showing items that do not match search query intent degrades customer experience in e-commerce. These mismatches result from counterfactual biases of the ranking algorithms toward noisy behavioral signals such as clicks and purchases in the search logs. Mitigating the problem requires a large labeled dataset, which is expensive and time-consuming to obtain. In this paper, we develop a deep, end-to-end model that learns to effectively classify mismatches and to generate hard mismatched examples to improve the classifier. We train the model end-to-end by introducing a latent variable into the cross-entropy loss that alternates between using the real and generated samples. This not only makes the classifier more robust but also boosts the overall ranking performance. Our model achieves a relative gain over baselines of more than 26% in F-score and more than 17% in area under the PR curve. On live search traffic, our model gains significant improvement in multiple countries.




mod

A Tale of Two Perplexities: Sensitivity of Neural Language Models to Lexical Retrieval Deficits in Dementia of the Alzheimer's Type. (arXiv:2005.03593v1 [cs.CL])

In recent years there has been a burgeoning interest in the use of computational methods to distinguish between elicited speech samples produced by patients with dementia, and those from healthy controls. The difference between perplexity estimates from two neural language models (LMs) - one trained on transcripts of speech produced by healthy participants and the other trained on transcripts from patients with dementia - as a single feature for diagnostic classification of unseen transcripts has been shown to produce state-of-the-art performance. However, little is known about why this approach is effective, and on account of the lack of case/control matching in the most widely-used evaluation set of transcripts (DementiaBank), it is unclear if these approaches are truly diagnostic, or are sensitive to other variables. In this paper, we interrogate neural LMs trained on participants with and without dementia using synthetic narratives previously developed to simulate progressive semantic dementia by manipulating lexical frequency. We find that perplexity of neural LMs is strongly and differentially associated with lexical frequency, and that a mixture model resulting from interpolating control and dementia LMs improves upon the current state-of-the-art for models trained on transcript text exclusively.
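The paired-perplexity feature described above amounts to a few lines of code; in this sketch control_lm and dementia_lm are hypothetical wrappers exposing per-token log-probabilities for a transcript (the actual interface depends on the LM toolkit used):

import math

def perplexity(token_logprobs):
    # Perplexity from per-token natural-log probabilities of one transcript.
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

def paired_perplexity_feature(transcript, control_lm, dementia_lm):
    # control_lm / dementia_lm are assumed to expose token_logprobs(transcript);
    # the single diagnostic feature is the difference of the two perplexities.
    ppl_control = perplexity(control_lm.token_logprobs(transcript))
    ppl_dementia = perplexity(dementia_lm.token_logprobs(transcript))
    return ppl_control - ppl_dementia  # larger => better explained by the dementia LM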




mod

Enhancing Geometric Factors in Model Learning and Inference for Object Detection and Instance Segmentation. (arXiv:2005.03572v1 [cs.CV])

Deep learning-based object detection and instance segmentation have achieved unprecedented progress. In this paper, we propose Complete-IoU (CIoU) loss and Cluster-NMS for enhancing geometric factors in both bounding box regression and Non-Maximum Suppression (NMS), leading to notable gains of average precision (AP) and average recall (AR), without the sacrifice of inference efficiency. In particular, we consider three geometric factors, i.e., overlap area, normalized central point distance and aspect ratio, which are crucial for measuring bounding box regression in object detection and instance segmentation. The three geometric factors are then incorporated into CIoU loss for better distinguishing difficult regression cases. The training of deep models using CIoU loss results in consistent AP and AR improvements in comparison to widely adopted $\ell_n$-norm loss and IoU-based loss. Furthermore, we propose Cluster-NMS, where NMS during inference is done by implicitly clustering detected boxes and usually requires fewer iterations. Cluster-NMS is very efficient due to its pure GPU implementation, and geometric factors can be incorporated to improve both AP and AR. In the experiments, CIoU loss and Cluster-NMS have been applied to state-of-the-art instance segmentation (e.g., YOLACT), and object detection (e.g., YOLO v3, SSD and Faster R-CNN) models. Taking YOLACT on MS COCO as an example, our method achieves performance gains of +1.7 AP and +6.2 AR$_{100}$ for object detection, and +0.9 AP and +3.5 AR$_{100}$ for instance segmentation, with 27.1 FPS on one NVIDIA GTX 1080Ti GPU. All the source code and trained models are available at https://github.com/Zzh-tju/CIoU
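For reference, the CIoU loss for a single box pair can be sketched directly from the published formula (IoU term, normalized center-distance term, aspect-ratio consistency term); this is our minimal numeric re-implementation, not the code from the linked repository:

import math

def ciou_loss(box_pred, box_gt):
    # Boxes in (x1, y1, x2, y2) format, assumed non-degenerate.
    x1, y1, x2, y2 = box_pred
    X1, Y1, X2, Y2 = box_gt
    w, h = x2 - x1, y2 - y1
    W, H = X2 - X1, Y2 - Y1
    inter_w = max(0.0, min(x2, X2) - max(x1, X1))
    inter_h = max(0.0, min(y2, Y2) - max(y1, Y1))
    inter = inter_w * inter_h
    iou = inter / (w * h + W * H - inter)
    # squared center distance, normalized by the diagonal of the enclosing box
    cw = max(x2, X2) - min(x1, X1)
    ch = max(y2, Y2) - min(y1, Y1)
    rho2 = ((x1 + x2 - X1 - X2) ** 2 + (y1 + y2 - Y1 - Y2) ** 2) / 4.0
    c2 = cw ** 2 + ch ** 2
    # aspect-ratio consistency term and its trade-off weight
    v = (4.0 / math.pi ** 2) * (math.atan(W / H) - math.atan(w / h)) ** 2
    alpha = v / (1.0 - iou + v)
    return 1.0 - iou + rho2 / c2 + alpha * v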




mod

MISA: Modality-Invariant and -Specific Representations for Multimodal Sentiment Analysis. (arXiv:2005.03545v1 [cs.CL])

Multimodal Sentiment Analysis is an active area of research that leverages multimodal signals for affective understanding of user-generated videos. The predominant approach, addressing this task, has been to develop sophisticated fusion techniques. However, the heterogeneous nature of the signals creates distributional modality gaps that pose significant challenges. In this paper, we aim to learn effective modality representations to aid the process of fusion. We propose a novel framework, MISA, which projects each modality to two distinct subspaces. The first subspace is modality invariant, where the representations across modalities learn their commonalities and reduce the modality gap. The second subspace is modality-specific, which is private to each modality and captures their characteristic features. These representations provide a holistic view of the multimodal data, which is used for fusion that leads to task predictions. Our experiments on popular sentiment analysis benchmarks, MOSI and MOSEI, demonstrate significant gains over state-of-the-art models. We also consider the task of Multimodal Humor Detection and experiment on the recently proposed UR_FUNNY dataset. Here too, our model fares better than strong baselines, establishing MISA as a useful multimodal framework.




mod

A combination of 'pooling' with a prediction model can reduce by 73% the number of COVID-19 (Corona-virus) tests. (arXiv:2005.03453v1 [cs.LG])

We show that by combining a prediction model (based on neural networks) with a new method of test pooling called 'Grid' (better than the original Dorfman method, and better than double-pooling), we can reduce the number of Covid-19 tests by 73%.




mod

An Experimental Study of Reduced-Voltage Operation in Modern FPGAs for Neural Network Acceleration. (arXiv:2005.03451v1 [cs.LG])

We empirically evaluate an undervolting technique, i.e., underscaling the circuit supply voltage below the nominal level, to improve the power-efficiency of Convolutional Neural Network (CNN) accelerators mapped to Field Programmable Gate Arrays (FPGAs). Undervolting below a safe voltage level can lead to timing faults due to excessive circuit latency increase. We evaluate the reliability-power trade-off for such accelerators. Specifically, we experimentally study the reduced-voltage operation of multiple components of real FPGAs, characterize the corresponding reliability behavior of CNN accelerators, propose techniques to minimize the drawbacks of reduced-voltage operation, and combine undervolting with architectural CNN optimization techniques, i.e., quantization and pruning. We investigate the effect of environmental temperature on the reliability-power trade-off of such accelerators. We perform experiments on three identical samples of modern Xilinx ZCU102 FPGA platforms with five state-of-the-art image classification CNN benchmarks. This approach allows us to study the effects of our undervolting technique for both software and hardware variability. We achieve more than 3X power-efficiency (GOPs/W) gain via undervolting. 2.6X of this gain is the result of eliminating the voltage guardband region, i.e., the safe voltage region below the nominal level that is set by FPGA vendor to ensure correct functionality in worst-case environmental and circuit conditions. 43% of the power-efficiency gain is due to further undervolting below the guardband, which comes at the cost of accuracy loss in the CNN accelerator. We evaluate an effective frequency underscaling technique that prevents this accuracy loss, and find that it reduces the power-efficiency gain from 43% to 25%.




mod

The Perceptimatic English Benchmark for Speech Perception Models. (arXiv:2005.03418v1 [cs.CL])

We present the Perceptimatic English Benchmark, an open experimental benchmark for evaluating quantitative models of speech perception in English. The benchmark consists of ABX stimuli along with the responses of 91 American English-speaking listeners. The stimuli test discrimination of a large number of English and French phonemic contrasts. They are extracted directly from corpora of read speech, making them appropriate for evaluating statistical acoustic models (such as those used in automatic speech recognition) trained on typical speech data sets. We show that phone discrimination is correlated with several types of models, and give recommendations for researchers seeking easily calculated norms of acoustic distance on experimental stimuli. We show that DeepSpeech, a standard English speech recognizer, is more specialized on English phoneme discrimination than English listeners, and is poorly correlated with their behaviour, even though it yields a low error on the decision task given to humans.




mod

Datom: A Deformable modular robot for building self-reconfigurable programmable matter. (arXiv:2005.03402v1 [cs.RO])

Moving a module in a modular robot is a very complex and error-prone process. Unlike in a swarm, in the modular robots we are targeting the moving module must keep a connection to at least one other module. In order to miniaturize each module to a few millimeters, we have proposed a design which uses electrostatic actuators. However, this movement is composed of several attachment and detachment steps, and each small step can fail, causing a module to break the connection. The idea developed in this paper consists in creating a new kind of deformable module that allows a movement which keeps the connection between the moving and the fixed modules. We detail the geometry and the practical constraints during the conception of this new module. We then validate the possibility of movement for a module in an existing configuration. This implies the cooperation of some of the modules placed along the path, and we show in simulation that there exists a motion process to reach every free position on the surface for a given configuration.




mod

Pricing under a multinomial logit model with non linear network effects. (arXiv:2005.03352v1 [cs.GT])

We study the problem of pricing under a Multinomial Logit model where we incorporate network effects over the consumers' decisions. We analyse both the case where sellers compete and the case where they collaborate. In particular, we pay special attention to the overall expected revenue and how the behaviour of the no-purchase option is affected under variations of a network-effect parameter. For example, we prove that the market share of the no-purchase option is decreasing in the value of the network effect, meaning that stronger communication among customers increases the expected amount of sales. We also analyse how the customers' utility is altered when network effects are incorporated into the market, comparing the cases where competitive and monopolistic prices are displayed. We use tools from stochastic approximation algorithms to prove that the probability of purchasing the available products converges to a unique stationary distribution. We model sellers as using this stationary distribution to establish their strategies, and find that under these settings a pure Nash equilibrium characterises the pricing strategies in the case of competition, while an optimal fixed price (one that maximises the total revenue) characterises the case of collaboration.




mod

Adaptive Dialog Policy Learning with Hindsight and User Modeling. (arXiv:2005.03299v1 [cs.AI])

Reinforcement learning methods have been used to compute dialog policies from language-based interaction experiences. Efficiency is of particular importance in dialog policy learning, because of the considerable cost of interacting with people, and the very poor user experience from low-quality conversations. Aiming at improving the efficiency of dialog policy learning, we develop algorithm LHUA (Learning with Hindsight, User modeling, and Adaptation) that, for the first time, enables dialog agents to adaptively learn with hindsight from both simulated and real users. Simulation and hindsight provide the dialog agent with more experience and more (positive) reinforcements respectively. Experimental results suggest that, in success rate and policy quality, LHUA outperforms competitive baselines from the literature, including its no-simulation, no-adaptation, and no-hindsight counterparts.




mod

Expressing Accountability Patterns using Structural Causal Models. (arXiv:2005.03294v1 [cs.SE])

While the exact definition and implementation of accountability depend on the specific context, at its core accountability describes a mechanism that will make decisions transparent and often provides means to sanction "bad" decisions. As such, accountability is specifically relevant for Cyber-Physical Systems, such as robots or drones, that embed themselves into a human society, take decisions and might cause lasting harm. Without a notion of accountability, such systems could behave with impunity and would not fit into society. Despite its relevance, there is currently no agreement on its meaning and, more importantly, no way to express accountability properties for these systems. As a solution we propose to express the accountability properties of systems using Structural Causal Models. They can be represented as human-readable graphical models while also offering mathematical tools to analyze and reason over them. Our central contribution is to show how Structural Causal Models can be used to express and analyze the accountability properties of systems and that this approach allows us to identify accountability patterns. These accountability patterns can be catalogued and used to improve systems and their architectures.




mod

RNN-T Models Fail to Generalize to Out-of-Domain Audio: Causes and Solutions. (arXiv:2005.03271v1 [eess.AS])

In recent years, all-neural end-to-end approaches have obtained state-of-the-art results on several challenging automatic speech recognition (ASR) tasks. However, most existing works focus on building ASR models where train and test data are drawn from the same domain. This results in poor generalization characteristics on mismatched domains: e.g., end-to-end models trained on short segments perform poorly when evaluated on longer utterances. In this work, we analyze the generalization properties of streaming and non-streaming recurrent neural network transducer (RNN-T) based end-to-end models in order to identify model components that negatively affect generalization performance. We propose two solutions: combining multiple regularization techniques during training, and using dynamic overlapping inference. On a long-form YouTube test set, when the non-streaming RNN-T model is trained with shorter segments of data, the proposed combination improves word error rate (WER) from 22.3% to 14.8%; when the streaming RNN-T model is trained on short Search queries, the proposed techniques improve WER on the YouTube set from 67.0% to 25.3%. Finally, when trained on Librispeech, we find that dynamic overlapping inference improves WER on YouTube from 99.8% to 33.0%.




mod

DFSeer: A Visual Analytics Approach to Facilitate Model Selection for Demand Forecasting. (arXiv:2005.03244v1 [cs.HC])

Selecting an appropriate model to forecast product demand is critical to the manufacturing industry. However, due to the data complexity, market uncertainty and users' demanding requirements for the model, it is challenging for demand analysts to select a proper model. Although existing model selection methods can reduce the manual burden to some extent, they often fail to present model performance details on individual products and reveal the potential risk of the selected model. This paper presents DFSeer, an interactive visualization system to conduct reliable model selection for demand forecasting based on the products with similar historical demand. It supports model comparison and selection with different levels of details. Besides, it shows the difference in model performance on similar products to reveal the risk of model selection and increase users' confidence in choosing a forecasting model. Two case studies and interviews with domain experts demonstrate the effectiveness and usability of DFSeer.




mod

Hierarchical Predictive Coding Models in a Deep-Learning Framework. (arXiv:2005.03230v1 [cs.CV])

Bayesian predictive coding is a putative neuromorphic method for acquiring higher-level neural representations to account for sensory input. Although originating in the neuroscience community, there are also efforts in the machine learning community to study these models. This paper reviews some of the more well known models. Our review analyzes module connectivity and patterns of information transfer, seeking to find general principles used across the models. We also survey some recent attempts to cast these models within a deep learning framework. A defining feature of Bayesian predictive coding is that it uses top-down, reconstructive mechanisms to predict incoming sensory inputs or their lower-level representations. Discrepancies between the predicted and the actual inputs, known as prediction errors, then give rise to future learning that refines and improves the predictive accuracy of learned higher-level representations. Predictive coding models intended to describe computations in the neocortex emerged prior to the development of deep learning and used a communication structure between modules that we name the Rao-Ballard protocol. This protocol was derived from a Bayesian generative model with some rather strong statistical assumptions. The RB protocol provides a rubric to assess the fidelity of deep learning models that claim to implement predictive coding.
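To illustrate the basic top-down prediction / bottom-up error exchange, here is a schematic single-layer linear predictive-coding update in the Rao-Ballard spirit (a didactic sketch with our choice of learning rates and prior, not a specific published implementation):

import numpy as np

def predictive_coding_step(x, r, W, lr_r=0.05, lr_W=0.001):
    # x: input vector, r: higher-level representation, W: generative (top-down) weights.
    x_hat = W @ r                    # top-down, reconstructive prediction of the input
    e = x - x_hat                    # prediction error passed "upward"
    r = r + lr_r * (W.T @ e - r)     # refine the representation to reduce the error (with decay prior)
    W = W + lr_W * np.outer(e, r)    # Hebbian-like learning from error and representation
    return r, W, e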




mod

Lattice-based public key encryption with equality test in standard model, revisited. (arXiv:2005.03178v1 [cs.CR])

Public key encryption with equality test (PKEET) allows testing whether two ciphertexts are generated by the same message or not. PKEET is a potential candidate for many practical applications like efficient data management on encrypted databases. The potential applicability of PKEET has led to intensive research since its first instantiation by Yang et al. (CT-RSA 2010). Most of the follow-up constructions are secure in the random oracle model. Moreover, the security of all the concrete constructions is based on number-theoretic hardness assumptions which are vulnerable in the post-quantum era. Recently, Lee et al. (ePrint 2016) proposed a generic construction of PKEET schemes in the standard model and hence it is possible to yield the first instantiation of PKEET schemes based on lattices. Their method is to use a $2$-level hierarchical identity-based encryption (HIBE) scheme together with a one-time signature scheme. In this paper, we propose, for the first time, a direct construction of a PKEET scheme based on the hardness assumption of lattices in the standard model. More specifically, the security of the proposed scheme reduces to the hardness of the Learning With Errors problem.




mod

Nonlinear model reduction: a comparison between POD-Galerkin and POD-DEIM methods. (arXiv:2005.03173v1 [physics.comp-ph])

Several nonlinear model reduction techniques are compared for the three cases of the non-parallel version of the Kuramoto-Sivashinsky equation, the transient regime of flow past a cylinder at $Re=100$ and fully developed flow past a cylinder at the same Reynolds number. The linear terms of the governing equations are reduced by Galerkin projection onto a POD basis of the flow state, while the reduced nonlinear convection terms are obtained either by a Galerkin projection onto the same state basis, by a Galerkin projection onto a POD basis representing the nonlinearities or by applying the Discrete Empirical Interpolation Method (DEIM) to a POD basis of the nonlinearities. The quality of the reduced order models is assessed as to their stability, accuracy and robustness, and appropriate quantitative measures are introduced and compared. In particular, the properties of the reduced linear terms are compared to those of the full-scale terms, and the structure of the nonlinear quadratic terms is analyzed as to the conservation of kinetic energy. It is shown that all three reduction techniques provide excellent and similar results for the cases of the Kuramoto-Sivashinsky equation and the limit-cycle cylinder flow. For the case of the transient regime of flow past a cylinder, only the pure Galerkin techniques are successful, while the DEIM technique produces reduced-order models that diverge in finite time.
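The POD/Galerkin step that all three techniques share can be sketched as follows (a generic snapshot-SVD reduction of a linear operator; the nonlinear convection terms are what the three compared methods handle differently):

import numpy as np

def pod_basis(snapshots, r):
    # POD modes via thin SVD of a snapshot matrix of shape (n_dof, n_snapshots).
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r]                  # first r left singular vectors

def galerkin_reduce_linear(A, Phi):
    # Galerkin projection of a linear operator A onto the POD subspace spanned by Phi.
    return Phi.T @ A @ Phi           # reduced r x r operator

# usage sketch: with full state u(t) ~ Phi @ a(t),
# Phi = pod_basis(snapshots, r=20); A_r = galerkin_reduce_linear(A, Phi)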




mod

Unsupervised Multimodal Neural Machine Translation with Pseudo Visual Pivoting. (arXiv:2005.03119v1 [cs.CL])

Unsupervised machine translation (MT) has recently achieved impressive results with monolingual corpora only. However, it is still challenging to associate source-target sentences in the latent space. As people who speak different languages biologically share similar visual systems, the potential of achieving better alignment through visual content is promising yet under-explored in unsupervised multimodal MT (MMT). In this paper, we investigate how to utilize visual content for disambiguation and promoting latent space alignment in unsupervised MMT. Our model employs multimodal back-translation and features pseudo visual pivoting in which we learn a shared multilingual visual-semantic embedding space and incorporate visually-pivoted captioning as additional weak supervision. The experimental results on the widely used Multi30K dataset show that the proposed model significantly improves over the state-of-the-art methods and generalizes well when the images are not available at the testing time.




mod

Exploratory Analysis of Covid-19 Tweets using Topic Modeling, UMAP, and DiGraphs. (arXiv:2005.03082v1 [cs.SI])

This paper illustrates five different techniques to assess the distinctiveness of topics, key terms and features, speed of information dissemination, and network behaviors for Covid19 tweets. First, we use pattern matching and second, topic modeling through Latent Dirichlet Allocation (LDA) to generate twenty different topics that discuss case spread, healthcare workers, and personal protective equipment (PPE). One topic specific to U.S. cases would start to uptick immediately after live White House Coronavirus Task Force briefings, implying that many Twitter users are paying attention to government announcements. We contribute machine learning methods not previously reported in the Covid19 Twitter literature. This includes our third method, Uniform Manifold Approximation and Projection (UMAP), that identifies unique clustering-behavior of distinct topics to improve our understanding of important themes in the corpus and help assess the quality of generated topics. Fourth, we calculated retweeting times to understand how fast information about Covid19 propagates on Twitter. Our analysis indicates that the median retweeting time of Covid19 for a sample corpus in March 2020 was 2.87 hours, approximately 50 minutes faster than repostings from Chinese social media about H7N9 in March 2013. Lastly, we sought to understand retweet cascades, by visualizing the connections of users over time from fast to slow retweeting. As the time to retweet increases, the density of connections also increase where in our sample, we found distinct users dominating the attention of Covid19 retweeters. One of the simplest highlights of this analysis is that early-stage descriptive methods like regular expressions can successfully identify high-level themes which were consistently verified as important through every subsequent analysis.
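The LDA step can be reproduced with standard tooling; the sketch below uses scikit-learn (version 1.0 or later assumed for get_feature_names_out), with illustrative preprocessing parameters rather than the ones used in the paper:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def fit_lda_topics(tweets, n_topics=20, n_top_words=10):
    # Fit LDA on a list of tweet strings and return the top words of each topic.
    vec = CountVectorizer(stop_words="english", max_df=0.95, min_df=5)
    dtm = vec.fit_transform(tweets)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    lda.fit(dtm)
    vocab = vec.get_feature_names_out()
    return [[vocab[i] for i in comp.argsort()[-n_top_words:][::-1]]
            for comp in lda.components_]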




mod

Guided Policy Search Model-based Reinforcement Learning for Urban Autonomous Driving. (arXiv:2005.03076v1 [cs.RO])

In this paper, we continue our prior work on using imitation learning (IL) and model-free reinforcement learning (RL) to learn driving policies for autonomous driving in urban scenarios, by introducing a model-based RL method to drive the autonomous vehicle in the Carla urban driving simulator. Although IL and model-free RL methods have been proven capable of solving many challenging tasks, including playing video games, robotics, and, in our prior work, urban driving, the low sample efficiency of such methods greatly limits their application to actual autonomous driving. In this work, we developed a model-based RL algorithm of guided policy search (GPS) for urban driving tasks. The algorithm iteratively learns a parameterized dynamic model to approximate the complex and interactive driving task, and optimizes the driving policy under the nonlinear approximate dynamic model. As a model-based RL approach, when applied in urban autonomous driving, GPS has the advantages of higher sample efficiency, better interpretability, and greater stability. We provide extensive experiments validating the effectiveness of the proposed method to learn robust driving policies for urban driving in Carla. We also compare the proposed method with other policy search and model-free RL baselines, showing 100x better sample efficiency of the GPS-based RL method, and also that the GPS-based method can learn policies for harder tasks that the baseline methods can hardly learn.




mod

Categorical Vector Space Semantics for Lambek Calculus with a Relevant Modality. (arXiv:2005.03074v1 [cs.CL])

We develop a categorical compositional distributional semantics for Lambek Calculus with a Relevant Modality !L*, which has a limited edition of the contraction and permutation rules. The categorical part of the semantics is a monoidal biclosed category with a coalgebra modality, very similar to the structure of a Differential Category. We instantiate this category to finite dimensional vector spaces and linear maps via "quantisation" functors and work with three concrete interpretations of the coalgebra modality. We apply the model to construct categorical and concrete semantic interpretations for the motivating example of !L*: the derivation of a phrase with a parasitic gap. The effectiveness of the concrete interpretations is evaluated via a disambiguation task, on an extension of a sentence disambiguation dataset to parasitic gap phrases, using BERT, Word2Vec, and FastText vectors and Relational tensors.




mod

Extracting Headless MWEs from Dependency Parse Trees: Parsing, Tagging, and Joint Modeling Approaches. (arXiv:2005.03035v1 [cs.CL])

An interesting and frequent type of multi-word expression (MWE) is the headless MWE, for which there are no true internal syntactic dominance relations; examples include many named entities ("Wells Fargo") and dates ("July 5, 2020") as well as certain productive constructions ("blow for blow", "day after day"). Despite their special status and prevalence, current dependency-annotation schemes require treating such flat structures as if they had internal syntactic heads, and most current parsers handle them in the same fashion as headed constructions. Meanwhile, outside the context of parsing, taggers are typically used for identifying MWEs, but taggers might benefit from structural information. We empirically compare these two common strategies--parsing and tagging--for predicting flat MWEs. Additionally, we propose an efficient joint decoding algorithm that combines scores from both strategies. Experimental results on the MWE-Aware English Dependency Corpus and on six non-English dependency treebanks with frequent flat structures show that: (1) tagging is more accurate than parsing for identifying flat-structure MWEs, (2) our joint decoder reconciles the two different views and, for non-BERT features, leads to higher accuracies, and (3) most of the gains result from feature sharing between the parsers and taggers.




mod

10 Modern Web Design Trends for 2020

Web design is responsible for nearly 95% of a visitor’s first impression of your business. That’s why it’s more important than ever to incorporate modern web design into your marketing strategy. But what modern web design trends are on the horizon for 2020 — and how can you use them to freshen up your site? […]





mod

Trump administration models predict near doubling of daily death toll by June

By The New York Times The New York Times Company As President Donald Trump presses for states to reopen their economies, his administration is privately projecting a steady rise in the number of cases and deaths from the coronavirus over the next several weeks, reaching about 3,000 daily deaths June 1, according to an internal document obtained by The New York Times, nearly double from the current level of about 1,750.…




mod

REVIEW: The Commodores' funky, fun night at Northern Quest

One great thing about seeing "oldies" acts on tour is the vivid reminder you get that groups in the old days really knew how to serve their fans. Take the Commodores, for example, a group with a 52-year history that swung by Northern Quest Resort & Casino Thursday night.…




mod

Best match processing mode of decision tables

An input combination of at least one condition value to be evaluated against at least one rule of a decision table is received. The at least one rule includes at least one condition and the rule is associated with a result. The at least one rule is evaluated against the input combination to determine conditions fulfilled for the at least one condition value. In one aspect, a rule from the at least one rule that best matches the input combination is determined and a result associated with the rule that best matches the input combination is outputted.
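A minimal illustration of the best-match processing mode (a generic sketch of the idea, not the claimed implementation): each rule is scored by the number of its conditions fulfilled by the input combination, and the result of the highest-scoring rule is output.

def best_match(rules, inputs):
    # rules: list of {"conditions": {name: value, ...}, "result": ...}
    # inputs: {condition_name: condition_value, ...}
    best_rule, best_score = None, -1
    for rule in rules:
        score = sum(1 for cond, value in rule["conditions"].items()
                    if inputs.get(cond) == value)      # fulfilled conditions
        if score > best_score:
            best_rule, best_score = rule, score
    return best_rule["result"] if best_rule else None

# usage sketch (hypothetical rules)
rules = [
    {"conditions": {"region": "EU", "tier": "gold"}, "result": "discount_15"},
    {"conditions": {"region": "EU"},                 "result": "discount_5"},
]
print(best_match(rules, {"region": "EU", "tier": "gold"}))   # -> discount_15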




mod

Techniques for evaluation, building and/or retraining of a classification model

Techniques for evaluation and/or retraining of a classification model built using labeled training data. In some aspects, a classification model having a first set of weights is retrained by using unlabeled input to reweight the labeled training data to have a second set of weights, and by retraining the classification model using the labeled training data weighted according to the second set of weights. In some aspects, a classification model is evaluated by building a similarity model that represents similarities between unlabeled input and the labeled training data and using the similarity model to evaluate the labeled training data to identify a subset of the plurality of items of labeled training data that is more similar to the unlabeled input than a remainder of the labeled training data.
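The reweighting idea can be sketched as follows: labeled examples receive a second set of weights proportional to their similarity to the unlabeled input, and the classifier is retrained with those weights (the RBF kernel and normalization here are our illustrative choices, not those of the described techniques).

import numpy as np

def similarity_weights(X_labeled, X_unlabeled, gamma=1.0):
    # Average RBF similarity of each labeled example to the unlabeled input,
    # normalized so the mean weight is 1.
    d2 = ((X_labeled[:, None, :] - X_unlabeled[None, :, :]) ** 2).sum(-1)
    w = np.exp(-gamma * d2).mean(axis=1)
    return w / w.sum() * len(w)

# usage sketch with any scikit-learn classifier supporting sample_weight:
# w = similarity_weights(X_train, X_unlabeled)
# clf.fit(X_train, y_train, sample_weight=w)   # retrain with the second set of weights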