
10 Cool & Free Mobile Wallpapers

Guys, great news! Our friends at Freepik have released, exclusively for s2o readers, 10 Cool & Free Mobile Wallpapers in several awesome styles. They come as AI, EPS, and JPG files. The wallpapers are easily resizable for any kind of mobile —or any other project ;)— so you can adapt them in no time …






Building Your Website All Alone

Whether you have created a brand new company, or you’ve been around for a long time, if you do not already have a website, you are going to have to put one up as soon as humanly possible. According to the website Mashable, online shopping accounted for $231 billion in sales in 2012. This means …







Brighten Up Someone’s May (2020 Wallpapers Edition)

May is here! And even though the current situation makes this a different kind of May, with a new routine and different things on our minds than in the years before, luckily some things never change. Like the fact that we start the new month with some fresh inspiration. For more than nine years now, we have challenged you, the design community, to get creative and produce wallpaper designs for our monthly posts.





Photography Life makes all their paid premium courses free

Photography Life has just contributed to the selection of online courses that you can take for free. While their premium courses normally cost $150 each, you can now access them free of charge. The founders have released them on YouTube, available for everyone to watch. The Photography Life team came to the decision […]








ISOTYPE Book: Young, Prager, There’s Work for All

This book from 1945 contains a very interesting mix of different charts made by the ISOTYPE Institute, some classic and some quite unusual. As a book about labor and unemployment, it also makes extensive use of Gerd Arntz’s famous unemployed man icon. Michael Young and Theodor Prager’s There’s Work for All is part of a […]





Non-associative Frobenius algebras for simply laced Chevalley groups. (arXiv:2005.02625v1 [math.RA] CROSS LISTED)

We provide an explicit construction for a class of commutative, non-associative algebras for each of the simple Chevalley groups of simply laced type. Moreover, we equip these algebras with an associating bilinear form, which turns them into Frobenius algebras. This class includes a 3876-dimensional algebra on which the Chevalley group of type E8 acts by automorphisms. We also prove that these algebras admit the structure of (axial) decomposition algebras.
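For readers unfamiliar with the term, an "associating" bilinear form is the standard compatibility condition that makes an algebra Frobenius; this is the textbook definition, not notation taken from the paper itself:
\[
  f(x \cdot y,\, z) \;=\; f(x,\, y \cdot z) \qquad \text{for all } x, y, z \text{ in the algebra.}
\]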





Resonances as Viscosity Limits for Exponentially Decaying Potentials. (arXiv:2005.01257v2 [math.SP] UPDATED)

We show that the complex absorbing potential (CAP) method for computing scattering resonances applies to the case of exponentially decaying potentials. That means that the eigenvalues of $-\Delta + V - i\epsilon x^2$, $|V(x)|\leq e^{-2\gamma |x|}$, converge, as $\epsilon \to 0+$, to the poles of the meromorphic continuation of $(-\Delta + V - \lambda^2)^{-1}$ uniformly on compact subsets of $\textrm{Re}\,\lambda > 0$, $\textrm{Im}\,\lambda > -\gamma$, $\arg\lambda > \pi/8$.





Convergent normal forms for five dimensional totally nondegenerate CR manifolds in C^4. (arXiv:2004.11251v2 [math.CV] UPDATED)

Applying the equivariant moving frames method, we construct convergent normal forms for real-analytic 5-dimensional totally nondegenerate CR submanifolds of C^4. These CR manifolds are divided into several biholomorphically inequivalent subclasses, each of which has its own complete normal form. Moreover, it is shown that, up to biholomorphism, Beloshapka's cubic model is the unique member of this class whose algebra of infinitesimal CR automorphisms attains the maximum possible dimension, seven. Our results are also useful in the study of the biholomorphic equivalence problem between the CR manifolds in question.





Locally equivalent Floer complexes and unoriented link cobordisms. (arXiv:1911.03659v4 [math.GT] UPDATED)

We show that the local equivalence class of the collapsed link Floer complex $cCFL^\infty(L)$, together with many $\Upsilon$-type invariants extracted from this group, is a concordance invariant of links. In particular, we define a version of the invariants $\Upsilon_L(t)$ and $\nu^+(L)$ when $L$ is a link and we prove that they give a lower bound for the slice genus $g_4(L)$. Furthermore, in the last section of the paper we study the homology group $HFL'(L)$ and its behaviour under unoriented cobordisms. We obtain that a normalized version of the $\upsilon$-set, introduced by Ozsváth, Stipsicz and Szabó, produces a lower bound for the 4-dimensional smooth crosscap number $\gamma_4(L)$.





Decentralized and Parallelized Primal and Dual Accelerated Methods for Stochastic Convex Programming Problems. (arXiv:1904.09015v10 [math.OC] UPDATED)

We introduce primal and dual stochastic gradient oracle methods for decentralized convex optimization problems. For both primal and dual oracles, the proposed methods are optimal in terms of the number of communication steps. However, for all classes of objectives, optimality in terms of the number of oracle calls per node, within the class of methods with an optimal number of communication steps, holds only up to a logarithmic factor and up to the notion of smoothness. By using a mini-batching technique, we show that all proposed methods with a stochastic oracle can be additionally parallelized at each node.





On $p$-groups with automorphism groups related to the exceptional Chevalley groups. (arXiv:1810.08365v3 [math.GR] UPDATED)

Let $\hat G$ be the finite simply connected version of an exceptional Chevalley group, and let $V$ be a nontrivial irreducible module, of minimal dimension, for $\hat G$ over its field of definition. We explore the overgroup structure of $\hat G$ in $\mathrm{GL}(V)$, and the submodule structure of the exterior square (and sometimes the third Lie power) of $V$. When $\hat G$ is defined over a field of odd prime order $p$, this allows us to construct the smallest (with respect to certain properties) $p$-groups $P$ such that the group induced by $\mathrm{Aut}(P)$ on $P/\Phi(P)$ is either $\hat G$ or its normaliser in $\mathrm{GL}(V)$.





On Harmonic and Asymptotically harmonic Finsler manifolds. (arXiv:2005.03616v1 [math.DG])

In this paper we introduce various types of harmonic Finsler manifolds and study the relations between them. We give several characterizations of such spaces in terms of the mean curvature and the Laplacian. In addition, we prove that some harmonic Finsler manifolds are of Einstein type, and we give a technique to construct harmonic Finsler manifolds of Randers type. Moreover, we provide many examples of non-Riemannian harmonic Finsler manifolds of constant flag curvature and constant $S$-curvature. Finally, we analyze Busemann functions in a general Finsler setting and in a certain kind of harmonic Finsler manifold, namely asymptotically harmonic Finsler manifolds, along with some applications. In particular, we show that the Busemann function is smooth in asymptotically harmonic Finsler manifolds and that the total Busemann function is continuous in the $C^{\infty}$ topology.





Special subvarieties of non-arithmetic ball quotients and Hodge Theory. (arXiv:2005.03524v1 [math.AG])

Let $\Gamma \subset \operatorname{PU}(1,n)$ be a lattice, and $S_\Gamma$ the associated ball quotient. We prove that, if $S_\Gamma$ contains infinitely many maximal totally geodesic subvarieties, then $\Gamma$ is arithmetic. We also prove an Ax-Schanuel Conjecture for $S_\Gamma$, similar to the one recently proven by Mok, Pila and Tsimerman. One of the main ingredients in the proofs is to realise $S_\Gamma$ inside a period domain for polarised integral variations of Hodge structures and interpret totally geodesic subvarieties as unlikely intersections.





The formation of trapped surfaces in the gravitational collapse of spherically symmetric scalar fields with a positive cosmological constant. (arXiv:2005.03434v1 [gr-qc])

Given spherically symmetric characteristic initial data for the Einstein-scalar field system with a positive cosmological constant, we provide a criterion, in terms of the dimensionless size and dimensionless renormalized mass content of an annular region of the data, for the formation of a future trapped surface. This corresponds to an extension of Christodoulou's classical criterion by the inclusion of the cosmological term.





A Note on Cores and Quasi Relative Interiors in Partially Finite Convex Programming. (arXiv:2005.03265v1 [math.FA])

The problem of minimizing an entropy functional subject to linear constraints is a useful example of partially finite convex programming. In the 1990s, Borwein and Lewis provided broad and easy-to-verify conditions that guarantee strong duality for such problems. Their approach is to construct a function in the quasi-relative interior of the relevant infinite-dimensional set, which assures the existence of a point in the core of the relevant finite-dimensional set. We revisit this problem, and provide an alternative proof by directly appealing to the definition of the core, rather than by relying on any properties of the quasi-relative interior. Our approach admits a minor relaxation of the linear independence requirements in Borwein and Lewis' framework, which allows us to work with certain piecewise-defined moment functions precluded by their conditions. We provide a computed example that illustrates how this relaxation may be used to tame the Gibbs phenomenon observed when the underlying data are discontinuous. The relaxation illustrates the understanding we may gain by tackling partially-finite problems from both the finite-dimensional and infinite-dimensional sides. The comparison of these two approaches is informative, as both proofs are constructive.
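As a point of reference, the prototypical problem in this setting (in the Borwein-Lewis partially finite formulation; the specific symbols below are illustrative, not taken from the paper) minimizes an integral entropy functional subject to finitely many linear moment constraints:
\[
  \min_{x \in L_1[a,b]} \; \int_a^b \phi\big(x(t)\big)\,dt
  \quad \text{subject to} \quad
  \int_a^b a_i(t)\,x(t)\,dt = b_i, \qquad i = 1, \dots, n,
\]
where $\phi$ is a convex integrand (for instance the Boltzmann-Shannon entropy $\phi(u) = u\log u - u$) and the $a_i$ are given moment functions.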





Games Where You Can Play Optimally with Arena-Independent Finite Memory. (arXiv:2001.03894v2 [cs.GT] UPDATED)

For decades, two-player (antagonistic) games on graphs have been a framework of choice for many important problems in theoretical computer science. A notorious one is controller synthesis, which can be rephrased through the game-theoretic metaphor as the quest for a winning strategy of the system in a game against its antagonistic environment. Depending on the specification, optimal strategies might be simple or quite complex, for example having to use (possibly infinite) memory. Hence, research strives to understand which settings allow for simple strategies.

In 2005, Gimbert and Zielonka provided a complete characterization of preference relations (a formal framework to model specifications and game objectives) that admit memoryless optimal strategies for both players. In the last fifteen years however, practical applications have driven the community toward games with complex or multiple objectives, where memory -- finite or infinite -- is almost always required. Despite much effort, the exact frontiers of the class of preference relations that admit finite-memory optimal strategies still elude us.

In this work, we establish a complete characterization of preference relations that admit optimal strategies using arena-independent finite memory, generalizing the work of Gimbert and Zielonka to the finite-memory case. We also prove an equivalent to their celebrated corollary of great practical interest: if both players have optimal (arena-independent-)finite-memory strategies in all one-player games, then it is also the case in all two-player games. Finally, we pinpoint the boundaries of our results with regard to the literature: our work completely covers the case of arena-independent memory (e.g., multiple parity objectives, lower- and upper-bounded energy objectives), and paves the way to the arena-dependent case (e.g., multiple lower-bounded energy objectives).





IPG-Net: Image Pyramid Guidance Network for Small Object Detection. (arXiv:1912.00632v3 [cs.CV] UPDATED)

For Convolutional Neural Network-based object detection, there is a typical dilemma: the spatial information is well kept in the shallow layers, which unfortunately do not have enough semantic information, while the deep layers have rich semantic information but have lost a lot of spatial information, resulting in a serious information imbalance. To acquire enough semantic information for the shallow layers, Feature Pyramid Networks (FPN) are used to build a top-down propagated path. In this paper, beyond the top-down combination of information for shallow layers, we propose a novel network called the Image Pyramid Guidance Network (IPG-Net) to make sure both the spatial information and the semantic information are abundant for each layer. Our IPG-Net has two main parts: the image pyramid guidance transformation module and the image pyramid guidance fusion module. Our main idea is to introduce image pyramid guidance into the backbone stream to solve the information imbalance problem, which alleviates the vanishing of small-object features. The IPG transformation module ensures that, even in the deepest stage of the backbone, there is enough spatial information for bounding box regression and classification. Furthermore, we design an effective fusion module to fuse the features from the image pyramid with the features from the backbone stream. We apply this novel network to both one-stage and two-stage detection models, and state-of-the-art results are obtained on the most popular benchmark datasets, i.e. MS COCO and Pascal VOC.





Digital Twin: Enabling Technologies, Challenges and Open Research. (arXiv:1911.01276v3 [cs.CY] UPDATED)

Digital Twin technology is an emerging concept that has become the centre of attention for industry and, in more recent years, academia. The advancements in Industry 4.0 concepts have facilitated its growth, particularly in the manufacturing industry. The Digital Twin is defined extensively but is best described as the effortless integration of data between a physical and virtual machine in either direction. The challenges, applications, and enabling technologies for Artificial Intelligence, the Internet of Things (IoT) and Digital Twins are presented. A categorical review of recent publications relating to Digital Twins is performed, organised by research area: manufacturing, healthcare and smart cities, discussing a range of papers that reflect these areas and the current state of research. The paper provides an assessment of the enabling technologies, challenges and open research for Digital Twins.





A Shift Selection Strategy for Parallel Shift-Invert Spectrum Slicing in Symmetric Self-Consistent Eigenvalue Computation. (arXiv:1908.06043v2 [math.NA] UPDATED)

The central importance of large scale eigenvalue problems in scientific computation necessitates the development of massively parallel algorithms for their solution. Recent advances in dense numerical linear algebra have enabled the routine treatment of eigenvalue problems with dimensions on the order of hundreds of thousands on the world's largest supercomputers. In cases where dense treatments are not feasible, Krylov subspace methods offer an attractive alternative due to the fact that they do not require storage of the problem matrices. However, demonstration of scalability of either of these classes of eigenvalue algorithms on computing architectures capable of expressing massive parallelism is non-trivial due to communication requirements and serial bottlenecks, respectively. In this work, we introduce the SISLICE method: a parallel shift-invert algorithm for the solution of the symmetric self-consistent field (SCF) eigenvalue problem. The SISLICE method drastically reduces the communication requirement of current parallel shift-invert eigenvalue algorithms through various shift selection and migration techniques based on density of states estimation and k-means clustering, respectively. This work demonstrates the robustness and parallel performance of the SISLICE method on a representative set of SCF eigenvalue problems and outlines research directions which will be explored in future work.





Keeping out the Masses: Understanding the Popularity and Implications of Internet Paywalls. (arXiv:1903.01406v4 [cs.CY] UPDATED)

Funding the production of quality online content is a pressing problem for content producers. The most common funding method, online advertising, is rife with well-known performance and privacy harms, and an intractable subject-agent conflict: many users do not want to see advertisements, depriving the site of needed funding.

Because of these negative aspects of advertisement-based funding, paywalls are an increasingly popular alternative for websites. This shift to a "pay-for-access" web has potentially huge implications for the web and society. Instead of a system where information (nominally) flows freely, paywalls create a web where high quality information is available to fewer and fewer people, leaving the rest of the web's users with less information, which might also be less accurate and of lower quality. Despite the potential significance of a move from an "advertising-but-open" web to a "paywalled" web, we find this issue understudied.

This work addresses this gap in our understanding by measuring how widely paywalls have been adopted, what kinds of sites use paywalls, and the distribution of policies enforced by paywalls. A partial list of our findings includes that (i) paywall use is accelerating (2x more paywalls every 6 months), (ii) paywall adoption differs by country (e.g., 18.75% in the US, 12.69% in Australia), (iii) paywalls change how users interact with sites (e.g., higher bounce rates, fewer incoming links), (iv) the median cost of annual paywall access is $108 per site, and (v) paywalls are in general trivial to circumvent.

Finally, we present the design of a novel, automated system for detecting whether a site uses a paywall, through the combination of runtime browser instrumentation and repeated programmatic interactions with the site. We intend this classifier to augment future, longitudinal measurements of paywall use and behavior.





Machine learning topological phases in real space. (arXiv:1901.01963v4 [cond-mat.mes-hall] UPDATED)

We develop a supervised machine learning algorithm that is able to learn topological phases for finite condensed matter systems from bulk data in real lattice space. The algorithm employs diagonalization in real space together with any supervised learning algorithm to learn topological phases through an eigenvector ensembling procedure. We combine our algorithm with decision trees and random forests to successfully recover topological phase diagrams of Su-Schrieffer-Heeger (SSH) models from bulk lattice data in real space and show how the Shannon information entropy of ensembles of lattice eigenvectors can be used to retrieve a signal detailing how topological information is distributed in the bulk. The discovery of Shannon information entropy signals associated with topological phase transitions from the analysis of data from several thousand SSH systems illustrates how model explainability in machine learning can advance the research of exotic quantum materials with properties that may power future technological applications such as qubit engineering for quantum computing.





Performance of the smallest-variance-first rule in appointment sequencing. (arXiv:1812.01467v4 [math.PR] UPDATED)

A classical problem in appointment scheduling, with applications in health care, concerns the determination of the patients' arrival times that minimize a cost function that is a weighted sum of mean waiting times and mean idle times. One aspect of this problem is the sequencing problem, which focuses on ordering the patients. We assess the performance of the smallest-variance-first (SVF) rule, which sequences patients in order of increasing variance of their service durations. While it was known that SVF is not always optimal, it has been widely observed that it performs well in practice and simulation. We provide a theoretical justification for this observation by proving, in various settings, quantitative worst-case bounds on the ratio between the cost incurred by the SVF rule and the minimum attainable cost. We also show that, in great generality, SVF is asymptotically optimal, i.e., the ratio approaches 1 as the number of patients grows large. While evaluating policies by considering an approximation ratio is a standard approach in many algorithmic settings, our results appear to be the first of this type in the appointment scheduling literature.
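To make the sequencing rule concrete, here is a minimal, hypothetical sketch (not code from the paper) that orders patients by increasing variance of their service durations:

    #include <algorithm>
    #include <iostream>
    #include <string>
    #include <vector>

    // A patient with an assumed-known variance of their service duration.
    struct Patient {
        std::string name;
        double serviceVariance;
    };

    // Smallest-variance-first (SVF): sequence patients by increasing variance.
    std::vector<Patient> svfOrder(std::vector<Patient> patients) {
        std::sort(patients.begin(), patients.end(),
                  [](const Patient& a, const Patient& b) {
                      return a.serviceVariance < b.serviceVariance;
                  });
        return patients;
    }

    int main() {
        std::vector<Patient> patients = {{"A", 4.0}, {"B", 0.5}, {"C", 2.0}};
        for (const Patient& p : svfOrder(patients))
            std::cout << p.name << " (variance " << p.serviceVariance << ")\n";
        // Prints B, C, A: the patient with the least variable service time goes first.
    }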





On Exposure Bias, Hallucination and Domain Shift in Neural Machine Translation. (arXiv:2005.03642v1 [cs.CL])

The standard training algorithm in neural machine translation (NMT) suffers from exposure bias, and alternative algorithms have been proposed to mitigate this. However, the practical impact of exposure bias is under debate. In this paper, we link exposure bias to another well-known problem in NMT, namely the tendency to generate hallucinations under domain shift. In experiments on three datasets with multiple test domains, we show that exposure bias is partially to blame for hallucinations, and that training with Minimum Risk Training, which avoids exposure bias, can mitigate this. Our analysis explains why exposure bias is more problematic under domain shift, and also links exposure bias to the beam search problem, i.e. performance deterioration with increasing beam size. Our results provide a new justification for methods that reduce exposure bias: even if they do not increase performance on in-domain test sets, they can increase model robustness to domain shift.





Multi-task Learning with Alignment Loss for Far-field Small-Footprint Keyword Spotting. (arXiv:2005.03633v1 [eess.AS])

In this paper, we focus on the task of small-footprint keyword spotting under the far-field scenario. Far-field environments are commonly encountered in real-life speech applications, and they cause severe degradation of performance due to room reverberation and various kinds of noise. Our baseline system is built on a convolutional neural network trained with pooled data of both far-field and close-talking speech. To cope with the distortions, we adopt a multi-task learning scheme with an alignment loss to reduce the mismatch between the embedding features learned from different domains of data. Experimental results show that our proposed method maintains the performance on close-talking speech and achieves significant improvement on the far-field test set.





COVID-19 Contact-tracing Apps: A Survey on the Global Deployment and Challenges. (arXiv:2005.03599v1 [cs.CR])

In response to the coronavirus disease (COVID-19) outbreak, there is an ever-increasing number of national governments that are rolling out contact-tracing Apps to aid the containment of the virus. The first hugely contentious issue facing the Apps is the deployment framework, i.e. centralised or decentralised. Based on this, the debate branches out to the corresponding technologies that underpin these architectures, i.e. GPS, QR codes, and Bluetooth. This work conducts a pioneering review of the above scenarios and contributes a geolocation mapping of the current deployment. The vulnerabilities and the directions of research are identified, with a special focus on the Bluetooth-based decentralised scheme.





A Local Spectral Exterior Calculus for the Sphere and Application to the Shallow Water Equations. (arXiv:2005.03598v1 [math.NA])

We introduce $\Psi\mathrm{ec}$, a local spectral exterior calculus for the two-sphere $S^2$. $\Psi\mathrm{ec}$ provides a discretization of Cartan's exterior calculus on $S^2$ formed by spherical differential $r$-form wavelets. These are well localized in space and frequency and provide (Stevenson) frames for the homogeneous Sobolev spaces $\dot{H}^{-r+1}(\Omega_{\nu}^{r}, S^2)$ of differential $r$-forms. At the same time, they satisfy important properties of the exterior calculus, such as the de Rham complex and the Hodge-Helmholtz decomposition. Through this, $\Psi\mathrm{ec}$ is tailored towards structure-preserving discretizations that can adapt to solutions with varying regularity. The construction of $\Psi\mathrm{ec}$ is based on a novel spherical wavelet frame for $L_2(S^2)$ that we obtain by introducing scalable reproducing kernel frames. These extend scalable frames to weighted sampling expansions and provide an alternative to quadrature rules for the discretization of needlet-like scale-discrete wavelets. We verify the practicality of $\Psi\mathrm{ec}$ for numerical computations using the rotating shallow water equations. Our numerical results demonstrate that a $\Psi\mathrm{ec}$-based discretization of the equations attains accuracy comparable to that of spectral methods while using a representation that is well localized in space and frequency.
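For context, the Hodge-Helmholtz decomposition referred to here is the standard splitting of a vector field (equivalently, a 1-form) on the sphere into rotation-free and divergence-free parts; in its classical vector-calculus form (generic notation, not the paper's):
\[
  \mathbf{u} \;=\; \nabla \phi + \nabla^{\perp} \psi,
\]
where $\phi$ is a scalar potential and $\psi$ a stream function; no harmonic component arises on $S^2$ because its first de Rham cohomology vanishes.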





Subtle Sensing: Detecting Differences in the Flexibility of Virtually Simulated Molecular Objects. (arXiv:2005.03503v1 [cs.HC])

During the VR demos we have performed over the last few years, many participants (in the absence of any haptic feedback) have commented on their perceived ability to 'feel' differences between simulated molecular objects. The mechanisms for such 'feeling' are not entirely clear: observing from outside VR, one can see that there is nothing physical for participants to 'feel'. Here we outline exploratory user studies designed to evaluate the extent to which participants can distinguish quantitative differences in the flexibility of VR-simulated molecular objects. The results suggest that an individual's capacity to detect differences in molecular flexibility is enhanced when they can interact with and manipulate the molecules, as opposed to merely observing the same interaction. Building on these results, we intend to carry out further studies investigating humans' ability to sense quantitative properties of VR simulations without haptic technology.





NTIRE 2020 Challenge on NonHomogeneous Dehazing. (arXiv:2005.03457v1 [cs.CV])

This paper reviews the NTIRE 2020 Challenge on NonHomogeneous Dehazing of images (restoration of rich details in hazy images). We focus on the proposed solutions and their results evaluated on NH-Haze, a novel dataset consisting of 55 pairs of real haze-free and nonhomogeneous hazy images recorded outdoors. NH-Haze is the first realistic nonhomogeneous haze dataset that provides ground-truth images. The nonhomogeneous haze was produced using a professional haze generator that imitates the real conditions of hazy scenes. 168 participants registered for the challenge and 27 teams competed in the final testing phase. The proposed solutions gauge the state of the art in image dehazing.





NTIRE 2020 Challenge on Spectral Reconstruction from an RGB Image. (arXiv:2005.03412v1 [eess.IV])

This paper reviews the second challenge on spectral reconstruction from RGB images, i.e., the recovery of whole-scene hyperspectral (HS) information from a 3-channel RGB image. As in the previous challenge, two tracks were provided: (i) a "Clean" track where HS images are estimated from noise-free RGBs, which are themselves calculated numerically from the ground-truth HS images and supplied spectral sensitivity functions, and (ii) a "Real World" track, simulating capture by an uncalibrated and unknown camera, where the HS images are recovered from noisy JPEG-compressed RGB images. A new, larger-than-ever natural hyperspectral image dataset is presented, containing a total of 510 HS images. The Clean and Real World tracks had 103 and 78 registered participants respectively, with 14 teams competing in the final testing phase. A description of the proposed methods, alongside their challenge scores and an extensive evaluation of the top-performing methods, is also provided. They gauge the state of the art in spectral reconstruction from an RGB image.





Encoding in the Dark Grand Challenge: An Overview. (arXiv:2005.03315v1 [eess.IV])

A big part of the video content we consume from video providers consists of genres featuring low-light aesthetics. Low light sequences have special characteristics, such as spatio-temporal varying acquisition noise and light flickering, that make the encoding process challenging. To deal with the spatio-temporal incoherent noise, higher bitrates are used to achieve high objective quality. Additionally, the quality assessment metrics and methods have not been designed, trained or tested for this type of content. This has inspired us to trigger research in that area and propose a Grand Challenge on encoding low-light video sequences. In this paper, we present an overview of the proposed challenge, and test state-of-the-art methods that will be part of the benchmark methods at the stage of the participants' deliverable assessment. From this exploration, our results show that VVC already achieves a high performance compared to simply denoising the video source prior to encoding. Moreover, the quality of the video streams can be further improved by employing a post-processing image enhancement method.





Cotatron: Transcription-Guided Speech Encoder for Any-to-Many Voice Conversion without Parallel Data. (arXiv:2005.03295v1 [eess.AS])

We propose Cotatron, a transcription-guided speech encoder for speaker-independent linguistic representation. Cotatron is based on the multispeaker TTS architecture and can be trained with conventional TTS datasets. We train a voice conversion system to reconstruct speech with Cotatron features, which is similar to the previous methods based on Phonetic Posteriorgram (PPG). By training and evaluating our system with 108 speakers from the VCTK dataset, we outperform the previous method in terms of both naturalness and speaker similarity. Our system can also convert speech from speakers that are unseen during training, and utilize ASR to automate the transcription with minimal reduction of the performance. Audio samples are available at https://mindslab-ai.github.io/cotatron, and the code with a pre-trained model will be made available soon.





NTIRE 2020 Challenge on Image Demoireing: Methods and Results. (arXiv:2005.03155v1 [cs.CV])

This paper reviews the Challenge on Image Demoireing that was part of the New Trends in Image Restoration and Enhancement (NTIRE) workshop, held in conjunction with CVPR 2020. Demoireing is a difficult task of removing moire patterns from an image to reveal an underlying clean image. The challenge was divided into two tracks. Track 1 targeted the single image demoireing problem, which seeks to remove moire patterns from a single image. Track 2 focused on the burst demoireing problem, where a set of degraded moire images of the same scene were provided as input, with the goal of producing a single demoired image as output. The methods were ranked in terms of their fidelity, measured using the peak signal-to-noise ratio (PSNR) between the ground truth clean images and the restored images produced by the participants' methods. The tracks had 142 and 99 registered participants, respectively, with a total of 14 and 6 submissions in the final testing stage. The entries span the current state-of-the-art in image and burst image demoireing problems.





Optimally Convergent Mixed Finite Element Methods for the Stochastic Stokes Equations. (arXiv:2005.03148v1 [math.NA])

We propose some new mixed finite element methods for the time-dependent stochastic Stokes equations with multiplicative noise, which use the Helmholtz decomposition of the driving multiplicative noise. It is known [16] that the pressure solution has a low regularity, which manifests in sub-optimal convergence rates for well-known inf-sup stable mixed finite element methods in numerical simulations, see [10]. We show that eliminating this gradient part from the noise in the numerical scheme leads to optimally convergent mixed finite element methods, and that this conceptual idea may be used to retool numerical methods that are well known in the deterministic setting, including pressure stabilization methods, so that their optimal convergence properties can still be maintained in the stochastic setting. Computational experiments are also provided to validate the theoretical results and to illustrate the conceptual usefulness of the proposed numerical approach.
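For intuition, the Helmholtz decomposition referred to here splits a vector field into a divergence-free part and a gradient part; schematically (illustrative symbols, not the paper's notation), the noise coefficient is written as
\[
  \mathbf{B}(\mathbf{u}) \;=\; \boldsymbol{\eta}(\mathbf{u}) + \nabla \xi(\mathbf{u}),
  \qquad \nabla \cdot \boldsymbol{\eta}(\mathbf{u}) = 0,
\]
and, as the abstract describes, the modified scheme drops the gradient part $\nabla \xi(\mathbf{u})$ from the noise, the gradient part being the one tied to the low-regularity pressure and the resulting sub-optimal convergence.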





Beware the Normative Fallacy. (arXiv:2005.03084v1 [cs.SE])

Behavioral research can provide important insights for SE practices. But in performing it, many studies of SE commit a normative fallacy: they misappropriate normative and prescriptive theories for descriptive purposes. The evidence from reviews of empirical studies of decision making in SE suggests that the normative fallacy may be common. This article draws on cognitive psychology and behavioral economics to explain this fallacy. Because data collection is framed by narrow and empirically invalid theories, flawed assumptions baked into those theories lead to misleading interpretations of observed behaviors and, ultimately, to invalid conclusions and flawed recommendations. Researchers should be careful not to rely solely on engineering methods to explain what people do when they do engineering. Instead, they should insist that descriptive research be based on validated descriptive theories, listen carefully to skilled practitioners, and rely only on validated findings to prescribe what practitioners should do.





Football High: Helmets Do Not Prevent Concussions

Even with improvements in helmet technology, helmets may prevent skull fractures, but they do not prevent concussions.





Football High: Keeping Up with the Joneses

Competition is fierce in sports like football, and the desire to win often trumps safety.





Football High: Garrett Harper's Story, Part II

The decisions coaches make on the sidelines about returning a concussed player to the game or not can be a "game changer" for that athlete's life.





Football High: Small Hits Add Up

Research is showing that the accumulation of sub-concussive hits in sports like football can be just as damaging as one or two major concussions.





Football High: Garrett Harper's Story, Part I

For many competitive high school football players like Garrett Harper, the intensity of this contact sport has its price.





Football High: Owen Thomas' Story

The issues of sports-related concussions and chronic traumatic encephalopathy were intensified when the brain of a deceased 21-year-old football player was examined.





How Does the IMPACT Baseline Test for Athletes Really Work?

Retired soccer star Briana Scurry describes how the computerized baseline test works and how it is used for athletes who have sustained a concussion.





The Doctor Who Finally Said He Could Help

Retired soccer star Briana Scurry talks about finally finding hope and help after almost three years of being told she wouldn't get any better.





What “Friday Night Tykes” Can Teach Us About Youth Football

Why do some parents and coaches think it's okay to let 9-year-old kids get hit in the head over and over in football practices and games?





Despite risks, many in small town continue to support youth football

Despite multiple concussions, a high school freshman continues to play football. Will family tradition outweigh the risks?





20 Company Website Designs to Inspire Your Small Business

As a small or midsize business (SMB), your company website is often the first touchpoint for potential clients — and you want it to make a great first impression. The secret to hitting home with your audience is to have a sophisticated and lively website design that’s aesthetically pleasing and provides great user experience (UX). […]






All About Lambda Functions in C++ (From C++11 to C++17)

Lambda functions are an intuitive feature of Modern C++, introduced in C++11, so there are already tons of lambda-function tutorials on the internet. Still, some aspects (like IIFE, the types of lambdas, etc.) are rarely discussed. Therefore, in this article I will not only show you lambda functions in C++, but also cover how they work internally and other aspects of lambdas.

The title of this article is a bit misleading, because a lambda doesn't always synthesize to a function pointer; it's an expression (more precisely, a unique closure type). But I have kept the title that way for simplicity. So from now on, I will use lambda function and lambda expression interchangeably.
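As a quick taste of the points above, here is a small, self-contained sketch (illustrative only, not an excerpt from the article) showing a lambda's unique closure type, the conversion of a capture-less lambda to a function pointer, and an IIFE:

    #include <iostream>

    int main() {
        int factor = 3;

        // A lambda is an expression whose type is a unique, unnamed closure type;
        // we can only name it via auto (or a template parameter).
        auto scale = [factor](int x) { return x * factor; };
        std::cout << scale(10) << '\n';  // 30

        // Only a capture-less lambda converts implicitly to a plain function pointer.
        int (*fp)(int) = [](int x) { return x + 1; };
        std::cout << fp(41) << '\n';     // 42

        // IIFE (Immediately Invoked Function Expression): define and call in one go,
        // handy for initializing a const variable with non-trivial logic.
        const int answer = [&] { return scale(10) + fp(1); }();
        std::cout << answer << '\n';     // 32
    }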