
You Are Not Your Ego with Cheri Huber and Ashwini Narayanan

Cheri has been a student and teacher of Zen for over 35 years. She is the author of over 20 books on Zen, and founded the Mountain View Zen Center and the Zen Monastery Peace Center. Cheri also founded Living Compassion, a non-profit dedicated to transforming lives and ending suffering, whose primary work is the Africa Vulnerable Children Project in Zambia. Ashwini co-facilitates and creates workshops with Cheri, and runs the operations of the two nonprofits that Cheri founded. Her eclectic background ranges from degrees in physics, business, and computer science to work in advertising, an investment bank, a social enterprise, and several technology startups in Silicon Valley. Cheri and Ashwini have co-written multiple books, including their latest, Don’t Suffer, Communicate. Today’s episode isn’t just about awareness practice; it’s about a framework for navigating life. A few highlights: Zen isn’t just the practice of keeping things nicely organized; it’s a spiritual practice largely focused on awareness and where you direct attention. Self-improvement is an endless diss; the very nature of saying we need improvement implies we’re not enough. Cheri and Ashwini share some useful tools to redirect the attention, such as using a recorder to access the wisdom, love, and compassion that is […]






Setting New Project Managers Up for Success

At Viget, we’ve brought on more than a few new Project Managers over the past couple of years, as we continue to grow. The awesome new people we’ve hired have ranged in their levels of experience, but some of them are earlier in their careers and need support from more experienced PMs to develop their skills and flourish.

We have different levels of training and support for new PMs. These broadly fall into four categories:

  • Onboarding: Learning about Viget tools and processes
  • Shadowing: Learning by watching others
  • Pairing: Learning by doing collaboratively
  • Leading: Learning by doing solo

Onboarding

In addition to conducting intro sessions to each discipline at Viget, new Viget PMs go through a lengthy set of training sessions that are specific to the PM lab. These include intros to:

  • PM tools and resources
  • Project processes
  • Project types
  • Project checklists
  • Project tasking
  • Project planning
  • Budgets, schedules, and resourcing
  • Retrospectives
  • Working with remote teams
  • Project kickoffs
  • Thinking about development
  • GitHub and development workflow
  • Tickets, definition, and documentation
  • QA testing
  • Account management

Shadowing

After PMs complete the onboarding process, they start shadowing other PMs’ projects to get exposure to the different types of projects we run (since the variety is large). We cater length and depth of shadowing based on how much experience a PM has coming in. We also try to expose PMs to multiple project managers, so they can see how PM style differs person-to-person.

We’ve found that it can be most effective to have PMs shadow activities that are more difficult to teach in theory, such as shadowing a PM having a difficult conversation with a client, or shadowing a front-end build-out demo to see how the PM positions the meeting and our process to the client. More straightforward tasks like setting up a Harvest project could be done via pairing, since it’s easy to get the hang of with a little guidance.

Pairing

While shadowing is certainly helpful, we try to get PMs into pairing mode pretty quickly, since we’ve found that most folks learn better by doing than by watching. Sometimes this might mean having a new PM set up an invoice or budget sheet for a client while a more experienced PM sits next to them, talking them through the process. We’ve found that having a newer PM lead straightforward activities with guidance tends to be more effective than the newer PM merely watching the more experienced PM do that activity.

Another tactic we take is to have both PMs complete a task independently, and then meet and talk through their work, with the more experienced PM giving the less experienced PM feedback. That helps the newer PM think through a task on their own, and gain experience, but still have the chance to see how someone else would have approached the task and get meaningful feedback.

Leading

Once new PMs are ready to be in the driver’s seat, they are staffed as the lead on projects. The timing of when someone shifts into a lead role depends on how much prior experience that person has, as well as what types of projects are actively ready to be worked on.

Most early-career project managers have a behind-the-scenes project mentor (another PM) on at least their first couple projects, so they have a dedicated person to ask questions of and get advice from, someone who also has more detailed context than that person’s manager would. For example, mentors often shadow key client and internal meetings and have more frequent check-ins with mentees. This might be less necessary at a company where all the projects are fairly similar, but at Viget, our projects vary widely in scale and services provided, as well as client needs. Because of this, there’s no “one size fits all” process, and we have a significant amount of customization per project, which can be daunting to new PMs who are still getting the hang of things.

For these mentorship pairings, we use a mentorship plan document (template here) to help the mentor and mentee work together to define goals, mentorship focuses, and touchpoints. Sometimes the mentee’s manager will take a first stab at filling out the plan; other times, the mentor will start that process.

Management Touchpoints

Along the way, we make sure new PMs have touchpoints with their managers to get the level of support they need to grow and succeed. Managers have regular 1:1s with PMs, referred to as “project 1:1s”, which the managee uses to talk through and get advice on challenges or questions related to the projects they’re working on—though really, they can be used for whatever topics are on the managee’s mind. PMs typically have 1:1s with managers daily during the first week, two to three times per week for the first month or so after that, then once per week, scaling down to bi-weekly after the first six months.

In addition to project 1:1s, we also have monthly 1:1s that are bigger-picture and focused on goal-setting and progress, project feedback from that person’s peers, reflection on how satisfied and fulfilled they’re feeling in their role, and talking through project and industry interests, which inform what projects we should advocate for them to be staffed on. We have a progress log template that we customize per PM to keep track of goals and progress.

We try to foster a supportive environment that encourages growth, feedback, and experiential learning, but that also gives folks the autonomy to get in the driver’s seat as soon as they’re comfortable. Interested in learning more about what it’s like to work at Viget? Check out our open positions here.





EMSx: A Numerical Benchmark for Energy Management Systems. (arXiv:2001.00450v2 [math.OC] UPDATED)

Inserting renewable energy in the electric grid in a decentralized manner is a key challenge of the energy transition. However, at local scale, both production and demand display erratic behavior, which makes it delicate to match them. It is the goal of Energy Management Systems (EMS) to achieve such balance at least cost. We present EMSx, a numerical benchmark for testing control algorithms for the management of electric microgrids equipped with a photovoltaic unit and an energy storage system. EMSx is made of three key components: the EMSx dataset, provided by Schneider Electric, contains a diverse pool of realistic microgrids with a rich collection of historical observations and forecasts; the EMSx mathematical framework is an explicit description of the assessment of electric microgrid control techniques and algorithms; the EMSx software, EMSx.jl, is a package, implemented in the Julia language, which makes it easy to implement and test a microgrid controller. All components of the benchmark are publicly available, so that other researchers willing to test controllers on EMSx may easily reproduce experiments. Finally, we showcase the results of standard microgrid control methods, including Model Predictive Control, Open Loop Feedback Control and Stochastic Dynamic Programming.
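As a flavor of what such a benchmark evaluates, here is a minimal, self-contained sketch (in Python, not the actual EMSx.jl package or its API) of scoring a naive rule-based controller on synthetic photovoltaic and demand data; every number and name in it is illustrative.

```python
# Minimal sketch, assuming synthetic PV/demand data: score a rule-based
# microgrid controller by its cost of buying energy from the grid.
import numpy as np

rng = np.random.default_rng(0)
T = 96                                   # one day at 15-minute resolution
pv = np.clip(np.sin(np.linspace(0, np.pi, T)) + 0.1 * rng.standard_normal(T), 0, None)
demand = 0.6 + 0.2 * rng.standard_normal(T) ** 2

capacity, soc, price = 2.0, 1.0, 0.15    # battery kWh, state of charge, $/kWh

cost = 0.0
for t in range(T):
    surplus = pv[t] - demand[t]
    if surplus >= 0:                     # charge the battery with excess PV
        soc += min(surplus, capacity - soc)
    else:                                # discharge first, buy the remainder
        discharge = min(-surplus, soc)
        soc -= discharge
        cost += (-surplus - discharge) * price

print(f"operating cost of the rule-based policy: ${cost:.2f}")
```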





A stand-alone analysis of quasidensity. (arXiv:1907.07278v8 [math.FA] UPDATED)

In this paper we consider the "quasidensity" of a subset of the product of a Banach space and its dual, and give a connection between quasidense sets and sets of "type (NI)". We discuss "coincidence sets" of certain convex functions and prove two sum theorems for coincidence sets. We obtain new results on the Fitzpatrick extension of a closed quasidense monotone multifunction. The analysis in this paper is self-contained, and independent of previous work on "Banach SN spaces". This version differs from the previous one in two respects: it is shown that the (well-known) equivalence of quasidensity and "type (NI)" for maximally monotone sets fails without the monotonicity assumption, and the appendix has been moved to the end of Section 10, where it rightfully belongs.





Quasi-Sure Stochastic Analysis through Aggregation and SLE$_\kappa$ Theory. (arXiv:2005.03152v1 [math.PR])

We study SLE$_\kappa$ theory with elements of Quasi-Sure Stochastic Analysis through Aggregation. Specifically, we show how the latter can be used to construct the SLE$_\kappa$ traces quasi-surely (i.e. simultaneously for a family of probability measures with certain properties) for $\kappa \in \mathcal{K} \cap \mathbb{R}_+ \setminus ([0, \epsilon) \cup \{8\})$, for any $\epsilon > 0$, with $\mathcal{K} \subset \mathbb{R}_{+}$ a nontrivial compact interval, i.e. for all $\kappa$ that are not in a neighborhood of zero and are different from $8$. As a by-product of the analysis, we show in this language a version of the continuity in $\kappa$ of the SLE$_\kappa$ traces for all $\kappa$ in compact intervals as above.





On planar graphs of uniform polynomial growth. (arXiv:2005.03139v1 [math.PR])

Consider an infinite planar graph with uniform polynomial growth of degree d > 2. Many examples of such graphs exhibit similar geometric and spectral properties, and it has been conjectured that this is necessary. We present a family of counterexamples. In particular, we show that for every rational d > 2, there is a planar graph with uniform polynomial growth of degree d on which the random walk is transient, disproving a conjecture of Benjamini (2011).

By a well-known theorem of Benjamini and Schramm, such a graph cannot be a unimodular random graph. We also give examples of unimodular random planar graphs of uniform polynomial growth with unexpected properties. For instance, we construct graphs of (almost sure) uniform polynomial growth of every rational degree d > 2 for which the speed exponent of the walk is larger than 1/d, and in which the complements of all balls are connected. This resolves negatively two questions of Benjamini and Papasoglou (2011).





Continuous speech separation: dataset and analysis. (arXiv:2001.11482v3 [cs.SD] UPDATED)

This paper describes a dataset and protocols for evaluating continuous speech separation algorithms. Most prior studies on speech separation use pre-segmented signals of artificially mixed speech utterances which are mostly fully overlapped, and the algorithms are evaluated based on signal-to-distortion ratio or similar performance metrics. However, in natural conversations, a speech signal is continuous, containing both overlapped and overlap-free components. In addition, the signal-based metrics have very weak correlations with automatic speech recognition (ASR) accuracy. We think that not only does this make it hard to assess the practical relevance of the tested algorithms, it also hinders researchers from developing systems that can be readily applied to real scenarios. In this paper, we define continuous speech separation (CSS) as a task of generating a set of non-overlapped speech signals from a continuous audio stream that contains multiple utterances that are partially overlapped by a varying degree. A new real recorded dataset, called LibriCSS, is derived from LibriSpeech by concatenating the corpus utterances to simulate a conversation and capturing the audio replays with far-field microphones. A Kaldi-based ASR evaluation protocol is also established by using a well-trained multi-conditional acoustic model. By using this dataset, several aspects of a recently proposed speaker-independent CSS algorithm are investigated. The dataset and evaluation scripts are available to facilitate the research in this direction.
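To make the data construction concrete, here is a small sketch of the overlap structure described above; the signals, sampling choices, and `mix` helper are hypothetical stand-ins, and the real LibriCSS pipeline additionally replays the audio through far-field microphones.

```python
# Sketch of LibriCSS-style mixing: concatenate utterances into a continuous
# stream where a second speaker partially overlaps the first. Synthetic
# noise stands in for real LibriSpeech utterances.
import numpy as np

rng = np.random.default_rng(0)
sr = 16000

def utterance(seconds):                       # stand-in for a LibriSpeech utterance
    return rng.standard_normal(int(seconds * sr)) * 0.1

def mix(utts_a, utts_b, overlap_ratio=0.3):
    """Place speaker B's utterances so each overlaps the tail of A's stream."""
    stream = np.zeros(0)
    for a, b in zip(utts_a, utts_b):
        stream = np.concatenate([stream, a])
        start = len(stream) - int(overlap_ratio * len(b))   # partial overlap
        end = start + len(b)
        out = np.zeros(max(len(stream), end))
        out[:len(stream)] += stream
        out[start:end] += b
        stream = out
    return stream

mixture = mix([utterance(3), utterance(2)], [utterance(2), utterance(3)])
print(f"{len(mixture) / sr:.1f} s of continuous audio with partial overlap")
```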





Intra-Variable Handwriting Inspection Reinforced with Idiosyncrasy Analysis. (arXiv:1912.12168v2 [cs.CV] UPDATED)

In this paper, we work on intra-variable handwriting, where the writing samples of an individual can vary significantly. Such within-writer variation throws a challenge for automatic writer inspection, where the state-of-the-art methods do not perform well. To deal with intra-variability, we analyze the idiosyncrasy in individual handwriting. We identify/verify the writer from highly idiosyncratic text-patches. Such patches are detected using a deep recurrent reinforcement learning-based architecture. An idiosyncratic score is assigned to every patch, which is predicted by employing deep regression analysis. For writer identification, we propose a deep neural architecture, which makes the final decision by the idiosyncratic score-induced weighted average of patch-based decisions. For writer verification, we propose two algorithms for patch-fed deep feature aggregation, which assist in authentication using a triplet network. The experiments were performed on two databases, where we obtained encouraging results.





Revisiting Semantics of Interactions for Trace Validity Analysis. (arXiv:1911.03094v2 [cs.SE] UPDATED)

Interaction languages such as MSC are often associated with formal semantics by means of translations into distinct behavioral formalisms such as automata or Petri nets. In contrast to translational approaches, we propose an operational approach. Its principle is to identify which elementary communication actions can be immediately executed, and then to compute, for every such action, a new interaction representing the possible continuations to its execution. We also define an algorithm for checking the validity of execution traces (i.e. whether or not they belong to an interaction's semantics). Algorithms for semantic computation and trace validity are analyzed by means of experiments.
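A minimal sketch of that operational principle, under the assumption of a simplified term language (actions, strict sequencing, alternatives, parallel composition) rather than the paper's full interaction language:

```python
# Frontier semantics sketch: compute the immediately executable actions of an
# interaction term and, for each, the continuation interaction; trace validity
# then follows by recursively consuming the trace.
from dataclasses import dataclass

class I: pass

@dataclass
class Act(I): name: str
@dataclass
class Seq(I): left: I; right: I       # strict sequencing
@dataclass
class Alt(I): left: I; right: I
@dataclass
class Par(I): left: I; right: I
@dataclass
class Empty(I): pass

def steps(i):
    """Yield (action, continuation) pairs for the frontier of i."""
    if isinstance(i, Act):
        yield i.name, Empty()
    elif isinstance(i, Seq):          # strict: right side only after left ends
        for a, k in steps(i.left):
            yield a, i.right if isinstance(k, Empty) else Seq(k, i.right)
    elif isinstance(i, Alt):
        yield from steps(i.left)
        yield from steps(i.right)
    elif isinstance(i, Par):
        for a, k in steps(i.left):
            yield a, i.right if isinstance(k, Empty) else Par(k, i.right)
        for a, k in steps(i.right):
            yield a, i.left if isinstance(k, Empty) else Par(i.left, k)

def accepts_empty(i):
    if isinstance(i, Empty): return True
    if isinstance(i, Act): return False
    if isinstance(i, Alt): return accepts_empty(i.left) or accepts_empty(i.right)
    return accepts_empty(i.left) and accepts_empty(i.right)   # Seq, Par

def valid(trace, i):
    """Trace validity: does the trace belong to the semantics of i?"""
    if not trace:
        return accepts_empty(i)
    return any(a == trace[0] and valid(trace[1:], k) for a, k in steps(i))

term = Seq(Act("a!m"), Par(Act("b?m"), Act("c!n")))
print(valid(["a!m", "c!n", "b?m"], term))   # True
```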





Over-the-Air Computation Systems: Optimization, Analysis and Scaling Laws. (arXiv:1909.00329v2 [cs.IT] UPDATED)

For future Internet of Things (IoT)-based Big Data applications (e.g., smart cities/transportation), wireless data collection from ubiquitous massive smart sensors with limited spectrum bandwidth is very challenging. On the other hand, to interpret the meaning behind the collected data, it is also challenging for edge fusion centers running computing tasks over large data sets with limited computation capacity. To tackle these challenges, by exploiting the superposition property of a multiple-access channel and the functional decomposition properties, the recently proposed technique, over-the-air computation (AirComp), enables an effective joint data collection and computation from concurrent sensor transmissions. In this paper, we focus on a single-antenna AirComp system consisting of $K$ sensors and one receiver (i.e., the fusion center). We consider an optimization problem to minimize the computation mean-squared error (MSE) of the $K$ sensors' signals at the receiver by optimizing the transmitting-receiving (Tx-Rx) policy, under the peak power constraint of each sensor. Although the problem is not convex, we derive the computation-optimal policy in closed form. Also, we comprehensively investigate the ergodic performance of AirComp systems in terms of the average computation MSE and the average power consumption under Rayleigh fading channels with different Tx-Rx policies. For the computation-optimal policy, we prove that its average computation MSE has a decay rate of $O(1/\sqrt{K})$, and our numerical results illustrate that the policy also has a vanishing average power consumption with the increasing $K$, which jointly show the computation effectiveness and the energy efficiency of the policy with a large number of sensors.
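As a rough illustration of the ergodic analysis, the following Monte-Carlo sketch estimates the average computation MSE of a single-antenna AirComp system under Rayleigh fading with a truncated channel-inversion transmit policy; the threshold rule here is an assumption for illustration, not necessarily the paper's computation-optimal policy.

```python
# Monte-Carlo sketch: average computation MSE of AirComp under Rayleigh
# fading. Sensors invert their channel when it is strong enough, otherwise
# transmit at peak power; the receiver rescales by 1/sqrt(eta). Unit noise
# variance and unit-variance sensor signals are assumed throughout.
import numpy as np

rng = np.random.default_rng(0)

def avg_mse(K, P=1.0, trials=2000):
    mses = []
    for _ in range(trials):
        h = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)
        g = np.abs(h) ** 2
        eta = np.quantile(g, 1.0 / np.sqrt(K)) * P   # illustrative threshold
        b = np.where(g >= eta / P, np.sqrt(eta) / np.abs(h), np.sqrt(P))
        # distortion = gain misalignment of truncated sensors + scaled noise
        align_err = np.sum((b * np.abs(h) / np.sqrt(eta) - 1.0) ** 2)
        mses.append((align_err + 1.0 / eta) / K ** 2)
    return np.mean(mses)

for K in (10, 100, 1000):
    print(K, avg_mse(K))   # MSE shrinks as K grows
```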





On analog quantum algorithms for the mixing of Markov chains. (arXiv:1904.11895v2 [quant-ph] UPDATED)

The problem of sampling from the stationary distribution of a Markov chain finds widespread applications in a variety of fields. The time required for a Markov chain to converge to its stationary distribution is known as the classical mixing time. In this article, we deal with analog quantum algorithms for mixing. First, we provide an analog quantum algorithm that, given a Markov chain, allows us to sample from its stationary distribution in a time that scales as the sum of the square root of the classical mixing time and the square root of the classical hitting time. Our algorithm makes use of the framework of interpolated quantum walks and relies on Hamiltonian evolution in conjunction with von Neumann measurements.

There also exists a different notion for quantum mixing: the problem of sampling from the limiting distribution of quantum walks, defined in a time-averaged sense. In this scenario, the quantum mixing time is defined as the time required to sample from a distribution that is close to this limiting distribution. Recently we provided an upper bound on the quantum mixing time for Erdős-Rényi random graphs [Phys. Rev. Lett. 124, 050501 (2020)]. Here, we also extend and expand upon our findings therein. Namely, we provide an intuitive understanding of the state-of-the-art random matrix theory tools used to derive our results. In particular, for our analysis we require information about macroscopic, mesoscopic and microscopic statistics of eigenvalues of random matrices which we highlight here. Furthermore, we provide numerical simulations that corroborate our analytical findings and extend this notion of mixing from simple graphs to any ergodic, reversible, Markov chain.





A Fast and Accurate Algorithm for Spherical Harmonic Analysis on HEALPix Grids with Applications to the Cosmic Microwave Background Radiation. (arXiv:1904.10514v4 [math.NA] UPDATED)

The Hierarchical Equal Area isoLatitude Pixelation (HEALPix) scheme is used extensively in astrophysics for data collection and analysis on the sphere. The scheme was originally designed for studying the Cosmic Microwave Background (CMB) radiation, which represents the first light to travel during the early stages of the universe's development and gives the strongest evidence for the Big Bang theory to date. Refined analysis of the CMB angular power spectrum can lead to revolutionary developments in understanding the nature of dark matter and dark energy. In this paper, we present a new method for performing spherical harmonic analysis for HEALPix data, which is a central component to computing and analyzing the angular power spectrum of the massive CMB data sets. The method uses a novel combination of a non-uniform fast Fourier transform, the double Fourier sphere method, and Slevinsky's fast spherical harmonic transform (Slevinsky, 2019). For a HEALPix grid with $N$ pixels (points), the computational complexity of the method is $\mathcal{O}(N\log^2 N)$, with an initial set-up cost of $\mathcal{O}(N^{3/2}\log N)$. This compares favorably with $\mathcal{O}(N^{3/2})$ runtime complexity of the current methods available in the HEALPix software when multiple maps need to be analyzed at the same time. Using numerical experiments, we demonstrate that the new method also appears to provide better accuracy over the entire angular power spectrum of synthetic data when compared to the current methods, with a convergence rate at least two times higher.
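For orientation, this is roughly what the baseline pipeline the method is compared against looks like with the standard HEALPix tooling (the real `healpy` package); the map and input spectrum below are synthetic:

```python
# Baseline sketch: spherical harmonic analysis of a HEALPix map with healpy.
# Requires `pip install healpy`; the input power spectrum is made up.
import numpy as np
import healpy as hp

nside = 64                                     # N = 12 * nside**2 pixels
cl_in = 1.0 / (np.arange(1, 3 * nside) ** 2)   # synthetic input spectrum
cmb_map = hp.synfast(np.concatenate([[0.0], cl_in]), nside)  # random CMB-like map

cl_out = hp.anafast(cmb_map, lmax=2 * nside)   # spherical harmonic analysis
print(cl_out[:5])
```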





Identifying Compromised Accounts on Social Media Using Statistical Text Analysis. (arXiv:1804.07247v3 [cs.SI] UPDATED)

Compromised accounts on social networks are regular user accounts that have been taken over by an entity with malicious intent. Since the adversary exploits the already established trust of a compromised account, it is crucial to detect these accounts to limit the damage they can cause. We propose a novel general framework for discovering compromised accounts by semantic analysis of text messages coming from an account. Our framework is built on the observation that normal users will use language that is measurably different from the language that an adversary would use when the account is compromised. We use our framework to develop specific algorithms that use the difference of language models of users and adversaries as features in a supervised learning setup. Evaluation results show that the proposed framework is effective for discovering compromised accounts on social networks and a KL-divergence-based language model feature works best.
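A minimal sketch of the kind of language-model feature described, assuming simple Laplace-smoothed unigram models and toy messages:

```python
# KL-divergence feature sketch: compare a smoothed unigram model of an
# account's history against a model of its recent messages; a large
# divergence suggests a change of author.
from collections import Counter
import math

def unigram(texts, vocab, alpha=1.0):
    counts = Counter(w for t in texts for w in t.lower().split())
    total = sum(counts.values()) + alpha * len(vocab)
    return {w: (counts[w] + alpha) / total for w in vocab}

def kl(p, q):
    return sum(p[w] * math.log(p[w] / q[w]) for w in p)

history = ["meeting notes attached", "lunch tomorrow?", "see the attached notes"]
recent = ["click here for free prizes", "free offer click now"]
vocab = {w for t in history + recent for w in t.lower().split()}

p = unigram(recent, vocab)
q = unigram(history, vocab)
print(f"KL(recent || history) = {kl(p, q):.3f}")   # higher -> more suspicious
```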





MISA: Modality-Invariant and -Specific Representations for Multimodal Sentiment Analysis. (arXiv:2005.03545v1 [cs.CL])

Multimodal Sentiment Analysis is an active area of research that leverages multimodal signals for affective understanding of user-generated videos. The predominant approach to this task has been to develop sophisticated fusion techniques. However, the heterogeneous nature of the signals creates distributional modality gaps that pose significant challenges. In this paper, we aim to learn effective modality representations to aid the process of fusion. We propose a novel framework, MISA, which projects each modality to two distinct subspaces. The first subspace is modality-invariant, where the representations across modalities learn their commonalities and reduce the modality gap. The second subspace is modality-specific, which is private to each modality and captures their characteristic features. These representations provide a holistic view of the multimodal data, which is used for fusion that leads to task predictions. Our experiments on popular sentiment analysis benchmarks, MOSI and MOSEI, demonstrate significant gains over state-of-the-art models. We also consider the task of Multimodal Humor Detection and experiment on the recently proposed UR_FUNNY dataset. Here too, our model fares better than strong baselines, establishing MISA as a useful multimodal framework.
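A much-simplified PyTorch sketch of the two-subspace idea (not the full MISA architecture, which uses stronger distribution-similarity and reconstruction losses):

```python
# Two-subspace sketch: a shared "invariant" projector plus per-modality
# "specific" projectors, with a loss pulling invariant codes together and a
# loss pushing invariant and specific codes toward orthogonality.
import torch
import torch.nn as nn

d_in, d_sub = 128, 32
modalities = ["text", "audio", "video"]

invariant = nn.Linear(d_in, d_sub)                      # shared across modalities
specific = nn.ModuleDict({m: nn.Linear(d_in, d_sub) for m in modalities})

feats = {m: torch.randn(8, d_in) for m in modalities}   # dummy batch of features
inv = {m: invariant(feats[m]) for m in modalities}
spec = {m: specific[m](feats[m]) for m in modalities}

# similarity loss: invariant codes of different modalities should align
sim_loss = sum(((inv[a] - inv[b]) ** 2).mean()
               for a in modalities for b in modalities if a < b)
# difference loss: invariant and specific codes should be near-orthogonal
diff_loss = sum((inv[m].T @ spec[m]).pow(2).mean() for m in modalities)

# fusion input: concatenation of all six representations
fused = torch.cat([inv[m] for m in modalities] + [spec[m] for m in modalities], dim=1)
print(fused.shape, float(sim_loss), float(diff_loss))
```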





Algorithmic Averaging for Studying Periodic Orbits of Planar Differential Systems. (arXiv:2005.03487v1 [cs.SC])

One of the main open problems in the qualitative theory of real planar differential systems is the study of limit cycles. In this article, we present an algorithmic approach for detecting how many limit cycles can bifurcate from the periodic orbits of a given polynomial differential center when it is perturbed inside a class of polynomial differential systems via the averaging method. We propose four symbolic algorithms to implement the averaging method. The first algorithm is based on the change of polar coordinates that allows one to transform a considered differential system to the normal form of averaging. The second algorithm is used to derive the solutions of certain differential systems associated to the unperturbed term of the normal form of averaging. The third algorithm exploits the partial Bell polynomials and allows one to compute the integral formula of the averaged functions at any order. The last algorithm is based on the aforementioned algorithms and determines the exact expressions of the averaged functions for the considered differential systems. The implementation of our algorithms is discussed and evaluated using several examples. The experimental results have extended the existing relevant results for certain classes of differential systems.
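A worked first-order instance of the averaging method with SymPy, using the classical van der Pol perturbation rather than the paper's general algorithms:

```python
# First-order averaging, worked example: for x' = y, y' = -x + eps*(1 - x^2)*y,
# passing to polar coordinates gives dr/dtheta = eps*(1 - r^2 cos^2 t) r sin^2 t
# + O(eps^2); the averaged function's positive zero locates the limit cycle.
import sympy as sp

r, theta = sp.symbols("r theta", positive=True)

integrand = (1 - r**2 * sp.cos(theta)**2) * r * sp.sin(theta)**2
f1 = sp.integrate(integrand, (theta, 0, 2 * sp.pi)) / (2 * sp.pi)

print(sp.simplify(f1))            # r/2 - r**3/8
print(sp.solve(sp.Eq(f1, 0), r))  # [2] -> the classical van der Pol cycle radius
```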





Fine-Grained Analysis of Cross-Linguistic Syntactic Divergences. (arXiv:2005.03436v1 [cs.CL])

The patterns in which the syntax of different languages converges and diverges are often used to inform work on cross-lingual transfer. Nevertheless, little empirical work has been done on quantifying the prevalence of different syntactic divergences across language pairs. We propose a framework for extracting divergence patterns for any language pair from a parallel corpus, building on Universal Dependencies. We show that our framework provides a detailed picture of cross-language divergences, generalizes previous approaches, and lends itself to full automation. We further present a novel dataset, a manually word-aligned subset of the Parallel UD corpus in five languages, and use it to perform a detailed corpus study. We demonstrate the usefulness of the resulting analysis by showing that it can help account for performance patterns of a cross-lingual parser.
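A toy sketch of the extraction idea, assuming word-aligned dependency parses in a minimal (head, relation) encoding; the sentences and alignment are made up:

```python
# Divergence-pattern sketch: for each aligned token pair, record whether the
# dependency relation is preserved and whether the heads are aligned too.
from collections import Counter

# English: "she reads books"  /  German-like: "sie liest Buecher"
src = [(1, "nsubj"), (-1, "root"), (1, "obj")]   # (head index, deprel) per token
tgt = [(1, "nsubj"), (-1, "root"), (1, "obj")]
alignment = {0: 0, 1: 1, 2: 2}                   # src token -> tgt token

patterns = Counter()
for s, t in alignment.items():
    s_head, s_rel = src[s]
    t_head, t_rel = tgt[t]
    # an edge diverges if aligned tokens disagree on headedness
    heads_aligned = (s_head == -1 and t_head == -1) or alignment.get(s_head) == t_head
    patterns[(s_rel, t_rel, "same-head" if heads_aligned else "head-diverges")] += 1

for pattern, n in patterns.items():
    print(pattern, n)
```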





Global Distribution of Google Scholar Citations: A Size-independent Institution-based Analysis. (arXiv:2005.03324v1 [cs.DL])

Most currently available schemes for performance-based ranking of universities or research organizations, such as Quacquarelli Symonds (QS), Times Higher Education (THE), and the Shanghai-based Academic Ranking of World Universities (ARWU), use a variety of criteria that include productivity, citations, awards, reputation, etc., while Leiden and Scimago use only bibliometric indicators. The research performance evaluation in the aforesaid cases is based on bibliometric data from Web of Science or Scopus, which are commercially available priced databases. The coverage includes peer-reviewed journals and conference proceedings. Google Scholar (GS), on the other hand, provides a free and open alternative for obtaining citations of papers available on the net (though it is not clear exactly which journals are covered). Citations are collected automatically from the net and also added to self-created individual author profiles under Google Scholar Citations (GSC). This data was used by the Cybermetrics Lab (Spain), publisher of the Webometrics rankings, to create a ranked list of 4000+ institutions in 2016, based on citations from only the top 10 individual GSC profiles in each organization. (GSC excludes the top paper for reasons explained in the text; the simple selection procedure makes the ranked list size-independent, as claimed by the Cybermetrics Lab.) Using this data (Transparent Ranking, TR, 2016), we find the regional and country-wise distribution of GS-TR citations. The size-independent ranked list is subdivided into deciles of 400 institutions each, and the number of institutions and citations of each country is obtained for each decile. We test for correlation between institutional ranks in GS-TR and the other ranking schemes for the top 20 institutions.
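The decile aggregation itself is straightforward; a sketch with made-up records of the same shape:

```python
# Decile analysis sketch: split a ranked list of 4000 institutions into ten
# deciles of 400 and aggregate institution counts and citations per country.
import pandas as pd

df = pd.DataFrame({
    "institution": [f"inst{i}" for i in range(4000)],
    "country": ["US", "CN", "UK", "IN"] * 1000,          # made-up labels
    "citations": sorted(range(4000), reverse=True),      # already rank-sorted
})

df["decile"] = df.index // 400 + 1                       # deciles of 400 each
summary = df.groupby(["decile", "country"]).agg(
    institutions=("institution", "count"),
    citations=("citations", "sum"),
)
print(summary.head(8))
```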





Specification and Automated Analysis of Inter-Parameter Dependencies in Web APIs. (arXiv:2005.03320v1 [cs.SE])

Web services often impose inter-parameter dependencies that restrict the way in which two or more input parameters can be combined to form valid calls to the service. Unfortunately, current specification languages for web services like the OpenAPI Specification (OAS) provide no support for the formal description of such dependencies, which makes it hardly possible to automatically discover and interact with services without human intervention. In this article, we present an approach for the specification and automated analysis of inter-parameter dependencies in web APIs. We first present a domain-specific language, called Inter-parameter Dependency Language (IDL), for the specification of dependencies among input parameters in web services. Then, we propose a mapping to translate an IDL document into a constraint satisfaction problem (CSP), enabling the automated analysis of IDL specifications using standard CSP-based reasoning operations. Specifically, we present a catalogue of nine analysis operations on IDL documents that allow computing, for example, whether a given request satisfies all the dependencies of the service. Finally, we present a tool suite including an editor, a parser, an OAS extension, a constraint programming-aided library, and a test suite supporting IDL specifications and their analyses. Together, these contributions pave the way for a new range of specification-driven applications in areas such as code generation and testing.
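As a flavor of the IDL-to-CSP mapping, here is a hedged sketch with the z3 solver; the two dependencies and parameter names are illustrative, not IDL syntax:

```python
# CSP sketch: encode two inter-parameter dependencies and check whether one
# concrete request satisfies them. Requires `pip install z3-solver`.
from z3 import Bool, Int, Solver, Implies, Not, And, sat

video, channel_id = Bool("videoDefinition_set"), Bool("channelId_set")
max_results = Int("maxResults")

s = Solver()
s.add(Implies(video, channel_id))         # Requires: videoDefinition -> channelId
s.add(Not(And(video, max_results > 50)))  # Exclusion: not both at once
s.add(And(max_results >= 1, max_results <= 100))

# validity of one concrete request: videoDefinition set, channelId missing
s.push()
s.add(video == True, channel_id == False, max_results == 10)
print("request valid?", s.check() == sat)   # False: dependency violated
s.pop()
```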





Boosting Cloud Data Analytics using Multi-Objective Optimization. (arXiv:2005.03314v1 [cs.DB])

Data analytics in the cloud has become an integral part of enterprise businesses. Big data analytics systems, however, still lack the ability to take user performance goals and budgetary constraints for a task, collectively referred to as task objectives, and automatically configure an analytic job to achieve these objectives. This paper presents a data analytics optimizer that can automatically determine a cluster configuration with a suitable number of cores as well as other system parameters that best meet the task objectives. At the core of our work is a principled multi-objective optimization (MOO) approach that computes a Pareto optimal set of job configurations to reveal tradeoffs between different user objectives, recommends a new job configuration that best explores such tradeoffs, and employs novel optimizations to enable such recommendations within a few seconds. We present efficient incremental algorithms based on the notion of a Progressive Frontier for realizing our MOO approach and implement them into a Spark-based prototype. Detailed experiments using benchmark workloads show that our MOO techniques provide a 2-50x speedup over existing MOO methods, while offering good coverage of the Pareto frontier. When compared to Ottertune, a state-of-the-art performance tuning system, our approach recommends configurations that yield 26%-49% reduction of running time of the TPCx-BB benchmark while adapting to different application preferences on multiple objectives.
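The heart of any such optimizer is a Pareto (skyline) filter over candidate configurations; a minimal sketch with made-up configurations, minimizing both running time and cost:

```python
# Pareto-frontier sketch: keep only configurations not dominated on
# (running time, cost); sorting by time lets a single pass track the
# best cost seen so far.
def pareto(configs):
    frontier = []
    for name, t, c in sorted(configs, key=lambda x: (x[1], x[2])):
        if not frontier or c < frontier[-1][2]:   # strictly better on cost
            frontier.append((name, t, c))
    return frontier

configs = [("8 cores", 120, 4.0), ("16 cores", 70, 6.5),
           ("32 cores", 45, 11.0), ("16 cores, big mem", 80, 7.5)]
for name, t, c in pareto(configs):
    print(f"{name}: {t}s, ${c}")   # '16 cores, big mem' is dominated and dropped
```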





Quda: Natural Language Queries for Visual Data Analytics. (arXiv:2005.03257v1 [cs.CL])

Visualization-oriented natural language interfaces (V-NLIs) have been explored and developed in recent years. One challenge faced by V-NLIs is in the formation of effective design decisions that usually requires a deep understanding of user queries. Learning-based approaches have shown potential in V-NLIs and reached state-of-the-art performance in various NLP tasks. However, because of the lack of sufficient training samples that cater to visual data analytics, cutting-edge techniques have rarely been employed to facilitate the development of V-NLIs. We present a new dataset, called Quda, to help V-NLIs understand free-form natural language. Our dataset contains 14,035 diverse user queries annotated with 10 low-level analytic tasks that assist in the deployment of state-of-the-art techniques for parsing complex human language. We achieve this goal by first gathering seed queries with data analysts who are target users of V-NLIs. Then we employ extensive crowdsourcing for paraphrase generation and validation. We demonstrate the usefulness of Quda in building V-NLIs by creating a prototype that makes effective design decisions for free-form user queries. We also show that Quda can be beneficial for a wide range of applications in the visualization community by analyzing the design tasks described in academic publications.





DFSeer: A Visual Analytics Approach to Facilitate Model Selection for Demand Forecasting. (arXiv:2005.03244v1 [cs.HC])

Selecting an appropriate model to forecast product demand is critical to the manufacturing industry. However, due to the data complexity, market uncertainty and users' demanding requirements for the model, it is challenging for demand analysts to select a proper model. Although existing model selection methods can reduce the manual burden to some extent, they often fail to present model performance details on individual products and reveal the potential risk of the selected model. This paper presents DFSeer, an interactive visualization system to conduct reliable model selection for demand forecasting based on the products with similar historical demand. It supports model comparison and selection with different levels of details. Besides, it shows the difference in model performance on similar products to reveal the risk of model selection and increase users' confidence in choosing a forecasting model. Two case studies and interviews with domain experts demonstrate the effectiveness and usability of DFSeer.





Catch Me If You Can: Using Power Analysis to Identify HPC Activity. (arXiv:2005.03135v1 [cs.CR])

Monitoring users on large computing platforms such as high performance computing (HPC) and cloud computing systems is non-trivial. Utilities such as process viewers provide limited insight into what users are running due to granularity limitations, and other sources of data, such as system call tracing, can impose significant operational overhead. However, despite technical and procedural measures, instances of users abusing valuable HPC resources for personal gains have been documented in the past [hpcbitmine], and systems that are open to large numbers of loosely-verified users from around the world are at risk of abuse. In this paper, we show how electrical power consumption data from an HPC platform can be used to identify what programs are executed. The intuition is that during execution, programs exhibit various patterns of CPU and memory activity. These patterns are reflected in the power consumption of the system and can be used to identify programs running. We test our approach on an HPC rack at Lawrence Berkeley National Laboratory using a variety of scientific benchmarks. Among other interesting observations, our results show that by monitoring the power consumption of an HPC rack, it is possible to identify if particular programs are running with precision and recall of up to 95% even in noisy scenarios.
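A conceptual sketch of the classification step, with synthetic stand-ins for rack power traces and simple summary features (the real study works with measured traces and, presumably, richer features):

```python
# Power-trace classification sketch: identify which benchmark is running from
# coarse statistics of a power time series. Traces here are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def trace(kind, n=256):
    base = {"hpl": 350, "stream": 280, "idle": 120}[kind]      # watts, made up
    period = {"hpl": 8, "stream": 32, "idle": 64}[kind]        # activity cycle
    t = np.arange(n)
    return base + 30 * np.sin(2 * np.pi * t / period) + rng.normal(0, 5, n)

def features(x):                      # mean, variance, dominant frequency bin
    spec = np.abs(np.fft.rfft(x - x.mean()))
    return [x.mean(), x.var(), spec.argmax()]

kinds = ["hpl", "stream", "idle"]
X = [features(trace(k)) for k in kinds for _ in range(50)]
y = [k for k in kinds for _ in range(50)]

clf = RandomForestClassifier(random_state=0).fit(X[::2], y[::2])
print("held-out accuracy:", clf.score(X[1::2], y[1::2]))
```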





Near-optimal Detector for SWIPT-enabled Differential DF Relay Networks with SER Analysis. (arXiv:2005.03096v1 [cs.IT])

In this paper, we analyze the symbol error rate (SER) performance of simultaneous wireless information and power transfer (SWIPT) enabled three-node differential decode-and-forward (DDF) relay networks, which adopt the power splitting (PS) protocol at the relay. The use of non-coherent differential modulation eliminates the need for sending training symbols to estimate the instantaneous channel state information (CSI) at all network nodes, and therefore improves the power efficiency, as compared with coherent modulation. However, performance analysis results are not yet available for the state-of-the-art detectors such as the approximate maximum-likelihood detector. Existing works rely on Monte-Carlo simulation to show that there exists an optimal PS ratio that minimizes the overall SER. In this work, we propose a near-optimal detector with linear complexity with respect to the modulation size. We derive an accurate approximate SER expression, based on which the optimal PS ratio can be accurately estimated without requiring any Monte-Carlo simulation.





Exploratory Analysis of Covid-19 Tweets using Topic Modeling, UMAP, and DiGraphs. (arXiv:2005.03082v1 [cs.SI])

This paper illustrates five different techniques to assess the distinctiveness of topics, key terms and features, speed of information dissemination, and network behaviors for Covid-19 tweets. First, we use pattern matching and second, topic modeling through Latent Dirichlet Allocation (LDA) to generate twenty different topics that discuss case spread, healthcare workers, and personal protective equipment (PPE). One topic specific to U.S. cases would uptick immediately after live White House Coronavirus Task Force briefings, implying that many Twitter users are paying attention to government announcements. We contribute machine learning methods not previously reported in the Covid-19 Twitter literature. This includes our third method, Uniform Manifold Approximation and Projection (UMAP), which identifies unique clustering behavior of distinct topics to improve our understanding of important themes in the corpus and help assess the quality of generated topics. Fourth, we calculated retweeting times to understand how fast information about Covid-19 propagates on Twitter. Our analysis indicates that the median retweeting time of Covid-19 for a sample corpus in March 2020 was 2.87 hours, approximately 50 minutes faster than repostings from Chinese social media about H7N9 in March 2013. Lastly, we sought to understand retweet cascades by visualizing the connections of users over time from fast to slow retweeting. As the time to retweet increases, the density of connections also increases; in our sample, we found distinct users dominating the attention of Covid-19 retweeters. One of the simplest highlights of this analysis is that early-stage descriptive methods like regular expressions can successfully identify high-level themes which were consistently verified as important through every subsequent analysis.
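The topic-modeling step can be sketched with scikit-learn's LDA; the five placeholder tweets below stand in for the real Covid-19 corpus:

```python
# LDA topic-modeling sketch: fit a small topic model and print the top terms
# per topic. The tweets are placeholders for a real corpus.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

tweets = [
    "cases rising in new york hospitals",
    "nurses need ppe masks and gloves",
    "white house task force briefing today",
    "new cases reported after briefing",
    "donate masks to healthcare workers",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(tweets)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = vec.get_feature_names_out()
for i, comp in enumerate(lda.components_):
    top = [terms[j] for j in comp.argsort()[-4:][::-1]]
    print(f"topic {i}: {', '.join(top)}")
```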





Fault Tree Analysis: Identifying Maximum Probability Minimal Cut Sets with MaxSAT. (arXiv:2005.03003v1 [cs.AI])

In this paper, we present a novel MaxSAT-based technique to compute Maximum Probability Minimal Cut Sets (MPMCSs) in fault trees. We model the MPMCS problem as a Weighted Partial MaxSAT problem and solve it using a parallel SAT-solving architecture. The results obtained with our open source tool indicate that the approach is effective and efficient.
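A hedged sketch of that encoding with PySAT's RC2 solver, on a made-up fault tree TOP = A AND (B OR C); probabilities enter as integer-scaled negative log weights on soft clauses:

```python
# Weighted Partial MaxSAT sketch: hard clauses say the top event must be
# caused; each soft clause penalizes selecting a basic event by ~ -log(p),
# so the optimum is a maximum-probability (and minimal) cut set.
# Requires `pip install python-sat`.
import math
from pysat.formula import WCNF
from pysat.examples.rc2 import RC2

A, B, C = 1, 2, 3                 # basic-event variables
prob = {A: 0.9, B: 0.1, C: 0.4}   # made-up failure probabilities

wcnf = WCNF()
wcnf.append([A])                  # hard: TOP requires A
wcnf.append([B, C])               # hard: TOP requires B or C
for e, p in prob.items():         # soft: leaving e out saves weight -log(p)
    wcnf.append([-e], weight=round(-1000 * math.log(p)))

with RC2(wcnf) as rc2:
    model = rc2.compute()
cut_set = [e for e in (A, B, C) if e in model]
print("MPMCS:", cut_set)          # [1, 3] -> {A, C}, probability 0.9 * 0.4
```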





What Soccer Was Like When Retired Soccer Star Briana Scurry First Started Playing

Soccer great Briana Scurry started playing soccer at 12 on an all-boys team and in the goal — the "safest" position for a girl ...





Retired Soccer Star Briana Scurry on Sharing "Her Hell"

For a long time after her injury, soccer great Briana Scurry "hid her hell." Now, she knows that was not the right thing to do, and she wants to teach others to become more open and understanding about concussion.





Retired Soccer Star Briana Scurry on What a Concussion Feels Like

After she was hit, retired soccer star Briana Scurry felt off balance, was sensitive to light and sound, and felt intense pain in her head and neck.





Retired Soccer Star Briana Scurry: "This Has Been the Most Difficult Thing"

"The penalty kicks, the final goals in the Olympics, playing in front of the president, in front of 90,000 people ... that is what I was born to do ... and my brain is what I used to get myself there."





Retired Soccer Star Briana Scurry: Message to People Struggling After Concussions

If you don't feel right after a concussion, talk to your parents, your coach, your doctor ... get a second, third, fourth opinion ... Do not accept that you will not get better.





How Occipital Nerve Surgery Helped Retired Soccer Star Briana Scurry

Bilateral occipital nerve release surgery was the first significant step to relieving Scurry's debilitating post-concussive headaches.





Retired Soccer Star Briana Scurry on Girls Soccer and Concussion Protocols

One out of two girls will sustain a concussion playing soccer, but most will recover and return to play with ease. Nevertheless, awareness and education are key to keeping players safe.





Retired Soccer Star Briana Scurry on "Being Me Again"

"The Briana Scurry who could tune out 90,000 people during the World Cup and focus on a single ball and know I could keep it out of the goal ... that is who I want to be again."





Retired Soccer Star Briana Scurry: "My Brain Was Broken"

Retired soccer star Briana Scurry talks about how all her successes started with her mind and her ability to overcome obstacles. After her injury, she felt lost, broken.





Retired Soccer Star Briana Scurry on Her Post-Concussion Depression

Was her depression physiological from the hit to her head or because her professional soccer career was over?





When Retired Soccer Star Briana Scurry Knew Her Career Was Over

After several weeks of not playing because of a concussion and then failing several baseline tests, Briana Scurry became very worried.





Why Retired Soccer Star Briana Scurry Is Speaking Out About Concussion

As someone who had a phenomenal career in professional soccer and a career-ending head injury, Briana Scurry knows she can help other female — and male — athletes.





The Hit That Ended Briana Scurry's Soccer Career

"I knew I was in trouble ... I didn't know how much trouble," says retired soccer star Briana Scurry.





Briana Scurry's Letter to Young Soccer Players

Soccer great Briana Scurry writes an open letter to young athletes about her love for soccer and the importance of taking concussions seriously.





Build a Real-Time Phone Conversation Analytics

What Is Real-Time Conversation Analytics?

In contrast to post-call conversation analytics, which provides insights after the fact, real-time call conversation analytics surfaces those insights while the call is still happening.

In this blog, I will walk through the essential steps to build a web app that can analyze call conversations in real-time to assist an agent. Once we’re finished, we’ll have an app which will:





Real time safety management system and method

A system and method assesses and manages risk of an operation of a user. A rules engine of computer executable instructions stored in the storage device determines at least one of a safety risk measurement based on key performance indicators, an operational safety risk measurement for the operation as a function of the operational safety risk measurement information stored in a storage device and/or a conditional safety risk measurement for the operation as a function of the conditional safety risk measurement information stored in the storage device. A processor connected to the storage device executes the rules engine. An output interface connected to the processor indicates the determined safety risk for the operation.





Process analysis

An apparatus and method are disclosed for analysing a process. An exemplary method includes: generating a process template; and determining a probabilistic model specifying the process template. The method can include use of task nodes for tasks of the process; observables nodes for observables that may be caused by performance of the tasks; and a background activities node, wherein observables may further be caused by background activities of the background node. The method can include measuring values of an observable corresponding to one of the observables nodes; and updating a probabilistic estimate of the process state using the measured values.
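A tiny numerical illustration of that template, assuming a single task node, a background-activities node, and a noisy-OR combination of causes:

```python
# Noisy-OR update sketch: a task and background activity are possible causes
# of an observable; seeing the observable updates the posterior that the
# task was performed. All probabilities are illustrative.
def posterior_task(prior_task, p_obs_given_task, p_obs_given_background):
    """P(task performed | observable seen)."""
    # noisy-OR likelihood of the observation when the task was / was not done
    p_obs_if_task = 1 - (1 - p_obs_given_task) * (1 - p_obs_given_background)
    p_obs_if_not = p_obs_given_background
    joint_task = prior_task * p_obs_if_task
    joint_not = (1 - prior_task) * p_obs_if_not
    return joint_task / (joint_task + joint_not)

# an observable the task causes 80% of the time, background only 5%
print(posterior_task(prior_task=0.3, p_obs_given_task=0.8,
                     p_obs_given_background=0.05))   # ~0.87
```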





Systems and methods for analysis of network equipment command line interface (CLI) and runtime management of user interface (UI) generation for same

Systems and methods are disclosed that may be implemented for network management system (NMS) configuration management support for network devices using a learning and natural language processing application to capture the usage and behavior of the Command Line Interface (CLI) of a network device with the aid of a CLI knowledge model, which in one example may be ontology-based.





Optimization to identify nearest objects in a dataset for data analysis

In one embodiment, a plurality of objects associated with a dataset and a specified number of nearest objects to be identified are received. The received objects are sorted in a structured format. Further, a key object and a number of adjacent objects corresponding to the key object are selected from the sorted plurality of objects, wherein the number of adjacent objects is selected based on the specified number of nearest objects to be identified. Furthermore, distances between the key object and the number of adjacent objects are determined to identify the specified number of nearest objects, wherein the distances are determined until the specified number of nearest objects is identified. Based on the determined distances, the specified number of nearest objects in the dataset is identified for data analysis.
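For one-dimensional keys, the described procedure reduces to a sorted-order neighborhood expansion; a minimal sketch (the data and `nearest` helper are illustrative):

```python
# Nearest-objects sketch: sort once, then examine only the objects adjacent
# to the key in sort order, expanding outward until k neighbors are found.
import bisect

def nearest(objects, key_obj, k):
    objs = sorted(objects)                      # the structured (sorted) format
    i = bisect.bisect_left(objs, key_obj)
    lo, hi, found = i - 1, i, []
    while len(found) < k and (lo >= 0 or hi < len(objs)):
        left = abs(key_obj - objs[lo]) if lo >= 0 else float("inf")
        right = abs(key_obj - objs[hi]) if hi < len(objs) else float("inf")
        if left <= right:
            found.append(objs[lo]); lo -= 1     # next adjacent object, leftward
        else:
            found.append(objs[hi]); hi += 1     # next adjacent object, rightward
    return found

print(nearest([3, 10, 4, 8, 15, 1], key_obj=7, k=3))   # [8, 4, 10]
```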





Data mining and model generation using an in-database analytic flow generator

Embodiments are described for a system and method of providing a data miner that decouples the analytic flow solution components from the data source. An analytic-flow solution then couples with the target data source through a simple set of data source connector, table and transformation objects, to perform the requisite analytic flow function. As a result, the analytic-flow solution needs to be designed only once and can be re-used across multiple target data sources. The analytic flow can be modified and updated at one place and then deployed for use on various different target data sources.





Heterocyclyl pyrazolopyrimidine analogues as selective JAK inhibitors

The present invention relates to compounds of formula (I) wherein X1 to X5, Y, Z1 to Z3, and R have the meaning as cited in the description and the claims. Said compounds are useful as JAK inhibitors for the treatment or prophylaxis of immunological, inflammatory, autoimmune, allergic disorders, and immunologically-mediated diseases. The invention also relates to pharmaceutical compositions including said compounds, the preparation of such compounds as well as the use as medicaments.





Methods for forming lead zirconate titanate nanoparticles

Methods for forming lead zirconate titanate (PZT) nanoparticles are provided. The PZT nanoparticles are formed from a precursor solution, comprising a source of lead, a source of titanium, a source of zirconium, and a mineralizer, that undergoes a hydrothermal process. The size and morphology of the PZT nanoparticles are controlled, in part, by the heating schedule used during the hydrothermal process.





Solid ganaxolone compositions and methods for the making and use thereof

In certain embodiments, the invention is directed to a composition comprising stable particles comprising ganaxolone, wherein the volume weighted median diameter (D50) of the particles is from about 50 nm to about 500 nm.





Polymerization process and Raman analysis for olefin-based polymers

The invention provides a process for monitoring and/or adjusting a dispersion polymerization of an olefin-based polymer, the process comprising monitoring the concentration of the carbon-carbon unsaturations in the dispersion using Raman Spectroscopy. The invention also provides a process for polymerizing an olefin-based polymer, the process comprising polymerizing one or more monomer types, in the presence of at least one catalyst and at least one solvent, to form the polymer as a dispersed phase in the solvent; and monitoring the concentration of the carbon-carbon unsaturations in the dispersion using Raman Spectroscopy.





Nonvolatile memory device and bad area managing method thereof

Example embodiments relate to a bad area managing method of a nonvolatile memory device. The nonvolatile memory device may include a plurality of memory blocks, and each block may contain memory layers stacked on a substrate. According to example embodiments, a method includes accessing one of the memory blocks and judging whether the accessed memory block includes at least one memory layer containing a bad memory cell. If a bad memory cell is detected, the method may further include configuring the memory device to treat the at least one memory layer of the accessed memory block as a bad area.