base

Creating a Block-based Theme Using Block Templates

This post outlines the steps I took to create a block-based theme version of Twenty Twenty. Thanks to Kjell Reigstad for helping develop the theme and write this post. There’s been a lot of conversation around how theme development changes as Full Site Editing using Gutenberg becomes a reality. Block templates are an experimental feature …




base

How To Build A Vue Survey App Using Firebase Authentication And Database

https://www.smashingmagazine.com/2020/05/vue-survey-app-firebase-authentication-database/




base

Troops to receive Purple Hearts for injuries during Iranian missile barrage on al-Asad airbase in Iraq

There will be Purple Hearts awarded to troops injured during the Jan. 8 Iranian missile barrage on the al-Asad airbase in Iraq, a defense official told Military Times.




base

Should you use Userbase for your next static site?

During the winter 2020 Pointless Weekend, we built TrailBuddy (working app coming soon). Our team consisted of four developers, two project managers, two front-end developers, a digital analyst, a UXer, and a designer. In about 48 hours, we took an idea from Jeremy Field’s head to a (mostly) working app. We broke the project into two parts. First, a back-end that crunches trail, weather, and soil data. That data is exposed via a GraphQL API for a web app to consume.

While developers built the API, I built a static front end using Next.js. Famously, static front-ends don’t have a database, or a concept of “users.” A bit of functionality I wanted to add was saving favorite trails. I didn’t want to be hacky about it, so I needed some way to add users and a database. I knew it’d be hard for the developers to set this up as part of the API; they had their hands full with all the #soil-soil-soil-soil-soil (a Slack channel dedicated solely to figuring out our soil data problems—those were plentiful). I had been looking for an excuse to use Userbase, and this seemed like as good a time as any.

A textbook Userbase use case

“When would I use it?” The Userbase site lists these reasons:

  • If you want to build a web app without writing any backend code.
  • If you never want to see your users' data.
  • If you're tired of dealing with databases.
  • If you want to radically simplify your GDPR compliance.
  • And if you want to keep things really simple.

This was a perfect fit for my problem. I didn’t want to write any more backend code for this. I didn’t want to see our users’ data; I don’t care to know anyone’s favorite trails.* A nice bonus to not having users in our backend was not having to worry about keeping their data safe. We don’t have their data at all; it’s end-to-end encrypted by Userbase. We can offer a reasonable amount of privacy for free (well, for the price of using Userbase: $49 a year). I am not tired of dealing with databases, but I’d rather not. I don’t think anyone doesn’t want to simplify their GDPR compliance. Finally, given our tight timeline, I wanted nothing more than to keep things really simple.

A sign up form that I didn't have to write a back-end for

Using Userbase

Userbase can be tried for free, so I set aside thirty minutes or so to do a quick proof of concept to make sure this would work out for us. I made an account and followed their Quickstart. Userbase is a fundamentally easy tool to use, but their quickstart is everything I’d want out of a quickstart:

  • Written in the most vanilla way possible (just HTML and vanilla JS). This means I can adapt it to my needs, in this case React with Next.js.
  • Easy to follow: it does the most barebones tour of the functionality you can expect to get out of the SDK (software development kit). In other words, it is quick and it is a start.
  • It has a live demo and code samples you can download and run yourself.

It didn’t take long after that to integrate Userbase into our app with more help from their great docs. I debated whether to add code samples of what we did here, and I didn’t, because any reader would be better off using the great quickstart and docs Userbase provides—they are that clear, and that good. Depending on your use case, you’ll need to adapt the examples to your needs. For us, the trickiest things were creating a top-level authentication context to manage users in the app and a custom hook to encapsulate all the logic for setting, updating, and deleting favorite trails. Userbase’s SDK worked seamlessly for us.
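
To give a rough idea of what that shape can look like, here is a simplified sketch (not the code we shipped): a top-level authentication context and a favorites hook built on the userbase-js SDK. The app ID, database name, and field names are placeholders.

```jsx
// AuthContext.js: a top-level context that tracks the Userbase session.
import { createContext, useContext, useEffect, useState } from 'react';
import userbase from 'userbase-js';

const AuthContext = createContext(null);

export function AuthProvider({ children }) {
  const [user, setUser] = useState(null);

  useEffect(() => {
    // init() resumes an existing session if one exists.
    userbase.init({ appId: 'YOUR_USERBASE_APP_ID' }).then((session) => {
      if (session.user) setUser(session.user);
    });
  }, []);

  const signIn = (username, password) =>
    userbase.signIn({ username, password, rememberMe: 'local' }).then(setUser);
  const signOut = () => userbase.signOut().then(() => setUser(null));

  return (
    <AuthContext.Provider value={{ user, signIn, signOut }}>
      {children}
    </AuthContext.Provider>
  );
}

export const useAuth = () => useContext(AuthContext);

// useFavorites.js: a hook that keeps a "favorites" database in sync.
// Assumes a signed-in user; each item is a trail object of your choosing.
export function useFavorites() {
  const [favorites, setFavorites] = useState([]);

  useEffect(() => {
    // changeHandler fires on every change with [{ itemId, item }, ...].
    userbase.openDatabase({
      databaseName: 'favorites',
      changeHandler: (items) => setFavorites(items),
    });
  }, []);

  const addFavorite = (trail) =>
    userbase.insertItem({ databaseName: 'favorites', item: trail });
  const removeFavorite = (itemId) =>
    userbase.deleteItem({ databaseName: 'favorites', itemId });

  return { favorites, addFavorite, removeFavorite };
}
```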

A log in form that I didn't have to write a back-end for

Is Userbase for you?

Maybe. I am definitely a fan, so much so that this blog post probably reads like an advert. Userbase saved me a ton of time on this project. It reminded me of “The All Powerful Front End Developer” talk by Chris Coyier. I don’t fully subscribe to all the ideas in that talk, but it is nice to have “serverless” tools like Userbase, and all the new JAMstacky things. There are limits to the Userbase serverless experience in terms of scale and control. Obviously, relying on a third party for something always carries some (probably small) risk—it’s worth noting Userbase includes a note on their pricing page that says “You can host it yourself always under your control, or we can run it for you for a full serverless experience.” Still, I wouldn’t hesitate to use it in future projects.

One of the great things about Viget and Pointless Weekend is the opportunity to try new things. For me, that was Next.js and Userbase for TrailBuddy. It doesn’t always work out (in fact, this is my first Pointless Weekend where a risk hasn’t blown up in my face), but it is always fun. Getting to try out Userbase and beginning to think about how we may use it in the future made the weekend worthwhile for me, and it made my job on this project much more enjoyable.

*I will write a future post about privacy-conscious analytics in TrailBuddy when I’ve figured that out. I am looking into Fathom Analytics for that.



  • Code
  • Front-end Engineering

base

How To Build A Vue Survey App Using Firebase Authentication And Database

In this tutorial, you’ll be building a Survey App, where we’ll learn to validate our users’ form data, implement authentication in Vue, and receive survey data using Vue and Firebase (a BaaS platform). As we build this app, we’ll learn how to handle form validation for different kinds of data, including reaching out to the backend to check if an email is already taken, even before the user submits the form during sign-up.
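
As one possible way to implement that email check (a sketch, not necessarily the tutorial's approach; it uses the namespaced Firebase JS SDK of that era, and the config values are placeholders):

```js
// Check whether an email address already has an account in Firebase Auth,
// so the sign-up form can warn the user before submission.
import firebase from 'firebase/app';
import 'firebase/auth';

// Placeholder config: substitute your own Firebase project's credentials.
firebase.initializeApp({ apiKey: '...', authDomain: '...', projectId: '...' });

// Resolves to true when at least one sign-in method exists for the address,
// i.e. the email is already taken.
export async function isEmailTaken(email) {
  const methods = await firebase.auth().fetchSignInMethodsForEmail(email);
  return methods.length > 0;
}

// Example: call it from a Vue validator or a blur handler on the email field.
// if (await isEmailTaken(form.email)) errors.email = 'Email already in use';
```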




base

Expansion of Iterated Stratonovich Stochastic Integrals of Arbitrary Multiplicity Based on Generalized Iterated Fourier Series Converging Pointwise. (arXiv:1801.00784v9 [math.PR] UPDATED)

The article is devoted to the expansion of iterated Stratonovich stochastic integrals of arbitrary multiplicity $k$ $(k \in \mathbb{N})$ based on the generalized iterated Fourier series. The case of Fourier-Legendre series as well as the case of trigonometric Fourier series are considered in detail. The obtained expansion provides a possibility to represent the iterated Stratonovich stochastic integral in the form of an iterated series of products of standard Gaussian random variables. Convergence in the mean of degree $2n$ $(n \in \mathbb{N})$ of the expansion is proved. Some modifications of the mentioned expansion were derived for the case $k=2$. One of them is based on multiple trigonometric Fourier series converging almost everywhere in the square $[t, T]^2$. The results of the article can be applied to the numerical solution of Ito stochastic differential equations.




base

GraphBLAST: A High-Performance Linear Algebra-based Graph Framework on the GPU. (arXiv:1908.01407v3 [cs.DC] CROSS LISTED)

High-performance implementations of graph algorithms are difficult to achieve on new parallel hardware such as GPUs because of three challenges: (1) the difficulty of coming up with graph building blocks, (2) load imbalance on parallel hardware, and (3) graph problems having low arithmetic intensity. To address these challenges, GraphBLAS is an innovative, ongoing effort by the graph analytics community to propose building blocks based on sparse linear algebra, which will allow graph algorithms to be expressed in a performant, succinct, composable and portable manner. In this paper, we examine the performance challenges of a linear algebra-based approach to building graph frameworks and describe new design principles for overcoming these bottlenecks. Among the new design principles is exploiting input sparsity, which allows users to write graph algorithms without specifying push and pull direction. Exploiting output sparsity allows users to tell the backend which values of the output they do not want computed in a single vectorized computation. Load-balancing is an important feature for balancing work amongst parallel workers. We describe the important load-balancing features for handling graphs with different characteristics. The design principles described in this paper have been implemented in "GraphBLAST", the first open-source linear algebra-based graph framework on GPU targeting high-performance computing. The results show that on a single GPU, GraphBLAST has on average at least an order of magnitude speedup over previous GraphBLAS implementations SuiteSparse and GBTL, comparable performance to the fastest GPU hardwired primitives and shared-memory graph frameworks Ligra and Gunrock, and better performance than any other GPU graph framework, while offering a simpler and more concise programming model.




base

Temporal Event Segmentation using Attention-based Perceptual Prediction Model for Continual Learning. (arXiv:2005.02463v2 [cs.CV] UPDATED)

Temporal event segmentation of a long video into coherent events requires a high-level understanding of activities' temporal features. The event segmentation problem has been tackled by researchers in an offline training scheme, either by providing full or weak supervision through manually annotated labels or by self-supervised epoch-based training. In this work, we present a continual learning perceptual prediction framework (influenced by cognitive psychology) capable of temporal event segmentation through understanding of the underlying representation of objects within individual frames. Our framework also outputs attention maps which effectively localize and track event-causing objects in each frame. The model is tested on a wildlife monitoring dataset in a continual training manner, resulting in an $80\%$ recall rate at a $20\%$ false positive rate for frame-level segmentation. Activity-level testing has yielded an $80\%$ activity recall rate for one false activity detection every 50 minutes.




base

Quantum arithmetic operations based on quantum Fourier transform on signed integers. (arXiv:2005.00443v2 [cs.IT] UPDATED)

The quantum Fourier transform brings efficiency in many respects, especially in resource usage, for most operations on quantum computers. In this study, the existing QFT-based and non-QFT-based quantum arithmetic operations are examined. The capabilities of QFT-based addition and multiplication are improved with some modifications. The proposed operations are compared with the nearest quantum arithmetic operations. Furthermore, novel QFT-based subtraction and division operations are presented. The proposed arithmetic operations can perform non-modular operations on all signed numbers without any limitation while using fewer resources. In addition, novel quantum circuits for two's complement, absolute value and comparison operations are also presented using the proposed QFT-based addition and subtraction operations.




base

On-board Deep-learning-based Unmanned Aerial Vehicle Fault Cause Detection and Identification. (arXiv:2005.00336v2 [eess.SP] UPDATED)

With the increase in use of Unmanned Aerial Vehicles (UAVs)/drones, it is important to detect and identify causes of failure in real time for proper recovery from a potential crash-like scenario or post-incident forensics analysis. The cause of a crash could be either a fault in the sensor/actuator system, physical damage/attack, or a cyber attack on the drone's software. In this paper, we propose novel architectures based on deep Convolutional and Long Short-Term Memory Neural Networks (CNNs and LSTMs) to detect (via Autoencoder) and classify drone mis-operations based on sensor data. The proposed architectures are able to learn high-level features automatically from the raw sensor data and learn the spatial and temporal dynamics in the sensor data. We validate the proposed deep-learning architectures via simulations and experiments on a real drone. Empirical results show that our solution is able to detect drone mis-operations with over 90% accuracy and classify their various types with about 99% accuracy on simulation data and up to 88% accuracy on experimental data.




base

Transfer Learning for EEG-Based Brain-Computer Interfaces: A Review of Progress Made Since 2016. (arXiv:2004.06286v3 [cs.HC] UPDATED)

A brain-computer interface (BCI) enables a user to communicate with a computer directly using brain signals. Electroencephalograms (EEGs) used in BCIs are weak, easily contaminated by interference and noise, non-stationary for the same subject, and varying across different subjects and sessions. Therefore, it is difficult to build a generic pattern recognition model in an EEG-based BCI system that is optimal for different subjects, during different sessions, for different devices and tasks. Usually, a calibration session is needed to collect some training data for a new subject, which is time consuming and user unfriendly. Transfer learning (TL), which utilizes data or knowledge from similar or relevant subjects/sessions/devices/tasks to facilitate learning for a new subject/session/device/task, is frequently used to reduce the amount of calibration effort. This paper reviews journal publications on TL approaches in EEG-based BCIs in the last few years, i.e., since 2016. Six paradigms and applications -- motor imagery, event-related potentials, steady-state visual evoked potentials, affective BCIs, regression problems, and adversarial attacks -- are considered. For each paradigm/application, we group the TL approaches into cross-subject/session, cross-device, and cross-task settings and review them separately. Observations and conclusions are made at the end of the paper, which may point to future research directions.




base

Defending Hardware-based Malware Detectors against Adversarial Attacks. (arXiv:2005.03644v1 [cs.CR])

In the era of Internet of Things (IoT), Malware has been proliferating exponentially over the past decade. Traditional anti-virus software is ineffective against modern complex Malware. In order to address this challenge, researchers have proposed Hardware-assisted Malware Detection (HMD) using Hardware Performance Counters (HPCs). The HPCs are used to train a set of Machine learning (ML) classifiers, which in turn are used to distinguish benign programs from Malware. Recently, adversarial attacks have been designed by introducing perturbations in the HPC traces using an adversarial sample predictor to misclassify a program for specific HPCs. These attacks are designed with the basic assumption that the attacker is aware of the HPCs being used to detect Malware. Since modern processors consist of hundreds of HPCs, restricting to only a few of them for Malware detection aids the attacker. In this paper, we propose a Moving target defense (MTD) against this adversarial attack by designing multiple ML classifiers trained on different sets of HPCs. The MTD randomly selects a classifier, thus confusing the attacker about the HPCs or the number of classifiers applied. We have developed an analytical model which proves that the probability of an attacker guessing the perfect HPC-classifier combination for MTD is extremely low (in the range of $10^{-1864}$ for a system with 20 HPCs). Our experimental results show that the proposed defense is able to improve the classification accuracy of HPC traces that have been modified through an adversarial sample generator by up to 31.5%, for a near-perfect (99.4%) restoration of the original accuracy.




base

Seismic Shot Gather Noise Localization Using a Multi-Scale Feature-Fusion-Based Neural Network. (arXiv:2005.03626v1 [cs.CV])

Deep learning-based models, such as convolutional neural networks, have advanced various segments of computer vision. However, this technology is rarely applied to the seismic shot gather noise localization problem. This letter presents an investigation of the effectiveness of a multi-scale feature-fusion-based network for seismic shot gather noise localization. Herein, we describe the following: (1) the construction of a real-world dataset of seismic noise localization based on 6,500 seismograms; (2) a multi-scale feature-fusion-based detector that uses MobileNet combined with the Feature Pyramid Net as the backbone; and (3) the Single Shot multi-box detector for box classification/regression. Additionally, we propose the use of the Focal Loss function, which improves the detector's prediction accuracy. The proposed detector achieves an AP@0.5 of 78.67\% in our empirical evaluation.




base

VM placement over WDM-TDM AWGR PON Based Data Centre Architecture. (arXiv:2005.03590v1 [cs.NI])

Passive optical networks (PON) can play a vital role in data centres and access fog solutions by providing scalable, cost and energy efficient architectures. This paper proposes a Mixed Integer Linear Programming (MILP) model to optimize the placement of virtual machines (VMs) over an energy efficient WDM-TDM AWGR PON based data centre architecture. In this optimization, the use of VMs and their requirements affect the optimum number of servers utilized in the data centre when minimizing the power consumption and enabling more efficient utilization of servers is considered. Two power consumption minimization objectives were examined for up to 20 VMs with different computing and networking requirements. The results indicate that considering the minimization of the processing and networking power consumption in the allocation of VMs in the WDM-TDM AWGR PON can reduce the networking power consumption by up to 70% compared to the minimization of the processing power consumption.




base

QuickSync: A Quickly Synchronizing PoS-Based Blockchain Protocol. (arXiv:2005.03564v1 [cs.CR])

To implement a blockchain, we need a blockchain protocol for all the nodes to follow. To design a blockchain protocol, we need a block publisher selection mechanism and a chain selection rule. In Proof-of-Stake (PoS) based blockchain protocols, the block publisher selection mechanism selects the node to publish the next block based on the relative stake held by the node. However, PoS protocols may be vulnerable to fully adaptive corruptions. In the literature, researchers address this issue at the cost of performance.

In this paper, we propose a novel PoS-based blockchain protocol, QuickSync, to achieve security against fully adaptive corruptions without compromising on performance. We propose a metric called block power, a value defined for each block, derived from the output of the verifiable random function based on the digital signature of the block publisher. With this metric, we compute chain power, the sum of block powers of all the blocks comprising the chain, for all the valid chains. These metrics are a function of the block publisher's stake to enable the PoS aspect of the protocol. The chain selection rule selects the chain with the highest chain power as the one to extend. This chain selection rule hence determines the selected block publisher of the previous block. When we use metrics to define the chain selection rule, it may lead to vulnerabilities against Sybil attacks. QuickSync uses a Sybil attack resistant function implemented using histogram matching. We prove that QuickSync satisfies common prefix, chain growth, and chain quality properties and hence it is secure. We also show that it is resilient to different types of adversarial attack strategies. Our analysis demonstrates that QuickSync performs better than Bitcoin by an order of magnitude on both transactions per second and time to finality, and better than Ouroboros v1 by a factor of three on time to finality.
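
As a toy sketch of the chain selection rule described above (not the authors' implementation; blockPower here stands in for the paper's function of the VRF output and the publisher's stake, which the abstract does not spell out):

```js
// Chain power is the sum of the block powers of all blocks in the chain;
// the rule extends the valid chain with the highest chain power.
function blockPower(block) {
  // Placeholder: in QuickSync this value is derived from the verifiable
  // random function output (based on the publisher's signature) and stake.
  return block.power;
}

function chainPower(chain) {
  return chain.reduce((sum, block) => sum + blockPower(block), 0);
}

function selectChain(validChains) {
  // Assumes at least one valid chain exists.
  return validChains.reduce((best, chain) =>
    chainPower(chain) > chainPower(best) ? chain : best
  );
}
```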




base

CounQER: A System for Discovering and Linking Count Information in Knowledge Bases. (arXiv:2005.03529v1 [cs.IR])

Predicate constraints of general-purpose knowledge bases (KBs) like Wikidata, DBpedia and Freebase are often limited to subproperty, domain and range constraints. In this demo we showcase CounQER, a system that illustrates the alignment of counting predicates, like staffSize, and enumerating predicates, like workInstitution^{-1}. In the demonstration session, attendees can inspect these alignments, and will learn about the importance of these alignments for KB question answering and curation. CounQER is available at https://counqer.mpi-inf.mpg.de/spo.




base

Subquadratic-Time Algorithms for Normal Bases. (arXiv:2005.03497v1 [cs.SC])

For any finite Galois field extension $\mathsf{K}/\mathsf{F}$, with Galois group $G = \mathrm{Gal}(\mathsf{K}/\mathsf{F})$, there exists an element $\alpha \in \mathsf{K}$ whose orbit $G \cdot \alpha$ forms an $\mathsf{F}$-basis of $\mathsf{K}$. Such an $\alpha$ is called a normal element and $G \cdot \alpha$ is a normal basis. We introduce a probabilistic algorithm for testing whether a given $\alpha \in \mathsf{K}$ is normal, when $G$ is either a finite abelian or a metacyclic group. The algorithm is based on the fact that deciding whether $\alpha$ is normal can be reduced to deciding whether $\sum_{g \in G} g(\alpha)\, g \in \mathsf{K}[G]$ is invertible; it requires a slightly subquadratic number of operations. Once we know that $\alpha$ is normal, we show how to perform conversions between the working basis of $\mathsf{K}/\mathsf{F}$ and the normal basis with the same asymptotic cost.




base

Detection and Feeder Identification of the High Impedance Fault at Distribution Networks Based on Synchronous Waveform Distortions. (arXiv:2005.03411v1 [eess.SY])

Diagnosis of high impedance faults (HIFs) is a challenge for today's distribution network protections. The fault current of a HIF is much lower than that of a normal load, and the fault feature is significantly affected by fault scenarios. A detection and feeder identification algorithm for HIFs is proposed in this paper, based on high-resolution and synchronous waveform data. In the algorithm, an interval slope is defined to describe the waveform distortions, which guarantees a uniform feature description under various HIF nonlinearities and noise interferences. For three typical types of network neutrals, i.e., isolated neutral, resonant neutral, and low-resistor-earthed neutral, differences of the distorted components between the zero-sequence currents of healthy and faulty feeders are mathematically deduced, respectively. As a result, the proposed criterion, which is based on the distortion relationships between zero-sequence currents of feeders and the zero-sequence voltage at the substation, is theoretically supported. 28 HIFs grounded to various materials are tested in a 10 kV distribution network with three neutral types, and are utilized to verify the effectiveness of the proposed algorithm.




base

A LiDAR-based real-time capable 3D Perception System for Automated Driving in Urban Domains. (arXiv:2005.03404v1 [cs.RO])

We present a LiDAR-based and real-time capable 3D perception system for automated driving in urban domains. The hierarchical system design is able to model stationary and movable parts of the environment simultaneously and under real-time conditions. Our approach extends the state of the art by innovative in-detail enhancements for perceiving road users and drivable corridors even in case of non-flat ground surfaces and overhanging or protruding elements. We describe a runtime-efficient pointcloud processing pipeline, consisting of adaptive ground surface estimation, 3D clustering and motion classification stages. Based on the pipeline's output, the stationary environment is represented in a multi-feature mapping and fusion approach. Movable elements are represented in an object tracking system capable of using multiple reference points to account for viewpoint changes. We further enhance the tracking system by explicit consideration of occlusion and ambiguity cases. Our system is evaluated using a subset of the TUBS Road User Dataset. We enhance common performance metrics by considering application-driven aspects of real-world traffic scenarios. The perception system shows impressive results and is able to cope with the addressed scenarios while still preserving real-time capability.




base

Regression Forest-Based Atlas Localization and Direction Specific Atlas Generation for Pancreas Segmentation. (arXiv:2005.03345v1 [cs.CV])

This paper proposes a fully automated atlas-based pancreas segmentation method from CT volumes, utilizing atlas localization by regression forest and atlas generation using blood vessel information. Previous probabilistic atlas-based pancreas segmentation methods cannot deal well with the spatial variations that are commonly found in the pancreas. Also, shape variations are not represented by an averaged atlas. We propose a fully automated pancreas segmentation method that deals with the two types of variations mentioned above. The position and size of the pancreas are estimated using a regression forest technique. After localization, a patient-specific probabilistic atlas is generated based on a new image similarity that reflects the blood vessel position and direction information around the pancreas. We segment the pancreas using the EM algorithm with the atlas as prior, followed by graph-cut. In evaluation results using 147 CT volumes, the Jaccard index and the Dice overlap of the proposed method were 62.1% and 75.1%, respectively. Although we automated all of the segmentation processes, the segmentation results were superior to the other state-of-the-art methods in terms of Dice overlap.




base

Global Distribution of Google Scholar Citations: A Size-independent Institution-based Analysis. (arXiv:2005.03324v1 [cs.DL])

Most currently available schemes for performance-based ranking of universities or research organizations, such as Quacquarelli Symonds (QS), Times Higher Education (THE), and the Shanghai-based Academic Ranking of World Universities (ARWU), use a variety of criteria that include productivity, citations, awards, reputation, etc., while Leiden and Scimago use only bibliometric indicators. The research performance evaluation in the aforesaid cases is based on bibliometric data from Web of Science or Scopus, which are commercially available priced databases. The coverage includes peer-reviewed journals and conference proceedings. Google Scholar (GS), on the other hand, provides a free and open alternative for obtaining citations of papers available on the net (though it is not clear exactly which journals are covered). Citations are collected automatically from the net and also added to self-created individual author profiles under Google Scholar Citations (GSC). This data was used by the Webometrics Lab, Spain to create a ranked list of 4000+ institutions in 2016, based on citations from only the top 10 individual GSC profiles in each organization. (GSC excludes the top paper for reasons explained in the text; the simple selection procedure makes the ranked list size-independent, as claimed by the Cybermetrics Lab.) Using this data (Transparent Ranking TR, 2016), we find the regional and country-wise distribution of GS-TR citations. The size-independent ranked list is subdivided into deciles of 400 institutions each, and the number of institutions and citations of each country is obtained for each decile. We test for correlation between institutional ranks between GS TR and the other ranking schemes for the top 20 institutions.




base

Database Traffic Interception for Graybox Detection of Stored and Context-Sensitive XSS. (arXiv:2005.03322v1 [cs.CR])

XSS is a security vulnerability that permits injecting malicious code into the client side of a web application. In the simplest situations, XSS vulnerabilities arise when a web application includes the user input in the web output without due sanitization. Such simple XSS vulnerabilities can be detected fairly reliably with blackbox scanners, which inject malicious payload into sensitive parts of HTTP requests and look for the reflected values in the web output.

Contemporary blackbox scanners are not effective against stored XSS vulnerabilities, where the malicious payload in an HTTP response originates from the database storage of the web application, rather than from the associated HTTP request. Similarly, many blackbox scanners do not systematically handle context-sensitive XSS vulnerabilities, where the user input is included in the web output after a transformation that prevents the scanner from recognizing the original value, but does not sanitize the value sufficiently. Among the combination of two basic data sources (stored vs reflected) and two basic vulnerability patterns (context sensitive vs not so), only one is therefore tested systematically by state-of-the-art blackbox scanners.

Our work focuses on systematic coverage of the three remaining combinations. We present a graybox mechanism that extends a general purpose database to cooperate with our XSS scanner, reporting and injecting the test inputs at the boundary between the database and the web application. Furthermore, we design a mechanism for identifying the injected inputs in the web output even after encoding by the web application, and check whether the encoding sanitizes the injected inputs correctly in the respective browser context. We evaluate our approach on eight mature and technologically diverse web applications, discovering previously unknown and exploitable XSS flaws in each of those applications.




base

Interval type-2 fuzzy logic system based similarity evaluation for image steganography. (arXiv:2005.03310v1 [cs.MM])

Similarity measure, also called information measure, is a concept used to distinguish different objects. It has been studied from different contexts by employing mathematical, psychological, and fuzzy approaches. Image steganography is the art of hiding secret data into an image in such a way that it cannot be detected by an intruder. In image steganography, hiding secret data in the plain or non-edge regions of the image is significant due to the high similarity and redundancy of the pixels in their neighborhood. However, the similarity measure of the neighboring pixels, i.e., their proximity in color space, is perceptual rather than mathematical. This paper proposes an interval type 2 fuzzy logic system (IT2 FLS) to determine the similarity between the neighboring pixels by involving an instinctive human perception through a rule-based approach. The pixels of the image having high similarity values, calculated using the proposed IT2 FLS similarity measure, are selected for embedding via the least significant bit (LSB) method. We term the proposed procedure of steganography as IT2 FLS LSB method. Moreover, we have developed two more methods, namely, type 1 fuzzy logic system based least significant bits (T1FLS LSB) and Euclidean distance based similarity measures for least significant bit (SM LSB) steganographic methods. Experimental simulations were conducted for a collection of images and quality index metrics, such as PSNR, UQI, and SSIM are used. All the three steganographic methods are applied on datasets and the quality metrics are calculated. The obtained stego images and results are shown and thoroughly compared to determine the efficacy of the IT2 FLS LSB method. Finally, we have done a comparative analysis of the proposed approach with the existing well-known steganographic methods to show the effectiveness of our proposed steganographic method.




base

Deep Learning based Person Re-identification. (arXiv:2005.03293v1 [cs.CV])

Automated person re-identification in a multi-camera surveillance setup is very important for effective tracking and monitoring crowd movement. In the recent years, few deep learning based re-identification approaches have been developed which are quite accurate but time-intensive, and hence not very suitable for practical purposes. In this paper, we propose an efficient hierarchical re-identification approach in which color histogram based comparison is first employed to find the closest matches in the gallery set, and next deep feature based comparison is carried out using Siamese network. Reduction in search space after the first level of matching helps in achieving a fast response time as well as improving the accuracy of prediction by the Siamese network by eliminating vastly dissimilar elements. A silhouette part-based feature extraction scheme is adopted in each level of hierarchy to preserve the relative locations of the different body structures and make the appearance descriptors more discriminating in nature. The proposed approach has been evaluated on five public data sets and also a new data set captured by our team in our laboratory. Results reveal that it outperforms most state-of-the-art approaches in terms of overall accuracy.




base

Mortar-based entropy-stable discontinuous Galerkin methods on non-conforming quadrilateral and hexahedral meshes. (arXiv:2005.03237v1 [math.NA])

High-order entropy-stable discontinuous Galerkin (DG) methods for nonlinear conservation laws reproduce a discrete entropy inequality by combining entropy conservative finite volume fluxes with summation-by-parts (SBP) discretization matrices. In the DG context, on tensor product (quadrilateral and hexahedral) elements, SBP matrices are typically constructed by collocating at Lobatto quadrature points. Recent work has extended the construction of entropy-stable DG schemes to collocation at more accurate Gauss quadrature points.

In this work, we extend entropy-stable Gauss collocation schemes to non-conforming meshes. Entropy-stable DG schemes require computing entropy conservative numerical fluxes between volume and surface quadrature nodes. On conforming tensor product meshes where volume and surface nodes are aligned, flux evaluations are required only between "lines" of nodes. However, on non-conforming meshes, volume and surface nodes are no longer aligned, resulting in a larger number of flux evaluations. We reduce this expense by introducing an entropy-stable mortar-based treatment of non-conforming interfaces via a face-local correction term, and provide necessary conditions for high-order accuracy. Numerical experiments in both two and three dimensions confirm the stability and accuracy of this approach.




base

OTFS-NOMA based on SCMA. (arXiv:2005.03216v1 [cs.IT])

Orthogonal Time Frequency Space (OTFS) is a 2-D modulation technique that has the potential to overcome the challenges faced by orthogonal frequency division multiplexing (OFDM) in high Doppler environments. The performance of OTFS in a multi-user scenario with orthogonal multiple access (OMA) techniques has been impressive. Due to the requirement of massive connectivity in 5G and beyond, it is immensely essential to devise and examine the OTFS system with the existing Non-orthogonal Multiple Access (NOMA) techniques.

In this paper, we propose a multi-user OTFS system based on a code-domain NOMA technique called Sparse Code Multiple Access (SCMA). This system is referred to as the OTFS-SCMA model. The framework for OTFS-SCMA is designed for both downlink and uplink. First, the sparse SCMA codewords are strategically placed on the delay-Doppler plane such that the overall overloading factor of the OTFS-SCMA system is equal to that of the underlying basic SCMA system. The receiver in downlink performs the detection in two sequential phases: first, the conventional OTFS detection using the method of linear minimum mean square error (LMMSE), and then the conventional SCMA detection. For uplink, we propose a single-phase detector based on message-passing algorithm (MPA) to detect the multiple users' symbols. The performance of the proposed OTFS-SCMA system is validated through extensive simulations both in downlink and uplink. We consider delay-Doppler planes of different parameters and various SCMA systems of overloading factor up to 200$\%$. The performance of OTFS-SCMA is compared with those of existing OTFS-OMA techniques. The comprehensive investigation demonstrates the usefulness of OTFS-SCMA in future wireless communication standards.




base

Lattice-based public key encryption with equality test in standard model, revisited. (arXiv:2005.03178v1 [cs.CR])

Public key encryption with equality test (PKEET) allows testing whether two ciphertexts are generated from the same message or not. PKEET is a potential candidate for many practical applications like efficient data management on encrypted databases. The potential applicability of PKEET has led to intensive research since its first instantiation by Yang et al. (CT-RSA 2010). Most of the follow-up constructions are secure in the random oracle model. Moreover, the security of all the concrete constructions is based on number-theoretic hardness assumptions which are vulnerable in the post-quantum era. Recently, Lee et al. (ePrint 2016) proposed a generic construction of PKEET schemes in the standard model, which makes it possible to yield the first instantiation of PKEET schemes based on lattices. Their method is to use a $2$-level hierarchical identity-based encryption (HIBE) scheme together with a one-time signature scheme. In this paper, we propose, for the first time, a direct construction of a PKEET scheme based on the hardness assumption of lattices in the standard model. More specifically, the security of the proposed scheme is reduced to the hardness of the Learning With Errors problem.




base

Fact-based Dialogue Generation with Convergent and Divergent Decoding. (arXiv:2005.03174v1 [cs.CL])

Fact-based dialogue generation is the task of generating a human-like response based on both dialogue context and factual texts. Various methods have been proposed to focus on generating informative words that contain facts effectively. However, previous works implicitly assume that a topic is kept throughout a dialogue and usually converse passively; therefore, the systems have difficulty generating diverse responses that provide meaningful information proactively. This paper proposes an end-to-end fact-based dialogue system augmented with the ability of convergent and divergent thinking over both context and facts, which can converse about the current topic or introduce a new topic. Specifically, our model incorporates a novel convergent and divergent decoding that can generate informative and diverse responses considering not only the given inputs (context and facts) but also input-related topics. Both automatic and human evaluation results on the DSTC7 dataset show that our model significantly outperforms state-of-the-art baselines, indicating that our model can generate more appropriate, informative, and diverse responses.




base

Deep Learning for Image-based Automatic Dial Meter Reading: Dataset and Baselines. (arXiv:2005.03106v1 [cs.CV])

Smart meters enable remote and automatic electricity, water and gas consumption reading and are being widely deployed in developed countries. Nonetheless, there is still a huge number of non-smart meters in operation. Image-based Automatic Meter Reading (AMR) focuses on dealing with this type of meter readings. We estimate that the Energy Company of Paraná (Copel), in Brazil, performs more than 850,000 readings of dial meters per month. Those meters are the focus of this work. Our main contributions are: (i) a public real-world dial meter dataset (shared upon request) called UFPR-ADMR; (ii) a deep learning-based recognition baseline on the proposed dataset; and (iii) a detailed error analysis of the main issues present in AMR for dial meters. To the best of our knowledge, this is the first work to introduce deep learning approaches to multi-dial meter reading, and perform experiments on unconstrained images. We achieved a 100.0% F1-score on the dial detection stage with both Faster R-CNN and YOLO, while the recognition rates reached 93.6% for dials and 75.25% for meters using Faster R-CNN (ResNext-101).




base

Optimal Location of Cellular Base Station via Convex Optimization. (arXiv:2005.03099v1 [cs.IT])

An optimal base station (BS) location depends on the traffic (user) distribution, propagation pathloss and many system parameters, which renders its analytical study difficult so that numerical algorithms are widely used instead. In this paper, the problem is studied analytically. First, it is formulated as a convex optimization problem to minimize the total BS transmit power subject to quality-of-service (QoS) constraints, which also account for fairness among users. Due to its convex nature, Karush-Kuhn-Tucker (KKT) conditions are used to characterize a globally-optimum location as a convex combination of user locations, where convex weights depend on user parameters, pathloss exponent and overall geometry of the problem. Based on this characterization, a number of closed-form solutions are obtained. In particular, the optimum BS location is the mean of user locations in the case of free-space propagation and identical user parameters. If the user set is symmetric (as defined in the paper), the optimal BS location is independent of pathloss exponent, which is not the case in general. The analytical results show the impact of propagation conditions as well as system and user parameters on optimal BS location and can be used to develop design guidelines.
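
In symbols, the characterization described above can be written as follows (the notation is added here for illustration; the paper derives the weights from the KKT conditions):

```latex
% Optimal BS location as a convex combination of the user locations x_k,
% with weights depending on user parameters, pathloss exponent and geometry:
x^{*} = \sum_{k=1}^{K} w_k \, x_k, \qquad w_k \ge 0, \qquad \sum_{k=1}^{K} w_k = 1.
% Under free-space propagation with identical user parameters the weights
% become equal, so the optimum reduces to the mean of the user locations:
x^{*} = \frac{1}{K} \sum_{k=1}^{K} x_k.
```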




base

Eliminating NB-IoT Interference to LTE System: a Sparse Machine Learning Based Approach. (arXiv:2005.03092v1 [cs.IT])

Narrowband internet-of-things (NB-IoT) is a competitive 5G technology for massive machine-type communication scenarios, but meanwhile introduces narrowband interference (NBI) to existing broadband transmission such as the long term evolution (LTE) systems in enhanced mobile broadband (eMBB) scenarios. In order to facilitate the harmonic and fair coexistence in wireless heterogeneous networks, it is important to eliminate NB-IoT interference to LTE systems. In this paper, a novel sparse machine learning based framework and a sparse combinatorial optimization problem are formulated for accurate NBI recovery, which can be efficiently solved using the proposed iterative sparse learning algorithm called sparse cross-entropy minimization (SCEM). To further improve the recovery accuracy and convergence rate, regularization is introduced to the loss function in the enhanced algorithm called regularized SCEM. Moreover, exploiting the spatial correlation of NBI, the framework is extended to multiple-input multiple-output systems. Simulation results demonstrate that the proposed methods are effective in eliminating NB-IoT interference to LTE systems, and significantly outperform the state-of-the-art methods.




base

AVAC: A Machine Learning based Adaptive RRAM Variability-Aware Controller for Edge Devices. (arXiv:2005.03077v1 [eess.SY])

Recently, the Edge Computing paradigm has gained significant popularity both in industry and academia. Researchers now increasingly target to improve performance and reduce energy consumption of such devices. Some recent efforts focus on using emerging RRAM technologies for improving energy efficiency, thanks to their no leakage property and high integration density. As the complexity and dynamism of applications supported by such devices escalate, it has become difficult to maintain ideal performance by static RRAM controllers. Machine Learning provides a promising solution for this, and hence, this work focuses on extending such controllers to allow dynamic parameter updates. In this work we propose an Adaptive RRAM Variability-Aware Controller, AVAC, which periodically updates Wait Buffer and batch sizes using on-the-fly learning models and gradient ascent. AVAC allows Edge devices to adapt to different applications and their stages, to improve computation performance and reduce energy consumption. Simulations demonstrate that the proposed model can provide up to 29% increase in performance and 19% decrease in energy, compared to static controllers, using traces of real-life healthcare applications on a Raspberry-Pi based Edge deployment.




base

Guided Policy Search Model-based Reinforcement Learning for Urban Autonomous Driving. (arXiv:2005.03076v1 [cs.RO])

In this paper, we continue our prior work on using imitation learning (IL) and model-free reinforcement learning (RL) to learn driving policies for autonomous driving in urban scenarios, by introducing a model-based RL method to drive the autonomous vehicle in the Carla urban driving simulator. Although IL and model-free RL methods have been proved capable of solving many challenging tasks, including playing video games, controlling robots, and, in our prior work, urban driving, the low sample efficiency of such methods greatly limits their application to actual autonomous driving. In this work, we developed a model-based RL algorithm of guided policy search (GPS) for urban driving tasks. The algorithm iteratively learns a parameterized dynamic model to approximate the complex and interactive driving task, and optimizes the driving policy under the nonlinear approximate dynamic model. As a model-based RL approach, when applied in urban autonomous driving, GPS has the advantages of higher sample efficiency, better interpretability, and greater stability. We provide extensive experiments validating the effectiveness of the proposed method to learn robust driving policies for urban driving in Carla. We also compare the proposed method with other policy search and model-free RL baselines, showing 100x better sample efficiency of the GPS-based RL method, and also that the GPS-based method can learn policies for harder tasks that the baseline methods can hardly learn.




base

Evaluating text coherence based on the graph of the consistency of phrases to identify symptoms of schizophrenia. (arXiv:2005.03008v1 [cs.CL])

Different state-of-the-art methods of the detection of schizophrenia symptoms based on the estimation of text coherence have been analyzed. The analysis of a text at the level of phrases has been suggested. The method based on the graph of the consistency of phrases has been proposed to evaluate the semantic coherence and the cohesion of a text. The semantic coherence, cohesion, and other linguistic features (lexical diversity, lexical density) have been taken into account to form feature vectors for the training of a model-classifier. The training of the classifier has been performed on the set of English-language interviews. According to the retrieved results, the impact of each feature on the output of the model has been analyzed. The results obtained can indicate that the proposed method based on the graph of the consistency of phrases may be used in the different tasks of the detection of mental illness.




base

How Does the IMPACT Baseline Test for Athletes Really Work?

Retired Soccer Star Briana Scurry describes how the computerized baseline test works and how it is used for athletes who have sustained a concussion.




base

The Complete Tutorial on the Top 5 Ways to Query Your Relational Database in JavaScript - Part 2

Welcome back! In the first part of this series, we looked at a very "low-level" way to interact with a relational database by sending it raw SQL strings and retrieving the results. We created a very simple Express application that we can use as an example and deployed it on Heroku with a Postgres database.

In this part, we're going to examine a few libraries which build on top of that foundation, adding layers of abstraction that let you read and manipulate database data in a more "JavaScript-like" way.
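
To make the jump in abstraction concrete, here is a rough sketch (not code from the series; the "users" table, DATABASE_URL, and the choice of Knex as the example query builder are illustrative assumptions) contrasting the raw-SQL style of Part 1 with a library that wraps it:

```js
// Part 1 style: raw SQL strings via node-postgres (pg).
const { Pool } = require('pg');
const pool = new Pool({ connectionString: process.env.DATABASE_URL });

async function getUserRaw(id) {
  // Parameterized query: $1 is bound to id, avoiding SQL injection.
  const result = await pool.query('SELECT * FROM users WHERE id = $1', [id]);
  return result.rows[0];
}

// Part 2 style: the same read through a query builder such as Knex,
// expressed as chained JavaScript calls instead of an SQL string.
const knex = require('knex')({
  client: 'pg',
  connection: process.env.DATABASE_URL,
});

async function getUserBuilder(id) {
  return knex('users').where({ id }).first();
}
```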




base

Baseball/Softball

Summer Camps 2020 G-Prep Softball Camp: a fundamental camp for girls; details TBA…




base

Based on a powerful true story, Just Mercy examines racial injustice within the American legal system

I honestly don't know how people like Bryan Stevenson keep up the fight. Just Mercy is the true origin story of a literal social justice warrior, a Harvard-educated lawyer who, in the late 1980s, launched the Equal Justice Initiative in Montgomery, Alabama, to take on the neediest, most desperate cases.…



  • Film/Film News

base

Aminoalcohol and biocide compositions for aqueous based systems

Biocidal compositions and their use in aqueous media, such as metalworking fluids, the compositions comprising a biocidal agent; and a non-biocidal primary amino alcohol compound of the formula (I); wherein R1, R2, R3, R4, and R5 are as defined herein.




base

Urban traffic state detection based on support vector machine and multilayer perceptron

A system and method that facilitates urban traffic state detection based on support vector machine (SVM) and multilayer perceptron (MLP) classifiers is provided. Moreover, the SVM and MLP classifiers are fused into a cascaded two-tier classifier that improves the accuracy of the traffic state classification. To further improve the accuracy, the cascaded two-tier classifier (e.g., MLP-SVM), a single SVM classifier and a single MLP classifier are fused to determine a final decision for a traffic state. In addition, fusion strategies are employed during training and implementation phases to compensate for data acquisition and classification errors caused by noise and/or outliers.




base

Wearable electromyography-based human-computer interface

A “Wearable Electromyography-Based Controller” includes a plurality of Electromyography (EMG) sensors and provides a wired or wireless human-computer interface (HCI) for interacting with computing systems and attached devices via electrical signals generated by specific movement of the user's muscles. Following initial automated self-calibration and positional localization processes, measurement and interpretation of muscle generated electrical signals is accomplished by sampling signals from the EMG sensors of the Wearable Electromyography-Based Controller. In operation, the Wearable Electromyography-Based Controller is donned by the user and placed into a coarsely approximate position on the surface of the user's skin. Automated cues or instructions are then provided to the user for fine-tuning placement of the Wearable Electromyography-Based Controller. Examples of Wearable Electromyography-Based Controllers include articles of manufacture, such as an armband, wristwatch, or article of clothing having a plurality of integrated EMG-based sensor nodes and associated electronics.




base

Inferring user preferences from an internet based social interactive construct

In embodiments of the present invention improved capabilities are described for a computer program product embodied in a computer readable medium that, when executing on one or more computers, helps determine an unknown user's preferences through the use of internet based social interactive graphical representations on a computer facility by performing the steps of (1) ascertaining preferences of a plurality of users who are part of an internet based social interactive construct, wherein the plurality of users become a plurality of known users; (2) determining the internet based social interactive graphical representation for the plurality of known users; and (3) inferring the preferences of an unknown user present in the internet based social interactive graphical representation of the plurality of known users based on the interrelationships between the unknown user and the plurality of known users within the graphical representation.




base

Learning rewrite rules for search database systems using query logs

Methods and arrangements for conducting a search using query logs. A query log is consulted and query rewrite rules are learned automatically based on data in the query log. The learning includes obtaining click-through data present in the query log.




base

Apparatus and method for recognizing representative user behavior based on recognition of unit behaviors

An apparatus for recognizing a representative user behavior includes a unit-data extracting unit configured to extract at least one unit data from sensor data, a feature-information extracting unit configured to extract feature information from each of the at least one unit data, a unit-behavior recognizing unit configured to recognize a respective unit behavior for each of the at least one unit data based on the feature information, and a representative-behavior recognizing unit configured to recognize at least one representative behavior based on the respective unit behavior recognized for each of the at least one unit data.




base

Observation-based user profiling and profile matching

Observation-based user profiling and profile matching are provided. The network behavior of users of a computer-implemented social network are observed and used for user profiling. By observing network behavior instead of necessarily relying on user self-reported data, accurate and objective user profiles can be formed; user profiling is accomplished based on the observed network behaviors with or without the knowledge of the user being profiled. The observed network behaviors can include the customization of a visual graphic, a media preference, a communication preference, or a selection of words from a word list. The user profiles can be with respect to a domain and two or more users can be matched based on their profiles with respect to the same domain. User ratings and profile updating based on the ratings are also provided.




base

Methods and systems for constructing intelligent glossaries from distinction-based reasoning

A computer implemented method of constructing formal definitions in intelligent glossaries for interpreting text, comprising the steps of: providing at least one Distinction having a Boundary, an Indication, a Counter-indication and a Frame; modeling each Distinction as a diagram to provide a Distinction Model; verifying each distinction model as being an instantiation of a generic Distinction Pattern; providing at least one Arrangement made of nonintersecting Marks of Distinction containing Indications from the verified Distinction Model; writing at least one Formulation for each Indication appearing in verified Distinction model and Arrangement, providing well-founded Indications; calculating precise Formulations in Natural Language from well-founded Indications by substituting Variables symbols and/or replacing Constants symbols to transform imprecise Formulations into precise Formulations; selecting a Definition type and embedding at least one precise Formulation and Definition type as a formal Definition in an Intelligent Glossary to provide computerized Semantic Systems of Intelligent Glossaries.




base

Data mining and model generation using an in-database analytic flow generator

Embodiments are described for a system and method of providing a data miner that decouples the analytic flow solution components from the data source. An analytic-flow solution then couples with the target data source through a simple set of data source connector, table and transformation objects, to perform the requisite analytic flow function. As a result, the analytic-flow solution needs to be designed only once and can be re-used across multiple target data sources. The analytic flow can be modified and updated at one place and then deployed for use on various different target data sources.