sin

Subtle Sensing: Detecting Differences in the Flexibility of Virtually Simulated Molecular Objects. (arXiv:2005.03503v1 [cs.HC])

During VR demos we have performed over the last few years, many participants (in the absence of any haptic feedback) have commented on their perceived ability to 'feel' differences between simulated molecular objects. The mechanisms for such 'feeling' are not entirely clear: observing from outside VR, one can see that there is nothing physical for participants to 'feel'. Here we outline exploratory user studies designed to evaluate the extent to which participants can distinguish quantitative differences in the flexibility of VR-simulated molecular objects. The results suggest that an individual's capacity to detect differences in molecular flexibility is enhanced when they can interact with and manipulate the molecules, as opposed to merely observing the same interaction. Building on these results, we intend to carry out further studies investigating humans' ability to sense quantitative properties of VR simulations without haptic technology.




sin

Joint Prediction and Time Estimation of COVID-19 Developing Severe Symptoms using Chest CT Scan. (arXiv:2005.03405v1 [eess.IV])

With the rapid worldwide spread of Coronavirus disease (COVID-19), it is of great importance to conduct early diagnosis of COVID-19 and predict the time at which patients might convert to the severe stage, in order to design effective treatment plans and reduce clinicians' workloads. In this study, we propose a joint classification and regression method to determine whether a patient will develop severe symptoms at a later time and, if so, to predict the possible conversion time at which the patient would reach the severe stage. To do this, the proposed method takes into account 1) a weight for each sample, to reduce the influence of outliers and address the problem of imbalanced classification, and 2) a weight for each feature via a sparsity regularization term, to remove redundant features of the high-dimensional data and learn the shared information across the classification and regression tasks. To our knowledge, this is the first work to predict both the disease progression and the conversion time, which could help clinicians deal with potential severe cases in time or even save patients' lives. Experimental analysis was conducted on a real data set from two hospitals with 422 chest computed tomography (CT) scans, in which 52 cases converted to the severe stage after an average of 5.64 days and 34 cases were severe at admission. Results show that our method achieves the best classification (e.g., 85.91% accuracy) and regression (e.g., 0.462 correlation coefficient) performance compared with all comparison methods. Moreover, our proposed method yields 76.97% accuracy for predicting the severe cases, a correlation coefficient of 0.524, and a difference of 0.55 days for the conversion time.
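To make the joint formulation concrete, here is a minimal numerical sketch (not the authors' implementation) of a combined objective with per-sample weights and an L1 feature-sparsity term; the linear parameterisation and variable names are assumptions made for illustration only.

```python
import numpy as np

def joint_objective(W, X, y_cls, y_reg, sample_w, lam=0.1):
    """Toy joint loss: weighted logistic loss + weighted squared loss + L1 sparsity.

    W        : (d, 2) shared feature weights, column 0 for classification,
               column 1 for regression (illustrative parameterisation).
    X        : (n, d) feature matrix.
    y_cls    : (n,) binary labels (converts to severe: 1, does not: 0).
    y_reg    : (n,) conversion times (only meaningful where y_cls == 1).
    sample_w : (n,) per-sample weights that down-weight outliers and
               re-balance the two classes.
    lam      : strength of the L1 penalty that zeroes out redundant
               features shared by both tasks.
    """
    logits = X @ W[:, 0]
    p = 1.0 / (1.0 + np.exp(-logits))
    cls_loss = -np.sum(sample_w * (y_cls * np.log(p + 1e-12)
                                   + (1 - y_cls) * np.log(1 - p + 1e-12)))

    pred_t = X @ W[:, 1]
    # regression loss is only counted for cases that actually convert
    reg_loss = np.sum(sample_w * y_cls * (pred_t - y_reg) ** 2)

    sparsity = lam * np.sum(np.abs(W))   # encourages shared feature selection
    return cls_loss + reg_loss + sparsity
```

In practice the per-sample weights and the shared feature weights would themselves be learned (e.g., by alternating optimisation); the sketch only evaluates the objective for given values.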




sin

Scheduling with a processing time oracle. (arXiv:2005.03394v1 [cs.DS])

In this paper we study a single machine scheduling problem on a set of independent jobs whose execution times are not known but are guaranteed to be either short or long, for two given processing times. At every time step, the scheduler can either test a job by querying a processing time oracle, which reveals its processing time and occupies one time unit on the schedule, or execute a job, whether it has been previously tested or not. The objective value is the total completion time over all jobs and is compared with the objective value of an optimal schedule, which does not need to test. The resulting competitive ratio measures the price of hidden processing times.

Two models are studied in this paper. In the non-adaptive model, the algorithm needs to decide beforehand which jobs to test and which jobs to execute untested. In the adaptive model, however, the algorithm can make these decisions adaptively, depending on the outcomes of the job tests. In both models we provide optimal polynomial-time two-phase algorithms, which consist of a first phase in which jobs are tested and a second phase in which jobs are executed untested. Experiments give strong evidence that optimal algorithms have this structure. Proving this property is left as an open problem.
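To illustrate the two-phase structure mentioned above, the following sketch simulates a simple non-adaptive schedule: a chosen subset of jobs is tested first, then jobs are executed with the tested-short jobs ahead of the untested and tested-long ones. The ordering heuristic and unit test cost are assumptions for illustration; this is not the paper's optimal algorithm.

```python
def two_phase_schedule(true_times, tested, t_short, t_long):
    """Simulate a non-adaptive two-phase schedule and return total completion time.

    true_times : hidden processing time of each job (t_short or t_long).
    tested     : set of job indices the algorithm decides in advance to test.
    Phase 1 tests each chosen job (one time unit per test); phase 2 executes
    tested-short jobs first, then untested jobs, then tested-long jobs.
    """
    t = 0
    completion = 0

    revealed = {}
    for j in sorted(tested):
        t += 1                      # testing occupies one slot on the machine
        revealed[j] = true_times[j]

    short_jobs = [j for j in tested if revealed[j] == t_short]
    long_jobs = [j for j in tested if revealed[j] == t_long]
    untested = [j for j in range(len(true_times)) if j not in tested]

    for j in short_jobs + untested + long_jobs:
        t += true_times[j]
        completion += t
    return completion


# tiny example: 3 jobs, the scheduler tests job 0 only
print(two_phase_schedule([1, 1, 5], {0}, t_short=1, t_long=5))
```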




sin

WSMN: An optimized multipurpose blind watermarking in Shearlet domain using MLP and NSGA-II. (arXiv:2005.03382v1 [cs.CR])

Digital watermarking is an important issue in the field of information security, aimed at preventing the misuse of images in multimedia networks. Although access by unauthorized persons can be prevented through cryptography, cryptography cannot simultaneously provide copyright protection or content authentication while preserving image integrity. Hence, this paper presents an optimized multipurpose blind watermarking scheme in the Shearlet domain with the help of smart algorithms, namely MLP and NSGA-II. In this method, four copies of the robust copyright logo are embedded in the approximate coefficients of the Shearlet transform using an effective quantization technique. Furthermore, an embedded random sequence, serving as a semi-fragile authentication mark, is effectively extracted from the detail coefficients by the neural network. By performing an effective optimization algorithm to select optimum embedding thresholds, and by distinguishing the texture of blocks, both imperceptibility and robustness are preserved. The experimental results reveal the superiority of the scheme, in terms of the quality of watermarked images and robustness against hybrid attacks, over other state-of-the-art schemes. The average PSNR and SSIM of the dual watermarked images are 38 dB and 0.95, respectively; in addition, the scheme effectively extracts the copyright logo and locates forgery regions under severe attacks with satisfactory accuracy.
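As a rough illustration of quantization-based embedding of the robust logo, the following generic quantization index modulation (QIM) sketch hides one bit in a single transform coefficient by snapping it to one of two interleaved lattices; the step size stands in for the embedding threshold, and this is not the paper's exact embedding rule or its Shearlet front end.

```python
import numpy as np

def qim_embed(coeff, bit, step=8.0):
    """Embed one bit by snapping a coefficient onto one of two interleaved
    quantization lattices (even multiples of `step` for bit 0, odd for bit 1)."""
    q = np.round(coeff / step)
    if int(q) % 2 != bit:
        q += 1 if coeff >= q * step else -1   # move to the nearer correct-parity point
    return q * step

def qim_extract(coeff, step=8.0):
    """Recover the bit from the parity of the nearest lattice point."""
    return int(np.round(coeff / step)) % 2

c = 37.3
marked = qim_embed(c, 1)
assert qim_extract(marked) == 1
```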




sin

Energy-efficient topology to enhance the wireless sensor network lifetime using connectivity control. (arXiv:2005.03370v1 [cs.NI])

Wireless sensor networks have attracted much attention because of their many applications in industry, the military, medicine, agriculture, and education. A great deal of research has been done to expand their applications and improve their efficiency, yet many challenges remain in increasing the efficiency of different parts of such networks. One of the most important is improving the lifetime of the wireless sensor network. Since the sensor nodes are generally battery-powered, the most important issue in these networks is to reduce the power consumption of the nodes in such a way that the network lifetime is extended to an acceptable level. The contribution of this paper is the use of topology control, a threshold on the remaining energy of nodes, and two meta-heuristics, SA (simulated annealing) and VNS (variable neighbourhood search), to increase the energy remaining in the sensors. Moreover, using a low-cost spanning tree, appropriate connectivity control among nodes is created in the network in order to increase the network lifetime. The results of simulations show that the proposed method improves the sensor lifetime and reduces the energy consumed.
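A minimal sketch of the low-cost spanning tree used for connectivity control: Prim's algorithm over pairwise link costs, here simply the Euclidean distance between node positions. The cost model and node layout are assumptions for illustration, not the paper's energy model.

```python
import numpy as np

def low_cost_spanning_tree(positions):
    """Prim's algorithm: returns a list of (parent, child) edges whose total
    length is minimal, a simple stand-in for an energy-aware connectivity tree."""
    n = len(positions)
    dist = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        best = None
        for u in in_tree:
            for v in range(n):
                if v not in in_tree and (best is None or dist[u, v] < dist[best[0], best[1]]):
                    best = (u, v)
        edges.append(best)
        in_tree.add(best[1])
    return edges

nodes = np.array([[0.0, 0.0], [1.0, 0.2], [2.1, 0.1], [1.0, 1.5]])
print(low_cost_spanning_tree(nodes))
```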




sin

Scoring Root Necrosis in Cassava Using Semantic Segmentation. (arXiv:2005.03367v1 [eess.IV])

Cassava, a major food crop in many parts of Africa, has been severely affected by Cassava Brown Streak Disease (CBSD). The disease affects tuberous roots and presents symptoms that include a yellow/brown, dry, corky necrosis within the starch-bearing tissues. Cassava breeders currently depend on visual inspection to score necrosis in roots based on a qualitative score, which is quite subjective. In this paper we present an approach to automate root necrosis scoring using deep convolutional neural networks with semantic segmentation. Our experiments show that the UNet model performs this task with high accuracy, achieving a mean Intersection over Union (IoU) of 0.90 on the test set. This method provides a quantitative measure for necrosis scoring on root cross-sections, obtained by segmenting and classifying the necrotized and non-necrotized pixels of cassava root cross-sections without any additional feature engineering.
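For reference, the reported metric can be computed from a predicted and a ground-truth mask as below: the standard per-class Intersection over Union averaged over classes (mean IoU). The toy arrays are illustrative.

```python
import numpy as np

def mean_iou(pred, target, num_classes=2):
    """Mean Intersection over Union across classes, e.g. necrotized vs. non-necrotized pixels."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

pred = np.array([[0, 1], [1, 1]])
gt = np.array([[0, 1], [0, 1]])
print(mean_iou(pred, gt))  # 0.5 and 2/3 averaged -> ~0.583
```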




sin

Estimating Blood Pressure from Photoplethysmogram Signal and Demographic Features using Machine Learning Techniques. (arXiv:2005.03357v1 [eess.SP])

Hypertension is a potentially dangerous health condition, which can be indicated directly by blood pressure (BP), and it often leads to other health complications. Continuous monitoring of BP is therefore very important; however, cuff-based BP measurements are discrete and uncomfortable for the user. To address this need, a cuff-less, continuous and non-invasive BP measurement system is proposed using the Photoplethysmogram (PPG) signal and demographic features together with machine learning (ML) algorithms. PPG signals were acquired from 219 subjects and underwent pre-processing and feature extraction. Time, frequency and time-frequency domain features were extracted from the PPG signal and its derivative signals. Feature selection techniques were used to reduce computational complexity and decrease the chance of over-fitting the ML algorithms. The features were then used to train and evaluate ML algorithms. The best regression models were selected for Systolic BP (SBP) and Diastolic BP (DBP) estimation individually. Gaussian Process Regression (GPR) with the ReliefF feature selection algorithm outperforms the other algorithms, estimating SBP and DBP with root-mean-square errors (RMSE) of 6.74 and 3.59, respectively. This ML model can be implemented in hardware systems to continuously monitor BP and help avoid critical health conditions caused by sudden changes.
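A rough sketch of the modelling stage using scikit-learn's Gaussian Process Regression. ReliefF is not available in scikit-learn, so a generic univariate selector stands in for it here, and the synthetic features and targets are purely illustrative.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(219, 40))                            # PPG-derived features (synthetic stand-in)
sbp = 120 + X[:, 0] * 8 + rng.normal(scale=5, size=219)   # toy systolic BP target

model = make_pipeline(
    SelectKBest(f_regression, k=10),                      # placeholder for ReliefF feature selection
    GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True),
)
model.fit(X[:150], sbp[:150])
pred = model.predict(X[150:])
rmse = np.sqrt(np.mean((pred - sbp[150:]) ** 2))
print(f"toy SBP RMSE: {rmse:.2f}")
```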




sin

Arranging Test Tubes in Racks Using Combined Task and Motion Planning. (arXiv:2005.03342v1 [cs.RO])

The paper develops a robotic manipulation system to address the pressing need to handle large numbers of test tubes in clinical examination and to replace or reduce human labor. It presents the technical details of the system, which separates and arranges test tubes in racks with the help of 3D vision and artificial intelligence (AI) reasoning/planning. The developed system only requires a person to place a rack with mixed, non-arranged tubes in front of a robot. The robot autonomously performs recognition, reasoning, planning, manipulation, etc., and returns a rack with separated and arranged tubes. The system is simple to use and requires no expert knowledge of robotics. We expect such a system to play an important role in helping manage public health, and we hope similar systems can be extended to other clinical manipulation tasks, such as handling mixers and pipettes, in the future.




sin

Crop Aggregating for short utterances speaker verification using raw waveforms. (arXiv:2005.03329v1 [eess.AS])

Most studies on speaker verification systems focus on long-duration utterances, which contain sufficient phonetic information. However, the performance of these systems is known to degrade when short-duration utterances are input, owing to the lack of phonetic information compared with long utterances. In this paper, we propose a method that compensates for the performance degradation of speaker verification on short utterances, referred to as "crop aggregating". The proposed method adopts an ensemble-based design to improve the stability and accuracy of speaker verification systems. It segments an input utterance into several short utterances and then aggregates the segment embeddings extracted from the segmented inputs to compose a speaker embedding. The method then simultaneously trains the segment embeddings and the aggregated speaker embedding. In addition, we also modify the teacher-student learning method for the proposed approach. Experimental results on different input durations using the VoxCeleb1 test set demonstrate that the proposed technique improves speaker verification performance by about 45.37% relative to the baseline system under the 1-second test utterance condition.
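A minimal sketch of the cropping-and-aggregation idea: segment a waveform into fixed-length crops, embed each crop, and average the segment embeddings into one speaker embedding. The toy embedding function is a stand-in, not the paper's raw-waveform network.

```python
import numpy as np

def crop_aggregate_embedding(waveform, embed_fn, crop_len=16000, hop=8000):
    """Split a 1-D waveform into overlapping crops, embed each crop,
    and average the segment embeddings into a single speaker embedding."""
    starts = range(0, max(len(waveform) - crop_len, 0) + 1, hop)
    crops = [waveform[s:s + crop_len] for s in starts]
    seg_embs = np.stack([embed_fn(c) for c in crops])
    return seg_embs.mean(axis=0)

# toy embedding: mean/std statistics instead of a neural network
toy_embed = lambda x: np.array([x.mean(), x.std()])
utt = np.random.randn(40000)           # ~2.5 s at 16 kHz
print(crop_aggregate_embedding(utt, toy_embed))
```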




sin

Boosting Cloud Data Analytics using Multi-Objective Optimization. (arXiv:2005.03314v1 [cs.DB])

Data analytics in the cloud has become an integral part of enterprise businesses. Big data analytics systems, however, still lack the ability to take user performance goals and budgetary constraints for a task, collectively referred to as task objectives, and automatically configure an analytic job to achieve these objectives. This paper presents a data analytics optimizer that can automatically determine a cluster configuration with a suitable number of cores, as well as other system parameters, that best meets the task objectives. At the core of our work is a principled multi-objective optimization (MOO) approach that computes a Pareto-optimal set of job configurations to reveal tradeoffs between different user objectives, recommends a new job configuration that best explores such tradeoffs, and employs novel optimizations to enable such recommendations within a few seconds. We present efficient incremental algorithms based on the notion of a Progressive Frontier for realizing our MOO approach and implement them in a Spark-based prototype. Detailed experiments using benchmark workloads show that our MOO techniques provide a 2-50x speedup over existing MOO methods, while offering good coverage of the Pareto frontier. Compared with Ottertune, a state-of-the-art performance tuning system, our approach recommends configurations that yield a 26%-49% reduction in the running time of the TPCx-BB benchmark while adapting to different application preferences on multiple objectives.
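For intuition, the Pareto-optimal set of candidate configurations over two objectives to be minimised (say, running time and monetary cost) can be filtered as in the sketch below; the configuration tuples are illustrative and not the system's actual search space or Progressive Frontier algorithm.

```python
def pareto_front(configs):
    """Keep configurations not dominated by any other
    (a config dominates another if it is no worse on every objective
    and strictly better on at least one)."""
    front = []
    for c, objs in configs:
        dominated = any(
            all(o2 <= o1 for o1, o2 in zip(objs, other)) and other != objs
            for _, other in configs
        )
        if not dominated:
            front.append((c, objs))
    return front

# (config name, (running time in s, cost in $))
candidates = [("8 cores", (120, 1.2)), ("16 cores", (70, 2.0)),
              ("32 cores", (65, 3.9)), ("16 cores, big mem", (90, 2.0))]
print(pareto_front(candidates))
```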




sin

Expressing Accountability Patterns using Structural Causal Models. (arXiv:2005.03294v1 [cs.SE])

While the exact definition and implementation of accountability depend on the specific context, at its core accountability describes a mechanism that makes decisions transparent and often provides means to sanction "bad" decisions. As such, accountability is especially relevant for Cyber-Physical Systems, such as robots or drones, that embed themselves in human society, make decisions, and might cause lasting harm. Without a notion of accountability, such systems could behave with impunity and would not fit into society. Despite its relevance, there is currently no agreement on its meaning and, more importantly, no way to express accountability properties for these systems. As a solution, we propose to express the accountability properties of systems using Structural Causal Models. These can be represented as human-readable graphical models while also offering mathematical tools to analyze and reason over them. Our central contribution is to show how Structural Causal Models can be used to express and analyze the accountability properties of systems, and that this approach allows us to identify accountability patterns. These accountability patterns can be catalogued and used to improve systems and their architectures.
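To make the idea tangible, a toy structural causal model can be written directly as a set of structural equations, and an intervention then amounts to overriding one equation. This is a generic SCM illustration, not one of the paper's accountability patterns.

```python
import random

def scm_sample(do=None):
    """Toy SCM for a drone: sensor_fault -> wrong_estimate -> crash,
    with an operator_override that can prevent the crash.
    `do` maps variable names to forced values (interventions)."""
    do = do or {}
    v = {}
    v["sensor_fault"] = do.get("sensor_fault", random.random() < 0.1)
    v["wrong_estimate"] = do.get("wrong_estimate", v["sensor_fault"] and random.random() < 0.9)
    v["operator_override"] = do.get("operator_override", random.random() < 0.3)
    v["crash"] = do.get("crash", v["wrong_estimate"] and not v["operator_override"])
    return v

# Accountability-style query: how often does a crash occur if we intervene
# and force the operator override on?
runs = [scm_sample(do={"operator_override": True}) for _ in range(10000)]
print(sum(r["crash"] for r in runs) / len(runs))   # ~0: the override blocks the causal path
```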




sin

Multi-view data capture using edge-synchronised mobiles. (arXiv:2005.03286v1 [cs.MM])

Multi-view data capture permits free-viewpoint video (FVV) content creation. To this end, several users must capture video streams, calibrated in both time and pose, framing the same object/scene from different viewpoints. New-generation network architectures (e.g. 5G) promise lower latency and larger bandwidth connections supported by powerful edge computing, properties that seem ideal for reliable FVV capture. We have explored this possibility, aiming to remove the need for bespoke synchronisation hardware when capturing a scene from multiple viewpoints, making such capture possible with off-the-shelf mobiles. We propose a novel and scalable data capture architecture that exploits edge resources to synchronise and harvest frame captures. We have designed an edge computing unit that supervises the relaying of timing triggers to and from multiple mobiles, in addition to synchronising frame harvesting. We empirically show the benefits of our edge computing unit by analysing latencies, and we compare the quality of 3D reconstruction outputs against an alternative, popular centralised solution based on Unity3D.
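As a sketch of the timing side of such an architecture, an edge unit could estimate each mobile's clock offset from a round-trip exchange (Cristian-style) and express a shared capture trigger in the device's local time. The simulated mobile and the specific synchronisation scheme are assumptions for illustration, not the paper's protocol.

```python
import random
import time

class FakeMobile:
    """Simulated mobile whose clock runs ahead of the edge clock by `skew` seconds."""
    def __init__(self, skew):
        self.skew = skew
    def clock(self):
        time.sleep(random.uniform(0.001, 0.005))    # network delay
        return time.monotonic() + self.skew

def estimate_offset(mobile):
    """Cristian-style estimate: offset ~ device_time - (t0 + rtt / 2)."""
    t0 = time.monotonic()
    device_time = mobile.clock()
    t1 = time.monotonic()
    return device_time - (t0 + (t1 - t0) / 2.0)

mobile = FakeMobile(skew=0.250)
offset = estimate_offset(mobile)
# schedule a capture 2 s from now, expressed in the mobile's own clock
trigger_edge = time.monotonic() + 2.0
trigger_mobile = trigger_edge + offset
print(f"estimated offset: {offset * 1000:.1f} ms")
```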




sin

Enhancing Software Development Process Using Automated Adaptation of Object Ensembles. (arXiv:2005.03241v1 [cs.SE])

Software development has been changing rapidly, and the development process can be improved through developer-friendly approaches. We can reduce time consumption and accelerate development if we can automatically guide programmers during software development. Some existing approaches recommend relevant code snippets and API items to the developer; some apply general code-searching techniques, and others use online repository-mining strategies. However, it remains difficult to help programmers when they face particular type-conversion problems, more specifically when they want to adapt existing interfaces to their expectations. One familiar way to guide developers in such situations is adapting collections and arrays through automated adaptation of object ensembles. But how does this help a novice developer in real-time software development when the conversion is not explicitly specified? In this paper, we have developed a system that works as a plugin tool, integrated with a particular Data Mining Integrated Environment (DMIE), to recommend relevant interfaces when developers face a type-conversion situation. We maintain a mined repository of the relevant adapter classes and related APIs, from which developers search their query and obtain results using the relevant transformer classes. The system that provides these recommendations to developers is titled Automated Object Ensembles (AOE plugin). From our investigation, we find that our approach performs much better than several existing approaches.




sin

Catch Me If You Can: Using Power Analysis to Identify HPC Activity. (arXiv:2005.03135v1 [cs.CR])

Monitoring users on large computing platforms such as high performance computing (HPC) and cloud computing systems is non-trivial. Utilities such as process viewers provide limited insight into what users are running due to granularity limitations, and other sources of data, such as system call tracing, can impose significant operational overhead. Despite technical and procedural measures, instances of users abusing valuable HPC resources for personal gain have been documented in the past [hpcbitmine], and systems that are open to large numbers of loosely-verified users from around the world are at risk of abuse. In this paper, we show how electrical power consumption data from an HPC platform can be used to identify which programs are executed. The intuition is that, during execution, programs exhibit various patterns of CPU and memory activity. These patterns are reflected in the power consumption of the system and can be used to identify the programs running. We test our approach on an HPC rack at Lawrence Berkeley National Laboratory using a variety of scientific benchmarks. Among other interesting observations, our results show that by monitoring the power consumption of an HPC rack, it is possible to identify whether particular programs are running with precision up to, and recall of, 95% even in noisy scenarios.
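A toy sketch of the core idea: summarise each power trace with a few simple time- and frequency-domain features and train an off-the-shelf classifier to recognise which benchmark produced it. The synthetic traces and feature choices are assumptions, not the paper's pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

def fake_trace(program, n=600):
    """Synthetic per-second power readings (watts) with a program-specific rhythm."""
    base = {"lin-alg": 350, "io-bound": 220, "fft": 300}[program]
    period = {"lin-alg": 20, "io-bound": 60, "fft": 35}[program]
    t = np.arange(n)
    return base + 30 * np.sin(2 * np.pi * t / period) + rng.normal(0, 10, n)

def features(trace):
    """Simple time/frequency summary of a power trace."""
    spec = np.abs(np.fft.rfft(trace - trace.mean()))
    return [trace.mean(), trace.std(), spec.argmax(), spec.max()]

programs = ["lin-alg", "io-bound", "fft"]
X = np.array([features(fake_trace(p)) for p in programs for _ in range(30)])
y = np.array([p for p in programs for _ in range(30)])
print(cross_val_score(RandomForestClassifier(n_estimators=100), X, y, cv=5).mean())
```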




sin

Diagnosing the Environment Bias in Vision-and-Language Navigation. (arXiv:2005.03086v1 [cs.CL])

Vision-and-Language Navigation (VLN) requires an agent to follow natural-language instructions, explore the given environments, and reach the desired target locations. These step-by-step navigational instructions are crucial when the agent is navigating new environments about which it has no prior knowledge. Most recent works that study VLN observe a significant performance drop when testing on unseen environments (i.e., environments not used in training), indicating that the neural agent models are highly biased towards training environments. Although this issue is considered one of the major challenges in VLN research, it is still under-studied and needs a clearer explanation. In this work, we design novel diagnosis experiments via environment re-splitting and feature replacement, looking into possible reasons for this environment bias. We observe that it is neither the language nor the underlying navigational graph, but the low-level visual appearance conveyed by ResNet features, that directly affects the agent model and contributes to this environment bias. Based on this observation, we explore several kinds of semantic representations that contain less low-level visual information, so that an agent trained with these features generalizes better to unseen testing environments. Without modifying the baseline agent model or its training method, our explored semantic features significantly decrease the performance gaps between seen and unseen environments on multiple datasets (i.e., R2R, R4R, and CVDN) and achieve unseen results competitive with previous state-of-the-art models. Our code and features are available at: https://github.com/zhangybzbo/EnvBiasVLN




sin

Exploratory Analysis of Covid-19 Tweets using Topic Modeling, UMAP, and DiGraphs. (arXiv:2005.03082v1 [cs.SI])

This paper illustrates five different techniques to assess the distinctiveness of topics, key terms and features, the speed of information dissemination, and network behaviors for Covid19 tweets. First, we use pattern matching and, second, topic modeling through Latent Dirichlet Allocation (LDA) to generate twenty different topics that discuss case spread, healthcare workers, and personal protective equipment (PPE). One topic specific to U.S. cases would start to uptick immediately after live White House Coronavirus Task Force briefings, implying that many Twitter users are paying attention to government announcements. We contribute machine learning methods not previously reported in the Covid19 Twitter literature. This includes our third method, Uniform Manifold Approximation and Projection (UMAP), which identifies unique clustering behavior of distinct topics to improve our understanding of important themes in the corpus and help assess the quality of the generated topics. Fourth, we calculated retweeting times to understand how fast information about Covid19 propagates on Twitter. Our analysis indicates that the median retweeting time for Covid19 in a sample corpus from March 2020 was 2.87 hours, approximately 50 minutes faster than repostings from Chinese social media about H7N9 in March 2013. Lastly, we sought to understand retweet cascades by visualizing the connections of users over time, from fast to slow retweeting. As the time to retweet increases, the density of connections also increases, and in our sample we found distinct users dominating the attention of Covid19 retweeters. One of the simplest highlights of this analysis is that early-stage descriptive methods like regular expressions can successfully identify high-level themes which were consistently verified as important through every subsequent analysis.
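A minimal topic-modelling sketch in the spirit of the paper's second technique, using scikit-learn's LatentDirichletAllocation on a handful of made-up tweets; the corpus and the number of topics (the paper uses twenty) are illustrative.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

tweets = [
    "hospital workers need more ppe masks and gloves",
    "new covid19 cases reported in the county today",
    "thank you nurses and doctors on the front line",
    "governor announces stay at home order to slow case spread",
    "ppe shortage hits local clinics",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(tweets)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-4:][::-1]]   # four strongest terms per topic
    print(f"topic {k}: {top}")
```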




sin

I Always Feel Like Somebody's Sensing Me! A Framework to Detect, Identify, and Localize Clandestine Wireless Sensors. (arXiv:2005.03068v1 [cs.CR])

The increasing ubiquity of low-cost wireless sensors in smart homes and buildings has enabled users to easily deploy systems to remotely monitor and control their environments. However, this raises privacy concerns for third-party occupants, such as a hotel room guest who may be unaware of deployed clandestine sensors. Previous methods focused on specific modalities, such as detecting cameras, but do not provide a generalizable and comprehensive method to capture arbitrary sensors which may be "spying" on a user. In this work, we seek to determine whether one can walk into a room and detect any wireless sensor monitoring an individual. To this end, we propose SnoopDog, a framework to not only detect wireless sensors that are actively monitoring a user, but also classify and localize each device. SnoopDog works by establishing causality between patterns in observable wireless traffic and a trusted sensor in the same space, e.g., an inertial measurement unit (IMU) that captures a user's movement. Once causality is established, SnoopDog performs packet inspection to inform the user about the monitoring device. Finally, SnoopDog localizes the clandestine device in a 2D plane using a novel trial-based localization technique. We evaluated SnoopDog across several devices and various modalities and were able to detect causality 96.6% of the time, classify suspicious devices with 100% accuracy, and localize devices to a sufficiently reduced sub-space.
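A simplified sketch of the causality test: compare a suspect device's per-second traffic volume with the trusted IMU's motion-activity signal via normalised cross-correlation and flag devices whose traffic tracks the user's movement. The synthetic signals and the threshold are assumptions, not SnoopDog's actual detector.

```python
import numpy as np

rng = np.random.default_rng(7)

def activity_correlation(traffic, motion):
    """Peak normalised cross-correlation between a suspect device's per-second
    packet counts and the trusted IMU's motion magnitude."""
    a = (traffic - traffic.mean()) / (traffic.std() + 1e-9)
    b = (motion - motion.mean()) / (motion.std() + 1e-9)
    xcorr = np.correlate(a, b, mode="full") / len(a)
    return float(np.abs(xcorr).max())

motion = (rng.random(300) < 0.3).astype(float)                # bursts of user movement
camera_traffic = 50 + 400 * motion + rng.normal(0, 20, 300)   # motion-triggered uploads
thermostat_traffic = 30 + rng.normal(0, 5, 300)               # unrelated chatter

for name, traffic in [("camera", camera_traffic), ("thermostat", thermostat_traffic)]:
    score = activity_correlation(traffic, motion)
    print(name, "suspicious" if score > 0.5 else "benign", round(score, 2))
```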




sin

CovidCTNet: An Open-Source Deep Learning Approach to Identify Covid-19 Using CT Image. (arXiv:2005.03059v1 [eess.IV])

Coronavirus disease 2019 (Covid-19) is highly contagious with limited treatment options. Early and accurate diagnosis of Covid-19 is crucial in reducing the spread of the disease and its associated mortality. Currently, detection by reverse transcriptase polymerase chain reaction (RT-PCR) is the gold standard for outpatient and inpatient detection of Covid-19. RT-PCR is a rapid method; however, its detection accuracy is only ~70-75%. Another approved strategy is computed tomography (CT) imaging. CT imaging has a much higher sensitivity of ~80-98%, but a similar accuracy of 70%. To enhance the accuracy of CT-based detection, we developed an open-source set of algorithms called CovidCTNet that successfully differentiates Covid-19 from community-acquired pneumonia (CAP) and other lung diseases. CovidCTNet increases the accuracy of CT imaging detection to 90%, compared to 70% for radiologists. The model is designed to work with heterogeneous and small sample sizes, independent of the CT imaging hardware. In order to facilitate the detection of Covid-19 globally and assist radiologists and physicians in the screening process, we are releasing all algorithms and parametric details in an open-source format. Open-source sharing of CovidCTNet enables developers to rapidly improve and optimize services while preserving user privacy and data ownership.




sin

Extracting Headless MWEs from Dependency Parse Trees: Parsing, Tagging, and Joint Modeling Approaches. (arXiv:2005.03035v1 [cs.CL])

An interesting and frequent type of multi-word expression (MWE) is the headless MWE, for which there are no true internal syntactic dominance relations; examples include many named entities ("Wells Fargo") and dates ("July 5, 2020") as well as certain productive constructions ("blow for blow", "day after day"). Despite their special status and prevalence, current dependency-annotation schemes require treating such flat structures as if they had internal syntactic heads, and most current parsers handle them in the same fashion as headed constructions. Meanwhile, outside the context of parsing, taggers are typically used for identifying MWEs, but taggers might benefit from structural information. We empirically compare these two common strategies--parsing and tagging--for predicting flat MWEs. Additionally, we propose an efficient joint decoding algorithm that combines scores from both strategies. Experimental results on the MWE-Aware English Dependency Corpus and on six non-English dependency treebanks with frequent flat structures show that: (1) tagging is more accurate than parsing for identifying flat-structure MWEs, (2) our joint decoder reconciles the two different views and, for non-BERT features, leads to higher accuracies, and (3) most of the gains result from feature sharing between the parsers and taggers.




sin

20 Company Website Designs to Inspire Your Small Business

As a small or midsize business (SMB), your company website is often the first touchpoint for potential clients — and you want it to make a great first impression. The secret to hitting home with your audience is to have a sophisticated and lively website design that’s aesthetically pleasing and provides great user experience (UX). […]





sin

Need a Website? 6 Reasons to Have a Website for Your Business

Did you know that over 70% of people will research a company on the web before deciding to buy or visit, yet 46% of small businesses in the U.S. don’t have a website? If you don’t have a site, people can’t research your company and determine if you’re a good fit for their needs. If […]





sin

Using Heroku for Static Web Content

In the "Moving Away From AWS and Onto Heroku" article, I provided an introduction of the application I wanted to migrate from Amazon's popular AWS solution to Heroku.  Subsequently, the "Destination Heroku" article illustrated the establishment of a new Heroku account and focused on introducing a Java API (written in Spring Boot) connecting to a ClearDB instance within this new platform-as-a-service (PaaS) ecosystem.  My primary goal is to find a solution that allows my limited time to be focused on providing business solutions instead of getting up to speed with DevOps processes.

Quick Recap

As a TL;DR (too long; didn't read) to the original article, I built an Angular client and a Java API for the small business owned by my mother-in-law.  After a year of running the application on Elastic Beanstalk and S3, I wanted to see if there was a better solution that would allow me to focus more on writing features and enhancements and not have to worry about learning, understanding, and executing DevOps-like aspects inherent within the AWS ecosystem.




sin

A father sees his son for the final time through a pane of glass at a Lewiston nursing home

Monty Spears didn't know it at the time, but the last time he'd see his father would be through the window at the Life Care Center of Lewiston.…



  • News/Local News

sin

1917 is designed to look like a single take. Here are some other films that use similar tricks to great effect

Sam Mendes' 1917, which took Best Picture and Best Director awards at the Golden Globes earlier this week, looks like a standard period piece.…



  • Film/Film News

sin

Harrison Ford goes full curmudgeon in this surprisingly sweet, old-fashioned version of The Call of the Wild

Harrison Ford has gone full Grizzly Adams and Buck the canine hero is fully CGI, 100-percent digital, not a scrap of real fur or dog farts about him. There is so much about this new umpteenth film version of Jack London's classic novel The Call of the Wild that is ready-made for meme-iriffic snarking.…



  • Film/Film News

sin

In lieu of in-person performances, musicians are using social media and live streams to connect with fans

Ask any working musician why they play live, why they lug their equipment to and from bars and restaurants and wine-tasting rooms week after week, and they'll point to the same nebulous thing: It's the connection with an audience.…




sin

Tim and Eric rock the Beef House, Danzig sings Elvis, and more you need to know

The Buzz Bin HERE'S THE BEEF…



  • Arts & Culture

sin

White House projects COVID-19 death toll of 3,000 people per day, Washington casinos weigh reopening, and other headlines

ON INLANDER.COM WORLD: Roughly two weeks after Canada's deadliest mass shooting, Prime Minister Justin Trudeau introduced an immediate ban on what he called “military-style assault weapons.”…




sin

What will Northern Quest Resort & Casino look like when it reopens Tuesday?

Northern Quest Resort & Casino is set to reopen Tuesday, albeit with strict social-distancing and other safety protocols in place, becoming the second regional casino to reopen after closures caused by the coronavirus. Resort officials expect a crowd due to pent-up interest in the community for getting out of the house (not to mention Cinco de Mayo).…




sin

How local wineries are trying to adjust to the new business landscape

Drink Local Life under the COVID-19 pandemic is rough for everyone, individuals and businesses alike.…



  • Food/Food News

sin

Can harnessing the psychological power of video games make you healthier?

Growing up, Luke Parker played sports.…




sin

The Zags are the WCC champs. But why was this season so surprising?

It’s March. The regular season is now in the rearview mirror.…




sin

Usually cannabis business booms in April. Will the coronavirus change that?

The Cannabis Issue In a normal year, cannabis stores would be cashing in this April.…



  • Special Guides/Cannabis Issue

sin

Graphene prepared by using edge functionalization of graphite

Disclosed is a method for producing graphene functionalized at the edge positions of graphite. An organic material having one or more functional groups is reacted with graphite in a reaction medium comprising methanesulfonic acid and phosphorus pentoxide, or in a reaction medium comprising trifluoromethanesulfonic acid, to produce graphene functionalized with the organic material at its edges. High-purity, large-scale graphene and graphene film can then be obtained by dispersing the functionalized graphene in a solvent, separating it centrifugally, and reducing it, in particular by heat treating the graphene. According to the present invention, graphene can be produced inexpensively in large amounts with minimal loss of graphite. (FIG. 1)




sin

Anti-microbial and anti-static surface treatment agent with quaternary ammonium salt as active ingredient and method for preventing static electricity in polymer fibers using same

Provided are an anti-static and anti-microbial surface treatment agent including a quaternary ammonium salt compound as an active ingredient and a method of preventing a polymer fiber from developing static electricity by using the surface treatment agent. The quaternary ammonium salt compound has excellent anti-static and anti-microbial effects for the prevention or reduction of static electricity in a polymer fiber. Accordingly, the quaternary ammonium salt compound is suitable for use as a fabric softener or an anti-static agent, and also provides anti-microbial effects to a polymer fiber.




sin

Best match processing mode of decision tables

An input combination of at least one condition value to be evaluated against at least one rule of a decision table is received. The at least one rule includes at least one condition and the rule is associated with a result. The at least one rule is evaluated against the input combination to determine conditions fulfilled for the at least one condition value. In one aspect, a rule from the at least one rule that best matches the input combination is determined and a result associated with the rule that best matches the input combination is outputted.
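A minimal sketch of a best-match evaluation over a decision table: each rule lists condition values (None acting as a wildcard), and among the rules that do not contradict the input combination, the one matching the most conditions wins. Names and the tie-breaking choice are illustrative, not the claimed implementation.

```python
def best_match(rules, inputs):
    """Return the result of the rule whose conditions best match the input
    combination. Each rule is (conditions dict, result); a None condition
    value is treated as 'don't care'."""
    best_rule, best_score = None, -1
    for conditions, result in rules:
        # a rule is only eligible if no condition explicitly contradicts the input
        if any(v is not None and inputs.get(k) != v for k, v in conditions.items()):
            continue
        score = sum(1 for k, v in conditions.items() if v is not None and inputs.get(k) == v)
        if score > best_score:
            best_rule, best_score = (conditions, result), score
    return best_rule[1] if best_rule else None

rules = [
    ({"country": "US", "tier": "gold"}, "discount 20%"),
    ({"country": "US", "tier": None}, "discount 10%"),
    ({"country": None, "tier": None}, "no discount"),
]
print(best_match(rules, {"country": "US", "tier": "gold"}))   # discount 20%
print(best_match(rules, {"country": "US", "tier": "silver"})) # discount 10%
print(best_match(rules, {"country": "DE", "tier": "silver"})) # no discount
```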




sin

Parsing and rendering structured images

Systems and methods for generating a tuple of structured data files are described herein. In one example, a method includes detecting an expression that describes a structure of a structured image using a constructor. The method can also include using an inference-rule based search strategy to identify a hierarchical arrangement of bounding boxes in the structured image that match the expression. Furthermore, the method can include generating a first tuple of structured data files based on the identified hierarchical arrangement of bounding boxes in the structured image.




sin

Matching metadata sources using rules for characterizing matches

Processing metadata includes storing, in a data storage system, a specification for each of multiple sources, each specification including information identifying one or more data elements of the corresponding source; and processing, in a data processing system coupled to the data storage system, data elements from the sources, including generating a set of rules for each source based on a corresponding one of the stored specifications, and matching data elements of different sources and determining a quality metric characterizing a given match between a first data element of a first source and a second data element of a second source according to the set of rules generated for the first source and the set of rules generated for the second source.




sin

Modeling of time-variant threshability due to interactions between a crop in a field and atmospheric and soil conditions for prediction of daily opportunity windows for harvest operations using field-level diagnosis and prediction of weather conditions an

A modeling framework for evaluating the impact of weather conditions on farming and harvest operations applies real-time, field-level weather data and forecasts of meteorological and climatological conditions, together with user-provided and/or observed feedback on the present state of a harvest-related condition, to agronomic models in order to generate a plurality of harvest advisory outputs for precision agriculture. A harvest advisory model simulates and predicts the impacts of this weather information and user-provided and/or observed feedback in one or more physical, empirical, or artificial intelligence models of precision agriculture to analyze crops, plants, soils, and resulting agricultural commodities, and provides harvest advisory outputs to a diagnostic support tool that helps users enhance farming and harvest decision-making, whether for pre-, post-, or in situ-harvest operations and crop analyses.




sin

Correlating data from multiple business processes to a business process scenario

The present disclosure involves systems, software, and computer-implemented methods for providing process intelligence by correlating events from multiple business process systems to a single business scenario using configurable correlation strategies. An example method includes identifying a raw event associated with a sending business process and a receiving business process, identifying a sending business process attribute associated with the sending business process and a receiving business process attribute associated with the receiving business process, determining a correlation strategy for associating the raw event with a business scenario instance, the determination based at least in part on the sending business process attribute and the receiving business process attribute, and generating a visibility scenario event from the raw event according to the correlation strategy, the visibility scenario event associated with the business scenario instance.




sin

Using dotplots for comparing and finding patterns in sequences of data points

Embodiments of the invention provide systems and methods for analyzing sequential data. The sequential data can comprise a sequence of data points arranged in a particular order and thus representing a sequence. A number of such sequences can be analyzed, for example, to identify patterns or commonalities within the sequences or portions of sequences represented by the data. According to one embodiment, a method of identifying patterns in sequences of data points can comprise reading a set of sequential data. The sequential data can comprise a plurality of sequences, and each of the plurality of sequences can represent an ordered sequence of tokens. A dotplot representing matches between each sequence of the plurality of sequences can be generated. One or more patterns within the sequential data can then be identified based on the dotplot.
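A compact sketch of the dotplot described above: a boolean matrix with a dot wherever token i of one sequence equals token j of another, so that diagonal runs of dots reveal shared sub-sequences. The toy token sequences are illustrative.

```python
import numpy as np

def dotplot(seq_a, seq_b):
    """Boolean match matrix: cell (i, j) is True when seq_a[i] == seq_b[j].
    Diagonal runs of True cells correspond to shared sub-sequences."""
    a = np.array(seq_a, dtype=object)
    b = np.array(seq_b, dtype=object)
    return a[:, None] == b[None, :]

a = ["open", "read", "parse", "close", "open", "read"]
b = ["open", "read", "validate", "close"]
m = dotplot(a, b)
for i, row in enumerate(m):
    print(a[i].ljust(8), "".join("#" if hit else "." for hit in row))
```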




sin

Method for generating visual mapping of knowledge information from parsing of text inputs for subjects and predicates

A method for performing relational analysis of parsed input is employed to create a visual map of knowledge information. A title, header or subject line for an input item of information is parsed into syntactical components of at least a subject component and any predicate component(s) relationally linked as topic and subtopics. A search of topics and subtopics is carried out for each parsed component. If a match is found, then the parsed component is taken as a chosen topic/subtopic label. If no match is found, then the parsed component is formatted as a new entry in the knowledge map. A translation function for translating topics and subtopics from an original language into one or more target languages is enabled by user request or indicated user preference for display on a generated visual map of knowledge information.




sin

Providing answers to questions using logical synthesis of candidate answers

A method, system and computer program product for generating answers to questions. In one embodiment, the method comprises receiving an input query, decomposing the input query into a plurality of different subqueries, and conducting a search in one or more data sources to identify at least one candidate answer to each of the subqueries. A ranking function is applied to each of the candidate answers to determine a ranking for each of these candidate answers; and for each of the subqueries, one of the candidate answers to the subquery is selected based on this ranking. A logical synthesis component is applied to synthesize a candidate answer for the input query from the selected candidate answers to the subqueries. In one embodiment, the procedure applied by the logical synthesis component to synthesize the candidate answer for the input query is determined from the input query.




sin

Learning rewrite rules for search database systems using query logs

Methods and arrangements for conducting a search using query logs. A query log is consulted and query rewrite rules are learned automatically based on data in the query log. The learning includes obtaining click-through data present in the query log.




sin

Automatic chemical assay classification using a space enhancing proximity

A computer implemented method for automatic chemical assay classification, the method comprising steps the computer is programmed to perform, the steps comprising: receiving a plurality of sets of parameters, each one of the received sets of parameters characterizing a respective assay of a chemical reaction, calculating a space enhancing proximity among points representative of assays of qualitatively identical chemical reactions, and representing each one of at least two of the received sets of parameters as a respective point in the calculated space, and dividing the points in the calculated space into a number of groups, according to proximity among the points in the calculated space, each group pertaining to a respective chemical reaction, thereby classifying the assays.




sin

Systems and methods for control reliability operations using TMR

In one embodiment, a system includes a data collection system configured to collect a data from a control system by using an offline mode of operations. The system further includes a configuration management system configured to manage a hardware configuration and a software configuration for the control system based on the data. The system additionally includes a rule engine configured to use the data as input and to output a health assessment by using a rule database, and a report generator configured to provide a health assessment for the control system.




sin

Data mining and model generation using an in-database analytic flow generator

Embodiments are described for a system and method of providing a data miner that decouples the analytic flow solution components from the data source. An analytic-flow solution then couples with the target data source through a simple set of data source connector, table and transformation objects, to perform the requisite analytic flow function. As a result, the analytic-flow solution needs to be designed only once and can be re-used across multiple target data sources. The analytic flow can be modified and updated at one place and then deployed for use on various different target data sources.




sin

System and method for using cluster level quorum to prevent split brain scenario in a data grid cluster

A system and method is described for use with a data grid cluster, which uses cluster quorum to prevent split brain scenario. The data grid cluster includes a plurality of cluster nodes, each of which runs a cluster service. Each cluster service collects and maintains statistics regarding communication flow between its cluster node and the other cluster nodes in the data grid cluster. The statistics are used to determine a status associated with other cluster nodes in the data grid cluster whenever a disconnect event happens. The data grid cluster is associated with a quorum policy, which is defined in a cache configuration file, and which specifies a time period that a cluster node will wait before making a decision on whether or not to evict one or more cluster nodes from the data grid cluster.




sin

Nitrated lipids and methods of making and using thereof

Described herein are nitrated lipids and methods of making and using the nitrated lipids.




sin

Using cavitation to increase oil separation

Methods and systems are provided that apply cavitation to a grain-based liquid medium processing stream of an oil separation process in order to achieve increased yields. Ultrasonic sources can be used in generating the cavitation. Typically, the oil processing system is a downstream process of an alcohol (such as ethanol) production facility utilizing a dry grind, a modified dry grind or a wet mill alcohol production process.