predictive

Treatment Advances, Predictive Biomarkers Stand to Improve Bladder Cancer Care

Recent advances in bladder cancer treatments may offer hope of curative care to more patients, including those with high-risk localized, muscle-invasive disease, according to a New England Journal of Medicine editorial published by Matthew Milowsky, MD, FASCO, a bladder cancer expert at UNC School of Medicine and UNC Lineberger Comprehensive Cancer Center.




predictive

Predictive surficial geology, Cape Stang area, Victoria Island, Nunavut, NTS 77-H and 77-G east

Sharpe, D R; Lesemann, J -E; Parkinson, W; Armstrong, L; Dods, E. Geological Survey of Canada, Canadian Geoscience Map 173, 2020, 2 sheets, https://doi.org/10.4095/295702
<a href="https://geoscan.nrcan.gc.ca/images/geoscan/gid_295702-1.jpg"><img src="https://geoscan.nrcan.gc.ca/images/geoscan/gid_295702-1.jpg" title="Geological Survey of Canada, Canadian Geoscience Map 173, 2020, 2 sheets, https://doi.org/10.4095/295702" height="150" border="1" /></a>




predictive

Predictive surficial geology, Denmark Bay-Qikiqtagafaaluk area, Victoria Island, Nunavut, NTS 67-C and F

Sharpe, D R; Lesemann, J -E; Parkinson, W; Armstrong, L; Dods, E. Geological Survey of Canada, Canadian Geoscience Map 174, surficial data model v.2.3.14 conversion, 2023, 2 sheets, https://doi.org/10.4095/295703
<a href="https://geoscan.nrcan.gc.ca/images/geoscan/gid_295703.jpg"><img src="https://geoscan.nrcan.gc.ca/images/geoscan/gid_295703.jpg" title="Geological Survey of Canada, Canadian Geoscience Map 174, surficial data model v.2.3.14 conversion, 2023, 2 sheets, https://doi.org/10.4095/295703" height="150" border="1" /></a>




predictive

Introducing Show-Level Streaming Audiences, the Newest Addition to Proximic’s Predictive Audiences Suite

If you took away one thing from this year’s UpFronts, it was likely the growing popularity of connected TV.




predictive

Loan delinquency analysis using predictive model

This research uses a machine learning approach to assessing whether a customer is a suitable candidate for a loan. Banks and non-banking financial companies (NBFCs) face significant non-performing asset (NPA) threats because of loan non-payment. In this study, data collected from Kaggle is tested with various machine learning models to determine whether a borrower can repay a loan. In addition, we analysed the performance of the models: K-nearest neighbours (K-NN), logistic regression, support vector machines (SVM), decision tree, naive Bayes and neural networks. The purpose is to support decisions based not on subjective judgement but on objective data analysis. This work aims to analyse how objective factors influence borrowers to default on loans and to identify the leading causes contributing to a borrower's default. The results show that the decision tree classifier gives the best result, with a recall rate of 0.0885 and a false-negative rate of 5.4%.
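The model comparison described can be sketched in scikit-learn. This is purely illustrative: the dataset below is synthetic, since the study's Kaggle data and exact features are not shown, and the neural network is omitted for brevity.

```python
# Hypothetical sketch: comparing several classifiers on an imbalanced,
# synthetic stand-in for a loan book (~80% repaid, ~20% defaulted).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix, recall_score

X, y = make_classification(n_samples=1000, n_features=10, weights=[0.8],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "k-NN": KNeighborsClassifier(),
    "logistic regression": LogisticRegression(max_iter=1000),
    "SVM": SVC(),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "naive Bayes": GaussianNB(),
}
results = {}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
    results[name] = {"recall": recall_score(y_te, pred),
                     "fnr": fn / (fn + tp)}   # false-negative rate
for name, m in results.items():
    print(f"{name}: recall={m['recall']:.3f}, FNR={m['fnr']:.3f}")
```

For a default-prediction task the false-negative rate (defaulters predicted as safe) is usually the costlier error, which is why it is reported alongside recall.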




predictive

Decision Making for Predictive Maintenance in Asset Information Management




predictive

Analysis of Explanatory and Predictive Architectures and the Relevance in Explaining the Adoption of IT in SMEs




predictive

Data Quality in Linear Regression Models: Effect of Errors in Test Data and Errors in Training Data on Predictive Accuracy




predictive

GSX 2024 Recap: The Impact of Proactive & Predictive Data

GSX in Orlando, held just before Hurricane Helene, showcased over 200 educational sessions and 500 exhibitors, emphasizing a shift from traditional product-focused displays to innovative solutions that leverage data for improved efficiency and predictive security management.




predictive

Benchmarking predictive methods for small-angle X-ray scattering from atomic coordinates of proteins using maximum likelihood consensus data

Stimulated by informal conversations at the XVII International Small Angle Scattering (SAS) conference (Traverse City, 2017), an international team of experts undertook a round-robin exercise to produce a large dataset from proteins under standard solution conditions. These data were used to generate consensus SAS profiles for xylose isomerase, urate oxidase, xylanase, lysozyme and ribonuclease A. Here, we apply a new protocol using maximum likelihood with a larger number of the contributed datasets to generate improved consensus profiles. We investigate the fits of these profiles to predicted profiles from atomic coordinates that incorporate different models to account for the contribution to the scattering of water molecules of hydration surrounding proteins in solution. Programs using an implicit, shell-type hydration layer generally optimize fits to experimental data with the aid of two parameters that adjust the volume of the bulk solvent excluded by the protein and the contrast of the hydration layer. For these models, we found the error-weighted residual differences between the model and the experiment generally reflected the subsidiary maxima and minima in the consensus profiles that are determined by the size of the protein plus the hydration layer. By comparison, all-atom solute and solvent molecular dynamics (MD) simulations are without the benefit of adjustable parameters and, nonetheless, they yielded at least equally good fits with residual differences that are less reflective of the structure in the consensus profile. Further, where MD simulations accounted for the precise solvent composition of the experiment, specifically the inclusion of ions, the modelled radius of gyration values were significantly closer to the experiment. 
The power of adjustable parameters to mask real differences between a model and the structure present in solution is demonstrated by the results for the conformationally dynamic ribonuclease A and calculations with pseudo-experimental data. This study shows that, while methods invoking an implicit hydration layer have the unequivocal advantage of speed, care is needed to understand the influence of the adjustable parameters. All-atom solute and solvent MD simulations are slower but are less susceptible to false positives, and can account for thermal fluctuations in atomic positions, and more accurately represent the water molecules of hydration that contribute to the scattering profile.




predictive

Predictive Health Solutions Technology Shows Promise in Combating Patient Appointment No-Shows

The PHS Patient No-Show Predictor solution can help enhance the healthcare experience for both patients and caregivers alike by maximizing health outcomes, minimizing revenue loss and improving operational efficiencies.




predictive

The City of Euless Repeals Texas’s Only Predictive Scheduling Ordinance

The Euless, Texas Fair Overtime and Scheduling Standards Ordinance that imposed predictive scheduling obligations on covered employers is no more.   

The Unusual Origin of the Ordinance 




predictive

ETQ AI-Based Predictive Quality Analytics Solution

ETQ, part of Hexagon, launched the ETQ Reliance® Predictive Quality Analytics solution, bringing a new level of artificial intelligence (AI)-driven analytics to its ETQ quality management system (QMS).




predictive

Statistical Process Control: From Reactive to Predictive

Statistical Process Control (SPC) is evolving to not just detect defects, but also to predict and prevent issues. Modern factories use more sensors and collect more data, allowing SPC to analyze real-time patterns and forecast potential issues.
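The reactive baseline that predictive SPC builds on can be shown in a few lines. This is a minimal sketch of a Shewhart-style individuals check, not any specific vendor's implementation: flag a new measurement that falls outside the mean ± 3σ of baseline data.

```python
# Classic reactive SPC rule: a point outside mean ± 3 sigma is "out of control".
import statistics

def control_limits(baseline):
    mean = statistics.fmean(baseline)
    sigma = statistics.stdev(baseline)
    return mean - 3 * sigma, mean + 3 * sigma

def out_of_control(baseline, new_point):
    lcl, ucl = control_limits(baseline)
    return not (lcl <= new_point <= ucl)

readings = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 10.0, 9.9]
print(out_of_control(readings, 12.5))   # True: a clear excursion
print(out_of_control(readings, 10.05))  # False: within limits
```

The predictive shift the article describes amounts to applying forecasting to the stream of readings so that a drift toward a limit is flagged before any point actually crosses it.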




predictive

Predictive Heat Pump Thermostat Could Reduce Energy Bills

Purdue University researchers have designed a predictive thermostat for heat pumps that has been shown to significantly reduce electricity use.




predictive

SE-Radio Episode 305: Charlie Berger on Predictive Applications

Edaena Salinas talks with Charlie Berger about Predictive Applications. The discussion begins with an overview of how to build a Predictive Application and the role of Machine Learning. It then explores different Machine Learning algorithms that can be implemented natively in a database.




predictive

How Predictive Analytics Has Been Impacting Manufacturing Logistics

By Nidhi Gupta, CEO of Portcast.

The manufacturing industry is undergoing a profound transformation in the wake of the post-pandemic era. With volumes stabilising, profit margins narrowing, and rivals multiplying, setting themselves apart in all aspects of business has become paramount for manufacturing companies.




predictive

Predictive Text Detox: Unplugging the Suggestions in iOS

In this episode, Thomas Domville demonstrates how to enable or disable predictive text and inline predictive text on iOS. Predictive text allows you to write entire sentences with just a few taps. As you type, suggested words, emoji, and information appear above the onscreen keyboard. You can double tap a suggestion to apply it. Inline predictions complete the word or phrase you’re currently typing, appearing in gray text. To accept an inline prediction, double tap the Space bar; to reject it, keep typing. You can manage predictive text settings in Keyboard settings on your iPhone 12 or newer models running iOS 17 or later versions.

Open Settings on your iPhone.
Scroll down and double tap on General.
Double tap Keyboard.
Double tap the Predictive switch to enable or disable predictive text.
To manage inline predictive text, toggle the same Predictive switch on or off as desired.
While typing, you’ll see inline predictions. To accept a suggestion, double tap the Space bar; to reject it, keep typing.

transcription:
Disclaimer: This transcript is generated by AIKO, an automated transcription service. It is not edited or formatted, and it may not accurately capture the speakers’ names, voices, or content.

Hello and welcome.

My name is Thomas Domville, also known as AnonyMouse.

Now, every so often when I am composing an email or trying to send off a text to a friend or family, whatever that might be, and I'm using the keyboard, as I'm typing there is something called predictive text that will pop up. It tries to predict what you are going to spell out, and what this feature does is try to help you shorten the typing you have to do: you just find the various words on top of your keyboard, tap on one, and it selects that. Then they have this inline predictive text now, which is a newer feature that takes it a step further and allows you to highlight within the text itself and choose those words. But for me those are distracting. I am trying to focus, and I am not the multitasker I would love to be, but as I'm typing along it just bothers me hearing these words pop up, and they don't help me at all. I want to stay focused and type what I want. So I'm going to show you how you can turn those features off if you are interested in doing so, and if you are distracted like I am when those things come up. I'm also going to show you some pointers and advice on some other things you can turn off as well, what they call features, which for some may be distracting too, or something that drives you bonkers.

So, in order to go and change these settings, we are going to head over to the native Settings itself. Settings, double tap to open. Now that you have located Settings, let's do a one-finger double tap to open this up. Settings. Now you are going to need to swipe to the right until we get to something called General. General, button. We are going to do a one-finger double tap here. And now we are looking for something called Keyboard, so swipe to the right until you get to Keyboard. Keyboard, button. At last we are here: one-finger double tap on Keyboard. Keyboards, button. Now the easiest way to get to the area we need is to set your rotor to Headings and go to the first heading. All Keyboards, heading. Now what we're looking for is Predictive Text, so swipe to the right a couple of times and…




predictive

How to Disable Predictive Text Suggestions on macOS

In this episode, Tyler demonstrates how to disable predictive text suggestions on macOS.

As you type on your Mac, macOS by default attempts to finish words and phrases it thinks you're trying to type. If you find that hearing these suggestions spoken by VoiceOver is more distracting than helpful, you can turn them off by going to System Settings > Keyboard, clicking the Edit button under the "Text input" heading, and toggling the "show inline predictive text" switch off.

transcription:

Disclaimer: This transcript is generated by AIKO, an automated transcription service. It is not edited or formatted, and it may not accurately capture the speakers’ names, voices, or content.

Hey, AppleVisers, Tyler here, with a quick tip for how to disable predictive text suggestions on macOS.

By default, as you type on your Mac, macOS attempts to finish words and phrases that it thinks you're trying to type.

While this may increase the speed of text entry for some, if you're a voiceover user, you may find that hearing these suggestions spoken while you're trying to type is more distracting than helpful, in which case you can turn them off.

To do that, go into system settings, keyboard, hit the edit button under the text input heading, and turn the show inline predictive text switch off.

And I'm going to demonstrate that now. I'm going to System Settings on my Mac, K for Keyboard, VO-Command-H to get to the Text Input heading, VO-Right, Edit, and at the leftmost of this dialog, K, that's what we want. VO-Right, scroll area, interact with VO-Shift-Down Arrow, and VO-Right until I find the setting I want: Show Inline Predictive Text. If I VO-Right once more: Show Inline Predictive Text, off, switch. It's off for me because I turned it off.

If it's on for you and you want to turn it off, just press VO-Space, then stop interacting with the scroll area with VO-Shift-Up Arrow, and VO-Right to Done, hit it, and here we are back in Keyboard settings.

So now as you type, you will not hear text suggestions predicted by macOS, which could almost ironically increase your speed of text entry because this feature is off and is no longer giving you distracting or potentially distracting feedback.

So that's a tip for how to disable predictive text suggestions on macOS, I hope you found it helpful.

Peace.

Thank you.




predictive

Quantum Algorithms Institute Drives Predictive Model Accuracy with Quantum Collaboration

SURREY, British Columbia, Nov. 12, 2024 — Today, the Quantum Algorithms Institute (QAI) announced a partnership with Canadian companies AbaQus and InvestDEFY Technologies to solve common challenges in training machine learning […]

The post Quantum Algorithms Institute Drives Predictive Model Accuracy with Quantum Collaboration appeared first on HPCwire.




predictive

Getting Started with Python Integration to SAS Viya for Predictive Modeling - Comparing Logistic Regression and Decision Tree

Comparing Logistic Regression and Decision Tree - Which of our models is better at predicting our outcome? Learn how to compare models using misclassification, area under the curve (ROC) charts, and lift charts with validation data. In part 6 and part 7 of this series we fit a logistic regression [...]
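The comparison the post walks through can be sketched in plain scikit-learn (this is not the SAS Viya/SWAT API, and the data below is synthetic rather than the series' Home Equity set): score both models on validation data by misclassification rate and area under the ROC curve.

```python
# Compare a logistic regression and a decision tree on held-out validation data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, roc_auc_score

X, y = make_classification(n_samples=2000, random_state=1)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=1)

metrics = {}
for model in (LogisticRegression(max_iter=1000),
              DecisionTreeClassifier(max_depth=4, random_state=1)):
    model.fit(X_tr, y_tr)
    metrics[type(model).__name__] = {
        # misclassification = 1 - accuracy on the validation split
        "misclassification": 1 - accuracy_score(y_val, model.predict(X_val)),
        # AUC from predicted event probabilities
        "auc": roc_auc_score(y_val, model.predict_proba(X_val)[:, 1]),
    }
for name, m in metrics.items():
    print(f"{name}: misclassification={m['misclassification']:.3f}, AUC={m['auc']:.3f}")
```

Lower misclassification and higher AUC on the validation data indicate the better-generalizing model.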

Getting Started with Python Integration to SAS Viya for Predictive Modeling - Comparing Logistic Regression and Decision Tree was published on SAS Users.




predictive

Getting Started with Python Integration to SAS Viya for Predictive Modeling - Fitting a Random Forest

Learn how to fit a random forest and use your model to score new data. In Part 6 and Part 7 of this series, we fit a logistic regression and decision tree to the Home Equity data we saved in Part 4. In this post we will fit a Random [...]
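The fit-then-score workflow described can be sketched in plain scikit-learn (again, not the SAS Viya API, and with synthetic stand-in data rather than the Home Equity set):

```python
# Fit a random forest, then score new, unlabeled rows with event probabilities.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=2)
X_tr, X_new, y_tr, _ = train_test_split(X, y, random_state=2)

rf = RandomForestClassifier(n_estimators=200, random_state=2).fit(X_tr, y_tr)
scores = rf.predict_proba(X_new)[:, 1]   # probability of the event for each new row
print(scores[:5].round(3))
```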

Getting Started with Python Integration to SAS Viya for Predictive Modeling - Fitting a Random Forest was published on SAS Users.




predictive

Getting Started with Python Integration to SAS Viya for Predictive Modeling - Fitting a Gradient Boosting Model

Fitting a Gradient Boosting Model - Learn how to fit a gradient boosting model and use your model to score new data In Part 6, Part 7, and Part 9 of this series, we fit a logistic regression, decision tree and random forest model to the Home Equity data we [...]
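A companion sketch, once more in plain scikit-learn rather than the SAS Viya API and on synthetic data, mirrors the earlier fits with a gradient boosting model:

```python
# Fit a gradient boosting classifier and score new data.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=3)
X_tr, X_new, y_tr, _ = train_test_split(X, y, random_state=3)

gb = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1,
                                random_state=3).fit(X_tr, y_tr)
preds = gb.predict(X_new)   # class labels for the new rows
print(preds[:5])
```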

Getting Started with Python Integration to SAS Viya for Predictive Modeling - Fitting a Gradient Boosting Model was published on SAS Users.




predictive

Precautions to Consider in the Analysis of Prognostic and Predictive Indices

Understanding the differences between prognostic and predictive indices is imperative for advances in medical research. We have developed a new prognostic measure and discuss its strengths, limitations, and potential applications in clinical practice.




predictive

AI Predictive Maintenance: How to ensure peak asset performance | WIRED Brand Lab

Produced by Wired Brand Lab with IBM | In industries reliant on heavy assets, unexpected challenges like machinery failures are common. How can businesses predict and prevent these issues? Discover how IBM Maximo on AWS empowers companies to shift from reactive to predictive maintenance, harnessing AI-driven alerts to help detect anomalies and swiftly address challenges.




predictive

Predictive Problem-Solving: How Automation is Changing the Game | WIRED Brand Lab

Produced by Wired Brand Lab with IBM | Developers monitoring robust cloud systems face numerous challenges with traditional software, reacting to issues as they occur. Even finding where to start can take longer than actually fixing the problem. But what if AI could help recognize and predict potential problems before they ever impact IT systems? Discover how IBM Instana on AWS with AI-driven analytics proactively detects and helps you resolve issues while providing insight to contextualize contributing factors.




predictive

Credit Building or Credit Crumbling? A Credit Builder Loan's Effects on Consumer Behavior, Credit Scores and Their Predictive Power [electronic journal].




predictive

Model predictive control of non-interacting active Brownian particles

Soft Matter, 2024, 20, 8581-8588
DOI: 10.1039/D4SM00902A, Paper
Titus Quah, Kevin J. Modica, James B. Rawlings, Sho C. Takatori
Model predictive control is used to guide the spatiotemporal distribution of active Brownian particles by forecasting future states and optimizing control inputs to achieve tasks like dividing a population into two groups.
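The receding-horizon idea behind model predictive control can be illustrated with a toy system (this is a generic scalar sketch, not the paper's active-Brownian-particle model): forecast future states over a short horizon, optimize the control inputs, apply only the first one, then re-optimize at the next step.

```python
# Toy MPC: scalar dynamics x_{k+1} = x_k + u_k, quadratic tracking cost.
import numpy as np
from scipy.optimize import minimize

def mpc_step(x0, target, horizon=5, lam=0.1):
    def cost(u):
        x, total = x0, 0.0
        for uk in u:                       # forecast future states
            x = x + uk
            total += (x - target) ** 2 + lam * uk ** 2
        return total
    u_opt = minimize(cost, np.zeros(horizon)).x
    return u_opt[0]                        # receding horizon: apply first input only

x, target = 0.0, 1.0
for _ in range(10):
    x = x + mpc_step(x, target)
print(round(x, 3))  # approaches the target
```

In the paper's setting the "dynamics" are the forecast spatiotemporal density of the particle population and the inputs are external fields, but the optimize-apply-reoptimize loop is the same.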
The content of this RSS Feed (c) The Royal Society of Chemistry




predictive

Beyond the predictive text

When knowledge gained is not acknowledged, and the textbook is considered the sole source of answers, education becomes a foreign language.




predictive

Predictive testing: A Pandora's box


Once a medical approach is accepted, its use tends to spread across the population and income groups. We therefore need to start preparing for the advance of personalised medicine, writes Sujatha Byravan




predictive

The predictive power of data-processing statistics

This study describes a method to estimate the likelihood of success in determining a macromolecular structure by X-ray crystallography and experimental single-wavelength anomalous dispersion (SAD) or multiple-wavelength anomalous dispersion (MAD) phasing based on initial data-processing statistics and sample crystal properties. Such a predictive tool can rapidly assess the usefulness of data and guide the collection of an optimal data set. The increase in data rates from modern macromolecular crystallography beamlines, together with a demand from users for real-time feedback, has led to pressure on computational resources and a need for smarter data handling. Statistical and machine-learning methods have been applied to construct a classifier that displays 95% accuracy for training and testing data sets compiled from 440 solved structures. Applying this classifier to new data achieved 79% accuracy. These scores already provide clear guidance as to the effective use of computing resources and offer a starting point for a personalized data-collection assistant.
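The shape of such a classifier can be sketched as follows. This is purely illustrative: the feature set is an invented stand-in, not the statistics the study actually uses, and the data is synthetic.

```python
# Illustrative classifier on data-processing-style features predicting
# phasing success/failure; feature semantics are hypothetical stand-ins
# (e.g. resolution, completeness, multiplicity, anomalous signal...).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=440, n_features=6, random_state=4)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=4)

clf = RandomForestClassifier(random_state=4).fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print(f"held-out accuracy: {acc:.2f}")
```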




predictive

Cardiolyse Predictive Cardiac Analytics Cloud Platform Won the EXECInsurtech Startup Pitching Competition

The patented technology for cardiac and fatigue risks monitoring was named the most promising for the insurance sector.




predictive

Simons joins hands with predictive analytics firm Retalon




predictive

TrailBuddy: Using AI to Create a Predictive Trail Conditions App

Viget is full of outdoor enthusiasts and, of course, technologists. For this year's Pointless Weekend, we brought these passions together to build TrailBuddy. This app aims to solve that eternal question: Is my favorite trail dry so I can go hike/run/ride?

While getting muddy might rekindle fond childhood memories for some, exposing your gear to the elements isn’t great – it’s bad for your equipment and can cause long-term, and potentially expensive, damage to the trail.

There are some trail apps out there, but we wanted one that would focus on current conditions. Our favorite trail apps, like mtbproject.com, trailrunproject.com, and hikingproject.com (all owned by REI), rely on user-reported conditions. While this can be effective, the reports are frequently unreliable, as condition reports can become outdated in just a few days.

Our goal was to solve this problem by building an app that brought together location, soil type, and weather history data to create on-demand condition predictions for any trail in the US.

We built an initial version of TrailBuddy by tapping into several readily-available APIs, then running the combined data through a machine learning algorithm. (Oh, and also by bringing together a bunch of smart and motivated people and combining them with pizza and some of the magic that is our Pointless Weekends. We'll share the other Pointless Project, Scurry, with you soon.)

The quest for data.

We knew from the start this app would require data from a number of sources. As previously mentioned, we used REI’s APIs (i.e. https://www.hikingproject.com/data) as the source for basic trail information. We used each trail’s latitude and longitude coordinates, as well as its elevation, to query weather and soil type. We also found data points such as a trail’s total distance to be relevant to our app users and decided to include that on the front-end, too. Since we wanted to go beyond relying solely on user-reported metrics, which is how REI’s current MTB Project works, we came up with a list of factors that could affect the trail on a given day.

First on that list was weather.

We not only considered the impacts of the current forecast, but we also looked at the previous day’s forecast. For example, it’s safe to assume that if it’s currently raining or had been raining over the last several days, it would likely lead to muddy and unfavorable conditions for that trail. We utilized the DarkSky API (https://darksky.net/dev) to get the weather forecasts for that day, as well as the records for previous days. This included expected information, like temperature and precipitation chance. It also included some interesting data points that we realized may be factors, like precipitation intensity, cloud cover, and UV index. 
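The DarkSky lookups described took the form of a simple URL per forecast, with an optional Unix timestamp for historical ("Time Machine") requests. A sketch of the request shape (the service required an API key and has since been discontinued following Apple's acquisition, so treat this as historical illustration only):

```python
# Build a DarkSky-style request URL; appending a Unix timestamp turns a
# current-forecast request into a historical "Time Machine" lookup.
def darksky_url(api_key, lat, lng, unix_time=None):
    base = f"https://api.darksky.net/forecast/{api_key}/{lat},{lng}"
    return f"{base},{unix_time}" if unix_time is not None else base

print(darksky_url("KEY", 38.88, -77.32, 1546300800))
```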

But weather alone can’t predict how muddy or dry a trail will be. To determine that for sure, we also wanted to use soil data to help predict how well a trail’s unique soil composition recovers after precipitation. Similar amounts of rain on trails of very different soil types could lead to vastly different trail conditions. A more clay-based soil would hold water much longer, and therefore be much more unfavorable, than loamy soil. Finding a reliable source for soil type and soil drainage proved incredibly difficult. After many hours, we finally found a source through the USDA that we could use. As a side note—the USDA keeps track of lots of data points on soil information that’s actually pretty interesting! We can’t say we’re soil experts, but we felt like we got pretty close.

We used Whimsical to build our initial wireframes.

Putting our design hats on.

From the very first pitch for this app, TrailBuddy’s main differentiator from peer trail resources has been its ability to surface real-time information reliably and simply. However complicated the technology needed to collect and interpret that information, the front-end app design needed to be clean and unencumbered.

We thought about how users would naturally look for information when setting out to find a trail and what factors they’d think about when doing so. We posed questions like:

  • How easy or difficult a trail are they looking for?
  • How long is this trail?
  • What does the trail look like?
  • How far away is the trail in relation to my location?
  • What activity am I needing a trail for?
  • Is this a trail I’d want to come back to in the future?

By putting ourselves in our users’ shoes, we quickly identified key features TrailBuddy needed to be relevant and useful. First, we needed filtering, so users could filter by difficulty and distance to narrow down results to fit their activity level. Next, we needed a way to look up trails by activity type—mountain biking, hiking, and running are all activity types REI’s MTB API already tracks, so those made sense as a starting point. And lastly, we needed a way for the app to find trails based on your location, or at the very least the ability to find a trail within a certain distance of your current location.

We used Figma to design, prototype, and gather feedback on TrailBuddy.

Using machine learning to predict trail conditions.

As stated earlier, none of us are actual soil or data scientists. So, in order to achieve the real-time conditions reporting TrailBuddy promised, we decided to leverage machine learning to make predictions for us. Digging into the utility of machine learning was a first for everyone on this team. Luckily, there was an excellent tutorial that laid out the basics of building an ML model in Python. Provided a CSV file with inputs in the left columns and the desired output on the right, the script we generated was able to test multiple different model strategies and report the effectiveness of each in predicting results.

We assembled all of the historical weather and soil data we could find for a given latitude/longitude coordinate, compiled a 1000 × 100 CSV, ran it through the Python evaluator, and found that the CART and SVM models consistently outranked the others in predicting trail status. In other words, we had a working model through which to run our data and get (hopefully) reliable predictions. The next step was to figure out which data fields were actually critical in predicting trail status. The more we could refine our data set, the faster and smarter our predictive model could become.

We pulled in some Ruby code to take the original (and quite massive) CSV and output smaller versions to test with. Again, we’re no data scientists here, but we were able to cull out a good majority of the data and still get a model that performed at 95% accuracy.

With our trained model in hand, we could serialize it into a model.pkl file (“pkl” stands for “pickle,” as in we’ve “pickled” the model), move that file into our Rails app along with a Python script to deserialize it, pass in a dynamic set of data, and generate real-time predictions. At the end of the day, our model has a propensity to predict fantastic trail conditions (about 99% of the time, in fact…). Just one of those optimistic machine learning models, we guess.
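The select-then-pickle pipeline described can be sketched in scikit-learn. The data here is a synthetic stand-in for the weather/soil CSV, and DecisionTreeClassifier plays the role of CART:

```python
# Try the two winning model families, keep the better one by cross-validation,
# and pickle it for another process (e.g. a helper script called from Rails).
import pickle
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier  # scikit-learn's CART

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

best = max((DecisionTreeClassifier(random_state=0), SVC()),
           key=lambda m: cross_val_score(m, X, y, cv=5).mean())
best.fit(X, y)

with open("model.pkl", "wb") as f:      # serialize ("pickle") the trained model
    pickle.dump(best, f)

with open("model.pkl", "rb") as f:      # deserialize and score on demand
    restored = pickle.load(f)
print(restored.predict(X[:1]))
```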

Where we go from here.

It was clear that after two days, our team still wanted to do more. As a first refinement, we’d love to work more with our data set and ML model. Something that was quite surprising during the weekend was that we found we could remove all but two days worth of weather data, and all of the soil data we worked so hard to dig up, and still hit 95% accuracy. Which … doesn’t make a ton of sense. Perhaps the data we chose to predict trail conditions just isn’t a great empirical predictor of trail status. While these are questions too big to solve in just a single weekend, we'd love to spend more time digging into this in a future iteration.





We assembled all of the historical weather and soil data we could find for a given latitude/longitude coordinate, compiled a 1000 * 100 sized CSV, ran it through the Python evaluator, and found that the CART and SVM models consistently outranked the others in terms of predicting trail status. In other words, we found a working model for which to run our data through and get (hopefully) reliable predictions from. The next step was to figure out which data fields were actually critical in predicting the trail status. The more we could refine our data set, the faster and smarter our predictive model could become.

We pulled in some Ruby code to take the original (and quite massive) CSV, and output smaller versions to test with. Now again, we’re no data scientists here but, we were able to cull out a good majority of the data and still get a model that performed at 95% accuracy.

With our trained model in hand, we could serialize that to into a model.pkl file (pkl stands for “pickle”, as in we’ve “pickled” the model), move that file into our Rails app along with it a python script to deserialize it, pass in a dynamic set of data, and generate real-time predictions. At the end of the day, our model has a propensity to predict fantastic trail conditions (about 99% of the time in fact…). Just one of those optimistic machine learning models we guess.

Where we go from here.

It was clear that after two days, our team still wanted to do more. As a first refinement, we’d love to work more with our data set and ML model. Something that was quite surprising during the weekend was that we found we could remove all but two days worth of weather data, and all of the soil data we worked so hard to dig up, and still hit 95% accuracy. Which … doesn’t make a ton of sense. Perhaps the data we chose to predict trail conditions just isn’t a great empirical predictor of trail status. While these are questions too big to solve in just a single weekend, we'd love to spend more time digging into this in a future iteration.



  • News & Culture

predictive

TrailBuddy: Using AI to Create a Predictive Trail Conditions App

Viget is full of outdoor enthusiasts and, of course, technologists. For this year's Pointless Weekend, we brought these passions together to build TrailBuddy. This app aims to solve that eternal question: Is my favorite trail dry so I can go hike/run/ride?

While getting muddy might rekindle fond childhood memories for some, riding or hiking a wet trail isn't great: it's hard on your gear, and it can cause long-term, and potentially expensive, damage to the trail itself.

There are some trail apps out there, but we wanted one that would focus on current conditions. Our favorite trail apps, like mtbproject.com, trailrunproject.com, and hikingproject.com (all owned by REI), rely on user-reported conditions. While this can be effective, the reports are often unreliable, since condition reports can become outdated in just a few days.

Our goal was to solve this problem by building an app that brought together location, soil type, and weather history data to create on-demand condition predictions for any trail in the US.

We built an initial version of TrailBuddy by tapping into several readily available APIs, then running the combined data through a machine learning algorithm. (Oh, and also by bringing together a bunch of smart and motivated people and combining them with pizza and some of the magic that is our Pointless Weekends. We'll share the other Pointless Project, Scurry, with you soon.)

The quest for data.

We knew from the start this app would require data from a number of sources. As previously mentioned, we used REI's APIs (https://www.hikingproject.com/data) as the source for basic trail information. We used each trail's latitude and longitude coordinates, as well as its elevation, to query weather and soil type. We also found data points such as a trail's total distance to be relevant to our app users and decided to include those on the front-end, too. Since we wanted to go beyond relying solely on user-reported metrics, which is how REI's current MTB project works, we came up with a list of factors that could affect the trail on a given day.
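The trail lookup can be sketched as a thin helper around the Hiking Project data API. The `get-trails` endpoint and its parameter names below follow that API's public documentation, and the record fields we extract assume its response shape; treat both as assumptions rather than a verified client.

```python
from urllib.parse import urlencode

# Endpoint and parameter names follow the public Hiking Project Data API
# (https://www.hikingproject.com/data); treat them as assumptions here.
BASE_URL = "https://www.hikingproject.com/data/get-trails"

def build_trails_url(lat, lon, max_distance_miles, api_key):
    """Build a get-trails request URL for trails near a coordinate."""
    params = {"lat": lat, "lon": lon,
              "maxDistance": max_distance_miles, "key": api_key}
    return f"{BASE_URL}?{urlencode(params)}"

def extract_trail_fields(trail):
    """Keep only the fields TrailBuddy cares about from one API record."""
    return {
        "name": trail["name"],
        "latitude": trail["latitude"],
        "longitude": trail["longitude"],
        "length_miles": trail["length"],
    }
```

The returned URL would be fetched with any HTTP client; the extracted coordinates then drive the weather and soil lookups described below.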

First on that list was weather.

We considered not only the current forecast but also the previous days' weather. For example, it's safe to assume that if it's currently raining, or has been raining over the last several days, the trail is likely muddy and unfavorable. We used the Dark Sky API (https://darksky.net/dev) to get the forecast for the day as well as the records for previous days. This included expected information, like temperature and chance of precipitation. It also included some interesting data points that we realized might be factors, like precipitation intensity, cloud cover, and UV index.
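Fetching a past day's weather can be sketched with Dark Sky's "Time Machine" request, which takes a Unix timestamp in the URL. The URL shape and the daily field names below follow Dark Sky's documentation, but treat the exact names as assumptions:

```python
# URL shape and field names follow Dark Sky's documented "Time Machine"
# requests (https://darksky.net/dev); exact names are assumptions here.
def time_machine_url(api_key, lat, lon, unix_time):
    """Dark Sky Time Machine request for conditions on a past day."""
    return f"https://api.darksky.net/forecast/{api_key}/{lat},{lon},{unix_time}"

def daily_weather_features(day):
    """Pull the daily fields we feed into the trail-condition model."""
    return {
        "precip_intensity": day.get("precipIntensity", 0.0),
        "precip_probability": day.get("precipProbability", 0.0),
        "temperature_high": day.get("temperatureHigh"),
        "cloud_cover": day.get("cloudCover", 0.0),
        "uv_index": day.get("uvIndex", 0),
    }
```

One such request per trailing day gives the short rainfall history the model needs.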

But weather alone can't predict how muddy or dry a trail will be. To determine that, we also wanted soil data to help predict how well a trail's particular soil composition recovers after precipitation. Similar amounts of rain on trails with very different soil types can lead to vastly different conditions: a clay-heavy soil holds water much longer, and is therefore much less favorable, than a loamy soil. Finding a reliable source for soil type and soil drainage proved incredibly difficult. After many hours, we finally found a source through the USDA that we could use. As a side note, the USDA tracks a lot of soil data that's actually pretty interesting! We can't say we're soil experts, but we felt like we got pretty close.
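The interaction between rainfall and drainage can be illustrated with a toy heuristic. The drainage classes, recovery windows, and rain threshold below are invented for this sketch; they are not the USDA's data or the model TrailBuddy actually trained.

```python
# Illustrative heuristic only: drainage classes, recovery windows, and
# the 0.1 in. threshold are invented, not taken from USDA data or the
# trained model.
DRAINAGE_RECOVERY_DAYS = {
    "well drained": 1,             # e.g. sandy or loamy soils
    "moderately well drained": 2,
    "poorly drained": 4,           # e.g. clay-heavy soils
}

def likely_muddy(daily_rain_inches, drainage_class):
    """Guess whether a trail is still muddy, given recent daily rainfall
    (most recent day first) and the soil's drainage class."""
    recovery = DRAINAGE_RECOVERY_DAYS[drainage_class]
    # Only rain that fell within the soil's recovery window still matters.
    recent = daily_rain_inches[:recovery]
    return sum(recent) > 0.1

print(likely_muddy([0.0, 0.5, 0.0], "well drained"))    # rain 2 days ago → False
print(likely_muddy([0.0, 0.5, 0.0], "poorly drained"))  # still draining → True
```

The same rainfall yields opposite answers depending on drainage, which is exactly why the soil hunt was worth the hours.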

We used Whimsical to build our initial wireframes.

Putting our design hats on.

From the very first pitch, TrailBuddy's main differentiator from peer trail resources has been its ability to surface real-time information reliably and simply. However complicated the technology needed to collect and interpret that information, the front-end design needed to be clean and unencumbered.

We thought about how users would naturally look for information when setting out to find a trail and what factors they’d think about when doing so. We posed questions like:

  • How easy or difficult a trail am I looking for?
  • How long is this trail?
  • What does the trail look like?
  • How far is the trail from my location?
  • What activity do I need a trail for?
  • Is this a trail I’d want to come back to in the future?

By putting ourselves in our users’ shoes, we quickly identified the key features TrailBuddy needed to be relevant and useful. First, we needed filtering, so users could narrow results by difficulty and distance to fit their activity level. Next, we needed a way to look up trails by activity type; mountain biking, hiking, and running are all activities REI’s API already tracks, so those made sense as a starting point. And lastly, we needed the app to find trails based on your location, or at the very least to find trails within a certain distance of your current location.
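Those three features reduce to a filter over trail records. The sketch below uses hypothetical field names (not the REI API's exact schema) and a standard haversine distance for the proximity filter:

```python
from math import radians, sin, cos, asin, sqrt

# Sketch of the three filters above, over hypothetical trail records;
# the field names are illustrative, not the REI API's exact schema.
def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates, in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 3956 * 2 * asin(sqrt(a))  # 3956 mi ≈ Earth's radius

def find_trails(trails, activity, max_difficulty, max_length, here, within_miles):
    """Filter by activity type, difficulty, length, and proximity."""
    return [
        t for t in trails
        if t["activity"] == activity
        and t["difficulty"] <= max_difficulty
        and t["length"] <= max_length
        and haversine_miles(here[0], here[1], t["lat"], t["lon"]) <= within_miles
    ]
```

In the app, the trail list would come from the API lookup and `here` from the device's location.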

We used Figma to design, prototype, and gather feedback on TrailBuddy.

Using machine learning to predict trail conditions.

As stated earlier, none of us are actual soil or data scientists. So, to achieve the real-time condition reporting TrailBuddy promised, we decided to let machine learning make the predictions for us. Digging into machine learning was a first for everyone on this team. Luckily, we found an excellent tutorial that laid out the basics of building an ML model in Python. Given a CSV file with input features in the left columns and the desired output on the right, the script we wrote could test multiple model strategies and report how effectively each one predicted results.
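That spot-check script can be sketched with scikit-learn, cross-validating each candidate model on the same feature matrix. The synthetic data below stands in for the weather/soil CSV (assumed layout: feature columns, then a trail-status label); this is a sketch of the tutorial pattern, not the team's actual script.

```python
# Minimal model spot-check in the style described above, on synthetic
# data standing in for the weather/soil CSV. Requires scikit-learn.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC                      # "SVM"
from sklearn.tree import DecisionTreeClassifier  # "CART"

rng = np.random.default_rng(0)
X = rng.random((200, 5))              # e.g. rainfall, temperature, drainage...
y = (X[:, 0] > 0.5).astype(int)       # stand-in trail-status label

for name, model in [("CART", DecisionTreeClassifier()), ("SVM", SVC())]:
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} (+/- {scores.std():.3f})")
```

Swapping in other estimators (k-NN, logistic regression, naive Bayes) is a one-line change, which is what makes this comparison loop so useful for a weekend project.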

We assembled all of the historical weather and soil data we could find for a given latitude/longitude coordinate, compiled a roughly 1,000 × 100 CSV, ran it through the Python evaluator, and found that the CART and SVM models consistently outranked the others at predicting trail status. In other words, we had a working model to run our data through and (hopefully) get reliable predictions from. The next step was to figure out which data fields were actually critical in predicting trail status: the more we could refine our data set, the faster and smarter our predictive model could become.

We pulled in some Ruby code to take the original (and quite massive) CSV and output smaller versions to test with. Again, we’re no data scientists, but we were able to cull a good majority of the data and still get a model that performed at 95% accuracy.
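The team did this culling in Ruby; an equivalent sketch in Python (keeping only named columns, and optionally only the first few rows) looks like this:

```python
import csv
import io

# Python sketch of the CSV-culling step (the team's version was Ruby):
# keep only the named columns, and optionally only the first max_rows rows.
def cull_csv(text, keep_columns, max_rows=None):
    """Return a smaller CSV containing only keep_columns."""
    rows = list(csv.DictReader(io.StringIO(text)))
    if max_rows is not None:
        rows = rows[:max_rows]
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=keep_columns)
    writer.writeheader()
    for row in rows:
        writer.writerow({c: row[c] for c in keep_columns})
    return out.getvalue()
```

Each culled variant is then fed back through the evaluator to see whether accuracy survives the cut.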

With our trained model in hand, we could serialize it into a model.pkl file (pkl stands for “pickle,” as in we’ve “pickled” the model), move that file into our Rails app along with a Python script to deserialize it, pass in a dynamic set of data, and generate real-time predictions. At the end of the day, our model has a propensity to predict fantastic trail conditions (about 99% of the time, in fact). Just one of those optimistic machine learning models, we guess.
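The pickle round trip itself is just a few lines. A trivial stand-in "model" is used below so the sketch doesn't depend on the actual trained scikit-learn object; its prediction rule is invented for illustration.

```python
import pickle

# Stand-in for the trained model; the rule below is invented.
class StandInModel:
    def predict(self, rows):
        return ["muddy" if r["rain_yesterday"] > 0.1 else "dry" for r in rows]

# Serialize ("pickle") the model; these bytes are what get written out
# as model.pkl and shipped alongside the Rails app.
pickled = pickle.dumps(StandInModel())

# Later, the Python script the Rails app shells out to deserializes the
# model and generates a prediction from that day's dynamic data.
model = pickle.loads(pickled)
print(model.predict([{"rain_yesterday": 0.3}]))  # → ['muddy']
```

In production the bytes go through a file on disk rather than memory, and the input row is assembled from the live weather and soil lookups.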

Where we go from here.

It was clear after two days that our team still wanted to do more. As a first refinement, we’d love to keep working with our data set and ML model. One surprise from the weekend: we could remove all but two days’ worth of weather data, along with all of the soil data we worked so hard to dig up, and still hit 95% accuracy, which doesn’t make a ton of sense. Perhaps the data we chose just isn’t a great empirical predictor of trail status. These questions are too big to answer in a single weekend, but we’d love to spend more time digging into them in a future iteration.



  • News & Culture

predictive

A Chance Constraint Predictive Control and Estimation Framework for Spacecraft Descent with Field Of View Constraints. (arXiv:2005.03245v1 [math.OC])

Recent studies of optimization methods and GNC for spacecraft near small bodies (descent, landing, rendezvous, etc.), with key safety constraints such as line-of-sight conic zones and soft landings, have shown promising results. This paper considers descent missions to an asteroid surface with a constraint involving an onboard camera and asteroid surface markers, using a stochastic convex MPC law. A measurement model inspired by undermodeled asteroid gravity and spacecraft technology is established to develop the constraint. A computationally light stochastic linear-quadratic MPC strategy is then presented that keeps the surface markers within a satisfactory field of view during trajectory tracking, employing chance-based constraints and up-to-date estimation uncertainty from navigation. The estimation uncertainty giving rise to the tightened constraints is particularly addressed. Results suggest robust tracking performance across a variety of trajectories.




predictive

A memory of motion for visual predictive control tasks. (arXiv:2001.11759v3 [cs.RO] UPDATED)

This paper addresses the problem of efficiently achieving visual predictive control tasks. To this end, a memory of motion, containing a set of trajectories built off-line, is used for leveraging precomputation and dealing with difficult visual tasks. Standard regression techniques, such as k-nearest neighbors and Gaussian process regression, are used to query the memory and provide on-line a warm-start and a way point to the control optimization process. The proposed technique allows the control scheme to achieve high performance and, at the same time, keep the computational time limited. Simulation and experimental results, carried out with a 7-axis manipulator, show the effectiveness of the approach.
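The memory-of-motion query described above can be illustrated with a 1-nearest-neighbor lookup (the paper also uses Gaussian process regression); the stored task parameters and trajectories below are invented for the sketch.

```python
# Minimal sketch of the memory-of-motion idea: store (task-parameter,
# trajectory) pairs off-line, then warm-start the on-line optimizer with
# the trajectory whose task parameters are nearest the query. Values
# here are invented for illustration.
from math import dist

memory = [
    ((0.0, 0.0), [(0.0, 0.0), (0.5, 0.2), (1.0, 0.4)]),
    ((1.0, 1.0), [(0.0, 0.0), (0.4, 0.6), (0.9, 1.1)]),
]

def warm_start(query):
    """Return the stored trajectory for the nearest task parameters (1-NN)."""
    _, trajectory = min(memory, key=lambda entry: dist(entry[0], query))
    return trajectory

print(warm_start((0.9, 1.2)))  # nearest stored task is (1.0, 1.0)
```

Replacing `min` with a k-NN average, or with a GP posterior mean, gives the smoother regressors the paper evaluates.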




predictive

A predictive path-following controller for multi-steered articulated vehicles. (arXiv:1912.06259v5 [math.OC] UPDATED)

Stabilizing multi-steered articulated vehicles in backward motion is a complex task for any human driver. Unless the vehicle is accurately steered, its structurally unstable joint-angle kinematics during reverse maneuvers can cause the vehicle segments to fold and enter a jack-knife state. In this work, a model predictive path-following controller is proposed enabling automatic low-speed steering control of multi-steered articulated vehicles, comprising a car-like tractor and an arbitrary number of trailers with passive or active steering. The proposed path-following controller is tailored to follow nominal paths that contains full state and control-input information, and is designed to satisfy various physical constraints on the vehicle states as well as saturations and rate limitations on the tractor's curvature and the trailer steering angles. The performance of the proposed model predictive path-following controller is evaluated in a set of simulations for a multi-steered 2-trailer with a car-like tractor where the last trailer has steerable wheels.




predictive

Hierarchical Predictive Coding Models in a Deep-Learning Framework. (arXiv:2005.03230v1 [cs.CV])

Bayesian predictive coding is a putative neuromorphic method for acquiring higher-level neural representations to account for sensory input. Although originating in the neuroscience community, there are also efforts in the machine learning community to study these models. This paper reviews some of the more well known models. Our review analyzes module connectivity and patterns of information transfer, seeking to find general principles used across the models. We also survey some recent attempts to cast these models within a deep learning framework. A defining feature of Bayesian predictive coding is that it uses top-down, reconstructive mechanisms to predict incoming sensory inputs or their lower-level representations. Discrepancies between the predicted and the actual inputs, known as prediction errors, then give rise to future learning that refines and improves the predictive accuracy of learned higher-level representations. Predictive coding models intended to describe computations in the neocortex emerged prior to the development of deep learning and used a communication structure between modules that we name the Rao-Ballard protocol. This protocol was derived from a Bayesian generative model with some rather strong statistical assumptions. The RB protocol provides a rubric to assess the fidelity of deep learning models that claim to implement predictive coding.




predictive

Systems and methods for anti-causal noise predictive filtering in a data channel

Various embodiments of the present invention provide systems and methods for data processing. As an example, a data processing circuit is disclosed that includes a data detector circuit. The data detector circuit includes an anti-causal noise predictive filter circuit and a data detection circuit. In some cases, the anti-causal noise predictive filter circuit is operable to apply noise predictive filtering to a detector input to yield a filtered output, and the data detection circuit is operable to apply a data detection algorithm to the filtered output derived from the anti-causal noise predictive filter circuit.




predictive

Predictive software streaming

A software streaming platform may be implemented that predictively chooses units of a program to download based on the value of downloading the unit. In one example, a program is divided into blocks. The sequence in which blocks of the program historically have been requested is analyzed in order to determine, for a given history, what block is the next most likely to be requested. Blocks then may be combined into chunks, where each chunk represents a chain of blocks that have a high likelihood of occurring in a sequence. A table is then constructed indicating, for a given chunk, the chunks that are most likely to follow the given chunk. Based on the likelihood table and various other considerations, the value of downloading particular chunks is determined, and the chunk with the highest expected value is downloaded.
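The selection rule in this abstract reduces to an argmax over expected values. The sketch below illustrates it with an invented likelihood table and benefit weights; it is not code from the patent.

```python
# Sketch of the described selection rule: given the chunk just requested,
# consult a likelihood table and download the not-yet-present chunk with
# the highest expected value (probability × benefit). Table contents and
# benefits are invented for illustration.
likelihood = {
    # current chunk -> {candidate next chunk: probability it follows}
    "A": {"B": 0.7, "C": 0.2, "D": 0.1},
    "B": {"C": 0.6, "D": 0.4},
}

def next_chunk_to_download(current, downloaded, benefit):
    """Pick the highest expected-value chunk not yet downloaded."""
    candidates = {
        chunk: prob * benefit.get(chunk, 1.0)
        for chunk, prob in likelihood.get(current, {}).items()
        if chunk not in downloaded
    }
    return max(candidates, key=candidates.get) if candidates else None

print(next_chunk_to_download("A", {"A"}, {"B": 1.0, "C": 4.0}))  # → C
```

Here chunk C wins despite its lower probability because its benefit weight outweighs B's, which is the point of valuing downloads rather than ranking by likelihood alone.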




predictive

Predictive natural guidance

In one embodiment, a navigation system provides predictive natural guidance utilizing a mobile landmark based on location data. The location data may be a schedule. A controller receives data of a schedule of a mobile landmark. The location data could be collected in real time or estimated. The mobile landmark may be a vehicle or a celestial body. The controller correlates a route from an origin location to a destination location and the location of the mobile landmark. The controller generates a message based on the correlation. The message is output during presentation of the route and references the mobile landmark.




predictive

Real-time predictive systems for intelligent energy monitoring and management of electrical power networks

A system for intelligent monitoring and management of an electrical system is disclosed. The system includes a data acquisition component, a power analytics server and a client terminal. The data acquisition component acquires real-time data output from the electrical system. The power analytics server is comprised of a real-time energy pricing engine, virtual system modeling engine, an analytics engine, a machine learning engine and a schematic user interface creator engine. The real-time energy pricing engine generates real-time utility power pricing data. The virtual system modeling engine generates predicted data output for the electrical system. The analytics engine monitors real-time data output and predicted data output of the electrical system. The machine learning engine stores and processes patterns observed from the real-time data output and the predicted data output to forecast an aspect of the electrical system.




predictive

Systems and methods for phase predictive impedance loss model calibration and compensation

The systems and methods of the present disclosure calibrate impedance loss model parameters associated with an electrosurgical system having no external cabling or having external cabling with a fixed or known reactance, and obtain accurate electrical measurements of a tissue site by compensating for impedance losses associated with the transmission line of an electrosurgical device using the calibrated impedance loss model parameters. A computer system stores voltage and current sensor data for a range of different test loads and calculates sensed impedance values for each test load. The computer system then predicts a phase value for each load using each respective load impedance value. The computer system back calculates impedance loss model parameters including a source impedance parameter and a leakage impedance parameter based upon the voltage and current sensor data, the predicted phase values, and the impedance values of the test loads.




predictive

Predictive pulse width modulation for an open delta H-bridge driven high efficiency ironless permanent magnet machine

Embodiments of the present method and system permit an effective method for determining the optimum selection of pulse width modulation polarity and type including determining machine parameters, inputting the machine parameters into a predicted duty cycle module, determining the optimum polarity of the pulse width modulation for a predicted duty cycle based on a pulse width modulation generation algorithm, and determining the optimum type of the pulse width modulation for a predicted duty cycle based on the pulse width modulation generation algorithm.




predictive

Automating predictive maintenance for automobiles

An approach is provided to automate predictive vehicle maintenance. In the approach, a vehicle's information handling system receives vehicle data transmissions from a number of other vehicles in geographic proximity to the vehicle. Both the vehicle and the other vehicles correspond to various vehicle types that are used to identify those other vehicles that are similar to the vehicle. The sets of vehicle data transmissions received from similar vehicles are analyzed with respect to a plurality of vehicle maintenance data corresponding to the vehicle. The analysis of the vehicle data transmissions results in predictive vehicle maintenance recommendations pertaining to the first vehicle.




predictive

Voltage controlled oscillator band-select fast searching using predictive searching

A method, an apparatus, and a computer program product are provided. The apparatus tunes a frequency provided by a VCO. The apparatus determines a relative capacitance change associated with a first frequency and a desired frequency from a look-up table. The apparatus adjusts a capacitor circuit in the VCO based on the determined relative capacitance change determined from the look-up table in order to tune from the first frequency to the desired frequency. The apparatus determines that the frequency provided by the VCO is a second frequency different than the desired frequency after adjusting the capacitor circuit. The apparatus performs an iterative search to further adjust the capacitor circuit when a difference between the second frequency and the desired frequency is greater than a threshold.
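The two-stage tuning this abstract describes (a predictive jump from a look-up table, then an iterative search only if the result is still off by more than a threshold) can be illustrated numerically. The frequency model, slope, and constants below are invented stand-ins for a real VCO, not values from the patent.

```python
# Invented VCO model: frequency (MHz) falls as the capacitor code rises,
# with a slight nonlinearity so the predictive jump alone isn't exact.
def vco_freq(cap_code):
    return 2500.0 - 2.0 * cap_code - 0.002 * cap_code ** 2

def tune(cap_code, target_mhz, threshold_mhz=1.0):
    # Stage 1: predictive jump. In hardware this is a look-up table of
    # relative capacitance changes; a nominal slope stands in here.
    nominal_slope = -2.0  # MHz per capacitor-code step (assumed known)
    cap_code += round((target_mhz - vco_freq(cap_code)) / nominal_slope)
    # Stage 2: iterative search, only while still off by more than the
    # threshold, with a halving step size.
    step = 8
    while step and abs(vco_freq(cap_code) - target_mhz) > threshold_mhz:
        cap_code += step if vco_freq(cap_code) > target_mhz else -step
        step //= 2
    return cap_code

code = tune(0, 2400.0)
print(vco_freq(code))  # lands within 1 MHz of 2400
```

The predictive jump does most of the work; the halving search only cleans up the residual error from the model mismatch, which is the speed advantage the abstract claims over searching from scratch.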