
Lensing: Leadership on education funding, mental health and accessible voting

Serving as state representative of House District 85 for the past few years has been a privilege and an honor. I have worked hard to stand for the people of my district, fighting for the issues that are important to them and to the voters of Iowa City. I want to continue that advocacy, so I am running for another term in the Iowa House and asking for your vote.

I vigorously support adequate funding for education, from preschool to our community colleges and universities. Our young people are Iowa's future and deserve the best start available through Iowa's excellent education system. But we need to provide the dollars necessary to keep our teachers in the classroom so our children are prepared for whatever may lie ahead of them.

I have advocated for the fair treatment of workers in Iowa and support their right to organize. I have worked on laws for equal pay for equal work and whistleblower protection.

I support essential funding for mental health services for Iowans of all ages. Children and adults who are struggling with mental health issues should have services available to them no matter where they live in this state.

I have fought to keep government open and accessible to Iowans. I support open records and open meetings laws to ensure that availability and transparency for all Iowans.

Keeping voting easy and accessible has been a priority of mine. I support a fair and balanced redistricting system in Iowa.

I have advocated to keep the bottle deposit law in place and expand it to cover the many new types of containers available.

I have worked on oversight legislation after several investigations into the defrauding of government programs involving boarding homes, government agencies and pharmacy benefit managers (the "middlemen" between pharmacies/Medicaid and the health insurance companies).

I cannot avoid mentioning the challenge of the coronavirus in Iowa. It has impacted our health, jobs, families and businesses. No one could have predicted this pandemic, but as Iowans, we need to do our best to limit contact and the spread of this disease. My sincere appreciation goes to those workers on the front lines of this crisis: the healthcare workers, store owners, businesses, farmers, teachers and workers who show up every day to keep this state moving forward. Thank you all!

There is still much work to be done to keep Iowa the great place where we live, work and raise our families. I am asking for your vote to allow me the privilege of continuing that work.

Vicki Lensing is a candidate in the Democratic primary for Iowa House District 85.





How to Duplicate WordPress Database using phpMyAdmin

Do you want to duplicate your WordPress database using phpMyAdmin? WordPress stores all your website data in a MySQL database. Sometimes you may need to quickly clone a WordPress database to transfer a website or to create manual backups.
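phpMyAdmin handles duplication through its Operations → "Copy database to" screen; the sketch below shows a rough command-line equivalent in Python, not the article's phpMyAdmin steps. The database names, user, and password are placeholders, and the mysqldump/mysql client tools are assumed to be installed (in practice, prefer a ~/.my.cnf file over passing a password on the command line).

```python
# Duplicate a MySQL/WordPress database: create an empty copy, dump the
# source, and pipe the dump into the copy. All names/credentials below
# are placeholders.
import subprocess

USER, PASSWORD = "wp_user", "secret"
SRC, DST = "wordpress", "wordpress_copy"
auth = ["-u", USER, f"--password={PASSWORD}"]

# 1. Create the empty target database.
subprocess.run(["mysql", *auth, "-e",
                f"CREATE DATABASE IF NOT EXISTS `{DST}`"], check=True)

# 2. Dump the source database...
dump = subprocess.run(["mysqldump", *auth, SRC],
                      check=True, capture_output=True)

# 3. ...and load the dump into the target.
subprocess.run(["mysql", *auth, DST], input=dump.stdout, check=True)
```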








Missing Berlin’s gorgeous buildings again. (at Berlin, Germany)








And while we’re in the process of missing European architecture…

4 more days left to catch my Lightroom presets for 50% off! ⌛️ (at Copenhagen, Denmark)





Dynamic Range Processing in Audio Post Production

If listeners find themselves using the volume up and down buttons a lot, level differences within your podcast or audio file are too big.
In this article, we discuss why audio dynamic range processing (or leveling) is more important than loudness normalization, why it depends on factors like the listening environment and the individual character of the content, and why the loudness range descriptor (LRA) is only reliable for speech programs.

Photo by Alexey Ruban.

Why loudness normalization is not enough

Everybody who has lived in an apartment building knows the problem: you want to enjoy a movie late at night, but you're constantly on edge - not only because of the thrilling story, but because your index finger is hovering over the volume-down button of your remote. The next loud sound effect is going to come sooner rather than later, and you want to avoid waking up your neighbors with gunshot sounds blasting from your TV.

In our previous post, we talked about the overall loudness of a production. While that's certainly important to keep in mind, the loudness target is only an average value, ignoring how much the loudness varies within a production. The loudness target of your movie might be in the ideal range, yet the level differences between a gunshot and someone whispering can still be enormous - making you turn the volume down for the former and up for the latter.

While the average loudness might be perfect, level differences can lead to an unpleasant listening experience.

Of course, this doesn't apply to movies alone. The image above shows a podcast or radio production. The loud section is music, the very quiet section just breathing, and the remaining sections are different voices.

To be clear, we're not saying that the above example is problematic per se. There are many situations where a big difference in levels - a high dynamic range - is justified: for instance, in a movie theater, optimized for listening and without any outside noise, or in classical music.
Conversely, if the dynamic range is too small, listening can be tiring.

But if you watch the same movie in an outdoor screening in the summer on a beach next to the crashing waves or in the middle of a noisy city, it can be tricky to hear the softer parts.
Spoken word usually has a smaller dynamic range, and if you produce your podcast for a target audience of train or car commuters, the dynamic range should be even smaller, adjusting for the listening situation.

Therefore, hitting the loudness target has less impact on the listening experience than level differences (dynamic range) within one file!
What makes a suitable dynamic range depends not only on the listening environment, but also on the nature of the content itself. If the dynamic range is too small, the audio can be tiring to listen to, whereas more variability in levels can make a program more interesting, but might not work in all environments, such as a noisy car.
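To make this concrete, here is a minimal sketch (not from the original article) using the open-source pyloudnorm library: two synthetic signals are normalized to the exact same integrated loudness target, yet the spread of their per-block loudness differs by more than 20 LU. The -16 LUFS target and the 3-second blocks are assumptions for illustration.

```python
# Two signals can hit the same loudness target (integrated loudness)
# while varying very differently within the file.
# Assumes `pip install numpy pyloudnorm`; the signals are synthetic noise.
import numpy as np
import pyloudnorm as pyln

rate = 48000
rng = np.random.default_rng(0)

steady = rng.normal(0, 0.1, rate * 20)      # constant level throughout
dynamic = steady.copy()
dynamic[: rate * 10] *= 0.05                # first half is much quieter

meter = pyln.Meter(rate)                    # ITU-R BS.1770 / EBU R128 meter
for name, sig in (("steady", steady), ("dynamic", dynamic)):
    sig = pyln.normalize.loudness(sig, meter.integrated_loudness(sig), -16.0)
    # Loudness of consecutive 3-second blocks approximates how much the
    # level moves around within the file.
    blocks = [meter.integrated_loudness(sig[i:i + 3 * rate])
              for i in range(0, len(sig) - 3 * rate, 3 * rate)]
    print(f"{name}: integrated {meter.integrated_loudness(sig):.1f} LUFS, "
          f"block spread {max(blocks) - min(blocks):.1f} LU")
```

Both signals report the same integrated loudness, but the second one swings wildly between its blocks - exactly the situation a loudness target alone cannot capture.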

Dynamic range experiment in a car

Wolfgang Rein, audio technician at SWR, a public broadcaster in Germany, did an experiment to test how drivers react to programs with different dynamic ranges. He monitored the level to which drivers set the car stereo depending on speed (and thus noise level) and the audio's dynamic range.
While the results are preliminary, it seems like drivers set the volume as low as possible so that they can still understand the content, but don't get distracted by loud sounds.

As drivers adjust the volume to the loudest voice in a program, they won't understand quieter speakers in content with a high dynamic range anymore. To some degree and for short periods of time, they can compensate by focusing more on the radio program, but over time that's tiring. Therefore, if the loudness varies too much, drivers tend to switch to another program rather than adjusting the volume.
Similar results have been found in a study conducted by NPR Labs and Towson University.

On the other hand, the perception was different in pure music programs. When drivers set the volume according to louder parts, they weren't able to hear softer segments or the beginning of a song very well. But that did not matter to them as much and didn't make them want to turn up the volume or switch the program.

Listener's reaction in response to frequent loudness changes. (from John Kean, Eli Johnson, Dr. Ellyn Sheffield: Study of Audio Loudness Range for Consumers in Various Listening Modes and Ambient Noise Levels)

Loudness comfort zone

The reaction of drivers to variable loudness hints at something that BBC sound engineer Mike Thornton calls the loudness comfort zone.

Tests (...) have shown that if the short-term loudness stays within the "comfort zone" then the consumer doesn’t feel the need to reach for the remote control to adjust the volume.
In a blog post, he highlights how the series Blue Planet 2 and Planet Earth 2 might not always have been the easiest to listen to. The graph below shows an excerpt with very loud music, followed by commentary just at the bottom of the green comfort zone. Thornton writes: "with the volume set at a level that was comfortable when the music was playing we couldn’t always hear the excellent commentary from Sir David Attenborough and had to resort to turning on the subtitles to be sure we knew what Sir David was saying!"

Planet Earth 2 Loudness Plot Excerpt. Colored green: comfort zone of +3 to -5 LU around the loudness target. (from Mike Thornton: BBC Blue Planet 2 Latest Show In Firing Line For Sound Issues - Are They Right?)

As already mentioned above, a good mix considers the maximum and minimum possible loudness in the target listening environment.
In a movie theater the loudness comfort zone is big (loudness can vary a lot), and loud music is part of the fun, while quiet scenes work just as well. The opposite was true in the aforementioned experiment with drivers, where the loudness comfort zone is much smaller and quiet voices are difficult to understand.

Hence, the loudness comfort zone determines how much dynamic range an audio signal can use in a specific listening environment.

How to measure dynamic range: LRA

When producing audio for various environments, it would be great to have a target value for dynamic range (the difference between the smallest and largest signal values of an audio signal) as well. Then you could just set a dynamic range target, similar to a loudness target.

Theoretically, the maximum possible dynamic range of a production is defined by the bit depth of the audio format. A 16-bit recording can have a dynamic range of 96 dB; for 24-bit, it's 144 dB - which is well above the approx. 120 dB the human ear can handle. However, most of those bits are typically used to get to a reasonable base volume. Picture a glass of water: you want it to be almost full, with some headroom so that it doesn't spill when there's a sudden movement, i.e. a bigger amplitude wave at the top.
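The 96 dB and 144 dB figures follow from each bit contributing about 6.02 dB of dynamic range (20·log10 2), which a quick check confirms:

```python
# Theoretical dynamic range of linear PCM: 20 * log10(2 ** bits),
# i.e. roughly 6.02 dB per bit.
import math

for bits in (16, 24):
    print(f"{bits}-bit: {20 * math.log10(2 ** bits):.1f} dB")
# 16-bit: 96.3 dB
# 24-bit: 144.5 dB
```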

Determining the dynamic range of a production is easier said than done, though. It depends on which signals are included in the measurement: for example, whether something like background music or breathing should be considered at all.
The currently preferred method for broadcasting is called Loudness Range (LRA). It is measured in Loudness Units (LU) and takes into account everything between the 10th and the 95th percentile of a loudness distribution, after an additional gating step. In other words, the loudest 5% and the quietest 10% of the audio signal are ignored. This way, quiet breathing or an occasional loud sound effect won't affect the measurement.

Loudness distribution and LRA for the film 'The Matrix'. Figure from EBU Tech Doc 3343 (p.13).
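As a rough illustration of the measurement just described, here is a simplified loudness range computation along the lines of EBU Tech 3342: short-term loudness in blocks, an absolute gate at -70 LUFS, a relative gate 20 LU below the gated mean, then the 95th minus the 10th percentile. It approximates short-term loudness with pyloudnorm block measurements and simplifies the gating, so treat it as a sketch rather than a standards-compliant meter.

```python
# Simplified LRA: percentile spread of gated short-term loudness.
# A sketch of the EBU Tech 3342 idea, not a compliant implementation
# (real meters use overlapping 3 s windows and exact gating rules).
import numpy as np
import pyloudnorm as pyln

def loudness_range(signal, rate, win_s=3.0, hop_s=1.0):
    meter = pyln.Meter(rate)
    win, hop = int(win_s * rate), int(hop_s * rate)
    st = np.array([meter.integrated_loudness(signal[i:i + win])
                   for i in range(0, len(signal) - win, hop)])
    st = st[st > -70.0]                     # absolute gate: drop near-silence
    st = st[st > st.mean() - 20.0]          # relative gate: -20 LU below mean
    p10, p95 = np.percentile(st, [10, 95])  # ignore quietest 10%, loudest 5%
    return p95 - p10                        # loudness range in LU
```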

However, the main difficulty is deciding which signals should be included in the loudness range measurement and which should be gated out. This is unfortunately often very subjective and difficult to capture with a purely statistical method like LRA.

Where LRA falls short

Therefore, only pure speech programs give reliable, comparable LRA values!
For instance, a typical LRA for news programs is 3 LU; for talks and discussions, 5 LU is common. LRA values for features, radio dramas, movies or music very much depend on the individual character of the content and might range between 5 and 25 LU.

To further illustrate this, here are some typical LRA values, according to a paper by Thomas Lund (table 2):

Program                                             Loudness Range (LU)
Matrix, full movie                                  25.0
NBC Interstitials, Jan. 2008, all together (3:30)    9.4
Friends Episode 16                                   6.6
Speak Ref., Male, German, SQUAM Trk 54               6.2
Speak Ref., Female, French, SQUAM Trk 51             4.8
Speak Ref., Male, English, Sound Check               3.3
Wish You Were Here, Pink Floyd                      22.1
Gilgamesh, Battle of Titans, Osaka Symph.           19.7
Don’t Cry For Me Arg., Sinead O’Connor              13.7
Beethoven Son. in F, Op. 17, Kliegel & Tichman      12.0
Rock’n Roll Train, AC/DC                             6.0
I.G.Y., Donald Fagen                                 3.6

LRA values of music are very unpredictable as well.
For instance, Tom Frampton measured the LRA of songs in multiple genres, and the differences within each genre are quite big. The ten pop songs that he analyzed varied in LRA between 3.7 and 12 LU, country songs between 3.6 and 14.9 LU, and in the Electronic genre the individual LRAs fell between 3.7 and 15.2 LU. Please see the tables at the bottom of his blog post for more details.

We at Auphonic also tried to base our Adaptive Leveler parameters on the LRA descriptor. Although it worked, it turned out that it is very difficult to set a loudness range target for diverse audio content that includes speech, background sounds, music parts, etc. The results were not predictable, and it was hard to find good target values. Therefore we developed our own algorithm to measure the dynamic range of audio signals.

In conclusion, LRA comparisons are only useful for spoken-word productions, and the LRA value is therefore not applicable as a general dynamic range target. The more complex a production gets, the more difficult it is to make any judgment based on the LRA.
This is because the definition of LRA is purely statistical. There's no smart measurement using classifiers that distinguish between music, speech, quiet breathing, background noises and other types of audio. One would need a more intelligent algorithm (as we use in our Adaptive Leveler) that knows which audio segments should be included in and excluded from the measurement.

From theory to application: tools

Loudness and dynamic range are clearly a complicated topic. Luckily, there are tools that can help. To keep short-term loudness in range, a compressor can help control sudden changes in loudness, such as p-pops or hard consonants like t or k. To achieve a good mid-term loudness, i.e. a signal that doesn't leave the comfort zone too much, a leveler is a good option. Or just use a fader or manually adjust volume curves. And to make sure that separate productions sound consistent, loudness normalization is the way to go. We have covered all of this in-depth before.
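For the last of those steps, loudness normalization, a minimal sketch with pyloudnorm might look like this. The file names and the -16 LUFS podcast target are assumptions, and real compressors and levelers are considerably more involved than this single static gain:

```python
# Loudness normalization: measure integrated loudness once, then apply a
# single static gain so separate productions land on the same target.
# Assumes `pip install pyloudnorm soundfile`; file names are placeholders.
import soundfile as sf
import pyloudnorm as pyln

TARGET_LUFS = -16.0                       # common stereo podcast target

data, rate = sf.read("episode.wav")
meter = pyln.Meter(rate)
loudness = meter.integrated_loudness(data)
normalized = pyln.normalize.loudness(data, loudness, TARGET_LUFS)
sf.write("episode_normalized.wav", normalized, rate)
```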

Looking at the audio from above again, with an adaptive leveler applied it looks like this:

Leveler example. Output at the top, input with leveler envelope at the bottom.

Now, the voices are evened out and the music is at a comfortable level, while the breathing has not been touched at all.
We recently extended Auphonic's adaptive leveler so that it is now possible to customize the dynamic range; please see adaptive leveler customization and advanced multitrack audio algorithms.
If you wanted to increase the loudness comfort zone (or dynamic range) of the standard preset by 10 dB (or LU), for example, the envelope would look like this:

Leveler with higher dynamic range, only touching sections with extremely low or extremely high loudness to fit into a specific loudness comfort zone.

When a production is done, our adaptive leveler uses classifiers to also calculate the integrated loudness and loudness range of dialog and music sections separately. This way it is possible to compare just the dialog LRA and loudness of complex productions.

Assessing the LRA and loudness of dialog and music separately.

Conclusion

Getting audio dynamics right is not easy. Yet, it is an important thing to keep in mind, because focusing on loudness normalization alone is not enough. In fact, hitting the loudness target often has less impact on the listening experience than level differences, i.e. audio dynamics.

If the dynamic range is too small, the audio can be tiring to listen to, whereas a bigger dynamic range can make a program more interesting, but might not work in loud environments, such as a noisy train.
Therefore, a good mix adapts the audio dynamic range according to the target listening environment (different loudness comfort zones in cinema, at home, in a car) and according to the nature of the content (radio feature, movie, podcast, music, etc.).

Furthermore, because the definition of the loudness range / LRA is purely statistical, only speech programs give reliable LRA values that are comparable.
More "intelligent" algorithms are in development, which use classifiers to decide which signals should be included and excluded from the dynamic range measurement.

If you understand German, take a look at our presentation about audio dynamic processing in podcasts for further information.








TrailBuddy: Using AI to Create a Predictive Trail Conditions App

Viget is full of outdoor enthusiasts and, of course, technologists. For this year's Pointless Weekend, we brought these passions together to build TrailBuddy. This app aims to solve that eternal question: Is my favorite trail dry so I can go hike/run/ride?

While getting muddy might rekindle fond childhood memories for some, hitting a wet trail isn't great - it's bad for your gear and can cause long-term, and potentially expensive, damage to the trail itself.

There are some trail apps out there, but we wanted one that would focus on current conditions. Our favorite trail apps, like mtbproject.com, trailrunproject.com, and hikingproject.com (all owned by REI), rely on user-reported conditions. While this can be effective, the reports are frequently unreliable, as condition reports can become outdated in just a few days.

Our goal was to solve this problem by building an app that brought together location, soil type, and weather history data to create on-demand condition predictions for any trail in the US.

We built an initial version of TrailBuddy by tapping into several readily-available APIs, then running the combined data through a machine learning algorithm. (Oh, and also by bringing together a bunch of smart and motivated people and combining them with pizza and some of the magic that is our Pointless Weekends. We'll share the other Pointless Project, Scurry, with you soon.)

The quest for data.

We knew from the start this app would require data from a number of sources. As previously mentioned, we used REI’s APIs (i.e. https://www.hikingproject.com/data) as the source for basic trail information. We used each trail's latitude and longitude coordinates, as well as its elevation, to query weather and soil type. We also found data points such as a trail's total distance to be relevant to our app users and decided to include that on the front-end, too. Since we wanted to go beyond relying solely on user-reported metrics, which is how REI's current MTB project works, we came up with a list of factors that could affect the trail on a given day.

First on that list was weather.

We not only considered the impacts of the current forecast, but we also looked at the previous day’s forecast. For example, it’s safe to assume that if it’s currently raining or had been raining over the last several days, it would likely lead to muddy and unfavorable conditions for that trail. We utilized the DarkSky API (https://darksky.net/dev) to get the weather forecasts for that day, as well as the records for previous days. This included expected information, like temperature and precipitation chance. It also included some interesting data points that we realized may be factors, like precipitation intensity, cloud cover, and UV index. 
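A request for this data might have looked roughly like the sketch below. The API key is a placeholder, the field names come from DarkSky's documented response format, and records for previous days use DarkSky's "Time Machine" requests, which append a UNIX timestamp to the coordinates. (The DarkSky API has since been discontinued, so this reflects the API as it worked at the time.)

```python
# Fetch current conditions plus the previous days' records from DarkSky.
# API_KEY and the coordinates are placeholders.
import time
import requests

API_KEY = "YOUR_DARKSKY_KEY"
LAT, LON = 38.9072, -77.0369

def forecast(ts=None):
    loc = f"{LAT},{LON}" + (f",{ts}" if ts is not None else "")
    url = f"https://api.darksky.net/forecast/{API_KEY}/{loc}"
    return requests.get(url, timeout=10).json()

now = forecast()["currently"]
features = {
    "temperature": now.get("temperature"),
    "precip_chance": now.get("precipProbability"),
    "precip_intensity": now.get("precipIntensity"),
    "cloud_cover": now.get("cloudCover"),
    "uv_index": now.get("uvIndex"),
}

# One "Time Machine" request per previous day we care about.
for days_ago in range(1, 4):
    past = forecast(int(time.time()) - days_ago * 86400)
    daily = past["daily"]["data"][0]      # that day's summary block
    features[f"precip_intensity_{days_ago}d"] = daily.get("precipIntensity")
```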

But weather alone can't predict how muddy or dry a trail will be. To determine that for sure, we also wanted to use soil data to help predict how well a trail's unique soil composition recovers after precipitation. Similar amounts of rain on trails of very different soil types could lead to vastly different trail conditions. A more clay-based soil would hold water much longer, and therefore be much more unfavorable, than loamy soil. Finding a reliable source for soil type and soil drainage proved incredibly difficult. After many hours, we finally found a source through the USDA that we could use. As a side note, the USDA keeps track of lots of data points on soil that are actually pretty interesting! We can't say we're soil experts, but we felt like we got pretty close.

We used Whimsical to build our initial wireframes.

Putting our design hats on.

From the very first pitch, TrailBuddy's main differentiator from peer trail resources has been its ability to surface real-time information reliably and simply. However complicated the technology needed to collect and interpret that information, the front-end app design needed to be clean and unencumbered.

We thought about how users would naturally look for information when setting out to find a trail and what factors they’d think about when doing so. We posed questions like:

  • How easy or difficult a trail am I looking for?
  • How long is this trail?
  • What does the trail look like?
  • How far away is the trail in relation to my location?
  • What activity do I need a trail for?
  • Is this a trail I’d want to come back to in the future?

By putting ourselves in our users' shoes, we quickly identified the key features TrailBuddy needed to be relevant and useful. First, we needed filtering, so users could filter by difficulty and distance to narrow down their results to fit their activity level. Next, we needed a way to look up trails by activity type; mountain biking, hiking, and running are all activities REI's MTB API already tracks, so those made sense as a starting point. And lastly, we needed a way for the app to find trails based on your location, or at the very least to find trails within a certain distance of your current location.

We used Figma to design, prototype, and gather feedback on TrailBuddy.

Using machine learning to predict trail conditions.

As stated earlier, none of us are actual soil or data scientists. So, in order to achieve the real-time conditions reporting TrailBuddy promised, we decided to leverage machine learning to make predictions for us. Digging into the utility of machine learning was a first for all of us on this team. Luckily, there was an excellent tutorial that laid out the basics of building an ML model in Python. Provided a CSV file with inputs in the left columns and the desired output on the right, the script we generated was able to test multiple model strategies and report the effectiveness of each at predicting results, as shown below.

We assembled all of the historical weather and soil data we could find for a given latitude/longitude coordinate, compiled a roughly 1000 × 100 CSV, ran it through the Python evaluator, and found that the CART and SVM models consistently outranked the others at predicting trail status. In other words, we found a working model to run our data through and get (hopefully) reliable predictions from. The next step was to figure out which data fields were actually critical in predicting the trail status. The more we could refine our data set, the faster and smarter our predictive model could become.
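The spot-checking pattern such tutorials teach is scikit-learn's usual cross-validation loop; a condensed sketch under an assumed file and column layout (feature columns on the left, a trail-status label on the right) could look like this, with CART corresponding to scikit-learn's DecisionTreeClassifier:

```python
# Spot-check several classifiers with 10-fold cross-validation.
# "trail_data.csv" and its layout are placeholders for the weather/soil CSV.
import pandas as pd
from sklearn.model_selection import KFold, cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

df = pd.read_csv("trail_data.csv")
X, y = df.iloc[:, :-1], df.iloc[:, -1]    # inputs left, trail status right

models = {
    "LR": LogisticRegression(max_iter=1000),
    "KNN": KNeighborsClassifier(),
    "CART": DecisionTreeClassifier(),
    "NB": GaussianNB(),
    "SVM": SVC(),
}
for name, model in models.items():
    cv = KFold(n_splits=10, shuffle=True, random_state=7)
    scores = cross_val_score(model, X, y, cv=cv)
    print(f"{name}: {scores.mean():.3f} (+/- {scores.std():.3f})")
```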

We pulled in some Ruby code to take the original (and quite massive) CSV and output smaller versions to test with. Again, we're no data scientists, but we were able to cull a good majority of the data and still get a model that performed at 95% accuracy.
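They did this step in Ruby; an equivalent column-culling step sketched in Python with pandas (hypothetical column names) is only a few lines:

```python
# Write a reduced copy of the training CSV to test which columns the
# model actually needs. Column names here are hypothetical.
import pandas as pd

df = pd.read_csv("trail_data.csv")
keep = ["precip_intensity", "precip_chance", "trail_status"]
df[keep].to_csv("trail_data_small.csv", index=False)
```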

With our trained model in hand, we could serialize it into a model.pkl file ("pkl" stands for "pickle", as in we've "pickled" the model), move that file into our Rails app along with a Python script to deserialize it, pass in a dynamic set of data, and generate real-time predictions. At the end of the day, our model has a propensity to predict fantastic trail conditions (about 99% of the time, in fact…). Just one of those optimistic machine learning models, we guess.
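The pickle round-trip itself is small. This sketch shows both halves, assuming `model` is the trained classifier from the step above and that the Rails app shells out to a hypothetical predict.py with the feature values as arguments:

```python
# Training side: serialize the trained model once.
import pickle

with open("model.pkl", "wb") as f:
    pickle.dump(model, f)           # `model` from the training step above

# predict.py, the script the Rails app invokes, e.g.:
#   python predict.py 54.1 0.8 0.02 0.3 5
import sys

with open("model.pkl", "rb") as f:
    model = pickle.load(f)
row = [[float(v) for v in sys.argv[1:]]]   # one row of feature values
print(model.predict(row)[0])               # e.g. "muddy" or "dry"
```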

Where we go from here.

It was clear that, after two days, our team still wanted to do more. As a first refinement, we'd love to work more with our data set and ML model. Something quite surprising during the weekend was that we found we could remove all but two days' worth of weather data, and all of the soil data we worked so hard to dig up, and still hit 95% accuracy. Which… doesn't make a ton of sense. Perhaps the data we chose just isn't a great empirical predictor of trail status. While these are questions too big to solve in just a single weekend, we'd love to spend more time digging into this in a future iteration.





If You’re Using Beaver Builder Lite, You Need This Addon

Hey there, I’m Ben, and I’m a guest author here at WPZOOM. Today I thought I’d share with you my experience with one of their rather awesome plugins, an addon for Beaver Builder. I know the team at WPZOOM are big fans of Beaver Builder, and why not? It’s a great page builder with an excellent feature set; chances are, if […]






Creating a Block-based Theme Using Block Templates

This post outlines the steps I took to create a block-based theme version of Twenty Twenty. Thanks to Kjell Reigstad for helping develop the theme and write this post. There’s been a lot of conversation around how theme development changes as Full Site Editing using Gutenberg becomes a reality. Block templates are an experimental feature …





Creating Choropleth Map Data Visualization Using JavaScript, on COVID-19 Stats

https://www.anychart.com/blog/2020/05/06/javascript-choropleth-map-tutorial/





How To Build A Vue Survey App Using Firebase Authentication And Database

https://www.smashingmagazine.com/2020/05/vue-survey-app-firebase-authentication-database/





10 Websites and Apps All Designers Should Be Using

As designers, we’re overloaded with choices every day, but there are some apps that are absolutely worth your time and investment. Finding the best and most useful ones can be a difficult task, so we’re going to make things easy for you and give you our top 10 apps and websites we couldn’t […]






Reducing brain damage in sport without losing the thrills

When Olympic gold medallist Shona McCallin was hit on the side of her head by a seemingly innocuous shoulder challenge, she suffered what was originally thought to be a concussion.





Save time by using these builders for portfolio websites and pages

If you’re a professional wanting to showcase your products, what better way is there to do so than with a personal portfolio? Maybe one that’s presented in a way that invites close study? A portfolio used to be a folder of papers you would carry around with you when visiting one potential customer after another. […]






Need Help Choosing the Right Plugin for Your Website? Check These Options

WordPress is an ideal platform for building your own portfolio, blog, or eCommerce site. It’s packed with all the basic tools you need to build a professional-looking site. Plus, it has tools that can take your web-building skills to an even higher level. Get even more impressive results or add features to a website that […]






10 Step Tutorial: How to Design Flat Skateboards Using Adobe Illustrator

Summer is in full swing here in the States! It’s a perfect time to grab your skateboard and go cruising. Today we’re going to learn how to design flat skateboards and colorful vector longboards in Adobe Illustrator! We’ll be working with Clipping Masks, the Stroke panel, and the Pathfinder panel. Let’s get started! Tutorial Details Program: Adobe Illustrator CC Difficulty: […]






Create a NAS Icon in Just 30 Minutes Using Adobe Illustrator

Welcome back to another Illustrator tutorial from our retro hardware series! In this how-to, we’re going to learn to create a NAS icon (or Network-Attached Storage icon) using some simple geometric shapes and tools. So get your software up and running, and let’s jump straight into it! Tutorial Details: How to Create a NAS Icon Program: Adobe […]






Easy CSS Animation Using @keyframes

CSS Transitions and transforms work beautifully for creating visual interactions based on single state changes. To have more control over what happens and when, you can use the CSS animation property to create easy CSS animation using @keyframes. This technique has a wide range of design application and can be used to build dazzling pre-loaders, […]







How A Web Design Business Can Benefit From Using Accounting Applications

Accounting applications help web design businesses in many ways. As a web design service provider, you should use them to boost your business. Start by browsing some resources online that provide...





Why Choosing The Best Web Hosting Is Crucial For Your Business

Not many business owners think about hosting when building a new website for their business. But failing to choose the right web hosting can have a great impact on your website and, of course, your...







Here comes Traversty traversing the DOM

The purpose of the Traversty DOM utility is to let you traverse the DOM and manage collections of DOM elements. Its proponents acknowledge that the core Traversty traversal methods are inspired by Prototype’s DOM traversal toolkit, but they now operate in a multi-element environment that is more like jQuery and less like Prototype’s single-element implementation.







Human Activity Increasing Rate of Record-Breaking Hot Years

American Geophysical Union (AGU) press release: A new study finds human-caused global warming is significantly increasing the rate at which hot temperature records are being broken around the world. Global annual temperature records show there were 17 record hot years …





Using Funds from Disability Compensation and the GI Bill for Going Back to School

Receiving service-related disability compensation does not interfere with the funds veterans receive from the GI Bill, explains Adam.





Using Communities to Further the True Meaning of Resiliency

Service members, veterans, and their caregivers are incredibly resilient, says Adam, but learning to connect with whatever community you are in will only strengthen that resiliency.





Inform user about automatic comment closing time

To prevent spammers from flooding old articles with useless comments you can set WordPress to close comments after a certain […]





Self Reliance + Personal Uprising with John Jantsch

John Jantsch is a veteran marketer. He’s written several bestselling books, including Duct Tape Marketing and The Referral Engine. He’s out with a new book called The Self-Reliant Entrepreneur: 366 Daily Meditations to Feed Your Soul and Grow Your Business. As you might know, I’m a bit of a fan of daily habits, so of course John gives us a little preview into some of the daily explorations of thoughts and writings from notable American authors. Of course, that’s not all… we also get into:

  • Following your own path and listening to your intuition.
  • What we can learn from the rabble-rousers of our history and those who embraced counterculture to follow their own beliefs.
  • The role that self-awareness has in pursuing our dreams.
  • And much more. Enjoy!

This podcast is brought to you by CreativeLive. CreativeLive is the world’s largest hub for online creative education in photo/video, art/design, music/audio, craft/maker, money/life and the ability to make a living in any of those disciplines. They are high quality, highly curated classes taught by the world’s top experts: Pulitzer, Oscar, Grammy Award winners, New […]







Implementing Dark Mode In React Apps Using styled-components

One of the most commonly requested software features is dark mode (or night mode, as others call it). We see dark mode in the apps that we use every day. From mobile to web apps, dark mode has become vital for companies that want to take care of their users’ eyes. Dark mode is a supplemental feature that displays mostly dark surfaces in the UI. Most major companies (such as YouTube, Twitter, and Netflix) have adopted dark mode in their mobile and web apps.





How To Build A Vue Survey App Using Firebase Authentication And Database

In this tutorial, you’ll be building a Survey App, where we’ll learn to validate our users’ form data, implement authentication in Vue, and receive survey data using Vue and Firebase (a BaaS platform). As we build this app, we’ll learn how to handle form validation for different kinds of data, including reaching out to the backend to check whether an email is already taken, even before the user submits the form during sign-up.





Nonlinear singular problems with indefinite potential term. (arXiv:2005.01789v3 [math.AP] UPDATED)

We consider a nonlinear Dirichlet problem driven by a nonhomogeneous differential operator plus an indefinite potential. In the reaction we have the competing effects of a singular term and of concave and convex nonlinearities. In this paper the concave term is parametric. We prove a bifurcation-type theorem describing the changes in the set of positive solutions as the positive parameter $\lambda$ varies. This work continues our research published in arXiv:2004.12583, where $\xi \equiv 0$ and in the reaction the parametric term is the singular one.





Solving an inverse problem for the Sturm-Liouville operator with a singular potential by Yurko's method. (arXiv:2004.14721v2 [math.SP] UPDATED)

An inverse spectral problem for the Sturm-Liouville operator with a singular potential from the class $W_2^{-1}$ is solved by the method of spectral mappings. We prove the uniqueness theorem, develop a constructive algorithm for solution, and obtain necessary and sufficient conditions of solvability for the inverse problem in both the self-adjoint and the non-self-adjoint cases.





Local mollification of Riemannian metrics using Ricci flow, and Ricci limit spaces. (arXiv:1706.09490v2 [math.DG] UPDATED)

We use Ricci flow to obtain a local bi-Holder correspondence between Ricci limit spaces in three dimensions and smooth manifolds. This is more than a complete resolution of the three-dimensional case of the conjecture of Anderson-Cheeger-Colding-Tian, describing how Ricci limit spaces in three dimensions must be homeomorphic to manifolds, and we obtain this in the most general, locally non-collapsed case. The proofs build on results and ideas from recent papers of Hochard and the current authors.





Positive Geometries and Differential Forms with Non-Logarithmic Singularities I. (arXiv:2005.03612v1 [hep-th])

Positive geometries encode the physics of scattering amplitudes in flat space-time and the wavefunction of the universe in cosmology for a large class of models. Their unique canonical forms, providing such quantum mechanical observables, are characterised by having only logarithmic singularities along all the boundaries of the positive geometry. However, physical observables have logarithmic singularities only for a subset of theories. Thus, it becomes crucial to understand whether a similar paradigm can underlie their structure in more general cases. In this paper we start a systematic investigation of a geometric-combinatorial characterisation of differential forms with non-logarithmic singularities, focusing on projective polytopes and related meromorphic forms with multiple poles. We introduce the notions of covariant forms and covariant pairings. Covariant forms have poles only along the boundaries of the given polytope; moreover, their leading Laurent coefficients along any of the boundaries are still covariant forms on the specific boundary. Meromorphic forms in covariant pairing with a polytope, by contrast, are associated to a specific (signed) triangulation, in which poles on spurious boundaries do not cancel completely but their order is lowered. These meromorphic forms can be fully characterised if the polytope they are associated to is viewed as the restriction of a higher dimensional one onto a hyperplane. The canonical form of the latter can be mapped into a covariant form or a form in covariant pairing via a covariant restriction. We show how the geometry of the higher dimensional polytope determines the structure of these differential forms. Finally, we discuss how these notions are related to Jeffrey-Kirwan residues and cosmological polytopes.





Toric Sasaki-Einstein metrics with conical singularities. (arXiv:2005.03502v1 [math.DG])

We show that any toric Kähler cone with smooth compact cross-section admits a family of Calabi-Yau cone metrics with conical singularities along its toric divisors. The family is parametrized by the Reeb cone and the angles are given explicitly in terms of the Reeb vector field. The result is optimal, in the sense that any toric Calabi-Yau cone metric with conical singularities along the toric divisor (and smooth elsewhere) belongs to this family. We also provide examples and interpret our results in terms of Sasaki-Einstein metrics.





Removable singularities for Lipschitz caloric functions in time varying domains. (arXiv:2005.03397v1 [math.CA])

In this paper we study removable singularities for regular $(1,1/2)$-Lipschitz solutions of the heat equation in time varying domains. We introduce an associated Lipschitz caloric capacity and we study its metric and geometric properties and the connection with the $L^2$ boundedness of the singular integral whose kernel is given by the gradient of the fundamental solution of the heat equation.





Semiglobal non-oscillatory big bang singular spacetimes for the Einstein-scalar field system. (arXiv:2005.03395v1 [math-ph])

We construct semiglobal singular spacetimes for the Einstein equations coupled to a massless scalar field. Consistent with the heuristic analysis of Belinskii, Khalatnikov, Lifshitz or BKL for this system, there are no oscillations due to the scalar field. (This is much simpler than the oscillatory BKL heuristics for the Einstein vacuum equations.) Prior results are due to Andersson and Rendall in the real analytic case, and Rodnianski and Speck in the smooth near-spatially-flat-FLRW case. Similar to Andersson and Rendall we give asymptotic data at the singularity, which we refer to as final data, but our construction is not limited to real analytic solutions. This paper is a test application of tools (a graded Lie algebra formulation of the Einstein equations and a filtration) intended for the more subtle vacuum case. We use homological algebra tools to construct a formal series solution, then symmetric hyperbolic energy estimates to construct a true solution well-approximated by truncations of the formal one. We conjecture that the image of the map from final data to initial data is an open set of anisotropic initial data.





Converging outer approximations to global attractors using semidefinite programming. (arXiv:2005.03346v1 [math.OC])

This paper develops a method for obtaining guaranteed outer approximations for global attractors of continuous and discrete time nonlinear dynamical systems. The method is based on a hierarchy of semidefinite programming problems of increasing size with guaranteed convergence to the global attractor. The approach taken follows an established line of reasoning, where we first characterize the global attractor via an infinite dimensional linear programming problem (LP) in the space of Borel measures. The dual to this LP is in the space of continuous functions and its feasible solutions provide guaranteed outer approximations to the global attractor. For systems with polynomial dynamics, a hierarchy of finite-dimensional sum-of-squares tightenings of the dual LP provides a sequence of outer approximations to the global attractor with guaranteed convergence in the sense of volume discrepancy tending to zero. The method is very simple to use and based purely on convex optimization. Numerical examples with the code available online demonstrate the method.





Homotopy invariance of the space of metrics with positive scalar curvature on manifolds with singularities. (arXiv:2005.03073v1 [math.AT])

In this paper we study manifolds $M_\Sigma$ with fibered singularities, more specifically, a relevant space $\mathrm{Riem}^{psc}(X_\Sigma)$ of Riemannian metrics with positive scalar curvature. Our main goal is to prove that the space $\mathrm{Riem}^{psc}(X_\Sigma)$ is homotopy invariant under certain surgeries on $M_\Sigma$.





Modeling nanoconfinement effects using active learning. (arXiv:2005.02587v2 [physics.app-ph] UPDATED)

Predicting the spatial configuration of gas molecules in nanopores of shale formations is crucial for fluid flow forecasting and hydrocarbon reserves estimation. The key challenge in these tight formations is that the majority of the pore sizes are less than 50 nm. At this scale, the fluid properties are affected by nanoconfinement effects due to the increased fluid-solid interactions. For instance, gas adsorption to the pore walls could account for up to 85% of the total hydrocarbon volume in a tight reservoir. Although there are analytical solutions that describe this phenomenon for simple geometries, they are not suitable for describing realistic pores, where surface roughness and geometric anisotropy play important roles. To describe these, molecular dynamics (MD) simulations are used since they consider fluid-solid and fluid-fluid interactions at the molecular level. However, MD simulations are computationally expensive, and are not able to simulate scales larger than a few connected nanopores. We present a method for building and training physics-based deep learning surrogate models to carry out fast and accurate predictions of molecular configurations of gas inside nanopores. Since training deep learning models requires extensive databases that are computationally expensive to create, we employ active learning (AL). AL reduces the overhead of creating comprehensive sets of high-fidelity data by determining where the model uncertainty is greatest, and running simulations on the fly to minimize it. The proposed workflow enables nanoconfinement effects to be rigorously considered at the mesoscale where complex connected sets of nanopores control key applications such as hydrocarbon recovery and CO2 sequestration.
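The abstract describes the active-learning loop only at a high level. Purely as an illustration of the uncertainty-driven pattern it outlines (not the authors' workflow), here is a toy sketch in which a Gaussian process stands in for the deep surrogate and a cheap function stands in for the MD simulator:

```python
# Illustrative uncertainty-driven active-learning loop.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def run_simulation(x: float) -> float:
    """Stand-in for an expensive molecular-dynamics run."""
    return np.sin(3 * x) + 0.1 * x ** 2

rng = np.random.default_rng(0)
pool = rng.uniform(-3, 3, size=(500, 1))            # unlabeled candidate inputs
X = pool[rng.choice(len(pool), 5, replace=False)]   # small seed set
y = np.array([run_simulation(v[0]) for v in X])

model = GaussianProcessRegressor()
for _ in range(10):
    model.fit(X, y)
    _, std = model.predict(pool, return_std=True)   # model uncertainty
    pick = pool[np.argmax(std)]                     # where it is least sure
    X = np.vstack([X, pick])                        # run the simulator there
    y = np.append(y, run_simulation(pick[0]))
```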





Temporal Event Segmentation using Attention-based Perceptual Prediction Model for Continual Learning. (arXiv:2005.02463v2 [cs.CV] UPDATED)

Temporal event segmentation of a long video into coherent events requires a high-level understanding of activities' temporal features. The event segmentation problem has been tackled by researchers in an offline training scheme, either by providing full or weak supervision through manually annotated labels or by self-supervised epoch-based training. In this work, we present a continual learning perceptual prediction framework (influenced by cognitive psychology) capable of temporal event segmentation through understanding of the underlying representation of objects within individual frames. Our framework also outputs attention maps which effectively localize and track event-causing objects in each frame. The model is tested on a wildlife monitoring dataset in a continual training manner, resulting in an $80\%$ recall rate at a $20\%$ false positive rate for frame-level segmentation. Activity-level testing yielded an $80\%$ activity recall rate with one false activity detection every 50 minutes.





Prediction of Event Related Potential Speller Performance Using Resting-State EEG. (arXiv:2005.01325v3 [cs.HC] UPDATED)

Event-related potential (ERP) spellers can be utilized for device control and communication by locked-in or severely injured patients. However, problems such as inter-subject performance instability and ERP-illiteracy are still unresolved. It is therefore necessary to predict classification performance before running an ERP speller in order to use it efficiently. In this study, we investigated correlations between ERP speller performance and resting-state EEG recorded before the speller session. Specifically, we used spectral power and functional connectivity across four brain regions and five frequency bands. As a result, delta power in the frontal region and functional connectivity in the delta, alpha, and gamma bands are significantly correlated with ERP speller performance. We also predicted ERP speller performance using these resting-state EEG features. These findings may contribute to investigating ERP-illiteracy and to identifying appropriate alternatives for each user.
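As a rough illustration of the resting-state spectral-power features the abstract mentions (sampling rate, channel count, and band edges are assumptions, not the study's settings):

```python
# Band-power sketch for resting-state EEG via Welch's method.
import numpy as np
from scipy.signal import welch

fs = 250                                    # assumed sampling rate (Hz)
eeg = np.random.randn(8, 60 * fs)           # stand-in: 8 channels, 60 s

freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)
delta = (freqs >= 1) & (freqs < 4)          # assumed delta band (1-4 Hz)
delta_power = psd[:, delta].mean(axis=1)    # per-channel delta power
print(delta_power)
```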





SPECTER: Document-level Representation Learning using Citation-informed Transformers. (arXiv:2004.07180v3 [cs.CL] UPDATED)

Representation learning is a critical ingredient for natural language processing systems. Recent Transformer language models like BERT learn powerful textual representations, but these models are targeted towards token- and sentence-level training objectives and do not leverage information on inter-document relatedness, which limits their document-level representation power. For applications on scientific documents, such as classification and recommendation, the embeddings power strong performance on end tasks. We propose SPECTER, a new method to generate document-level embedding of scientific documents based on pretraining a Transformer language model on a powerful signal of document-level relatedness: the citation graph. Unlike existing pretrained language models, SPECTER can be easily applied to downstream applications without task-specific fine-tuning. Additionally, to encourage further research on document-level models, we introduce SciDocs, a new evaluation benchmark consisting of seven document-level tasks ranging from citation prediction, to document classification and recommendation. We show that SPECTER outperforms a variety of competitive baselines on the benchmark.
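For readers who want to try the embeddings, a minimal sketch, assuming the released weights are published on the Hugging Face hub under the id allenai/specter (check the paper's repository to confirm):

```python
# Embed a paper as title [SEP] abstract and take the [CLS] vector.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allenai/specter")
model = AutoModel.from_pretrained("allenai/specter")

title = "SPECTER: Document-level Representation Learning using Citation-informed Transformers"
abstract = "Representation learning is a critical ingredient..."

inputs = tokenizer(title + tokenizer.sep_token + abstract,
                   return_tensors="pt", truncation=True, max_length=512)
embedding = model(**inputs).last_hidden_state[:, 0, :]   # [CLS] embedding
print(embedding.shape)
```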





Improved RawNet with Feature Map Scaling for Text-independent Speaker Verification using Raw Waveforms. (arXiv:2004.00526v2 [eess.AS] UPDATED)

Recent advances in deep learning have facilitated the design of speaker verification systems that directly input raw waveforms. For example, RawNet extracts speaker embeddings from raw waveforms, which simplifies the process pipeline and demonstrates competitive performance. In this study, we improve RawNet by scaling feature maps using various methods. The proposed mechanism utilizes a scale vector that adopts a sigmoid non-linear function. It refers to a vector with dimensionality equal to the number of filters in a given feature map. Using a scale vector, we propose to scale the feature map multiplicatively, additively, or both. In addition, we investigate replacing the first convolution layer with the sinc-convolution layer of SincNet. Experiments performed on the VoxCeleb1 evaluation dataset demonstrate the effectiveness of the proposed methods, and the best performing system reduces the equal error rate by half compared to the original RawNet. Expanded evaluation results obtained using the VoxCeleb1-E and VoxCeleb-H protocols marginally outperform existing state-of-the-art systems.
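As a rough sketch of the multiplicative variant of the feature-map scaling described here (the pooling choice and shapes are assumptions, not the authors' code):

```python
# Feature-map scaling: a sigmoid scale vector, one entry per filter,
# applied multiplicatively across the time axis.
import torch
import torch.nn as nn

class FeatureMapScaling(nn.Module):
    def __init__(self, n_filters: int):
        super().__init__()
        self.fc = nn.Linear(n_filters, n_filters)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, filters, time)
        s = torch.sigmoid(self.fc(x.mean(dim=-1)))  # scale vector in (0, 1)
        # (the additive variant would return x + s.unsqueeze(-1))
        return x * s.unsqueeze(-1)                  # multiplicative scaling

fms = FeatureMapScaling(128)
print(fms(torch.randn(2, 128, 400)).shape)          # torch.Size([2, 128, 400])
```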





Hierarchical Neural Architecture Search for Single Image Super-Resolution. (arXiv:2003.04619v2 [cs.CV] UPDATED)

Deep neural networks have exhibited promising performance in image super-resolution (SR). Most SR models follow a hierarchical architecture that contains both the cell-level design of computational blocks and the network-level design of the positions of upsampling blocks. However, designing SR models heavily relies on human expertise and is very labor-intensive. More critically, these SR models often contain a huge number of parameters and may not meet the requirements of computation resources in real-world applications. To address the above issues, we propose a Hierarchical Neural Architecture Search (HNAS) method to automatically design promising architectures with different requirements of computation cost. To this end, we design a hierarchical SR search space and propose a hierarchical controller for architecture search. Such a hierarchical controller is able to simultaneously find promising cell-level blocks and network-level positions of upsampling layers. Moreover, to design compact architectures with promising performance, we build a joint reward by considering both the performance and computation cost to guide the search process. Extensive experiments on five benchmark datasets demonstrate the superiority of our method over existing methods.





SCAttNet: Semantic Segmentation Network with Spatial and Channel Attention Mechanism for High-Resolution Remote Sensing Images. (arXiv:1912.09121v2 [cs.CV] UPDATED)

High-resolution remote sensing images (HRRSIs) contain substantial ground object information, such as texture, shape, and spatial location. Semantic segmentation, which is an important task for element extraction, has been widely used in processing mass HRRSIs. However, HRRSIs often exhibit large intraclass variance and small interclass variance due to the diversity and complexity of ground objects, thereby bringing great challenges to a semantic segmentation task. In this paper, we propose a new end-to-end semantic segmentation network, which integrates lightweight spatial and channel attention modules that can refine features adaptively. We compare our method with several classic methods on the ISPRS Vaihingen and Potsdam datasets. Experimental results show that our method can achieve better semantic segmentation results. The source codes are available at https://github.com/lehaifeng/SCAttNet.





Biologic and Prognostic Feature Scores from Whole-Slide Histology Images Using Deep Learning. (arXiv:1910.09100v4 [q-bio.QM] UPDATED)

Histopathology is a reflection of molecular changes and provides prognostic phenotypes representing disease progression. In this study, we introduced feature scores generated from hematoxylin and eosin histology images based on deep learning (DL) models developed for prostate pathology. We demonstrated that these feature scores were significantly prognostic for time-to-event endpoints (biochemical recurrence and cancer-specific survival) and simultaneously had molecular biologic associations with relevant genomic alterations and molecular subtypes, using already-trained DL models that were not previously exposed to the datasets of the current study. Further, we discussed the potential of such feature scores to improve the current tumor grading system and the challenges associated with tumor heterogeneity and the development of prognostic models from histology images. Our findings uncover the potential of feature scores from histology images as digital biomarkers in precision medicine and as an expanding utility for digital pathology.