
10 Websites and Apps All Designers Should Be Using

As designers, we’re overloaded with choices every day, but some apps are absolutely worth your time and investment. Finding the best and most useful ones can be a difficult task, so we’re going to make things easy for you and give you our top 10 apps and websites we couldn’t […]

Read More at 10 Websites and Apps All Designers Should Be Using





Treating PTSD Involves Science, Counseling, Group Support

In the years since he had returned from Vietnam, Elmer “Snubby” Burket was a self-described workaholic, raising a son, keeping up his house and always taking jobs where he could be by himself as he tried to put the war behind him.





Support Communication During Conversation





Remapping the Neural Pathways of Humanity

The pandemic has changed the daily lives of everyone. How we work, how we shop, and how we interact with each other are all shifting. Comparing life as it is now with how it used to be can lead to sadness or despair and what's called "ambiguous loss."





Benefits of Approval Studio Proofing Tool for Designers and Creative Teams

Among all of the design agencies’ headaches, artwork proofing is probably one of the most acute. Forwarding countless requests, following up with your approvers to remind them that they have a file to check, searching for their feedback in an endless pile of emails or messages… Quite daunting, to say the least, and quite […]





5 Important Tips When Building Your First Mobile App

Building a mobile application is a complex process, and mistakes can be costly in time and money. To make sure that your mobile app projects are a success, here are a few tips that will be helpful. 1. Plan Ahead: When building anything complex, you should never start without a plan. Building a mobile app […]





How to personalize the mobile experience for app users

Mobile user experience has, in a sense, ‘imposed itself’ through the development and improvement of mobile communication devices. In fact, it is the quality of the user experience that divides outstanding apps from their less outstanding counterparts. The same factor enables startups to learn from big brands and to improve their products. User experience for mobile applications – […]





Mobile App Website Inspiration: 20 Application Websites and Tips to Help You Design One

It may seem a bit curious that more than a few app websites are given only a cursory inspection by app owners, and are then largely ignored because visitors have gone elsewhere. The reason for a given website may be completely valid in that it addresses a well-established need, yet it has a poor […]

The post Mobile App Website Inspiration: 20 Application Websites and Tips to Help You Design One appeared first on WebAppers.





Themify Shoppe – The Ultimate WooCommerce WordPress Theme

I’m excited to announce that Themify has released another awesome theme – Themify Shoppe. Designed by Liam McKay and coded by the Themify team, Shoppe works hand-in-hand with WooCommerce, making it the ultimate multi-purpose eCommerce theme. It features the popular drag-and-drop Themify Builder that can help you design and build your online store to […]


The post Themify Shoppe – The Ultimate WooCommerce WordPress Theme appeared first on Web Designer Wall.





Upper Yosemite Falls & Half Dome Moonbow

This past week was the optimal time to photograph moonbows in Yosemite Valley. I revisited photographing the moonbow at Upper Yosemite Falls as I had last year, but this time there was considerably more water, and as a result the moonbow (a rainbow by moonlight) was more easily seen. It was considerably larger, more vivid in color, and wider-arching. Conditions were great, and at times a little too good, as the three cameras I set up were completely drenched. If you’d like to read about what it took to get this photo, be sure to check out my last blog post, Upper Yosemite Falls Moonbow – Getting The Shot, as it goes into a lot of detail about the hike and the challenges I faced. If you’re curious about gear and settings, this was taken with a Canon 5D Mark IV and Canon 11-24mm f/4 lens. Settings were ISO 640, 15 seconds at f/4.





Older Arctic Sea Ice is Disappearing

Video by NASA’s Goddard Space Flight Center / Jefferson Beck. Arctic sea ice has not only been shrinking in surface area in recent years; it’s becoming younger and thinner as well. In this animation, where the ice cover almost looks … Continue reading





How A Web Design Business Can Benefit From Using Accounting Applications

Accounting applications help web design businesses in many ways. As a web design service provider, you should use them to boost your business. Start by browsing some resources online that provide...





Top 5 Best Internet Live Support Extensions to Increase Customer Interactions

Creative interactions call for creative measures - numerous extensions reduce, minimize, or dilute customer frustration and resolve issues quickly without the customer support team needing....






Why's it so hard to get the cool stuff approved?

The classic adage is “good design speaks for itself.” Which would mean that if something’s as good of an idea as you think it is, a client will instantly see that it’s good too, right?

Here at Viget, we’re always working with new and different clients. Each with their own challenges and sensibilities. But after ten years of client work, I can’t help but notice a pattern emerge when we’re trying to get approval on especially cool, unconventional parts of a design.

So let’s break down some of those patterns to hopefully better understand why clients hesitate, and what strategies we’ve been using lately to help get the work we’re excited about approved.

Imagine this: the parallax homepage with elements that move around in surprising ways or a unique navigation menu that conceptually reinforces a site’s message. The way the content cards on a page will, like, be literal cards that will shuffle and move around. Basically, any design that feels like an exciting, novel challenge, will need the client to “get it.” And that often turns out to be the biggest challenge of all.

There are plenty of practical reasons cool designs get shot down. A client is usually more than one stakeholder, and more than the team of people you’re working with directly. On any project, there’s an amount of telephone you end up playing. Or, there’s always the classic foes: budgets and deadlines. Any idea should fit in those predetermined constraints. But as a project goes along, budgets and deadlines find a way to get tighter than you planned.

But innovative designs and interactions can seem especially scary for clients to approve. There are three fears that often pop up on projects:

The fear of change. 

Maybe the client expected something simple, a light refresh. Something that doesn’t challenge their design expectations or require more time and effort to understand. And on our side, maybe we didn’t sufficiently ease them into our way of thinking and open them up to why we think something bigger and bolder is the right solution for them. Baby steps, y’all.

The fear of the unknown. 

Or, less dramatically, a lack of understanding of the medium. In the past, we have struggled with how to present an interactive, animated design to a client before it’s actually built. Looking at a site that does something conceptually similar as an example can be tough. It’s asking a lot of a client’s imagination to show them a site about boots that has a cool spinning animation and get meaningful feedback about how a spinning animation would work on their site about after-school tutoring. Or maybe we’ve created static designs, then talked around what we envision happening. Again, what seems so clear in our minds as professionals entrenched in this stuff every day can be tough for someone outside the tech world to clearly understand.

    The fear of losing control. 

We’re all about learning from past mistakes. So let’s say, after dealing with that fear of the unknown on a project, next time you go in the opposite direction. You invest time up front creating something polished. Maybe you even get the developer to build a prototype that moves and looks like the real thing. You’ve taken all the vague mystery out of the process, so a client will be thrilled, right? Surprise, probably not! Most clients are working with you because they want to conquer the noble quest that is their redesign together. When we jump straight to showing something that looks polished, even if it’s not really, it can feel like we jumped ahead without keeping them involved. Like we took away their input. They can also feel demotivated to give good, meaningful feedback on a polished prototype because it looks “done.”

    So what to do? Lately we have found low-fidelity prototypes to be a great tool for combating these fears and better communicating our ideas.

    What are low-fidelity prototypes?

Low-fidelity prototypes are a tool that designers can create quickly to illustrate an idea, without sinking time into making it pixel-perfect. Some recent examples of prototypes we've created include a clickable Figma or InVision prototype put together with Whimsical wireframes:

A rough animation created in Principle illustrating less programmatic animation:

    And even creating an animated storyboard in Photoshop:

    They’re rough enough that there’s no way they could be confused for a final product. But customized so that a client can immediately understand what they’re looking at and what they need to respond to. Low-fidelity prototypes hit a sweet spot that addresses those client fears head on.

    That fear of change? A lo-fi prototype starts rough and small, so it can ease a client into a dramatic change without overwhelming them. It’s just a first step. It gives them time to react and warm up to something that’ll ultimately be a big change.

    It also cuts out the fear of the unknown. Seeing something moving around, even if it’s rough, can be so much more clear than talking ourselves in circles about how we think it will move, and hoping the client can imagine it. The feature is no longer an enigma cloaked in mystery and big talk, but something tangible they can point at and ask concrete questions about.

    And finally, a lo-fi prototype doesn’t threaten a client’s sense of control. Low-fidelity means it’s clearly still a work in progress! It’s just an early step in the creative process, and therefore communicates that we’re still in the middle of that process together. There’s still plenty of room for their ideas and feedback.

    Lo-fi prototypes: client-tested, internal team-approved

    There are a lot of reasons to love lo-fi prototypes internally, too!

    They’re quick and easy. 

    We can whip up multiple ideas within a few hours, without sinking the time into getting our hearts set on any one thing. In an agency setting especially, time is limited, so the faster we can get an idea out of our own heads, the better.

    They’re great to share with developers. 

    Ideally, the whole team is working together simultaneously, collaborating every step of the way. Realistically, a developer often doesn’t have time during a project’s early design phase. Lo-fi prototypes are concrete enough that a developer can quickly tell if building an idea will be within scope. It helps us catch impractical ideas early and helps us all collaborate to create something that’s both cool and feasible.

      Stay tuned for posts in the near future diving into some of our favorite processes for creating lo-fi prototypes!



      • Design & Content


      TrailBuddy: Using AI to Create a Predictive Trail Conditions App

      Viget is full of outdoor enthusiasts and, of course, technologists. For this year's Pointless Weekend, we brought these passions together to build TrailBuddy. This app aims to solve that eternal question: Is my favorite trail dry so I can go hike/run/ride?

      While getting muddy might rekindle fond childhood memories for some, exposing your gear to the elements isn’t great – it’s bad for your equipment and can cause long-term, and potentially expensive, damage to the trail.

There are some trail apps out there, but we wanted one that would focus on current conditions. Currently, our favorite trail apps, like mtbproject.com, trailrunproject.com, and hikingproject.com (all owned by REI), rely on user-reported conditions. While this can be effective, the reports are frequently unreliable, as condition reports can become outdated in just a few days.

      Our goal was to solve this problem by building an app that brought together location, soil type, and weather history data to create on-demand condition predictions for any trail in the US.

      We built an initial version of TrailBuddy by tapping into several readily-available APIs, then running the combined data through a machine learning algorithm. (Oh, and also by bringing together a bunch of smart and motivated people and combining them with pizza and some of the magic that is our Pointless Weekends. We'll share the other Pointless Project, Scurry, with you soon.)

      The quest for data.

We knew from the start this app would require data from a number of sources. As previously mentioned, we used REI’s APIs (i.e. https://www.hikingproject.com/data) as the source for basic trail information. We used each trail’s latitude and longitude coordinates, as well as its elevation, to query weather and soil type. We also found data points such as a trail’s total distance to be relevant to our app users and decided to include that on the front-end, too. Since we wanted to go beyond relying solely on user-reported metrics, which is how REI’s current MTB project works, we came up with a list of factors that could affect the trail for that day.
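
As an illustration of that first data-gathering step, here is a minimal sketch of querying nearby trail records in Python. It assumes the Hiking Project get-trails endpoint and its lat/lon/maxDistance/key parameters; the API key, example coordinates, and response field names are placeholders to verify against the live API.

```python
import requests

HIKING_PROJECT_KEY = "YOUR_API_KEY"  # placeholder; issued by the Hiking Project data portal

def fetch_trails(lat, lon, max_distance_miles=10, max_results=50):
    """Fetch basic trail records near a point via the (assumed) get-trails endpoint."""
    resp = requests.get(
        "https://www.hikingproject.com/data/get-trails",
        params={
            "lat": lat,
            "lon": lon,
            "maxDistance": max_distance_miles,
            "maxResults": max_results,
            "key": HIKING_PROJECT_KEY,
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("trails", [])

if __name__ == "__main__":
    # Example coordinates; field names like "name", "length", "latitude" are assumptions
    # about the response shape and should be checked against the API's documentation.
    for trail in fetch_trails(38.5733, -109.5498):
        print(trail["name"], trail["length"], trail["latitude"], trail["longitude"])
```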

      First on that list was weather.

      We not only considered the impacts of the current forecast, but we also looked at the previous day’s forecast. For example, it’s safe to assume that if it’s currently raining or had been raining over the last several days, it would likely lead to muddy and unfavorable conditions for that trail. We utilized the DarkSky API (https://darksky.net/dev) to get the weather forecasts for that day, as well as the records for previous days. This included expected information, like temperature and precipitation chance. It also included some interesting data points that we realized may be factors, like precipitation intensity, cloud cover, and UV index. 
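
The weather lookup can be sketched the same way. This assumes DarkSky’s documented “Time Machine” request format (key, latitude, longitude, and a Unix timestamp in the URL path); the fields pulled out are the ones mentioned above, and the key is a placeholder.

```python
import time
import requests

DARKSKY_KEY = "YOUR_DARKSKY_KEY"  # placeholder

def daily_weather(lat, lon, days_back=3):
    """Collect daily summaries for today and the previous `days_back` days."""
    records = []
    now = int(time.time())
    for offset in range(days_back + 1):
        ts = now - offset * 86400  # step back one day at a time
        url = f"https://api.darksky.net/forecast/{DARKSKY_KEY}/{lat},{lon},{ts}"
        resp = requests.get(url, params={"exclude": "hourly,minutely"}, timeout=10)
        resp.raise_for_status()
        day = resp.json()["daily"]["data"][0]  # one daily data point per Time Machine request
        records.append({
            "temperatureHigh": day.get("temperatureHigh"),
            "precipProbability": day.get("precipProbability"),
            "precipIntensity": day.get("precipIntensity"),
            "cloudCover": day.get("cloudCover"),
            "uvIndex": day.get("uvIndex"),
        })
    return records
```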

But weather alone can’t predict how muddy or dry a trail will be. To determine that for sure, we also wanted to use soil data to help predict how well a trail’s unique soil composition recovers after precipitation. Similar amounts of rain on trails of very different soil types could lead to vastly different trail conditions. A more clay-based soil would hold water much longer, and therefore be much more unfavorable, than loamy soil. Finding a reliable source for soil type and soil drainage proved incredibly difficult. After many hours, we finally found a source through the USDA that we could use. As a side note—the USDA keeps track of lots of data points on soil information that’s actually pretty interesting! We can’t say we’re soil experts, but we felt like we got pretty close.

      We used Whimsical to build our initial wireframes.

      Putting our design hats on.

From the very first pitch for this app, TrailBuddy’s main differentiator from peer trail resources has been its ability to surface real-time information reliably and simply. As complicated as the technology needed to collect and interpret the information was, the front-end app design needed to be clean and unencumbered.

      We thought about how users would naturally look for information when setting out to find a trail and what factors they’d think about when doing so. We posed questions like:

      • How easy or difficult of a trail are they looking for?
      • How long is this trail?
      • What does the trail look like?
      • How far away is the trail in relation to my location?
• What activity do I need a trail for?
      • Is this a trail I’d want to come back to in the future?

By putting ourselves in our users’ shoes we quickly identified key features TrailBuddy needed to have to be relevant and useful. First, we needed filtering, so users could filter by difficulty and distance to narrow down their results to fit their activity level. Next, we needed a way to look up trails by activity type—mountain biking, hiking, and running are all types of activities REI’s MTB API tracks already so those made sense as a starting point. And lastly, we needed a way for the app to find trails based on your location, or at the very least the ability to find a trail within a certain distance of your current location.

      We used Figma to design, prototype, and gather feedback on TrailBuddy.

      Using machine learning to predict trail conditions.

As stated earlier, none of us are actual soil or data scientists. So, in order to achieve the real-time conditions reporting TrailBuddy promised, we decided to leverage machine learning to make predictions for us. Digging into the utility of machine learning was a first for all of us on this team. Luckily, there was an excellent tutorial that laid out the basics of building an ML model in Python. Provided with a CSV file with inputs in the left columns and the desired output on the right, the script we generated was able to test multiple model strategies and output the effectiveness of each in predicting results, shown below.

We assembled all of the historical weather and soil data we could find for a given latitude/longitude coordinate, compiled a 1000 × 100 CSV, ran it through the Python evaluator, and found that the CART and SVM models consistently outranked the others in terms of predicting trail status. In other words, we found a working model to run our data through and get (hopefully) reliable predictions from. The next step was to figure out which data fields were actually critical in predicting the trail status. The more we could refine our data set, the faster and smarter our predictive model could become.
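
For readers curious what that evaluation step can look like, here is a minimal sketch in the spirit of the tutorial mentioned above. It assumes scikit-learn (the post doesn’t name a library), and the CSV path and column layout are hypothetical; categorical features would need to be encoded as numbers first.

```python
import pandas as pd
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier      # CART
from sklearn.svm import SVC                          # SVM
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical layout: numeric weather/soil features in the left columns,
# observed trail status ("dry", "muddy", ...) in the rightmost column.
data = pd.read_csv("trail_history.csv")
X = data.iloc[:, :-1].values
y = data.iloc[:, -1].values

models = {
    "CART": DecisionTreeClassifier(),
    "SVM": SVC(gamma="auto"),
    "LR": LogisticRegression(max_iter=1000),
    "KNN": KNeighborsClassifier(),
}

# 10-fold cross-validation, reporting mean accuracy for each model strategy.
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=7)
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} (+/- {scores.std():.3f})")
```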

We pulled in some Ruby code to take the original (and quite massive) CSV, and output smaller versions to test with. Now again, we’re no data scientists here, but we were able to cull out a good majority of the data and still get a model that performed at 95% accuracy.
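
The team did this culling in Ruby; purely as an illustration of the same step, an equivalent pass in Python with pandas might look like the sketch below. The file and column names are hypothetical.

```python
import pandas as pd

# Hypothetical column names: keep only the fields that proved predictive,
# plus the label, and write a smaller CSV for the evaluator above.
keep = ["precip_intensity_day0", "precip_intensity_day1", "trail_status"]

full = pd.read_csv("trail_history.csv")
full[keep].to_csv("trail_history_small.csv", index=False)
```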

With our trained model in hand, we could serialize it into a model.pkl file (pkl stands for “pickle,” as in we’ve “pickled” the model), move that file into our Rails app along with a Python script to deserialize it, pass in a dynamic set of data, and generate real-time predictions. At the end of the day, our model has a propensity to predict fantastic trail conditions (about 99% of the time in fact…). Just one of those optimistic machine learning models, we guess.
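
That serialize/deserialize round trip, sketched under the same assumptions (a scikit-learn model, hypothetical file names and feature layout). In practice the Rails app would shell out to something like predict.py and read its output.

```python
# train_and_save.py -- fit the chosen model on the culled CSV and pickle it
import pickle
import pandas as pd
from sklearn.tree import DecisionTreeClassifier  # CART

data = pd.read_csv("trail_history_small.csv")
X, y = data.iloc[:, :-1].values, data.iloc[:, -1].values

model = DecisionTreeClassifier().fit(X, y)
with open("model.pkl", "wb") as f:
    pickle.dump(model, f)
```

```python
# predict.py -- called from the Rails app with today's feature values as arguments,
# e.g. `python predict.py 0.12 0.0`; prints a predicted trail status such as "dry"
import pickle
import sys

with open("model.pkl", "rb") as f:
    model = pickle.load(f)

features = [[float(v) for v in sys.argv[1:]]]  # one row of dynamic feature values
print(model.predict(features)[0])
```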

      Where we go from here.

After two days, it was clear that our team still wanted to do more. As a first refinement, we’d love to work more with our data set and ML model. Something that was quite surprising during the weekend was that we found we could remove all but two days’ worth of weather data, and all of the soil data we worked so hard to dig up, and still hit 95% accuracy. Which … doesn’t make a ton of sense. Perhaps the data we chose to predict trail conditions just isn’t a great empirical predictor of trail status. While these are questions too big to solve in just a single weekend, we'd love to spend more time digging into this in a future iteration.



      • News & Culture


      Scurry: A Race-To-Finish Scavenger Hunt App

      We have a lot of traditions here at Viget, many of which you may have read about - TTT, FLF, Pointless Weekend. There are others, but you have to be an insider for more information on those.

      Pointless Weekend is one of our favorite traditions, though. It’s been around over a decade and some pretty fun work has come out of it over the years, like Storyboard, Baby Bookie, and Short Order. At a high level, we take 48 hours to build a tool, experiment, or stunt as a team, across all four of our offices. These projects are entirely separate from our client work and we use them to try out new technologies, explore roles on the team, and stress-test our processes.

      The first step for a Pointless Weekend is assembling the teams. We had two teams this year, with a record number of participants. You can read about TrailBuddy, what the other team built, here.

      The Scurry team was split between the DC and Durham offices, so all meetings were held via Hangout.

      Once we were assembled, we set out to understand the constraints and the goals of our Pointless Project. We went into this weekend with an extra pep in our step, as we were determined to build something for the upcoming Viget 20th anniversary TTT this summer. Here’s what we knew we wanted:

      1. An activity all Vigets could do together, where they could create memories, and share broadly on social
      2. Something that we could use in a spotty network at C Lazy U Ranch in Colorado
      3. A product we can share with others: corporate groups, families and friends, schools, bachelor/ette parties

      We landed on a scavenger hunt native app, which we named Scurry (Scavenger + Hurry = Scurry. Brilliant, right?). There are already a few scavenger apps available, so we set out to create something that was

      • Quick and easy to set up hunts
      • Free and intuitive for users
      • A nice combination of trivia and activities
      • Social! We wanted to enable teams to share photos and progress

One of the main reasons we have Pointless Weekends is to test out new technologies and processes. In that vein, we tried out Notion as our central organizing tool - we used it for user journeys, data modeling, and even writing tickets, which we typically use GitHub for.

      We tested out Notion as our primary tool, writing tickets and tracking progress.

      When we built the app, we needed to prepare for spotty network service, as internet connectivity isn’t guaranteed at C Lazy U Ranch – where our Viget20 celebration will be. A Progressive Web Application (PWA) didn't make sense for our tech requirements, so we chose the route of creating a native application.

There are a number of options available to build native applications. But, as we were looking to make as much progress as possible in 48 hours, we chose one of our favorite frameworks: React Native. React Native allows developers to build true, cross-platform native applications, using some of our favorite technologies: JavaScript, the React framework, and a native-specific variant of CSS. We decided on the turn-key solution Expo. Expo has extra tooling allowing for easy development, deployment, and debugging.

This is a snapshot of our app and Expo.

      Our frontend developers were able to immediately dive in making screens and styling components, and quickly made the mockups in Whimsical a reality.

On the backend, we used the supported library to connect to the backend datastore, Firebase. Firebase is a hosted solution for data storage, with key features built in, like authentication, realtime updates, and offline support. Our backend developer worked behind the frontend developers, hooking those views up to live data.

      Both of these tools, Expo and Firebase, were easy to use and allowed us to focus on building a working application quickly, rather than being mired in setup or bespoke solutions to common problems.

      Whimsical is one of our favorite tools for building out mockups of an app.

      We made impressive progress in our 48-hour sprint, but there’s still some work to do. We have some additional features we hope to add before TTT, which will require additional testing and refining. For now, stay tuned and sign up for our newsletter. We’ll be sure to share when Scurry is ready for the world!



      • News & Culture


      5 things to Note in a New Phoenix 1.5 App

Yesterday (Apr 22, 2020) Phoenix 1.5 was officially released.

There’s a long list of changes and improvements, but the big feature is better integration with LiveView. I’ve previously written about why LiveView interests me, so I was quite excited to dive into this release. After watching this awesome Twitter clone in 15 minutes demo from Chris McCord, I had to try out some of the new features. I generated a new Phoenix app with the --live flag, installed dependencies, and started a server. Here are five new features I noticed.

      1. Database actions in browser

      Oops! Looks like I forgot to configure the database before starting the server. There’s now a helpful message and a button in the browser that can run the command for me. There’s a similar button when migrations are pending. This is a really smooth UX to fix a very common error while developing.

      2. New Tagline!

      Peace-of-mind from prototype to production

      This phrase looked unfamiliar, so I went digging. Turns out that the old tagline was “A productive web framework that does not compromise speed or maintainability.” (I also noticed that it was previously “speed and maintainability” until this PR from 2019 was opened on a dare to clarify the language.)

Chris McCord updated the language while adding phx.new --live. I love this framing, particularly for LiveView. I am very excited about the progressive enhancement path for LiveView apps. A project can start out with regular, server-rendered HTML templates. This is a very productive way to work, and a great way to start a prototype for just about any website. Updating those templates to work with LiveView is an easier lift than a full rebuild in React. And finally, when you’re in production you have the peace-of-mind that the reliable BEAM provides.

      3. Live dependency search

      There’s now a big search bar right in the middle of the page. You can search through the dependencies in your app and navigate to the hexdocs for them. This doesn’t seem terribly useful, but is a cool demo of LiveView. The implementation is a good illustration of how compact a feature like this can be using LiveView.

      4. LiveDashboard

      This is the really cool one. In the top right of that page you see a link to LiveDashboard. Clicking it will take you to a page that looks like this.

      This page is built with LiveView, and gives you a ton of information about your running system. This landing page has version numbers, memory usage, and atom count.

      Clicking over to metrics brings you to this page.

      By default it will tell you how long average queries are taking, but the metrics are configurable so you can define your own custom telemetry options.

      The other tabs include process info, so you can monitor specific processes in your system:

And ETS tables, the in-memory storage that many apps use for caching:

      The dashboard is a really nice thing to get out of the box and makes it free for application developers to monitor their running system. It’s also developing very quickly. I tried an earlier version a week ago which didn’t support ETS tables, ports or sockets. I made a note to look into adding them, but it's already done! I’m excited to follow along and see where this project goes.

      5. New LiveView generators

1.5 introduces a new generator, mix phx.gen.live. Like other generators, it will create all the code you need for a basic resource in your app, including the LiveView modules. The interesting part here is that it introduces patterns for organizing LiveView code, which is something I have previously been unsure about. At first glance, the new organization makes sense and feels like a good approach. I look forward to seeing how this works on a real project.

      Conclusion

      The 1.5 release brings more changes under the hood of course, but these are the first five differences you’ll notice after generating a new Phoenix 1.5 app with LiveView. Congratulations to the entire Phoenix team, but particularly José Valim and Chris McCord for getting this work released.



      • Code
      • Back-end Engineering


      Windows 8 HTML5 WinRT RSS reader app

WinJS is a JavaScript framework for Windows 8, and David Rousset uses it here to create a quick RSS reader, showing how in a tutorial series. The first article shows how to build a welcome screen that uses the WinJS ListView control, with Blend and CSS3. The second tutorial shows work on the … Read the rest...







      Smartwatch Showdown: Apple Watch vs. Fitbit Versa

      In the world of smartwatches, the two big contenders are the Apple Watch and the Fitbit Versa. The Smartwatch Showdown infographic from The Watchstrap is very timely with recent news that Google has just acquired Fitbit.

      In the world of wearable gadgets, smartwatches are all the rage at the moment. The smartwatch market is growing by the day, and new and improved devices are constantly being released. This means that picking the right smartwatch can be a real head-scratcher. To help you choose the right device for your needs, we’ve compared two of the hottest smartwatches on the market: the Apple Watch Series 4 and Fitbit Versa!

If you want to find out which of these devices came out on top in the end, don’t miss the comprehensive infographic below!

First, this is a great use of infographics in content marketing! The Watchstrap is an online retailer of watch bands, and the infographic is a comparison design without being a sales pitch. It draws in traffic by providing valuable information, which builds credibility for their brand.

      There are a handful of things I didn’t like about the design itself that could be easily improved to make this a better infographic design:

• Too much text. I realize there isn’t much data to work with, but they need to cut down the text in the infographic. Paragraphs of explanation don’t belong in the infographic; they belong on the landing page. The infographic should be short and draw readers in to the website if they want to learn more.

      • The scale is wrong in the Size & Design section of the infographic. The dimensions of the Apple Watch are larger, but the graphic illustration on the page is smaller. The illustrations should be visually correct to scale.

      • Eliminate any word wrap when possible. There are a number of list points that have one hanging word wrapping to a second line. This could be avoided by shortening the text or just widening the text box. There’s room in the design without wrapping some of these words.

      • The URL in the footer should link to the infographic landing page, not the home page of the company site.

      • Copyright or Creative Commons license is completely missing.

      • Don’t obscure the source by only listing the home page URL. What’s the link to the research data?





      Astra Pro with Gutenberg Review – Practical Application

At 3.7 Designs we have an array of strategies we use to solve business problems. For example, when it comes to redesigning a website, we might recommend a completely custom design that starts with a design discovery engagement. Typically this process can take three to six months with ample time upfront to research the […]

      The post Astra Pro with Gutenberg Review – Practical Application appeared first on Psychology of Web Design | 3.7 Blog.









      Coronavirus is Shutting Down the Meat Supply Chain

      The United States faces a major meat shortage due to virus infections at processing plants. It means millions of pigs could be put down without ever making it to table. This is what the predicament looks like on a Minnesota farm. ... According to the Minnesota Pork Producers Association, an estimated 10,000 pigs are being euthanised every day in the state. ... [Farmer Mike Boerboom:] "On the same day that we're euthanising pigs - and it's a horrible day - is the same day that a grocery store 10 miles away may not get a shipment of pork. It's just that the supply chain is broken at this point."





          pp

          Implementing Dark Mode In React Apps Using styled-components

          One of the most commonly requested software features is dark mode (or night mode, as others call it). We see dark mode in the apps that we use every day. From mobile to web apps, dark mode has become vital for companies that want to take care of their users’ eyes. Dark mode is a supplemental feature that displays mostly dark surfaces in the UI. Most major companies (such as YouTube, Twitter, and Netflix) have adopted dark mode in their mobile and web apps.




          pp

          Smashing Podcast Episode 15 With Phil Smith: How Can I Build An App In 10 Days?

          In this episode of the Smashing Podcast, we’re talking about building apps on a tight timeline. How can you quickly turn around a project to respond to an emerging situation like COVID-19? Drew McLellan talks to Phil Smith to find out. Show Notes: CardMedic; React Native; React Native for Web; Expo; Apiary; Phil’s company amillionmonkeys; Phil’s personal blog and Twitter. Weekly Update: Getting Started With Nuxt; Implementing Dark Mode In React Apps Using styled-components; How To Succeed In Wireframe Design; Mirage JS Deep Dive: Understanding Mirage JS Models And Associations (Part 1); Readability Algorithms Should Be Tools, Not Targets. Transcript: Drew McLellan: He is director of the full-stack web development studio amillionmonkeys, where he partners with business owners and creative agencies to build digital products that make an impact.




          pp

          How To Build A Vue Survey App Using Firebase Authentication And Database

          In this tutorial, you’ll be building a Survey App, where we’ll learn to validate our users’ form data, implement authentication in Vue, and receive survey data using Vue and Firebase (a BaaS platform). As we build this app, we’ll be learning how to handle form validation for different kinds of data, including reaching out to the backend to check if an email is already taken, even before the user submits the form during sign up.




          pp

          Nikon has confirmed that their flagship D6 DSLR will start shipping on May 21st

          It feels like forever since Nikon announced their newest flagship DSLR, the Nikon D6. It’s actually only been three months, but that hasn’t stopped some people from getting anxious. Recently, customers were being told that the D6 would start shipping right about now, but now Nikon has officially come out to announce that the Nikon D6 […]

          The post Nikon has confirmed that their flagship D6 DSLR will start shipping on May 21st appeared first on DIY Photography.




          pp

          Watch YouTube’s most informed sock puppet teach you how to shoot with manual exposure

          For those who’ve never seen TheCrafsMan SteadyCraftin on YouTube, you’re in for a treat – even if you already understand everything contained within this 25-minute video. For those who have, you know exactly what to expect. I’ve been following this rather unconventional channel for a while now. It covers a lot of handy DIY and […]

          The post Watch YouTube’s most informed sock puppet teach you how to shoot with manual exposure appeared first on DIY Photography.




          pp

          Approximate Two-Sphere One-Cylinder Inequality in Parabolic Periodic Homogenization. (arXiv:2005.00989v2 [math.AP] UPDATED)

          In this paper, for a family of second-order parabolic equations with rapidly oscillating and time-dependent periodic coefficients, we are interested in an approximate two-sphere one-cylinder inequality for their solutions in parabolic periodic homogenization, which implies an approximate quantitative propagation of smallness. The proof relies on the asymptotic behavior of fundamental solutions and the Lagrange interpolation technique.




          pp

          The $\kappa$-Newtonian and $\kappa$-Carrollian algebras and their noncommutative spacetimes. (arXiv:2003.03921v2 [hep-th] UPDATED)

          We derive the non-relativistic $c \to \infty$ and ultra-relativistic $c \to 0$ limits of the $\kappa$-deformed symmetries and corresponding spacetime in (3+1) dimensions, with and without a cosmological constant. We apply the theory of Lie bialgebra contractions to the Poisson version of the $\kappa$-(A)dS quantum algebra, and quantize the resulting contracted Poisson-Hopf algebras, thus giving rise to the $\kappa$-deformation of the Newtonian (Newton-Hooke and Galilei) and Carrollian (Para-Poincaré, Para-Euclidean and Carroll) quantum symmetries, including their deformed quadratic Casimir operators. The corresponding $\kappa$-Newtonian and $\kappa$-Carrollian noncommutative spacetimes are also obtained as the non-relativistic and ultra-relativistic limits of the $\kappa$-(A)dS noncommutative spacetime. These constructions allow us to analyze the non-trivial interplay between the quantum deformation parameter $\kappa$, the curvature parameter $\eta$ and the speed of light parameter $c$.




          pp

          A stochastic approach to the synchronization of coupled oscillators. (arXiv:2002.04472v2 [nlin.AO] UPDATED)

          This paper deals with an optimal control problem associated with the Kuramoto model describing the dynamical behavior of a network of coupled oscillators. Our aim is to design a suitable control function allowing us to steer the system to a synchronized configuration in which all the oscillators are aligned on the same phase. This control is computed via the minimization of a given cost functional associated with the dynamics considered. For this minimization, we propose a novel approach based on the combination of a standard Gradient Descent (GD) methodology with the recently developed Random Batch Method (RBM) for the efficient numerical approximation of collective dynamics. Our simulations show that the employment of RBM improves the performance of the GD algorithm, reducing the computational complexity of the minimization process and allowing for a more efficient control calculation.
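
          For background (and not quoted from the abstract), the classical uncontrolled Kuramoto system for $N$ oscillators with phases $\theta_i$ and natural frequencies $\omega_i$ reads $\dot{\theta}_i = \omega_i + \frac{K}{N}\sum_{j=1}^{N}\sin(\theta_j - \theta_i)$, $i=1,\dots,N$, with coupling strength $K>0$; synchronization means that all phases align. The control considered by the authors enters as an additional forcing term whose precise form is specified in the paper.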




          pp

          Regularized vortex approximation for 2D Euler equations with transport noise. (arXiv:1912.07233v2 [math.PR] UPDATED)

          We study a mean field approximation for the 2D Euler vorticity equation driven by a transport noise. We prove that the Euler equations can be approximated by interacting point vortices driven by a regularized Biot-Savart kernel and the same common noise. The approximation happens by sending the number of particles $N$ to infinity and the regularization $\epsilon$ in the Biot-Savart kernel to $0$, as a suitable function of $N$.




          pp

          Simulation of Integro-Differential Equation and Application in Estimation of Ruin Probability with Mixed Fractional Brownian Motion. (arXiv:1709.03418v6 [math.PR] UPDATED)

          In this paper, we are concerned with the numerical solution of one type of integro-differential equation by a probability method based on the fundamental martingale of mixed Gaussian processes. As an application, we will try to simulate the estimation of ruin probability with an unknown parameter driven not by the classical Lévy process but by the mixed fractional Brownian motion.




          pp

          A Class of Functional Inequalities and their Applications to Fourth-Order Nonlinear Parabolic Equations. (arXiv:1612.03508v3 [math.AP] UPDATED)

          We study a class of fourth-order nonlinear parabolic equations which include the thin-film equation and the quantum drift-diffusion model as special cases. We investigate these equations by first developing functional inequalities of the type $\int_{\Omega} u^{2\gamma-\alpha-\eta}\Delta u^{\alpha}\Delta u^{\eta}\,dx \geq c\int_{\Omega}|\Delta u^{\gamma}|^2\,dx$, which seem to be of interest in their own right.




          pp

          The Fourier Transform Approach to Inversion of lambda-Cosine and Funk Transforms on the Unit Sphere. (arXiv:2005.03607v1 [math.FA])

          We use classical Fourier analysis to introduce analytic families of weighted differential operators on the unit sphere. These operators are polynomial functions of the usual Beltrami-Laplace operator. New inversion formulas are obtained for totally geodesic Funk transforms on the sphere and the corresponding lambda-cosine transforms.




          pp

          A reaction-diffusion system to better comprehend the unlockdown: Application of SEIR-type model with diffusion to the spatial spread of COVID-19 in France. (arXiv:2005.03499v1 [q-bio.PE])

          A reaction-diffusion model was developed describing the spread of the COVID-19 virus, taking into account the mean daily movement of susceptible, exposed and asymptomatic individuals. The model was calibrated using data on confirmed infections and deaths in France as well as their initial spatial distribution. First, the system of partial differential equations is studied; then the basic reproduction number, R0, is derived. Second, numerical simulations, based on a combination of level-set and finite-difference methods, show the spatial spread of COVID-19 from March 16 to June 16. Finally, unlockdown scenarios are compared according to variations in distancing or partial spatial lockdown.
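
          For orientation only, a generic SEIR-type system with diffusion has the schematic form $\partial_t S = d_S\Delta S - \beta SI/N$, $\partial_t E = d_E\Delta E + \beta SI/N - \sigma E$, $\partial_t I = d_I\Delta I + \sigma E - \gamma I$, $\partial_t R = d_R\Delta R + \gamma I$, where the $d$'s are diffusion coefficients, $\beta$ is the transmission rate, $1/\sigma$ the mean incubation period and $1/\gamma$ the mean infectious period; the authors' actual model differs in that it also tracks asymptomatic individuals and is calibrated to the French data described above.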




          pp

          The formation of trapped surfaces in the gravitational collapse of spherically symmetric scalar fields with a positive cosmological constant. (arXiv:2005.03434v1 [gr-qc])

          Given spherically symmetric characteristic initial data for the Einstein-scalar field system with a positive cosmological constant, we provide a criterion, in terms of the dimensionless size and dimensionless renormalized mass content of an annular region of the data, for the formation of a future trapped surface. This corresponds to an extension of Christodoulou's classical criterion by the inclusion of the cosmological term.




          pp

          Converging outer approximations to global attractors using semidefinite programming. (arXiv:2005.03346v1 [math.OC])

          This paper develops a method for obtaining guaranteed outer approximations for global attractors of continuous and discrete time nonlinear dynamical systems. The method is based on a hierarchy of semidefinite programming problems of increasing size with guaranteed convergence to the global attractor. The approach taken follows an established line of reasoning, where we first characterize the global attractor via an infinite dimensional linear programming problem (LP) in the space of Borel measures. The dual to this LP is in the space of continuous functions and its feasible solutions provide guaranteed outer approximations to the global attractor. For systems with polynomial dynamics, a hierarchy of finite-dimensional sum-of-squares tightenings of the dual LP provides a sequence of outer approximations to the global attractor with guaranteed convergence in the sense of volume discrepancy tending to zero. The method is very simple to use and based purely on convex optimization. Numerical examples with the code available online demonstrate the method.




          pp

          Riemann-Hilbert approach and N-soliton formula for the N-component Fokas-Lenells equations. (arXiv:2005.03319v1 [nlin.SI])

          In this work, the generalized $N$-component Fokas-Lenells (FL) equations, which have been studied by Guo and Ling (2012 J. Math. Phys. 53 (7) 073506) for $N=2$, are first investigated via the Riemann-Hilbert (RH) approach. The main purpose is to study the soliton solutions of the coupled Fokas-Lenells (FL) equations for any positive integer $N$, which have a more complex linear relationship than the analogues reported before. We first carry out the spectral analysis of the Lax pair associated with a $(N+1)\times(N+1)$ matrix spectral problem for the $N$-component FL equations. Then, a kind of RH problem is formulated. By imposing the conditions of irregularity and the reflectionless case, the $N$-soliton solution formula of the equations is derived by solving the corresponding RH problem. Furthermore, taking $N=2,3$ and $4$ as examples, the localized structures and dynamic propagation behavior of the soliton solutions and their interactions are discussed through graphical analysis.




          pp

          Approximate Performance Measures for a Two-Stage Reneging Queue. (arXiv:2005.03239v1 [math.PR])

          We study a two-stage reneging queue with Poisson arrivals, exponential services, and two levels of exponential reneging behaviors, extending the popular Erlang A model that assumes a constant reneging rate. We derive approximate analytical formulas representing performance measures for the two-stage queue following the Markov chain decomposition approach. Our formulas not only give accurate results spanning the heavy-traffic to the light-traffic regimes, but also provide insight into capacity decisions.




          pp

          Quasi-Sure Stochastic Analysis through Aggregation and SLE$_\kappa$ Theory. (arXiv:2005.03152v1 [math.PR])

          We study SLE$_{\kappa}$ theory with elements of Quasi-Sure Stochastic Analysis through Aggregation. Specifically, we show how the latter can be used to construct the SLE$_{\kappa}$ traces quasi-surely (i.e. simultaneously for a family of probability measures with certain properties) for $\kappa \in \mathcal{K} \cap \mathbb{R}_+ \setminus ([0, \epsilon) \cup \{8\})$, for any $\epsilon>0$, with $\mathcal{K} \subset \mathbb{R}_{+}$ a nontrivial compact interval, i.e. for all $\kappa$ that are not in a neighborhood of zero and are different from $8$. As a by-product of the analysis, we show in this language a version of the continuity in $\kappa$ of the SLE$_{\kappa}$ traces for all $\kappa$ in compact intervals as above.




          pp

          A Note on Approximations of Fixed Points for Nonexpansive Mappings in Norm-attainable Classes. (arXiv:2005.03069v1 [math.FA])

          Let $H$ be an infinite-dimensional, reflexive, separable Hilbert space and $NA(H)$ the class of all norm-attainable operators on $H$. In this note, we study an implicit scheme for a canonical representation of nonexpansive contractions in norm-attainable classes.




          pp

          Modeling nanoconfinement effects using active learning. (arXiv:2005.02587v2 [physics.app-ph] UPDATED)

          Predicting the spatial configuration of gas molecules in nanopores of shale formations is crucial for fluid flow forecasting and hydrocarbon reserves estimation. The key challenge in these tight formations is that the majority of the pore sizes are less than 50 nm. At this scale, the fluid properties are affected by nanoconfinement effects due to the increased fluid-solid interactions. For instance, gas adsorption to the pore walls could account for up to 85% of the total hydrocarbon volume in a tight reservoir. Although there are analytical solutions that describe this phenomenon for simple geometries, they are not suitable for describing realistic pores, where surface roughness and geometric anisotropy play important roles. To describe these, molecular dynamics (MD) simulations are used since they consider fluid-solid and fluid-fluid interactions at the molecular level. However, MD simulations are computationally expensive, and are not able to simulate scales larger than a few connected nanopores. We present a method for building and training physics-based deep learning surrogate models to carry out fast and accurate predictions of molecular configurations of gas inside nanopores. Since training deep learning models requires extensive databases that are computationally expensive to create, we employ active learning (AL). AL reduces the overhead of creating comprehensive sets of high-fidelity data by determining where the model uncertainty is greatest, and running simulations on the fly to minimize it. The proposed workflow enables nanoconfinement effects to be rigorously considered at the mesoscale where complex connected sets of nanopores control key applications such as hydrocarbon recovery and CO2 sequestration.