
10 On-Page SEO Factors You Should Consider [2019]

When you want to succeed in the organic search engine results today, you have to focus on your website and learn what you should do to optimize it. There are many factors that can help you with that, from the technical to the off-page and on-page. All these factors and parts of a website require updating and […]

Original post: 10 On-Page SEO Factors You Should Consider [2019]

The post 10 On-Page SEO Factors You Should Consider [2019] appeared first on Daily Blog Tips.





3 Ways to Optimise Blog Content so it Ranks Well on Google

When it comes to your blog’s position within search engine results, there’s far more than just sheer luck at play. Search Engine Optimisation is a skill that any content creator – professional or amateur – should develop in order to make their content as discoverable as possible. It involves making certain tweaks to your blog […]

Original post: 3 Ways to Optimise Blog Content so it Ranks Well on Google

The post 3 Ways to Optimise Blog Content so it Ranks Well on Google appeared first on Daily Blog Tips.





Support Communication During Conversation







Children’s Exposure to Secondhand Smoke May Be Vastly Underestimated by Parents

Tel Aviv University Press Release: Smoking parents misperceive where and when their kids are exposed to cigarette smoke, Tel Aviv University researchers say. Four out of 10 children in the US are exposed to secondhand smoke, according to the American … Continue reading





You Know Clean Air is Good for Your Health. It’s Good for the Economy, Too.

By Rachel Cernansky, Ensia. When the Clean Air Act of 1970 became law, members of the business community in the United States responded with opposition. Such regulations are a drag on growth, some economists say, for individual businesses and for … Continue reading





Mobility Pricing Relieves Congestion, Helps People Breathe Easier

By David Suzuki, with contributions from Senior Editor Ian Hanington, David Suzuki Foundation. By 2002, drivers in London, England, were spending as much as half their commuting time stalled in traffic, contributing to much of the city centre’s dangerous particulate … Continue reading





Redefine Creativity – A conversation with Kevin Rose

Today I’m sitting down with investor, serial entrepreneur and all around good human, Kevin Rose. If you’re a long-time listener, you might remember Kevin was part of 30 Days of Genius. Now the tables are turned and I’m in the hot seat as a guest on his podcast, the Kevin Rose Show. Of course, it’s always fun sitting down with one of my long time homies to unpack some of my favorite topics, including: how to build your creative muscle and why it’s becoming more important; standing out and why you’re uniquely qualified; why forgetting the “shoulds” is a must-do to uncork our richest lives; and much more… Big shoutout to Kevin for having me on the show … and if you haven’t already, be sure to check out his podcast The Kevin Rose Show anywhere you listen to podcasts. Enjoy! FOLLOW KEVIN: instagram | twitter | website Listen to the Podcast Subscribe   This podcast is brought to you by CreativeLive. CreativeLive is the world’s largest hub for online creative education in photo/video, art/design, music/audio, craft/maker, money/life and the ability to make a living in any of those disciplines. They are high quality, highly curated classes taught by the world’s top […]

The post Redefine Creativity – A conversation with Kevin Rose appeared first on Chase Jarvis Photography.





Choose Creativity – A Conversation with Jordan Harbinger

Recently I sat down with my man Jordan Harbinger on his podcast The Jordan Harbinger Show. As a radio personality and a podcaster long before it was cool, Jordan is no stranger to the mic. It was a fun conversation and I hope you enjoy! A few of my fav topics: my framework for learning from the masters by deconstructing what they do and applying it; my creative slumps and how I dug out; how mindset matters and unwinding our self-limiting beliefs; and much more … Big shoutout to Jordan for having me on the show … and if you haven’t already, be sure to check out his podcast The Jordan Harbinger Show anywhere you listen to podcasts. Enjoy! FOLLOW JORDAN: instagram | facebook | twitter | website Listen to the Podcast Subscribe   This podcast is brought to you by CreativeLive. CreativeLive is the world’s largest hub for online creative education in photo/video, art/design, music/audio, craft/maker, money/life and the ability to make a living in any of those disciplines. They are high quality, highly curated classes taught by the world’s top experts — Pulitzer, Oscar, Grammy Award winners, New York Times best selling authors and the best entrepreneurs of our […]

The post Choose Creativity – A Conversation with Jordan Harbinger appeared first on Chase Jarvis Photography.





Finding Mastery: A Conversation with Michael Gervais

This week I’m in the hot seat with one of the leading experts in mindset training. Dr. Michael Gervais is a high performance psychologist working in the trenches of high-stakes environments with some of the best in the world. His clients include world record holders, Olympians, internationally acclaimed artists, MVPs from every major sport and Fortune 100 CEOs. Dr. Gervais is also the co-founder of Compete to Create, an educational platform for mindset training. Today I’m on his podcast Finding Mastery, which unpacks & decodes each guest’s journey to mastery through mindset skills and practices. If you’ve been a listener for a while, you’ll know this is one of my favorite topics and something I wholeheartedly credit with unlocking my best work. In this episode: how I learned to trust my intuition; how Dr. Gervais aptly calls out two journeys to mastery, one of self and one of craft, and my perspective on how mastery of craft is a required step to mastering oneself; and how we’re taught that making mistakes is bad so we should avoid them, when what we really should be taught is that it’s not about avoiding mistakes, it’s about error recovery. And much more… Enjoy! FOLLOW MICHAEL: instagram | twitter […]

The post Finding Mastery: A Conversation with Michael Gervais appeared first on Chase Jarvis Photography.





Concurrency & Multithreading in iOS

Concurrency is the notion of multiple things happening at the same time. This is generally achieved either via time-slicing, or truly in parallel if multiple CPU cores are available to the host operating system. We've all experienced a lack of concurrency, most likely in the form of an app freezing up when running a heavy task. UI freezes don't necessarily occur due to the absence of concurrency — they could just be symptoms of buggy software — but software that doesn't take advantage of all the computational power at its disposal is going to create these freezes whenever it needs to do something resource-intensive. If you've profiled an app hanging in this way, you'll probably see a report that looks like this:

Anything related to file I/O, data processing, or networking usually warrants a background task (unless you have a very compelling excuse to halt the entire program). There aren't many reasons that these tasks should block your user from interacting with the rest of your application. Consider how much better the user experience of your app could be if instead, the profiler reported something like this:

Analyzing an image, processing a document or a piece of audio, or writing a sizeable chunk of data to disk are examples of tasks that could benefit greatly from being delegated to background threads. Let's dig into how we can introduce such behavior into our iOS applications.


A Brief History

In the olden days, the maximum amount of work a computer could perform per unit of time was determined by its CPU's clock speed. As processor designs became more compact, heat and other physical constraints started to limit how high clock speeds could go. Consequently, chip manufacturers started adding additional processor cores on each chip in order to increase total performance. By increasing the number of cores, a single chip could execute more CPU instructions per cycle without increasing its speed, size, or thermal output. There's just one problem...

How can we take advantage of these extra cores? Multithreading.

Multithreading is an implementation handled by the host operating system to allow the creation and use of an arbitrary number of threads. Its main purpose is to provide simultaneous execution of two or more parts of a program to utilize all available CPU time. Multithreading is a powerful technique to have in a programmer's toolbelt, but it comes with its own set of responsibilities. A common misconception is that multithreading requires a multi-core processor, but this isn't the case — single-core CPUs are perfectly capable of working on many threads; we'll look in a bit at why threading is a problem in the first place. Before we dive in, let's look at the nuances of what concurrency and parallelism mean using a simple diagram:

In the first situation presented above, we observe that tasks can run concurrently, but not in parallel. This is similar to having multiple conversations in a chatroom, and interleaving (context-switching) between them, but never truly conversing with two people at the same time. This is what we call concurrency. It is the illusion of multiple things happening at the same time when in reality, they're switching very quickly. Concurrency is about dealing with lots of things at the same time. Contrast this with the parallelism model, in which both tasks run simultaneously. Both execution models exhibit multithreading, which is the involvement of multiple threads working towards one common goal. Multithreading is a generalized technique for introducing a combination of concurrency and parallelism into your program.


The Burden of Threads

A modern multitasking operating system like iOS has hundreds of programs (or processes) running at any given moment. However, most of these programs are either system daemons or background processes that have a very low memory footprint, so what is really needed is a way for individual applications to make use of the extra cores available. An application (process) can have many threads (sub-processes) operating on shared memory. Our goal is to be able to control these threads and use them to our advantage.

Historically, introducing concurrency to an app has required the creation of one or more threads. Threads are low-level constructs that need to be managed manually. A quick skim through Apple's Threaded Programming Guide is all it takes to see how much complexity threaded code adds to a codebase. In addition to building an app, the developer has to:

  • Responsibly create new threads, adjusting that number dynamically as system conditions change
  • Manage them carefully, deallocating them from memory once they have finished executing
  • Leverage synchronization mechanisms like mutexes, locks, and semaphores to orchestrate resource access between threads, adding even more overhead to application code
  • Mitigate the risks that come with the application itself, and not the host OS, assuming most of the costs associated with creating and maintaining any threads it uses

This is unfortunate, as it adds enormous levels of complexity and risk without any guarantees of improved performance.
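
For a sense of what that looks like in practice, here is a minimal sketch using Foundation's Thread and NSLock directly (the work inside the closure is a stand-in, not code from Apple's guide):

import Foundation

// A sketch of manual thread management. Every thread created here has to be
// tracked, synchronized and eventually torn down by the application itself.
let lock = NSLock()
var results: [Int] = []

for i in 0..<4 {
    let worker = Thread {
        let value = i * i        // stand-in for real work
        lock.lock()              // manual synchronization around shared state
        results.append(value)
        lock.unlock()
    }
    worker.stackSize = 1 << 20   // even the stack size is our responsibility
    worker.start()
}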


Grand Central Dispatch

iOS takes an asynchronous approach to solving the concurrency problem of managing threads. Asynchronous functions are common in most programming environments, and are often used to initiate tasks that might take a long time, like reading a file from the disk, or downloading a file from the web. When invoked, an asynchronous function executes some work behind the scenes to start a background task, but returns immediately, regardless of how long the original task might take to actually complete.

A core technology that iOS provides for starting tasks asynchronously is Grand Central Dispatch (or GCD for short). GCD abstracts away thread management code and moves it down to the system level, exposing a light API to define tasks and execute them on an appropriate dispatch queue. GCD takes care of all thread management and scheduling, providing a holistic approach to task management and execution, while also providing better efficiency than traditional threads.

Let's take a look at the main components of GCD:

What've we got here? Let's start from the left:

  • DispatchQueue.main: The main thread, or the UI thread, is backed by a single serial queue. All tasks are executed in succession, so it is guaranteed that the order of execution is preserved. It is crucial that you ensure all UI updates are designated to this queue, and that you never run any blocking tasks on it. We want to ensure that the app's run loop (called CFRunLoop) is never blocked in order to maintain the highest framerate. Accordingly, the main queue has the highest priority, and any tasks pushed onto this queue will get executed as soon as possible.
  • DispatchQueue.global: A set of global concurrent queues, each of which manages its own pool of threads. Depending on the priority of your task, you can specify which queue to execute it on, although you should stick with the default priority most of the time. Because tasks on these queues are executed concurrently, the order in which they were queued is not guaranteed to be preserved. A minimal sketch of how the main and global queues are typically used together follows this list.
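
Here is that division of labor in its most common form, a small sketch in which parseLargeJSONFile() and updateUI(with:) are placeholder functions rather than real APIs:

// Heavy lifting happens on a global (background) queue...
DispatchQueue.global(qos: .userInitiated).async {
    let result = parseLargeJSONFile()   // placeholder for an expensive task

    // ...and anything that touches the UI hops back onto the main queue.
    DispatchQueue.main.async {
        updateUI(with: result)          // placeholder UI update
    }
}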

Notice how we're not dealing with individual threads anymore? We're dealing with queues which manage a pool of threads internally, and you will shortly see why queues are a much more sustainable approach to multithreading.

Serial Queues: The Main Thread

As an exercise, let's look at a snippet of code below, which gets fired when the user presses a button in the app. The expensive compute function can be anything. Let's pretend it is post-processing an image stored on the device.

import UIKit

class ViewController: UIViewController {
    @IBAction func handleTap(_ sender: Any) {
        compute()
    }

    private func compute() -> Void {
        // Pretending to post-process a large image.
        var counter = 0
        for _ in 0..<9999999 {
            counter += 1
        }
    }
}

At first glance, this may look harmless, but if you run this inside of a real app, the UI will freeze completely until the loop is terminated, which will take... a while. We can prove it by profiling this task in Instruments. You can fire up the Time Profiler module of Instruments by going to Xcode > Open Developer Tool > Instruments in Xcode's menu options. Let's look at the Threads module of the profiler and see where the CPU usage is highest.

We can see that the Main Thread is clearly at 100% capacity for almost 5 seconds. That's a non-trivial amount of time to block the UI. Looking at the call tree below the chart, we can see that the Main Thread is at 99.9% capacity for 4.43 seconds! Given that a serial queue works in a FIFO manner, tasks will always complete in the order in which they were inserted. Clearly the compute() method is the culprit here. Can you imagine clicking a button just to have the UI freeze up on you for that long?

Background Threads

How can we make this better? DispatchQueue.global() to the rescue! This is where background threads come in. Referring to the GCD architecture diagram above, we can see that anything that is not the Main Thread is a background thread in iOS. They can run alongside the Main Thread, leaving it fully unoccupied and ready to handle other UI events like scrolling, responding to user events, animating etc. Let's make a small change to our button click handler above:

class ViewController: UIViewController {
    @IBAction func handleTap(_ sender: Any) {
        DispatchQueue.global(qos: .userInitiated).async { [unowned self] in
            self.compute()
        }
    }

    private func compute() -> Void {
        // Pretending to post-process a large image.
        var counter = 0
        for _ in 0..<9999999 {
            counter += 1
        }
    }
}

Unless we specify otherwise, code like our button handler runs on the Main Queue, so in order to force the work onto a different thread, we'll wrap our compute call inside of an asynchronous closure that gets submitted to the DispatchQueue.global queue. Keep in mind that we aren't really managing threads here. We're submitting tasks (in the form of closures or blocks) to the desired queue with the assumption that they are guaranteed to execute at some point in time. The queue decides which thread to allocate the task to, and it does all the hard work of assessing system requirements and managing the actual threads. This is the magic of Grand Central Dispatch. As the old adage goes, you can't improve what you can't measure. So we measured our truly terrible button click handler, and now that we've improved it, we'll measure it once again to get some concrete data with regard to performance.

Looking at the profiler again, it's quite clear to us that this is a huge improvement. The task takes an identical amount of time, but this time, it's happening in the background without locking up the UI. Even though our app is doing the same amount of work, the perceived performance is much better because the user will be free to do other things while the app is processing.

You may have noticed that we accessed a global queue of .userInitiated priority. This is an attribute we can use to give our tasks a sense of urgency. If we run the same task on a global queue and pass it a qos attribute of .background, iOS will treat it as a low-priority maintenance task and allocate fewer resources to execute it. So, while we don't have control over when our tasks get executed, we do have control over their priority.
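
For instance, reusing the compute() helper from the snippets above, the same work can be submitted with different quality-of-service classes and the system will budget resources accordingly:

// Urgent, user-facing work: scheduled aggressively.
DispatchQueue.global(qos: .userInitiated).async {
    compute()
}

// Deferrable maintenance work: scheduled with lower priority and fewer resources.
DispatchQueue.global(qos: .background).async {
    compute()
}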

A Note on Main Thread vs. Main Queue

You might be wondering why the Profiler shows "Main Thread" and why we're referring to it as the "Main Queue". If you refer back to the GCD architecture we described above, the Main Queue is solely responsible for managing the Main Thread. The Dispatch Queues section in the Concurrency Programming Guide says that "the main dispatch queue is a globally available serial queue that executes tasks on the application’s main thread. Because it runs on your application’s main thread, the main queue is often used as a key synchronization point for an application."

The terms "execute on the Main Thread" and "execute on the Main Queue" can be used interchangeably.
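
A quick way to convince yourself of this is to log Thread.isMainThread from a couple of queues:

print(Thread.isMainThread)           // true: UIKit callbacks arrive on the main thread

DispatchQueue.global().async {
    print(Thread.isMainThread)       // false: this closure runs on a background thread

    DispatchQueue.main.async {
        print(Thread.isMainThread)   // true: main-queue closures always run on the main thread
    }
}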


Concurrent Queues

So far, our tasks have been executed exclusively in a serial manner. DispatchQueue.main is a serial queue, and DispatchQueue.global gives you a set of concurrent dispatch queues, one for each quality-of-service level you can pass in.

Let's say we want to take five images, and have our app process them all in parallel on background threads. How would we go about doing that? We can spin up a custom concurrent queue with an identifier of our choosing, and allocate those tasks there. All that's required is the .concurrent attribute during the construction of the queue.

class ViewController: UIViewController {
    let queue = DispatchQueue(label: "com.app.concurrentQueue", attributes: .concurrent)
    let images: [UIImage] = [UIImage].init(repeating: UIImage(), count: 5)

    @IBAction func handleTap(_ sender: Any) {
        for img in images {
            queue.async { [unowned self] in
                self.compute(img)
            }
        }
    }

    private func compute(_ img: UIImage) -> Void {
        // Pretending to post-process a large image.
        var counter = 0
        for _ in 0..<9999999 {
            counter += 1
        }
    }
}

Running that through the profiler, we can see that the app is now spinning up 5 discrete threads to parallelize a for-loop.
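
As a related convenience (not shown in the diagram above), GCD also offers DispatchQueue.concurrentPerform, which runs a fixed number of iterations concurrently and blocks the calling thread until they all finish, so it should itself be dispatched from a background queue. A rough equivalent of the loop above:

DispatchQueue.global(qos: .userInitiated).async {
    // Spreads the iterations across a pool of threads and waits for all of them.
    DispatchQueue.concurrentPerform(iterations: self.images.count) { index in
        self.compute(self.images[index])
    }
}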

Parallelization of N Tasks

So far, we've looked at pushing computationally expensive task(s) onto background threads without clogging up the UI thread. But what about executing parallel tasks with some restrictions? How can Spotify download multiple songs in parallel, while limiting the maximum number of simultaneous downloads to three? We can go about this in a few ways, but this is a good time to explore another important construct in multithreaded programming: semaphores.

Semaphores are signaling mechanisms. They are commonly used to control access to a shared resource. Imagine a scenario where a thread can lock access to a certain section of the code while it executes it, and unlock it once it's done so that other threads can execute that section. You would see this type of behavior in database writes and reads, for example. What if you want only one thread writing to a database, with all reads blocked during that time? This is a common thread-safety concern known as the readers-writer problem. Semaphores can be used to control concurrency in our app by letting us cap at n the number of threads that can access a resource at once.

let kMaxConcurrent = 3 // Or 1 if you want strictly ordered downloads!
let semaphore = DispatchSemaphore(value: kMaxConcurrent)
let downloadQueue = DispatchQueue(label: "com.app.downloadQueue", attributes: .concurrent)

class ViewController: UIViewController {
    // Assumed to be connected in Interface Builder.
    @IBOutlet weak var tableView: UITableView!

    @IBAction func handleTap(_ sender: Any) {
        for i in 0..<15 {
            downloadQueue.async { [unowned self] in
                // Lock shared resource access
                semaphore.wait()

                // Expensive task
                self.download(i + 1)

                // Update the UI on the main thread, always!
                DispatchQueue.main.async {
                    self.tableView.reloadData()

                    // Release the lock
                    semaphore.signal()
                }
            }
        }
    }

    func download(_ songId: Int) -> Void {
        var counter = 0

        // Simulate semi-random download times.
        for _ in 0..<Int.random(in: 999999...10000000) {
            counter += songId
        }
    }
}

Notice how we've effectively restricted our download system to at most kMaxConcurrent downloads at a time. The moment one download finishes (and its thread is done executing), it signals the semaphore, incrementing its count and allowing the managing queue to spawn another thread and start downloading another song. You can apply a similar pattern to database transactions when dealing with concurrent reads and writes.
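
For example, one common pattern, sketched below with a hypothetical in-memory cache, is to pair a concurrent queue with the .barrier flag: reads run concurrently with one another, while each write waits for in-flight reads to drain and then runs exclusively.

final class SongCache {
    private var storage: [Int: Data] = [:]
    private let isolationQueue = DispatchQueue(label: "com.app.songCache",
                                               attributes: .concurrent)

    // Reads can safely overlap with each other.
    func data(for songId: Int) -> Data? {
        return isolationQueue.sync { storage[songId] }
    }

    // Writes use a barrier: they wait for pending reads and then run alone.
    func set(_ data: Data, for songId: Int) {
        isolationQueue.async(flags: .barrier) {
            self.storage[songId] = data
        }
    }
}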

Semaphores usually aren't necessary for code like the one in our example, but they become more powerful when you need to enforce synchronous behavior while consuming an asynchronous API. The above code would work just as well with a custom OperationQueue with a maxConcurrentOperationCount, but it's a worthwhile tangent regardless.


Finer Control with OperationQueue

GCD is great when you want to dispatch one-off tasks or closures into a queue in a 'set-it-and-forget-it' fashion, and it provides a very lightweight way of doing so. But what if we want to create a repeatable, structured, long-running task that produces associated state or data? And what if we want to model this chain of operations such that they can be cancelled, suspended and tracked, while still working with a closure-friendly API? Imagine an operation like this:

This would be quite cumbersome to achieve with GCD. We want a more modular way of defining a group of tasks while maintaining readability and also exposing a greater amount of control. In this case, we can use Operation objects and queue them onto an OperationQueue, which is a high-level wrapper around DispatchQueue. Let's look at some of the benefits of using these abstractions and what they offer in comparison to the lower-level GCD API:

  • You may want to create dependencies between tasks, and while you could do this via GCD, you're better off defining them concretely as Operation objects, or units of work, and pushing them onto your own queue. This would allow for maximum reusability since you may use the same pattern elsewhere in an application.
  • The Operation and OperationQueue classes have a number of properties that can be observed, using KVO (Key Value Observing). This is another important benefit if you want to monitor the state of an operation or operation queue.
  • Operations can be paused, resumed, and cancelled. Once you dispatch a task using Grand Central Dispatch, you no longer have control or insight into the execution of that task. The Operation API is more flexible in that respect, giving the developer control over the operation's life cycle.
  • OperationQueue allows you to specify the maximum number of queued operations that can run simultaneously, giving you a finer degree of control over the concurrency aspects (see the short sketch after this list).
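
Here is a small sketch of those last two points in action, reusing the imaginary download(_:) work from the semaphore example:

let downloadQueue = OperationQueue()
downloadQueue.maxConcurrentOperationCount = 3   // at most three downloads in flight at once

let operations = (1...15).map { songId in
    BlockOperation {
        download(songId)   // the same pretend download work as before
    }
}
downloadQueue.addOperations(operations, waitUntilFinished: false)

// Later, e.g. when the user leaves the screen. Operations that haven't started
// yet are skipped, and the KVO-observable isCancelled state flips on each one.
downloadQueue.cancelAllOperations()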

The usage of Operation and OperationQueue could fill an entire blog post, but let's look at a quick example of what modeling dependencies looks like. (GCD can also create dependencies, but you're better off dividing up large tasks into a series of composable sub-tasks.) In order to create a chain of operations that depend on one another, we could do something like this:

class ViewController: UIViewController {
    var queue = OperationQueue()
    var rawImage: UIImage? = nil
    let imageUrl = URL(string: "https://example.com/portrait.jpg")!
    @IBOutlet weak var imageView: UIImageView!

    override func viewDidLoad() {
        super.viewDidLoad()

        // Download the image on a background operation.
        // (Downloader stands in for your own networking helper.)
        let downloadOperation = BlockOperation {
            self.rawImage = Downloader.downloadImageWithURL(url: self.imageUrl)
        }

        // Filter the downloaded image, then update the UI on the main queue.
        // (ImgProcessor stands in for your own image-processing helper.)
        let filterOperation = BlockOperation {
            let filteredImage = ImgProcessor.addGaussianBlur(self.rawImage)
            OperationQueue.main.addOperation {
                self.imageView.image = filteredImage
            }
        }

        // The filter will not start until the download has finished.
        filterOperation.addDependency(downloadOperation)

        [downloadOperation, filterOperation].forEach {
            queue.addOperation($0)
        }
    }
}

So why not opt for a higher level abstraction and avoid using GCD entirely? While GCD is ideal for inline asynchronous processing, Operation provides a more comprehensive, object-oriented model of computation for encapsulating all of the data around structured, repeatable tasks in an application. Developers should use the highest level of abstraction possible for any given problem, and for scheduling consistent, repeated work, that abstraction is Operation. Other times, it makes more sense to sprinkle in some GCD for one-off tasks or closures that we want to fire. We can mix both OperationQueue and GCD to get the best of both worlds.


The Cost of Concurrency

DispatchQueue and friends are meant to make it easier for the application developer to execute code concurrently. However, these technologies do not guarantee improvements to the efficiency or responsiveness of an application. It is up to you to use queues in a manner that is effective and does not impose an undue burden on other resources. For example, it's totally viable to create 10,000 tasks and submit them to a queue, but doing so would allocate a nontrivial amount of memory and introduce a lot of overhead for the allocation and deallocation of operation blocks. This is the opposite of what you want! It's best to profile your app thoroughly to ensure that concurrency is enhancing your app's performance and not degrading it.

We've talked about how concurrency comes at a cost in terms of complexity and allocation of system resources, but introducing concurrency also brings a host of other risks like:

  • Deadlock: A situation where a thread locks a critical portion of the code and can halt the application's run loop entirely. In the context of GCD, you should be very careful when using DispatchQueue.sync { } calls, as you could easily get yourself into situations where two synchronous operations get stuck waiting for each other; a minimal example appears after this list.
  • Priority Inversion: A condition where a lower priority task blocks a high priority task from executing, which effectively inverts their priorities. GCD allows for different levels of priority on its background queues, so this is quite easily a possibility.
  • Producer-Consumer Problem: A race condition where one thread is creating a data resource while another thread is accessing it. This is a synchronization problem, and can be solved using locks, semaphores, serial queues, or a barrier dispatch if you're using concurrent queues in GCD.
  • ...and many other sorts of locking and data-race conditions that are hard to debug! Thread safety is of the utmost concern when dealing with concurrency.
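
To make the first point concrete, here is the classic self-inflicted deadlock; never do this on the main thread of a real app:

// Running on the main thread: main.sync blocks the current (main) thread until
// the closure completes, but the closure is queued on that same, now-blocked,
// main thread, so neither side can ever make progress.
DispatchQueue.main.sync {
    print("This line is never reached")
}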

Parting Thoughts + Further Reading

If you've made it this far, I applaud you. Hopefully this article gives you a lay of the land when it comes to multithreading techniques on iOS, and how you can use some of them in your app. We didn't get to cover many of the lower-level constructs like locks, mutexes and how they help us achieve synchronization, nor did we get to dive into concrete examples of how concurrency can hurt your app. We'll save those for another day, but you can dig into some additional reading and videos if you're eager to dive deeper.





TrailBuddy: Using AI to Create a Predictive Trail Conditions App

Viget is full of outdoor enthusiasts and, of course, technologists. For this year's Pointless Weekend, we brought these passions together to build TrailBuddy. This app aims to solve that eternal question: Is my favorite trail dry so I can go hike/run/ride?

While getting muddy might rekindle fond childhood memories for some, exposing your gear to the elements isn’t great – it’s bad for your equipment and can cause long-term, and potentially expensive, damage to the trail.

There are some trail apps out there, but we wanted one that would focus on current conditions. Currently, our favorite trail apps, like mtbproject.com, trailrunproject.com, and hikingproject.com (all owned by REI), rely on user-reported conditions. While this can be effective, the reports are frequently unreliable, as condition reports can become outdated in just a few days.

Our goal was to solve this problem by building an app that brought together location, soil type, and weather history data to create on-demand condition predictions for any trail in the US.

We built an initial version of TrailBuddy by tapping into several readily-available APIs, then running the combined data through a machine learning algorithm. (Oh, and also by bringing together a bunch of smart and motivated people and combining them with pizza and some of the magic that is our Pointless Weekends. We'll share the other Pointless Project, Scurry, with you soon.)

The quest for data.

We knew from the start this app would require data from a number of sources. As previously mentioned, we used REI’s APIs (i.e. https://www.hikingproject.com/data) as the source for basic trail information. We used each trail’s latitude and longitude coordinates, as well as its elevation, to query weather and soil type. We also found data points such as a trail’s total distance to be relevant to our app users and decided to include that on the front-end, too. Since we wanted to go beyond relying solely on user-reported metrics, which is how REI’s current MTB project works, we came up with a list of factors that could affect the trail for that day.

First on that list was weather.

We not only considered the impacts of the current forecast, but we also looked at the forecasts for previous days. For example, it’s safe to assume that if it’s currently raining, or has been raining over the last several days, the trail is likely to be muddy and in unfavorable condition. We utilized the DarkSky API (https://darksky.net/dev) to get the weather forecast for that day, as well as the records for previous days. This included expected information, like temperature and precipitation chance. It also included some interesting data points that we realized may be factors, like precipitation intensity, cloud cover, and UV index.

But weather alone can’t predict how muddy or dry a trail will be. To determine that for sure, we also wanted to use soil data to help predict how well a trail’s unique soil composition recovers after precipitation. Similar amounts of rain on trails of very different soil types could lead to vastly different trail conditions. A more clay-based soil would hold water much longer, and therefore be much more unfavorable, than loamy soil. Finding a reliable source for soil type and soil drainage proved incredibly difficult. After many hours, we finally found a source through the USDA that we could use. As a side note—the USDA keeps track of lots of data points on soil information that’s actually pretty interesting! We can’t say we’re soil experts, but we felt like we got pretty close.

We used Whimsical to build our initial wireframes.

Putting our design hats on.

From the very first pitch for this app, TrailBuddy’s main differentiator from peer trail resources has been its ability to surface real-time information reliably and simply. However complicated the technology needed to collect and interpret that information, the front-end app design needed to be clean and unencumbered.

We thought about how users would naturally look for information when setting out to find a trail and what factors they’d think about when doing so. We posed questions like:

  • How easy or difficult a trail are they looking for?
  • How long is this trail?
  • What does the trail look like?
  • How far away is the trail in relation to my location?
  • What activity do I need a trail for?
  • Is this a trail I’d want to come back to in the future?

By putting ourselves in our users’ shoes we quickly identified key features TrailBuddy needed to have to be relevant and useful. First, we needed filtering, so users could filter by difficulty and distance to narrow down their results to fit their activity level. Next, we needed a way to look up trails by activity type—mountain biking, hiking, and running are all types of activities REI’s MTB API tracks already, so those made sense as a starting point. And lastly, we needed a way for the app to find trails based on your location, or at the very least the ability to find a trail within a certain distance of your current location.

We used Figma to design, prototype, and gather feedback on TrailBuddy.

Using machine learning to predict trail conditions.

As stated earlier, none of us are actual soil or data scientists. So, in order to achieve the real-time conditions reporting TrailBuddy promised, we decided to leverage machine learning to make predictions for us. Digging into the utility of machine learning was a first for all of us on this team. Luckily, there was an excellent tutorial that laid out the basics of building an ML model in Python. Given a CSV file with inputs in the left columns and the desired output on the right, the script we generated was able to test out multiple model strategies and output the effectiveness of each in predicting results, shown below.

We assembled all of the historical weather and soil data we could find for a given latitude/longitude coordinate, compiled a 1000 * 100 sized CSV, ran it through the Python evaluator, and found that the CART and SVM models consistently outranked the others in terms of predicting trail status. In other words, we found a working model to run our data through and get (hopefully) reliable predictions from. The next step was to figure out which data fields were actually critical in predicting the trail status. The more we could refine our data set, the faster and smarter our predictive model could become.
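
We are not reproducing the exact script here, but the comparison step looked roughly like the sketch below (the file name and column names are made up for illustration):

# Rough sketch of how we compared model strategies, not the exact script.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

data = pd.read_csv("trail_history.csv")        # hypothetical export of our compiled data
X = data.drop(columns=["trail_status"])        # weather + soil features
y = data["trail_status"]                       # e.g. "dry" / "muddy"

models = {
    "CART": DecisionTreeClassifier(),
    "SVM": SVC(gamma="auto"),
    "LR": LogisticRegression(max_iter=1000),
    "KNN": KNeighborsClassifier(),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} (+/- {scores.std():.3f})")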

We pulled in some Ruby code to take the original (and quite massive) CSV and output smaller versions to test with. Again, we’re no data scientists here, but we were able to cull out a good majority of the data and still get a model that performed at 95% accuracy.

With our trained model in hand, we could serialize it into a model.pkl file (pkl stands for “pickle”, as in we’ve “pickled” the model), move that file into our Rails app along with a Python script to deserialize it, pass in a dynamic set of data, and generate real-time predictions. At the end of the day, our model has a propensity to predict fantastic trail conditions (about 99% of the time, in fact…). Just one of those optimistic machine learning models, we guess.
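
A simplified sketch of what that hand-off can look like on the Python side (file names, feature order and values are illustrative):

# Training side: fit the chosen model and "pickle" it to disk.
import pickle

import pandas as pd
from sklearn.tree import DecisionTreeClassifier

data = pd.read_csv("trail_history.csv")
X = data.drop(columns=["trail_status"])
y = data["trail_status"]

model = DecisionTreeClassifier().fit(X, y)
with open("model.pkl", "wb") as f:
    pickle.dump(model, f)

# Prediction side: the Rails app shells out to a script that does roughly this,
# passing in a fresh row of weather and soil values for the requested trail.
with open("model.pkl", "rb") as f:
    trained = pickle.load(f)

fresh_row = [[2.4, 0.8, 17.0, 3]]      # hypothetical rainfall, intensity, temperature, drainage values
print(trained.predict(fresh_row)[0])   # e.g. "dry" or "muddy"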

Where we go from here.

It was clear that after two days, our team still wanted to do more. As a first refinement, we’d love to work more with our data set and ML model. Something that was quite surprising during the weekend was that we found we could remove all but two days’ worth of weather data, and all of the soil data we worked so hard to dig up, and still hit 95% accuracy. Which … doesn’t make a ton of sense. Perhaps the data we chose to predict trail conditions just isn’t a great empirical predictor of trail status. While these are questions too big to solve in just a single weekend, we'd love to spend more time digging into this in a future iteration.





Cute Collection of 210 User Interface Icons

Do you remember how your life was before Freepik and Flaticon? No, I can’t remember the dark ages either. To celebrate these golden times, they are giving away once more an incredible package of 210 User Interface Icons in 3 versions: flat, filled and lineal.  Download This work is licensed under a Creative Commons Attribution 3.0 License …






Freebie: 264 Vector Audio DJ Pack Icons

Icon packs are among the most desirable freebies around. There are several out there, covering a wide array of topics from user interfaces to personal finance. But sometimes you can find some rather unusual but clever additions to the icons universe. This Vector Audio DJ Pack is a nice example, brought to you exclusively …






240 Basic Icons Vector Freebie

Flat design is everywhere. Nowadays aesthetics are a lot simpler. No more glossy buttons or gradient backgrounds, or what about the shiny table effect every client asked for? It is all gone now, in favor of a more “undesigned” look, a back-to-basics trend. Following that idea, the guys at your favorite resources …






Meet SmashingConf Live: Our New Interactive Online Conference

In these strange times when everything is connected, it’s too easy to feel lonely and detached. Yes, everybody is just one message away, but there is always something in the way — deadlines to meet, Slack messages to reply to, or urgent PRs to review. Connections need time and space to grow, just like learning, and conferences are a great way to find that time and that space. In fact, with SmashingConfs, we’ve always been trying to create such friendly and inclusive spaces.





Nikon has confirmed that their flagship D6 DSLR will start shipping on May 21st

It feels like forever since Nikon announced their newest flagship DSLR, the Nikon D6. It’s actually only been three months, but that hasn’t stopped some people from getting anxious. Recently, customers were being told that the D6 would start shipping right about now, and Nikon has now officially come out to announce that the Nikon D6 […]

The post Nikon has confirmed that their flagship D6 DSLR will start shipping on May 21st appeared first on DIY Photography.





Differentiating through Log-Log Convex Programs. (arXiv:2004.12553v2 [math.OC] UPDATED)

We show how to efficiently compute the derivative (when it exists) of the solution map of log-log convex programs (LLCPs). These are nonconvex, nonsmooth optimization problems with positive variables that become convex when the variables, objective functions, and constraint functions are replaced with their logs. We focus specifically on LLCPs generated by disciplined geometric programming, a grammar consisting of a set of atomic functions with known log-log curvature and a composition rule for combining them. We represent a parametrized LLCP as the composition of a smooth transformation of parameters, a convex optimization problem, and an exponential transformation of the convex optimization problem's solution. The derivative of this composition can be computed efficiently, using recently developed methods for differentiating through convex optimization problems. We implement our method in CVXPY, a Python-embedded modeling language and rewriting system for convex optimization. In just a few lines of code, a user can specify a parametrized LLCP, solve it, and evaluate the derivative or its adjoint at a vector. This makes it possible to conduct sensitivity analyses of solutions, given perturbations to the parameters, and to compute the gradient of a function of the solution with respect to the parameters. We use the adjoint of the derivative to implement differentiable log-log convex optimization layers in PyTorch and TensorFlow. Finally, we present applications to designing queuing systems and fitting structured prediction models.
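
For readers who have not used disciplined geometric programming in CVXPY, a toy parametrized LLCP looks roughly like the sketch below. The derivative call at the end is the functionality this paper describes, so the exact API may differ between CVXPY versions:

# Toy parametrized log-log convex program in CVXPY (disciplined geometric programming).
import cvxpy as cp

x = cp.Variable(pos=True)
y = cp.Variable(pos=True)
a = cp.Parameter(pos=True, value=2.0)   # problem data we may want to perturb

objective = cp.Minimize(x / y)
constraints = [x * y >= a, x + y <= 5.0]
problem = cp.Problem(objective, constraints)

# gp=True tells CVXPY to interpret the problem as log-log convex;
# requires_grad enables the differentiation machinery discussed in the paper.
problem.solve(gp=True, requires_grad=True)

# Sensitivity of the solution to a small perturbation of the parameter a.
a.delta = 1e-2
problem.derivative()
print(x.value, x.delta)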





Convergent normal forms for five dimensional totally nondegenerate CR manifolds in C^4. (arXiv:2004.11251v2 [math.CV] UPDATED)

Applying the equivariant moving frames method, we construct convergent normal forms for real-analytic 5-dimensional totally nondegenerate CR submanifolds of C^4. These CR manifolds are divided into several biholomorphically inequivalent subclasses, each of which has its own complete normal form. Moreover, it is shown that, biholomorphically, Beloshapka's cubic model is the unique member of this class with the maximum possible dimension seven of the corresponding algebra of infinitesimal CR automorphisms. Our results are also useful in the study of the biholomorphic equivalence problem between the CR manifolds in question.





Surface Effects in Superconductors with Corners. (arXiv:2003.00521v2 [math-ph] UPDATED)

We review some recent results on the phenomenon of surface superconductivity in the framework of Ginzburg-Landau theory for extreme type-II materials. In particular, we focus on the response of the superconductor to a strong longitudinal magnetic field in the regime where superconductivity survives only along the boundary of the wire. We derive the energy and density asymptotics for samples with smooth cross section, up to curvature-dependent terms. Furthermore, we discuss the corrections in presence of corners at the boundary of the sample.





Linear Convergence of First- and Zeroth-Order Primal-Dual Algorithms for Distributed Nonconvex Optimization. (arXiv:1912.12110v2 [math.OC] UPDATED)

This paper considers the distributed nonconvex optimization problem of minimizing a global cost function formed by a sum of local cost functions by using local information exchange. We first propose a distributed first-order primal-dual algorithm. We show that it converges sublinearly to the stationary point if each local cost function is smooth and linearly to the global optimum under an additional condition that the global cost function satisfies the Polyak-Łojasiewicz condition. This condition is weaker than strong convexity, which is a standard condition for proving the linear convergence of distributed optimization algorithms, and the global minimizer is not necessarily unique or finite. Motivated by the situations where the gradients are unavailable, we then propose a distributed zeroth-order algorithm, derived from the proposed distributed first-order algorithm by using a deterministic gradient estimator, and show that it has the same convergence properties as the proposed first-order algorithm under the same conditions. The theoretical results are illustrated by numerical simulations.





Data-driven parameterizations of suboptimal LQR and H2 controllers. (arXiv:1912.07671v2 [math.OC] UPDATED)

In this paper we design suboptimal control laws for an unknown linear system on the basis of measured data. We focus on the suboptimal linear quadratic regulator problem and the suboptimal H2 control problem. For both problems, we establish conditions under which a given data set contains sufficient information for controller design. We follow up by providing a data-driven parameterization of all suboptimal controllers. We will illustrate our results by numerical simulations, which will reveal an interesting trade-off between the number of collected data samples and the achieved controller performance.





Topology Identification of Heterogeneous Networks: Identifiability and Reconstruction. (arXiv:1909.11054v2 [math.OC] UPDATED)

This paper addresses the problem of identifying the graph structure of a dynamical network using measured input/output data. This problem is known as topology identification and has received considerable attention in recent literature. Most existing literature focuses on topology identification for networks with node dynamics modeled by single integrators or single-input single-output (SISO) systems. The goal of the current paper is to identify the topology of a more general class of heterogeneous networks, in which the dynamics of the nodes are modeled by general (possibly distinct) linear systems. Our two main contributions are the following. First, we establish conditions for topological identifiability, i.e., conditions under which the network topology can be uniquely reconstructed from measured data. We also specialize our results to homogeneous networks of SISO systems and we will see that such networks have quite particular identifiability properties. Secondly, we develop a topology identification method that reconstructs the network topology from input/output data. The solution of a generalized Sylvester equation will play an important role in our identification scheme.





On boundedness, gradient estimate, blow-up and convergence in a two-species and two-stimuli chemotaxis system with/without loop. (arXiv:1909.04587v4 [math.AP] UPDATED)

In this work, we study dynamic properties of classical solutions to a homogeneous Neumann initial-boundary value problem (IBVP) for a two-species and two-stimuli chemotaxis model with/without chemical signalling loop in a 2D bounded and smooth domain. We successfully detect the product of two species masses as a feature to determine boundedness, gradient estimates, blow-up and $W^{j,\infty}$ $(1\leq j\leq 3)$-exponential convergence of classical solutions for the corresponding IBVP. More specifically, we first show generally that a smallness condition on the product of both species masses, thus allowing one species mass to be suitably large, is sufficient to guarantee global boundedness, higher order gradient estimates and $W^{j,\infty}$-convergence with rates of convergence to constant equilibria; and then, in a special case, we detect a straight line of masses on which blow-up occurs for large product of masses. Our findings provide new understandings about the underlying model, and thus, improve and extend greatly the existing knowledge relevant to this model.





Convolutions on the complex torus. (arXiv:1908.11815v3 [math.RA] UPDATED)

"Quasi-elliptic" functions can be given a ring structure in two different ways, using either ordinary multiplication, or convolution. The map between the corresponding standard bases is calculated and given by Eisenstein series. A related structure has appeared recently in the computation of Feynman integrals. The two approaches are related by a sequence of polynomials with interlacing zeroes.





Decentralized and Parallelized Primal and Dual Accelerated Methods for Stochastic Convex Programming Problems. (arXiv:1904.09015v10 [math.OC] UPDATED)

We introduce primal and dual stochastic gradient oracle methods for decentralized convex optimization problems. Both for primal and dual oracles the proposed methods are optimal in terms of the number of communication steps. However, for all classes of the objective, the optimality in terms of the number of oracle calls per node in the class of methods with optimal number of communication steps takes place only up to a logarithmic factor and the notion of smoothness. By using mini-batching technique we show that all proposed methods with stochastic oracle can be additionally parallelized at each node.





Grothendieck's inequalities for JB$^*$-triples: Proof of the Barton-Friedman conjecture. (arXiv:1903.08931v3 [math.OA] UPDATED)

We prove that, given a constant $K > 2$ and a bounded linear operator $T$ from a JB$^*$-triple $E$ into a complex Hilbert space $H$, there exists a norm-one functional $\psi\in E^*$ satisfying $$\|T(x)\| \leq K \, \|T\| \, \|x\|_{\psi},$$ for all $x\in E$. Applying this result we show that, given $G > 8 (1+2\sqrt{3})$ and a bounded bilinear form $V$ on the Cartesian product of two JB$^*$-triples $E$ and $B$, there exist norm-one functionals $\varphi\in E^{*}$ and $\psi\in B^{*}$ satisfying $$|V(x,y)| \leq G \, \|V\| \, \|x\|_{\varphi} \, \|y\|_{\psi}$$ for all $(x,y)\in E \times B$. These results prove a conjecture pursued for almost twenty years.





Optimal construction of Koopman eigenfunctions for prediction and control. (arXiv:1810.08733v3 [math.OC] UPDATED)

This work presents a novel data-driven framework for constructing eigenfunctions of the Koopman operator geared toward prediction and control. The method leverages the richness of the spectrum of the Koopman operator away from attractors to construct a rich set of eigenfunctions such that the state (or any other observable quantity of interest) is in the span of these eigenfunctions and hence predictable in a linear fashion. The eigenfunction construction is optimization-based with no dictionary selection required. Once a predictor for the uncontrolled part of the system is obtained in this way, the incorporation of control is done through a multi-step prediction error minimization, carried out by a simple linear least-squares regression. The predictor so obtained is in the form of a linear controlled dynamical system and can be readily applied within the Koopman model predictive control framework of [12] to control nonlinear dynamical systems using linear model predictive control tools. The method is entirely data-driven and based purely on convex optimization, with no reliance on neural networks or other non-convex machine learning tools. The novel eigenfunction construction method is also analyzed theoretically, proving rigorously that the family of eigenfunctions obtained is rich enough to span the space of all continuous functions. In addition, the method is extended to construct generalized eigenfunctions that also give rise to Koopman invariant subspaces and hence can be used for linear prediction. Detailed numerical examples with code available online demonstrate the approach, both for prediction and feedback control.





The 2d-directed spanning forest converges to the Brownian web. (arXiv:1805.09399v3 [math.PR] UPDATED)

The two-dimensional directed spanning forest (DSF) introduced by Baccelli and Bordenave is a planar directed forest whose vertex set is given by a homogeneous Poisson point process $\mathcal{N}$ on $\mathbb{R}^2$. If the DSF has direction $-e_y$, the ancestor $h(u)$ of a vertex $u \in \mathcal{N}$ is the nearest Poisson point (in the $L_2$ distance) having strictly larger $y$-coordinate. This construction induces complex geometrical dependencies. In this paper we show that the collection of DSF paths, properly scaled, converges in distribution to the Brownian web (BW). This verifies a conjecture made by Baccelli and Bordenave in 2007.





Conservative stochastic 2-dimensional Cahn-Hilliard equation. (arXiv:1802.04141v2 [math.PR] UPDATED)

We consider the stochastic 2-dimensional Cahn-Hilliard equation which is driven by the derivative in space of a space-time white noise. We use two different approaches to study this equation. First we prove that there exists a unique solution $Y$ to the shifted equation (see (1.4) below); then $X:=Y+{Z}$ is the unique solution to the stochastic Cahn-Hilliard equation, where ${Z}$ is the corresponding O-U process. Moreover, we use the Dirichlet form approach in \cite{Albeverio:1991hk} to construct the probabilistically weak solution to the original equation (1.1) below. By clarifying the precise relation between the solutions obtained by the Dirichlet form approach and $X$, we can also get the restricted Markov uniqueness of the generator and the uniqueness of martingale solutions to the equation (1.1).





Expansion of Iterated Stratonovich Stochastic Integrals of Arbitrary Multiplicity Based on Generalized Iterated Fourier Series Converging Pointwise. (arXiv:1801.00784v9 [math.PR] UPDATED)

The article is devoted to the expansion of iterated Stratonovich stochastic integrals of arbitrary multiplicity $k$ $(k\in\mathbb{N})$ based on the generalized iterated Fourier series. The case of Fourier-Legendre series as well as the case of trigonometric Fourier series are considered in detail. The obtained expansion provides a possibility to represent the iterated Stratonovich stochastic integral in the form of an iterated series of products of standard Gaussian random variables. Convergence in the mean of degree $2n$ $(n\in \mathbb{N})$ of the expansion is proved. Some modifications of the mentioned expansion were derived for the case $k=2$. One of them is based on multiple trigonometric Fourier series converging almost everywhere in the square $[t, T]^2$. The results of the article can be applied to the numerical solution of Ito stochastic differential equations.





Groups up to congruence relation and from categorical groups to c-crossed modules. (arXiv:2005.03601v1 [math.CT])

We introduce a notion of c-group, which is a group up to congruence relation and consider the corresponding category. Extensions, actions and crossed modules (c-crossed modules) are defined in this category and the semi-direct product is constructed. We prove that each categorical group gives rise to c-groups and to a c-crossed module, which is a connected, special and strict c-crossed module in the sense defined by us. The results obtained here will be applied in the proof of an equivalence of the categories of categorical groups and connected, special and strict c-crossed modules.





Connectedness of square-free Groebner Deformations. (arXiv:2005.03569v1 [math.AC])

Let $I\subseteq S=K[x_1,\ldots,x_n]$ be a homogeneous ideal equipped with a monomial order $<$. We show that if $\operatorname{in}_<(I)$ is a square-free monomial ideal, then $S/I$ and $S/\operatorname{in}_<(I)$ have the same connectedness dimension. We also show that graphs related to connectedness of these quotient rings have the same number of components. We also provide consequences regarding Lyubeznik numbers. We obtain these results by furthering the study of connectedness modulo a parameter in a local ring.





Continuity properties of the shearlet transform and the shearlet synthesis operator on the Lizorkin type spaces. (arXiv:2005.03505v1 [math.FA])

We develop a distributional framework for the shearlet transform $\mathcal{S}_{\psi}\colon\mathcal{S}_0(\mathbb{R}^2)\to\mathcal{S}(\mathbb{S})$ and the shearlet synthesis operator $\mathcal{S}^t_{\psi}\colon\mathcal{S}(\mathbb{S})\to\mathcal{S}_0(\mathbb{R}^2)$, where $\mathcal{S}_0(\mathbb{R}^2)$ is the Lizorkin test function space and $\mathcal{S}(\mathbb{S})$ is the space of highly localized test functions on the standard shearlet group $\mathbb{S}$. These spaces and their duals $\mathcal{S}_0^\prime(\mathbb{R}^2),\, \mathcal{S}^\prime(\mathbb{S})$ are called Lizorkin type spaces of test functions and distributions. We analyze the continuity properties of these transforms when the admissible vector $\psi$ belongs to $\mathcal{S}_0(\mathbb{R}^2)$. Then, we define the shearlet transform and the shearlet synthesis operator of Lizorkin type distributions as transpose mappings of the shearlet synthesis operator and the shearlet transform, respectively. They yield continuous mappings from $\mathcal{S}_0^\prime(\mathbb{R}^2)$ to $\mathcal{S}^\prime(\mathbb{S})$ and from $\mathcal{S}^\prime(\mathbb{S})$ to $\mathcal{S}_0^\prime(\mathbb{R}^2)$. Furthermore, we show the consistency of our definition with the shearlet transform defined by direct evaluation of a distribution on the shearlets. The same can be done for the shearlet synthesis operator. Finally, we give a reconstruction formula for Lizorkin type distributions, from which it follows that the action of such generalized functions can be written as an absolutely convergent integral over the standard shearlet group.





Toric Sasaki-Einstein metrics with conical singularities. (arXiv:2005.03502v1 [math.DG])

We show that any toric Kähler cone with smooth compact cross-section admits a family of Calabi-Yau cone metrics with conical singularities along its toric divisors. The family is parametrized by the Reeb cone and the angles are given explicitly in terms of the Reeb vector field. The result is optimal, in the sense that any toric Calabi-Yau cone metric with conical singularities along the toric divisor (and smooth elsewhere) belongs to this family. We also provide examples and interpret our results in terms of Sasaki-Einstein metrics.





Continuity in a parameter of solutions to boundary-value problems in Sobolev spaces. (arXiv:2005.03494v1 [math.CA])

We consider the most general class of linear inhomogeneous boundary-value problems for systems of ordinary differential equations of an arbitrary order whose solutions and right-hand sides belong to appropriate Sobolev spaces. For parameter-dependent problems from this class, we prove a constructive criterion for their solutions to be continuous in the Sobolev space with respect to the parameter. We also prove a two-sided estimate for the degree of convergence of these solutions to the solution of the nonperturbed problem.
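For a concrete (if very simple) picture of this kind of parameter dependence, the sketch below solves a toy scalar boundary-value problem $y'' + (1+\varepsilon)y = 1$, $y(0)=y(1)=0$, for several values of $\varepsilon$ and measures the deviation from the unperturbed solution in a discrete max-norm; the equation, boundary conditions and norm are invented for illustration and are far simpler than the Sobolev-space setting of the paper.

```python
# Toy numerical illustration of continuous dependence of a BVP solution on a parameter:
# y'' + (1 + eps) * y = 1 on [0, 1] with y(0) = y(1) = 0, solved for several eps and
# compared with the eps = 0 solution in a discrete max-norm.
import numpy as np
from scipy.integrate import solve_bvp

def solve_for(eps):
    def fun(x, y):                       # first-order system for y'' + (1 + eps) y = 1
        return np.vstack([y[1], 1.0 - (1.0 + eps) * y[0]])
    def bc(ya, yb):                      # Dirichlet conditions y(0) = y(1) = 0
        return np.array([ya[0], yb[0]])
    x = np.linspace(0.0, 1.0, 101)
    sol = solve_bvp(fun, bc, x, np.zeros((2, x.size)))
    return sol.sol(x)[0]

y_ref = solve_for(0.0)
for eps in [0.5, 0.1, 0.01]:
    print(f"eps = {eps:5.2f}   max deviation from eps = 0 solution: "
          f"{np.max(np.abs(solve_for(eps) - y_ref)):.2e}")
```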





On the connection problem for the second Painlevé equation with large initial data. (arXiv:2005.03440v1 [math.CA])

We consider two special cases of the connection problem for the second Painlevé equation (PII) using the method of uniform asymptotics proposed by Bassom et al. We give a classification of the real solutions of PII on the negative (positive) real axis with respect to their initial data. As a by-product, a rigorous proof of a property associated with the nonlinear eigenvalue problem of PII on the real axis, recently revealed by Bender and Komijani, is given by deriving the asymptotic behavior of the Stokes multipliers.
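To get a rough numerical feel for how real PII solutions depend on their initial data (purely illustrative, and in no way reproducing the paper's uniform asymptotics or Stokes-multiplier analysis), one can integrate PII, $u'' = 2u^3 + xu + \alpha$, toward the negative real axis from a few hypothetical initial values:

```python
# Integrate the second Painlevé equation u'' = 2u^3 + x u + alpha (alpha = 0 here)
# from x = 0 toward the negative real axis for a few made-up initial values u(0),
# with u'(0) = 0, just to see the dependence on initial data.
import numpy as np
from scipy.integrate import solve_ivp

alpha = 0.0

def pii(x, y):
    u, up = y
    return [up, 2.0 * u**3 + x * u + alpha]

xs = np.linspace(0.0, -8.0, 400)
for u0 in (0.1, 0.3, 0.5):
    sol = solve_ivp(pii, (0.0, -8.0), [u0, 0.0], t_eval=xs, rtol=1e-9, atol=1e-12)
    print(f"u(0) = {u0:3.1f}   u(-8) ≈ {sol.y[0, -1]: .4f}   (integration ok: {sol.success})")
```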





The formation of trapped surfaces in the gravitational collapse of spherically symmetric scalar fields with a positive cosmological constant. (arXiv:2005.03434v1 [gr-qc])

Given spherically symmetric characteristic initial data for the Einstein-scalar field system with a positive cosmological constant, we provide a criterion, in terms of the dimensionless size and dimensionless renormalized mass content of an annular region of the data, for the formation of a future trapped surface. This corresponds to an extension of Christodoulou's classical criterion by the inclusion of the cosmological term.





Minimum pair degree condition for tight Hamiltonian cycles in $4$-uniform hypergraphs. (arXiv:2005.03391v1 [math.CO])

We show that every 4-uniform hypergraph with $n$ vertices and minimum pair degree at least $(5/9+o(1))n^2/2$ contains a tight Hamiltonian cycle. This degree condition is asymptotically optimal.





Constructions of new matroids and designs over GF(q). (arXiv:2005.03369v1 [math.CO])

A perfect matroid design (PMD) is a matroid whose flats of the same rank all have the same size. In this paper we introduce the q-analogue of a PMD and its properties. To do so, we first establish new cryptomorphic definitions for q-matroids. We show that q-Steiner systems are examples of q-PMDs and we use this matroid structure to construct subspace designs from q-Steiner systems. We apply this construction to S(2, 13, 3; q) Steiner systems and hence establish the existence of subspace designs with previously unknown parameters.





Converging outer approximations to global attractors using semidefinite programming. (arXiv:2005.03346v1 [math.OC])

This paper develops a method for obtaining guaranteed outer approximations for global attractors of continuous and discrete time nonlinear dynamical systems. The method is based on a hierarchy of semidefinite programming problems of increasing size with guaranteed convergence to the global attractor. The approach taken follows an established line of reasoning, where we first characterize the global attractor via an infinite dimensional linear programming problem (LP) in the space of Borel measures. The dual to this LP is in the space of continuous functions and its feasible solutions provide guaranteed outer approximations to the global attractor. For systems with polynomial dynamics, a hierarchy of finite-dimensional sum-of-squares tightenings of the dual LP provides a sequence of outer approximations to the global attractor with guaranteed convergence in the sense of volume discrepancy tending to zero. The method is very simple to use and based purely on convex optimization. Numerical examples with the code available online demonstrate the method.





Asymptotics of PDE in random environment by paracontrolled calculus. (arXiv:2005.03326v1 [math.PR])

We apply the paracontrolled calculus to study the asymptotic behavior of a certain quasilinear PDE with smeared mild noise, which originally appears as the space-time scaling limit of a particle system in a random environment on the one-dimensional discrete lattice. We establish the convergence result and show local-in-time well-posedness of the limit stochastic PDE with spatial white noise. It turns out that our limit stochastic PDE does not require any renormalization. We also show a comparison theorem for the limit equation.





The conjecture of Erdős–Straus is true for every $n\equiv 13 \textrm{ mod } 24$. (arXiv:2005.03273v1 [math.NT])

In this short note we give a proof of the famous conjecture of Erdős–Straus for the case $n\equiv 13 \textrm{ mod } 24$. The Erdős–Straus conjecture states that the equation $\frac{4}{n}=\frac{1}{x}+\frac{1}{y}+\frac{1}{z}$ has positive integer solutions $x,y,z$ for every $n\geq 2$. It is open for $n\equiv 1 \textrm{ mod } 12$. Indeed, in all of the other cases the solutions are always easy to find. We prove that the conjecture is true for every $n\equiv 13 \textrm{ mod } 24$. Therefore, to solve it completely, it remains to find solutions for every $n\equiv 1 \textrm{ mod } 24$.
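A quick computational sanity check of the statement (not a substitute for the proof) is to search for unit-fraction decompositions of $\frac{4}{n}$ for a few small $n\equiv 13 \textrm{ mod } 24$; the search bound below is a heuristic choice.

```python
# Brute-force check that 4/n = 1/x + 1/y + 1/z has positive integer solutions for a few
# n ≡ 13 (mod 24); the bound is heuristic, so None only means "not found within the
# bound", never a counterexample.
from fractions import Fraction

def erdos_straus_solution(n, limit=2000):
    """Return some (x, y, z) with 4/n = 1/x + 1/y + 1/z, or None if not found within the bound."""
    target = Fraction(4, n)
    for x in range(n // 4 + 1, limit):
        r1 = target - Fraction(1, x)
        if r1 <= 0:
            continue
        for y in range(x, limit):
            r2 = r1 - Fraction(1, y)
            if r2 <= 0:
                continue
            if r2.numerator == 1:          # the remainder is already a unit fraction 1/z
                return x, y, r2.denominator
    return None

for n in range(13, 150, 24):               # n = 13, 37, 61, 85, 109, 133
    print(n, erdos_straus_solution(n))
```

For the small cases tested this finds a decomposition almost immediately, though of course such a search only ever checks finitely many $n$.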





A Note on Cores and Quasi Relative Interiors in Partially Finite Convex Programming. (arXiv:2005.03265v1 [math.FA])

The problem of minimizing an entropy functional subject to linear constraints is a useful example of partially finite convex programming. In the 1990s, Borwein and Lewis provided broad and easy-to-verify conditions that guarantee strong duality for such problems. Their approach is to construct a function in the quasi-relative interior of the relevant infinite-dimensional set, which assures the existence of a point in the core of the relevant finite-dimensional set. We revisit this problem, and provide an alternative proof by directly appealing to the definition of the core, rather than by relying on any properties of the quasi-relative interior. Our approach admits a minor relaxation of the linear independence requirements in Borwein and Lewis' framework, which allows us to work with certain piecewise-defined moment functions precluded by their conditions. We provide such a computed example that illustrates how this relaxation may be used to tame observed Gibbs phenomenon when the underlying data is discontinuous. The relaxation illustrates the understanding we may gain by tackling partially-finite problems from both the finite-dimensional and infinite-dimensional sides. The comparison of these two approaches is informative, as both proofs are constructive.
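As a finite-dimensional toy analogue of the entropy problems discussed above (not the paper's partially finite setting, and with made-up moment data), one can minimize a discrete entropy functional subject to a few linear moment constraints:

```python
# Toy finite-dimensional analogue: minimize sum p_i log p_i subject to linear (moment)
# constraints A p = b and p >= 0. The grid, moment functionals and data are invented
# for illustration only.
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0.0, 1.0, 50)                 # discretization grid
A = np.vstack([np.ones_like(t), t, t**2])     # moment functionals: mass, mean, second moment
b = np.array([1.0, 0.4, 0.2])                 # prescribed moment values (hypothetical data)

def neg_entropy(p):
    p = np.clip(p, 1e-12, None)               # avoid log(0)
    return np.sum(p * np.log(p))

p0 = np.full_like(t, 1.0 / len(t))
res = minimize(
    neg_entropy, p0, method="SLSQP",
    constraints={"type": "eq", "fun": lambda p: A @ p - b},
    bounds=[(0.0, None)] * len(t),
)
print("converged:", res.success, "entropy:", -res.fun)
```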





The Congruence Subgroup Problem for finitely generated Nilpotent Groups. (arXiv:2005.03263v1 [math.GR])

The congruence subgroup problem for a finitely generated group $\Gamma$ and $G\leq Aut(\Gamma)$ asks whether the map $\hat{G}\to Aut(\hat{\Gamma})$ is injective, or more generally, what is its kernel $C\left(G,\Gamma\right)$? Here $\hat{X}$ denotes the profinite completion of $X$. In the case $G=Aut(\Gamma)$ we denote $C\left(\Gamma\right)=C\left(Aut(\Gamma),\Gamma\right)$.

Let $\Gamma$ be a finitely generated group, $\bar{\Gamma}=\Gamma/[\Gamma,\Gamma]$, and $\Gamma^{*}=\bar{\Gamma}/tor(\bar{\Gamma})\cong\mathbb{Z}^{(d)}$. Denote $Aut^{*}(\Gamma)=\textrm{Im}(Aut(\Gamma)\to Aut(\Gamma^{*}))\leq GL_{d}(\mathbb{Z})$. In this paper we show that when $\Gamma$ is nilpotent, there is a canonical isomorphism $C\left(\Gamma\right)\simeq C(Aut^{*}(\Gamma),\Gamma^{*})$. In other words, $C\left(\Gamma\right)$ is completely determined by the solution to the classical congruence subgroup problem for the arithmetic group $Aut^{*}(\Gamma)$.

In particular, in the case where $\Gamma=\Psi_{n,c}$ is a finitely generated free nilpotent group of class $c$ on $n$ elements, we get that $C(\Psi_{n,c})=C(\mathbb{Z}^{(n)})=\{e\}$ whenever $n\geq 3$, and $C(\Psi_{2,c})=C(\mathbb{Z}^{(2)})=\hat{F}_{\omega}$, the free profinite group on a countable number of generators.





Dynamical Phase Transitions for Fluxes of Mass on Finite Graphs. (arXiv:2005.03262v1 [cond-mat.stat-mech])

We study the time-averaged flux in a model of particles that randomly hop on a finite directed graph. In the limit as the number of particles and the time window go to infinity but the graph remains finite, the large-deviation rate functional of the average flux is given by a variational formulation involving paths of the density and flux. We give sufficient conditions under which the large deviations of a given time-averaged flux are determined by paths that are constant in time. We then consider a class of models on a discrete ring for which it is possible to show that a better strategy is obtained by producing a time-dependent path. This phenomenon, called a dynamical phase transition, is known to occur for some particle systems in the hydrodynamic scaling limit, and it is here extended to the setting of a finite graph.
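The following toy simulation sketches the kind of model in question: independent particles hopping at random on a small directed graph, with the time-averaged flux across one edge recorded. The graph, transition probabilities and horizon are invented for illustration, and the paper's variational large-deviation analysis is not reproduced here.

```python
# Independent particles hopping on a small directed graph; record the time-averaged
# flux across one chosen edge. Purely illustrative toy model.
import numpy as np

rng = np.random.default_rng(0)

# Directed 4-cycle with an extra chord; P[i, j] = probability of hopping i -> j.
P = np.array([
    [0.0, 0.7, 0.0, 0.3],
    [0.0, 0.0, 1.0, 0.0],
    [0.3, 0.0, 0.0, 0.7],
    [1.0, 0.0, 0.0, 0.0],
])
n_particles, n_steps = 100, 2000
positions = rng.integers(0, 4, size=n_particles)

edge = (0, 1)            # count crossings of this edge
crossings = 0
for _ in range(n_steps):
    for k in range(n_particles):
        i = positions[k]
        j = rng.choice(4, p=P[i])
        if (i, j) == edge:
            crossings += 1
        positions[k] = j

avg_flux = crossings / (n_particles * n_steps)
print("time-averaged flux per particle across edge 0 -> 1:", avg_flux)
```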





An Issue Raised in 1978 by a Then-Future Editor-in-Chief of the Journal "Order": Does the Endomorphism Poset of a Finite Connected Poset Tell Us That the Poset Is Connected?. (arXiv:2005.03255v1 [math.CO])

In 1978, Dwight Duffus---editor-in-chief of the journal "Order" from 2010 to 2018 and chair of the Mathematics Department at Emory University from 1991 to 2005---wrote that "it is not obvious that $P$ is connected and $P^P$ isomorphic to $Q^Q$ implies that $Q$ is connected," where $P$ and $Q$ are finite non-empty posets. We show that, indeed, under these hypotheses $Q$ is connected and $P\cong Q$.





A Chance Constraint Predictive Control and Estimation Framework for Spacecraft Descent with Field Of View Constraints. (arXiv:2005.03245v1 [math.OC])

Recent studies of optimization methods and GNC for spacecraft near small bodies, focusing on descent, landing, rendezvous, etc., with key safety constraints such as line-of-sight conic zones and soft landings, have shown promising results. This paper considers descent missions to an asteroid surface under a field-of-view constraint, defined through an onboard camera and asteroid surface markers, while using a stochastic convex MPC law. An undermodeled asteroid gravity model and a measurement model inspired by current spacecraft technology are established to develop the constraint. A computationally light stochastic Linear Quadratic MPC strategy is then presented to keep the surface markers within a satisfactory field of view of the onboard camera during trajectory tracking, employing chance-based constraints and up-to-date estimation uncertainty from navigation. Particular attention is paid to the estimation uncertainty that gives rise to the tightened constraints. Results suggest robust tracking performance across a variety of trajectories.
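As a minimal sketch of the constraint-tightening idea behind such chance-constrained MPC schemes (the dynamics, weights, covariance and constraint below are invented; this is not the paper's controller), a Gaussian chance constraint $\Pr(a^\top x \le b) \ge 1-\varepsilon$ on the predicted state can be replaced by a deterministic constraint tightened by the corresponding quantile of the predicted uncertainty:

```python
# One-step sketch: a^T x <= b enforced with probability 1 - eps becomes the deterministic
# constraint a^T x_mean <= b - Phi^{-1}(1 - eps) * sqrt(a^T Sigma a). All numbers are
# made up for illustration.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

A = np.array([[1.0, 0.1], [0.0, 1.0]])       # toy double-integrator-like dynamics
B = np.array([[0.005], [0.1]])
Sigma = 0.01 * np.eye(2)                     # current state-estimate covariance (from navigation)
x_mean = np.array([1.0, 0.0])                # current state estimate
a, b_limit, eps = np.array([1.0, 0.0]), 1.2, 0.05

def one_step_cost(u):
    x_next = A @ x_mean + B @ u              # predicted mean of the next state
    return x_next @ x_next + 0.1 * float(u @ u)

def tightened_constraint(u):
    x_next = A @ x_mean + B @ u
    Sigma_next = A @ Sigma @ A.T             # propagated covariance (no process noise here)
    margin = norm.ppf(1 - eps) * np.sqrt(a @ Sigma_next @ a)
    return (b_limit - margin) - a @ x_next   # >= 0 means the chance constraint holds

res = minimize(one_step_cost, x0=np.zeros(1), method="SLSQP",
               constraints={"type": "ineq", "fun": tightened_constraint})
print("one-step control input:", res.x, "constraint slack:", tightened_constraint(res.x))
```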





New constructions of strongly regular Cayley graphs on abelian groups. (arXiv:2005.03183v1 [math.CO])

In this paper, we give new constructions of strongly regular Cayley graphs on abelian groups as generalizations of a series of known constructions: the construction of covering extended building sets in finite fields by Xia (1992), the product construction of Menon-Hadamard difference sets by Turyn (1984), and the construction of Paley type partial difference sets by Polhill (2010). Then, we obtain new large families of strongly regular Cayley graphs of Latin square type or negative Latin square type.
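For readers unfamiliar with the objects being constructed, the short check below verifies the defining identity of a strongly regular graph, $A^2 = kI + \lambda A + \mu(J - I - A)$, for the classical Paley Cayley graph on $\mathbb{Z}/13\mathbb{Z}$; this is only a textbook example of a Paley-type construction, not one of the new families in the paper.

```python
# Paley Cayley graph on Z/13Z: connect x and y when x - y is a nonzero square mod 13,
# then verify A^2 = k*I + lambda*A + mu*(J - I - A) with (v, k, lambda, mu) = (13, 6, 2, 3).
import numpy as np

q = 13
squares = {(x * x) % q for x in range(1, q)}            # nonzero quadratic residues mod q
A = np.array([[1 if (i - j) % q in squares else 0 for j in range(q)] for i in range(q)])

I, J = np.eye(q, dtype=int), np.ones((q, q), dtype=int)
k, lam, mu = 6, 2, 3
print("strongly regular:", np.array_equal(A @ A, k * I + lam * A + mu * (J - I - A)))
```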