
Remix and make music with audio from the Library of Congress

Brian Foo is the current Innovator-in-Residence at the Library of Congress. His latest…






Concurrency & Multithreading in iOS

Concurrency is the notion of multiple things happening at the same time. This is generally achieved either via time-slicing, or truly in parallel if multiple CPU cores are available to the host operating system. We've all experienced a lack of concurrency, most likely in the form of an app freezing up when running a heavy task. UI freezes don't necessarily occur due to the absence of concurrency — they could just be symptoms of buggy software — but software that doesn't take advantage of all the computational power at its disposal is going to create these freezes whenever it needs to do something resource-intensive. If you've profiled an app hanging in this way, you'll probably see a report that looks like this:

Anything related to file I/O, data processing, or networking usually warrants a background task (unless you have a very compelling excuse to halt the entire program). There aren't many reasons that these tasks should block your user from interacting with the rest of your application. Consider how much better the user experience of your app could be if instead, the profiler reported something like this:

Analyzing an image, processing a document or a piece of audio, or writing a sizeable chunk of data to disk are examples of tasks that could benefit greatly from being delegated to background threads. Let's dig into how we can introduce this kind of behavior into our iOS applications.


A Brief History

In the olden days, the maximum amount of work per CPU cycle that a computer could perform was determined by the clock speed. As processor designs became more compact, heat and physical constraints started becoming limiting factors for higher clock speeds. Consequently, chip manufacturers started adding additional processor cores to each chip in order to increase total performance. By increasing the number of cores, a single chip could execute more CPU instructions per cycle without increasing its speed, size, or thermal output. There's just one problem...

How can we take advantage of these extra cores? Multithreading.

Multithreading is a capability, provided by the host operating system, that allows the creation and use of n threads. Its main purpose is to provide simultaneous execution of two or more parts of a program to utilize all available CPU time. Multithreading is a powerful technique to have in a programmer's toolbelt, but it comes with its own set of responsibilities. A common misconception is that multithreading requires a multi-core processor, but this isn't the case: single-core CPUs are perfectly capable of working on many threads. We'll look shortly at why threading is a problem in the first place. Before we dive in, let's look at the nuances of what concurrency and parallelism mean using a simple diagram:

In the first situation presented above, we observe that tasks can run concurrently, but not in parallel. This is similar to having multiple conversations in a chatroom, and interleaving (context-switching) between them, but never truly conversing with two people at the same time. This is what we call concurrency. It is the illusion of multiple things happening at the same time when in reality, they're switching very quickly. Concurrency is about dealing with lots of things at the same time. Contrast this with the parallelism model, in which both tasks run simultaneously. Both execution models exhibit multithreading, which is the involvement of multiple threads working towards one common goal. Multithreading is a generalized technique for introducing a combination of concurrency and parallelism into your program.


The Burden of Threads

A modern multitasking operating system like iOS has hundreds of programs (or processes) running at any given moment. However, most of these programs are either system daemons or background processes that have a very low memory footprint, so what is really needed is a way for individual applications to make use of the extra cores available. An application (process) can have many threads (sub-processes) operating on shared memory. Our goal is to be able to control these threads and use them to our advantage.

Historically, introducing concurrency to an app has required the creation of one or more threads. Threads are low-level constructs that need to be managed manually. A quick skim through Apple's Threaded Programming Guide is all it takes to see how much complexity threaded code adds to a codebase. In addition to building an app, the developer has to:

  • Responsibly create new threads, adjusting that number dynamically as system conditions change
  • Manage them carefully, deallocating them from memory once they have finished executing
  • Leverage synchronization mechanisms like mutexes, locks, and semaphores to orchestrate resource access between threads, adding even more overhead to application code
  • Mitigate the risks of writing an application that bears most of the cost of creating and maintaining the threads it uses, rather than leaving that work to the host OS

This is unfortunate, as it adds enormous levels of complexity and risk without any guarantees of improved performance.
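
To get a feel for what that manual management looks like, here's a minimal sketch of spinning up a single raw Thread by hand (the workload is just a stand-in); everything below this point is about letting the system do this for us instead.

import Foundation

// A stand-in for the expensive work we want off the main thread.
func processChunk() {
    (0..<5_000_000).forEach { _ in }
}

// Creating and configuring the thread is entirely our responsibility.
let worker = Thread {
    processChunk()
}
worker.stackSize = 1 << 20   // we even have to reason about stack size
worker.start()
// ...and coordinating completion, synchronization, and teardown is on us too.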


Grand Central Dispatch

iOS takes an asynchronous approach to solving the concurrency problem of managing threads. Asynchronous functions are common in most programming environments, and are often used to initiate tasks that might take a long time, like reading a file from the disk or downloading a file from the web. When invoked, an asynchronous function executes some work behind the scenes to start a background task, but returns immediately, regardless of how long the original task might take to actually complete.

A core technology that iOS provides for starting tasks asynchronously is Grand Central Dispatch (or GCD for short). GCD abstracts away thread management code and moves it down to the system level, exposing a light API to define tasks and execute them on an appropriate dispatch queue. GCD takes care of all thread management and scheduling, providing a holistic approach to task management and execution, while also providing better efficiency than traditional threads.

Let's take a look at the main components of GCD:

What've we got here? Let's start from the left:

  • DispatchQueue.main: The main thread, or the UI thread, is backed by a single serial queue. All tasks are executed in succession, so it is guaranteed that the order of execution is preserved. It is crucial that you ensure all UI updates are performed on this queue, and that you never run any blocking tasks on it. We want to ensure that the app's run loop (called CFRunLoop) is never blocked in order to maintain the highest framerate. Consequently, the main queue has the highest priority, and any tasks pushed onto this queue will get executed immediately.
  • DispatchQueue.global: A set of global concurrent queues, each of which manages its own pool of threads. Depending on the priority of your task, you can specify which specific queue to execute your task on, although you should stick with the default most of the time. Because tasks on these queues are executed concurrently, the order in which they were queued isn't guaranteed to be preserved.

Notice how we're not dealing with individual threads anymore? We're dealing with queues which manage a pool of threads internally, and you will shortly see why queues are a much more sustainable approach to multithreading.
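
Before we get to a full example, here's the general shape of working with these queues, as a minimal sketch (the workload is just a stand-in): submit work to a global queue, then hop back to the main queue for anything user-facing.

import Foundation

// Submit a closure to one of the global concurrent queues; GCD picks a
// thread from its internal pool to run it on.
DispatchQueue.global(qos: .utility).async {
    let summary = (0..<1_000_000).reduce(0, +)   // stand-in for real work

    // Hop back to the serial main queue for anything that touches the UI.
    DispatchQueue.main.async {
        print("Finished with result \(summary)")
    }
}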

Serial Queues: The Main Thread

As an exercise, let's look at a snippet of code below, which gets fired when the user presses a button in the app. The expensive compute function can be anything. Let's pretend it is post-processing an image stored on the device.

import UIKit

class ViewController: UIViewController {
    @IBAction func handleTap(_ sender: Any) {
        compute()
    }

    private func compute() -> Void {
        // Pretending to post-process a large image.
        var counter = 0
        for _ in 0..<9999999 {
            counter += 1
        }
    }
}

At first glance, this may look harmless, but if you run this inside of a real app, the UI will freeze completely until the loop is terminated, which will take... a while. We can prove it by profiling this task in Instruments. You can fire up the Time Profiler module of Instruments by going to Xcode > Open Developer Tool > Instruments in Xcode's menu options. Let's look at the Threads module of the profiler and see where the CPU usage is highest.

We can see that the Main Thread is clearly at 100% capacity for almost 5 seconds. That's a non-trivial amount of time to block the UI. Looking at the call tree below the chart, we can see that the Main Thread is at 99.9% capacity for 4.43 seconds! Given that a serial queue works in a FIFO manner, tasks will always complete in the order in which they were inserted. Clearly the compute() method is the culprit here. Can you imagine clicking a button just to have the UI freeze up on you for that long?

Background Threads

How can we make this better? DispatchQueue.global() to the rescue! This is where background threads come in. Referring to the GCD architecture diagram above, we can see that anything that is not the Main Thread is a background thread in iOS. They can run alongside the Main Thread, leaving it fully unoccupied and ready to handle other UI events like scrolling, responding to user events, animating etc. Let's make a small change to our button click handler above:

class ViewController: UIViewController {
    @IBAction func handleTap(_ sender: Any) {
        DispatchQueue.global(qos: .userInitiated).async { [unowned self] in
            self.compute()
        }
    }

    private func compute() -> Void {
        // Pretending to post-process a large image.
        var counter = 0
        for _ in 0..<9999999 {
            counter += 1
        }
    }
}

Unless specified otherwise, a snippet of code will usually default to executing on the Main Queue, so in order to force it to execute on a different thread, we'll wrap our compute call inside of an asynchronous closure that gets submitted to the DispatchQueue.global queue. Keep in mind that we aren't really managing threads here. We're submitting tasks (in the form of closures or blocks) to the desired queue with the assumption that they are guaranteed to execute at some point in time. The queue decides which thread to allocate the task to, and it does all the hard work of assessing system requirements and managing the actual threads. This is the magic of Grand Central Dispatch. As the old adage goes, you can't improve what you can't measure. So we measured our truly terrible button click handler, and now that we've improved it, we'll measure it once again to get some concrete data with regard to performance.

Looking at the profiler again, it's quite clear to us that this is a huge improvement. The task takes an identical amount of time, but this time, it's happening in the background without locking up the UI. Even though our app is doing the same amount of work, the perceived performance is much better because the user will be free to do other things while the app is processing.

You may have noticed that we accessed a global queue of .userInitiated priority. This is an attribute we can use to give our tasks a sense of urgency. If we run the same task on a global queue with a qos attribute of .background, iOS will treat it as a low-priority utility task, and thus allocate fewer resources to execute it. So, while we don't have control over when our tasks get executed, we do have control over their priority.
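
As a quick sketch of how that plays out in code (busyWork() is just a stand-in workload), the only thing that changes between the two dispatches below is the QoS hint:

// The QoS class tells GCD how much CPU time and energy the work deserves.
func busyWork() { (0..<5_000_000).forEach { _ in } }

// The user is actively waiting on this result (e.g. opening a tapped document).
DispatchQueue.global(qos: .userInitiated).async {
    busyWork()
}

// Nobody is waiting on this (e.g. prefetching or cleanup), so iOS is free to
// give it fewer resources or defer it while the device is busy.
DispatchQueue.global(qos: .background).async {
    busyWork()
}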

A Note on Main Thread vs. Main Queue

You might be wondering why the Profiler shows "Main Thread" and why we're referring to it as the "Main Queue". If you refer back to the GCD architecture we described above, the Main Queue is solely responsible for managing the Main Thread. The Dispatch Queues section in the Concurrency Programming Guide says that "the main dispatch queue is a globally available serial queue that executes tasks on the application’s main thread. Because it runs on your application’s main thread, the main queue is often used as a key synchronization point for an application."

The terms "execute on the Main Thread" and "execute on the Main Queue" can be used interchangeably.
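
To make that equivalence concrete, here's a tiny sketch: dispatching to a global queue puts us on one of GCD's pool threads, and dispatching back to DispatchQueue.main lands us on the main thread, where UIKit work is safe.

DispatchQueue.global(qos: .utility).async {
    print(Thread.isMainThread)   // false: we're on one of GCD's pool threads

    DispatchQueue.main.async {
        print(Thread.isMainThread)   // true: back on the main thread
        // Safe to touch UIKit from here, e.g. self.imageView.image = ...
    }
}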


Concurrent Queues

So far, our tasks have been executed exclusively in a serial manner. DispatchQueue.main is by default a serial queue, and DispatchQueue.global gives you four concurrent dispatch queues depending on the priority parameter you pass in.

Let's say we want to take five images, and have our app process them all in parallel on background threads. How would we go about doing that? We can spin up a custom concurrent queue with an identifier of our choosing, and allocate those tasks there. All that's required is the .concurrent attribute during the construction of the queue.

class ViewController: UIViewController {
    let queue = DispatchQueue(label: "com.app.concurrentQueue", attributes: .concurrent)
    let images: [UIImage] = [UIImage].init(repeating: UIImage(), count: 5)

    @IBAction func handleTap(_ sender: Any) {
        for img in images {
            queue.async { [unowned self] in
                self.compute(img)
            }
        }
    }

    private func compute(_ img: UIImage) -> Void {
        // Pretending to post-process a large image.
        var counter = 0
        for _ in 0..<9999999 {
            counter += 1
        }
    }
}

Running that through the profiler, we can see that the app is now spinning up 5 discrete threads to parallelize a for-loop.
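
If we'd rather not manage a custom queue for a fixed-size loop like this, GCD also offers DispatchQueue.concurrentPerform, which spreads a known number of iterations across the available cores and blocks until they all finish. As a rough sketch, the body of handleTap above could instead look like this, reusing the images array and compute(_:) from the class we just wrote:

// concurrentPerform blocks the calling thread until every iteration is done,
// so we kick it off from a background queue rather than the main thread.
DispatchQueue.global(qos: .userInitiated).async {
    DispatchQueue.concurrentPerform(iterations: self.images.count) { index in
        self.compute(self.images[index])
    }
    print("All \(self.images.count) images processed")
}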

Parallelization of N Tasks

So far, we've looked at pushing computationally expensive task(s) onto background threads without clogging up the UI thread. But what about executing parallel tasks with some restrictions? How can Spotify download multiple songs in parallel, while limiting the maximum number of simultaneous downloads to three? We can go about this in a few ways, but this is a good time to explore another important construct in multithreaded programming: semaphores.

Semaphores are signaling mechanisms. They are commonly used to control access to a shared resource. Imagine a scenario where a thread can lock access to a certain section of the code while it executes it, and unlock it after it's done to let other threads execute said section of the code. You would see this type of behavior in database writes and reads, for example. What if you want only one thread writing to a database while preventing any reads during that time? This is a common thread-safety concern, typically addressed with a readers-writer lock. Semaphores can be used to control concurrency in our app by limiting the number of threads that can access a resource to n at a time.

let kMaxConcurrent = 3 // Or 1 if you want strictly ordered downloads!
let semaphore = DispatchSemaphore(value: kMaxConcurrent)
let downloadQueue = DispatchQueue(label: "com.app.downloadQueue", attributes: .concurrent)

class ViewController: UIViewController {
    // Assumes a table view wired up in Interface Builder to show download progress.
    @IBOutlet weak var tableView: UITableView!

    @IBAction func handleTap(_ sender: Any) {
        for i in 0..<15 {
            downloadQueue.async { [unowned self] in
                // Lock shared resource access
                semaphore.wait()

                // Expensive task
                self.download(i + 1)

                // Update the UI on the main thread, always!
                DispatchQueue.main.async {
                    self.tableView.reloadData()

                    // Release the lock
                    semaphore.signal()
                }
            }
        }
    }

    func download(_ songId: Int) -> Void {
        var counter = 0

        // Simulate semi-random download times.
        for _ in 0..<Int.random(in: 999999...10000000) {
            counter += songId
        }
    }
}

Notice how we've effectively restricted our download system to at most k simultaneous downloads. The moment one download finishes and its thread signals the semaphore, the semaphore's count goes back up, allowing the managing queue to spawn another thread and start downloading another song. You can apply a similar pattern to database transactions when dealing with concurrent reads and writes.

Semaphores usually aren't necessary for code like the example above, but they become more powerful when you need to enforce synchronous behavior while consuming an asynchronous API. The code above would work just as well with a custom NSOperationQueue with a maxConcurrentOperationCount, but it's a worthwhile tangent regardless; a rough sketch of that alternative follows.
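
Here's roughly what that alternative might look like as a sketch, assuming we're inside the same view controller and reusing the download(_:) helper and tableView outlet from the semaphore example; the queue itself enforces the limit of three concurrent downloads.

let operationQueue = OperationQueue()
operationQueue.maxConcurrentOperationCount = 3   // at most three downloads at once

for songId in 1...15 {
    operationQueue.addOperation { [unowned self] in
        self.download(songId)
        DispatchQueue.main.async {
            self.tableView.reloadData()   // UI updates stay on the main thread
        }
    }
}

The queue does the throttling that the semaphore did by hand, at the cost of a little more object setup.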


Finer Control with OperationQueue

GCD is great when you want to dispatch one-off tasks or closures into a queue in a 'set-it-and-forget-it' fashion, and it provides a very lightweight way of doing so. But what if we want to create a repeatable, structured, long-running task that produces associated state or data? And what if we want to model this chain of operations such that they can be cancelled, suspended and tracked, while still working with a closure-friendly API? Imagine an operation like this:

This would be quite cumbersome to achieve with GCD. We want a more modular way of defining a group of tasks while maintaining readability and also exposing a greater amount of control. In this case, we can use Operation objects and queue them onto an OperationQueue, which is a high-level wrapper around DispatchQueue. Let's look at some of the benefits of using these abstractions and what they offer in comparison to the lower-level GCD API:

  • You may want to create dependencies between tasks, and while you could do this via GCD, you're better off defining them concretely as Operation objects, or units of work, and pushing them onto your own queue. This would allow for maximum reusability since you may use the same pattern elsewhere in an application.
  • The Operation and OperationQueue classes have a number of properties that can be observed, using KVO (Key Value Observing). This is another important benefit if you want to monitor the state of an operation or operation queue.
  • Operations can be paused, resumed, and cancelled. Once you dispatch a task using Grand Central Dispatch, you no longer have control or insight into the execution of that task. The Operation API is more flexible in that respect, giving the developer control over the operation's life cycle.
  • OperationQueue allows you to specify the maximum number of queued operations that can run simultaneously, giving you a finer degree of control over the concurrency aspects.

The usage of Operation and OperationQueue could fill an entire blog post, but let's look at a quick example of what modeling dependencies looks like. (GCD can also create dependencies, but you're better off dividing up large tasks into a series of composable sub-tasks.) In order to create a chain of operations that depend on one another, we could do something like this:

class ViewController: UIViewController {
    var queue = OperationQueue()
    var rawImage: UIImage? = nil
    let imageUrl = URL(string: "https://example.com/portrait.jpg")!
    @IBOutlet weak var imageView: UIImageView!

    @IBAction func handleTap(_ sender: Any) {
        // Download the image on a background operation.
        let downloadOperation = BlockOperation {
            self.rawImage = Downloader.downloadImageWithURL(url: self.imageUrl)
        }

        // Filter the downloaded image, then hand the result to the UI.
        let filterOperation = BlockOperation {
            let filteredImage = ImgProcessor.addGaussianBlur(self.rawImage)
            // UI updates still belong on the main queue.
            OperationQueue.main.addOperation {
                self.imageView.image = filteredImage
            }
        }

        // filterOperation won't start until downloadOperation has finished.
        filterOperation.addDependency(downloadOperation)

        [downloadOperation, filterOperation].forEach {
            queue.addOperation($0)
        }
    }
}
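
Cancellation is another of the benefits listed above, and it's something a dispatched GCD block can't offer once submitted. A minimal sketch, where renderPage(_:) is a hypothetical unit of work and queue is the OperationQueue property from the example above:

let exportOperation = BlockOperation()
exportOperation.addExecutionBlock { [weak exportOperation] in
    for page in 0..<1_000 {
        // Bail out as soon as someone cancels us (or the operation is gone).
        if exportOperation?.isCancelled ?? true { return }
        self.renderPage(page)   // hypothetical unit of work
    }
}
queue.addOperation(exportOperation)

// Later, say when the user navigates away from the screen:
exportOperation.cancel()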

So why not opt for a higher level abstraction and avoid using GCD entirely? While GCD is ideal for inline asynchronous processing, Operation provides a more comprehensive, object-oriented model of computation for encapsulating all of the data around structured, repeatable tasks in an application. Developers should use the highest level of abstraction possible for any given problem, and for scheduling consistent, repeated work, that abstraction is Operation. Other times, it makes more sense to sprinkle in some GCD for one-off tasks or closures that we want to fire. We can mix both OperationQueue and GCD to get the best of both worlds.


The Cost of Concurrency

DispatchQueue and friends are meant to make it easier for the application developer to execute code concurrently. However, these technologies do not guarantee improvements to the efficiency or responsiveness in an application. It is up to you to use queues in a manner that is both effective and does not impose an undue burden on other resources. For example, it's totally viable to create 10,000 tasks and submit them to a queue, but doing so would allocate a nontrivial amount of memory and introduce a lot of overhead for the allocation and deallocation of operation blocks. This is the opposite of what you want! It's best to profile your app thoroughly to ensure that concurrency is enhancing your app's performance and not degrading it.

We've talked about how concurrency comes at a cost in terms of complexity and allocation of system resources, but introducing concurrency also brings a host of other risks like:

  • Deadlock: A situation where a thread locks a critical portion of the code and can halt the application's run loop entirely. In the context of GCD, be very careful with DispatchQueue.sync { } calls, as you can easily end up in situations where two synchronous operations get stuck waiting for each other (see the sketch after this list).
  • Priority Inversion: A condition where a lower priority task blocks a high priority task from executing, which effectively inverts their priorities. GCD allows for different levels of priority on its background queues, so this is quite easily a possibility.
  • Producer-Consumer Problem: A race condition where one thread is creating a data resource while another thread is accessing it. This is a synchronization problem, and can be solved using locks, semaphores, serial queues, or a barrier dispatch if you're using concurrent queues in GCD.
  • ...and many other sorts of locking and data-race conditions that are hard to debug! Thread safety is of the utmost concern when dealing with concurrency.
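
To make the first of those risks concrete, here's the classic GCD deadlock in miniature: calling sync on the queue you're already running on. This is a deliberately broken sketch, shown only so the failure mode is easy to recognize.

// Never ship this; it exists only to illustrate the deadlock described above.
DispatchQueue.main.async {
    // We're now running on the main queue...
    DispatchQueue.main.sync {
        // ...and sync won't return until the main queue is free, but the main
        // queue is still busy running this outer closure. The app hangs here.
        print("This line is never reached")
    }
}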

Parting Thoughts + Further Reading

If you've made it this far, I applaud you. Hopefully this article gives you a lay of the land when it comes to multithreading techniques on iOS, and how you can use some of them in your app. We didn't get to cover many of the lower-level constructs like locks, mutexes and how they help us achieve synchronization, nor did we get to dive into concrete examples of how concurrency can hurt your app. We'll save those for another day, but you can dig into some additional reading and videos if you're eager to dive deeper.





TrailBuddy: Using AI to Create a Predictive Trail Conditions App

Viget is full of outdoor enthusiasts and, of course, technologists. For this year's Pointless Weekend, we brought these passions together to build TrailBuddy. This app aims to solve that eternal question: Is my favorite trail dry so I can go hike/run/ride?

While getting muddy might rekindle fond childhood memories for some, exposing your gear to the elements isn’t great – it’s bad for your equipment and can cause long-term, and potentially expensive, damage to the trail.

There are some trail apps out there, but we wanted one that would focus on current conditions. Currently, our favorite trail apps, like mtbproject.com, trailrunproject.com, and hikingproject.com -- all owned by REI -- rely on user-reported conditions. While this can be effective, the reports are frequently unreliable, as condition reports can become outdated in just a few days.

Our goal was to solve this problem by building an app that brought together location, soil type, and weather history data to create on-demand condition predictions for any trail in the US.

We built an initial version of TrailBuddy by tapping into several readily-available APIs, then running the combined data through a machine learning algorithm. (Oh, and also by bringing together a bunch of smart and motivated people and combining them with pizza and some of the magic that is our Pointless Weekends. We'll share the other Pointless Project, Scurry, with you soon.)

The quest for data.

We knew from the start this app would require data from a number of sources. As previously mentioned, we used REI’s APIs (i.e. https://www.hikingproject.com/data) as the source for basic trail information. We used each trail’s latitude and longitude coordinates, as well as its elevation, to query weather and soil type. We also found data points such as a trail’s total distance to be relevant to our app users and decided to include that on the front end, too. Since we wanted to go beyond relying solely on user-reported metrics, which is how REI’s current MTB project works, we came up with a list of factors that could affect the trail for that day.

First on that list was weather.

We not only considered the impacts of the current forecast, but we also looked at the previous day’s forecast. For example, it’s safe to assume that if it’s currently raining or had been raining over the last several days, it would likely lead to muddy and unfavorable conditions for that trail. We utilized the DarkSky API (https://darksky.net/dev) to get the weather forecasts for that day, as well as the records for previous days. This included expected information, like temperature and precipitation chance. It also included some interesting data points that we realized may be factors, like precipitation intensity, cloud cover, and UV index. 

But weather alone can’t predict how muddy or dry a trail will be. To determine that for sure, we also wanted to use soil data to help predict how well a trail’s unique soil composition recovers after precipitation. Similar amounts of rain on trails of very different soil types could lead to vastly different trail conditions. A more clay-based soil would hold water much longer, and therefore be much more unfavorable, than loamy soil. Finding a reliable source for soil type and soil drainage proved incredibly difficult. After many hours, we finally found a source through the USDA that we could use. As a side note—the USDA keeps track of lots of data points on soil information that’s actually pretty interesting! We can’t say we’re soil experts but, we felt like we got pretty close.

We used Whimsical to build our initial wireframes.

Putting our design hats on.

From the very first pitch for this app, TrailBuddy’s main differentiator from peer trail resources was its ability to surface real-time information reliably and simply. However complicated the technology needed to collect and interpret that information, the front-end app design needed to be clean and unencumbered.

We thought about how users would naturally look for information when setting out to find a trail and what factors they’d think about when doing so. We posed questions like:

  • How easy or difficult of a trail are they looking for?
  • How long is this trail?
  • What does the trail look like?
  • How far away is the trail in relation to my location?
  • What activity do I need a trail for?
  • Is this a trail I’d want to come back to in the future?

By putting ourselves in our users’ shoes we quickly identified key features TrailBuddy needed to have to be relevant and useful. First, we needed filtering, so users could filter between difficulty and distance to narrow down their results to fit the activity level. Next, we needed a way to look up trails by activity type—mountain biking, hiking, and running are all types of activities REI’s MTB API tracks already so those made sense as a starting point. And lastly, we needed a way for the app to find trails based on your location; or at the very least the ability to find a trail within a certain distance of your current location.

We used Figma to design, prototype, and gather feedback on TrailBuddy.

Using machine learning to predict trail conditions.

As stated earlier, none of us are actual soil or data scientists. So, in order to achieve the real-time conditions reporting TrailBuddy promised, we decided to leverage machine learning to make predictions for us. Digging into the utility of machine learning was a first for all of us on this team. Luckily, there was an excellent tutorial that laid out the basics of building an ML model in Python. Provided a CSV file with inputs in the left columns and the desired output on the right, the script we generated was able to test out multiple different model strategies and output the effectiveness of each in predicting results, shown below.

We assembled all of the historical weather and soil data we could find for a given latitude/longitude coordinate, compiled a 1000 * 100 sized CSV, ran it through the Python evaluator, and found that the CART and SVM models consistently outranked the others in terms of predicting trail status. In other words, we found a working model through which to run our data and get (hopefully) reliable predictions. The next step was to figure out which data fields were actually critical in predicting the trail status. The more we could refine our data set, the faster and smarter our predictive model could become.

We pulled in some Ruby code to take the original (and quite massive) CSV, and output smaller versions to test with. Now again, we’re no data scientists here but, we were able to cull out a good majority of the data and still get a model that performed at 95% accuracy.

With our trained model in hand, we could serialize it into a model.pkl file (pkl stands for “pickle”, as in we’ve “pickled” the model), move that file into our Rails app along with a Python script to deserialize it, pass in a dynamic set of data, and generate real-time predictions. At the end of the day, our model has a propensity to predict fantastic trail conditions (about 99% of the time in fact…). Just one of those optimistic machine learning models, we guess.

Where we go from here.

It was clear that after two days, our team still wanted to do more. As a first refinement, we’d love to work more with our data set and ML model. Something that was quite surprising during the weekend was that we found we could remove all but two days worth of weather data, and all of the soil data we worked so hard to dig up, and still hit 95% accuracy. Which … doesn’t make a ton of sense. Perhaps the data we chose to predict trail conditions just isn’t a great empirical predictor of trail status. While these are questions too big to solve in just a single weekend, we'd love to spend more time digging into this in a future iteration.



  • News & Culture


A Viget Exploration: How Tech Can Help in a Pandemic

Viget Explorations have always been the result of our shared curiosities. They’re usually a spontaneous outcome of team downtime and a shared problem we’ve experienced. We use our Explorations to pursue our diverse interests and contribute to the conversations about building a better digital world.

As the COVID-19 crisis emerged, we were certainly experiencing a shared problem. As a way to keep busy and manage our anxieties, a small team came together to dive into how technology has helped, and, unfortunately, hindered the community response to the current pandemic.

We started by researching the challenges we saw: information overload, a lack of clarity, individual responsibility, and change. Then we brainstormed possible technical solutions that could further improve how communities respond to a pandemic. Click here to see our Exploration on some possible ways to take the panic out of pandemics.

While we aren’t currently pursuing the solutions outlined in the Exploration, we’d love to hear what you think about these approaches, as well as any ideas you have for how technology can help address the outlined challenges.

Please note, this Exploration doesn’t provide medical information. Visit the Centers for Disease Control and Prevention’s website for current information on COVID-19, its symptoms, and treatments.

At Viget, we’re adjusting to this crisis for the safety of our clients, our staff, and our communities. If you’d like to hear from Viget's co-founder, Brian Williams, you can read his article on our response to the situation.



  • News & Culture


Pursuing A Professional Certification In Scrum

Professional certifications have become increasingly popular in this age of career switchers and the freelance gig economy. A certification can be a useful way to advance your skill set quickly or make your resume stand out, which can be especially important for those trying to break into a new industry or attract business while self-employed. Whatever your reason may be for pursuing a professional certificate, there is one question only you can answer for yourself: is it worth it?

Finding first-hand experiences from professionals with similar career goals and passions was the most helpful research I used to answer that question for myself. So, here’s mine: why I decided to get Scrum certified, how I evaluated my options, and whether it was really worth it.

A shift in mindset

My background originates in brand strategy, where it’s typical for work to follow a predictable order, each step informing the next. This made linear techniques, like waterfall timelines, completing one phase of work in its entirety before moving onto the next, and documenting granular tasks weeks in advance, helpful and easy to implement. When I made the move to more digitally focused work, tasks followed a much looser set of ‘typical’ milestones. While the general outline remained the same (strategy, design, development, launch), there was a lot more overlap in how tasks informed each other, and they would keep informing and re-informing one another as an iterative workflow would encourage.

Trying to fit a very fluid process into my very stiff, linear approach to project planning didn’t work so well. I didn’t have the right strategies to manage risks in a productive way without feeling like the whole project was off track; in the habit of accounting for granular details all the time, I struggled to lean on others to help define what we should work on and when, and to be okay if that changed once, or twice, or three times. Everything I learned about the process of product development came from learning on the job and making a ton of mistakes—and I knew I wanted to get better.

Photo by Christin Hume on Unsplash

I was fortunate enough to work with a group of developers who were looking to make a change, too. Being ‘agile’-enthusiasts, this group of developers were desperately looking for ways to infuse our approach to product work with agile-minded principles (the broad definition of ‘agile’ comes from ‘The Agile Manifesto’, which has influenced frameworks for organizing people and information, often applied in product development). This not only applied to how I worked with them, but how they worked with each other, and the way we all onboarded clients to these new expectations. This was a huge eye opener to me. Soon enough, I started applying these agile strategies to my day-to-day— running stand-ups, setting up backlogs, and reorganizing the way I thought about work output. It’s from this experience that I decided it may be worth learning these principles more formally.

The choice to get certified

There is a lot of literature out there about agile methodologies and a lot to be learned from casual research. This benefitted me for a while until I started to work on more complicated projects, or projects with more ambitious feature requests. My decision to ultimately pursue a formal agile certification really came down to three things:

  1. An increased use of agile methods across my team. Within my day-to-day I would encounter more team members who were familiar with these tactics and wanted to use them to structure the projects they worked on.
  2. The need for a clear definition of what processes to follow. I needed to grasp a real understanding of how to implement agile processes and stay consistent with using them to be an effective champion of these principles.
  3. Being able to diversify my experience. Finding ways to differentiate my resume from others with similar experience would be an added benefit to getting a certification. If nothing else, it would demonstrate that I’m curious-minded and proactive about my career.

To achieve these things, I gravitated towards a more foundational education in a specific agile-methodology. This made Scrum the most logical choice given it’s the basis for many of the agile strategies out there and its dominance in the field.

Evaluating all the options

For Scrum education and certification, there are really two major players to consider.

  1. Scrum Alliance - Probably the most well known Scrum organization is Scrum Alliance. They are a highly recognizable organization that does a lot to further the broader understanding of Scrum as a practice.
  2. Scrum.org - Led by the original co-founder of Scrum, Ken Schwaber, Scrum.org is well-respected and touted for its authority in the industry.

Each has their own approach to teaching and awarding certifications as well as differences in price point and course style that are important to be aware of.

SCRUM ALLIANCE

Pros

  • Strong name recognition and leaders in the Scrum field
  • Offers both in-person and online courses
  • Hosts in-person events, webinars, and global conferences
  • Provides robust amounts of educational resources for its members
  • Has specialization tracks for folks looking to apply Scrum to their specific discipline
  • Members are required to keep their skills up to date by earning educational credits throughout the year to retain their certification
  • Consistent information across all course administrators, ensuring you'll be set up to succeed when taking your certification test

Cons

  • High cost creates a significant barrier to entry (we’re talking in the thousands of dollars here)
  • Courses are required to take the certification test
  • Certification expires after two years, requiring additional investment in time and/or money to retain credentials
  • Difficult to find sample course material ahead of committing to a course
  • Courses are several days long which may mean taking time away from a day job to complete them

SCRUM.ORG

Pros

  • Strong clout due to its founder, Ken Schwaber, who is the originator of Scrum
  • Offers in-person classes and self-paced options
  • Hosts in-person events and meetups around the world
  • Provides free resources and materials to the public, including practice tests
  • Has specialization tracks for folks looking to apply Scrum to their specific discipline
  • Minimum score on certification test required to pass; certification lasts for life
  • Lower cost for certification when compared to peers

Cons

  • Much lesser known to the general public, as compared to its counterpart
  • Less sophisticated educational resources (mostly confined to PDFs or online forums) making digesting the material challenging
  • Practice tests are slightly out of date making them less effective as a study tool
  • Self-paced education is not structured and therefore can’t ensure you’re learning everything you need to know for the test
  • Lack of an active and engaging community leaves something to be desired

Before coming to a decision, it was helpful to me to weigh these pros and cons against a set of criteria. Here’s a helpful scorecard I used to compare the two institutions.

Scrum Alliance Scrum.org
Affordability ⚪⚪⚪
Rigor⚪⚪⚪⚪⚪
Reputation⚪⚪⚪⚪⚪
Recognition⚪⚪⚪
Community⚪⚪⚪
Access⚪⚪⚪⚪⚪
Flexibility⚪⚪⚪
Specialization⚪⚪⚪⚪⚪⚪
Requirements⚪⚪⚪
Longevity⚪⚪⚪

The four areas that were most important to me were:

  • Affordability - I’d be self-funding this certificate so the investment of cost would need to be manageable.
  • Self-paced - Not having a lot of time to devote in one sitting, the ability to chip away at coursework was appealing to me.
  • Reputation - Having a certificate backed by a well-respected institution was important to me if I was going to put in the time to achieve this credential.
  • Access - Because I wanted to be a champion for this framework for others in my organization, having access to resources and materials would help me do that more effectively.

Ultimately, I decided upon a Professional Scrum Master certification from Scrum.org! The price and flexibility of learning course content were most important to me. I found a ton of free materials on Scrum.org that I could study myself and their practice tests gave me a good idea of how well I was progressing before I committed to the cost of actually taking the test. And, the pedigree of certification felt comparable to that of Scrum Alliance, especially considering that the founder of Scrum himself ran the organization.

Putting a certificate to good use

I don’t work in a formal Agile company, and not everyone I work with knows the ins and outs of Scrum. I didn’t use my certification to leverage a career change or new job title. So after all that time, money, and energy, was it worth it?

I think so. I feel like I use my certification every day and employ many of the principles of Scrum in my day-to-day management of projects and people.

  • Self-organizing teams are really important for fostering trust and collaboration among project members. This means leaning on each other’s past experiences and lessons learned to inform our own approach to work. It also means taking a step back as a project manager to recognize the strengths on your team and trust their lead.
  • Approaching things in bite size pieces is also a best practice I use every day. Even when there isn't a mandated sprint rhythm, breaking things down into effort level, goals, and requirements is an excellent way to approach work confidently and avoid getting too overwhelmed.
  • Retrospectives and stand ups are also absolute musts for Scrum practices, and these can be modified to work for companies and project teams of all shapes and sizes. Keeping a practice of collective communication and reflection will keep a team humming and provides a safe space to vent and improve.
Photo by Gautam Lakum on Unsplash

Parting advice

I think furthering your understanding of industry standards and keeping yourself open to new ways of working will always benefit you as a professional. Professional certifications are readily available and may be more relevant than ever.

If you’re on this path, good luck! And here are some things to consider:

  • Do your research – With so many educational institutions out there, you can definitely find the right one for you, with the level of rigor you’re looking for.
  • Look for company credits or incentives – some companies cover part or all of the cost for continuing education.
  • Get started ASAP – You don’t need a full certification to start implementing small tactics in your workflows. Implementing learnings gradually will help you determine if it’s really something you want to pursue more formally.





"What is deceptive, especially in the West, is our assumption that repetitive and mindless jobs are..."

What is deceptive, especially in the West, is our assumption that repetitive and mindless jobs are dehumanizing. On the other hand, the jobs that require us to use the abilities that are uniquely human, we assume to be humanizing. This is not necessarily true. The determining factor is not so much the nature of our jobs, but for whom they serve.

‘Burnout’ is a result of consuming yourself for something other than yourself. You could be burnt out for an abstract concept, ideal, or even nothing (predicament). You end up burning yourself as fuel for something or someone else. This is what feels dehumanizing. In repetitive physical jobs, you could burn out your body for something other than yourself. In creative jobs, you could burn out your soul. Either way, it would be dehumanizing. Completely mindless jobs and incessantly mindful jobs could both be harmful to us.



- Dsyke Suematsu from his white paper discussed at Why Ad People Burn Out.





Illustrator Tutorial: How to Create a Notification Bell Icon

In today’s tutorial, we’re going to take a quick look behind the process of creating a notification bell icon, and see how easy it is to do so using nothing more than a couple of basic geometric shapes and tools. So, assuming you already have the software up and running, let’s jump straight into it! […]

The post Illustrator Tutorial: How to Create a Notification Bell Icon appeared first on Bittbox.





Illustrator Tutorial: How to Create a Recycle Bin Notification Icon

Welcome back to another Illustrator based tutorial, in which we’re going to learn how to create a recycle bin notification icon, using nothing more than a couple of basic geometric shapes that we’re going to adjust here and there. So, assuming you already have the software running in the background, bring it up and let’s […]

The post Illustrator Tutorial: How to Create a Recycle Bin Notification Icon appeared first on Bittbox.





The State – Sort of – of HTML5 Audio

Scott Schiller discusses the high level of hype around HTML5 and CSS3. The two specs render “many years of feature hacks redundant by replacing them with native features,” he writes in an insightful blog post, citing CSS3’s border-radius, box-shadow, text-shadow and gradients, and HTML5’s <canvas>. Read the rest...





Code injection, error throwing

In a blog post, Opera Software Developer Relations team member Tiffany B. Brown looks at code injection, error throwing and handling, and mobile debugging. She notes that Opera Dragonfly and its remote debug features provide a way to debug mobile sites from the desktop. Brown mentions WebKit’s recently added remote debugging capabilities, folded into Google Chrome developer Read the rest...





Intel’s Parallel Extensions for JavaScript

Intel’s Parallel Extensions for JavaScript, code named River Trail, hooks into on-chip vector extensions to improve performance of Web applications. Details of Intel’s attempt to get on the JavaScript juggernaut emerged last month at its developer event. The prototype JavaScript extension offered by Intel is intended to allow JavaScript apps to take advantage of modern parallel Read the rest...





Upcoming: Google IO

At Google IO, June 27-29, the Android platform will be on display. Fresh from a recent legal win against Java steward Oracle, the Android crew will be able to tell you about what is new and what is upcoming in Android, how you can monetize Google apps, multiversioning and more. Much will Read the rest...





METAL INJECTION LIVECAST #535 - Liddle' Ditch

We kick things off with Rob talking about his favorite albums of the year. We then discuss the shocking news...

The post METAL INJECTION LIVECAST #535 - Liddle' Ditch appeared first on Metal Injection.



  • Metal Injection Livecast









METAL INJECTION LIVECAST #542 - Dull By Obb

We kick things off talking about the Motley Crue reunion. Darren shares a story from work. Rob talks about going...

The post METAL INJECTION LIVECAST #542 - Dull By Obb appeared first on Metal Injection.



  • Metal Injection Livecast


METAL INJECTION LIVECAST #543 - Gong Solo

We kicked things off with Rob recapping his experience at the Tool concert. We then discussed Dave Mustaine's recent interview...

The post METAL INJECTION LIVECAST #543 - Gong Solo appeared first on Metal Injection.



  • Metal Injection Livecast


METAL INJECTION LIVECAST #544 - 33% Drained

This week, we had a very special guest, our Livecastard of the Month, Eric, who actually signed up for our...

The post METAL INJECTION LIVECAST #544 - 33% Drained appeared first on Metal Injection.



  • Metal Injection Livecast



METAL INJECTION LIVECAST #546 - Grandma Smoothie

We kick things off talking about annoying holiday commercials. We discuss Christmas music this episode, and why Hanukkah lands on...

The post METAL INJECTION LIVECAST #546 - Grandma Smoothie appeared first on Metal Injection.



  • Metal Injection Livecast


METAL INJECTION LIVECAST #547 - Crab Rangoomba

We kick things off talking about how we spent the holiday break and previewing the bonus Patreon episode coming next...

The post METAL INJECTION LIVECAST #547 - Crab Rangoomba appeared first on Metal Injection.



  • Metal Injection Livecast



METAL INJECTION LIVECAST #549 - Loose Hot Dog

We kick things off with our New Year's resolutions. Noa explains her future vision board. We discuss climate change and...

The post METAL INJECTION LIVECAST #549 - Loose Hot Dog appeared first on Metal Injection.



  • Metal Injection Livecast


METAL INJECTION LIVECAST #550 - Don Docking

We kick things off discussing our Tad's Patreon episode, and our fast food preferences. We then discuss the sad status...

The post METAL INJECTION LIVECAST #550 - Don Docking appeared first on Metal Injection.



  • Metal Injection Livecast



METAL INJECTION LIVECAST #552 - Penis II Society

What's with all the good drummers dying? We kick things off discussing the sad news. Noa discussed locking herself out...

The post METAL INJECTION LIVECAST #552 - Penis II Society appeared first on Metal Injection.



  • Metal Injection Livecast


METAL INJECTION LIVECAST #553 - Full On Lip

What an eventful edition of the Metal Injection Livecast! We kick things off talking about the new Dave Mustaine memoir...

The post METAL INJECTION LIVECAST #553 - Full On Lip appeared first on Metal Injection.



  • Metal Injection Livecast


METAL INJECTION LIVECAST #554 - Rob's Nolita

We kick things off talking about the Rage Against the Machine reunion. We then discuss the summer touring season and...

The post METAL INJECTION LIVECAST #554 - Rob's Nolita appeared first on Metal Injection.



  • Metal Injection Livecast


Top Construction Projects in the World

Highlighting some of humanity’s biggest construction achievements is the 10 of the Top Construction Projects in the World informational infographic from 53 Quantum.

Human societies have always looked to build the biggest and best monuments to their ingenuity, resources and craftsmanship. From the Empire State Building to the Eiffel Tower, and the Great Pyramids of Giza to the Great Wall of China, we’ve always looked to build the biggest and best. Mind-blowing historic building projects now litter travel-minded folks’ bucket lists everywhere, and they make up a huge part of the historic tapestry.

But it’s 2019 now. We need to talk about the Burj Khalifas and Libyan Irrigation projects of the world. For the first time in history, there are societies wealthy, powerful, and most importantly, cohesive enough to build spectacularly huge projects. By working together with our geographical neighbours, we’ve been able to give the world some truly unbelievable, innovative projects. The International Space Station leaps to mind as a collaboration between nations.

With so many stunning projects continually on the go around the planet, we at 53 Quantum thought it’d be a good idea to put together a quick infographic of ten of the biggest and best. Of course, there’ll be examples we missed and things we left off, because how do you compare a massive railway restoration and modernisation with a super skyscraper project? Apples and oranges!

Nevertheless, here are ten of the world’s most impressive construction projects.

Although I like the infographic, this design falls short in a few areas.

  • Where’s the Data Visualization? The biggest missed opportunity in this design is that the data isn’t visualized. If you want readers to understand how big or how expensive these projects are, you need a visualization that puts that into context!

  • Nice illustrations, but that isn’t enough. Most of the impact of the size and scale of these projects is lost because it’s buried in the text.

  • A map of the locations would be nice.

  • Is there any logic to the order of these projects in the infographic? Readers will look through the list from top-to-bottom in order. They’re not sorted by cost or on a timeline.

Thanks to David for submitting the link!





Punxsutawney Phil vs. the U.S. National Weather Service

Punxsutawney Phil’s predictions for the coming of Spring on Groundhog Day haven’t been that accurate, and the U.S. National Weather Service is here to prove it with an infographic!

Every February 2, a crowd of thousands gathers at Gobbler’s Knob in Punxsutawney, Pennsylvania, to await a special forecast from a groundhog named Phil. If the 20-pound groundhog emerges and sees his shadow, the United States can expect six more weeks of winter weather according to legend. But, if Phil doesn’t see his shadow, we can expect warmer temperatures and the arrival of an early spring.

Even though he’s been forecasting since 1887, Phil’s track record for the entire country isn’t perfect. To determine just how accurate he is, we’ve compared U.S. national temperatures with Phil’s forecasts. On average, Phil has gotten it right 40% of the time over the past 10 years.

Using real data wins!

For what it’s worth, Phil didn’t see his shadow in 2020, and predicted that Spring would be coming soon!





Santa Fe National Forest Spared From Fracking

WildEarth Guardians Press Release Federal Court Overturns Leasing of Lands to Oil and Gas Industry SANTA FE, NM — In a victory for New Mexico’s air, climate, and water, the U.S. District Court for the District of New Mexico today … Continue reading





Urging Multi-Pronged Effort to Halt Climate Crisis, Scientists Say Protecting World’s Forests as Vital as Cutting Emissions

By Julia  Conley Common Dreams “Our message as scientists is simple: Our planet’s future climate is inextricably tied to the future of its forest.” With a new statement rejecting the notion that drastically curbing emissions alone is enough to curb … Continue reading





Scientists Warn Crashing Insect Population Puts ‘Planet’s Ecosystems and Survival of Mankind’ at Risk

By Jon Queally Common Dreams “This is the stuff that worries me most. We don’t know what we’re doing, not trying to stop it, [and] with big consequences we don’t really understand.” The first global scientific review of its kind … Continue reading





‘Coming Mass Extinction’ Caused by Human Destruction Could Wipe Out 1 Million Species, Warns UN Draft Report

By Jessica Corbett Common Dreams Far-reaching global assessment details how humanity is undermining the very foundations of the natural world     On the heels of an Earth Day that featured calls for radical action to address the current “age … Continue reading






Leonardo DiCaprio Premieres "Before the Flood" Climate Change Documentary

Environmental activist and Academy Award®-winning actor Leonardo DiCaprio and Academy Award®-winning filmmaker Fisher Stevens premiere their documentary film, Before the Flood, a compelling account of the powerful changes occurring on our planet due to climate change. Before the Flood will … Continue reading





Climate Change Driving Population Shifts to Urban Areas

By Kristie Auman-Bauer Penn State News Climate change is causing glaciers to shrink, temperatures to rise, and shifts in human migration in parts of the world, according to a Penn State researcher. Brian Thiede, assistant professor of rural sociology, along … Continue reading





Why Reducing Our Carbon Emissions Matters

By The Conversation While it’s true that Earth’s temperatures and carbon dioxide levels have always fluctuated, the reality is that humans’ greenhouse emissions since the industrial revolution have put us in uncharted territory. Written by Dr Benjamin Henley and Assoc … Continue reading






Is UX Certification worth it?

BCS launched their Foundation Certificate in User Experience 3 years ago. We thought this was an opportune time to review its effectiveness. We contacted candidates who had taken (and passed) the certificate through Userfocus and asked, 'What impact has attaining the BCS Foundation Certificate in UX had on your job?' Ten key themes emerged.





Talking to computers (part 1): Why is speech recognition so difficult?

Although the performance of today's speech recognition systems is impressive, the experience for many is still one of errors, corrections, frustration and abandoning speech in favour of alternative interaction methods. We take a closer look at speech and find out why speech recognition is so difficult.





Transitioning from academic research to UX research

Doing UX research in a university is very different to doing UX research in a business setting. If you're an academic making the leap, what are the main differences you need to keep in mind?





Usability task scenarios: The beating heart of a usability test

Usability tests are unique. We ask people to do real tasks with the system and watch. As the person completes the task, we watch their behaviour and listen to their stream-of-consciousness narrative. But what makes a good usability task scenario?