urr Maharashtra accident: Rail safety watchdog calls for 'abundant caution' to avoid recurrence of such incidents By economictimes.indiatimes.com Published On :: 2020-05-08T18:46:51+05:30 The Commission of Railway Safety, which investigates all serious rail accidents and clears all rail projects, also said that now that such an incident of migrants or other persons walking along the tracks, leading to deaths, has come to notice, all-out efforts must be made to prevent a recurrence of such incidents in future. Full Article
urr Nicolás Maduro Moros and 14 Current and Former Venezuelan Officials Charged with Narco-Terrorism, Corruption, Drug Trafficking and Other Criminal Charges By www.justice.gov Published On :: Thu, 26 Mar 2020 00:00:00 -0400 Former President of Venezuela Nicolás Maduro Moros, Venezuela’s vice president for the economy, Venezuela’s Minister of Defense, and Venezuela’s Chief Supreme Court Justice are among those charged in New York City; Washington, DC; and Miami, along with current and former Venezuelan government officials as well as two Fuerzas Armadas Revolucionarias de Colombia (FARC) leaders, announced U.S. Attorney General William P. Barr, U.S. Attorney Geoffrey S. Berman of the Southern District of New York, U.S. Attorney Ariana Fajardo Orshan of the Southern District of Florida, Assistant Attorney General Brian A. Benczkowski of the Justice Department’s Criminal Division, Acting Administrator Uttam Dhillon of the U.S. Drug Enforcement Administration (DEA) and Acting Executive Associate Director Alysa D. Erichs of U.S. Immigration and Customs Enforcement’s Homeland Security Investigations (HSI). Full Article
urr Jonathan Murray Discusses Monday's Market Correction By www.wbal.com Published On :: 2018-02-05T16:00:00 Take a listen to Jonathan Murray on WBAL News Now from Monday. Full Article
urr Jonathan Murray Closing Bell Report: February 6th By www.wbal.com Published On :: 2018-02-06T15:00:00 Financial Analyst Jonathan Murray describes what happened on what was a roller coaster of a day on the stock market. Full Article
urr Investigators Conclude that No Abduction Occurred at Grand Canyon National Park By www.nps.gov Published On :: Sun, 19 Jul 2009 20:00:00 EST https://www.nps.gov/grca/learn/news/news-2009-07-20_abduction.htm Full Article
urr Fire crews are actively working to suppress the lightning-ignited Imperial Fire. Currently the fire is estimated to be three (3) acres in size and is located along the Cape Royal Road near Vista Encantada. By www.nps.gov Published On :: Wed, 18 Jul 2018 12:49:00 EST https://www.nps.gov/grca/learn/news/imperial-fire-being-suppressed-on-north-rim-of-grand-canyon-national-park-20180718.htm Full Article
urr Current Grand Canyon National Park Closures as of August 8, 2018 By www.nps.gov Published On :: Wed, 08 Aug 2018 16:26:00 EST This is a summary of current fire-related closures for Grand Canyon National Park. Today, new temporary trail closures were implemented that include the Nankoweap Trail, the Point Imperial Trail, and Fire Point on the North Rim. https://www.nps.gov/grca/learn/news/2018-08-08-current-grand-canyon-national-park-closures.htm Full Article
urr Preliminary Findings Indicate No Current Uranium Ore Exposure at Grand Canyon By www.nps.gov Published On :: Tue, 12 Mar 2019 10:05:00 EST Preliminary findings of an interagency safety review conducted last week at Grand Canyon National Park indicate no current exposure concerns for park employees and visitors from uranium ore samples previously stored in buckets at the park's Museum Collection building. https://www.nps.gov/grca/learn/news/preliminary-findings-indicate-no-current-uranium-ore-exposure.htm Full Article
urr The rules on having a bonfire in your garden as Surrey councils warn against them By www.getsurrey.co.uk Published On :: Sun, 03 May 2020 04:30:00 GMT While it is not illegal to have a bonfire, some Surrey councils are urging residents not to light them Full Article What's On
urr I ordered Five Guys takeaway - here's why I won't again in a hurry By www.getsurrey.co.uk Published On :: Sun, 03 May 2020 05:30:00 GMT The popular burger chain has reopened its Guildford branch but is it worth ordering for delivery? Full Article What's On
urr Two maternity hubs open in Surrey so women can have the same midwife through antenatal care and birth By www.getsurrey.co.uk Published On :: Wed, 06 May 2020 17:05:28 GMT There are two new sites, one in Cranleigh and the other in Farnham Full Article What's On
urr Inside Their Hidden World: Tracking the Elusive Marbled Murrelet By www.fs.fed.us Published On :: Tue., 01 Feb 2019 12:00:00 PST The marbled murrelet (Brachyramphus marmoratus) is a threatened coastal bird that feeds on fish and nests in old-growth forests. In northwest Washington, murrelet populations are declining despite protections provided by the Northwest Forest Plan. Full Article
urr Principal short-term findings of the National Fire and Fire Surrogate study. By www.fs.fed.us Published On :: Wed., 04 Apr 2012 12:40:00 PST Principal findings of the National Fire and Fire Surrogate (FFS) study are presented in an annotated bibliography and summarized in tabular form by site, discipline (ecosystem component), treatment type, and major theme. Composed of 12 sites, the FFS is a comprehensive multidisciplinary experiment designed to evaluate the costs and ecological consequences of alternative fuel reduction treatments in seasonally dry forests of the United States. The FFS has a common experimental design across the 12-site network, with each site a fully replicated experiment that compares four treatments: prescribed fire, mechanical treatments, mechanical + prescribed fire, and an unmanipulated control. We measured treatment cost and variables within several components of the ecosystem, including vegetation, the fuel bed, soils, bark beetles, tree diseases, and wildlife in the same 10-ha experimental units. This design allowed us to assemble a fairly comprehensive picture of ecosystem response to treatment at the site scale, and to compare treatment response across a wide variety of conditions. Full Article
urr Characteristics of remnant old-growth forests in the northern Coast Range of Oregon and comparison to surrounding landscapes. By www.fs.fed.us Published On :: Thu, 25 Jun 2008 08:00:00 PST Old-growth forests provide unique habitat features and landscape functions compared to younger stands. The goals of many forest management plans in the Pacific Northwest include increasing the area of late-successional and old-growth forests. Full Article
urr Stereo photo series for quantifying natural fuels. Volume XII: Post-hurricane fuels in forests of the Southeast United States. By www.fs.fed.us Published On :: Thu, 05 Aug 2010 15:21:00 PST Two series of single and stereo photographs display a range of natural conditions and fuel loadings in post-hurricane forests in the southeastern United States. Each group of photos includes inventory information summarizing vegetation composition, structure and loading, woody material loading and density by size class, forest floor loading, and various site characteristics. The natural fuels photo series is designed to help land managers appraise fuel and vegetation conditions in natural settings. Full Article
urr A vertical slide with current page override. By www.cssplay.co.uk Published On :: 2009-10-29 A vertical sliding menu with current page styling and the ability to override the current page style when hovering other items. Full Article
urr A dropdown menu with current page override. By www.cssplay.co.uk Published On :: 2009-11-02 A dropdown menu with current page styling and the ability to override the current page style when hovering other items. Full Article
urr Dropline menu with current page override By www.cssplay.co.uk Published On :: 2010-03-30 A CSS only dropline menu with current page selection and override when hovering other tabs. Full Article
urr When a Red Arrows flypast to commemorate VE Day will fly over part of Surrey By www.getsurrey.co.uk Published On :: Fri, 08 May 2020 08:54:07 GMT SurreyLive has detailed the times and locations the aerobatics team is expected in the county Full Article Home
urr The parts of Surrey, Hampshire and Sussex that could see Spitfire flypast commemorating VE Day By www.getsurrey.co.uk Published On :: Fri, 08 May 2020 09:48:04 GMT The Spitfires will be flying over 11 locations across the three counties Full Article Home
urr Number of coronavirus deaths at Surrey hospital trusts rises to 980 By www.getsurrey.co.uk Published On :: Fri, 08 May 2020 14:22:01 GMT The latest figures have been announced by NHS England Full Article Home
urr Surrey Police issue statement after armed officers and helicopter called to Guildford in early hours By www.getsurrey.co.uk Published On :: Fri, 08 May 2020 14:58:27 GMT Armed officers and a police helicopter were in Guildford during the early hours Full Article Home
urr Number of coronavirus deaths at Surrey hospital trusts rises to 983 By www.getsurrey.co.uk Published On :: Sat, 09 May 2020 14:03:27 GMT The latest NHS figures show a small increase in recorded deaths Full Article Home
urr Northwest Forest Plan-The First 10 Years (1994-2003): Status and Trends of Populations and Nesting Habitat For The Marbled Murrelet By www.fs.fed.us Published On :: Thu, 08 Jun 2006 14:00:36 PST The Northwest Forest Plan (the Plan) is a large-scale ecosystem management plan for federal land in the Pacific Northwest. Marbled murrelet (Brachyramphus marmoratus) populations and habitat were monitored to evaluate effectiveness of the Plan. The chapters in this volume summarize information on marbled murrelet ecology and present the monitoring results for marbled murrelets over the first 10 years of the Plan, 1994 to 2003. Full Article
urr Regional population monitoring of the marbled murrelet: field and analytical methods By www.fs.fed.us Published On :: Thu, 16 Aug 2007 09:00:00 PST The marbled murrelet (Brachyramphus marmoratus) ranges from Alaska to California and is listed under the Endangered Species Act as a threatened species in Washington, Oregon, and California. Marbled murrelet recovery depends, in large part, on conservation and restoration of breeding habitat on federally managed lands. A major objective of the Northwest Forest Plan (the Plan) is to conserve and restore nesting habitat that will sustain a viable marbled murrelet population. Under the Plan, monitoring is an essential component and is designed to help managers understand the degree to which the Plan is meeting this objective. This report describes methods used to assess the status and trend of marbled murrelet populations under the Plan. Full Article
urr Making fire and fire surrogate science available: a summary of regional workshops with clients By www.fs.fed.us Published On :: Thu, 16 Aug 2007 09:45:00 PST Operational-scale experiments that evaluate the consequences of fire and mechanical "surrogates" for natural disturbance events are essential to better understand strategies for reducing the incidence and severity of wildfire. The national Fire and Fire Surrogate (FFS) study was initiated in 1999 to establish an integrated network of long-term studies designed to evaluate the consequences of using fire and fire surrogate treatments for fuel reduction and forest restoration. Beginning in September 2005, four regional workshops were conducted with selected clients to identify effective and efficient means of communicating FFS study findings to users. We used participatory evaluation to design the workshops, collect responses to focused questions and impressions, and summarize the results. We asked four overarching questions: (1) Who needs fuel reduction information? (2) What information do they need? (3) Why do they need it? (4) How can it best be delivered to them? Participants identified key users of FFS science and technology, specific pieces of information that users most desired, and how this information might be applied to resolve fuel reduction and restoration issues. They offered recommendations for improving overall science delivery and specific ideas for improving delivery of FFS study results and information. User groups identified by workshop participants and recommendations for science delivery are then combined in a matrix to form the foundation of a strategic plan for conducting science delivery of FFS study results and information. These potential users, their information needs, and preferred science delivery processes likely have wide applicability to other fire science research. Full Article
urr Dry forests of the Northeastern Cascades Fire and Fire Surrogate Project site, Mission Creek, Okanogan-Wenatchee National Forest By www.fs.fed.us Published On :: Wed, 08 Feb 2009 09:10:00 PST The Fire and Fire Surrogate (FFS) project is a large long-term metastudy established to assess the effectiveness and ecological impacts of burning and fire "surrogates" such as cuttings and mechanical fuel treatments that are used instead of fire, or in combination with fire, to restore dry forests. One of the 13 national FFS sites is the Northeastern Cascades site at Mission Creek on the Okanogan- Wenatchee National Forest. The study area includes 12 forested stands that encompass a representative range of dry forest conditions in the northeastern Cascade Range. We describe site histories and environmental settings, experimental design, field methods, and quantify the pretreatment composition and structure of vegetation, fuels, soils and soil biota, entomology and pathology, birds, and small mammals that occurred during the 2000 and 2001 field seasons. We also describe the implementation of thinning treatments completed during 2003 and spring burning treatments done during 2004 and 2006. Full Article
urr Wood energy for residential heating in Alaska: current conditions, attitudes, and expected use. By www.fs.fed.us Published On :: Tue, 20 Jul 2010 14:10:00 PST This study considered three aspects of residential wood energy use in Alaska: current conditions and fuel consumption, knowledge and attitudes, and future use and conditions. We found that heating oil was the primary fuel for home heating in southeast and interior Alaska, whereas natural gas was used most often in south-central Alaska (Anchorage). Firewood heating played a much more important role as a secondary (vs. primary) heating source in all regions of Alaska. In interior Alaska, there was a somewhat greater interest in the use of wood energy compared to other regions. Likewise, consumption of fossil fuels was considerably greater in interior Alaska. Full Article
urr Nontimber forest products in the United States: Montreal Process indicators as measures of current conditions and sustainability. By www.fs.fed.us Published On :: Wed, 20 Jul 2011 11:10:00 PST The United States, in partnership with 11 other countries, participates in the Montreal Process. Each country assesses national progress toward the sustainable management of forest resources by using a set of criteria and indicators agreed on by all member countries. Several indicators focus on nontimber forest products (NTFPs). In the United States, permit and contract data from the U.S. Forest Service and the Bureau of Land Management, in addition to several other data sources, were used as a benchmark to assess harvest, value, employment, exports and imports, per capita consumption, and subsistence uses for many NTFPs. The retail value of commercial harvests of NTFPs from U.S. forest lands is estimated at $1.4 billion annually. Nontimber forest products in the United States are important to many people throughout the country for personal, cultural, and commercial uses, providing food security, beauty, connection to culture and tradition, and income. Full Article
urr Northwest Forest Plan—the first 15 years (1994–2008): status and trend of nesting habitat for the marbled murrelet By www.fs.fed.us Published On :: Mon, 29 Aug 2011 13:39:00 PST The primary objectives of the effectiveness monitoring plan for the marbled murrelet (Brachyramphus marmoratus) include mapping baseline nesting habitat (at the start of the Northwest Forest Plan [the Plan]) and estimating changes in that habitat over time. Using vegetation data derived from satellite imagery, we modeled habitat suitability by using a maximum entropy model. We used Maxent software to compute habitat suitability scores from vegetation and physiographic attributes based on comparisons of conditions at 342 sites that were occupied by marbled murrelets (equal numbers of confirmed nest sites and likely nest sites) and average conditions over all forested lands in which the murrelets occurred. We estimated 3.8 million acres of higher suitability nesting habitat over all lands in the murrelet's range in Washington, Oregon, and California at the start of the Plan (1994/96). Most (89 percent) baseline habitat on federally administered lands occurred within reserved-land allocations. A substantial amount (36 percent) of baseline habitat occurred on nonfederal lands. Over all lands, we observed a net loss of about 7 percent of higher suitability potential nesting habitat from the baseline period to 2006/07. If we focus on losses and ignore gains, we estimate a loss of about 13 percent of the higher suitability habitat present at baseline, over this same period. Fire has been the major cause of loss of nesting habitat on federal lands since the Plan was implemented; timber harvest is the primary cause of loss on nonfederal lands. We also found that murrelet population size is strongly and positively correlated with amount of nesting habitat, suggesting that conservation of remaining nesting habitat and restoration of currently unsuitable habitat is key to murrelet recovery. Full Article
urr KCMP (89.3 The Current)/Minneapolis’ Jim McGuinn And Glassnote’s Nick Petropoulos Collaborate On Videos To Support Charity By www.allaccess.com Published On :: Fri, 01 May 2020 01:20:01 -0700 While sheltering at home in UPSTATE NEW YORK, GLASSNOTE Head Of Promotion NICK PETROPOULOS sent KCMP (89.3 THE CURRENT)/MINNEAPOLIS PD JIM MCGUINN a song of guitar riffs and an email about … Full Article
urr The Forgotten Coast and Hurricane Michael: A Sea Level Rise Story By blog.wfsu.org Published On :: Mon, 20 Apr 2020 02:53:52 +0000 I hate how impressive the wreckage looks. Some of these stumps are visually fascinating; extraterrestrial… Full Article Along the Coast Apalachicola River and Bay climate change forgotten coast Jeff Chanton Saint Vincent Island sea level rise Susan Cerulean US Route 98 WFSU News
urr Las Pozas: The Surrealistic Wonderland Hidden in the Middle Of The Jungle By feedproxy.google.com Published On :: Mon, 04 May 2020 11:40:43 +0000 After losing 20,000 orchids in an unseasonal frost, “extravagant” Englishman Edward James turned to his real love, surrealism, and... Full Article Architecture jungle Las Pozas mexico surreal wonderland
urr Leila Curran By feedproxy.google.com Published On :: Wed, 06 May 2020 20:23:28 PDT LEILA CURRAN, Decorah. Leila Curran, 81, of Decorah, died on Monday, May 4, 2020, at Gundersen Health System in La Crosse, Wis. A memorial service for Leila will be held at a later date. Details and date of the Celebration of Life will be posted at a later date. Leila was born on March 28, 1939, to Leslie and Marie (Smith) Moyle, at their Clayton County, Iowa, rural home. She graduated from Strawberry Point High School on May 23, 1956, and married Carlyle Curran on June 2, 1956. She was employed at United Way as a bookkeeper in Cedar Rapids, Iowa, and later worked as an office manager for Wathan Flight Service in Cedar Rapids. After retiring, she moved to Decorah. She enjoyed crafting and instructing classes on porcelain doll making, and started her own Leila's Dolls business. One of her favorite pastimes was her travels to Marquette, Iowa, to the "Boat." She entered Wellington Place in July of 2019. She is survived by two children, Carlotta Ellison of Decorah and Shannon (Sandy) Curran of Henderson, Nev.; grandchildren, Don Ellison of Charles City, Iowa, Clayton Ellison and Jessica Ellison of Decorah, Tim Curran of Henderson, Nev., Justin (Hayley) Curran of Riverside, Calif., and Rebecca (Dillon) Dyches of Salt Lake City, Utah; great-grandchildren, Hayden Dyches, Dylan Bakke, Breyer Ellison and Rhylen Ellison; an uncle, Glen Smith of Waterloo, Iowa; a sister, Linda Floyd of Cedar Rapids; and sisters-in-law, Mary Moyle of Dundee, Iowa, Mary Moyle of Arizona and Karen Meese of Edgewood, Iowa. She was preceded in death by her parents, Leslie and Marie Moyle; and four brothers, Leslie "Les", Lyle, Leland "Lee" and Lynn. Full Article Obituaries
urr I hate recurring payments…so why do I sell my software with ’em? By feedproxy.google.com Published On :: Wed, 20 Nov 2019 23:38:31 +0000 It’s simple—I don’t like recurring payments. And I don’t know about you, but with most recurring payments, I feel anxiety around this need to “get my money’s worth.” In other words, I often feel like I under-utilize the product and thus overpay to some extent. So why do I sell my software under a recurring […] Full Article Philosophy
urr Trump sets up states’ rights battle; most conservative governors surrender By www.thegazette.com Published On :: Tue, 5 May 2020 17:18:27 -0400

After more than a decade in the making, the Tea Party moment has finally arrived.

The movement originated in 2009 as a challenge to runaway taxes, spending and regulation. Organizers sought to restore the constitutional balance of power between the states and the federal government.

Eventually, the Tea Party devolved into a catchall for right-wing populism, and a magnet for xenophobes and culture warriors. In 2016, its early adherents overwhelmingly fell in line with President Donald Trump, choosing protectionism over freedom.

But that original Tea Party spirit — the charge to buck the national government in favor of local control — was on full display recently from two unlikely sources.

Trump decided early on in the coronavirus pandemic that the federal government would not centrally coordinate the purchase and distribution of medical supplies. That might have worked fine, except the Trump administration actively undermined state governments’ efforts. The federal government has outbid state buyers and even seized products from states.

After 3 million masks ordered by the Massachusetts governor were confiscated in New York, Republican Gov. Charlie Baker decided to sidestep the usual procurement process. He sent a New England Patriots’ private airplane to bring supplies back from China.

In Maryland, Republican Gov. Larry Hogan coordinated a large COVID-19 test order from South Korea. The delivery was facilitated by the National Guard and state police, and the tests were put in a secure location with armed security.

“We guarded that cargo from whoever might interfere with us getting that to our folks that needed it,” Hogan said last week in an interview with Washington Post Live.

Hogan and Baker don’t fit the common perception of the Tea Party mold. They both have harshly criticized President Donald Trump and supported the impeachment inquiry. Hogan openly considered challenging Trump for the GOP presidential nomination.

They are among the last vestiges of moderate conservatism in American executive office, and yet they are the ones waging a battle over federalism and states’ rights.

The political minds built for this moment — the ones who have long fantasized about escalating the state-federal power struggle — are not up to the task. The conservative firebrands who should be taking up this fight instead are beholden to Trump and whatever cockamamie plans he comes up with.

At a news conference last month, Trump made a striking claim about his powers in managing the public health crisis: “When somebody is the president of the United States, the authority is total. And that’s the way it’s got to be. It’s total.”

That should have been a flashpoint for conservatives, the beginning of a revitalized Tea Party that recognizes the enormous threat Trumpism poses to our values.

But it wasn’t. Loyalists brushed it off, again, as Trump misspeaking.

The small-government philosophy is founded on the likelihood that the levers of government power will eventually be grabbed by some menace, an incompetent or malicious figure. But when that menace is your friend, your fundraiser and your public relations manager, it proves hard to slap his hand away.

adam.sullivan@thegazette.com; (319) 339-3156 Full Article Staff Columnist
urr Concurrency & Multithreading in iOS By feedproxy.google.com Published On :: Tue, 25 Feb 2020 08:00:00 -0500

Concurrency is the notion of multiple things happening at the same time. This is generally achieved either via time-slicing, or truly in parallel if multiple CPU cores are available to the host operating system. We've all experienced a lack of concurrency, most likely in the form of an app freezing up when running a heavy task. UI freezes don't necessarily occur due to the absence of concurrency — they could just be symptoms of buggy software — but software that doesn't take advantage of all the computational power at its disposal is going to create these freezes whenever it needs to do something resource-intensive. If you've profiled an app hanging in this way, you'll probably see a report showing the main thread pegged at full capacity.

Anything related to file I/O, data processing, or networking usually warrants a background task (unless you have a very compelling excuse to halt the entire program). There aren't many reasons that these tasks should block your user from interacting with the rest of your application. Consider how much better the user experience of your app could be if, instead, the profiler showed that heavy work happening off the main thread. Analyzing an image, processing a document or a piece of audio, or writing a sizeable chunk of data to disk are examples of tasks that could benefit greatly from being delegated to background threads. Let's dig into how we can introduce such behavior into our iOS applications.

A Brief History

In the olden days, the maximum amount of work per CPU cycle that a computer could perform was determined by the clock speed. As processor designs became more compact, heat and physical constraints started becoming limiting factors for higher clock speeds. Consequently, chip manufacturers started adding additional processor cores on each chip in order to increase total performance. By increasing the number of cores, a single chip could execute more CPU instructions per cycle without increasing its speed, size, or thermal output. There's just one problem...

How can we take advantage of these extra cores? Multithreading.

Multithreading is an implementation handled by the host operating system to allow the creation and usage of n threads. Its main purpose is to provide simultaneous execution of two or more parts of a program to utilize all available CPU time. Multithreading is a powerful technique to have in a programmer's toolbelt, but it comes with its own set of responsibilities. A common misconception is that multithreading requires a multi-core processor, but this isn't the case — single-core CPUs are perfectly capable of working on many threads, but we'll take a look in a bit as to why threading is a problem in the first place. Before we dive in, let's look at the nuances of what concurrency and parallelism mean using a simple diagram:

In the first situation presented above, we observe that tasks can run concurrently, but not in parallel. This is similar to having multiple conversations in a chatroom, and interleaving (context-switching) between them, but never truly conversing with two people at the same time. This is what we call concurrency. It is the illusion of multiple things happening at the same time when in reality, they're switching very quickly. Concurrency is about dealing with lots of things at the same time. Contrast this with the parallelism model, in which both tasks run simultaneously.
Both execution models exhibit multithreading, which is the involvement of multiple threads working towards one common goal. Multithreading is a generalized technique for introducing a combination of concurrency and parallelism into your program.

The Burden of Threads

A modern multitasking operating system like iOS has hundreds of programs (or processes) running at any given moment. However, most of these programs are either system daemons or background processes that have a very low memory footprint, so what is really needed is a way for individual applications to make use of the extra cores available. An application (process) can have many threads (sub-processes) operating on shared memory. Our goal is to be able to control these threads and use them to our advantage.

Historically, introducing concurrency to an app has required the creation of one or more threads. Threads are low-level constructs that need to be managed manually. A quick skim through Apple's Threaded Programming Guide is all it takes to see how much complexity threaded code adds to a codebase. In addition to building an app, the developer has to:

- Responsibly create new threads, adjusting that number dynamically as system conditions change
- Manage them carefully, deallocating them from memory once they have finished executing
- Leverage synchronization mechanisms like mutexes, locks, and semaphores to orchestrate resource access between threads, adding even more overhead to application code
- Mitigate risks associated with coding an application that assumes most of the costs associated with creating and maintaining any threads it uses, and not the host OS

This is unfortunate, as it adds enormous levels of complexity and risk without any guarantees of improved performance.
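To make that burden concrete, here is a minimal sketch of my own (not from the article) of what even a trivial job looks like with manually managed threads; creation, configuration, locking, and waiting all fall on the developer:

import Foundation

// Shared mutable state that we must protect by hand.
var results: [Int] = []
let lock = NSLock()

// We create, configure, and start each thread ourselves...
let workers: [Thread] = (0..<4).map { id in
    let thread = Thread {
        // ...and we must remember to lock around every access to shared state.
        lock.lock()
        results.append(id)
        lock.unlock()
    }
    thread.stackSize = 1 << 20 // even the stack size is our responsibility
    return thread
}

workers.forEach { $0.start() }

// Thread has no built-in "join", so waiting for completion is also on us
// (sleeping here only to keep the sketch short).
Thread.sleep(forTimeInterval: 0.1)

lock.lock()
print(results) // some ordering of 0...3, depending on scheduling
lock.unlock()

Every one of those steps is a place to introduce a bug, which is exactly the complexity the next section's abstractions take off our plate.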
Grand Central Dispatch

iOS takes an asynchronous approach to solving the concurrency problem of managing threads. Asynchronous functions are common in most programming environments, and are often used to initiate tasks that might take a long time, like reading a file from the disk, or downloading a file from the web. When invoked, an asynchronous function executes some work behind the scenes to start a background task, but returns immediately, regardless of how long the original task might take to actually complete.

A core technology that iOS provides for starting tasks asynchronously is Grand Central Dispatch (or GCD for short). GCD abstracts away thread management code and moves it down to the system level, exposing a light API to define tasks and execute them on an appropriate dispatch queue. GCD takes care of all thread management and scheduling, providing a holistic approach to task management and execution, while also providing better efficiency than traditional threads.

Let's take a look at the main components of GCD. What've we got here? Let's start from the left:

- DispatchQueue.main: The main thread, or the UI thread, is backed by a single serial queue. All tasks are executed in succession, so it is guaranteed that the order of execution is preserved. It is crucial that you ensure all UI updates are designated to this queue, and that you never run any blocking tasks on it. We want to ensure that the app's run loop (called CFRunLoop) is never blocked in order to maintain the highest framerate. Subsequently, the main queue has the highest priority, and any tasks pushed onto this queue will get executed immediately.
- DispatchQueue.global: A set of global concurrent queues, each of which manages its own pool of threads. Depending on the priority of your task, you can specify which specific queue to execute your task on, although you should resort to using default most of the time. Because tasks on these queues are executed concurrently, it doesn't guarantee preservation of the order in which tasks were queued.

Notice how we're not dealing with individual threads anymore? We're dealing with queues which manage a pool of threads internally, and you will shortly see why queues are a much more sustainable approach to multithreading.

Serial Queues: The Main Thread

As an exercise, let's look at a snippet of code below, which gets fired when the user presses a button in the app. The expensive compute function can be anything. Let's pretend it is post-processing an image stored on the device.

import UIKit

class ViewController: UIViewController {
    @IBAction func handleTap(_ sender: Any) {
        compute()
    }

    private func compute() -> Void {
        // Pretending to post-process a large image.
        var counter = 0
        for _ in 0..<9999999 {
            counter += 1
        }
    }
}

At first glance, this may look harmless, but if you run this inside of a real app, the UI will freeze completely until the loop is terminated, which will take... a while. We can prove it by profiling this task in Instruments. You can fire up the Time Profiler module of Instruments by going to Xcode > Open Developer Tool > Instruments in Xcode's menu options. Let's look at the Threads module of the profiler and see where the CPU usage is highest.

We can see that the Main Thread is clearly at 100% capacity for almost 5 seconds. That's a non-trivial amount of time to block the UI. Looking at the call tree below the chart, we can see that the Main Thread is at 99.9% capacity for 4.43 seconds! Given that a serial queue works in a FIFO manner, tasks will always complete in the order in which they were inserted. Clearly the compute() method is the culprit here. Can you imagine clicking a button just to have the UI freeze up on you for that long?

Background Threads

How can we make this better? DispatchQueue.global() to the rescue! This is where background threads come in. Referring to the GCD architecture diagram above, we can see that anything that is not the Main Thread is a background thread in iOS. They can run alongside the Main Thread, leaving it fully unoccupied and ready to handle other UI events like scrolling, responding to user events, animating, etc. Let's make a small change to our button click handler above:

class ViewController: UIViewController {
    @IBAction func handleTap(_ sender: Any) {
        DispatchQueue.global(qos: .userInitiated).async { [unowned self] in
            self.compute()
        }
    }

    private func compute() -> Void {
        // Pretending to post-process a large image.
        var counter = 0
        for _ in 0..<9999999 {
            counter += 1
        }
    }
}

Unless specified, a snippet of code will usually default to execute on the Main Queue, so in order to force it to execute on a different thread, we'll wrap our compute call inside of an asynchronous closure that gets submitted to the DispatchQueue.global queue. Keep in mind that we aren't really managing threads here. We're submitting tasks (in the form of closures or blocks) to the desired queue with the assumption that it is guaranteed to execute at some point in time. The queue decides which thread to allocate the task to, and it does all the hard work of assessing system requirements and managing the actual threads. This is the magic of Grand Central Dispatch.

As the old adage goes, you can't improve what you can't measure. So we measured our truly terrible button click handler, and now that we've improved it, we'll measure it once again to get some concrete data with regards to performance. Looking at the profiler again, it's quite clear to us that this is a huge improvement. The task takes an identical amount of time, but this time, it's happening in the background without locking up the UI. Even though our app is doing the same amount of work, the perceived performance is much better because the user will be free to do other things while the app is processing.

You may have noticed that we accessed a global queue of .userInitiated priority. This is an attribute we can use to give our tasks a sense of urgency. If we run the same task on a global queue and pass it a qos attribute of .background, iOS will think it's a utility task, and thus allocate fewer resources to execute it. So, while we don't have control over when our tasks get executed, we do have control over their priority.
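As a quick illustration of that priority knob, here is a small sketch of my own (not the article's code); the same kind of work can be submitted at different quality-of-service classes, and the system budgets CPU, I/O, and energy accordingly:

import Foundation

// Urgent, user-facing work: scheduled aggressively.
DispatchQueue.global(qos: .userInitiated).async {
    print("resizing the image the user just picked")
}

// Deferrable maintenance work: given fewer resources.
DispatchQueue.global(qos: .background).async {
    print("pre-fetching thumbnails nobody is waiting on")
}

The print statements stand in for real work; the point is only that the qos argument, not the call site, tells the system how eagerly to run each closure.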
A Note on Main Thread vs. Main Queue

You might be wondering why the Profiler shows "Main Thread" and why we're referring to it as the "Main Queue". If you refer back to the GCD architecture we described above, the Main Queue is solely responsible for managing the Main Thread. The Dispatch Queues section in the Concurrency Programming Guide says that "the main dispatch queue is a globally available serial queue that executes tasks on the application’s main thread. Because it runs on your application’s main thread, the main queue is often used as a key synchronization point for an application." The terms "execute on the Main Thread" and "execute on the Main Queue" can be used interchangeably.

Concurrent Queues

So far, our tasks have been executed exclusively in a serial manner. DispatchQueue.main is by default a serial queue, and DispatchQueue.global gives you four concurrent dispatch queues depending on the priority parameter you pass in. Let's say we want to take five images, and have our app process them all in parallel on background threads. How would we go about doing that? We can spin up a custom concurrent queue with an identifier of our choosing, and allocate those tasks there. All that's required is the .concurrent attribute during the construction of the queue.

class ViewController: UIViewController {
    let queue = DispatchQueue(label: "com.app.concurrentQueue", attributes: .concurrent)
    let images: [UIImage] = [UIImage].init(repeating: UIImage(), count: 5)

    @IBAction func handleTap(_ sender: Any) {
        for img in images {
            queue.async { [unowned self] in
                self.compute(img)
            }
        }
    }

    private func compute(_ img: UIImage) -> Void {
        // Pretending to post-process a large image.
        var counter = 0
        for _ in 0..<9999999 {
            counter += 1
        }
    }
}

Running that through the profiler, we can see that the app is now spinning up 5 discrete threads to parallelize a for-loop.

Parallelization of N Tasks

So far, we've looked at pushing computationally expensive task(s) onto background threads without clogging up the UI thread. But what about executing parallel tasks with some restrictions? How can Spotify download multiple songs in parallel, while limiting the maximum number up to 3? We can go about this in a few ways, but this is a good time to explore another important construct in multithreaded programming: semaphores.

Semaphores are signaling mechanisms. They are commonly used to control access to a shared resource. Imagine a scenario where a thread can lock access to a certain section of the code while it executes it, and unlocks after it's done to let other threads execute the said section of the code. You would see this type of behavior in database writes and reads, for example. What if you want only one thread writing to a database and preventing any reads during that time? This is a common thread-safety concern known as the readers-writer lock. Semaphores can be used to control concurrency in our app by allowing us to lock n number of threads.

let kMaxConcurrent = 3 // Or 1 if you want strictly ordered downloads!
let semaphore = DispatchSemaphore(value: kMaxConcurrent)
let downloadQueue = DispatchQueue(label: "com.app.downloadQueue", attributes: .concurrent)

class ViewController: UIViewController {
    @IBOutlet weak var tableView: UITableView! // hooked up in the storyboard

    @IBAction func handleTap(_ sender: Any) {
        for i in 0..<15 {
            downloadQueue.async { [unowned self] in
                // Lock shared resource access
                semaphore.wait()

                // Expensive task
                self.download(i + 1)

                // Update the UI on the main thread, always!
                DispatchQueue.main.async {
                    self.tableView.reloadData()

                    // Release the lock
                    semaphore.signal()
                }
            }
        }
    }

    func download(_ songId: Int) -> Void {
        var counter = 0

        // Simulate semi-random download times.
        for _ in 0..<Int.random(in: 999999...10000000) {
            counter += songId
        }
    }
}

Notice how we've effectively restricted our download system to limit itself to k number of downloads. The moment one download finishes (or thread is done executing), it decrements the semaphore, allowing the managing queue to spawn another thread and start downloading another song. You can apply a similar pattern to database transactions when dealing with concurrent reads and writes.

Semaphores usually aren't necessary for code like the one in our example, but they become more powerful when you need to enforce synchronous behavior while consuming an asynchronous API. The above code would work just as well with a custom NSOperationQueue with a maxConcurrentOperationCount, but it's a worthwhile tangent regardless.
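Before moving on: the readers-writer situation mentioned above can also be handled with GCD itself, no semaphore required. Here is a minimal sketch of my own (the SynchronizedDictionary type and its queue label are illustrative names, not from the article): reads run concurrently on a custom queue, while writes go through a barrier block that runs exclusively.

import Foundation

final class SynchronizedDictionary<Key: Hashable, Value> {
    private var storage: [Key: Value] = [:]
    private let queue = DispatchQueue(label: "com.app.syncDict", attributes: .concurrent)

    subscript(key: Key) -> Value? {
        get {
            // Reads run concurrently with other reads.
            queue.sync { storage[key] }
        }
        set {
            // The barrier waits for in-flight reads, runs alone, then reopens the queue.
            queue.async(flags: .barrier) { self.storage[key] = newValue }
        }
    }
}

let cache = SynchronizedDictionary<String, Int>()
cache["plays"] = 1         // exclusive write
print(cache["plays"] ?? 0) // safe concurrent read

The design trade-off versus a semaphore is that readers never block each other here; only writers pay the cost of exclusivity.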
Finer Control with OperationQueue

GCD is great when you want to dispatch one-off tasks or closures into a queue in a 'set-it-and-forget-it' fashion, and it provides a very lightweight way of doing so. But what if we want to create a repeatable, structured, long-running task that produces associated state or data? And what if we want to model this chain of operations such that they can be cancelled, suspended and tracked, while still working with a closure-friendly API? Imagine an operation like this:

This would be quite cumbersome to achieve with GCD. We want a more modular way of defining a group of tasks while maintaining readability and also exposing a greater amount of control. In this case, we can use Operation objects and queue them onto an OperationQueue, which is a high-level wrapper around DispatchQueue. Let's look at some of the benefits of using these abstractions and what they offer in comparison to the lower-level GCD API:

- You may want to create dependencies between tasks, and while you could do this via GCD, you're better off defining them concretely as Operation objects, or units of work, and pushing them onto your own queue. This would allow for maximum reusability since you may use the same pattern elsewhere in an application.
- The Operation and OperationQueue classes have a number of properties that can be observed, using KVO (Key Value Observing). This is another important benefit if you want to monitor the state of an operation or operation queue.
- Operations can be paused, resumed, and cancelled. Once you dispatch a task using Grand Central Dispatch, you no longer have control or insight into the execution of that task. The Operation API is more flexible in that respect, giving the developer control over the operation's life cycle. (A short sketch of this follows at the end of this section.)
- OperationQueue allows you to specify the maximum number of queued operations that can run simultaneously, giving you a finer degree of control over the concurrency aspects.

The usage of Operation and OperationQueue could fill an entire blog post, but let's look at a quick example of what modeling dependencies looks like. (GCD can also create dependencies, but you're better off dividing up large tasks into a series of composable sub-tasks.) In order to create a chain of operations that depend on one another, we could do something like this:

class ViewController: UIViewController {
    var queue = OperationQueue()
    var rawImage: UIImage? = nil
    let imageUrl = URL(string: "https://example.com/portrait.jpg")!

    @IBOutlet weak var imageView: UIImageView!

    override func viewDidLoad() {
        super.viewDidLoad()

        let downloadOperation = BlockOperation {
            // Store the result so the dependent operation can pick it up.
            self.rawImage = Downloader.downloadImageWithURL(url: self.imageUrl)
        }

        let filterOperation = BlockOperation {
            let filteredImage = ImgProcessor.addGaussianBlur(self.rawImage)
            OperationQueue.main.addOperation {
                self.imageView.image = filteredImage
            }
        }

        filterOperation.addDependency(downloadOperation)

        [downloadOperation, filterOperation].forEach {
            queue.addOperation($0)
        }
    }
}

So why not opt for a higher level abstraction and avoid using GCD entirely? While GCD is ideal for inline asynchronous processing, Operation provides a more comprehensive, object-oriented model of computation for encapsulating all of the data around structured, repeatable tasks in an application. Developers should use the highest level of abstraction possible for any given problem, and for scheduling consistent, repeated work, that abstraction is Operation. Other times, it makes more sense to sprinkle in some GCD for one-off tasks or closures that we want to fire. We can mix both OperationQueue and GCD to get the best of both worlds.
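As promised above, here is a short sketch of my own showing the life-cycle control and concurrency cap described in the list earlier; the ExportOperation name and its fake workload are illustrative only, not from the post:

import Foundation

final class ExportOperation: Operation {
    override func main() {
        for chunk in 0..<1_000 {
            // Long-running operations should poll for cancellation and bail out early.
            if isCancelled { return }
            _ = chunk // stand-in for encoding one chunk of data
        }
    }
}

let queue = OperationQueue()
queue.maxConcurrentOperationCount = 3 // at most three exports in flight

let operations = (0..<10).map { _ in ExportOperation() }
queue.addOperations(operations, waitUntilFinished: false)

queue.isSuspended = true    // pause: no new operations are started
queue.isSuspended = false   // resume
operations.first?.cancel()  // flips isCancelled; main() checks it and returns

Note that suspending a queue only stops it from starting new operations; anything already executing runs to completion (or until it notices isCancelled), which is why the polling inside main() matters.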
The Cost of Concurrency

DispatchQueue and friends are meant to make it easier for the application developer to execute code concurrently. However, these technologies do not guarantee improvements to the efficiency or responsiveness in an application. It is up to you to use queues in a manner that is both effective and does not impose an undue burden on other resources. For example, it's totally viable to create 10,000 tasks and submit them to a queue, but doing so would allocate a nontrivial amount of memory and introduce a lot of overhead for the allocation and deallocation of operation blocks. This is the opposite of what you want! It's best to profile your app thoroughly to ensure that concurrency is enhancing your app's performance and not degrading it.

We've talked about how concurrency comes at a cost in terms of complexity and allocation of system resources, but introducing concurrency also brings a host of other risks like:

- Deadlock: A situation where a thread locks a critical portion of the code and can halt the application's run loop entirely. In the context of GCD, you should be very careful when using the dispatchQueue.sync { } calls as you could easily get yourself in situations where two synchronous operations can get stuck waiting for each other. (A minimal reproduction follows this list.)
- Priority Inversion: A condition where a lower priority task blocks a high priority task from executing, which effectively inverts their priorities. GCD allows for different levels of priority on its background queues, so this is quite easily a possibility.
- Producer-Consumer Problem: A race condition where one thread is creating a data resource while another thread is accessing it. This is a synchronization problem, and can be solved using locks, semaphores, serial queues, or a barrier dispatch if you're using concurrent queues in GCD.
- ...and many other sorts of locking and data-race conditions that are hard to debug! Thread safety is of the utmost concern when dealing with concurrency.
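That deadlock risk is easy to reproduce deterministically. Here is a deliberately broken sketch of my own showing the classic mistake: synchronously dispatching onto the serial queue you are already blocking.

import Foundation

let serialQueue = DispatchQueue(label: "com.app.serial")

// Outer sync: the current thread waits for this block to finish on serialQueue.
serialQueue.sync {
    // Inner sync targets the queue we are already occupying. It can never
    // start, so the outer block can never finish. The program hangs here.
    serialQueue.sync {
        print("never reached")
    }
}

The same trap bites when code already running on the main queue calls DispatchQueue.main.sync { }.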
Parting Thoughts + Further Reading

If you've made it this far, I applaud you. Hopefully this article gives you a lay of the land when it comes to multithreading techniques on iOS, and how you can use some of them in your app. We didn't get to cover many of the lower-level constructs like locks, mutexes and how they help us achieve synchronization, nor did we get to dive into concrete examples of how concurrency can hurt your app. We'll save those for another day, but you can dig into some additional reading and videos if you're eager to dive deeper:

- Building Concurrent User Interfaces on iOS (WWDC 2012)
- Concurrency and Parallelism: Understanding I/O
- Apple's Official Concurrency Programming Guide
- Mutexes and Closure Capture in Swift
- Locks, Thread Safety, and Swift
- Advanced NSOperations (WWDC 2015)
- NSHipster: NSOperation

Full Article Code
urr Scurry: A Race-To-Finish Scavenger Hunt App By feedproxy.google.com Published On :: Thu, 26 Mar 2020 13:58:00 -0400

We have a lot of traditions here at Viget, many of which you may have read about - TTT, FLF, Pointless Weekend. There are others, but you have to be an insider for more information on those. Pointless Weekend is one of our favorite traditions, though. It's been around over a decade and some pretty fun work has come out of it over the years, like Storyboard, Baby Bookie, and Short Order. At a high level, we take 48 hours to build a tool, experiment, or stunt as a team, across all four of our offices. These projects are entirely separate from our client work and we use them to try out new technologies, explore roles on the team, and stress-test our processes.

The first step for a Pointless Weekend is assembling the teams. We had two teams this year, with a record number of participants. You can read about TrailBuddy, what the other team built, here. The Scurry team was split between the DC and Durham offices, so all meetings were held via Hangout. Once we were assembled, we set out to understand the constraints and the goals of our Pointless Project. We went into this weekend with an extra pep in our step, as we were determined to build something for the upcoming Viget 20th anniversary TTT this summer. Here's what we knew we wanted:

- An activity all Vigets could do together, where they could create memories, and share broadly on social
- Something that we could use in a spotty network at C Lazy U Ranch in Colorado
- A product we can share with others: corporate groups, families and friends, schools, bachelor/ette parties

We landed on a scavenger hunt native app, which we named Scurry (Scavenger + Hurry = Scurry. Brilliant, right?). There are already a few scavenger apps available, so we set out to create something that was:

- Quick and easy for setting up hunts
- Free and intuitive for users
- A nice combination of trivia and activities
- Social! We wanted to enable teams to share photos and progress

One of the main reasons we have Pointless Weekends is to test out new technologies and processes. In that vein, we tried out Notion as our central organizing tool - we used it for user journeys, data modeling, and even writing tickets, which we typically use Github for. We tested out Notion as our primary tool, writing tickets and tracking progress.

When we built the app, we needed to prepare for spotty network service, as internet connectivity isn't guaranteed at C Lazy U Ranch – where our Viget20 celebration will be. A Progressive Web Application (PWA) didn't make sense for our tech requirements, so we chose the route of creating a native application. There are a number of options available to build native applications. But, as we were looking to make as much progress as possible in 48 hours, we chose one of our favorite frameworks: React Native. React Native allows developers to build true, cross-platform native applications, using some of our favorite technologies: javascript, the React framework, and a native-specific variant of CSS. We decided on the turn-key solution Expo. Expo has extra tooling allowing for easy development, deployment, and debugging. This is a snapshot of our app and Expo.

Our frontend developers were able to immediately dive in making screens and styling components, and quickly made the mockups in Whimsical a reality. On the backend, we used the supported library to connect to the backend datastore, Firebase. Firebase is a hosted solution for data storage, with key features built in like authentication, realtime updates, and offline support. Our backend developer worked behind the frontend developers, hooking those views up to live data. Both of these tools, Expo and Firebase, were easy to use and allowed us to focus on building a working application quickly, rather than being mired in setup or bespoke solutions to common problems. Whimsical is one of our favorite tools for building out mockups of an app.

We made impressive progress in our 48-hour sprint, but there's still some work to do. We have some additional features we hope to add before TTT, which will require additional testing and refining. For now, stay tuned and sign up for our newsletter. We'll be sure to share when Scurry is ready for the world! Full Article News & Culture
urr 7 Best WordPress Membership Plugins to Generate Recurring Revenue By feedproxy.google.com Published On :: Fri, 28 Feb 2020 14:55:24 +0000 Do you want to turn your WordPress blog into a membership site? Businesses around the globe use this model to sell their physical products or offer exclusive digital content, and many of them are super successful. CopyBlogger, a site with content marketing lessons, offers premium courses to members and they’re currently an eight-figure business. Meanwhile, the owner of the razor […] Full Article Plugins
urr Concurrency & Multithreading in iOS By feedproxy.google.com Published On :: Tue, 25 Feb 2020 08:00:00 -0500 Concurrency is the notion of multiple things happening at the same time. This is generally achieved either via time-slicing, or truly in parallel if multiple CPU cores are available to the host operating system. We've all experienced a lack of concurrency, most likely in the form of an app freezing up when running a heavy task. UI freezes don't necessarily occur due to the absence of concurrency — they could just be symptoms of buggy software — but software that doesn't take advantage of all the computational power at its disposal is going to create these freezes whenever it needs to do something resource-intensive. If you've profiled an app hanging in this way, you'll probably see a report that looks like this: Anything related to file I/O, data processing, or networking usually warrants a background task (unless you have a very compelling excuse to halt the entire program). There aren't many reasons that these tasks should block your user from interacting with the rest of your application. Consider how much better the user experience of your app could be if instead, the profiler reported something like this: Analyzing an image, processing a document or a piece of audio, or writing a sizeable chunk of data to disk are examples of tasks that could benefit greatly from being delegated to background threads. Let's dig into how we can enforce such behavior into our iOS applications. A Brief History In the olden days, the maximum amount of work per CPU cycle that a computer could perform was determined by the clock speed. As processor designs became more compact, heat and physical constraints started becoming limiting factors for higher clock speeds. Consequentially, chip manufacturers started adding additional processor cores on each chip in order to increase total performance. By increasing the number of cores, a single chip could execute more CPU instructions per cycle without increasing its speed, size, or thermal output. There's just one problem... How can we take advantage of these extra cores? Multithreading. Multithreading is an implementation handled by the host operating system to allow the creation and usage of n amount of threads. Its main purpose is to provide simultaneous execution of two or more parts of a program to utilize all available CPU time. Multithreading is a powerful technique to have in a programmer's toolbelt, but it comes with its own set of responsibilities. A common misconception is that multithreading requires a multi-core processor, but this isn't the case — single-core CPUs are perfectly capable of working on many threads, but we'll take a look in a bit as to why threading is a problem in the first place. Before we dive in, let's look at the nuances of what concurrency and parallelism mean using a simple diagram: In the first situation presented above, we observe that tasks can run concurrently, but not in parallel. This is similar to having multiple conversations in a chatroom, and interleaving (context-switching) between them, but never truly conversing with two people at the same time. This is what we call concurrency. It is the illusion of multiple things happening at the same time when in reality, they're switching very quickly. Concurrency is about dealing with lots of things at the same time. Contrast this with the parallelism model, in which both tasks run simultaneously. 
Both execution models exhibit multithreading, which is the involvement of multiple threads working towards one common goal. Multithreading is a generalized technique for introducing a combination of concurrency and parallelism into your program. The Burden of Threads A modern multitasking operating system like iOS has hundreds of programs (or processes) running at any given moment. However, most of these programs are either system daemons or background processes that have very low memory footprint, so what is really needed is a way for individual applications to make use of the extra cores available. An application (process) can have many threads (sub-processes) operating on shared memory. Our goal is to be able to control these threads and use them to our advantage. Historically, introducing concurrency to an app has required the creation of one or more threads. Threads are low-level constructs that need to be managed manually. A quick skim through Apple's Threaded Programming Guide is all it takes to see how much complexity threaded code adds to a codebase. In addition to building an app, the developer has to: Responsibly create new threads, adjusting that number dynamically as system conditions change Manage them carefully, deallocating them from memory once they have finished executing Leverage synchronization mechanisms like mutexes, locks, and semaphores to orchestrate resource access between threads, adding even more overhead to application code Mitigate risks associated with coding an application that assumes most of the costs associated with creating and maintaining any threads it uses, and not the host OS This is unfortunate, as it adds enormous levels of complexity and risk without any guarantees of improved performance. Grand Central Dispatch iOS takes an asynchronous approach to solving the concurrency problem of managing threads. Asynchronous functions are common in most programming environments, and are often used to initiate tasks that might take a long time, like reading a file from the disk, or downloading a file from the web. When invoked, an asynchronous function executes some work behind the scenes to start a background task, but returns immediately, regardless of how long the original task might takes to actually complete. A core technology that iOS provides for starting tasks asynchronously is Grand Central Dispatch (or GCD for short). GCD abstracts away thread management code and moves it down to the system level, exposing a light API to define tasks and execute them on an appropriate dispatch queue. GCD takes care of all thread management and scheduling, providing a holistic approach to task management and execution, while also providing better efficiency than traditional threads. Let's take a look at the main components of GCD: What've we got here? Let's start from the left: DispatchQueue.main: The main thread, or the UI thread, is backed by a single serial queue. All tasks are executed in succession, so it is guaranteed that the order of execution is preserved. It is crucial that you ensure all UI updates are designated to this queue, and that you never run any blocking tasks on it. We want to ensure that the app's run loop (called CFRunLoop) is never blocked in order to maintain the highest framerate. Subsequently, the main queue has the highest priority, and any tasks pushed onto this queue will get executed immediately. DispatchQueue.global: A set of global concurrent queues, each of which manage their own pool of threads. 
Depending on the priority of your task, you can specify which queue to execute your task on, although you should stick with the default priority most of the time. Because tasks on these queues are executed concurrently, there is no guarantee that they will complete in the order in which they were queued. Notice how we're not dealing with individual threads anymore? We're dealing with queues which manage a pool of threads internally, and you will shortly see why queues are a much more sustainable approach to multithreading.

Serial Queues: The Main Thread

As an exercise, let's look at a snippet of code below, which gets fired when the user presses a button in the app. The expensive compute function can be anything. Let's pretend it is post-processing an image stored on the device.

    import UIKit

    class ViewController: UIViewController {
        @IBAction func handleTap(_ sender: Any) {
            compute()
        }

        private func compute() -> Void {
            // Pretending to post-process a large image.
            var counter = 0
            for _ in 0..<9999999 {
                counter += 1
            }
        }
    }

At first glance, this may look harmless, but if you run this inside of a real app, the UI will freeze completely until the loop is terminated, which will take... a while. We can prove it by profiling this task in Instruments. You can fire up the Time Profiler module of Instruments by going to Xcode > Open Developer Tool > Instruments in Xcode's menu options. Let's look at the Threads module of the profiler and see where the CPU usage is highest. We can see that the Main Thread is clearly at 100% capacity for almost 5 seconds. That's a non-trivial amount of time to block the UI. Looking at the call tree below the chart, we can see that the Main Thread is at 99.9% capacity for 4.43 seconds! Given that a serial queue works in a FIFO manner, tasks will always complete in the order in which they were inserted. Clearly the compute() method is the culprit here. Can you imagine clicking a button just to have the UI freeze up on you for that long?

Background Threads

How can we make this better? DispatchQueue.global() to the rescue! This is where background threads come in. Referring to the GCD architecture diagram above, we can see that anything that is not the Main Thread is a background thread in iOS. They can run alongside the Main Thread, leaving it fully unoccupied and ready to handle other UI events like scrolling, responding to user events, animating, etc. Let's make a small change to our button click handler above:

    class ViewController: UIViewController {
        @IBAction func handleTap(_ sender: Any) {
            DispatchQueue.global(qos: .userInitiated).async { [unowned self] in
                self.compute()
            }
        }

        private func compute() -> Void {
            // Pretending to post-process a large image.
            var counter = 0
            for _ in 0..<9999999 {
                counter += 1
            }
        }
    }

Unless specified otherwise, a snippet of code will usually execute on the Main Queue by default, so in order to force it to execute on a different thread, we'll wrap our compute call inside of an asynchronous closure that gets submitted to the DispatchQueue.global queue. Keep in mind that we aren't really managing threads here. We're submitting tasks (in the form of closures or blocks) to the desired queue with the assumption that it is guaranteed to execute at some point in time. The queue decides which thread to allocate the task to, and it does all the hard work of assessing system requirements and managing the actual threads. This is the magic of Grand Central Dispatch. As the old adage goes, you can't improve what you can't measure.
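One thing the snippet above doesn't show yet is the return trip: background work usually produces a result that the UI needs, and that final update must hop back onto the main queue. Here is a minimal sketch of that round trip, assuming a hypothetical statusLabel outlet and an Int-returning compute variant that are not part of the original example:

    import UIKit

    class ImageViewController: UIViewController {
        // Hypothetical label for illustration; any UI update works the same way.
        @IBOutlet weak var statusLabel: UILabel!

        @IBAction func handleTap(_ sender: Any) {
            statusLabel.text = "Processing..."
            // Heavy work happens off the main thread, at a QoS we choose.
            DispatchQueue.global(qos: .userInitiated).async { [weak self] in
                let result = self?.compute()
                // All UI mutations must come back to the main queue.
                DispatchQueue.main.async {
                    self?.statusLabel.text = "Done: \(result ?? 0)"
                }
            }
        }

        private func compute() -> Int {
            var counter = 0
            for _ in 0..<9_999_999 { counter += 1 }
            return counter
        }
    }

The same shape applies anywhere background work feeds the UI. With that pattern in hand, back to measuring.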
So we measured our truly terrible button click handler, and now that we've improved it, we'll measure it once again to get some concrete data with regard to performance. Looking at the profiler again, it's quite clear to us that this is a huge improvement. The task takes an identical amount of time, but this time, it's happening in the background without locking up the UI. Even though our app is doing the same amount of work, the perceived performance is much better because the user will be free to do other things while the app is processing. You may have noticed that we accessed a global queue of .userInitiated priority. This is an attribute we can use to give our tasks a sense of urgency. If we run the same task on a global queue and pass it a qos attribute of .background, iOS will think it's a utility task, and thus allocate fewer resources to execute it. So, while we don't have control over when our tasks get executed, we do have control over their priority.

A Note on Main Thread vs. Main Queue

You might be wondering why the Profiler shows "Main Thread" and why we're referring to it as the "Main Queue". If you refer back to the GCD architecture we described above, the Main Queue is solely responsible for managing the Main Thread. The Dispatch Queues section in the Concurrency Programming Guide says that "the main dispatch queue is a globally available serial queue that executes tasks on the application’s main thread. Because it runs on your application’s main thread, the main queue is often used as a key synchronization point for an application." The terms "execute on the Main Thread" and "execute on the Main Queue" can be used interchangeably.

Concurrent Queues

So far, our tasks have been executed exclusively in a serial manner. DispatchQueue.main is by default a serial queue, and DispatchQueue.global gives you four concurrent dispatch queues depending on the priority parameter you pass in. Let's say we want to take five images, and have our app process them all in parallel on background threads. How would we go about doing that? We can spin up a custom concurrent queue with an identifier of our choosing, and allocate those tasks there. All that's required is the .concurrent attribute during the construction of the queue.

    class ViewController: UIViewController {
        let queue = DispatchQueue(label: "com.app.concurrentQueue", attributes: .concurrent)
        let images: [UIImage] = [UIImage].init(repeating: UIImage(), count: 5)

        @IBAction func handleTap(_ sender: Any) {
            for img in images {
                queue.async { [unowned self] in
                    self.compute(img)
                }
            }
        }

        private func compute(_ img: UIImage) -> Void {
            // Pretending to post-process a large image.
            var counter = 0
            for _ in 0..<9999999 {
                counter += 1
            }
        }
    }

Running that through the profiler, we can see that the app is now spinning up 5 discrete threads to parallelize a for-loop.

Parallelization of N Tasks

So far, we've looked at pushing computationally expensive task(s) onto background threads without clogging up the UI thread. But what about executing parallel tasks with some restrictions? How can Spotify download multiple songs in parallel, while limiting the maximum number to 3? We can go about this in a few ways, but this is a good time to explore another important construct in multithreaded programming: semaphores. Semaphores are signaling mechanisms. They are commonly used to control access to a shared resource.
Imagine a scenario where a thread can lock access to a certain section of the code while it executes it, and unlock it after it's done to let other threads execute that section. You would see this type of behavior in database writes and reads, for example. What if you want only one thread writing to a database and preventing any reads during that time? This is a common thread-safety concern addressed by a readers-writer lock. Semaphores can be used to control concurrency in our app by allowing us to limit access to n threads at a time.

    let kMaxConcurrent = 3 // Or 1 if you want strictly ordered downloads!
    let semaphore = DispatchSemaphore(value: kMaxConcurrent)
    let downloadQueue = DispatchQueue(label: "com.app.downloadQueue", attributes: .concurrent)

    class ViewController: UIViewController {
        // Assumed: a table view that displays download progress.
        @IBOutlet weak var tableView: UITableView!

        @IBAction func handleTap(_ sender: Any) {
            for i in 0..<15 {
                downloadQueue.async { [unowned self] in
                    // Lock shared resource access
                    semaphore.wait()
                    // Expensive task
                    self.download(i + 1)
                    // Update the UI on the main thread, always!
                    DispatchQueue.main.async {
                        self.tableView.reloadData()
                        // Release the lock
                        semaphore.signal()
                    }
                }
            }
        }

        func download(_ songId: Int) -> Void {
            var counter = 0
            // Simulate semi-random download times.
            for _ in 0..<Int.random(in: 999999...10000000) {
                counter += songId
            }
        }
    }

Notice how we've effectively restricted our download system to limit itself to k downloads at a time. The moment one download finishes (or its thread is done executing), it signals the semaphore, allowing the managing queue to spawn another thread and start downloading another song. You can apply a similar pattern to database transactions when dealing with concurrent reads and writes. Semaphores usually aren't necessary for code like the one in our example, but they become more powerful when you need to enforce synchronous behavior while consuming an asynchronous API. The above would work just as well with a custom NSOperationQueue with a maxConcurrentOperationCount, but it's a worthwhile tangent regardless.

Finer Control with OperationQueue

GCD is great when you want to dispatch one-off tasks or closures into a queue in a 'set-it-and-forget-it' fashion, and it provides a very lightweight way of doing so. But what if we want to create a repeatable, structured, long-running task that produces associated state or data? And what if we want to model this chain of operations such that they can be cancelled, suspended, and tracked, while still working with a closure-friendly API? Imagine an operation like this: This would be quite cumbersome to achieve with GCD. We want a more modular way of defining a group of tasks while maintaining readability and also exposing a greater amount of control. In this case, we can use Operation objects and queue them onto an OperationQueue, which is a high-level wrapper around DispatchQueue. Let's look at some of the benefits of using these abstractions and what they offer in comparison to the lower-level GCD API:

- You may want to create dependencies between tasks, and while you could do this via GCD, you're better off defining them concretely as Operation objects, or units of work, and pushing them onto your own queue. This would allow for maximum reusability since you may use the same pattern elsewhere in an application.
- The Operation and OperationQueue classes have a number of properties that can be observed, using KVO (Key-Value Observing). This is another important benefit if you want to monitor the state of an operation or operation queue.
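As a quick illustration of that KVO point, here is a minimal sketch using Foundation's block-based KVO API to watch an operation's isFinished property; the sleeping operation is a stand-in for real work, not something from the original article:

    import Foundation

    let queue = OperationQueue()
    let op = BlockOperation {
        // Stand-in for real work.
        Thread.sleep(forTimeInterval: 1)
    }

    // Operation's state properties (isExecuting, isFinished, isCancelled)
    // are KVO-compliant, so we can observe them directly.
    let token = op.observe(\.isFinished, options: [.new]) { operation, _ in
        print("Operation finished: \(operation.isFinished)")
    }

    queue.addOperation(op)
    queue.waitUntilAllOperationsAreFinished()
    // Keep `token` alive for as long as the observation is needed.
    _ = token

Note that the change handler may fire on an arbitrary thread, so any UI work inside it still belongs on the main queue. Two more benefits round out the list: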
- Operations can be paused, resumed, and cancelled. Once you dispatch a task using Grand Central Dispatch, you no longer have control or insight into the execution of that task. The Operation API is more flexible in that respect, giving the developer control over the operation's life cycle.
- OperationQueue allows you to specify the maximum number of queued operations that can run simultaneously, giving you a finer degree of control over the concurrency aspects.

The usage of Operation and OperationQueue could fill an entire blog post, but let's look at a quick example of what modeling dependencies looks like. (GCD can also create dependencies, but you're better off dividing up large tasks into a series of composable sub-tasks.) In order to create a chain of operations that depend on one another, we could do something like this:

    class ViewController: UIViewController {
        var queue = OperationQueue()
        var rawImage: UIImage? = nil
        let imageUrl = URL(string: "https://example.com/portrait.jpg")!
        @IBOutlet weak var imageView: UIImageView!

        override func viewDidLoad() {
            super.viewDidLoad()

            let downloadOperation = BlockOperation {
                let image = Downloader.downloadImageWithURL(url: self.imageUrl)
                // Store the intermediate result before the dependent operation starts.
                self.rawImage = image
            }

            let filterOperation = BlockOperation {
                let filteredImage = ImgProcessor.addGaussianBlur(self.rawImage)
                // UI updates always go through the main queue.
                OperationQueue.main.addOperation {
                    self.imageView.image = filteredImage
                }
            }

            filterOperation.addDependency(downloadOperation)
            [downloadOperation, filterOperation].forEach { queue.addOperation($0) }
        }
    }

So why not opt for a higher level abstraction and avoid using GCD entirely? While GCD is ideal for inline asynchronous processing, Operation provides a more comprehensive, object-oriented model of computation for encapsulating all of the data around structured, repeatable tasks in an application. Developers should use the highest level of abstraction possible for any given problem, and for scheduling consistent, repeated work, that abstraction is Operation. Other times, it makes more sense to sprinkle in some GCD for one-off tasks or closures that we want to fire. We can mix both OperationQueue and GCD to get the best of both worlds.

The Cost of Concurrency

DispatchQueue and friends are meant to make it easier for the application developer to execute code concurrently. However, these technologies do not guarantee improvements to the efficiency or responsiveness of an application. It is up to you to use queues in a manner that is both effective and does not impose an undue burden on other resources. For example, it's totally viable to create 10,000 tasks and submit them to a queue, but doing so would allocate a nontrivial amount of memory and introduce a lot of overhead for the allocation and deallocation of operation blocks. This is the opposite of what you want! It's best to profile your app thoroughly to ensure that concurrency is enhancing your app's performance and not degrading it. We've talked about how concurrency comes at a cost in terms of complexity and allocation of system resources, but introducing concurrency also brings a host of other risks like:

- Deadlock: A situation where a thread locks a critical portion of the code and can halt the application's run loop entirely. In the context of GCD, you should be very careful when using dispatchQueue.sync { } calls, as you could easily get yourself into situations where two synchronous operations get stuck waiting for each other.
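To make that deadlock concrete, here is a minimal, illustrative sketch of the classic mistake: synchronously dispatching onto the serial queue you are already running on.

    import Foundation

    let serialQueue = DispatchQueue(label: "com.app.serialQueue")

    serialQueue.async {
        print("Outer block started")
        // Deadlock: sync blocks the current thread until the inner block runs,
        // but the inner block can never run, because this serial queue is
        // already busy running the outer block. Never do this.
        serialQueue.sync {
            print("Inner block — never reached")
        }
        print("Never reached either")
    }

The same trap exists when calling DispatchQueue.main.sync from the main thread. On to the other risks: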
- Priority Inversion: A condition where a lower-priority task blocks a high-priority task from executing, which effectively inverts their priorities. GCD allows for different levels of priority on its background queues, so this is quite easy to run into.
- Producer-Consumer Problem: A race condition where one thread is creating a data resource while another thread is accessing it. This is a synchronization problem, and it can be solved using locks, semaphores, serial queues, or a barrier dispatch if you're using concurrent queues in GCD.
- ...and many other sorts of locking and data-race conditions that are hard to debug! Thread safety is of the utmost concern when dealing with concurrency.

Parting Thoughts + Further Reading

If you've made it this far, I applaud you. Hopefully this article gives you a lay of the land when it comes to multithreading techniques on iOS, and how you can use some of them in your app. We didn't get to cover many of the lower-level constructs like locks, mutexes and how they help us achieve synchronization, nor did we get to dive into concrete examples of how concurrency can hurt your app. We'll save those for another day, but you can dig into some additional reading and videos if you're eager to dive deeper.

- Building Concurrent User Interfaces on iOS (WWDC 2012)
- Concurrency and Parallelism: Understanding I/O
- Apple's Official Concurrency Programming Guide
- Mutexes and Closure Capture in Swift
- Locks, Thread Safety, and Swift
- Advanced NSOperations (WWDC 2015)
- NSHipster: NSOperation

Full Article Code
urr Scurry: A Race-To-Finish Scavenger Hunt App By feedproxy.google.com Published On :: Thu, 26 Mar 2020 13:58:00 -0400 We have a lot of traditions here at Viget, many of which you may have read about - TTT, FLF, Pointless Weekend. There are others, but you have to be an insider for more information on those. Pointless Weekend is one of our favorite traditions, though. It’s been around over a decade and some pretty fun work has come out of it over the years, like Storyboard, Baby Bookie, and Short Order. At a high level, we take 48 hours to build a tool, experiment, or stunt as a team, across all four of our offices. These projects are entirely separate from our client work and we use them to try out new technologies, explore roles on the team, and stress-test our processes. The first step for a Pointless Weekend is assembling the teams. We had two teams this year, with a record number of participants. You can read about TrailBuddy, what the other team built, here. The Scurry team was split between the DC and Durham offices, so all meetings were held via Hangout. Once we were assembled, we set out to understand the constraints and the goals of our Pointless Project. We went into this weekend with an extra pep in our step, as we were determined to build something for the upcoming Viget 20th anniversary TTT this summer. Here’s what we knew we wanted:

- An activity all Vigets could do together, where they could create memories, and share broadly on social
- Something that we could use in a spotty network at C Lazy U Ranch in Colorado
- A product we can share with others: corporate groups, families and friends, schools, bachelor/ette parties

We landed on a scavenger hunt native app, which we named Scurry (Scavenger + Hurry = Scurry. Brilliant, right?). There are already a few scavenger apps available, so we set out to create something that was:

- Quick and easy to set up hunts
- Free and intuitive for users
- A nice combination of trivia and activities
- Social! We wanted to enable teams to share photos and progress

One of the main reasons we have Pointless Weekends is to test out new technologies and processes. In that vein, we tried out Notion as our central organizing tool - we used it for user journeys, data modeling, and even writing tickets, which we typically use Github for. We tested out Notion as our primary tool, writing tickets and tracking progress. When we built the app, we needed to prepare for spotty network service, as internet connectivity isn’t guaranteed at C Lazy U Ranch – where our Viget20 celebration will be. A Progressive Web Application (PWA) didn't make sense for our tech requirements, so we chose the route of creating a native application. There are a number of options available to build native applications. But, as we were looking to make as much progress as possible in 48 hours, we chose one of our favorite frameworks: React Native. React Native allows developers to build true, cross-platform native applications, using some of our favorite technologies: javascript, the React framework, and a native-specific variant of CSS. We decided on the turn-key solution Expo. Expo has extra tooling allowing for easy development, deployment, and debugging. This is a snapshot of our app and Expo. Our frontend developers were able to immediately dive in making screens and styling components, and quickly made the mockups in Whimsical a reality. On the backend, we used the supported library to connect to the backend datastore, Firebase.
Firebase is a hosted solution for data storage, with key features built-in like authentication, realtime updates, and offline support. Our backend developer worked behind the frontend developers, hooking those views up to live data. Both of these tools, Expo and Firebase, were easy to use and allowed us to focus on building a working application quickly, rather than being mired in setup or bespoke solutions to common problems. Whimsical is one of our favorite tools for building out mockups of an app. We made impressive progress in our 48-hour sprint, but there’s still some work to do. We have some additional features we hope to add before TTT, which will require additional testing and refining. For now, stay tuned and sign up for our newsletter. We’ll be sure to share when Scurry is ready for the world! Full Article News & Culture
urr Surrender By feedproxy.google.com Published On :: Monday, February 19, 2018 - 2:51pm To be a caregiver at home for someone who is severely injured is to surrender. You surrender your time, put your ambitions on hold, and surrender many of the simple pleasures. You also surrender your peace of mind, your good night’s sleep, and routine. But there are ways to make life a little easier and more enjoyable... Full Article
urr DHS: Secret Service has 11 Current Virus Cases By feeds.drudge.com Published On :: Sat, 09 May 2020 13:31:02 -0400 According to the DHS document, along with the 11 active cases there are 23 members of the Secret Service who have recovered from COVID-19 and an additional 60 employees who are self-quarantining. No details have been provided about which members of the Secret Service are infected or if any have recently been on detail with the president or vice president. Full Article news
urr Concurrency & Multithreading in iOS By feedproxy.google.com Published On :: Tue, 25 Feb 2020 08:00:00 -0500 Concurrency is the notion of multiple things happening at the same time. This is generally achieved either via time-slicing, or truly in parallel if multiple CPU cores are available to the host operating system. We've all experienced a lack of concurrency, most likely in the form of an app freezing up when running a heavy task. UI freezes don't necessarily occur due to the absence of concurrency — they could just be symptoms of buggy software — but software that doesn't take advantage of all the computational power at its disposal is going to create these freezes whenever it needs to do something resource-intensive. If you've profiled an app hanging in this way, you'll probably see a report that looks like this: Anything related to file I/O, data processing, or networking usually warrants a background task (unless you have a very compelling excuse to halt the entire program). There aren't many reasons that these tasks should block your user from interacting with the rest of your application. Consider how much better the user experience of your app could be if instead, the profiler reported something like this: Analyzing an image, processing a document or a piece of audio, or writing a sizeable chunk of data to disk are examples of tasks that could benefit greatly from being delegated to background threads. Let's dig into how we can enforce such behavior into our iOS applications. A Brief History In the olden days, the maximum amount of work per CPU cycle that a computer could perform was determined by the clock speed. As processor designs became more compact, heat and physical constraints started becoming limiting factors for higher clock speeds. Consequentially, chip manufacturers started adding additional processor cores on each chip in order to increase total performance. By increasing the number of cores, a single chip could execute more CPU instructions per cycle without increasing its speed, size, or thermal output. There's just one problem... How can we take advantage of these extra cores? Multithreading. Multithreading is an implementation handled by the host operating system to allow the creation and usage of n amount of threads. Its main purpose is to provide simultaneous execution of two or more parts of a program to utilize all available CPU time. Multithreading is a powerful technique to have in a programmer's toolbelt, but it comes with its own set of responsibilities. A common misconception is that multithreading requires a multi-core processor, but this isn't the case — single-core CPUs are perfectly capable of working on many threads, but we'll take a look in a bit as to why threading is a problem in the first place. Before we dive in, let's look at the nuances of what concurrency and parallelism mean using a simple diagram: In the first situation presented above, we observe that tasks can run concurrently, but not in parallel. This is similar to having multiple conversations in a chatroom, and interleaving (context-switching) between them, but never truly conversing with two people at the same time. This is what we call concurrency. It is the illusion of multiple things happening at the same time when in reality, they're switching very quickly. Concurrency is about dealing with lots of things at the same time. Contrast this with the parallelism model, in which both tasks run simultaneously. 
Both execution models exhibit multithreading, which is the involvement of multiple threads working towards one common goal. Multithreading is a generalized technique for introducing a combination of concurrency and parallelism into your program. The Burden of Threads A modern multitasking operating system like iOS has hundreds of programs (or processes) running at any given moment. However, most of these programs are either system daemons or background processes that have very low memory footprint, so what is really needed is a way for individual applications to make use of the extra cores available. An application (process) can have many threads (sub-processes) operating on shared memory. Our goal is to be able to control these threads and use them to our advantage. Historically, introducing concurrency to an app has required the creation of one or more threads. Threads are low-level constructs that need to be managed manually. A quick skim through Apple's Threaded Programming Guide is all it takes to see how much complexity threaded code adds to a codebase. In addition to building an app, the developer has to: Responsibly create new threads, adjusting that number dynamically as system conditions change Manage them carefully, deallocating them from memory once they have finished executing Leverage synchronization mechanisms like mutexes, locks, and semaphores to orchestrate resource access between threads, adding even more overhead to application code Mitigate risks associated with coding an application that assumes most of the costs associated with creating and maintaining any threads it uses, and not the host OS This is unfortunate, as it adds enormous levels of complexity and risk without any guarantees of improved performance. Grand Central Dispatch iOS takes an asynchronous approach to solving the concurrency problem of managing threads. Asynchronous functions are common in most programming environments, and are often used to initiate tasks that might take a long time, like reading a file from the disk, or downloading a file from the web. When invoked, an asynchronous function executes some work behind the scenes to start a background task, but returns immediately, regardless of how long the original task might takes to actually complete. A core technology that iOS provides for starting tasks asynchronously is Grand Central Dispatch (or GCD for short). GCD abstracts away thread management code and moves it down to the system level, exposing a light API to define tasks and execute them on an appropriate dispatch queue. GCD takes care of all thread management and scheduling, providing a holistic approach to task management and execution, while also providing better efficiency than traditional threads. Let's take a look at the main components of GCD: What've we got here? Let's start from the left: DispatchQueue.main: The main thread, or the UI thread, is backed by a single serial queue. All tasks are executed in succession, so it is guaranteed that the order of execution is preserved. It is crucial that you ensure all UI updates are designated to this queue, and that you never run any blocking tasks on it. We want to ensure that the app's run loop (called CFRunLoop) is never blocked in order to maintain the highest framerate. Subsequently, the main queue has the highest priority, and any tasks pushed onto this queue will get executed immediately. DispatchQueue.global: A set of global concurrent queues, each of which manage their own pool of threads. 
Depending on the priority of your task, you can specify which specific queue to execute your task on, although you should resort to using default most of the time. Because tasks on these queues are executed concurrently, it doesn't guarantee preservation of the order in which tasks were queued. Notice how we're not dealing with individual threads anymore? We're dealing with queues which manage a pool of threads internally, and you will shortly see why queues are a much more sustainable approach to multhreading. Serial Queues: The Main Thread As an exercise, let's look at a snippet of code below, which gets fired when the user presses a button in the app. The expensive compute function can be anything. Let's pretend it is post-processing an image stored on the device. import UIKit class ViewController: UIViewController { @IBAction func handleTap(_ sender: Any) { compute() } private func compute() -> Void { // Pretending to post-process a large image. var counter = 0 for _ in 0..<9999999 { counter += 1 } } } At first glance, this may look harmless, but if you run this inside of a real app, the UI will freeze completely until the loop is terminated, which will take... a while. We can prove it by profiling this task in Instruments. You can fire up the Time Profiler module of Instruments by going to Xcode > Open Developer Tool > Instruments in Xcode's menu options. Let's look at the Threads module of the profiler and see where the CPU usage is highest. We can see that the Main Thread is clearly at 100% capacity for almost 5 seconds. That's a non-trivial amount of time to block the UI. Looking at the call tree below the chart, we can see that the Main Thread is at 99.9% capacity for 4.43 seconds! Given that a serial queue works in a FIFO manner, tasks will always complete in the order in which they were inserted. Clearly the compute() method is the culprit here. Can you imagine clicking a button just to have the UI freeze up on you for that long? Background Threads How can we make this better? DispatchQueue.global() to the rescue! This is where background threads come in. Referring to the GCD architecture diagram above, we can see that anything that is not the Main Thread is a background thread in iOS. They can run alongside the Main Thread, leaving it fully unoccupied and ready to handle other UI events like scrolling, responding to user events, animating etc. Let's make a small change to our button click handler above: class ViewController: UIViewController { @IBAction func handleTap(_ sender: Any) { DispatchQueue.global(qos: .userInitiated).async { [unowned self] in self.compute() } } private func compute() -> Void { // Pretending to post-process a large image. var counter = 0 for _ in 0..<9999999 { counter += 1 } } } Unless specified, a snippet of code will usually default to execute on the Main Queue, so in order to force it to execute on a different thread, we'll wrap our compute call inside of an asynchronous closure that gets submitted to the DispatchQueue.global queue. Keep in mind that we aren't really managing threads here. We're submitting tasks (in the form of closures or blocks) to the desired queue with the assumption that it is guaranteed to execute at some point in time. The queue decides which thread to allocate the task to, and it does all the hard work of assessing system requirements and managing the actual threads. This is the magic of Grand Central Dispatch. As the old adage goes, you can't improve what you can't measure. 
As the old adage goes, you can't improve what you can't measure. So we measured our truly terrible button click handler, and now that we've improved it, we'll measure it once again to get some concrete performance data. Looking at the profiler again, it's quite clear that this is a huge improvement: the task takes an identical amount of time, but this time it happens in the background without locking up the UI. Even though our app is doing the same amount of work, the perceived performance is much better, because the user is free to do other things while the app is processing.

You may have noticed that we accessed a global queue of .userInitiated priority. This is an attribute we can use to give our tasks a sense of urgency. If we run the same task on a global queue with a qos attribute of .background, iOS will treat it as a utility task and allocate fewer resources to execute it. So, while we don't have control over when our tasks get executed, we do have control over their priority.
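As a quick sketch, these are the four primary quality-of-service classes the global queues expose, from highest to lowest urgency; the qos argument is a scheduling hint to the system, not a hard guarantee:

    // Same work, different urgency hints.
    DispatchQueue.global(qos: .userInteractive).async { /* e.g. work feeding an animation */ }
    DispatchQueue.global(qos: .userInitiated).async  { /* the user tapped and is waiting */ }
    DispatchQueue.global(qos: .utility).async        { /* long-running, progress-bar work */ }
    DispatchQueue.global(qos: .background).async     { /* prefetching, cleanup, backups */ }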
A Note on Main Thread vs. Main Queue

You might be wondering why the profiler shows "Main Thread" while we've been referring to the "Main Queue". If you refer back to the GCD architecture described above, the Main Queue is solely responsible for managing the Main Thread. The Dispatch Queues section in the Concurrency Programming Guide says that "the main dispatch queue is a globally available serial queue that executes tasks on the application's main thread. Because it runs on your application's main thread, the main queue is often used as a key synchronization point for an application." For our purposes, "execute on the Main Thread" and "execute on the Main Queue" can be used interchangeably.

Concurrent Queues

So far, our tasks have been executed exclusively in a serial manner. DispatchQueue.main is by default a serial queue, and DispatchQueue.global gives you four concurrent dispatch queues depending on the priority parameter you pass in. Let's say we want to take five images and have our app process them all in parallel on background threads. How would we go about doing that? We can spin up a custom concurrent queue with an identifier of our choosing, and allocate those tasks there. All that's required is the .concurrent attribute during the construction of the queue.

    class ViewController: UIViewController {
        let queue = DispatchQueue(label: "com.app.concurrentQueue", attributes: .concurrent)
        let images: [UIImage] = [UIImage](repeating: UIImage(), count: 5)

        @IBAction func handleTap(_ sender: Any) {
            for img in images {
                queue.async { [unowned self] in
                    self.compute(img)
                }
            }
        }

        private func compute(_ img: UIImage) {
            // Pretending to post-process a large image.
            var counter = 0
            for _ in 0..<9999999 {
                counter += 1
            }
        }
    }

Running that through the profiler, we can see that the app is now spinning up 5 discrete threads to parallelize the loop.

Parallelization of N Tasks

So far, we've looked at pushing computationally expensive tasks onto background threads without clogging up the UI thread. But what about executing parallel tasks with some restrictions? How can Spotify download multiple songs in parallel, while limiting the maximum number to 3? We can go about this in a few ways, but this is a good time to explore another important construct in multithreaded programming: semaphores.

Semaphores are signaling mechanisms, commonly used to control access to a shared resource. Imagine a scenario where a thread can lock access to a certain section of code while it executes it, and unlock it once it's done, letting other threads execute that section. You see this type of behavior in database writes and reads, for example. What if you want only one thread writing to a database, with no reads happening during that time? This is a common thread-safety concern addressed by a readers-writer lock. Semaphores can be used to control concurrency in our app by limiting execution to n threads at a time.

    let kMaxConcurrent = 3 // Or 1 if you want strictly ordered downloads!
    let semaphore = DispatchSemaphore(value: kMaxConcurrent)
    let downloadQueue = DispatchQueue(label: "com.app.downloadQueue", attributes: .concurrent)

    class ViewController: UIViewController {
        @IBOutlet weak var tableView: UITableView!  // outlet added so the UI update below compiles

        @IBAction func handleTap(_ sender: Any) {
            for i in 0..<15 {
                downloadQueue.async { [unowned self] in
                    // Wait until one of the 3 slots frees up
                    semaphore.wait()
                    // Expensive task
                    self.download(i + 1)
                    // Update the UI on the main thread, always!
                    DispatchQueue.main.async {
                        self.tableView.reloadData()
                        // Release the slot
                        semaphore.signal()
                    }
                }
            }
        }

        private func download(_ songId: Int) {
            var counter = 0
            // Simulate semi-random download times.
            for _ in 0..<Int.random(in: 999999...10000000) {
                counter += songId
            }
        }
    }

Notice how we've effectively restricted our download system to at most k simultaneous downloads. The moment one download finishes (its thread is done executing), it signals the semaphore, allowing the managing queue to spawn another thread and start downloading another song. You can apply a similar pattern to database transactions when dealing with concurrent reads and writes.

Semaphores usually aren't necessary for code like the one in our example, but they become more powerful when you need to enforce synchronous behavior while consuming an asynchronous API. The above code would work just as well with a custom OperationQueue with a maxConcurrentOperationCount, but it's a worthwhile tangent regardless.
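For the readers-writer situation mentioned above, GCD also offers a lighter tool than a semaphore: a concurrent queue with a barrier. Here's a minimal sketch under assumed names (isolationQueue and cache are illustrative, not from this article):

    let isolationQueue = DispatchQueue(label: "com.app.isolationQueue", attributes: .concurrent)
    var cache: [String: String] = [:]  // the shared resource

    // Reads run concurrently with one another.
    func value(forKey key: String) -> String? {
        return isolationQueue.sync { cache[key] }
    }

    // A barrier write waits for in-flight reads, runs alone, then lets reads resume.
    func setValue(_ value: String, forKey key: String) {
        isolationQueue.async(flags: .barrier) {
            cache[key] = value
        }
    }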
Finer Control with OperationQueue

GCD is great when you want to dispatch one-off tasks or closures into a queue in a "set-it-and-forget-it" fashion, and it provides a very lightweight way of doing so. But what if we want to create a repeatable, structured, long-running task that produces associated state or data? And what if we want to model a chain of such operations that can be cancelled, suspended, and tracked, while still working with a closure-friendly API? This would be quite cumbersome to achieve with GCD. We want a more modular way of defining a group of tasks while maintaining readability and exposing a greater amount of control. In this case, we can use Operation objects and queue them onto an OperationQueue, which is a high-level wrapper around DispatchQueue. Let's look at some of the benefits these abstractions offer in comparison to the lower-level GCD API:

- You may want to create dependencies between tasks, and while you could do this via GCD, you're better off defining them concretely as Operation objects, or units of work, and pushing them onto your own queue. This allows for maximum reusability, since you may use the same pattern elsewhere in an application.
- The Operation and OperationQueue classes have a number of properties that can be observed using KVO (Key-Value Observing). This is another important benefit if you want to monitor the state of an operation or operation queue.
- Operations can be paused, resumed, and cancelled. Once you dispatch a task using Grand Central Dispatch, you no longer have control or insight into its execution. The Operation API is more flexible in that respect, giving the developer control over the operation's life cycle.
- OperationQueue allows you to specify the maximum number of queued operations that can run simultaneously, giving you a finer degree of control over concurrency (a short sketch of this appears at the end of this section).

The usage of Operation and OperationQueue could fill an entire blog post, but let's look at a quick example of what modeling dependencies looks like. (GCD can also create dependencies, but you're better off dividing up large tasks into a series of composable sub-tasks.) In order to create a chain of operations that depend on one another, we could do something like this:

    class ViewController: UIViewController {
        let queue = OperationQueue()
        var rawImage: UIImage? = nil
        let imageUrl = URL(string: "https://example.com/portrait.jpg")!
        @IBOutlet weak var imageView: UIImageView!

        override func viewDidLoad() {
            super.viewDidLoad()

            let downloadOperation = BlockOperation {
                // Assign directly, so the dependent operation below sees the result.
                self.rawImage = Downloader.downloadImageWithURL(url: self.imageUrl)
            }

            let filterOperation = BlockOperation {
                let filteredImage = ImgProcessor.addGaussianBlur(self.rawImage)
                // UI updates still belong on the main queue.
                OperationQueue.main.addOperation {
                    self.imageView.image = filteredImage
                }
            }

            filterOperation.addDependency(downloadOperation)
            [downloadOperation, filterOperation].forEach { queue.addOperation($0) }
        }
    }

(Downloader and ImgProcessor stand in for your own networking and image-processing helpers.) So why not opt for a higher-level abstraction and avoid using GCD entirely? While GCD is ideal for inline asynchronous processing, Operation provides a more comprehensive, object-oriented model of computation for encapsulating all of the data around structured, repeatable tasks in an application. Developers should use the highest level of abstraction possible for any given problem, and for scheduling consistent, repeated work, that abstraction is Operation. Other times, it makes more sense to sprinkle in some GCD for one-off tasks or closures that we want to fire. We can mix both OperationQueue and GCD to get the best of both worlds.
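To make the control benefits concrete, here is a minimal sketch (queue and operation names are illustrative) of capping concurrency and cancelling in-flight work, two things a fire-and-forget GCD block can't offer:

    let processingQueue = OperationQueue()
    processingQueue.maxConcurrentOperationCount = 3  // at most 3 operations run at once

    let operations = (1...10).map { index in
        BlockOperation {
            print("processing item \(index)")
        }
    }
    processingQueue.addOperations(operations, waitUntilFinished: false)

    // Later, e.g. when the user navigates away: unlike dispatched GCD blocks,
    // queued operations can be cancelled wholesale.
    processingQueue.cancelAllOperations()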
The Cost of Concurrency

DispatchQueue and friends are meant to make it easier for the application developer to execute code concurrently. However, these technologies do not guarantee improvements to the efficiency or responsiveness of an application. It is up to you to use queues in a manner that is both effective and does not impose an undue burden on other resources. For example, it's entirely possible to create 10,000 tasks and submit them to a queue, but doing so would allocate a nontrivial amount of memory and introduce a lot of overhead for the allocation and deallocation of operation blocks. This is the opposite of what you want! It's best to profile your app thoroughly to ensure that concurrency is enhancing your app's performance and not degrading it.

We've talked about how concurrency comes at a cost in terms of complexity and allocation of system resources, but introducing concurrency also brings a host of other risks, like:

- Deadlock: A situation where a thread locks a critical portion of the code and can halt the application's run loop entirely. In the context of GCD, you should be very careful when using dispatchQueue.sync { } calls, as you can easily get into situations where two synchronous operations end up waiting for each other.
- Priority Inversion: A condition where a lower-priority task blocks a higher-priority task from executing, effectively inverting their priorities. Since GCD allows different levels of priority on its background queues, this is a real possibility.
- Producer-Consumer Problem: A race condition where one thread is creating a data resource while another thread is accessing it. This is a synchronization problem, and can be solved using locks, semaphores, serial queues, or a barrier dispatch if you're using concurrent queues in GCD.
- ...and many other sorts of locking and data-race conditions that are hard to debug! Thread safety is of the utmost concern when dealing with concurrency.

Parting Thoughts + Further Reading

If you've made it this far, I applaud you. Hopefully this article gives you a lay of the land when it comes to multithreading techniques on iOS, and how you can use some of them in your app. We didn't get to cover many of the lower-level constructs like locks and mutexes and how they help us achieve synchronization, nor did we dive into concrete examples of how concurrency can hurt your app. We'll save those for another day, but you can dig into some additional reading and videos if you're eager to dive deeper.

- Building Concurrent User Interfaces on iOS (WWDC 2012)
- Concurrency and Parallelism: Understanding I/O
- Apple's Official Concurrency Programming Guide
- Mutexes and Closure Capture in Swift
- Locks, Thread Safety, and Swift
- Advanced NSOperations (WWDC 2015)
- NSHipster: NSOperation

Full Article Code
urr Scurry: A Race-To-Finish Scavenger Hunt App By feedproxy.google.com Published On :: Thu, 26 Mar 2020 13:58:00 -0400 We have a lot of traditions here at Viget, many of which you may have read about - TTT, FLF, Pointless Weekend. There are others, but you have to be an insider for more information on those. Pointless Weekend is one of our favorite traditions, though. It's been around for over a decade, and some pretty fun work has come out of it over the years, like Storyboard, Baby Bookie, and Short Order. At a high level, we take 48 hours to build a tool, experiment, or stunt as a team, across all four of our offices. These projects are entirely separate from our client work, and we use them to try out new technologies, explore roles on the team, and stress-test our processes.

The first step for a Pointless Weekend is assembling the teams. We had two teams this year, with a record number of participants. You can read about TrailBuddy, what the other team built, here. The Scurry team was split between the DC and Durham offices, so all meetings were held via Hangout. Once we were assembled, we set out to understand the constraints and goals of our Pointless Project. We went into this weekend with an extra pep in our step, as we were determined to build something for the upcoming Viget 20th anniversary TTT this summer. Here's what we knew we wanted:

- An activity all Vigets could do together, where they could create memories, and share broadly on social
- Something that we could use in a spotty network at C Lazy U Ranch in Colorado
- A product we can share with others: corporate groups, families and friends, schools, bachelor/ette parties

We landed on a scavenger hunt native app, which we named Scurry (Scavenger + Hurry = Scurry. Brilliant, right?). There are already a few scavenger hunt apps available, so we set out to create something that was:

- Quick and easy to set up hunts
- Free and intuitive for users
- A nice combination of trivia and activities
- Social! We wanted to enable teams to share photos and progress

One of the main reasons we have Pointless Weekends is to test out new technologies and processes. In that vein, we tried out Notion as our central organizing tool - we used it for user journeys, data modeling, and even writing tickets, which we typically use Github for.

We tested out Notion as our primary tool, writing tickets and tracking progress.

When we built the app, we needed to prepare for spotty network service, as internet connectivity isn't guaranteed at C Lazy U Ranch - where our Viget20 celebration will be. A Progressive Web Application (PWA) didn't make sense for our tech requirements, so we chose the route of creating a native application. There are a number of options available for building native applications, but as we were looking to make as much progress as possible in 48 hours, we chose one of our favorite frameworks: React Native. React Native allows developers to build true, cross-platform native applications using some of our favorite technologies: JavaScript, the React framework, and a native-specific variant of CSS. We decided on the turn-key solution Expo. Expo has extra tooling that allows for easy development, deployment, and debugging.

This is a snapshot of our app and Expo.

Our frontend developers were able to immediately dive in, making screens and styling components, and quickly made the mockups in Whimsical a reality. On the backend, we used the supported library to connect to the backend datastore, Firebase.
Firebase is a hosted solution for data storage, with key features like authentication, realtime updates, and offline support built in. Our backend developer worked just behind the frontend developers, hooking those views up to live data. Both of these tools, Expo and Firebase, were easy to use and allowed us to focus on building a working application quickly, rather than getting mired in setup or bespoke solutions to common problems.

Whimsical is one of our favorite tools for building out mockups of an app.

We made impressive progress in our 48-hour sprint, but there's still some work to do. We have some additional features we hope to add before TTT, which will require additional testing and refining. For now, stay tuned and sign up for our newsletter. We'll be sure to share when Scurry is ready for the world! Full Article News & Culture
urr Finite dimensional simple modules of $(q, \mathbf{Q})$-current algebras. (arXiv:2004.11069v2 [math.RT] UPDATED) By arxiv.org Published On :: The $(q, \mathbf{Q})$-current algebra associated with the general linear Lie algebra was introduced by the second author in the study of the representation theory of cyclotomic $q$-Schur algebras. In this paper, we study the $(q, \mathbf{Q})$-current algebra $U_q(\mathfrak{sl}_n^{\langle \mathbf{Q} \rangle}[x])$ associated with the special linear Lie algebra $\mathfrak{sl}_n$. In particular, we classify finite dimensional simple $U_q(\mathfrak{sl}_n^{\langle \mathbf{Q} \rangle}[x])$-modules. Full Article
urr A regularity criterion of the 3D MHD equations involving one velocity and one current density component in Lorentz spaces. (arXiv:2005.03377v1 [math.AP]) By arxiv.org Published On :: In this paper, we study the regularity criterion of weak solutions to the three-dimensional (3D) MHD equations. It is proved that the solution $(u,b)$ becomes regular provided that one velocity component and one current density component of the solution satisfy
$$u_{3} \in L^{\frac{30\alpha}{7\alpha-45}}\left(0,T; L^{\alpha,\infty}\left(\mathbb{R}^{3}\right)\right) \text{ with } \frac{45}{7} \leq \alpha \leq \infty,$$
and
$$j_{3} \in L^{\frac{2\beta}{2\beta-3}}\left(0,T; L^{\beta,\infty}\left(\mathbb{R}^{3}\right)\right) \text{ with } \frac{3}{2} \leq \beta \leq \infty,$$
which generalizes some known results. Full Article
urr Recurrent Neural Network Language Models Always Learn English-Like Relative Clause Attachment. (arXiv:2005.00165v3 [cs.CL] UPDATED) By arxiv.org Published On :: A standard approach to evaluating language models analyzes how models assign probabilities to valid versus invalid syntactic constructions (i.e. is a grammatical sentence more probable than an ungrammatical sentence). Our work uses ambiguous relative clause attachment to extend such evaluations to cases of multiple simultaneous valid interpretations, where stark grammaticality differences are absent. We compare model performance in English and Spanish to show that non-linguistic biases in RNN LMs advantageously overlap with syntactic structure in English but not Spanish. Thus, English models may appear to acquire human-like syntactic preferences, while models trained on Spanish fail to acquire comparable human-like preferences. We conclude by relating these results to broader concerns about the relationship between comprehension (i.e. typical language model use cases) and production (which generates the training data for language models), suggesting that necessary linguistic biases are not present in the training signal at all. Full Article