
Grand Canyon Rangers Arrest Guide for Operating in Park without a Permit

On Friday, August 31, after approximately four weeks of investigation, Grand Canyon National Park rangers arrested 42-year-old Brian Thompson of Cottonwood, Arizona, for conducting commercial operations in a national park without a permit. https://www.nps.gov/grca/learn/news/2012-09-21_arrest.htm





Fire crews are actively working to suppress the lightning-ignited Imperial Fire. Currently the fire is estimated to be three (3) acres in size and is located along the Cape Royal Road near Vista Encantada. https://www.nps.gov/grca/learn/news/imperial-fire-being-suppressed-on-north-rim-of-grand-canyon-national-park-20180718.htm





Current Grand Canyon National Park Closures as of August 8, 2018

This is a summary of current fire related closures for Grand Canyon National Park. Today, new temporary trail closures were implemented that include the Nankoweap Trail, the Point Imperial Trail, and Fire Point on the North Rim. https://www.nps.gov/grca/learn/news/2018-08-08-current-grand-canyon-national-park-closures.htm





Preliminary Findings Indicate No Current Uranium Ore Exposure at Grand Canyon

Preliminary findings of an interagency safety review conducted last week at Grand Canyon National Park indicate no current exposure concerns for park employees and visitors from uranium ore samples previously stored in buckets at the park's Museum Collection building. https://www.nps.gov/grca/learn/news/preliminary-findings-indicate-no-current-uranium-ore-exposure.htm





Irreversible No Longer: Blind Mice See Again Thanks To New Method of Synthesizing Lost Cells

Rather than opting for the costly and complex process of using stem cells to cure age-related macular degeneration, scientists used skin cells.

The post Irreversible No Longer: Blind Mice See Again Thanks To New Method of Synthesizing Lost Cells appeared first on Good News Network.





The rules on having a bonfire in your garden as Surrey councils warn against them

While it is not illegal to have a bonfire, some Surrey councils are urging residents not to light them





Two maternity hubs open in Surrey so women have same midwife through antenatal and birth

There are two new sites, one in Cranleigh and the other in Farnham





Inside Their Hidden World: Tracking the Elusive Marbled Murrelet

The marbled murrelet (Brachyramphus marmoratus) is a threatened coastal bird that feeds on fish and nests in old-growth forests. In northwest Washington, murrelet populations are declining despite protections provided by the Northwest Forest Plan.





A vertical slide with current page override.

A vertical sliding menu with current page styling and the ability to override the current page style when hovering other items.





A dropdown menu with current page override.

A dropdown menu with current page styling and the ability to override the current page style when hovering other items.





A peculiar IE bug that allows irregular image maps with PNG images

A method of creating irregular-shaped image maps with ease by exploiting an odd effect when using Microsoft's AlphaImageLoader to render background PNG images. Only works in IE, though.





Dropline menu with current page override

A CSS only dropline menu with current page selection and override when hovering other tabs.





When a Red Arrows flypast to commemorate VE Day will fly over part of Surrey

SurreyLive has detailed the times and locations the aerobatics team is expected in the county





The parts of Surrey, Hampshire and Sussex that could see Spitfire flypast commemorating VE Day

The Spitfires will be flying over 11 locations across the three counties





Number of coronavirus deaths at Surrey hospital trusts rises to 980

The latest figures have been announced by NHS England





Surrey Police issue statement after armed officers and helicopter called to Guildford in early hours

Armed officers and a police helicopter were in Guildford during the early hours





Number of coronavirus deaths at Surrey hospital trusts rises to 983

The latest NHS figures show a small increase in recorded deaths





EPISODE 1—SCARRED FOR LIFE: WHAT TREE RINGS CAN REVEAL ABOUT FIRE HISTORY

April 2012—To anticipate how a changing climate might impact future forest fires, scientists need to understand the past. But how can you tell the frequency and severity of wildfires that occurred hundreds or even thousands of years ago? Part of the answer lies in tree rings (6:09)





Rocky To Bullwinkle: Understanding Flying Squirrels Helps Us Restore Dry Forest Ecosystems

A century of effective fire suppression has radically transformed many forested landscapes on the east side of the Cascades. Managers of dry forests critically need information to help plan for and implement forest restoration. Management priorities include the stabilization of fire regimes and the maintenance of habitat for the northern spotted owl and other old-forest associates.





Northwest Forest Plan-The First 10 Years (1994-2003): Status and Trends of Populations and Nesting Habitat For The Marbled Murrelet

The Northwest Forest Plan (the Plan) is a large-scale ecosystem management plan for federal land in the Pacific Northwest. Marbled murrelet (Brachyramphus marmoratus) populations and habitat were monitored to evaluate effectiveness of the Plan. The chapters in this volume summarize information on marbled murrelet ecology and present the monitoring results for marbled murrelets over the first 10 years of the Plan, 1994 to 2003.





Regional population monitoring of the marbled murrelet: field and analytical methods

The marbled murrelet (Brachyramphus marmoratus) ranges from Alaska to California and is listed under the Endangered Species Act as a threatened species in Washington, Oregon, and California. Marbled murrelet recovery depends, in large part, on conservation and restoration of breeding habitat on federally managed lands. A major objective of the Northwest Forest Plan (the Plan) is to conserve and restore nesting habitat that will sustain a viable marbled murrelet population. Under the Plan, monitoring is an essential component and is designed to help managers understand the degree to which the Plan is meeting this objective. This report describes methods used to assess the status and trend of marbled murrelet populations under the Plan.





A review of the literature on seed fate in whitebark pine and the life history traits of Clark's nutcracker and pine squirrels

Whitebark pine is a critical component of subalpine ecosystems in western North America, where it contributes to biodiversity and ecosystem function and in some communities is considered a keystone species. Whitebark pine is undergoing rangewide population declines attributed to the combined effects of mountain pine beetle, white pine blister rust, and fire suppression. The restoration and maintenance of whitebark pine populations require an understanding of all aspects of seed fate. In this paper, we review the literature on seed dispersal in whitebark pine. Clark's nutcracker, pine squirrels, and scatter-hoarding rodents are all known to influence whitebark pine seed fate and ultimately affect the ability of whitebark pine populations to regenerate. We also provide a general overview of the natural histories of the most influential species involved with whitebark pine seed fate: Clark's nutcracker and the pine squirrel.





Wood energy for residential heating in Alaska: current conditions, attitudes, and expected use.

This study considered three aspects of residential wood energy use in Alaska: current conditions and fuel consumption, knowledge and attitudes, and future use and conditions. We found that heating oil was the primary fuel for home heating in southeast and interior Alaska, whereas natural gas was used most often in south-central Alaska (Anchorage). Firewood heating played a much more important role as a secondary (vs. primary) heating source in all regions of Alaska. In interior Alaska, there was a somewhat greater interest in the use of wood energy compared to other regions. Likewise, consumption of fossil fuels was considerably greater in interior Alaska.





Nontimber forest products in the United States: Montreal Process indicators as measures of current conditions and sustainability.

The United States, in partnership with 11 other countries, participates in the Montreal Process. Each country assesses national progress toward the sustainable management of forest resources by using a set of criteria and indicators agreed on by all member countries. Several indicators focus on nontimber forest products (NTFPs). In the United States, permit and contract data from the U.S. Forest Service and the Bureau of Land Management, in addition to several other data sources, were used as a benchmark to assess harvest, value, employment, exports and imports, per capita consumption, and subsistence uses for many NTFPs. The retail value of commercial harvests of NTFPs from U.S. forest lands is estimated at $1.4 billion annually. Nontimber forest products in the United States are important to many people throughout the country for personal, cultural, and commercial uses, providing food security, beauty, connection to culture and tradition, and income.





Terrestrial species viability assessments for national forests in northeastern Washington.

We developed a process to address terrestrial wildlife species for which management for ecosystem diversity may be inadequate for providing ecological conditions capable of sustaining viable populations. The process includes (1) identifying species of conservation concern, (2) describing source habitats and other important ecological factors, (3) organizing species into groups, (4) selecting surrogate species for each group, (5) developing surrogate species assessment models, (6) applying surrogate species assessment models to evaluate current and historical conditions, (7) developing conservation considerations, and (8) designing monitoring and adaptive management. Following the application of our species screening criteria, we identified 209 of 700 species as species of concern on National Forest System lands east of the Cascade Range in Washington state. We aggregated the 209 species of conservation concern into 10 families and 28 groups based primarily on their habitat associations (these are not phylogenetic families). We selected 32 primary surrogate species (78 percent birds, 17 percent mammals, 5 percent amphibians) for application in northeastern Washington, based on risk factors and ecological characteristics. Our assessment documented reductions in habitat capability across the assessment area compared to historical conditions. We combined management considerations for individual species with other surrogate species to address multiple species. This information may be used to inform land management planning efforts currently underway on the Okanogan-Wenatchee and Colville National Forests in northeastern Washington.





Northwest Forest Plan—the first 15 years (1994–2008): status and trend of nesting habitat for the marbled murrelet

The primary objectives of the effectiveness monitoring plan for the marbled murrelet (Brachyramphus marmoratus) include mapping baseline nesting habitat (at the start of the Northwest Forest Plan [the Plan]) and estimating changes in that habitat over time. Using vegetation data derived from satellite imagery, we modeled habitat suitability by using a maximum entropy model. We used Maxent software to compute habitat suitability scores from vegetation and physiographic attributes based on comparisons of conditions at 342 sites that were occupied by marbled murrelets (equal numbers of confirmed nest sites and likely nest sites) and average conditions over all forested lands in which the murrelets occurred. We estimated 3.8 million acres of higher suitability nesting habitat over all lands in the murrelet's range in Washington, Oregon, and California at the start of the Plan (1994/96). Most (89 percent) baseline habitat on federally administered lands occurred within reserved-land allocations. A substantial amount (36 percent) of baseline habitat occurred on nonfederal lands. Over all lands, we observed a net loss of about 7 percent of higher suitability potential nesting habitat from the baseline period to 2006/07. If we focus on losses and ignore gains, we estimate a loss of about 13 percent of the higher suitability habitat present at baseline, over this same period. Fire has been the major cause of loss of nesting habitat on federal lands since the Plan was implemented; timber harvest is the primary cause of loss on nonfederal lands. We also found that murrelet population size is strongly and positively correlated with amount of nesting habitat, suggesting that conservation of remaining nesting habitat and restoration of currently unsuitable habitat is key to murrelet recovery.





The world's top goalscorer and NUFC legend's son - the bizarre transfer links

Newcastle United fans have been treated to some interesting names when it comes to transfers





WFUZ (ALT 92.1)/Wilkes Barre-Scranton Launches Majority Rules, Weeknights From 7p-Mid

TIMES-SHAMROCK COMMUNICATIONS Alternative WFUZ (ALT 92.1)/WILKES BARRE-SCRANTON has kicked off MAY with a new weeknight show, "MAJORITY RULES," allowing listeners to control the …





Sony/ATV Signs Gabby Barrett To Publishing Admin Deal

SONY/ATV MUSIC PUBLISHING has signed singer/songwriter GABBY BARRETT to a global publishing administration deal, hot on the heels of her first #1 single, “I Hope,” on her WARNER …





RAB 'Open For Business' Live Video Series Offers Presentation By Entercom's Jennifer Morrelli On Audience Engagement

The RADIO ADVERTISING BUREAU's next webinar in its "Business Unusual" program's "Open for Business" series, “Creating Audience Engagement,” will …





KCMP (89.3 The Current)/Minneapolis’ Jim McGuinn And Glassnote’s Nick Petropoulos Collaborate On Videos To Support Charity

While sheltering-at-home in UPSTATE NEW YORK, GLASSNOTE Head Of Promotion NICK PETROPOULOS sent KCMP (89.3 THE CURRENT)/MINNEAPOLIS PD JIM MCGUINN a song of guitar riffs and an email about …





Eminem Confronts An Intruder In His Home And An Arrest Is Made

Hip Hop artist EMINEM reportedly confronted and detained an intruder in his DETROIT-area home in early APRIL. The alleged intruder, MATTHEW DAVID HUGHES, had set off a security alarm when he …





Music Executive Andre Harrell Dead At 59

Multiple media sources have reported that veteran music executive ANDRE HARRELL has died at the age of 59. The cause of death is not yet known. A native NEW YORKER, he was the Founder of …





Better science needed to support clinical predictors that link cardiac arrest, brain injury, and death: a statement from the American Heart Association

Statement Highlights: While significant improvements have been made in resuscitation and post-cardiac arrest resuscitation care, mortality remains high and is mainly attributed to widespread brain injury. Better science is needed to support the ...





Las Pozas: The Surrealistic Wonderland Hidden in the Middle Of The Jungle

After losing 20,000 orchids in an unseasonal frost, “extravagant” Englishman Edward James turned to his real love, surrealism, and...





Florida Man Arrested Trying To Quarantine On Abandoned Disney Treasure Island, And That’s What This Island Looks Like From The Inside

The 42-year-old said he didn’t hear numerous deputies searching the private island for him on foot, by boat and by...















Trump sets up states’ rights battle; most conservative governors surrender

After more than a decade in the making, the Tea Party moment has finally arrived.

The movement originated in 2009 as a challenge to runaway taxes, spending and regulation. Organizers sought to restore the constitutional balance of power between the states and the federal government.

Eventually, the Tea Party devolved into a catchall for right-wing populism, and a magnet for xenophobes and culture warriors. In 2016, its early adherents overwhelmingly fell in line with President Donald Trump, choosing protectionism over freedom.

But that original Tea Party spirit — the charge to buck the national government in favor of local control — was on full display recently from two unlikely sources.

Trump decided early on in the coronavirus pandemic that the federal government would not centrally coordinate the purchase and distribution of medical supplies. That might have worked fine, except the Trump administration actively undermined state governments’ efforts. The federal government has outbid state buyers and even seized products from states.

After 3 million masks ordered by the Massachusetts governor were confiscated in New York, Republican Gov. Charlie Baker decided to sidestep the usual procurement process. He sent a New England Patriots’ private airplane to bring supplies back from China.

In Maryland, Republican Gov. Larry Hogan coordinated a large COVID-19 test order from South Korea. The delivery was facilitated by the National Guard and state police, and the tests were put in a secure location with armed security.

“We guarded that cargo from whoever might interfere with us getting that to our folks that needed it,” Hogan said last week in an interview with Washington Post Live.

Hogan and Baker don’t fit the common perception of the Tea Party mold. They both have harshly criticized President Donald Trump and supported the impeachment inquiry. Hogan openly considered challenging Trump for the GOP presidential nomination.

They are among the last vestiges of moderate conservatism in American executive office, and yet they are the ones waging a battle over federalism and states’ rights.

The political minds built for this moment — the ones who have long fantasized about escalating the state-federal power struggle — are not up to the task. The conservative firebrands who should be taking up this fight instead are beholden to Trump and whatever cockamamie plans he comes up with.

At a news conference last month, Trump made a striking claim about his powers in managing the public health crisis: “When somebody is the president of the United States, the authority is total. And that’s the way it’s got to be. It’s total.”

That should have been a flashpoint for conservatives, the beginning of a revitalized Tea Party that recognizes the enormous threat Trumpism poses to our values.

But it wasn’t. Loyalists brushed it off, again, as Trump misspeaking.

The small-government philosophy is founded on the likelihood that the levers of government power will eventually be grabbed by some menace, an incompetent or malicious figure. But when that menace is your friend, your fundraiser and your public relations manager, it proves hard to slap his hand away.

adam.sullivan@thegazette.com; (319) 339-3156





Big oil overreaches on COVID-19 bailout

Like everyone, U.S. oil companies have been hit hard by the pandemic, and they are looking for relief. Oil companies have requested special access to a $600 billion lending facility at the Federal Reserve, and the administration seems keen to deliver. The president just announced that the Secretary of Energy and Secretary of the Treasury would make funds available, and the Department of Energy is also floating a $7 billion plan to pay drillers to leave oil in the ground.

Unfortunately, at least one faction of the industry — a group of refiners that traditionally profit when crude feedstocks are cheap — is angling for much more than a financial bailout. They are using the pandemic as cover to cannibalize markets vital to U.S. biofuel producers and farmers.

Their plan, outlined in a letter from several oil-patch governors, would require the Environmental Protection Agency (EPA) to halt enforcement of the Renewable Fuel Standard (RFS). It would allow refiners to stop offering biofuel blends at the fuel pump, eliminating the market for U.S. ethanol and biodiesel and decimating demand for billions of bushels of corn and soybeans used to make renewable motor fuel.

With half the nation’s 200-plus biofuel plants already offline, thousands of rural workers facing layoffs, and millions of U.S. farmers on financial life support, the destruction of the RFS would be an economic death knell for rural America.

It’s hard to imagine why refiners would expect the Trump administration to take the request seriously. The misguided plan would inflict incredible collateral damage on our economy, our energy security, and the President’s prospects with rural voters. Notably, the courts rejected similar abuse in 2016. Even former EPA Administrator Scott Pruitt, who scorned American farmers, rejected a similar plan back in 2017.

Nevertheless, refiners saw the current health crisis as a political opportunity and went in for the kill. Fortunately, farm state champions are pushing back. Governors from Kansas, Iowa, Nebraska, South Dakota and Minnesota condemned the oil-backed plan. They wrote, “Using this global pandemic as an excuse to undercut the RFS is not just illegal; it would also sever the economic lifeline that renewable fuels provide for farmers, workers and rural communities across the Midwest.”

Aside from the sheer audacity, the refinery-backed plan also suffers from a major flaw — it wouldn’t change the economic situation of a single refinery. They claim that lifting the RFS would eliminate the costs associated with biofuel credits known as RINs, which are used to demonstrate compliance with the nation’s biofuel targets. Refiners that refuse to produce biofuel blends can purchase RINs from those that blend more ethanol or biodiesel into the fuel mix. In turn, when they sell a gallon of fuel, that RIN price is reflected in their returns. The oil industry’s own reports show that “there is no economic harm to RIN purchasers, even if RIN prices are high, because those costs are recouped in the gasoline blend stock and diesel.”

Even in a fictional scenario where costs aren’t automatically recouped, a detailed EPA analysis found that “all obligated parties, including the small refiners subject to the RFS program, would be affected at less than 1 percent of their sales (i.e., the estimated costs of compliance with the rule would be less than 1 percent of their sales) even when we did not consider their potential to recover RIN costs — with the estimated cost-to-sales percentages ranging from -0.04 percent (a cost savings) to 0.006 percent.”

Clearly, a 0.006 percent savings isn’t going to protect any refinery jobs, but refineries are betting that DC policymakers don’t know the difference between RIN values and compliance costs. They open one side of a ledger and hope that no one asks to see the next page.

Meanwhile, the nation’s biggest oil lobby, American Petroleum Institute, is calling on the EPA to simply cut 770 million gallons of biofuel out of the 2020 targets. Earlier this year, regulators approved a modest bump in biofuels to address a small fraction of the four billion gallons lost to secretive EPA refinery exemptions. The courts have since sided against the handouts, but the EPA has refused to implement the decision. Now, API says the agency should rip away the few gallons clawed back by U.S. farmers. It’s a baseless argument with one goal: blocking competition at the fuel pump.

Keep in mind, collapsing demand for motor fuel is just as hard on the nation’s biofuel producers. RFS targets enforced by the EPA are based on a percentage of each gallon sold — so if refiners make less fuel, their obligations under the law shrink at an equal rate. Meanwhile, biofuel producers across the heartland are closing their doors, as even their modest 10 percent share of the market has been cut in half.

Biofuel advocates are focused on their own survival. Iowa Sen. Chuck Grassley summed it up, saying “[T]here ought to be parity for all liquid fuels. So I look forward to working with (Agriculture) Secretary (Sonny) Perdue to make sure that our biofuels industry gets through this crisis so that we can continue to use America’s (home) grown energy in our gas tanks.”

Parity makes sense, but refinery lobbyists want more. The Trump EPA should reject the latest anti-biofuel pitch because it’s bad policy, but more than that, it’s an insulting attempt to capitalize on a health crisis to make an end run around the truth.

Former Missouri Sen. Jim Talent spearheaded the Renewable Fuel Standard in 2005. He currently serves as co-chair of Americans for Energy Security and Innovation.






18-year-old charged in fatal shooting arrested for drunken driving while out on bail

CEDAR RAPIDS — A 17-year-old, charged in January with fatally shooting an 18-year-old during a drug robbery, was released in March only to be arrested about a month later for drunken driving.

Kyler David Carson, now 18, of Cedar Rapids, was charged last month with operating while intoxicated and unlawful possession of an anti-anxiety prescription drug.

After two judges reduced Carson’s bail, he bonded out and was released pending trial.

Police arrested Carson April 24 when they believed he was driving under the influence of alcohol or drugs, according to a criminal complaint.

He provided a breath sample, which showed no signs of alcohol, but refused to provide a urine sample for chemical testing, the complaint states.

In January, Carson was charged with voluntary manslaughter, delivery of a controlled substance-marijuana, carrying weapons and obstructing prosecution.

He is accused of fatally shooting Andrew D. Gaston, 18, on Jan. 24, as Gaston and his cousin, Tyrell J. Gaston, 16, were attempting to rob marijuana from Carson, according to a criminal complaint.

Police received a report of shots being fired at 11:48 p.m. and found Andrew and Tyrell Gaston with gunshot wounds in the parking lot of 3217 Agin Court NE.

During the investigation, police learned the Gaston cousins had arranged, with the help of others, to rob Carson that night. Witnesses told investigators they contacted Carson and “lured” him to the address to rob him of marijuana.

Carson thought he was called that night to sell 45 pre-rolled tubes of marijuana for $900, according to a criminal complaint.

While Carson was delivering marijuana to the others in their car, the cousins and a third person ambushed Carson from behind, according to a criminal complaint.

Andrew Gaston struck Carson in the back of the head with a metal object. Carson then turned around and exchanged gunfire with Tyrell Gaston before running from the parking lot, witnesses told police.

Both Carson and Tyrell Gaston later discarded their firearms, which police didn’t recover, according to the complaint.

Tyrell Gaston also was charged with first-degree robbery, conspiracy to deliver a controlled substance-marijuana, carrying weapons and obstructing prosecution.

A judge, during Carson’s initial appearance in the fatal shooting, set his bail at $50,000 cash only, according to court documents. His bail was amended, in agreement with the prosecutor and Carson’s lawyer, to $50,000 cash or surety March 23 by 6th Judicial Associate District Judge Russell Keast.

Carson remained in jail, but his lawyer asked for a bond review three days later, March 26, and Associate District Judge Casey Jones lowered the bail to $30,000 cash or surety.

Carson posted bail that day, according to court documents.

Assistant Linn County Attorney Rena Schulte has filed a motion to revoke Carson’s pretrial release and will request his bail be set at $500,000. A hearing is set on the motion for next Thursday in Linn County District Court.

If convicted, Carson faces up to 19 years in the fatal shooting and up to two years for the other offenses.

Comments: (319) 398-8318; trish.mehaffey@thegazette.com





Man arrested in Texas faces murder charge in Iowa City shooting

IOWA CITY — An Iowa City man has been arrested in Texas in connection with the April 20 shooting death of Kejuan Winters.

Reginald Little, 44, was taken into custody Friday by the Lubbock County Sheriff’s Office, according to Iowa City police.

Little faces a charge of first-degree murder and is awaiting extradition back to Iowa City.

The shooting happened in an apartment at 1960 Broadway St. around 9:55 a.m. April 20. Police said gunfire could be heard during the call to police.

Officers found Winters, 21, of Iowa City, with multiple gunshot wounds. He died in the apartment.

Police said Durojaiya A. Rosa, 22, of Iowa City, and a woman were at the apartment and gave police a description of the shooter and said they heard him fighting with Winters before hearing gunshots.

Surveillance camera footage and cellphone records indicated Little was in the area before the shots were fired, police said.

Investigators also discovered Little and Rosa had been in communication about entering the apartment, and Rosa told police he and Little had planned to rob Winters.

Rosa also faces one count of first-degree murder.

The shooting death spurred three additional arrests.

Winters’ father, Tyris D. Winters, 41, of Peoria, Ill., and Tony M. Watkins, 39, of Iowa City, were arrested on attempted murder charges after confronting another person later that day in Coralville about the homicide, and, police say, shooting that person in the head and foot.

Police also arrested Jordan R. Hogan, 21, of Iowa City, for obstructing prosecution, saying he helped the suspect, Little, avoid arrest.

First-degree murder is a Class A felony punishable by an automatic life sentence.

Comments: (319) 339-3155; lee.hermiston@thegazette.com





Concurrency & Multithreading in iOS

Concurrency is the notion of multiple things happening at the same time. This is generally achieved either via time-slicing, or truly in parallel if multiple CPU cores are available to the host operating system. We've all experienced a lack of concurrency, most likely in the form of an app freezing up when running a heavy task. UI freezes don't necessarily occur due to the absence of concurrency — they could just be symptoms of buggy software — but software that doesn't take advantage of all the computational power at its disposal is going to create these freezes whenever it needs to do something resource-intensive. If you've profiled an app hanging in this way, you'll probably see a report that looks like this:

Anything related to file I/O, data processing, or networking usually warrants a background task (unless you have a very compelling excuse to halt the entire program). There aren't many reasons that these tasks should block your user from interacting with the rest of your application. Consider how much better the user experience of your app could be if instead, the profiler reported something like this:

Analyzing an image, processing a document or a piece of audio, or writing a sizeable chunk of data to disk are examples of tasks that could benefit greatly from being delegated to background threads. Let's dig into how we can introduce such behavior into our iOS applications.
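As a minimal sketch of that hand-off, here is the common Grand Central Dispatch pattern: heavy work runs on a global background queue, and the result is delivered back on a queue the caller chooses. The `analyzeImage` function and its string result are hypothetical stand-ins for real work; in a UIKit app you would pass `.main` so the completion can safely touch the UI.

```swift
import Dispatch
import Foundation

// Hypothetical heavy task: runs on a background queue, then delivers its
// result on a caller-chosen queue (in a real app, DispatchQueue.main).
func analyzeImage(deliverOn queue: DispatchQueue,
                  completion: @escaping (String) -> Void) {
    DispatchQueue.global(qos: .userInitiated).async {
        // Stand-in for expensive work (image analysis, parsing, disk I/O...).
        let result = "analysis complete"
        queue.async {
            completion(result) // the only place UI state should be touched
        }
    }
}
```

In a view controller this would be called as `analyzeImage(deliverOn: .main) { text in label.text = text }`, keeping the main thread free to process touch events while the work runs.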


A Brief History

In the olden days, the maximum amount of work per CPU cycle that a computer could perform was determined by the clock speed. As processor designs became more compact, heat and physical constraints started becoming limiting factors for higher clock speeds. Consequentially, chip manufacturers started adding additional processor cores on each chip in order to increase total performance. By increasing the number of cores, a single chip could execute more CPU instructions per cycle without increasing its speed, size, or thermal output. There's just one problem...

How can we take advantage of these extra cores? Multithreading.

Multithreading is an implementation handled by the host operating system to allow the creation and usage of n threads. Its main purpose is to provide simultaneous execution of two or more parts of a program to utilize all available CPU time. Multithreading is a powerful technique to have in a programmer's toolbelt, but it comes with its own set of responsibilities. A common misconception is that multithreading requires a multi-core processor, but this isn't the case — single-core CPUs are perfectly capable of working on many threads, but we'll take a look in a bit at why threading is a problem in the first place. Before we dive in, let's look at the nuances of what concurrency and parallelism mean using a simple diagram:

In the first situation presented above, we observe that tasks can run concurrently, but not in parallel. This is similar to having multiple conversations in a chatroom, and interleaving (context-switching) between them, but never truly conversing with two people at the same time. This is what we call concurrency. It is the illusion of multiple things happening at the same time when in reality, they're switching very quickly. Concurrency is about dealing with lots of things at the same time. Contrast this with the parallelism model, in which both tasks run simultaneously. Both execution models exhibit multithreading, which is the involvement of multiple threads working towards one common goal. Multithreading is a generalized technique for introducing a combination of concurrency and parallelism into your program.


The Burden of Threads

A modern multitasking operating system like iOS has hundreds of programs (or processes) running at any given moment. However, most of these programs are either system daemons or background processes that have a very low memory footprint, so what is really needed is a way for individual applications to make use of the extra cores available. An application (process) can have many threads (sub-processes) operating on shared memory. Our goal is to be able to control these threads and use them to our advantage.

Historically, introducing concurrency to an app has required the creation of one or more threads. Threads are low-level constructs that need to be managed manually. A quick skim through Apple's Threaded Programming Guide is all it takes to see how much complexity threaded code adds to a codebase. In addition to building an app, the developer has to:

  • Responsibly create new threads, adjusting that number dynamically as system conditions change
  • Manage them carefully, deallocating them from memory once they have finished executing
  • Leverage synchronization mechanisms like mutexes, locks, and semaphores to orchestrate resource access between threads, adding even more overhead to application code
  • Mitigate risks associated with coding an application that assumes most of the costs associated with creating and maintaining any threads it uses, and not the host OS

This is unfortunate, as it adds enormous levels of complexity and risk without any guarantees of improved performance.


Grand Central Dispatch

iOS takes an asynchronous approach to solving the concurrency problem of managing threads. Asynchronous functions are common in most programming environments, and are often used to initiate tasks that might take a long time, like reading a file from the disk, or downloading a file from the web. When invoked, an asynchronous function executes some work behind the scenes to start a background task, but returns immediately, regardless of how long the original task might take to actually complete.

A core technology that iOS provides for starting tasks asynchronously is Grand Central Dispatch (or GCD for short). GCD abstracts away thread management code and moves it down to the system level, exposing a light API to define tasks and execute them on an appropriate dispatch queue. GCD takes care of all thread management and scheduling, providing a holistic approach to task management and execution, while also providing better efficiency than traditional threads.

Let's take a look at the main components of GCD:

What've we got here? Let's start from the left:

  • DispatchQueue.main: The main thread, or the UI thread, is backed by a single serial queue. All tasks are executed in succession, so it is guaranteed that the order of execution is preserved. It is crucial that you ensure all UI updates are designated to this queue, and that you never run any blocking tasks on it. We want to ensure that the app's run loop (called CFRunLoop) is never blocked in order to maintain the highest framerate. Consequently, the main queue has the highest priority, and any tasks pushed onto this queue will be executed as soon as the run loop is free.
  • DispatchQueue.global: A set of global concurrent queues, each of which manage their own pool of threads. Depending on the priority of your task, you can specify which specific queue to execute your task on, although you should resort to using default most of the time. Because tasks on these queues are executed concurrently, it doesn't guarantee preservation of the order in which tasks were queued.

Notice how we're not dealing with individual threads anymore? We're dealing with queues which manage a pool of threads internally, and you will shortly see why queues are a much more sustainable approach to multithreading.
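To make that concrete, here is a minimal sketch of the queue-based model: work goes to a global concurrent queue, and the follow-up hops back onto the main queue. The summation is just a stand-in for real work, and dispatchMain() is only needed in a command-line context.

```swift
import Foundation

// Submit work to a global concurrent queue, then hop back to the
// main queue for anything UI-facing. The sum is placeholder work.
DispatchQueue.global(qos: .utility).async {
    let sum = (1...1_000_000).reduce(0, +) // runs on a background pool thread
    DispatchQueue.main.async {
        print("Finished on the main queue: \(sum)") // runs on the main thread
        exit(0)
    }
}
dispatchMain() // keeps a command-line process alive; apps get this for free
```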

Serial Queues: The Main Thread

As an exercise, let's look at a snippet of code below, which gets fired when the user presses a button in the app. The expensive compute function can be anything. Let's pretend it is post-processing an image stored on the device.

import UIKit

class ViewController: UIViewController {
    @IBAction func handleTap(_ sender: Any) {
        compute()
    }

    private func compute() -> Void {
        // Pretending to post-process a large image.
        var counter = 0
        for _ in 0..<9999999 {
            counter += 1
        }
    }
}

At first glance, this may look harmless, but if you run this inside of a real app, the UI will freeze completely until the loop is terminated, which will take... a while. We can prove it by profiling this task in Instruments. You can fire up the Time Profiler module of Instruments by going to Xcode > Open Developer Tool > Instruments in Xcode's menu options. Let's look at the Threads module of the profiler and see where the CPU usage is highest.

We can see that the Main Thread is clearly at 100% capacity for almost 5 seconds. That's a non-trivial amount of time to block the UI. Looking at the call tree below the chart, we can see that the Main Thread is at 99.9% capacity for 4.43 seconds! Given that a serial queue works in a FIFO manner, tasks will always complete in the order in which they were inserted. Clearly the compute() method is the culprit here. Can you imagine clicking a button just to have the UI freeze up on you for that long?
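The FIFO guarantee is easy to observe with a custom serial queue: no matter how long each task takes, tasks drain strictly in submission order. A sketch, using a DispatchGroup only to wait for completion:

```swift
import Foundation

// A serial queue drains tasks strictly in FIFO order, so this always
// prints task 1, task 2, task 3 regardless of how long each one takes.
let serialQueue = DispatchQueue(label: "com.app.serialQueue") // serial by default
let group = DispatchGroup()

for i in 1...3 {
    serialQueue.async(group: group) {
        print("task \(i)")
    }
}
group.wait() // block the caller until all three tasks have finished
```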

Background Threads

How can we make this better? DispatchQueue.global() to the rescue! This is where background threads come in. Referring to the GCD architecture diagram above, we can see that anything that is not the Main Thread is a background thread in iOS. They can run alongside the Main Thread, leaving it fully unoccupied and ready to handle other UI events like scrolling, responding to user events, animating etc. Let's make a small change to our button click handler above:

class ViewController: UIViewController {
    @IBAction func handleTap(_ sender: Any) {
        DispatchQueue.global(qos: .userInitiated).async { [unowned self] in
            self.compute()
        }
    }

    private func compute() -> Void {
        // Pretending to post-process a large image.
        var counter = 0
        for _ in 0..<9999999 {
            counter += 1
        }
    }
}

Unless specified, a snippet of code will usually default to execute on the Main Queue, so in order to force it to execute on a different thread, we'll wrap our compute call inside of an asynchronous closure that gets submitted to the DispatchQueue.global queue. Keep in mind that we aren't really managing threads here. We're submitting tasks (in the form of closures or blocks) to the desired queue with the assumption that it is guaranteed to execute at some point in time. The queue decides which thread to allocate the task to, and it does all the hard work of assessing system requirements and managing the actual threads. This is the magic of Grand Central Dispatch. As the old adage goes, you can't improve what you can't measure. So we measured our truly terrible button click handler, and now that we've improved it, we'll measure it once again to get some concrete data with regards to performance.

Looking at the profiler again, it's quite clear to us that this is a huge improvement. The task takes an identical amount of time, but this time, it's happening in the background without locking up the UI. Even though our app is doing the same amount of work, the perceived performance is much better because the user will be free to do other things while the app is processing.

You may have noticed that we accessed a global queue of .userInitiated priority. This is an attribute we can use to give our tasks a sense of urgency. If we run the same task on a global queue with a qos attribute of .background, iOS will treat it as a low-priority maintenance task, and thus allocate fewer resources to execute it. So, while we don't have control over when our tasks get executed, we do have control over their priority.
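For reference, the full spread of quality-of-service classes looks like this. The class names are Apple's; the comments are rough guidance rather than hard rules:

```swift
import Foundation

// QoS classes, from most to least urgent. A higher class means the
// system allocates more CPU and I/O resources, not a scheduling guarantee.
DispatchQueue.global(qos: .userInteractive).async { /* animations, event handling */ }
DispatchQueue.global(qos: .userInitiated).async  { /* the user is actively waiting */ }
DispatchQueue.global(qos: .utility).async        { /* long tasks with progress bars */ }
DispatchQueue.global(qos: .background).async     { /* prefetching, backups, cleanup */ }
```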

A Note on Main Thread vs. Main Queue

You might be wondering why the Profiler shows "Main Thread" and why we're referring to it as the "Main Queue". If you refer back to the GCD architecture we described above, the Main Queue is solely responsible for managing the Main Thread. The Dispatch Queues section in the Concurrency Programming Guide says that "the main dispatch queue is a globally available serial queue that executes tasks on the application’s main thread. Because it runs on your application’s main thread, the main queue is often used as a key synchronization point for an application."

The terms "execute on the Main Thread" and "execute on the Main Queue" can be used interchangeably.
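One way to convince yourself of this equivalence is to check Thread.isMainThread from both contexts. A sketch for a command-line context; in an app, the main run loop is already spinning:

```swift
import Foundation

// Work on a global queue runs on a background pool thread; work
// submitted to DispatchQueue.main always lands on the main thread.
DispatchQueue.global().async {
    print("global queue on main thread? \(Thread.isMainThread)") // false
    DispatchQueue.main.async {
        print("main queue on main thread? \(Thread.isMainThread)") // true
        exit(0)
    }
}
dispatchMain()
```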


Concurrent Queues

So far, our tasks have been executed exclusively in a serial manner. DispatchQueue.main is by default a serial queue, and DispatchQueue.global gives you four concurrent dispatch queues depending on the priority parameter you pass in.

Let's say we want to take five images, and have our app process them all in parallel on background threads. How would we go about doing that? We can spin up a custom concurrent queue with an identifier of our choosing, and allocate those tasks there. All that's required is the .concurrent attribute during the construction of the queue.

class ViewController: UIViewController {
    let queue = DispatchQueue(label: "com.app.concurrentQueue", attributes: .concurrent)
    let images = [UIImage](repeating: UIImage(), count: 5)

    @IBAction func handleTap(_ sender: Any) {
        for img in images {
            queue.async { [unowned self] in
                self.compute(img)
            }
        }
    }

    private func compute(_ img: UIImage) -> Void {
        // Pretending to post-process a large image.
        var counter = 0
        for _ in 0..<9999999 {
            counter += 1
        }
    }
}

Running that through the profiler, we can see that the app is now spinning up 5 discrete threads to parallelize a for-loop.
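As an aside, GCD also offers DispatchQueue.concurrentPerform for exactly this fan-out-and-wait pattern; it blocks until every iteration finishes and sizes the parallelism to the available cores. A sketch, with the counter loop standing in for image processing:

```swift
import Foundation

// concurrentPerform runs the closure for each index in parallel and
// returns only once all iterations are complete.
DispatchQueue.concurrentPerform(iterations: 5) { index in
    var counter = 0
    for _ in 0..<9_999_999 { counter += 1 } // stand-in for image processing
    print("finished image \(index)")
}
print("all images processed")
```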

Parallelization of N Tasks

So far, we've looked at pushing computationally expensive task(s) onto background threads without clogging up the UI thread. But what about executing parallel tasks with some restrictions? How can Spotify download multiple songs in parallel, while limiting the maximum number of simultaneous downloads to three? We can go about this in a few ways, but this is a good time to explore another important construct in multithreaded programming: semaphores.

Semaphores are signaling mechanisms. They are commonly used to control access to a shared resource. Imagine a scenario where a thread can lock access to a certain section of the code while it executes it, and unlock it once done to let other threads execute that section. You would see this type of behavior in database writes and reads, for example. What if you want only one thread writing to a database, preventing any reads during that time? This is a common thread-safety concern, typically addressed with a readers-writer lock. Semaphores can be used to control concurrency in our app by capping the number of threads allowed into a critical section at n.

let kMaxConcurrent = 3 // Or 1 if you want strictly ordered downloads!
let semaphore = DispatchSemaphore(value: kMaxConcurrent)
let downloadQueue = DispatchQueue(label: "com.app.downloadQueue", attributes: .concurrent)

class ViewController: UIViewController {
    @IBOutlet weak var tableView: UITableView!

    @IBAction func handleTap(_ sender: Any) {
        for i in 0..<15 {
            downloadQueue.async { [unowned self] in
                // Wait for a free slot (decrements the semaphore count,
                // blocking once kMaxConcurrent downloads are in flight)
                semaphore.wait()

                // Expensive task
                self.download(i + 1)

                // Update the UI on the main thread, always!
                DispatchQueue.main.async {
                    self.tableView.reloadData()

                    // Release the slot (increments the semaphore count)
                    semaphore.signal()
                }
            }
        }
    }

    func download(_ songId: Int) -> Void {
        var counter = 0

        // Simulate semi-random download times.
        for _ in 0..<Int.random(in: 999999...10000000) {
            counter += songId
        }
    }
}

Notice how we've effectively restricted our download system to at most k simultaneous downloads. The moment one download finishes (and its thread is done executing), it signals the semaphore, incrementing the count and allowing the managing queue to spawn another thread and start downloading another song. You can apply a similar pattern to database transactions when dealing with concurrent reads and writes.

Semaphores usually aren't necessary for code like the one in our example, but they become more powerful when you need to enforce synchronous behavior while consuming an asynchronous API. The above code would work just as well with a custom NSOperationQueue with a maxConcurrentOperationCount, but it's a worthwhile tangent regardless.


Finer Control with OperationQueue

GCD is great when you want to dispatch one-off tasks or closures into a queue in a 'set-it-and-forget-it' fashion, and it provides a very lightweight way of doing so. But what if we want to create a repeatable, structured, long-running task that produces associated state or data? And what if we want to model this chain of operations such that they can be cancelled, suspended and tracked, while still working with a closure-friendly API? Imagine an operation like this:

This would be quite cumbersome to achieve with GCD. We want a more modular way of defining a group of tasks while maintaining readability and also exposing a greater amount of control. In this case, we can use Operation objects and queue them onto an OperationQueue, which is a high-level wrapper around DispatchQueue. Let's look at some of the benefits of using these abstractions and what they offer in comparison to the lower-level GCD API:

  • You may want to create dependencies between tasks, and while you could do this via GCD, you're better off defining them concretely as Operation objects, or units of work, and pushing them onto your own queue. This would allow for maximum reusability since you may use the same pattern elsewhere in an application.
  • The Operation and OperationQueue classes have a number of properties that can be observed, using KVO (Key Value Observing). This is another important benefit if you want to monitor the state of an operation or operation queue.
  • Operations can be paused, resumed, and cancelled. Once you dispatch a task using Grand Central Dispatch, you no longer have control or insight into the execution of that task. The Operation API is more flexible in that respect, giving the developer control over the operation's life cycle.
  • OperationQueue allows you to specify the maximum number of queued operations that can run simultaneously, giving you a finer degree of control over the concurrency aspects.
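The last three points can be sketched in a few lines. Everything here is public OperationQueue API, though the 0.1-second sleep is just a stand-in for real work:

```swift
import Foundation

// A concurrency cap, suspension, and cancellation in one place:
// control that plain GCD dispatch does not expose after submission.
let queue = OperationQueue()
queue.maxConcurrentOperationCount = 3 // at most 3 operations run at once

for i in 1...10 {
    queue.addOperation {
        Thread.sleep(forTimeInterval: 0.1) // stand-in for real work
        print("operation \(i) finished")
    }
}

queue.isSuspended = true      // pause: running operations finish, queued ones wait
queue.cancelAllOperations()   // pending operations are removed from the queue
queue.isSuspended = false
queue.waitUntilAllOperationsAreFinished()
```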

The usage of Operation and OperationQueue could fill an entire blog post, but let's look at a quick example of what modeling dependencies looks like. (GCD can also create dependencies, but you're better off dividing up large tasks into a series of composable sub-tasks.) In order to create a chain of operations that depend on one another, we could do something like this:

class ViewController: UIViewController {
    var queue = OperationQueue()
    var rawImage: UIImage? = nil
    let imageUrl = URL(string: "https://example.com/portrait.jpg")!
    @IBOutlet weak var imageView: UIImageView!

    override func viewDidLoad() {
        super.viewDidLoad()

        let downloadOperation = BlockOperation {
            // Runs on a background thread; no UI work, so no main-queue hop.
            self.rawImage = Downloader.downloadImageWithURL(url: self.imageUrl)
        }

        let filterOperation = BlockOperation {
            let filteredImage = ImgProcessor.addGaussianBlur(self.rawImage)
            OperationQueue.main.addOperation {
                self.imageView.image = filteredImage
            }
        }

        // The filter must wait for the download to finish first.
        filterOperation.addDependency(downloadOperation)

        [downloadOperation, filterOperation].forEach {
            queue.addOperation($0)
        }
    }
}

So why not opt for a higher level abstraction and avoid using GCD entirely? While GCD is ideal for inline asynchronous processing, Operation provides a more comprehensive, object-oriented model of computation for encapsulating all of the data around structured, repeatable tasks in an application. Developers should use the highest level of abstraction possible for any given problem, and for scheduling consistent, repeated work, that abstraction is Operation. Other times, it makes more sense to sprinkle in some GCD for one-off tasks or closures that we want to fire. We can mix both OperationQueue and GCD to get the best of both worlds.


The Cost of Concurrency

DispatchQueue and friends are meant to make it easier for the application developer to execute code concurrently. However, these technologies do not guarantee improvements to the efficiency or responsiveness in an application. It is up to you to use queues in a manner that is both effective and does not impose an undue burden on other resources. For example, it's totally viable to create 10,000 tasks and submit them to a queue, but doing so would allocate a nontrivial amount of memory and introduce a lot of overhead for the allocation and deallocation of operation blocks. This is the opposite of what you want! It's best to profile your app thoroughly to ensure that concurrency is enhancing your app's performance and not degrading it.

We've talked about how concurrency comes at a cost in terms of complexity and allocation of system resources, but introducing concurrency also brings a host of other risks like:

  • Deadlock: A situation where a thread locks a critical portion of the code and can halt the application's run loop entirely. In the context of GCD, you should be very careful when using synchronous sync { } dispatch calls, as you could easily get yourself into situations where two synchronous operations end up waiting for each other.
  • Priority Inversion: A condition where a lower priority task blocks a high priority task from executing, which effectively inverts their priorities. GCD allows for different levels of priority on its background queues, so this is quite easily a possibility.
  • Producer-Consumer Problem: A race condition where one thread is creating a data resource while another thread is accessing it. This is a synchronization problem, and can be solved using locks, semaphores, serial queues, or a barrier dispatch if you're using concurrent queues in GCD.
  • ...and many other sorts of locking and data-race conditions that are hard to debug! Thread safety is of the utmost concern when dealing with concurrency.
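As one concrete mitigation, the barrier dispatch mentioned under the producer-consumer problem can turn a concurrent queue into a readers-writer lock. Here is a minimal sketch of a thread-safe array wrapper built that way:

```swift
import Foundation

// Reads run concurrently via sync; a write submitted with the .barrier
// flag waits for in-flight reads and has the queue to itself.
final class SynchronizedArray<Element> {
    private var storage: [Element] = []
    private let queue = DispatchQueue(label: "com.app.syncArray",
                                      attributes: .concurrent)

    func append(_ element: Element) {
        queue.async(flags: .barrier) { // exclusive access for the write
            self.storage.append(element)
        }
    }

    var count: Int {
        queue.sync { storage.count } // safe, concurrent read
    }
}
```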

Parting Thoughts + Further Reading

If you've made it this far, I applaud you. Hopefully this article gives you a lay of the land when it comes to multithreading techniques on iOS, and how you can use some of them in your app. We didn't get to cover many of the lower-level constructs like locks, mutexes and how they help us achieve synchronization, nor did we get to dive into concrete examples of how concurrency can hurt your app. We'll save those for another day, but you can dig into some additional reading and videos if you're eager to dive deeper.




rre

Visa cancelled due to incorrect information given or provided to the Department of Home Affairs

It is a requirement that a visa applicant must fill in or complete his or her application form in a manner that all questions are answered, and no incorrect answers are given or provided. There is also a requirement that visa applicants must not provide incorrect information during interviews with the Minister for Immigration (‘Minister’), […]

The post Visa cancelled due to incorrect information given or provided to the Department of Home Affairs appeared first on Visa Australia - Immigration Lawyers & Registered Migration Agents.



  • Visa Cancellation
  • 1703474 (Refugee) [2017] AATA 2985
  • cancel a visa
  • cancelled visa
  • Citizenship and Multicultural Affairs
  • Department of Home Affairs
  • migration act 1958
  • minister for immigration
  • NOICC
  • notice of intention to consider cancellation
  • Sanaee (Migration) [2019] AATA 4506
  • section 109
  • time limits


The terms "execute on the Main Thread" and "execute on the Main Queue" can be used interchangeably.


Concurrent Queues

So far, our tasks have been executed exclusively in a serial manner. DispatchQueue.main is by default a serial queue, and DispatchQueue.global gives you four concurrent dispatch queues depending on the priority parameter you pass in.

Let's say we want to take five images, and have our app process them all in parallel on background threads. How would we go about doing that? We can spin up a custom concurrent queue with an identifier of our choosing, and allocate those tasks there. All that's required is the .concurrent attribute during the construction of the queue.

class ViewController: UIViewController {
    let queue = DispatchQueue(label: "com.app.concurrentQueue", attributes: .concurrent)
    let images: [UIImage] = [UIImage].init(repeating: UIImage(), count: 5)

    @IBAction func handleTap(_ sender: Any) {
        for img in images {
            queue.async { [unowned self] in
                self.compute(img)
            }
        }
    }

    private func compute(_ img: UIImage) -> Void {
        // Pretending to post-process a large image.
        var counter = 0
        for _ in 0..<9999999 {
            counter += 1
        }
    }
}

Running that through the profiler, we can see that the app is now spinning up 5 discrete threads to parallelize a for-loop.

Parallelization of N Tasks

So far, we've looked at pushing computationally expensive task(s) onto background threads without clogging up the UI thread. But what about executing parallel tasks with some restrictions? How can Spotify download multiple songs in parallel, while limiting the maximum number up to 3? We can go about this in a few ways, but this is a good time to explore another important construct in multithreaded programming: semaphores.

Semaphores are signaling mechanisms. They are commonly used to control access to a shared resource. Imagine a scenario where a thread can lock access to a certain section of the code while it executes it, and unlocks after it's done to let other threads execute the said section of the code. You would see this type of behavior in database writes and reads, for example. What if you want only one thread writing to a database and preventing any reads during that time? This is a common concern in thread-safety called Readers-writer lock. Semaphores can be used to control concurrency in our app by allowing us to lock n number of threads.

let kMaxConcurrent = 3 // Or 1 if you want strictly ordered downloads!
let semaphore = DispatchSemaphore(value: kMaxConcurrent)
let downloadQueue = DispatchQueue(label: "com.app.downloadQueue", attributes: .concurrent)

class ViewController: UIViewController {
    @IBAction func handleTap(_ sender: Any) {
        for i in 0..<15 {
            downloadQueue.async { [unowned self] in
                // Lock shared resource access
                semaphore.wait()

                // Expensive task
                self.download(i + 1)

                // Update the UI on the main thread, always!
                DispatchQueue.main.async {
                    tableView.reloadData()

                    // Release the lock
                    semaphore.signal()
                }
            }
        }
    }

    func download(_ songId: Int) -> Void {
        var counter = 0

        // Simulate semi-random download times.
        for _ in 0..<Int.random(in: 999999...10000000) {
            counter += songId
        }
    }
}

Notice how we've effectively restricted our download system to limit itself to k number of downloads. The moment one download finishes (or thread is done executing), it decrements the semaphore, allowing the managing queue to spawn another thread and start downloading another song. You can apply a similar pattern to database transactions when dealing with concurrent reads and writes.

Semaphores usually aren't necessary for code like the one in our example, but they become more powerful when you need to enforce synchronous behavior whille consuming an asynchronous API. The above could would work just as well with a custom NSOperationQueue with a maxConcurrentOperationCount, but it's a worthwhile tangent regardless.


Finer Control with OperationQueue

GCD is great when you want to dispatch one-off tasks or closures into a queue in a 'set-it-and-forget-it' fashion, and it provides a very lightweight way of doing so. But what if we want to create a repeatable, structured, long-running task that produces associated state or data? And what if we want to model this chain of operations such that they can be cancelled, suspended and tracked, while still working with a closure-friendly API? Imagine an operation like this:

This would be quite cumbersome to achieve with GCD. We want a more modular way of defining a group of tasks while maintaining readability and also exposing a greater amount of control. In this case, we can use Operation objects and queue them onto an OperationQueue, which is a high-level wrapper around DispatchQueue. Let's look at some of the benefits of using these abstractions and what they offer in comparison to the lower-level GCI API:

  • You may want to create dependencies between tasks, and while you could do this via GCD, you're better off defining them concretely as Operation objects, or units of work, and pushing them onto your own queue. This would allow for maximum reusability since you may use the same pattern elsewhere in an application.
  • The Operation and OperationQueue classes have a number of properties that can be observed, using KVO (Key Value Observing). This is another important benefit if you want to monitor the state of an operation or operation queue.
  • Operations can be paused, resumed, and cancelled. Once you dispatch a task using Grand Central Dispatch, you no longer have control or insight into the execution of that task. The Operation API is more flexible in that respect, giving the developer control over the operation's life cycle.
  • OperationQueue allows you to specify the maximum number of queued operations that can run simultaneously, giving you a finer degree of control over the concurrency aspects.

The usage of Operation and OperationQueue could fill an entire blog post, but let's look at a quick example of what modeling dependencies looks like. (GCD can also create dependencies, but you're better off dividing up large tasks into a series of composable sub-tasks.) In order to create a chain of operations that depend on one another, we could do something like this:

class ViewController: UIViewController {
    var queue = OperationQueue()
    var rawImage = UIImage? = nil
    let imageUrl = URL(string: "https://example.com/portrait.jpg")!
    @IBOutlet weak var imageView: UIImageView!

    let downloadOperation = BlockOperation {
        let image = Downloader.downloadImageWithURL(url: imageUrl)
        OperationQueue.main.async {
            self.rawImage = image
        }
    }

    let filterOperation = BlockOperation {
        let filteredImage = ImgProcessor.addGaussianBlur(self.rawImage)
        OperationQueue.main.async {
            self.imageView = filteredImage
        }
    }

    filterOperation.addDependency(downloadOperation)

    [downloadOperation, filterOperation].forEach {
        queue.addOperation($0)
     }
}

So why not opt for a higher level abstraction and avoid using GCD entirely? While GCD is ideal for inline asynchronous processing, Operation provides a more comprehensive, object-oriented model of computation for encapsulating all of the data around structured, repeatable tasks in an application. Developers should use the highest level of abstraction possible for any given problem, and for scheduling consistent, repeated work, that abstraction is Operation. Other times, it makes more sense to sprinkle in some GCD for one-off tasks or closures that we want to fire. We can mix both OperationQueue and GCD to get the best of both worlds.


The Cost of Concurrency

DispatchQueue and friends are meant to make it easier for the application developer to execute code concurrently. However, these technologies do not guarantee improvements to the efficiency or responsiveness in an application. It is up to you to use queues in a manner that is both effective and does not impose an undue burden on other resources. For example, it's totally viable to create 10,000 tasks and submit them to a queue, but doing so would allocate a nontrivial amount of memory and introduce a lot of overhead for the allocation and deallocation of operation blocks. This is the opposite of what you want! It's best to profile your app thoroughly to ensure that concurrency is enhancing your app's performance and not degrading it.

We've talked about how concurrency comes at a cost in terms of complexity and allocation of system resources, but introducing concurrency also brings a host of other risks like:

  • Deadlock: A situation where a thread locks a critical portion of the code and can halt the application's run loop entirely. In the context of GCD, you should be very careful when using the dispatchQueue.sync { } calls as you could easily get yourself in situations where two synchronous operations can get stuck waiting for each other.
  • Priority Inversion: A condition where a lower priority task blocks a high priority task from executing, which effectively inverts their priorities. GCD allows for different levels of priority on its background queues, so this is quite easily a possibility.
  • Producer-Consumer Problem: A race condition where one thread is creating a data resource while another thread is accessing it. This is a synchronization problem, and can be solved using locks, semaphores, serial queues, or a barrier dispatch if you're using concurrent queues in GCD.
  • ...and many other sorts of locking and data-race conditions that are hard to debug! Thread safety is of the utmost concern when dealing with concurrency.

Parting Thoughts + Further Reading

If you've made it this far, I applaud you. Hopefully this article gives you a lay of the land when it comes to multithreading techniques on iOS, and how you can use some of them in your app. We didn't get to cover many of the lower-level constructs like locks, mutexes and how they help us achieve synchronization, nor did we get to dive into concrete examples of how concurrency can hurt your app. We'll save those for another day, but you can dig into some additional reading and videos if you're eager to dive deeper.




rre

Surrender

To be a caregiver at home for someone who is severely injured is to surrender. You surrender your time, put your ambitions on hold, and surrender many of the simple pleasures. You also surrender your peace of mind, your good night’s sleep, and routine. But there are ways to make life a little easier and more enjoyable...




rre

Concurrency & Multithreading in iOS

Concurrency is the notion of multiple things happening at the same time. This is generally achieved either via time-slicing, or truly in parallel if multiple CPU cores are available to the host operating system. We've all experienced a lack of concurrency, most likely in the form of an app freezing up when running a heavy task. UI freezes don't necessarily occur due to the absence of concurrency — they could just be symptoms of buggy software — but software that doesn't take advantage of all the computational power at its disposal is going to create these freezes whenever it needs to do something resource-intensive. If you've profiled an app hanging in this way, you'll probably see a report that looks like this:

Anything related to file I/O, data processing, or networking usually warrants a background task (unless you have a very compelling excuse to halt the entire program). There aren't many reasons that these tasks should block your user from interacting with the rest of your application. Consider how much better the user experience of your app could be if instead, the profiler reported something like this:

Analyzing an image, processing a document or a piece of audio, or writing a sizeable chunk of data to disk are examples of tasks that could benefit greatly from being delegated to background threads. Let's dig into how we can introduce this behavior into our iOS applications.


A Brief History

In the olden days, the maximum amount of work per CPU cycle that a computer could perform was determined by the clock speed. As processor designs became more compact, heat and other physical constraints started limiting how high clock speeds could go. Consequently, chip manufacturers started adding additional processor cores to each chip in order to increase total performance. By increasing the number of cores, a single chip could execute more CPU instructions per cycle without increasing its speed, size, or thermal output. There's just one problem...

How can we take advantage of these extra cores? Multithreading.

Multithreading is a capability provided by the host operating system that allows the creation and use of any number of threads. Its main purpose is to provide simultaneous execution of two or more parts of a program, making use of all available CPU time. Multithreading is a powerful technique to have in a programmer's toolbelt, but it comes with its own set of responsibilities. A common misconception is that multithreading requires a multi-core processor, but this isn't the case: single-core CPUs are perfectly capable of working on many threads. We'll see shortly why threading is a hard problem in the first place. Before we dive in, let's look at the nuances of what concurrency and parallelism mean using a simple diagram:

In the first situation presented above, we observe that tasks can run concurrently, but not in parallel. This is similar to having multiple conversations in a chatroom, and interleaving (context-switching) between them, but never truly conversing with two people at the same time. This is what we call concurrency. It is the illusion of multiple things happening at the same time when in reality, they're switching very quickly. Concurrency is about dealing with lots of things at the same time. Contrast this with the parallelism model, in which both tasks run simultaneously. Both execution models exhibit multithreading, which is the involvement of multiple threads working towards one common goal. Multithreading is a generalized technique for introducing a combination of concurrency and parallelism into your program.


The Burden of Threads

A modern multitasking operating system like iOS has hundreds of programs (or processes) running at any given moment. However, most of these programs are either system daemons or background processes that have a very low memory footprint, so what is really needed is a way for individual applications to make use of the extra cores available. An application (process) can have many threads (sub-processes) operating on shared memory. Our goal is to be able to control these threads and use them to our advantage.

Historically, introducing concurrency to an app has required the creation of one or more threads. Threads are low-level constructs that need to be managed manually. A quick skim through Apple's Threaded Programming Guide is all it takes to see how much complexity threaded code adds to a codebase. In addition to building an app, the developer has to:

  • Responsibly create new threads, adjusting that number dynamically as system conditions change
  • Manage them carefully, deallocating them from memory once they have finished executing
  • Leverage synchronization mechanisms like mutexes, locks, and semaphores to orchestrate resource access between threads, adding even more overhead to application code
  • Mitigate the risks of coding an application that assumes most of the costs associated with creating and maintaining the threads it uses, rather than leaving those costs to the host OS

This is unfortunate, as it adds enormous levels of complexity and risk without any guarantees of improved performance.


Grand Central Dispatch

iOS takes an asynchronous approach to solving the concurrency problem of managing threads. Asynchronous functions are common in most programming environments, and are often used to initiate tasks that might take a long time, like reading a file from disk or downloading a file from the web. When invoked, an asynchronous function executes some work behind the scenes to start a background task, but returns immediately, regardless of how long the original task might take to actually complete.

A core technology that iOS provides for starting tasks asynchronously is Grand Central Dispatch (or GCD for short). GCD abstracts away thread management code and moves it down to the system level, exposing a light API to define tasks and execute them on an appropriate dispatch queue. GCD takes care of all thread management and scheduling, providing a holistic approach to task management and execution, while also providing better efficiency than traditional threads.

Let's take a look at the main components of GCD:

What've we got here? Let's start from the left:

  • DispatchQueue.main: The main queue, which executes its tasks on the main (UI) thread, is a single serial queue. All tasks on it execute in succession, so the order of execution is guaranteed to be preserved. It is crucial that you ensure all UI updates are performed on this queue, and that you never run long blocking tasks on it. We want to ensure that the app's run loop (called CFRunLoop) is never blocked in order to maintain the highest framerate. The main queue also has a very high priority, so tasks pushed onto it are scheduled promptly.
  • DispatchQueue.global: A set of global concurrent queues, each of which manages its own pool of threads. Depending on the priority of your task, you can specify which queue to execute it on, although you should usually stick with .default. Because tasks on these queues are executed concurrently, there is no guarantee that they will complete in the order they were queued.

Notice how we're not dealing with individual threads anymore? We're dealing with queues that manage a pool of threads internally, and you will shortly see why queues are a much more sustainable approach to multithreading.
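To make the division of labor concrete, here is a minimal, UIKit-free sketch of the pattern the diagram describes: heavy work goes to a global concurrent queue, and the caller is only blocked with a semaphore for demonstration (a real app would hop back to the main queue with DispatchQueue.main.async instead of waiting).

```swift
import Dispatch

// Submit simulated expensive work to a global concurrent queue and wait
// for it to finish. The semaphore is only here so this snippet can run
// to completion as a script.
let semaphore = DispatchSemaphore(value: 0)
var sum = 0

DispatchQueue.global(qos: .userInitiated).async {
    // Simulated expensive work, off the main thread.
    sum = (1...1_000).reduce(0, +)
    semaphore.signal()
}

semaphore.wait()
// sum == 500500
```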

Serial Queues: The Main Thread

As an exercise, let's look at a snippet of code below, which gets fired when the user presses a button in the app. The expensive compute function can be anything. Let's pretend it is post-processing an image stored on the device.

import UIKit

class ViewController: UIViewController {
    @IBAction func handleTap(_ sender: Any) {
        compute()
    }

    private func compute() -> Void {
        // Pretending to post-process a large image.
        var counter = 0
        for _ in 0..<9999999 {
            counter += 1
        }
    }
}

At first glance, this may look harmless, but if you run this inside of a real app, the UI will freeze completely until the loop is terminated, which will take... a while. We can prove it by profiling this task in Instruments. You can fire up the Time Profiler module of Instruments by going to Xcode > Open Developer Tool > Instruments in Xcode's menu options. Let's look at the Threads module of the profiler and see where the CPU usage is highest.

We can see that the Main Thread is clearly at 100% capacity for almost 5 seconds. That's a non-trivial amount of time to block the UI. Looking at the call tree below the chart, we can see that the Main Thread is at 99.9% capacity for 4.43 seconds! Because a serial queue works in a FIFO manner, tasks always complete in the order in which they were inserted, so everything queued behind compute() has to wait for it. Clearly the compute() method is the culprit here. Can you imagine clicking a button just to have the UI freeze up on you for that long?
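The FIFO guarantee is easy to see with a private serial queue of our own (the label below is arbitrary): tasks submitted with async still run one at a time, in submission order, because the queue is serial.

```swift
import Dispatch

// Five tasks submitted to a serial queue always complete in submission
// order, even though they were enqueued asynchronously.
let serialQueue = DispatchQueue(label: "com.example.serial")
var order: [Int] = []
let finished = DispatchSemaphore(value: 0)

for i in 1...5 {
    serialQueue.async {
        order.append(i)              // safe: only one task runs at a time
        if i == 5 { finished.signal() }
    }
}

finished.wait()
// order == [1, 2, 3, 4, 5], always
```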

Background Threads

How can we make this better? DispatchQueue.global() to the rescue! This is where background threads come in. Referring to the GCD architecture diagram above, we can see that anything that is not the Main Thread is a background thread in iOS. They can run alongside the Main Thread, leaving it fully unoccupied and ready to handle other UI events like scrolling, responding to user events, animating, etc. Let's make a small change to our button click handler above:

class ViewController: UIViewController {
    @IBAction func handleTap(_ sender: Any) {
        DispatchQueue.global(qos: .userInitiated).async { [unowned self] in
            self.compute()
        }
    }

    private func compute() -> Void {
        // Pretending to post-process a large image.
        var counter = 0
        for _ in 0..<9999999 {
            counter += 1
        }
    }
}

Unless specified otherwise, code like our button handler executes on the Main Queue, so in order to force it to execute on a different thread, we wrap our compute call inside of an asynchronous closure that gets submitted to one of the DispatchQueue.global queues. Keep in mind that we aren't really managing threads here. We're submitting tasks (in the form of closures or blocks) to the desired queue, trusting that they will be executed at some point in time. The queue decides which thread to allocate the task to, and it does all the hard work of assessing system conditions and managing the actual threads. This is the magic of Grand Central Dispatch. As the old adage goes, you can't improve what you can't measure. So we measured our truly terrible button click handler, and now that we've improved it, we'll measure it once again to get some concrete performance data.

Looking at the profiler again, it's quite clear to us that this is a huge improvement. The task takes an identical amount of time, but this time, it's happening in the background without locking up the UI. Even though our app is doing the same amount of work, the perceived performance is much better because the user will be free to do other things while the app is processing.

You may have noticed that we accessed a global queue with .userInitiated priority. This quality-of-service (qos) attribute lets us give our tasks a sense of urgency. If we run the same task on a global queue with a qos of .background, iOS will treat it as low-priority maintenance work and allocate fewer resources to execute it. So, while we don't have control over when our tasks get executed, we do have control over their priority.
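For reference, these are the quality-of-service classes you can request a global queue with, from most to least urgent. Note that asking twice for the same class hands back the same shared queue:

```swift
import Dispatch

// The quality-of-service classes, roughly from most to least urgent.
let classes: [DispatchQoS.QoSClass] = [
    .userInteractive,  // e.g. animations, event handling
    .userInitiated,    // e.g. work the user is actively waiting on
    .default,
    .utility,          // e.g. long-running tasks with progress indicators
    .background        // e.g. prefetching, backups
]

// Global queues are shared singletons: requesting the same QoS class
// twice returns the very same queue object.
let a = DispatchQueue.global(qos: .utility)
let b = DispatchQueue.global(qos: .utility)
// a === b
```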

A Note on Main Thread vs. Main Queue

You might be wondering why the Profiler shows "Main Thread" and why we're referring to it as the "Main Queue". If you refer back to the GCD architecture we described above, the Main Queue is solely responsible for managing the Main Thread. The Dispatch Queues section in the Concurrency Programming Guide says that "the main dispatch queue is a globally available serial queue that executes tasks on the application’s main thread. Because it runs on your application’s main thread, the main queue is often used as a key synchronization point for an application."

The terms "execute on the Main Thread" and "execute on the Main Queue" can be used interchangeably.
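While debugging, a quick way to verify which thread a closure actually landed on is Thread.isMainThread; this UIKit-free sketch shows that a global-queue closure runs off the main thread while top-level code runs on it.

```swift
import Foundation
import Dispatch

// Check Thread.isMainThread from both the caller and a global queue.
let done = DispatchSemaphore(value: 0)
var backgroundWasMain = true

DispatchQueue.global().async {
    backgroundWasMain = Thread.isMainThread   // false on a global queue
    done.signal()
}

done.wait()
// Top-level code here runs on the main thread; the closure above did not.
print(Thread.isMainThread, backgroundWasMain)
```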


Concurrent Queues

So far, our tasks have been executed exclusively in a serial manner. DispatchQueue.main is by default a serial queue, while DispatchQueue.global gives you a handful of concurrent dispatch queues, one for each quality-of-service class you can pass in.

Let's say we want to take five images, and have our app process them all in parallel on background threads. How would we go about doing that? We can spin up a custom concurrent queue with an identifier of our choosing, and allocate those tasks there. All that's required is the .concurrent attribute during the construction of the queue.

class ViewController: UIViewController {
    let queue = DispatchQueue(label: "com.app.concurrentQueue", attributes: .concurrent)
    let images: [UIImage] = Array(repeating: UIImage(), count: 5)

    @IBAction func handleTap(_ sender: Any) {
        for img in images {
            queue.async { [unowned self] in
                self.compute(img)
            }
        }
    }

    private func compute(_ img: UIImage) -> Void {
        // Pretending to post-process a large image.
        var counter = 0
        for _ in 0..<9999999 {
            counter += 1
        }
    }
}

Running that through the profiler, we can see that the app is now spinning up 5 discrete threads to parallelize a for-loop.
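Worth knowing as an alternative to the manual loop above: DispatchQueue.concurrentPerform runs its iterations in parallel across a system-sized pool of threads and returns only once all of them have finished. The NSLock here guards the shared set, since the iterations really do run concurrently.

```swift
import Foundation
import Dispatch

// Parallelize five independent "image processing" iterations.
let lock = NSLock()
var processed = Set<Int>()

DispatchQueue.concurrentPerform(iterations: 5) { index in
    let result = index * index        // stand-in for compute(images[index])
    lock.lock()
    processed.insert(result)          // the lock guards the shared set
    lock.unlock()
}

// All five results are in, regardless of completion order.
print(processed.sorted())             // [0, 1, 4, 9, 16]
```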

Parallelization of N Tasks

So far, we've looked at pushing computationally expensive tasks onto background threads without clogging up the UI thread. But what about executing parallel tasks with some restrictions? How can Spotify download multiple songs in parallel, while limiting the number of simultaneous downloads to, say, 3? We can go about this in a few ways, but this is a good time to explore another important construct in multithreaded programming: semaphores.

Semaphores are signaling mechanisms, commonly used to control access to a shared resource. Imagine a scenario where a thread locks access to a certain section of code while it executes it, and unlocks it when it's done, allowing other threads to execute that section. You see this type of behavior with database reads and writes, for example. What if you want only one thread writing to a database, with all reads blocked during that time? This is a common thread-safety concern addressed by a readers-writer lock. Semaphores can be used to control concurrency in our app by allowing at most n threads into a critical section at once.

let kMaxConcurrent = 3 // Or 1 if you want strictly ordered downloads!
let semaphore = DispatchSemaphore(value: kMaxConcurrent)
let downloadQueue = DispatchQueue(label: "com.app.downloadQueue", attributes: .concurrent)

class ViewController: UIViewController {
    @IBOutlet weak var tableView: UITableView!

    @IBAction func handleTap(_ sender: Any) {
        for i in 0..<15 {
            downloadQueue.async { [unowned self] in
                // Wait until one of the kMaxConcurrent slots frees up
                semaphore.wait()

                // Expensive task
                self.download(i + 1)

                // Update the UI on the main thread, always!
                DispatchQueue.main.async {
                    self.tableView.reloadData()

                    // Release the slot for the next download
                    semaphore.signal()
                }
            }
        }
    }

    func download(_ songId: Int) -> Void {
        var counter = 0

        // Simulate semi-random download times.
        for _ in 0..<Int.random(in: 999999...10000000) {
            counter += songId
        }
    }
}

Notice how we've effectively restricted our download system to at most kMaxConcurrent simultaneous downloads. The moment one download finishes (and its UI update completes), semaphore.signal() increments the semaphore's count, allowing the managing queue to start downloading another song. You can apply a similar pattern to database transactions when dealing with concurrent reads and writes.

Semaphores usually aren't necessary for code like the one in our example, but they become more powerful when you need to enforce synchronous behavior while consuming an asynchronous API. The above code would work just as well with a custom NSOperationQueue and a maxConcurrentOperationCount, but it's a worthwhile tangent regardless.
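Here is the OperationQueue version of the same throttle, sketched without UIKit: the queue itself guarantees that no more than 3 "downloads" are in flight at once, with a lock-guarded counter tracking the observed peak.

```swift
import Foundation

// Throttle 15 simulated downloads to 3 at a time using the queue itself.
let downloadOps = OperationQueue()
downloadOps.maxConcurrentOperationCount = 3

let lock = NSLock()
var inFlight = 0
var peak = 0

for _ in 1...15 {
    downloadOps.addOperation {
        lock.lock()
        inFlight += 1
        peak = max(peak, inFlight)   // record the highest concurrency seen
        lock.unlock()

        Thread.sleep(forTimeInterval: 0.02)   // simulated download

        lock.lock()
        inFlight -= 1
        lock.unlock()
    }
}

downloadOps.waitUntilAllOperationsAreFinished()
// peak never exceeds 3
```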


Finer Control with OperationQueue

GCD is great when you want to dispatch one-off tasks or closures into a queue in a 'set-it-and-forget-it' fashion, and it provides a very lightweight way of doing so. But what if we want to create a repeatable, structured, long-running task that produces associated state or data? And what if we want to model this chain of operations such that they can be cancelled, suspended and tracked, while still working with a closure-friendly API? Imagine an operation like this:

This would be quite cumbersome to achieve with GCD. We want a more modular way of defining a group of tasks while maintaining readability and also exposing a greater amount of control. In this case, we can use Operation objects and queue them onto an OperationQueue, which is a high-level wrapper around DispatchQueue. Let's look at some of the benefits of using these abstractions and what they offer in comparison to the lower-level GCD API:

  • You may want to create dependencies between tasks, and while you could do this via GCD, you're better off defining them concretely as Operation objects, or units of work, and pushing them onto your own queue. This would allow for maximum reusability since you may use the same pattern elsewhere in an application.
  • The Operation and OperationQueue classes have a number of properties that can be observed, using KVO (Key Value Observing). This is another important benefit if you want to monitor the state of an operation or operation queue.
  • Operations can be paused, resumed, and cancelled. Once you dispatch a task using Grand Central Dispatch, you no longer have control or insight into the execution of that task. The Operation API is more flexible in that respect, giving the developer control over the operation's life cycle.
  • OperationQueue allows you to specify the maximum number of queued operations that can run simultaneously, giving you a finer degree of control over the concurrency aspects.
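The cancellation benefit from the list above can be sketched in a few lines: suspend the queue so nothing starts, cancel the operation, and its work simply never runs.

```swift
import Foundation

// Cancel an operation before it ever starts executing.
let queue = OperationQueue()
queue.isSuspended = true              // hold scheduling so we can cancel first

var didRun = false
let operation = BlockOperation {
    didRun = true
}

queue.addOperation(operation)
operation.cancel()                    // operations can be cancelled at any time
queue.isSuspended = false
queue.waitUntilAllOperationsAreFinished()

// The cancelled operation finishes without executing its block.
print(operation.isCancelled, didRun)
```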

The usage of Operation and OperationQueue could fill an entire blog post, but let's look at a quick example of what modeling dependencies looks like. (GCD can also create dependencies, but you're better off dividing up large tasks into a series of composable sub-tasks.) In order to create a chain of operations that depend on one another, we could do something like this:

class ViewController: UIViewController {
    let queue = OperationQueue()
    var rawImage: UIImage? = nil
    let imageUrl = URL(string: "https://example.com/portrait.jpg")!
    @IBOutlet weak var imageView: UIImageView!

    override func viewDidLoad() {
        super.viewDidLoad()

        let downloadOperation = BlockOperation {
            // Runs off the main thread; stash the result for the next operation.
            self.rawImage = Downloader.downloadImageWithURL(url: self.imageUrl)
        }

        let filterOperation = BlockOperation {
            let filteredImage = ImgProcessor.addGaussianBlur(self.rawImage)
            // UI updates must happen on the main queue.
            OperationQueue.main.addOperation {
                self.imageView.image = filteredImage
            }
        }

        // The filter must not start until the download has finished.
        filterOperation.addDependency(downloadOperation)

        [downloadOperation, filterOperation].forEach { queue.addOperation($0) }
    }
}

So why not opt for a higher level abstraction and avoid using GCD entirely? While GCD is ideal for inline asynchronous processing, Operation provides a more comprehensive, object-oriented model of computation for encapsulating all of the data around structured, repeatable tasks in an application. Developers should use the highest level of abstraction possible for any given problem, and for scheduling consistent, repeated work, that abstraction is Operation. Other times, it makes more sense to sprinkle in some GCD for one-off tasks or closures that we want to fire. We can mix both OperationQueue and GCD to get the best of both worlds.
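A small illustration of mixing the two (the operation names and print statements are placeholders): structured, dependent work goes on an OperationQueue, while a fire-and-forget side task is dispatched straight to GCD.

```swift
import Foundation

let workQueue = OperationQueue()

// Structured, repeatable work modeled as operations with a dependency…
let parse = BlockOperation { print("parsing feed") }
let index = BlockOperation { print("indexing results") }
index.addDependency(parse)
workQueue.addOperations([parse, index], waitUntilFinished: false)

// …while a one-off task is dispatched directly to a GCD background queue.
DispatchQueue.global(qos: .utility).async {
    print("logging analytics event")
}

workQueue.waitUntilAllOperationsAreFinished()
```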


The Cost of Concurrency

DispatchQueue and friends are meant to make it easier for the application developer to execute code concurrently. However, these technologies do not guarantee improvements to the efficiency or responsiveness of an application. It is up to you to use queues in a manner that is both effective and does not impose an undue burden on other resources. For example, it's totally viable to create 10,000 tasks and submit them to a queue, but doing so would allocate a nontrivial amount of memory and introduce a lot of overhead for the allocation and deallocation of operation blocks. This is the opposite of what you want! It's best to profile your app thoroughly to ensure that concurrency is enhancing your app's performance and not degrading it.
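One way to avoid flooding the system with thousands of individual blocks is GCD's concurrentPerform, which runs a fixed number of iterations across the available cores and blocks until they all finish. A minimal sketch (the squaring work is just an illustrative stand-in):

```swift
import Dispatch

let count = 10_000
var results = [Int](repeating: 0, count: count)

// concurrentPerform lets GCD choose the degree of parallelism instead of
// enqueueing 10,000 separate blocks; we write through a buffer pointer so
// each iteration can safely touch its own distinct index.
results.withUnsafeMutableBufferPointer { buffer in
    DispatchQueue.concurrentPerform(iterations: count) { i in
        buffer[i] = i * i
    }
}
```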

We've talked about how concurrency comes at a cost in terms of complexity and allocation of system resources, but introducing concurrency also brings a host of other risks like:

  • Deadlock: A situation where two or more tasks each wait for the other to release a resource, so none of them can proceed; this can halt part of your application entirely. In the context of GCD, you should be very careful with dispatchQueue.sync { } calls, as you can easily end up with two synchronous operations stuck waiting for each other (calling sync on the queue you are already running on is the classic example).
  • Priority Inversion: A condition where a lower-priority task blocks a higher-priority task from executing, which effectively inverts their priorities. GCD allows for different levels of priority on its background queues, so priority inversion is a real possibility.
  • Producer-Consumer Problem: A race condition where one thread is creating a data resource while another thread is accessing it. This is a synchronization problem, and can be solved using locks, semaphores, serial queues, or a barrier dispatch if you're using concurrent queues in GCD.
  • ...and many other sorts of locking and data-race conditions that are hard to debug! Thread safety is of the utmost concern when dealing with concurrency.
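The barrier-dispatch solution to the producer-consumer problem mentioned above can be sketched like this (SynchronizedStore and its contents are placeholder names): reads run concurrently, while writes take the whole queue exclusively.

```swift
import Dispatch

// A common reader/writer pattern: many concurrent reads, exclusive writes.
final class SynchronizedStore {
    private var storage: [String: Int] = [:]
    private let queue = DispatchQueue(label: "store.queue",
                                      attributes: .concurrent)

    func value(for key: String) -> Int? {
        // Plain sync reads may overlap with each other on the concurrent queue.
        queue.sync { storage[key] }
    }

    func set(_ value: Int, for key: String) {
        // The barrier flag makes this block wait for in-flight reads to finish
        // and run alone, so the write never races with a read.
        queue.async(flags: .barrier) {
            self.storage[key] = value
        }
    }
}
```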

Parting Thoughts + Further Reading

If you've made it this far, I applaud you. Hopefully this article gives you a lay of the land when it comes to multithreading techniques on iOS, and how you can use some of them in your app. We didn't get to cover many of the lower-level constructs like locks, mutexes and how they help us achieve synchronization, nor did we get to dive into concrete examples of how concurrency can hurt your app. We'll save those for another day, but you can dig into some additional reading and videos if you're eager to dive deeper.



