
Uptown Marion Market opening with caveats

MARION — While the Uptown Marion Market will continue to sell fresh produce, it will look a little different this year.

The market will continue operating on the second Saturday of June, July and August with some adjustments.

But the city of Marion has canceled community events until at least early July because of the coronavirus pandemic.

The Uptown market will run along Sixth Avenue instead of being held in City Square Park. It will be fenced, and no more than 50 people will be let in at a time.

Jill Ackerman, president of the Marion Chamber of Commerce, said there are usually between 50 and 60 vendors at each market, but she expects only 15 to 25 at this summer’s markets.

“The main thing here is safety,” Ackerman said. “We want to make sure people have opportunities to buy fresh produce from our local growers, but we’re going to ask patrons to only spend 30 minutes inside the market.”

Vendors will sell produce and some plants, but artisan items will not be available.

While there will be summer events through the Chamber of Commerce, Ackerman said, they will be fewer and look a little different than they usually do.

Free community concerts and movie nights are canceled until July by the city, according to a news release.

The Marion Farmers Market, held at Taube Park, is expected to resume May 16.

Officials hope to have smaller-scale events throughout the summer like performances in the Uptown Artway, Messy Art Days and the Tiny Fair series as restrictions ease.

Sunrise Yoga at the Klopfenstein Amphitheater at Lowe Park is expected to take place every Saturday from June to August.

“Unfortunately, given our current reality, we know that 2020 will be far from normal,” said Marion Mayor Nicolas AbouAssaly. “After careful consideration and consultation with event organizers and sponsors, we have made the collective decision to cancel the free community concerts, events and movie nights originally planned for our outdoor public venues through early-July.”

Comments: (319) 368-8664; grace.king@thegazette.com





Coronavirus closes the Iowa Writers’ House — for now

IOWA CITY — Once upon a time, there was a house in a city that loved literature.

It was a quaint, two-story home in the heart of the historic district with brick stairs, pale yellow siding, a hipped red roof and a rich history: Its original owner was Emma J. Harvat, who in 1922 became the nation’s first female mayor for a city of more than 10,000.

Nearly a century later, in 2014, Andrea Wilson was working in advertising in Florida and pined for a more “altruistic purpose” for her life. So she planned a return to Iowa, where she grew up in Columbus Junction.

But this time Wilson would live in Iowa City, known for — among other things — pioneering academic creative writing pursuits at the University of Iowa’s famed Writers’ Workshop.

Wilson wanted to write and found the idea of the historic Harvat house so charming she bought it “sight unseen” from down in Miami, aiming to run it as a bed-and-breakfast. But when she arrived, Wilson discovered a need in her new community she aimed to fill. It had a surprising dearth of literary resources for those outside the university.

“There wasn’t any place for the public to take a class or meet other writers or really be part of a writing community where people could just express their humanity through words,” she said. “It became my passion project — to try to create that for this community. I thought if anywhere should have a place like that, it would be America’s only UNESCO City of Literature at the time.”

So in March 2015, Wilson debuted Iowa City’s first community-based literary center for writers — or those aspiring. She had hoped to open a communal writing space closer to downtown but didn’t have the funding. So she gave her home a third identity: the Iowa Writers’ House.

She continued to live there and maintain her bed-and-breakfast business, which funded the writing endeavor and kept its cozy corridors bustling with interesting characters.

Famed visiting writers included Leslie Jamison, American novelist and essayist with works on the New York Times bestseller list; Hope Edelman, whose six nonfiction books have been published in 17 countries and translated into 11 languages; Alison Bechdel, an American cartoonist and MacArthur fellow; and Piedad Bonnett Velez, Colombian poet, playwright and novelist of international acclaim.

And over the years, the Iowa Writers’ House connected, served and motivated thousands with its workshops, seminars, readings and summer camps. It offered editing services, founded a Bicultural Iowa Writers’ Fellowship, and — among other things — inspired a growing network of friends and creatives to value their own stories and the stories of others.

“I said yes to everything anyone ever asked of me,” Wilson said. “We gave tours. I received visiting scholars. We hosted dinners for visiting poets and writers for the university. And a lot of that was all volunteer. We never had a steady funding stream like most literary centers do.”

So when the coronavirus in March reached Iowa City, later shuttering storefronts, canceling events, curtailing travel plans and crippling the economy, momentum at the Iowa Writers’ House stopped, too.

“Once COVID hit, because all of our programming is live and people come to the house, we had to cancel it,” Wilson said.

She dropped most of the organization’s spring season. She lost all her projected bed-and-breakfast business. And in a message posted to the Iowa Writers’ House website last month, Wilson announced her hard but unavoidable news.

“As the situation pushes on, and with no programming in the foreseeable future, we must make drastic changes,” she wrote. “Organizations must weather the storm or adapt, and in the case of this little organization with a big heart, evolution is the only option.”

And so after five years of intimate conversations, communal meals, singing, laughing, crying and lots and lots of writing and reading — all done in the shadow of Harvat — the organization is leaving the historic space and “taking a break to assess our mission and consider our best options for the future.”

Wilson said she plans to focus on her own writing. And the Bicultural Iowa Writers’ Fellowship program will continue — allowing for the release later this year of a third volume of “We the Interwoven: An Anthology of Bicultural Iowa,” including six new authors with multilingual stories of living in Iowa.

News of the goodbye — at least for now — has been met with an outpouring of support and testimonials of the impact the Iowa Writers’ House has had.

“I grew up without a writing community, and it was a very lonely experience,” Erin Casey wrote to Wilson after learning of its pause.

Casey — on the Iowa Writers’ House team and director of The Writers’ Rooms, an offshoot of the house — said her involvement in the project shaped not only her career but her personal growth.

“You, and the Iowa Writers’ House, helped me become a stronger person who felt deserving of companionship, respect, and love,” she wrote. “Watching the house grow, the workshops fill, and the stories come in about how much the IWH touched people’s lives added to the joy. I finally found a place to call home.”

Casey said that while the future is unknown, its legacy is not.

“The IWH will live on in the hearts of the people you touched,” she wrote. “Writers have found friends, support, guidance …”

Although the project isn’t getting a fairy-tale ending, Wilson said the story isn’t over.

“The organization is leaving the space. I’m leaving the space. We’re going on an organizational break so we can determine what a sustainable future might be,” she said. “But it’s really the end of a chapter. And we don’t know what the next chapter will be.”

Comments: (319) 339-3158; vanessa.miller@thegazette.com





Ahead of VP Pence’s Iowa visit, Joe Biden’s campaign calls out ‘consequential failure’ of Trump coronavirus response

Vice President Mike Pence owes Iowans more than a photo-op when he visits Des Moines today, according to Joe Biden’s campaign.

“Iowans are seeing up close the most consequential failure of government in modern American history,” said Kate Bedingfield, spokeswoman for the former vice president and presumptive Democratic presidential nominee.

“With nearly 300,000 Iowans filing for unemployment, rural hospitals on life support, Latino communities disproportionately suffering and workers on the job without sufficient protection, Mike Pence owes Iowans more than a photo-op — he owes them answers,” she said.

Pence, head of the White House coronavirus task force, is scheduled to meet with Iowa Gov. Kim Reynolds and U.S. Sens. Chuck Grassley and Joni Ernst, all Republicans, as well as with faith, farm and food production leaders.

Pence will talk to faith leaders about how they are using federal and state guidelines to open their houses of worship in a safe and responsible manner.

Later, he will go to Hy-Vee’s headquarters in West Des Moines for a roundtable discussion with agriculture and food supply leaders to discuss steps being taken to ensure the food supply remains secure.

Pence has called Iowa a “success story” in its response to COVID-19, but Bedingfield said the Trump administration failed to protect Iowa families from the virus that has claimed the lives of 231 Iowans.

“From devastating losses across the state, at meatpacking plants to rural communities, one thing is clear — it’s Iowans and the American people who are paying the price for the Trump administration’s denials and delays in response to this pandemic,” she said.

“Instead of listening to our own intelligence agencies and public health experts, Donald Trump was fed dangerous propaganda from the Chinese Communist Party — and he bought it,” she said. “Iowans deserve better — they deserve Joe Biden.”

For his part, Grassley said he welcomes the discussion with Pence.

“There’s much work to be done, and the pandemic is disrupting all of our communities,” Grassley said. “It’s important to hear directly from those who help feed the nation and the world.”

Ernst also is looking forward to the discussion of how Iowa is working to protect the health and safety of Iowa’s families and communities while reopening the state’s economy.

“We continue to take an all-hands-on-deck approach to tackling this pandemic,” she said. “Together, we will get through this.”

Comments: (319) 398-8375; james.lynch@thegazette.com





Now playing at Iowa county fairs: The waiting game

CEDAR RAPIDS — Getting your hands on some fried food on a stick is going to be a little more difficult this summer for Iowans.

With the COVID-19 pandemic imposing restrictions on life in the state, county fair organizers across Iowa are trying to decide if they should cancel, go virtual or wait and see if restrictions lift and their events can go on in a relatively normal manner.

One thing seems to be for certain: The fair experience won’t quite be the same this year.

“It’ll be different,” said John Harms, general manager for the Great Jones County Fair, known for attracting popular musical acts. “I can tell you that.”

Iowa is home to 106 county and district fairs, as well as the Iowa State Fair, according to the Association of Iowa Fairs. Those fairs are scheduled to begin June 17 with the Worth County Fair and continue through Sept. 20 with the conclusion of the National Cattle Congress in Black Hawk County.

Those early fairs already are beginning to announce decisions about their events. Organizers of the Wapello County Fair announced they are canceling for this year. On Thursday, the Linn County Fair Association announced it is canceling grounds and grandstand entertainment with plans to take the exhibition aspects of the fair online.

Linn County Fair Marketing Manager Heidi Steffen said the association met with county public health and Board of Supervisors officials in recent weeks. The focus of those discussions was on ensuring the safety of all fair exhibitors, workers, performers and visitors, Steffen said.

“We just couldn’t guarantee that,” she said.

Steffen was quick to point out the fair isn’t canceled — it’s just taking on a different form. The fair is scheduled for June 24-28.

The fair association is working with the Iowa State University Extension and Outreach of Linn County and Linn County 4-H to ensure 4-H and National FFA Organization members get a chance to exhibit their livestock and projects. Details on what that will look like are expected later this month.

Fair association members have been attending webinars and learning from other fairs across the country that have gone virtual. Steffen said they’ve received valuable suggestions and feedback.

“It’s been done,” she said. “We can learn from their mistakes. We can learn what went well with them and hopefully implement it here in Linn County.”

Steffen said they are already kicking around other ideas to engage the community during fair week, just in a virtual manner. Those ideas include livestreaming pie-eating contests, encouraging local businesses to offer fair foods on their menus and seeing if local artists who had been scheduled to perform at the fair would be interested in online performances instead.

“We’re open to ideas,” she said, encouraging anyone with suggestions to reach out via email or Facebook.

Up the road in Jones County, organizers there have a little more time to decide how to move forward. For now, Harms is confident that fair will go on July 22-26.

“We’re still going to have a fair,” he said. “It may look differently than what we have experienced and enjoyed in the past.”

How exactly it may look different still is up in the air. Harms said plans “a, b, c and d are all being studied.” At least one grandstand act, the Zac Brown Band, won’t be performing. But Harms said organizers have other acts they’re ready to announce “if it makes sense to have entertainment at the fair.”

Whatever takes place likely will be determined by proclamations covering social distancing made by Gov. Kim Reynolds, Harms said. He said the fair’s planning process has been dictated by her health orders.

“We’re just trying to keep everything on the table and make sensible decisions and directions based on what’s going on,” he said. “It’s going to be challenging, but I think for the most part we’ll take a deep breath, have a little more faith and we’ll get through it.”

Tim Rogers, vice president for the Johnson County Fair Board, said the decision whether to have a fair will be made in the next 40-plus days.

“That’s kind of a deadline we’ve set to either call it completely, proceed fully or proceed with what we can do and still stay in compliance with all of the laws,” he said.

The Johnson County Fair Board will discuss what a partial fair might look like once that decision has been made, Rogers said.

Tom Barnes, executive director of the Association of Iowa Fairs, said his group is providing resources to fair organizers, but is not making any recommendations on whether to proceed.

“We’re asking them to be fiscally responsible for their fair,” he said. “We don’t ask them to cancel. We don’t ask them to go ahead with their fair. They know better what they can do and not do.”

Barnes said fair organizers should be asking themselves: If your fair is open, will people buy tickets? And, if they come, will they buy food and beverages?

As long as they make good financial decisions, Barnes said, he believes county fairs have the resources to weather the COVID-19 storm and return in 2021.

“We’ll be back next year if the fairs don’t go ahead,” he said.

Comments: (319) 339-3155; lee.hermiston@thegazette.com





Scenic designer in Iowa City looks for light in the darkness

Benjamin Stuben Farrar of Iowa City is a storyteller without a story to tell at the moment.

The first story is as dramatic and layered as his bold scenic and lighting designs for area stages: “Benjamin Stuben Farrar” is not his actual name.

He was born Stewart Benjamin Farrar 41 years ago in Kentucky. He didn’t want to go through life as “Stewie,” so he went by “Benjamin” until he got to college at Vanderbilt University in Nashville. He ran into so many other Bens that his buddies decided to combine his names into “Stuben.”

That name followed him to grad school at the University of Iowa in 2002, where he earned an MFA in theater design. But when he moved to New York City in 2006 to pursue his career, he didn’t like hearing “Stuben” shouted across the theater.

“It sounded too much like ‘stupid,’ ” he said, “so I reverted back to Benjamin.”

But nicknames have a way of sticking. When he and his wife moved back to Iowa City in 2015 to raise their daughter, he switched to “Stuben” again, since that’s how people knew him there.

Professionally, he uses “S. Benjamin Farrar” and on Facebook, he goes by “Benjamin Stuben Farrar” so friends from his various circles can find him. Even though most people now call him “Stuben,” he still introduces himself as “Benjamin.”

“To this day, I have 12 different names,” he said with a laugh. “Only the bill collectors know me as ‘Stewart.’”

Changing realms

Like his name, his artistry knows no bounds.

He has planted apple trees on Riverside Theatre’s indoor stage in Iowa City; a child’s outdoor playground on the Theatre Cedar Rapids stage; and dramatic spaces for Noche Flamenca’s dancers in New York City venues and on tour.

These days, however, his theatrical world has gone dark.

His recent designs for “The Humans,” “The Skin of Our Teeth” and “Kinky Boots” at Theatre Cedar Rapids and “A Doll’s House, Part 2” at Riverside Theatre have been canceled or postponed in the wake of the coronavirus pandemic. He has “The Winter’s Tale” in the works for Riverside Theatre’s free Shakespeare in the Park slated for June, but time will tell if that changes, too.

“Within the course of two weeks, five productions were canceled or moved indefinitely,” he said.

Looking ahead, he’s not sure what shows he’ll have time to design for the upcoming seasons. He’s used to juggling three or four productions at a time, but he said that could become really difficult if the shows fall on top of each other at the various venues.

As with so many artists right now, his world keeps changing.

He and his wife, Jody Caldwell, an editor and graduate of the UI Writers’ Workshop, are both freelancers, leaving them with no income during this pandemic. So Farrar has been wading through red tape and delays to secure unemployment compensation and the government stimulus check, for which he’s still waiting. One bright spot was receiving a $1,000 Iowa Arts & Culture Emergency Relief Fund grant given to 156 Iowa creatives who have lost income from canceled projects.

With his regular revenue streams drying up, he’s been considering other ways to earn money, such as teaching theater or creating and selling more of his digital and film photography — an outgrowth of his fascination with the way lighting can sculpt a scene on stage.

“I love doing nature (photography). I love doing details,” he said. “I love photographing people, too, especially on stage — I love photographing my own shows. It’s just a lot of fun.

“For me, nature’s so interesting, especially living where we do in North America, there’s vast changes from one time of year to another. I just love looking at that on a very small scale, and how light happens to fall on that particular surface — how that surface changes color,” he said.

“Right now the redbuds are out. The magnolias came out two weeks ago and then they started to fall. It changes the landscape dramatically, especially based on whether it’s a morning light or afternoon light or evening light, whether it’s cloudy, whether the sun’s peeking through clouds and highlighting a few individual leaves. I find that super fascinating.

“That’s how I can look at the same boring tree at different times of year, at different times of day, and find something interesting to photograph.”

Lighting design

While his scenic designs create an immediate visual impact and help tell the story swirling around the actors, Farrar was a lighting designer before he became a scenic designer.

It wasn’t love at first sight. He took a light design course in college, but didn’t “get” it.

“It’s really difficult to wrap your head around it,” he said.

His aha moment came when he was running lights for an operetta in college.

“I just had these little faders in front of me so I could raise certain lights up and down. And the music was happening in front of me and I thought, ‘I control this whole little universe. I can make things completely disappear. I can sculpt things from the side, I can make things feel totally different — just like music can — just based on how it’s lit.’ And then I finally started to understand how the lighting hooked things together,” he said.

From there, his interest in lighting soared.

“I absolutely love lighting,” he said. “I think it’s probably given me more joy than anything else, just because I can go for a walk someplace and just the way the lighting changes as the clouds come in or out, or as the time of year changes and the angle of the sun changes, I really enjoy seeing that — and that’s what got me into photography.”

Scenic design

While his design work is a collaborative process with the director and other production team members, the ideas begin flowing as soon as he starts reading a script. With the flamenco dance company in New York, he might start working on a show two years in advance. With Theatre Cedar Rapids, the lead time is generally six months to look at the season overall, and four months to “get things going” on a particular show, he said. The lead time is about two months for Riverside Theatre shows, which have shorter rehearsal periods.

He begins thinking about the theater spaces, the text that the audience never sees, the show’s technical demands, and the scale in relation to the human body. He still likes to do some of his design work by hand, but computers and the 3D printer he has in his basement workshop have made the process much quicker for creating the drawings and scale models for each show.

He also enjoys the variety and challenge of moving between the small space inside Riverside Theatre and the large space inside Theatre Cedar Rapids, as well as the theaters at Grinnell College and Cornell College in Mount Vernon, the theaters in New York and the touring venues that have housed his designs.

Ultimately, the goal of scenic design “is always about the storytelling,” he said.

“There’s a version of a show that exists in a script, if there is a script. Assuming it has a script, there is a scaffolding for that show in the script, and then there’s a version of the show in the director’s head, and then there’s a version of the show that’s performed in my head as I read the script. So there’s all these different versions.”

If the show is a musical, the choreographer brings in another idea, and the musical score adds another element. Sometimes Farrar knows the music very well, but other times, he doesn’t.

“Hopefully, I can integrate that well if I listen to the music while working on the show — not usually when I’m reading the script, but while I’m drafting the show. I’ll listen to the music to get a sense of how the show wants to move.

“Integrating all these different versions of the show — the text, what’s in my head, what’s in the director’s head, what’s in the choreographer’s head, the role the music plays — and then you synthesize all those elements, and then you find out how the show wants to move in the space it has. And how a show moves is one of the most important things to me. ...

“You get a sense that the show becomes this conscious element that wants a certain thing, and will reveal those things over time.”

And time is something he has right now.

Comments: (319) 368-8508; diana.nollen@thegazette.com





Task force will make recommendations on how to resume jury trials, given coronavirus concerns

DES MOINES — The Iowa Supreme Court has asked a group of criminal and civil lawyers, judges and court staff from judicial districts across the state to make recommendations on how criminal and civil jury trials will resume with coronavirus health restrictions.

The court is asking the 17-member Jumpstart Jury Trials Task Force to develop temporary policies and procedures for jury trials that will ensure the “fundamental rights of a defendant” to a jury trial, while at the same time “protecting the health and safety” of the jurors, attorneys, judges and the public, said Des Moines lawyer Guy Cook, co-chairman of the task force.

The court, Cook said Thursday, has put together a “good cross-section” of professionals who have experience with civil and criminal trials.

Task force members are:

• Associate Supreme Court Justice Mark McDermott, chairman

• Guy Cook, Des Moines criminal and civil attorney, co-chairman

• 4th Judicial District Judge Michael Hooper

• 5th Judicial District Judge David Porter

• Angela Campbell, Des Moines criminal defense attorney

• Jim Craig, Cedar Rapids civil attorney, president of Iowa Defense Counsel Association

• Janietta Criswell, clerk and jury manager, 8th Judicial District, Ottumwa

• Kathy Gaylord, district court administrator, 7th Judicial District, Davenport

• Patrick Jennings, Woodbury county attorney, Sioux City

• Julie Kneip, clerk of court, 2nd Judicial District, Fort Dodge

• Bill Miller, Des Moines civil attorney, chairman of Iowa State Bar Association litigation

• Todd Nuccio, Iowa state court administrator

• Jerry Schnurr, Fort Dodge civil attorney and president-elect of Iowa State Bar Association

• Jennifer Solberg, Woodbury County chief public defender

• Chad Swanson, Waterloo civil attorney, president of Iowa Association of Justice

• Brian Williams, Black Hawk county attorney

• Mark Headlee, information technology director of Iowa Judicial Branch

The committee will review the current schedule to resume jury trials that the court has established in consultation with public health officials and other health care providers, and recommend whether the schedule should be altered, according to the court’s order.

Criminal jury trials can resume July 13 and civil jury trials Aug. 3, according to the order.

The task force also will make recommendations for how those trials should proceed, according to the court’s order.

Members should develop policies and procedures aimed at protecting the health and safety of jurors, court staff, attorneys, judges and visitors throughout the trial process, particularly during the identification of potential jurors, summons of potential jurors, jury selection, trials, jury instructions and jury deliberations.

Cook said members will have to consider the challenges for each type of trial. More jurors, for example, are needed in a criminal case, so space and logistics will have to be considered with social distancing requirements.

That will be more difficult in the rural courthouses that have less space.

A pool of 80 to 100 potential jurors is sometimes summoned for felony trials in larger counties, but that, too, may be a challenge with social distancing.

Another possibility would be requiring masks, but Cook questioned how a mask would affect the credibility of a witness if it hides the person’s facial expressions.

These are all issues the members may encounter.

Steve Davis, Iowa Judicial Branch spokesman, said the goal is one uniform statewide plan, but it’s possible that each district may have some discretion, as in the previous orders issued during this pandemic, because of the differences in each county.

Davis said the task force members were chosen based on gender, background and geographic area.

The recommendations should be submitted to the court the first week in June.

Davis said he didn’t yet know when the task force would start meeting by phone or video conference or how often.

Comments: (319) 398-8318; trish.mehaffey@thegazette.com





Iowa Gov. Kim Reynolds will not hold coronavirus press conference Friday

DES MOINES — Iowa Gov. Kim Reynolds will not be holding a news briefing Friday on the coronavirus outbreak in Iowa due to scheduling conflicts created by Vice President Mike Pence’s visit to Iowa, according to the governor’s office.

The vice president was slated to travel to Des Moines Friday morning with plans to participate in a discussion with faith leaders about how they are using federal and state guidelines to open their houses of worship in a safe and responsible manner.

Also, Friday afternoon the vice president was scheduled to visit Hy-Vee headquarters in West Des Moines for a roundtable discussion with agriculture and food supply leaders to discuss steps being taken to ensure the food supply remains secure. Pence will return to Washington, D.C., later Friday evening.

Along with the governor, Iowa’s Republican U.S. Sens. Joni Ernst and Chuck Grassley are slated to join Pence at Friday’s events in Iowa.

According to the governor’s staff, Reynolds plans to resume her regular schedule of 11 a.m. press conferences next week.





Coronavirus in Iowa, live updates for May 8: Cedar Rapids to host virtual City Council meeting

4:43 p.m.: GOODWILL PLANS TO REOPEN 11 EASTERN IOWA RETAIL LOCATIONS

Goodwill of the Heartland will reopen 11 retail locations in Eastern Iowa next week, including all its Cedar Rapids stores, according to an announcement on the Goodwill Facebook page. Stores in Marion, Coralville, Iowa City, Washington, Bettendorf, Davenport and Muscatine also will resume business Monday, starting with accepting donations only.

Locations will be open to shoppers beginning Friday, May 15, with hours of 11 a.m. to 6 p.m. Monday through Saturday and noon to 5 p.m. Sunday.

All customers are required to wear face masks to enter the store. For more information, including safety guidelines, visit the Goodwill website.

3:02 p.m.: IOWA DNR URGES CAMPERS TO CHECK WEBSITE BEFORE TRAVEL

The Iowa Department of Natural Resources encourages visitors to recently reopened campgrounds to check the DNR website for temporary closures before traveling to any of the areas. Campgrounds started to open Friday on a walk-in, first-come, first-served basis for campers with self-contained restrooms, according to a news release.

Some parks and campgrounds have closures due to construction or other maintenance projects. Staff will monitor the areas closely, reminding visitors to follow physical distancing guidelines and other policies issued by the DNR earlier this week.

Some pit latrines in high-use areas will be open, but all other restrooms, drinking fountains and shower facilities will be closed. Park visitors are asked to use designated parking areas and follow all park signs.

The DNR’s reservation system for reservable campgrounds is available online, taking reservations for Monday and later.

Iowa has 68 state parks and four state forests, including hiking trails, lake recreation and camping. For more information, visit the DNR website.

10:23 a.m.: CEDAR RAPIDS TO HOST VIRTUAL CITY COUNCIL MEETING

The next Cedar Rapids City Council meeting will be hosted virtually. The meeting will be held May 12, beginning at noon. The livestream is available at the city’s Facebook page. Indexed videos can be accessed on the City of Cedar Rapids website.

The public is invited to provide comments, either by submitting written comments via email to cityclerk@cedar-rapids.org before the meeting or by joining the Zoom conference call after registering before 2 p.m. Tuesday. Registrants will receive an email with instructions to participate. Written comments received before 2 p.m. the day of the meeting will be given to City Council members before the event.

The public will only be invited to speak during designated public comment sections of the meeting. Please visit the City’s website for speaking guidelines. City Hall remains closed to the public. No in-person participation is available.

Tuesday’s meeting agenda will be posted to the website by 4 p.m. Friday.

MICHAEL BUBLE PERFORMANCES IN MOLINE, DES MOINES MOVED TO 2021

Michael Buble’s “An Evening with Michael Buble” Tour has rescheduled dates to 2021. The 26-date series of concerts will begin Feb. 6 in Salt Lake City and conclude March 25 in Jacksonville, Fla., according to a news release Friday.

Buble’s show at the TaxSlayer Center in Moline, Ill., has been moved to Feb. 20, 2021. He will perform at Wells Fargo Arena in Des Moines the following day.

Tickets for previously scheduled dates will be honored.

“I am so looking forward to getting back on stage,” Buble said in the release. “I’ve missed my fans and my touring family. Meantime, I hope everyone stays safe. We can all look forward to a great night out.”

Buble also just completed a series of Facebook Live shows while in quarantine with his family in Vancouver.

Comments: (319) 368-8679; kj.pilcher@thegazette.com





Man arrested in Texas faces murder charge in Iowa City shooting

IOWA CITY — An Iowa City man has been arrested in Texas in connection with the April 20 shooting death of Kejuan Winters.

Reginald Little, 44, was taken into custody Friday by the Lubbock County Sheriff’s Office, according to Iowa City police.

Little faces a charge of first-degree murder and is awaiting extradition back to Iowa City.

The shooting happened in an apartment at 1960 Broadway St. around 9:55 a.m. April 20. Police said gunfire could be heard during the call to police.

Officers found Winters, 21, of Iowa City, with multiple gunshot wounds. He died in the apartment.

Police said Durojaiya A. Rosa, 22, of Iowa City, and a woman were at the apartment and gave police a description of the shooter and said they heard him fighting with Winters before hearing gunshots.

Surveillance camera footage and cellphone records indicated Little was in the area before the shots were fired, police said.

Investigators also discovered Little and Rosa had been in communication about entering the apartment, and Rosa told police he and Little had planned to rob Winters.

Rosa also faces one count of first-degree murder.

The shooting death spurred three additional arrests.

Winters’ father, Tyris D. Winters, 41, of Peoria, Ill., and Tony M. Watkins, 39, of Iowa City, were arrested on attempted murder charges after confronting another person later that day in Coralville about the homicide, and, police say, shooting that person in the head and foot.

Police also arrested Jordan R. Hogan, 21, of Iowa City, for obstructing prosecution, saying he helped the suspect, Little, avoid arrest.

First-degree murder is a Class A felony punishable by an automatic life sentence.

Comments: (319) 339-3155; lee.hermiston@thegazette.com





Campgrounds reopen in Iowa Friday, see takers despite some health limitations

Some Eastern Iowans are ready to go camping.

With Gov. Kim Reynolds allowing campgrounds across the state to open Friday, some people wasted little time in heading outdoors.

“They’re already starting to fill up,” said Ryan Schlader of Linn County Conservation. “By about 7 this morning, we had a dozen at Squaw Creek Park. People were coming in bright and early to camp. We’re not surprised.”

Schlader said Linn County Conservation tried to have the campgrounds open at the county’s Squaw Creek, Morgan Creek and Pinicon Ridge parks at 5 a.m. Friday. He expected all of them would be busy.

“I think people were ready to go,” he said.

Lake Macbride State Park in Johnson County didn’t see quite as much of a rush for campsites, park manager Ron Puettmann reported Friday morning, saying he’d had six walk-ins for the park’s 42 campsites.

Camping this weekend will be done on a first-come, first-served basis. Sites won’t be available for reserved stays until next week, though online reservations can be made now, Puettmann said.

“I’m quite sure people were waiting anxiously to get on,” he said.

While Reynolds’ campground announcement came Wednesday, Schlader and Puettmann said they had no issues having the campgrounds ready for Friday.

Schlader said county staff have been in touch with the Iowa Department of Natural Resources and other county conservation boards to discuss protocols for reopening to ensure a safe experience for campers and employees.

“We anticipated at some point the order would be lifted,” Schlader said. “We were anticipating maybe May 15. The campgrounds were in good shape and ready to go.”

For now, camping comes with some limitations:

• Campers can camp only in a self-contained unit with a functioning restroom, such as a recreational vehicle.

• Shower houses with restrooms will remain closed for the time being.

• Campsites are limited to six people unless they are from the same household.

• No visitors are allowed at the campsites.

Puettmann said staffers and a DNR officer will be on hand to make sure guidelines are followed, but he didn’t anticipate enforcement would be an issue.

“For the most part, we’re going to allow people to police themselves,” he said.

It’s hard to gauge demand, Schlader said.

The weather isn’t yet ideal for camping, and some people might not be ready to camp, given the continuing coronavirus pandemic.

“There is a lot of uncertainty,” he said. “Do people feel like they need to get out and enjoy a camping experience within their own campsite, or do people still feel under the weather and think it’s not a good idea for my family to go right now? ... We just want this to be an option for people.”

Comments: (319) 339-3155; lee.hermiston@thegazette.com





New machines in Test Iowa initiative still unproven

DES MOINES — More than 20 days after Iowa signed a $26 million contract with a Utah company to expand testing in the state, the machines the firm supplied to run the samples still have not passed muster.

A time frame for completing the validation process for the Test Iowa lab machines is unknown, as the process can vary by machine, University of Iowa officials said Friday.

The validation process is undertaken to determine if the machines are processing tests accurately. To this point, the lab has processed the Test Iowa results using machines the State Hygienic Lab already had, officials told The Gazette.

Running side-by-side testing is part of the validation process. The lab then compares whether the machines yield the same results when the sample is run, officials said Friday. The side-by-side testing means the Test Iowa samples are being run at least twice to compare results.

The state does not break out how many of the 331,186 Iowans who had completed the coronavirus assessment at TestIowa.com by Friday have actually been tested. Test Iowa was initiated last month to ramp up testing of essential workers and Iowans showing COVID-19 symptoms. The state’s fourth drive-through location where people with appointments can be tested opened Thursday at the Kirkwood Continuing Education Training Center in Cedar Rapids.

On Friday, Iowa posted a fourth straight day of double-digit deaths from coronavirus, with the latest 12 deaths reported by the state Department of Public Health bringing the statewide toll to 243 since COVID-19 was first confirmed March 8 in Iowa.

State health officials reported another 398 Iowans tested positive for the respiratory ailment, bringing that count to 11,457 of the 70,261 residents who have been tested — a positive rate of more than 16 percent.

One in 44 Iowans has been tested for COVID-19, with 58,804 posting negative results, according to state data. A total of 4,685 people have recovered from the disease.

During a Thursday media briefing, Gov. Kim Reynolds told reporters a backlog of test results that occurred due to validation of Test Iowa equipment had been “caught up,” but some Iowans who participated in drive-through sites set up around the state indicated they still were awaiting results.

Reynolds spokesman Pat Garrett confirmed Thursday that “a very small percentage” of coronavirus test samples collected under the Test Iowa program could not be processed because they were “potentially damaged,” resulting in incomplete results.

There were 407 Iowans hospitalized for coronavirus-related illnesses and symptoms (34 of them admitted in the past 24 hours), with 164 being treated in intensive care units and 109 requiring ventilators to assist their breathing.

Health officials said the 12 deaths reported Friday were: three in Woodbury County, two in Linn County and one each in Black Hawk, Dallas, Dubuque, Jasper, Louisa, Muscatine and Scott counties. No other information about the COVID-19 victims was available from state data.

According to officials, 51 percent of the Iowans who have died from coronavirus have been male — the same percentage that tested positive.

Iowans over the age of 80 represent 46 percent of the COVID-19 victims, followed by 41 percent between 61 and 80.

Of those who have tested positive, state data indicates about 42 percent are age 18 to 40; 37 percent are 41 to 60; 14 percent are 61 to 80 and 5 percent are 81 or older.

Counties with the highest number of positive test results are Polk (2,150), Woodbury (1,532), Black Hawk (1,463) and Linn (813).

Earlier this week, state officials revamped the data available to the public at coronavirus.iowa.gov, with the new format no longer listing the age range of Iowans who died from coronavirus and providing information using a different timeline than before.

The governor did not hold a daily media briefing Friday due to scheduling conflicts created by Vice President Mike Pence’s trip to Iowa. Garrett said Reynolds would resume her COVID-19 briefings next week.

John McGlothlen and Zack Kucharski of The Gazette contributed to this report.





C.R. workplace shooting suspect turns self in after father drives him to Alabama police station

A man suspected of a workplace shooting last month at a vinyl window manufacturer in southwest Cedar Rapids turned himself in to authorities Friday.

Jamal Devonte Edwards, 26, has been wanted since two men were shot at Associated Materials, 3801 Beverly Rd. SW, the morning of April 9.

Cedar Rapids police had indicated Edwards was wanted in particular for the shooting of Mark Robertson, 36.

Edwards faces charges of attempted murder, intimidation with a dangerous weapon, going armed with intent and willful injury.

The U.S. Marshals Service helped locate Edwards, distributing a photo of him along the Gulf Coast. He was located in Mobile, Ala., when his father brought him to the Mobile Police Department so he could turn himself in, according to a Cedar Rapids police news release.

The April 9 shooting was reported at 5:03 a.m. after two employees were shot at Associated Materials. Both suffered non-life-threatening injuries, police said.

Police said at the time it appeared the shooter knew the two men.

Shawn Hardy, senior vice president of integrated products for Associated Materials, confirmed Edwards worked at the Cedar Rapids business, which gave him access to the building, but said he had been employed through a temp agency.





Pence’s Iowa visit underscores coronavirus worry

DES MOINES — In traveling to Iowa to call attention to the burdens COVID-19 brought to religious services and the food supply, Vice President Mike Pence unwittingly called attention to another issue: whether the White House itself is safe from the disease.

So far this week, two White House aides — President Donald Trump’s valet on Thursday, and Pence’s press secretary on Friday — have tested positive for the virus.

On Friday morning, Pence’s departure to Des Moines was delayed an hour as Air Force Two idled on a tarmac near Washington. Though Pence’s press secretary was not on the plane, White House physicians, through contact tracing, identified six other aides aboard who had been near her and pulled them from the flight. The White House later said the six had tested negative.

Trump, who identified the Pence aide as press secretary Katie Miller, said he was “not worried” about the virus in the White House.

Nonetheless, officials said they were stepping up safety protocols and were considering a mandatory mask policy for those in close contact with Trump and Pence.

The vice president and 10 members of his staff are given rapid coronavirus tests daily, and the president is also tested regularly.

Miller, who is married to Trump adviser Stephen Miller, had been in recent contact with Pence but not with the president. Pence is leader of the White House coronavirus task force and Katie Miller has handled the group’s communications.

After landing in Des Moines, Pence spoke to a group of faith leaders about the importance of resuming religious services, saying cancellations in the name of slowing the spread of the virus have “been a burden” for congregants.

His visit coincided with the state announcing 12 more deaths from the virus, a total of 243 in less than two months.

Pence spoke with the religious leaders and Republican officials during a brief visit. He also spoke later with agricultural and food company executives.

“It’s been a source of heartache for people across the country,” Pence told about a dozen people at the Church of the Way Presbyterian church in Urbandale.

Pence told the group that continued efforts to hold services online and in other ways “made incalculable difference in our nation seeing our way through these troubled times.”

Iowa is among many states where restrictions on in-person services are starting to ease. GOP Gov. Kim Reynolds, who joined both of the state’s Republican senators at the event, has instituted new rules that allow services to resume with restrictions.

At Friday’s event, some religious leaders expressed hesitation at resuming large gatherings, while others said they would begin holding services soon.

“We are pretty much in a position of uniformly believing that it’s too early to return to personal worship. It’s inadvisable at the moment particularly with rising case counts in communities where we are across the state,” said David Kaufman, rabbi of Temple B’nai Jeshurun in Des Moines.

The Rev. Terry Amann, of Church of the Way, said his church will resume services May 17 with chairs arranged so families can sit together but avoid the temptation to shake hands or offer hugs. He said hand sanitizer will be available.

A new poll by The University of Chicago Divinity School and the Associated Press-NORC Center for Public Affairs Research shows just 9 percent of Americans think in-person services should be allowed without restrictions, while 42 percent think they should be allowed with restrictions and 48 percent think they shouldn’t be allowed at all.

Pence later met with agriculture and food industry leaders. Iowa tops the nation in egg production and pork processing and is a top grower of corn and soybeans.

Meatpacking is among the state’s biggest employers, and companies have been working to restart operations after closing them because hundreds of their workers became infected.

As Pence touted the Trump administration’s announcement of the reopening of 14 meatpacking plants, including two of the worst hit by coronavirus infections in Perry and Waterloo, the union representing workers called for safer working conditions.

“Iowa’s meatpacking workers are not sacrificial lambs. They have been working tirelessly during the coronavirus pandemic to ensure families here and across the country have access to the food they need,” said the United Food and Commercial Workers Union in a statement.

The Associated Press and the McClatchy Washington Bureau contributed to this report.





Celebrating on a screen: Iowa universities hold first-ever online commencements

Iowa State University graduates who celebrated commencement Friday saw lots of caps and gowns, red-and-gold confetti and arenas packed with friends and family.

But none of those images were from this year — which now is defined by the novel coronavirus that has forced education online and put an end to large gatherings like graduation ceremonies.

Appearing in front of a red ISU screen Friday, College of Agriculture and Life Sciences Dean Daniel J. Robison addressed graduates like he usually would at commencement — but this time in a recorded message acknowledging the unprecedented circumstances keeping them apart.

“This year, because of the COVID crisis, we are unfortunately not all together for this happy occasion,” he said, pushing forward in a motivational tone by quoting famed ISU alumnus George Washington Carver.

“When you can do the common things in life in an uncommon way, you will command the attention of the world,” Robison said, citing Carver.

About 12,000 graduates across Iowa’s public universities this month are doing exactly that — capping their collegiate careers with never-before-attempted online-only commencement ceremonies, with each campus and its respective colleges attempting a variety of virtual celebration methods.

ISU and the University of Iowa are attempting some form of socially distanced, livestreamed convocation with countdown clocks and virtual confetti. All three campuses, including the University of Northern Iowa, have posted online recorded messages, videos and slides acknowledging individual graduates.

Some slides include photos, thank-yous, quotes and student plans for after graduation.

UNI, which didn’t try any form of a live virtual ceremony, instead created a graduation website that went live Thursday. That site hosts an array of recorded video messages — including one from UNI President Mark Nook, who, standing alone behind a podium on campus clad in traditional academic regalia, recognized his campus’ 1,500-some spring graduates and their unusual challenges.

“We know the loss you feel in not being able to be on campus to celebrate this time with your friends, faculty and staff,” Nook said. “To walk around campus in your robe and to take those pictures with friends and family members … The loss is felt by many of us as well.”

He reminded those listening that this spring’s UNI graduates — like those at the UI and ISU — can participate in an upcoming in-person commencement ceremony.

And although students were allowed to return caps and gowns they ordered for their canceled walks across the stage, some kept them as keepsakes. The campuses offered other tokens of remembrance as well, including “CYlebration” gift packages ISU sent to graduates in April stuffed with a souvenir tassel, diploma cover, and streamer tube — to make up for the confetti that won’t be falling on graduation caps from the Hilton Coliseum rafters.

In addition to the recorded messages from 17 UI leaders — including President Bruce Harreld — the campus solicited parent messages, which will be included in the live virtual ceremonies.

To date, about 3,100 of the more than 5,400 UI graduates have RSVP’d to participate in the ceremony, which spokeswoman Anne Bassett said is a required affirmation from the students to have their names read.

“Students do not have to sign up to watch,” she said. “So there’s no way at this time to predict how many will do so.”

Despite the historic nature of the first online-only commencement ceremonies — forever bonding distanced graduates through the shared experience — UI graduate Omar Khodor, 22, said it’s a club he would have liked to avoid.

“I’d definitely prefer not to be part of that group,” the environmental science major said, sharing disappointment over the education, experiences and celebrations he lost to the pandemic.

“A lot of students like myself, we’re upset, but we’re not really allowed to be upset given the circumstances,” Khodor said. “You have this sense that something is unfair, that something has been taken from you. But you can’t be mad about it at all.”

‘Should I Dance Across the Stage?’

Life is too short to dwell on what could have been or what should have been — which sort of captures graduate Dawn Hales’ motivation to get an ISU degree.

The 63-year-old Ames grandmother calls herself the “oldest BSN Iowa State grad ever.”

“It’s the truth, because we’re only the second cohort to graduate,” Hales said. “I’ll probably be the oldest for a while.”

ISU began offering a Bachelor of Science in nursing degree in fall 2018 for registered nurses hoping to advance their careers — like Hales, who spent years in nursing before becoming director of nursing at Accura Healthcare, a skilled nursing and rehabilitation center in Ames.

In addition to wanting more education, Hales said, she felt like the “odd man out” in her red-and-gold family — with her husband, three sons and their wives all earning ISU degrees. She earned an associate degree and became a registered nurse with community college training.

“I was director of nursing at different facilities, but I did not have a four-year degree,” she said. “I always wanted to get my BSN.”

So in January 2019, she started full-time toward her three-semester pursuit of a BSN — even as she continued working. And her education took a relevant and important turn when COVID-19 arrived.

“My capstone project was infection control,” she said, noting her focus later sharpened to “infection control and crisis management” — perfect timing to fight the coronavirus, which has hit long-term care facilities particularly hard.

“We were hyper vigilant,” Hales said of her facility, which has yet to report a case of COVID-19. “I think we were probably one of the first facilities that pretty much shut down and started assessing our staff when they would come in.”

Hales said she was eager to walk in her first university graduation and was planning antics for it with her 10-year-old granddaughter.

“We were trying to think, should I dance across the stage?” Hales said. “Or would I grab a walker and act like an old lady going across the stage?

“She was trying to teach me to do this ‘dab’ move,” Hales said. “I said, ‘Honey, I cannot figure that out.’”

In the end, Hales watched the celebration online instead. She did, however, get a personalized license plate that reads, “RN2BSN.”

In From Idaho To Exult ‘In Our Own Way’

Coming from a family-run dairy farm in Jerome, Idaho, EllieMae Millenkamp, 22, is the first in her family to graduate college.

Although music is her passion, Millenkamp long expected to study at an agriculture school — but Colorado State was her original choice.

Then, while visiting family in Iowa during a cousin’s visit to ISU, she fell in love with the Ames campus and recalibrated her academic path.

While at ISU, the musical Millenkamp began writing more songs and performing more online, which led to in-person shows and a local band.

And then, during her junior year, a talent scout reached out to invite her to participate in an audition for NBC’s “The Voice.” That went well and Millenkamp, in the summer before her senior year, moved to Los Angeles and made it onto the show.

She achieved second-round status before being bumped, but the experience offered her lifelong friendships and connections and invigorated her musical pursuits — which have been slowed by COVID-19. Shows have been canceled in now-idled bars.

Millenkamp went back to Idaho to be with her family, like thousands of her peers also did with their families, when the ISU campus shut down.

After graduation she plans on returning and working the family farm again until her musical career has the chance to regain momentum.

But she recently returned to Ames for finals. And she and some friends, also in town, plan to celebrate graduation, even if not with an official cap and gown.

“We’ll probably have a bonfire and all hang out,” she said. “We’ll celebrate in our own way.”

Seeking Closure After Abrupt Campus Exits

Most college seniors nearing graduation get to spend their academic hours focusing on their major and interests, wrapping their four or sometimes five years with passion projects and capstone experiences.

That was Omar Khodor’s plan — with lab-based DNA sequencing on tap, along with a geology trip and policy proposal he planned to present to the Iowa Legislature. But all that got canceled — and even some requirements were waived since COVID-19 made them impossible.

“There were still a lot of things to wrap up,” he said. “A lot of things I was looking forward to.”

He’s ending the year with just three classes to finish and “absolutely” would have preferred to have a fuller plate.

But Khodor’s academic career isn’t over. He’s planning to attend law school in the fall at the University of Pennsylvania, where he’ll pursue environmental law. But this spring has diminished his enthusiasm, with the question lingering of whether in-person courses will return to campus soon.

If they don’t, he’s still leaning toward enrolling, in part because of all the work that goes into applying and getting accepted, which he’s already done.

“But online classes are definitely less fulfilling, less motivating. You feel like you learn less,” he said. “So it will kind of be a tossup. There’ll be some trade-offs involved in what I would gain versus what I would be paying for such an expensive endeavor like law school.”

As for missing a traditional college commencement, Khodor said he will, even though he plans to participate in the virtual alternative.

“Before it got canceled, I didn’t think that I was looking forward to it as much as I actually was,” he said.

Not so much for the pomp and circumstance, but for the closure, which none of the seniors got this year. When the universities announced no one would return to campus this semester, students were away on spring break.

They had already experienced their last in-person class, their last after-class drink, their last cram session, their last study group, their last lecture, their last Iowa Memorial Union lunch — and they didn’t even know it.

“So many of us, we won’t have closure, and that can kind of be a difficult thing,” he said.

Comments: (319) 339-3158; vanessa.miller@thegazette.com

Online Celebrations

For a list of commencement times and virtual celebrations, visit:

The University of Iowa’s commencement site at https://commencement.uiowa.edu/

Iowa State University’s commencement site at https://virtual.graduation.iastate.edu/

University of Northern Iowa’s commencement site at https://vgrad.z19.web.core.windows.net/uni/index.html




io

Coronavirus in Iowa, live updates for May 9: 214 more positive tests reported

11 a.m. Iowa sees 214 more positive tests for coronavirus

The Iowa Department of Public Health on Saturday reported nine more deaths from COVID-19, for a total of 252 since March 8.

An additional 214 people tested positive for the virus, bringing the state’s total to 11,671.

A total of 71,476 Iowans have been tested for COVID-19, the department reported.

With Saturday’s new figures from the Department of Public Health, these are the top 10 counties in terms of total cases:

• Polk — 2,194

• Woodbury — 1,554

• Black Hawk — 1,477

• Linn — 819

• Marshall — 702

• Dallas — 660

• Johnson — 549

• Muscatine — 471

• Tama — 327

• Louisa — 282.




io

Iowa coronavirus hospitalizations drop for second consecutive day

For the second consecutive day the number of Iowa patients hospitalized with COVID-19 has dropped.

The Iowa Department of Public Health reported Saturday that 402 people were hospitalized with the coronavirus, down five from the previous day, and down 15 from its current peak of 417 on Thursday.

Saturday’s totals marked the first back-to-back decreases in COVID-19 hospitalizations since the figures began being tracked.

Nine deaths in Iowa were recorded Saturday, according to the Department of Public Health, bringing the total to 252. That snapped a streak of four consecutive days in which 10 or more deaths were recorded in the state.

Four of the deaths were in Polk County, bringing Polk’s total to 58 — matching Linn County’s for the most in the state.

Saturday was the first time since Monday that no deaths in Linn County were reported.

Two deaths were in Jasper County, and one each in Johnson, Muscatine and Tama counties.

Four of those who died were 81 years of age and older, three were 61 to 80 and two were aged 41 to 60.

Saturday’s report also showed there now have been a total of 29 outbreaks recorded in long-term care facilities statewide.

Including Saturday’s latest figures from the Department of Public Health — with 214 positive cases, for a total of 11,671 — these are the top 10 Iowa counties in terms of total cases:

• Polk — 2,194

• Woodbury — 1,554

• Black Hawk — 1,477

• Linn — 819

• Marshall — 702

• Dallas — 660

• Johnson — 549

• Muscatine — 471

• Tama — 327

• Louisa — 282.

More than 71,000 Iowans — about one in 43 — have been tested, and 16.3 percent of those tested have been positive, according to the state.

Forty-six percent of Iowa’s deaths have been among those 81 and older, and 87 percent among those 61 and older. Fifty-one percent have been male.

Beginning this past Friday, Gov. Kim Reynolds permitted more businesses to partially reopen.

“I’m proud to say that Iowans do what they always do and they responded,” she said at her Thursday news conference, her most recent. “So since we’ve kind of really accomplished what we were trying to do, ... now we have shifted our focus from mitigation and resources to managing and containing virus activity as we begin to open Iowa back up.”

Reynolds met with President Donald Trump on Wednesday at the White House to discuss the pandemic and mitigation strategies in the state.

Vice President Mike Pence visited Iowa Friday, when he met with faith leaders and agricultural and food company executives.

Comments: (319) 368-8857; jeff.linder@thegazette.com




io

Members – GiveWP Integration

Announcement of the Members - Give Integration add-on that creates a nicer UI when the GiveWP and Members plugins are active.




io

Members – ACF Integration

Announcement of the Members - ACF Integration plugin, which creates custom capabilities for the Advanced Custom Fields plugin.




io

Members – EDD Integration

Introducing an add-on plugin for Members that integrates the Easy Digital Downloads plugin roles and capabilities.




io

Members – WooCommerce Integration

An add-on plugin that integrates WooCommerce's roles and capabilities into the Members user role editor.




io

Members – Block Permissions

Announcement of Members - Block Permissions, a WordPress plugin for showing/hiding content using the block editor (Gutenberg).




io

Exhale Version 2.2.0

Release announcement of version 2.2.0 of the Exhale WordPress theme.




io

I’ve shot at this location a few times but for some reason...



I’ve shot at this location a few times but for some reason I’ve never seen it from the other side. Literal proof that shooting with other creatives gives you new perspective. (at Toronto, Ontario)




io

Thanks for all the positive support and reception to my...



Thanks for all the positive support and reception to my Lightroom presets so far, especially to those who pulled the trigger and became my first customers! I’d love to hear your feedback once you try them out!
.
Still time to enter the giveaway or to take advantage of the 50% sale! See my last post for full details and the link in my profile. ❤️ (at Toronto, Ontario)




io

Bricks are better black. ◾️ (at Toronto, Ontario)



Bricks are better black. ◾️ (at Toronto, Ontario)




io

Lights, camera, action. — A few more days left to get 50% off...



Lights, camera, action.

A few more days left to get 50% off my custom Lightroom presets! Link in profile. (at Toronto, Ontario)




io

This trip solidified my conviction to learning photography. A...



This trip solidified my conviction to learning photography. A lot has happened since this shot was taken.
Can you pinpoint the moment you decided to pursue photography? (at Toronto, Ontario)




io

Self-promotion

The world has changed. Everything we do is more immediately visible to others than ever before, but much remains the same; the relationships we develop are as important as they always were. This post is a few thoughts on self-promotion, and how to have good relationships as a self-publisher.

Meeting people face to face is ace. They could be colleagues, vendors, or clients; at conferences, coffee shops, or meeting rooms. The hallway and bar tracks at conferences are particularly great. I always come away with a refreshed appreciation for meatspace. However, most of our interactions take place over the Web. On the Web, the lines separating different kinds of relationships are a little blurred. The company trying to get you to buy a product or conference ticket uses the same medium as your friends.

Freelancers and small companies (and co-ops!) can have as much of an impact as big businesses. ‘I publish therefore I am’ could be our new mantra. Hence this post, in a way, although I confess I have discussed these thoughts with friends and thought it was about time I kept my promise to publish them.

Publishing primarily means text and images. Text is the most prevalent. However, much more meaning is conveyed non-verbally. ‘It’s not what you say, it’s how you say it.’

Text can contain non-verbal elements like style — either handwritten or typographic characters — and emoticons, but we don’t control style in Twitter, email, or feeds. Or in any of the main situations where people read what we write (unless it’s our own site). Emoticons are often used in text to indicate tone, pitch, inflection, and emotion like irony, humour, or dismay. They plug gaps in the Latin alphabet’s scope that could be filled with punctuation like the sarcasm mark. By using them, we affirm how important non-verbal communication is.

The other critical non-verbal communication around text is karma. Karma is our reputation, our social capital with our audience of peers, commentators, and customers. It has two distinct parts: Personality and professional reputation. ‘It’s not what was said, it’s who said what.’

So, after that quick brain dump, let me recap:

  • Relationships are everything.
  • We publish primarily in text without the nuance of critical non-verbal communication.
  • Text has non-verbal elements like style and emoticons, but we can only control the latter.
  • Context is also non-verbal communication. Context is karma: Character and professional reputation.

Us Brits are a funny bunch. Traditionally reserved. Hyperbole-shy. At least, in public. We use certain extreme adjectives sparingly for the most part, and usually avoid superlatives if at all possible. We wince a little if we forget and get super-excited. We sometimes prefer ‘spiffing’ accompanied by a wry, ironic smile over an outright ‘awesome’. Both are genuine — one has an extra layer in the inflection cake. However, we take great displeasure in observing blunt marketing messages that try to convince us something is true with massive, lobe-smacking enthusiasm, and some sort of exaggerated adjective-osmosis effect. We poke fun at attempts to be overly cool. We expect a decent level of self-awareness and ring of honesty from people who would sell us stuff. The Web is no exception. In fact, I may go so far as to say that the sensibilities of the Web are fairly closely aligned with British sensibilities. Without, of course, any of our crippling embarrassment. In an age when promoting oneself on the Web is almost required for designers, that’s no bad thing. After all, running smack bang through the middle of the new marketing arts is a large dose of reality; we’re just a bunch of folks telling our story. No manipulation, cool-kid feigned nonchalance, or lobe-smacking enthusiasm required.

Consider what the majority of designers do to promote themselves in this brave new maker-creative culture. People like my friend, Elliot Jay Stocks: making his own magazine, making music, distributing WordPress themes, and writing about his experiences. Yes, it is important for him that he has an audience, and yes, he wants us to buy his stuff, but no, he won’t try to impress or trick us into liking him. It’s our choice. Compare this to traditional advertising that tries to appeal to your demographic with key phrases from your tribe, life-style pitches, and the usual raft of Freudian manipulations. (Sarcasm mark needed here, although I do confess to a soft spot for the more visceral and kitsch Freudian manipulations.)

There is a middle ground between the two though. A dangerous place full of bad surprises: The outfit that seems like a human being. It appears to publish just like you would. They want money in exchange for their amazing stuff they’re super-duper proud of. Then, you find out they’re selling it to you at twice the price it is in the States, or that it crashes every time it closes, or has awful OpenType support. You find out the human being was really a corporate cyborg who sounds like you, but is not of you, and it’s impervious to your appeals to human fairness. Then there are the folks who definitely are human, after all they’re only small, and you know their names. All the non-verbal communication tells you so. Then you peek a little closer —  you see the context — and all they seem to do is talk about themselves, or their business. Their interactions are as carefully crafted as the big companies, and they treat their audience as a captive market. Great spirit forefend they share the bandwidth by celebrating anyone else. They sound like one of us, but act like one of them. Their popularity is inversely proportional to their humanity.

Extreme examples, I know. This is me exploring thoughts though, and harsh light helps define the edges. Feel free to sound off if it offends, but mind your non-verbal communication. :)

That brings me to self-promotion versus self-aggrandisement; there’s a big difference between the two. As independent designers and developer-type people, self-promotion is good, necessary, and often mutually beneficial. It’s about goodwill. It connects us to each other and lubricates the Web. We need it. Self-aggrandisement is coarse, obvious, and often an act of denial; the odour of insecurity or arrogance is nauseating. It is to be avoided.

If you consider the difference between a show-off and a celebrant, perhaps it will be clearer what I’m reaching for:

The very best form of self-promotion is celebration. To celebrate is to share the joy of what you do (and critically also celebrate what others do) and invite folks to participate in the party. To show off is a weakness of character — an act that demands acknowledgement and accolade before the actor can feel the tragic joy of thinking themselves affirmed. To celebrate is to share joy. To show off is to yearn for it.

It’s as tragic as the disdainful, casual arrogance of criticising the output of others less accomplished than oneself. Don’t be lazy now. Critique, if you please. Be bothered to help, or if you can’t hold back, have a little grace by being discreet and respectful. If you’re arrogant enough to think you have the right to treat anyone in the world badly, you grant them the right to reciprocate. Beware.

Celebrants don’t reserve their bandwidth for themselves. They don’t treat their friends like a tricky audience who may throw pennies at them at the end of the performance. They treat them like friends. It’s a pretty simple way of measuring whether what you publish is good: would I do/say/act the same way with my friends? Human scales are always the best scales.

So, this ends. I feel very out of practice at writing. It’s hard after a hiatus. These are a few thoughts that still feel partially formed in my mind, but I hope there was a tiny snippet or two in there that fired off a few neurons in your brain. Not too many, though, it’s early yet. :)




io

Facebook Live Streaming and Audio/Video Hosting connected to Auphonic

Facebook is not only a social media giant; the company also provides valuable tools for broadcasting. Today we are releasing a connection to Facebook, which allows you to use the Facebook tools for video/audio production and publishing within Auphonic and our connected services.

The following workflows are possible with Facebook and Auphonic:
  • Use Facebook for live streaming, then import, process and distribute the audio/video with Auphonic.
  • Post your Auphonic audio or video productions directly to the news feed of your Facebook Page or User.
  • Use Facebook as a general media hosting service and share the link or embed the audio/video on any webpage (also visible to non-Facebook users).

Connect to Facebook

First, connect a Facebook account at our External Services Page by clicking the "Facebook" button.

Select whether you want to connect your personal Facebook User or a Facebook Page:

It is always possible to remove or edit the connection in your Facebook Settings (Tab Business Integrations).

Import (Live) Videos from Facebook to Auphonic

Facebook Live is an easy (and free) way to stream live videos:

We implemented an interface to use Facebook as an Incoming External Service. Please select a (live or non-live) video from your Facebook Page/User as the source of a production and then process it with Auphonic:

This workflow allows you to use Facebook for live streaming, import and process the audio/video with Auphonic, then publish a podcast and video version of your live video to any of our connected services.
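
The same import can also be scripted against the Auphonic API: an incoming external service is referenced when a production is created. The following is only a rough sketch; both values in the request body are placeholders (the service UUID is the one listed next to your Facebook connection on the External Services Page), and the exact field names and the way Facebook videos are listed and identified are described in the Auphonic API documentation:

# NOTE: placeholder values for illustration only - see the Auphonic API documentation for all fields.
curl -X POST -H "Content-Type: application/json" \
    https://auphonic.com/api/productions.json \
    -u username:password \
    -d '{
            "service": "UUID-OF-YOUR-FACEBOOK-CONNECTION",
            "input_file": "NAME-OR-ID-OF-THE-FACEBOOK-VIDEO",
            "metadata": {"title": "Live stream from last night"}
        }'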

Export from Auphonic to Facebook

Similar to YouTube, it is possible to use Facebook for media file hosting.
Please add your Facebook Page/User as an External Service in your Productions or Presets to upload the Auphonic results directly to Facebook:

Options for the Facebook export:
  • Distribution Settings
    • Post to News Feed: The exported video is posted directly to your news feed / timeline.
    • Exclude from News Feed: The exported video is visible in the videos tab of your Facebook Page/User (see for example Auphonic's video tab), but it is not posted to your news feed (you can do that later if you want).
    • Secret: Only you can see the exported video; it is not shown in the Facebook video tab and it is not posted to your news feed (you can do that later if you want).
  • Embeddable
    Choose whether the exported video should be embeddable in third-party websites.

It is always possible to change the distribution/privacy and embeddable options later directly on Facebook. For example, you can export a video to Facebook as Secret and publish it to your news feed whenever you want.


If your production is audio-only, we automatically generate a video track from the Cover Image and any Chapter Images.
Alternatively, you can select an Audiogram Output File if you want to add an Audiogram (audio waveform visualization) to your Facebook video - for details please see Auphonic Audiogram Generator.

Auphonic Title and Description metadata fields are exported to Facebook as well.
If you add Speech Recognition to your production, we create an SRT file with the speech recognition results and add it to your Facebook video as captions.
See the example below.

Facebook Video Hosting Example with Audiogram and Automatic Captions

Facebook can be used as a general video hosting service: even if you export videos as Secret, you will get a direct link to the video which can be shared or embedded in any third-party websites. Users without a Facebook account are also able to view these videos.

In the example below, we automatically generate an Audiogram Video for an audio-only production, use our integrated Speech Recognition system to create captions and export the video as Secret to Facebook.
Afterwards it can be embedded directly into this blog post (enable Captions if they don't show up by default) - for details please see How to embed a video:

It is also possible to just use the generated result URL from Auphonic to share the link to your video (also visible to non-Facebook users):
https://www.facebook.com/auphonic/videos/1687244844638091/

Important Note:
Facebook needs some time to process an exported video (up to a few minutes) and the direct video link won't work until the processing is finished - please try again a bit later!
On Facebook Pages, you can see the processing progress in your Video Library.

Conclusion

Facebook has many broadcasting tools to offer and is a perfect addition to Auphonic.
Both systems and our other external services can be used to create automated processing and publishing workflows. Furthermore, export to and import from Facebook are also fully supported in the Auphonic API.
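
Analogous to the import sketch above, a connected Facebook Page/User can be attached as an outgoing service when a production is created via the API, so that the result is posted to Facebook automatically. Again, this is just an outline: the UUID below is a placeholder for the one shown next to your Facebook connection on the External Services Page, and the full set of production fields is described in the API documentation:

# NOTE: placeholder values for illustration only - see the Auphonic API documentation for all fields.
curl -X POST -H "Content-Type: application/json" \
    https://auphonic.com/api/productions.json \
    -u username:password \
    -d '{
            "input_file": "https://mydomain.com/my_recording.wav",
            "metadata": {"title": "My Episode"},
            "outgoing_services": [
                {"uuid": "UUID-OF-YOUR-FACEBOOK-CONNECTION"}
            ]
        }'

Once the production is started and finished, the exported video appears on Facebook according to the distribution settings described above.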

Please contact us if you have any questions or further ideas!




io

Auphonic Audio Inspector Release

At the Subscribe 9 Conference, we presented the first version of our new Audio Inspector:
The Auphonic Audio Inspector is shown on the status page of a finished production and displays details about what our algorithms are changing in audio files.

A screenshot of the Auphonic Audio Inspector on the status page of a finished Multitrack Production.
Please click on the screenshot to see it in full resolution!

You can zoom and scroll within audio waveforms, and the Audio Inspector can be used to manually check production results and input files.

In this blog post, we will discuss the usage and all current visualizations of the Inspector.
If you just want to try the Auphonic Audio Inspector yourself, take a look at this Multitrack Audio Inspector Example.

Inspector Usage

Control bar of the Audio Inspector with scrollbar, play button, current playback position and length, button to show input audio file(s), zoom in/out, toggle legend and a button to switch to fullscreen mode.

Seek in Audio Files
Click or tap inside the waveform to seek in files. The red playhead will show the current audio position.
Zoom In/Out
Use the zoom buttons ([+] and [-]), the mouse wheel or zoom gestures on touch devices to zoom in/out the audio waveform.
Scroll Waveforms
If zoomed in, use the scrollbar or drag the audio waveform directly (with your mouse or on touch devices).
Show Legend
Click the [?] button to show or hide the Legend, which describes details about the visualizations of the audio waveform.
Show Stats
Use the Show Stats link to display Audio Processing Statistics of a production.
Show Input Track(s)
Click Show Input to show or hide input track(s) of a production: now you can see and listen to input and output files for a detailed comparison. Please click directly on the waveform to switch/unmute a track - muted tracks are grayed out slightly:

Showing four input tracks and the Auphonic output of a multitrack production.

Please click on the fullscreen button (bottom right) to switch to fullscreen mode.
Now the audio tracks use all available screen space, so you can see all waveform details:

A multitrack production with output and all input tracks in fullscreen mode.
Please click on the screenshot to see it in full resolution.

In fullscreen mode, it’s also possible to control playback and zooming with keyboard shortcuts:
Press [Space] to start/pause playback, use [+] to zoom in and [-] to zoom out.

Singletrack Algorithms Inspector

First, we discuss the analysis data of our Singletrack Post Production Algorithms.

The audio levels of output and input files, measured according to the ITU-R BS.1770 specification, are displayed directly as the audio waveform. Click on Show Input to see the input and output file. Only one file is played at a time; click directly on the Input or Output track to unmute a file for playback:

Singletrack Production with opened input file.
See the first Leveler Audio Example to try the audio inspector yourself.

Waveform Segments: Music and Speech (gold, blue)
Music/Speech segments are displayed directly in the audio waveform: Music segments are plotted in gold/yellow, speech segments in blue (or light/dark blue).
Waveform Segments: Leveler High/No Amplification (dark, light blue)
Speech segments can be displayed in normal, dark or light blue: Dark blue means that the input signal was very quiet and contains speech; therefore, the Adaptive Leveler has to use a high amplification value in this segment.
In light blue regions, the input signal was very quiet as well, but our classifiers decided that the signal should not be amplified (breathing, noise, background sounds, etc.).

Yellow/orange background segments display leveler fades.

Background Segments: Leveler Fade Up/Down (yellow, orange)
If the volume of an input file changes quickly, the Adaptive Leveler volume curve will increase/decrease quickly as well (= fade); such fades should be placed in speech pauses. Otherwise, if fades are too slow or occur during active speech, one will hear pumping speech artifacts.
Exact fade regions are plotted as yellow (fade up, volume increase) and orange (fade down, volume decrease) background segments in the audio inspector.

Horizontal red lines display noise and hum reduction profiles.

Horizontal Lines: Noise and Hum Reduction Profiles (red)
Our Noise and Hiss Reduction and Hum Reduction algorithms segment the audio file in regions with different background noise characteristics, which are displayed as red horizontal lines in the audio inspector (top lines for noise reduction, bottom lines for hum reduction).
Then a noise print is extracted in each region and a classifier decides if and how much noise reduction is necessary - this is plotted as a value in dB below the top red line.
The hum base frequency (50Hz or 60Hz) and the strength of all its partials are also classified in each region; the value in Hz above the bottom red line indicates the base frequency, and a missing bottom red line means no hum reduction is necessary.

You can try the singletrack audio inspector yourself with our Leveler, Noise Reduction and Hum Reduction audio examples.

Multitrack Algorithms Inspector

If our Multitrack Post Production Algorithms are used, additional analysis data is shown in the audio inspector.

The audio levels of the output and all input tracks are measured according to the ITU-R BS.1770 specification and are displayed directly as the audio waveform. Click on Show Input to see all the input files with track labels and the output file. Only one file is played at a time; click directly into the track to unmute a file for playback:

Input Tracks: Waveform Segments, Background Segments and Horizontal Lines
Input tracks are displayed below the output file including their track names. The same data as in our Singletrack Algorithms Inspector is calculated and plotted separately in each input track:
Output Waveform Segments: Multiple Speakers and Music
Each speaker is plotted in a separate, blue-like color - in the example above we have 3 speakers (normal, light and dark blue) and you can see directly in the waveform when and which speaker is active.
Audio from music input tracks is always plotted in gold/yellow in the output waveform; please try not to mix music and speech parts in music tracks (see also Multitrack Best Practice)!

You can try the multitrack audio inspector yourself with our Multitrack Audio Inspector Example or our general Multitrack Audio Examples.

Ducking, Background and Foreground Segments

Music tracks can be set to Ducking, Foreground, Background or Auto - for more details please see Automatic Ducking, Foreground and Background Tracks.

Ducking Segments (light, dark orange)
In Ducking, the level of a music track is reduced if one of the speakers is active, which is plotted as a dark orange background segment in the output track.
Foreground music parts, where no speaker is active and the music track volume is not reduced, are displayed as light orange background segments in the output track.
Background Music Segments (dark orange background)
Here the whole music track is set to Background and won’t be amplified when speakers are inactive.
Background music parts are plotted as dark orange background segments in the output track.
Foreground Music Segments (light orange background)
Here the whole music track is set to Foreground and its level won’t be reduced when speakers are active.
Foreground music parts are plotted as light orange background segments in the output track.

You can try the ducking/background/foreground audio inspector yourself: Fore/Background/Ducking Audio Examples.

Audio Search, Chapters Marks and Video

Audio Search and Transcriptions
If our Automatic Speech Recognition Integration is used, a time-aligned transcription text will be shown above the waveform. You can use the search field to search and seek directly in the audio file.
See our Speech Recognition Audio Examples to try it yourself.
Chapters Marks
Chapter Mark start times are displayed in the audio waveform as black vertical lines.
The current chapter title is written above the waveform - see “This is Chapter 2” in the screenshot above.

A video production with output waveform, input waveform and transcriptions in fullscreen mode.
Please click on the screenshot to see it in full resolution.

Video Display
If you add a Video Format or Audiogram Output File to your production, the audio inspector will also show a separate video track in addition to the audio output and input tracks. The video playback will be synced to the audio of output and input tracks.

Supported Audio Formats

We use the native HTML5 audio element for playback and the aurora.js javascript audio decoders to support all common audio formats:

WAV, MP3, AAC/M4A and Opus
These formats are supported in all major browsers: Firefox, Chrome, Safari, Edge, iOS Safari and Chrome for Android.
FLAC
FLAC is supported in Firefox, Chrome, Edge and Chrome for Android - see FLAC audio format.
In Safari and iOS Safari, we use aurora.js to directly decode FLAC files in javascript, which works but uses much more CPU compared to native decoding!
ALAC
ALAC is not supported by any browser so far; therefore, we use aurora.js to directly decode ALAC files in javascript. This works but uses much more CPU compared to native decoding!
Ogg Vorbis
Only supported by Firefox, Chrome and Chrome for Android - for details please see Ogg Vorbis audio format.

We suggest using a recent Firefox or Chrome browser for best performance.
Decoding FLAC and ALAC files also works in Safari and iOS with the help of aurora.js, but javascript decoders need a lot of CPU and they sometimes have problems with exact scrolling and seeking.

Please see our blog post Audio File Formats and Bitrates for Podcasts for more details about audio formats.

Mobile Audio Inspector

Multiple responsive layouts were created to optimize the screen space usage on Android and iOS devices, so that the audio inspector is fully usable on mobile devices as well: tap into the waveform to set the playhead location, scroll horizontally to scroll waveforms, scroll vertically to scroll between tracks, use zoom gestures to zoom in/out, etc.

Unfortunately the fullscreen mode is not available on iOS devices (thanks to Apple), but it works on Android and is a really great way to inspect everything using all the available screen space:

Audio inspector in horizontal fullscreen mode on Android.

Conclusion

Try the Auphonic Audio Inspector yourself: take a look at our Audio Example Page or play with the Multitrack Audio Inspector Example.

The Audio Inspector is shown for all productions created in our Web Service.
It can be used to manually check production results/input files and to send us detailed feedback about audio processing results.

Please let us know if you have any feedback or questions - more visualizations will be added in the future!







io

Auphonic Add-ons for Adobe Audition and Adobe Premiere

The new Auphonic Audio Post Production Add-ons for Adobe allow you to use the Auphonic Web Service directly within Adobe Audition and Adobe Premiere (Mac and Windows):

Audition Multitrack Editor with the Auphonic Audio Post Production Add-on.
The Auphonic Add-on can be embedded directly inside the Adobe user interface.


It is possible to export tracks/projects from Audition/Premiere and process them with the Auphonic audio post production algorithms (loudness, leveling, noise reduction - see Audio Examples), use our Encoding/Tagging, Chapter Marks, Speech Recognition and trigger Publishing with one click.
Furthermore, you can import the result file of an Auphonic Production into Audition/Premiere.


Download the Auphonic Audio Post Production Add-ons for Adobe:

Auphonic Add-on for Adobe Audition

Audition Waveform Editor with the Auphonic Audio Post Production Add-on.
Metadata, Marker times and titles will be exported to Auphonic as well.

Export from Audition to Auphonic

You can upload the audio of your current active document (a Multitrack Session or a Single Audio File) to our Web Service.
In the case of a Multitrack Session, a mixdown will be computed automatically to create a Singletrack Production in our Web Service.
Unfortunately, it is not possible to export the individual tracks from Audition, which could otherwise be used to create Multitrack Productions.

Metadata and Markers
All metadata (see tab Metadata in Audition) and markers (see tab Marker in Audition and the Waveform Editor Screenshot) will be exported to Auphonic as well.
Marker times and titles are used to create Chapter Marks (Enhanced Podcasts) in your Auphonic output files.
Auphonic Presets
You can optionally choose an Auphonic Preset to use previously stored settings for your production.
Start Production and Upload & Edit Buttons
Click Upload & Edit to upload your audio and create a new Production for further editing. After the upload, a web browser will be started to edit/adjust the production and start it manually.
Click Start Production to upload your audio, create a new Production and start it directly without further editing. A web browser will be started to see the results of your production.
Audio Compression
Uncompressed Multitrack Sessions or audio files in Audition (WAV, AIFF, RAW, etc.) will be compressed automatically with lossless codecs to speed up the upload time without a loss in audio quality.
FLAC is used as the lossless codec on Windows and Mac OS (>= 10.13); older Mac OS systems (< 10.13) do not support FLAC and use ALAC instead.

Import Auphonic Productions in Audition

To import the result of an Auphonic Production into Audition, choose the corresponding production and click Import.
The result file will be downloaded from the Auphonic servers and can be used within Audition. If the production contains multiple Output File Formats, the output file with the highest bitrate (or uncompressed/lossless if available) will be chosen.

Auphonic Add-on for Adobe Premiere

Premiere Video Editor with the Auphonic Audio Post Production Add-on.
The Auphonic Add-on can be embedded directly inside the Adobe Premiere user interface.

Export from Premiere to Auphonic

You can upload the audio of your current Active Sequence in Premiere to our Web Service.

We will automatically create an audio-only mixdown of all enabled audio tracks in your current Active Sequence.
Video/Image tracks are ignored: no video will be rendered or uploaded to Auphonic!
If you want to export a specific audio track, please just mute the other tracks.

Start Production and Upload & Edit Buttons
Click Upload & Edit to upload your audio and create a new Production for further editing. After the upload, a web browser will be started to edit/adjust the production and start it manually.
Click Start Production to upload your audio, create a new Production and start it directly without further editing. A web browser will be started to see the results of your production.
Auphonic Presets
You can optionally choose an Auphonic Preset to use previously stored settings for your production.
Chapter Markers
Chapter Markers in Premiere (not all the other marker types!) will be exported to Auphonic as well and are used to create Chapter Marks (Enhanced Podcasts) in your Auphonic output files.
Audio Compression
The mixdown of your Active Sequence in Premiere will be compressed automatically with lossless codecs to speed up the upload time without a loss in audio quality.
FLAC is used as the lossless codec on Windows and Mac OS (>= 10.13); older Mac OS systems (< 10.13) do not support FLAC and use ALAC instead.

Import Auphonic Productions in Premiere

To import the result of an Auphonic Production into Premiere, choose the corresponding production and click Import.
The result file will be downloaded from the Auphonic servers and can be used within Premiere. If the production contains multiple Output File Formats, the output file with the highest bitrate (or uncompressed/lossless if available) will be chosen.

Installation

Install our Add-ons for Audition and Premiere directly on the Adobe Add-ons website:

Auphonic Audio Post Production for Adobe Audition:
https://exchange.adobe.com/addons/products/20433

Auphonic Audio Post Production for Adobe Premiere:
https://exchange.adobe.com/addons/products/20429

The installation requires the Adobe Creative Cloud desktop application and might take a few minutes. Please also try to restart Audition/Premiere if the installation does not work (on Windows it was once even necessary to restart the computer to trigger the installation).


After the installation, you can start our Add-ons directly in Audition/Premiere:
navigate to Window -> Extensions and click Auphonic Post Production.

Enjoy

Thanks a lot to Durin Gleaves and Charles Van Winkle from Adobe for their great support!

Please let us know if you have any questions or feedback!







io

New Auphonic Transcript Editor and Improved Speech Recognition Services

Back in late 2016, we introduced Speech Recognition at Auphonic. This allows our users to create transcripts of their recordings, and more usefully, this means podcasts become searchable.
Now we have integrated two more speech recognition engines: Amazon Transcribe and Speechmatics. Whilst integrating these services, we also took the opportunity to develop a completely new Transcript Editor:

Screenshot of our Transcript Editor with word confidence highlighting and the edit bar.
Try out the Transcript Editor Examples yourself!


The new Auphonic Transcript Editor is included directly in our HTML transcript output file, displays word confidence values to instantly see which sections should be checked manually, supports direct audio playback, HTML/PDF/WebVTT export and allows you to share the editor with someone else for further editing.

The new services, Amazon Transcribe and Speechmatics, offer transcription quality improvements compared to our other integrated speech recognition services.
They also return word confidence values, timestamps and some punctuation, which is exported to our output files.

The Auphonic Transcript Editor

With the integration of the two new services offering improved recognition quality and word timestamps alongside confidence scores, we realized that we could leverage these improvements to give our users easy-to-use transcription editing.
Therefore we developed a new, open source transcript editor, which is embedded directly in our HTML output file and has been designed to make checking and editing transcripts as easy as possible.

Main features of our transcript editor:
  • Edit the transcription directly in the HTML document.
  • Show/hide word confidence, to instantly see which sections should be checked manually (if you use Amazon Transcribe or Speechmatics as speech recognition engine).
  • Listen to audio playback of specific words directly in the HTML editor.
  • Share the transcript editor with others: as the editor is embedded directly in the HTML file (no external dependencies), you can just send the HTML file to someone else to manually check the automatically generated transcription.
  • Export the edited transcript to HTML, PDF or WebVTT.
  • Completely usable on all mobile devices and desktop browsers.

Examples: Try Out the Transcript Editor

Here are two examples of the new transcript editor, taken from our speech recognition audio examples page:

1. Singletrack Transcript Editor Example
Singletrack speech recognition example from the first 10 minutes of Common Sense 309 by Dan Carlin. Speechmatics was used as the speech recognition engine without any keywords or further manual editing.
2. Multitrack Transcript Editor Example
A multitrack automatic speech recognition transcript example from the first 20 minutes of TV Eye on Marvel - Luke Cage S1E1. Amazon Transcribe was used as the speech recognition engine without any further manual editing.
As this is a multitrack production, the transcript includes exact speaker names as well (try to edit them!).

Transcript Editing

When you click the Edit Transcript button, a dashed box appears around the text. This indicates that the text is now freely editable on this page. Your changes can be saved by using one of the export options (see below).
If you make a mistake whilst editing, you can simply use the undo/redo function of the browser to undo or redo your changes.


When working with multitrack productions, another helpful feature is the ability to change all speaker names at once throughout the whole transcript just by editing one speaker. Simply click on an instance of a speaker title and change it to the appropriate name; this name will then appear throughout the whole transcript.

Word Confidence Highlighting

Word confidence values are shown visually in the transcript editor, highlighted in shades of red (see screenshot above). The shade of red is dependent on the actual word confidence value: The darker the red, the lower the confidence value. This means you can instantly see which sections you should check/re-work manually to increase the accuracy.

Once you have edited the highlighted text, it will be set to white again, so it’s easy to see which sections still require editing.
Use the button Add/Remove Highlighting to disable/enable word confidence highlighting.

NOTE: Word confidence values are only available from Amazon Transcribe and Speechmatics, not from our other integrated speech recognition services!

Audio Playback

The button Activate/Stop Play-on-click allows you to hear the audio playback of the section you click on (by clicking directly on the word in the transcript editor).
This is helpful in allowing you to check the accuracy of certain words by being able to listen to them directly whilst editing, without having to go back and try to find that section within your audio file.

If you use an External Service in your production to export the resulting audio file, we will automatically use the exported file in the transcript editor.
Otherwise we will use the output file generated by Auphonic. Please note that this file is password protected for the current Auphonic user and will be deleted after 21 days.

If no audio file is available in the transcript editor, or cannot be played because of the password protection, you will see the button Add Audio File to add a new audio file for playback.

Export Formats, Save/Share Transcript Editor

Click on the button Export... to see all export and saving/sharing options:

Save/Share Editor
The Save Editor button stores the whole transcript editor with all its current changes into a new HTML file. Use this button to save your changes for further editing or if you want to share your transcript with someone else for manual corrections (as the editor is embedded directly in the HTML file without any external dependencies).
Export HTML / Export PDF / Export WebVTT
Use one of these buttons to export the edited transcript to HTML (for WordPress, Word, etc.), to PDF (via the browser print function) or to WebVTT (so that the edited transcript can be used as subtitles or imported into web audio players of the Podlove Publisher or Podigee - see the short sample below).
Every export format is rendered directly in the browser, no server needed.
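
For reference, WebVTT is a simple plain-text subtitle format: a header followed by a list of timed cues. An exported transcript therefore looks roughly like the snippet below (timings and text are invented for illustration; the real file uses the timestamps and words from your edited transcript):

WEBVTT

00:00:00.000 --> 00:00:04.200
Welcome back to the show. Today we are talking

00:00:04.200 --> 00:00:08.700
about automatic audio post production and transcripts.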

Amazon Transcribe

The first of the two new services, Amazon Transcribe, offers accurate transcriptions in English and Spanish at low costs, including keywords, word confidence, timestamps, and punctuation.

UPDATE 2019:
Amazon Transcribe offers more languages now - please see Amazon Transcribe Features!

Pricing
The free tier offers 60 minutes of free usage a month for 12 months. After that, it is billed monthly at a rate of $0.0004 per second ($1.44/h).
More information is available at Amazon Transcribe Pricing.
Custom Vocabulary (Keywords) Support
Custom Vocabulary (called Keywords in Auphonic) gives you the ability to expand and customize the speech recognition vocabulary, specific to your use case (e.g. product names, domain-specific terminology, or names of individuals).
The same feature is also available in the Google Cloud Speech API.
Timestamps, Word Confidence, and Punctuation
Amazon Transcribe returns a timestamp and confidence value for each word so that you can easily locate the audio in the original recording by searching for the text.
It also adds some punctuation, which is combined with our own punctuation and formatting automatically.

The high quality (especially in combination with keywords) and low cost of Amazon Transcribe make it attractive, despite it currently supporting only two languages.
However, the processing time of Amazon Transcribe is much longer than that of all our other integrated services!

Try it yourself:
Connect your Auphonic account with Amazon Transcribe at our External Services Page.

Speechmatics

Speechmatics offers accurate transcriptions in many languages including word confidence values, timestamps, and punctuation.

Many Languages
Speechmatics’ clear advantage is the sheer number of languages it supports (all major European and some Asian languages).
It also has a Global English feature, which supports different English accents during transcription.
Timestamps, Word Confidence, and Punctuation
Like Amazon, Speechmatics creates timestamps, word confidence values, and punctuation.
Pricing
Speechmatics is the most expensive speech recognition service at Auphonic.
Pricing starts at £0.06 per minute of audio and can be purchased in blocks of £10 or £100. This equates to a starting rate of about $4.78/h. A reduced rate of £0.05 per minute ($3.98/h) is available when purchasing £1,000 blocks.
They offer significant discounts for users requiring higher volumes. At this further reduced price point, the cost is similar to (or lower than) the Google Speech API. If you process a lot of content, you should contact them directly at sales@speechmatics.com and say that you wish to use it with Auphonic.
More information is available at Speechmatics Pricing.

Speechmatics offers high-quality transcripts in many languages. But these features do come at a price: it is the most expensive speech recognition service at Auphonic.

Unfortunately, their existing Custom Dictionary (keywords) feature, which would further improve the results, is not available in the Speechmatics API yet.

Try it yourself:
Connect your Auphonic account with Speechmatics at our External Services Page.

What do you think?

Any feedback about the new speech recognition services, especially about the recognition quality in various languages, is highly appreciated.

We would also like to hear any comments you have on the transcript editor particularly - is there anything missing, or anything that could be implemented better?
Please let us know!






io

Audio Manipulations and Dynamic Ad Insertion with the Auphonic API

We are pleased to announce a new Audio Inserts feature in the Auphonic API: audio inserts are separate audio files (like intros/outros), which will be inserted into your production at a defined offset.
This blog post shows how one can use this feature for Dynamic Ad Insertion and discusses other Audio Manipulation Methods of the Auphonic API.

API-only Feature

For the general podcasting hobbyist, or even for someone producing a regular podcast, the features that are accessible via our web interface are more than sufficient.

However, some of our users, like podcasting companies who integrate our services as part of their products, asked us for dynamic ad insertions. We teamed up with them to develop a way of making this work within the Auphonic API.

We are pleased therefore to announce audio inserts - a new feature that has been made part of our API. This feature is not available through the web interface, though; it requires the use of our API.

Before we talk about audio inserts, let's talk about what you need to know about dynamic ad insertion!

Dynamic Ad Insertion

There are two ways of dealing with adverts within podcasts. In the first, adverts are recorded or edited into the podcast and are fixed, or baked in. The second method is to use dynamic insertion, whereby the adverts are not part of the podcast recording/file but can be inserted into the podcast afterwards, at any time.

This second approach would allow you to run new ad campaigns across your entire catalog of shows. As a podcaster this allows you to potentially generate new revenue from your old content.

As a hosting company, dynamic ad insertion allows you to choose up to date and relevant adverts across all the podcasts you host. You can make these adverts relevant by subject or location, for instance.

Your users define the times for the ads in their podcast episodes; you are then in control of the adverts you insert.

Audio Inserts in Auphonic

Whichever approach to adverts you are taking, using audio inserts can help you.

Audio inserts are separate audio files which will be inserted into your main single or multitrack production at your defined offset (in seconds).

When a separate audio file is inserted as part of your production, it creates a gap in the podcast audio file, shifting the audio back by the length of the insert. Helpfully, chapters and other time-based information like transcriptions are also shifted back when an insert is used. For example, a 30-second advert inserted at an offset of 120 seconds pushes everything that originally played at 120 seconds, including any chapter mark there, back to 150 seconds.

The biggest advantage of this is that Auphonic will apply loudness normalization to the audio insert so, from an audio point of view, it matches the rest of the podcast.

Although created with dynamic ad insertion in mind, this feature can be used for any type of audio insert: adverts, music, individual parts of a recording, etc. In the case of baked-in adverts, you could upload your already processed advert audio as an insert, without having to edit it into your podcast recording using a separate audio editing application.

Please note that audio inserts should already be edited and processed before using them in a production. (This is usually the case with pre-recorded adverts anyway.) The only algorithm that Auphonic applies to an audio insert is loudness normalization, in order to match the loudness of the entire production. Auphonic does not add any other processing (no leveling, noise reduction, etc.).

Audio Inserts Coding Example

Here is a brief overview of how to use our API for audio inserts. Be warned, this section is coding heavy, so if this isn't your thing, feel free to move along to the next section!

You can add audio insert files with a call to https://auphonic.com/api/production/{uuid}/multi_input_files.json, where uuid is the UUID of your production.
Here is an example with two audio inserts from an https URL. The offset/position in the main audio file must be given in seconds:

curl -X POST -H "Content-Type: application/json" \
    https://auphonic.com/api/production/{uuid}/multi_input_files.json \
    -u username:password \
    -d '[
            {
                "input_file": "https://mydomain.com/my_audio_insert_1.wav",
                "type": "insert",
                "offset": 20.5
            },
            {
                "input_file": "https://mydomain.com/my_audio_insert_2.wav",
                "type": "insert",
                "offset": 120.3
            }
        ]'

More details showing how to use audio inserts in our API can be seen here.

Additional API Audio Manipulations

In addition to audio inserts, using the Auphonic API offers a number of other audio manipulation options, which are not available via the web interface:

Cut start/end of audio files: See Docs
In Single-track productions, this feature allows the user to cut the start and/or the end of the uploaded audio file. Crucially, time-based information such as chapters etc. will be shifted accordingly.
Fade In/Out time of audio files: See Docs
This allows you to set the fade in/out time (in ms) at the start/end of output files. The default fade time is 100ms, but values can be set between 0ms and 5000ms.
This feature is also available in our Auphonic Leveler Desktop App.
Adding intro and outro: See Docs
Automatically add intros and outros to your main audio input file, as is also available in our web interface (see the sketch after this list).
Add multiple intros or outros: See Docs
Using our API, you can also add multiple intros or outros to a production. These intros or outros are played in series.
Overlapping intros/outros: See Docs
This feature allows intros/outros to overlap either the main audio or the following/previous intros/outros.
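
As a rough illustration of how the intro/outro options might be used: the sketch below assumes that intros and outros follow the same multi_input_files pattern as the audio inserts above, with a "type" of "intro" or "outro" instead of "insert". This is an assumption for illustration only, so please check the linked Docs for the exact field names, offsets and overlap options:

# NOTE: the "intro"/"outro" type values below are assumed for illustration - see the Docs links above for the actual parameters.
curl -X POST -H "Content-Type: application/json" \
    https://auphonic.com/api/production/{uuid}/multi_input_files.json \
    -u username:password \
    -d '[
            {
                "input_file": "https://mydomain.com/my_intro.wav",
                "type": "intro"
            },
            {
                "input_file": "https://mydomain.com/my_outro.wav",
                "type": "outro"
            }
        ]'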

Conclusion

If you haven't explored our API already, the new audio inserts feature is a good reason to: it allows for greater flexibility and dynamic ad insertion.
If you offer online services to podcasters, the Auphonic API would also then allow you to pass on Auphonic's audio processing algorithms to your customers.

If this is of interest to you or you have any new feature suggestions that you feel could benefit your company, please get in touch. We are always happy to extend the functionality of our products!







io

Leveler Presets, LRA Target and Advanced Audio Parameters (Beta)

Lots of users have asked us in the past for more customization and control over the sound of our audio algorithms, so today we have introduced some advanced algorithm parameters for our singletrack version in a private beta program!

The following new parameters are available:

UPDATE Nov. 2018:
We released a complete rework of the Adaptive Leveler parameters and the description here is not valid anymore!
Please see Auphonic Adaptive Leveler Customization (Beta Update)!

Please join our private beta program and let us know how you use these new features or if you need even more control!

Leveler Presets

Our Adaptive Leveler corrects level differences between speakers, between music and speech and will also apply dynamic range compression to achieve a balanced overall loudness. If you don't know about the Leveler yet, take a look at our Audio Examples.

Leveler presets are basically completely new leveling algorithms, which we have been working on over the past few months:
Our current Leveler tries to normalize all speakers to the same loudness. However, in some cases, you might want more or less loudness differences (dynamic range / loudness range) between the speakers and music segments, or more or less compression, etc.
For these use cases, we have developed additional Leveler Presets and the parameter Maximum Loudness Range.

The following Leveler presets are now available:
Preset Medium:
This is our current leveling algorithm as demonstrated in the Audio Examples.
Preset Hard:
The hard preset reacts faster and applies more gain and compression compared to the medium preset. It is built for recordings with extreme loudness differences, for example very quiet questions from the audience in a lecture recording, extremely soft and loud voices within one audio track, etc.
Preset Soft:
This preset reacts more slowly and applies less gain and compression than the medium preset. Use it if you want to keep more loudness differences (dynamic narration), if you want your voices to sound "less compressed/processed", for dynamic music (concert/classical recordings), background music, etc.
Preset Softer:
Like soft, but softer :)
Preset Speech Medium, Music Soft:
Uses the medium preset in speech segments and the soft preset in music segments. It is built for music live recordings or dynamic music mixes, where you want to amplify all speakers but keep the loudness differences within and between music segments.
Preset Medium, No Compressor:
Like the medium preset, but only (mid-term) leveling and no (short-term) compression is applied. This preset is optimal if you just use a Maximum Loudness Range Target and want to avoid any additional compression as much as possible.
Please let us know your use case, if you need more/other controls or if anything is confusing. The Leveler presets are still in private beta and can be changed as necessary!

Maximum Loudness Range (LRA) Target

The loudness range (LRA) indicates the variation of loudness over the course of a program and is measured in LU (loudness units) - for more details see Loudness Measurement and Normalization or EBU Tech 3342.

The parameter Max Loudness Range controls how much leveling is applied:
volume changes of our Adaptive Leveler will be restricted so that the loudness range of the output file is below the selected value.
High loudness range values will result in very dynamic output files, low loudness range values in compressed output audio. If the LRA value of your input file is already below the maximum loudness range value, no leveling at all will be applied.

The selected Leveler Preset also matters: if you use the soft(er) preset, for example, it won't be possible to achieve very low loudness range targets.

Also, the Max Loudness Range parameter is not as precise a target as the Loudness Target. The LRA of your output file might be off by a few LU, as it is not always practical to reach the exact target value.

Use Cases: The Maximum LRA parameter allows you to control the strength of our leveling algorithms, in combination with the parameter Leveler Preset. This might be used for automatic mixdowns with different LRA values for different target platforms (very compressed ones like mobile devices or Alexa, very dynamic ones like home cinema, etc.).

Maximum True Peak Level

This parameter sets the maximum allowed true peak level of the processed output file, which is controlled by the True Peak Limiter after our Global Loudness Normalization algorithms.

If set to Auto (which is the current default), a reasonable value according to the selected loudness target is used: -1dBTP for -23 LUFS (EBU R128) and higher, -2dBTP for -24 LUFS (ATSC A/85) and lower loudness targets.

The maximum true peak level parameter is already available in our desktop program.

Better Hum and Noise Reduction Controls

In addition to the parameter (Noise) Reduction Amount, we now offer two more parameters to control the combination of our Noise and Hum Reduction algorithms:
Hum Base Frequency:
Set the hum base frequency to 50Hz or 60Hz (if you know it), or use Auto to automatically detect the hum base frequency in each speech region.
Hum Reduction Amount:
Maximum hum reduction amount in dB, higher values remove more noise.
In Auto mode, a classifier decides how much hum reduction is necessary in each speech region. Set it to a custom value (> 0), if you prefer more hum reduction or want to bypass our classifier. Use Disable Dehum to disable hum reduction and use our noise reduction algorithms only.

Behavior of noise and hum reduction parameter combinations:

Noise Reduction Amount | Hum Base Frequency | Hum Reduction Amount | Result
Auto                   | Auto               | Auto                 | Automatic hum and noise reduction
Auto or > 0            | *                  | Disabled             | No hum reduction, only denoise
Disabled               | 50Hz               | Auto or > 0          | Force 50Hz hum reduction, no denoise
Disabled               | Auto               | Auto or > 0          | Automatic dehum, no denoise
12dB                   | 60Hz               | Auto or > 0          | Always do dehum (60Hz) and denoise (12dB)

Advanced Parameters Private Beta and Feedback

At the moment the advanced algorithm parameters are for beta users only. This is to allow us to get user feedback, so we can change the parameters to suit user needs.
Please let us know your case studies, if you need any other algorithm parameters or if you have any questions!

Here are some private beta invitation codes:

y6KCBI4yo0 ksIFEsmI1y BDZec2a21V i4XRGLlVm2 0UDxuS0vbu aaBxi35sKN aaiDSZUbmY bu8lPF80Ih eMsSl6Sf8K DaWpsUnyjo
2YM00m8zDW wh7K2pPmSa jCX7mMy2OJ ZGvvhzCpTF HI0lmGhjVO eXqVhN6QLU t4BH0tYcxY LMjQREVuOx emIogTCAth 0OTPNB7Coz
VIFY8STj2f eKzRSWzOyv 40cMMKKCMN oBruOxBkqS YGgPem6Ne7 BaaFG9I1xZ iSC0aNXoLn ZaS4TykKIa l32bTSBbAx xXWraxS40J
zGtwRJeAKy mVsx489P5k 6SZM5HjkxS QmzdFYOIpf 500AHHtEFA 7Kvk6JRU66 z7ATzwado6 4QEtpzeKzC c9qt9Z1YXx pGSrDzbEED
MP3JUTdnlf PDm2MOLJIG 3uDietVFSL 1i7jZX0Y9e zPkSgmAqqP 5OhcmHIZUP E0vNsPxZ4s FzTIyZIG2r 5EywA0M7r5 FMhpcFkVN5
oRLbRGcRmI 2LTh8GlN7h Cjw6Z3cveP fayCewjE55 GbkyX89Lxu 4LpGZGZGgc iQV7CXYwkH pGLyQPgaha e3lhKDRUMs Skrei1tKIa
We are happy to send further invitation codes to all interested users - please do not hesitate to contact us!

If you have an invitation code, you can enter it here to activate the advanced audio algorithm parameters:
Auphonic Algorithm Parameters Private Beta Activation








Auphonic Adaptive Leveler Customization (Beta Update)

In late August, we launched the private beta program of our advanced audio algorithm parameters. After feedback by our users and many new experiments, we are proud to release a complete rework of the Adaptive Leveler parameters:

In the previous version, we based our Adaptive Leveler parameters on the Loudness Range descriptor (LRA), which is included in the EBU R128 specification.
Although it worked, it turned out that it is very difficult to set a loudness range target for diverse audio content, which does include speech, background sounds, music parts, etc. The results were not predictable and it was hard to find good target values.
Therefore we developed our own algorithm to measure the dynamic range of audio signals, which works similarly for speech, music and other audio content.

The following advanced parameters for our Adaptive Leveler allow you to customize which parts of the audio should be leveled (foreground, all, speech, music, etc.), how much they should be leveled (dynamic range), and how much micro-dynamics compression should be applied.

To try out the new algorithms, please join our private beta program and let us know your feedback!

Leveler Preset

The Leveler Preset defines which parts of the audio should be adjusted by our Adaptive Leveler:

  • Default Leveler:
    Our classic, default leveling algorithm as demonstrated in the Leveler Audio Examples. Use it if you are unsure.
  • Foreground Only Leveler:
    This preset reacts slower and levels foreground parts only. Use it if you have background speech or background music, which should not be amplified.
  • Fast Leveler:
    A preset which reacts much faster. It is built for recordings with fast and extreme loudness differences, for example, to amplify very quiet questions from the audience in a lecture recording, to balance fast-changing soft and loud voices within one audio track, etc.
  • Amplify Everything:
    Amplify as much as possible. Similar to the Fast Leveler, but also amplifies non-speech background sounds like noise.

Leveler Dynamic Range

Our default Leveler tries to normalize all speakers to a similar loudness so that a consumer in a car or subway doesn't feel the need to reach for the volume control.
However, in other environments (living room, cinema, etc.) or in dynamic recordings, you might want more level differences (Dynamic Range, Loudness Range / LRA) between speakers and within music segments.

The parameter Dynamic Range controls how much leveling is applied: Higher values result in more dynamic output audio files (less leveling). If you want to increase the dynamic range by 3dB (or LU), just increase the Dynamic Range parameter by 3dB.
We also like to call this Loudness Comfort Zone: above a maximum and below a minimum possible level (the comfort zone), no leveling is applied. So if your input file already has a small dynamic range (is within the comfort zone), our leveler will be just bypassed.

Example Use Cases:
Higher dynamic range values should be used if you want to keep more loudness differences in dynamic narration or dynamic music recordings (live concert/classical).
It is also possible to utilize this parameter to generate automatic mixdowns with different loudness range (LRA) values for different target environments (very compressed ones like mobile devices or Alexa, very dynamic ones like home cinema, etc.).

Compressor

Controls Micro-Dynamics Compression:
The compressor reduces the volume of short and loud spikes like "p", "t" or laughter (short-term dynamics) and also shapes the sound of your voice (it will sound more or less "processed").
The Leveler, on the other hand, adjusts mid-term level differences, as done by a sound engineer, using the faders of an audio mixer, so that a listener doesn't have to adjust the playback volume all the time.
For more details please see Loudness Normalization and Compression of Podcasts and Speech Audio.

Possible values are:
  • Auto:
    The compressor setting depends on the selected Leveler Preset. Medium compression is used in Foreground Only and Default Leveler presets, Hard compression in our Fast Leveler and Amplify Everything presets.
  • Soft:
    Uses less compression.
  • Medium:
    Our default setting.
  • Hard:
More compression, especially tries to compress short and extreme level overshoots. Use this preset if you want your voice to sound very processed, or if you have extreme and fast-changing level differences.
  • Off:
    No short-term dynamics compression is used at all, only mid-term leveling. Switch off the compressor if you just want to adjust the loudness range without any additional micro-dynamics compression.

Separate Music/Speech Parameters

Use the switch Separate Music/Speech Parameters (top right) to see separate Adaptive Leveler parameters for music and speech segments and to control all leveling details separately for speech and music parts:

For dialog intelligibility improvements in films and TV, it is important that the speech/dialog level and loudness range is not too soft compared to the overall programme level and loudness range. This parameter allows you to use more leveling in speech parts while keeping music and FX elements less processed.
Note: Speech, music and overall loudness and loudness range of your production are also displayed in our Audio Processing Statistics!

Example Use Cases:
Music live recordings or dynamic music mixes, where you want to amplify all speakers (speech dynamic range should be small) but keep the dynamic range within and between music segments (music dynamic range should be high).
Dialog intelligibility improvements for films and TV, without affecting music and FX elements.

Other Advanced Audio Algorithm Parameters

We also offer advanced audio parameters for our Noise, Hum Reduction and Global Loudness Normalization algorithms:

For more details, please see the Advanced Audio Algorithms Documentation.

Want to know more?

If you want to know more details about our advanced algorithm parameters (especially the leveler parameters), please listen to the following podcast interview with Chris Curran (Podcast Engineering School):
Auphonic’s New Advanced Features, with Georg Holzmann – PES 108

Advanced Parameters Private Beta and Feedback

At the moment the advanced algorithm parameters are for beta users only. This is to allow us to get user feedback, so we can change the parameters to suit user needs.
Please let us know your case studies, if you need any other algorithm parameters or if you have any questions!

Here are some private beta invitation codes:

jbwCVpLYrl 6zmLqq8o3z RXYIUbC6al QDmIZLuPKa JIrnGRZBgl SWQOWeZOBD ISeBCA9gTy w5FdsyhZVI qWAvANQ5mC twOjdHrit3
KwnL2Le6jB 63SE2V54KK G32AULFyaM 3H0CLYAwLU mp1GFNVZHr swzvEBRCVa rLcNJHUNZT CGGbL0O4q1 5o5dUjruJ9 hAggWBpGvj
ykJ57cFQSe 0OHAD2u1Dx RG4wSYTLbf UcsSYI78Md Xedr3NPCgK mI8gd7eDvO 0Au4gpUDJB mYLkvKYz1C ukrKoW5hoy S34sraR0BU
J2tlV0yNwX QwNdnStYD3 Zho9oZR2e9 jHdjgUq420 51zLbV09p4 c0cth0abCf 3iVBKHVKXU BK4kTbDQzt uTBEkMnSPv tg6cJtsMrZ
BdB8gFyhRg wBsLHg90GG EYwxVUZJGp HLQ72b65uH NNd415ktFS JIm2eTkxMX EV2C5RAUXI a3iwbxWjKj X1AT7DCD7V y0AFIrWo5l
We are happy to send further invitation codes to all interested users - please do not hesitate to contact us!

If you have an invitation code, you can enter it here to activate the advanced audio algorithm parameters:
Auphonic Algorithm Parameters Private Beta Activation








More Languages for Amazon Transcribe Speech Recognition

Until recently, Amazon Transcribe supported speech recognition in English and Spanish only.
Now they have added French, Italian and Portuguese as well - and a few other languages (including German) are in private beta.

Update March 2019:
Now Amazon Transcribe supports German and Korean as well.

The Auphonic Audio Inspector on the status page of a finished Multitrack Production including speech recognition.
Please click on the screenshot to see it in full resolution!


Amazon Transcribe is integrated as speech recognition engine within Auphonic and offers accurate transcriptions (compared to other services) at low costs, including keywords / custom vocabulary support, word confidence, timestamps, and punctuation.
See the following AWS blog post and video for more information about recent Amazon Transcribe developments: Transcribe speech in three new languages: French, Italian, and Brazilian Portuguese.

Amazon Transcribe is also a perfect fit if you want to use our Transcript Editor because you will be able to see word timestamps and confidence values to instantly check which section/words should be corrected manually to increase the transcription accuracy:


Screenshot of our Transcript Editor with word confidence highlighting and the edit bar.

These features are also available if you use Speechmatics, but unfortunately not in our other integrated speech recognition services.

About Speech Recognition within Auphonic

Auphonic has built a layer on top of a few external speech recognition services to make audio searchable:
Our classifiers generate metadata during the analysis of an audio signal (music segments, silence, multiple speakers, etc.) to divide the audio file into small and meaningful segments, which are processed by the speech recognition engine. The results from all segments are then combined, and meaningful timestamps, simple punctuation and structuring are added to the resulting text.
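Purely as an illustration of this segment-then-combine idea (not Auphonic's actual implementation; every type and function name below is made up for the sketch), the flow could look roughly like this:

import Foundation

// Hypothetical types; Auphonic's internal pipeline is not public.
struct AudioSegment {
    let startTime: TimeInterval   // offset of this segment within the full file
    let isSpeech: Bool            // result of the music/speech/silence classifiers
}

struct Transcript {
    let text: String
    let words: [(word: String, time: TimeInterval)]
}

// Run recognition per speech segment, then merge the results and shift every
// word timestamp by its segment's offset so times refer to the whole file.
func combineTranscripts(segments: [AudioSegment],
                        recognize: (AudioSegment) -> Transcript) -> Transcript {
    var words: [(word: String, time: TimeInterval)] = []
    for segment in segments where segment.isSpeech {
        let chunk = recognize(segment)
        words += chunk.words.map { (word: $0.word, time: $0.time + segment.startTime) }
    }
    return Transcript(text: words.map { $0.word }.joined(separator: " "), words: words)
}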

To learn more about speech recognition within Auphonic, take a look at our Speech Recognition and Transcript Editor help pages or listen to our Speech Recognition Audio Examples.

A comparison table of our integrated services (price, quality, languages, speed, features, etc.) can be found here: Speech Recognition Services Comparison.

Conclusion

We hope that Amazon and others will continue to add new languages, to get accurate and inexpensive automatic speech recognition in many languages.

Don't hesitate to contact us if you have any questions or feedback about speech recognition or our transcript editor!







Advanced Multitrack Audio Algorithms Release (Beta)

Last weekend, at the Subscribe10 conference, we released Advanced Audio Algorithm Parameters for Multitrack Productions:

We launched our advanced audio algorithm parameters for Singletrack Productions last year. Now these settings (and more) are available for Multitrack Algorithms as well, which gives you detailed control for each track of your production.

The following new parameters are available:

Please join our private beta program and let us know how you use these new features or if you need even more control!

Fore/Background Settings

The parameter Fore/Background controls whether a track should be in foreground, in background, ducked, or unchanged, which is especially important for music or clip tracks.
For more details, please see Automatic Ducking, Foreground and Background Tracks.

We now added the new option Unchanged and a new parameter to set the level of background segments/tracks:
Unchanged (Foreground):
We sometimes received complaints from users who produce very complex music or clip tracks that Auphonic changes their levels too aggressively.
If you set the parameter Fore/Background to the new option Unchanged (Foreground), level relations within this track won’t be changed at all. It will be added to the final mixdown so that foreground/solo parts of this track will be as loud as (foreground) speech from other tracks.
Background Level:
It is now possible to set the level of background segments/tracks (compared to foreground segments) in background and ducking tracks. By default, background and ducking segments are 18dB softer than foreground segments.

Leveler Parameters

Similar to our Singletrack Advanced Leveler Parameters (see this previous blog post), we have now released leveling parameters for Multitrack Productions as well.
The following advanced parameters for our Multitrack Adaptive Leveler can be set for each track and allow you to customize which parts of the audio should be leveled, how much they should be leveled, how much dynamic range compression should be applied and to set the stereo panorama (balance):

Leveler Preset:
Select the Speech or Music Leveler for this track.
If set to Automatic (default), a classifier will decide if this is a music or speech track.
Dynamic Range:
The parameter Dynamic Range controls how much leveling is applied: Higher values result in more dynamic output audio files (less leveling). If you want to increase the dynamic range by 3dB (or LU), just increase the Dynamic Range parameter by 3dB.
For more details, please see Multitrack Leveler Parameters.
Compressor:
Select a preset for Micro-Dynamics Compression: Auto, Soft, Medium, Hard or Off.
The Compressor adjusts short-term dynamics, whereas the Leveler adjusts mid-term level differences.
For more details, please see Multitrack Leveler Parameters.
Stereo Panorama (Balance):
Change the stereo panorama (balance for stereo input files) of the current track.
Possible values: L100, L75, L50, L25, Center, R25, R50, R75 and R100.

If you understand German and want to know more about our Advanced Leveler Parameters and audio dynamics in general, watch our talk at the Subscribe10 conference:
Video: Audio Lautheit und Dynamik.

Better Hum and Noise Reduction Controls

We now offer three parameters to control the combination of our Multitrack Noise and Hum Reduction Algorithms for each input track:
Noise Reduction Amount:
Maximum noise and hum reduction amount in dB, higher values remove more noise.
In Auto mode, a classifier decides if and how much noise reduction is necessary (to avoid artifacts). Set to a custom (non-Auto) value if you prefer more noise reduction or want to bypass our classifier.
Hum Base Frequency:
Set the hum base frequency to 50Hz or 60Hz (if you know it), or use Auto to automatically detect the hum base frequency in each speech region.
Hum Reduction Amount:
Maximum hum reduction amount in dB, higher values remove more noise.
In Auto mode, a classifier decides how much hum reduction is necessary in each speech region. Set it to a custom value (> 0), if you prefer more hum reduction or want to bypass our classifier. Use Disable Dehum to disable hum reduction and use our noise reduction algorithms only.

Behavior of noise and hum reduction parameter combinations:

Noise Reduction Amount | Hum Base Frequency | Hum Reduction Amount | Result
Auto                   | Auto               | Auto                 | Automatic hum and noise reduction
Auto or > 0            | *                  | Disabled             | No hum reduction, only denoise
Disabled               | 50Hz               | Auto or > 0          | Force 50Hz hum reduction, no denoise
Disabled               | Auto               | Auto or > 0          | Automatic dehum, no denoise
12dB                   | 60Hz               | Auto or > 0          | Always do dehum (60Hz) and denoise (12dB)

Maximum True Peak Level

In the Master Algorithm Settings of your multitrack production, you can set the maximum allowed true peak level of the processed output file, which is controlled by the True Peak Limiter after our Loudness Normalization algorithms.

If set to Auto (which is the current default), a reasonable value according to the selected loudness target is used: -1dBTP for -23 LUFS (EBU R128) and higher, -2dBTP for -24 LUFS (ATSC A/85) and lower loudness targets.

Full API Support

All advanced algorithm parameters, for Singletrack and Multitrack Productions, are available in our API as well, which allows you to integrate them into your scripts, external workflows and third-party applications.

Singletrack API:
Documentation on how to use the advanced algorithm parameters in our singletrack production API: Advanced Algorithm Parameters
Multitrack API:
Documentation of advanced settings for each track of a multitrack production:
Multitrack Advanced Audio Algorithm Settings
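
If you drive Auphonic from scripts, a request that sets some of these algorithm fields could be sketched roughly as follows in Swift. This is not copied from the official documentation: the exact keys inside the "algorithms" object (for example a maximum loudness range field) are assumptions and should be checked against the Singletrack/Multitrack API docs linked above, and YOUR_API_TOKEN is a placeholder.

import Foundation

// Hedged sketch of creating a production with algorithm settings via the JSON API.
// The field names inside "algorithms" are assumptions; verify them in the API docs.
let url = URL(string: "https://auphonic.com/api/productions.json")!
var request = URLRequest(url: url)
request.httpMethod = "POST"
request.setValue("Bearer YOUR_API_TOKEN", forHTTPHeaderField: "Authorization")  // placeholder token
request.setValue("application/json", forHTTPHeaderField: "Content-Type")

let body: [String: Any] = [
    "metadata": ["title": "Advanced Parameters Test"],
    "algorithms": [
        "leveler": true,
        "denoise": true,
        "loudnesstarget": -16,   // assumed field name
        "maxlra": 9              // assumed field name for a maximum loudness range
    ]
]
request.httpBody = try? JSONSerialization.data(withJSONObject: body)

URLSession.shared.dataTask(with: request) { data, _, _ in
    if let data = data, let text = String(data: data, encoding: .utf8) {
        print(text)   // inspect the created production in the JSON response
    }
}.resume()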

Join the Beta and Send Feedback

Please join our beta and let us know your case studies, if you need any other algorithm parameters or if you have any questions!

Here are some private beta invitation codes:

8tZPc3T9pH VAvO8VsDg9 0TwKXBW4Ni kjXJMivtZ1 J9APmAAYjT Zwm6HabuFw HNK5gF8FR5 Do1MPHUyPW CTk45VbV4t xYOzDkEnWP
9XE4dZ0FxD 0Sl3PxDRho uSoRQxmKPx TCI62OjEYu 6EQaPYs7v4 reIJVOwIr8 7hPJqZmWfw kti3m5KbNE GoM2nF0AcN xHCbDC37O5
6PabLBRm9P j2SoI8peiY olQ2vsmnfV fqfxX4mWLO OozsiA8DWo weJw0PXDky VTnOfOiL6l B6HRr6gil0 so0AvM1Ryy NpPYsInFqm
oFeQPLwG0k HmCOkyaX9R G7DR5Sc9Kv MeQLSUCkge xCSvPTrTgl jyQKG3BWWA HCzWRxSrgW xP15hYKEDl 241gK62TrO Q56DHjT3r4
9TqWVZHZLE aWFMSWcuX8 x6FR5OTL43 Xf6tRpyP4S tDGbOUngU0 5BkOF2I264 cccHS0KveO dT29cF75gG 2ySWlYp1kp iJWPhpAimF
We are happy to send further invitation codes to all interested users - please do not hesitate to contact us!

If you have an invitation code, you can enter it here to activate the Multitrack Advanced Audio Algorithm Parameters:
Auphonic Algorithm Parameters Private Beta Activation








Dynamic Range Processing in Audio Post Production

If listeners find themselves using the volume up and down buttons a lot, level differences within your podcast or audio file are too big.
In this article, we are discussing why audio dynamic range processing (or leveling) is more important than loudness normalization, why it depends on factors like the listening environment and the individual character of the content, and why the loudness range descriptor (LRA) is only reliable for speech programs.

Photo by Alexey Ruban.

Why loudness normalization is not enough

Everybody who has lived in an apartment building knows the problem: you want to enjoy a movie late at night, but you're constantly on the edge - not only because of the thrilling story, but because your index finger is hovering over the volume down button of your remote. The next loud sound effect is going to come sooner rather than later, and you want to avoid waking up your neighbors with some gunshot sounds blasting from your TV.

In our previous post, we talked about the overall loudness of a production. While that's certainly important to keep in mind, the loudness target is only an average value, ignoring how much the loudness varies within a production. The loudness target of your movie might be in the ideal range, yet the level differences between a gunshot and someone whispering can still be enormous - having you turn the volume down for the former and up for the latter.

While the average loudness might be perfect, level differences can lead to an unpleasant listening experience.

Of course, this doesn't apply to movies alone. The image above shows a podcast or radio production. The loud section is music, the very quiet section just breathing, and the remaining sections are different voices.

To be clear, we're not saying that the above example is problematic per se. There are many situations where a big difference in levels - a high dynamic range - is justified: for instance, in a movie theater, optimized for listening and without any outside noise, or in classical music.
Also, if the dynamic range is too small, listening can be tiring.

But if you watch the same movie in an outdoor screening in the summer on a beach next to the crashing waves or in the middle of a noisy city, it can be tricky to hear the softer parts.
Spoken word usually has a smaller dynamic range, and if you produce your podcast for a target audience of train or car commuters, the dynamic range should be even smaller, adjusting for the listening situation.

Therefore, hitting the loudness target has less impact on the listening experience than level differences (dynamic range) within one file!
What makes a suitable dynamic range does not only depend on the listening environment, but also on the nature of the content itself. If the dynamic range is too small, the audio can be tiring to listen to, whereas more variability in levels can make a program more interesting, but might not work in all environments, such as a noisy car.

Dynamic range experiment in a car

Wolfgang Rein, audio technician at SWR, a public broadcaster in Germany, did an experiment to test how drivers react to programs with different dynamic ranges. They monitored to what level drivers set the car stereo depending on speed (thus noise level) and audio dynamic range.
While the results are preliminary, it seems like drivers set the volume as low as possible so that they can still understand the content, but don't get distracted by loud sounds.

As drivers adjust the volume to the loudest voice in a program, they won't understand quieter speakers in content with a high dynamic range anymore. To some degree and for short periods of time, they can compensate by focusing more on the radio program, but over time that's tiring. Therefore, if the loudness varies too much, drivers tend to switch to another program rather than adjusting the volume.
Similar results have been found in a study conducted by NPR Labs and Towson University.

On the other hand, the perception was different in pure music programs. When drivers set the volume according to louder parts, they weren't able to hear softer segments or the beginning of a song very well. But that did not matter to them as much and didn't make them want to turn up the volume or switch the program.

Listener's reaction in response to frequent loudness changes. (from John Kean, Eli Johnson, Dr. Ellyn Sheffield: Study of Audio Loudness Range for Consumers in Various Listening Modes and Ambient Noise Levels)

Loudness comfort zone

The reaction of drivers to variable loudness hints at something that BBC sound engineer Mike Thornton calls the loudness comfort zone.

Tests (...) have shown that if the short-term loudness stays within the "comfort zone" then the consumer doesn’t feel the need to reach for the remote control to adjust the volume.
In a blog post, he highlights how the series Blue Planet 2 and Planet Earth 2 might not always have been the easiest to listen to. The graph below shows an excerpt with very loud music, followed by commentary just at the bottom of the green comfort zone. Thornton writes: "with the volume set at a level that was comfortable when the music was playing we couldn’t always hear the excellent commentary from Sir David Attenborough and had to resort to turning on the subtitles to be sure we knew what Sir David was saying!"

Planet Earth 2 Loudness Plot Excerpt. Colored green: comfort zone of +3 to -5LU around the loudness target. (from Mike Thornton: BBC Blue Planet 2 Latest Show In Firing Line For Sound Issues - Are They Right?)

As already mentioned above, a good mix considers the maximum and minimum possible loudness in the target listening environment.
In a movie theater the loudness comfort zone is big (loudness can vary a lot), and loud music is part of the fun, while quiet scenes work just as well. The opposite was true in the aforementioned experiment with drivers, where the loudness comfort zone is much smaller and quiet voices are difficult to understand.

Hence, the loudness comfort zone determines how much dynamic range an audio signal can use in a specific listening environment.

How to measure dynamic range: LRA

When producing audio for various environments, it would be great to have a target value for dynamic range (the difference between the smallest and largest signal values of an audio signal) as well. Then you could just set a dynamic range target, similar to a loudness target.

Theoretically, the maximum possible dynamic range of a production is defined by the bit-depth of the audio format. A 16-bit recording can have a dynamic range of 96 dB; for 24-bit, it's 144 dB - which is well above the approx. 120 dB the human ear can handle. However, most of those bits are typically being used to get to a reasonable base volume. Picture a glass of water: you want it to be almost full, with some headroom so that it doesn't spill when there's a sudden movement, i.e. a bigger amplitude wave at the top.

Determining the dynamic range of a production is easier said than done, though. It depends on which signals are included in the measurement: for example, if something like background music or breathing should be considered at all.
The currently preferred method for broadcasting is called Loudness Range, LRA. It is measured in Loudness Units (LU), and takes into account everything between the 10th and the 95th percentile of a loudness distribution, after an additional gating method. In other words, the loudest 5% and quietest 10% of the audio signal are being ignored. This way, quiet breathing or an occasional loud sound effect won't affect the measurement.
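
As a rough sketch of the percentile idea (ignoring the gating stage that the full EBU Tech 3342 measurement applies first), an approximate LRA could be computed from a series of short-term loudness values like this; the function name and windowing are assumptions for illustration only:

import Foundation

// Rough sketch: take short-term loudness values (in LUFS, e.g. measured over
// 3-second windows) and return the spread between the 10th and 95th percentile.
// The real EBU Tech 3342 measurement also gates out very quiet values first.
func approximateLoudnessRange(shortTermLoudness: [Double]) -> Double {
    let sorted = shortTermLoudness.sorted()
    guard sorted.count > 1 else { return 0 }
    func percentile(_ p: Double) -> Double {
        let index = Int(p * Double(sorted.count - 1))
        return sorted[index]
    }
    return percentile(0.95) - percentile(0.10)   // result in LU
}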

Loudness distribution and LRA for the film 'The Matrix'. Figure from EBU Tech Doc 3343 (p.13).

However, the main difficulty is which signals should be included in the loudness range measurement and which ones should be gated. This is unfortunately often very subjective and difficult to define with a purely statistical method like LRA.

Where LRA falls short

Therefore, only pure speech programs give reliable LRA values that are comparable!
For instance, a typical LRA for news programs is 3 LU; for talks and discussions 5 LU is common. LRA values for features, radio dramas, movies or music very much depend on the individual character and might be in the range between 5 and 25 LU.

To further illustrate this, here are some typical LRA values, according to a paper by Thomas Lund (table 2):

Program                                            | Loudness Range (LU)
Matrix, full movie                                 | 25.0
NBC Interstitials, Jan. 2008, all together (3:30)  | 9.4
Friends Episode 16                                 | 6.6
Speak Ref., Male, German, SQUAM Trk 54             | 6.2
Speak Ref., Female, French, SQUAM Trk 51           | 4.8
Speak Ref., Male, English, Sound Check             | 3.3
Wish You Were Here, Pink Floyd                     | 22.1
Gilgamesh, Battle of Titans, Osaka Symph.          | 19.7
Don’t Cry For Me Arg., Sinead O’Conner             | 13.7
Beethoven Son in F, Op17, Kliegel & Tichman        | 12.0
Rock’n Roll Train, AC/DC                           | 6.0
I.G.Y., Donald Fagen                               | 3.6

LRA values of music are very unpredictable as well.
For instance, Tom Frampton measured the LRA of songs in multiple genres, and the differences within each genre are quite big. The ten pop songs that he analyzed varied in LRA between 3.7 and 12 LU, country songs between 3.6 and 14.9 LU. In the Electronic genre the individual LRAs were between 3.7 and 15.2 LU. Please see the tables at the bottom of his blog post for more details.

We at Auphonic also tried to base our Adaptive Leveler parameters on the LRA descriptor. Although it worked, it turned out that it is very difficult to set a loudness range target for diverse audio content, which does include speech, background sounds, music parts, etc. The results were not predictable and it was hard to find good target values. Therefore we developed our own algorithm to measure the dynamic range of audio signals.

In conclusion, LRA comparisons are only useful for productions with spoken word only and the LRA value is therefore not applicable as a general dynamic range target value. The more complex a production gets, the more difficult it is to make any judgment based on the LRA.
This is because the definition of LRA is purely statistical. There's no smart measurement using classifiers that distinguish between music, speech, quiet breathing, background noises and other types of audio. One would need a more intelligent algorithm (as we use in our Adaptive Leveler) that knows which audio segments should be included in and excluded from the measurement.

From theory to application: tools

Loudness and dynamic range are clearly complicated topics. Luckily, there are tools that can help. To keep short-term loudness in range, a compressor can help control sudden changes in loudness - such as p-pops or consonants like t or k. To achieve a good mid-term loudness, i.e. a signal that doesn't go outside the comfort zone too much, a leveler is a good option. Or, just use a fader or manually adjust volume curves. And to make sure that separate productions sound consistent, loudness normalization is the way to go. We have covered all of this in-depth before.

Looking at the audio from above again, with an adaptive leveler applied it looks like this:

Leveler example. Output at the top, input with leveler envelope at the bottom.

Now, the voices are evened out and the music is at a comfortable level, while the breathing has not been touched at all.
We recently extended Auphonic's adaptive leveler, so that it is now possible to customize the dynamic range - please see adaptive leveler customization and advanced multitrack audio algorithms.
If you wanted to increase the loudness comfort zone (or dynamic range) of the standard preset by 10 dB (or LU), for example, the envelope would look like this:

Leveler with higher dynamic range, only touching sections with extremely low or extremely high loudness to fit into a specific loudness comfort zone.

When a production is done, our adaptive leveler uses classifiers to also calculate the integrated loudness and loudness range of dialog and music sections separately. This way it is possible to just compare the dialog LRA and loudness of complex productions.

Assessing the LRA and loudness of dialog and music separately.

Conclusion

Getting audio dynamics right is not easy. Yet, it is an important thing to keep in mind, because focusing on loudness normalization alone is not enough. In fact, hitting the loudness target often has less impact on the listening experience than level differences, i.e. audio dynamics.

If the dynamic range is too small, the audio can be tiring to listen to, whereas a bigger dynamic range can make a program more interesting, but might not work in loud environments, such as a noisy train.
Therefore, a good mix adapts the audio dynamic range according to the target listening environment (different loudness comfort zones in cinema, at home, in a car) and according to the nature of the content (radio feature, movie, podcast, music, etc.).

Furthermore, because the definition of the loudness range / LRA is purely statistical, only speech programs give reliable LRA values that are comparable.
More "intelligent" algorithms are in development, which use classifiers to decide which signals should be included and excluded from the dynamic range measurement.

If you understand German, take a look at our presentation about audio dynamic processing in podcasts for further information:








Concurrency & Multithreading in iOS

Concurrency is the notion of multiple things happening at the same time. This is generally achieved either via time-slicing, or truly in parallel if multiple CPU cores are available to the host operating system. We've all experienced a lack of concurrency, most likely in the form of an app freezing up when running a heavy task. UI freezes don't necessarily occur due to the absence of concurrency — they could just be symptoms of buggy software — but software that doesn't take advantage of all the computational power at its disposal is going to create these freezes whenever it needs to do something resource-intensive. If you've profiled an app hanging in this way, you'll probably see a report that looks like this:

Anything related to file I/O, data processing, or networking usually warrants a background task (unless you have a very compelling excuse to halt the entire program). There aren't many reasons that these tasks should block your user from interacting with the rest of your application. Consider how much better the user experience of your app could be if instead, the profiler reported something like this:

Analyzing an image, processing a document or a piece of audio, or writing a sizeable chunk of data to disk are examples of tasks that could benefit greatly from being delegated to background threads. Let's dig into how we can introduce such behavior into our iOS applications.


A Brief History

In the olden days, the maximum amount of work per CPU cycle that a computer could perform was determined by the clock speed. As processor designs became more compact, heat and physical constraints started becoming limiting factors for higher clock speeds. Consequentially, chip manufacturers started adding additional processor cores on each chip in order to increase total performance. By increasing the number of cores, a single chip could execute more CPU instructions per cycle without increasing its speed, size, or thermal output. There's just one problem...

How can we take advantage of these extra cores? Multithreading.

Multithreading is a capability provided by the host operating system that allows the creation and use of an arbitrary number of threads. Its main purpose is to provide simultaneous execution of two or more parts of a program to utilize all available CPU time. Multithreading is a powerful technique to have in a programmer's toolbelt, but it comes with its own set of responsibilities. A common misconception is that multithreading requires a multi-core processor, but this isn't the case — single-core CPUs are perfectly capable of working on many threads; we'll look shortly at why threading can be a problem in the first place. Before we dive in, let's look at the nuances of what concurrency and parallelism mean using a simple diagram:

In the first situation presented above, we observe that tasks can run concurrently, but not in parallel. This is similar to having multiple conversations in a chatroom, and interleaving (context-switching) between them, but never truly conversing with two people at the same time. This is what we call concurrency. It is the illusion of multiple things happening at the same time when in reality, they're switching very quickly. Concurrency is about dealing with lots of things at the same time. Contrast this with the parallelism model, in which both tasks run simultaneously. Both execution models exhibit multithreading, which is the involvement of multiple threads working towards one common goal. Multithreading is a generalized technique for introducing a combination of concurrency and parallelism into your program.


The Burden of Threads

A modern multitasking operating system like iOS has hundreds of programs (or processes) running at any given moment. However, most of these programs are either system daemons or background processes that have very low memory footprint, so what is really needed is a way for individual applications to make use of the extra cores available. An application (process) can have many threads (sub-processes) operating on shared memory. Our goal is to be able to control these threads and use them to our advantage.

Historically, introducing concurrency to an app has required the creation of one or more threads. Threads are low-level constructs that need to be managed manually. A quick skim through Apple's Threaded Programming Guide is all it takes to see how much complexity threaded code adds to a codebase. In addition to building an app, the developer has to:

  • Responsibly create new threads, adjusting that number dynamically as system conditions change
  • Manage them carefully, deallocating them from memory once they have finished executing
  • Leverage synchronization mechanisms like mutexes, locks, and semaphores to orchestrate resource access between threads, adding even more overhead to application code
  • Accept the risks of writing an application that assumes most of the costs associated with creating and maintaining any threads it uses, rather than leaving that to the host OS

This is unfortunate, as it adds enormous levels of complexity and risk without any guarantees of improved performance.


Grand Central Dispatch

iOS takes an asynchronous approach to solving the concurrency problem of managing threads. Asynchronous functions are common in most programming environments, and are often used to initiate tasks that might take a long time, like reading a file from the disk, or downloading a file from the web. When invoked, an asynchronous function executes some work behind the scenes to start a background task, but returns immediately, regardless of how long the original task might take to actually complete.

A core technology that iOS provides for starting tasks asynchronously is Grand Central Dispatch (or GCD for short). GCD abstracts away thread management code and moves it down to the system level, exposing a light API to define tasks and execute them on an appropriate dispatch queue. GCD takes care of all thread management and scheduling, providing a holistic approach to task management and execution, while also providing better efficiency than traditional threads.

Let's take a look at the main components of GCD:

What've we got here? Let's start from the left:

  • DispatchQueue.main: The main thread, or the UI thread, is backed by a single serial queue. All tasks are executed in succession, so it is guaranteed that the order of execution is preserved. It is crucial that you ensure all UI updates are designated to this queue, and that you never run any blocking tasks on it. We want to ensure that the app's run loop (called CFRunLoop) is never blocked in order to maintain the highest framerate. Accordingly, the main queue has the highest priority, and any tasks pushed onto this queue will get executed immediately.
  • DispatchQueue.global: A set of global concurrent queues, each of which manage their own pool of threads. Depending on the priority of your task, you can specify which specific queue to execute your task on, although you should resort to using default most of the time. Because tasks on these queues are executed concurrently, it doesn't guarantee preservation of the order in which tasks were queued.

Notice how we're not dealing with individual threads anymore? We're dealing with queues which manage a pool of threads internally, and you will shortly see why queues are a much more sustainable approach to multithreading.
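
As a minimal sketch of how these two queue types are typically combined, heavy work goes to a global queue and anything that must touch the UI hops back onto the main queue (the summation below is just placeholder work):

// Submit expensive work to a global (background) queue...
DispatchQueue.global(qos: .userInitiated).async {
    let result = (0..<1_000_000).reduce(0, +)   // placeholder for real work

    // ...and hop back to the main queue for anything UI-related.
    DispatchQueue.main.async {
        print("Finished with result \(result)")
    }
}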

Serial Queues: The Main Thread

As an exercise, let's look at a snippet of code below, which gets fired when the user presses a button in the app. The expensive compute function can be anything. Let's pretend it is post-processing an image stored on the device.

import UIKit

class ViewController: UIViewController {
    @IBAction func handleTap(_ sender: Any) {
        compute()
    }

    private func compute() -> Void {
        // Pretending to post-process a large image.
        var counter = 0
        for _ in 0..<9999999 {
            counter += 1
        }
    }
}

At first glance, this may look harmless, but if you run this inside of a real app, the UI will freeze completely until the loop is terminated, which will take... a while. We can prove it by profiling this task in Instruments. You can fire up the Time Profiler module of Instruments by going to Xcode > Open Developer Tool > Instruments in Xcode's menu options. Let's look at the Threads module of the profiler and see where the CPU usage is highest.

We can see that the Main Thread is clearly at 100% capacity for almost 5 seconds. That's a non-trivial amount of time to block the UI. Looking at the call tree below the chart, we can see that the Main Thread is at 99.9% capacity for 4.43 seconds! Given that a serial queue works in a FIFO manner, tasks will always complete in the order in which they were inserted. Clearly the compute() method is the culprit here. Can you imagine clicking a button just to have the UI freeze up on you for that long?

Background Threads

How can we make this better? DispatchQueue.global() to the rescue! This is where background threads come in. Referring to the GCD architecture diagram above, we can see that anything that is not the Main Thread is a background thread in iOS. They can run alongside the Main Thread, leaving it fully unoccupied and ready to handle other UI events like scrolling, responding to user events, animating etc. Let's make a small change to our button click handler above:

class ViewController: UIViewController {
    @IBAction func handleTap(_ sender: Any) {
        DispatchQueue.global(qos: .userInitiated).async { [unowned self] in
            self.compute()
        }
    }

    private func compute() -> Void {
        // Pretending to post-process a large image.
        var counter = 0
        for _ in 0..<9999999 {
            counter += 1
        }
    }
}

Unless specified, a snippet of code will usually default to execute on the Main Queue, so in order to force it to execute on a different thread, we'll wrap our compute call inside of an asynchronous closure that gets submitted to the DispatchQueue.global queue. Keep in mind that we aren't really managing threads here. We're submitting tasks (in the form of closures or blocks) to the desired queue with the assumption that it is guaranteed to execute at some point in time. The queue decides which thread to allocate the task to, and it does all the hard work of assessing system requirements and managing the actual threads. This is the magic of Grand Central Dispatch. As the old adage goes, you can't improve what you can't measure. So we measured our truly terrible button click handler, and now that we've improved it, we'll measure it once again to get some concrete data with regards to performance.

Looking at the profiler again, it's quite clear to us that this is a huge improvement. The task takes an identical amount of time, but this time, it's happening in the background without locking up the UI. Even though our app is doing the same amount of work, the perceived performance is much better because the user will be free to do other things while the app is processing.

You may have noticed that we accessed a global queue of .userInitiated priority. This is an attribute we can use to give our tasks a sense of urgency. If we run the same task on a global queue and pass it a qos attribute of .background, iOS will treat it as a low-priority utility task and allocate fewer resources to execute it. So, while we don't have control over when our tasks get executed, we do have control over their priority.
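
For instance, reusing the compute() method from the earlier example, a rough sketch of dispatching the same work with different quality-of-service hints looks like this:

// .userInitiated: the user is actively waiting for the result.
DispatchQueue.global(qos: .userInitiated).async { [unowned self] in
    self.compute()
}

// .background: maintenance-style work the user is not waiting on;
// the system may schedule it with fewer resources.
DispatchQueue.global(qos: .background).async { [unowned self] in
    self.compute()
}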

A Note on Main Thread vs. Main Queue

You might be wondering why the Profiler shows "Main Thread" and why we're referring to it as the "Main Queue". If you refer back to the GCD architecture we described above, the Main Queue is solely responsible for managing the Main Thread. The Dispatch Queues section in the Concurrency Programming Guide says that "the main dispatch queue is a globally available serial queue that executes tasks on the application’s main thread. Because it runs on your application’s main thread, the main queue is often used as a key synchronization point for an application."

The terms "execute on the Main Thread" and "execute on the Main Queue" can be used interchangeably.


Concurrent Queues

So far, our tasks have been executed exclusively in a serial manner. DispatchQueue.main is by default a serial queue, and DispatchQueue.global gives you four concurrent dispatch queues depending on the priority parameter you pass in.

Let's say we want to take five images, and have our app process them all in parallel on background threads. How would we go about doing that? We can spin up a custom concurrent queue with an identifier of our choosing, and allocate those tasks there. All that's required is the .concurrent attribute during the construction of the queue.

class ViewController: UIViewController {
    let queue = DispatchQueue(label: "com.app.concurrentQueue", attributes: .concurrent)
    let images: [UIImage] = [UIImage].init(repeating: UIImage(), count: 5)

    @IBAction func handleTap(_ sender: Any) {
        for img in images {
            queue.async { [unowned self] in
                self.compute(img)
            }
        }
    }

    private func compute(_ img: UIImage) -> Void {
        // Pretending to post-process a large image.
        var counter = 0
        for _ in 0..<9999999 {
            counter += 1
        }
    }
}

Running that through the profiler, we can see that the app is now spinning up 5 discrete threads to parallelize a for-loop.

Parallelization of N Tasks

So far, we've looked at pushing computationally expensive task(s) onto background threads without clogging up the UI thread. But what about executing parallel tasks with some restrictions? How can Spotify download multiple songs in parallel, while limiting the maximum number of simultaneous downloads to 3? We can go about this in a few ways, but this is a good time to explore another important construct in multithreaded programming: semaphores.

Semaphores are signaling mechanisms. They are commonly used to control access to a shared resource. Imagine a scenario where a thread can lock access to a certain section of the code while it executes it, and unlocks after it's done to let other threads execute that section of the code. You would see this type of behavior in database writes and reads, for example. What if you want only one thread writing to a database and preventing any reads during that time? This is a common thread-safety concern known as the readers-writer lock. Semaphores can be used to control concurrency in our app by allowing at most n threads into a critical section at once.

let kMaxConcurrent = 3 // Or 1 if you want strictly ordered downloads!
let semaphore = DispatchSemaphore(value: kMaxConcurrent)
let downloadQueue = DispatchQueue(label: "com.app.downloadQueue", attributes: .concurrent)

class ViewController: UIViewController {
    // Assumed to be connected in Interface Builder; the original snippet
    // referenced tableView without declaring it.
    @IBOutlet weak var tableView: UITableView!

    @IBAction func handleTap(_ sender: Any) {
        for i in 0..<15 {
            downloadQueue.async { [unowned self] in
                // Lock shared resource access
                semaphore.wait()

                // Expensive task
                self.download(i + 1)

                // Update the UI on the main thread, always!
                DispatchQueue.main.async {
                    self.tableView.reloadData()

                    // Release the lock
                    semaphore.signal()
                }
            }
        }
    }

    func download(_ songId: Int) -> Void {
        var counter = 0

        // Simulate semi-random download times.
        for _ in 0..<Int.random(in: 999999...10000000) {
            counter += songId
        }
    }
}

Notice how we've effectively restricted our download system to at most k concurrent downloads. The moment one download finishes (and its UI update completes), the semaphore is signaled, which increments its count and lets the next waiting task proceed to download another song. You can apply a similar pattern to database transactions when dealing with concurrent reads and writes.

Semaphores usually aren't necessary for code like the one in our example, but they become more powerful when you need to enforce synchronous behavior while consuming an asynchronous API. The above code would work just as well with a custom NSOperationQueue with a maxConcurrentOperationCount, but it's a worthwhile tangent regardless.
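
For comparison, here is a minimal sketch of that OperationQueue-based alternative, reusing the download(_:) method and tableView outlet from the example above:

let downloadQueue = OperationQueue()
downloadQueue.maxConcurrentOperationCount = 3   // at most 3 downloads run at once

for i in 0..<15 {
    downloadQueue.addOperation { [unowned self] in
        self.download(i + 1)                    // same simulated download as above

        OperationQueue.main.addOperation {
            self.tableView.reloadData()         // UI updates stay on the main queue
        }
    }
}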


Finer Control with OperationQueue

GCD is great when you want to dispatch one-off tasks or closures into a queue in a 'set-it-and-forget-it' fashion, and it provides a very lightweight way of doing so. But what if we want to create a repeatable, structured, long-running task that produces associated state or data? And what if we want to model this chain of operations such that they can be cancelled, suspended and tracked, while still working with a closure-friendly API? Imagine an operation like this:

This would be quite cumbersome to achieve with GCD. We want a more modular way of defining a group of tasks while maintaining readability and also exposing a greater amount of control. In this case, we can use Operation objects and queue them onto an OperationQueue, which is a high-level wrapper around DispatchQueue. Let's look at some of the benefits of using these abstractions and what they offer in comparison to the lower-level GCD API:

  • You may want to create dependencies between tasks, and while you could do this via GCD, you're better off defining them concretely as Operation objects, or units of work, and pushing them onto your own queue. This would allow for maximum reusability since you may use the same pattern elsewhere in an application.
  • The Operation and OperationQueue classes have a number of properties that can be observed, using KVO (Key Value Observing). This is another important benefit if you want to monitor the state of an operation or operation queue.
  • Operations can be paused, resumed, and cancelled. Once you dispatch a task using Grand Central Dispatch, you no longer have control or insight into the execution of that task. The Operation API is more flexible in that respect, giving the developer control over the operation's life cycle.
  • OperationQueue allows you to specify the maximum number of queued operations that can run simultaneously, giving you a finer degree of control over the concurrency aspects.

The usage of Operation and OperationQueue could fill an entire blog post, but let's look at a quick example of what modeling dependencies looks like. (GCD can also create dependencies, but you're better off dividing up large tasks into a series of composable sub-tasks.) In order to create a chain of operations that depend on one another, we could do something like this:

class ViewController: UIViewController {
    let queue = OperationQueue()
    var rawImage: UIImage? = nil
    let imageUrl = URL(string: "https://example.com/portrait.jpg")!
    @IBOutlet weak var imageView: UIImageView!

    @IBAction func handleTap(_ sender: Any) {
        // Downloader and ImgProcessor are placeholder helpers for this example.
        let downloadOperation = BlockOperation {
            // Runs on a background thread managed by the operation queue.
            self.rawImage = Downloader.downloadImageWithURL(url: self.imageUrl)
        }

        let filterOperation = BlockOperation {
            let filteredImage = ImgProcessor.addGaussianBlur(self.rawImage)
            // UI updates must happen on the main queue.
            OperationQueue.main.addOperation {
                self.imageView.image = filteredImage
            }
        }

        // The filter must not start before the download has finished.
        filterOperation.addDependency(downloadOperation)

        [downloadOperation, filterOperation].forEach {
            queue.addOperation($0)
        }
    }
}

So why not opt for a higher level abstraction and avoid using GCD entirely? While GCD is ideal for inline asynchronous processing, Operation provides a more comprehensive, object-oriented model of computation for encapsulating all of the data around structured, repeatable tasks in an application. Developers should use the highest level of abstraction possible for any given problem, and for scheduling consistent, repeated work, that abstraction is Operation. Other times, it makes more sense to sprinkle in some GCD for one-off tasks or closures that we want to fire. We can mix both OperationQueue and GCD to get the best of both worlds.


The Cost of Concurrency

DispatchQueue and friends are meant to make it easier for the application developer to execute code concurrently. However, these technologies do not guarantee improvements to the efficiency or responsiveness in an application. It is up to you to use queues in a manner that is both effective and does not impose an undue burden on other resources. For example, it's totally viable to create 10,000 tasks and submit them to a queue, but doing so would allocate a nontrivial amount of memory and introduce a lot of overhead for the allocation and deallocation of operation blocks. This is the opposite of what you want! It's best to profile your app thoroughly to ensure that concurrency is enhancing your app's performance and not degrading it.

We've talked about how concurrency comes at a cost in terms of complexity and allocation of system resources, but introducing concurrency also brings a host of other risks like:

  • Deadlock: A situation where a thread locks a critical portion of the code and can halt the application's run loop entirely. In the context of GCD, you should be very careful when using dispatchQueue.sync { } calls, as you can easily get into situations where two synchronous operations end up stuck waiting for each other; a minimal sketch follows this list.
  • Priority Inversion: A condition where a lower priority task blocks a high priority task from executing, which effectively inverts their priorities. GCD allows for different levels of priority on its background queues, so this is quite easily a possibility.
  • Producer-Consumer Problem: A race condition where one thread is creating a data resource while another thread is accessing it. This is a synchronization problem, and can be solved using locks, semaphores, serial queues, or a barrier dispatch if you're using concurrent queues in GCD.
  • ...and many other sorts of locking and data-race conditions that are hard to debug! Thread safety is of the utmost concern when dealing with concurrency.
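
To make the deadlock scenario above concrete, here is a minimal sketch of the classic mistake: calling sync on the serial queue you are already running on. The outer block waits for the inner block, which can never start because the queue is still busy with the outer block.

let serialQueue = DispatchQueue(label: "com.app.serialQueue")

serialQueue.async {
    // We are now running on serialQueue...
    serialQueue.sync {
        // ...so this block can never be scheduled, and sync never returns: deadlock.
        print("unreachable")
    }
}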

Parting Thoughts + Further Reading

If you've made it this far, I applaud you. Hopefully this article gives you a lay of the land when it comes to multithreading techniques on iOS, and how you can use some of them in your app. We didn't get to cover many of the lower-level constructs like locks, mutexes and how they help us achieve synchronization, nor did we get to dive into concrete examples of how concurrency can hurt your app. We'll save those for another day, but you can dig into some additional reading and videos if you're eager to dive deeper.





TrailBuddy: Using AI to Create a Predictive Trail Conditions App

Viget is full of outdoor enthusiasts and, of course, technologists. For this year's Pointless Weekend, we brought these passions together to build TrailBuddy. This app aims to solve that eternal question: Is my favorite trail dry so I can go hike/run/ride?

While getting muddy might rekindle fond childhood memories for some, exposing your gear to the elements isn’t great – it’s bad for your equipment and can cause long-term, and potentially expensive, damage to the trail.

There are some trail apps out there but we wanted one that would focus on current conditions. Currently, our favorites trail apps, like mtbproject.com, trailrunproject.com, and hikingproject.com -- all owned by REI, rely on user-reported conditions. While this can be effective, the reports are frequently unreliable, as condition reports can become outdated in just a few days.

Our goal was to solve this problem by building an app that brought together location, soil type, and weather history data to create on-demand condition predictions for any trail in the US.

We built an initial version of TrailBuddy by tapping into several readily-available APIs, then running the combined data through a machine learning algorithm. (Oh, and also by bringing together a bunch of smart and motivated people and combining them with pizza and some of the magic that is our Pointless Weekends. We'll share the other Pointless Project, Scurry, with you soon.)

The quest for data.

We knew from the start this app would require data from a number of sources. As previously mentioned, we used REI’s APIs (i.e. https://www.hikingproject.com/data) as the source for basic trail information. We used the trails’ latitude and longitude coordinates as well as its elevation to query weather and soil type. We also found data points such as a trail’s total distance to be relevant to our app users and decided to include that on the front-end, too. Since we wanted to go beyond relying solely on user-reported metrics, which is how REI’s current MTB project works, we came up with a list of factors that could affect the trail for that day.

First on that list was weather.

We not only considered the impacts of the current forecast, but we also looked at the previous day’s forecast. For example, it’s safe to assume that if it’s currently raining or had been raining over the last several days, it would likely lead to muddy and unfavorable conditions for that trail. We utilized the DarkSky API (https://darksky.net/dev) to get the weather forecasts for that day, as well as the records for previous days. This included expected information, like temperature and precipitation chance. It also included some interesting data points that we realized may be factors, like precipitation intensity, cloud cover, and UV index. 

But weather alone can’t predict how muddy or dry a trail will be. To determine that for sure, we also wanted to use soil data to help predict how well a trail’s unique soil composition recovers after precipitation. Similar amounts of rain on trails of very different soil types could lead to vastly different trail conditions. A more clay-based soil would hold water much longer, and therefore be much more unfavorable, than loamy soil. Finding a reliable source for soil type and soil drainage proved incredibly difficult. After many hours, we finally found a source through the USDA that we could use. As a side note—the USDA keeps track of lots of data points on soil information that’s actually pretty interesting! We can’t say we’re soil experts but, we felt like we got pretty close.

We used Whimsical to build our initial wireframes.

Putting our design hats on.

From the very first pitch for this app, TrailBuddy’s main differentiator to peer trail resources is its ability to surface real-time information, reliably, and simply. For as complicated as the technology needed to collect and interpret information, the front-end app design needed to be clean and unencumbered.

We thought about how users would naturally look for information when setting out to find a trail and what factors they’d think about when doing so. We posed questions like:

  • How easy or difficult of a trail are they looking for?
  • How long is this trail?
  • What does the trail look like?
  • How far away is the trail in relation to my location?
  • For what activity am I needing a trail for?
  • Is this a trail I’d want to come back to in the future?

By putting ourselves in our users’ shoes we quickly identified key features TrailBuddy needed to have to be relevant and useful. First, we needed filtering, so users could filter between difficulty and distance to narrow down their results to fit the activity level. Next, we needed a way to look up trails by activity type—mountain biking, hiking, and running are all types of activities REI’s MTB API tracks already so those made sense as a starting point. And lastly, we needed a way for the app to find trails based on your location; or at the very least the ability to find a trail within a certain distance of your current location.

We used Figma to design, prototype, and gather feedback on TrailBuddy.

Using machine learning to predict trail conditions.

As stated earlier, none of us are actual soil or data scientists. So, in order to achieve the real-time conditions reporting TrailBuddy promised, we’d decided to leverage machine learning to make predictions for us. Digging into the utility of machine learning was a first for all of us on this team. Luckily, there was an excellent tutorial that laid out the basics of building an ML model in Python. Provided a CSV file with inputs in the left columns, and the desired output on the right, the script we generated was able to test out multiple different model strategies, and output the effectiveness of each in predicting results, shown below.

We assembled all of the historical weather and soil data we could find for a given latitude/longitude coordinate, compiled a 1000 * 100 sized CSV, ran it through the Python evaluator, and found that the CART and SVM models consistently outranked the others in terms of predicting trail status. In other words, we found a working model for which to run our data through and get (hopefully) reliable predictions from. The next step was to figure out which data fields were actually critical in predicting the trail status. The more we could refine our data set, the faster and smarter our predictive model could become.

We pulled in some Ruby code to take the original (and quite massive) CSV, and output smaller versions to test with. Now again, we’re no data scientists here but, we were able to cull out a good majority of the data and still get a model that performed at 95% accuracy.

With our trained model in hand, we could serialize that to into a model.pkl file (pkl stands for “pickle”, as in we’ve “pickled” the model), move that file into our Rails app along with it a python script to deserialize it, pass in a dynamic set of data, and generate real-time predictions. At the end of the day, our model has a propensity to predict fantastic trail conditions (about 99% of the time in fact…). Just one of those optimistic machine learning models we guess.

Where we go from here.

It was clear that after two days, our team still wanted to do more. As a first refinement, we’d love to work more with our data set and ML model. Something that was quite surprising during the weekend was that we found we could remove all but two days worth of weather data, and all of the soil data we worked so hard to dig up, and still hit 95% accuracy. Which … doesn’t make a ton of sense. Perhaps the data we chose to predict trail conditions just isn’t a great empirical predictor of trail status. While these are questions too big to solve in just a single weekend, we'd love to spend more time digging into this in a future iteration.



  • News & Culture

io

A Viget Exploration: How Tech Can Help in a Pandemic

Viget Explorations have always been the result of our shared curiosities. They’re usually a spontaneous outcome of team downtime and a shared problem we’ve experienced. We use our Explorations to pursue our diverse interests and contribute to the conversations about building a better digital world.

As the COVID-19 crisis emerged, we were certainly experiencing a shared problem. As a way to keep busy and manage our anxieties, a small team came together to dive into how technology has helped, and, unfortunately, hindered the community response to the current pandemic.

We started by researching the challenges we saw: information overload, a lack of clarity, individual responsibility, and change. Then we brainstormed possible technical solutions that could further improve how communities respond to a pandemic. Click here to see our Exploration on some possible ways to take the panic out of pandemics.

While we aren’t currently pursuing the solutions outlined in the Exploration, we’d love to hear what you think about these approaches, as well as any ideas you have for how technology can help address the outlined challenges.

Please note, this Exploration doesn’t provide medical information. Visit the Centers for Disease Control and Prevention’s website for current information on COVID-19, its symptoms, and treatments.

At Viget, we’re adjusting to this crisis for the safety of our clients, our staff, and our communities. If you’d like to hear from Viget's co-founder, Brian Williams, you can read his article on our response to the situation.



  • News & Culture

io

Pursuing A Professional Certification In Scrum

Professional certifications have become increasingly popular in this age of career switchers and the freelance gig economy. A certification can be a useful way to advance your skill set quickly or make your resume stand out, which can be especially important for those trying to break into a new industry or attract business while self-employed. Whatever your reason may be for pursuing a professional certificate, there is one question only you can answer for yourself: is it worth it?

Finding first-hand experiences from professionals with similar career goals and passions was the most helpful research I used to answer that question for myself. So, here’s mine: why I decided to get Scrum certified, how I evaluated my options, and whether it was really worth it.

A shift in mindset

My background is in brand strategy, where it’s typical for work to follow a predictable order, each step informing the next. This made linear techniques, like waterfall timelines, completing one phase of work in its entirety before moving on to the next, and documenting granular tasks weeks in advance, helpful and easy to implement. When I made the move to more digitally focused work, tasks followed a much looser set of ‘typical’ milestones. While the general outline remained the same (strategy, design, development, launch), there was a lot more overlap in how tasks informed each other, and they would keep informing and re-informing one another, as an iterative workflow encourages.

Trying to fit a very fluid process into my very stiff, linear approach to project planning didn’t work so well. I didn’t have the right strategies to manage risks productively without feeling like the whole project was off track. Being in the habit of accounting for granular details all the time, I struggled to lean on others to help define what we should work on and when, and to be okay if that changed once, or twice, or three times. Everything I learned about the process of product development came from learning on the job and making a ton of mistakes, and I knew I wanted to get better.

Photo by Christin Hume on Unsplash

I was fortunate enough to work with a group of developers who were looking to make a change, too. As ‘agile’ enthusiasts, these developers were desperately looking for ways to infuse our approach to product work with agile-minded principles (the broad definition of ‘agile’ comes from ‘The Agile Manifesto’, which has influenced frameworks for organizing people and information, often applied in product development). This applied not only to how I worked with them, but to how they worked with each other and the way we all onboarded clients to these new expectations. This was a huge eye-opener for me. Soon enough, I started applying these agile strategies to my day-to-day: running stand-ups, setting up backlogs, and reorganizing the way I thought about work output. It’s from this experience that I decided it might be worth learning these principles more formally.

The choice to get certified

There is a lot of literature out there about agile methodologies and a lot to be learned from casual research. This benefitted me for a while until I started to work on more complicated projects, or projects with more ambitious feature requests. My decision to ultimately pursue a formal agile certification really came down to three things:

  1. An increased use of agile methods across my team. Within my day-to-day I would encounter more team members who were familiar with these tactics and wanted to use them to structure the projects they worked on.
  2. The need for a clear definition of what processes to follow. To be an effective champion of these principles, I needed a real understanding of how to implement agile processes and how to use them consistently.
  3. Being able to diversify my experience. Finding ways to differentiate my resume from others with similar experience would be an added benefit to getting a certification. If nothing else, it would demonstrate that I’m curious-minded and proactive about my career.

To achieve these things, I gravitated towards a more foundational education in a specific agile methodology. This made Scrum the most logical choice, given that it’s the basis for many of the agile strategies out there and that it dominates the field.

Evaluating all the options

For Scrum education and certification, there are really two major players to consider.

  1. Scrum Alliance - Probably the best-known Scrum organization. They are highly recognizable and do a lot to further the broader understanding of Scrum as a practice.
  2. Scrum.org - Led by the original co-founder of Scrum, Ken Schwaber, Scrum.org is well-respected and touted for its authority in the industry.

Each has their own approach to teaching and awarding certifications as well as differences in price point and course style that are important to be aware of.

SCRUM ALLIANCE

Pros

  • Strong name recognition and leaders in the Scrum field
  • Offers both in-person and online courses
  • Hosts in-person events, webinars, and global conferences
  • Provides robust amounts of educational resources for its members
  • Has specialization tracks for folks looking to apply Scrum to their specific discipline
  • Members are required to keep their skills up to date by earning educational credits throughout the year to retain their certification
  • Consistent information across all course administrators ensuring you'll be set up to succeed when taking your certification test.

Cons

  • High cost creates a significant barrier to entry (we’re talking in the thousands of dollars here)
  • Courses are required to take the certification test
  • Certification expires after two years, requiring additional investment in time and/or money to retain credentials
  • Difficult to find sample course material ahead of committing to a course
  • Courses are several days long which may mean taking time away from a day job to complete them

SCRUM.ORG

Pros

  • Strong clout due to its founder, Ken Schwaber, a co-creator of Scrum
  • Offers in-person classes and self-paced options
  • Hosts in-person events and meetups around the world
  • Provides free resources and materials to the public, including practice tests
  • Has specialization tracks for folks looking to apply Scrum to their specific discipline
  • Minimum score on certification test required to pass; certification lasts for life
  • Lower cost for certification when compared to peers

Cons

  • Much lesser known to the general public, as compared to its counterpart
  • Less sophisticated educational resources (mostly confined to PDFs or online forums) making digesting the material challenging
  • Practice tests are slightly out of date making them less effective as a study tool
  • Self-paced education is not structured and therefore can’t ensure you’re learning everything you need to know for the test
  • Lack of active and engaging community will leave something to be desired

Before coming to a decision, it was helpful to me to weigh these pros and cons against a set of criteria. Here’s a helpful scorecard I used to compare the two institutions.

Scrum Alliance Scrum.org
Affordability ⚪⚪⚪
Rigor⚪⚪⚪⚪⚪
Reputation⚪⚪⚪⚪⚪
Recognition⚪⚪⚪
Community⚪⚪⚪
Access⚪⚪⚪⚪⚪
Flexibility⚪⚪⚪
Specialization⚪⚪⚪⚪⚪⚪
Requirements⚪⚪⚪
Longevity⚪⚪⚪

The four areas that were most important to me were:

  • Affordability - I’d be self-funding this certificate so the investment of cost would need to be manageable.
  • Self-paced - Not having a lot of time to devote in one sitting, the ability to chip away at coursework was appealing to me.
  • Reputation - Having a certificate backed by a well-respected institution was important to me if I was going to put in the time to achieve this credential.
  • Access - Because I wanted to be a champion for this framework for others in my organization, having access to resources and materials would help me do that more effectively.

Ultimately, I decided upon a Professional Scrum Master certification from Scrum.org! The price and flexibility of learning course content were most important to me. I found a ton of free materials on Scrum.org that I could study myself, and their practice tests gave me a good idea of how well I was progressing before I committed to the cost of actually taking the test. And the pedigree of the certification felt comparable to that of Scrum Alliance, especially considering that a co-creator of Scrum himself ran the organization.

Putting a certificate to good use

I don’t work in a formal Agile company, and not everyone I work with knows the ins and outs of Scrum. I didn’t use my certification to leverage a career change or new job title. So after all that time, money, and energy, was it worth it?

I think so. I feel like I use my certification every day and employ many of the principles of Scrum in my day-to-day management of projects and people.

  • Self-organizing teams are really important for fostering trust and collaboration among project members. This means leaning on each other’s past experiences and lessons learned to inform our own approach to work. It also means taking a step back as a project manager to recognize the strengths on your team and trust their lead.
  • Approaching things in bite-size pieces is also a best practice I use every day. Even when there isn't a mandated sprint rhythm, breaking things down into effort level, goals, and requirements is an excellent way to approach work confidently and avoid getting too overwhelmed.
  • Retrospectives and stand-ups are also absolute musts for Scrum practices, and these can be modified to work for companies and project teams of all shapes and sizes. Keeping a practice of collective communication and reflection will keep a team humming and provide a safe space to vent and improve.
Photo by Gautam Lakum on Unsplash

Parting advice

I think furthering your understanding of industry standards and keeping yourself open to new ways of working will always benefit you as a professional. Professional certifications are readily available and may be more relevant than ever.

If you’re on this path, good luck! And here are some things to consider:

  • Do your research – With so many educational institutions out there, you can definitely find the right one for you, with the level of rigor you’re looking for.
  • Look for company credits or incentives – Some companies cover part or all of the cost for continuing education.
  • Get started ASAP – You don’t need a full certification to start implementing small tactics to your workflows. Implementing learnings gradually will help you determine if it’s really something you want to pursue more formally.




io

New regional visas for Australia

The Australian Government has introduced two new regional visas which require migrants to commit to life in regional Australia for at least three years. This new visa opens the door to permanent residency for overseas workers from a wider range of occupations than before — including such occupations as real estate agents, call centre […]

The post New regional visas for Australia appeared first on Visa Australia - Immigration Lawyers & Registered Migration Agents.




io

Occupations that may be taken off or put onto the skilled migration occupation lists

The Department of Employment, Skills, Small and Family Business is considering removing the following occupations from the Skilled Migration Occupation Lists (Skills List) in March 2020: Careers Counsellor, Vehicle Trimmer, Business Machine Mechanic, Animal Attendants and Trainers, Gardener (General), Hairdresser, Wood Machinist, Massage Therapist, Community Worker, Diving Instructor (Open Water), Gymnastics Coach or Instructor. At […]

The post Occupations that may be taken off or put onto the skilled migration occupation lists appeared first on Visa Australia - Immigration Lawyers & Registered Migration Agents.




io

Visa cancelled due to incorrect information given or provided to the Department of Home Affairs

It is a requirement that a visa applicant must fill in or complete his or her application form in such a manner that all questions are answered and no incorrect answers are given or provided. There is also a requirement that visa applicants must not provide incorrect information during interviews with the Minister for Immigration (‘Minister’), […]

The post Visa cancelled due to incorrect information given or provided to the Department of Home Affairs appeared first on Visa Australia - Immigration Lawyers & Registered Migration Agents.



  • Visa Cancellation
  • 1703474 (Refugee) [2017] AATA 2985
  • cancel a visa
  • cancelled visa
  • Citizenship and Multicultural Affairs
  • Department of Home Affairs
  • migration act 1958
  • minister for immigration
  • NOICC
  • notice of intention to consider cancellation
  • Sanaee (Migration) [2019] AATA 4506
  • section 109
  • time limits

io

Reel 3.0: New Color Schemes, Portfolio Styles & More!

We’re very excited to announce a new major update for our Reel theme. The new 3.0 version brings new color schemes and many improvements to the Portfolio Showcase widget. What’s new in 3.0? 5 New Color Schemes + 2 New Theme Styles Full-width header option New styles & options for Portfolio Showcase widget 5 New Color Schemes After long research […]




io

Presence 2.0: Beaver Builder Integration, Dark Skin & More!

Great news for the users of Presence — our multipurpose theme. We have finally released the long-awaited 2.0 version, which features major changes and improvements. What’s new in Presence 2.0? Beaver Builder Integration Dark Skin New Demo: Organic Shop New Typography and Colors options in the Customizer New Templates in Page Builder Beaver Builder Integration If you have followed recent […]




io

Jiacheng Yang 2020 Portfolio

Interaction Designer’s 2020 portfolio




io

Concurrency & Multithreading in iOS

Concurrency is the notion of multiple things happening at the same time. This is generally achieved either via time-slicing, or truly in parallel if multiple CPU cores are available to the host operating system. We've all experienced a lack of concurrency, most likely in the form of an app freezing up when running a heavy task. UI freezes don't necessarily occur due to the absence of concurrency — they could just be symptoms of buggy software — but software that doesn't take advantage of all the computational power at its disposal is going to create these freezes whenever it needs to do something resource-intensive. If you've profiled an app hanging in this way, you'll probably see a report that looks like this:

Anything related to file I/O, data processing, or networking usually warrants a background task (unless you have a very compelling excuse to halt the entire program). There aren't many reasons that these tasks should block your user from interacting with the rest of your application. Consider how much better the user experience of your app could be if instead, the profiler reported something like this:

Analyzing an image, processing a document or a piece of audio, or writing a sizeable chunk of data to disk are examples of tasks that could benefit greatly from being delegated to background threads. Let's dig into how we can introduce such behavior into our iOS applications.


A Brief History

In the olden days, the maximum amount of work per CPU cycle that a computer could perform was determined by the clock speed. As processor designs became more compact, heat and physical constraints started becoming limiting factors for higher clock speeds. Consequently, chip manufacturers started adding additional processor cores on each chip in order to increase total performance. By increasing the number of cores, a single chip could execute more CPU instructions per cycle without increasing its speed, size, or thermal output. There's just one problem...

How can we take advantage of these extra cores? Multithreading.

Multithreading is a capability provided by the host operating system that allows the creation and use of any number of threads. Its main purpose is to provide simultaneous execution of two or more parts of a program in order to utilize all available CPU time. Multithreading is a powerful technique to have in a programmer's toolbelt, but it comes with its own set of responsibilities. A common misconception is that multithreading requires a multi-core processor, but this isn't the case — single-core CPUs are perfectly capable of working on many threads; we'll look in a bit at why managing threads is a problem in the first place. Before we dive in, let's look at the nuances of what concurrency and parallelism mean using a simple diagram:

In the first situation presented above, we observe that tasks can run concurrently, but not in parallel. This is similar to having multiple conversations in a chatroom, and interleaving (context-switching) between them, but never truly conversing with two people at the same time. This is what we call concurrency. It is the illusion of multiple things happening at the same time when in reality, they're switching very quickly. Concurrency is about dealing with lots of things at the same time. Contrast this with the parallelism model, in which both tasks run simultaneously. Both execution models exhibit multithreading, which is the involvement of multiple threads working towards one common goal. Multithreading is a generalized technique for introducing a combination of concurrency and parallelism into your program.


The Burden of Threads

A modern multitasking operating system like iOS has hundreds of programs (or processes) running at any given moment. However, most of these programs are either system daemons or background processes that have very low memory footprint, so what is really needed is a way for individual applications to make use of the extra cores available. An application (process) can have many threads (sub-processes) operating on shared memory. Our goal is to be able to control these threads and use them to our advantage.

Historically, introducing concurrency to an app has required the creation of one or more threads. Threads are low-level constructs that need to be managed manually. A quick skim through Apple's Threaded Programming Guide is all it takes to see how much complexity threaded code adds to a codebase. In addition to building an app, the developer has to:

  • Responsibly create new threads, adjusting that number dynamically as system conditions change
  • Manage them carefully, deallocating them from memory once they have finished executing
  • Leverage synchronization mechanisms like mutexes, locks, and semaphores to orchestrate resource access between threads, adding even more overhead to application code
  • Mitigate the risks of an application that itself assumes most of the costs of creating and maintaining the threads it uses, rather than leaving that to the host OS

This is unfortunate, as it adds enormous levels of complexity and risk without any guarantees of improved performance.
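
To make that concrete, here is a small sketch of what the manual approach looks like; the work inside the closures is a placeholder, and this is illustrative rather than something you'd want to ship:

import Foundation

// Manually creating and configuring a thread (the "old way").
let worker = Thread {
    autoreleasepool {
        // Long-running work goes here; everything about this thread's
        // lifecycle, priority, and cleanup is now our responsibility.
    }
}
worker.stackSize = 1 << 20   // we even have to reason about stack size
worker.start()

// Synchronization is also on us, e.g. protecting shared state with a lock.
let lock = NSLock()
lock.lock()
// ... read or mutate shared state ...
lock.unlock()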


Grand Central Dispatch

iOS takes an asynchronous approach to solving the concurrency problem of managing threads. Asynchronous functions are common in most programming environments, and are often used to initiate tasks that might take a long time, like reading a file from the disk, or downloading a file from the web. When invoked, an asynchronous function executes some work behind the scenes to start a background task, but returns immediately, regardless of how long the original task might take to actually complete.

A core technology that iOS provides for starting tasks asynchronously is Grand Central Dispatch (or GCD for short). GCD abstracts away thread management code and moves it down to the system level, exposing a light API to define tasks and execute them on an appropriate dispatch queue. GCD takes care of all thread management and scheduling, providing a holistic approach to task management and execution, while also providing better efficiency than traditional threads.
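
To give a feel for how light that API is, here's a minimal sketch; expensiveWork() and show(_:) are stand-ins for your own functions, not part of GCD:

DispatchQueue.global(qos: .utility).async {
    // Runs on a background thread managed by GCD.
    let result = expensiveWork()   // placeholder for your own long-running work

    DispatchQueue.main.async {
        // Hop back to the main queue before touching the UI.
        show(result)               // placeholder for your own UI update
    }
}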

Let's take a look at the main components of GCD:

What've we got here? Let's start from the left:

  • DispatchQueue.main: The main thread, or the UI thread, is backed by a single serial queue. All tasks are executed in succession, so it is guaranteed that the order of execution is preserved. It is crucial that you ensure all UI updates are designated to this queue, and that you never run any blocking tasks on it. We want to ensure that the app's run loop (called CFRunLoop) is never blocked in order to maintain the highest framerate. Consequently, the main queue has the highest priority, and any tasks pushed onto this queue will get executed immediately.
  • DispatchQueue.global: A set of global concurrent queues, each of which manages its own pool of threads. Depending on the priority of your task, you can specify which queue to execute your task on, although you should stick to the default priority most of the time. Because tasks on these queues are executed concurrently, there is no guarantee that they will complete in the order in which they were queued.

Notice how we're not dealing with individual threads anymore? We're dealing with queues which manage a pool of threads internally, and you will shortly see why queues are a much more sustainable approach to multithreading.

Serial Queues: The Main Thread

As an exercise, let's look at a snippet of code below, which gets fired when the user presses a button in the app. The expensive compute function can be anything. Let's pretend it is post-processing an image stored on the device.

import UIKit

class ViewController: UIViewController {
    @IBAction func handleTap(_ sender: Any) {
        compute()
    }

    private func compute() -> Void {
        // Pretending to post-process a large image.
        var counter = 0
        for _ in 0..<9999999 {
            counter += 1
        }
    }
}

At first glance, this may look harmless, but if you run this inside of a real app, the UI will freeze completely until the loop is terminated, which will take... a while. We can prove it by profiling this task in Instruments. You can fire up the Time Profiler module of Instruments by going to Xcode > Open Developer Tool > Instruments in Xcode's menu options. Let's look at the Threads module of the profiler and see where the CPU usage is highest.

We can see that the Main Thread is clearly at 100% capacity for almost 5 seconds. That's a non-trivial amount of time to block the UI. Looking at the call tree below the chart, we can see that the Main Thread is at 99.9% capacity for 4.43 seconds! Given that a serial queue works in a FIFO manner, tasks will always complete in the order in which they were inserted. Clearly the compute() method is the culprit here. Can you imagine clicking a button just to have the UI freeze up on you for that long?

Background Threads

How can we make this better? DispatchQueue.global() to the rescue! This is where background threads come in. Referring to the GCD architecture diagram above, we can see that anything that is not the Main Thread is a background thread in iOS. They can run alongside the Main Thread, leaving it fully unoccupied and ready to handle other UI events like scrolling, responding to user events, animating etc. Let's make a small change to our button click handler above:

class ViewController: UIViewController {
    @IBAction func handleTap(_ sender: Any) {
        DispatchQueue.global(qos: .userInitiated).async { [unowned self] in
            self.compute()
        }
    }

    private func compute() -> Void {
        // Pretending to post-process a large image.
        var counter = 0
        for _ in 0..<9999999 {
            counter += 1
        }
    }
}

Code triggered by UI events, like this button handler, runs on the main queue by default, so in order to force the work to execute on a different thread, we wrap our compute call inside an asynchronous closure that gets submitted to the DispatchQueue.global queue. Keep in mind that we aren't really managing threads here. We're submitting tasks (in the form of closures or blocks) to the desired queue with the assumption that they are guaranteed to execute at some point in time. The queue decides which thread to allocate the task to, and it does all the hard work of assessing system requirements and managing the actual threads. This is the magic of Grand Central Dispatch. As the old adage goes, you can't improve what you can't measure. So we measured our truly terrible button click handler, and now that we've improved it, we'll measure it once again to get some concrete data with regards to performance.

Looking at the profiler again, it's quite clear to us that this is a huge improvement. The task takes an identical amount of time, but this time, it's happening in the background without locking up the UI. Even though our app is doing the same amount of work, the perceived performance is much better because the user will be free to do other things while the app is processing.

You may have noticed that we accessed a global queue of .userInitiated priority. This is an attribute we can use to give our tasks a sense of urgency. If we run the same task on a global queue and pass it a qos attribute of .background, iOS will treat it as low-priority work, and thus allocate fewer resources to execute it. So, while we don't have control over when our tasks get executed, we do have control over their priority.
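
As a small illustration (the closure bodies are placeholders), the same call shape lets you express how urgent a piece of work is:

// Work the user is actively waiting on is scheduled aggressively.
DispatchQueue.global(qos: .userInitiated).async {
    // e.g. preparing content the user just asked to see
}

// Deferrable work gets fewer resources (CPU time, I/O priority).
DispatchQueue.global(qos: .background).async {
    // e.g. prefetching, cleanup, or analytics uploads
}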

A Note on Main Thread vs. Main Queue

You might be wondering why the Profiler shows "Main Thread" and why we're referring to it as the "Main Queue". If you refer back to the GCD architecture we described above, the Main Queue is solely responsible for managing the Main Thread. The Dispatch Queues section in the Concurrency Programming Guide says that "the main dispatch queue is a globally available serial queue that executes tasks on the application’s main thread. Because it runs on your application’s main thread, the main queue is often used as a key synchronization point for an application."

The terms "execute on the Main Thread" and "execute on the Main Queue" can be used interchangeably.


Concurrent Queues

So far, our tasks have been executed exclusively in a serial manner. DispatchQueue.main is by default a serial queue, and DispatchQueue.global gives you four concurrent dispatch queues depending on the priority parameter you pass in.

Let's say we want to take five images, and have our app process them all in parallel on background threads. How would we go about doing that? We can spin up a custom concurrent queue with an identifier of our choosing, and allocate those tasks there. All that's required is the .concurrent attribute during the construction of the queue.

class ViewController: UIViewController {
    let queue = DispatchQueue(label: "com.app.concurrentQueue", attributes: .concurrent)
    let images: [UIImage] = [UIImage].init(repeating: UIImage(), count: 5)

    @IBAction func handleTap(_ sender: Any) {
        for img in images {
            queue.async { [unowned self] in
                self.compute(img)
            }
        }
    }

    private func compute(_ img: UIImage) -> Void {
        // Pretending to post-process a large image.
        var counter = 0
        for _ in 0..<9999999 {
            counter += 1
        }
    }
}

Running that through the profiler, we can see that the app is now spinning up 5 discrete threads to parallelize a for-loop.

Parallelization of N Tasks

So far, we've looked at pushing computationally expensive task(s) onto background threads without clogging up the UI thread. But what about executing parallel tasks with some restrictions? How can Spotify download multiple songs in parallel, while limiting the maximum number of simultaneous downloads to three? We can go about this in a few ways, but this is a good time to explore another important construct in multithreaded programming: semaphores.

Semaphores are signaling mechanisms. They are commonly used to control access to a shared resource. Imagine a scenario where a thread can lock access to a certain section of the code while it executes it, and then unlocks it when it's done, letting other threads execute that section of the code. You would see this type of behavior in database writes and reads, for example. What if you want only one thread writing to a database, preventing any reads during that time? This is a common thread-safety concern, typically addressed with a readers-writer lock. Semaphores can be used to control concurrency in our app by limiting the number of threads that can access a shared resource at the same time.

let kMaxConcurrent = 3 // Or 1 if you want strictly ordered downloads!
let semaphore = DispatchSemaphore(value: kMaxConcurrent)
let downloadQueue = DispatchQueue(label: "com.app.downloadQueue", attributes: .concurrent)

class ViewController: UIViewController {
    // Outlet for the table view we reload after each download below.
    @IBOutlet weak var tableView: UITableView!

    @IBAction func handleTap(_ sender: Any) {
        for i in 0..<15 {
            downloadQueue.async { [unowned self] in
                // Lock shared resource access
                semaphore.wait()

                // Expensive task
                self.download(i + 1)

                // Update the UI on the main thread, always!
                DispatchQueue.main.async {
                    self.tableView.reloadData()

                    // Release the lock
                    semaphore.signal()
                }
            }
        }
    }

    func download(_ songId: Int) -> Void {
        var counter = 0

        // Simulate semi-random download times.
        for _ in 0..<Int.random(in: 999999...10000000) {
            counter += songId
        }
    }
}

Notice how we've effectively restricted our download system to k simultaneous downloads. Each call to wait() decrements the semaphore's counter (blocking once it reaches zero), and the moment one download finishes (or its thread is done executing), signal() increments it again, allowing the managing queue to spawn another thread and start downloading another song. You can apply a similar pattern to database transactions when dealing with concurrent reads and writes.

Semaphores usually aren't necessary for code like the one in our example, but they become more powerful when you need to enforce synchronous behavior while consuming an asynchronous API. The above code would work just as well with a custom OperationQueue with a maxConcurrentOperationCount (sketched below), but it's a worthwhile tangent regardless.
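
Here's a rough sketch of that alternative, assuming it lives in the same view controller as the example above so that download(_:) and the tableView outlet are available:

// Shared queue, declared alongside downloadQueue and semaphore above.
let downloadOpQueue = OperationQueue()

// Inside handleTap, the throttling falls out of maxConcurrentOperationCount:
downloadOpQueue.maxConcurrentOperationCount = 3   // at most 3 downloads in flight

for i in 0..<15 {
    downloadOpQueue.addOperation { [unowned self] in
        self.download(i + 1)
        OperationQueue.main.addOperation {
            self.tableView.reloadData()
        }
    }
}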


Finer Control with OperationQueue

GCD is great when you want to dispatch one-off tasks or closures into a queue in a 'set-it-and-forget-it' fashion, and it provides a very lightweight way of doing so. But what if we want to create a repeatable, structured, long-running task that produces associated state or data? And what if we want to model this chain of operations such that they can be cancelled, suspended and tracked, while still working with a closure-friendly API? Imagine an operation like this:

This would be quite cumbersome to achieve with GCD. We want a more modular way of defining a group of tasks while maintaining readability and also exposing a greater amount of control. In this case, we can use Operation objects and queue them onto an OperationQueue, which is a high-level wrapper around DispatchQueue. Let's look at some of the benefits of using these abstractions and what they offer in comparison to the lower-level GCD API:

  • You may want to create dependencies between tasks, and while you could do this via GCD, you're better off defining them concretely as Operation objects, or units of work, and pushing them onto your own queue. This would allow for maximum reusability since you may use the same pattern elsewhere in an application.
  • The Operation and OperationQueue classes have a number of properties that can be observed, using KVO (Key Value Observing). This is another important benefit if you want to monitor the state of an operation or operation queue.
  • Operations can be cancelled, and whole queues can be suspended and resumed. Once you dispatch a task using Grand Central Dispatch, you no longer have control or insight into the execution of that task. The Operation API is more flexible in that respect, giving the developer control over the operation's life cycle (see the sketch just after this list).
  • OperationQueue allows you to specify the maximum number of queued operations that can run simultaneously, giving you a finer degree of control over the concurrency aspects.
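
Here's a minimal sketch of that control surface; the operation's body is a placeholder:

let queue = OperationQueue()
queue.maxConcurrentOperationCount = 2   // cap how many operations run at once

let exportOperation = BlockOperation {
    // Placeholder for a long-running, structured task; long work should
    // periodically check isCancelled and bail out early.
}
queue.addOperation(exportOperation)

// Queues can be suspended (pending operations won't start) and resumed.
queue.isSuspended = true
queue.isSuspended = false

// Individual operations, or everything still queued, can be cancelled.
exportOperation.cancel()          // only flags the operation as cancelled
queue.cancelAllOperations()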

The usage of Operation and OperationQueue could fill an entire blog post, but let's look at a quick example of what modeling dependencies looks like. (GCD can also create dependencies, but you're better off dividing up large tasks into a series of composable sub-tasks.) In order to create a chain of operations that depend on one another, we could do something like this:

class ViewController: UIViewController {
    var queue = OperationQueue()
    var rawImage: UIImage? = nil
    let imageUrl = URL(string: "https://example.com/portrait.jpg")!
    @IBOutlet weak var imageView: UIImageView!

    @IBAction func handleTap(_ sender: Any) {
        // Download the image off the main thread.
        let downloadOperation = BlockOperation {
            self.rawImage = Downloader.downloadImageWithURL(url: self.imageUrl)
        }

        // Filter the downloaded image, then hand the result to the image view
        // on the main queue.
        let filterOperation = BlockOperation {
            let filteredImage = ImgProcessor.addGaussianBlur(self.rawImage)
            OperationQueue.main.addOperation {
                self.imageView.image = filteredImage
            }
        }

        // The filter step must not start until the download step has finished.
        filterOperation.addDependency(downloadOperation)

        [downloadOperation, filterOperation].forEach {
            queue.addOperation($0)
        }
    }
}

So why not opt for a higher level abstraction and avoid using GCD entirely? While GCD is ideal for inline asynchronous processing, Operation provides a more comprehensive, object-oriented model of computation for encapsulating all of the data around structured, repeatable tasks in an application. Developers should use the highest level of abstraction possible for any given problem, and for scheduling consistent, repeated work, that abstraction is Operation. Other times, it makes more sense to sprinkle in some GCD for one-off tasks or closures that we want to fire. We can mix both OperationQueue and GCD to get the best of both worlds.


The Cost of Concurrency

DispatchQueue and friends are meant to make it easier for the application developer to execute code concurrently. However, these technologies do not guarantee improvements to the efficiency or responsiveness in an application. It is up to you to use queues in a manner that is both effective and does not impose an undue burden on other resources. For example, it's totally viable to create 10,000 tasks and submit them to a queue, but doing so would allocate a nontrivial amount of memory and introduce a lot of overhead for the allocation and deallocation of operation blocks. This is the opposite of what you want! It's best to profile your app thoroughly to ensure that concurrency is enhancing your app's performance and not degrading it.

We've talked about how concurrency comes at a cost in terms of complexity and allocation of system resources, but introducing concurrency also brings a host of other risks like:

  • Deadlock: A situation where two or more threads end up waiting on each other (or a thread ends up waiting on itself), which can halt the application's run loop entirely. In the context of GCD, be very careful with sync { } calls on a dispatch queue, as it's easy to end up with two synchronous operations stuck waiting for each other (see the sketch after this list).
  • Priority Inversion: A condition where a lower priority task blocks a high priority task from executing, which effectively inverts their priorities. GCD allows for different levels of priority on its background queues, so this is quite easily a possibility.
  • Producer-Consumer Problem: A race condition where one thread is creating a data resource while another thread is accessing it. This is a synchronization problem, and can be solved using locks, semaphores, serial queues, or a barrier dispatch if you're using concurrent queues in GCD.
  • ...and many other sorts of locking and data-race conditions that are hard to debug! Thread safety is of the utmost concern when dealing with concurrency.
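
As a small illustration of how easily that can happen, here's a sketch of the classic self-deadlock on a serial queue:

let serialQueue = DispatchQueue(label: "com.app.serialQueue")

serialQueue.async {
    // This block is running on serialQueue...
    serialQueue.sync {
        // ...and this one waits for serialQueue to be free, which it never
        // will be, because the outer block can't finish until this one does.
    }
}

// The same trap applies to calling DispatchQueue.main.sync { } from the main thread.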

Parting Thoughts + Further Reading

If you've made it this far, I applaud you. Hopefully this article gives you a lay of the land when it comes to multithreading techniques on iOS, and how you can use some of them in your app. We didn't get to cover many of the lower-level constructs like locks and mutexes and how they help us achieve synchronization, nor did we dive deeply into the ways concurrency can hurt your app beyond a quick sketch. We'll save those for another day, but you can dig into some additional reading and videos if you're eager to dive deeper.




io

TrailBuddy: Using AI to Create a Predictive Trail Conditions App

Viget is full of outdoor enthusiasts and, of course, technologists. For this year's Pointless Weekend, we brought these passions together to build TrailBuddy. This app aims to solve that eternal question: Is my favorite trail dry so I can go hike/run/ride?

While getting muddy might rekindle fond childhood memories for some, exposing your gear to the elements isn’t great – it’s bad for your equipment and can cause long-term, and potentially expensive, damage to the trail.

There are some trail apps out there, but we wanted one that would focus on current conditions. Currently, our favorite trail apps (mtbproject.com, trailrunproject.com, and hikingproject.com, all owned by REI) rely on user-reported conditions. While this can be effective, the reports are frequently unreliable, as condition reports can become outdated in just a few days.

Our goal was to solve this problem by building an app that brought together location, soil type, and weather history data to create on-demand condition predictions for any trail in the US.

We built an initial version of TrailBuddy by tapping into several readily-available APIs, then running the combined data through a machine learning algorithm. (Oh, and also by bringing together a bunch of smart and motivated people and combining them with pizza and some of the magic that is our Pointless Weekends. We'll share the other Pointless Project, Scurry, with you soon.)

The quest for data.

We knew from the start this app would require data from a number of sources. As previously mentioned, we used REI’s APIs (i.e. https://www.hikingproject.com/data) as the source for basic trail information. We used the trails’ latitude and longitude coordinates as well as their elevation to query weather and soil type. We also found data points such as a trail’s total distance to be relevant to our app users and decided to include that on the front-end, too. Since we wanted to go beyond relying solely on user-reported metrics, which is how REI’s current MTB project works, we came up with a list of factors that could affect the trail for that day.

First on that list was weather.

We not only considered the impacts of the current forecast, but we also looked at the previous day’s forecast. For example, it’s safe to assume that if it’s currently raining or had been raining over the last several days, it would likely lead to muddy and unfavorable conditions for that trail. We utilized the DarkSky API (https://darksky.net/dev) to get the weather forecasts for that day, as well as the records for previous days. This included expected information, like temperature and precipitation chance. It also included some interesting data points that we realized may be factors, like precipitation intensity, cloud cover, and UV index. 

But weather alone can’t predict how muddy or dry a trail will be. To determine that for sure, we also wanted to use soil data to help predict how well a trail’s unique soil composition recovers after precipitation. Similar amounts of rain on trails of very different soil types could lead to vastly different trail conditions. A more clay-based soil would hold water much longer, and therefore be much more unfavorable, than loamy soil. Finding a reliable source for soil type and soil drainage proved incredibly difficult. After many hours, we finally found a source through the USDA that we could use. As a side note—the USDA keeps track of lots of data points on soil information that’s actually pretty interesting! We can’t say we’re soil experts, but we felt like we got pretty close.

We used Whimsical to build our initial wireframes.

Putting our design hats on.

From the very first pitch for this app, TrailBuddy’s main differentiator from peer trail resources has been its ability to surface real-time information reliably and simply. However complicated the technology needed to collect and interpret that information, the front-end app design needed to be clean and unencumbered.

We thought about how users would naturally look for information when setting out to find a trail and what factors they’d think about when doing so. We posed questions like:

  • How easy or difficult of a trail are they looking for?
  • How long is this trail?
  • What does the trail look like?
  • How far away is the trail in relation to my location?
  • What activity do I need a trail for?
  • Is this a trail I’d want to come back to in the future?

By putting ourselves in our users’ shoes we quickly identified key features TrailBuddy needed to have to be relevant and useful. First, we needed filtering, so users could filter by difficulty and distance to narrow down their results to fit their activity level. Next, we needed a way to look up trails by activity type—mountain biking, hiking, and running are all types of activities REI’s MTB API tracks already so those made sense as a starting point. And lastly, we needed a way for the app to find trails based on your location, or at the very least the ability to find a trail within a certain distance of your current location.

We used Figma to design, prototype, and gather feedback on TrailBuddy.

Using machine learning to predict trail conditions.

As stated earlier, none of us are actual soil or data scientists. So, in order to achieve the real-time conditions reporting TrailBuddy promised, we decided to leverage machine learning to make predictions for us. Digging into the utility of machine learning was a first for all of us on this team. Luckily, there was an excellent tutorial that laid out the basics of building an ML model in Python. Given a CSV file with inputs in the left columns and the desired output on the right, the script we generated was able to test multiple different model strategies and report the effectiveness of each in predicting results, shown below.

We assembled all of the historical weather and soil data we could find for a given latitude/longitude coordinate, compiled a 1000 x 100 CSV, ran it through the Python evaluator, and found that the CART and SVM models consistently outranked the others in terms of predicting trail status. In other words, we found a working model to run our data through and get (hopefully) reliable predictions from. The next step was to figure out which data fields were actually critical in predicting trail status. The more we could refine our data set, the faster and smarter our predictive model could become.

We pulled in some Ruby code to take the original (and quite massive) CSV and output smaller versions to test with. Now again, we’re no data scientists here, but we were able to cull a good majority of the data and still get a model that performed at 95% accuracy.

With our trained model in hand, we could serialize it into a model.pkl file (pkl stands for “pickle”, as in we’ve “pickled” the model), move that file into our Rails app along with a Python script to deserialize it, pass in a dynamic set of data, and generate real-time predictions. At the end of the day, our model has a propensity to predict fantastic trail conditions (about 99% of the time, in fact…). Just one of those optimistic machine learning models, we guess.

Where we go from here.

It was clear that after two days, our team still wanted to do more. As a first refinement, we’d love to work more with our data set and ML model. Something quite surprising during the weekend was that we found we could remove all but two days’ worth of weather data, and all of the soil data we worked so hard to dig up, and still hit 95% accuracy. Which … doesn’t make a ton of sense. Perhaps the data we chose just isn’t a great empirical predictor of trail status. While these are questions too big to solve in just a single weekend, we’d love to spend more time digging into them in a future iteration.



  • News & Culture