
Ahead of VP Pence’s Iowa visit, Joe Biden’s campaign calls out ‘consequential failure’ of Trump coronavirus response

Vice President Mike Pence owes Iowans more than a photo-op when he visits Des Moines today, according to Joe Biden’s campaign.

“Iowans are seeing up close the most consequential failure of government in modern American history,” said Kate Bedingfield, spokeswoman for the former vice president and presumptive Democratic presidential nominee.

“With nearly 300,000 Iowans filing for unemployment, rural hospitals on life support, Latino communities disproportionately suffering and workers on the job without sufficient protection, Mike Pence owes Iowans more than a photo-op — he owes them answers,” she said.

Pence, head of the White House coronavirus task force, is scheduled to meet with Iowa Gov. Kim Reynolds and U.S. Sens. Chuck Grassley and Joni Ernst, all Republicans, as well as with faith, farm and food production leaders.

Pence will talk to faith leaders about how they are using federal and state guidelines to open their houses of worship in a safe and responsible manner.

Later, he will go to Hy-Vee’s headquarters in West Des Moines for a roundtable discussion with agriculture and food supply leaders to discuss steps being taken to ensure the food supply remains secure.

Pence has called Iowa a “success story” in its response to COVID-19, but Bedingfield said the Trump administration failed to protect Iowa families from the virus, which has claimed the lives of 231 Iowans.

“From devastating losses across the state, from meatpacking plants to rural communities, one thing is clear — it’s Iowans and the American people who are paying the price for the Trump administration’s denials and delays in response to this pandemic,” she said.

“Instead of listening to our own intelligence agencies and public health experts, Donald Trump was fed dangerous propaganda from the Chinese Communist Party — and he bought it,” she said. “Iowans deserve better — they deserve Joe Biden.”

For his part, Grassley said he welcomes the discussion with Pence.

“There’s much work to be done, and the pandemic is disrupting all of our communities,” Grassley said. “It’s important to hear directly from those who help feed the nation and the world.”

Ernst also is looking forward to the discussion of how Iowa is working to protect the health and safety of Iowa’s families and communities while reopening the state’s economy.

“We continue to take an all-hands-on-deck approach to tackling this pandemic,” she said. “Together, we will get through this.”

Comments: (319) 398-8375; james.lynch@thegazette.com





Scenic designer in Iowa City looks for light in the darkness

Benjamin Stuben Farrar of Iowa City is a storyteller without a story to tell at the moment.

The first story is as dramatic and layered as his bold scenic and lighting designs for area stages: “Benjamin Stuben Farrar” is not his actual name.

He was born Stewart Benjamin Farrar 41 years ago in Kentucky. He didn’t want to go through life as “Stewie,” so he went by “Benjamin” until he got to college at Vanderbilt University in Nashville. He ran into so many other Bens that his buddies decided to combine his names into “Stuben.”

That name followed him to grad school at the University of Iowa in 2002, where he earned an MFA in theater design. But when he moved to New York City in 2006 to pursue his career, he didn’t like hearing “Stuben” shouted across the theater.

“It sounded too much like ‘stupid,’ ” he said, “so I reverted back to Benjamin.”

But nicknames have a way of sticking. When he and his wife moved back to Iowa City in 2015 to raise their daughter, he switched to “Stuben” again, since that’s how people knew him there.

Professionally, he uses “S. Benjamin Farrar” and on Facebook, he goes by “Benjamin Stuben Farrar” so friends from his various circles can find him. Even though most people now call him “Stuben,” he still introduces himself as “Benjamin.”

“To this day, I have 12 different names,” he said with a laugh. “Only the bill collectors know me as ‘Stewart.’”

Changing realms

Like his name, his artistry knows no bounds.

He has planted apple trees on Riverside Theatre’s indoor stage in Iowa City; a child’s outdoor playground on the Theatre Cedar Rapids stage; and dramatic spaces for Noche Flamenca’s dancers in New York City venues and on tour.

These days, however, his theatrical world has gone dark.

His recent designs for “The Humans,” “The Skin of Our Teeth” and “Kinky Boots” at Theatre Cedar Rapids and “A Doll’s House, Part 2” at Riverside Theatre have been canceled or postponed in the wake of the coronavirus pandemic. He has “The Winter’s Tale” in the works for Riverside Theatre’s free Shakespeare in the Park slated for June, but time will tell if that changes, too.

“Within the course of two weeks, five productions were canceled or moved indefinitely,” he said.

Looking ahead, he’s not sure what shows he’ll have time to design for the upcoming seasons. He’s used to juggling three or four productions at a time, but he said that could become really difficult if the shows fall on top of each other at the various venues.

As with so many artists right now, his world keeps changing.

He and his wife, Jody Caldwell, an editor and graduate of the UI Writers’ Workshop, are both freelancers, leaving them with no income during this pandemic. So Farrar has been wading through red tape and delays to secure unemployment compensation and the government stimulus check, for which he’s still waiting. One bright spot was receiving a $1,000 Iowa Arts & Culture Emergency Relief Fund grant given to 156 Iowa creatives who have lost income from canceled projects.

With his regular revenue streams drying up, he’s been considering other ways to earn money, such as teaching theater or creating and selling more of his digital and film photography — an outgrowth of his fascination with the way lighting can sculpt a scene on stage.

“I love doing nature (photography). I love doing details,” he said. “I love photographing people, too, especially on stage — I love photographing my own shows. It’s just a lot of fun.

“For me, nature’s so interesting, especially living where we do in North America, there’s vast changes from one time of year to another. I just love looking at that on a very small scale, and how light happens to fall on that particular surface — how that surface changes color,” he said.

“Right now the redbuds are out. The magnolias came out two weeks ago and then they started to fall. It changes the landscape dramatically, especially based on whether it’s a morning light or afternoon light or evening light, whether it’s cloudy, whether the sun’s peeking through clouds and highlighting a few individual leaves. I find that super fascinating.

“That’s how I can look at the same boring tree at different times of year, at different times of day, and find something interesting to photograph.”

Lighting design

While his scenic designs create an immediate visual impact and help tell the story swirling around the actors, Farrar was a lighting designer before he became a scenic designer.

It wasn’t love at first sight. He took a lighting design course in college, but didn’t “get” it.

“It’s really difficult to wrap your head around it,” he said.

His aha moment came when he was running lights for an operetta in college.

“I just had these little faders in front of me so I could raise certain lights up and down. And the music was happening in front of me and I thought, ‘I control this whole little universe. I can make things completely disappear. I can sculpt things from the side, I can make things feel totally different — just like music can — just based on how it’s lit.’ And then I finally started to understand how the lighting hooked things together,” he said.

From there, his interest in lighting soared.

“I absolutely love lighting,” he said. “I think it’s probably given me more joy than anything else, just because I can go for a walk someplace and just the way the lighting changes as the clouds come in or out, or as the time of year changes and the angle of the sun changes, I really enjoy seeing that — and that’s what got me into photography.”

Scenic design

While his design work is a collaborative process with the director and other production team members, the ideas begin flowing as soon as he starts reading a script. With the flamenco dance company in New York, he might start working on a show two years in advance. With Theatre Cedar Rapids, the lead time is generally six months to look at the season overall, and four months to “get things going” on a particular show, he said. The lead time is about two months for Riverside Theatre shows, which have shorter rehearsal periods.

He begins thinking about the theater spaces, the text that the audience never sees, the show’s technical demands, and the scale in relation to the human body. He still likes to do some of his design work by hand, but computers and the 3D printer he has in his basement workshop have made the process much quicker for creating the drawings and scale models for each show.

He also enjoys the variety and challenge of moving between the small space inside Riverside Theatre and the large space inside Theatre Cedar Rapids, as well as the theaters at Grinnell College and Cornell College in Mount Vernon, and the New York theaters and touring venues that have housed his designs.

Ultimately, the goal of scenic design “is always about the storytelling,” he said.

“There’s a version of a show that exists in a script, if there is a script. Assuming it has a script, there is a scaffolding for that show in the script, and then there’s a version of the show in the director’s head, and then there’s a version of the show that’s performed in my head as I read the script. So there’s all these different versions.”

If the show is a musical, the choreographer brings in another idea, and the musical score adds another element. Sometimes Farrar knows the music very well, but other times, he doesn’t.

“Hopefully, I can integrate that well if I listen to the music while working on the show — not usually when I’m reading the script, but while I’m drafting the show. I’ll listen to the music to get a sense of how the show wants to move.

“Integrating all these different versions of the show — the text, what’s in my head, what’s in the director’s head, what’s in the choreographer’s head, the role the music plays — and then you synthesize all those elements, and then you find out how the show wants to move in the space it has. And how a show moves is one of the most important things to me. ...

“You get a sense that the show becomes this conscious element that wants a certain thing, and will reveal those things over time.”

And time is something he has right now.

Comments: (319) 368-8508; diana.nollen@thegazette.com





Ready to reopen? Four Cedar Rapids business leaders offer advice

On Wednesday, Gov. Kim Reynolds removed some restrictions on businesses in the 22 counties that have been seeing higher numbers of Iowans affected by COVID-19, including Linn and Johnson counties.

Now those organizations have to make decisions — on bringing back employees, which services to provide and how much access to allow customers.

And as those businesses reopen — some after more than two months — crucial steps likely will include ongoing communication with employees and customers and a well-thought-out restart plan.

The Gazette spoke with business leaders about the challenges faced by business owners as they consider how and when to open their doors.

• David Drewelow of ActionCoach Heartland in Cedar Rapids is a consultant with 19 years of business coaching experience.

• David Hensley, director of the University of Iowa’s John Pappajohn Entrepreneurial Center, has expertise in small business management during a crisis.

• Josh Seamans is vice president of Cushman & Wakefield, a global commercial real estate adviser with offices in more than 60 countries, including China.

• Steve Shriver is a Cedar Rapids entrepreneur who operates and/or helped found four diverse enterprises, including Eco Lips and Brewhemia.

Their responses here have been condensed from lengthy individual interviews.

How important is communication and having a well-prepared plan for resumption of business?

Shriver: The one thing that has been imperative throughout this whole process is communication with employees, customers and the public. I also would recommend writing as detailed a business resumption plan as possible.

One of the main reasons is to fully understand what you are doing as this is a brand-new challenge that none of us has faced.

Drewelow: You really need to be communicating now, more than ever, with your employees, customers, vendors and suppliers. What does your plan for the next 20 to 30 days look like? What are things that you can be doing right now to get ready?

Hensley: I think it is critically important to have a reopening plan because most businesses are not going to be at full strength right away. What might their revenue forecasts look like? How can they keep their costs down as their business starts to rebound before it gets back to full capacity?

Seamans: Your plan should include a checklist of reopening steps appropriate to your type of business. Retail will have different items than distribution or industrial businesses.

You need to communicate your plan to employees, customers, landlords and lenders.

How much will fear play a role in the resumption of business?

Shriver: Everyone has a different idea of the risks involved, such as using a handle to open a door or interacting with a person — the little things that we are used to doing.

When you look at the risk versus reward of doing that, some people will be willing to go into a store and others will stay home. Some employees don’t want to come back to work yet and some people are itching to get back. You have everything in between.

Drewelow: The fear factor is huge. For the small business owner, we try to channel that fear into a focus on being highly aware of all the possibilities to mitigate concerns.

If you own a restaurant, can you post the menu online or use disposable menus? That way, a customer doesn’t have to touch something that might have been handled by someone else.

Appropriate spacing of customers within a restaurant also will help alleviate some of the fear.

Hensley: You need to communicate what steps you are taking to protect the health and safety of your employees and your customers. If you will be requiring the use of personal protective equipment like face masks, are you going to make them available?

Will limiting the number of people entering a business be difficult?

Shriver: There are not a lot of people who want to gather in masses right now. It seems like as businesses start to reopen, it will be more like a trickle.

It will be just like turning on a water spigot, with the flow of customers gradually increasing.

Hensley: I think we will see a lot more customers buying, rather than just shopping. They are going to buy the items they came for and then leave.

If businesses have more vulnerable customers, I would recommend establishing separate early morning times like many of the grocery stores have done to provide a safer environment.

Many companies have adopted using digital conferencing platforms for meetings. Will we see that trend continue?

Seamans: I think Zoom will be used for more internal meetings, so there is no need for someone to fly from, say, San Francisco to New York. But in terms of sales, it does not replicate that face-to-face interaction.

We have done work with clients that live several hours away and we have to come in for a city council meeting for a project that we are working on. That’s a three-hour drive in for a one- or two-hour council meeting and another three-hour drive back — basically an eight-hour day. If we can Zoom in and answer any questions, that’s a lot more efficient at less cost.

What should a small-business owner consider when determining how many employees to recall?

Shriver: We will be able to bring some people back to work and generate some revenue, but not in a huge way. Anybody who can work from home should continue working from home for as long as they possibly can.

We should not be rushing to get those people back. There is no incentive.

Hensley: Owners are going to be making hard decisions. Do I bring back half of my team at full time or do I bring everyone back at reduced hours? What are those implications going to be?

In some cases, other industries have been hiring and some may be making more money. Businesses may have to pay more to attract that talent back.

Restaurants have been forced to change their business model from on-premise dining to carryout and delivery. Should all owners take this opportunity to examine and update their business model?

Shriver: We took two businesses — SOKO Outfitters, a retail store, and Brewhemia, a restaurant — and put them rapidly online within a month. When we come out of this, I think we will be stronger because we will have that infrastructure in place in addition to the old-school, face-to-face traffic we used to have.

Hensley: I think this is definitely the time to look at your business model to determine what is appropriate given the economic situation that we have. That is not just going to be critical for reopening, but over the next six months to a year as long as we are dealing with the virus.

Some business owners will see that their customers have lost their jobs or seen their income drop dramatically. They are going to be changing their patterns of consumption based on necessities.

Drewelow: Some of my clients are looking at their competitors and realizing that some may not reopen. They are looking at whether they can merge with them or somehow salvage parts of those businesses.

Some business owners have realized that the way they deliver products or services will have to change. Many of my older clients have been dragged into using modern technology.





U.S. Rep. Dave Loebsack calls on president to protect packing plant workers

At the same time Vice President Mike Pence was in Iowa on Friday to discuss the nation’s food supply security, U.S. Rep. Dave Loebsack called on the administration to take more measures to protect workers in food processing plants.

Loebsack also questioned the decisions to reopen the economy being made by the Trump administration and Iowa Gov. Kim Reynolds.

“I don’t think we’re ready for that yet, quite honestly,” the Iowa City Democrat said.

“Ready” will be when adequate protections are in place for the people processing America’s food, Loebsack said.

Workers are showing up on the job, but “they fear for their families, they fear for themselves, they fear for everybody,” Loebsack said. “They don’t know if they’re going to catch this thing or not. But they’re there.”

Of particular concern are workers in food processing, such as those in meatpacking plants in Iowa where more than 1,600 cases of COVID-19 have been reported.

“I really believe that we should not open the plants if we do not ensure worker safety,” Loebsack said.

He called for President Donald Trump to use the Defense Production Act, which the president invoked to keep meatpacking plants open, to ensure an adequate supply of personal protective equipment for packing plant workers.

If Pence and the president are concerned about the nation’s food supply, then they need to “keep those workers safe and, therefore, keep those processing plants running” to avoid meat shortages at the grocery store, Loebsack said.

“We can’t have those plants running if workers are not protected. It’s that simple,” he said. “It’s not just the workers, it’s the families, it’s the community at large.”

With unemployment at 14.7 percent, and probably higher, Loebsack said Congress should extend federal coronavirus-related unemployment benefits of $600 a week beyond their current July end date.

He’s also pleased that the last relief package fixed the Small Business Administration’s Economic Injury Disaster Loan program to allow farmers to apply for assistance.

Comments: (319) 398-8375; james.lynch@thegazette.com





Pence’s Iowa visit underscores coronavirus worry

DES MOINES — In traveling to Iowa to call attention to the burdens COVID-19 brought to religious services and the food supply, Vice President Mike Pence unwittingly called attention to another issue: whether the White House itself is safe from the disease.

So far this week, two White House aides — President Donald Trump’s valet on Thursday, and Pence’s press secretary on Friday — have tested positive for the virus.

On Friday morning, Pence’s departure to Des Moines was delayed an hour as Air Force Two idled on a tarmac near Washington. Though Pence’s press secretary was not on the plane, White House physicians used contact tracing to identify six other aides aboard who had been near her, and pulled them from the flight. The White House later said the six had tested negative.

Trump, who identified the Pence aide as press secretary Katie Miller, said he was “not worried” about the virus in the White House.

Nonetheless, officials said they were stepping up safety protocols and were considering a mandatory mask policy for those in close contact with Trump and Pence.

The vice president and 10 members of his staff are given rapid coronavirus tests daily, and the president is also tested regularly.

Miller, who is married to Trump adviser Stephen Miller, had been in recent contact with Pence but not with the president. Pence is leader of the White House coronavirus task force and Katie Miller has handled the group’s communications.

After landing in Des Moines, Pence spoke to a group of faith leaders about the importance of resuming religious services, saying cancellations in the name of slowing the spread of the virus have “been a burden” for congregants.

His visit coincided with the state announcing 12 more deaths from the virus, bringing the total to 243 in less than two months.

Pence spoke with the religious leaders and Republican officials during a brief visit. He also spoke later with agricultural and food company executives.

“It’s been a source of heartache for people across the country,” Pence told about a dozen people at the Church of the Way Presbyterian church in Urbandale.

Pence told the group that continued efforts to hold services online and in other ways “made an incalculable difference in our nation seeing our way through these troubled times.”

Iowa is among many states where restrictions on in-person services are starting to ease. GOP Gov. Kim Reynolds, who joined both of the state’s Republican senators at the event, has instituted new rules that allow services to resume with restrictions.

At Friday’s event, some religious leaders expressed hesitation about resuming large gatherings, while others said they would begin holding services soon.

“We are pretty much in a position of uniformly believing that it’s too early to return to personal worship. It’s inadvisable at the moment particularly with rising case counts in communities where we are across the state,” said David Kaufman, rabbi of Temple B’nai Jeshurun in Des Moines.

The Rev. Terry Amann, of Church of the Way, said his church will resume services May 17 with chairs arranged so families can sit together but avoid the temptation to shake hands or offer hugs. He said hand sanitizer will be available.

A new poll by The University of Chicago Divinity School and the Associated Press-NORC Center for Public Affairs Research shows just 9 percent of Americans think in-person services should be allowed without restrictions, while 42 percent think they should be allowed with restrictions and 48 percent think they shouldn’t be allowed at all.

Pence later met with agriculture and food industry leaders. Iowa tops the nation in egg production and pork processing and is a top grower of corn and soybeans.

Meatpacking is among the state’s biggest employers, and companies have been working to restart operations after closing them because hundreds of their workers became infected.

As Pence touted the Trump administration’s announcement of the reopening of 14 meatpacking plants, including two of the worst hit by coronavirus infections, in Perry and Waterloo, the union representing workers called for safer working conditions.

“Iowa’s meatpacking workers are not sacrificial lambs. They have been working tirelessly during the coronavirus pandemic to ensure families here and across the country have access to the food they need,” said the United Food and Commercial Workers Union in a statement.

The Associated Press and the McClatchy Washington Bureau contributed to this report.





Celebrating on a screen: Iowa universities hold first-ever online commencements

Iowa State University graduates who celebrated commencement Friday saw lots of caps and gowns, red-and-gold confetti and arenas packed with friends and family.

But none of those images were from this year — which now is defined by the novel coronavirus that has forced education online and put an end to large gatherings like graduation ceremonies.

Appearing in front of a red ISU screen Friday, College of Agriculture and Life Sciences Dean Daniel J. Robison addressed graduates like he usually would at commencement — but this time in a recorded message acknowledging the unprecedented circumstances keeping them apart.

“This year, because of the COVID crisis, we are unfortunately not all together for this happy occasion,” he said, pushing forward in a motivational tone by quoting famed ISU alumnus George Washington Carver.

“When you can do the common things in life in an uncommon way, you will command the attention of the world,” Robison said, citing Carver.

About 12,000 graduates across Iowa’s public universities this month are doing exactly that — capping their collegiate careers with never-before-attempted online-only commencement ceremonies, with each campus and its respective colleges attempting a variety of virtual celebration methods.

ISU and the University of Iowa are attempting some form of socially distanced, livestreamed convocation with countdown clocks and virtual confetti. All three campuses, including the University of Northern Iowa, have posted recorded messages, videos and slides online acknowledging individual graduates.

Some slides include photos, thank-yous, quotes and student plans for after graduation.

UNI, which didn’t try any form of a live virtual ceremony, instead created a graduation website that went live Thursday. That site hosts an array of recorded video messages — including one from UNI President Mark Nook, who, standing alone behind a podium on campus clad in traditional academic regalia, recognized his campus’ 1,500-some spring graduates and their unusual challenges.

“We know the loss you feel in not being able to be on campus to celebrate this time with your friends, faculty and staff,” Nook said. “To walk around campus in your robe and to take those pictures with friends and family members … The loss is felt by many of us as well.”

He reminded those listening that this spring’s UNI graduates — like those at the UI and ISU — can participate in an upcoming in-person commencement ceremony.

And although students were allowed to return caps and gowns they ordered for their canceled walks across the stage, some kept them as keepsakes. The campuses offered other tokens of remembrance as well, including “CYlebration” gift packages ISU sent to graduates in April stuffed with a souvenir tassel, diploma cover, and streamer tube — to make up for the confetti that won’t be falling on graduation caps from the Hilton Coliseum rafters.

In addition to the recorded messages from 17 UI leaders — including President Bruce Harreld — the campus solicited parent messages, which will be included in the live virtual ceremonies.

To date, about 3,100 of the more than 5,400 UI graduates have RSVP’d to participate in the ceremony, which spokeswoman Anne Bassett said is a required affirmation from the students to have their names read.

“Students do not have to sign up to watch,” she said. “So there’s no way at this time to predict how many will do so.”

Despite the historic nature of the first online-only commencement ceremonies — forever bonding distanced graduates through the shared experience — UI graduate Omar Khodor, 22, said it’s a club he would have liked to avoid.

“I’d definitely prefer not to be part of that group,” the environmental science major said, sharing disappointment over the education, experiences and celebrations he lost to the pandemic.

“A lot of students like myself, we’re upset, but we’re not really allowed to be upset given the circumstances,” Khodor said. “You have this sense that something is unfair, that something has been taken from you. But you can’t be mad about it at all.”

‘Should I Dance Across the Stage?’

Life is too short to dwell on what could have been or what should have been — which sort of captures graduate Dawn Hales’ motivation to get an ISU degree.

The 63-year-old Ames grandmother calls herself the “oldest BSN Iowa State grad ever.”

“It’s the truth, because we’re only the second cohort to graduate,” Hales said. “I’ll probably be the oldest for a while.”

ISU began offering a Bachelor of Science in nursing degree in fall 2018 for registered nurses hoping to advance their careers — like Hales, who spent years in nursing before becoming director of nursing at Accura Healthcare, a skilled nursing and rehabilitation center in Ames.

In addition to wanting more education, Hales said, she felt like the “odd man out” in her red-and-gold family — with her husband, three sons and their wives all earning ISU degrees. She earned an associate degree and became a registered nurse with community college training.

“I was director of nursing at different facilities, but I did not have a four-year degree,” she said. “I always wanted to get my BSN.”

So in January 2019, she started full-time toward her three-semester pursuit of a BSN — even as she continued working. And her education took a relevant and important turn when COVID-19 arrived.

“My capstone project was infection control,” she said, noting her focus later sharpened to “infection control and crisis management” — perfect timing to fight the coronavirus, which has hit long-term care facilities particularly hard.

“We were hyper vigilant,” Hales said of her facility, which has yet to report a case of COVID-19. “I think we were probably one of the first facilities that pretty much shut down and started assessing our staff when they would come in.”

Hales said she was eager to walk in her first university graduation and was planning antics for it with her 10-year-old granddaughter.

“We were trying to think, should I dance across the stage?” Hales said. “Or would I grab a walker and act like an old lady going across the stage?

“She was trying to teach me to do this ‘dab’ move,” Hales said. “I said, ‘Honey, I cannot figure that out.’”

In the end, Hales watched the celebration online instead. She did, however, get a personalized license plate that reads, “RN2BSN.”

In From Idaho To Exult ‘In Our Own Way’

Coming from a family-run dairy farm in Jerome, Idaho, EllieMae Millenkamp, 22, is the first in her family to graduate college.

Although music is her passion, Millenkamp long expected to study at an agriculture school — but Colorado State was her original choice.

Then, while visiting family in Iowa during a cousin’s visit to ISU, she fell in love with the Ames campus and recalibrated her academic path.

While at ISU, Millenkamp began writing more songs and performing more online, which led to in-person shows and a local band.

And then, during her junior year, a talent scout reached out and invited her to audition for NBC’s “The Voice.” That went well, and Millenkamp, in the summer before her senior year, moved to Los Angeles and made it onto the show.

She achieved second-round status before being bumped, but the experience offered her lifelong friendships and connections and invigorated her musical pursuits — which have been slowed by COVID-19, with shows canceled in now-idled bars.

When the ISU campus shut down, Millenkamp, like thousands of her peers, went back to Idaho to be with her family.

After graduation, she plans to return and work the family farm until her musical career has a chance to regain momentum.

But she recently returned to Ames for finals. And she and some friends, also in town, plan to celebrate graduation, even if not with an official cap and gown.

“We’ll probably have a bonfire and all hang out,” she said. “We’ll celebrate in our own way.”

Seeking Closure After Abrupt Campus Exits

Most college seniors nearing graduation get to spend their academic hours focusing on their major and interests, wrapping their four or sometimes five years with passion projects and capstone experiences.

That was Omar Khodor’s plan — with lab-based DNA sequencing on tap, along with a geology trip and policy proposal he planned to present to the Iowa Legislature. But all that got canceled — and even some requirements were waived since COVID-19 made them impossible.

“There were still a lot of things to wrap up,” he said. “A lot of things I was looking forward to.”

He’s ending the year with just three classes to finish and “absolutely” would have preferred to have a fuller plate.

But Khodor’s academic career isn’t over. He’s planning to attend law school in the fall at the University of Pennsylvania, where he’ll pursue environmental law. But this spring has diminished his enthusiasm, with the question lingering of whether in-person courses will return to campus soon.

If they don’t, he’s still leaning toward enrolling, in part because of all the work that goes into applying and getting accepted, which he’s already done.

“But online classes are definitely less fulfilling, less motivating. You feel like you learn less,” he said. “So it will kind of be a tossup. There’ll be some trade-offs involved in what I would gain versus what I would be paying for such an expensive endeavor like law school.”

As for missing a traditional college commencement, Khodor said he will, even though he plans to participate in the virtual alternative.

“Before it got canceled, I didn’t think that I was looking forward to it as much as I actually was,” he said.

Not so much for the pomp and circumstance, but for the closure, which none of the seniors got this year. When the universities announced no one would return to campus this semester, students were away on spring break.

They had already experienced their last in-person class, their last after-class drink, their last cram session, their last study group, their last lecture, their last Iowa Memorial Union lunch — and they didn’t even know it.

“So many of us, we won’t have closure, and that can kind of be a difficult thing,” he said.

Comments: (319) 339-3158; vanessa.miller@thegazette.com

Online Celebrations

For a list of commencement times and virtual celebrations, visit:

The University of Iowa’s commencement site at https://commencement.uiowa.edu/

Iowa State University’s commencement site at https://virtual.graduation.iastate.edu/

University of Northern Iowa’s commencement site at https://vgrad.z19.web.core.windows.net/uni/index.html





Coronavirus in Iowa, live updates for May 9: 214 more positive tests reported

11 a.m. Iowa sees 214 more positive tests for coronavirus

The Iowa Department of Public Health on Saturday reported nine more deaths from COVID-19, for a total of 252 since March 8.

An additional 214 people tested positive for the virus, bringing the state’s total to 11,671.

A total of 71,476 Iowans have been tested for COVID-19, the department reported.

With Saturday’s new figures from the Department of Public Health, these are the top 10 counties in terms of total cases:

• Polk — 2,194

• Woodbury — 1,554

• Black Hawk — 1,477

• Linn — 819

• Marshall — 702

• Dallas — 660

• Johnson — 549

• Muscatine — 471

• Tama — 327

• Louisa — 282





Members – Block Permissions

Announcement of Members - Block Permissions, a WordPress plugin for showing/hiding content using the block editor (Gutenberg).





Exhale Version 2.2.0

Release announcement of version 2.2.0 of the Exhale WordPress theme.





Thanks for all the positive support and reception to my...



Thanks for all the positive support and reception to my Lightroom presets so far, especially to those who pulled the trigger and became my first customers! I’d love to hear your feedback once you try them out!
.
Still time to enter the giveaway or to take advantage of the 50% sale! See my last post for full details and the link in my profile. ❤️ (at Toronto, Ontario)





Missing Berlin’s gorgeous buildings again. (at Berlin,...



Missing Berlin’s gorgeous buildings again. (at Berlin, Germany)





And while we’re in the process of missing European...



And while we’re in the process of missing European architecture…

4 more days left to catch my Lightroom presets for 50% off! ⌛️ (at Copenhagen, Denmark)






Web Design as Narrative Architecture

Stories are everywhere. When they don’t exist we make up the narrative — we join the dots. We make cognitive leaps and fill in the bits of a story that are implied or missing. The same goes for websites. We make quick judgements based on a glimpse. Then we delve deeper. The narrative unfolds, or we create one as we browse.

Mark Bernstein penned Beyond Usability and Design: The Narrative Web for A List Apart in 2001. He wrote, ‘the reader’s journey through our site is a narrative experience’. I agreed wholeheartedly: Websites are narrative spaces where stories can be enacted, or emerge.

Henry Jenkins, Director of Comparative Media Studies, and Professor of Literature at MIT, wrote Game Design as Narrative Architecture. He suggested we think of game designers ‘less as storytellers than as narrative architects’. I agree, and I think web designers are narrative architects, too. (Along with the multitude of other roles we assume.) Much of what Henry Jenkins wrote applies to modern web design. In particular, he describes two kinds of narratives in game design that are relevant to us:

Enacted narratives are those where:

[…] the story itself may be structured around the character’s movement through space and the features of the environment may retard or accelerate that plot trajectory.

Sites like Amazon, New Adventures, or your portfolio are enacted narrative spaces: Shops or service brochures that want the audience to move through the site towards a specific set of actions like buying something or initiating contact.

Emergent narratives are those where:

[…] spaces are designed to be rich with narrative potential, enabling the story-constructing activity of players.

Sites like Flickr, Twitter, or Dribbble are emergent narrative spaces: Web applications that encourage their audience to use the tools at their disposal to tell their own story. The audience defines how they want to use the narrative space, often with surprising results.

We often build both kinds of narrative spaces. Right now, my friends and I at Analog are working on Mapalong, a new maps-based app that’s just launched into private beta. At its heart Mapalong is about telling our stories. It’s one big map with a set of tools to view the world, add places, share them, and see the places others share. The aim is to help people tell their stories. We want to use three ideas to help you do that: Space (recording places, and annotating them), data (importing stuff we create elsewhere), and time (plotting our journeys, and recording all the places, people, and memories along the way). We know that people will find novel uses for the tools in Mapalong. In fact, we want them to because it will help us refine and build better tools. We work in an agile way because that’s the only way to design an emergent narrative space. Without realising it, we’ve become architects of a narrative space, and you probably are, too.

Many projects like shops or brochure sites have fixed costs and objectives. They want to guide the audience to a specific set of actions. The site needs to be an enacted narrative space. Ideally, designers would observe behaviour and iterate. Failing that, a healthy dose of empathy can serve. Every site seeks to teach, educate, or inform. So, a bit of knowledge about people’s learning styles can be useful. I once did a course in one-to-one and small-group training with the Chartered Institute of Personnel and Development. It introduced me to Peter Honey and Alan Mumford’s model, which describes four different learning styles that are useful for us to know. I paraphrase:

  1. Activists like learning as they go; getting stuck in and working it out. They enjoy the here and now, and are happy to be dominated by immediate experiences. They are open-minded, not sceptical, and this tends to make them enthusiastic about anything new.
  2. Reflectors like being guided with time to take it all in and perhaps return later. They like to stand back to ponder experiences and observe them from many different perspectives. They collect data, both first hand and from others, and prefer to think about it thoroughly before coming to a conclusion.
  3. Theorists like to understand and make logical sense of things before they leap in. They think problems through in a vertical, step-by-step logical way. They assimilate disparate facts into coherent theories.
  4. Pragmatists like practical applications of ideas, experiments, and results. They like trying out ideas, theories and techniques to see if they work in practice. They positively search out new ideas and take the first opportunity to experiment with applications.

Usually people share two or more of these qualities. The weight of each can vary depending on the context. So how might learning styles manifest themselves in web browsing behaviour?

  • Activists like to explore, learn as they go, and wander the site working it out. They need good in-context navigation to keep exploring. For example, signposts to related information are optimal for activists. They can just keep going, and going, and exploring until sated.
  • Reflectors are patient and thoughtful. They like to ponder, read, reflect, then decide. Guided tours to orientate them in emergent sites can be a great help. Saving shopping baskets for later, and remembering sessions in enacted sites can also help them.
  • Theorists want logic. Documentation. An understanding of what the site is, and what they might get from it. Clear, detailed information helps a theorist, whatever the space they’re in.
  • Pragmatists get stuck in like activists, but evaluate quickly, and test their assumptions. They are quick, and can be helped by uncluttered concise information, and contextual, logical tools.

An understanding of interactive narrative types and a bit of knowledge about learning styles can be useful concepts for us to bear in mind. I also think they warrant inclusion as part of an articulate designer’s language of web design. If Henry Jenkins is right about games designers, I think he could also be right about web designers: we are narrative architects, designing spaces where stories are told.

The original version of this article first appeared as ‘Jack A Nory’ alongside other, infinitely more excellent articles, in the New Adventures paper of January 2011. It is reproduced with the kind permission of the irrepressible Simon Collison. For a short time, the paper is still available as a PDF!

—∞—





Design Festival, The Setup, and Upcoming Posts

Wow, this has been a busy period. I’m just back from the Ampersand web typography conference in Brighton, and having a catch-up day in Mild Bunch HQ. Just before that I’ve been working flat out. First on Mapalong which was a grass-roots sponsor of Ampersand, and is going great guns. Then on an article for The Manual which is being published soon, and on 8 Faces #3 which is in progress right now. Not to mention the new talk for Ampersand which left me scratching my head and wondering if I was making any sense at all. More on that in a subsequent post.

In the meantime two previous events deserve a mention. (This is me starting more of a journalistic blog. :)

First of all, an interview with Simon Pascal Klien, the typographer and designer who’s curating the Design Festival podcast at the moment. We talked about all things web typography. Pascal cheekily left in a bit of noise from me in the prelude, and that rant pretty much sets the tone for the rest of the conversation. Thanks for your time, Pascal! If anyone reading this would care to listen in, the podcast can be downloaded or played from here:

Secondly, Daniel Bogan of The Setup sent me a few questions about my own tools. My answers are pretty clipped because of time, but you may find it interesting to compare this designer’s setup with your own:

I should note that in the meantime I’ve started writing with Writer, and discovered the great joy of keeping a journal and notes with a Midori Traveler’s Notebook. The latter is part of an ongoing search of mine to find Tools for Life. More on that, too, at some point. Here’s my current list of topics I want to write about shortly:

  • Ampersand, the aftermath
  • Marrying a FujiFilm X100
  • No-www
  • Tools for life
  • Paper versus pixels

There, I’ve written it!





We, Who Are Web Designers

In 2003, my wife Lowri and I went to a christening party. We were friends of the hosts but we knew almost no-one else there. Sitting next to me was a thirty-something woman and her husband, both dressed in the corporate ‘smart casual’ uniform: Jersey, knitwear, and ready-faded jeans for her, formal shoes and tucked-in formal shirt for him (plus the jeans of course; that’s the casual bit). Both appeared polite, neutral, and neat in every respect.

I smiled and said hello, and asked how they knew our hosts. The conversation stalled pretty quickly the way all conversations will when only one participant is engaged. I persevered, asked about their children who they mentioned, trying to be a good friend to our hosts by being friendly to other guests. It must have prompted her to reciprocate. With reluctant interest she asked the default question: ‘What do you do?’ I paused, uncertain for a second. ‘I’m a web designer’ I managed after a bit of nervous confusion at what exactly it was that I did. Her face managed to drop even as she smiled condescendingly. ‘Oh. White backgrounds!’ she replied with a mixture of scorn and delight. I paused. ‘Much of the time’, I nodded with an attempt at a self-deprecating smile, trying to maintain the camaraderie of the occasion. ‘What do you do?’ I asked, curious to see where her dismissal was coming from. ‘I’m the creative director for … agency’ she said smugly, overbearingly confident in the knowledge that she had a trump card, and had played it. The conversation was over.

I’d like to say her reaction didn’t matter to me, but it did. It stung to be regarded so disdainfully by someone who I would naturally have considered a colleague. I thought to try and explain. To mention how I started in print, too. To find out why she had such little respect for web design, but that was me wanting to be understood. I already knew why. Anything I said would sound defensive. She may have been rude, but at least she was honest.

I am a web designer. I neither concentrate on the party venue, food, music, guest list, or entertainment, but on it all. On the feeling people enter with and walk away remembering. That’s my job. It’s probably yours too.

I’m self-actualised, without the stamp of approval from any guild, curriculum authority, or academic institution. I’m web taught. Colleague taught. Empirically taught. Tempered by over fifteen years of failed experiments on late nights with misbehaving browsers. I learnt how to create venues because none existed. I learnt what music to play for the people I wanted at the event, and how to keep them entertained when they arrived. I empathised, failed, re-empathised, and did it again. I make sites that work. That’s my certificate. That’s my validation.

I try, just like you, to imbue my practice with an abiding sense of responsibility for the universality of the Web as Tim Berners-Lee described it. After all, it’s that very universality that’s allowed our profession and the Web to thrive. From Mosaic shipping with <img> tag support in 1993, to the founding of the W3C in 1994, to the Web Standards Project in 1998, and the CSS Zen Garden in 2003, those who care have been instrumental in shaping the Web. Web designers included. In more recent times I look to the web type revolution, driven and curated by web designers, developers, and the typography community. Again, we’re teaching ourselves. The venues are open to all, and getting more amazing by the day.

Apart from the sites we’ve built, all the best peripheral resources that support our work are made by us. We’ve contributed vast amounts of code to our collective toolkit. We’ve created inspirational conferences like Brooklyn Beta, New Adventures, Web Directions, Build, An Event Apart, dConstruct, and Webstock. As a group, we’ve produced, written-for, and supported forward-thinking magazines like A List Apart, 8 Faces, Smashing Mag, and The Manual. We’ve written the books that distill our knowledge either independently or with publishers from our own community like Five Simple Steps and A Book Apart. We’ve created services and tools like jQuery, Fontdeck, Typekit, Hashgrid, Teuxdeux, and Firebug. That’s just a sample. There’s so many I haven’t mentioned. We did these things. What an extraordinary industry.

I know I flushed with anger and embarrassment that day at the christening party. Afterwards, I started to look a little deeper into what I do. I started to ask what exactly it means to be a web designer. I started to realise how extraordinary our community is. How extraordinary this profession is that we’ve created. How good the work is that we do. How delightful it is when it does work; for audiences, clients, and us. How fantastic it is that I help build the Web. Long may that feeling last. May it never go away. There’s so much still to learn, create, and make. This is our party. Hi, I’m Jon; my friends and I are making Mapalong, and I’m a web designer.





Dynamic Range Processing in Audio Post Production

If listeners find themselves using the volume up and down buttons a lot, level differences within your podcast or audio file are too big.
In this article, we discuss why audio dynamic range processing (or leveling) is more important than loudness normalization, why it depends on factors like the listening environment and the individual character of the content, and why the loudness range descriptor (LRA) is only reliable for speech programs.

Photo by Alexey Ruban.

Why loudness normalization is not enough

Everybody who has lived in an apartment building knows the problem: you want to enjoy a movie late at night, but you're constantly on edge - not only because of the thrilling story, but because your index finger is hovering over the volume-down button of your remote. The next loud sound effect is going to come sooner rather than later, and you want to avoid waking up your neighbors with gunshot sounds blasting from your TV.

In our previous post, we talked about the overall loudness of a production. While that's certainly important to keep in mind, the loudness target is only an average value, ignoring how much the loudness varies within a production. The loudness target of your movie might be in the ideal range, yet the level differences between a gunshot and someone whispering can still be enormous - making you turn the volume down for the former and up for the latter.

While the average loudness might be perfect, level differences can lead to an unpleasant listening experience.

Of course, this doesn't apply to movies alone. The image above shows a podcast or radio production. The loud section is music, the very quiet section just breathing, and the remaining sections are different voices.

To be clear, we're not saying that the above example is problematic per se. There are many situations where a big difference in levels - a high dynamic range - is justified: for instance, in a movie theater, optimized for listening and without any outside noise, or in classical music.
Also, if the dynamic range is too small, listening can be tiring.

But if you watch the same movie in an outdoor screening in the summer on a beach next to the crashing waves or in the middle of a noisy city, it can be tricky to hear the softer parts.
Spoken word usually has a smaller dynamic range, and if you produce your podcast for a target audience of train or car commuters, the dynamic range should be even smaller, adjusting for the listening situation.

Therefore, hitting the loudness target has less impact on the listening experience than level differences (dynamic range) within one file!
What makes a suitable dynamic range does not only depend on the listening environment, but also on the nature of the content itself. If the dynamic range is too small, the audio can be tiring to listen to, whereas more variability in levels can make a program more interesting, but might not work in all environments, such as a noisy car.

Dynamic range experiment in a car

Wolfgang Rein, an audio technician at SWR, a public broadcaster in Germany, ran an experiment to test how drivers react to programs with different dynamic ranges. They monitored the level at which drivers set the car stereo depending on speed (and thus noise level) and audio dynamic range.
While the results are preliminary, it seems like drivers set the volume as low as possible so that they can still understand the content, but don't get distracted by loud sounds.

As drivers adjust the volume to the loudest voice in a program, they won't understand quieter speakers in content with a high dynamic range anymore. To some degree and for short periods of time, they can compensate by focusing more on the radio program, but over time that's tiring. Therefore, if the loudness varies too much, drivers tend to switch to another program rather than adjusting the volume.
Similar results have been found in a study conducted by NPR Labs and Towson University.

On the other hand, the perception was different in pure music programs. When drivers set the volume according to louder parts, they weren't able to hear softer segments or the beginning of a song very well. But that did not matter to them as much and didn't make them want to turn up the volume or switch the program.

Listener's reaction in response to frequent loudness changes. (from John Kean, Eli Johnson, Dr. Ellyn Sheffield: Study of Audio Loudness Range for Consumers in Various Listening Modes and Ambient Noise Levels)

Loudness comfort zone

The reaction of drivers to variable loudness hints at something that BBC sound engineer Mike Thornton calls the loudness comfort zone.

Tests (...) have shown that if the short-term loudness stays within the "comfort zone" then the consumer doesn’t feel the need to reach for the remote control to adjust the volume.
In a blog post, he highlights how the series Blue Planet 2 and Planet Earth 2 might not always have been the easiest to listen to. The graph below shows an excerpt with very loud music, followed by commentary just at the bottom of the green comfort zone. Thornton writes: "with the volume set at a level that was comfortable when the music was playing we couldn’t always hear the excellent commentary from Sir David Attenborough and had to resort to turning on the subtitles to be sure we knew what Sir David was saying!"

Planet Earth 2 Loudness Plot Excerpt. Colored green: comfort zone of +3 to -5LU around the loudness target. (from Mike Thornton: BBC Blue Planet 2 Latest Show In Firing Line For Sound Issues - Are They Right?)

As already mentioned above, a good mix considers the maximum and minimum possible loudness in the target listening environment.
In a movie theater the loudness comfort zone is big (loudness can vary a lot), and loud music is part of the fun, while quiet scenes work just as well. The opposite was true in the aforementioned experiment with drivers, where the loudness comfort zone is much smaller and quiet voices are difficult to understand.

Hence, the loudness comfort zone determines how much dynamic range an audio signal can use in a specific listening environment.

How to measure dynamic range: LRA

When producing audio for various environments, it would be great to have a target value for dynamic range (the difference between the quietest and loudest parts of an audio signal) as well. Then you could set a dynamic range target, similar to a loudness target.

Theoretically, the maximum possible dynamic range of a production is defined by the bit-depth of the audio format. A 16-bit recording can have a dynamic range of 96 dB; for 24-bit, it's 144 dB - which is well above the approx. 120 dB the human ear can handle. However, most of those bits are typically being used to get to a reasonable base volume. Picture a glass of water: you want it to be almost full, with some headroom so that it doesn't spill when there's a sudden movement, i.e. a bigger amplitude wave at the top.
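
As a back-of-the-envelope check of those numbers: each bit of resolution adds roughly 6 dB of theoretical dynamic range. Here is a minimal Python sketch (the helper name is ours, purely for illustration):

import math

# Theoretical dynamic range of an N-bit recording: the ratio between the
# largest and smallest representable amplitudes, expressed in dB.
def max_dynamic_range_db(bits: int) -> float:
    return 20 * math.log10(2 ** bits)  # approximately bits * 6.02 dB

print(round(max_dynamic_range_db(16), 1))  # 96.3 dB
print(round(max_dynamic_range_db(24), 1))  # 144.5 dB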

Determining the dynamic range of a production is easier said than done, though. It depends on which signals are included in the measurement: for example, if something like background music or breathing should be considered at all.
The currently preferred method for broadcasting is called Loudness Range, LRA. It is measured in Loudness Units (LU), and takes into account everything between the 10th and the 95th percentile of a loudness distribution, after an additional gating method. In other words, the loudest 5% and quietest 10% of the audio signal are ignored. This way, quiet breathing or an occasional loud sound effect won't affect the measurement.
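
To make the percentile idea concrete, here is a deliberately simplified Python sketch. Note that it omits the gating steps of the real EBU measurement, so it illustrates the idea rather than implementing a compliant meter:

import numpy as np

# Simplified loudness range: the spread between the 95th and 10th
# percentile of short-term loudness values (in LUFS). The real LRA
# measurement additionally applies absolute and relative gating.
def simplified_lra(short_term_lufs):
    values = np.asarray(short_term_lufs, dtype=float)
    return np.percentile(values, 95) - np.percentile(values, 10)

# Made-up values: an evenly spoken talk show vs. a more dynamic feature.
print(simplified_lra([-24.0, -23.5, -23.0, -22.8, -22.5]))  # small LRA
print(simplified_lra([-35.0, -30.0, -24.0, -18.0, -12.0]))  # large LRA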

Loudness distribution and LRA for the film 'The Matrix'. Figure from EBU Tech Doc 3343 (p.13).

However, the main difficulty remains deciding which signals should be included in the loudness range measurement and which should be gated. That choice is often subjective and hard to capture with a purely statistical method like LRA.

Where LRA falls short

Therefore, only pure speech programs give reliable LRA values that are comparable!
For instance, a typical LRA for news programs is 3 LU; for talks and discussions 5 LU is common. LRA values for features, radio dramas, movies or music very much depend on the individual character and might be in the range between 5 and 25 LU.

To further illustrate this, here are some typical LRA values, according to a paper by Thomas Lund (table 2):

Program                                              Loudness Range (LU)
Matrix, full movie                                   25.0
NBC Interstitials, Jan. 2008, all together (3:30)     9.4
Friends Episode 16                                    6.6
Speak Ref., Male, German, SQUAM Trk 54                6.2
Speak Ref., Female, French, SQUAM Trk 51              4.8
Speak Ref., Male, English, Sound Check                3.3
Wish You Were Here, Pink Floyd                       22.1
Gilgamesh, Battle of Titans, Osaka Symph.            19.7
Don’t Cry For Me Arg., Sinead O’Connor               13.7
Beethoven Son in F, Op17, Kliegel & Tichman          12.0
Rock’n Roll Train, AC/DC                              6.0
I.G.Y., Donald Fagen                                  3.6

LRA values of music are very unpredictable as well.
For instance, Tom Frampton measured the LRA of songs in multiple genres, and the differences within each genre are quite big. The ten pop songs that he analyzed varied in LRA between 3.7 and 12 LU, country songs between 3.6 and 14.9 LU. In the Electronic genre the individual LRAs were between 3.7 and 15.2 LU. Please see the tables at the bottom of his blog post for more details.

We at Auphonic also tried to base our Adaptive Leveler parameters on the LRA descriptor. Although it worked, it turned out to be very difficult to set a loudness range target for diverse audio content that mixes speech, background sounds, music parts, etc. The results were not predictable and it was hard to find good target values. Therefore we developed our own algorithm to measure the dynamic range of audio signals.

In conclusion, LRA comparisons are only useful for spoken-word productions, so the LRA value is not applicable as a general dynamic range target. The more complex a production gets, the more difficult it is to make any judgment based on the LRA.
This is because the definition of LRA is purely statistical: there is no smart measurement using classifiers that distinguish between music, speech, quiet breathing, background noises and other types of audio. One would need a more intelligent algorithm (as we use in our Adaptive Leveler) that knows which audio segments should be included in and excluded from the measurement.

From theory to application: tools

Loudness and dynamic range are clearly a complicated topic. Luckily, there are tools that can help. To keep short-term loudness in range, a compressor can help control sudden changes in loudness, such as p-pops or hard consonants like t or k. To achieve a good mid-term loudness, i.e. a signal that doesn't leave the comfort zone too often, a leveler is a good option; alternatively, ride a fader or adjust volume curves manually. And to make sure that separate productions sound consistent, loudness normalization is the way to go. We have covered all of this in depth before.
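
As a reminder of how simple the last of these is at its core, here is a minimal sketch: loudness normalization applies one constant gain so that the measured integrated loudness lands on the target (the -16 LUFS default below is just an example value):

# Loudness normalization in a nutshell: one static gain for the whole file.
def normalization_gain_db(measured_lufs: float, target_lufs: float = -16.0) -> float:
    return target_lufs - measured_lufs

print(normalization_gain_db(-19.5))  # 3.5 -> boost the whole file by 3.5 dB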

Looking at the audio from above again, with an adaptive leveler applied it looks like this:

Leveler example. Output at the top, input with leveler envelope at the bottom.

Now, the voices are evened out and the music is at a comfortable level, while the breathing has not been touched at all.
We recently extended Auphonic's adaptive leveler, so that it is now possible to customize the dynamic range - please see adaptive leveler customization and advanced multitrack audio algorithms.
If you wanted to increase the loudness comfort zone (or dynamic range) of the standard preset by 10 dB (or LU), for example, the envelope would look like this:

Leveler with higher dynamic range, only touching sections with extremely low or extremely high loudness to fit into a specific loudness comfort zone.

When a production is done, our adaptive leveler uses classifiers to also calculate the integrated loudness and loudness range of dialog and music sections separately. This way it is possible to compare just the dialog LRA and loudness of complex productions.

Assessing the LRA and loudness of dialog and music separately.

Conclusion

Getting audio dynamics right is not easy. Yet, it is an important thing to keep in mind, because focusing on loudness normalization alone is not enough. In fact, hitting the loudness target often has less impact on the listening experience than level differences, i.e. audio dynamics.

If the dynamic range is too small, the audio can be tiring to listen to, whereas a bigger dynamic range can make a program more interesting, but might not work in loud environments, such as a noisy train.
Therefore, a good mix adapts the audio dynamic range according to the target listening environment (different loudness comfort zones in cinema, at home, in a car) and according to the nature of the content (radio feature, movie, podcast, music, etc.).

Furthermore, because the definition of the loudness range / LRA is purely statistical, only speech programs give reliable LRA values that are comparable.
More "intelligent" algorithms are in development, which use classifiers to decide which signals should be included and excluded from the dynamic range measurement.

If you understand German, take a look at our presentation about audio dynamic processing in podcasts for further information:







si

Markdown Comes Alive! Part 1, Basic Editor

In my last post, I covered what LiveView is at a high level. In this series, we’re going to dive deeper and implement a LiveView powered Markdown editor called Frampton. This series assumes you have some familiarity with Phoenix and Elixir, including having them set up locally. Check out Elizabeth’s three-part series on getting started with Phoenix for a refresher.

This series has a companion repository published on GitHub. Get started by cloning it down and switching to the starter branch. You can see the completed application on master. Our goal today is to make a Markdown editor, which allows a user to enter Markdown text on a page and see it rendered as HTML next to it in real-time. We’ll make use of LiveView for the interaction and the Earmark package for rendering Markdown. The starter branch provides some styles and installs LiveView.

Rendering Markdown

Let’s set aside the LiveView portion and start with our data structures and the functions that operate on them. To begin, a Post will have a body, which holds the rendered HTML string, and a title. A string of markdown can be turned into HTML by calling Post.render(post, markdown). I think that just about covers it!

First, let’s define our struct in lib/frampton/post.ex:

defmodule Frampton.Post do
  defstruct body: "", title: ""

  def render(%__MODULE__{} = post, markdown) do
    # Fill me in!
  end
end

Now the failing test (in test/frampton/post_test.exs):

describe "render/2" do
  test "returns our post with the body set" do
    markdown = "# Hello world!"                                                                                                                 
    assert Post.render(%Post{}, markdown) == {:ok, %Post{body: "<h1>Hello World</h1>
"}}
  end
end

Our render method will just be a wrapper around Earmark.as_html!/2 that puts the result into the body of the post. Add {:earmark, "~> 1.4.3"} to your deps in mix.exs, run mix deps.get, and fill out the render function:

def render(%__MODULE__{} = post, markdown) do
  html = Earmark.as_html!(markdown)
  {:ok, Map.put(post, :body, html)}
end

Our test should now pass, and we can render posts! [Note: we’re using the as_html! method, which prints error messages instead of passing them back to the user. A smarter version of this would handle any errors and show them to the user. I leave that as an exercise for the reader…] Time to play around with this in an IEx prompt (run iex -S mix in your terminal):

iex(1)> alias Frampton.Post
Frampton.Post
iex(2)> post = %Post{}
%Frampton.Post{body: "", title: ""}
iex(3)> {:ok, updated_post} = Post.render(post, "# Hello world!")
{:ok, %Frampton.Post{body: "<h1>Hello world!</h1>
", title: ""}}
iex(4)> updated_post
%Frampton.Post{body: "<h1>Hello world!</h1>
", title: ""}

Great! That’s exactly what we’d expect. You can find the final code for this in the render_post branch.

LiveView Editor

Now for the fun part: Editing this live!

First, we’ll need a route for the editor to live at: /editor sounds good to me. LiveViews can be rendered from a controller, or directly in the router. We don’t have any initial state, so let's go straight from a router.

First, let's put up a minimal test. In test/frampton_web/live/editor_live_test.exs:

defmodule FramptonWeb.EditorLiveTest do
  use FramptonWeb.ConnCase
  import Phoenix.LiveViewTest

  test "the editor renders" do
    conn = get(build_conn(), "/editor")
    assert html_response(conn, 200) =~ ~s(data-test="editor")
  end
end

This test doesn’t do much yet, but notice that it isn’t live view specific. Our first render is just the same as any other controller test we’d write. The page’s content is there right from the beginning, without the need to parse JavaScript or make API calls back to the server. Nice.

To make that test pass, add a route to lib/frampton_web/router.ex. First, we import the LiveView code, then we render our Editor:

import Phoenix.LiveView.Router
# … Code skipped ...
# Inside of `scope "/"`:
live "/editor", EditorLive

Now place a minimal EditorLive module, in lib/frampton_web/live/editor_live.ex:

defmodule FramptonWeb.EditorLive do
  use Phoenix.LiveView

  def render(assigns) do
    ~L"""
      <div data-test="editor">
        <h1>Hello world!</h1>
      </div>
      """
  end

  def mount(_params, _session, socket) do
    {:ok, socket}
  end
end

And we have a passing test suite! The ~L sigil designates that LiveView should track changes to the content inside. We could keep all of our markup in this render/1 method, but let’s break it out into its own template for demonstration purposes.

Move the contents of render into lib/frampton_web/templates/editor/show.html.leex, and replace EditorLive.render/1 with this one liner: def render(assigns), do: FramptonWeb.EditorView.render("show.html", assigns). And finally, make an EditorView module in lib/frampton_web/views/editor_view.ex:

defmodule FramptonWeb.EditorView do
  use FramptonWeb, :view
  import Phoenix.LiveView
end

Our test should now be passing, and we’ve got a nicely separated out template, view and “live” server. We can keep markup in the template, helper functions in the view, and reactive code on the server. Now let’s move forward to actually render some posts!

Handling User Input

We’ve got four tasks to accomplish before we are done:

  1. Take markdown input from the textarea
  2. Send that input to the LiveView server
  3. Turn that raw markdown into HTML
  4. Return the rendered HTML to the page

Event binding

To start with, we need to annotate our textarea with an event binding. This tells the liveview.js framework to forward DOM events to the server, using our liveview channel. Open up lib/frampton_web/templates/editor/show.html.leex and annotate our textarea:

<textarea phx-keyup="render_post"></textarea>

This names the event (render_post) and sends it on each keyup. Let’s crack open our web inspector and look at the web socket traffic. Using Chrome, open the developer tools, navigate to the network tab and click WS. In development you’ll see two socket connections: one is Phoenix LiveReload, which polls your filesystem and reloads pages appropriately. The second one is our LiveView connection. If you let it sit for a while, you’ll see that it's emitting a “heartbeat” call. If your server is running, you’ll see that it responds with an “ok” message. This lets LiveView clients know when they've lost connection to the server and respond appropriately.

Now, type some text and watch as it sends down each keystroke. However, you’ll also notice that the server responds with a “phx_error” message and wipes out our entered text. That's because our server doesn’t know how to handle the event yet and is throwing an error. Let's fix that next.

Event handling

We’ll catch the event in our EditorLive module. The LiveView behavior defines a handle_event/3 callback that we need to implement. Open up lib/frampton_web/live/editor_live.ex and key in a basic implementation that lets us catch events:

def handle_event("render_post", params, socket) do
  IO.inspect(params)

  {:noreply, socket}
end

The first argument is the name we gave to our event in the template, the second is the data from that event, and finally the socket we’re currently talking through. Give it a try, typing in a few characters. Look at your running server and you should see a stream of events that look something like this:

There’s our keystrokes! Next, let’s pull out that value and use it to render HTML.

Rendering Markdown

Let's adjust our handle_event to pattern match out the value of the textarea:

def handle_event("render_post", %{"value" => raw}, socket) do

Now that we’ve got the raw markdown string, turning it into HTML is easy thanks to the work we did earlier in our Post module. Fill out the body of the function like this:

{:ok, post} = Post.render(%Post{}, raw)
IO.inspect(post)

If you type into the textarea you should see output that looks something like this:

Perfect! Lastly, it’s time to send that rendered HTML back to the page.

Returning HTML to the page

In a LiveView template, we can identify bits of dynamic data that will change over time. When they change, LiveView will compare what has changed and send over a diff. In our case, the dynamic content is the post body.

Open up show.html.leex again and modify it like so:

<div class="rendered-output">
  <%= @post.body %>
</div>

Refresh the page and see:

Whoops!

The @post variable will only be available after we put it into the socket’s assigns. Let’s initialize it with a blank post. Open editor_live.ex and modify our mount/3 function:

def mount(_params, _session, socket) do
  post = %Post{}
  {:ok, assign(socket, post: post)}
end

In the future, we could retrieve this from some kind of storage, but for now, let's just create a new one each time the page refreshes. Finally, we need to update the Post struct with user input. Update our event handler like this:

def handle_event("render_post", %{"value" => raw}, %{assigns: %{post: post}} = socket) do
  {:ok, post} = Post.render(post, raw)
  {:noreply, assign(socket, post: post)}
end

Let's load up http://localhost:4000/editor and see it in action.

Nope, that's not quite right! Phoenix won’t render this as HTML because it’s unsafe user input. We can get around this (very good and useful) security feature by wrapping our content in a raw/1 call. We don’t have a database and user processes are isolated from each other by Elixir. The worst thing a malicious user could do would be crash their own session, which doesn’t bother me one bit.
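
Sticking with the template from above, that is a one-line change in show.html.leex:

<div class="rendered-output">
  <%= raw @post.body %>
</div>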

Check the edit_posts branch for the final version.

Conclusion

That’s a good place to stop for today. We’ve accomplished a lot! We’ve got a dynamically rendering editor that takes user input, processes it and updates the page. And we haven’t written any JavaScript, which means we don’t have to maintain or update any JavaScript. Our server code is built on the rock-solid foundation of the BEAM virtual machine, giving us a great deal of confidence in its reliability and resilience.

In the next post, we’ll tackle making a shared editor, allowing multiple users to edit the same post. This project will highlight Elixir’s concurrency capabilities and demonstrate how LiveView builds on them to enable some incredible user experiences.



  • Code
  • Back-end Engineering

si

TrailBuddy: Using AI to Create a Predictive Trail Conditions App

Viget is full of outdoor enthusiasts and, of course, technologists. For this year's Pointless Weekend, we brought these passions together to build TrailBuddy. This app aims to solve that eternal question: Is my favorite trail dry so I can go hike/run/ride?

While getting muddy might rekindle fond childhood memories for some, exposing your gear to the elements isn’t great – it’s bad for your equipment and can cause long-term, and potentially expensive, damage to the trail.

There are some trail apps out there, but we wanted one that would focus on current conditions. Currently, our favorite trail apps, like mtbproject.com, trailrunproject.com, and hikingproject.com -- all owned by REI -- rely on user-reported conditions. While this can be effective, the reports are frequently unreliable, as condition reports can become outdated in just a few days.

Our goal was to solve this problem by building an app that brought together location, soil type, and weather history data to create on-demand condition predictions for any trail in the US.

We built an initial version of TrailBuddy by tapping into several readily-available APIs, then running the combined data through a machine learning algorithm. (Oh, and also by bringing together a bunch of smart and motivated people and combining them with pizza and some of the magic that is our Pointless Weekends. We'll share the other Pointless Project, Scurry, with you soon.)

The quest for data.

We knew from the start this app would require data from a number of sources. As previously mentioned, we used REI’s APIs (i.e. https://www.hikingproject.com/data) as the source for basic trail information. We used each trail’s latitude and longitude coordinates as well as its elevation to query weather and soil type. We also found data points such as a trail’s total distance to be relevant to our app users and decided to include that on the front-end, too. Since we wanted to go beyond relying solely on user-reported metrics, which is how REI’s current MTB project works, we came up with a list of factors that could affect the trail for that day.

First on that list was weather.

We not only considered the impacts of the current forecast, but we also looked at the previous day’s forecast. For example, it’s safe to assume that if it’s currently raining or had been raining over the last several days, it would likely lead to muddy and unfavorable conditions for that trail. We utilized the DarkSky API (https://darksky.net/dev) to get the weather forecasts for that day, as well as the records for previous days. This included expected information, like temperature and precipitation chance. It also included some interesting data points that we realized may be factors, like precipitation intensity, cloud cover, and UV index. 

But weather alone can’t predict how muddy or dry a trail will be. To determine that, we also wanted to use soil data to help predict how well a trail’s unique soil composition recovers after precipitation. Similar amounts of rain on trails of very different soil types could lead to vastly different trail conditions. A more clay-based soil holds water much longer, and is therefore much more unfavorable, than loamy soil. Finding a reliable source for soil type and soil drainage proved incredibly difficult. After many hours, we finally found a source through the USDA that we could use. As a side note, the USDA keeps track of lots of data points on soil information that’s actually pretty interesting! We can’t say we’re soil experts, but we felt like we got pretty close.

We used Whimsical to build our initial wireframes.

Putting our design hats on.

From the very first pitch for this app, TrailBuddy’s main differentiator from peer trail resources has been its ability to surface real-time information reliably and simply. However complicated the technology needed to collect and interpret the information, the front-end app design needed to be clean and unencumbered.

We thought about how users would naturally look for information when setting out to find a trail and what factors they’d think about when doing so. We posed questions like:

  • How easy or difficult a trail am I looking for?
  • How long is this trail?
  • What does the trail look like?
  • How far away is the trail in relation to my location?
  • What activity do I need a trail for?
  • Is this a trail I’d want to come back to in the future?

By putting ourselves in our users’ shoes, we quickly identified key features TrailBuddy needed in order to be relevant and useful. First, we needed filtering, so users could filter by difficulty and distance to narrow down the results to fit their activity level. Next, we needed a way to look up trails by activity type—mountain biking, hiking, and running are all types of activities REI’s MTB API tracks already, so those made sense as a starting point. And lastly, we needed a way for the app to find trails based on your location, or at the very least the ability to find a trail within a certain distance of your current location.

We used Figma to design, prototype, and gather feedback on TrailBuddy.

Using machine learning to predict trail conditions.

As stated earlier, none of us are actual soil or data scientists. So, in order to achieve the real-time conditions reporting TrailBuddy promised, we decided to leverage machine learning to make predictions for us. Digging into the utility of machine learning was a first for all of us on this team. Luckily, there was an excellent tutorial that laid out the basics of building an ML model in Python. Provided a CSV file with inputs in the left columns and the desired output on the right, the script we generated was able to test out multiple different model strategies and report the effectiveness of each at predicting results, shown below.

We assembled all of the historical weather and soil data we could find for a given latitude/longitude coordinate, compiled a 1000 × 100 CSV, ran it through the Python evaluator, and found that the CART and SVM models consistently outranked the others in terms of predicting trail status. In other words, we found a working model through which to run our data and get (hopefully) reliable predictions. The next step was to figure out which data fields were actually critical in predicting the trail status. The more we could refine our data set, the faster and smarter our predictive model could become.
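
For the curious, here is a hedged sketch of what that evaluation step can look like, in the spirit of the tutorial we followed; the file name and column layout are made up, and it assumes scikit-learn and pandas are installed:

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Inputs in the left columns, desired output (trail status) on the right.
data = pd.read_csv("trail_data.csv")  # hypothetical file name
X, y = data.iloc[:, :-1], data.iloc[:, -1]

# Score a few candidate models on the same data and compare accuracy.
for name, model in [("CART", DecisionTreeClassifier()),
                    ("SVM", SVC(gamma="auto")),
                    ("LR", LogisticRegression(max_iter=1000))]:
    scores = cross_val_score(model, X, y, cv=10)
    print(f"{name}: {scores.mean():.3f} (+/- {scores.std():.3f})")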

We pulled in some Ruby code to take the original (and quite massive) CSV and output smaller versions to test with. Again, we’re no data scientists here, but we were able to cull out a good majority of the data and still get a model that performed at 95% accuracy.

With our trained model in hand, we could serialize it into a model.pkl file (pkl stands for “pickle”, as in we’ve “pickled” the model), move that file into our Rails app along with a Python script to deserialize it, pass in a dynamic set of data, and generate real-time predictions. At the end of the day, our model has a propensity to predict fantastic trail conditions (about 99% of the time in fact…). Just one of those optimistic machine learning models, we guess.
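
The serialize/deserialize plumbing itself is small. Here is a minimal sketch of the approach; the function names are ours, purely for illustration:

import pickle

# Training side: persist the trained model to disk.
def save_model(model, path="model.pkl"):
    with open(path, "wb") as f:
        pickle.dump(model, f)  # "pickle" the model

# Serving side: what the script called from our Rails app does.
def predict(features, path="model.pkl"):
    with open(path, "rb") as f:
        model = pickle.load(f)  # un-pickle the trained model
    return model.predict([features])  # e.g. ["dry"] or ["muddy"]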

Where we go from here.

It was clear after two days that our team still wanted to do more. As a first refinement, we’d love to work more with our data set and ML model. Something quite surprising during the weekend was that we could remove all but two days’ worth of weather data, and all of the soil data we worked so hard to dig up, and still hit 95% accuracy. Which … doesn’t make a ton of sense. Perhaps the data we chose just isn’t a great empirical predictor of trail status. While these are questions too big to solve in just a single weekend, we’d love to spend more time digging into this in a future iteration.



  • News & Culture

si

Pursuing A Professional Certification In Scrum

Professional certifications have become increasingly popular in this age of career switchers and the freelance gig economy. A certification can be a useful way to advance your skill set quickly or make your resume stand out, which can be especially important for those trying to break into a new industry or attract business while self-employed. Whatever your reason may be for pursuing a professional certificate, there is one question only you can answer for yourself: is it worth it?

Finding first-hand experiences from professionals with similar career goals and passions was the most helpful research I used to answer that question for myself. So, here’s mine; why I decided to get Scrum certified, how I evaluated my options, and if it was really worth it.

A shift in mindset

My background originates in brand strategy, where it’s typical for work to follow a predictable order, each step informing the next. This made linear techniques helpful and easy to implement: waterfall timelines, completing one phase of work in its entirety before moving onto the next, and documenting granular tasks weeks in advance. When I made the move to more digitally focused work, tasks followed a much looser set of ‘typical’ milestones. While the general outline remained the same (strategy, design, development, launch), there was a lot more overlap in how tasks informed each other, and they would keep informing and re-informing each other as an iterative workflow encourages.

Trying to fit a very fluid process into my very stiff, linear approach to project planning didn’t work so well. I didn’t have the right strategies to manage risks in a productive way without feeling like the whole project was off track. With the habit of accounting for granular details all the time, I struggled to lean on others to help define what we should work on and when, and to be okay if that changed once, or twice, or three times. Everything I learned about the process of product development came from learning on the job and making a ton of mistakes—and I knew I wanted to get better.

Photo by Christin Hume on Unsplash

I was fortunate enough to work with a group of developers who were looking to make a change, too. Being ‘agile’ enthusiasts, they were desperately looking for ways to infuse our approach to product work with agile-minded principles (the broad definition of ‘agile’ comes from ‘The Agile Manifesto’, which has influenced frameworks for organizing people and information, often applied in product development). This applied not only to how I worked with them, but to how they worked with each other, and to the way we all onboarded clients to these new expectations. This was a huge eye-opener for me. Soon enough, I started applying these agile strategies to my day-to-day— running stand-ups, setting up backlogs, and reorganizing the way I thought about work output. It’s from this experience that I decided it might be worth learning these principles more formally.

The choice to get certified

There is a lot of literature out there about agile methodologies and a lot to be learned from casual research. This benefitted me for a while until I started to work on more complicated projects, or projects with more ambitious feature requests. My decision to ultimately pursue a formal agile certification really came down to three things:

  1. An increased use of agile methods across my team. Within my day-to-day I would encounter more team members who were familiar with these tactics and wanted to use them to structure the projects they worked on.
  2. The need for a clear definition of what processes to follow. I needed to grasp a real understanding of how to implement agile processes and stay consistent with using them to be an effective champion of these principles.
  3. Being able to diversify my experience. Finding ways to differentiate my resume from others with similar experience would be an added benefit to getting a certification. If nothing else, it would demonstrate that I’m curious-minded and proactive about my career.

To achieve these things, I gravitated toward a more foundational education in a specific agile methodology. This made Scrum the most logical choice, given that it’s the basis for many of the agile strategies out there and dominant in the field.

Evaluating all the options

For Scrum education and certification, there are really two major players to consider.

  1. Scrum Alliance - Probably the most well known Scrum organization is Scrum Alliance. They are a highly recognizable organization that does a lot to further the broader understanding of Scrum as a practice.
  2. Scrum.org - Led by the original co-founder of Scrum, Ken Schwaber, Scrum.org is well-respected and touted for its authority in the industry.

Each has their own approach to teaching and awarding certifications as well as differences in price point and course style that are important to be aware of.

SCRUM ALLIANCE

Pros

  • Strong name recognition and leaders in the Scrum field
  • Offers both in-person and online courses
  • Hosts in-person events, webinars, and global conferences
  • Provides robust amounts of educational resources for its members
  • Has specialization tracks for folks looking to apply Scrum to their specific discipline
  • Members are required to keep their skills up to date by earning educational credits throughout the year to retain their certification
  • Consistent information across all course administrators, ensuring you'll be set up to succeed when taking your certification test

Cons

  • High cost creates a significant barrier to entry (we’re talking in the thousands of dollars here)
  • Courses are required to take the certification test
  • Certification expires after two years, requiring additional investment in time and/or money to retain credentials
  • Difficult to find sample course material ahead of committing to a course
  • Courses are several days long which may mean taking time away from a day job to complete them

SCRUM.ORG

Pros

  • Strong clout due to its founder, Ken Schwaber, who is the originator of Scrum
  • Offers in-person classes and self-paced options
  • Hosts in-person events and meetups around the world
  • Provides free resources and materials to the public, including practice tests
  • Has specialization tracks for folks looking to apply Scrum to their specific discipline
  • Minimum score on certification test required to pass; certification lasts for life
  • Lower cost for certification when compared to peers

Cons

  • Much lesser known to the general public, as compared to its counterpart
  • Less sophisticated educational resources (mostly confined to PDFs or online forums) making digesting the material challenging
  • Practice tests are slightly out of date making them less effective as a study tool
  • Self-paced education is not structured and therefore can’t ensure you’re learning everything you need to know for the test
  • Lack of active and engaging community will leave something to be desired

Before coming to a decision, it was helpful to me to weigh these pros and cons against a set of criteria. Here’s a helpful scorecard I used to compare the two institutions.

Scrum Alliance Scrum.org
Affordability ⚪⚪⚪
Rigor⚪⚪⚪⚪⚪
Reputation⚪⚪⚪⚪⚪
Recognition⚪⚪⚪
Community⚪⚪⚪
Access⚪⚪⚪⚪⚪
Flexibility⚪⚪⚪
Specialization⚪⚪⚪⚪⚪⚪
Requirements⚪⚪⚪
Longevity⚪⚪⚪

The four areas that were most important to me were:

  • Affordability - I’d be self-funding this certificate so the investment of cost would need to be manageable.
  • Self-paced - Not having a lot of time to devote in one sitting, the ability to chip away at coursework was appealing to me.
  • Reputation - Having a certificate backed by a well-respected institution was important to me if I was going to put in the time to achieve this credential.
  • Access - Because I wanted to be a champion for this framework for others in my organization, having access to resources and materials would help me do that more effectively.

Ultimately, I decided upon a Professional Scrum Master certification from Scrum.org! The price and flexibility of learning course content were most important to me. I found a ton of free materials on Scrum.org that I could study myself and their practice tests gave me a good idea of how well I was progressing before I committed to the cost of actually taking the test. And, the pedigree of certification felt comparable to that of Scrum Alliance, especially considering that the founder of Scrum himself ran the organization.

Putting a certificate to good use

I don’t work in a formal Agile company, and not everyone I work with knows the ins and outs of Scrum. I didn’t use my certification to leverage a career change or new job title. So after all that time, money, and energy, was it worth it?

I think so. I feel like I use my certification every day and employ many of the principles of Scrum in my day-to-day management of projects and people.

  • Self-organizing teams is really important when fostering trust and collaboration among project members. This means leaning on each other’s past experiences and lessons learned to inform our own approach to work. It also means taking a step back as a project manager to recognize the strengths on your team and trust their lead.
  • Approaching things in bite size pieces is also a best practice I use every day. Even when there isn't a mandated sprint rhythm, breaking things down into effort level, goals, and requirements is an excellent way to approach work confidently and avoid getting too overwhelmed.
  • Retrospectives and stand ups are also absolute musts for Scrum practices, and these can be modified to work for companies and project teams of all shapes and sizes. Keeping a practice of collective communication and reflection will keep a team humming and provides a safe space to vent and improve.
Photo by Gautam Lakum on Unsplash

Parting advice

I think furthering your understanding of industry standards and keeping yourself open to new ways of working will always benefit you as a professional. Professional certifications are readily available and may be more relevant than ever.

If you’re on this path, good luck! And here are some things to consider:

  • Do your research – With so many educational institutions out there, you can definitely find the right one for you, with the level of rigor you’re looking for.
  • Look for company credits or incentives – some companies cover part or all of the cost for continuing education.
  • Get started ASAP – You don’t need a full certification to start implementing small tactics to your workflows. Implementing learnings gradually will help you determine if it’s really something you want to pursue more formally.




si

So You've Written a Bad Design Take

So you’ve just written a blog post or tweet about why wireframes are becoming obsolete, the dangers of “too accessible” design, or how a certain style of icon creates “cognitive fatigue.”

Your post went viral, but now you’re getting ratioed by rude people on the Internet. That sucks! You were just trying to start a conversation and you probably didn’t deserve all that negativity (except for you, “too accessible” guy).

Most likely, you made one of these common mistakes:

1. You made generalizations about “design”

You, a good user-centered designer, know that you are not your user. Nor are you every designer.

First of all, let's acknowledge that there is no universal definition of design. Even if we narrow it down to software design, it’s still hard to make generalizations. Agency, in-house, product, startup, enterprise, non-profit, website, app, connected hardware, etc. – there are a lot of different work contexts and cultures for people with “designer” in their titles.

"The Design Industry" is not a thing, but even if it were, you don't speak for it. Don’t assume that the kind of design work you do is the universal default.

2. You didn’t share enough context

There are many great design books and few great design blog posts. (There are, to my knowledge, no great design tweets, but I am open to your suggestions.) Writing about design is not well suited to short formats, because context plays such an important role and there’s always a lot of it to cover.

Writing about your work should include as much context as you would include if you were presenting your portfolio for a job interview. What kind of organization did you work for? Who was your client and/or your stakeholders? What was the goal of the project? Your timeline? What was the makeup of your team? What were the notable business rules and constraints? How are you defining effectiveness and success?

Without these kinds of details, it’s not possible for other designers to know if what you’ve written is credible or applicable to them.

3. You were too certain

A blog post doesn’t need to be a dissertation. It’s okay to share hunches and anecdotes, but give the necessary caveats. And if you're making claims about science, bruh, you gotta cite your sources.

Be humble in your takes. Your account of what worked for you and why is more valuable to your peers than making sweeping claims and reheating the same old arguments. Be prepared to be told you’re wrong, and have the humility to realize that your perspective is just your perspective. Real conversations, like good design, are built on feedback and diverse viewpoints.

Together, we can improve the discourse in our information ecosystems. Don't generalize. Give context. Be humble.




si

Should you use Userbase for your next static site?

During the winter 2020 Pointless Weekend, we built TrailBuddy (working app coming soon). Our team consisted of four developers, two project managers, two front-end developers, a digital analyst, a UXer, and a designer. In about 48 hours, we took an idea from Jeremy Field’s head to a (mostly) working app. We broke the project up into two parts. First, a back-end that crunches trail, weather, and soil data; that data is exposed via a GraphQL API for a web app to consume.

While developers built the API, I built a static front end using Next.js. Famously, static front-ends don’t have a database, or a concept of “users.” A bit of functionality I wanted to add was saving favorite trails. I didn’t want to be hacky about it; I needed some way to add users and a database. I knew it’d be hard for the developers to set this up as part of the API: they had their hands full with all the #soil-soil-soil-soil-soil work (a Slack channel dedicated solely to figuring out our soil data problems—those were plentiful). I had been looking for an excuse to use Userbase, and this seemed like as good a time as any.

A textbook Userbase use case

“When would I use it?” The Userbase site lists these reasons:

  • If you want to build a web app without writing any backend code.
  • If you never want to see your users' data.
  • If you're tired of dealing with databases.
  • If you want to radically simplify your GDPR compliance.
  • And if you want to keep things really simple.

This was a perfect fit for my problem. I didn’t want to write any more backend code for this. I didn’t want to see our users’ data; I don’t care to know anyone’s favorite trails.* A nice bonus of not having users in our backend was not having to worry about keeping their data safe. We don’t have their data at all; it’s end-to-end encrypted by Userbase. We can offer a reasonable amount of privacy for free (well, for the price of using Userbase: $49 a year). I am not tired of dealing with databases, but I’d rather not deal with them anyway. I don’t think anyone doesn’t want to simplify their GDPR compliance. Finally, given our tight timeline, I wanted nothing more than to keep things really simple.

A sign up form that I didn't have to write a back-end for

Using Userbase

Userbase can be tried for free, so I set aside thirty minutes or so to do a quick proof of concept to make sure this would work out for us. I made an account and followed their Quickstart. Userbase is a fundamentally easy tool to use, but their quickstart is everything I’d want out of a quickstart:

  • Written in the most vanilla way possible (just HTML and vanilla JS). This means I can adapt it to my needs, in this case React using Next.js
  • Easy to follow, it does the most barebones tour of the functionality you can expect to get out of the SDK (software development kit). In other words, it is quick and it is a start
  • It has a live demo and code samples you can download and run yourself

It didn’t take long after that to integrate Userbase into our app with more help from their great docs. I debated whether to add code samples of what we did here, and I decided not to, because any reader would be better off using the great quickstart and docs Userbase provides—they are that clear, and that good. Depending on your use case, you’ll need to adapt the examples to your needs; for us, the trickiest parts were creating a top-level authentication context to manage users in the app, and a custom hook to encapsulate all the logic for setting, updating, and deleting favourite trails. Userbase’s SDK worked seamlessly for us.

A log in form that I didn't have to write a back-end for

Is Userbase for you?

Maybe. I am definitely a fan, so much so that this blog post probably reads like an advert. Userbase saved me a ton of time on this project. It reminded me of “The All Powerful Front End Developer” talk by Chris Coyier. I don’t fully subscribe to all the ideas in that talk, but it is nice to have “serverless” tools like Userbase, and all the new JAMstacky things. There are limits to the Userbase serverless experience in terms of scale and control. Obviously, relying on a third party for something always carries some (probably small) risk—it’s worth noting Userbase includes a note on their pricing page that says “You can host it yourself always under your control, or we can run it for you for a full serverless experience.” Still, I wouldn’t hesitate to use it in future projects.

One of the great things about Viget and Pointless Weekend is the opportunity to try new things. For me, that was Next.js and Userbase for TrailBuddy. It doesn’t always work out (in fact, this is my first Pointless Weekend where a risk hasn’t blown up in my face), but it is always fun. Getting to try out Userbase and beginning to think about how we may use it in the future made the weekend worthwhile for me, and it made my job on this project much more enjoyable.

*I will write a future post about privacy conscious analytics in TrailBuddy when I’ve figured that out. I am looking into Fathom Analytics for that.



  • Code
  • Front-end Engineering

si

Australia’s global talent visa for individuals and businesses

In late 2019 the Australian Government launched the Global Talent – Independent program which offers a streamlined, priority visa pathway for highly skilled and talented individuals to work and live permanently in Australia. There are two streams. The first is the Global Talent Independent Program (GTI) and the second is the Global Talent Employer Sponsored (GTES). […]





si

What can I do if I am on a working holiday or seasonal worker visa in the Coronavirus (COVID-19) crisis?

Seasonal Worker Programme and Pacific Labour Scheme workers can extend their stay for up to 12 months to work for approved employers as long as pastoral care and accommodation needs of workers are met to minimise health risks to visa holders and the community. Approved employers under the Seasonal Worker Programme and Pacific Labour Scheme […]





si

Student visa holders and New Zealand citizens in Australia and the Coronavirus (COVID-19) crisis?

International students who have been in Australia for longer than 12 months who find themselves in financial hardship will be able to access their Australian superannuation. The Government will undertake further engagement with the international education sector who already provide some financial support for international students facing hardship. International students working in supermarkets will have […]





si

9 Things You Can Do To Your WordPress Website During Quarantine

If you’d told us at WPZOOM six months ago about the current situation we find ourselves in, we wouldn’t have believed you. It’s all we can see if we turn on the TV, and it’s clear right now: humanity has taken a break. Worrying about loved ones, ensuring we stay safe, and for heaven’s sake, stay inside. Staying inside […]




si

If You’re Using Beaver Builder Lite, You Need This Addon

Hey there, I’m Ben, and I’m a guest author here at WPZOOM. Today I thought I’d share with you my experience of one of their rather awesome plugins, an addon for Beaver Builder. I know the team at WPZOOM are big fans of Beaver Builder, and why not? It’s a great page builder with an excellent feature set; chances are if […]





si

Markdown Comes Alive! Part 1, Basic Editor

In my last post, I covered what LiveView is at a high level. In this series, we’re going to dive deeper and implement a LiveView powered Markdown editor called Frampton. This series assumes you have some familiarity with Phoenix and Elixir, including having them set up locally. Check out Elizabeth’s three-part series on getting started with Phoenix for a refresher.

This series has a companion repository published on GitHub. Get started by cloning it down and switching to the starter branch. You can see the completed application on master. Our goal today is to make a Markdown editor, which allows a user to enter Markdown text on a page and see it rendered as HTML next to it in real-time. We’ll make use of LiveView for the interaction and the Earmark package for rendering Markdown. The starter branch provides some styles and installs LiveView.

Rendering Markdown

Let’s set aside the LiveView portion and start with our data structures and the functions that operate on them. To begin, a Post will have a body, which holds the rendered HTML string, and title. A string of markdown can be turned into HTML by calling Post.render(post, markdown). I think that just about covers it!

First, let’s define our struct in lib/frampton/post.ex:

defmodule Frampton.Post do
  defstruct body: "", title: ""

  def render(%__MODULE{} = post, markdown) do
    # Fill me in!
  end
end

Now the failing test (in test/frampton/post_test.exs):

describe "render/2" do
  test "returns our post with the body set" do
    markdown = "# Hello world!"                                                                                                                 
    assert Post.render(%Post{}, markdown) == {:ok, %Post{body: "<h1>Hello World</h1>
"}}
  end
end

Our render method will just be a wrapper around Earmark.as_html!/2 that puts the result into the body of the post. Add {:earmark, "~> 1.4.3"} to your deps in mix.exs, run mix deps.get and fill out render function:

def render(%__MODULE{} = post, markdown) do
  html = Earmark.as_html!(markdown)
  {:ok, Map.put(post, :body, html)}
end

Our test should now pass, and we can render posts! [Note: we’re using the as_html! method, which prints error messages instead of passing them back to the user. A smarter version of this would handle any errors and show them to the user. I leave that as an exercise for the reader…] Time to play around with this in an IEx prompt (run iex -S mix in your terminal):

iex(1)> alias Frampton.Post
Frampton.Post
iex(2)> post = %Post{}
%Frampton.Post{body: "", title: ""}
iex(3)> {:ok, updated_post} = Post.render(post, "# Hello world!")
{:ok, %Frampton.Post{body: "<h1>Hello world!</h1>
", title: ""}}
iex(4)> updated_post
%Frampton.Post{body: "<h1>Hello world!</h1>
", title: ""}

Great! That’s exactly what we’d expect. You can find the final code for this in the render_post branch.

LiveView Editor

Now for the fun part: Editing this live!

First, we’ll need a route for the editor to live at: /editor sounds good to me. LiveViews can be rendered from a controller, or directly in the router. We don’t have any initial state, so let's go straight from a router.

First, let's put up a minimal test. In test/frampton_web/live/editor_live_test.exs:

defmodule FramptonWeb.EditorLiveTest do
  use FramptonWeb.ConnCase
  import Phoenix.LiveViewTest

  test "the editor renders" do
    conn = get(build_conn(), "/editor")
    assert html_response(conn, 200) =~ "data-test="editor""
  end
end

This test doesn’t do much yet, but notice that it isn’t live view specific. Our first render is just the same as any other controller test we’d write. The page’s content is there right from the beginning, without the need to parse JavaScript or make API calls back to the server. Nice.

To make that test pass, add a route to lib/frampton_web/router.ex. First, we import the LiveView code, then we render our Editor:

import Phoenix.LiveView.Router
# … Code skipped ...
# Inside of `scope "/"`:
live "/editor", EditorLive

Now place a minimal EditorLive module, in lib/frampton_web/live/editor_live.ex:

defmodule FramptonWeb.EditorLive do
  use Phoenix.LiveView

  def render(assigns) do
    ~L"""
      <div data-test=”editor”>
        <h1>Hello world!</h1>
      </div>
      """
  end

  def mount(_params, _session, socket) do
    {:ok, socket}
  end
end

And we have a passing test suite! The ~L sigil designates that LiveView should track changes to the content inside. We could keep all of our markup in this render/1 method, but let’s break it out into its own template for demonstration purposes.

Move the contents of render into lib/frampton_web/templates/editor/show.html.leex, and replace EditorLive.render/1 with this one liner: def render(assigns), do: FramptonWeb.EditorView.render("show.html", assigns). And finally, make an EditorView module in lib/frampton_web/views/editor_view.ex:

defmodule FramptonWeb.EditorView do
  use FramptonWeb, :view
  import Phoenix.LiveView
end

Our test should now be passing, and we’ve got a nicely separated out template, view and “live” server. We can keep markup in the template, helper functions in the view, and reactive code on the server. Now let’s move forward to actually render some posts!

Handling User Input

We’ve got four tasks to accomplish before we are done:

  1. Take markdown input from the textarea
  2. Send that input to the LiveServer
  3. Turn that raw markdown into HTML
  4. Return the rendered HTML to the page.

Event binding

To start with, we need to annotate our textarea with an event binding. This tells the liveview.js framework to forward DOM events to the server, using our liveview channel. Open up lib/frampton_web/templates/editor/show.html.leex and annotate our textarea:

<textarea phx-keyup="render_post"></textarea>

This names the event (render_post) and sends it on each keyup. Let’s crack open our web inspector and look at the web socket traffic. Using Chrome, open the developer tools, navigate to the network tab and click WS. In development you’ll see two socket connections: one is Phoenix LiveReload, which polls your filesystem and reloads pages appropriately. The second one is our LiveView connection. If you let it sit for a while, you’ll see that it's emitting a “heartbeat” call. If your server is running, you’ll see that it responds with an “ok” message. This lets LiveView clients know when they've lost connection to the server and respond appropriately.

Now, type some text and watch as it sends down each keystroke. However, you’ll also notice that the server responds with a “phx_error” message and wipes out our entered text. That's because our server doesn’t know how to handle the event yet and is throwing an error. Let's fix that next.

Event handling

We’ll catch the event in our EditorLive module. The LiveView behavior defines a handle_event/3 callback that we need to implement. Open up lib/frampton_web/live/editor_live.ex and key in a basic implementation that lets us catch events:

def handle_event("render_post", params, socket) do
  IO.inspect(params)

  {:noreply, socket}
end

The first argument is the name we gave to our event in the template, the second is the data from that event, and finally the socket we’re currently talking through. Give it a try, typing in a few characters. Look at your running server and you should see a stream of events that look something like this:

There’s our keystrokes! Next, let’s pull out that value and use it to render HTML.

Rendering Markdown

Lets adjust our handle_event to pattern match out the value of the textarea:

def handle_event("render_post", %{"value" => raw}, socket) do

Now that we’ve got the raw markdown string, turning it into HTML is easy thanks to the work we did earlier in our Post module. Fill out the body of the function like this:

{:ok, post} = Post.render(%Post{}, raw)
IO.inspect(post)

If you type into the textarea you should see output that looks something like this:

Perfect! Lastly, it’s time to send that rendered html back to the page.

Returning HTML to the page

In a LiveView template, we can identify bits of dynamic data that will change over time. When they change, LiveView will compare what has changed and send over a diff. In our case, the dynamic content is the post body.

Open up show.html.leex again and modify it like so:

<div class="rendered-output">
  <%= @post.body %>
</div>

Refresh the page and see:

Whoops!

The @post variable will only be available after we put it into the socket’s assigns. Let’s initialize it with a blank post. Open editor_live.ex and modify our mount/3 function:

def mount(_params, _session, socket) do
  post = %Post{}
  {:ok, assign(socket, post: post)}
end

In the future, we could retrieve this from some kind of storage, but for now, let's just create a new one each time the page refreshes. Finally, we need to update the Post struct with user input. Update our event handler like this:

def handle_event("render_post", %{"value" => raw}, %{assigns: %{post: post}} = socket) do
  {:ok, post} = Post.render(post, raw)
  {:noreply, assign(socket, post: post)
end

Let's load up http://localhost:4000/editor and see it in action.

Nope, that's not quite right! Phoenix won’t render this as HTML because it’s unsafe user input. We can get around this (very good and useful) security feature by wrapping our content in a raw/1 call. We don’t have a database and user processes are isolated from each other by Elixir. The worst thing a malicious user could do would be crash their own session, which doesn’t bother me one bit.

Check the edit_posts branch for the final version.

Conclusion

That’s a good place to stop for today. We’ve accomplished a lot! We’ve got a dynamically rendering editor that takes user input, processes it and updates the page. And we haven’t written any JavaScript, which means we don’t have to maintain or update any JavaScript. Our server code is built on the rock-solid foundation of the BEAM virtual machine, giving us a great deal of confidence in its reliability and resilience.

In the next post, we’ll tackle making a shared editor, allowing multiple users to edit the same post. This project will highlight Elixir’s concurrency capabilities and demonstrate how LiveView builds on them to enable some incredible user experiences.



  • Code
  • Back-end Engineering

si

TrailBuddy: Using AI to Create a Predictive Trail Conditions App

Viget is full of outdoor enthusiasts and, of course, technologists. For this year's Pointless Weekend, we brought these passions together to build TrailBuddy. This app aims to answer that eternal question: Is my favorite trail dry so I can go hike/run/ride?

While getting muddy might rekindle fond childhood memories for some, exposing your gear to the elements isn’t great – it’s bad for your equipment and can cause long-term, and potentially expensive, damage to the trail.

There are some trail apps out there, but we wanted one that would focus on current conditions. Currently, our favorite trail apps (mtbproject.com, trailrunproject.com, and hikingproject.com, all owned by REI) rely on user-reported conditions. While this can be effective, the reports are frequently unreliable, as condition reports can become outdated in just a few days.

Our goal was to solve this problem by building an app that brought together location, soil type, and weather history data to create on-demand condition predictions for any trail in the US.

We built an initial version of TrailBuddy by tapping into several readily-available APIs, then running the combined data through a machine learning algorithm. (Oh, and also by bringing together a bunch of smart and motivated people and combining them with pizza and some of the magic that is our Pointless Weekends. We'll share the other Pointless Project, Scurry, with you soon.)

The quest for data.

We knew from the start this app would require data from a number of sources. As previously mentioned, we used REI’s APIs (i.e. https://www.hikingproject.com/data) as the source for basic trail information. We used each trail’s latitude and longitude coordinates, as well as its elevation, to query weather and soil type. We also found data points such as a trail’s total distance to be relevant to our app users and decided to include those on the front end, too. Since we wanted to go beyond relying solely on user-reported metrics, which is how REI’s current MTB project works, we came up with a list of factors that could affect the trail on a given day.

First on that list was weather.

We not only considered the impacts of the current forecast, but we also looked at the previous day’s forecast. For example, it’s safe to assume that if it’s currently raining or had been raining over the last several days, it would likely lead to muddy and unfavorable conditions for that trail. We utilized the DarkSky API (https://darksky.net/dev) to get the weather forecasts for that day, as well as the records for previous days. This included expected information, like temperature and precipitation chance. It also included some interesting data points that we realized may be factors, like precipitation intensity, cloud cover, and UV index. 

But weather alone can’t predict how muddy or dry a trail will be. To determine that for sure, we also wanted to use soil data to help predict how well a trail’s unique soil composition recovers after precipitation. Similar amounts of rain on trails of very different soil types could lead to vastly different trail conditions: a more clay-based soil would hold water much longer, and therefore be much more unfavorable, than loamy soil. Finding a reliable source for soil type and soil drainage proved incredibly difficult. After many hours, we finally found a source through the USDA that we could use. As a side note, the USDA keeps track of lots of data points on soil information that are actually pretty interesting! We can’t say we’re soil experts, but we felt like we got pretty close.

We used Whimsical to build our initial wireframes.

Putting our design hats on.

From the very first pitch, TrailBuddy’s main differentiator from peer trail resources was its ability to surface real-time information reliably and simply. However complicated the technology needed to collect and interpret that information, the front-end design needed to be clean and unencumbered.

We thought about how users would naturally look for information when setting out to find a trail and what factors they’d think about when doing so. We posed questions like:

  • How easy or difficult of a trail are they looking for?
  • How long is this trail?
  • What does the trail look like?
  • How far away is the trail in relation to my location?
  • What activity do I need a trail for?
  • Is this a trail I’d want to come back to in the future?

By putting ourselves in our users’ shoes, we quickly identified key features TrailBuddy needed to have to be relevant and useful. First, we needed filtering, so users could filter by difficulty and distance to narrow down their results to fit their activity level. Next, we needed a way to look up trails by activity type; mountain biking, hiking, and running are all types of activities REI’s MTB API tracks already, so those made sense as a starting point. And lastly, we needed a way for the app to find trails based on your location, or at the very least the ability to find a trail within a certain distance of your current location.

We used Figma to design, prototype, and gather feedback on TrailBuddy.

Using machine learning to predict trail conditions.

As stated earlier, none of us are actual soil or data scientists. So, in order to achieve the real-time conditions reporting TrailBuddy promised, we decided to leverage machine learning to make predictions for us. Digging into the utility of machine learning was a first for all of us on this team. Luckily, there was an excellent tutorial that laid out the basics of building an ML model in Python. Given a CSV file with inputs in the left columns and the desired output on the right, the script we generated was able to test multiple model strategies and report how effective each was at predicting results.

We assembled all of the historical weather and soil data we could find for a given latitude/longitude coordinate, compiled a CSV of roughly 1,000 rows by 100 columns, ran it through the Python evaluator, and found that the CART and SVM models consistently outranked the others at predicting trail status. In other words, we had found a working model to run our data through and get (hopefully) reliable predictions from. The next step was to figure out which data fields were actually critical in predicting trail status: the more we could refine our data set, the faster and smarter our predictive model could become.
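To make that concrete, here is a minimal sketch of the kind of evaluator script described above, assuming scikit-learn and a hypothetical trails.csv whose last column is the trail-status label (the file name, feature layout, and model shortlist are illustrative, not our exact code):

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Inputs live in the left columns; the desired output (trail status) is the last column.
data = pd.read_csv("trails.csv")
X, y = data.iloc[:, :-1], data.iloc[:, -1]

# Try several model strategies and report how well each one predicts trail status.
models = {
    "LR": LogisticRegression(max_iter=1000),
    "CART": DecisionTreeClassifier(),
    "SVM": SVC(),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10)
    print(f"{name}: mean accuracy {scores.mean():.3f} (+/- {scores.std():.3f})")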

We pulled in some Ruby code to take the original (and quite massive) CSV and output smaller versions to test with. Again, we’re no data scientists here, but we were able to cull a good majority of the data and still get a model that performed at 95% accuracy.

With our trained model in hand, we could serialize it into a model.pkl file (pkl stands for “pickle,” as in we’ve “pickled” the model), move that file into our Rails app along with a Python script to deserialize it, pass in a dynamic set of data, and generate real-time predictions. At the end of the day, our model has a propensity to predict fantastic trail conditions (about 99% of the time, in fact…). Just one of those optimistic machine learning models, we guess.
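The pickle round trip itself is only a few lines. A sketch of the idea, with illustrative names (model and fresh_samples stand in for our trained classifier and the dynamic input data):

import pickle

# Training side: serialize ("pickle") the trained model to disk.
with open("model.pkl", "wb") as f:
    pickle.dump(model, f)

# Rails side: a small Python script deserializes the model and
# generates real-time predictions from a dynamic set of input data.
with open("model.pkl", "rb") as f:
    model = pickle.load(f)
print(model.predict(fresh_samples))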

Where we go from here.

It was clear after two days that our team still wanted to do more. As a first refinement, we’d love to work more with our data set and ML model. Something quite surprising during the weekend was that we found we could remove all but two days’ worth of weather data, and all of the soil data we worked so hard to dig up, and still hit 95% accuracy. Which … doesn’t make a ton of sense. Perhaps the data we chose to predict trail conditions just isn’t a great empirical predictor of trail status. While these are questions too big to solve in a single weekend, we'd love to spend more time digging into this in a future iteration.



  • News & Culture

si

Pursuing A Professional Certification In Scrum

Professional certifications have become increasingly popular in this age of career switchers and the freelance gig economy. A certification can be a useful way to advance your skill set quickly or make your resume stand out, which can be especially important for those trying to break into a new industry or attract business while self-employed. Whatever your reason may be for pursuing a professional certificate, there is one question only you can answer for yourself: is it worth it?

Finding first-hand experiences from professionals with similar career goals and passions was the most helpful research I did to answer that question for myself. So, here’s mine: why I decided to get Scrum certified, how I evaluated my options, and whether it was really worth it.

A shift in mindset

My background is in brand strategy, where it’s typical for work to follow a predictable order, each step informing the next. That made linear techniques easy to implement: waterfall timelines, completing one phase of work in its entirety before moving on to the next, and documenting granular tasks weeks in advance. When I made the move to more digitally focused work, tasks followed a much looser set of ‘typical’ milestones. While the general outline remained the same (strategy, design, development, launch), there was a lot more overlap in how tasks informed each other, and they would keep informing and re-informing one another, as an iterative workflow encourages.

Trying to fit a very fluid process into my very stiff, linear approach to project planning didn’t work so well. I didn’t have the right strategies to manage risks in a productive way without feeling like the whole project was off track. With the habit of accounting for granular details all the time, I struggled to lean on others to help define what we should work on and when, and to be okay if that changed once, or twice, or three times. Everything I learned about the process of product development came from learning on the job and making a ton of mistakes, and I knew I wanted to get better.

Photo by Christin Hume on Unsplash

I was fortunate enough to work with a group of developers who were looking to make a change, too. As ‘agile’ enthusiasts, these developers were desperately looking for ways to infuse our approach to product work with agile-minded principles (the broad definition of ‘agile’ comes from ‘The Agile Manifesto,’ which has influenced frameworks for organizing people and information, often applied in product development). This applied not only to how I worked with them, but to how they worked with each other and the way we all onboarded clients to these new expectations. It was a huge eye-opener for me. Soon enough, I started applying these agile strategies to my day-to-day: running stand-ups, setting up backlogs, and reorganizing the way I thought about work output. It’s from this experience that I decided it might be worth learning these principles more formally.

The choice to get certified

There is a lot of literature out there about agile methodologies and a lot to be learned from casual research. This benefitted me for a while until I started to work on more complicated projects, or projects with more ambitious feature requests. My decision to ultimately pursue a formal agile certification really came down to three things:

  1. An increased use of agile methods across my team. Within my day-to-day I would encounter more team members who were familiar with these tactics and wanted to use them to structure the projects they worked on.
  2. The need for a clear definition of what processes to follow. I needed to grasp a real understanding of how to implement agile processes and stay consistent with using them to be an effective champion of these principles.
  3. Being able to diversify my experience. Finding ways to differentiate my resume from others with similar experience would be an added benefit to getting a certification. If nothing else, it would demonstrate that I’m curious-minded and proactive about my career.

To achieve these things, I gravitated toward a more foundational education in a specific agile methodology. This made Scrum the most logical choice, given that it’s the basis for many of the agile strategies out there and the dominant framework in the field.

Evaluating all the options

For Scrum education and certification, there are really two major players to consider.

  1. Scrum Alliance - Probably the most well known Scrum organization is Scrum Alliance. They are a highly recognizable organization that does a lot to further the broader understanding of Scrum as a practice.
  2. Scrum.org - Led by the original co-founder of Scrum, Ken Schwaber, Scrum.org is well-respected and touted for its authority in the industry.

Each has their own approach to teaching and awarding certifications as well as differences in price point and course style that are important to be aware of.

SCRUM ALLIANCE

Pros

  • Strong name recognition and leaders in the Scrum field
  • Offers both in-person and online courses
  • Hosts in-person events, webinars, and global conferences
  • Provides robust amounts of educational resources for its members
  • Has specialization tracks for folks looking to apply Scrum to their specific discipline
  • Members are required to keep their skills up to date by earning educational credits throughout the year to retain their certification
  • Consistent information across all course administrators, ensuring you'll be set up to succeed when taking your certification test

Cons

  • High cost creates a significant barrier to entry (we’re talking in the thousands of dollars here)
  • Courses are required to take the certification test
  • Certification expires after two years, requiring additional investment in time and/or money to retain credentials
  • Difficult to find sample course material ahead of committing to a course
  • Courses are several days long which may mean taking time away from a day job to complete them

SCRUM.ORG

Pros

  • Strong clout due to its founder, Ken Schwaber, who is the originator of Scrum
  • Offers in-person classes and self-paced options
  • Hosts in-person events and meetups around the world
  • Provides free resources and materials to the public, including practice tests
  • Has specialization tracks for folks looking to apply Scrum to their specific discipline
  • Minimum score on certification test required to pass; certification lasts for life
  • Lower cost for certification when compared to peers

Cons

  • Much lesser known to the general public, as compared to its counterpart
  • Less sophisticated educational resources (mostly confined to PDFs or online forums) making digesting the material challenging
  • Practice tests are slightly out of date making them less effective as a study tool
  • Self-paced education is not structured and therefore can’t ensure you’re learning everything you need to know for the test
  • Lack of active and engaging community will leave something to be desired

Before coming to a decision, it was helpful to me to weigh these pros and cons against a set of criteria. Here’s a helpful scorecard I used to compare the two institutions.

[Scorecard comparing Scrum Alliance and Scrum.org across ten criteria: affordability, rigor, reputation, recognition, community, access, flexibility, specialization, requirements, and longevity.]

The four areas that were most important to me were:

  • Affordability - I’d be self-funding this certificate so the investment of cost would need to be manageable.
  • Self-paced - Not having a lot of time to devote in one sitting, the ability to chip away at coursework was appealing to me.
  • Reputation - Having a certificate backed by a well-respected institution was important to me if I was going to put in the time to achieve this credential.
  • Access - Because I wanted to be a champion for this framework for others in my organization, having access to resources and materials would help me do that more effectively.

Ultimately, I decided upon a Professional Scrum Master certification from Scrum.org! The price and flexibility of learning course content were most important to me. I found a ton of free materials on Scrum.org that I could study myself, and their practice tests gave me a good idea of how well I was progressing before I committed to the cost of actually taking the test. And the pedigree of the certification felt comparable to that of Scrum Alliance, especially considering that a co-founder of Scrum himself runs the organization.

Putting a certificate to good use

I don’t work in a formal Agile company, and not everyone I work with knows the ins and outs of Scrum. I didn’t use my certification to leverage a career change or new job title. So after all that time, money, and energy, was it worth it?

I think so. I feel like I use my certification every day and employ many of the principles of Scrum in my day-to-day management of projects and people.

  • Self-organizing teams are really important for fostering trust and collaboration among project members. This means leaning on each other’s past experiences and lessons learned to inform our own approach to work. It also means taking a step back as a project manager to recognize the strengths on your team and trust their lead.
  • Approaching things in bite-size pieces is also a best practice I use every day. Even when there isn't a mandated sprint rhythm, breaking things down by effort level, goals, and requirements is an excellent way to approach work confidently and avoid getting too overwhelmed.
  • Retrospectives and stand-ups are also absolute musts for Scrum practices, and they can be modified to work for companies and project teams of all shapes and sizes. Keeping a practice of collective communication and reflection will keep a team humming and provide a safe space to vent and improve.
Photo by Gautam Lakum on Unsplash

Parting advice

I think furthering your understanding of industry standards and keeping yourself open to new ways of working will always benefit you as a professional. Professional certifications are readily available and may be more relevant than ever.

If you’re on this path, good luck! And here are some things to consider:

  • Do your research – With so many educational institutions out there, you can definitely find the right one for you, with the level of rigor you’re looking for.
  • Look for company credits or incentives – Some companies cover part or all of the cost of continuing education.
  • Get started ASAP – You don’t need a full certification to start applying small tactics in your workflows. Implementing learnings gradually will help you determine whether it’s really something you want to pursue more formally.




si

So You've Written a Bad Design Take

So you’ve just written a blog post or tweet about why wireframes are becoming obsolete, the dangers of “too accessible” design, or how a certain style of icon creates “cognitive fatigue.”

Your post went viral, but now you’re getting ratioed by rude people on the Internet. That sucks! You were just trying to start a conversation and you probably didn’t deserve all that negativity (except for you, “too accessible” guy).

Most likely, you made one of these common mistakes:

1. You made generalizations about “design”

You, a good user-centered designer, know that you are not your user. Nor are you every designer.

First of all, let's acknowledge that there is no universal definition of design. Even if we narrow it down to software design, it’s still hard to make generalizations. Agency, in-house, product, startup, enterprise, non-profit, website, app, connected hardware, etc. – there are a lot of different work contexts and cultures for people with “designer” in their titles.

"The Design Industry" is not a thing, but even if it were, you don't speak for it. Don’t assume that the kind of design work you do is the universal default.

2. You didn’t share enough context

There are many great design books and few great design blog posts. (There are, to my knowledge, no great design tweets, but I am open to your suggestions.) Writing about design is not well suited to short formats, because context plays such an important role and there’s always a lot of it to cover.

Writing about your work should include as much context as you would include if you were presenting your portfolio for a job interview. What kind of organization did you work for? Who was your client and/or your stakeholders? What was the goal of the project? Your timeline? What was the makeup of your team? What were the notable business rules and constraints? How are you defining effectiveness and success?

Without these kinds of details, it’s not possible for other designers to know if what you’ve written is credible or applicable to them.

3. You were too certain

A blog post doesn’t need to be a dissertation. It’s okay to share hunches and anecdotes, but give the necessary caveats. And if you're making claims about science, bruh, you gotta cite your sources.

Be humble in your takes. Your account of what worked for you and why is more valuable to your peers than making sweeping claims and reheating the same old arguments. Be prepared to be told you’re wrong, and have the humility to realize that your perspective is just your perspective. Real conversations, like good design, are built on feedback and diverse viewpoints.

Together, we can improve the discourse in our information ecosystems. Don't generalize. Give context. Be humble.




si

Should you use Userbase for your next static site?

During the winter 2020 Pointless Weekend, we built TrailBuddy (working app coming soon). Our team consisted of four developers, two project managers, two front-end developers, a digital analyst, a UXer, and a designer. In about 48 hours, we took an idea from Jeremy Field’s head to a (mostly) working app. We broke the project into two parts: a back-end that crunches trail, weather, and soil data and exposes it via a GraphQL API, and a web app that consumes that API.

While developers built the API, I built a static front end using Next.js. Famously, static front ends don’t have a database or a concept of “users.” A bit of functionality I wanted to add was saving favorite trails, but I didn’t want to be hacky about it; I needed some way to add users and a database. I knew it’d be hard for the developers to set this up as part of the API; they had their hands full with all the #soil-soil-soil-soil-soil work (a Slack channel dedicated solely to figuring out our soil data problems, which were plentiful). I had been looking for an excuse to use Userbase, and this seemed like as good a time as any.

A textbook Userbase use case

“When would I use it?” The Userbase site lists these reasons:

  • If you want to build a web app without writing any backend code.
  • If you never want to see your users' data.
  • If you're tired of dealing with databases.
  • If you want to radically simplify your GDPR compliance.
  • And if you want to keep things really simple.

This was a perfect fit for my problem. I didn’t want to write any more backend code for this. I didn’t want to see our users’ data; I don’t care to know anyone’s favorite trails.* A nice bonus to not having users in our backend was not having to worry about keeping their data safe: we don’t have their data at all, since it’s end-to-end encrypted by Userbase. We can offer a reasonable amount of privacy for free (well, for the price of using Userbase: $49 a year). I am not tired of dealing with databases, but I’d rather not deal with them. I don’t think anyone doesn’t want to simplify their GDPR compliance. Finally, given our tight timeline, I wanted nothing more than to keep things really simple.

A sign up form that I didn't have to write a back-end for

Using Userbase

Userbase can be tried for free, so I set aside thirty minutes or so to do a quick proof of concept to make sure this would work out for us. I made an account and followed their Quickstart. Userbase is a fundamentally easy tool to use, but their quickstart is everything I’d want out of a quickstart:

  • Written in the most vanilla way possible (just HTML and vanilla JS). This means I can adapt it to my needs, in this case React with Next.js.
  • Easy to follow: it does the most barebones tour of the functionality you can expect to get out of the SDK (software development kit). In other words, it is quick and it is a start.
  • It has a live demo and code samples you can download and run yourself.

It didn’t take long after that to integrate Userbase into our app with more help from their great docs. I debated whether to add code samples of what we did here, and I didn’t, because any reader would be better off using the great quickstart and docs Userbase provides; they are that clear, and that good. Depending on your use case you’ll need to adapt the examples to your needs. For us, the trickiest things were creating a top-level authentication context to manage users in the app, and a custom hook to encapsulate all the logic for setting, updating, and deleting favorite trails. Userbase’s SDK worked seamlessly for us.

A log in form that I didn't have to write a back-end for

Is Userbase for you?

Maybe. I am definitely a fan, so much so that this blog post probably reads like an advert. Userbase saved me a ton of time on this project. It reminded me of “The All Powerful Front End Developer” talk by Chris Coyier. I don’t fully subscribe to all the ideas in that talk, but it is nice to have “serverless” tools like Userbase, and all the new JAMstacky things. There are limits to the Userbase serverless experience in terms of scale and control, and obviously relying on a third party for something always carries some (probably small) risk. It’s worth noting Userbase includes a note on their pricing page that says “You can host it yourself always under your control, or we can run it for you for a full serverless experience.” Still, I wouldn’t hesitate to use it in future projects.

One of the great things about Viget and Pointless Weekend is the opportunity to try new things. For me, that was Next.js and Userbase for TrailBuddy. It doesn’t always work out (in fact, this is my first Pointless Weekend where a risk hasn’t blown up in my face), but it is always fun. Getting to try out Userbase and beginning to think about how we may use it in the future made the weekend worthwhile for me, and it made my job on this project much more enjoyable.

*I will write a future post about privacy conscious analytics in TrailBuddy when I’ve figured that out. I am looking into Fathom Analytics for that.



  • Code
  • Front-end Engineering


si

Best Business WordPress Themes

Kalium – Kalium is an excellent WordPress theme that is intended for blogging and portfolio websites. It has plenty of layout design variations, along with an impressive drag and drop content builder. There are many features and elements, each designed to enhance your website and guarantee its success. Dalton – A classy and clean theme for businesses […]





si

Internationaal Symposium 2012

A unique day in London with speakers on design, innovation, and collaboration. The topics are discussed from different perspectives in challenging sessions with a great deal of audience interaction.




si

designworkplan is looking for a wayfinding graphic designer, starting immediately

designworkplan is looking for a graphic designer for our wayfinding studio in Amsterdam, starting immediately




si

TADTas website





si

Recent Work: TADTas website

The internet holds a lot of potential for non-profits to get their message out, build an audience and raise money. Using the web to tell stories about helping people in need can be very effective for a non-profit organisation looking for new avenues to generate income and build support in other ways such as a […]





si

What every business must do (and designers even more so)

What should all businesses do at least once, and do properly, and (like the title of this blog post suggests) designers need to do repeatedly? The answer is: Understanding the target market they’re catering to. Sure, that makes sense—but why are graphic designers any different? Why do this repeatedly? When you’re in business, you’re in the […]




si

Design checklist: What clients should provide their designer

Hello! I have updated this very popular post to include a free downloadable PDF of this checklist.  Preparation is key to successful management of any project, and design projects are no different. The more preparation that both client and designer do right at the start, the more smoothly the work will go. I find checklists […]




si

Building a PC, Part IX: Downsizing

Hard to believe that I've had the same PC case since 2011, and my last serious upgrade was in 2015. I guess that's yet another sign that the PC is over, because PC upgrades have gotten really boring. It took 5 years for me to muster up the initiative to




si

Creating a Block-based Theme Using Block Templates

This post outlines the steps I took to create a block-based theme version of Twenty Twenty. Thanks to Kjell Reigstad for helping develop the theme and write this post. There’s been a lot of conversation around how theme development changes as Full Site Editing using Gutenberg becomes a reality. Block templates are an experimental feature …





si

New Branding & Website Design Launched for Enterprise High School in Clearwater, Florida

We recently completed a full rebrand and website design project for Enterprise High School, a charter school located in Clearwater, …





si

Logo Design & Branding for Food Launcher

A startup specializing in food product development and commercialization services, “Food Launcher” is a team of food scientists with over …




si

Fort Myers Brewery Website Launch for Coastal Dayz Brewery

Located in Downtown Fort Myers, just steps from the Caloosahatchee River and a short drive away from the Gulf coast …




si

New website design launch for Automated Irrigation Systems in Zionsville, Indiana

We’re delighted to launch the first ever website for this local irrigation company that has been around since 1989! Automated …