
Bowen Born itching to start basketball career at UNI

CEDAR FALLS — Like many high school seniors across the country, Norwalk’s Bowen Born is unsure when he’ll be able to get on campus at the University of Northern Iowa and begin...





COVID-19 Guidance for Meetup Organizers

On Wednesday, March 11, the World Health Organization (WHO) declared COVID-19 to be a pandemic and advised people everywhere to “make a plan in preparation for an outbreak of COVID-19 in your community.” Also on Wednesday, the WordPress.org Community Team published their recommendations for organizers of WordPress Events and Meetups. Our Recommendations: We recommend that […]






How to Choose a Niche for Your Online Store

Find your niche in 5 steps! Choosing a niche is one of the most important aspects of building a profitable online store. Follow this guide to define yours.
































Mother’s Day, Birthdays, Anniversaries: Celebrating during a pandemic

A 10th wedding anniversary traditionally is celebrated with a gift of aluminum or tin. For Sondy Daggett, her 10th year of marriage to Liz Hoskins was marked with a gift of Champagne and...









Jennifer Lien

JENNIFER SUZANNE LIEN Raleigh, N.C. Jennifer Suzanne Lien, 51, of Raleigh, N.C., passed away Monday, April 27, 2020, in Raleigh.

She was born April 2, 1969, in Cedar Rapids, Iowa, the daughter of Dennis Hobel and Carla Lange. She was predeceased by her father, Dennis.

Jennifer graduated from Linn-Mar High School. She loved working with the elderly, going to the beach and laughing with her friends and family, with a glass of pinot grigio. Jennifer loved her two dogs, Cici and Fran.

Along with her mother, Carla, Jennifer is survived by her husband of 25 years, Chris Lien of Raleigh; her daughters, Gwen of Edinburgh, Scotland, and Lily of Raleigh; her sister, Lisa Moon of Melbourne, Fla.; and her brother, Rob Hobel, and his wife, Danielle, of Cedar Rapids.

Private services will be held for the family. In lieu of flowers, memorial donations may be made to Gofundme: www.gofundme.com/manage/help-support-jennifer-liens-family-thankyou-xo.

A service of Bright Funeral Home, 405 S. Main St., Wake Forest, NC 27587, www.brightfunerals.com.





Dennis Gaumon

CEDAR RAPIDS
Dennis Gaumon, 69, died Thursday, May 7, 2020. Murdoch Funeral Home & Cremation Service of Marion.





Iowa’s senior care workers need our support

COVID-19 is a brutal villain, infecting millions and taking more than 185,000 lives worldwide, just over 100 of which were Iowans at the time of this writing. In the face of this, Iowans are showing the strength of their character. Individual acts of courage have become everyday occurrences. Nowhere is this truer than in our state’s long-term care centers.

The threat facing those in long-term care is unprecedented. Because many who are infected remain asymptomatic, efforts to prevent the virus from being introduced into facilities have proved difficult. Once the virus is introduced, it is hard to impede its spread — and virtually impossible without enhanced testing capabilities and more personal protective equipment (PPE) than we have access to today.

Long-term care providers have taken unprecedented steps to protect their residents, including prohibiting non-essential visitors in early March. Unfortunately, even with these measures and following guidance from the Centers for Medicare and Medicaid Services, the Centers for Disease Control and Prevention and other public health officials, more than 3,600 long-term care facilities nationwide have been impacted by the virus, including 13 in Iowa.

Yet, in the face of this challenge, our long-term care workers are performing with a valor we have not seen during peacetime in a generation; maybe two.

While many of us are hunkered down in our homes teleworking and spending time with our families, these caregivers are leaving their families to provide care for the loved ones of others. What these caregivers are doing and what they are sacrificing is remarkable. We owe them our gratitude, and we owe them our best efforts to address their critical needs.

Adequate PPE and routine testing for long-term care are paramount. While there has been significant attention paid to providing hospitals with PPE, it is imperative we not overlook those working in long-term care.

More than 70% of long-term care facilities nationwide report they lack enough PPE. This not only puts our caregivers at risk, it also puts the people they care for at greater risk. Preventing the introduction of the virus and containing its spread in nursing homes and assisted living facilities is one of the most important things we must do to relieve pressure on hospitals now.

Testing is a critical area where more support is needed. There are protocols in place to limit the spread of the virus once it is in a facility, including establishing isolation wings where those who have the virus are kept apart from the rest of the residents and are cared for by staff who do not interact with those in the rest of the building. But because the virus leaves many of those infected without symptoms, these steps cannot be effectively implemented without broader testing.

We applaud Gov. Kim Reynolds’ recent action to broaden testing for some of Iowa’s long-term care staff. Equally important is the plan to address potential staff shortages which may result from expanded testing. Since a test result only captures an individual’s infection status for a fixed period of time, long-term care staff and residents must be prioritized at the highest level to receive ongoing testing to effectively identify infections and respond as early as possible.

Those on the front lines of this fight need the tools to confront, contain and ultimately defeat the virus. There is reason to be hopeful. Even though residents of long-term care are particularly at risk, most recover from the virus. Caregivers can do even more amazing work if we get them the tools they need: protective equipment, testing and staffing.

It is time to rally around our long-term care residents and staff, and give them the support they need and deserve.

Brent Willett is president and CEO of the Iowa Health Care Association.





Why universal basic health care is both a moral and economic imperative

Several hundred cars were parked outside a food bank in San Antonio on Good Friday — the food bank fed 10,000 people that day. Such scenes, increasingly common across the nation and evocative of loaves and fish, reflect the cruel facts about the wealthiest nation in the world: 80 percent of Americans live paycheck to paycheck, and 100 percent of Americans were unprepared for the COVID-19 pandemic. People are hungry due to macroeconomic and environmental factors, not because they did something wrong. Although everyone is at risk in this pandemic, the risk is not shared equally across socioeconomic classes. Universal basic health care could resolve this disparity and many of the moral and economic aspects associated with the pandemic.

Increases in the total output of the economy, or the gross domestic product (GDP), disproportionately benefit the wealthy. From 1980 to 2020, the GDP increased by 79 percent. Over that same time, the after-tax income of the top 0.01 percent of earners increased by 420 percent, while the after-tax income of the middle 40 percent of earners increased by only 50 percent, and by a measly 20 percent for the bottom 50 percent of earners. At present, the top 0.1 percent of earners have the same total net worth as the bottom 85 percent. Such income inequality produces poverty, which is much more common in the U.S. than in other developed countries. Currently 43 million Americans, or 12.7 percent of the population, live in poverty.

At the same time, 30 million Americans are uninsured and many more are underinsured with poorly designed insurance plans. The estimated total of uninsured and underinsured Americans exceeds 80 million. In addition, most of the 600,000 homeless people and 11 million immigrants in the U.S. lack health care coverage. Immigrants represent an especially vulnerable population, since many do not speak English and cannot report hazardous or unsafe work conditions. Furthermore, many immigrants avoid care due to fear of deportation even if they entered the country through legal channels.

Most people in poverty and many in the middle class obtain coverage from federal programs. On a national level, Medicaid is effectively a middle-class program and covers those living in poverty, 30 percent of adults and 60 percent of children with disabilities as well as about 67 percent of people in nursing homes. In Iowa, 37 percent of children and 48 percent of nursing home residents use Medicaid. Medicaid also finances up to 20 percent of the care provided in rural hospitals. Medicare, Medicaid and the Children’s Hospital Insurance Program (CHIP) together cover over 40 percent of Americans.

In addition to facilitating care, health care policy must also address the “social determinants of health,” since the conditions in which people live, work, and play dictate up to 80 percent of their health risks and outcomes. This means that health care reform requires programs in all facets of society. Winston Churchill first conceptualized such an idea in the early 20th century as a tool to prevent the expansion of socialism, arguing that inequality could persist indefinitely without social safety nets. Since that time, most developed countries have implemented such social programs, but not the U.S.

All developed countries except the U.S. provide some type of universal basic health care for their residents. Universal basic health care refers to a system that provides all people with certain essential benefits, such as emergency services (including maternity), inpatient hospital and physician care, outpatient services, laboratory and radiology services, treatment of mental illness and substance abuse, preventive health services (including vaccinations), rehabilitation, and medications. Providing access to these benefits, along with primary care, dramatically improves the health of the community without imposing concerns regarding payment. Perhaps not coincidentally, the U.S. reports a lower life expectancy and higher rates of infant mortality, suicide and homicide compared to other developed countries.

Countries such as Canada, Great Britain, Denmark, Germany, Switzerland, Australia, and Japan all produce better health care outcomes than the U.S. at a much lower cost. In fact, the U.S. spends about twice the percentage of its GDP on health care compared to these countries. With that being said, the Affordable Care Act of 2010 (ACA), which facilitated a decrease in the rate of the uninsured in the U.S. from 20 percent to 12 percent, also decreased the percentage of the GDP spent on health care from 20.2 percent to 17.9 percent in just 10 years. For this reason, most economists agree that universal basic health care would not cost more than the current system, and many would also argue that the total costs of the health care system cannot be further reduced unless everyone has access to basic care.

Achieving successful universal basic health care requires a serious long-term commitment from the federal government — contributing to Medicaid and financing its expansion are not enough. It requires courage from our elected leaders. The ACA took several important steps toward this goal by guaranteeing coverage for preexisting conditions, abolishing lifetime maximums for essential services, and mandating individual coverage for everyone, though Congress repealed this final provision in 2017. At present, the ACA requires refinement and a public option, thereby preserving private and employer-based plans for those who want them.

Without universal basic health care the people living at the margins of society have no assurances that they will have access to basic health care services, especially during times of pandemic. Access to food and medications is less reliable, large families live together in small spaces, and public transportation facilitates frequent exposure to others. Childhood diseases such as asthma, chronic diseases such as diabetes, and diseases related to smoking such as COPD and cancer are all likely to worsen. Quarantine protocols also exacerbate the mental health crisis, further increasing rates of domestic violence, child abuse, substance abuse, depression, and suicide. In the last six weeks over 30 million Americans have applied for unemployment benefits, and as people become unemployed, many will lose health insurance.

Access to basic health care without economic or legal consequences would greatly enhance all aspects of pandemic management and response, from tracing contacts and quarantining carriers to administering tests and reinforcing supply chains. The COVID-19 pandemic has disproportionately affected minorities and the impoverished in both mortality and livelihood. Universal basic health care helps these vulnerable populations the most, and by reducing their risk it reduces the risk for everyone. In this way, universal basic health care supports the best interests of all Americans.

Like a living wage, universal basic health care aligns with the Christian tradition of social justice and is a moral and economic imperative for all Americans. Nurses, doctors, and other health care providers often observe a sharp contrast between the haves and have-nots when seeing patients. The homeless, the hungry, the unemployed, the working poor, the uninsured; people without families, patients with no visitors, those who live alone or lack support systems; refugees and immigrants — all of these people deserve the fairness and dignity provided by universal basic health care and programs which improve the social determinants of their health. The ACA moved the U.S. toward this goal, but now it requires refinement and a public option. The COVID-19 pandemic highlights the urgency of this imperative by demonstrating how universal basic health care could decrease the risks to those less fortunate, thus significantly decreasing the risks to everyone.

James M. Levett, MD, serves on the board of Linn County Public Health and is a practicing cardiothoracic surgeon with Physicians’ Clinic of Iowa. Pramod Dwivedi, MS, DrPH (c), is the health director of Linn County Public Health.





Let’s talk about mental illness in our community

One in five people will have some kind of mental illness in their lifetime. Yet despite how common these conditions are — as common as silver cars, and more common than being left-handed — stigma remains the greatest barrier to individuals seeking help regarding their mental illness.

May is Mental Health Awareness Month. This serves as a great opportunity for our community to begin eliminating stigma by starting conversations and increasing understanding about mental illness.

Now, more than ever before, it is important to talk about mental illness. Many of us could be feeling increased anxiety, stress and feelings of isolation due to the COVID-19 outbreak and social distancing requirements. For those Iowans who already live with a mental illness, this pandemic could be causing symptoms to compound.

A recent study released by a team at Iowa State University states that increased unemployment and social isolation measures related to COVID-19 could result in an increase of close to 50,000 additional suicides.

Despite the challenges created by the COVID-19 pandemic, there still is help available, and telehealth services during this crisis are critical. Our state leaders, Iowa Insurance Commissioner Doug Ommen and Gov. Kim Reynolds, responded immediately by encouraging health providers, insurers and businesses to work together to remove barriers and ensure telehealth is accessible.

Your Life Iowa, a state-operated service, offers referrals for problems related to alcohol, drugs, gambling, mental health or suicidal thoughts and can be contacted by phone, text or online chat 24/7.

Between March 1 and April 19, Your Life Iowa received nearly 500 contacts related to COVID-19 and traffic on the website — YourLifeIowa.org — is up 27 percent. Crisis lines and mental health counselors around the state and country are also reporting an uptick in patients reaching out for resources or virtual counseling. This is important progress.

However, the greatest barrier for those in need of mental health services is stigma.

If you know someone who is struggling, be a voice of support. The silence around mental illness is preventing our fellow Iowans — our friends, neighbors, co-workers and family members — from feeling better. By breaking down the stigma around mental illness, we can help them access the resources and treatment they deserve.

If someone opened up to you about their mental illness, would you know what to say? Do you have a general understanding of the most common mental illnesses? Do you know how to support loved ones dealing with mental illness? There are free resources available at MakeItOK.org/Iowa to learn more.

You can also read stories of Iowans who live with mental illness, take a pledge to end mental illness stigma and learn more about how you can get more involved with Make It OK through ambassador trainings, upcoming events and workplace programming.

Together, we can end the stigma and Make It OK.

Jami Haberl, Iowa Healthiest State Initiative; Lori Weih, UnityPoint Health — St. Luke’s Hospital; Tricia Kitzmann, Linn County Public Health and Mona McCalley-Whitters, Ph.D., NAMI Linn County.





Iowa Writers’ House is gone, but need for literary community continues

When Andrea Wilson approached me five years ago with her idea of creating a space for writers in our community separate from any offered by the University of Iowa, I must admit I was a bit skeptical, if not defensive. Over a long coffee discussion, I shared with her a detailed look at the literary landscape of Iowa City and all of the things my organization, the Iowa City UNESCO City of Literature, was doing to make those assets more visible and accessible.


Despite this, Andrea mentioned the need for an “on ramp,” a way for people who don’t feel a part of that community to find their path, to access those riches. It was there, I thought to myself. She just hadn’t looked in the right place.

Then she built that ramp in the form of the Iowa Writers’ House. As she and her team defined what that ramp should look like, what role it should play, the Writers’ House evolved from being an idea with promise to a vital part of our literary infrastructure. She showed that people were hungry for further instruction. They desired more and different ways to connect with one another. These were things beyond the scope and mission of the UI and the City of Literature. She had found her niche, and filled it, nicely complementing what was offered by my organization and others.

But those services do not come without cost. Andrea and her team scrambled, using the house as a literary bed-and-breakfast for many visiting writers. They scheduled workshops. They held fundraisers. But that thin margin disappeared with the onset of COVID-19. Unable to hold those workshops, to serve as a bed-and-breakfast, to provide meaningful in-person connections, the Writers’ House was unable to carry on in its current configuration.

We have every hope and expectation that the Iowa Writers’ House and Andrea will continue to be a part of our literary landscape in the future. This will come perhaps in another form, another space. Conversations have been underway for months about the needs of the literary community beyond the UI. Andrea has been a key part of those discussions, and the work that she and her team have done offers vital information about where those conversations need to go. Gaps have been identified, and while they won’t be filled in the same way, they will be filled.

These conversations join those that have been taking place in our community for decades about the need for space and support for writers and artists. We all have realized over these past few weeks of isolation just how much we miss when we are not able to gather to create and to celebrate those creations; perhaps those conversations will accelerate and gain focus once we reconvene. The newly formed Iowa City Downtown Arts Alliance, of which we are proud to be a part, is an additional voice in that conversation.

In the meantime, we want to thank Andrea, Associate Director Alisha Jeddeloh, and the team at the Iowa Writers’ House, not just for identifying a need, but for taking the rare and valuable step of actually rolling up their sleeves and doing something to meet it.

John Kenyon is executive director of the Iowa City UNESCO City of Literature.





Fine-Tuning Your Instagram Hashtag Strategy for 2020

Instagram has become the rising star of social media marketing platforms. It is a very attractive option for marketers who are growing frustrated with Facebook’s algorithm changes. Instagram also has a very large user base. Over 116 million Americans are on this popular image sharing site. Marketers can also reach millions of users in India, […]





How to Create a WordPress Intranet for Your Organization

Do you want to create a WordPress intranet for your organization? WordPress is a powerful platform with tons of flexible options that make it ideal for use as your company’s intranet. In this article, we will show you how to create a WordPress intranet.





Ironic Posters of Adventures at Home

While we can no longer travel around the world and are being asked to stay home, the “Bureau de Tourisme du Coronavirus” is taking the opportunity to unveil its latest campaign. This fictional, tongue-in-cheek tourism bureau was dreamed up by Jennifer Baer, a California graphic designer, to promote social distancing. […]





Wildlife in Patagonia Captured by Konsta Punkka

In 2016, Finnish photographer Konsta Punkka’s path crossed that of two pumas. He was then in the heart of Patagonia, in Chile, in the vast Torres del Paine National Park. A specialist in adventure shots and images of animals in their natural habitat, the photographer spent about ten days following the big cats to capture […]







Court approves pilot program to test electronic search warrants

The Iowa Supreme Court approved a pilot program in the 4th Judicial District — Audubon, Cass, Fremont, Harrison, Mills, Montgomery, Pottawattamie, Page and Shelby counties — to develop procedures for the use of electronic search warrants.

Electronic search warrants will reduce the time required to obtain warrants, reduce travel time by law enforcement and make more effective use of judges’ time, according to the order. Paper warrants require law enforcement to fill out application forms and then leave the scene of the potential search and drive to find a judge, either at a courthouse during business hours or their home after hours. If the judge grants the warrant, then the officer has to drive back to the scene to execute it.

The electronic warrants can be submitted to a judge from a squad car computer, which is more efficient for law enforcement and the judges.

The pilot program will be evaluated by the court annually and will continue until further notice.

Fourth Judicial District Chief Judge Jeff Larson, who was on the advisory committee to develop recommendations for the new process, talked about the project, which will start in the next few weeks.

Page County Chief Deputy Charles McCalla, 6th Judicial Associate District Judge Nicholas Scott, Linn County Sheriff Capt. Greg McGivern and Marion police Lt. Scott Elam also provided their thoughts about electronic search warrants.

Q: Iowa courts started going paperless in 2010, so why did it take so long to get a pilot program for electronic search warrants?

A: Larson: It had been discussed at various levels since (the electronic document management system) started. We should take advantage of the electronic process because it will save us money. Most law enforcement agencies are now used to filing electronic citations from their patrol cars and offices. There may have been some pushback a few years ago because some counties or offices didn’t have computer scanners and needed technology. Now, the rural offices have that technology.

Q: As a task force member working on this program, what were the hurdles?

A: Larson: It was just working through the procedural issues to make sure there would be a safeguard throughout the process. When a search warrant is needed, law enforcement has to fill out the search warrant package, including the application with all the pertinent information, and submit it to a magistrate judge, associate or district judge in their judicial district. Then the officer or deputy can just call the judge to alert him/her to the warrant and the judge can ask for any additional information needed. The judge then administers the oath of office over the phone and signs off or denies the warrant. Law enforcement doesn’t have to leave the crime scene and can print off the warrant from their squad car computer.

The process of going to electronic warrants started in 2017, when lawmakers amended the law to allow warrants to be submitted electronically, and then in 2018, the state court administrator’s office set up an advisory committee to develop recommendations.

Q: What has been the process to get a search warrant?

A: Larson: Law enforcement would have to leave the scene, fill out paperwork and then, many times, travel miles to go to the courthouse to have the judge sign it or if it’s after hours, go to a judge’s home. The officer may not be in the same county as the courthouse where the judge works or where the judge lives. (It) can take a lot of time. The process is way overdue.

Q: Page County Sheriff’s Chief Deputy Charles McCalla, what do you see as the biggest advantage for filing them electronically?

A: McCalla: The smaller counties have limited manpower, and some of the judges, like in Mills County, may be 60 to 70 miles away if a search warrant is needed after hours. Just traveling across the county can take time, depending where you are. At a minimum, we probably have to drive 30 minutes and up to an hour to get to a judge. This will save us time, money for travel and provide safety because we can stay at the scene to ensure the evidence hasn’t been tampered with.

Q: Is there a recent incident where an electronic search warrant may have helped?

A: McCalla: A few weeks ago, there was a theft report for a stolen chain saw and deputies went to the home and saw guns all over the house and they knew the guy who lived there had been convicted. They didn’t want to tip him off, so they just left the scene and went to get a search warrant. Luckily, the evidence was still there when they came back. They found about 90 guns.

Q: How do you feel about being the “guinea pigs” for the process?

A: McCalla: Happy to be. As law enforcement, we’re natural fixers. We find solutions. And this is an ideal time to use the process during the COVID-19 pandemic to keep everyone safe. We won’t have to have any face-to-face contact with the judges.

Q: Is Linn County excited about the program, once it’s tested and used across the state?

A: Scott: I think many of us in the criminal justice system are eagerly awaiting the results of the pilot. They have the potential to make the system more efficient. It is in the interest of the police and the suspect, who is often detained pending a warrant, to get the search warrant application reviewed by a judge as soon as possible. A potential benefit is that officers could also use them more often, which protects citizens from unlawful searches and seizures if a judge first reviews the evidence.

A: McGivern: I believe the implementation will be a much faster and efficient process for deputies. Like any new process, there may need to be some revisions that will have to be worked out, but I look forward to it.

A: Elam: We’ve done it this way for a long time, and it can be a bit of a haul for us, depending on who’s on call (among the judges) after hours. It’s nice to see there’s a pilot. The concern would be if something goes wrong in the process. If the internet is down or something else. Now, we have to go from Marion to the Linn County Courthouse. Then we go to the county attorney’s office to get a prosecutor to review the warrant and then find a judge (in the courthouse during business hours). That takes some time. If you can type out the application from your car right at the scene, it would help with details on the warrant — describing the structure or property needing to be searched. I just hope they work out all the bugs first.

Comments: (319) 398-8318; trish.mehaffey@thegazette.com





University of Iowa aims to cut greenhouse gas emissions in half

IOWA CITY — The University of Iowa on Thursday unveiled new sustainability goals for the next decade that — if accomplished — would cut its greenhouse gas emissions in half from a decade ago and transform the campus into a “living laboratory for sustainability education and exploration.”

But the goals fall short of what a collective of Iowa City “climate strikers” have demanded for more than a year — that the UI end coal burning immediately at its power plant, commit to using only renewable energy by 2030 and unite with the city of Iowa City in a “town-gown” climate accord.

“It’s ridiculous for the UI to announce a 2030 climate plan as it continues to burn coal for years and burn methane-spewing natural gas for decades at its power plant,” said Massimo Paciotto-Biggers, 14, a student at Iowa City High and member of the Iowa City Climate Strike group.

The university’s new 2030 goals piggyback off its 2020 goals, which former UI President Sally Mason announced in 2010 in hopes of integrating sustainability into the campus’ mission.

Her goals included consuming less energy on campus in 2020 than in 2010, despite projected growth; diversifying the campus’ energy portfolio by using biomass, solar, wind and the like to achieve 40 percent renewable energy consumption by 2020; diverting 60 percent of solid waste; reducing the campus transportation carbon footprint with a 10 percent cut in per capita transportation and travel; and increasing learning and research opportunities.

The university, according to a new report made public Thursday, met or surpassed many of those goals — including, among other things, a slight dip in total energy use, despite 15 new buildings and major additions across campus.

The campus also reported 40 percent of its energy consumption comes via renewable energy sources, and it reduced annual coal consumption 75 percent.

As for waste production, the university diverted 43 percent from the landfill and reported diverting 70 percent more waste than in 2010.

2030 PLAN’S FIRST PHASE HAS FEWER HARD PERCENTAGES

In just the first phase, the new 2030 goals — a result of collaboration across campus involving a 2030 UI Sustainability Goal Setting Task Force — involve fewer numbers and hard percentages. Aside from the aim to cut greenhouse emissions by 50 percent compared to a 2010 baseline, the phase one goals aim to:

  • Institutionalize and embed sustainability into campus culture, allowing individual units across campus to develop plans to meet campus sustainability goals.

• Expand sustainability research, scholarship and other opportunities.

• Use the campus as a “living laboratory” capable of improving campus sustainability and ecosystems.

• Prepare students to live and work in the 21st century through sustainability education.

• Facilitate knowledge exchange among the campus community and with the state, nation, and world.

PHASE 2 EXPANDS ON GOALS

As the campus moves into phase two of its 2030 plan, it will expand on first-phase goals by identifying specific and measurable tasks and metrics.

Leadership plans to finalize that second phase later in the fall semester.

“This approach has meant including units engaged in activities such as academics, research, operations, planning, engagement, athletics, and student life,” Stratis Giannakouros, director of the Office of Sustainability and the Environment, said in a statement.

‘Ambitious and forward-looking’

Sen. Joe Bolkcom, D-Iowa City, who serves as outreach and community education director for the UI Center for Global and Regional Environmental Research, told The Gazette the new goals are “ambitious and forward-looking.”

“The new goals will engage students and research faculty to help build a sustainable path for the campus and broader community,” he said.

The university recently made big news on the utilities front by entering a $1.165 billion deal with a private French company to operate its utility system for 50 years. The deal nets the university a massive upfront lump sum it can invest and pull from annually. It gives the private operator decades of reliable income.

And the university, in making the deal, mandated its new provider pursue ambitious sustainability goals — promising to impose penalties if it failed to do so.

Comments: (319) 339-3158; vanessa.miller@thegazette.com





Uptown Marion Market opening with caveats

MARION — While the Uptown Marion Market will continue to sell fresh produce, it will look a little different this year.

The market will continue operating on the second Saturday of June, July and August with some adjustments.

But the city of Marion has canceled community events until at least early July because of the coronavirus pandemic.

The Uptown market will run along Sixth Avenue instead of being held in City Square Park. It will be fenced, and no more than 50 people will be let in at a time.

Jill Ackerman, president of the Marion Chamber of Commerce, said there are usually between 50 and 60 vendors at each market, but she expects only 15 to 25 at this summer’s markets.

“The main thing here is safety,” Ackerman said. “We want to make sure people have opportunities to buy fresh produce from our local growers, but we’re going to ask patrons to only spend 30 minutes inside the market.”

Vendors will sell produce and some plants, but artisan items will not be available.

While there will be summer events through the Chamber of Commerce, Ackerman said, they will be fewer and look a little different than they usually do.

Free community concerts and movie nights are canceled until July by the city, according to a news release.

The Marion Farmers Market, held at Taube Park, is expected to resume May 16.

Officials hope to have smaller-scale events throughout the summer like performances in the Uptown Artway, Messy Art Days and the Tiny Fair series as restrictions ease.

Sunrise Yoga at the Klopfenstein Amphitheater at Lowe Park is expected to take place every Saturday from June to August.

“Unfortunately, given our current reality, we know that 2020 will be far from normal,” said Marion Mayor Nicolas AbouAssaly. “After careful consideration and consultation with event organizers and sponsors, we have made the collective decision to cancel the free community concerts, events and movie nights originally planned for our outdoor public venues through early-July.”

Comments: (319) 368-8664; grace.king@thegazette.com





Scenic designer in Iowa City looks for light in the darkness

Benjamin Stuben Farrar of Iowa City is a storyteller without a story to tell at the moment.

The first story is as dramatic and layered as his bold scenic and lighting designs for area stages: “Benjamin Stuben Farrar” is not his actual name.

He was born Stewart Benjamin Farrar 41 years ago in Kentucky. He didn’t want to go through life as “Stewie,” so he went by “Benjamin” until he got to college at Vanderbilt University in Nashville. He ran into so many other Bens that his buddies decided to combine his names into “Stuben.”

That name followed him to grad school at the University of Iowa in 2002, where he earned an MFA in theater design. But when he moved to New York City in 2006 to pursue his career, he didn’t like hearing “Stuben” shouted across the theater.

“It sounded too much like ‘stupid,’ ” he said, “so I reverted back to Benjamin.”

But nicknames have a way of sticking. When he and his wife moved back to Iowa City in 2015 to raise their daughter, he switched to “Stuben” again, since that’s how people knew him there.

Professionally, he uses “S. Benjamin Farrar” and on Facebook, he goes by “Benjamin Stuben Farrar” so friends from his various circles can find him. Even though most people now call him “Stuben,” he still introduces himself as “Benjamin.”

“To this day, I have 12 different names,” he said with a laugh. “Only the bill collectors know me as ‘Stewart.’”

Changing realms

Like his name, his artistry knows no bounds.

He has planted apple trees on Riverside Theatre’s indoor stage in Iowa City; a child’s outdoor playground on the Theatre Cedar Rapids stage; and dramatic spaces for Noche Flamenca’s dancers in New York City venues and on tour.

These days, however, his theatrical world has gone dark.

His recent designs for “The Humans,” “The Skin of Our Teeth” and “Kinky Boots” at Theatre Cedar Rapids and “A Doll’s House, Part 2” at Riverside Theatre have been canceled or postponed in the wake of the coronavirus pandemic. He has “The Winter’s Tale” in the works for Riverside Theatre’s free Shakespeare in the Park slated for June, but time will tell if that changes, too.

“Within the course of two weeks, five productions were canceled or moved indefinitely,” he said.

Looking ahead, he’s not sure what shows he’ll have time to design for the upcoming seasons. He’s used to juggling three or four productions at a time, but he said that could become really difficult if the shows fall on top of each other at the various venues.

As with so many artists right now, his world keeps changing.

He and his wife, Jody Caldwell, an editor and graduate of the UI Writers’ Workshop, are both freelancers, leaving them with no income during this pandemic. So Farrar has been wading through red tape and delays to secure unemployment compensation and the government stimulus check, for which he’s still waiting. One bright spot was receiving a $1,000 Iowa Arts & Culture Emergency Relief Fund grant given to 156 Iowa creatives who have lost income from canceled projects.

With his regular revenue streams drying up, he’s been considering other ways to earn money through teaching theater or creating and selling more of his digital and film photography — an outgrowth of his fascination for the way lighting can sculpt a scene on stage.

“I love doing nature (photography). I love doing details,” he said. “I love photographing people, too, especially on stage — I love photographing my own shows. It’s just a lot of fun.

“For me, nature’s so interesting, especially living where we do in North America, there’s vast changes from one time of year to another. I just love looking at that on a very small scale, and how light happens to fall on that particular surface — how that surface changes color,” he said.

“Right now the redbuds are out. The magnolias came out two weeks ago and then they started to fall. It changes the landscape dramatically, especially based on whether it’s a morning light or afternoon light or evening light, whether it’s cloudy, whether the sun’s peeking through clouds and highlighting a few individual leaves. I find that super fascinating.

“That’s how I can look at the same boring tree at different times of year, at different times of day, and find something interesting to photograph.”

Lighting design

While his scenic designs create an immediate visual impact and help tell the story swirling around the actors, Farrar was a lighting designer before he became a scenic designer.

It wasn’t love at first sight. He took a light design course in college, but didn’t “get” it.

“It’s really difficult to wrap your head around it,” he said.

His aha moment came when he was running lights for an operetta in college.

“I just had these little faders in front of me so I could raise certain lights up and down. And the music was happening in front of me and I thought, ‘I control this whole little universe. I can make things completely disappear. I can sculpt things from the side, I can make things feel totally different — just like music can — just based on how it’s lit.’ And then I finally started to understand how the lighting hooked things together,” he said.

From there, his interest in lighting soared.

“I absolutely love lighting,” he said. “I think it’s probably given me more joy than anything else, just because I can go for a walk someplace and just the way the lighting changes as the clouds come in or out, or as the time of year changes and the angle of the sun changes, I really enjoy seeing that — and that’s what got me into photography.”

Scenic design

While his design work is a collaborative process with the director and other production team members, the ideas begin flowing as soon as he starts reading a script. With the flamenco dance company in New York, he might start working on a show two years in advance. With Theatre Cedar Rapids, the lead time is generally six months to look at the season overall, and four months to “get things going” on a particular show, he said. The lead time is about two months for Riverside Theatre shows, which have shorter rehearsal periods.

He begins thinking about the theater spaces, the text that the audience never sees, the show’s technical demands, and the scale in relation to the human body. He still likes to do some of his design work by hand, but computers and the 3D printer he has in his basement workshop have made the process much quicker for creating the drawings and scale models for each show.

He also enjoys the variety and challenge of moving between the small space inside Riverside Theatre and the large space inside Theatre Cedar Rapids, the theaters at Grinnell College and Cornell College in Mount Vernon, and the theaters in New York and the touring venues that have housed his designs.

Ultimately, the goal of scenic design “is always about the storytelling,” he said.

“There’s a version of a show that exists in a script, if there is a script. Assuming it has a script, there is a scaffolding for that show in the script, and then there’s a version of the show in the director’s head, and then there’s a version of the show that’s performed in my head as I read the script. So there’s all these different versions.”

If the show is a musical, the choreographer brings in another idea, and the musical score adds another element. Sometimes Farrar knows the music very well, but other times, he doesn’t.

“Hopefully, I can integrate that well if I listen to the music while working on the show — not usually when I’m reading the script, but while I’m drafting the show. I’ll listen to the music to get a sense of how the show wants to move.

“Integrating all these different versions of the show — the text, what’s in my head, what’s in the director’s head, what’s in the choreographer’s head, the role the music plays — and then you synthesize all those elements, and then you find out how the show wants to move in the space it has. And how a show moves is one of the most important things to me. ...

“You get a sense that the show becomes this conscious element that wants a certain thing, and will reveal those things over time.”

And time is something he has right now.

Comments: (319) 368-8508; diana.nollen@thegazette.com





New machines in Test Iowa initiative still unproven

DES MOINES — More than 20 days after Iowa signed a $26 million contract with a Utah company to expand testing in the state, the machines the firm supplied to run the samples still have not passed muster.

A time frame for completing the validation process for the Test Iowa lab machines is unknown, as the process can vary by machine, University of Iowa officials said Friday.

The validation process is undertaken to determine if the machines are processing tests accurately. To this point, the lab has processed the Test Iowa results using machines the State Hygienic Lab already had, officials told The Gazette.

Running side-by-side testing is part of the validation process. The lab then compares whether the machines yield the same results when the sample is run, officials said Friday. The side-by-side testing means the Test Iowa samples are being run at least twice to compare results.

The state does not break out how many of the 331,186 Iowans who by Friday have completed the coronavirus assessment at TestIowa.com have actually been tested. Test Iowa was initiated last month to ramp up testing of essential workers and Iowans showing COVID-19 symptoms. The state’s fourth drive-through location where people with appointments can be tested opened Thursday at the Kirkwood Continuing Education Training Center in Cedar Rapids.

On Friday, Iowa posted a fourth straight day of double-digit deaths from coronavirus, with the latest 12 deaths reported by the state Department of Public Health bringing the statewide toll to 243 since COVID-19 was first confirmed March 8 in Iowa.

State health officials reported another 398 Iowans tested positive for the respiratory ailment, bringing that count to 11,457 of the 70,261 residents who have been tested — a positive rate of more than 16 percent.

One in 44 Iowans has been tested for COVID-19, with 58,804 posting negative results, according to state data. A total of 4,685 people have recovered from the disease.

During a Thursday media briefing, Gov. Kim Reynolds told reporters a backlog of test results that occurred due to validation of Test Iowa equipment had been “caught up,” but some Iowans who participated in drive-through sites set up around the state indicated they still were awaiting results.

Reynolds spokesman Pat Garrett confirmed Thursday that “a very small percentage” of coronavirus test samples collected under the Test Iowa program could not be processed because they were “potentially damaged,” resulting in incomplete results.

There were 407 Iowans who were hospitalized (with 34 admitted in the past 24 hours) for coronavirus-related illnesses and symptoms, with 164 being treated in intensive care units and 109 requiring ventilators to assist their breathing.

Health officials said the 12 deaths reported Friday were: three in Woodbury County, two in Linn County and one each in Black Hawk, Dallas, Dubuque, Jasper, Louisa, Muscatine and Scott counties. No other information about the COVID-19 victims was available from state data.

According to officials, 51 percent of the Iowans who have died from coronavirus have been male — the same percentage that tested positive.

Iowans over the age of 80 represent 46 percent of the COVID-19 victims, followed by 41 percent between 61 and 80.

Of those who have tested positive, state data indicates about 42 percent are age 18 to 40; 37 percent are 41 to 60; 14 percent are 61 to 80 and 5 percent are 81 or older.

Counties with the highest number of positive test results are Polk (2,150), Woodbury (1,532), Black Hawk (1,463) and Linn (813).

Earlier this week, state officials revamped the data available to the public at coronavirus.iowa.gov, with the new format no longer listing the age range of Iowans who died from coronavirus and providing information using a different timeline than before.

The governor did not hold a daily media briefing Friday due to scheduling conflicts created by Vice President Mike Pence’s trip to Iowa. Garrett said Reynolds would resume her COVID-19 briefings next week.

John McGlothlen and Zack Kucharski of The Gazette contributed to this report.





Celebrating on a screen: Iowa universities hold first-ever online commencements

Iowa State University graduates who celebrated commencement Friday saw lots of caps and gowns, red-and-gold confetti and arenas packed with friends and family.

But none of those images were from this year — which now is defined by the novel coronavirus that has forced education online and put an end to large gatherings like graduation ceremonies.

Appearing in front of a red ISU screen Friday, College of Agriculture and Life Sciences Dean Daniel J. Robison addressed graduates like he usually would at commencement — but this time in a recorded message acknowledging the unprecedented circumstances keeping them apart.

“This year, because of the COVID crisis, we are unfortunately not all together for this happy occasion,” he said, pushing forward in a motivational tone by quoting famed ISU alumnus George Washington Carver.

“When you can do the common things in life in an uncommon way, you will command the attention of the world,” Robison said, citing Carver.

About 12,000 graduates across Iowa’s public universities this month are doing exactly that — capping their collegiate careers with never-before-attempted online-only commencement ceremonies, with each campus and their respective colleges attempting a variety of virtual celebration methods.

ISU and the University of Iowa are attempting some form of socially distanced, livestreamed convocation with countdown clocks and virtual confetti. All three campuses, including the University of Northern Iowa, have posted online recorded messages, videos and slides acknowledging individual graduates.

Some slides include photos, thank-yous, quotes and student plans for after graduation.

UNI, which didn’t try any form of a live virtual ceremony, instead created a graduation website that went live Thursday. That site hosts an array of recorded video messages — including one from UNI President Mark Nook, who, standing alone behind a podium on campus clad in traditional academic regalia, recognized his campus’ 1,500-some spring graduates and their unusual challenges.

“We know the loss you feel in not being able to be on campus to celebrate this time with your friends, faculty and staff,” Nook said. “To walk around campus in your robe and to take those pictures with friends and family members … The loss is felt by many of us as well.”

He reminded those listening that this spring’s UNI graduates — like those at the UI and ISU — can participate in an upcoming in-person commencement ceremony.

And although students were allowed to return caps and gowns they ordered for their canceled walks across the stage, some kept them as keepsakes. The campuses offered other tokens of remembrance as well, including “CYlebration” gift packages ISU sent to graduates in April stuffed with a souvenir tassel, diploma cover, and streamer tube — to make up for the confetti that won’t be falling on graduation caps from the Hilton Coliseum rafters.

In addition to the recorded messages from 17 UI leaders — including President Bruce Harreld — the campus solicited parent messages, which will be included in the live virtual ceremonies.

To date, about 3,100 of the more than 5,400 UI graduates have RSVP’d to participate in the ceremony, which spokeswoman Anne Bassett said is a required affirmation from the students to have their names read.

“Students do not have to sign up to watch,” she said. “So there’s no way at this time to predict how many will do so.”

Despite the historic nature of the first online-only commencement ceremonies — forever bonding distanced graduates through the shared experience — UI graduate Omar Khodor, 22, said it’s a club he would have liked to avoid.

“I’d definitely prefer not to be part of that group,” the environmental science major said, sharing disappointment over the education, experiences and celebrations he lost to the pandemic.

“A lot of students like myself, we’re upset, but we’re not really allowed to be upset given the circumstances,” Khodor said. “You have this sense that something is unfair, that something has been taken from you. But you can’t be mad about it at all.”

‘Should I Dance Across the Stage?’

Life is too short to dwell on what could have been or what should have been — which sort of captures graduate Dawn Hales’ motivation to get an ISU degree.

The 63-year-old Ames grandmother calls herself the “oldest BSN Iowa State grad ever.”

“It’s the truth, because we’re only the second cohort to graduate,” Hales said. “I’ll probably be the oldest for a while.”

ISU began offering a Bachelor of Science in nursing degree in fall 2018 for registered nurses hoping to advance their careers — like Hales, who spent years in nursing before becoming director of nursing at Accura Healthcare, a skilled nursing and rehabilitation center in Ames.

In addition to wanting more education, Hales said, she felt like the “odd man out” in her red-and-gold family — with her husband, three sons and their wives all earning ISU degrees. She earned an associate degree and became a registered nurse with community college training.

“I was director of nursing at different facilities, but I did not have a four-year degree,” she said. “I always wanted to get my BSN.”

So in January 2019, she started full-time toward her three-semester pursuit of a BSN — even as she continued working. And her education took a relevant and important turn when COVID-19 arrived.

“My capstone project was infection control,” she said, noting her focus later sharpened to “infection control and crisis management” — perfect timing to fight the coronavirus, which has hit long-term care facilities particularly hard.

“We were hyper vigilant,” Hales said of her facility, which has yet to report a case of COVID-19. “I think we were probably one of the first facilities that pretty much shut down and started assessing our staff when they would come in.”

Hales said she was eager to walk in her first university graduation and was planning antics for it with her 10-year-old granddaughter.

“We were trying to think, should I dance across the stage?” Hales said. “Or would I grab a walker and act like an old lady going across the stage?

“She was trying to teach me to do this ‘dab’ move,” Hales said. “I said, ‘Honey, I cannot figure that out.’”

In the end, Hales watched the celebration online instead. She did, however, get a personalized license plate that reads, “RN2BSN.”

In From Idaho To Exalt ‘In Our Own Way’

Coming from a family-run dairy farm in Jerome, Idaho, EllieMae Millenkamp, 22, is the first in her family to graduate college.

Although music is her passion, Millenkamp long expected to study at an agriculture school — but Colorado State was her original choice.

Then, while visiting family in Iowa during a cousin’s visit to ISU, she fell in love with the Ames campus and recalibrated her academic path.

While at ISU, the musical Millenkamp began writing more songs and performing more online, which led to in-person shows and a local band.

And then, during her junior year, a talent scout reached out to invite her to participate in an audition for NBC’s “The Voice.” That went well and Millenkamp, in the summer before her senior year, moved to Los Angeles and made it onto the show.

She reached the second round before being bumped, but the experience offered her lifelong friendships and connections and invigorated her musical pursuits — which have since been slowed by COVID-19, with shows canceled in now-idled bars.

When the ISU campus shut down, Millenkamp went back to Idaho to be with her family, as thousands of her peers did with theirs.

After graduation, she plans to return and work on the family farm until her music career has a chance to regain momentum.

But she recently returned to Ames for finals. And she and some friends, also in town, plan to celebrate graduation, even if not with an official cap and gown.

“We’ll probably have a bonfire and all hang out,” she said. “We’ll celebrate in our own way.”

Seeking Closure After Abrupt Campus Exits

Most college seniors nearing graduation get to spend their academic hours focusing on their major and interests, wrapping their four or sometimes five years with passion projects and capstone experiences.

That was Omar Khodor’s plan — with lab-based DNA sequencing on tap, along with a geology trip and policy proposal he planned to present to the Iowa Legislature. But all that got canceled — and even some requirements were waived since COVID-19 made them impossible.

“There were still a lot of things to wrap up,” he said. “A lot of things I was looking forward to.”

He’s ending the year with just three classes to finish and “absolutely” would have preferred to have a fuller plate.

But Khodor’s academic career isn’t over. He’s planning to attend law school in the fall at the University of Pennsylvania, where he’ll pursue environmental law. But this spring has diminished his enthusiasm, with the question lingering of whether in-person courses will return to campus soon.

If they don’t, he’s still leaning toward enrolling — in part — because of all the work that goes into applying and getting accepted, which he’s already done.

“But online classes are definitely less fulfilling, less motivating. You feel like you learn less,” he said. “So it will kind of be a tossup. There’ll be some trade-offs involved in what I would gain versus what I would be paying for such an expensive endeavor like law school.”

As for missing a traditional college commencement, Khodor said he will, even though he plans to participate in the virtual alternative.

“Before it got canceled, I didn’t think that I was looking forward to it as much as I actually was,” he said.

Not so much for the pomp and circumstance, but for the closure, which none of the seniors got this year. When the universities announced no one would return to campus this semester, students were away on spring break.

They had already experienced their last in-person class, their last after-class drink, their last cram session, their last study group, their last lecture, their last Iowa Memorial Union lunch — and they didn’t even know it.

“So many of us, we won’t have closure, and that can kind of be a difficult thing,” he said.

Comments: (319) 339-3158; vanessa.miller@thegazette.com

Online Celebrations

For a list of commencement times and virtual celebrations, visit:

The University of Iowa’s commencement site at https://commencement.uiowa.edu/

Iowa State University’s commencement site at https://virtual.graduation.iastate.edu/

University of Northern Iowa’s commencement site at https://vgrad.z19.web.core.windows.net/uni/index.html




ni

Mother’s Day, Birthdays, Anniversaries: Celebrating during a pandemic

A 10th wedding anniversary traditionally is celebrated with a gift of aluminum or tin.

For Sondy Daggett, her 10th year of marriage to Liz Hoskins was marked with a gift of Champagne and chocolate-covered strawberries shared through a window.

Employees at Bickford of Marion, the assisted living and memory care center where Hoskins is a resident, surprised the couple with the anniversary gift on May 1. Despite the current coronavirus-related mitigation practices, the staff had created a special moment for the couple, who have been partners for 24 years.

Daggett burst into tears as employees played their wedding song — Billy Joel’s “The Longest Time.”

“It just touched my soul,” Daggett said.

Across the state, moments like this are relegated to visits through windows or over a phone call. As the novel coronavirus pandemic sweeps through the country, long-term care facilities have locked down in an effort to keep residents healthy, which means their families are no longer able to hug their loved ones, or sit with them in their rooms.

For many families, the feelings at a time like this are conflicted. Typical Mother’s Day celebrations have been placed on hold, and recent milestones have been missed by those living in long-term care facilities. Simple visits through windows feel distant.

“Those are the moments you remember and you miss,” said Daggett, recalling memories of visits to Bickford of Marion from Hoskins’s grandchildren and family gatherings during the holidays.

Hoskins, who has dementia, has been a resident at Bickford since August 2019.

“The pandemic has taken this away,” Daggett said.

But beyond this new dynamic with which family members are left to grapple, they also have the constant worry that their loved one could fall ill.

So far, Bickford of Marion has not seen any cases.

“Every time you read about another outbreak — whether it’s close to home or anywhere in the country — it brings home how fortunate we are so far,” said Matt Hoskins, Liz Hoskins’ son. “I can’t imagine the anxiety the residents and staff are having once it breaks through the wall.”

As of Friday, 29 long-term care facilities across the state, which include skilled nursing facilities and senior living centers, among others, have reported outbreaks of COVID-19 among hundreds of their staff and residents.

As a result, for some Iowans, that fear has become a reality.

‘I have to trust’

Ruth Brackett’s son Jamie Degner, a 38-year-old resident at Harmony House Health Care Center in Waterloo, tested positive for COVID-19 this past week.

Degner, who has severe autism and intellectual disabilities, has been a resident there since he was 15 years old.

More than 60 residents and staff have tested positive for COVID-19 at Harmony House, an intermediate care facility. It’s one of two long-term care facilities in Black Hawk County reporting an outbreak, defined as three or more positive tests among residents.

Degner received his test results on Tuesday. He’s had lower-than-normal oxygen levels, but otherwise has recorded his usual vital signs and has not experienced symptoms.

Brackett said it is “unbelievably difficult to not be able to go be with him through this.”

As with many facilities across the state, Harmony House closed its doors to visitors in early March, when the first cases of COVID-19 began being reported across Iowa and the nation. Brackett said her son’s cognitive abilities make it impossible for him to understand that she is unable to visit because she might make him sick, so the staff instead tell Degner his mom is “at work.”

While she’s optimistic he’ll improve, Brackett worries that Degner’s condition could take a turn for the worse.

“It’s tough because I have to trust” the staff, Brackett said. “There’s nothing I can do, so I can’t spend a lot of time dwelling on what I might do differently.”

The families that spoke to The Gazette believe the leadership at long-term care facilities is doing what it can to keep residents safe and healthy.

At Bickford of Marion, officials have taken the unique step of promising public transparency of possible COVID-19 cases in its facility. On the website of every Bickford location is a feature recording the number of residents who have tested positive for COVID-19.

“Whether it’s COVID-19 or not, we want to be transparent with families about their loved ones’ care,” Bickford of Marion Executive Director Jacobi Feckers said. “I don’t know why other nursing homes haven’t taken that step because I haven’t spoken to other facilities, but I’m thankful that’s the route we’ve taken.”

It’s not just families who are placing their trust in management. Ron Moore is an independent-living resident at Cottage Grove Place, one of the largest senior living centers in Cedar Rapids, which reported an outbreak of COVID-19 this past week.

According to the latest data from public health officials, five residents and staff there have tested positive.

The outbreak originated in the skilled nursing unit, and officials said they are working to ensure the virus doesn’t spread to the assisted-living and independent-living portions of the facility. They have restricted movement between those areas and conduct frequent temperature checks of staff.

So far, the general feeling among residents at Cottage Grove Place’s independent-living housing is that management has “done a good job” of controlling exposure.

“The feelings of the residents here are positive,” Moore said. “They appreciate what management has done to protect us.”

‘Any opportunity to celebrate’

Still, life looks much different at Cottage Grove Place. Moore said his schedule typically is packed with weekly book clubs and coffees with friends. Now he and his wife take walks, or try to connect with friends over email.

“I’ve found (residents) are not depressed at this time,” he said. “But in the future, if this goes on for many months? My prediction is yes, depression will be a serious thing.”

Local senior living centers have come up with unique ways to allow visitors to see their loved ones. Gina Hausknecht, a 55-year-old Iowa City resident, was able to see her mother in person for the first time in weeks after her mother’s assisted-living home, Oaknoll Retirement Community in Iowa City, created a “drive-up” visit option this past weekend.

While Hausknecht sat in the car, she was able to speak with her mother, 93-year-old Ellen Hausknecht, for an hour outside the facility. Before this, it had been emotionally difficult for Hausknecht not to see her mom weekly as she usually does.

“It sunk in that I don’t know when I’m going to see my mom again, and that felt really terrible,” Hausknecht previously told The Gazette.

Hausknecht said she hopes to take this year’s Mother’s Day as an opportunity to do something special, particularly given the difficult past few weeks.

“Our family isn’t super-big on these kinds of holidays but we do like to acknowledge them, and this year it feels important to take hold of any opportunity to celebrate,” she said.

Other facilities, including Bickford of Marion, also have eased restrictions on sending food and gifts to residents in time for Mother’s Day. Matt Hoskins, Liz Hoskins’ son, said the family’s usual Mother’s Day plans are impossible this year, so they hope to send her artwork from her grandchildren along with other gifts.

Brackett, who will be apart from her son Degner this year, said she hadn’t planned anything for the holiday. She looks forward to her first in-person visit with him after the pandemic, when she will bring his favorite meal from McDonald’s and a new deck of Phase 10 cards.

Despite the separation, their wedding anniversary on May 1 likely is something Daggett will cherish, she said. With Daggett acting as Hoskins’ caregiver since her dementia diagnosis in 2016, their wedding anniversary has been something the couple hasn’t celebrated in a significant way in some time, she said.

But that worry still creeps in the back of her mind. Daggett said she’s trying to remain “as confident as anyone can at this point.”

“I learned a long time ago you can’t worry about what you can’t control,” Daggett said. “But does that mean I still wake up at 2 in the morning worried about it? Of course I do.”

Comments: (319) 398-8469; michaela.ramm@thegazette.com





ni

This might as well be a Herschel ad. (at London, United...



This might as well be a Herschel ad. (at London, United Kingdom)




ni

This trip solidified my conviction to learn photography. A...



This trip solidified my conviction to learn photography. A lot has happened since this shot was taken.
Can you pinpoint the moment you decided to pursue photography? (at Toronto, Ontario)






ni

Web Fonts, Dingbats, Icons, and Unicode

Yesterday, Cameron Koczon shared a link to the dingbat font, Pictos, by the talented Drew Wilson. Cameron predicted that dingbats will soon be everywhere. Symbol fonts, yes, I thought. Dingbats? No, thanks. Jason Santa Maria replied:

@FictiveCameron I hope not, dingbat fonts sort of spit in the face of accessibility and semantics at the moment. We need better options.

Jason rightly pointed out the accessibility and semantic problems with dingbats. Dingbat fonts map icons to letters or numbers in the character map, so the icon is what appears on the page wherever that character is used. That’s what Pictos does. For example, type an ‘a’ on your keyboard, set Pictos as the font-face for that letter, and the Pictos anchor icon is displayed.
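
For readers unfamiliar with the technique, a minimal sketch of how such a dingbat font is typically wired up looks something like this (the class name and font file path are made up for illustration, not Pictos’ actual files):

    <!-- The letter "a" carries no meaning of its own here; it is only a slot
         for the glyph, which is why screen readers announce it as "a". -->
    <style>
      @font-face {
        font-family: "Pictos";                      /* assumed family name */
        src: url("pictos-web.woff") format("woff"); /* placeholder file */
      }
      .icon { font-family: "Pictos", sans-serif; }
    </style>
    <a href="/home" class="icon">a</a>  <!-- renders as the anchor icon -->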

Other folks suggested SVG and JS might be better, and other more novel workarounds to hide content from assistive technology like screen readers. All interesting, but either not workable in my view, or just a bit awkward.

Ralf Herrmann has an elegant CSS example that works well in Safari.

Falling down with CSS text-replacement

A CSS solution in an article from Pictos creator, Drew Wilson, relies on the fact that most of his icons are mapped to a character that forms part of the common name for that symbol. The article uses the delete icon as an example which is mapped to ‘d’. Using :before and :after pseudo-elements, Drew suggests you can kind-of wrangle the markup into something sort-of semantic. However, it starts to fall down fast. For example, a check mark (tick) is mapped to ‘3’. There’s nothing semantic about that. Clever replacement techniques just hide the evidence. It’s a hack. There’s nothing wrong with a hack here and there (as box model veterans well know) but the ends have to justify the means. The end of this story is not good as a VoiceOver test by Scott at Filament Group shows. In fairness to Drew Wilson, though, he goes on to say if in doubt, do it the old way, using his font to create a background image and deploy with a negative text-indent.
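
For those who haven’t read Drew’s article, the pseudo-element approach is roughly the following sketch. It is a rough approximation of the technique, not his exact code; the class names are mine, and the character mappings follow the examples above:

    /* Sketch of the pseudo-element technique: the visible label stays in the
       document, and the dingbat character is injected from the icon font. */
    a.delete:before {
      content: "d";              /* 'd' happens to spell "delete" in Pictos */
      font-family: "Pictos";
      padding-right: 0.25em;
    }
    a.confirm:before {
      content: "3";              /* the check mark: nothing semantic here */
      font-family: "Pictos";
      padding-right: 0.25em;
    }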

I agreed with Jason, and mentioned a half-formed idea:

@jasonsantamaria that’s exactly what I was thinking. Proper unicode mapping if possible, perhaps?

The conversation continued, and thanks to Jason, helped me refine the idea into this post.

Jon Hicks flagged a common problem for some Windows users, where certain Unicode characters are displayed as ‘missing character’ glyphs depending on the character. I think most of the problems with dingbats or missing Unicode characters can be solved with web fonts and Unicode.

Rising with Unicode and web fonts

I’d love to be able to use custom icons via optimised web fonts. I want to do so accessibly and semantically, and have optimised font files. This is how it could be done:

  1. Map the icons in the font to the existing Unicode code points for those symbols wherever possible.

    Unicode code points already exist for many common symbols. Fonts could be tiny, fast, stand-alone symbol fonts. Existing typefaces could also be extended to contain symbols that match the style of individual widths, variants, slopes, and weights. Imagine a set of Clarendon or Gotham symbols for a moment. Wouldn’t that be a joy to behold?

    There may be a possibility that private code points could be used if a code-point does not exist for a symbol we need. Type designers, iconographers, and foundries might agree a common set of extended symbols. Alternatively, they could be proposed for inclusion in Unicode.

  2. Include the font with font-face.

    This assumes ubiquitous support (as any use of dingbats does) — we’re very nearly there. WOFF is coming to Safari and with a bit more campaigning we may even see WOFF on iPad soon.

  3. In HTML, reference the Unicode code points in UTF-8 using numeric character references (a combined sketch follows this list).

    Unicode characters have corresponding numerical references. Named entities may not be rendered by XML parsers. Sean Coates reminded me that in many Cocoa apps in OS X the character map is accessible via a simple CMD+ALT+t shortcut. Ralf Herrmann mentioned that unicode characters ‘…have “speaking” descriptions (like Leftwards Arrow) and fall back nicely to system fonts.’
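
Put together, the three steps could look something like the sketch below. The font name and file are placeholders, and U+2713 CHECK MARK stands in for any symbol in the Unicode range; if the web font fails to load, the numeric character reference still falls back to whatever system glyph is available.

    <!-- Step 2: a small symbol font mapped to real Unicode code points. -->
    <style>
      @font-face {
        font-family: "MySymbols";                    /* placeholder name */
        src: url("my-symbols.woff") format("woff");  /* placeholder file */
      }
      .symbol { font-family: "MySymbols", sans-serif; }
    </style>

    <!-- Step 3: reference the code point as a numeric character reference. -->
    <p>Backup finished <span class="symbol">&#x2713;</span></p>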

Limitations

  1. Accessibility: Limited Unicode / entity support in assistive devices.

    Old tests in JAWS 7 by my friend and colleague Jon Gibbins show some of the inconsistencies. It seems some characters are read out, some ignored completely, and some read as a question mark. Not great, but perhaps Jon will post more about this in the future.

    Elizabeth Pyatt at Penn State University did some dingbat tests in screen readers. For real Unicode symbols, there are pronunciation files that increase the character repertoire of screen readers, like this file for phonetic characters. Symbols would benefit from one.

  2. Web fonts: font-face not supported.

    If font-face is not supported on certain devices like mobile phones, falling back to system fonts is problematic. Unicode symbols may not be present in any system fonts. If they are, for many designers, they will almost certainly be stylistically suboptimal. It is possible to detect font-face using the Paul Irish technique. Perhaps there could be a way to swap Unicode for images if font-face is not present.

Now, next, and a caveat

I can’t recommend using dingbats like Pictos, but the icons sure are useful as images. Beautifully crafted icon sets, delivered as carefully made fonts, could be very useful for rapidly creating image icons for different-resolution devices like the iPhone 4 and iPad.

Perhaps we could try and formulate a standard set of commonly used icons using the Unicode symbols range as a starting point. I’ve struggled to find a better visual list of the existing symbols than this Unicode symbol chart from Johannes Knabe.

Icons in fonts as Unicode symbols need further testing in assistive devices and with font-face.

Last, but not least, I feel a bit cheeky making these suggestions. A little knowledge is a dangerous thing. Combine it with a bit of imagination, and it can be lethal. I have a limited knowledge about how fonts are created, and about Unicode. The real work would be done by others with deeper knowledge than I. I’d be fascinated to hear from Unicode, accessibility, or font experts to see if this is possible. I hope so. It feels to me like a much more elegant and sustainable solution for scalable icons than dingbat fonts.

For more on Unicode, read this long, but excellent, article recommended by my colleague, Andrei, the architect of Unicode and internationalization support in PHP 6: The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets.




ni

Auphonic Leveler 1.8 and Auphonic Multitrack 1.4 Updates

Today we released free updates for the Auphonic Leveler Batch Processor and the Auphonic Multitrack Processor with many algorithm improvements and bug fixes for Mac and Windows.

Changelog

  • Linear Filtering Algorithms to avoid Asymmetric Waveforms:
    New zero-phase Adaptive Filtering Algorithms to avoid asymmetric waveforms.
    In asymmetric waveforms, the positive and negative amplitude values are disproportionate - please see Asymmetric Waveforms: Should You Be Concerned?.
    Asymmetrical waveforms are quite natural and not necessarily a problem. They are particularly common in recordings of speech and vocals, and can be caused by low-end filtering. However, they limit the amount of gain that can be safely applied without introducing distortion or clipping due to aggressive limiting.
  • Noise Reduction Improvements:
    New and improved noise profile estimation algorithms and bug fixes for parallel Noise Reduction Algorithms.
  • Processing Finished Notification on Mac:
    A system notification (including a short glass sound) is now displayed on Mac OS when the Auphonic Leveler or Auphonic Multitrack has finished processing - thanks to Timo Hetzel.
  • Improved Dithering:
    Improved dithering algorithms - using SoX - if a bit-depth reduction is necessary during file export.
  • Auphonic Multitrack Fixes:
    Fixes for ducking and background tracks and for very short music tracks.
  • New Desktop Apps Documentation:
    The documentation of our desktop apps is now integrated in our new help system:
    see Auphonic Leveler Batch Processor and Auphonic Multitrack Processor.
  • Bug Fixes and Audio Algorithm Improvements:
    This release also includes many small bug fixes and all audio algorithms come with improvements and updated classifiers using the data from our Web Service.

About the Auphonic Desktop Apps

We offer two desktop programs which include our audio algorithms only. The algorithms will be computed offline on your device and are exactly the same as implemented in our Web Service.

The Auphonic Leveler Batch Processor is a batch audio file processor and includes all our (Singletrack) Audio Post Production Algorithms. It can process multiple productions at once.

Auphonic Multitrack includes our Multitrack Post Production Algorithms and requires multiple parallel input audio tracks, which will be analyzed and processed individually as well as combined to create one final mixdown.

Upgrade now

Everyone is encouraged to download the latest binaries:

Please let us know if you have any questions or feedback!






ni

Facebook Live Streaming and Audio/Video Hosting connected to Auphonic

Facebook is not only a social media giant; the company also provides valuable tools for broadcasting. Today we release a connection to Facebook, which allows you to use the Facebook tools for video/audio production and publishing within Auphonic and our connected services.

The following workflows are possible with Facebook and Auphonic:
  • Use Facebook for live streaming, then import, process and distribute the audio/video with Auphonic.
  • Post your Auphonic audio or video productions directly to the news feed of your Facebook Page or User.
  • Use Facebook as a general media hosting service and share the link or embed the audio/video on any webpage (also visible to non-Facebook users).

Connect to Facebook

First you have to connect a Facebook account at our External Services Page: click on the "Facebook" button.

Select if you want to connect to your personal Facebook User or to a Facebook Page:

It is always possible to remove or edit the connection in your Facebook Settings (Tab Business Integrations).

Import (Live) Videos from Facebook to Auphonic

Facebook Live is an easy (and free) way to stream live videos:

We implemented an interface to use Facebook as an Incoming External Service. Please select a (live or non-live) video from your Facebook Page/User as the source of a production and then process it with Auphonic:

This workflow allows you to use Facebook for live streaming, import and process the audio/video with Auphonic, then publish a podcast and video version of your live video to any of our connected services.

Export from Auphonic to Facebook

Similar to Youtube, it is possible to use Facebook for media file hosting.
Please add your Facebook Page/User as an External Service in your Productions or Presets to upload the Auphonic results directly to Facebook:

Options for the Facebook export:
  • Distribution Settings
    • Post to News Feed: The exported video is posted directly to your news feed / timeline.
    • Exclude from News Feed: The exported video is visible in the videos tab of your Facebook Page/User (see for example Auphonic's video tab), but it is not posted to your news feed (you can do that later if you want).
    • Secret: Only you can see the exported video, it is not shown in the Facebook video tab and it is not posted to your news feed (you can do that later if you want).
  • Embeddable
    Choose if the exported video should be embeddable in third-party websites.

It is always possible to change the distribution/privacy and embeddable options later directly on Facebook. For example, you can export a video to Facebook as Secret and publish it to your news feed whenever you want.


If your production is audio-only, we automatically generate a video track from the Cover Image and (if available) Chapter Images.
Alternatively you can select an Audiogram Output File, if you want to add an Audiogram (audio waveform visualization) to your Facebook video - for details please see Auphonic Audiogram Generator.

Auphonic Title and Description metadata fields are exported to Facebook as well.
If you add Speech Recognition to your production, we create an SRT file with the speech recognition results and add it to your Facebook video as captions.
See the example below.

Facebook Video Hosting Example with Audiogram and Automatic Captions

Facebook can be used as a general video hosting service: even if you export videos as Secret, you will get a direct link to the video which can be shared or embedded in any third-party websites. Users without a Facebook account are also able to view these videos.

In the example below, we automatically generate an Audiogram Video for an audio-only production, use our integrated Speech Recognition system to create captions and export the video as Secret to Facebook.
Afterwards it can be embedded directly into this blog post (enable Captions if they don't show up by default) - for details please see How to embed a video:

It is also possible to just use the generated result URL from Auphonic to share the link to your video (also visible to non-Facebook users):
https://www.facebook.com/auphonic/videos/1687244844638091/

Important Note:
Facebook needs some time to process an exported video (up to a few minutes) and the direct video link won't work before the processing is finished - please try again a bit later!
On Facebook Pages, you can see the processing progress in your Video Library.

Conclusion

Facebook has many broadcasting tools to offer and is a perfect addition to Auphonic.
Both systems and our other external services can be used to create automated processing and publishing workflows. Furthermore, the export and import to/from Facebook is also fully supported in the Auphonic API.
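
As a rough illustration of the API side, creating a production that publishes its result to a connected Facebook Page/User might look like the sketch below. This is only a sketch under assumptions: the field names follow general Auphonic API conventions, the service UUID and credentials are placeholders, and the exact schema should be taken from the Auphonic API documentation.

    # Hedged sketch, not official documentation: field names are assumptions
    # based on the general Auphonic API; UUID and credentials are placeholders.
    import requests

    API = "https://auphonic.com/api/productions.json"
    AUTH = ("my_username", "my_password")   # HTTP Basic auth for illustration

    payload = {
        "metadata": {"title": "Episode 42 - Live from Facebook"},
        # UUID of the Facebook Page/User connected on the External Services page
        "outgoing_services": [{"uuid": "FACEBOOK-SERVICE-UUID"}],
        "action": "start",                  # start processing immediately
        # (input file upload omitted for brevity)
    }
    response = requests.post(API, json=payload, auth=AUTH)
    print(response.json())                  # production UUID, status, etc.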

Please contact us if you have any questions or further ideas!




ni

Auphonic Audio Inspector Release

At the Subscribe 9 Conference, we presented the first version of our new Audio Inspector:
The Auphonic Audio Inspector is shown on the status page of a finished production and displays details about what our algorithms are changing in audio files.

A screenshot of the Auphonic Audio Inspector on the status page of a finished Multitrack Production.
Please click on the screenshot to see it in full resolution!

It is possible to zoom and scroll within audio waveforms, and the Audio Inspector can be used to manually check production results and input files.

In this blog post, we will discuss the usage and all current visualizations of the Inspector.
If you just want to try the Auphonic Audio Inspector yourself, take a look at this Multitrack Audio Inspector Example.

Inspector Usage

Control bar of the Audio Inspector with scrollbar, play button, current playback position and length, button to show input audio file(s), zoom in/out, toggle legend and a button to switch to fullscreen mode.

Seek in Audio Files
Click or tap inside the waveform to seek in files. The red playhead will show the current audio position.
Zoom In/Out
Use the zoom buttons ([+] and [-]), the mouse wheel or zoom gestures on touch devices to zoom in/out the audio waveform.
Scroll Waveforms
If zoomed in, use the scrollbar or drag the audio waveform directly (with your mouse or on touch devices).
Show Legend
Click the [?] button to show or hide the Legend, which describes details about the visualizations of the audio waveform.
Show Stats
Use the Show Stats link to display Audio Processing Statistics of a production.
Show Input Track(s)
Click Show Input to show or hide input track(s) of a production: now you can see and listen to input and output files for a detailed comparison. Please click directly on the waveform to switch/unmute a track - muted tracks are grayed out slightly:

Showing four input tracks and the Auphonic output of a multitrack production.

Please click on the fullscreen button (bottom right) to switch to fullscreen mode.
Now the audio tracks use all available screen space to see all waveform details:

A multitrack production with output and all input tracks in fullscreen mode.
Please click on the screenshot to see it in full resolution.

In fullscreen mode, it’s also possible to control playback and zooming with keyboard shortcuts:
Press [Space] to start/pause playback, use [+] to zoom in and [-] to zoom out.

Singletrack Algorithms Inspector

First, we discuss the analysis data of our Singletrack Post Production Algorithms.

The audio levels of output and input files, measured according to the ITU-R BS.1770 specification, are displayed directly as the audio waveform. Click on Show Input to see the input and output file. Only one file is played at a time, click directly on the Input or Output track to unmute a file for playback:

Singletrack Production with opened input file.
See the first Leveler Audio Example to try the audio inspector yourself.

Waveform Segments: Music and Speech (gold, blue)
Music/Speech segments are displayed directly in the audio waveform: Music segments are plotted in gold/yellow, speech segments in blue (or light/dark blue).
Waveform Segments: Leveler High/No Amplification (dark, light blue)
Speech segments can be displayed in normal, dark or light blue: Dark blue means that the input signal was very quiet and contains speech, therefore the Adaptive Leveler has to use a high amplification value in this segment.
In light blue regions, the input signal was very quiet as well, but our classifiers decided that the signal should not be amplified (breathing, noise, background sounds, etc.).

Yellow/orange background segments display leveler fades.

Background Segments: Leveler Fade Up/Down (yellow, orange)
If the volume of an input file changes quickly, the Adaptive Leveler volume curve will increase/decrease very fast as well (= fade), and such fades should be placed in speech pauses. Otherwise, if fades are too slow or occur during active speech, one will hear pumping speech artifacts.
Exact fade regions are plotted as yellow (fade up, volume increase) and orange (fade down, volume decrease) background segments in the audio inspector.

Horizontal red lines display noise and hum reduction profiles.

Horizontal Lines: Noise and Hum Reduction Profiles (red)
Our Noise and Hiss Reduction and Hum Reduction algorithms segment the audio file into regions with different background noise characteristics, which are displayed as red horizontal lines in the audio inspector (top lines for noise reduction, bottom lines for hum reduction).
Then a noise print is extracted in each region and a classifier decides if and how much noise reduction is necessary - this is plotted as a value in dB below the top red line.
The hum base frequency (50Hz or 60Hz) and the strength of all its partials is also classified in each region, the value in Hz above the bottom red line indicates the base frequency and whether hum reduction is necessary or not (no red line).

You can try the singletrack audio inspector yourself with our Leveler, Noise Reduction and Hum Reduction audio examples.

Multitrack Algorithms Inspector

If our Multitrack Post Production Algorithms are used, additional analysis data is shown in the audio inspector.

The audio levels of the output and all input tracks are measured according to the ITU-R BS.1770 specification and are displayed directly as the audio waveform. Click on Show Input to see all the input files with track labels and the output file. Only one file is played at a time, click directly into the track to unmute a file for playback:

Input Tracks: Waveform Segments, Background Segments and Horizontal Lines
Input tracks are displayed below the output file including their track names. The same data as in our Singletrack Algorithms Inspector is calculated and plotted separately in each input track:
Output Waveform Segments: Multiple Speakers and Music
Each speaker is plotted in a separate, blue-like color - in the example above we have 3 speakers (normal, light and dark blue) and you can see directly in the waveform when and which speaker is active.
Audio from music input tracks are always plotted in gold/yellow in the output waveform, please try to not mix music and speech parts in music tracks (see also Multitrack Best Practice)!

You can try the multitrack audio inspector yourself with our Multitrack Audio Inspector Example or our general Multitrack Audio Examples.

Ducking, Background and Foreground Segments

Music tracks can be set to Ducking, Foreground, Background or Auto - for more details please see Automatic Ducking, Foreground and Background Tracks.

Ducking Segments (light, dark orange)
In Ducking, the level of a music track is reduced if one of the speakers is active, which is plotted as a dark orange background segment in the output track.
Foreground music parts, where no speaker is active and the music track volume is not reduced, are displayed as light orange background segments in the output track.
Background Music Segments (dark orange background)
Here the whole music track is set to Background and won’t be amplified when speakers are inactive.
Background music parts are plotted as dark orange background segments in the output track.
Foreground Music Segments (light orange background)
Here the whole music track is set to Foreground and its level won’t be reduced when speakers are active.
Foreground music parts are plotted as light orange background segments in the output track.

You can try the ducking/background/foreground audio inspector yourself: Fore/Background/Ducking Audio Examples.

Audio Search, Chapters Marks and Video

Audio Search and Transcriptions
If our Automatic Speech Recognition Integration is used, a time-aligned transcription text will be shown above the waveform. You can use the search field to search and seek directly in the audio file.
See our Speech Recognition Audio Examples to try it yourself.
Chapters Marks
Chapter Mark start times are displayed in the audio waveform as black vertical lines.
The current chapter title is written above the waveform - see “This is Chapter 2” in the screenshot above.

A video production with output waveform, input waveform and transcriptions in fullscreen mode.
Please click on the screenshot to see it in full resolution.

Video Display
If you add a Video Format or Audiogram Output File to your production, the audio inspector will also show a separate video track in addition to the audio output and input tracks. The video playback will be synced to the audio of output and input tracks.

Supported Audio Formats

We use the native HTML5 audio element for playback and the aurora.js javascript audio decoders to support all common audio formats:

WAV, MP3, AAC/M4A and Opus
These formats are supported in all major browsers: Firefox, Chrome, Safari, Edge, iOS Safari and Chrome for Android.
FLAC
FLAC is supported in Firefox, Chrome, Edge and Chrome for Android - see FLAC audio format.
In Safari and iOS Safari, we use aurora.js to directly decode FLAC files in javascript, which works but uses much more CPU compared to native decoding!
ALAC
ALAC is not supported by any browser so far, therefore we use aurora.js to directly decode ALAC files in javascript. This works but uses much more CPU compared to native decoding!
Ogg Vorbis
Only supported by Firefox, Chrome and Chrome for Android - for details please see Ogg Vorbis audio format.

We suggest using a recent Firefox or Chrome browser for best performance.
Decoding FLAC and ALAC files also works in Safari and iOS with the help of aurora.js, but javascript decoders need a lot of CPU and they sometimes have problems with exact scrolling and seeking.

Please see our blog post Audio File Formats and Bitrates for Podcasts for more details about audio formats.

Mobile Audio Inspector

Multiple responsive layouts were created to optimize the screen space usage on Android and iOS devices, so that the audio inspector is fully usable on mobile devices as well: tap into the waveform to set the playhead location, scroll horizontally to scroll waveforms, scroll vertically to scroll between tracks, use zoom gestures to zoom in/out, etc.

Unfortunately the fullscreen mode is not available on iOS devices (thanks to Apple), but it works on Android and is a really great way to inspect everything using all the available screen space:

Audio inspector in horizontal fullscreen mode on Android.

Conclusion

Try the Auphonic Audio Inspector yourself: take a look at our Audio Example Page or play with the Multitrack Audio Inspector Example.

The Audio Inspector will be shown in all productions which are created in our Web Service.
It might be used to manually check production result/input files and to send us detailed feedback about audio processing results.

Please let us know if you have some feedback or questions - more visualizations will be added in future!







ni

Auphonic Add-ons for Adobe Audition and Adobe Premiere

The new Auphonic Audio Post Production Add-ons for Adobe allow you to use the Auphonic Web Service directly within Adobe Audition and Adobe Premiere (Mac and Windows):

Audition Multitrack Editor with the Auphonic Audio Post Production Add-on.
The Auphonic Add-on can be embedded directly inside the Adobe user interface.


It is possible to export tracks/projects from Audition/Premiere and process them with the Auphonic audio post production algorithms (loudness, leveling, noise reduction - see Audio Examples), use our Encoding/Tagging, Chapter Marks, Speech Recognition and trigger Publishing with one click.
Furthermore, you can import the result file of an Auphonic Production into Audition/Premiere.


Download the Auphonic Audio Post Production Add-ons for Adobe:

Auphonic Add-on for Adobe Audition

Audition Waveform Editor with the Auphonic Audio Post Production Add-on.
Metadata, Marker times and titles will be exported to Auphonic as well.

Export from Audition to Auphonic

You can upload the audio of your current active document (a Multitrack Session or a Single Audio File) to our Web Service.
In case of a Multitrack Session, a mixdown will be computed automatically to create a Singletrack Production in our Web Service.
Unfortunately, it is not possible to export the individual tracks in Audition, which could be used to create Multitrack Productions.

Metadata and Markers
All metadata (see tab Metadata in Audition) and markers (see tab Marker in Audition and the Waveform Editor Screenshot) will be exported to Auphonic as well.
Marker times and titles are used to create Chapter Marks (Enhanced Podcasts) in your Auphonic output files.
Auphonic Presets
You can optionally choose an Auphonic Preset to use previously stored settings for your production.
Start Production and Upload & Edit Buttons
Click Upload & Edit to upload your audio and create a new Production for further editing. After the upload, a web browser will be started to edit/adjust the production and start it manually.
Click Start Production to upload your audio, create a new Production and start it directly without further editing. A web browser will be started to see the results of your production.
Audio Compression
Uncompressed Multitrack Sessions or audio files in Audition (WAV, AIFF, RAW, etc.) will be compressed automatically with lossless codecs to speed up the upload time without a loss in audio quality.
FLAC is used as lossless codec on Windows and Mac OS (>= 10.13), older Mac OS systems (< 10.13) do not support FLAC and use ALAC instead.

Import Auphonic Productions in Audition

To import the result of an Auphonic Production into Audition, choose the corresponding production and click Import.
The result file will be downloaded from the Auphonic servers and can be used within Audition. If the production contains multiple Output File Formats, the output file with the highest bitrate (or uncompressed/lossless if available) will be chosen.

Auphonic Add-on for Adobe Premiere

Premiere Video Editor with the Auphonic Audio Post Production Add-on.
The Auphonic Add-on can be embedded directly inside the Adobe Premiere user interface.

Export from Premiere to Auphonic

You can upload the audio of your current Active Sequence in Premiere to our Web Service.

We will automatically create an audio-only mixdown of all enabled audio tracks in your current Active Sequence.
Video/Image tracks are ignored: no video will be rendered or uploaded to Auphonic!
If you want to export a specific audio track, please just mute the other tracks.

Start Production and Upload & Edit Buttons
Click Upload & Edit to upload your audio and create a new Production for further editing. After the upload, a web browser will be started to edit/adjust the production and start it manually.
Click Start Production to upload your audio, create a new Production and start it directly without further editing. A web browser will be started to see the results of your production.
Auphonic Presets
You can optionally choose an Auphonic Preset to use previously stored settings for your production.
Chapter Markers
Chapter Markers in Premiere (not all the other marker types!) will be exported to Auphonic as well and are used to create Chapter Marks (Enhanced Podcasts) in your Auphonic output files.
Audio Compression
The mixdown of your Active Sequence in Premiere will be compressed automatically with lossless codecs to speed up the upload time without a loss in audio quality.
FLAC is used as lossless codec on Windows and Mac OS (>= 10.13), older Mac OS systems (< 10.13) do not support FLAC and use ALAC instead.

Import Auphonic Productions in Premiere

To import the result of an Auphonic Production into Premiere, choose the corresponding production and click Import.
The result file will be downloaded from the Auphonic servers and can be used within Premiere. If the production contains multiple Output File Formats, the output file with the highest bitrate (or uncompressed/lossless if available) will be chosen.

Installation

Install our Add-ons for Audition and Premiere directly on the Adobe Add-ons website:

Auphonic Audio Post Production for Adobe Audition:
https://exchange.adobe.com/addons/products/20433

Auphonic Audio Post Production for Adobe Premiere:
https://exchange.adobe.com/addons/products/20429

The installation requires the Adobe Creative Cloud desktop application and might take a few minutes. Please also try to restart Audition/Premiere if the installation does not work (on Windows it was once even necessary to restart the computer to trigger the installation).


After the installation, you can start our Add-ons directly in Audition/Premiere:
navigate to Window -> Extensions and click Auphonic Post Production.

Enjoy

Thanks a lot to Durin Gleaves and Charles Van Winkle from Adobe for their great support!

Please let us know if you have any questions or feedback!







ni

New Auphonic Privacy Policy and GDPR Compliance

The new General Data Protection Regulation (GDPR) of the European Union will take effect on May 25th, 2018. We used this opportunity to rework many of our internal data processing structures, remove unnecessary trackers, and apply this strict and transparent regulation to all our customers worldwide.

Image from pixabay.com.

At Auphonic we store as little personal information as possible about your usage and production data.
Here are a few human-readable excerpts from our privacy policy about which information we collect, how we process it, how long and where we store it - for more details please see our full Privacy Policy.

Information that we collect

  • Your email address when you create an account.
  • Your files, content, configuration parameters and other information, including your photos, audio or video files, production settings, metadata and emails.
  • Your tokens or authentication information if you choose to connect to any External services.
  • Your subscription plan, credits purchases and production billing history associated with your account, where applicable.
  • Your interactions with us, whether by email, on our blog or on our social media platforms.

We do not process any special categories of data (also commonly referred to as “sensitive personal data”).

How we use and process your Data

  • To authenticate you when you log on to your account.
  • To run your Productions, such that Auphonic can create new media files from your Content according to your instructions.
  • To improve our audio processing algorithms. For this purpose, you agree that your Content may be viewed and/or listened to by an Auphonic employee or any person contracted by Auphonic to work on our audio processing algorithms.
  • To connect your Auphonic account to an External service according to your instructions.
  • To develop, improve and optimize the contents, screen layouts and features of our Services.
  • To follow up on any question and request for assistance or information.

When using our Service, you fully retain any rights that you have with regards to your Content, including copyright.

How long we store your Information

Your Productions and any associated audio or video files will be permanently deleted from our servers, including all their metadata and any data from external services, after 21 days (7 days for video productions).
We will, however, keep billing metadata associated with your Productions in an internal database (how many hours of audio you processed).

Also, we might store selected audio and/or video files (or excerpts thereof) from your Content in an internal storage space for the purpose of improving our audio processing algorithms.

Other information like Presets, connected External services, Account settings etc. will be stored until you delete them or when your account is deleted.

Where we store your Data

All data that we collect from you is stored on secure servers in the European Economic Area (in Germany).

More Information and Contact

For more information please read our full Privacy Policy.

Please do not hesitate to contact us regarding any matter relating to our privacy policy and GDPR compliance!







ni

New Auphonic Transcript Editor and Improved Speech Recognition Services

Back in late 2016, we introduced Speech Recognition at Auphonic. This allows our users to create transcripts of their recordings, and more usefully, this means podcasts become searchable.
Now we have integrated two more speech recognition engines: Amazon Transcribe and Speechmatics. Whilst integrating these services, we also took the opportunity to develop a completely new Transcription Editor:

Screenshot of our Transcript Editor with word confidence highlighting and the edit bar.
Try out the Transcript Editor Examples yourself!


The new Auphonic Transcript Editor is included directly in our HTML transcript output file, displays word confidence values to instantly see which sections should be checked manually, supports direct audio playback, HTML/PDF/WebVTT export and allows you to share the editor with someone else for further editing.

The new services, Amazon Transcribe and Speechmatics, offer transcription quality improvements compared to our other integrated speech recognition services.
They also return word confidence values, timestamps and some punctuation, which is exported to our output files.

The Auphonic Transcript Editor

With the integration of the two new services offering improved recognition quality and word timestamps alongside confidence scores, we realized that we could leverage these improvements to give our users easy-to-use transcription editing.
Therefore we developed a new, open source transcript editor, which is embedded directly in our HTML output file and has been designed to make checking and editing transcripts as easy as possible.

Main features of our transcript editor:
  • Edit the transcription directly in the HTML document.
  • Show/hide word confidence, to instantly see which sections should be checked manually (if you use Amazon Transcribe or Speechmatics as speech recognition engine).
  • Listen to audio playback of specific words directly in the HTML editor.
  • Share the transcript editor with others: as the editor is embedded directly in the HTML file (no external dependencies), you can just send the HTML file to someone else to manually check the automatically generated transcription.
  • Export the edited transcript to HTML, PDF or WebVTT.
  • Completely usable on all mobile devices and desktop browsers.

Examples: Try Out the Transcript Editor

Here are two examples of the new transcript editor, taken from our speech recognition audio examples page:

1. Singletrack Transcript Editor Example
Singletrack speech recognition example from the first 10 minutes of Common Sense 309 by Dan Carlin. Speechmatics was used as speech recognition engine without any keywords or further manual editing.
2. Multitrack Transcript Editor Example
A multitrack automatic speech recognition transcript example from the first 20 minutes of TV Eye on Marvel - Luke Cage S1E1. Amazon Transcribe was used as speech recognition engine without any further manual editing.
As this is a multitrack production, the transcript includes exact speaker names as well (try to edit them!).

Transcript Editing

By clicking the Edit Transcript button, a dashed box appears around the text. This indicates that the text is now freely editable on this page. Your changes can be saved by using one of the export options (see below).
If you make a mistake whilst editing, you can simply use the undo/redo function of the browser to undo or redo your changes.


When working with multitrack productions, another helpful feature is the ability to change all speaker names at once throughout the whole transcript just by editing one speaker. Simply click on an instance of a speaker title and change it to the appropriate name, this name will then appear throughout the whole transcript.

Word Confidence Highlighting

Word confidence values are shown visually in the transcript editor, highlighted in shades of red (see screenshot above). The shade of red is dependent on the actual word confidence value: The darker the red, the lower the confidence value. This means you can instantly see which sections you should check/re-work manually to increase the accuracy.

Once you have edited the highlighted text, it will be set to white again, so it’s easy to see which sections still require editing.
Use the button Add/Remove Highlighting to disable/enable word confidence highlighting.

NOTE: Word confidence values are only available in Amazon Transcribe or Speechmatics, not if you use our other integrated speech recognition services!

Audio Playback

The button Activate/Stop Play-on-click allows you to hear the audio playback of the section you click on (by clicking directly on the word in the transcript editor).
This is helpful in allowing you to check the accuracy of certain words by being able to listen to them directly whilst editing, without having to go back and try to find that section within your audio file.

If you use an External Service in your production to export the resulting audio file, we will automatically use the exported file in the transcript editor.
Otherwise we will use the output file generated by Auphonic. Please note that this file is password protected for the current Auphonic user and will be deleted in 21 days.

If no audio file is available in the transcript editor, or cannot be played because of the password protection, you will see the button Add Audio File to add a new audio file for playback.

Export Formats, Save/Share Transcript Editor

Click on the button Export... to see all export and saving/sharing options:

Save/Share Editor
The Save Editor button stores the whole transcript editor with all its current changes into a new HTML file. Use this button to save your changes for further editing or if you want to share your transcript with someone else for manual corrections (as the editor is embedded directly in the HTML file without any external dependencies).
Export HTML / Export PDF / Export WebVTT
Use one of these buttons to export the edited transcript to HTML (for WordPress, Word, etc.), to PDF (via the browser print function) or to WebVTT (so that the edited transcript can be used as subtitles or imported in web audio players of the Podlove Publisher or Podigee; a short sample follows below).
Every export format is rendered directly in the browser, no server needed.
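
For reference, the exported WebVTT is a plain text file along these lines (the timings and text here are invented for illustration):

    WEBVTT

    00:00:00.000 --> 00:00:04.200
    Welcome back to the show.

    00:00:04.200 --> 00:00:07.800
    Today we talk about automatic audio post production.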

Amazon Transcribe

The first of the two new services, Amazon Transcribe, offers accurate transcriptions in English and Spanish at low costs, including keywords, word confidence, timestamps, and punctuation.

UPDATE 2019:
Amazon Transcribe offers more languages now - please see Amazon Transcribe Features!

Pricing
The free tier offers 60 minutes of free usage a month for 12 months. After that, it is billed monthly at a rate of $0.0004 per second ($1.44/h).
More information is available at Amazon Transcribe Pricing.
Custom Vocabulary (Keywords) Support
Custom Vocabulary (called Keywords in Auphonic) gives you the ability to expand and customize the speech recognition vocabulary, specific to your case (i.e. product names, domain-specific terminology, or names of individuals).
The same feature is also available in the Google Cloud Speech API.
Timestamps, Word Confidence, and Punctuation
Amazon Transcribe returns a timestamp and confidence value for each word so that you can easily locate the audio in the original recording by searching for the text.
It also adds some punctuation, which is combined with our own punctuation and formatting automatically.

The high quality (especially in combination with keywords) and low cost of Amazon Transcribe make it attractive, despite it currently supporting only two languages.
However, the processing time of Amazon Transcribe is much slower compared to all our other integrated services!

Try it yourself:
Connect your Auphonic account with Amazon Transcribe at our External Services Page.

Speechmatics

Speechmatics offers accurate transcriptions in many languages including word confidence values, timestamps, and punctuation.

Many Languages
Speechmatics’ clear advantage is the sheer number of languages it supports (all major European and some Asiatic languages).
It also has a Global English feature, which supports different English accents during transcription.
Timestamps, Word Confidence, and Punctuation
Like Amazon, Speechmatics creates timestamps, word confidence values, and punctuation.
Pricing
Speechmatics is the most expensive speech recognition service at Auphonic.
Pricing starts at £0.06 per minute of audio and can be purchased in blocks of £10 or £100. This equates to a starting rate of about $4.78/h. A reduced rate of £0.05 per minute ($3.98/h) is available when purchasing £1,000 blocks.
They offer significant discounts for users requiring higher volumes. At this further reduced price point it is a similar cost to the Google Speech API (or lower). If you process a lot of content, you should contact them directly at sales@speechmatics.com and say that you wish to use it with Auphonic.
More information is available at Speechmatics Pricing.

Speechmatics offers high-quality transcripts in many languages. But these features do come at a price: it is the most expensive speech recognition service at Auphonic.

Unfortunately, their existing Custom Dictionary (keywords) feature, which would further improve the results, is not available in the Speechmatics API yet.

Try it yourself:
Connect your Auphonic account with Speechmatics at our External Services Page.

What do you think?

Any feedback about the new speech recognition services, especially about the recognition quality in various languages, is highly appreciated.

We would also particularly like to hear any comments you have on the transcript editor - is there anything missing, or anything that could be implemented better?
Please let us know!







Audio Manipulations and Dynamic Ad Insertion with the Auphonic API

We are pleased to announce a new Audio Inserts feature in the Auphonic API: audio inserts are separate audio files (like intros/outros), which will be inserted into your production at a defined offset.
This blog post shows how one can use this feature for Dynamic Ad Insertion and discusses other Audio Manipulation Methods of the Auphonic API.

API-only Feature

For the general podcasting hobbyist, or even for someone producing a regular podcast, the features that are accessible via our web interface are more than sufficient.

However, some of our users, like podcasting companies who integrate our services as part of their products, asked us for dynamic ad insertions. We teamed up with them to develop a way of making this work within the Auphonic API.

We are therefore pleased to announce audio inserts, a new feature that is now part of our API. Note that this feature is not available through the web interface; it requires the use of our API.

Before we talk about audio inserts, let's talk about what you need to know about dynamic ad insertion!

Dynamic Ad Insertion

There are two ways of dealing with adverts within podcasts. In the first, adverts are recorded or edited into the podcast and are fixed, or baked in. The second method is to use dynamic insertion, whereby the adverts are not part of the podcast recording/file but can be inserted into the podcast afterwards, at any time.

This second approach allows you to run new ad campaigns across your entire catalog of shows. As a podcaster, you can potentially generate new revenue from your old content.

For a hosting company, dynamic ad insertion allows you to choose up-to-date and relevant adverts across all the podcasts you host. You can make these adverts relevant by subject or location, for instance.

Your users define the times for the ads in their podcast episodes; you are then in control of the adverts you insert.

Audio Inserts in Auphonic

Whichever approach to adverts you are taking, using audio inserts can help you.

Audio inserts are separate audio files which will be inserted into your main single or multitrack production at your defined offset (in seconds).

When a separate audio file is inserted as part of your production, it creates a gap in the podcast audio file, shifting the audio back by the length of the insert. Helpfully, chapters and other time-based information like transcriptions are also shifted back when an insert is used.

The biggest advantage of this is that Auphonic will apply loudness normalization to the audio insert so, from an audio point of view, it matches the rest of the podcast.

Although created with dynamic ad insertion in mind, this feature can be used for any type of audio insert: adverts, music, individual parts of a recording, etc. In the case of baked-in adverts, you could upload your already processed advert audio as an insert, without having to edit it into your podcast recording using a separate audio editing application.

Please note that audio inserts should already be edited and processed before they are used in a production. (This is usually the case with pre-recorded adverts anyway.) The only algorithm that Auphonic applies to an audio insert is loudness normalization, in order to match the loudness of the entire production. Auphonic does not add any other processing (e.g. no leveling, noise reduction, etc.).
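As a rough illustration of the idea behind this loudness matching step (this is not Auphonic's implementation), the following Python sketch applies a static gain so that an insert measured at one loudness matches the production's target loudness. The loudness values are assumed to come from a separate loudness meter (e.g. ITU-R BS.1770):

import numpy as np

def match_loudness(insert_samples, insert_lufs, production_lufs):
    # Static gain in dB needed to bring the insert to the production loudness.
    gain_db = production_lufs - insert_lufs
    # Convert dB to a linear factor and apply it to the samples.
    return insert_samples * (10 ** (gain_db / 20.0))

# Example: an insert measured at -20 LUFS, production normalized to -16 LUFS,
# so the insert is boosted by 4 dB.
louder_insert = match_loudness(np.array([0.1, -0.2, 0.05]), insert_lufs=-20.0, production_lufs=-16.0)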

Audio Inserts Coding Example

Here is a brief overview of how to use our API for audio inserts. Be warned, this section is coding-heavy, so if this isn't your thing, feel free to move along to the next section!

You can add audio insert files with a call to https://auphonic.com/api/production/{uuid}/multi_input_files.json, where uuid is the UUID of your production.
Here is an example with two audio inserts from an https URL. The offset/position in the main audio file must be given in seconds:

curl -X POST -H "Content-Type: application/json" \
    https://auphonic.com/api/production/{uuid}/multi_input_files.json \
    -u username:password \
    -d '[
            {
                "input_file": "https://mydomain.com/my_audio_insert_1.wav",
                "type": "insert",
                "offset": 20.5
            },
            {
                "input_file": "https://mydomain.com/my_audio_insert_2.wav",
                "type": "insert",
                "offset": 120.3
            }
        ]'

More details showing how to use audio inserts in our API can be seen here.
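If you are calling the API from Python rather than the command line, the same request could look roughly like the following sketch (using the third-party requests library; the production UUID, credentials and insert URLs are placeholders):

import requests

uuid = "your-production-uuid"  # placeholder: UUID of your production
url = f"https://auphonic.com/api/production/{uuid}/multi_input_files.json"

inserts = [
    {"input_file": "https://mydomain.com/my_audio_insert_1.wav", "type": "insert", "offset": 20.5},
    {"input_file": "https://mydomain.com/my_audio_insert_2.wav", "type": "insert", "offset": 120.3},
]

# HTTP basic auth, exactly as in the curl example above.
response = requests.post(url, json=inserts, auth=("username", "password"))
response.raise_for_status()
print(response.json())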

Additional API Audio Manipulations

In addition to audio inserts, using the Auphonic API offers a number of other audio manipulation options, which are not available via the web interface:

  • Cut start/end of audio files: See Docs
    In Single-track productions, this feature allows the user to cut the start and/or the end of the uploaded audio file. Crucially, time-based information such as chapters will be shifted accordingly.
  • Fade In/Out time of audio files: See Docs
    This allows you to set the fade in/out time (in ms) at the start/end of output files. The default fade time is 100ms, but values can be set between 0ms and 5000ms (see the sketch after this list).
    This feature is also available in our Auphonic Leveler Desktop App.
  • Adding intro and outro: See Docs
    Automatically add intros and outros to your main audio input file, just as in our web interface.
  • Add multiple intros or outros: See Docs
    Using our API, you can also add multiple intros or outros to a production. These intros or outros are played in series.
  • Overlapping intros/outros: See Docs
    This feature allows intros/outros to overlap either the main audio or the following/previous intros/outros.
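To make the fade parameters more tangible, here is a small Python sketch (not Auphonic code) that applies linear fades of a given length in milliseconds to a mono signal. The numpy test tone is just an illustration:

import numpy as np

def apply_fades(samples, sample_rate, fade_in_ms=100, fade_out_ms=100):
    # Apply a linear fade-in/out to a mono float signal (illustration only).
    out = np.asarray(samples, dtype=float).copy()
    n_in = int(sample_rate * fade_in_ms / 1000)
    n_out = int(sample_rate * fade_out_ms / 1000)
    if n_in > 0:
        out[:n_in] *= np.linspace(0.0, 1.0, n_in)
    if n_out > 0:
        out[-n_out:] *= np.linspace(1.0, 0.0, n_out)
    return out

# Example: one second of a 440 Hz tone at 44.1 kHz with the default 100 ms fades.
sr = 44100
t = np.arange(sr) / sr
faded = apply_fades(np.sin(2 * np.pi * 440 * t), sr)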

Conclusion

If you haven't explored our API already, the new audio inserts feature allows for greater flexibility, including dynamic ad insertion.
If you offer online services to podcasters, the Auphonic API also allows you to pass on Auphonic's audio processing algorithms to your customers.

If this is of interest to you or you have any new feature suggestions that you feel could benefit your company, please get in touch. We are always happy to extend the functionality of our products!








Resumable File Uploads to Auphonic

Large file uploads in a web browser are problematic, even in 2018. On a poor network connection, uploads can fail and have to be retried from the start.

At Auphonic, our users have to upload large audio and video files, or multiple media files when creating a multitrack production. To minimize any potential issues, we integrated various external services which are specialized for large file transfers, like FTP, SFTP, Dropbox, Google Drive, S3, etc.

To further minimize issues, as of today we have also released resumable and chunked direct file uploads in the web browser to auphonic.com.

If you are not interested in the technical details, please just go to the section Resumable Uploads in Auphonic below.

The Problem with Large File Uploads in the Browser

On mobile networks (which remain fragile) or unstable WiFi connections, file uploads are often interrupted and fail. There are also many areas in the world where connections are quite poor, which makes uploading big files frustrating.

After an interrupted file upload, the web browser must restart the whole upload from the beginning, which is a problem when it happens in the middle of a 4GB video file upload on a slow connection.
Furthermore, the longer an upload takes, the more likely it is that a network glitch will interrupt it, forcing yet another retry from the start.

The Solution: Chunked, Resumable Uploads

To avoid user frustration, we need to be able to detect network errors and potentially resume an upload without having to restart it from the beginning.

To achieve this, we have to split a file upload into smaller chunks directly within the web browser, so that these chunks can then be sent to the server afterwards.
If an upload fails or the user wants to pause, it is possible to resume it later and only send those chunks that have not already been uploaded.
If there is a network interruption or change, the upload will be retried automatically.

Companies like Dropbox, Google, Amazon AWS, etc. all have their own protocols and APIs for chunked uploads, but there are also some open source implementations available which offer resumable uploads:

resumable.js [link]:
"A JavaScript library providing multiple simultaneous, stable and resumable uploads via the HTML5 File API"
This solution is a JavaScript library only and requires that the protocol be implemented on the server as well.
tus.io [link]:
"Open Protocol for Resumable File Uploads"
Tus.io offers a simple, cheap and reusable stack for clients and servers (in many languages). They have a blog with further information about resumable uploads; see the tus blog.
plupload [link]:
A JavaScript library, similar to resumable.js, which requires a separate server implementation.

We chose to use resumable.js and developed our own server implementation.
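To make the server side more concrete, here is a heavily simplified Python/Flask sketch of a chunked upload endpoint in the style of the resumable.js protocol. This is not Auphonic's actual implementation: the parameter names follow the resumable.js defaults, and a real server would add authentication, validation and cleanup of stale chunks.

import os
from flask import Flask, request

app = Flask(__name__)
CHUNK_DIR = "/tmp/upload_chunks"

def chunk_path(identifier, number):
    return os.path.join(CHUNK_DIR, f"{identifier}.part{number}")

@app.route("/upload", methods=["GET"])
def check_chunk():
    # resumable.js asks which chunks already exist, so it can skip them on resume.
    identifier = request.args["resumableIdentifier"]
    number = int(request.args["resumableChunkNumber"])
    return ("", 200) if os.path.exists(chunk_path(identifier, number)) else ("", 404)

@app.route("/upload", methods=["POST"])
def upload_chunk():
    # Each POST carries one chunk of the file plus its metadata.
    identifier = request.form["resumableIdentifier"]
    number = int(request.form["resumableChunkNumber"])
    total = int(request.form["resumableTotalChunks"])
    os.makedirs(CHUNK_DIR, exist_ok=True)
    request.files["file"].save(chunk_path(identifier, number))

    # Once all chunks are present, stitch them together into the final file.
    if all(os.path.exists(chunk_path(identifier, n)) for n in range(1, total + 1)):
        with open(os.path.join(CHUNK_DIR, request.form["resumableFilename"]), "wb") as out:
            for n in range(1, total + 1):
                with open(chunk_path(identifier, n), "rb") as part:
                    out.write(part.read())
    return "", 200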

Resumable Uploads in Auphonic

If you upload files to a singletrack or multitrack production, you will see the upload progress bar and a pause button, which is one way to pause and resume an upload:

It is also possible to close the browser completely or shut down your computer during the upload, then edit the production and upload the file again later. This will just resume the file upload from the position where it was stopped before.
(Previously uploaded chunks are saved for 24h on our servers, after that you have to start the whole upload again.)

In case of a network problem or if you switch to a different connection, we will resume the upload automatically.
This should solve many problems which were reported by some users in the past!

You can of course also use any of our external services for stable incoming and outgoing file transfers!

Do you still have Uploading Issues?

We hope that uploads to Auphonic are much more reliable now, even on poor connections.

If you still experience any problems, please let us know.
We are very happy about any bug reports and will do our best to fix them!








Auphonic Adaptive Leveler Customization (Beta Update)

In late August, we launched the private beta program of our advanced audio algorithm parameters. After feedback from our users and many new experiments, we are proud to release a complete rework of the Adaptive Leveler parameters:

In the previous version, we based our Adaptive Leveler parameters on the Loudness Range descriptor (LRA), which is included in the EBU R128 specification.
Although it worked, it turned out to be very difficult to set a loudness range target for diverse audio content that includes speech, background sounds, music parts, etc. The results were not predictable and it was hard to find good target values.
Therefore we developed our own algorithm to measure the dynamic range of audio signals, which works similarly for speech, music and other audio content.
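For readers unfamiliar with loudness range measurements, here is a small Python sketch of a percentile-based range computed from short-term loudness values, similar in spirit to the EBU R128 LRA descriptor. It is only a crude illustration and not the algorithm Auphonic uses:

import numpy as np

def loudness_range(short_term_lufs, low_pct=10, high_pct=95):
    # Crude, percentile-based loudness range (in LU) of short-term loudness values.
    values = np.asarray(short_term_lufs, dtype=float)
    # Crude relative gate: ignore passages far below the average loudness
    # (the real EBU R128 gating scheme is more involved).
    values = values[values > values.mean() - 20]
    return float(np.percentile(values, high_pct) - np.percentile(values, low_pct))

# Example: mostly even narration with one much quieter passage.
print(loudness_range([-19, -18.5, -20, -23, -19, -18, -35, -19.5]))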

The following advanced parameters for our Adaptive Leveler allow you to customize which parts of the audio should be leveled (foreground, all, speech, music, etc.), how much they should be leveled (dynamic range), and how much micro-dynamics compression should be applied.

To try out the new algorithms, please join our private beta program and let us know your feedback!

Leveler Preset

The Leveler Preset defines which parts of the audio should be adjusted by our Adaptive Leveler:

  • Default Leveler:
    Our classic, default leveling algorithm as demonstrated in the Leveler Audio Examples. Use it if you are unsure.
  • Foreground Only Leveler:
    This preset reacts slower and levels foreground parts only. Use it if you have background speech or background music, which should not be amplified.
  • Fast Leveler:
    A preset which reacts much faster. It is built for recordings with fast and extreme loudness differences, for example, to amplify very quiet questions from the audience in a lecture recording, to balance fast-changing soft and loud voices within one audio track, etc.
  • Amplify Everything:
    Amplify as much as possible. Similar to the Fast Leveler, but also amplifies non-speech background sounds like noise.

Leveler Dynamic Range

Our default Leveler tries to normalize all speakers to a similar loudness so that a consumer in a car or subway doesn't feel the need to reach for the volume control.
However, in other environments (living room, cinema, etc.) or in dynamic recordings, you might want more level differences (Dynamic Range, Loudness Range / LRA) between speakers and within music segments.

The parameter Dynamic Range controls how much leveling is applied: Higher values result in more dynamic output audio files (less leveling). If you want to increase the dynamic range by 3dB (or LU), just increase the Dynamic Range parameter by 3dB.
We also like to call this the Loudness Comfort Zone: above a maximum and below a minimum possible level (the comfort zone), no leveling is applied. So if your input file already has a small dynamic range (is within the comfort zone), our leveler will just be bypassed.

Example Use Cases:
Higher dynamic range values should be used if you want to keep more loudness differences in dynamic narration or dynamic music recordings (live concert/classical).
It is also possible to utilize this parameter to generate automatic mixdowns with different loudness range (LRA) values for different target environments (very compressed ones like mobile devices or Alexa, very dynamic ones like home cinema, etc.).

Compressor

Controls Micro-Dynamics Compression:
The compressor reduces the volume of short and loud spikes like "p", "t" or laughter (short-term dynamics) and also shapes the sound of your voice (it will sound more or less "processed").
The Leveler, on the other hand, adjusts mid-term level differences, as a sound engineer does with the faders of an audio mixer, so that a listener doesn't have to adjust the playback volume all the time.
For more details please see Loudness Normalization and Compression of Podcasts and Speech Audio.

Possible values are:
  • Auto:
    The compressor setting depends on the selected Leveler Preset. Medium compression is used in Foreground Only and Default Leveler presets, Hard compression in our Fast Leveler and Amplify Everything presets.
  • Soft:
    Uses less compression.
  • Medium:
    Our default setting.
  • Hard:
    More compression; it especially tries to compress short and extreme level overshoots. Use this preset if you want your voice to sound very processed, or if you have extreme and fast-changing level differences.
  • Off:
    No short-term dynamics compression is used at all, only mid-term leveling. Switch off the compressor if you just want to adjust the loudness range without any additional micro-dynamics compression.

Separate Music/Speech Parameters

Use the switch Separate Music/Speech Parameters (top right) to see separate Adaptive Leveler parameters for music and speech segments, so that you can control all leveling details separately for speech and music parts:

For dialog intelligibility improvements in films and TV, it is important that the speech/dialog level and loudness range is not too soft compared to the overall programme level and loudness range. This parameter allows you to use more leveling in speech parts while keeping music and FX elements less processed.
Note: Speech, music and overall loudness and loudness range of your production are also displayed in our Audio Processing Statistics!

Example Use Cases:
  • Music live recordings or dynamic music mixes, where you want to amplify all speakers (speech dynamic range should be small) but keep the dynamic range within and between music segments (music dynamic range should be high).
  • Dialog intelligibility improvements for films and TV, without affecting music and FX elements.

Other Advanced Audio Algorithm Parameters

We also offer advanced audio parameters for our Noise and Hum Reduction and Global Loudness Normalization algorithms:

For more details, please see the Advanced Audio Algorithms Documentation.

Want to know more?

If you want to know more details about our advanced algorithm parameters (especially the leveler parameters), please listen to the following podcast interview with Chris Curran (Podcast Engineering School):
Auphonic’s New Advanced Features, with Georg Holzmann – PES 108

Advanced Parameters Private Beta and Feedback

At the moment the advanced algorithm parameters are for beta users only. This is to allow us to get user feedback, so we can change the parameters to suit user needs.
Please let us know your case studies, if you need any other algorithm parameters or if you have any questions!

Here are some private beta invitation codes:

jbwCVpLYrl 6zmLqq8o3z RXYIUbC6al QDmIZLuPKa JIrnGRZBgl SWQOWeZOBD ISeBCA9gTy w5FdsyhZVI qWAvANQ5mC twOjdHrit3
KwnL2Le6jB 63SE2V54KK G32AULFyaM 3H0CLYAwLU mp1GFNVZHr swzvEBRCVa rLcNJHUNZT CGGbL0O4q1 5o5dUjruJ9 hAggWBpGvj
ykJ57cFQSe 0OHAD2u1Dx RG4wSYTLbf UcsSYI78Md Xedr3NPCgK mI8gd7eDvO 0Au4gpUDJB mYLkvKYz1C ukrKoW5hoy S34sraR0BU
J2tlV0yNwX QwNdnStYD3 Zho9oZR2e9 jHdjgUq420 51zLbV09p4 c0cth0abCf 3iVBKHVKXU BK4kTbDQzt uTBEkMnSPv tg6cJtsMrZ
BdB8gFyhRg wBsLHg90GG EYwxVUZJGp HLQ72b65uH NNd415ktFS JIm2eTkxMX EV2C5RAUXI a3iwbxWjKj X1AT7DCD7V y0AFIrWo5l
We are happy to send further invitation codes to all interested users - please do not hesitate to contact us!

If you have an invitation code, you can enter it here to activate the advanced audio algorithm parameters:
Auphonic Algorithm Parameters Private Beta Activation








More Languages for Amazon Transcribe Speech Recognition

Until recently, Amazon Transcribe supported speech recognition in English and Spanish only.
Now they have added French, Italian and Portuguese as well - and a few other languages (including German) are in private beta.

Update March 2019:
Now Amazon Transcribe supports German and Korean as well.

The Auphonic Audio Inspector on the status page of a finished Multitrack Production including speech recognition.


Amazon Transcribe is integrated as speech recognition engine within Auphonic and offers accurate transcriptions (compared to other services) at low costs, including keywords / custom vocabulary support, word confidence, timestamps, and punctuation.
See the following AWS blog post and video for more information about recent Amazon Transcribe developments: Transcribe speech in three new languages: French, Italian, and Brazilian Portuguese.

Amazon Transcribe is also a perfect fit if you want to use our Transcript Editor because you will be able to see word timestamps and confidence values to instantly check which section/words should be corrected manually to increase the transcription accuracy:


Screenshot of our Transcript Editor with word confidence highlighting and the edit bar.

These features are also available if you use Speechmatics, but unfortunately not in our other integrated speech recognition services.

About Speech Recognition within Auphonic

Auphonic has built a layer on top of a few external speech recognition services to make audio searchable:
Our classifiers generate metadata during the analysis of an audio signal (music segments, silence, multiple speakers, etc.) to divide the audio file into small and meaningful segments, which are processed by the speech recognition engine. The results from all segments are then combined, and meaningful timestamps, simple punctuation and structuring are added to the resulting text.
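As a simplified illustration of this combination step (not Auphonic's actual code), the sketch below transcribes each detected segment separately and shifts the word timestamps by the segment's start time, so that all results line up on the original file's timeline. The transcribe callable stands in for whichever external speech recognition engine is used:

def combine_segments(segments, transcribe):
    # segments: list of dicts like {"start": 12.0, "audio": <segment audio>}
    # transcribe: callable returning [{"word": str, "start": float}, ...]
    #             with timestamps relative to the segment start.
    words = []
    for segment in segments:
        for word in transcribe(segment["audio"]):
            words.append({"word": word["word"],
                          "start": segment["start"] + word["start"]})
    return words

# Example with a dummy recognizer that "recognizes" one word per segment.
dummy = lambda audio: [{"word": audio, "start": 0.5}]
print(combine_segments([{"start": 0.0, "audio": "hello"},
                        {"start": 60.0, "audio": "world"}], dummy))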

To learn more about speech recognition within Auphonic, take a look at our Speech Recognition and Transcript Editor help pages or listen to our Speech Recognition Audio Examples.

A comparison table of our integrated services (price, quality, languages, speed, features, etc.) can be found here: Speech Recognition Services Comparison.

Conclusion

We hope that Amazon and others will continue to add new languages, to get accurate and inexpensive automatic speech recognition in many languages.

Don't hesitate to contact us if you have any questions or feedback about speech recognition or our transcript editor!







Scurry: A Race-To-Finish Scavenger Hunt App

We have a lot of traditions here at Viget, many of which you may have read about - TTT, FLF, Pointless Weekend. There are others, but you have to be an insider for more information on those.

Pointless Weekend is one of our favorite traditions, though. It’s been around for over a decade and some pretty fun work has come out of it over the years, like Storyboard, Baby Bookie, and Short Order. At a high level, we take 48 hours to build a tool, experiment, or stunt as a team, across all four of our offices. These projects are entirely separate from our client work and we use them to try out new technologies, explore roles on the team, and stress-test our processes.

The first step for a Pointless Weekend is assembling the teams. We had two teams this year, with a record number of participants. You can read about TrailBuddy, what the other team built, here.

The Scurry team was split between the DC and Durham offices, so all meetings were held via Hangout.

Once we were assembled, we set out to understand the constraints and the goals of our Pointless Project. We went into this weekend with an extra pep in our step, as we were determined to build something for the upcoming Viget 20th anniversary TTT this summer. Here’s what we knew we wanted:

  1. An activity all Vigets could do together, where they could create memories, and share broadly on social
  2. Something that we could use in a spotty network at C Lazy U Ranch in Colorado
  3. A product we can share with others: corporate groups, families and friends, schools, bachelor/ette parties

We landed on a scavenger hunt native app, which we named Scurry (Scavenger + Hurry = Scurry. Brilliant, right?). There are already a few scavenger hunt apps available, so we set out to create something that was:

  • Quick and easy for setting up hunts
  • Free and intuitive for users
  • A nice combination of trivia and activities
  • Social! We wanted to enable teams to share photos and progress

One of the main reasons we have Pointless Weekends is to test out new technologies and processes. In that vein, we tried out Notion as our central organizing tool - we used it for user journeys, data modeling, and even writing tickets, which we typically use GitHub for.

We tested out Notion as our primary tool, writing tickets and tracking progress.

When we built the app, we needed to prepare for spotty network service, as internet connectivity isn’t guaranteed at C Lazy U Ranch – where our Viget20 celebration will be. A Progressive Web Application (PWA) didn't make sense for our tech requirements, so we chose the route of creating a native application.

There are a number of options available to build native applications. But, as we were looking to make as much progress as possible in 48 hours, we chose one of our favorite frameworks: React Native. React Native allows developers to build true, cross-platform native applications, using some of our favorite technologies: JavaScript, the React framework, and a native-specific variant of CSS. We decided on the turn-key solution Expo. Expo has extra tooling allowing for easy development, deployment, and debugging.

This is a snapshot of our app and Expo.

Our frontend developers were able to immediately dive in, making screens and styling components, and quickly made the mockups in Whimsical a reality.

On the backend, we used the supported library to connect to the backend datastore, Firebase. Firebase is a hosted solution for data storage, with key features built in like authentication, realtime updates, and offline support. Our backend developer worked behind the frontend developers, hooking those views up to live data.

Both of these tools, Expo and Firebase, were easy to use and allowed us to focus on building a working application quickly, rather than being mired in setup or bespoke solutions to common problems.

Whimsical is one of our favorite tools for building out mockups of an app.

We made impressive progress in our 48-hour sprint, but there’s still some work to do. We have some additional features we hope to add before TTT, which will require additional testing and refining. For now, stay tuned and sign up for our newsletter. We’ll be sure to share when Scurry is ready for the world!



  • News & Culture


5 things to Note in a New Phoenix 1.5 App

Yesterday (Apr 22, 2020) Phoenix 1.5 was officially released!

There’s a long list of changes and improvements, but the big feature is better integration with LiveView. I’ve previously written about why LiveView interests me, so I was quite excited to dive into this release. After watching this awesome Twitter clone in 15 minutes demo from Chris McCord, I had to try out some of the new features. I generated a new Phoenix app with the --live flag, installed dependencies and started a server. Here are five new features I noticed.

1. Database actions in browser

Oops! Looks like I forgot to configure the database before starting the server. There’s now a helpful message and a button in the browser that can run the command for me. There’s a similar button when migrations are pending. This is a really smooth UX to fix a very common error while developing.

2. New Tagline!

Peace-of-mind from prototype to production

This phrase looked unfamiliar, so I went digging. Turns out that the old tagline was “A productive web framework that does not compromise speed or maintainability.” (I also noticed that it was previously “speed and maintainability” until this PR from 2019 was opened on a dare to clarify the language.)

Chris McCord updated the language while adding phx.new --live. I love this framing, particularly for LiveView. I am very excited about the progressive enhancement path for LiveView apps. A project can start out with regular, server-rendered HTML templates. This is a very productive way to work, and a great way to start a prototype for just about any website. Updating those templates to work with LiveView is an easier lift than a full rebuild in React. And finally, when you’re in production you have the peace-of-mind that the reliable BEAM provides.

3. Live dependency search

There’s now a big search bar right in the middle of the page. You can search through the dependencies in your app and navigate to the hexdocs for them. This doesn’t seem terribly useful, but is a cool demo of LiveView. The implementation is a good illustration of how compact a feature like this can be using LiveView.

4. LiveDashboard

This is the really cool one. In the top right of that page you see a link to LiveDashboard. Clicking it will take you to a page that looks like this.

This page is built with LiveView, and gives you a ton of information about your running system. This landing page has version numbers, memory usage, and atom count.

Clicking over to metrics brings you to this page.

By default it will tell you how long average queries are taking, but the metrics are configurable so you can define your own custom telemetry options.

The other tabs include process info, so you can monitor specific processes in your system:

And ETS tables, the in memory storage that many apps use for caching:

The dashboard is a really nice thing to get out of the box and makes it free for application developers to monitor their running system. It’s also developing very quickly. I tried an earlier version a week ago which didn’t support ETS tables, ports or sockets. I made a note to look into adding them, but it's already done! I’m excited to follow along and see where this project goes.

5. New LiveView generators

1.5 introduces a new generator, mix phx.gen.live. Like other generators, it will create all the code you need for a basic resource in your app, including the LiveView modules. The interesting part here is that it introduces patterns for organizing LiveView code, which is something I have previously been unsure about. At first glance, the new organization makes sense and feels like a good approach. I look forward to seeing how this works on a real project.

Conclusion

The 1.5 release brings more changes under the hood of course, but these are the first five differences you’ll notice after generating a new Phoenix 1.5 app with LiveView. Congratulations to the entire Phoenix team, but particularly José Valim and Chris McCord for getting this work released.



  • Code
  • Back-end Engineering