voice

'I was ecstatic when I gave my voice to Elsa!'

'Today, when I watch animated films again, I still feel connected with the child in me.' Sunidhi Chauhan talks Frozen.




voice

The final voicemails / Max Ritvo ; edited by Louise Glück

Hayden Library - PS3618.I8 A6 2018




voice

Usability Testing for Voice Content

It’s an important time to be in voice design. Many of us are turning to voice assistants in these times, whether for comfort, recreation, or staying informed. As the interest in interfaces driven by voice continues to reach new heights around the world, so too will users’ expectations and the best practices that guide their design.

Voice interfaces (also known as voice user interfaces or VUIs) have been reinventing how we approach, evaluate, and interact with user interfaces. The impact of conscious efforts to reduce close contact between people will continue to increase users’ expectations for the availability of a voice component on all devices, whether that entails a microphone icon indicating voice-enabled search or a full-fledged voice assistant waiting patiently in the wings for an invocation.

But voice interfaces present inherent challenges and surprises. In this relatively new realm of design, the intrinsic twists and turns in spoken language can make things difficult for even the most carefully considered voice interfaces. After all, spoken language is littered with fillers (in the linguistic sense of utterances like hmm and um), hesitations and pauses, and other interruptions and speech disfluencies that present puzzling problems for designers and implementers alike.

Once you’ve built a voice interface that introduces information or permits transactions in a rich way for spoken-language users, the easy part is done. Voice interfaces also surface unique challenges when it comes to usability testing and robust evaluation of your end result. But there are advantages, too, especially when it comes to accessibility and cross-channel content strategy. Because voice-driven content sits at the opposite end of the spectrum from the traditional website, it confers an additional benefit: it’s an effective way to analyze and stress-test just how channel-agnostic your content truly is.

The quandary of voice usability

Several years ago, I led a talented team at Acquia Labs to design and build a voice interface for Digital Services Georgia called Ask GeorgiaGov, which allowed citizens of the state of Georgia to access content about key civic tasks, like registering to vote, renewing a driver’s license, and filing complaints against businesses. Based on copy drawn directly from the frequently asked questions section of the Georgia.gov website, it was the first Amazon Alexa interface integrated with the Drupal content management system ever built for public consumption. Built by my former colleague Chris Hamper, it also offered a host of impressive features, like allowing users to request the phone number of individual government agencies for each query on a topic.
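
To give a sense of how such an integration hangs together, here is a minimal sketch of an Alexa-style intent handler that pulls FAQ content from a Drupal site. It is not the actual Ask GeorgiaGov implementation; the endpoint path, field names, and slot names are hypothetical and for illustration only.

    const https = require('https');

    // Fetch an FAQ item from a hypothetical Drupal JSON endpoint.
    // The domain, path, and query parameter are placeholders.
    function fetchFaq(query) {
      const url = 'https://drupal.example/api/faq?query=' + encodeURIComponent(query);
      return new Promise((resolve, reject) => {
        https
          .get(url, (res) => {
            let body = '';
            res.on('data', (chunk) => (body += chunk));
            res.on('end', () => resolve(JSON.parse(body)));
          })
          .on('error', reject);
      });
    }

    // Lambda-style entry point invoked by Alexa with a JSON request envelope.
    exports.handler = async (event) => {
      const topic = event.request.intent.slots.Topic.value; // e.g., "registering to vote"
      const faq = await fetchFaq(topic);

      return {
        version: '1.0',
        response: {
          outputSpeech: { type: 'PlainText', text: faq.voiceAnswer },
          // Keep the session open so the user can drill down or request an agency phone number.
          shouldEndSession: false,
        },
      };
    };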

Designing and building web experiences for the public sector is a uniquely challenging endeavor due to requirements surrounding accessibility and frequent budgetary constraints. Out of necessity, governments need to be exacting and methodical not only in how they engage their citizens and spend money on projects but also in how they incorporate new technologies into the mix. For most government entities, voice is a completely different world, with many potential pitfalls.

At the outset of the project, the Digital Services Georgia team, led by Nikhil Deshpande, expressed their most important need: a single content model across all their content irrespective of delivery channel, as they only had resources to maintain a single rendition of each content item. Despite this editorial challenge, Georgia saw Alexa as an exciting opportunity to open new doors to accessible solutions for citizens with disabilities. And finally, because there were relatively few examples of voice usability testing at the time, we knew we would have to learn on the fly and experiment to find the right solution.

Eventually, we discovered that all the traditional approaches to usability testing that we’d executed for other projects were ill-suited to the unique problems of voice usability. And this was only the beginning of our problems.

How voice interfaces improve accessibility outcomes

Any discussion of voice usability must consider some of the most experienced voice interface users: people who use assistive devices. After all, accessibility has long been a bastion of web experiences, but it has only recently become a focus of those implementing voice interfaces. In a world where refreshable Braille displays and screen readers prize the rendering of web-based content into synthesized speech above all, the voice interface seems like an anomaly. But in fact, the exciting potential of Amazon Alexa for disabled citizens represented one of the primary motivations for Georgia’s interest in making their content available through a voice assistant.

Questions surrounding accessibility with voice have surfaced in recent years due to the perceived user experience benefits that voice interfaces can offer over more established assistive devices. Because screen readers make no exceptions when they recite the contents of a page, they can occasionally present superfluous information and force the user to wait longer than they’re willing. In addition, with an effective content schema, it can often be the case that voice interfaces facilitate pointed interactions with content at a more granular level than the page itself.

Though it can be difficult to convince even the most forward-looking clients of accessibility’s value, Georgia has been not only a trailblazer but also a committed proponent of content accessibility beyond the web. The state was among the first jurisdictions to offer a text-to-speech (TTS) phone hotline that read web pages aloud. After all, state governments must serve all citizens equally—no ifs, ands, or buts. And while these are still early days, I can see voice assistants becoming new conduits, and perhaps more efficient channels, by which disabled users can access the content they need.

Managing content destined for discrete channels

Whereas voice can improve the accessibility of content, it’s seldom the case that web and voice are the only channels through which we must expose information. For this reason, one piece of advice I often give to content strategists and architects at organizations interested in pursuing voice-driven content is to never think of voice content in isolation. Siloing it is the same misguided approach that has led to mobile applications and other discrete experiences delivering orphaned or outdated content to users who expect all content on the website to be up to date and accessible through other channels as well.

After all, we’ve trained ourselves for many years to think of content in the web-only context rather than across channels. Our closely held assumptions about links, file downloads, images, and other web-based marginalia and miscellany are all aspects of web content that translate poorly to the conversational context—and particularly the voice context. Increasingly, we all need to concern ourselves with an omnichannel content strategy that straddles all those channels in existence today and others that will doubtlessly surface over the horizon.

With the advantages of structured content in Drupal 7, Georgia.gov already had a content model amenable to interlocution in the form of frequently asked questions (FAQs). While question-and-answer formats are convenient for voice assistants because queries for content tend to come in the form of questions, the returned responses likewise need to be as voice-optimized as possible.
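
As a rough illustration of what such a content model might look like (the field names here are hypothetical, not Georgia.gov’s actual Drupal schema), each FAQ item can carry a single, voice-friendly rendition of its answer along with the related agency details a caller might ask for next:

    // Hypothetical shape of a single channel-agnostic FAQ item.
    // Field names and values are examples, not the real Georgia.gov schema.
    const faqItem = {
      question: 'How do I register to vote in Georgia?',
      // One rendition serves web and voice alike, so the copy avoids
      // "Read more"-style calls to action and context-free links.
      answer:
        'You can register to vote online, by mail, or in person at your county registrar\'s office.',
      relatedAgency: {
        name: 'Georgia Secretary of State',
        phone: '555-555-0100', // placeholder number
      },
    };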

For Georgia.gov, the need to preserve a single rendition of all content across all channels led us to perform a conversational content audit, in which we read aloud all of the FAQ pages, putting ourselves in the shoes of a voice user, and identified key differences between how a user would interpret the written form and how they would parse the spoken form of that same content. After some discussion with the editorial team at Georgia, we opted to limit calls to action (e.g., “Read more”), links lacking clear context in surrounding text, and other situations confusing to voice users who cannot visualize the content they are listening to.

Here’s a table containing examples of how we converted certain text on FAQ pages to counterparts more appropriate for voice. Reading each sentence aloud, one by one, helped us identify cases where users might scratch their heads and say “Huh?” in a voice context.

Before: Learn how to change your name on your Social Security card.
After: The Social Security Administration can help you change your name on your Social Security card.

Before: You can receive payments through either a debit card or direct deposit. Learn more about payments.
After: You can receive payments through either a debit card or direct deposit.

Before: Read more about this.
After: In Georgia, the Family Support Registry typically pulls payments directly from your paycheck. However, you can send your own payments online through your bank account, your credit card, or Western Union. You may also send your payments by mail to the address provided in your court order.

In areas like content strategy and content governance, content audits have long been key to understanding the full picture of your content, but it doesn’t end there. Successful content audits can run the gamut from automated checks for orphaned content or overly wordy articles to more qualitative analyses of how content adheres to a specific brand voice or certain design standards. For a content strategy truly prepared for channels both here and still to come, a holistic understanding of how users will interact with your content in a variety of situations is a baseline requirement today.

Other conversational interfaces have it easier

Spoken language is inherently hard. Even the most gifted orators can have trouble with it. It’s littered with mistakes, starts and stops, interruptions, hesitations, and a vertiginous range of other uniquely human transgressions. The written word, because it’s committed instantly to a mostly permanent record, is tame, staid, and carefully considered in comparison.

When we talk about conversational interfaces, we need to draw a clear distinction between user experiences that traffic in written language and those that traffic in spoken language. As we know from the relative solidity of written language and literature versus the comparative transience of spoken language and oral traditions, in many ways the two couldn’t be more different from one another. The implications for designers are significant because spoken language, from the user’s perspective, lacks a graphical equivalent to which those scratching their heads can readily refer. We’re dealing with the spoken word and aural affordances, not pixels, written help text, or visual affordances.

Why written conversational interfaces are easier to evaluate

One of the privileges that chatbots and textbots enjoy over voice interfaces is the fact that by design, they can’t hide the previous steps users have taken. Any conversational interface user working in the written medium has access to their previous history of interactions, which can stretch back days, weeks, or months: the so-called backscroll. A flight passenger communicating with an airline through Facebook Messenger, for example, knows that they can merely scroll up in the chat history to confirm that they’ve already provided the company with their e-ticket number or frequent flyer account information.

This has outsize implications for information architecture and conversational wayfinding. Since chatbot users can consult their own written record, it’s much harder for things to go completely awry when they make a move they didn’t intend. Recollection is much more difficult when you have to remember what you said a few minutes ago off the top of your head rather than scrolling up to the information you provided a few hours or weeks ago. An effective chatbot interface may, for example, enable a user to jump back to a much earlier, specific place in a conversation’s history. Voice interfaces that live perpetually in the moment have no such luxury.

Eye tracking only works for visual components

In many cases, those who work with chatbots and messaging bots (especially those leveraging text messages or other messaging services like Facebook Messenger, Slack, or WhatsApp) have the unique privilege of benefiting from a visual component. Some conversational interfaces now insert other elements into the conversational flow between a machine and a person, such as embedded conversational forms (like SPACE10’s Conversational Form) that allow users to enter rich input or select from a range of possible responses.

The success of eye tracking in more traditional usability testing scenarios highlights its appropriateness for visual interfaces such as websites, mobile applications, and others. However, from the standpoint of evaluating voice interfaces that are entirely aural, eye tracking serves only the limited (but still interesting from a research perspective) purpose of assessing where the test subject is looking while speaking with an invisible interlocutor—not whether they are able to use the interface successfully. Indeed, eye tracking is only a viable option for voice interfaces that have some visual component, like the Amazon Echo Show.

Think-aloud and concurrent probing interrupt the conversational flow

A well-worn approach for usability testing is think-aloud, which lets users verbalize their (frequently qualitative) impressions while interacting with the user experience in question. Paired with eye tracking, think-aloud adds considerable dimension to a usability test for visual interfaces such as websites and web applications, as well as other visually or physically oriented devices.

Another is concurrent probing (CP). Probing involves the use of questions to gather insights about the interface from users, and Usability.gov describes two types: concurrent, in which the researcher asks questions during interactions, and retrospective, in which questions only come once the interaction is complete.

Conversational interfaces that utilize written language rather than spoken language can still be well-suited to think-aloud and concurrent probing approaches, especially for the components in the interface that require manual input, like conversational forms and other traditional UI elements interspersed throughout the conversation itself.

But for voice interfaces, think-aloud and concurrent probing are highly questionable approaches and can catalyze a variety of unintended consequences, including accidental invocations of trigger words (such as Alexa mishearing “selected” as “Alexa”) and introduction of bad data (such as speech transcription registering both the voice interface and test subject). After all, in a hypothetical think-aloud or CP test of a voice interface, the user would be responsible for conversing with the chatbot while simultaneously offering up their impressions to the evaluator overseeing the test.

Voice usability tests with retrospective probing

Retrospective probing (RP), a lesser-known approach for usability testing, is seldom seen in web usability testing due to its chief weakness: the fact that we have awful memories and rarely remember what occurred mere moments earlier with anything that approaches total accuracy. (This might explain why the backscroll has joined the pantheon of rigid recordkeeping currently occupied by cuneiform, the printing press, and other means of concretizing information.)

For users of voice assistants lacking scrollable chat histories, retrospective probing introduces the potential for subjects to include false recollections in their assessments or to misinterpret the conclusion of their conversations. That said, retrospective probing permits the participant to take some time to form their impressions of an interface rather than dole out incremental tidbits in a stream of consciousness, as would more likely occur in concurrent probing.

What makes voice usability tests unique

Voice usability tests have several unique characteristics that distinguish them from web usability tests or other conversational usability tests, but some of the same principles unify both visual interfaces and their aural counterparts. As always, “test early, test often” is a mantra that applies here, as the earlier you can begin testing, the more robust your results will be. Having an individual to administer a test and another to transcribe results or watch for signs of trouble is also an effective best practice in settings beyond just voice usability.

Interference from poor soundproofing or external disruptions can derail a voice usability test even before it begins. Many large organizations will have soundproof rooms or recording studios available for voice usability researchers. For the vast majority of others, a mostly silent room will suffice, though absolute silence is optimal. In addition, many subjects, even those well-versed in web usability tests, may be unaccustomed to voice usability tests in which long periods of silence are the norm to establish a baseline for data.

How we used retrospective probing to test Ask GeorgiaGov

For Ask GeorgiaGov, we used the retrospective probing approach almost exclusively to gather a range of insights about how our users were interacting with voice-driven content. We endeavored to evaluate interactions with the interface early and diachronically. In the process, we asked each of our subjects to complete two distinct tasks that would require them to traverse the entirety of the interface by asking questions (conducting a search), drilling down into further questions, and requesting the phone number for a related agency. Though this would be a significant ask of any user working with a visual interface, the unidirectional focus of voice interface flows, by contrast, reduced the likelihood of lengthy accidental detours.

Here are a couple of example scenarios:

You have a business license in Georgia, but you’re not sure if you have to register on an annual basis. Talk with Alexa to find out the information you need. At the end, ask for a phone number for more information.

You’ve just moved to Georgia and you know you need to transfer your driver’s license, but you’re not sure what to do. Talk with Alexa to find out the information you need. At the end, ask for a phone number for more information.

We also peppered users with questions after the test concluded to learn about their impressions through retrospective probing:

  • “On a scale of 1–5, based on the scenario, was the information you received helpful? Why or why not?”
  • “On a scale of 1–5, based on the scenario, was the content presented clear and easy to follow? Why or why not?”
  • “What’s the answer to the question that you were tasked with asking?”

Because state governments also routinely deal with citizen questions having to do with potentially traumatic issues such as divorce and sexual harassment, we also offered the choice for participants to opt out of certain categories of tasks.

While this testing procedure yielded compelling results that indicated our voice interface was performing at the level it needed to despite its experimental nature, we also ran into considerable challenges during the usability testing process. Restoring Amazon Alexa to its initial state and troubleshooting issues on the fly proved difficult during the initial stages of the implementation, when bugs were still common.

In the end, we found that many of the same lessons that apply to more storied examples of usability testing were also relevant to Ask GeorgiaGov: the importance of testing early and testing often, the need for faithful yet efficient transcription, and the surprising staying power of bugs when integrating disparate technologies. Despite Ask GeorgiaGov’s many similarities to other interface implementations in terms of technical debt and the role of usability testing, we were overjoyed to hear from real Georgians whose engagement with their state government could not be more different from before.

Conclusion

Many of us may have started building voice content interfaces to experiment with newfangled channels, or to better serve disabled people and people newer to the web. Now, they are necessities for many others, especially as social distancing practices continue to take hold worldwide. Nonetheless, it’s crucial to keep in mind that voice should be only one component of a channel-agnostic strategy equipped for content ripped away from its usual contexts. Building usable voice-driven content experiences can teach us a great deal about how we should envisage our milieu of content and its future in the first place.

Gone are the days when we could write a page in HTML and call it a day; content now needs to be rendered through synthesized speech, augmented reality overlays, digital signage, and other environments where users will never even touch a personal computer. By focusing on structured content first and foremost with an eye toward moving past our web-based biases in developing our content for voice and others, we can better ensure the effectiveness of our content on any device and in any form factor.

Eight months after we finished building Ask GeorgiaGov in 2017, we conducted a retrospective to inspect the logs amassed over the past year. The results were striking. Vehicle registration, driver’s licenses, and the state sales tax comprised the most commonly searched topics. 79.2% of all interactions were successful, an achievement for one of the first content-driven Alexa skills in production, and 71.2% of all interactions led to the issuance of a phone number that users could call for further information.
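
As a rough sketch of how figures like these might be tallied (the log format below is hypothetical, not the structure of the actual Ask GeorgiaGov logs), each percentage is just a count over the recorded interactions:

    // Hypothetical interaction records; the real logs were structured differently.
    const interactions = [
      { topic: 'vehicle registration', successful: true, phoneNumberIssued: true },
      { topic: 'drivers license', successful: true, phoneNumberIssued: false },
      { topic: 'sales tax', successful: false, phoneNumberIssued: false },
      // ...one entry per recorded interaction
    ];

    // Percentage of interactions matching a predicate, e.g. pct(i => i.successful).
    const pct = (predicate) =>
      ((100 * interactions.filter(predicate).length) / interactions.length).toFixed(1) + '%';

    console.log('Successful interactions:', pct((i) => i.successful));
    console.log('Phone number issued:', pct((i) => i.phoneNumberIssued));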

But deep in the logs we implemented for the Georgia team’s convenience, we found a number of perplexing 404 Not Found errors related to a search term that kept being recorded over and over again as “Lawson’s.” After some digging and consulting the native Georgians in the room, we discovered that one of our dear users with a particularly strong drawl was repeatedly pronouncing “license” in her native dialect to no avail.

As this anecdote highlights, just as no user experience can be truly perfect for everyone, voice content is an environment where imperfections can highlight considerations we missed in developing cross-channel content. And just as we have much to learn when it comes to the new shapes content can take as it jumps off the screen and out the window, it seems our voice interfaces still have a ways to go before they take over the world too.

Special thanks to Nikhil Deshpande for his feedback during the writing process.




voice

A Cycle of songs: for female voice, trumpet or clarinet, and piano / Susan Kander

STACK SCORE Mu pts K1311 cyc




voice

Touch not the cat: voice and solo tambourine (one performer), 2015 / words and music by David Lang

STACK SCORE Mu pts L251 tou




voice

Jubiläums-Liederalbum: 14 Lieder für hohe Singstimme und Klavier = Anniversary songbook: 14 songs for high voice and piano / Clara Schumann ; in collaboration with the Schumann-Haus Leipzig

STACK SCORE Mu pts Sch85.9 song a




voice

Songs and Haiku: for female voice and piano: (1987-2004) / Jonathan Harvey

STACK SCORE Mu pts H262 songs




voice

Twelve easy songs: for low voice and piano / Charles Ives ; edited by Neely Bruce and James B. Sinclair

STACK SCORE Mu pts Iv3 song c




voice

Saarikoski songs: for voice and piano: (2013-2017) / Kaija Saariaho ; text: Pentti Saarikoski

STACK SCORE Mu pts Sa12 saa




voice

Twelve easy songs: for high voice and piano / Charles Ives ; edited by Neely Bruce and James B. Sinclair

STACK SCORE Mu pts Iv3 song b




voice

The Oxford handbook of voice perception / edited by Sascha Frühholz and Pascal Belin

Hayden Library - BF463.S64 O83 2019




voice

Statistical modeling in biomedical research: contemporary topics and voices in the field / Yichuan Zhao, Ding-Geng (Din) Chen, editors

Online Resource




voice

Fictions of Authority: Women Writers and Narrative Voice.

Online Resource




voice

New Voices in Psychosocial Studies [electronic resource] / edited by Stephen Frosh




voice

A place outside the law: forgotten voices from Guantanamo / Peter Jan Honigsberg

Dewey Library - HV6432.H67 2019




voice

MPs voice reservation on SC verdict on homosexuality

SC set aside Delhi HC's 2009 judgment, terms gay sex a punishable offence.




voice

JSJ 274: Amazon Voice Services and Echo Skills with Terrance Smith

On today’s episode of JavaScript Jabber, we have panelists Joe Eames, Aimee Knight, Charles Max Wood, and we have special guest Terrance Smith. He’s here today to talk about the Amazon Alexa platform. So tune in and learn more about Amazon Voice Services!

[01:00] – Introduction to Terrance Smith

Terrance is from Hacker Ferrer Software. They hack love into software.

[01:30] – Amazon Voice Service

What I’m working on is called My CareTaker, a name that’s probably pending change. What it does, and what it will do, is help you be there as a caretaker’s aid for the person in your life. If you have to take care of an older parent, My CareTaker will be there in your place if you have to work that day. It will be your liaison to that person. Your mom and dad can talk to My CareTaker, and My CareTaker can signal you via SMS, email message, or tweet, anything on your usage dashboard, and you would be able to respond. It’s there when you’re not.

[04:35] – Capabilities

Getting started with it, there are different layers. The first layer is the Skills Kit for generally getting into the Amazon IoT. It has a limited subset of the functionality. You can give commands. The device parses them, sends them to Amazon’s endpoint, Amazon sends a call back to your API endpoint, and you can do whatever you want. That is the first level. You can make it do things like turn on your light switch, start your car, change your thermostat, or make an API call to some website somewhere to do anything.

[05:50] – Skills Kit

The Skills Kit is different from AVS. With the Skills Kit, you can install it on any device. You spin up a web service and register it on Amazon’s website. As long as you have an endpoint, you can register, say, an Amazon Web Services Lambda. Start that up and do something. The Skills Kit is literally the web endpoint response. Amazon Voice Services is a bit more in-depth.

[07:00] – Steps for programming

With the Skills Kit, you register what would be your utterance, your skill name, and you would give it a couple of sets of phrases to accept. Say you have a skill that can start a car; your skill is “Car Starter”: “Alexa, tell Car Starter to start the car.” At which point, your web service will be notified that that is the utterance. It literally has a case statement. You can have any number of individual conditional branches outside of that. The limitation of the Skills Kit is that you have to have the “tell” or “ask” and the name of the skill to do whatever. It’s also going to be publicly accessible. For the most part, it’s literally a web service.
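
As a rough sketch of what that case statement might look like in practice (a hypothetical Node.js Lambda handler, not Terrance’s actual code; the intent names and the car-starting call are made up), the branching happens on the intent name Amazon sends to your endpoint:

    // Hypothetical AWS Lambda handler for a "Car Starter" skill.
    exports.handler = async (event) => {
      const intentName =
        event.request.type === 'IntentRequest' ? event.request.intent.name : null;
      let speech;

      switch (intentName) {
        case 'StartCarIntent': // "Alexa, tell Car Starter to start the car."
          // await startTheCar(); // call whatever API actually starts the car
          speech = 'Okay, starting the car.';
          break;
        case 'StopCarIntent':
          speech = 'Okay, turning the car off.';
          break;
        default:
          speech = "Sorry, I didn't catch that.";
      }

      return {
        version: '1.0',
        response: {
          outputSpeech: { type: 'PlainText', text: speech },
          shouldEndSession: true,
        },
      };
    };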

[10:55] – Boilerplates for AWS Lambda

Boilerplates can be used if you want to develop for production. If you publish a skill, you get free AVS instance time. You can host your skill for free for some amount of time. There are GUI tools to make it easier, but if you’re a developer, you’re probably going to spin up a web service and deal with it that way.

[11:45] – Do you have to have an Amazon Echo?

At one point you had to have the Echo, but now there is something called Echoism, which allows you to run it in your browser. In addition to that, you can potentially install it on a device like a Raspberry Pi and run Amazon Voice Services. The actual engine is on your PC, Mac, or Linux box. You have different options.

[12:35] – Machine learning

There are certain things that Amazon Alexa understands now that it didn’t last year or the year before, like understanding utterances and phrases better. A lot of the machine learning is definitely under the covers. The other portion of it is the Alexa Voice Service, which is a whole engine that gives you untethered access to other portions, like how to handle responses. That’s where you can build a custom device and take it apart. So the API that we’re working with here is just using JSON and HTTP.
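
For a sense of what that JSON looks like, here is a trimmed example of the request envelope Amazon posts to a skill’s endpoint, shown as a JavaScript object; the IDs are placeholders, and real requests carry additional session and context fields.

    // Trimmed example of an Alexa "IntentRequest" payload.
    const exampleRequest = {
      version: '1.0',
      session: { new: true, sessionId: 'amzn1.echo-api.session.example' },
      request: {
        type: 'IntentRequest',
        requestId: 'amzn1.echo-api.request.example',
        intent: {
          name: 'StartCarIntent',
          slots: { Vehicle: { name: 'Vehicle', value: 'car' } },
        },
      },
    };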

[16:40] – Amazon Echo Show

You have that full real-time back-and-forth communication ability, but there is no video streaming or video processing ability yet. You can utilize the engine in such a way that Amazon Voice Services can work with your existing tool language. If you have a Raspberry Pi and you have a camera attached to it, you can potentially work within that. But again, the official APIs and docs for that are not available yet.

[27:20] – Challenges

There’s an appliance in this house that listens to everything I say. There’s that natural inclination to not trust it, especially among the older generations. Getting past that is getting people to use the device. Some of the programming challenges are getting the communication to work and doing something that Alexa isn’t pre-programmed to do. There isn’t a lot of documentation out there, just a couple of examples. The original examples are written in Java, and trying to convert them to Node or JavaScript is one of the technical challenges. In addition, getting it installed and set up takes at least an hour at the beginning. There’s also a learning curve involved.

[29:35] – Is your product layered in an Echo or is your product a separate device?

Terrance’s product is a completely separate device. One of the functionalities of his program is medicine reminders. It can only respond to whatever the API calls from Amazon tell you to respond to, but it can’t do anything like send something back. It can do an immediate audio response with a picture or turn a light switch on and off, but it can’t send a message back, say, two hours from now. You do want your Alexa device to have (verbally) a list of notifications like on your phone. TL;DR, Terrance can go a little further with just the Skills Kit.

[32:00] – Could you set it up through a web server?

Yes. There are examples out there. There’s Alexa in the browser. You can open up a browser and communicate with that. There are examples of it being installed like an app. You can deploy it to your existing iPhone app or Android app and have it interact that way. Or you can have it interact independently on a completely different device like a Raspberry Pi. But not a lot of folks are using it that way.

[33:10] – Monetization

Amazon isn’t changing anything in terms of monetization. They make discovery a lot easier, though. If you know the name of the app, you can just say, “Alexa, [tell the name of the app].” It will do a lazy load of the actual skill and add it to your available skills list.

However, there is something called the Alexa Fund, which is kind of a startup fund that they have, which you can apply for. If you’re doing something interesting, there are a number of things you have to do. Ideally, you can get funding for whatever your product is. It is an available avenue for you.

[36:25] – More information, documentation, walkthroughs

The number one place to go to get started is the Amazon website. They have the Conexant 4-Mic Far-Field Dev Kit. It has 4 mics and already has a lot of what you need. You have to boot it up and/or SSH into it, or plug it up and code it. They have a couple of these kits for $300 to $400. It’s one of the safer and simpler options.

There are also directions on the AVS site, under Alexa Voice Services, and you can get to the GitHub from there. That will give you directions for using the Raspberry Pi. If not that, there’s also the Slack chatroom at alexaslack.com. Travis Teague is the guy in charge there.

Picks

Joe Eames

Aimee Knight

Charles Max Wood

Terrance Smith




voice

Incorporating the Patient Voice Into Shared Decision-Making for the Treatment of Aortic Stenosis

Increased attention has focused on shared decision-making (SDM) and use of decision aids for treatment decisions in cardiology. In this issue of JAMA Cardiology, Coylewright et al report the results of a rigorously performed pilot study on the use of a decision aid to facilitate SDM for patients with symptomatic severe aortic stenosis (AS) at high or prohibitive risk for surgery considered for transcatheter aortic valve replacement vs medical therapy. Comparisons were made between encounters before clinicians were trained to use a decision aid and the first and fifth encounters after a decision aid was used. The patient-clinician interactions were audio recorded and later coded by independent reviewers using a validated measure to assess SDM. This mixed-methods study found that SDM significantly improved in a stepwise manner from the initial usual care encounter (before use of a decision aid) to the first and then fifth encounters after implementation of the decision aid. Along with this improvement in SDM, patients (n = 35) demonstrated increased knowledge about their treatment choices and reported increased satisfaction in their care with no increase in decisional conflict. In contrast, clinicians (n = 6) reported that they believed they already engaged in SDM prior to use of the decision aid and, after multiple uses of the decision aid, believed patients did not understand or benefit from this tool. The disconnect between clinician and patient perspectives was sobering and has implications for the adoption of decision aids or other tools to facilitate SDM in the clinical setting. Notable limitations of the study, which are acknowledged by the authors, include (1) small sample size (of clinicians and patients); (2) the decision aid is most useful for the relatively smaller number of patients at high or prohibitive risk for surgery for whom transcatheter aortic valve replacement and medical therapy may both be reasonable options; and (3) the lack of diversity in the clinicians (all male), which reflects the current demographics of interventional cardiology and cardiac surgery.




voice

Unheard voices in the Bible / guest editor, Dr L. Sutton




voice

Hearing John's voice : insights for teaching and preaching / M. Eugene Boring

Boring, M. Eugene, author





voice

Climate change and the voiceless: protecting future generations, wildlife, and natural resources / Randall S. Abate, Monmouth University

Dewey Library - K3585.A23 2020




voice

Voices from S-21 : terror and history in Pol Pot's secret prison / David Chandler

Chandler, David P. (David Porter), 1933-




voice

Flow: the rhythmic voice in rap music / Mitchell Ohriner

STACK BOOKS MT146.O37 2019




voice

The Oxford handbook of voice studies / edited by Nina Sun Eidsheim and Katherine Meizel

STACK BOOKS BF592.V64 O94 2019




voice

Bread and roses: voices of Australian academics from the working class / edited by Dee Michell, Jacqueline Z. Wilson and Verity Archer

Online Resource




voice

Global voices in education: Ruth Wong Memorial Lectures / Oon Seng Tan, Hee Ong Wong, Seok Hoon Seng, editors

Online Resource




voice

China CEO II: Voices of Experience from 25 Top Executives Leading MNCs in China


 

Straight from the China CEO: Advice on leading operations in the world’s fastest-moving, highest stakes market. 25 top executives leading high-profile multinational companies in China, as well as seasoned and respected China-based consultants, give their front-line advice on succeeding in this market.

Soaring spending power among the world’s largest consumer population, radical digital transformation creating a cash-less, ‘always on’ society, severe







voice

When Lata Mangeshkar almost lost her voice

'Many thought it was the end for me,' Lataji told Subhash K Jha. 'I just couldn't sing!'




voice

Voices from the rainforest : testimonies of a threatened people / Bruno Manser ; translated by Anthony Dixon and Johanna Veth

Manser, Bruno, 1954- author




voice

Architectural voices of India: a blend of contemporary and traditional ethos / by Apurva Bose Dutta

Rotch Library -




voice

Contemporary voices: from the Asian and Islamic art worlds / Olivia Sand

Rotch Library - N7260.S26 2018




voice

Louis Albert Bourgault-Ducoudray 1840-1910 French composer, opera, 30 Melodies populaires de la Basse-Bretagne with French translations for voice. 1885




voice

The experience of loss of voice in adolescent girls




voice

Packet loss concealment in voice over the Internet




voice

Finding my voice




voice

Voices from a marginalized population




voice

Giving voice to parents of young children with challenging behavior




voice

The voices of sex workers (prostitutes?) and the dilemma of feminist discourse




voice

Multiple voices and the single individual




voice

Voice recognition system based on intra-modal fusion and accent classification




voice

"Oye mi voz!" (hear my voice!)




voice

Contemporary afro-cuban voices in tampa :




voice

"the wound and the voiceless :




voice

Graduate Voice Recital




voice

Graduate Voice Recital




voice

Graduate Voice Recital