speech

The 'United States of Europe' speech that Winston Churchill so nearly made

A recently discovered document sheds new light on the wartime leader’s ‘iron curtain’ address

It was a speech that electrified the world, one that coined a phrase that was to characterise the political era that followed the second world war. But its content could have been very different, reveals a document freshly unearthed by a historian researching the life of Winston Churchill.

On 5 March 1946 in Fulton, Missouri, before a huge crowd which included the US president, Harry Truman, Britain’s wartime leader issued a famous description of the political division that was opening across Europe between the Soviet-dominated Communist east and the western democracies. “From Stettin in the Baltic to Trieste in the Adriatic,” Churchill declared, “an iron curtain has descended across the continent.”





speech

Boris Johnson's lockdown speech: What to watch out for

Boris Johnson's address from No 10 is expected to set out a "roadmap" for easing lockdown restrictions.




speech

President Trump drinks bleach, gives commencement speech to 'the class of COVID-19' in SNL cold open

Alec Baldwin reprised his imitation of President Trump on the season finale of Saturday Night Live - this time with the Commander in Chief giving a commencement speech via Zoom.




speech

Will Young thanks Prince William for his LGBT speech

'I want to let him know what a wonderful thing he did,' Young says. 'Him standing up and saying, "I wouldn't mind if my child was gay" is just incredible,' the 40-year-old singer enthuses.








speech

Is free speech an Indian value?


Is freedom of speech and expression deeply accepted in Indian society? Or is it merely a European cultural import that made its way along with the English language and appeared in the Constitution because of the founding fathers' genius? Satarupa Sen Bhattacharya reviews the film Freedom Song and connects the dots.




speech

Lockdown Extended: Top 10 Points from PM Narendra Modi Speech on April 14

As the nationwide lockdown due to COVID-19 came to an end today, and with several state governments having already announced extensions in their states, PM Narendra Modi announced the nationwide extension on 14 April at 10 am.




speech

John G. Farnsworth, receiver of the Bankers' and Merchants' Telegraph Co. vs. Western Union Telegraph Co.: Robert G. Ingersoll's opening speech to the jury, delivered May 21st, 1886.

Archives, Room Use Only - HE7645.I54 1886




speech

In Lok Sabha speech, PM taunts Congress, defends CAA

Replying to a debate on the Motion of Thanks to the President's Address in Lok Sabha, Modi also attacked the Congress, saying the party's politics over the last 70 years has been such that no Congress leader can be self-sufficient.




speech

Seeman booked for defamatory speech

The Kuniyamuthur police here have registered a case against Naam Tamilar Katchi coordinator Seeman for allegedly speaking against the Central Government.




speech

Increase in the Number of Children Who Receive Federal Disability Benefits for Speech and Language Disorders Similar to Trends in the General Population, Says New Report

The increase in the number of children from low-income families who are receiving federal disability benefits for speech and language disorders over the past decade parallels the rise in the prevalence of these disorders among all U.S. children, says a new report by the National Academies of Sciences, Engineering, and Medicine.




speech

Social media to join hands to fight fake news, hate speech

The proposed alliance — to be named the Information Trust Alliance (ITA) — will be a grouping of digital platforms and publishers, fact checkers, civil society and academia that will aim to control the spread of harmful content, including fake news and hate speech. So far, discussions have taken place among Facebook, Google, Twitter, ByteDance, ShareChat and YY Inc.





speech

NTU President's speech at the 100th Anniversary Annual Meeting of the Royal Swedish Academy of Engineering Sciences

...




speech

Birthday party invite for boy with speech disorder goes viral

Mom posts open invite on Facebook and receives a 'Reddit hug' from the online community.




speech

Leonardo DiCaprio's Oscar speech moved the needle on climate change

Researchers analyzing social media after the speech found that the actor's words sparked the largest-ever engagement on climate change.



  • Climate & Weather

speech

Glenn Beck rally: Environmental hate speech likely

Conservative lightning rod has history of anti-environmental comments.




speech

Al Gore's Copenhagen climate summit speech

Video: Former vice president and environmental icon rallies the troops and turns up the pressure on Obama to pass climate change law.



  • Climate & Weather

speech

Best Man Speech

Are You the Best Man For the Job? You are in a position of honor. Your close friend is assuming the position and he needs your help.




speech

Former Sheriff's Deputy Represented by Nichols Kaster, PLLP Files Free-Speech and Overtime Lawsuit in South Dakota Against McCook County and McCook County Sheriff Mark Norris

The Complaint alleges the plaintiff's right to free speech was violated under the First Amendment and the South Dakota Constitution




speech

Drug Free Therapy for kids with ADHD, ADD, Dyslexia, Dysgraphia, Dyscalculia, ODD, Speech Delay, Executive Dysfunction, Auditory Delay, Sensory Processing Disorder

ABC Foundations is a drug-free educational remediation center that offers services to children with learning and academic difficulties. Some children struggle with learning and academics due to delay at the brain stem level known as NDD.




speech

Ted Cruz says San Antonio's decision to label the term 'Chinese virus' as hate speech is 'nuts'


  • San Antonio City Council in Texas has unanimously voted to label terms including "Chinese virus" and "kung-fu virus" as hate speech.
  • It was responding to a growth in racist and antisemitic incidents in the city, triggered by the coronavirus crisis.
  • "Unfortunately, during times of crises, we do see the best of humanity and sometimes we also see the worst," said Mayor Ron Nirenberg. 
  • Senator Ted Cruz called the decision "nuts," saying that the city council was "behaving like a lefty college faculty lounge." 
  • It comes after Trump faced criticism for his use of the term "Chinese virus" at a White House Coronavirus Task Force press briefing.

The city of San Antonio in Texas has unanimously passed a resolution condemning the use of terms such as "Chinese virus" and "kung-fu virus" as hate speech.

It also encouraged residents to report "any such antisemitic, discriminatory or racist incidents" to the relevant authorities following several incidents in the city since the pandemic began, reports San Antonio's WOAI-TV.





speech

Zoom icon with speech bubble

A zoom icon with a popup sub menu in a speech bubble.




speech

New Auphonic Transcript Editor and Improved Speech Recognition Services

Back in late 2016, we introduced Speech Recognition at Auphonic. This allows our users to create transcripts of their recordings, and more usefully, this means podcasts become searchable.
Now we have integrated two more speech recognition engines: Amazon Transcribe and Speechmatics. Whilst integrating these services, we also took the opportunity to develop a completely new Transcription Editor:

Screenshot of our Transcript Editor with word confidence highlighting and the edit bar.
Try out the Transcript Editor Examples yourself!


The new Auphonic Transcript Editor is included directly in our HTML transcript output file. It displays word confidence values so you can instantly see which sections should be checked manually, supports direct audio playback and HTML/PDF/WebVTT export, and allows you to share the editor with someone else for further editing.

The new services, Amazon Transcribe and Speechmatics, offer transcription quality improvements compared to our other integrated speech recognition services.
They also return word confidence values, timestamps and some punctuation, which is exported to our output files.

The Auphonic Transcript Editor

With the integration of the two new services offering improved recognition quality and word timestamps alongside confidence scores, we realized that we could leverage these improvements to give our users easy-to-use transcription editing.
Therefore we developed a new, open source transcript editor, which is embedded directly in our HTML output file and has been designed to make checking and editing transcripts as easy as possible.

Main features of our transcript editor:
  • Edit the transcription directly in the HTML document.
  • Show/hide word confidence, to instantly see which sections should be checked manually (if you use Amazon Transcribe or Speechmatics as speech recognition engine).
  • Listen to audio playback of specific words directly in the HTML editor.
  • Share the transcript editor with others: as the editor is embedded directly in the HTML file (no external dependencies), you can just send the HTML file to someone else to manually check the automatically generated transcription.
  • Export the edited transcript to HTML, PDF or WebVTT.
  • Completely usable on all mobile devices and desktop browsers.
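To make the WebVTT export concrete, here is a short Python sketch: it groups word-level timestamps (like those returned by Amazon Transcribe or Speechmatics) into cues and renders them as WebVTT. The input layout, grouping thresholds and helper names are illustrative assumptions, not Auphonic's actual export code.

```python
# Hypothetical sketch: turn word-level timestamps into WebVTT cues.
# Input format and grouping rule are illustrative assumptions.

def fmt(t):
    """Format seconds as a WebVTT timestamp (HH:MM:SS.mmm)."""
    h, rem = divmod(t, 3600)
    m, s = divmod(rem, 60)
    return f"{int(h):02d}:{int(m):02d}:{s:06.3f}"

def to_webvtt(words, max_gap=1.0, max_words=8):
    """Group (word, start, end) tuples into cues and render WebVTT."""
    lines = ["WEBVTT", ""]
    cue = []
    for word, start, end in words:
        # start a new cue on a long pause or when the cue gets too long
        if cue and (start - cue[-1][2] > max_gap or len(cue) >= max_words):
            lines += [f"{fmt(cue[0][1])} --> {fmt(cue[-1][2])}",
                      " ".join(w for w, _, _ in cue), ""]
            cue = []
        cue.append((word, start, end))
    if cue:
        lines += [f"{fmt(cue[0][1])} --> {fmt(cue[-1][2])}",
                  " ".join(w for w, _, _ in cue), ""]
    return "\n".join(lines)

words = [("Hello", 0.0, 0.4), ("world", 0.5, 0.9), ("again", 2.5, 3.0)]
print(to_webvtt(words))
```

Such a file can then be used directly as subtitles or imported into web audio players.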

Examples: Try Out the Transcript Editor

Here are two examples of the new transcript editor, taken from our speech recognition audio examples page:

1. Singletrack Transcript Editor Example
Singletrack speech recognition example from the first 10 minutes of Common Sense 309 by Dan Carlin. Speechmatics was used as speech recognition engine without any keywords or further manual editing.
2. Multitrack Transcript Editor Example
A multitrack automatic speech recognition transcript example from the first 20 minutes of TV Eye on Marvel - Luke Cage S1E1. Amazon Transcribe was used as speech recognition engine without any further manual editing.
As this is a multitrack production, the transcript includes exact speaker names as well (try to edit them!).

Transcript Editing

By clicking the Edit Transcript button, a dashed box appears around the text. This indicates that the text is now freely editable on this page. Your changes can be saved by using one of the export options (see below).
If you make a mistake whilst editing, you can simply use the undo/redo function of the browser to undo or redo your changes.


When working with multitrack productions, another helpful feature is the ability to change all speaker names at once throughout the whole transcript just by editing one speaker. Simply click on an instance of a speaker title and change it to the appropriate name; this name will then appear throughout the whole transcript.

Word Confidence Highlighting

Word confidence values are shown visually in the transcript editor, highlighted in shades of red (see screenshot above). The shade of red is dependent on the actual word confidence value: The darker the red, the lower the confidence value. This means you can instantly see which sections you should check/re-work manually to increase the accuracy.

Once you have edited the highlighted text, it will be set to white again, so it’s easy to see which sections still require editing.
Use the button Add/Remove Highlighting to disable/enable word confidence highlighting.

NOTE: Word confidence values are only available in Amazon Transcribe or Speechmatics, not if you use our other integrated speech recognition services!
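As a minimal sketch of the confidence-to-shade idea described above: the lower the word confidence, the darker the red. The specific colour values below are assumptions for illustration; the editor's real stylesheet may differ.

```python
# Illustrative mapping from word confidence to a background colour:
# 1.0 -> white (no highlighting needed), 0.0 -> dark red.
# The exact colour ramp is an assumption, not the editor's real values.

def confidence_color(confidence):
    """Map a word confidence in [0, 1] to an RGB hex colour string."""
    c = max(0.0, min(1.0, confidence))
    red = 255                            # red channel stays at full
    other = int(round(120 + 135 * c))    # green/blue fade in with confidence
    return f"#{red:02x}{other:02x}{other:02x}"

print(confidence_color(1.0))   # "#ffffff": fully confident word
print(confidence_color(0.2))   # "#ff9393": low confidence, clearly red
```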

Audio Playback

The button Activate/Stop Play-on-click allows you to hear the audio playback of the section you click on (by clicking directly on the word in the transcript editor).
This is helpful in allowing you to check the accuracy of certain words by being able to listen to them directly whilst editing, without having to go back and try to find that section within your audio file.

If you use an External Service in your production to export the resulting audio file, we will automatically use the exported file in the transcript editor.
Otherwise we will use the output file generated by Auphonic. Please note that this file is password protected for the current Auphonic user and will be deleted in 21 days.

If no audio file is available in the transcript editor, or cannot be played because of the password protection, you will see the button Add Audio File to add a new audio file for playback.

Export Formats, Save/Share Transcript Editor

Click on the button Export... to see all export and saving/sharing options:

Save/Share Editor
The Save Editor button stores the whole transcript editor with all its current changes into a new HTML file. Use this button to save your changes for further editing or if you want to share your transcript with someone else for manual corrections (as the editor is embedded directly in the HTML file without any external dependencies).
Export HTML / Export PDF / Export WebVTT
Use one of these buttons to export the edited transcript to HTML (for WordPress, Word, etc.), to PDF (via the browser print function) or to WebVTT (so that the edited transcript can be used as subtitles or imported in web audio players of the Podlove Publisher or Podigee).
Every export format is rendered directly in the browser, no server needed.

Amazon Transcribe

The first of the two new services, Amazon Transcribe, offers accurate transcriptions in English and Spanish at low costs, including keywords, word confidence, timestamps, and punctuation.

UPDATE 2019:
Amazon Transcribe offers more languages now - please see Amazon Transcribe Features!

Pricing
The free tier offers 60 minutes of free usage a month for 12 months. After that, it is billed monthly at a rate of $0.0004 per second ($1.44/h).
More information is available at Amazon Transcribe Pricing.
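For a quick sanity check of the quoted rate, the per-hour cost follows directly from the per-second price:

```python
# Sanity check of the Amazon Transcribe rate quoted above:
# $0.0004 per second of audio.
rate_per_second = 0.0004
cost_per_hour = rate_per_second * 3600
print(f"${cost_per_hour:.2f}/h")     # $1.44/h, matching the figure above

# e.g. four one-hour episodes per month, beyond the free tier:
monthly_cost = 4 * cost_per_hour
print(f"${monthly_cost:.2f}")        # $5.76
```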
Custom Vocabulary (Keywords) Support
Custom Vocabulary (called Keywords in Auphonic) gives you the ability to expand and customize the speech recognition vocabulary, specific to your use case (e.g. product names, domain-specific terminology, or names of individuals).
The same feature is also available in the Google Cloud Speech API.
Timestamps, Word Confidence, and Punctuation
Amazon Transcribe returns a timestamp and confidence value for each word so that you can easily locate the audio in the original recording by searching for the text.
It also adds some punctuation, which is combined with our own punctuation and formatting automatically.
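If you work with raw Amazon Transcribe results yourself, the per-word timestamps and confidence values described above can be pulled out of the result JSON along these lines. This is a minimal sketch based on Transcribe's documented output format; the sample data is made up.

```python
import json

# Extract (word, start, end, confidence) tuples from an Amazon Transcribe
# result. Punctuation items carry no timestamps and are skipped here.

def parse_transcribe(result_json):
    words = []
    for item in json.loads(result_json)["results"]["items"]:
        if item["type"] != "pronunciation":
            continue  # punctuation items have no start/end time
        alt = item["alternatives"][0]
        words.append((alt["content"],
                      float(item["start_time"]),
                      float(item["end_time"]),
                      float(alt["confidence"])))
    return words

# Made-up sample in the shape of a Transcribe result:
sample = json.dumps({"results": {"items": [
    {"type": "pronunciation", "start_time": "0.04", "end_time": "0.38",
     "alternatives": [{"confidence": "0.98", "content": "Hello"}]},
    {"type": "punctuation",
     "alternatives": [{"confidence": "0.0", "content": ","}]},
]}})
print(parse_transcribe(sample))
```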

The high quality (especially in combination with keywords) and low cost of Amazon Transcribe make it attractive, despite it currently supporting only two languages.
However, the processing time of Amazon Transcribe is much slower compared to all our other integrated services!

Try it yourself:
Connect your Auphonic account with Amazon Transcribe at our External Services Page.

Speechmatics

Speechmatics offers accurate transcriptions in many languages including word confidence values, timestamps, and punctuation.

Many Languages
Speechmatics’ clear advantage is the sheer number of languages it supports (all major European and some Asiatic languages).
It also has a Global English feature, which supports different English accents during transcription.
Timestamps, Word Confidence, and Punctuation
Like Amazon, Speechmatics creates timestamps, word confidence values, and punctuation.
Pricing
Speechmatics is the most expensive speech recognition service at Auphonic.
Pricing starts at £0.06 per minute of audio and can be purchased in blocks of £10 or £100. This equates to a starting rate of about $4.78/h. A reduced rate of £0.05 per minute ($3.98/h) is available when purchasing £1,000 blocks.
They offer significant discounts for users requiring higher volumes. At this further reduced price point it is a similar cost to the Google Speech API (or lower). If you process a lot of content, you should contact them directly at sales@speechmatics.com and say that you wish to use it with Auphonic.
More information is available at Speechmatics Pricing.

Speechmatics offers high-quality transcripts in many languages. But these features come at a price: it is the most expensive speech recognition service at Auphonic.

Unfortunately, their existing Custom Dictionary (keywords) feature, which would further improve the results, is not available in the Speechmatics API yet.

Try it yourself:
Connect your Auphonic account with Speechmatics at our External Services Page.

What do you think?

Any feedback about the new speech recognition services, especially about the recognition quality in various languages, is highly appreciated.

We would also like to hear any comments you have on the transcript editor particularly - is there anything missing, or anything that could be implemented better?
Please let us know!






speech

More Languages for Amazon Transcribe Speech Recognition

Until recently, Amazon Transcribe supported speech recognition in English and Spanish only.
Now they have included French, Italian and Portuguese as well - and a few other languages (including German) are in private beta.

Update March 2019:
Now Amazon Transcribe supports German and Korean as well.

The Auphonic Audio Inspector on the status page of a finished Multitrack Production including speech recognition.
Please click on the screenshot to see it in full resolution!


Amazon Transcribe is integrated as speech recognition engine within Auphonic and offers accurate transcriptions (compared to other services) at low costs, including keywords / custom vocabulary support, word confidence, timestamps, and punctuation.
See the following AWS blog post and video for more information about recent Amazon Transcribe developments: Transcribe speech in three new languages: French, Italian, and Brazilian Portuguese.

Amazon Transcribe is also a perfect fit if you want to use our Transcript Editor because you will be able to see word timestamps and confidence values to instantly check which section/words should be corrected manually to increase the transcription accuracy:


Screenshot of our Transcript Editor with word confidence highlighting and the edit bar.

These features are also available if you use Speechmatics, but unfortunately not in our other integrated speech recognition services.

About Speech Recognition within Auphonic

Auphonic has built a layer on top of a few external speech recognition services to make audio searchable:
Our classifiers generate metadata during the analysis of an audio signal (music segments, silence, multiple speakers, etc.) to divide the audio file into small and meaningful segments, which are processed by the speech recognition engine. The results from all segments are then combined, and meaningful timestamps, simple punctuation and structuring are added to the resulting text.
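Structurally, the segment-and-merge step described above looks something like the following sketch. The `recognize` interface and the toy engine are illustrative assumptions, not Auphonic's internal code.

```python
# Structural sketch of the pipeline described above: split audio into
# meaningful segments, run each through an ASR engine, then merge the
# per-segment results back onto the full-file timeline.

def transcribe_segments(segments, recognize):
    """segments: list of (start_sec, end_sec, audio_chunk);
    recognize: callable audio_chunk -> list of (word, rel_start, rel_end)."""
    merged = []
    for seg_start, seg_end, chunk in segments:
        for word, rel_start, rel_end in recognize(chunk):
            # shift per-segment timestamps back to the global timeline
            merged.append((word, seg_start + rel_start, seg_start + rel_end))
    return merged

# Toy engine: pretends every "chunk" is text with evenly spaced words.
def fake_engine(chunk):
    return [(w, i * 0.5, i * 0.5 + 0.4) for i, w in enumerate(chunk.split())]

segments = [(0.0, 1.0, "hello world"), (5.0, 6.0, "good bye")]
print(transcribe_segments(segments, fake_engine))
```

The real system additionally adds punctuation and structure to the merged text, as described above.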

To learn more about speech recognition within Auphonic, take a look at our Speech Recognition and Transcript Editor help pages or listen to our Speech Recognition Audio Examples.

A comparison table of our integrated services (price, quality, languages, speed, features, etc.) can be found here: Speech Recognition Services Comparison.

Conclusion

We hope that Amazon and others will continue to add new languages, so that accurate and inexpensive automatic speech recognition becomes available in many languages.

Don't hesitate to contact us if you have any questions or feedback about speech recognition or our transcript editor!






speech

Talking to computers (part 1): Why is speech recognition so difficult?

Although the performance of today's speech recognition systems is impressive, the experience for many is still one of errors, corrections, frustration and abandoning speech in favour of alternative interaction methods. We take a closer look at speech and find out why speech recognition is so difficult.




speech

Continuous speech separation: dataset and analysis. (arXiv:2001.11482v3 [cs.SD] UPDATED)

This paper describes a dataset and protocols for evaluating continuous speech separation algorithms. Most prior studies on speech separation use pre-segmented signals of artificially mixed speech utterances which are mostly fully overlapped, and the algorithms are evaluated based on signal-to-distortion ratio or similar performance metrics. However, in natural conversations, a speech signal is continuous, containing both overlapped and overlap-free components. In addition, the signal-based metrics have very weak correlations with automatic speech recognition (ASR) accuracy. We think that not only does this make it hard to assess the practical relevance of the tested algorithms, it also hinders researchers from developing systems that can be readily applied to real scenarios. In this paper, we define continuous speech separation (CSS) as a task of generating a set of non-overlapped speech signals from a continuous audio stream that contains multiple utterances that are partially overlapped by a varying degree. A new real recorded dataset, called LibriCSS, is derived from LibriSpeech by concatenating the corpus utterances to simulate a conversation and capturing the audio replays with far-field microphones. A Kaldi-based ASR evaluation protocol is also established by using a well-trained multi-conditional acoustic model. By using this dataset, several aspects of a recently proposed speaker-independent CSS algorithm are investigated. The dataset and evaluation scripts are available to facilitate the research in this direction.
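A toy sketch of the partial-overlap mixing the abstract describes (not the actual LibriCSS generation recipe): each utterance is placed on a shared timeline so that it overlaps the tail of the previous one by a chosen fraction. Real mixtures would use recorded waveforms; plain sample lists stand in here.

```python
# Toy illustration of partially overlapped utterances on one timeline.
# The overlap rule and data layout are illustrative assumptions.

def mix(utterances, overlap=0.25):
    """utterances: list of sample lists; returns (mixture, start_indices)."""
    mixture, starts, cursor = [], [], 0
    for utt in utterances:
        # start the next utterance before the previous one has finished
        start = max(0, cursor - int(len(utt) * overlap)) if starts else 0
        starts.append(start)
        end = start + len(utt)
        mixture.extend([0.0] * (end - len(mixture)))
        for i, s in enumerate(utt):
            mixture[start + i] += s   # sources sum in the overlapped region
        cursor = end
    return mixture, starts

m, starts = mix([[1.0] * 8, [0.5] * 8], overlap=0.25)
print(starts)      # second utterance starts 2 samples early: [0, 6]
print(m[6:8])      # overlapped region sums both sources: [1.5, 1.5]
```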




speech

The Perceptimatic English Benchmark for Speech Perception Models. (arXiv:2005.03418v1 [cs.CL])

We present the Perceptimatic English Benchmark, an open experimental benchmark for evaluating quantitative models of speech perception in English. The benchmark consists of ABX stimuli along with the responses of 91 American English-speaking listeners. The stimuli test discrimination of a large number of English and French phonemic contrasts. They are extracted directly from corpora of read speech, making them appropriate for evaluating statistical acoustic models (such as those used in automatic speech recognition) trained on typical speech data sets. We show that phone discrimination is correlated with several types of models, and give recommendations for researchers seeking easily calculated norms of acoustic distance on experimental stimuli. We show that DeepSpeech, a standard English speech recognizer, is more specialized on English phoneme discrimination than English listeners, and is poorly correlated with their behaviour, even though it yields a low error on the decision task given to humans.
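The ABX discrimination measure underlying the benchmark can be sketched in a few lines: a triplet counts as correct when the probe X is closer to A (same category) than to B. The scalar "features" and absolute-difference distance below are stand-ins for real acoustic representations and distances (e.g. DTW over frame-level features).

```python
# Toy ABX discrimination: X should be closer to A (its own category)
# than to B. Scalar features and the distance are illustrative stand-ins.

def abx_accuracy(triplets, distance=lambda x, y: abs(x - y)):
    """triplets: list of (a, b, x) feature values; returns fraction correct."""
    correct = sum(distance(x, a) < distance(x, b) for a, b, x in triplets)
    return correct / len(triplets)

# A-category items near 1.0, B-category items near 2.0:
triplets = [(1.0, 2.0, 1.1), (1.0, 2.0, 1.4), (0.9, 2.1, 1.9)]
print(abx_accuracy(triplets))
```

Averaging such scores over many contrasts gives the kind of discrimination measure the benchmark correlates with listener responses.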




speech

Cotatron: Transcription-Guided Speech Encoder for Any-to-Many Voice Conversion without Parallel Data. (arXiv:2005.03295v1 [eess.AS])

We propose Cotatron, a transcription-guided speech encoder for speaker-independent linguistic representation. Cotatron is based on the multispeaker TTS architecture and can be trained with conventional TTS datasets. We train a voice conversion system to reconstruct speech with Cotatron features, which is similar to the previous methods based on Phonetic Posteriorgram (PPG). By training and evaluating our system with 108 speakers from the VCTK dataset, we outperform the previous method in terms of both naturalness and speaker similarity. Our system can also convert speech from speakers that are unseen during training, and utilize ASR to automate the transcription with minimal reduction of the performance. Audio samples are available at https://mindslab-ai.github.io/cotatron, and the code with a pre-trained model will be made available soon.




speech

ContextNet: Improving Convolutional Neural Networks for Automatic Speech Recognition with Global Context. (arXiv:2005.03191v1 [eess.AS])

Convolutional neural networks (CNN) have shown promising results for end-to-end speech recognition, albeit still behind other state-of-the-art methods in performance. In this paper, we study how to bridge this gap and go beyond with a novel CNN-RNN-transducer architecture, which we call ContextNet. ContextNet features a fully convolutional encoder that incorporates global context information into convolution layers by adding squeeze-and-excitation modules. In addition, we propose a simple scaling method that scales the widths of ContextNet to achieve a good trade-off between computation and accuracy. We demonstrate that on the widely used LibriSpeech benchmark, ContextNet achieves a word error rate (WER) of 2.1%/4.6% without external language model (LM), 1.9%/4.1% with LM and 2.9%/7.0% with only 10M parameters on the clean/noisy LibriSpeech test sets. This compares to the previous best published system of 2.0%/4.6% with LM and 3.9%/11.3% with 20M parameters. The superiority of the proposed ContextNet model is also verified on a much larger internal dataset.
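The squeeze-and-excitation step that ContextNet adds to its convolution layers can be sketched as follows: average over time ("squeeze", which is where the global context enters), pass the pooled vector through a small bottleneck, and rescale every frame channel-wise ("excite"). The weights and dimensions below are toy values, not a trained model.

```python
import math

# Minimal squeeze-and-excitation sketch over a (time x channels) sequence.
# Toy weights; a real model learns w1/w2 and uses larger dimensions.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def squeeze_excite(frames, w1, w2):
    """frames: list of channel vectors (time x channels)."""
    channels = len(frames[0])
    # squeeze: global average over the time axis
    pooled = [sum(f[c] for f in frames) / len(frames) for c in range(channels)]
    # bottleneck: linear -> ReLU -> linear -> sigmoid gates
    hidden = [max(0.0, sum(w * p for w, p in zip(row, pooled))) for row in w1]
    gates = [sigmoid(sum(w * h for w, h in zip(row, hidden))) for row in w2]
    # excite: rescale each frame channel-wise by its gate
    return [[x * g for x, g in zip(f, gates)] for f in frames]

frames = [[1.0, 2.0], [3.0, 4.0]]   # 2 time steps, 2 channels
w1 = [[1.0, 0.0]]                   # 2 -> 1 bottleneck
w2 = [[1.0], [1.0]]                 # 1 -> 2 back up
out = squeeze_excite(frames, w1, w2)
print(out)
```

Because the gates are computed from the whole-sequence average, every frame's scaling reflects global context rather than only the local receptive field.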




speech

Script compliance and quality assurance based on speech recognition and duration of interaction

Apparatus and methods are provided for using automatic speech recognition to analyze a voice interaction and verify compliance of an agent reading a script to a client during the voice interaction. In one aspect of the invention, a communications system includes a user interface, a communications network, and a call center having an automatic speech recognition component. In other aspects of the invention, a script compliance method includes the steps of conducting a voice interaction between an agent and a client and evaluating the voice interaction with an automatic speech recognition component adapted to analyze the voice interaction and determine whether the agent has adequately followed the script. In yet still further aspects of the invention, the duration of a given interaction can be analyzed, either apart from or in combination with the script compliance analysis above, to seek to identify instances of agent non-compliance, of fraud, or of quality-analysis issues.




speech

Using a physical phenomenon detector to control operation of a speech recognition engine

A device may include a physical phenomenon detector. The physical phenomenon detector may detect a physical phenomenon related to the device. In response to detecting the physical phenomenon, the device may record audio data that includes speech. The speech may be transcribed with a speech recognition engine. The speech recognition engine may be included in the device, or may be included with a remote computing device with which the device may communicate.




speech

Thought recollection and speech assistance device

Some embodiments of the inventive subject matter include a method for detecting speech loss and supplying appropriate recollection data to the user. Such embodiments include detecting a speech stream from a user, converting the speech stream to text, storing the text, detecting an interruption to the speech stream, wherein the interruption to the speech stream indicates speech loss by the user, searching a catalog using the text as a search parameter to find relevant catalog data and, presenting the relevant catalog data to remind the user about the speech stream.




speech

System, method and program product for providing automatic speech recognition (ASR) in a shared resource environment

A speech recognition system, method of recognizing speech and a computer program product therefor. A client device identified with a context for an associated user selectively streams audio to a provider computer, e.g., a cloud computer. Speech recognition receives streaming audio, maps utterances to specific textual candidates and determines a likelihood of a correct match for each mapped textual candidate. A context model selectively winnows candidate to resolve recognition ambiguity according to context whenever multiple textual candidates are recognized as potential matches for the same mapped utterance. Matches are used to update the context model, which may be used for multiple users in the same context.




speech

Speech recognition and synthesis utilizing context dependent acoustic models containing decision trees

A speech recognition method including the steps of receiving a speech input from a known speaker of a sequence of observations and determining the likelihood of a sequence of words arising from the sequence of observations using an acoustic model. The acoustic model has a plurality of model parameters describing probability distributions which relate a word or part thereof to an observation and has been trained using first training data and adapted using second training data to said speaker. The speech recognition method also determines the likelihood of a sequence of observations occurring in a given language using a language model and combines the likelihoods determined by the acoustic model and the language model and outputs a sequence of words identified from said speech input signal. The acoustic model is context based for the speaker, the context based information being contained in the model using a plurality of decision trees and the structure of the decision trees is based on second training data.




speech

Speech recognition apparatus with means for preventing errors due to delay in speech recognition

When a speech sound of at least a predetermined sound pressure is externally input while a time measurement is not being performed, a time measuring circuit starts a time measurement responsive to a signal from a speech detector. When another speech sound of at least a predetermined sound pressure is externally input while a time measurement is being performed by the time measuring circuit, a measurement time measured by the time measuring circuit at this moment is stored in a time information memory. After a predetermined time has elapsed, if a speech recognition circuit recognizes that the externally input speech sound is a "stop" command, the time measurement operation performed by the time measuring circuit is stopped, and the time information stored in the time information memory is read out and displayed as measurement time information on a display unit.
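The delay-compensation idea can be sketched as a small state machine: the elapsed time is latched as soon as the second sound is detected, so the later, slower recognition of the word "stop" does not add its delay to the displayed time. The class and method names below are illustrative, not the patent's actual apparatus.

```python
# Sketch of the delay-compensation scheme described above: latch the
# elapsed time on sound detection, confirm the "stop" command only after
# the (slower) speech recognizer has finished.

class Stopwatch:
    def __init__(self):
        self.running = False
        self.start_t = None
        self.latched = None

    def on_sound(self, t):
        """Called as soon as sound above the pressure threshold arrives."""
        if not self.running:
            self.running, self.start_t = True, t   # first sound: start
        else:
            self.latched = t - self.start_t        # latch before recognition

    def on_recognized(self, word):
        """Called later, once the recognizer has decided what was said."""
        if word == "stop" and self.running:
            self.running = False
            return self.latched    # measurement free of recognition delay
        return None

sw = Stopwatch()
sw.on_sound(0.0)                  # first sound: start timing
sw.on_sound(12.3)                 # second sound: latch elapsed time now
print(sw.on_recognized("stop"))   # 12.3, even if recognition arrives late
```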




speech

Speech effects

A method of complementing a spoken text. The method including receiving text data representative of a natural language text, receiving effect control data including at least one effect control record, each effect control record being associated with a respective location in the natural language text, receiving a stream of audio data, analyzing the stream of audio data for natural language utterances that correlate with the natural language text at a respective one of the locations, and outputting, in response to a determination by the analyzing that a natural language utterance in the stream of audio data correlates with a respective one of the locations, at least one effect control signal based on the effect control record associated with the respective location.




speech

Kelly Clarkson Unveils the Real Cause Behind Son's Speech Problem

Talking about the issue four-year-old Remington once had, the 'American Idol' alum admits before finding out the problem, she used to worry that the boy was hearing impaired.




speech

Robot ceremonies. Virtual dance parties. Online speeches. How Arizona colleges and universities are celebrating graduates

Arizona colleges and universities have dramatically altered graduation ceremonies to adapt to COVID-19.

       




speech

Trump Speech On Post-Impeachment Acquittal

President Trump is addressing the nation a day after the Senate acquitted him of both articles of impeachment. Trump said he was "totally vindicated" after a months-long impeachment inquiry and trial. Watch his remarks live.




speech

Sting in his musical ‘The Last Ship’ - Volti: ‘Almost Speechless’, the voice as an instrument

This week on Open Air, KALW’s radio magazine for the Bay Area performing arts, host David Latulippe talks with composer, singer-songwriter, actor, author, activist, international rock star, and 17-time Grammy Award-winner Sting (pictured, center), who is in town to star in his own new musical, The Last Ship , playing at the Golden Gate Theatre (1 Taylor St.) in San Francisco, through March 22.




speech

Musician And Author Billy Bragg Says Free Speech Depends On Accountability, Music On Empathy

Billy Bragg is many things: a poet, punk rocker, folk musician, and singer-songwriter. He’s also an activist, music historian, and best-selling author. In the words of another poet, he contains multitudes. Bragg’s newest work, The Three Dimensions of Freedom , is a slim volume that makes a weighty argument. It’s a pamphlet in the tradition of Thomas Paine, whose influential polemics helped spark the American Revolution, and later got him convicted of sedition.




speech

Watch: Seahawks Shaquill and Shaquem Griffin give a virtual commencement speech to alma mater Central Florida


Dressed in "half suits" bearing UCF colors, the Griffin twins took turns speaking, each saying they didn't want to read off a piece of paper but instead that they wanted to speak from the heart.








speech

Queensland Premier forced to apologise after threatening Katter MPs over Fraser Anning speech

Annastacia Palaszczuk is forced to apologise to Parliament over her threats to strip Katter's Australian Party MPs of resources when they refused to denounce former colleague Fraser Anning's speech calling for a Muslim immigration ban.




speech

How to ensure free speech; and the EU’s new copyright directive

Many Western governments continue to struggle with free speech. It's not that they're necessarily against it; it's just that they don't know how to effectively regulate out the offensive stuff.







speech

Fertility expert criticises 'explosion in bad media' about IVF in speech to industry

A senior member of the Fertility Society of Australia has used a speech at the opening of the society's conference to criticise researchers for making negative comments about the IVF industry in the media.