language

Biodiversity databases: language and location help explain biases

Richer countries have more resources for gathering biodiversity information, creating a biased view of the world's species and their distribution. However, a new study argues that there are other reasons why some countries are underrepresented in global biodiversity databases, with low numbers of English speakers, large distances from the database host, and low security acting as key barriers to data collection.




language

More NTU exchange students opt for European languages

While most NTU exchange students pick up local languages such as Chinese and Malay, a growing number from Western countries have over the past few years opted for European languages....




language

Hand talk: Preserving a language legacy

Video: Historical films and field work reveal more about endangered Native American language.



  • Arts & Culture

language

Sign language learning made easy

From video games to cell phone apps, people are making sign language easier to learn.



  • Arts & Culture

language

New home movies resurrect endangered Native American language

Minnesota educator develops multimedia tools to share and preserve Ojibwe language and culture.



  • Arts & Culture

language

To end the age of fossil fuels, try learning to speak its language

Climate activists are about to launch two weeks of protest against a pipeline from Alberta's tar sands to the Gulf Coast. They'd do well to remember there are n




language

Does ancient cave art provide the clues to early human language?

A paper hypothesizes that some of our language skills evolved out of specific cave art features.



  • Arts & Culture

language

These gloves translate sign language into text

Two college sophomores designed the gloves to make communication easier for the deaf community.



  • Fitness & Well-Being

language

How emoji have changed language

Emoji can add emotion and intent to emails and texts, but can they ever become their own language?



  • Arts & Culture

language

We can now speak the universal language of honey bees

Virginia Tech researchers have deciphered and codified the honey bee language with remarkable precision.




language

Speaking the language of climate negotiations

Leaders during the Cancun climate meetings are walking a fine line of keeping expectations in perspective, but also wanting to get something done as deadlines l



  • Climate & Weather

language

These 5 backyard birds can teach you bird language

Become an expert in knowing what birds are saying by studying these common species.




language

How to learn bird language in 5 steps

Listening to birds reveals a lot about what's happening around you, including what other wildlife is roaming nearby!




language

Music is the language we all share

Harvard's Music Lab has spent five years compiling a large database of thousands of songs from all over the world — with some striking similarities.



  • Arts & Culture

language

8 languages on the verge of extinction

Nearly 7,000 languages are spoken around the world, and one of them dies every two weeks. Here are some that could disappear in our lifetimes.



  • Arts & Culture

language

Horses and dogs share universal play language

Despite their size difference, horses and dogs understand and mimic each other when they play.




language

Positive language for a positive response

Don't fall into the trap of using negative humour such as irony. Positive language makes prospective customers feel positive.




language

The Language of Hip Hop

Bill Cosby at this point in his career is not just internationally famous; he has become an icon in the black community as a successful, inspiring individual who did not let his color get in the way of making his mark on society.




language

Foreign language learning for business success

Simple foreign language learning can help improve business relationships.




language

The Leverage of Language for E-book

Language has helped many businesses maximize their income. Pharmacies, publishers, and authors who want to expand their businesses know that language has the power to leverage their reach.




language

Criminal Defense Law Firm in Atlanta, Philip Kim Law, P.C. Announces the Launch of Their New Spanish Language Website

Attorney Philip Kim in Lawrenceville, Georgia (GA) of Gwinnett County is a highly regarded criminal defense specialist and is excited to better serve his Spanish-speaking clients and provide important legal information for the community.




language

Ladon Language Team: How a Group of Berkeley Students Bridge Language Barriers for Hurricane Responders

The Ladon Language Team is offering free translation support for responders to Hurricane Harvey and Irma. Ladon is a social initiative supported by The Clinton Global Initiative University 2016, The Resolution Project, and Westly Foundation.




language

Insignary Extends Capabilities in Binary Fingerprint Software Composition Analysis with the Support of Script Language




language

Ventana Research Begins New Dynamic Insights Research on Natural Language Processing

Latest research aims to understand advances in natural language capabilities and their impact on business




language

Natural Language Processing Recipes: Best Practices and Examples

Here is an overview of another great natural language processing resource, this time from Microsoft, which demonstrates best practices and implementation guidelines for a variety of tasks and scenarios.




language

Top Stories, Apr 27 – May 3: Five Cool Python Libraries for Data Science; Natural Language Processing Recipes: Best Practices and Examples

Also: Coronavirus COVID-19 Genome Analysis using Biopython; LSTM for time series prediction; A Concise Course in Statistical Inference: The Free eBook; Exploring the Impact of Geographic Information Systems




language

Keeping It Personal With Natural Language Processing

Look at your organization and consider the unstructured text or audio data you gather and the possible revelations it may hold. That data reflects the voices of those you serve and holds the potential to help you deliver better experiences, improve quality of care and enrich human engagement. There are powerful stories to be told from your unstructured text data. And the best way for you to find them is with natural language processing.




language

Text Analytics and Natural Language Processing: Knowledge Management's Next Frontier

Text analytics and natural language processing are not new concepts. Most knowledge management professionals have been grappling with these technologies for years. From the KM perspective, these technologies share the same fundamental purpose: They help get the right information to employees at the right time.




language

More Languages for Amazon Transcribe Speech Recognition

Until recently, Amazon Transcribe supported speech recognition in English and Spanish only.
Now they have included French, Italian, and Portuguese as well, and a few other languages (including German) are in private beta.

Update March 2019:
Now Amazon Transcribe supports German and Korean as well.

The Auphonic Audio Inspector on the status page of a finished Multitrack Production including speech recognition.


Amazon Transcribe is integrated as a speech recognition engine within Auphonic and offers accurate transcriptions (compared to other services) at low cost, including keyword/custom vocabulary support, word confidence, timestamps, and punctuation.
See the following AWS blog post and video for more information about recent Amazon Transcribe developments: Transcribe speech in three new languages: French, Italian, and Brazilian Portuguese.

Amazon Transcribe is also a perfect fit if you want to use our Transcript Editor because you will be able to see word timestamps and confidence values to instantly check which section/words should be corrected manually to increase the transcription accuracy:


Screenshot of our Transcript Editor with word confidence highlighting and the edit bar.

These features are also available if you use Speechmatics, but unfortunately not in our other integrated speech recognition services.

About Speech Recognition within Auphonic

Auphonic has built a layer on top of a few external speech recognition services to make audio searchable:
Our classifiers generate metadata during the analysis of an audio signal (music segments, silence, multiple speakers, etc.) to divide the audio file into small and meaningful segments, which are processed by the speech recognition engine. The results from all segments are then combined, and meaningful timestamps, simple punctuation and structuring are added to the resulting text.

To learn more about speech recognition within Auphonic, take a look at our Speech Recognition and Transcript Editor help pages or listen to our Speech Recognition Audio Examples.

A comparison table of our integrated services (price, quality, languages, speed, features, etc.) can be found here: Speech Recognition Services Comparison.

Conclusion

We hope that Amazon and others will continue to add new languages, so that accurate and inexpensive automatic speech recognition becomes available in many more languages.

Don't hesitate to contact us if you have any questions or feedback about speech recognition or our transcript editor!






language

The return of language after brain trauma

Language sets humans apart in the animal world. Language allows us to communicate complex ideas and emotions. But too often after brain injury, be it stroke or trauma, language is lost.






language

Which Programming Language Should Mobile Developers Choose?

When building new apps, the most important thing developers must decide is which language to program in. There are several languages out there, and some are preferred for certain operating...






language

The Sensitivity of Language Models and Humans to Winograd Schema Perturbations. (arXiv:2005.01348v2 [cs.CL] UPDATED)

Large-scale pretrained language models are the major driving force behind recent improvements in performance on the Winograd Schema Challenge, a widely employed test of common sense reasoning ability. We show, however, with a new diagnostic dataset, that these models are sensitive to linguistic perturbations of the Winograd examples that minimally affect human understanding. Our results highlight interesting differences between humans and language models: language models are more sensitive to number or gender alternations and synonym replacements than humans, and humans are more stable and consistent in their predictions, maintain a much higher absolute performance, and perform better on non-associative instances than associative ones. Overall, humans are correct more often than out-of-the-box models, and the models are sometimes right for the wrong reasons. Finally, we show that fine-tuning on a large, task-specific dataset can offer a solution to these issues.




language

Recurrent Neural Network Language Models Always Learn English-Like Relative Clause Attachment. (arXiv:2005.00165v3 [cs.CL] UPDATED)

A standard approach to evaluating language models analyzes how models assign probabilities to valid versus invalid syntactic constructions (i.e. is a grammatical sentence more probable than an ungrammatical sentence). Our work uses ambiguous relative clause attachment to extend such evaluations to cases of multiple simultaneous valid interpretations, where stark grammaticality differences are absent. We compare model performance in English and Spanish to show that non-linguistic biases in RNN LMs advantageously overlap with syntactic structure in English but not Spanish. Thus, English models may appear to acquire human-like syntactic preferences, while models trained on Spanish fail to acquire comparable human-like preferences. We conclude by relating these results to broader concerns about the relationship between comprehension (i.e. typical language model use cases) and production (which generates the training data for language models), suggesting that necessary linguistic biases are not present in the training signal at all.




language

A Tale of Two Perplexities: Sensitivity of Neural Language Models to Lexical Retrieval Deficits in Dementia of the Alzheimer's Type. (arXiv:2005.03593v1 [cs.CL])

In recent years there has been a burgeoning interest in the use of computational methods to distinguish between elicited speech samples produced by patients with dementia, and those from healthy controls. The difference between perplexity estimates from two neural language models (LMs) - one trained on transcripts of speech produced by healthy participants and the other trained on transcripts from patients with dementia - as a single feature for diagnostic classification of unseen transcripts has been shown to produce state-of-the-art performance. However, little is known about why this approach is effective, and on account of the lack of case/control matching in the most widely-used evaluation set of transcripts (DementiaBank), it is unclear if these approaches are truly diagnostic, or are sensitive to other variables. In this paper, we interrogate neural LMs trained on participants with and without dementia using synthetic narratives previously developed to simulate progressive semantic dementia by manipulating lexical frequency. We find that perplexity of neural LMs is strongly and differentially associated with lexical frequency, and that a mixture model resulting from interpolating control and dementia LMs improves upon the current state-of-the-art for models trained on transcript text exclusively.
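The single-feature idea can be illustrated with a toy version. This sketch uses tiny unigram models with invented probabilities in place of the paper's neural LMs; only the shape of the approach (perplexity difference as a diagnostic feature) is taken from the abstract.

```python
import math

# Illustrative sketch only: classify a transcript by the difference in its
# perplexity under two language models, one trained on control speech and
# one on speech from patients with dementia.

def perplexity(tokens, probs, floor=1e-6):
    """Perplexity of a token sequence under a toy unigram model."""
    log_sum = sum(math.log(probs.get(t, floor)) for t in tokens)
    return math.exp(-log_sum / len(tokens))

control_lm = {"the": 0.3, "cookie": 0.2, "jar": 0.2, "boy": 0.3}
dementia_lm = {"the": 0.5, "thing": 0.3, "boy": 0.2}

transcript = ["the", "boy", "the", "thing"]
diff = perplexity(transcript, control_lm) - perplexity(transcript, dementia_lm)
# diff > 0: the transcript is less surprising to the dementia-trained model.
label = "dementia-like" if diff > 0 else "control-like"
```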




language

Quda: Natural Language Queries for Visual Data Analytics. (arXiv:2005.03257v1 [cs.CL])

Visualization-oriented natural language interfaces (V-NLIs) have been explored and developed in recent years. One challenge faced by V-NLIs is in the formation of effective design decisions that usually requires a deep understanding of user queries. Learning-based approaches have shown potential in V-NLIs and reached state-of-the-art performance in various NLP tasks. However, because of the lack of sufficient training samples that cater to visual data analytics, cutting-edge techniques have rarely been employed to facilitate the development of V-NLIs. We present a new dataset, called Quda, to help V-NLIs understand free-form natural language. Our dataset contains 14,035 diverse user queries annotated with 10 low-level analytic tasks that assist in the deployment of state-of-the-art techniques for parsing complex human language. We achieve this goal by first gathering seed queries with data analysts who are target users of V-NLIs. Then we employ extensive crowd force for paraphrase generation and validation. We demonstrate the usefulness of Quda in building V-NLIs by creating a prototype that makes effective design decisions for free-form user queries. We also show that Quda can be beneficial for a wide range of applications in the visualization community by analyzing the design tasks described in academic publications.




language

Diagnosing the Environment Bias in Vision-and-Language Navigation. (arXiv:2005.03086v1 [cs.CL])

Vision-and-Language Navigation (VLN) requires an agent to follow natural-language instructions, explore the given environments, and reach the desired target locations. These step-by-step navigational instructions are crucial when the agent is navigating new environments about which it has no prior knowledge. Most recent works that study VLN observe a significant performance drop when tested on unseen environments (i.e., environments not used in training), indicating that the neural agent models are highly biased towards training environments. Although this issue is considered as one of the major challenges in VLN research, it is still under-studied and needs a clearer explanation. In this work, we design novel diagnosis experiments via environment re-splitting and feature replacement, looking into possible reasons for this environment bias. We observe that neither the language nor the underlying navigational graph, but the low-level visual appearance conveyed by ResNet features directly affects the agent model and contributes to this environment bias in results. According to this observation, we explore several kinds of semantic representations that contain less low-level visual information, hence the agent learned with these features could be better generalized to unseen testing environments. Without modifying the baseline agent model and its training method, our explored semantic features significantly decrease the performance gaps between seen and unseen on multiple datasets (i.e. R2R, R4R, and CVDN) and achieve competitive unseen results to previous state-of-the-art models. Our code and features are available at: https://github.com/zhangybzbo/EnvBiasVLN




language

Language translation using preprocessor macros

A method is provided for providing consistent logical code across specific programming languages. The method incorporates preprocessor macros in a source computer program code to generate a program control flow. The preprocessor macros can be used to describe program control flow in the source programming language for execution in the source computer program code. The preprocessor macros can also be used to generate control flow objects representing the control flow, which converts the source computer program code into a general language representation. The general language representation when executed is used to output computer programming code in specific programming languages representing the same logical code as that of the source computer program code.
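The core idea — a general representation of control flow that can emit equivalent code in several target languages — can be sketched as follows. The class and method names here are invented for illustration and are not taken from the patent.

```python
# Hypothetical sketch: a control-flow object acting as a general-language
# representation, able to output the same logical code in different
# target programming languages.

class IfStatement:
    """General-language representation of a conditional."""

    def __init__(self, condition, body):
        self.condition = condition
        self.body = body

    def emit(self, language):
        """Output equivalent source code in the requested language."""
        if language == "python":
            return f"if {self.condition}:\n    {self.body}"
        if language == "c":
            return f"if ({self.condition}) {{ {self.body}; }}"
        raise ValueError(f"unsupported language: {language}")

node = IfStatement("x > 0", "handle(x)")
python_code = node.emit("python")
c_code = node.emit("c")
```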




language

Dynamic language model

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for speech recognition. One of the methods includes receiving a base language model for speech recognition including a first word sequence having a base probability value; receiving a voice search query associated with a query context; determining that a customized language model is to be used when the query context satisfies one or more criteria associated with the customized language model; obtaining the customized language model, the customized language model including the first word sequence having an adjusted probability value being the base probability value adjusted according to the query context; and converting the voice search query to a text search query based on one or more probabilities, each of the probabilities corresponding to a word sequence in a group of one or more word sequences, the group including the first word sequence having the adjusted probability value.
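A toy version of this flow might look like the following. All names, probabilities, and the "topic" criterion are invented for illustration; this is not the claimed implementation.

```python
# Hedged sketch: use a customized language model when the voice query's
# context satisfies the model's criteria, otherwise fall back to the base
# model's probabilities.

base_lm = {"harry potter": 1e-7, "hairy otter": 1e-5}

def customized_lm(base, context):
    """Adjust base probabilities according to the query context."""
    adjusted = dict(base)
    if context.get("topic") == "books":
        adjusted["harry potter"] = base["harry potter"] * 1000
    return adjusted

def convert_query(candidates, context):
    """Pick the most probable text for the voice search query."""
    if context.get("topic") == "books":  # criterion for the customized model
        lm = customized_lm(base_lm, context)
    else:
        lm = base_lm
    return max(candidates, key=lambda seq: lm.get(seq, 0.0))

with_context = convert_query(["harry potter", "hairy otter"], {"topic": "books"})
without_context = convert_query(["harry potter", "hairy otter"], {})
```

With the "books" context the boosted sequence wins; without it, the base model's preference stands.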




language

Language model creation device

This device 301 stores a first content-specific language model representing a probability that a specific word appears in a word sequence representing a first content, and a second content-specific language model representing a probability that the specific word appears in a word sequence representing a second content. Based on a first probability parameter representing a probability that a content represented by a target word sequence included in a speech recognition hypothesis generated by a speech recognition process of recognizing a word sequence corresponding to a speech is a first content, a second probability parameter representing a probability that the content represented by the target word sequence is a second content, the first content-specific language model, and the second content-specific language model, the device creates a language model representing a probability that the specific word appears in a word sequence corresponding to a part corresponding to the target word sequence of the speech.
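Stripped of patent phrasing, this is a weighted mixture of two content-specific models. A toy unigram sketch (all names and numbers invented):

```python
# Toy sketch of the described interpolation: combine two content-specific
# language models, weighted by the probability that the target word
# sequence represents each content.

sports_lm = {"goal": 0.08, "score": 0.05}
cooking_lm = {"goal": 0.01, "simmer": 0.06}

def created_lm_prob(word, p_first, p_second, first_lm, second_lm):
    """P(word) under the created model: a weighted mixture of the two
    content-specific models."""
    return (p_first * first_lm.get(word, 0.0)
            + p_second * second_lm.get(word, 0.0))

# Suppose the recognition hypothesis suggests an 80% chance the segment
# is about sports and a 20% chance it is about cooking.
p = created_lm_prob("goal", 0.8, 0.2, sports_lm, cooking_lm)
```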




language

Messaging response system providing translation and conversion written language into different spoken language

A messaging response system is disclosed wherein a service providing system provides services to users via messaging communications. In accordance with an exemplary embodiment of the present invention, multiple respondents servicing users through messaging communications may appear to simultaneously use a common “screen name” identifier.




language

Programming language conditional event dispatcher

Methods and systems of monitoring events occurring in a computer system are provided. An event monitoring instruction including a condition is parsed, the event monitoring instruction expressed using syntax defined in source code, the parsing resulting in an event channel to monitor and the condition. Then execution of an application is paused. The event channel is monitored until an event occurs on the event channel. Then an event handler for the event is run in response to the event occurring on the event channel. The condition is evaluated to determine whether the condition is satisfied. Execution of the application is resumed in response to the condition being satisfied.
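The pause/monitor/handle/resume cycle described above can be sketched like this. The function and field names are invented; a real implementation would hook into the language runtime rather than a plain queue.

```python
import queue

# Illustrative sketch: block on an event channel, run the handler for each
# event, and resume only once the parsed condition is satisfied.

def await_event(channel, condition, handler):
    """Pause until an event on `channel` satisfies `condition`."""
    while True:
        event = channel.get()   # execution is paused, waiting on the channel
        handler(event)          # run the event handler for the event
        if condition(event):    # evaluate the condition
            return event        # condition satisfied: resume the application

channel = queue.Queue()
seen = []
for value in (1, 3, 8):
    channel.put({"value": value})

result = await_event(
    channel,
    condition=lambda e: e["value"] > 5,
    handler=seen.append,
)
```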




language

System, method and multipoint control unit for providing multi-language conference

A system for providing multi-language conference is provided. The system includes conference terminals and a multipoint control unit. The conference terminals are adapted to process a speech of a conference site, transmitting the processed speech to the multipoint control unit, process an audio data received from the multipoint control unit and output it. At least one of the conference terminals is an interpreting terminal adapted to interpret the speech of the conference according to the audio data transmitted from the multipoint control unit, process the interpreted audio data and output the processed audio data. The multipoint control unit is adapted to perform a sound mixing process of the audio data from the conference terminals in different sound channels according to language types, and then sends mixed audio data after the sound mixing process to the conference terminals.
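The per-language mixing step can be sketched as follows. Sample values and field names are invented; real mixing would operate on audio frames, not short float lists.

```python
# Hedged sketch: group conference audio streams by language type and mix
# each group into its own sound channel (simple averaging).

def mix_by_language(streams):
    """streams: list of {'language': str, 'samples': list of floats}.
    Returns one mixed sample list per language."""
    groups = {}
    for stream in streams:
        groups.setdefault(stream["language"], []).append(stream["samples"])
    return {
        language: [sum(frame) / len(frame) for frame in zip(*chunks)]
        for language, chunks in groups.items()
    }

streams = [
    {"language": "en", "samples": [0.2, 0.4]},
    {"language": "en", "samples": [0.0, 0.2]},
    {"language": "zh", "samples": [0.5, 0.5]},
]
channels = mix_by_language(streams)
# Each terminal can then be sent the mixed channel for its language.
```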




language

RDX ENHANCEMENT OF SYSTEM AND METHOD FOR IMPLEMENTING REUSABLE DATA MARKUP LANGUAGE (RDL)

Methods and systems in accordance with the present invention allow users to efficiently manipulate, analyze, and transmit eXtensible Business Reporting Language (“XBRL”) reports. They allow users to automatically build financial reports that are acceptable to governing agencies such as the IRS. In one embodiment, the reports are developed by a parser that transforms text documents into software elements containing a format with a hierarchical relationship between the software elements, and an editor that develops reports by referencing the software elements transformed from the text documents. Methods and systems in accordance with the present invention also enable reports to be automatically scheduled by gathering desired information from an accounting system, formatting the information into an XBRL document, and transmitting it to an end source. Furthermore, systems and methods in accordance with the present invention allow a user to translate an XBRL document into RDL format and use the RDL system to manipulate and analyze it.




language

How to teach your child English at home as a second language

If your child uses English as an additional language, you might be worried about them not being at school just now and missing out on using and learning English.




language

US spars with China over pro-WHO language in UN Security Council ceasefire resolution

A Chinese push to include support for the World Health Organization in a U.N. Security Council resolution calling for a global ceasefire is putting the entire text in limbo, after strong U.S. opposition to the Beijing effort.




language

Languages, Text, and Context (Lesson #7)

'Some people not only have the Bible translated into their native language but even have various versions of it in their own language. Others might have only one version, if even that. But regardless of what you have, the key point is to cherish it as the Word of God and, most important, to obey what it teaches.'




language

How We Learn Language (Rebroadcast)

Can you remember what it was like for you to learn your native language?  Probably not, but why is that? As humans, we begin learning to speak our native language during the earliest stages of our lives, in infancy.  Most people don’t have many accessible memories from this period of development. How do we do...




language

Beginner's Guide to Rust: Start coding with the Rust language

What better way to learn a new programming language than to create a favorite old game? In this tutorial, learn how to create a simple game of Tic-Tac-Toe.