recognition

Do Laws Shape Attitudes? Evidence from Same-Sex Relationship Recognition Policies in Europe [electronic journal].




recognition

2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) [electronic journal].

IEEE / Institute of Electrical and Electronics Engineers Incorporated




recognition

2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) [electronic journal].

IEEE Computer Society




recognition

2019 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) [electronic journal].

IEEE / Institute of Electrical and Electronics Engineers Incorporated




recognition

Isostructural chiral metal–organic frameworks with metal-regulated performances for electrochemical enantiomeric recognition of tyrosine and tryptophan

Inorg. Chem. Front., 2024, 11, 2402-2412
DOI: 10.1039/D3QI02662K, Research Article
Ran An, Qiu-Yan Hu, Liu-Yang Song, Xu Zhang, Rui-Xuan Li, En-Qing Gao, Qi Yue
Two isostructural homochiral MOFs exhibit significantly different enantioselective recognition performances closely associated with the different coordination habits of the metal centers in the MOFs.




recognition

From small changes to big gains: pyridinium-based tetralactam macrocycle for enhanced sugar recognition in water

Chem. Sci., 2024, Advance Article
DOI: 10.1039/D4SC06190J, Edge Article
Open Access
Canjia Zhai, Ethan Cross Zulueta, Alexander Mariscal, Chengkai Xu, Yunpeng Cui, Xudong Wang, Huang Wu, Carson Doan, Lukasz Wojtas, Haixin Zhang, Jianfeng Cai, Libin Ye, Kun Wang, Wenqi Liu
Incorporating pyridinium into an anthracene-based macrocycle significantly enhances its sugar binding affinity by increasing hydrogen bonding and expanding the contact surface area.




recognition

Border checkposts in Nilgiris to soon have automatic number plate recognition cameras

This comes in the wake of the Madras High Court recently expressing dissatisfaction over the implementation of the e-pass system in the Nilgiris and Kodaikanal




recognition

Chain stretching in brushes favors sequence recognition for nucleobase-functionalized flexible precise oligomers

Soft Matter, 2024, 20, 8303-8311
DOI: 10.1039/D4SM00866A, Paper
Kseniia Grafskaia, Qian Qin, Jie Li, Delphine Magnin, David Dellemme, Mathieu Surin, Karine Glinel, Alain M. Jonas
Flexible oligomers having precise sequences of nucleobases do not specifically recognize surface-grafted target chains at low grafting density. Moderately higher grafting densities promote sequence-specific recognition thanks to chain stretching.




recognition

Self-assembly of amphiphilic homopolymers grafted onto spherical nanoparticles: complete embedded minimal surfaces and a machine learning algorithm for their recognition

Soft Matter, 2024, 20, 8385-8394
DOI: 10.1039/D4SM00616J, Paper
D. A. Mitkovskiy, A. A. Lazutin, A. L. Talis, V. V. Vasilevskaya
Amphiphilic macromolecules grafted onto spherical nanoparticles can self-assemble into morphological structures corresponding to the family of complete embedded minimal surfaces. These structures arise depending on conditions, can coexist, and can transform into one another.




recognition

Schools disaffiliated by CBSE in Delhi continue functioning as usual; students, parents say they don’t know about the derecognition

While students from Classes 1 to 9 may not be directly impacted, those set to appear for their Board examinations may have little choice but to shift to another school in the middle of the year




recognition

Unsupervised pattern recognition on the surface of simulated metal nanoparticles for catalytic applications

Catal. Sci. Technol., 2024, 14, 6651-6661
DOI: 10.1039/D4CY01000K, Paper
Jonathan Y. C. Ting, George Opletal, Amanda S. Barnard
The structural patterns and catalytic activities of the surface atoms of simulated metal nanoparticles are characterised by an automatable data-driven unsupervised machine learning approach.




recognition

Electrochemical chiral recognition of tryptophan enantiomers by using chiral polyaniline and β-CD-MOF

Nanoscale, 2024, Advance Article
DOI: 10.1039/D4NR02854F, Paper
Jiamin Liang, Yuxin Song, Huan Xing, Liang Ma, Fengxia Wang, Mingfang Zhang, Hongli Zhang, Gang Zou, Guang Yang
An electrochemical sensor for the chiral recognition of Trp based on chiral polyaniline (D-PANI) and β-CD-MOF was designed. It displayed a higher affinity for L-Trp, with an oxidation peak current ratio (IL/ID) in DPV reaching 2.26.




recognition

Highly stable and ultra-fast vibration-responsive flexible iontronic sensors for accurate acoustic signal recognition

Nanoscale, 2024, Advance Article
DOI: 10.1039/D4NR03370A, Paper
Yan Wang, Weiqiang Liao, Xikai Yang, Kexin Wang, Shengpeng Yuan, Dan Liu, Cheng Liu, Shiman Yang, Li Wang
A wearable acoustic vibration pressure sensor was developed based on an interfacial-enhanced iontronic dielectric and integrated with an acoustic recognition system using a deep learning classifier.




recognition

Despite winning medals in the open and deaf categories at global meets, this young shooter awaits recognition

The high point for Dhanush Srikanth came in 2023, when he bagged gold in the individual event and silver in the team event of the 10 metre Air Rifle in Germany in the normal category




recognition

Structural insight into the recognition of pathogen-derived phosphoglycolipids by C-type lectin receptor DCAR [Protein Structure and Folding]

The C-type lectin receptors (CLRs) form a family of pattern recognition receptors that recognize numerous pathogens, such as bacteria and fungi, and trigger innate immune responses. The extracellular carbohydrate-recognition domain (CRD) of CLRs forms a globular structure that can coordinate a Ca2+ ion, allowing receptor interactions with sugar-containing ligands. Although well-conserved, the CRD fold can also display differences that directly affect the specificity of the receptors for their ligands. Here, we report crystal structures at 1.8–2.3 Å resolutions of the CRD of murine dendritic cell-immunoactivating receptor (DCAR, or Clec4b1), the CLR that binds phosphoglycolipids such as acylated phosphatidyl-myo-inositol mannosides (AcPIMs) of mycobacteria. Using mutagenesis analysis, we identified critical residues, Ala136 and Gln198, on the surface surrounding the ligand-binding site of DCAR, as well as an atypical Ca2+-binding motif (Glu-Pro-Ser/EPS168–170). By chemically synthesizing a water-soluble ligand analog, inositol-monophosphate dimannose (IPM2), we confirmed the direct interaction of DCAR with the polar moiety of AcPIMs by biolayer interferometry and co-crystallization approaches. We also observed a hydrophobic groove extending from the ligand-binding site that is in a suitable position to interact with the lipid portion of whole AcPIMs. These results suggest that the hydroxyl group-binding ability and hydrophobic groove of DCAR mediate its specific binding to pathogen-derived phosphoglycolipids such as mycobacterial AcPIMs.




recognition

Structural basis of substrate recognition and catalysis by fucosyltransferase 8 [Protein Structure and Folding]

Fucosylation of the innermost GlcNAc of N-glycans by fucosyltransferase 8 (FUT8) is an important step in the maturation of complex and hybrid N-glycans. This simple modification can dramatically affect the activities and half-lives of glycoproteins, effects that are relevant to understanding the invasiveness of some cancers, development of mAb therapeutics, and the etiology of a congenital glycosylation disorder. The acceptor substrate preferences of FUT8 are well-characterized and provide a framework for understanding N-glycan maturation in the Golgi; however, the structural basis of these substrate preferences and the mechanism through which catalysis is achieved remain unknown. Here we describe several structures of mouse and human FUT8 in the apo state and in complex with GDP, a mimic of the donor substrate, and with a glycopeptide acceptor substrate at 1.80–2.50 Å resolution. These structures provide insights into a unique conformational change associated with donor substrate binding, common strategies employed by fucosyltransferases to coordinate GDP, features that define acceptor substrate preferences, and a likely mechanism for enzyme catalysis. Together with molecular dynamics simulations, the structures also revealed how FUT8 dimerization plays an important role in defining the acceptor substrate-binding site. Collectively, this information significantly builds on our understanding of the core fucosylation process.




recognition

The Contribution of Mutual Recognition to International Regulatory Co-operation

This OECD Regulatory Policy Working Paper relies on an empirical stocktaking of mutual recognition agreements (MRAs) among selected OECD countries. It aims to build a greater understanding of the benefits and pitfalls of one of the 11 mechanisms of international regulatory co-operation.




recognition

Are Teachers Getting the Recognition They Deserve?

More and more countries are having discussions about how to evaluate the quality of their teaching workforce and, subsequently, how to reward teachers for their work. The OECD’s newest series of briefs, Teaching in Focus, launches this month with a discussion of the appraisal and feedback teachers receive and the impact of both on their teaching.




recognition

Wilfried Zaha glad ex-teammate Aaron Wan-Bissaka is finally getting the recognition he deserves

Wilfried Zaha says it is nice to see former Crystal Palace team-mate Aaron Wan-Bissaka getting recognised on the world stage since his move to Manchester United. 




recognition

SnapPay launches facial recognition payments for North American merchants

(The Paypers) SnapPay has announced the availability of facial recognition payment technology for North...




recognition

Facial recognition technique could improve hail forecasts


The same artificial intelligence technique typically used in facial recognition systems could help improve prediction of hailstorms and their severity, according to a new, National Science Foundation-funded study. Instead of zeroing in on the features of an individual face, scientists trained a deep learning model called a convolutional neural network to recognize features of individual storms that affect the formation of hail and how large the hailstones will be, both of which are notoriously difficult to predict. The promising results highlight the importance of taking into account a storm's entire structure, something that's been challenging to do with existing hail-forecasting techniques.

Image credit: Carlye Calvin




recognition

Clear recognition of uncertainty is lacking in scientific advice for policymakers

Sustainable management of complex ecosystems requires clear understanding of uncertainty. However, scientific guidance documents show a lack of clarity and coherence regarding uncertainties and tend to focus solely on the need for more data or monitoring, new research indicates. The researchers suggest that scientific guidance should recognise uncertainty as an inherent part of any complex ecosystem.




recognition

Greater recognition of ecosystem services needed for food security

Global food security under a changing climate is possible if the vital role of healthy ecosystems is recognised, according to a recent study. The researchers suggest that an ecosystem-based approach must be integrated with other measures to tackle food security under climate change, to protect ecosystems and supply the essential services on which humanity depends.




recognition

Augmented reality magazine by NTU Singapore earns international recognition with brand new reading experience

With its fresh and bold design, engaging content, and the creative use of augmented reality (AR) in its bimonthly magazine for students, NTU has earned approval from new and old readers alike, and now the evaluators at the prestigious International Association of Business Communicators (IABC) Gold Quill Awards this year....




recognition

In the voice recognition era, good hearing will matter even more

You can't have an Internet of Voice without an Internet of Hearing, so protect those ears!



  • Gadgets & Electronics

recognition

Optical Character Recognition Sensor

Even if printing is distorted or unclear due to conveyor line conditions, a unique reading method with a built-in dictionary enables stable reading of characters. (FQ2-CH Series)




recognition

#FitFarmTN Received with Stellar Recognition in Los Angeles, Atlanta and South Beach Miami! A New Star Is Born!

At The Golden Globes, Fit Farm won the gold! At The Super Bowl, Fit Farm scored the most touchdowns! And at The South Beach Food & Wine Festival, Fit Farm was a star itself!




recognition

Bridge Makes Patient Portal Login Faster and More Secure With Fingerprint and Facial Recognition

Bridge Patient Portal introduces biometric authentication on mobile devices for fast, easy, and secure patient portal login




recognition

The Ministry of Health and Prevention (MoHAP) Was Honored by the Emirates International Accreditation Center (EIAC) in Recognition of Obtaining 11 ISO Accreditations for Its Labs Within Just One Year

The Ministry of Health today boasts the largest network of ISO-certified labs in the region, with a total of 14 accredited labs.




recognition

Morris, King, & Hodge Announces Recognition in 2020 Edition of The Best Lawyers in America©

Four of the Huntsville law firm's attorneys have been selected for inclusion in the publication.




recognition

Uplift Education High Schools Receive National Recognition as Some of the Top High Schools in the Nation for 2020

Uplift North Hills, Uplift Summit International, and Uplift Williams were ranked among the top 1% of high schools in the nation, at 40th, 134th, and 198th respectively.




recognition

Cardinal Capital Management, Inc. awarded PSN's 6-Star recognition for its Balanced Portfolio for the 5-year period ending September 30, 2019

This marks the strategy's position among the top ten performers for its peer group, which includes both U.S.-based and non-U.S.-based managers and their products.




recognition

Florida Department of Education Honors Alsco with Commissioner's Business Recognition Award for Leon County

Award highlights partnership with business community and local schools




recognition

VR Office Place Introduces Real Time Facial Recognition and Social Fingerprint Analysis

The world is ripe for a new shift in the use of facial recognition by organizations. VR Office Place is proud to present real time social fingerprint identification in the form of facial recognition and publicly available social presence.




recognition

Paterson Music Project Receives Dr. Martin Luther King Youth Recognition Award

The Paterson Music Project, a program of the Wharton Institute for the Performing Arts, was honored for its impact in Paterson with the Dr. Martin Luther King Youth Recognition Award.




recognition

Succeeding Quietly in Our Recognition-Obsessed Culture

David Zweig, author of "Invisibles," on employees who value good work over self-promotion.




recognition

2020 NECA Recognition of Achievement in Safety Excellence and ZERO Injury Programs

The recipients of the 2020 Recognition of Achievement in Safety Excellence and Recognition of Achievement in Zero Injury programs will be posted on the NECA Recognition of Safety Achievement Program website in the near future. There were 159 Recognition of Achievement in Safety Excellence and 90 Recognition of Achievement in Zero Injury winners for 2020. These recipients will each receive plaques commemorating their accomplishment and be recognized during a session at the 10th Annual NSPC in Chicago, IL later this year. Thank you to all the companies that submitted their applications and continue to strive for Safety Excellence and Zero Injuries in the Electrical Industry.




recognition

How the New Revenue Recognition Standard Will Impact Manufacturers

The new revenue recognition standard includes important provisions that manufacturers need to be aware of. Effective 1/1/2019 for private companies with calendar year ends, the new standards will change the way manufacturing companies recognize revenue.
Variable Consideration
Manufacturing companies will…




  • Audit and Advisory
  • Manufacturing and Distribution
  • revenue recognition
  • revenue recognition standard

recognition

SCCM Pod-327 Does Simulation Improve Recognition and Management of Pediatric Septic Shock?

Margaret Parker, MD, MCCM, speaks with Mark C. Dugan, MD, about the article: Does Simulation Improve Recognition and Management of Pediatric Septic Shock, and If One Simulation Is Good, Is More Simulation Better?




recognition

Grand Canyon’s Trail of Time Receives National Recognition

Grand Canyon National Park's Trail of Time was recently honored with a National Association for Interpretation Media Award - first place in the Wayside Exhibit category. https://www.nps.gov/grca/learn/news/2011-12-16_tot.htm




recognition

New Auphonic Transcript Editor and Improved Speech Recognition Services

Back in late 2016, we introduced Speech Recognition at Auphonic. This allows our users to create transcripts of their recordings, and more usefully, this means podcasts become searchable.
Now we have integrated two more speech recognition engines: Amazon Transcribe and Speechmatics. Whilst integrating these services, we also took the opportunity to develop a completely new Transcript Editor:

Screenshot of our Transcript Editor with word confidence highlighting and the edit bar.
Try out the Transcript Editor Examples yourself!


The new Auphonic Transcript Editor is included directly in our HTML transcript output file, displays word confidence values so you can instantly see which sections should be checked manually, supports direct audio playback and HTML/PDF/WebVTT export, and allows you to share the editor with someone else for further editing.

The new services, Amazon Transcribe and Speechmatics, offer transcription quality improvements compared to our other integrated speech recognition services.
They also return word confidence values, timestamps and some punctuation, which is exported to our output files.

The Auphonic Transcript Editor

With the integration of the two new services offering improved recognition quality and word timestamps alongside confidence scores, we realized that we could leverage these improvements to give our users easy-to-use transcription editing.
Therefore we developed a new, open source transcript editor, which is embedded directly in our HTML output file and has been designed to make checking and editing transcripts as easy as possible.

Main features of our transcript editor:
  • Edit the transcription directly in the HTML document.
  • Show/hide word confidence, to instantly see which sections should be checked manually (if you use Amazon Transcribe or Speechmatics as the speech recognition engine).
  • Listen to audio playback of specific words directly in the HTML editor.
  • Share the transcript editor with others: as the editor is embedded directly in the HTML file (no external dependencies), you can just send the HTML file to someone else to manually check the automatically generated transcription.
  • Export the edited transcript to HTML, PDF or WebVTT.
  • Completely usable on all mobile devices and desktop browsers.

Examples: Try Out the Transcript Editor

Here are two examples of the new transcript editor, taken from our speech recognition audio examples page:

1. Singletrack Transcript Editor Example
Singletrack speech recognition example from the first 10 minutes of Common Sense 309 by Dan Carlin. Speechmatics was used as the speech recognition engine, without any keywords or further manual editing.
2. Multitrack Transcript Editor Example
A multitrack automatic speech recognition transcript example from the first 20 minutes of TV Eye on Marvel - Luke Cage S1E1. Amazon Transcribe was used as the speech recognition engine, without any further manual editing.
As this is a multitrack production, the transcript includes exact speaker names as well (try to edit them!).

Transcript Editing

When you click the Edit Transcript button, a dashed box appears around the text. This indicates that the text is now freely editable on this page. Your changes can be saved by using one of the export options (see below).
If you make a mistake whilst editing, you can simply use the undo/redo function of the browser to undo or redo your changes.


When working with multitrack productions, another helpful feature is the ability to change all speaker names at once throughout the whole transcript just by editing one speaker. Simply click on an instance of a speaker title and change it to the appropriate name; this name will then appear throughout the whole transcript.

Word Confidence Highlighting

Word confidence values are shown visually in the transcript editor, highlighted in shades of red (see screenshot above). The shade of red is dependent on the actual word confidence value: The darker the red, the lower the confidence value. This means you can instantly see which sections you should check/re-work manually to increase the accuracy.

Once you have edited the highlighted text, it will be set to white again, so it’s easy to see which sections still require editing.
Use the button Add/Remove Highlighting to disable/enable word confidence highlighting.

NOTE: Word confidence values are only available with Amazon Transcribe and Speechmatics, not with our other integrated speech recognition services!
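
To make the idea concrete, here is a minimal sketch, assuming a hypothetical per-word input format with "text" and "confidence" fields, of how confidence values could be rendered as red-shaded HTML spans. It is illustrative only, not Auphonic's editor code.

```python
# Minimal sketch of word-confidence highlighting (illustrative only, not
# Auphonic's implementation). Each word is assumed to be a dict with
# "text" and "confidence" (0.0-1.0), e.g. as reported by a speech
# recognition engine that returns per-word confidence.

import html

def highlight_words(words, threshold=0.9):
    """Render words as HTML spans, shading low-confidence words red."""
    spans = []
    for w in words:
        conf = w["confidence"]
        if conf >= threshold:
            style = ""  # confident words stay unstyled
        else:
            # Darker red for lower confidence: alpha grows as confidence drops.
            alpha = round(1.0 - conf, 2)
            style = f' style="background-color: rgba(255, 0, 0, {alpha})"'
        spans.append(f"<span{style}>{html.escape(w['text'])}</span>")
    return " ".join(spans)

if __name__ == "__main__":
    demo = [
        {"text": "speech", "confidence": 0.98},
        {"text": "recognitian", "confidence": 0.42},  # likely needs manual editing
        {"text": "engine", "confidence": 0.91},
    ]
    print(highlight_words(demo))
```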

Audio Playback

The button Activate/Stop Play-on-click allows you to hear the audio playback of the section you click on (by clicking directly on the word in the transcript editor).
This is helpful in allowing you to check the accuracy of certain words by being able to listen to them directly whilst editing, without having to go back and try to find that section within your audio file.

If you use an External Service in your production to export the resulting audio file, we will automatically use the exported file in the transcript editor.
Otherwise we will use the output file generated by Auphonic. Please note that this file is password protected for the current Auphonic user and will be deleted in 21 days.

If no audio file is available in the transcript editor, or cannot be played because of the password protection, you will see the button Add Audio File to add a new audio file for playback.

Export Formats, Save/Share Transcript Editor

Click on the button Export... to see all export and saving/sharing options:

Save/Share Editor
The Save Editor button stores the whole transcript editor with all its current changes into a new HTML file. Use this button to save your changes for further editing or if you want to share your transcript with someone else for manual corrections (as the editor is embedded directly in the HTML file without any external dependencies).
Export HTML / Export PDF / Export WebVTT
Use one of these buttons to export the edited transcript to HTML (for WordPress, Word, etc.), to PDF (via the browser print function) or to WebVTT (so that the edited transcript can be used as subtitles or imported in web audio players of the Podlove Publisher or Podigee).
Every export format is rendered directly in the browser, no server needed.
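
To illustrate what a WebVTT export contains, the following sketch turns word-level timestamps into simple WebVTT cues. It is not the editor's in-browser exporter; the input structure and the grouping of words into cues are assumptions.

```python
# Illustrative sketch of a WebVTT export from word-level timestamps
# (assumed input format; not the editor's actual exporter).

def to_timestamp(seconds):
    """Format seconds as a WebVTT timestamp HH:MM:SS.mmm."""
    h, rem = divmod(seconds, 3600)
    m, s = divmod(rem, 60)
    return f"{int(h):02d}:{int(m):02d}:{s:06.3f}"

def words_to_webvtt(words, max_words_per_cue=8):
    """Group words into cues of a few words each and emit WebVTT text."""
    lines = ["WEBVTT", ""]
    for i in range(0, len(words), max_words_per_cue):
        chunk = words[i:i + max_words_per_cue]
        start = to_timestamp(chunk[0]["start"])
        end = to_timestamp(chunk[-1]["end"])
        text = " ".join(w["text"] for w in chunk)
        lines += [f"{start} --> {end}", text, ""]
    return "\n".join(lines)

if __name__ == "__main__":
    words = [
        {"text": "Hello", "start": 0.0, "end": 0.4},
        {"text": "and", "start": 0.4, "end": 0.55},
        {"text": "welcome", "start": 0.55, "end": 1.0},
    ]
    print(words_to_webvtt(words))
```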

Amazon Transcribe

The first of the two new services, Amazon Transcribe, offers accurate transcriptions in English and Spanish at low costs, including keywords, word confidence, timestamps, and punctuation.

UPDATE 2019:
Amazon Transcribe offers more languages now - please see Amazon Transcribe Features!

Pricing
The free tier offers 60 minutes of free usage a month for 12 months. After that, it is billed monthly at a rate of $0.0004 per second ($1.44/h).
More information is available at Amazon Transcribe Pricing.
Custom Vocabulary (Keywords) Support
Custom Vocabulary (called Keywords in Auphonic) gives you the ability to expand and customize the speech recognition vocabulary, specific to your case (i.e. product names, domain-specific terminology, or names of individuals).
The same feature is also available in the Google Cloud Speech API.
Timestamps, Word Confidence, and Punctuation
Amazon Transcribe returns a timestamp and confidence value for each word so that you can easily locate the audio in the original recording by searching for the text.
It also adds some punctuation, which is combined with our own punctuation and formatting automatically.

The high quality (especially in combination with keywords) and low cost of Amazon Transcribe make it attractive, despite it currently supporting only two languages.
However, Amazon Transcribe's processing time is much slower than that of all our other integrated services!
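
If you want to experiment with Amazon Transcribe directly (outside Auphonic), a minimal boto3 sketch for starting a transcription job with a custom vocabulary might look like this. The job name, S3 URI and vocabulary name are placeholders, and the vocabulary must already exist in your AWS account.

```python
# Minimal sketch of starting an Amazon Transcribe job with a custom
# vocabulary via boto3 (standalone use, outside Auphonic). Bucket, file
# and vocabulary names below are placeholders.

import time
import boto3

transcribe = boto3.client("transcribe")

transcribe.start_transcription_job(
    TranscriptionJobName="podcast-episode-42",               # placeholder name
    Media={"MediaFileUri": "s3://my-bucket/episode42.mp3"},   # placeholder URI
    MediaFormat="mp3",
    LanguageCode="en-US",
    Settings={"VocabularyName": "my-podcast-keywords"},       # pre-created custom vocabulary
)

# Poll until the job finishes, then print the URI of the JSON transcript,
# which contains per-word timestamps and confidence values.
while True:
    job = transcribe.get_transcription_job(TranscriptionJobName="podcast-episode-42")
    status = job["TranscriptionJob"]["TranscriptionJobStatus"]
    if status in ("COMPLETED", "FAILED"):
        break
    time.sleep(10)

if status == "COMPLETED":
    print(job["TranscriptionJob"]["Transcript"]["TranscriptFileUri"])
```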

Try it yourself:
Connect your Auphonic account with Amazon Transcribe at our External Services Page.

Speechmatics

Speechmatics offers accurate transcriptions in many languages including word confidence values, timestamps, and punctuation.

Many Languages
Speechmatics’ clear advantage is the sheer number of languages it supports (all major European and some Asiatic languages).
It also has a Global English feature, which supports different English accents during transcription.
Timestamps, Word Confidence, and Punctuation
Like Amazon, Speechmatics creates timestamps, word confidence values, and punctuation.
Pricing
Speechmatics is the most expensive speech recognition service at Auphonic.
Pricing starts at £0.06 per minute of audio and can be purchased in blocks of £10 or £100. This equates to a starting rate of about $4.78/h. A reduced rate of £0.05 per minute ($3.98/h) is available when purchasing £1,000 blocks.
They offer significant discounts for users requiring higher volumes. At this further reduced price point, the cost is similar to (or lower than) the Google Speech API. If you process a lot of content, you should contact them directly at sales@speechmatics.com and say that you wish to use it with Auphonic.
More information is available at Speechmatics Pricing.

Speechmatics offers high-quality transcripts in many languages. But these features come at a price: it is the most expensive speech recognition service at Auphonic.

Unfortunately, their existing Custom Dictionary (keywords) feature, which would further improve the results, is not available in the Speechmatics API yet.

Try it yourself:
Connect your Auphonic account with Speechmatics at our External Services Page.

What do you think?

Any feedback about the new speech recognition services, especially about the recognition quality in various languages, is highly appreciated.

We would also like to hear any comments you have on the transcript editor particularly - is there anything missing, or anything that could be implemented better?
Please let us know!






recognition

More Languages for Amazon Transcribe Speech Recognition

Until recently, Amazon Transcribe supported speech recognition in English and Spanish only.
Now they have included French, Italian and Portuguese as well, and a few other languages (including German) are in private beta.

Update March 2019:
Now Amazon Transcribe supports German and Korean as well.

The Auphonic Audio Inspector on the status page of a finished Multitrack Production including speech recognition.


Amazon Transcribe is integrated as a speech recognition engine within Auphonic and offers accurate transcriptions (compared to other services) at low costs, including keywords / custom vocabulary support, word confidence, timestamps, and punctuation.
See the following AWS blog post and video for more information about recent Amazon Transcribe developments: Transcribe speech in three new languages: French, Italian, and Brazilian Portuguese.

Amazon Transcribe is also a perfect fit if you want to use our Transcript Editor, because you will be able to see word timestamps and confidence values to instantly check which sections/words should be corrected manually to increase the transcription accuracy:


Screenshot of our Transcript Editor with word confidence highlighting and the edit bar.

These features are also available if you use Speechmatics, but unfortunately not in our other integrated speech recognition services.

About Speech Recognition within Auphonic

Auphonic has built a layer on top of a few external speech recognition services to make audio searchable:
Our classifiers generate metadata during the analysis of an audio signal (music segments, silence, multiple speakers, etc.) to divide the audio file into small and meaningful segments, which are processed by the speech recognition engine. The results from all segments are then combined, and meaningful timestamps, simple punctuation and structuring are added to the resulting text.
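
As a simplified, hypothetical illustration of this segment-and-combine step (not Auphonic's actual implementation), per-segment word timestamps could be shifted by each segment's start time and merged into one transcript:

```python
# Simplified, hypothetical sketch of combining per-segment recognition
# results into one transcript with global timestamps (illustrative only).

def combine_segments(segments):
    """Each segment: {"offset": segment start in s, "words": [{"text", "start", "end"}]}.
    Word times are relative to the segment; shift them to global time."""
    combined = []
    for seg in sorted(segments, key=lambda s: s["offset"]):
        for w in seg["words"]:
            combined.append({
                "text": w["text"],
                "start": seg["offset"] + w["start"],
                "end": seg["offset"] + w["end"],
            })
    return combined

segments = [
    {"offset": 0.0, "words": [{"text": "Welcome", "start": 0.1, "end": 0.6}]},
    {"offset": 12.5, "words": [{"text": "Today", "start": 0.0, "end": 0.4}]},  # after a music segment
]
print(combine_segments(segments))
```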

To learn more about speech recognition within Auphonic, take a look at our Speech Recognition and Transcript Editor help pages or listen to our Speech Recognition Audio Examples.

A comparison table of our integrated services (price, quality, languages, speed, features, etc.) can be found here: Speech Recognition Services Comparison.

Conclusion

We hope that Amazon and others will continue to add new languages, so that accurate and inexpensive automatic speech recognition becomes available in many languages.

Don't hesitate to contact us if you have any questions or feedback about speech recognition or our transcript editor!






recognition

Talking to computers (part 1): Why is speech recognition so difficult?

Although the performance of today's speech recognition systems is impressive, the experience for many is still one of errors, corrections, frustration and abandoning speech in favour of alternative interaction methods. We take a closer look at speech and find out why speech recognition is so difficult.




recognition

Text Recognition in the Wild: A Survey. (arXiv:2005.03492v1 [cs.CV])

The history of text can be traced back over thousands of years. Rich and precise semantic information carried by text is important in a wide range of vision-based application scenarios. Therefore, text recognition in natural scenes has been an active research field in computer vision and pattern recognition. In recent years, with the rise and development of deep learning, numerous methods have shown promise in terms of innovation, practicality, and efficiency. This paper aims to (1) summarize the fundamental problems and the state-of-the-art associated with scene text recognition; (2) introduce new insights and ideas; (3) provide a comprehensive review of publicly available resources; (4) point out directions for future work. In summary, this literature review attempts to present the entire picture of the field of scene text recognition. It provides a comprehensive reference for people entering this field, and could be helpful to inspire future research. Related resources are available at our Github repository: https://github.com/HCIILAB/Scene-Text-Recognition.




recognition

ContextNet: Improving Convolutional Neural Networks for Automatic Speech Recognition with Global Context. (arXiv:2005.03191v1 [eess.AS])

Convolutional neural networks (CNN) have shown promising results for end-to-end speech recognition, albeit still behind other state-of-the-art methods in performance. In this paper, we study how to bridge this gap and go beyond with a novel CNN-RNN-transducer architecture, which we call ContextNet. ContextNet features a fully convolutional encoder that incorporates global context information into convolution layers by adding squeeze-and-excitation modules. In addition, we propose a simple scaling method that scales the widths of ContextNet, achieving a good trade-off between computation and accuracy. We demonstrate that on the widely used LibriSpeech benchmark, ContextNet achieves a word error rate (WER) of 2.1%/4.6% without external language model (LM), 1.9%/4.1% with LM and 2.9%/7.0% with only 10M parameters on the clean/noisy LibriSpeech test sets. This compares to the previous best published system of 2.0%/4.6% with LM and 3.9%/11.3% with 20M parameters. The superiority of the proposed ContextNet model is also verified on a much larger internal dataset.
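
The squeeze-and-excitation modules mentioned in the abstract can be sketched compactly. The following NumPy toy, with arbitrary shapes and reduction ratio and no claim to match the authors' implementation, shows the three steps: global pooling ("squeeze"), a small gated bottleneck ("excitation"), and per-channel rescaling.

```python
# Toy NumPy sketch of a squeeze-and-excitation (SE) module as described in
# the abstract: global pooling ("squeeze"), a small bottleneck with a
# sigmoid gate ("excitation"), then per-channel rescaling. Shapes and the
# reduction ratio are arbitrary; this is not the authors' implementation.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def squeeze_excite(x, w1, w2):
    """x: (time, channels) feature map from a conv layer.
    w1: (channels, channels // r), w2: (channels // r, channels)."""
    squeezed = x.mean(axis=0)                              # squeeze: global context per channel
    gate = sigmoid(np.maximum(squeezed @ w1, 0.0) @ w2)    # excitation: bottleneck + sigmoid
    return x * gate                                        # rescale each channel by its gate

rng = np.random.default_rng(0)
time_steps, channels, r = 100, 64, 8
x = rng.standard_normal((time_steps, channels))
w1 = rng.standard_normal((channels, channels // r)) * 0.1
w2 = rng.standard_normal((channels // r, channels)) * 0.1
print(squeeze_excite(x, w1, w2).shape)  # (100, 64)
```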




recognition

Apparatus and method for recognizing representative user behavior based on recognition of unit behaviors

An apparatus for recognizing a representative user behavior includes a unit-data extracting unit configured to extract at least one unit data from sensor data, a feature-information extracting unit configured to extract feature information from each of the at least one unit data, a unit-behavior recognizing unit configured to recognize a respective unit behavior for each of the at least one unit data based on the feature information, and a representative-behavior recognizing unit configured to recognize at least one representative behavior based on the respective unit behavior recognized for each of the at least one unit data.




recognition

Script compliance and quality assurance based on speech recognition and duration of interaction

Apparatus and methods are provided for using automatic speech recognition to analyze a voice interaction and verify compliance of an agent reading a script to a client during the voice interaction. In one aspect of the invention, a communications system includes a user interface, a communications network, and a call center having an automatic speech recognition component. In other aspects of the invention, a script compliance method includes the steps of conducting a voice interaction between an agent and a client and evaluating the voice interaction with an automatic speech recognition component adapted to analyze the voice interaction and determine whether the agent has adequately followed the script. In yet still further aspects of the invention, the duration of a given interaction can be analyzed, either apart from or in combination with the script compliance analysis above, to seek to identify instances of agent non-compliance, of fraud, or of quality-analysis issues.
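
As a hypothetical illustration of the compliance idea described above (not the patented system), a recognized transcript could be scored against the required script with a normalized similarity ratio, combined with a simple duration check; the thresholds and helper names are assumptions.

```python
# Hypothetical illustration of checking a recognized transcript against a
# required script (not the patented system): a normalized text-similarity
# ratio plus a call-duration check.

from difflib import SequenceMatcher

def script_compliance(script, transcript, call_seconds,
                      min_similarity=0.8, min_seconds=30):
    """Return (compliant, similarity). Both texts are lower-cased and
    whitespace-normalized before comparison."""
    norm = lambda s: " ".join(s.lower().split())
    similarity = SequenceMatcher(None, norm(script), norm(transcript)).ratio()
    compliant = similarity >= min_similarity and call_seconds >= min_seconds
    return compliant, similarity

script = "This call may be recorded for quality assurance purposes."
transcript = "this call may be recorded for quality assurance"
print(script_compliance(script, transcript, call_seconds=95))
```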




recognition

Using a physical phenomenon detector to control operation of a speech recognition engine

A device may include a physical phenomenon detector. The physical phenomenon detector may detect a physical phenomenon related to the device. In response to detecting the physical phenomenon, the device may record audio data that includes speech. The speech may be transcribed with a speech recognition engine. The speech recognition engine may be included in the device, or may be included with a remote computing device with which the device may communicate.




recognition

Speaker recognition from telephone calls

The present invention relates to a method for speaker recognition, comprising the steps of obtaining and storing speaker information for at least one target speaker; obtaining a plurality of speech samples from a plurality of telephone calls from at least one unknown speaker; classifying the speech samples according to the at least one unknown speaker thereby providing speaker-dependent classes of speech samples; extracting speaker information for the speech samples of each of the speaker-dependent classes of speech samples; combining the extracted speaker information for each of the speaker-dependent classes of speech samples; comparing the combined extracted speaker information for each of the speaker-dependent classes of speech samples with the stored speaker information for the at least one target speaker to obtain at least one comparison result; and determining whether one of the at least one unknown speakers is identical with the at least one target speaker based on the at least one comparison result.