cognition

2020 NECA Recognition of Achievement in Safety Excellence and ZERO Injury Programs

The recipients of the 2020 Recognition of Achievement in Safety Excellence and Recognition of Achievement in Zero Injury programs will be posted on the NECA Recognition of Safety Achievement Program website in the near future. There were 159 Recognition of Achievement in Safety Excellence and 90 Recognition of Achievement in Zero Injury winners for 2020. These recipients will each receive plaques commemorating their accomplishment and be recognized during a session at the 10th Annual NSPC in Chicago, IL later this year. Thank you to all the companies that submitted their applications and continue to strive for Safety Excellence and Zero Injuries in the Electrical Industry.




cognition

How the New Revenue Recognition Standard Will Impact Manufacturers

The new revenue recognition standard includes important provisions that manufacturers need to be aware of. Effective 1/1/2019 for private companies with calendar year ends, the new standard will change the way manufacturing companies recognize revenue.
Variable Consideration
Manufacturing companies will… Read More

The post How the New Revenue Recognition Standard Will Impact Manufacturers appeared first on Anders CPAs.



  • Audit and Advisory
  • Manufacturing and Distribution
  • revenue recognition
  • revenue recognition standard

cognition

SCCM Pod-327 Does Simulation Improve Recognition and Management of Pediatric Septic Shock?

Margaret Parker, MD, MCCM, speaks with Mark C. Dugan, MD, about the article: Does Simulation Improve Recognition and Management of Pediatric Septic Shock, and If One Simulation Is Good, Is More Simulation Better?




cognition

Grand Canyon’s Trail of Time Receives National Recognition

Grand Canyon National Park's Trail of Time was recently honored with a National Association for Interpretation Media Award - first place in the Wayside Exhibit category. https://www.nps.gov/grca/learn/news/2011-12-16_tot.htm




cognition

New Auphonic Transcript Editor and Improved Speech Recognition Services

Back in late 2016, we introduced Speech Recognition at Auphonic. This allows our users to create transcripts of their recordings and, more usefully, it means podcasts become searchable.
Now we have integrated two more speech recognition engines, Amazon Transcribe and Speechmatics, and whilst integrating these services, we also took the opportunity to develop a completely new Transcript Editor:

Screenshot of our Transcript Editor with word confidence highlighting and the edit bar.
Try out the Transcript Editor Examples yourself!


The new Auphonic Transcript Editor is included directly in our HTML transcript output file, displays word confidence values so you can instantly see which sections should be checked manually, supports direct audio playback and HTML/PDF/WebVTT export, and allows you to share the editor with someone else for further editing.

The new services, Amazon Transcribe and Speechmatics, offer transcription quality improvements compared to our other integrated speech recognition services.
They also return word confidence values, timestamps and some punctuation, which are exported to our output files.

The Auphonic Transcript Editor

With the integration of the two new services offering improved recognition quality and word timestamps alongside confidence scores, we realized that we could leverage these improvements to give our users easy-to-use transcription editing.
Therefore we developed a new, open source transcript editor, which is embedded directly in our HTML output file and has been designed to make checking and editing transcripts as easy as possible.

Main features of our transcript editor:
  • Edit the transcription directly in the HTML document.
  • Show/hide word confidence, to instantly see which sections should be checked manually (if you use Amazon Transcribe or Speechmatics as the speech recognition engine).
  • Listen to audio playback of specific words directly in the HTML editor.
  • Share the transcript editor with others: as the editor is embedded directly in the HTML file (no external dependencies), you can simply send the HTML file to someone else to manually check the automatically generated transcription.
  • Export the edited transcript to HTML, PDF or WebVTT.
  • Completely usable on all mobile devices and desktop browsers.

Examples: Try Out the Transcript Editor

Here are two examples of the new transcript editor, taken from our speech recognition audio examples page:

1. Singletrack Transcript Editor Example
Singletrack speech recognition example from the first 10 minutes of Common Sense 309 by Dan Carlin. Speechmatics was used as the speech recognition engine, without any keywords or further manual editing.
2. Multitrack Transcript Editor Example
A multitrack automatic speech recognition transcript example from the first 20 minutes of TV Eye on Marvel - Luke Cage S1E1. Amazon Transcribe was used as the speech recognition engine, without any further manual editing.
As this is a multitrack production, the transcript includes exact speaker names as well (try to edit them!).

Transcript Editing

By clicking the Edit Transcript button, a dashed box appears around the text. This indicates that the text is now freely editable on this page. Your changes can be saved by using one of the export options (see below).
If you make a mistake whilst editing, you can simply use the undo/redo function of the browser to undo or redo your changes.


When working with multitrack productions, another helpful feature is the ability to change all speaker names at once throughout the whole transcript just by editing one instance. Simply click on an occurrence of a speaker label and change it to the appropriate name; the new name will then appear throughout the whole transcript.
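To illustrate the propagation logic, here is a minimal Python sketch; the transcript structure (a list of segments with a speaker label) is an assumption for the example, not Auphonic's actual internal format.

```python
# Illustrative sketch only: renaming one speaker label propagates to
# every segment of a multitrack transcript. The segment structure is
# an assumption, not Auphonic's real data model.

def rename_speaker(transcript, old_name, new_name):
    """Replace a speaker label throughout the whole transcript."""
    for segment in transcript:
        if segment["speaker"] == old_name:
            segment["speaker"] = new_name
    return transcript

transcript = [
    {"speaker": "Track 1", "text": "Welcome back to the show."},
    {"speaker": "Track 2", "text": "Thanks for having me."},
    {"speaker": "Track 1", "text": "Let's dive in."},
]
rename_speaker(transcript, "Track 1", "Dan")  # every "Track 1" becomes "Dan"
```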

Word Confidence Highlighting

Word confidence values are shown visually in the transcript editor, highlighted in shades of red (see screenshot above). The shade of red depends on the actual word confidence value: the darker the red, the lower the confidence. This means you can instantly see which sections you should check and rework manually to increase the accuracy.

Once you have edited the highlighted text, it will be set to white again, so it’s easy to see which sections still require editing.
Use the button Add/Remove Highlighting to disable/enable word confidence highlighting.
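As an illustration of the highlighting rule, here is a small Python sketch; the 0.95 threshold and the exact colour mapping are assumptions for the example, not Auphonic's actual values.

```python
# Illustrative sketch: map a word confidence in [0, 1] to a shade of red.
# The darker the red, the lower the confidence; confident words stay white.

def confidence_to_color(confidence, threshold=0.95):  # threshold is assumed
    """Return an (R, G, B) tuple for a word's highlight colour."""
    if confidence >= threshold:
        return (255, 255, 255)            # confident enough: no highlight
    # Lower confidence scales the green/blue channels down -> darker red.
    intensity = int(255 * confidence / threshold)
    return (255, intensity, intensity)

print(confidence_to_color(0.99))  # (255, 255, 255)
print(confidence_to_color(0.50))  # (255, 134, 134), needs manual checking
```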

NOTE: Word confidence values are only available from Amazon Transcribe and Speechmatics, not from our other integrated speech recognition services!

Audio Playback

The button Activate/Stop Play-on-click allows you to hear the audio playback of the section you click on (by clicking directly on the word in the transcript editor).
This is helpful in allowing you to check the accuracy of certain words by being able to listen to them directly whilst editing, without having to go back and try to find that section within your audio file.

If you use an External Service in your production to export the resulting audio file, we will automatically use the exported file in the transcript editor.
Otherwise we will use the output file generated by Auphonic. Please note that this file is password-protected for the current Auphonic user and will be deleted after 21 days.

If no audio file is available in the transcript editor, or it cannot be played because of the password protection, you will see the button Add Audio File to add a new audio file for playback.

Export Formats, Save/Share Transcript Editor

Click on the button Export... to see all export and saving/sharing options:

Save/Share Editor
The Save Editor button stores the whole transcript editor with all its current changes into a new HTML file. Use this button to save your changes for further editing or if you want to share your transcript with someone else for manual corrections (as the editor is embedded directly in the HTML file without any external dependencies).
Export HTML / Export PDF / Export WebVTT
Use one of these buttons to export the edited transcript to HTML (for WordPress, Word, etc.), to PDF (via the browser print function) or to WebVTT (so that the edited transcript can be used as subtitles or imported in web audio players of the Podlove Publisher or Podigee).
Every export format is rendered directly in the browser; no server needed.
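To make the WebVTT export concrete, here is a minimal Python sketch that turns edited segments with start/end times into WebVTT cues; the segment structure is an assumption for the example.

```python
# Illustrative sketch: convert transcript segments (start/end in seconds)
# into a WebVTT subtitle file.

def vtt_timestamp(t):
    """Format seconds as a WebVTT timestamp HH:MM:SS.mmm."""
    hours, rest = divmod(t, 3600)
    minutes, seconds = divmod(rest, 60)
    return f"{int(hours):02d}:{int(minutes):02d}:{seconds:06.3f}"

def to_webvtt(segments):
    lines = ["WEBVTT", ""]
    for seg in segments:
        lines.append(f"{vtt_timestamp(seg['start'])} --> {vtt_timestamp(seg['end'])}")
        lines.append(seg["text"])
        lines.append("")                  # blank line terminates the cue
    return "\n".join(lines)

print(to_webvtt([{"start": 0.0, "end": 2.5, "text": "Welcome back."}]))
# WEBVTT
#
# 00:00:00.000 --> 00:00:02.500
# Welcome back.
```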

Amazon Transcribe

The first of the two new services, Amazon Transcribe, offers accurate transcriptions in English and Spanish at low costs, including keywords, word confidence, timestamps, and punctuation.

UPDATE 2019:
Amazon Transcribe offers more languages now - please see Amazon Transcribe Features!

Pricing
The free tier offers 60 minutes of free usage per month for 12 months. After that, usage is billed at a rate of $0.0004 per second ($1.44/h).
More information is available at Amazon Transcribe Pricing.
Custom Vocabulary (Keywords) Support
Custom Vocabulary (called Keywords in Auphonic) gives you the ability to expand and customize the speech recognition vocabulary specific to your use case (e.g. product names, domain-specific terminology, or names of individuals).
The same feature is also available in the Google Cloud Speech API.
Timestamps, Word Confidence, and Punctuation
Amazon Transcribe returns a timestamp and confidence value for each word so that you can easily locate the audio in the original recording by searching for the text.
It also adds some punctuation, which is combined with our own punctuation and formatting automatically.
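If you would like to experiment with Amazon Transcribe directly, outside of the Auphonic integration, a minimal sketch using boto3 (the AWS SDK for Python) looks like this; the job, bucket and vocabulary names are placeholders.

```python
# Illustrative sketch of calling Amazon Transcribe directly via boto3.
# All names/URIs below are placeholders, not Auphonic's configuration.

import boto3

transcribe = boto3.client("transcribe", region_name="us-east-1")

transcribe.start_transcription_job(
    TranscriptionJobName="my-podcast-episode",             # placeholder
    Media={"MediaFileUri": "s3://my-bucket/episode.mp3"},  # placeholder
    MediaFormat="mp3",
    LanguageCode="en-US",
    Settings={"VocabularyName": "my-keywords"},  # optional custom vocabulary
)

# The finished job's result JSON contains start/end times and a confidence
# value for every recognized word.
status = transcribe.get_transcription_job(
    TranscriptionJobName="my-podcast-episode")
print(status["TranscriptionJob"]["TranscriptionJobStatus"])
```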

The high quality (especially in combination with keywords) and low cost of Amazon Transcribe make it attractive, despite it currently supporting only two languages.
However, the processing time of Amazon Transcribe is much slower than that of all our other integrated services!

Try it yourself:
Connect your Auphonic account with Amazon Transcribe at our External Services Page.

Speechmatics

Speechmatics offers accurate transcriptions in many languages including word confidence values, timestamps, and punctuation.

Many Languages
Speechmatics’ clear advantage is the sheer number of languages it supports (all major European and some Asian languages).
It also has a Global English feature, which supports different English accents during transcription.
Timestamps, Word Confidence, and Punctuation
Like Amazon, Speechmatics creates timestamps, word confidence values, and punctuation.
Pricing
Speechmatics is the most expensive speech recognition service at Auphonic.
Pricing starts at £0.06 per minute of audio, purchasable in blocks of £10 or £100. This equates to a starting rate of about $4.78/h. A reduced rate of £0.05 per minute (about $3.98/h) is available when purchasing £1,000 blocks.
They offer significant discounts for users requiring higher volumes. At that further reduced price point, the cost is similar to the Google Speech API (or lower). If you process a lot of content, you should contact them directly at sales@speechmatics.com and mention that you wish to use Speechmatics with Auphonic.
More information is available at Speechmatics Pricing.

Speechmatics offers high-quality transcripts in many languages. But these features come at a price: it is the most expensive speech recognition service at Auphonic.

Unfortunately, their existing Custom Dictionary (keywords) feature, which would further improve the results, is not available in the Speechmatics API yet.

Try it yourself:
Connect your Auphonic account with Speechmatics at our External Services Page.

What do you think?

Any feedback about the new speech recognition services, especially about the recognition quality in various languages, is highly appreciated.

We would also like to hear any comments you have on the transcript editor particularly - is there anything missing, or anything that could be implemented better?
Please let us know!






cognition

More Languages for Amazon Transcribe Speech Recognition

Until recently, Amazon Transcribe supported speech recognition in English and Spanish only.
Now they have added French, Italian and Portuguese as well, and a few other languages (including German) are in private beta.

Update March 2019:
Now Amazon Transcribe supports German and Korean as well.

The Auphonic Audio Inspector on the status page of a finished Multitrack Production including speech recognition.


Amazon Transcribe is integrated as a speech recognition engine within Auphonic and offers accurate transcriptions (compared to other services) at low cost, including keywords / custom vocabulary support, word confidence, timestamps, and punctuation.
See the following AWS blog post and video for more information about recent Amazon Transcribe developments: Transcribe speech in three new languages: French, Italian, and Brazilian Portuguese.

Amazon Transcribe is also a perfect fit if you want to use our Transcript Editor, because you will be able to see word timestamps and confidence values and instantly check which sections or words should be corrected manually to increase the transcription accuracy:


Screenshot of our Transcript Editor with word confidence highlighting and the edit bar.

These features are also available if you use Speechmatics, but unfortunately not in our other integrated speech recognition services.

About Speech Recognition within Auphonic

Auphonic has built a layer on top of a few external speech recognition services to make audio searchable:
Our classifiers generate metadata during the analysis of an audio signal (music segments, silence, multiple speakers, etc.) to divide the audio file into small and meaningful segments, which are processed by the speech recognition engine. The results from all segments are then combined, and meaningful timestamps, simple punctuation and structuring are added to the resulting text.
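As a rough illustration of that combination step (not Auphonic's actual code), the per-segment results can be merged by shifting each word's segment-relative timestamp by the segment's offset within the whole file:

```python
# Illustrative sketch: merge per-segment recognition results into one
# word list with absolute timestamps. The data layout is an assumption.

def combine_segments(segments):
    """segments: list of {"offset": float, "words": [{...}, ...]} dicts."""
    words = []
    for seg in segments:
        for word in seg["words"]:
            words.append({
                "text": word["text"],
                # shift segment-relative time to an absolute position
                "start": seg["offset"] + word["start"],
                "confidence": word["confidence"],
            })
    return words

segments = [
    {"offset": 0.0,  "words": [{"text": "Hello", "start": 0.2, "confidence": 0.98}]},
    {"offset": 12.5, "words": [{"text": "again", "start": 0.1, "confidence": 0.91}]},
]
print(combine_segments(segments)[1]["start"])  # 12.6
```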

To learn more about speech recognition within Auphonic, take a look at our Speech Recognition and Transcript Editor help pages or listen to our Speech Recognition Audio Examples.

A comparison table of our integrated services (price, quality, languages, speed, features, etc.) can be found here: Speech Recognition Services Comparison.

Conclusion

We hope that Amazon and others will continue to add new languages, so that accurate and inexpensive automatic speech recognition becomes available in many languages.

Don't hesitate to contact us if you have any questions or feedback about speech recognition or our transcript editor!






cognition

Talking to computers (part 1): Why is speech recognition so difficult?

Although the performance of today's speech recognition systems is impressive, the experience for many is still one of errors, corrections, frustration and abandoning speech in favour of alternative interaction methods. We take a closer look at speech and find out why speech recognition is so difficult.




cognition

Text Recognition in the Wild: A Survey. (arXiv:2005.03492v1 [cs.CV])

The history of text can be traced back over thousands of years. Rich and precise semantic information carried by text is important in a wide range of vision-based application scenarios. Therefore, text recognition in natural scenes has been an active research field in computer vision and pattern recognition. In recent years, with the rise and development of deep learning, numerous methods have shown promise in terms of innovation, practicality, and efficiency. This paper aims to (1) summarize the fundamental problems and the state-of-the-art associated with scene text recognition; (2) introduce new insights and ideas; (3) provide a comprehensive review of publicly available resources; (4) point out directions for future work. In summary, this literature review attempts to present the entire picture of the field of scene text recognition. It provides a comprehensive reference for people entering this field, and could be helpful in inspiring future research. Related resources are available at our Github repository: https://github.com/HCIILAB/Scene-Text-Recognition.




cognition

ContextNet: Improving Convolutional Neural Networks for Automatic Speech Recognition with Global Context. (arXiv:2005.03191v1 [eess.AS])

Convolutional neural networks (CNN) have shown promising results for end-to-end speech recognition, albeit still behind other state-of-the-art methods in performance. In this paper, we study how to bridge this gap and go beyond with a novel CNN-RNN-transducer architecture, which we call ContextNet. ContextNet features a fully convolutional encoder that incorporates global context information into convolution layers by adding squeeze-and-excitation modules. In addition, we propose a simple scaling method that scales the widths of ContextNet, achieving a good trade-off between computation and accuracy. We demonstrate that on the widely used LibriSpeech benchmark, ContextNet achieves a word error rate (WER) of 2.1%/4.6% without external language model (LM), 1.9%/4.1% with LM and 2.9%/7.0% with only 10M parameters on the clean/noisy LibriSpeech test sets. This compares to the previous best published system of 2.0%/4.6% with LM and 3.9%/11.3% with 20M parameters. The superiority of the proposed ContextNet model is also verified on a much larger internal dataset.
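For readers unfamiliar with squeeze-and-excitation, here is a rough NumPy sketch of the idea the abstract refers to (illustrative only, not the ContextNet implementation): a global average over time produces one vector per channel, and a small bottleneck network turns it into per-channel gates that rescale the feature map.

```python
# Illustrative squeeze-and-excitation sketch, not ContextNet itself.

import numpy as np

def squeeze_and_excitation(x, w1, w2):
    """x: (time, channels) feature map; w1, w2: bottleneck weights."""
    z = x.mean(axis=0)                      # squeeze: global context vector
    s = np.maximum(z @ w1, 0.0)             # bottleneck + ReLU
    gate = 1.0 / (1.0 + np.exp(-(s @ w2)))  # sigmoid gates, one per channel
    return x * gate                         # rescale every timestep

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 64))   # 100 frames, 64 channels
w1 = rng.normal(size=(64, 8))    # reduce to an 8-unit bottleneck
w2 = rng.normal(size=(8, 64))
assert squeeze_and_excitation(x, w1, w2).shape == x.shape
```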




cognition

Apparatus and method for recognizing representative user behavior based on recognition of unit behaviors

An apparatus for recognizing a representative user behavior includes a unit-data extracting unit configured to extract at least one unit data from sensor data, a feature-information extracting unit configured to extract feature information from each of the at least one unit data, a unit-behavior recognizing unit configured to recognize a respective unit behavior for each of the at least one unit data based on the feature information, and a representative-behavior recognizing unit configured to recognize at least one representative behavior based on the respective unit behavior recognized for each of the at least one unit data.




cognition

Script compliance and quality assurance based on speech recognition and duration of interaction

Apparatus and methods are provided for using automatic speech recognition to analyze a voice interaction and verify compliance of an agent reading a script to a client during the voice interaction. In one aspect of the invention, a communications system includes a user interface, a communications network, and a call center having an automatic speech recognition component. In other aspects of the invention, a script compliance method includes the steps of conducting a voice interaction between an agent and a client and evaluating the voice interaction with an automatic speech recognition component adapted to analyze the voice interaction and determine whether the agent has adequately followed the script. In yet still further aspects of the invention, the duration of a given interaction can be analyzed, either apart from or in combination with the script compliance analysis above, to seek to identify instances of agent non-compliance, of fraud, or of quality-analysis issues.




cognition

Using a physical phenomenon detector to control operation of a speech recognition engine

A device may include a physical phenomenon detector. The physical phenomenon detector may detect a physical phenomenon related to the device. In response to detecting the physical phenomenon, the device may record audio data that includes speech. The speech may be transcribed with a speech recognition engine. The speech recognition engine may be included in the device, or may be included with a remote computing device with which the device may communicate.




cognition

Speaker recognition from telephone calls

The present invention relates to a method for speaker recognition, comprising the steps of obtaining and storing speaker information for at least one target speaker; obtaining a plurality of speech samples from a plurality of telephone calls from at least one unknown speaker; classifying the speech samples according to the at least one unknown speaker thereby providing speaker-dependent classes of speech samples; extracting speaker information for the speech samples of each of the speaker-dependent classes of speech samples; combining the extracted speaker information for each of the speaker-dependent classes of speech samples; comparing the combined extracted speaker information for each of the speaker-dependent classes of speech samples with the stored speaker information for the at least one target speaker to obtain at least one comparison result; and determining whether one of the at least one unknown speakers is identical with the at least one target speaker based on the at least one comparison result.




cognition

System, method and program product for providing automatic speech recognition (ASR) in a shared resource environment

A speech recognition system, method of recognizing speech and a computer program product therefor. A client device identified with a context for an associated user selectively streams audio to a provider computer, e.g., a cloud computer. Speech recognition receives streaming audio, maps utterances to specific textual candidates and determines a likelihood of a correct match for each mapped textual candidate. A context model selectively winnows candidates to resolve recognition ambiguity according to context whenever multiple textual candidates are recognized as potential matches for the same mapped utterance. Matches are used to update the context model, which may be used for multiple users in the same context.




cognition

Speech recognition and synthesis utilizing context dependent acoustic models containing decision trees

A speech recognition method including the steps of receiving a speech input from a known speaker of a sequence of observations and determining the likelihood of a sequence of words arising from the sequence of observations using an acoustic model. The acoustic model has a plurality of model parameters describing probability distributions which relate a word or part thereof to an observation and has been trained using first training data and adapted using second training data to said speaker. The speech recognition method also determines the likelihood of a sequence of observations occurring in a given language using a language model and combines the likelihoods determined by the acoustic model and the language model and outputs a sequence of words identified from said speech input signal. The acoustic model is context based for the speaker, the context based information being contained in the model using a plurality of decision trees and the structure of the decision trees is based on second training data.




cognition

Image-based character recognition

Various embodiments enable a device to perform tasks such as processing an image to recognize and locate text in the image, and providing the recognized text to an application executing on the device for performing a function (e.g., calling a number, opening an internet browser, etc.) associated with the recognized text. In at least one embodiment, processing the image includes substantially simultaneously or concurrently processing the image with at least two recognition engines, such as at least two optical character recognition (OCR) engines, running in a multithreaded mode. In at least one embodiment, the recognition engines can be tuned so that their respective processing speeds are roughly the same. Utilizing multiple recognition engines enables processing latency to be close to that of using only one recognition engine.
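As a rough illustration of the concurrent multi-engine approach the abstract describes (the engine functions below are placeholders, not real OCR engines):

```python
# Illustrative sketch: run two stand-in OCR "engines" concurrently and
# keep the higher-confidence result.

from concurrent.futures import ThreadPoolExecutor

def engine_a(image_bytes):  # placeholder for a real OCR engine
    return {"text": "CALL 555-0100", "confidence": 0.91}

def engine_b(image_bytes):  # placeholder for a second OCR engine
    return {"text": "CALL 555-0100", "confidence": 0.88}

def recognize(image_bytes):
    """Run both engines in parallel; latency stays close to one engine."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        results = pool.map(lambda engine: engine(image_bytes),
                           (engine_a, engine_b))
        return max(results, key=lambda r: r["confidence"])

print(recognize(b"...")["text"])  # "CALL 555-0100"
```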




cognition

Speech recognition apparatus with means for preventing errors due to delay in speech recognition

When a speech sound of at least a predetermined sound pressure is externally input while a time measurement is not being performed, a time measuring circuit starts a time measurement responsive to a signal from a speech detector. When another speech sound of at least a predetermined sound pressure is externally input while a time measurement is being performed by the time measuring circuit, a measurement time measured by the time measuring circuit at this moment is stored in a time information memory. After a predetermined time has elapsed, if a speech recognition circuit recognizes that the externally input speech sound is a "stop" command, the time measurement operation performed by the time measuring circuit is stopped, and the time information stored in the time information memory is read out and displayed as measurement time information on a display unit.




cognition

Disposable electrode and automatic information recognition apparatus

A disposable electrode includes: an electrode pad; and a connector, connecting the electrode pad to a defibrillator, and including an information holder that can be provided with a transmissive opening or a light reflective member, the information holder holding information about at least an expiration date, depending on presence or absence of the transmissive opening or the light reflective member, the information holder allowing the information to be notified from the defibrillator when the connector is connected to the defibrillator.




cognition

Number of players determined using facial recognition

There is provided a system and method for determining a number of players present using facial recognition. There is provided a method comprising capturing an image of the players present, and determining the number of players present based on the image. In this manner, players may more easily configure game settings, whereas spectators may be presented a more engaging experience.




cognition

Method and system for quantitative assessment of word recognition sensitivity

A method and system are presented to address quantitative assessment of word recognition sensitivity of a subject, where the method comprises the steps of: (1) presenting at least one scene, comprising a plurality of letters and a background, to a subject on a display; (2) moving the plurality of letters relative to the scene; (3) receiving feedback from the subject via at least one; (4) quantitatively refining the received feedback; (5) modulating the saliency of the plurality of letters relative to accuracy of the quantitatively refined feedback; (6) calculating a critical threshold parameter; and (7) recording a critical threshold parameter onto a tangible computer readable medium.




cognition

Amusement ride comprising a facial expression recognition system

The amusement ride 1 comprises a track 2 and a vehicle 3 being moveable along the track 2 at a velocity v. Within the vehicle 3 a video camera 4 is installed. The video camera 4 takes a video film of the face of a passenger received within the vehicle 3 during a ride. A sender 5 transmits the data 6 to a facial expression recognition system 7. The result 10 of the process carried out by facial expression recognition system 7 may be downloaded from a server 11 by a client 13.




cognition

Bezel assembly comprising image recognition for use with an automated transaction device

The bezel assembly for data reception, for use with a bill validator in a financial transactional device, includes a bezel housing and a data reception assembly. The bezel housing includes a customer-facing front portion and a back plate connectable to the bill validator that is mounted within the transactional device cabinet. The front portion includes an insertion/dispensing slot for receiving currency and a projecting protrusion forward of the casing. The forward-extending protrusion accommodates at least a portion of the data reception assembly. The bezel assembly can include a wireless communication function that is communicably connectable with a mobile device via a wireless communication method, a manual entry function, a biometric reader, one or more cameras for scanning and decrypting 2D barcodes and the like, thus enhancing the overall functionality of the financial transactional device.




cognition

CONTINUOUS KEYBOARD RECOGNITION

Methods, systems, and apparatus for receiving data indicating a location of a particular touchpoint representing a latest received touchpoint in a sequence of received touchpoints; identifying candidate characters associated with the particular touchpoint; generating, for each of the candidate characters, a confidence score; identifying different candidate sequences of characters each including for each received touchpoint, one candidate character associated with a location of the received touchpoint, and one of the candidate characters associated with the particular touchpoint; for each different candidate sequence of characters, determining a language model score and generating a transcription score based at least on the confidence score for one or more of the candidate characters in the candidate sequence of characters and the language model score for the candidate sequence of characters; selecting, and providing for output, a representative sequence of characters from among the candidate sequences of characters based at least on the transcription scores.
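A rough sketch of the scoring scheme the abstract describes (illustrative: the averaging and the weighting are assumptions, not the patent's actual formula):

```python
# Illustrative sketch: score candidate character sequences by combining
# per-character confidence scores with a language model score.

def transcription_score(candidate, lm_score, weight=0.5):  # weight assumed
    """candidate: list of (char, confidence) pairs for one sequence."""
    confidence = sum(conf for _, conf in candidate) / len(candidate)
    return weight * confidence + (1 - weight) * lm_score

candidates = {
    "hello": ([("h", .9), ("e", .8), ("l", .95), ("l", .95), ("o", .9)], 0.7),
    "jello": ([("j", .4), ("e", .8), ("l", .95), ("l", .95), ("o", .9)], 0.2),
}
best = max(candidates, key=lambda k: transcription_score(*candidates[k]))
print(best)  # "hello" wins on both confidence and language model score
```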




cognition

Latency enhanced note recognition method in gaming

The present invention relates to the field of audio recognition, in particular to computer implemented note recognition methods in a gaming application. Furthermore, the present invention relates to improving latency of such audio recognition methods. One of the embodiments of the invention described herein is a method for note recognition of an audio source. The method includes: dividing an audio input into a plurality of frames, each frame having a pre-determined length, conducting a frequency analysis of at least a set of the plurality of frames, based on the frequency analysis, determining if a frame is a transient frame with a frequency change between the beginning and end of the frame, comparing the frequency analysis of each said transient frame to the frequency analysis of an immediately preceding frame and, based on said comparison, determining at least one probable pitch present at the end of each transient frame, and for each transient frame, outputting pitch data indicative of the probable pitch present at the end of the transient frame.
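A rough sketch of the transient-frame idea (illustrative assumptions throughout; the patented method is not reproduced here): split the input into fixed-length frames, estimate the dominant frequency at the start and end of each frame, and flag frames where the pitch changes:

```python
# Illustrative sketch: detect "transient" frames whose dominant frequency
# changes between the beginning and end of the frame.

import numpy as np

def dominant_freq(samples, sample_rate):
    """Return the frequency of the strongest FFT bin."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum)]

def find_transients(audio, sample_rate, frame_len=2048, tolerance_hz=20.0):
    transients = []
    for i in range(0, len(audio) - frame_len + 1, frame_len):
        frame = audio[i:i + frame_len]
        half = frame_len // 2
        f_start = dominant_freq(frame[:half], sample_rate)
        f_end = dominant_freq(frame[half:], sample_rate)
        if abs(f_end - f_start) > tolerance_hz:
            # probable new pitch present at the end of this frame
            transients.append((i, f_end))
    return transients
```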




cognition

Indigo Paints takes to aggressive advertising to improve brand recognition

Established in 2000, Indigo Paints is a relatively new entrant to the decorative paints industry that is dominated by the likes of Asian Paints, Berger and Nerolac.




cognition

Letters: NHS staff deserve permanent recognition - not just a clap

CLAPPING the NHS each week is all well and good but surely we can think of a more permanent recognition?




cognition

Buddha Machine Variations No. 20 (Pattern Cognition)

This is a short one, and a change of approach. It’s a test run, really. (Every entry is an experiment of some sort.) Samples extracted from three different loops of the first-generation Buddha Machine, which dates from 2005, were recorded on the Teenage Engineering PO-33 K.O! and then run as a series of patterns, the […]




cognition

Cognition and Civic Engagement

Join KUT’s Rebecca McInroy along with professors Art Markman and Bob Duke as they talk about the psychology of social activism, the effectiveness of deterrence, and the health consequences of negative emotions. Views and Brews is free and open to the public; we hope to see you at the Cactus soon!




cognition

Interrogating Embodied Cognition

Dr. Art Markman and Dr. Bob Duke talk about some problems with research on embodied cognition and look at what it means and what it doesn’t.




cognition

Treaty's value questioned by Indigenous elders, but recognition of Australia's first people important

This year's NAIDOC Week theme is Voice. Treaty. Truth. But the truth is that many Indigenous people feel voiceless when it comes to expressing where Australia stands on treaty today.




cognition

Australian anthem rewritten to represent all Australians and promote Indigenous constitutional recognition

The national anthem has been rewritten and performed for the first time in Alice Springs by a group that says it should be more inclusive of all Australians.



  • 783 ABC Alice Springs
  • alicesprings
  • Community and Society:All:All
  • Community and Society:Community Organisations:All
  • Community and Society:Indigenous (Aboriginal and Torres Strait Islander):All
  • Community and Society:Indigenous (Aboriginal and Torres Strait Islander):Indigenous Culture
  • Human Interest:All:All
  • Australia:All:All
  • Australia:NT:Alice Springs 0870
  • Australia:NT:All

cognition

Indigenous constitutional recognition is needed to 'shift national consciousness'

Dani Larkin knows the struggles of a young Indigenous woman in a "nation of divisiveness", and insists that constitutional recognition is the key to unlocking meaningful change.



  • ABC Gold Coast
  • northcoast
  • goldcoast
  • Community and Society:Indigenous (Aboriginal and Torres Strait Islander):All
  • Community and Society:Indigenous (Aboriginal and Torres Strait Islander):Indigenous Culture
  • Community and Society:Indigenous (Aboriginal and Torres Strait Islander):Indigenous Protocols
  • Government and Politics:All:All
  • Government and Politics:Federal Government:All
  • Government and Politics:Indigenous Policy:All
  • Human Interest:All:All
  • Australia:NSW:Baryulgil 2460
  • Australia:QLD:Mermaid Beach 4218

cognition

What is constitutional recognition?

The constitution was written more than a century ago, but Aboriginal and Torres Strait Islander people are not mentioned in it at all, despite having lived here for more than 50,000 years. What is constitutional recognition and why is it important? What are some of the perceived barriers to changing the constitution?




cognition

Harrisburg University Researchers Claim Their 'Unbiased' Facial Recognition Software Can Identify Potential Criminals

Given all we know about facial recognition tech, it is literally jaw-dropping that anyone could make this claim… especially without being vetted independently.

A group of Harrisburg University professors and a PhD student have developed an automated computer facial recognition software capable of predicting whether someone is likely to be a criminal.

The software is able to predict if someone is a criminal with 80% accuracy and with no racial bias. The prediction is calculated solely based on a picture of their face.

There's a whole lot of "what even the fuck" in CBS 21's reprint of a press release, but let's start with the claim about "no racial bias." That's a lot to swallow when the underlying research hasn't been released yet. Let's see what the National Institute of Standards and Technology has to say on the subject. This is the result of the NIST's examination of 189 facial recognition AI programs -- all far more established than whatever it is Harrisburg researchers have cooked up.

Asian and African American people were up to 100 times more likely to be misidentified than white men, depending on the particular algorithm and type of search. Native Americans had the highest false-positive rate of all ethnicities, according to the study, which found that systems varied widely in their accuracy.

The faces of African American women were falsely identified more often in the kinds of searches used by police investigators where an image is compared to thousands or millions of others in hopes of identifying a suspect.

Why is this acceptable? The report inadvertently supplies the answer:

Middle-aged white men generally benefited from the highest accuracy rates.

Yep. And guess who's making laws or running police departments or marketing AI to cops or telling people on Twitter not to break the law or etc. etc. etc.

To craft a terrible pun, the researchers' claim of "no racial bias" is absurd on its face. Per se stupid af to use legal terminology.

Moving on from that, there's the 80% accuracy, which is apparently good enough since it will only threaten the life and liberty of 20% of the people it's inflicted on. I guess if it's the FBI's gold standard, it's good enough for everyone.

Maybe this is just bad reporting. Maybe something got copy-pasted wrong from the spammed press release. Let's go to the source… one that somehow still doesn't include a link to any underlying research documents.

What does any of this mean? Are we ready to embrace a bit of pre-crime eugenics? Or is this just the most hamfisted phrasing Harrisburg researchers could come up with?

A group of Harrisburg University professors and a Ph.D. student have developed automated computer facial recognition software capable of predicting whether someone is likely going to be a criminal.

The most charitable interpretation of this statement is that the wrong-20%-of-the-time AI is going to be applied to the super-sketchy "predictive policing" field. Predictive policing -- a theory that says it's ok to treat people like criminals if they live and work in an area where criminals live -- is its own biased mess, relying on garbage data generated by biased policing to turn racist policing into an AI-blessed "work smarter not harder" LEO equivalent.

The question about "likely" is answered in the next paragraph, somewhat assuring readers the AI won't be applied to ultrasound images.

With 80 percent accuracy and with no racial bias, the software can predict if someone is a criminal based solely on a picture of their face. The software is intended to help law enforcement prevent crime.

There's a big difference between "going to be" and "is," and researchers using actual science should know better than to use both phrases to describe their AI efforts. One means scanning someone's face to determine whether they might eventually engage in criminal acts. The other means matching faces to images of known criminals. They are far from interchangeable terms.

If you think the above quotes are, at best, disjointed, brace yourself for this jargon-fest which clarifies nothing and suggests the AI itself wrote the pullquote:

“We already know machine learning techniques can outperform humans on a variety of tasks related to facial recognition and emotion detection,” Sadeghian said. “This research indicates just how powerful these tools are by showing they can extract minute features in an image that are highly predictive of criminality.”

"Minute features in an image that are highly predictive of criminality." And what, pray tell, are those "minute features?" Skin tone? "I AM A CRIMINAL IN THE MAKING" forehead tattoos? Bullshit on top of bullshit? Come on. This is word salad, but a salad pretending to be a law enforcement tool with actual utility. Nothing about this suggests Harrisburg has come up with anything better than the shitty "tools" already being inflicted on us by law enforcement's early adopters.

I wish we could dig deeper into this but we'll all have to wait until this excitable group of clueless researchers decide to publish their findings. According to this site, the research is being sealed inside a "research book," which means it will take a lot of money to actually prove this isn't any better than anything that's been offered before. This could be the next Clearview, but we won't know if it is until the research is published. If we're lucky, it will be before Harrisburg patents this awful product and starts selling it to all and sundry. Don't hold your breath.




cognition

Racial justice groups criticize city teachers union’s use of controversial face recognition technology

The United Federation of Teachers tested security camera technology from a company affiliated with Clearview AI




cognition

Communities come face-to-face with the growing power of facial recognition technology

As law enforcement agencies deploy AI-powered facial recognition systems, some communities are pushing back, insisting on having a say in how they’re used.




cognition

Substrate recognition and ATPase activity of the E. coli cysteine/cystine ABC transporter YecSC-FliY [Microbiology]

Sulfur is essential for biological processes such as amino acid biogenesis, iron–sulfur cluster formation, and redox homeostasis. To acquire sulfur-containing compounds from the environment, bacteria have evolved high-affinity uptake systems, predominant among which is the ABC transporter family. These membrane-embedded enzymes use the energy of ATP hydrolysis for transmembrane transport of a wide range of biomolecules against concentration gradients. Three distinct bacterial ABC import systems of sulfur-containing compounds have been identified, but the molecular details of their transport mechanism remain poorly characterized. Here we provide results from a biochemical analysis of the purified Escherichia coli YecSC-FliY cysteine/cystine import system. We found that the substrate-binding protein FliY binds l-cystine, l-cysteine, and d-cysteine with micromolar affinities. However, binding of the l- and d-enantiomers induced different conformational changes of FliY, where the l-enantiomer–substrate-binding protein complex interacted more efficiently with the YecSC transporter. YecSC had low basal ATPase activity that was moderately stimulated by apo FliY, more strongly by d-cysteine–bound FliY, and maximally by l-cysteine– or l-cystine–bound FliY. However, at high FliY concentrations, YecSC reached maximal ATPase rates independent of the presence or nature of the substrate. These results suggest that FliY exists in a conformational equilibrium between an open, unliganded form that does not bind to the YecSC transporter and closed, unliganded and closed, liganded forms that bind this transporter with variable affinities but equally stimulate its ATPase activity. These findings differ from previous observations for similar ABC transporters, highlighting the extent of mechanistic diversity in this large protein family.




cognition

Facial recognition on the rise: can current laws protect the public?

The ICO is investigating reports that a property developer has quietly installed a facial recognition system in London's King's Cross. We spoke to experts from the legal and technology sectors to find some clarity about the rules




cognition

CBD News: The Society for Ecological Restoration has conferred its 2011 Special Recognition Award to the Parties to the Convention on Biological Diversity (CBD) at the Gala Awards Banquet on 23 August 2011 in Mérida, Yucatán, Mexico, on the marg




cognition

CBD News: The Natural Capital Declaration (NCD) has been declared a Biodiversity Champion by the Secretariat of the Convention on Biological Diversity (CBD) in recognition of its important contribution to the implementation of the Convention's Strateg




cognition

CBD News: The World Public Health Nutrition Association (WPHNA) has been declared a Biodiversity Champion by the Executive Secretary of the Convention on Biological Diversity (CBD) in recognition of its important contribution to the implementation of th




cognition

CBD News: Bringing international recognition and a substantial monetary prize to three outstanding individuals, nominations are now invited for The MIDORI Prize for Biodiversity 2014. The call for nominations remains open from 1 March to 31 May 2014.




cognition

Recognition




cognition

Type 2 Diabetes, Cognition, and Dementia in Older Adults: Toward a Precision Health Approach

Brenna Cholerton
Nov 1, 2016; 29:210-219
From Research to Practice




cognition

Structural insight into the recognition of pathogen-derived phosphoglycolipids by C-type lectin receptor DCAR [Protein Structure and Folding]

The C-type lectin receptors (CLRs) form a family of pattern recognition receptors that recognize numerous pathogens, such as bacteria and fungi, and trigger innate immune responses. The extracellular carbohydrate-recognition domain (CRD) of CLRs forms a globular structure that can coordinate a Ca2+ ion, allowing receptor interactions with sugar-containing ligands. Although well-conserved, the CRD fold can also display differences that directly affect the specificity of the receptors for their ligands. Here, we report crystal structures at 1.8–2.3 Å resolutions of the CRD of murine dendritic cell-immunoactivating receptor (DCAR, or Clec4b1), the CLR that binds phosphoglycolipids such as acylated phosphatidyl-myo-inositol mannosides (AcPIMs) of mycobacteria. Using mutagenesis analysis, we identified critical residues, Ala136 and Gln198, on the surface surrounding the ligand-binding site of DCAR, as well as an atypical Ca2+-binding motif (Glu-Pro-Ser/EPS168–170). By chemically synthesizing a water-soluble ligand analog, inositol-monophosphate dimannose (IPM2), we confirmed the direct interaction of DCAR with the polar moiety of AcPIMs by biolayer interferometry and co-crystallization approaches. We also observed a hydrophobic groove extending from the ligand-binding site that is in a suitable position to interact with the lipid portion of whole AcPIMs. These results suggest that the hydroxyl group-binding ability and hydrophobic groove of DCAR mediate its specific binding to pathogen-derived phosphoglycolipids such as mycobacterial AcPIMs.




cognition

Smart Energy Council calls for state to abandon facial recognition

Some users have been brought to tears by 'broken' facial recognition software now required to approve solar rebate applications.




cognition

Episode 89 - The Internet of Pirates (IoP) Hacker pirates, face recognition ethics and Elon Musk

Back once again like the Renegade Master, the UK Tech Weekly Podcast is coming to you from its new, earlier-in-the-week time slot.


Host Scott Carey is joined by Tamlin Magee to talk about pirate-obsessed Nigerian hacking syndicates, and Charlotte Jee is on board to discuss the ethics of facial (and racial) recognition technology.


We wrap things up with an Elon Musk news roundup, from his latest bae to building bricks.

 

See acast.com/privacy for privacy and opt-out information.



