techno

Technologists lead crowdsourced Coronavirus Tech Handbook response

A group of technologists has led crowdsourcing efforts to create a single repository of information for specialists fighting the Coronavirus outbreak. Techworld speaks with founder Edward Saperia to hear more about how collaboration tools can help these efforts.





Active ageing with technology

Seventy-year-old Theresa Chan suffers from joint degeneration, which is why she needs a walking stick.

“My joints are severely abraded. They are painful when I walk or stand. My doctor suggested I replace some joints with implants, but I don't want surgery,” she said.

Smart seniors

Interested in how gerontechnology can help improve the quality of life for older adults, Ms Chan joined the Gerontechnology Practitioner Training Course under Lingnan University’s Gerontechnology & Smart Ageing Project.

The course targets elderly people and caregivers, introducing them to devices that help seniors get around with ease and live more independent lives.

Ms Chan was impressed by a smart walking stick equipped with senior-friendly features such as a flashlight with an adjustable safety light, a siren for emergency use and a radio.

“I am grateful for the course because I can learn more about gerontechnology in my free time. After class, I can share my knowledge with others who have no idea about this new technology. It is modern and I want to keep up with the times and progress,” she said.

Youth opportunities

Gerontechnology has also opened up career opportunities for young people.

Cecilia Auyeung recently graduated from Lingnan University with a major in marketing. Last year she set up a social enterprise called Gatherly, a platform through which the elderly teach handicraft skills to other people and sell their products online.

Ms Auyeung joined the Socialpreneur Incubation Course under the Lingnan University project, which encourages the development of social enterprises in Hong Kong and covers gerontechnology and marketing. She quickly drew inspiration from other socialpreneurs.

“I discovered that my social enterprise has to change every two years according to market demands. For example, fabric bags are popular among youngsters. Since the elderly at Gatherly are skilled in blue-dye techniques, I make use of their skills to create products that match the market situation.”

University contribution

According to the Census & Statistics Department, the proportion of the population aged 65 and over is projected to reach 31% by 2036. To meet the needs of an ageing population, the Government is promoting gerontechnology, which combines gerontology and technology and focuses on providing effective solutions that increase vitality and quality of life.

Lingnan University’s three-year project aims to enhance public understanding of gerontechnology and support the development of smart-ageing socialpreneurship. The project also covers studies aimed at building a database of gerontech products and services, as well as social innovation and startups. The overall vision is to offer policy recommendations to the industry and the Government.

Lingnan University Asia-Pacific Institute of Ageing Studies Project Manager Chloe Siu said: “Most of the participating students are willing to think out of the box, using their creativity and the knowledge learnt from their courses to echo the needs of society. We are trying to facilitate different stakeholders and community partners to create a co-working atmosphere.”

The university has also built a 2,000 sq ft Gerontech-X Lab to display over 40 gerontech products catering to healthcare, dining, living and transport. Anyone interested can book an appointment for a free visit from November 28.





Technologies to extract, purify critical rare earth metals could be a 'game changer'

(Purdue University) New environmentally friendly technologies promise to be 'game changers' in the rare earth metals field and to enable the US to create a more stable and reliable domestic source of these essential metals. Purdue University-patented extraction and purification processes using ligand-assisted chromatography have been shown to remove and purify such metals from coal ash, recycled magnets and raw ore safely, efficiently and with virtually no detrimental environmental impact.





Making a Difference With Interactive Technology: Considerations in Using and Evaluating Computerized Aids for Diabetes Self-Management Education

Russell E. Glasgow
Apr 1, 2001; 14:
Feature Articles





Autonomous Vehicles: Futurist Technologies in Markets and Society

What are the ethical, logistical and legal complexities that accompany autonomous vehicle technology – and what role should business strategists play in guiding AVs' integration into business and society?





Virtual Issue: Technological Innovations

Anne-Claude Gingras
Apr 1, 2020; 19:572-573
Editorial





Human Control Is Essential to the Responsible Use of Military Neurotechnology

8 August 2019

Yasmin Afina

Research Assistant, International Security Programme
The military importance of AI-connected brain–machine interfaces is growing. Steps must be taken to ensure human control at all times over these technologies.


A model of a human brain is displayed at an exhibition in Lisbon, Portugal. Photo: Getty Images.

Technological progress in neurotechnology and its military use is proceeding apace. Brain-machine interfaces have been the subject of study since as early as the 1970s. By 2014, the UK’s Ministry of Defence was arguing that the development of artificial devices, such as artificial limbs, is ‘likely to see refinement of control to provide… new ways to connect the able-bodied to machines and computers.’ Today, brain-machine interface technology is being investigated around the world, including in Russia, China and South Korea.

Recent developments in the private sector are producing exciting new capabilities for people with disabilities and medical conditions. In early July, Elon Musk and Neuralink presented their ‘high-bandwidth’ brain-machine interface system, with small, flexible electrode threads packaged into a compact device containing custom chips, designed to be implanted in the user’s brain for medical purposes.

In the military realm, in 2018, the United States’ Defense Advanced Research Projects Agency (DARPA) put out a call for proposals to investigate the potential of nonsurgical brain-machine interfaces to allow soldiers to ‘interact regularly and intuitively with artificially intelligent, semi-autonomous and autonomous systems in a manner currently not possible with conventional interfaces’. DARPA further highlighted the need for these interfaces to be bidirectional – where information is sent both from brain to machine (neural recording) and from machine to brain (neural stimulation) – which will eventually allow machines and humans to learn from each other.

This technology may provide soldiers and commanders with a superior level of sensory sensitivity and the ability to process a greater amount of data related to their environment at a faster pace, thus enhancing situational awareness. These capabilities will support military decision-making as well as targeting processes.

Neural recording will also enable the collection of a tremendous amount of data from operations, including visuals, real-time thought processes and emotions. These data may be used for feedback and training (including for virtual wargaming and for machine learning training), as well as for investigatory purposes. Collected data will also feed into research that may help researchers understand and predict human intent from brain signals – a tremendous advantage from a military standpoint.

Legal and ethical considerations

The flip side of these advancements is the responsibilities they will impose, the risks and vulnerabilities of the technology, and the legal and ethical considerations they raise.

The primary risk would be for users to lose control over the technology, especially in a military context; hence, a fail-safe feature is critical for humans to maintain ultimate control over decision-making. Despite the potential benefits of symbiosis between humans and AI, users must have the unconditional ability to override these technologies should they believe it appropriate and necessary to do so.

This is important given the significance of human control over targeting, as well as strategic and operational decision-making. An integrated fail-safe in brain-machine interfaces may in fact allow for a greater degree of human control over critical, time-sensitive decision-making. In other words, in the event of an incoming-missile alert, while the AI may suggest a specific course of action, users must be able to decide in a timely manner whether or not to execute it.

Machines can learn from coded past experiences and decisions, but humans also use gut feelings to make life-and-death decisions. A gut feeling is a human characteristic that is not completely transferable, as it relies on both rational and emotional traits – and is part of the ‘second brain’ and the gut-brain axis, which are currently poorly understood. It is, however, risky to make decisions solely on gut feelings or solely on primary brain analysis; receiving a comprehensive set of data via an AI-connected brain-machine interface may therefore help to verify and evaluate information in a timely manner and complement decision-making processes. These connections and interactions would, however, have to be much better understood than they currently are.

Fail-safe features are necessary to ensure compliance with the law, including international humanitarian law and international human rights law. As a baseline, human control must be used to 1) define areas where technology may or may not be trusted and to what extent, and 2) ensure legal, political and ethical accountability, responsibility and explainability at all times. Legal and ethical considerations must be taken into account from as early as the design and conceptualizing stage of these technologies, and oversight must be ensured across the entirety of the manufacturing supply chain.  

The second point raises the need to further explore and clarify whether existing national, regional and international legal, political and ethical frameworks are sufficient to cover the development and use of these technologies. For instance, there is value in assessing to what extent AI-connected brain-machine interfaces will affect the assessment of the mental element in war crimes and their human rights implications.

In addition, these technologies need to be highly secure and invulnerable to cyber hacks. Neural recording and neural stimulation will directly affect brain processes in humans, so steps need to be taken to ensure that memory and personality cannot be damaged even if an adversary gains the ability to connect to a human brain.

Future questions

Military applications of technological progress in neurotechnology are inevitable, and their implications cannot be ignored. There is an urgent need for policymakers to understand fast-developing neurotechnical capabilities and to develop international standards and best practices – and, if necessary, new and dedicated legal instruments – to frame the use of these technologies.

Considering the opportunities that brain-machine interfaces may present in the realms of security and defence, inclusive, multi-stakeholder discussions and negotiations leading to the development of standards must include the following considerations:

  • What degree of human control would be desirable, at what stage and by whom? To what extent could human users be trusted with their own judgment in decision-making processes?
  • How could algorithmic and human biases, the cyber security and vulnerabilities of these technologies and the quality of data be factored into these discussions?
  • How can ethical and legal considerations be incorporated into the design stage of these technologies?
  • How can it be ensured that humans cannot be harmed in the process, either inadvertently or deliberately?
  • Is there a need for a dedicated international forum to discuss the military applications of neurotechnology? How could these discussions be integrated into existing international processes related to emerging military applications of technological progress, such as the Convention on Certain Conventional Weapons (CCW) Group of Governmental Experts on Lethal Autonomous Weapons Systems?





How Is New Technology Driving Geopolitical Relations?

Research Event

22 October 2019 - 6:00pm to 7:00pm

Chatham House, London

Event participants

Rt Hon Baroness Neville-Jones DCMG, Minister of State for Security and Counter Terrorism (2010-11)
Jamie Condliffe, Editor, DealBook Newsletter and Writer, Bits Tech Newsletter, The New York Times
Jamie Saunders, Partner, Wychwood Partners LLP; Visiting Professor, University College London
Chair: Dr Patricia Lewis, Research Director, International Security Department, Chatham House

New technologies such as 5G, artificial intelligence, nanotechnology and robotics have become, now more than ever, intertwined with geopolitical, economic and trade interests. Leading powers are using new technology to exert power and influence and to shape geopolitics more generally.

The ongoing race between the US and China around 5G technology is a case in point. Amid these tensions, the impact on developing countries is not sufficiently addressed.

Arguably, the existing digital divide will widen, leading developing countries to adopt new technology early, if not hastily, for fear of lagging behind. This could create opportunities but will also pose risks.

This panel discusses how new technology is changing the geopolitical landscape. It also discusses the role that stakeholders, including governments, play in the creation of standards for new technologies and what that means for their deployment in key markets, both technically and financially.

Finally, the panel looks at the issue from the perspective of developing countries, addressing the choices that have to be made in terms of affordability, development priorities and security concerns.

This event was organized with the kind support of DXC Technology.

Nicole Darabian

Research Assistant, Cyber Policy, International Security Department





Combining Precursor and Fragment Information for Improved Detection of Differential Abundance in Data Independent Acquisition [Technological Innovation and Resources]

In bottom-up, label-free discovery proteomics, biological samples are acquired in a data-dependent (DDA) or data-independent (DIA) manner, with peptide signals recorded in an intact (MS1) and fragmented (MS2) form. While DDA has only the MS1 space for quantification, DIA contains both MS1 and MS2 at high quantitative quality. DIA profiles of complex biological matrices such as tissues or cells can contain quantitative interferences, and the interferences at the MS1 and the MS2 signals are often independent. When comparing biological conditions, the interferences can compromise the detection of differential peptide or protein abundance and lead to false positive or false negative conclusions.

We hypothesized that the combined use of MS1 and MS2 quantitative signals could improve our ability to detect differentially abundant proteins. Therefore, we developed a statistical procedure incorporating both the MS1 and MS2 quantitative information of DIA. We benchmarked the performance of the MS1-MS2-combined method against the individual use of MS1 or MS2 in DIA using four previously published controlled mixtures, as well as two previously unpublished controlled mixtures. In the majority of the comparisons, the combined method outperformed the individual use of MS1 or MS2. This was particularly true for comparisons with low fold changes, few replicates, and situations where MS1 and MS2 were of similar quality. When applied to a previously unpublished investigation of lung cancer, the MS1-MS2-combined method increased the coverage of known activated pathways.

Since recent technological developments continue to increase the quality of MS1 signals (e.g. using the BoxCar scan mode for Orbitrap instruments), the combination of the MS1 and MS2 information has a high potential for future statistical analysis of DIA data.
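The paper's combined test is its own statistical procedure, but the underlying idea of pooling two independent evidence streams per peptide can be sketched with Stouffer's method. This toy function, its name, and the equal weighting of MS1 and MS2 are illustrative assumptions, not the authors' implementation:

```python
import math
from statistics import NormalDist

def stouffer_combined_p(p_ms1: float, p_ms2: float) -> float:
    """Pool two independent one-sided p-values (e.g. from separate
    MS1- and MS2-level tests of the same peptide) into one p-value."""
    nd = NormalDist()
    z1 = nd.inv_cdf(1.0 - p_ms1)   # z-score carried by the MS1 evidence
    z2 = nd.inv_cdf(1.0 - p_ms2)   # z-score carried by the MS2 evidence
    z = (z1 + z2) / math.sqrt(2)   # equal-weight Stouffer combination
    return 1.0 - nd.cdf(z)

# Two borderline signals reinforce each other into stronger evidence:
print(stouffer_combined_p(0.05, 0.05))
```

Two marginal signals of 0.05 combine to roughly 0.01, which mirrors the abstract's observation that the combined method helps most when MS1 and MS2 are of similar quality.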





Thorough Performance Evaluation of 213 nm Ultraviolet Photodissociation for Top-down Proteomics [Technological Innovation and Resources]

Top-down proteomics studies intact proteoform mixtures and offers important advantages over more common bottom-up proteomics technologies, as it avoids the protein inference problem. However, achieving complete molecular characterization of investigated proteoforms using existing technologies remains a fundamental challenge for top-down proteomics. Here, we benchmark the performance of ultraviolet photodissociation (UVPD) using 213 nm photons generated by a solid-state laser applied to the study of intact proteoforms from three organisms. Notably, the described UVPD setup applies multiple laser pulses to induce ion dissociation, and this feature can be used to optimize the fragmentation outcome based on the molecular weight of the analyzed biomolecule. When applied to complex proteoform mixtures in high-throughput top-down proteomics, 213 nm UVPD demonstrated a high degree of complementarity with the most widely employed fragmentation method in proteomics studies, higher-energy collisional dissociation (HCD). UVPD at 213 nm offered higher average proteoform sequence coverage and degree of proteoform characterization (including localization of post-translational modifications) than HCD. However, previous studies have shown limitations in applying database search strategies developed for HCD fragmentation to UVPD spectra, which contain up to nine fragment ion types. We therefore performed an analysis of the different UVPD product ion type frequencies. From these data, we developed an ad hoc fragment matching strategy and determined the influence of each possible ion type on search outcomes. By paring down the number of ion types considered in high-throughput UVPD searches from all types down to the four most abundant, we were ultimately able to achieve deeper proteome characterization with UVPD. Lastly, our detailed product ion analysis also revealed UVPD cleavage propensities and determined the presence of a product ion produced specifically by 213 nm photons. Altogether, these observations could be used to better elucidate UVPD dissociation mechanisms and improve the utility of the technique for proteomic applications.
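At its core, the fragment matching described above compares observed peaks against theoretical fragment masses within a mass tolerance. A minimal sketch, where the 10 ppm tolerance and the m/z values are illustrative rather than the study's actual search parameters:

```python
def match_fragments(observed_mz, theoretical_mz, tol_ppm=10.0):
    """Return the theoretical fragment masses matched by at least one
    observed peak within tol_ppm parts-per-million."""
    matched = []
    for t in theoretical_mz:
        tol = t * tol_ppm / 1e6  # absolute tolerance at this mass
        if any(abs(o - t) <= tol for o in observed_mz):
            matched.append(t)
    return matched

def sequence_coverage(observed_mz, theoretical_mz, tol_ppm=10.0):
    """Fraction of theoretical fragments explained by the spectrum."""
    return len(match_fragments(observed_mz, theoretical_mz, tol_ppm)) / len(theoretical_mz)
```

Restricting `theoretical_mz` to fewer ion types, as the authors did by keeping only the four most abundant, shrinks the search space and reduces chance matches.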





Mass Spectrometry Based Immunopeptidomics Leads to Robust Predictions of Phosphorylated HLA Class I Ligands [Technological Innovation and Resources]

The presentation of peptides on class I human leukocyte antigen (HLA-I) molecules plays a central role in immune recognition of infected or malignant cells. In cancer, non-self HLA-I ligands can arise from many different alterations, including non-synonymous mutations, gene fusion, cancer-specific alternative mRNA splicing or aberrant post-translational modifications. Identifying HLA-I ligands remains a challenging task that requires either heavy experimental work for in vivo identification or optimized bioinformatics tools for accurate predictions. To date, no HLA-I ligand predictor includes post-translational modifications. To fill this gap, we curated phosphorylated HLA-I ligands from several immunopeptidomics studies (including six newly measured samples) covering 72 HLA-I alleles and retrieved a total of 2,066 unique phosphorylated peptides. We then expanded our motif deconvolution tool to identify precise binding motifs of phosphorylated HLA-I ligands. Our results reveal a clear enrichment of phosphorylated peptides among HLA-C ligands and demonstrate a prevalent role of both HLA-I motifs and kinase motifs on the presentation of phosphorylated peptides. These data further enabled us to develop and validate the first predictor of interactions between HLA-I molecules and phosphorylated peptides.





MaXLinker: Proteome-wide Cross-link Identifications with High Specificity and Sensitivity [Technological Innovation and Resources]

Protein-protein interactions play a vital role in nearly all cellular functions. Hence, understanding their interaction patterns and three-dimensional structural conformations can provide crucial insights about various biological processes and underlying molecular mechanisms for many disease phenotypes. Cross-linking mass spectrometry (XL-MS) has the unique capability to detect protein-protein interactions at a large scale along with spatial constraints between interaction partners. The inception of MS-cleavable cross-linkers enabled the MS2-MS3 XL-MS acquisition strategy, which provides cross-link information at both the MS2 and MS3 level. However, the current cross-link search algorithm available for the MS2-MS3 strategy follows a "MS2-centric" approach and suffers from a high rate of mis-identified cross-links. We demonstrate the problem using two new quality assessment metrics ["fraction of mis-identifications" (FMI) and "fraction of interprotein cross-links from known interactions" (FKI)]. We then address this problem by designing a novel "MS3-centric" approach for cross-link identification and implementing it as a search engine named MaXLinker. MaXLinker outperforms the currently popular search engine with a lower mis-identification rate, and higher sensitivity and specificity. Moreover, we performed human proteome-wide cross-linking mass spectrometry using K562 cells. Employing MaXLinker, we identified a comprehensive set of 9319 unique cross-links at 1% false discovery rate, comprising 8051 intraprotein and 1268 interprotein cross-links. Finally, we experimentally validated the quality of a large number of novel interactions identified in our study, providing conclusive evidence for MaXLinker's robust performance.
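The FKI metric named above has a straightforward definition: the share of reported interprotein cross-links whose protein pair appears in a reference set of known interactions. A hedged sketch of that computation, with the data structures being assumptions rather than MaXLinker's actual internals:

```python
def fki(interprotein_links, known_interactions):
    """Fraction of interprotein cross-links from known interactions (FKI).

    interprotein_links: iterable of (proteinA, proteinB) pairs;
    known_interactions: set of frozensets, treated as undirected pairs."""
    links = list(interprotein_links)
    if not links:
        return 0.0
    hits = sum(1 for a, b in links if frozenset((a, b)) in known_interactions)
    return hits / len(links)

known = {frozenset(("P00533", "P04626"))}  # hypothetical reference pair
print(fki([("P00533", "P04626"), ("P12345", "P67890")], known))
```

A higher FKI suggests fewer spurious interprotein identifications, which is why the authors use it alongside FMI to compare search engines.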





Concentration Determination of >200 Proteins in Dried Blood Spots for Biomarker Discovery and Validation [Technological Innovation and Resources]

The use of protein biomarkers as surrogates for clinical endpoints requires extensive multilevel validation, including the development of robust and sensitive assays for precise measurement of protein concentration. Multiple reaction monitoring (MRM) is a well-established mass-spectrometric method that can be used for reproducible protein-concentration measurements in biological specimens collected via microsampling. The dried blood spot (DBS) microsampling technique can be performed non-invasively without the expertise of a phlebotomist, and can enhance analyte stability, which facilitates the application of this technique in retrospective studies while lowering storage and shipping costs, because cold-chain logistics can be eliminated. Thus, precise, sensitive, and multiplexed methods for measuring protein concentrations in DBSs can be used for de novo biomarker discovery and for biomarker quantification or verification experiments. To achieve this goal, MRM assays were developed for multiplexed concentration measurement of proteins in DBSs.

The lower limit of quantification (LLOQ) had a median total coefficient of variation (CV) of 18% across 245 proteins; the median LLOQ was 5 fmol of peptide injected on column, and the median inter-day CV over 4 days for measuring endogenous protein concentration was 8%. The majority (88%) of the assays displayed parallelism, and the peptide standards remained stable throughout the assay workflow and after exposure to multiple freeze-thaw cycles. For 190 proteins, the measured protein concentrations remained stable in DBSs stored at ambient laboratory temperature for up to 2 months. Finally, the developed assays were used to measure the concentration ranges of 200 proteins in twenty sex-, race- and age-matched individuals.
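The reproducibility figures quoted above are coefficients of variation, i.e. the standard deviation of replicate measurements divided by their mean, expressed in percent. For reference, the computation is simply (the replicate values below are made up for illustration):

```python
from statistics import mean, stdev

def cv_percent(values):
    """Coefficient of variation of replicate concentration
    measurements, expressed in percent."""
    return 100.0 * stdev(values) / mean(values)

# Hypothetical replicate measurements (fmol) of one peptide over four days:
print(cv_percent([95.0, 100.0, 105.0, 100.0]))
```

An inter-day CV of 8%, as reported for the median assay, means day-to-day scatter of only a few percent around the mean concentration.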





Tandem Mass Tag Approach Utilizing Pervanadate BOOST Channels Delivers Deeper Quantitative Characterization of the Tyrosine Phosphoproteome [Technological Innovation and Resources]

Dynamic tyrosine phosphorylation is fundamental to a myriad of cellular processes. However, the inherently low abundance of tyrosine phosphorylation in the proteome and the inefficient enrichment of phosphotyrosine(pTyr)-containing peptides has led to poor pTyr peptide identification and quantitation, critically hindering researchers' ability to elucidate signaling pathways regulated by tyrosine phosphorylation in systems where cellular material is limited. The most popular approaches to wide-scale characterization of the tyrosine phosphoproteome use pTyr enrichment with pan-specific, anti-pTyr antibodies from a large amount of starting material. Methods that decrease the amount of starting material and increase the characterization depth of the tyrosine phosphoproteome while maintaining quantitative accuracy and precision would enable the discovery of tyrosine phosphorylation networks in rarer cell populations. To achieve these goals, the BOOST (Broad-spectrum Optimization Of Selective Triggering) method leveraging the multiplexing capability of tandem mass tags (TMT) and the use of pervanadate (PV) boost channels (cells treated with the broad-spectrum tyrosine phosphatase inhibitor PV) selectively increased the relative abundance of pTyr-containing peptides. After PV boost channels facilitated selective fragmentation of pTyr-containing peptides, TMT reporter ions delivered accurate quantitation of each peptide for the experimental samples while the quantitation from PV boost channels was ignored. This method yielded up to 6.3-fold boost in pTyr quantification depth of statistically significant data derived from contrived ratios, compared with TMT without PV boost channels or intensity-based label-free (LF) quantitation while maintaining quantitative accuracy and precision, allowing quantitation of over 2300 unique pTyr peptides from only 1 mg of T cell receptor-stimulated Jurkat T cells. 
The BOOST strategy can potentially be applied in analyses of other post-translational modifications where treatments that broadly elevate the levels of those modifications across the proteome are available.
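The key bookkeeping step in BOOST, keeping TMT reporter-ion quantitation only for the experimental samples while discarding the PV boost channels that merely trigger fragmentation, can be sketched as follows. The channel labels and intensities are hypothetical:

```python
def experimental_quant(reporter_intensities, boost_channels):
    """Keep TMT reporter-ion intensities only for experimental channels;
    values from PV boost channels are discarded, as in the BOOST method."""
    return {ch: v for ch, v in reporter_intensities.items()
            if ch not in boost_channels}

# Hypothetical reporter intensities for one pTyr peptide; the PV-treated
# boost channels dominate the signal but are excluded from quantitation.
reporters = {"126": 1.2e5, "127N": 1.1e5, "130C": 9.8e6, "131": 1.0e7}
print(experimental_quant(reporters, boost_channels={"130C", "131"}))
```

The boost channels raise the peptide's total abundance enough to be selected for fragmentation, but only the remaining channels contribute to the reported ratios, preserving quantitative accuracy.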





A Compact Quadrupole-Orbitrap Mass Spectrometer with FAIMS Interface Improves Proteome Coverage in Short LC Gradients [Technological Innovation and Resources]

State-of-the-art proteomics-grade mass spectrometers can measure peptide precursors and their fragments with ppm mass accuracy at sequencing speeds of tens of peptides per second with attomolar sensitivity. Here we describe a compact and robust quadrupole-orbitrap mass spectrometer equipped with a front-end High Field Asymmetric Waveform Ion Mobility Spectrometry (FAIMS) Interface. The performance of the Orbitrap Exploris 480 mass spectrometer is evaluated in data-dependent acquisition (DDA) and data-independent acquisition (DIA) modes in combination with FAIMS. We demonstrate that different compensation voltages (CVs) for FAIMS are optimal for DDA and DIA, respectively. Combining DIA with FAIMS using single CVs, the instrument surpasses 2500 peptides identified per minute. This enables quantification of >5000 proteins with short online LC gradients delivered by the Evosep One LC system allowing acquisition of 60 samples per day. The raw sensitivity of the instrument is evaluated by analyzing 5 ng of a HeLa digest from which >1000 proteins were reproducibly identified with 5 min LC gradients using DIA-FAIMS. To demonstrate the versatility of the instrument, we recorded an organ-wide map of proteome expression across 12 rat tissues quantified by tandem mass tags and label-free quantification using DIA with FAIMS to a depth of >10,000 proteins.





A Quantitative Tri-fluorescent Yeast Two-hybrid System: From Flow Cytometry to In cellula Affinities [Technological Innovation and Resources]

We present a technological advancement for the estimation of the affinities of Protein-Protein Interactions (PPIs) in living cells. A novel set of vectors is introduced that enables a quantitative yeast two-hybrid system based on fluorescent fusion proteins. The vectors allow simultaneous quantification of the reaction partners (Bait and Prey) and the reporter at the single-cell level by flow cytometry. We validate the applicability of this system on a small but diverse set of PPIs (eleven protein families from six organisms) with different affinities; the dissociation constants range from 117 pM to 17 μM. After only two hours of reaction, expression of the reporter can be detected even for the weakest PPI. Through a simple gating analysis, it is possible to select only cells with identical expression levels of the reaction partners. As a result of this standardization of expression levels, the mean reporter levels directly reflect the affinities of the studied PPIs. With a set of PPIs with known affinities, it is straightforward to construct an affinity ladder that permits rapid classification of PPIs with thus far unknown affinities. Conventional software can be used for this analysis. To permit automated analysis, we provide a graphical user interface for the Python-based FlowCytometryTools package.
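The gating analysis described, selecting only cells whose Bait and Prey expression fall in the same window before averaging the reporter, can be sketched as below. The field names and fluorescence values are hypothetical, not the paper's actual channel names:

```python
def gate(cells, lo, hi):
    """Keep cells whose Bait and Prey fluorescence both fall
    inside the expression window [lo, hi]."""
    return [c for c in cells
            if lo <= c["bait"] <= hi and lo <= c["prey"] <= hi]

def mean_reporter(cells):
    """Mean reporter fluorescence of the gated population."""
    return sum(c["reporter"] for c in cells) / len(cells)

cells = [
    {"bait": 1000, "prey": 1100, "reporter": 50.0},
    {"bait": 1050, "prey": 980,  "reporter": 70.0},
    {"bait": 200,  "prey": 5000, "reporter": 10.0},  # outside window, excluded
]
print(mean_reporter(gate(cells, 900, 1200)))
```

Because expression levels are standardized by the gate, differences in the gated mean reporter reflect PPI affinity rather than expression noise, which is what makes the affinity ladder possible.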





Virtual Issue: Technological Innovations [Editorials]





Detection of multiple autoantibodies in patients with ankylosing spondylitis using nucleic acid programmable protein arrays [11. Microarrays/Combinatorics/Display Technology]

Ankylosing Spondylitis (AS) is a common, inflammatory rheumatic disease, which primarily affects the axial skeleton and is associated with sacroiliitis, uveitis and enthesitis. Unlike other autoimmune rheumatic diseases, such as rheumatoid arthritis or systemic lupus erythematosus, autoantibodies have not yet been reported to be a feature of AS. We therefore wished to determine whether plasma from patients with AS contained autoantibodies and, if so, to characterize and quantify this response in comparison to patients with Rheumatoid Arthritis (RA) and healthy controls. Two high-density nucleic acid programmable protein arrays expressing a total of 3498 proteins were screened with plasma from 25 patients with AS, 17 with RA and 25 healthy controls. Autoantigens identified were subjected to Ingenuity Pathway Analysis in order to determine patterns of signalling cascades or tissue origin. 44% of patients with AS demonstrated a broad autoantibody response, as compared to 33% of patients with RA and only 8% of healthy controls. Individuals with AS demonstrated autoantibody responses to shared autoantigens, and 60% of autoantigens identified in the AS cohort were restricted to that group. The AS patients' autoantibody responses were targeted towards connective, skeletal and muscular tissue, unlike those of RA patients or healthy controls. Thus, patients with AS show evidence of systemic humoral autoimmunity and multispecific autoantibody production. Nucleic acid programmable protein arrays constitute a powerful tool to study autoimmune diseases.





The ProteoRed MIAPE web toolkit: A user-friendly framework to connect and share proteomics standards [Technology]

The development of the HUPO-PSI's (Proteomics Standards Initiative) standard data formats and MIAPE (Minimum Information About a Proteomics Experiment) guidelines should improve proteomics data sharing within the scientific community. Proteomics journals have encouraged the use of these standards and guidelines to improve the quality of experimental reporting and ease the evaluation and publication of manuscripts. However, there is an evident lack of bioinformatics tools specifically designed to create and edit standard file formats and reports, or to embed them within proteomics workflows. In this article, we describe a new web-based software suite (the ProteoRed MIAPE web toolkit) that performs several complementary roles related to proteomic data standards. Firstly, it can verify that reports fulfill the minimum information requirements of the corresponding MIAPE modules, highlighting inconsistencies or missing information. Secondly, the toolkit can convert several XML-based data standards directly into human-readable MIAPE reports stored within the ProteoRed MIAPE repository. Finally, it can also perform the reverse operation, allowing users to export from MIAPE reports into XML files for computational processing, data sharing or public database submission. The toolkit is thus the first application capable of automatically linking the PSI's MIAPE modules with the corresponding XML data exchange standards, enabling bidirectional conversions. This toolkit is freely available at http://www.proteored.org/MIAPE/.




techno

Bayesian Proteoform Modeling Improves Protein Quantification of Global Proteomic Measurements [Technology]

As the capability of mass spectrometry-based proteomics has matured, tens of thousands of peptides can be measured simultaneously, which has the benefit of offering a systems view of protein expression. However, a major challenge is that with the increase in throughput, estimating protein quantification from the natively measured peptides has become a computational task. A limitation of existing computationally driven protein quantification methods is that most ignore protein variation, such as alternative splicing of the RNA transcript and post-translational modifications or other possible proteoforms, which affect a significant fraction of the proteome. The consequence of this assumption is that statistical inference at the protein level, and consequently downstream analyses such as network and pathway modeling, have only limited power for biomarker discovery. Here, we describe a Bayesian model (BP-Quant) that uses statistically derived peptide signatures to identify peptides that fall outside the dominant pattern, or that form multiple over-expressed patterns, to improve relative protein abundance estimates. It is a research-driven approach that utilizes the objectives of the experiment, defined in the context of a standard statistical hypothesis, to identify a set of peptides exhibiting similar statistical behavior relating to a protein. This approach infers that changes in relative protein abundance can be used as a surrogate for changes in function, without necessarily taking into account the effect of differential post-translational modifications, processing, or splicing in altering protein function. We verify the approach using a dilution study of mouse plasma samples and demonstrate that BP-Quant achieves accuracy similar to current state-of-the-art methods at proteoform identification, with significantly better specificity. BP-Quant is available as MATLAB® and R packages at https://github.com/PNNL-Comp-Mass-Spec/BP-Quant.
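The underlying intuition, that peptides tracking a protein's dominant abundance pattern agree with one another while peptides from alternate proteoforms deviate, can be shown with a toy sketch. This is not the BP-Quant Bayesian model itself; the median-consensus heuristic and correlation threshold are illustrative assumptions:

```python
import numpy as np

def flag_outlier_peptides(peptide_matrix, min_corr=0.7):
    """Flag peptides whose abundance pattern across samples deviates from the
    protein's median pattern (a stand-in for the dominant proteoform signature).

    peptide_matrix: rows are peptides, columns are samples.
    Returns a list of booleans; True marks a peptide that does not follow
    the consensus and may reflect an alternate proteoform."""
    X = np.asarray(peptide_matrix, dtype=float)
    median_pattern = np.median(X, axis=0)           # consensus across peptides
    flags = []
    for row in X:
        r = np.corrcoef(row, median_pattern)[0, 1]  # similarity to consensus
        flags.append(bool(r < min_corr))
    return flags
```

On a toy protein with two concordant peptides and one anticorrelated peptide, only the third is flagged, which is the kind of peptide BP-Quant would model separately rather than average into the protein estimate.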




techno

Quantitative profiling of protein tyrosine kinases in human cancer cell lines by multiplexed parallel reaction monitoring assays [Technology]

Protein tyrosine kinases (PTKs) play key roles in cellular signal transduction, cell cycle regulation, cell division, and cell differentiation. Dysregulation of PTK-activated pathways, often by receptor overexpression, gene amplification, or genetic mutation, is a causal factor underlying numerous cancers. In this study, we have developed a parallel reaction monitoring (PRM)-based assay for quantitative profiling of 83 PTKs. The assay detects 308 proteotypic peptides from 54 receptor tyrosine kinases and 29 nonreceptor tyrosine kinases in a single run. Quantitative comparisons were based on the labeled reference peptide method. We implemented the assay in four cell models: 1) a comparison of proliferating versus epidermal growth factor (EGF)-stimulated A431 cells, 2) a comparison of the SW480Null (mutant APC) and SW480APC (APC restored) colon tumor cell lines, 3) a comparison of 10 colorectal cancer cell lines with different genomic abnormalities, and 4) a comparison of lung cancer cell lines with either susceptibility (11-18) or acquired resistance (11-18R) to the epidermal growth factor receptor tyrosine kinase inhibitor erlotinib. We observed distinct PTK expression changes induced by stimuli, genomic features or drug resistance, which were consistent with previous reports. However, most of the measured expression differences were novel observations. For example, acquired resistance to erlotinib in the 11-18 cell model was associated not only with the previously reported upregulation of MET, but also with upregulation of FLK2 and downregulation of LYN and PTK7. Immunoblot analyses and shotgun proteomics data were highly consistent with the PRM data. Multiplexed PRM assays provide a targeted, systems-level profiling approach to evaluate cancer-related proteotypes and adaptations. Data are available through ProteomeXchange, accession PXD002706.
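The labeled reference peptide method mentioned above normalizes each endogenous (light) peptide intensity to a spiked heavy-labeled reference measured in the same run, making ratios comparable across runs. A minimal sketch of such a two-run comparison; the dictionary layout and intensity values are hypothetical, not the authors' pipeline:

```python
def fold_change(run_a, run_b):
    """Compare a peptide between two runs via light/heavy ratios.

    run_a, run_b: dicts with 'light' (endogenous peptide intensity) and
    'heavy' (spiked labeled reference intensity) from the same run.
    Normalizing to the reference cancels run-to-run instrument variation
    before the ratios are compared."""
    ratio_a = run_a["light"] / run_a["heavy"]
    ratio_b = run_b["light"] / run_b["heavy"]
    return ratio_a / ratio_b
```

For instance, a peptide twice as intense relative to its reference in run A as in run B yields a fold change of 2.0.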




techno

MaxQuant software for ion mobility enhanced shotgun proteomics [Technological Innovation and Resources]

Ion mobility can add a dimension to LC-MS-based shotgun proteomics that has the potential to boost proteome coverage, quantification accuracy and dynamic range. This requires suitable software that extracts the information contained in the four-dimensional (4D) data space spanned by m/z, retention time, ion mobility and signal intensity. Here we describe the ion mobility enhanced MaxQuant software, which utilizes the added data dimension. It offers an end-to-end computational workflow for the identification and quantification of peptides and proteins in LC-IMS-MS/MS shotgun proteomics data. We apply it to trapped ion mobility spectrometry (TIMS) coupled to a quadrupole time-of-flight (QTOF) analyzer. A highly parallelizable 4D feature detection algorithm extracts peaks, which are assembled into isotope patterns. Masses are recalibrated with a non-linear model dependent on m/z, retention time, ion mobility and signal intensity, based on peptides from the sample. A new matching between runs (MBR) algorithm that utilizes collisional cross section (CCS) values of MS1 features in the matching process gains significant specificity from the extra dimension. A prerequisite for using CCS values in MBR is a relative alignment of the ion mobility values between runs. The missing value problem in protein quantification over many samples is greatly reduced by CCS-aware MBR. MS1-level label-free quantification is also implemented and proves to be highly precise and accurate on a benchmark dataset with known ground truth. MaxQuant for LC-IMS-MS/MS is part of the basic MaxQuant release and can be downloaded from http://maxquant.org.
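The CCS-aware matching-between-runs idea can be sketched as a tolerance check in three dimensions: m/z, retention time and ion mobility (CCS). The tolerances below are illustrative assumptions, not MaxQuant defaults, and the real MBR algorithm additionally models alignment and match confidence:

```python
def match_between_runs(feature, library, mz_ppm=10.0, rt_tol=0.5, ccs_tol_pct=2.0):
    """Transfer identifications to an unidentified MS1 feature when m/z,
    retention time (minutes) and CCS all fall within tolerance.

    feature: dict with 'mz', 'rt', 'ccs' for the unidentified feature.
    library: list of dicts with 'peptide', 'mz', 'rt', 'ccs' from
    identified features in other runs (assumed already aligned)."""
    matches = []
    for ident in library:
        if abs(feature["mz"] - ident["mz"]) / ident["mz"] * 1e6 > mz_ppm:
            continue  # mass mismatch
        if abs(feature["rt"] - ident["rt"]) > rt_tol:
            continue  # elutes at a different time
        if abs(feature["ccs"] - ident["ccs"]) / ident["ccs"] * 100 > ccs_tol_pct:
            continue  # different shape/charge: the extra specificity CCS adds
        matches.append(ident["peptide"])
    return matches
```

The third check is the one the extra dimension buys: two candidates indistinguishable in m/z and retention time can still be separated by their CCS values.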




techno

DEqMS: a method for accurate variance estimation in differential protein expression analysis [Technological Innovation and Resources]

Quantitative proteomics by mass spectrometry is widely used in biomarker research and basic biology for investigating phenotype-level cellular events. Despite this wide application, the methodology for statistical analysis of differentially expressed proteins has not been unified. Various methods, such as the t-test, linear models and mixed-effect models, are used to define changes in proteomics experiments, but none of them considers the specific structure of MS data. Choices between methods, often originally developed for other types of data, are based on compromises between features such as statistical power, general applicability and user-friendliness. Furthermore, whether to include proteins identified with a single peptide in statistical analysis of differential protein expression varies between studies. Here we present DEqMS, a robust statistical method developed specifically for differential protein expression analysis in mass spectrometry data. In all datasets investigated there is a clear dependence of variance on the number of PSMs or peptides used for protein quantification. DEqMS takes this feature into account when assessing differential protein expression, allowing a more accurate data-dependent estimation of protein variance and the inclusion of single-peptide identifications without increasing false discoveries. The method was tested on several datasets, including E. coli proteome spike-in data, using both label-free and TMT-labelled quantification. DEqMS showed consistently better accuracy in detecting altered protein levels than previous statistical methods used in quantitative proteomics, in both label-free and labelled data. DEqMS is available as an R package in Bioconductor.
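The core observation, that protein-level variance shrinks as more PSMs or peptides support the quantification, suggests borrowing strength from a variance-versus-count trend. The sketch below mimics that idea with a log-log fit and a naive 50/50 shrinkage; it is not the empirical-Bayes moderation DEqMS actually performs:

```python
import numpy as np

def moderate_variance(sample_vars, n_peptides):
    """Shrink each protein's observed sample variance toward a trend
    estimated from its peptide/PSM count.

    sample_vars: per-protein sample variances.
    n_peptides: number of peptides/PSMs behind each protein's estimate."""
    v = np.asarray(sample_vars, dtype=float)
    n = np.asarray(n_peptides, dtype=float)
    # Fit log-variance against log-count: variance typically decreases
    # as more peptides support the quantification.
    slope, intercept = np.polyfit(np.log(n), np.log(v), 1)
    prior = np.exp(intercept + slope * np.log(n))  # trend-predicted variance
    # Naive shrinkage: average observed and trend-predicted variance.
    return 0.5 * v + 0.5 * prior
```

The payoff is for proteins quantified by a single peptide: their noisy individual variance estimate is pulled toward what the trend predicts for one-peptide proteins, which is how inclusion of single-peptide identifications becomes feasible without inflating false discoveries.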




techno

Immunopeptidomic analysis reveals that deamidated HLA-bound peptides arise predominantly from deglycosylated precursors [Technological Innovation and Resources]

The presentation of post-translationally modified (PTM) peptides by cell surface HLA molecules has the potential to increase the diversity of targets for surveilling T cells. Whilst immunopeptidomics studies routinely identify thousands of HLA-bound peptides from cell lines and tissue samples, in-depth analyses of the proportion and nature of peptides bearing one or more PTMs remains challenging. Here we have analyzed HLA-bound peptides from a variety of allotypes and assessed the distribution of mass spectrometry-detected PTMs, finding deamidation of asparagine or glutamine to be highly prevalent. Given that asparagine deamidation may arise either spontaneously or through enzymatic reaction, we assessed allele-specific and global motifs flanking the modified residues. Notably, we found that the N-linked glycosylation motif NX(S/T) was highly abundant across asparagine-deamidated HLA-bound peptides. This finding, demonstrated previously for a handful of deamidated T cell epitopes, implicates a more global role for the retrograde transport of nascently N-glycosylated polypeptides from the ER and their subsequent degradation within the cytosol to form HLA-ligand precursors. Chemical inhibition of Peptide:N-Glycanase (PNGase), the endoglycosidase responsible for the removal of glycans from misfolded and retrotranslocated glycoproteins, greatly reduced presentation of this subset of deamidated HLA-bound peptides. Importantly, there was no impact of PNGase inhibition on peptides not containing a consensus NX(S/T) motif. This indicates that a large proportion of HLA-I bound asparagine deamidated peptides are generated from formerly glycosylated proteins that have undergone deglycosylation via the ER-associated protein degradation (ERAD) pathway. The information herein will help train deamidation prediction models for HLA-peptide repertoires and aid in the design of novel T cell therapeutic targets derived from glycoprotein antigens.




techno

US–China Strategic Competition: The Quest for Global Technological Leadership

7 November 2019

The current dispute between the US and China goes far beyond trade tariffs and tit-for-tat reprisals: the underlying driver is a race for global technological supremacy. This paper examines the risks of greater strategic competition as well as potential solutions for mitigating the impacts of the US–China economic confrontation.

Marianne Schneider-Petsinger

Senior Research Fellow, US and the Americas Programme

Dr Jue Wang

Associate Fellow, Asia-Pacific Programme (based in Holland)

Dr Yu Jie

Senior Research Fellow on China, Asia-Pacific Programme

James Crabtree

Associate Fellow, Asia-Pacific Programme

Video: Marianne Schneider-Petsinger and Dr Yu Jie discuss key themes from the research paper

Summary

  • The underlying driver of the ongoing US–China trade war is a race for global technological dominance. President Trump has raised a number of issues regarding trade with China – including the US’s trade deficit with China and the naming of China as a currency manipulator. But at the heart of the ongoing tariff escalation are China’s policies and practices regarding forced technology transfer, intellectual property theft and non-market distortions.
  • As China’s international influence has expanded, it has always been unlikely that Beijing would continue to accept the existing global standards and institutions established and widely practised by developed countries on the basis of ‘the Washington Consensus’.
  • China’s desire to be an alternative champion of technology standard-setting remains unfulfilled. Its ample innovation talent is a solid foundation in its quest for global technology supremacy but tightening controls over personal freedoms could undermine it and deter potential global partners.
  • It is unclear if Chinese government interventions will achieve the technological self-sufficiency Beijing has long desired. China’s approach to macroeconomic management diverges significantly from that of the US and other real market economies, particularly in its policy towards nurturing innovation.
  • Chinese actors are engaged in the globalization of technological innovation through exports and imports of high-tech goods and services; cross-border investments in technology companies and research and development (R&D) activities; cross-border R&D collaboration; and international techno-scientific research collaboration.
  • While the Chinese state pushes domestic companies and research institutes to engage in the globalization of technological innovation, its interventions in the high-tech sector have caused uneasiness in the West.
  • The current US response to its competition with China for technological supremacy, which leans towards decoupling, is unlikely to prove successful. The US has better chances of success if it focuses on America’s own competitiveness, works on common approaches to technology policy with like-minded partners around the globe and strengthens the international trading system.
  • A technically sound mechanism for screening foreign investment can prevent normal cross-border collaboration in technological innovation from being misused by rival superpowers.




techno

Trade, Technology and National Security: Will Europe Be Trapped Between the US and China?

Invitation Only Research Event

2 March 2020 - 8:00am to 9:15am

Chatham House | 10 St James's Square | London | SW1Y 4LE

Event participants

Sir Simon Fraser, Managing Partner of Flint Global; Deputy Chairman, Chatham House
Chair: Marianne Schneider-Petsinger, Senior Research Fellow, US and the Americas Programme, Chatham House

The US and China have entered into an increasingly confrontational relationship over trade and technology. This may force Europe to make difficult choices between the two economic superpowers – or perform a balancing act. Although the recent US-China phase-1 trade deal has eased the relationship for now, the trade and technology tensions are a structural issue and are likely to persist.

The debate over Huawei’s participation in 5G networks is an example of how the UK and other countries may face competing priorities in economic, security and foreign policy. Can Europe avoid a binary choice between the US and China? Is it possible for the EU to position itself as a third global power in trade, technology and standard-setting? What strategies should Europeans adopt to keep the US and China engaged in the rules-based international order and what does the future hold for trade multilateralism?

Sir Simon Fraser will join us for a discussion on Europe’s future role between the US and China. Sir Simon is Managing Partner of Flint Global and Deputy Chairman of Chatham House. He previously served as Permanent Secretary at the Foreign and Commonwealth Office (FCO) and Head of the UK Diplomatic Service from 2010 to 2015. Prior to that he was Permanent Secretary at the UK Department for Business, Innovation and Skills. He has also served as Director General for Europe in the FCO and Chief of Staff to European Trade Commissioner Peter Mandelson.

We would like to take this opportunity to thank founding partner AIG and supporting partners Clifford Chance LLP, Diageo plc, and EY for their generous support of the Chatham House Global Trade Policy Forum.

Event attributes

Chatham House Rule

US and Americas Programme




techno

Diabetes technology: specialists are blocking access for some patients, say experts




techno

On Trial: Agricultural Biotechnology in Africa

21 July 2014

Rob Bailey

Former Research Director, Energy, Environment and Resources

Robin Willoughby
David Grzywacz 

Increasing agricultural productivity and adapting farming to climate change are central to Africa’s development prospects. There are important opportunities to enhance yields and increase resilience through the adoption of improved crop varieties. In some cases, biotechnology, and in particular genetic modification (GM), offers advantages over conventional plant-breeding approaches. Accordingly, various projects are under way to develop new GM varieties for African farmers, ranging from drought-resistant maize to varieties of cassava, banana, sorghum, cowpea and sweet potato with resistance to pests and disease.

In addition to government funds, these projects have also attracted the support of influential donor agencies and philanthropic foundations. However, despite the expenditure of considerable resources, the potential of GM in Africa is not being realized. So far no GM trait developed for African farmers has been put to use.

Multiple barriers inhibit the development and adoption of pro-poor GM varieties in Africa. On the demand side, farmers may be reluctant to adopt GM varieties owing to a lack of export opportunities and distrust of the technology among local consumers. Farmers may also be concerned about exploitation by transnational seed companies (despite the fact that development of new GM technologies in Africa is dominated by the public sector). On the supply side, donor funding struggles to match the long timescales of research and development, while incentives among research scientists may be poorly aligned with farmer outcomes. Non-existent, poorly functioning or overly punitive regulatory regimes discourage investment.

The most important barriers – such as regulatory constraints, consumer distrust and weak farmer demand – must be understood in the context of wider social and political dynamics surrounding GM, typified by misinformation, polarized public discourse, and dysfunctional and opportunistic politics. The result is most GM projects becoming ‘stuck’ at the field trial stage without ever progressing to release. This ‘convenient deadlock’ of continual field trials allows governments to manage political risks by effectively balancing the demands of pro-GM and anti-GM lobbies – proponents of GM have a pipeline of technologies, while opponents are appeased by the failure of any to gain approval. The disabling socio-political environment for GM development in Africa greatly reduces the efficacy of investment in this technology.

This has two important implications. First, technology development needs to be located within a wider project of transformation that engages key actors – most notably politicians, policy-makers and farmers – as stakeholders from the outset, and includes strategies to address multiple demand- and supply-side barriers. Second, successful adoption is more likely in countries with less disabling political conditions, characterized by lower levels of consumer distrust and opposition, genuine farmer demand and demonstrable commitment from government. Focusing efforts and resources on a small number of ‘best bet’ countries will also allow donors and technology providers to support more ambitious, transformational projects led by national governments.




techno

The Future of Democracy in Europe: Technology and the Evolution of Representation

3 March 2020

To the extent that perceptions of a crisis in liberal democracy in Europe can be confirmed, this paper investigates the nature of the problem and its causes, and asks what part, if any, digital technology plays in it.

Hans Kundnani

Senior Research Fellow, Europe Programme


A woman writes a note on the Savita Halappanavar mural in Dublin on 26 May 2018, following a referendum on the 36th amendment to Ireland’s constitution. The referendum result was overwhelmingly in favour of removing the country’s previous near-universal ban on abortion. Photo: Getty Images.

Summary

  • There is a widespread sense that liberal democracy is in crisis, but little consensus exists on the specific nature and causes of the crisis. In particular, there are three prisms through which the crisis is usually seen: the rise of ‘populism’, ‘democratic deconsolidation’, and a ‘hollowing out’ of democracy. Each reflects normative assumptions about democracy.
  • The exact role of digital technology in the crisis is disputed. Despite the widely held perception that social media is undermining democracy, the evidence for this is limited. Over the longer term, the further development of digital technology could undermine the fundamental preconditions for democracy – though the pace and breadth of technological change make predictions about its future impact difficult.
  • Democracy functions in different ways in different European countries, with political systems on the continent ranging from ‘majoritarian democracies’ such as the UK to ‘consensual democracies’ such as Belgium and Switzerland. However, no type seems to be immune from the crisis. The political systems of EU member states also interact in diverse ways with the EU’s own structure, which is problematic for representative democracy as conventionally understood, but difficult to reform.
  • Political parties, central to the model of representative democracy that emerged in the late 18th century, have long seemed to be in decline. Recently there have been some signs of a reversal of this trend, with the emergence of parties that have used digital technology in innovative ways to reconnect with citizens. Traditional parties can learn from these new ‘digital parties’.
  • Recent years have also seen a proliferation of experiments in direct and deliberative democracy. There is a need for more experimentation in these alternative forms of democracy, and for further evaluation of how they can be integrated into the existing institutions and processes of representative democracy at the local, regional, national and EU levels.
  • We should not think of democracy in a static way – that is, as a system that can be perfected once and for all and then simply maintained and defended against threats. Democracy has continually evolved and now needs to evolve further. The solution to the crisis will not be to attempt to limit democracy in response to pressure from ‘populism’ but to deepen it further as part of a ‘democratization of democracy’.




techno

Diabetes Technology Update: Use of Insulin Pumps and Continuous Glucose Monitoring in the Hospital

Guillermo E. Umpierrez
Aug 1, 2018; 41:1579-1589
Diabetes Care Symposium




techno

Stony Brook University opens Center for Implant and Digital Technology

Stony Brook School of Dental Medicine opened its Center for Implant and Digital Technology on Dec. 5. The center will serve as a state-of-the-art space for education, patient care and research focused on digital dentistry.




techno

How Technology Is Improving Safety On the Roads and Reducing Driving Anxiety

Technology has changed a number of aspects of our everyday lives and has led to increased efficiency. But when it comes to driving, has it helped or hindered the process? In this article, we will be looking into some of the ways that technology has improved safety on our roads in the last 10 years. […]




techno

What happens when a Silicon Valley technologist works for the government | Matt Cutts

What if the government ran more like Silicon Valley? Engineer Matt Cutts shares why he decided to leave Google (where he worked for nearly 17 years) for a career in the US government -- and makes the case that if you really want to make an impact, go where your help is needed most.




techno

New Study Shows 1-to-1 Technology Improves Student Achievement in Math Over Time

A new study published in the Educational Evaluation and Policy Analysis journal found that there is potential for 1-to-1 technology programs to increase achievement in the short term, but more so in the medium term.




techno

One State Is Overhauling Its Finance Technology After Long-Standing Fights, Glitches

State education departments' finance technology can cost millions to replace, but those systems are crucial for fiscal transparency and efficiency. Hawaii is replacing its long-troubled system with a new one set to go online this summer.




techno

Choosing Down syndrome : ethics and new prenatal testing technologies / Chris Kaposy.

Down syndrome -- Diagnosis -- Moral and ethical aspects.




techno

Dictionnaire technologique : dans les langues Française, Anglaise et Allemande ; renfermant les termes techniques usités dans les arts et métiers et dans l'industrie en général / rédigé par Alexandre Tolhausen. Revu p

Leipzig : Tauchnitz, 1873-1876.




techno

Technology and adolescent mental health

9783319696386 (electronic bk.)




techno

Semantic technology : 9th Joint International Conference, JIST 2019, Hangzhou, China, November 25-27, 2019, Revised selected papers

Joint International Semantic Technology Conference (9th : 2019 : Hangzhou, China)
9789811534126 (electronic bk.)




techno

Saffron : science, technology and health

9780128187401 (ePub ebook)




techno

Nanomaterials and environmental biotechnology

9783030345440 (electronic bk.)




techno

Microalgae biotechnology for food, health and high value products

9789811501692 (electronic bk.)




techno

Information retrieval technology : 15th Asia Information Retrieval Societies Conference, AIRS 2019, Hong Kong, China, November 7-9, 2019, proceedings

Asia Information Retrieval Societies Conference (15th : 2019 : Hong Kong, China)
9783030428358




techno

Grand challenges in fungal biotechnology

9783030295417 (electronic bk.)




techno

Fresh-cut fruits and vegetables : technologies and mechanisms for safety control

9780128165393 (electronic bk.)




techno

Emerging eco-friendly green technologies for wastewater treatment

9789811513909 (electronic bk.)




techno

Current developments in biotechnology and bioengineering : resource recovery from wastes

0444643222




techno

Cotton production and uses : agronomy, crop protection, and postharvest technologies

9789811514722




techno

Bioremediation and biotechnology : sustainable approaches to pollution degradation

9783030356910 (electronic bk.)




techno

Bacteriophages : biology, technology, therapy

9783319405988 electronic book