
Predictions and Policymaking: Complex Modelling Beyond COVID-19

1 April 2020

Yasmin Afina

Research Assistant, International Security Programme

Calum Inverarity

Research Analyst and Coordinator, International Security Programme
The COVID-19 pandemic has highlighted the potential of complex systems modelling for policymaking, but it is also crucial to understand its limitations.


A member of the media wearing a protective face mask works in Downing Street where Britain's Prime Minister Boris Johnson is self-isolating in central London, 27 March 2020. Photo by TOLGA AKMEN/AFP via Getty Images.

Complex systems models have played a significant role in informing and shaping the public health measures adopted by governments in the context of the COVID-19 pandemic. For instance, modelling carried out by a team at Imperial College London is widely reported to have driven the approach in the UK from a strategy of mitigation to one of suppression.

Complex systems modelling will increasingly feed into policymaking by predicting a range of potential correlations, results and outcomes based on a set of parameters, assumptions, data and pre-defined interactions. It is already instrumental in developing risk mitigation and resilience measures to address and prepare for existential crises such as pandemics, the prospect of nuclear war and climate change.

The human factor

In the end, model-driven approaches must stand up to the test of real-life data, and modelling for policymaking must take into account a number of caveats and limitations. Models are developed to help answer specific questions, and their predictions will depend on the hypotheses and definitions set by the modellers, which are subject to their individual and collective biases and assumptions. For instance, the models developed by Imperial College came with the caveated assumption that a policy of social distancing for people over 70 would have a 75 per cent compliance rate. This assumption is based on the modellers’ own perceptions of demographics and society, and may not reflect all the societal factors that could affect the compliance rate in real life, such as gender, age, ethnicity, genetic diversity, economic stability, and access to food, supplies and healthcare. This is why modelling benefits from a cognitively diverse team who bring a wide range of knowledge and understanding to the early creation of a model.
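To make the point concrete, the sketch below is a deliberately simplified, hypothetical compartmental model (not the Imperial College model): a single assumed behavioural parameter, the compliance rate with distancing advice, visibly shifts the headline prediction. All parameter values are illustrative assumptions.

```python
# Hypothetical toy SIR-style model: illustrates how one behavioural assumption
# (compliance with distancing advice) changes a model's headline output.
# All parameter values are illustrative, not taken from any published model.

def peak_infected(compliance, r0=2.5, distancing_effect=0.6,
                  recovery_days=7.0, initial_infected=1e-4, days=365, dt=0.1):
    """Return the peak infected fraction of the population for a given
    compliance rate (0..1) with distancing advice."""
    gamma = 1.0 / recovery_days                      # recovery rate
    beta = r0 * gamma                                # baseline transmission rate
    # Assumption: compliant behaviour cuts transmission by `distancing_effect`.
    effective_beta = beta * (1.0 - distancing_effect * compliance)

    s, i = 1.0 - initial_infected, initial_infected  # susceptible, infected fractions
    peak = i
    for _ in range(int(days / dt)):
        new_infections = effective_beta * s * i * dt
        recoveries = gamma * i * dt
        s -= new_infections
        i += new_infections - recoveries
        peak = max(peak, i)
    return peak

if __name__ == "__main__":
    for compliance in (0.50, 0.75, 0.95):            # 0.75 echoes the caveated assumption
        print(f"compliance {compliance:.0%}: peak infected fraction "
              f"{peak_infected(compliance):.1%}")
```

In this toy setting, moving the assumed compliance from 50 to 95 per cent changes the projected peak by roughly an order of magnitude, which is precisely why the assumptions, and the composition of the team making them, matter.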

The potential of artificial intelligence

Machine learning, a subset of artificial intelligence (AI), has the potential to advance the capacity and accuracy of modelling techniques by identifying new patterns and interactions, and by overcoming some of the limitations resulting from human assumptions and bias. Yet increasing reliance on these techniques raises the issue of explainability. Policymakers need to be fully aware of, and understand, the model, assumptions and input data behind any predictions, and must be able to communicate this aspect of modelling in order to uphold democratic accountability and transparency in public decision-making.

In addition, models using machine learning techniques require extensive amounts of data, which must also be of high quality and as free from bias as possible to ensure accuracy and address the issues at stake. Although technology may be used in the process (i.e. automated extraction and processing of information with big data), data is ultimately created, collected, aggregated and analysed by and for human users. Datasets will reflect the individual and collective biases and assumptions of those creating, collecting, processing and analysing this data. Algorithmic bias is inevitable, and it is essential that policy- and decision-makers are fully aware of how reliable the systems are, as well as their potential social implications.

The age of distrust

Increasing use of emerging technologies for data- and evidence-based policymaking is taking place, paradoxically, in an era of growing mistrust towards expertise and experts, as infamously summed up by Michael Gove. Policymakers and subject-matter experts have faced increased public scrutiny of their findings and of the policies those findings have been used to justify.

This distrust and scepticism within public discourse have only been fuelled by the ever-increasing availability of diffuse sources of information, not all of which are verifiable and robust. This has caused tension between experts, policymakers and the public, leading to conflict and uncertainty over which data and predictions can be trusted, and to what degree. The dynamic is exacerbated by the fact that certain individuals may purposefully misappropriate, or simply misinterpret, data to support their arguments or policies. Politicians are currently considered the least trusted professionals by the UK public, highlighting the importance of better and more effective communication between the scientific community, policymakers and the populations affected by policy decisions.

Acknowledging limitations

While measures can and should be built in to improve the transparency and robustness of scientific models in order to counteract these common criticisms, it is important to acknowledge that there are limits to the steps that can be taken. This is particularly the case when dealing with predictions of future events, which inherently involve degrees of uncertainty that cannot be fully accounted for by human or machine. As a result, if not carefully considered and communicated, the increased use of complex modelling in policymaking has the potential to undermine and obfuscate the policymaking process, contributing to significant mistakes, increased uncertainty, a lack of trust in the models and in the political process, and further disaffection among citizens.

The potential contribution of complexity modelling to the work of policymakers is undeniable. However, it is imperative to appreciate the inner workings and limitations of these models, such as the biases that underpin their functioning and the uncertainties that they will not be fully capable of accounting for, in spite of their immense power. They must be tested against the data, again and again, as new information becomes available; otherwise there is a risk of scientific models becoming embroiled in partisan politics and being weaponized for political purposes. It is therefore important not to treat these models as oracles, but as one of many contributions to the process of policymaking.





POSTPONED: The Development of Libyan Armed Groups since 2014: Community Dynamics and Economic Interests

Invitation Only Research Event

18 March 2020 - 9:00am to 10:30am

Chatham House | 10 St James's Square | London | SW1Y 4LE

Event participants

Abdul Rahman Alageli, Associate Fellow, MENA Programme, Chatham House
Emaddedin Badi, Non-Resident Scholar, Middle East Institute
Tim Eaton, Senior Research Fellow, MENA Programme, Chatham House
Valerie Stocker, Independent Researcher

Since the overthrow of the regime of Muammar Gaddafi in 2011, Libya’s multitude of armed groups have followed a range of paths. While many of these have gradually demobilized, others have remained active and still others have expanded their influence. In the west and south of the country, armed groups have used their state affiliation to co-opt the state and professionals from the state security apparatus into their ranks.

In the east, the Libyan Arab Armed Forces projects a nationalist narrative yet is ultimately subservient to its leader, Field Marshal Khalifa Haftar. Prevailing policy narratives presuppose that the interests of armed actors are distinct from those of the communities they claim to represent. Given the degree to which most armed groups are embedded in local society, however, successful engagement will need to address the fears, grievances and desires of the surrounding communities, even while the development of armed groups’ capacities dilutes their accountability to those communities.

This roundtable will discuss the findings of a forthcoming Chatham House research paper, ‘The Development of Libyan Armed Groups Since 2014: Community Dynamics and Economic Interests’, which presents insights from over 200 interviews of armed actors and members of local communities and posits how international policymakers might seek to curtail the continued expansion of the conflict economy.

PLEASE NOTE THIS EVENT IS POSTPONED UNTIL FURTHER NOTICE.

Event attributes

Chatham House Rule

Georgia Cooke

Project Manager, Middle East and North Africa Programme
+44 (0)20 7957 5740





The Development of Libyan Armed Groups Since 2014: Community Dynamics and Economic Interests

17 March 2020

This paper explores armed group–community relations in Libya and the sources of revenue that have allowed armed groups to grow in power and influence. It draws out the implications for policy and identifies options for mitigating conflict dynamics.

Tim Eaton

Senior Research Fellow, Middle East and North Africa Programme

Abdul Rahman Alageli

Associate Fellow, Middle East and North Africa Programme

Emadeddin Badi

Policy Leader Fellow, School of Transnational Governance, European University Institute

Mohamed Eljarh

Co-founder and CEO, Libya Outlook

Valerie Stocker

Researcher


Fighters of the UN-backed Government of National Accord patrol in Ain Zara suburb in Tripoli, February 2020. Photo: Amru Salahuddien

Summary

  • Libya’s multitude of armed groups have followed a range of paths since the emergence of a national governance split in 2014. Many have gradually demobilized, others have remained active, and others have expanded their influence. However, the evolution of the Libyan security sector in this period remains relatively understudied. Prior to 2011, Libya’s internal sovereignty – including the monopoly on force and sole agency in international relations – had been personally vested in the figure of Muammar Gaddafi. After his death, these elements of sovereignty reverted to local communities, which created armed organizations to fill that central gap. National military and intelligence institutions that were intended to protect the Libyan state have remained weak, with their coherence undermined further by the post-2014 governance crisis and ongoing conflict. As a result, the most effective armed groups have remained localized in nature; the exception is the Libyan Arab Armed Forces (LAAF), which has combined and amalgamated locally legitimate forces under a central command.
  • In the west and south of the country, the result of these trends resembles a kind of inversion of security sector reform (SSR) and disarmament, demobilization and reintegration (DDR): the armed groups have used their state affiliation to co-opt the state and professionals from the state security apparatus into their ranks; and have continued to arm, mobilize and integrate themselves into the state’s security apparatus without becoming subservient to it. In the eastern region, the LAAF projects a nationalist narrative yet is ultimately subservient to its leader, Field Marshal Khalifa Haftar. The LAAF has co-opted social organizations to dominate political and economic decision-making.
  • The LAAF has established a monopoly over the control of heavy weapons and the flow of arms in eastern Libya, and has built alliances with armed groups in the east. Armed groups in the south have been persuaded to join the LAAF’s newly established command structure. The LAAF’s offensive on the capital, which started in April 2019, represents a serious challenge to armed groups aligned with the Tripoli-based Government of National Accord (GNA). The fallout from the war will be a challenge to the GNA or any future government, as groups taking part in the war will expect to be rewarded. SSR is thus crucial in the short term: if the GNA offers financial and technical expertise and resources, plus legal cover, to armed groups under its leadership, it will increase the incentive for armed groups to be receptive to its plans for reform.
  • Prevailing policy narratives presuppose that the interests of armed actors are distinct from those of the communities they claim to represent. Given the degree to which most armed groups are embedded in local society, however, successful engagement will necessarily rely on addressing the fears, grievances and desires of the surrounding communities. Yet the development of armed groups’ capacities, along with their increasing access to autonomous means of generating revenue, has steadily diluted their accountability to local communities. This process is likely to be accelerated by the ongoing violence around Tripoli.
  • Communities’ relationship to armed groups varies across different areas of the country, reflecting the social, political, economic and security environment:
  • Despite their clear preference for a more formal, state-controlled security sector, Tripoli’s residents broadly accept the need for the presence of armed groups to provide security. The known engagement of the capital’s four main armed groups in criminal activity is a trade-off that many residents seem able to tolerate, provided that overt violence remains low. Nonetheless, there is a widespread view that the greed of Tripoli’s armed groups has played a role in stoking the current conflict.
  • In the east, many residents appear to accept (or even welcome) the LAAF’s expansion beyond the security realm, provided that it undertakes these roles effectively. That said, such is the extent of LAAF control that opposition to the alliance comes at a high price.
  • In the south, armed groups draw heavily on social legitimacy, acting as guardians of tribal zones of influence and defenders of their respective communities against outside threats, while also at times stoking local conflicts. Social protections continue to hold sway, meaning that accountability within communities is also limited.
  • To varying extents since 2014, Libya’s armed groups have developed networks that enmesh political and business stakeholders in revenue-generation models:
  • Armed groups in Tripoli have compensated for reduced financial receipts from state budgets by cultivating unofficial and illicit sources of income. They have also focused on infiltrating state institutions to ensure access to state budgets and contracts dispersed in the capital.
  • In the east of the country, the LAAF has developed a long-term strategy to dominate the security, political and economic spheres through the establishment of a quasi-legal basis for receiving funds from Libya’s rival state authorities. It has supplemented this with extensive intervention in the private sector. External patronage supports military operations, but also helps to keep this financial system, based on unsecured debt, afloat.
  • In the south, limited access to funds from the central state has spurred armed groups to become actively involved in the economy. This has translated into the taxation of movement and the imposition of protection fees, particularly on informal (and often illicit) activity.
  • Without real commitment from international policymakers to enforcing the arms embargo and protecting the economy from being weaponized, Libya will be consigned to sustained conflict, further fragmentation and potential economic collapse. Given the likely absence of a political settlement in the short term, international policymakers should seek to curtail the continued expansion of the conflict economy by reducing armed groups’ engagement in economic life.
  • In order to reduce illicit activities, international policymakers should develop their capacity to identify and target chokepoints along illicit supply chains, with a focus on restraining activities and actors in closest proximity to violence. Targeted sanctions against rent maximizers (both armed and unarmed) are likely to be the most effective strategy. More effective investigation and restraint of conflict economy actors will require systemic efforts to improve transparency and enhance the institutional capacity of anti-corruption authorities. International policymakers should also support the development of tailored alternative livelihoods that render conflict economy activities less attractive.





Head-to-head comparison of 68Ga-DOTA-JR11 and 68Ga-DOTATATE PET/CT in patients with metastatic, well-differentiated neuroendocrine tumors: a prospective study

Purpose: 68Ga-DOTA-JR11 is a somatostatin receptor antagonist used in neuroendocrine tumor imaging. The purpose of this study is to compare 68Ga-DOTA-JR11 and 68Ga-DOTATATE PET/CT in patients with metastatic, well-differentiated neuroendocrine tumors. Methods: Patients with histologically proven, metastatic and/or unresectable, well-differentiated neuroendocrine tumors were prospectively recruited in this study. They received an intravenous injection of 68Ga-DOTATATE (4.0 ± 1.3 mCi) on the first day and 68Ga-DOTA-JR11 (4.0 ± 1.4 mCi) on the second day. Whole-body PET/CT scans were performed 40 to 60 minutes after injection on the same scanner. Physiologic uptake in normal organs, lesion numbers, and lesion uptake were compared. Results: Twenty-nine patients were enrolled in the study. The SUVmax values of the spleen, renal cortex, adrenal glands, pituitary gland, stomach wall, normal liver parenchyma, small intestine, pancreas, and bone marrow were significantly lower on 68Ga-DOTA-JR11 than on 68Ga-DOTATATE PET/CT (P<0.001). 68Ga-DOTA-JR11 detected significantly more liver lesions (539 vs. 356, P = 0.002) but fewer bone lesions (156 vs. 374, P = 0.031) than 68Ga-DOTATATE. The tumor-to-background ratio (TBR) of liver lesions was significantly higher on 68Ga-DOTA-JR11 (7.6 ± 5.1 vs. 3.4 ± 2.0, P<0.001). 68Ga-DOTA-JR11 and 68Ga-DOTATATE PET/CT showed comparable results for primary tumors and lymph node metastases on both patient-based and lesion-based comparison. Conclusion: 68Ga-DOTA-JR11 performs better in the detection and TBR of liver metastases. However, 68Ga-DOTATATE outperforms 68Ga-DOTA-JR11 in the detection of bone metastases. The differential affinity at different metastatic sites provides key information for patient selection in imaging and peptide receptor radionuclide therapy.





Combined Visual and Semi-quantitative Evaluation Improves Outcome Prediction by Early Mid-treatment 18F-fluorodeoxyglucose Positron Emission Tomography in Diffuse Large B-cell Lymphoma.

The purpose of this study was to assess the predictive and prognostic value of interim FDG PET (iPET) in evaluating early response to immuno-chemotherapy after two cycles (PET-2) in diffuse large B-cell lymphoma (DLBCL) by applying two different methods of interpretation: the Deauville visual five-point scale (5-PS) and a semi-quantitative evaluation of the change in standardised uptake value (ΔSUV). Methods: 145 patients with newly diagnosed DLBCL underwent pre-treatment PET (PET-0) and PET-2 assessment. PET-2 was classified according to both the visual 5-PS and the percentage SUV change (ΔSUV). Receiver operating characteristic (ROC) analysis was performed to compare the accuracy of the two methods for predicting progression-free survival (PFS). Survival estimates, based on each method separately and combined, were calculated for iPET-positive (iPET+) and iPET-negative (iPET–) groups and compared. Results: With both visual and ΔSUV-based evaluations, significant differences were found between the PFS of the iPET– and iPET+ patient groups (p<0.001). Visually, the best negative predictive value (NPV) and positive predictive value (PPV) occurred when iPET was defined as positive for a Deauville score of 4-5 (89% and 59%, respectively). Using the previously reported 66% ΔSUV cut-off value, NPV and PPV were 80% and 76%, respectively. A ΔSUV cut-off of 48.9%, reported here for the first time, produced 100% specificity along with the highest sensitivity (24%). Visual and semi-quantitative (ΔSUV<48.9%) assessment of each PET-2 gave the same classification (positive or negative) in 70% (102/145) of all patients. This combined classification delivered NPV and PPV of 89% and 100% respectively, and all iPET+ patients failed to achieve or remain in remission. Conclusion: In this large, consistently treated and assessed series of DLBCL, iPET had good prognostic value whether interpreted visually or semi-quantitatively. We determined that the most effective ΔSUV cut-off was 48.9%, and that when combined with visual 5-PS assessment, a positive PET-2 was highly predictive of treatment failure.
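For illustration only, a minimal sketch follows of how the combined visual plus semi-quantitative PET-2 classification described above could be expressed, using the cut-offs reported in the abstract (Deauville 4-5 visually positive; ΔSUV reduction below 48.9% semi-quantitatively positive). The handling of discordant cases and all function names are assumptions, not the study's actual analysis code.

```python
# Illustrative sketch of the combined iPET classification; thresholds follow the
# abstract, everything else (names, discordant handling) is assumed.

def delta_suv_percent(suvmax_baseline, suvmax_interim):
    """Percentage reduction of SUVmax between PET-0 and PET-2."""
    return 100.0 * (suvmax_baseline - suvmax_interim) / suvmax_baseline

def combined_pet2_classification(deauville_score, suvmax_baseline, suvmax_interim,
                                 delta_cutoff=48.9):
    visual_positive = deauville_score >= 4
    semiquant_positive = delta_suv_percent(suvmax_baseline, suvmax_interim) < delta_cutoff
    if visual_positive == semiquant_positive:
        return "iPET+" if visual_positive else "iPET-"
    return "discordant"  # the two methods disagreed (about 30% of patients here)

# Example: Deauville 5 with only a 30% SUVmax reduction -> concordantly positive.
print(combined_pet2_classification(5, suvmax_baseline=20.0, suvmax_interim=14.0))
```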





Long term follow-up and outcomes of re-treatment in an expanded 50 patient single-center phase II prospective trial of Lutetium-177 (177Lu) PSMA-617 theranostics in metastatic castrate-resistant prostate cancer

Objectives: Lutetium-177 (177Lu)-PSMA-617 (LuPSMA) is a radioligand with high affinity for prostate specific membrane antigen (PSMA) enabling targeted beta-irradiation of prostate cancer. We have previously reported favorable activity with low toxicity in a prospective phase II trial involving 30 men with metastatic castrate-resistant prostate cancer (mCRPC). We now report their longer-term outcomes including a 20 patient extension cohort and outcomes of subsequent systemic treatments following completion of trial therapy. Methods: 50 patients with PSMA-avid mCRPC who had progressed after standard therapies received up to 4 cycles of LuPSMA every 6 weeks. Endpoints included PSA response (PCWG2), toxicity (CTCAE v4.03), imaging response, patient-reported health-related quality of life (QoL), progression-free and overall survival. We also describe, as a novel finding, outcomes of men who subsequently progressed and had further systemic therapies, including LuPSMA. Results: 75 men were screened to identify 50 patients eligible for treatment. Adverse prognostic features of the cohort included short median PSA doubling time (2.3 months) and extensive prior treatment including prior docetaxel (84%), cabazitaxel (48%), and abiraterone and/or enzalutamide (90%). The mean administered radioactivity was 7.5 GBq/cycle. PSA decline ≥ 50% was achieved in 32 of 50 patients (64%, 95% CI 50-77%), including 22 patients (44%, 95% CI 30-59%) with ≥ 80% decrease. Of 27 patients with measurable soft tissue disease, 15 (56%) achieved an objective response by RECIST 1.1. The most common toxicities attributed to LuPSMA were self-limiting G1-2 dry mouth (66%), transient G1-2 nausea (48%), G3-4 thrombocytopenia (10%) and G3 anemia (10%). Brief pain inventory severity and interference scores decreased at all time points including at the 3 month follow-up with a decrease of -1.2 (95% CI -0.5 to -1.9, P = 0.001) and 1.0 (95% CI -0.2 to -0.18, P = 0.013), respectively. At a median follow-up of 31.4 months, median OS was 13.3 months (95% CI 10.5-18.7) with a significantly longer survival of 18.4 months (95% CI 13.8-23.8) in patients achieving a PSA decline ≥ 50%. At progression following prior response, further LuPSMA was administered to 15 (30%) patients (median 2 cycles commencing 359 days from enrolment) with PSA decline ≥ 50% in 11 patients (73%). 4 of 21 patients (19%) receiving other systemic therapies upon progression experienced PSA decline ≥ 50%. There were no unexpected adverse events with LuPSMA re-treatment. Conclusion: This expanded 50 patient cohort of men with extensive prior therapy confirms our earlier report of high response rates, low toxicity and improved QoL with LuPSMA radioligand therapy. Upon progression, re-challenge LuPSMA demonstrated higher response rates than other systemic therapies.





Comparison between 18F-FDG-PET- and CT-based criteria in non-small cell lung cancer (NSCLC) patients treated with Nivolumab

Owing to their distinctive mechanism of action, the evaluation of radiological response to immune checkpoint inhibitors (ICI) presents many challenges in solid tumors. We aimed to compare the first evaluation of response to Nivolumab by CT-based criteria with fluorodeoxyglucose positron emission tomography (FDG-PET) response criteria in non-small-cell lung cancer (NSCLC) patients. Methods: 72 patients with advanced NSCLC were recruited in a mono-institutional ancillary trial within the expanded access program (EAP; NCT02475382) for Nivolumab. Patients underwent CT scan and FDG-PET at baseline and after 4 cycles (first evaluation). In case of progressive disease (PD), an additional evaluation was performed after two further cycles in order to confirm progression. We evaluated the response to treatment on CT using the response evaluation criteria in solid tumors (RECIST) 1.1 and the immune-related response criteria (irRC), and on FDG-PET using the PERCIST and immunotherapy-modified PERCIST (imPERCIST) criteria. The concordance between CT- and PET-based criteria and the capability of each method to predict overall survival (OS) were evaluated. Results: 48/72 patients were evaluable for first response assessment with both PET- and CT-based criteria. We observed low concordance between CT- and PET-based criteria (Kappa values of 0.346 and 0.355, and of 0.128 and 0.198, for PERCIST and imPERCIST versus RECIST and irRC, respectively). Looking at OS, irRC was more reliable in distinguishing responders from non-responders. However, owing to the prognostic value of partial metabolic response assessed by both PERCIST and imPERCIST, PET-based response remained prognostically significant in patients classified as having progressive disease on the basis of irRC. Conclusion: Even though the present study did not support the routine use of FDG-PET in the general population of NSCLC patients treated with ICI, it suggests an added prognostic value of metabolic response assessment, potentially improving therapeutic decision-making.





Inflammation-based index and 68Ga-DOTATOC PET-derived uptake and volumetric parameters predict outcome in neuroendocrine tumor patients treated with 90Y-DOTATOC

We performed post-hoc analyses of the utility of pre-therapeutic and early interim 68Ga-DOTA-Tyr3-octreotide (68Ga-DOTATOC) positron emission tomography (PET) tumor uptake and volumetric parameters, and of a recently proposed biomarker, the inflammation-based index (IBI), for peptide receptor radionuclide therapy (PRRT) in neuroendocrine tumor (NET) patients treated with 90Y-DOTATOC in the setting of a prospective phase II trial. Methods: Forty-three NET patients received up to four cycles of 1.85 GBq/m²/cycle 90Y-DOTATOC with a maximal kidney biologic effective dose of 37 Gy. All patients underwent 68Ga-DOTATOC PET/computed tomography (CT) at baseline and seven weeks after the first PRRT cycle. 68Ga-DOTATOC-avid tumor lesions were semi-automatically delineated using a customized standardized uptake value (SUV) threshold-based approach. PRRT response was assessed on CT using RECIST 1.1. Results: Median progression-free survival (PFS) and overall survival (OS) were 13.9 and 22.3 months, respectively. An SUVmean higher than 13.7 (75th percentile (P75)) was associated with better survival (hazard ratio (HR) 0.45; P = 0.024), whereas a 68Ga-DOTATOC-avid tumor volume higher than 578 ml (P75) was associated with worse OS (HR 2.18; P = 0.037). Elevated baseline IBI was associated with worse OS (HR 3.90; P = 0.001). Multivariate analysis corroborated independent associations between OS and SUVmean (P = 0.016) and IBI (P = 0.015). No significant correlations with PFS were found. A composite score based on SUVmean and IBI allowed patients to be further stratified into three categories with significantly different survival. On early interim PET, a decrease in SUVmean of more than 17% (P75) was associated with worse survival (HR 2.29; P = 0.024). Conclusion: Normal baseline IBI and high 68Ga-DOTATOC tumor uptake predict better outcome in NET patients treated with 90Y-DOTATOC. This can be used for treatment personalization. Interim 68Ga-DOTATOC PET does not provide information for treatment personalization.





18F-Fluorodeoxyglucose Positron Emission Tomography / Computed Tomography in Left-Ventricular Assist Device Infection: Initial Results Supporting the Usefulness of Image-Guided Therapy

Background: Accurate definition of the extent and severity of left-ventricular assist device (LVAD) infection may facilitate therapeutic decision making and targeted surgical intervention. Here, we explore the value of 18F-fluorodeoxyglucose (FDG) positron emission tomography/computed tomography (PET/CT) for guidance of patient management. Methods: Fifty-seven LVAD-carrying patients received 85 whole-body 18F-FDG PET/CT scans for the work-up of device infection. Clinical follow-up was obtained over a period of up to two years. Results: PET/CT showed various patterns of infectious involvement of the 4 LVAD components: driveline entry point (77% of cases), subcutaneous driveline path (87%), pump pocket (49%) and outflow tract (58%). Driveline smears revealed Staphylococcus or Pseudomonas strains as the underlying pathogen in the majority of cases (48% and 34%, respectively). At receiver-operating-characteristic analysis, an 18F-FDG standardized uptake value (SUV) >2.5 was most accurate in identifying smear-positive driveline infection. Infection of 3 or all 4 LVAD components showed a trend towards lower survival vs infection of 2 or fewer components (P = 0.089), while involvement of thoracic lymph nodes was significantly associated with adverse outcome (P = 0.001 for nodal SUV above vs below the median). Finally, patients who underwent early surgical revision within 3 months after PET/CT (n = 21) required significantly less inpatient hospital care during follow-up when compared to those receiving delayed surgical revision (n = 11; P<0.05). Conclusion: Whole-body 18F-FDG PET/CT identifies the extent of LVAD infection and predicts adverse outcome. Initial experience suggests that early image-guided surgical intervention may facilitate a less complicated subsequent course.





Head to head prospective comparison of quantitative lung scintigraphy and segment counting in predicting pulmonary function of lung cancer patients undergoing video-assisted thoracoscopic lobectomy

Prediction of post-operative pulmonary function in lung cancer patients before tumor resection is essential for patient selection for surgery and is conventionally done with a non-imaging segment counting method (SC) or two-dimensional planar lung perfusion scintigraphy (PS). The purpose of this study was to compare quantitative analysis of PS to single photon emission computed tomography/computed tomography (SPECT/CT) and to estimate the accuracy of SC, PS and SPECT/CT in predicting post-operative pulmonary function in patients undergoing lobectomy. Methods: Seventy-five non-small cell lung cancer (NSCLC) patients planned for lobectomy were prospectively enrolled (68% males, average age 68.1±8 years). All patients completed pre-operative measurement of forced expiratory volume in one second (FEV1) and diffusing capacity of the lung for carbon monoxide (DLCO), and Tc99m-MAA lung perfusion scintigraphy with PS and SPECT/CT quantification. A subgroup of 60 patients underwent video-assisted thoracoscopic (VATS) lobectomy and measurement of post-operative FEV1 and DLCO. Relative uptake of the lung lobes estimated by PS and SPECT/CT was compared. Predicted post-operative FEV1 and DLCO were derived from SC, PS and SPECT/CT. Prediction results were compared between the different methods and the true post-operative measurements in patients who underwent lobectomy. Results: Relative uptake measurements differed significantly between PS and SPECT/CT in the right lung lobes, with mean differences of -8.2±3.8, 18.0±5.0 and -11.5±6.1 for the right upper, middle and lower lobes, respectively (p<0.001). The differences between the methods in the left lung lobes were minor, with mean differences of -0.4±4.4 (p>0.05) and -2.0±4.0 (p<0.001) for the left upper and lower lobes, respectively. No significant differences and strong correlations (R=0.6-0.76, p<0.001) were found between the predicted post-operative lung function values according to SC, PS and SPECT/CT and the actual post-operative FEV1 and DLCO. Conclusion: Although lobar quantification parameters differed significantly between PS and SPECT/CT, no significant differences were found between the predicted post-operative lung function results derived from these methods and the actual post-operative results. The additional time and effort of SPECT/CT quantification may not have added value in patient selection for surgery. SPECT/CT may be advantageous in patients planned for right lobectomies, but further research is warranted.
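As a point of reference, the two prediction approaches compared above are commonly expressed with simple scaling formulas; the sketch below is illustrative only and is not the study's implementation. The 19-segment convention and the example numbers are assumptions.

```python
# Illustrative formulas for predicted post-operative (ppo) lung function.
# Segment counts per lobe follow the common 19-segment convention (assumption).

SEGMENTS_PER_LOBE = {"RUL": 3, "RML": 2, "RLL": 5, "LUL": 5, "LLL": 4}  # total 19

def ppo_segment_counting(preop_value, lobe):
    """Segment counting (SC): scale the pre-operative FEV1 or DLCO by the
    fraction of segments that remain after resecting the given lobe."""
    total_segments = sum(SEGMENTS_PER_LOBE.values())
    return preop_value * (1.0 - SEGMENTS_PER_LOBE[lobe] / total_segments)

def ppo_perfusion(preop_value, lobe_relative_uptake):
    """Perfusion-based prediction (PS or SPECT/CT): scale the pre-operative value
    by the fraction of total lung uptake outside the lobe to be resected."""
    return preop_value * (1.0 - lobe_relative_uptake)

# Hypothetical example: right upper lobectomy, pre-operative FEV1 of 2.4 L,
# SPECT/CT relative uptake of the right upper lobe of 0.18.
print(round(ppo_segment_counting(2.4, "RUL"), 2))  # ~2.02 L
print(round(ppo_perfusion(2.4, 0.18), 2))          # ~1.97 L
```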





Evaluation of 11C-NR2B-SMe and its Enantiomers as PET Radioligands for Imaging the NR2B Subunit within the NMDA Receptor Complex in Rats

[S-Methyl-11C](±)-7-methoxy-3-(4-(4-(methylthio)phenyl)butyl)-2,3,4,5-tetrahydro-1H-benzo[d]azepin-1-ol (11C-NR2B-SMe) and its enantiomers were synthesized as candidates for imaging the NR2B subunit within the N-methyl-D-aspartate receptor with positron emission tomography (PET). Methods: Rat brains were scanned with PET for 90 min after intravenous injection of one of the candidate radioligands. To detect any NR2B-specific binding of radioligand in brain, various pre-blocking or displacing agents were evaluated for their impact on the PET brain imaging data. Radiometabolites from brain and other tissues were measured ex vivo and in vitro. Results: Each radioligand gave high early whole-brain uptake of radioactivity, followed by a brief fast decline and then a slow final decline. 11C-(S)-NR2B-SMe was studied extensively. Ex vivo measurements showed that radioactivity in rat brain at 30 min after radioligand injection was virtually all unchanged radioligand. Only less lipophilic radiometabolites appeared in plasma. High-affinity NR2B ligands, Ro-25-6981, ifenprodil, and CO10124, showed increasing pre-blocking of whole-brain radioactivity retention with increasing dose (0.01 to 1.25 mg/kg, i.v.). Five σ1 antagonists (FTC146, BD1407, F3, F4, and NE100) and four σ1 agonists ((+)-pentazocine, (±)-PPCC, PRE-084, (+)-SKF10047) were ineffective pre-blocking agents, except FTC146 and F4 at high dose. Two potent σ1 receptor agonists, TC1 and SA4503, showed dose-dependent pre-blocking effects in the presence or absence of pharmacological σ1 receptor blockade with FTC146. Conclusion: 11C-(S)-NR2B-SMe has an adequate NR2B-specific PET signal in rat brain to warrant further evaluation in higher species. TC1 and SA4503 likely have off-target binding to NR2B in vivo.





Data-driven motion detection and event-by-event correction for brain PET: Comparison with Vicra

Head motion degrades image quality and causes erroneous parameter estimates in tracer kinetic modeling in brain PET studies. Existing motion correction methods include frame-based image-registration (FIR) and correction using real-time hardware-based motion tracking (HMT) information. However, FIR cannot correct for motion within one predefined scan period while HMT is not readily available in the clinic since it typically requires attaching a tracking device to the patient. In this study, we propose a motion correction framework with a data-driven algorithm, i.e., using the PET raw data itself, to address these limitations. Methods: We propose a data-driven algorithm, Centroid of Distribution (COD), to detect head motion. In COD, the central coordinates of the line of response (LOR) of all events are averaged over 1-sec intervals to generate a COD trace. A point-to-point change in the COD trace in one direction that exceeded a user-defined threshold was defined as a time point of head motion, which was followed by manually adding additional motion time points. All the frames defined by such time points were reconstructed without attenuation correction and rigidly registered to a reference frame. The resulting transformation matrices were then used to perform the final motion compensated reconstruction. We applied the new COD framework to 23 human dynamic datasets, all containing large head motions, with 18F-FDG (N = 13) and 11C-UCB-J (N = 10), and compared its performance with FIR and with HMT using the Vicra, which can be considered as the "gold standard". Results: The COD method yielded 1.0±3.2% (mean ± standard deviation across all subjects and 12 grey matter regions) SUV difference for 18F-FDG (3.7±5.4% for 11C-UCB-J) compared to HMT while no motion correction (NMC) and FIR yielded -15.7±12.2% (-20.5±15.8%) and -4.7±6.9% (-6.2±11.0%), respectively. For 18F-FDG dynamic studies, COD yielded differences of 3.6±10.9% in Ki value as compared to HMT, while NMC and FIR yielded -18.0±39.2% and -2.6±19.8%, respectively. For 11C-UCB-J, COD yielded 3.7±5.2% differences in VT compared to HMT, while NMC and FIR yielded -20.0±12.5% and -5.3±9.4%, respectively. Conclusion: The proposed COD-based data-driven motion correction method outperformed FIR and achieved comparable or even better performance as compared to the Vicra HMT method in both static and dynamic studies.
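A minimal sketch follows of the COD detection step as described in the abstract (per-second averaging of LOR central coordinates, followed by a user-defined jump threshold). The data layout, threshold and synthetic example are assumptions for illustration; the study's actual implementation may differ.

```python
# Illustrative COD-based motion detection sketch; layout and threshold are assumed.
import numpy as np

def cod_trace(event_times_s, lor_midpoints_mm, interval_s=1.0):
    """Average the LOR central coordinates over fixed intervals.
    event_times_s: (N,) event times in seconds; lor_midpoints_mm: (N, 3)."""
    bins = np.floor(event_times_s / interval_s).astype(int)
    n_bins = bins.max() + 1
    counts = np.maximum(np.bincount(bins, minlength=n_bins), 1)
    trace = np.stack([np.bincount(bins, weights=lor_midpoints_mm[:, ax], minlength=n_bins)
                      / counts for ax in range(3)], axis=1)
    return trace                                  # (n_bins, 3) COD trace

def detect_motion_times(trace, threshold_mm=1.0, interval_s=1.0):
    """Flag interval boundaries where the COD jumps more than the threshold
    in any direction between consecutive intervals."""
    jumps = np.abs(np.diff(trace, axis=0)).max(axis=1)
    return (np.where(jumps > threshold_mm)[0] + 1) * interval_s

# Synthetic example: a 2 mm shift in x after 30 s of a 60 s acquisition.
rng = np.random.default_rng(0)
times = rng.uniform(0.0, 60.0, 200_000)
mids = rng.normal(0.0, 5.0, (times.size, 3))
mids[times > 30.0, 0] += 2.0
print(detect_motion_times(cod_trace(times, mids)))  # -> [30.]
```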





Performance of digital PET compared to high-resolution conventional PET in patients with cancer

Recently introduced PET systems using silicon photomultipliers with digital readout (dPET) offer improved timing and spatial resolution over conventional PET (cPET) systems, aiming at better image quality. We prospectively evaluated the performance of a dPET system in patients with cancer, as compared to high-resolution (HR) cPET imaging. Methods: After a single FDG injection, 66 patients underwent dPET (Vereos, Philips Healthcare) and cPET (Ingenuity TF, Philips Healthcare) imaging in a randomized order. We used HR reconstructions (2x2x2 mm3 voxels) for both scanners and determined SUVmax, SUVmean, lesion-to-background ratio (LBR), metabolic tumor volume (MTV) and lesion diameter in up to 5 FDG-positive lesions per patient. Furthermore, we counted the number of visible and measurable lesions on each PET scan. Two nuclear medicine specialists blindly determined the Tumor Node Metastasis (TNM) score from both image sets in 30 patients referred for initial staging. For all 66 patients, these specialists separately and blindly evaluated image quality (4-point scale) and indicated their scan preference. Results: We included 238 lesions that were visible and measurable on both PET scans. We found 37 additional lesions on dPET in 27 patients (41%), which were unmeasurable (n = 14) or invisible (n = 23) on cPET. SUVmean, SUVmax, LBR and MTV on cPET were 5.2±3.9 (mean±SD), 6.9±5.6, 5.0±3.6 and 2991±13251 mm3, respectively. On dPET, SUVmean, SUVmax and LBR increased by 24%, 23% and 27%, respectively (p<0.001), while MTV decreased by 13% (p<0.001) compared to cPET. Visual analysis showed TNM upstaging with dPET in 13% of the patients (4/30). dPET images also scored higher in image quality (P = 0.003) and were visually preferred in the majority of cases (65%). Conclusion: Digital PET improved the detection of small lesions, upstaged the disease, and its images were visually preferred as compared to high-resolution conventional PET. More studies are necessary to confirm the superior diagnostic performance of digital PET.





Early prostate-specific antigen changes and clinical outcome following 177Lu-PSMA radionuclide treatment in patients with metastatic castration-resistant prostate cancer

Background: Prostate-specific antigen (PSA) is widely used to monitor treatment response in patients with metastatic castration-resistant prostate cancer (mCRPC). However, PSA measurements are usually considered only after 12 wk of treatment. We aimed to evaluate the prognostic value of early PSA changes following 177Lu-labelled prostate-specific membrane antigen (LuPSMA) radionuclide treatment in mCRPC patients. Methods: Men who were treated under a compassionate access program with LuPSMA at our institution and had available PSA values at baseline and at 6 wk after treatment initiation were included in this retrospective analysis. Patients were assigned to three groups based on PSA changes: 1) response: ≥30% decline, 2) progression: ≥25% increase and 3) stable: <30% decline and <25% increase. The co-primary endpoints were overall survival and imaging-based progression-free survival. The secondary endpoints were PSA changes at 12 wk and PSA flare-up. Results: We identified 124 eligible patients with PSA values at 6 wk. A ≥30% decline in PSA at 6 wk was associated with longer overall survival (median 16.7 mo; 95%CI 14.4–19.0) compared with patients with stable PSA (median: 11.8 mo; 95%CI 8.6–15.1; P = 0.007) and progression (median: 6.5 mo; 95%CI 5.2–7.8; p<0.001). Patients with a ≥30% decline in PSA at 6 wk also had a reduced risk of imaging-based progression compared with patients with stable PSA (HR: 0.60; 95%CI 0.38–0.94; P = 0.02), while patients with PSA progression had a higher risk of imaging-based progression compared with those showing stable PSA (HR: 3.18; 95%CI 1.95–5.21; p<0.001). The percentage changes of PSA at 6 wk and 12 wk were highly correlated (r=0.90; p<0.001). 29 of 31 (94%) patients who experienced early PSA progression at 6 wk showed biochemical progression at 12 wk. Overall, only 1 of 36 (3%) patients with PSA progression at 6 wk achieved any PSA decline at 12 wk (1% of the entire cohort). Limitations of the study included its retrospective nature and the single-center setting. Conclusion: PSA changes at 6 wk after LuPSMA initiation are an early indicator of long-term clinical outcome. Patients progressing by PSA after 6 wk of treatment could benefit from a very early treatment-switch decision. PSA flare-up during LuPSMA treatment is very uncommon. Prospective studies are now warranted to validate our findings and potentially inform clinicians earlier on the effectiveness of LuPSMA.
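For clarity, the 6-week PSA response categories used above can be written as a simple rule; the sketch below is illustrative only, with the thresholds taken from the abstract and the function names and example values assumed.

```python
# Illustrative 6-week PSA response classification (thresholds from the abstract).

def psa_change_percent(psa_baseline, psa_week6):
    """Percentage change of PSA from baseline to week 6 (negative = decline)."""
    return 100.0 * (psa_week6 - psa_baseline) / psa_baseline

def classify_psa_response(psa_baseline, psa_week6):
    change = psa_change_percent(psa_baseline, psa_week6)
    if change <= -30.0:
        return "response"        # >= 30% decline
    if change >= 25.0:
        return "progression"     # >= 25% increase
    return "stable"              # < 30% decline and < 25% increase

print(classify_psa_response(120.0, 70.0))   # response (about -42%)
print(classify_psa_response(120.0, 155.0))  # progression (about +29%)
```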





A Prospective, Comparative Study of Planar and Single-photon Emission Computed Tomography Ventilation/Perfusion Imaging for Chronic Thromboembolic Pulmonary Hypertension

Objectives: The study compared the diagnostic performance of planar ventilation/perfusion (V/Q) imaging and V/Q single-photon emission computed tomography (SPECT), and determined whether combining perfusion scanning with low-dose computed tomography (Q-LDCT) may be equally effective, in a prospective study of patients with chronic thromboembolic pulmonary hypertension (CTEPH). Background: V/Q scanning is recommended for excluding CTEPH during the diagnosis of pulmonary hypertension (PH). However, planar V/Q and V/Q SPECT techniques have yet to be compared in patients with CTEPH. Methods: Patients with suspected PH were eligible for the study. PH attributable to left heart disease or lung disease was excluded, and patients whose PH was confirmed by right heart catheterization and who completed planar V/Q, V/Q SPECT, Q-LDCT, and pulmonary angiography were included. V/Q images were interpreted and patients were diagnosed according to the 2009 EANM guidelines, and pulmonary angiography analyses were used as the reference standard. Results: A total of 208 patients completed the study, including 69 with CTEPH confirmed by pulmonary angiography. Planar V/Q, V/Q SPECT, and Q-LDCT were all highly effective for diagnosing CTEPH, with no significant differences in sensitivity or specificity observed among the three techniques (planar V/Q [sensitivity/specificity]: 94.20%/92.81%; V/Q SPECT: 97.10%/91.37%; Q-LDCT: 95.65%/90.65%). However, V/Q SPECT was significantly more sensitive (V/Q SPECT: 79.21%; planar V/Q: 75.84%, P = 0.012; Q-LDCT: 74.91%, p<0.001), and planar V/Q was significantly more specific (planar V/Q: 54.14%; V/Q SPECT: 46.05%, p<0.001; Q-LDCT: 46.05%, P = 0.001) than the other two techniques for identifying perfusion defects in individual lung segments. Conclusion: Both planar V/Q and V/Q SPECT were highly effective for diagnosing CTEPH, and Q-LDCT may be a reliable alternative for patients who are unsuitable for ventilation imaging.





Interim PET evaluation in diffuse large B-cell lymphoma employing published recommendations: Comparison of the Deauville 5-point scale and the ΔSUVmax method

The value of interim 18F-fluorodeoxyglucose positron emission tomography (iPET)-guided treatment decisions in patients with diffuse large B-cell lymphoma (DLBCL) has been the subject of much debate. This investigation focuses on a comparison of the Deauville score and the ΔSUVmax approach – two methods to assess early metabolic response to standard chemotherapy in DLBCL. Methods: Of 609 DLBCL patients participating in the Positron Emission Tomography-guided Therapy of Aggressive non-Hodgkin Lymphomas (PETAL) trial, iPET scans of 596 patients originally evaluated using the ΔSUVmax method were available for post-hoc assessment of the Deauville score. A commonly used definition of an unfavorable iPET result according to the Deauville score is an uptake greater than that of the liver, whereas an unfavorable iPET scan with regard to the ΔSUVmax approach is characterized by a relative reduction of the maximum standardized uptake value between baseline and iPET staging of less than or equal to 66%. We investigated the two methods’ correlation by Spearman’s rank correlation coefficient and their concordance by the agreement in classification. We further used Kaplan-Meier curves and Cox regression to assess differences in survival between patient subgroups defined by the pre-specified cut-offs. Time-dependent receiver operating curve analysis provided information on the methods’ respective discrimination performance. Results: The Deauville score and the ΔSUVmax approach differed in their iPET-based prognosis. The ΔSUVmax approach outperformed the Deauville score in terms of discrimination performance – most likely due to a high number of false-positive decisions by the Deauville score. Cut-off-independent discrimination performance remained low for both methods, but cut-off-related analyses showed promising results. Both favored the ΔSUVmax approach, e.g. for the segregation by iPET response, where the event-free survival hazard ratio was 3.14 (95% confidence interval (CI): 2.22 – 4.46) for ΔSUVmax and 1.70 (95% CI: 1.29 – 2.24) for the Deauville score. Conclusion: When considering treatment intensification, the currently used Deauville score cut-off of an uptake above that of the liver seems to be inappropriate and associated with potential harm for DLBCL patients. The ΔSUVmax criterion of a relative reduction of the maximum standardized uptake value of less than or equal to 66% should be considered as an alternative.
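As a minimal illustration of the two positivity definitions compared in this study (uptake above the liver for the Deauville score; a relative SUVmax reduction of 66% or less for ΔSUVmax), a sketch follows. It is not the PETAL trial's analysis code, and the example values are assumptions.

```python
# Illustrative iPET positivity rules; cut-offs follow the abstract, values are assumed.

def unfavorable_by_deauville(lesion_suvmax, liver_suvmax):
    """Deauville-based rule: unfavorable if residual uptake exceeds the liver."""
    return lesion_suvmax > liver_suvmax

def unfavorable_by_delta_suvmax(baseline_suvmax, interim_suvmax, cutoff_percent=66.0):
    """ΔSUVmax rule: unfavorable if the relative SUVmax reduction is <= 66%."""
    reduction = 100.0 * (baseline_suvmax - interim_suvmax) / baseline_suvmax
    return reduction <= cutoff_percent

# Example: SUVmax falls from 18.0 to 7.0 (a 61% reduction) and the residual lesion
# is still hotter than a liver SUVmax of 2.5: positive by both definitions.
print(unfavorable_by_deauville(7.0, 2.5))      # True
print(unfavorable_by_delta_suvmax(18.0, 7.0))  # True (61% <= 66%)
```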





Rebel diplomacy and digital communication: public diplomacy in the Sahel

6 November 2019 , Volume 95, Number 6

Michèle Bos and Jan Melissen

Most research on social media as a tool for public diplomacy focuses on its use by recognized international actors to advance their national interest and reputation, deliver foreign policy objectives or promote their global interests. This article highlights the need for paying more attention to non-state diplomacy in conflict situations outside the western world. We examine how rebel groups use new media to enhance their communications, and what the motivations behind this are. Our public diplomacy perspective helps convey the scope of rebel communications with external actors and provides insights for policy-makers seeking to ascertain the nature, intentions and capacities of myriad rebel groups. Our focus is on the Sahel region, where numerous such groups vying for international attention and support make use of multiple social media channels. We analyse two groups in Mali: the MNLA, a Tuareg secessionist group; and Ansar Dine, a Salafist insurgency with ties to Al-Qaeda in the Islamic Maghreb. Our qualitative analysis of Ansar Dine and MNLA communications on several digital platforms helps identify these African rebel groups' international and local framing activities. Rebel groups use public diplomacy nimbly and pragmatically. The digital age has fundamentally changed which stakeholders such groups can reach, and we suggest that social media increase the power they are able to carve out for themselves on the international stage.





Can the New European Commission Deliver on Its Promises to Africa?

4 December 2019

Fergus Kell

Projects Assistant, Africa Programme

Damir Kurtagic

Former Academy Robert Bosch Fellow, Africa Programme
Familiar promises of equal partnership must be backed by bolder action, including an expanded budget, internal reform and a rethink of the EU's approach to trade negotiations.


Jutta Urpilainen, new EU commissioner for international partnerships, at the European Parliament in Brussels in October. Photo: Getty Images.

The new European Commission, headed by Ursula von der Leyen, assumed office on 1 December, and there are early signs that Africa will sit near the top of its foreign policy priorities. Policy towards Africa under the new EU administration is yet to be fully defined, but its contours are already visible in the selection of commissioners and the assignment of portfolios.

Although rumours of a dedicated commissioner for Africa were unfounded, the appointment of Jutta Urpilainen to the new role of commissioner for international partnerships – replacing the former post of development commissioner – is a strong signal of ongoing change in EU development thinking, away from bilateral aid towards trade and investment, including by the private sector. 

This may have significant consequences for the EU’s relationship with Africa. In her mission letter to Urpilainen in September, von der Leyen listed the first objective as a new ‘comprehensive strategy for Africa’. Urpilainen, Finland’s finance minister before being posted to Ethiopia as special representative on mediation, has also described her appointment as an opportunity to move on from traditional measures of aid delivery. 

Ambition or incoherence? 

However, this ambition may be at odds with other EU priorities and practices: notably the management of migration, and institutions and instruments for governing EU–Africa relations that remain rooted in a ‘traditional’ model of North–South development cooperation rather than equitable partnership.

Another newly created post will see Margaritis Schinas assume the role of vice-president for promoting the European way of life – formerly ‘protecting our European way of life’ before a backlash saw it changed – a reminder that migration will remain high on the EU’s foreign policy agenda. The new high representative for foreign and security policy and chief EU diplomat, Josep Borrell, has highlighted the need for bilateral partnership with countries of origin and transit, mainly in Africa. 

Negotiations also continue to stall on a replacement to the Cotonou Agreement, the 20-year partnership framework between the EU and the African, Caribbean and Pacific (ACP) group of states, which now looks certain to be extended for at least 12 months beyond its expiry in February 2020.

Ambiguities in the EU’s negotiating approach have certainly contributed to the delay: the EU initially pushed for a separate regional pillar for Africa that would be open to the North African countries (which are not ACP members) and include a loosely defined role for the African Union, but later abandoned this in favour of a dual-track process on separate new agreements with the AU and the ACP respectively.

The EU also continues to pursue controversial economic partnership agreements (EPAs) under the aegis of Cotonou, despite their increasingly apparent incompatibility with the pathbreaking African Continental Free Trade Area (AfCFTA) – one of the clearest expressions to date of African agency.

The EU has so far attempted to gloss over this incoherence, claiming that EPAs can somehow act as the ‘building blocks’ for Africa-wide economic integration. But tensions are appearing between EU departments and within the commission, with the European External Action Service inclined to prioritize a more strategic continental relationship with the AU, while the Directorate-General for International Cooperation and Development remains committed to the ACP as the conduit for financial support and aid delivery.

And it is unlikely to get away with such incoherence for much longer. Change is now urgent, as numerous countries in sub-Saharan Africa continue to attract the strategic and commercial interests of the EU’s competitors: from established players such as China and potentially in future the UK, which is intent on remodelling its Africa ties post-Brexit, to emerging actors such as Turkey or Russia, which held its first Africa summit in October. 

The need for delivery

If the EU is serious about its rhetoric on equal partnership, it must therefore move beyond convoluted hybrid proposals. Delivering on the Juncker administration’s proposal to increase funding for external action by 30 per cent for 2021–27 would mark an important first step, particularly as this involves streamlining that would see the European Development Fund – the financial instrument for EU-ACP relations – incorporated into the main EU budget.

The new commission should therefore continue to exert pressure on the European Council and European Parliament to adopt this proposal, as negotiations on this financial framework have been repeatedly subject to delay and may not be resolved before the end of the year. 

Beyond this, proactive support for the AfCFTA and for structural transformation more broadly must be prioritized ahead of vague promises for a continent-to-continent free trade agreement, as held out by Juncker in his final State of the Union address in 2018. 

The significance of internal EU reforms for Africa should also not be discounted. The EU’s Common Agricultural Policy, for instance, has placed the African agricultural sector at a particular disadvantage, making it harder for African producers to compete even in domestic markets, let alone in distant EU export markets. EU efforts to stimulate inflows of private investment into the African agricultural sector, abolish import tariffs and offer technical support for African producers to satisfy EU health and safety regulations will be of little use if they are undermined by heavy subsidies across Europe.

Ultimately, changes to job titles alone will be insufficient. The new commission’s rhetoric, while ambitious, differs little from that of the previous decade – Africa has heard the promise of a ‘partnership of equals’ and of ‘shared ownership’ since before the advent of the Joint Africa–EU Strategy in 2007. Now is the time for truly bold steps to implement this vision.





Combining Precursor and Fragment Information for Improved Detection of Differential Abundance in Data Independent Acquisition [Technological Innovation and Resources]

In bottom-up, label-free discovery proteomics, biological samples are acquired in a data-dependent (DDA) or data-independent (DIA) manner, with peptide signals recorded in an intact (MS1) and fragmented (MS2) form. While DDA has only the MS1 space for quantification, DIA contains both MS1 and MS2 at high quantitative quality. DIA profiles of complex biological matrices such as tissues or cells can contain quantitative interferences, and the interferences at the MS1 and the MS2 signals are often independent. When comparing biological conditions, the interferences can compromise the detection of differential peptide or protein abundance and lead to false positive or false negative conclusions.

We hypothesized that the combined use of MS1 and MS2 quantitative signals could improve our ability to detect differentially abundant proteins. Therefore, we developed a statistical procedure incorporating both MS1 and MS2 quantitative information of DIA. We benchmarked the performance of the MS1-MS2-combined method to the individual use of MS1 or MS2 in DIA using four previously published controlled mixtures, as well as in two previously unpublished controlled mixtures. In the majority of the comparisons, the combined method outperformed the individual use of MS1 or MS2. This was particularly true for comparisons with low fold changes, few replicates, and situations where MS1 and MS2 were of similar quality. When applied to a previously unpublished investigation of lung cancer, the MS1-MS2-combined method increased the coverage of known activated pathways.
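To make the idea of combining the two signal levels concrete, here is a deliberately simplified sketch: log2 intensities of MS1 and MS2 features belonging to one protein are pooled into a single linear model with condition and feature terms, and the condition term is tested. This is only a toy illustration of the general principle, not the authors' statistical procedure, and all names and the synthetic data are assumptions.

```python
# Toy illustration of pooling MS1 and MS2 features of one protein into a single
# linear model (intensity ~ intercept + condition + feature) and testing the
# condition effect. Not the authors' method; purely a sketch of the principle.
import numpy as np
from scipy import stats

def combined_ms1_ms2_test(log2_intensity, condition, feature_id):
    """Return (estimated log2 fold change, p-value) for the condition effect."""
    y = np.asarray(log2_intensity, dtype=float)
    condition = np.asarray(condition, dtype=float)
    feature_id = np.asarray(feature_id)
    features = np.unique(feature_id)
    # Design matrix: intercept, condition, and dummies for all but one feature.
    dummies = [(feature_id == f).astype(float) for f in features[1:]]
    X = np.column_stack([np.ones_like(condition), condition] + dummies)

    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dof = len(y) - X.shape[1]
    sigma2 = resid @ resid / dof
    cov = sigma2 * np.linalg.pinv(X.T @ X)
    t_stat = beta[1] / np.sqrt(cov[1, 1])
    p_value = 2.0 * stats.t.sf(abs(t_stat), dof)
    return beta[1], p_value

# Synthetic check: 2 MS1 + 3 MS2 features, 3 runs per condition, true log2FC = 1.
rng = np.random.default_rng(1)
feature = np.repeat(np.arange(5), 6)            # 5 features x 6 runs
cond = np.tile([0, 0, 0, 1, 1, 1], 5)
baseline = np.repeat(rng.uniform(18, 24, 5), 6) # feature-specific abundance levels
y = baseline + 1.0 * cond + rng.normal(0, 0.2, feature.size)
print(combined_ms1_ms2_test(y, cond, feature))  # ~ (1.0, very small p-value)
```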

Since recent technological developments continue to increase the quality of MS1 signals (e.g. using the BoxCar scan mode for Orbitrap instruments), the combination of the MS1 and MS2 information has a high potential for future statistical analysis of DIA data.




com

Multi-omic Characterization of the Mode of Action of a Potent New Antimalarial Compound, JPC-3210, Against Plasmodium falciparum [Research]

The increasing incidence of antimalarial drug resistance to the first-line artemisinin combination therapies underpins an urgent need for new antimalarial drugs, ideally with a novel mode of action. The recently developed 2-aminomethylphenol JPC-3210 (MMV 892646) is an erythrocytic schizonticide with potent in vitro antimalarial activity against multidrug-resistant Plasmodium falciparum lines, low cytotoxicity, potent in vivo efficacy against murine malaria, and favorable preclinical pharmacokinetics including a long plasma elimination half-life. To investigate the impact of JPC-3210 on biochemical pathways within P. falciparum-infected red blood cells, we have applied a "multi-omics" workflow based on high-resolution Orbitrap mass spectrometry combined with biochemical approaches. Metabolomics, peptidomics and hemoglobin fractionation analyses revealed a perturbation in hemoglobin metabolism following JPC-3210 exposure. The metabolomics data demonstrated a specific depletion of short hemoglobin-derived peptides, peptidomics analysis revealed a depletion of longer hemoglobin-derived peptides, and the hemoglobin fractionation assay demonstrated decreases in hemoglobin, heme and hemozoin levels. To further elucidate the mechanism responsible for inhibition of hemoglobin metabolism, we used in vitro β-hematin polymerization assays and showed JPC-3210 to be an intermediate inhibitor of β-hematin polymerization, about 10-fold less potent than the quinoline antimalarials, such as chloroquine and mefloquine. Further, quantitative proteomics analysis showed that JPC-3210 treatment results in a distinct proteomic signature compared with other known antimalarials. While JPC-3210 clustered closely with mefloquine in the metabolomics and proteomics analyses, a key differentiating signature for JPC-3210 was the significant enrichment of parasite proteins involved in regulation of translation. These studies revealed that the mode of action for JPC-3210 involves inhibition of the hemoglobin digestion pathway and elevation of regulators of protein translation. Importantly, JPC-3210 demonstrated rapid parasite killing kinetics compared with the quinoline antimalarials, suggesting that JPC-3210 warrants further investigation as a potentially long-acting partner drug for malaria treatment.




com

Combined EGFR and ROCK Inhibition in Triple-negative Breast Cancer Leads to Cell Death Via Impaired Autophagic Flux [Research]

Triple-negative breast cancer (TNBC) is an aggressive subtype of breast cancer with very limited therapeutic options. We have recently shown that the combined inhibition of EGFR and ROCK in TNBC cells results in cell death; however, the underlying mechanisms remain unclear. To investigate this, here we applied a mass spectrometry-based proteomic approach to identify proteins altered upon single and combination treatments. Our proteomic data revealed autophagy as the major molecular mechanism implicated in the cells' response to combinatorial treatment. We show that EGFR inhibition by gefitinib treatment alone induces autophagy, a cellular recycling process that acts as a cytoprotective response for TNBC cells. However, combined inhibition of EGFR and ROCK leads to autophagy blockade and accumulation of autophagic vacuoles. Our data indicate impaired autophagosome clearance as a likely cause of the antitumor activity. We propose that the inhibition of autophagic flux upon combinatorial treatment is attributable to the major cytoskeletal changes induced by ROCK inhibition, given the essential role the cytoskeleton plays throughout the various steps of the autophagy process.




com

Interaction Proteomics Identifies ERbeta Association with Chromatin Repressive Complexes to Inhibit Cholesterol Biosynthesis and Exert An Oncosuppressive Role in Triple-negative Breast Cancer [Research]

Triple-negative breast cancer (TNBC) is characterized by poor response to therapy and low overall patient survival. Recently, Estrogen Receptor beta (ERβ) has been found to be expressed in a fraction of TNBCs where, because of its oncosuppressive actions on the genome, it represents a potential therapeutic target, provided a better understanding of its actions in these tumors becomes available. To this end, the cell lines Hs 578T, MDA-MB-468 and HCC1806, representing the claudin-low, basal-like 1 and basal-like 2 TNBC molecular subtypes respectively, were engineered to express ERβ under the control of a Tetracycline-inducible promoter and used to investigate the effects of this transcription factor on gene activity. The antiproliferative effects of ERβ in these cells were confirmed by multiple functional approaches, including transcriptome profiling and global mapping of receptor binding sites in the genome, which revealed direct negative regulation by ERβ of genes encoding key components of cellular pathways associated with TNBC aggressiveness, such as angiogenesis, invasion, metastasis and cholesterol biosynthesis, that represent novel therapeutic targets. Supporting these results, interaction proteomics by immunoprecipitation coupled to nano LC-MS/MS revealed ERβ association with several potential nuclear protein partners, including key components of regulatory complexes known to control chromatin remodeling, transcriptional and post-transcriptional gene regulation and RNA splicing. Among these, ERβ association with the Polycomb Repressor Complexes 1 and 2 (PRC1/2), known for their central role in gene regulation in cancer cells, was confirmed in all three TNBC subtypes investigated, suggesting that this interaction occurs independently of the cellular context. These results demonstrate a significant impact of ERβ on TNBC genome activity, mediated by its cooperation with regulatory multiprotein chromatin remodeling complexes, and provide new ground for devising treatment strategies for these diseases based on ligands affecting the activity of this nuclear receptor or some of its protein partners.




com

Guidance Document: Validation of a High-Performance Liquid Chromatography-Tandem Mass Spectrometry Immunopeptidomics Assay for the Identification of HLA Class I Ligands Suitable for Pharmaceutical Therapies [Commentary]

For more than two decades, naturally presented, human leukocyte antigen (HLA)-restricted peptides (the immunopeptidome) have been eluted and sequenced using liquid chromatography-tandem mass spectrometry (LC-MS/MS). Since then, identified disease-associated HLA ligands have been characterized and evaluated as potential active substances. Treatments based on HLA-presented peptides have shown promising results in clinical application as personalized T cell-based immunotherapy. Peptide vaccination cocktails are produced as investigational medicinal products under GMP conditions. To support clinical trials based on HLA-presented tumor-associated antigens, in this study the sensitive LC-MS/MS HLA class I antigen identification pipeline was fully validated for our technical equipment according to the current US Food and Drug Administration (FDA) and European Medicines Agency (EMA) guidelines.

The immunopeptidomes of JY cells with or without spiked-in, isotope-labeled peptides, of peripheral blood mononuclear cells of healthy volunteers, and of a chronic lymphocytic leukemia and a bladder cancer sample were reliably identified using a data-dependent acquisition method. As the LC-MS/MS pipeline is used for identification purposes, the validation parameters include accuracy, precision, specificity, limit of detection and robustness.




com

A Compact Quadrupole-Orbitrap Mass Spectrometer with FAIMS Interface Improves Proteome Coverage in Short LC Gradients [Technological Innovation and Resources]

State-of-the-art proteomics-grade mass spectrometers can measure peptide precursors and their fragments with ppm mass accuracy at sequencing speeds of tens of peptides per second with attomolar sensitivity. Here we describe a compact and robust quadrupole-Orbitrap mass spectrometer equipped with a front-end high-field asymmetric waveform ion mobility spectrometry (FAIMS) interface. The performance of the Orbitrap Exploris 480 mass spectrometer is evaluated in data-dependent acquisition (DDA) and data-independent acquisition (DIA) modes in combination with FAIMS. We demonstrate that different compensation voltages (CVs) for FAIMS are optimal for DDA and DIA, respectively. Combining DIA with FAIMS using single CVs, the instrument surpasses 2500 peptides identified per minute. This enables quantification of >5000 proteins with short online LC gradients delivered by the Evosep One LC system, allowing acquisition of 60 samples per day. The raw sensitivity of the instrument was evaluated by analyzing 5 ng of a HeLa digest, from which >1000 proteins were reproducibly identified with 5 min LC gradients using DIA-FAIMS. To demonstrate the versatility of the instrument, we recorded an organ-wide map of proteome expression across 12 rat tissues, quantified by tandem mass tags and label-free quantification using DIA with FAIMS, to a depth of >10,000 proteins.
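
As a quick sanity check on these figures, the quoted throughput translates into roughly the following per-sample numbers. This is a back-of-envelope sketch: the 20-minute active acquisition window is an assumption made here, not a figure from the abstract.

# Rough throughput arithmetic implied by the figures quoted above (illustrative only).
samples_per_day = 60
cycle_min = 24 * 60 / samples_per_day        # -> 24 min total cycle time per sample
peptides_per_min = 2500                      # ">2500 peptides identified per minute"
active_gradient_min = 20                     # hypothetical acquisition window within the cycle
max_peptide_ids = peptides_per_min * active_gradient_min   # ~50,000 peptide IDs per run, as an upper bound
print(cycle_min, max_peptide_ids)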




com

Compliance Checklists No Longer Required at Initial Manuscript Submission [Editorials]




com

Targeting the polyamine pathway—"a means" to overcome chemoresistance in triple-negative breast cancer [Cell Biology]

Triple-negative breast cancer (TNBC) is characterized by its aggressive biology, early metastatic spread, and poor survival outcomes. TNBC lacks expression of the targetable receptors found in other breast cancer subtypes, mandating use of cytotoxic chemotherapy. However, resistance to chemotherapy is a significant problem, encountered in about two-thirds of TNBC patients, and new strategies are needed to mitigate resistance. In this issue of the Journal of Biological Chemistry, Geck et al. report that TNBC cells are highly sensitive to inhibition of the de novo polyamine synthesis pathway and that inhibition of this pathway sensitizes cells to TNBC-relevant chemotherapy, uncovering new opportunities for addressing chemoresistance.




com

iiNet CEO David Buckingham leaves company

CEO of Perth-based internet service provider iiNet, David Buckingham, has left the company, according to multiple sources.




com

Human Services' computers keep disabled out of work

Disabled workers are caught in bureaucratic limbo by problematic computer systems.




com

Malcolm Turnbull visits Sunshine Coast to view proposal for new undersea communications cable

A plan to make the Sunshine Coast a vital internet gateway is luring Communications Minister Malcolm Turnbull to the area on Friday to view the proposal in person.




com

Australian companies targeted by identity thieves for tax frauds

Australian companies are having their identities hijacked by international criminals who use them to try to defraud the Australian Taxation Office.




com

Bureau of Meteorology computers breached, ABC reports

Australia's Bureau of Meteorology has reportedly had its computer systems breached.




com

Digital government could become just more cost cutting, warns Internet Australia

Revolving door at digital agency must stop, says Labor.




com

ACT scientist teaches computers to police the border

A Canberra-based scientist is teaching computers to pick up suspicious activity at the border.




com

ACT human rights commission 'concerned' about new app for ACT police

Canberrans' privacy rights could be threatened by the new app.




com

Privacy Commissioner’s small budget to make policing new data breach laws difficult, experts say

New laws that mandate companies notify individuals about data breaches add to the Privacy Commissioner's already-stacked caseload, but do not come with new funding.




com

Economic containment as a strategy of Great Power competition

6 November 2019 , Volume 95, Number 6

Dong Jung Kim

Economic containment has garnered repeated attention in the discourse about the United States' response to China. Yet the attributes of economic containment as a distinct strategy of Great Power competition remain unclear. Moreover, the conditions under which a leading power can employ economic containment against a challenging power remain theoretically unelaborated. This article first suggests that economic containment refers to the use of economic policies to weaken the targeted state's material capacity to start military aggression, rather than to influence the competitor's behaviour over a specific issue. It then argues that economic containment becomes a viable option when the leading power has the ability to inflict more losses on the challenging power through economic restrictions, an ability largely determined by the availability of alternative economic partners. When the leading power cannot effectively inflict more losses on the challenging power due to the presence of alternative economic partners, it is better off avoiding economic containment. The author substantiates these arguments through case studies of the United States' responses to the Soviet Union during the Cold War. The article concludes by examining the nature of the United States' recent economic restrictions against China.




com

Coronavirus: All Citizens Need an Income Support

16 March 2020

Jim O'Neill

Chair, Chatham House
We cannot expect policies such as the dramatic monetary steps announced by the Federal Reserve Board and others like it to end this crisis. A People's Quantitative Easing (QE) could be the answer.

2020-03-16-coronavirus-delivery.jpg

Delivery bike rider wearing a face mask as a precaution against coronavirus at Madrid Rio park. Photo by Pablo Cuadra/Getty Images.

Linked to the call for a global response to the COVID-19 pandemic that I, Robin Niblett and Creon Butler have outlined, the case for a specific, dramatic economic policy gesture from many policymakers in large economies is pressing.

It may not be warranted from all G20 nations, although given the uncertainties, and the desire to show collective initiative, I think it should be G20 driven and inclusive.

We need some sort of income support for all our citizens, whether employees or employers. Perhaps one might call it a truly People’s QE (quantitative easing).

Against the background of the 2008 economic crisis, and the apparent difficulties that more traditional forms of economic stimulus have faced in trying to help their economies and their people - especially given low wage growth and rising inequality, both actual and perceived - other ideas have emerged.

Central banks printing money

Both modern monetary theory (MMT) and universal basic income (UBI) essentially owe their roots to the judgement that conventional economic policies have not been helping.

At the core of these views is the notion of giving money to people, especially lower income people, directly paid for by our central banks printing money. Until recently, I found myself having very little sympathy with these views but, as a result of COVID-19, I have changed my mind.

This crisis is extraordinary in so far as it is both a colossal demand shock and an even bigger supply shock. The crisis epicentre has shifted from China - and perhaps the rest of Asia - to Europe and the United States. We cannot expect policies, however unconventional by modern standards, such as the dramatic monetary steps announced by the Federal Reserve Board and others like it, to put a floor under this crisis.

We are consciously asking our people to stop going out, stop travelling and not go to their offices - in essence, curtailing all forms of normal economic life. The only ones not impacted are those who work entirely through cyberspace. But even they have to buy some forms of consumer goods such as food and, even if they order online, someone has to deliver it.

As a result, markets are, correctly, worrying about a collapse of economic activity and, with it, a collapse of companies, not just their earnings. Expansion of central bank balance sheets is not going to do anything to help that, unless it is just banks we are again worried about saving.

What is needed in the current circumstances are steps to make each of us believe with high confidence that, if we take the advice from our medical experts - especially if we self-isolate and deliberately restrict our personal incomes - then we will have this made good by our governments. In essence, we need a smart, persuasive People’s QE.

Having discussed the idea with a couple of economic experts, I recognise there are considerable difficulties in moving beyond the simple concept. In the US, for example, I believe the Federal Reserve is legally constrained from pursuing a direct transfer of cash to individuals or companies, and this may be true elsewhere.

But this is easily surmounted by fiscal authorities issuing a special bond, the proceeds of which could be transferred to individuals and business owners. And central banks could easily finance such bonds.
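
The mechanics of this flow of funds can be made concrete with a toy double-entry sketch. This is purely illustrative and not part of the author's proposal: the amount, the account names and the simplifications (commercial banks and interest payments are omitted) are assumptions made here for clarity.

# Toy double-entry sketch of the mechanism described above (illustrative only):
# the treasury issues a special bond, the central bank finances it by creating
# reserves, and the proceeds are paid out to households and business owners.
from dataclasses import dataclass, field

@dataclass
class BalanceSheet:
    assets: dict = field(default_factory=dict)
    liabilities: dict = field(default_factory=dict)

treasury, central_bank, households = BalanceSheet(), BalanceSheet(), BalanceSheet()
ISSUE = 100_000_000_000  # hypothetical size of the programme

# 1. Treasury issues the special bond; the central bank buys it with newly created money.
treasury.liabilities["special bond"] = ISSUE
treasury.assets["deposits at central bank"] = ISSUE
central_bank.assets["special bond"] = ISSUE
central_bank.liabilities["reserves and government deposits"] = ISSUE

# 2. The proceeds are transferred to individuals and business owners as income support.
treasury.assets["deposits at central bank"] -= ISSUE
households.assets["bank deposits"] = ISSUE

print(central_bank)   # balance sheet expands by the size of the programme
print(households)     # income support arrives without any new household debt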

It is also the case that such a step would encroach on both the perception and the actuality of central bank independence, but I would be among those who argue that central banks can only retain this independence if it is exercised wisely. Others will argue that, in the spirit of the equality debate, any income support should be targeted at those on very low incomes, while higher earners and large businesses should receive little or nothing.

I can sympathise with that spirit, but it ignores the centrality of this particular economic shock. All of our cafes and restaurants, many of our airlines and other such businesses are at genuine risk of not being able to survive, and these organisations are considerable employers of people on low incomes.

It is also the case that time is of the essence, and we need our policymakers to act as soon as possible; otherwise the transmission mechanisms - including those that sustain the permanent operation of our post-World War 2 way of life - may be challenged.

We need some kind of smart People’s QE now.




com

Dynamics of sphingolipids and the serine palmitoyltransferase complex in rat oligodendrocytes during myelination

Deanna L. Davis
Apr 1, 2020; 61:505-522
Research Articles




com

A simple method for sphingolipid analysis of tissues embedded in optimal cutting temperature compound

Timothy D Rohrbach
Apr 27, 2020; 0:jlr.D120000809v1-jlr.D120000809
Methods




com

Circulating oxidized LDL increased in patients with acute myocardial infarction is accompanied by heavily modified HDL.

Naoko Sawada
Apr 14, 2020; 0:jlr.RA119000312v1-jlr.RA119000312
Research Articles




com

Metabolic phospholipid labeling of intact bacteria enables a fluorescence assay that detects compromised outer membranes

Inga Nilsson
Mar 10, 2020; 0:jlr.RA120000654v1-jlr.RA120000654
Research Articles




com

Commentary on SSO and other putative inhibitors of FA transport across membranes by CD36 disrupt intracellular metabolism, but do not affect fatty acid translocation

Henry J. Pownall
May 1, 2020; 61:595-597
Commentary




com

Comparative profiling and comprehensive quantification of stratum corneum ceramides in humans and mice by LC-MS/MS

Momoko Kawana
Apr 7, 2020; 0:jlr.RA120000671v1-jlr.RA120000671
Research Articles




com

Episode 28 - The Internet of Gamescom (IoG) Blackberry security and plane hacks

David Price is in the host chair this week and is joined by Lewis Painter, staff writer at PC Advisor and Macworld UK to discuss all the news coming out of Gamescom, including No Man's Sky, Metal Gear, Final Fantasy and Battlefield. Henry Burrell, staff writer at PC Advisor and Macworld UK jumps in to chat Blackberry and its trumped up security claims (15:00). Finally, Charlotte Jee, editor at Techworld.com talks about hacking planes, trains and automobiles (26:30).  






com

Episode 81 - The Internet of Cashierless Shopping (IoCS) Open banking, Qualcomm fines and Amazon Go

This week host Charlotte Jee breaks down open banking with Computerworld UK editor Scott Carey: what is it and why should we care?


Then audience development editor Christina Mercer explains why chip-maker Qualcomm has been fined nearly €1 billion, and discusses the EU's sustained attack on big tech (12:00).


Last up is senior staff writer at Tech Advisor Dom Preston to talk about Amazon's revolutionary concept Go store opening in Seattle and if this is really the future of shopping (20:00).

 





com

Episode 83 - The Internet of White Rings (IoWR) HomePod, Kingdom Come: Deliverance and no spoiler Black Panther chat

Scott Carey assembles half the Tech Advisor squad to chat about the HomePod's great audio and then all the things that make it a tabloid headline. Jim Martin lets us know if Apple ruined his oak and/or pine.


Lewis Painter chats us through Kingdom Come: Deliverance and all the wacky things you can do in its slow paced but huge world. Dom Preston then lets us know - without spoilers - just how good Black Panther is, Marvel's latest marvel (hopefully).

 





com

Episode 97 - The Internet of Big Companies (IoBC) Apple results, Amazon worker rights and Google Cloud Next

This week our host Scott Carey is joined by Macworld UK editor Karen Khan to chat about Apple's latest blockbuster results.


Then group production editor Tamlin Magee jumps in to discuss Amazon's working practices following the collective action around Prime Day.


Finally, Scott chats through his experience at the Google Cloud Next conference in San Francisco last week to see how it is trying to compete with the big boys at Amazon and Microsoft.

 





com

Metabolic phospholipid labeling of intact bacteria enables a fluorescence assay that detects compromised outer membranes [Research Articles]

Gram-negative bacteria possess an asymmetric outer membrane (OM) composed primarily of lipopolysaccharides (LPS) on the outer leaflet and phospholipids (PLs) on the inner leaflet. Loss of this asymmetry due to mutations in the LPS biosynthesis or transport pathways causes externalization of PLs to the outer leaflet of the OM and leads to OM permeability defects. Here, we employed metabolic labeling to detect a compromised OM in intact bacteria. Phosphatidylcholine synthase (Pcs) expression in Escherichia coli allowed for incorporation of exogenous propargylcholine (PCho) into phosphatidyl(propargyl)choline (PPC) and for incorporation of exogenous 1-azidoethyl-choline (AECho) into phosphatidyl(azidoethyl)choline (AEPC), as confirmed by LC-MS analyses. A fluorescent copper-free click reagent poorly labeled AEPC in intact wild-type cells, but readily labeled AEPC from lysed cells. Fluorescence microscopy and flow cytometry analyses confirmed the absence of significant AEPC labeling from intact wild-type E. coli strains, and revealed significant AEPC labeling in an E. coli LPS transport mutant (lptD4213) and an LPS biosynthesis mutant (E. coli lpxC101). Our results suggest that metabolic PL labeling with AECho is a promising tool to detect a compromised bacterial OM, reveal aberrant PL externalization, and identify or characterize novel cell-active inhibitors of LPS biosynthesis or transport.




com

Comparative profiling and comprehensive quantification of stratum corneum ceramides in humans and mice by LC-MS/MS [Research Articles]

Ceramides are the predominant lipids in the stratum corneum (SC) and are crucial components for normal skin barrier function. Although the composition of various ceramide classes in the human SC has been reported, that in mice is still unknown, despite mice being widely used as animal models of skin barrier function. Here, we performed LC–MS/MS analyses using recently available ceramide class standards to measure 25 classes of free ceramides and 5 classes of protein-bound ceramides from the human and mouse SC. Phytosphingosine-type ceramides (P-ceramides) and 6-hydroxy sphingosine-type ceramides (H-ceramides), which both contain an additional hydroxyl group, were abundant in human SC (35% and 45% of total ceramides, respectively). In contrast, in mice, P-ceramides and H-ceramides were present at ~1% and undetectable levels, respectively, and sphingosine-type ceramides accounted for ~90%. In humans, ceramides containing α-hydroxy FA were abundant, whereas ceramides containing β-hydroxy FA (B-ceramides) or ω-hydroxy FA were abundant in mice. The hydroxylated β-carbon in B-ceramides was in the (R)-configuration. Genetic knockout of β-hydroxy acyl-CoA dehydratases in HAP1 cells increased B-ceramide levels, suggesting that β-hydroxy acyl-CoA, an FA-elongation cycle intermediate in the endoplasmic reticulum, is a substrate for B-ceramide synthesis. We anticipate that our methods and findings will help to elucidate the role of each ceramide class in skin barrier formation and in the pathogenesis of skin disorders.




com

Circulating oxidized LDL increased in patients with acute myocardial infarction is accompanied by heavily modified HDL. [Research Articles]

Oxidized low-density lipoprotein (oxLDL) is a known risk factor for atherogenesis. This study aimed to reveal structural features of the oxLDL present in human circulation related to atherosclerosis. When LDL was fractionated on an anion-exchange column, in vivo-oxLDL, detected by the anti-oxidized phosphatidylcholine (oxPC) monoclonal antibody, was recovered in the flow-through and electronegative LDL (LDL(-)) fractions. The amount of the electronegative in vivo-oxLDL, namely oxLDL in the LDL(-) fraction, present in patients with acute myocardial infarction (AMI) was three-fold higher than that observed in healthy subjects. Surprisingly, the LDL(-) fraction contained apoA1 in addition to apoB, and HDL-sized particles were observed with transmission electron microscopy. In the LDL(-) fractions, acrolein adducts were identified at all lysine residues in apoA1, with only a small number of acrolein-modified residues identified in apoB. The amount of oxPC adducts of apoB was higher in the LDL(-) fraction than in the L1 fraction, as determined by western blotting. The electronegative in vivo-oxLDL was immunologically purified from the LDL(-) fraction with an anti-oxPC monoclonal antibody. The majority of its PC species were not oxidized, and neither oxPC nor lysoPC accumulated. Here, we propose that there are two types of in vivo-oxLDL in human circulating plasma and that the electronegative in vivo-oxLDL is accompanied by oxidized HDL.