
Everything you need to know about the COVID-19 therapy trials

Researchers around the world are working at record speed to find the best ways to treat and prevent COVID-19, from investigating the possibility of repurposing existing drugs to searching for novel therapies against the virus.





Health boards say around half of pharmacies have expressed interest in providing COVID-19 vaccines

Around half of Wales’ community pharmacies have expressed interest to health boards in providing COVID-19 vaccinations as part of the national programme.





NHS England lowers threshold for COVID-19 vaccination site applications

Community pharmacies able to administer up to 400 COVID-19 vaccines per week can now apply to become designated vaccination sites, NHS England has said.





New drug cuts the risk of death in bladder cancer by 30% compared with chemotherapy, study suggests

A new type of drug that targets chemotherapy directly to cancer cells reduces the risk of death from the most common type of bladder cancer by 30%, a phase III trial in the New England Journal of Medicine has suggested.





Pharmacy negotiators discuss patient registration with community pharmacies

Pharmacy negotiators have discussed proposals to take “a patient registration-based approach” to the community pharmacy contractual framework.





Deconstructing the Diligence Process: An Approach to Vetting New Product Theses

By Aimee Raleigh, Principal at Atlas Venture, as part of the From The Trenches feature of LifeSciVC. Ever wondered what goes into diligencing a new idea, program, company, or platform? While each diligence is unique and every investor will have…






Pharmacology: The Anchor for Nearly Every Diligence

By Haojing Rong and Aimee Raleigh, as part of the From The Trenches feature of LifeSciVC. This blog post is the second in a series on key diligence concepts and questions. If you missed the intro blog post yesterday, click…






The Biotech Startup Contraction Continues… And That’s A Good Thing

Venture creation in biotech is witnessing a sustained contraction. After the pandemic bubble’s over-indulgence, the venture ecosystem appears to have reset its pace of launching new startups. According to the latest Pitchbook data, venture creation in biotech hit its slowest…






Mariana Oncology’s Radiopharm Platform Acquired By Novartis

Novartis recently announced the acquisition of Mariana Oncology, an emerging biotech focused on advancing a radioligand therapeutics platform, for up to $1.75 billion in upfronts and future milestones. The capstone of its three short years of operations, this acquisition represents…






Has Spring Sprouted New Growth in Immuno-Oncology?

By Jonathan Montagu, CEO of HotSpot Therapeutics, as part of the From The Trenches feature of LifeSciVC. As Boston’s weather has started its turn from the frigid darkness that is a northeast winter to the longer days and lighter conditions…






Boiling It Down: Conveying Complexity For Decision-makers

By Ankit Mahadevia, former CEO of Spero Therapeutics, as part of the From The Trenches feature of LifeSciVC. Drug development is complex. So is running a business. Sometimes, the work of doing both can make your head spin. In my…






Clinical Trial Enrollment, ASCO 2013 Edition

Even by the already-painfully-embarrassingly-low standards of clinical trial enrollment in general, patient enrollment in cancer clinical trials is slow. Horribly slow. In many cancer trials, randomizing one patient every three or four months isn't bad at all – in fact, it's par for the course. The most
commonly-cited number is that only 3% of cancer patients participate in a trial – and although exact details of how that number is measured are remarkably difficult to pin down, it certainly can't be too far from reality.

Ultimately, the cost of slow enrollment is borne almost entirely by patients; their payment takes the form of fewer new therapies and less evidence to support their treatment decisions.

So when tens of thousands of the world's top oncologists fly into Chicago to meet, you'd figure that improving accrual would be high on everyone’s agenda. You can't run your trial without patients, after all.

But every year, the annual ASCO meeting underdelivers in new ideas for getting more patients into trials. I suppose this is a consequence of ASCO's members-only focus: getting the oncologists themselves to address patient accrual is a bit like asking NASCAR drivers to tackle the problems of aerodynamics, engine design, and fuel chemistry.

Nonetheless, every year, a few brave souls do try. Here is a quick rundown of accrual-related abstracts at this year’s meeting, conveniently sorted into 3 logical categories:

1. As Lord Kelvin may or may not have said, “If you cannot measure it, you cannot improve it.”


Probably the most sensible of this year's crop, because rather than trying to make something out of nothing, the authors measure exactly how pervasive the nothing is. Specifically, they attempt to obtain fairly basic patient accrual data for the last three years' worth of clinical trials in kidney cancer. Out of 108 trials identified, they managed to get – via search and direct inquiries with the trial sponsors – basic accrual data for only 43 (40%).

That certainly qualifies as “terrible”, though the authors content themselves with “poor”.

Interestingly, exactly zero of the 32 industry-sponsored trials responded to the authors' initial survey. This fits with my impression that pharma companies continue to think of accrual data as proprietary, though what sort of business advantage it gives them is unclear. Any one company will have only run a small fraction of these studies, greatly limiting their ability to draw anything resembling a valid conclusion.


CALGB investigators look at 110 trials over the past 10 years to see if they can identify any predictive markers of successful enrollment. Unfortunately, the trials themselves are pretty heterogeneous (accrual periods ranged from 6 months to 8.8 years), so finding a consistent marker for successful trials would seem unlikely.

And, in fact, none of the usual suspects (e.g., startup time, disease prevalence) appears to have been significant. The exception was provision of medication by the study, which was positively associated with successful enrollment.

The major limitation with this study, apart from the variability of trials measured, is in its definition of “successful”, which is simply reaching the total number of planned enrolled patients. Under this definition, a slow-enrolling trial that drags on for years before finally reaching its goal counts as successful, whereas if that same trial had been stopped early it would count as unsuccessful. While that sometimes may be the case, it's easy to imagine situations where allowing a slow trial to drag on is a painful waste of resources – especially if results are delayed enough to bring their relevance into question.

Even worse, though, is that a trial’s enrollment goal is itself a prediction. The trial steering committee determines how many sites, and what resources, will be needed to hit the number needed for analysis. So in the end, this study is attempting to identify predictors of successful predictions, and there is no reason to believe that the initial enrollment predictions were made with any consistent methodology.

2. If you don't know, maybe ask somebody?



With these two abstracts we celebrate and continue the time-honored tradition of alchemy, whereby we transmute base opinion into golden data. The magic number appears to be 100: if you've got 3 digits' worth of doctors telling you how they feel, that must be worth something.

In the first abstract, a working group is formed to identify and vote on the major barriers to accrual in oncology trials. Then – and this is where the magic happens – that same group is asked to identify and vote on possible ways to overcome those barriers.

In the second, a diverse assortment of community oncologists were given an online survey to provide feedback on the design of a phase 3 trial in light of recent new data. The abstract doesn't specify who was initially sent the survey, so we cannot determine the response rate, or compare survey responders to the general population (I'll take a wild guess and go with “massive response bias”).

Market research is sometimes useful. But what cancer clinical trials do not need right now are more surveys and working groups. The “strategies” listed in the first abstract are part of the same cluster of ideas that have been on the table for years now, with no appreciable increase in trial accrual.

3. The obligatory “What the What?” abstract



The force with which my head hit my desk after reading this abstract made me concerned that it had left permanent scarring.

If this had been re-titled “Poor Measurement of Accrual Factors Leads to Inaccurate Accrual Reporting”, would it still have been accepted for this year’s meeting? That's certainly a more accurate title.

Let’s review: a trial intends to enroll both white and minority patients. Whites enroll much faster, leading to a period where only minority patients are recruited. Then, according to the authors, “an almost 4-fold increase in minority accrual raises question of accrual disparity.” So, sites will only recruit minority patients when they have no choice?

But wait: the number of sites wasn't the same during the two periods, and start-up times were staggered. Adjusting for actual site time, the average minority accrual rate was 0.60 patients/site/month in the first part and 0.56 in the second. So the apparent 4-fold increase was entirely an artifact of bad math.
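The fix is nothing more than dividing enrollment by accumulated site-time rather than by calendar time. A minimal sketch in Python (the inputs are made up for illustration; only the 0.60 and 0.56 rates come from the abstract):

# Accrual rate adjusted for staggered site activation.
# Inputs are illustrative; only the resulting rates mirror the abstract.
def accrual_rate(patients_enrolled, site_months):
    """Patients per site per month, where site_months is the sum of
    months each site was actually open to enrollment."""
    return patients_enrolled / site_months

period1 = accrual_rate(patients_enrolled=30, site_months=50)  # 0.60
period2 = accrual_rate(patients_enrolled=28, site_months=50)  # 0.56
print(period1, period2)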

This would be horribly embarrassing were it not for the fact that bad math seems to be endemic in clinical trial enrollment. Failing to adjust for start-up time and number of sites is so routine that actually doing the adjustment is apparently grounds for a presentation.

The bottom line


What we need now is to rigorously (and prospectively) compare and measure accrual interventions. We have lots of candidate ideas, and there is no need for more retrospective studies, working groups, or opinion polls to speculate on which ones will work best.  Where possible, accrual interventions should themselves be randomized to minimize confounding variables which prevent accurate assessment. Data needs to be uniformly and completely collected. In other words, the standards that we already use for clinical trials need to be applied to the enrollment measures we use to engage patients to participate in those trials.

This is not an optional consideration. It is an ethical obligation we have to cancer patients: we need to assure that we are doing all we can to maximize the rate at which we generate new evidence and test new therapies.

[Image credit: Logarithmic turtle accrual rates courtesy of Flickr user joleson.]





Brazen Scofflaws? Are Pharma Companies Really Completely Ignoring FDAAA?

Results reporting requirements are pretty clear. Maybe critics should re-check their methods?

Ben Goldacre has rather famously described the clinical trial reporting requirements in the Food and Drug Administration Amendments Act of 2007 as a “fake fix” that was being thoroughly “ignored” by the pharmaceutical industry.

Pharma: breaking the law in broad daylight?
He makes this sweeping, unconditional proclamation about the industry and its regulators on the basis of a single study in the BMJ, blithely ignoring the facts that a) the authors of the study admitted that they could not adequately determine the number of studies that were meeting FDAAA requirements, and b) a subsequent FDA review identified only 15 trials potentially out of compliance, out of a pool of thousands.


Despite the fact that the FDA, which has access to more data, says that only a tiny fraction of studies are potentially noncompliant, Goldacre's frequently repeated claim that the law is being ignored seems to have caught on in the general run of journalistic and academic discussions about FDAAA.

And now there appears to be additional support for the idea that a large percentage of studies are noncompliant with FDAAA results reporting requirements, in the form of a new study in the Journal of Clinical Oncology: "Public Availability of Results of Trials Assessing Cancer Drugs in the United States" by Thi-Anh-Hoa Nguyen et al. In it, the authors report even lower levels of FDAAA compliance – a mere 20% of randomized clinical trials met requirements of posting results on clinicaltrials.gov within one year.

Unsurprisingly, the JCO results were immediately picked up and circulated uncritically by the usual suspects.

I have to admit not knowing much about pure academic and cooperative group trial operations, but I do know a lot about industry-run trials – simply put, I find the data as presented in the JCO study impossible to believe. Everyone I work with in pharma trials is painfully aware of the regulatory environment they work in. FDAAA compliance is a given, a no-brainer: large internal legal and compliance teams are everywhere, ensuring that the letter of the law is followed in clinical trial conduct. If anything, pharma sponsors are twitchily over-compliant with these kinds of regulations (for example, most still adhere to 100% verification of source documentation – sending monitors to physically examine every single record of every single enrolled patient – even after the FDA explicitly told them they didn't have to).

I realize that’s anecdotal evidence, but when such behavior is so pervasive, it’s difficult to buy into data that says it’s not happening at all. The idea that all pharmaceutical companies are ignoring a highly visible law that’s been on the books for 6 years is extraordinary. Are they really so brazenly breaking the rules? And is FDA abetting them by disseminating incorrect information?

Those are extraordinary claims, and would seem to require extraordinary evidence. The BMJ study had clear limitations that make its implications entirely unclear. Is the JCO article any better?

Some Issues


In fact, there appear to be at least two major issues that may have seriously compromised the JCO findings:

1. Studies that were certified as being eligible for delayed reporting requirements, but do not have their certification date listed.

The study authors make what I believe to be a completely unwarranted assumption:

In trials for approval of new drugs or approval for a new indication, a certification [permitting delayed results reporting] should be posted within 1 year and should be publicly available.

It’s unclear to me why the authors think the certifications “should be” publicly available. In re-reading FDAAA section 801, I don’t see any reference to that being a requirement. I suppose I could have missed it, but the authors provide a citation to a page that clearly does not list any such requirement.

But their methodology assumes that all trials that have a certification will have it posted:

If no results were posted at ClinicalTrials.gov, we determined whether the responsible party submitted a certification. In this case, we recorded the date of submission of the certification to ClinicalTrials.gov.

If a sponsor gets approval from FDA to delay reporting (as is routine for all drugs that are either not approved for any indication, or being studied for a new indication – i.e., the overwhelming majority of pharma drug trials), but doesn't post that approval on the registry, the JCO authors deem that trial “noncompliant”. This is not warranted: the company may have simply chosen not to post the certification despite being entirely FDAAA compliant.

2. Studies that were previously certified for delayed reporting and subsequently reported results

It is hard to tell how the authors treated this rather substantial category of trials. If a trial was certified for delayed results reporting, but then subsequently posted results, the certification date becomes difficult to find. Indeed, it appears that in cases where results were posted, the authors simply looked at the time from study completion to results posting. In effect, this would re-classify almost every single one of these trials from compliant to non-compliant. Consider this example trial:


  • Phase 3 trial completes January 2010
  • Certification of delayed results obtained December 2010 (compliant)
  • FDA approval June 2013
  • Results posted July 2013 (compliant)


In looking at the JCO paper's methods section, it really appears that this trial would be classified as reporting results 3.5 years after completion, and therefore be considered noncompliant with FDAAA. In fact, this trial is entirely kosher, and would be extremely typical for many phase 2 and 3 trials in industry.
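To make the two readings concrete, here is a toy sketch (mine, not the paper's actual algorithm; the day-level dates are invented around the example above, and the statutory deadlines are simplified to flat 365-day windows):

from datetime import date

ONE_YEAR = 365  # days; a simplification of FDAAA's 12-month windows

def naive_compliant(completion, results_posted):
    # The reading this post attributes to the JCO authors:
    # time from study completion to results posting, certification ignored.
    return (results_posted - completion).days <= ONE_YEAR

def certification_aware_compliant(completion, certified, approval, results_posted):
    # The post's argument: a timely certification defers the results
    # deadline until after FDA approval of the drug or indication.
    if certified is not None and (certified - completion).days <= ONE_YEAR:
        return (results_posted - approval).days <= ONE_YEAR
    return naive_compliant(completion, results_posted)

# The example trial above (exact days invented):
completion = date(2010, 1, 15)
certified = date(2010, 12, 15)
approval = date(2013, 6, 15)
results = date(2013, 7, 15)
print(naive_compliant(completion, results))                                      # False
print(certification_aware_compliant(completion, certified, approval, results))  # True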

Time for Some Data Transparency


The above two concerns may, in fact, be non-issues. They certainly appear to be implied in the JCO paper, but the wording isn't terribly detailed and could easily be giving me the wrong impression.

However, if either or both of these issues are real, they may affect the vast majority of "noncompliant" trials in this study. Given that most clinical trials are looking either at new drugs or at new indications for existing drugs, these two issues may entirely explain the gap between the JCO study and the unequivocal FDA statements that contradict it.

I hope that, given the importance of transparency in research, the authors will be willing to post their data set publicly so that others can review their assumptions and independently verify their conclusions. It would be more than a bit ironic otherwise.

[Image credit: Shameless lawlessness via Flickr user willytronics.]


Thi-Anh-Hoa Nguyen, Agnes Dechartres, Soraya Belgherbi, and Philippe Ravaud (2013). Public Availability of Results of Trials Assessing Cancer Drugs in the United States. Journal of Clinical Oncology. DOI: 10.1200/JCO.2012.46.9577





Counterfeit Drugs in Clinical Trials?

[Image: Counterfeits flooding the market? Really?]

This morning I ran across a bit of a coffee-spitter: in the middle of an otherwise opaquely underinformative press release from TransCelerate Biopharma about the launch of their "Comparator Network" – which will perhaps streamline member companies' ability to obtain drugs from each other for clinical trials using active comparator arms – the CEO of the consortium, Dalvir Gill, drops a rather remarkable quote:

"Locating and accessing these comparators at the right time, in the right quantities and with the accompanying drug stability and regulatory information we need, doesn't always happen efficiently. This is further complicated by infiltration of the commercial drug supply chain by counterfeit drugs.  With the activation of our Comparator Network the participating TransCelerate companies will be able to source these comparator drugs directly from each other, be able to secure supply when they need it in the quantities they need, have access to drug data and totally mitigate the risk of counterfeit drugs in that clinical trial."

[Emphasis added.]

I have to admit to being a little floored by the idea that there is any sort of risk, in industry-run clinical trials, of counterfeit medication "infiltration".

Does Gill know something that the rest of us don't? Or is this just an awkward slap at perceived competition – innuendo against the companies that currently manage clinical trial comparator drug supply? Or an attempt at depicting the trials of non-Transcelerate members as risky and prone to fraud?

Either way, it could use some explaining. Thinking I might have missed something, I did do a quick literature search to see if I could come across any references to counterfeits in trials. Google Scholar and PubMed produced no useful results, but Wikipedia helpfully noted in its entry on counterfeit medications:

Counterfeit drugs have even been known to have been involved in clinical drug trials.[citation needed]


And on that point, I think we can agree: Citation needed. I hope the folks at Transcelerate will oblige.





The Coming of the MOOCT?

Big online studies, in search of millions of participants.

Back in September, I enrolled in the Health eHeart Study - an entirely online research study tracking cardiac health. (Think Framingham Heart, cast wider and shallower - less intensive follow-up, but spread out to the entire country.)


[In the spirit of full disclosure, I should note that I haven’t completed any follow-up activities on the Health eHeart website yet. Yes, I am officially part of the research adherence problem…]


Yesterday, I learned of the Quantified Diet Project, an entirely online/mobile app-supported randomized trial of 10 different weight loss regimens. The intervention is short - only 4 weeks - but that’s probably substantially longer than most New Year diets manage to last, and should be just long enough to detect some early differences among the approaches.


I have been excited about the potential for online medical research for quite some time. For me, the real beginning was when PatientsLikeMe published the results of their online lithium for ALS research study - as I wrote at the time, I have never been so enthused about a negative trial before or since.



That was two and a half years ago, and there hasn't been a ton of activity since then outside of PatientsLikeMe (who have expanded and formalized their activities in the Open Research Exchange). So I’m eager to hear how these two new studies go. There are some interesting similarities and differences:


  • Both are university/private collaborations, and both (perhaps unsurprisingly) are rooted in California: Health eHeart is jointly run by UCSF and the American Heart Association, while Quantified Diet is run by app developer Lift with scientific support from an (unidentified?) team at Berkeley.
  • Both are pushing for a million or more participants, dwarfing even very large traditional studies by orders of magnitude.
  • Health eHeart is entirely observational, and researchers will have the ability to request its data to test their own hypotheses, whereas Quantified Diet is a controlled, randomized trial.


Data entry screen on Health eHeart
I really like the user interface for Health eHeart - it’s extremely simple, with a logical flow to the sections. It clearly appears to be designed for older participants, and the extensive data intake is subdivided into a large number of subsections, each of which can typically be completed in 2-4 minutes.



I have not enrolled into the Quantified Diet, but it appears to have a strong social media presence. You can follow the Twitter conversation through the #quantdiet hashtag. The semantic web and linked data guru Kerstin Forsberg has already posted about joining, and I hope to hear more from her and from clinical trial social media expert Rahlyn Gossen, who’s also joined.


To me, probably the most intriguing technical feature of the QuantDiet study is its “voluntary randomization” design. Participants can self-select into the diet of their choice, or can choose to be randomly assigned by the application. It will be interesting to see whether any differences emerge between the participants who chose a particular arm and those who were randomized into that arm - how much does a person’s preference matter?


In an earlier tweet I asked, “is this a MOOCT?” - short for Massive Open Online Clinical Trial. I don’t know if that’s the best name for it, and I’d love to hear other suggestions. By any other name, however, these are still great initiatives and I look forward to seeing them thrive in the coming years.

The implications for pharmaceutical and medical device companies are still unclear. Pfizer's jump into the world of "virtual trials" was a major bust, and widely second-guessed. I believe there is definitely a role and a path forward here, and these big efforts may teach us a lot about how patients want to be engaged online.





Waiver of Informed Consent - proposed changes in the 21st Century Cures Act

Adam Feuerstein points out - and expresses considerable alarm over - an overlooked clause in the 21st Century Cures Act:


In another tweet, he suggests that the act will "decimate" informed consent in drug trials. Subsequent responses and retweets did nothing to clarify the situation, and if anything tended to spread, rather than address, Feuerstein's confusion.

Below is a quick recap of the current regulatory context and a real-life example of where the new wording may be helpful. In short, though, I think it's safe to say:


  1. Waiving informed consent is not new; it's already permitted under current regs
  2. The standards for obtaining a waiver of consent are stringent
  3. They may, in fact, be too stringent in a small number of situations
  4. The act may, in fact, be helpful in those situations
  5. Feuerstein may, in fact, need to chill out a little bit


(For the purposes of this discussion, I’m talking about drug trials, but I believe the device trial situation is parallel.)

Section 505(i) - the section this act proposes to amend - instructs the Secretary of Health and Human Services to promulgate rules regarding clinical research. Subsection 4 addresses informed consent:

…the manufacturer, or the sponsor of the investigation, require[s] that experts using such drugs for investigational purposes certify to such manufacturer or sponsor that they will inform any human beings to whom such drugs, or any controls used in connection therewith, are being administered, or their representatives, that such drugs are being used for investigational purposes and will obtain the consent of such human beings or their representatives, except where it is not feasible or it is contrary to the best interests of such human beings.

[emphasis mine]

Note that this section already recognizes situations where informed consent may be waived for practical or ethical reasons.

These rules were in fact promulgated under 45 CFR part 46, section 116. The relevant bit – as far as this conversation goes – regards circumstances under which informed consent might be fully or partially waived. Specifically, there are 4 criteria, all of which need to be met:

 (1) The research involves no more than minimal risk to the subjects;
 (2) The waiver or alteration will not adversely affect the rights and welfare of the subjects;
 (3) The research could not practicably be carried out without the waiver or alteration; and
 (4) Whenever appropriate, the subjects will be provided with additional pertinent information after participation.

In practice, this is an especially difficult set of criteria to meet for most studies. Criterion (1) rules out most “conventional” clinical trials, because the hallmarks of those trials (use of an investigational medicine, randomization of treatment, blinding of treatment allocation) are all deemed to be more than “minimal risk”. That leaves observational studies – but even many of these cannot clear the bar of criterion (3).

That word “practicably” is a doozy.

Here’s an all-too-real example from recent personal experience. A drug manufacturer wants to understand physicians’ rationales for performing a certain procedure. It seems – but there is little hard data – that a lot of physicians do not strictly follow guidelines on when to perform the procedure. So we devise a study: whenever the procedure is performed, we ask the physician to complete a quick form categorizing why they made their decision. We also ask him or her to transcribe a few pieces of data from the patient chart.

Even though the patients aren’t personally identifiable, the collection of medical data qualifies this as a clinical trial.

It’s a minimal risk trial, definitely: the trial doesn’t dictate at all what the doctor should do; it just asks him or her to record what they did and why, and to supply a bit of medical context for the decision. All told, we estimated 15 minutes of physician time to complete the form.

The IRB monitoring the trial, however, denied our request for a waiver of informed consent, since it was “practicable” (not easy, but possible) to obtain informed consent from the patient.  Informed consent – even with a slimmed-down form – was going to take a minimum of 30 minutes, so the length of the physician’s involvement tripled. In addition, many physicians opted out of the trial because they felt that the informed consent process added unnecessary anxiety and alarm for their patients, and provided no corresponding benefit.

The end result was not surprising: the budget for the trial more than doubled, and enrollment was far below expectations.

Which leads to two questions:

1. Did the informed consent appreciably help a single patient in the trial? Very arguably, no. Consenting to being “in” the trial made zero difference in the patients’ care, added time to their stay in the clinic, and possibly added to their anxiety.
2. Was less knowledge collected as a result? Absolutely, yes. The sponsor could have run two studies for the same cost. Instead, they ultimately reduced the power of the trial in order to cut losses.


Bottom line, it appears that the modifications proposed in the 21st Century Cures Act really only target trials like the one in the example. The language clearly retains criteria 1 and 2 of the current HHS regs, which are the most important from a patient safety perspective, but cuts down the “practicability” requirement, potentially permitting high quality studies to be run with less time and cost.

Ultimately, it looks like a very small, but positive, change to the current rules.

The rest of the act appears to be a mash-up of some very good and some very bad (or at least not fully thought out) ideas. However, this clause should not be cause for alarm.





Patrick Dempsey aims to raise awareness of cancer disparities and encourage screening

NPR's Leila Fadel talks with actor Patrick Dempsey about his efforts to raise money for cancer treatment and prevention.





With Trump coming into power, the NIH is in the crosshairs

The National Institutes of Health, the crown jewel of biomedical research in the U.S., could face big changes under the new Trump administration, some fueled by pandemic-era criticisms of the agency.





A Roundup of "Gacor" Slot Games With Today's Highest RTP Percentages

In the ever-growing world of online gambling, players' search for the best odds of winning leads to a popular phenomenon: a roundup of "gacor" slot games with the highest RTP percentages today…






Secret Tips for Winning Easily at "Gacor" Online Slots

Uncovering the secret to winning easily at "gacor" online slots is every online gambler's dream. First, pay close attention to choosing the right slot machine. Pick a machine with a payout rate or…






Easy-to-Win "Gacor" Slot Games From Habanero

Habanero doesn't just offer ordinary slot games, but an adventure of limitless winning. With diverse themes ranging from space exploration to the world of mythology, every Habanero game has its own unique…






The Best and Most Popular Online "Gacor" Slot Providers of 2024

As if stepping through a time portal, we enter 2024 with a lineup of online slot providers that not only keep us company but also tease the imagination. Every click, every spin of the reels, opens a new chapter…






Today's Trusted Registration Links for Easy-Win Maxwin "Gacor" Slot Sites

The big profits and excitement offered by online slot machines have made them increasingly popular. But in the sea of available slot sites, how can you find the best site that can deliver…






MRI Sheds Its Shielding and Superconducting Magnets



Magnetic resonance imaging (MRI) has revolutionized healthcare by providing radiation-free, non-invasive 3-D medical images. However, MRI scanners often consume 25 kilowatts or more to power magnets producing magnetic fields of up to 1.5 tesla. These requirements typically limit scanners’ use to specialized centers and departments in hospitals.

A University of Hong Kong team has now unveiled a low-power, highly simplified, full-body MRI device. With the help of artificial intelligence, the new scanner requires only a compact 0.05 T magnet and can run off a standard wall power outlet, drawing just 1,800 watts during operation. The researchers say their new AI-enabled machine can produce clear, detailed images on par with those from high-power MRI scanners currently used in clinics, and may one day help greatly improve access to MRI worldwide.

To generate images, MRI applies a magnetic field to align the poles of the body’s protons in the same direction. An MRI scanner then probes the body with radio waves, knocking the protons askew. When the radio waves turn off, the protons return to their original alignment, transmitting radio signals as they do so. MRI scanners receive these signals, converting them into images.
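As a back-of-the-envelope aside (my numbers, not the article's): the frequency of those radio waves scales linearly with field strength through the proton's gyromagnetic ratio, which is why a 0.05 T scanner must pick its signal out of a much lower, noisier radio band than a 1.5 T machine.

# Proton Larmor frequencies at different MRI field strengths.
GAMMA_MHZ_PER_T = 42.58  # proton gyromagnetic ratio (gamma / 2*pi)

for b_field in (0.05, 1.5, 7.0):  # tesla
    print(f"{b_field:>4} T -> {GAMMA_MHZ_PER_T * b_field:6.1f} MHz")
# 0.05 T ->    2.1 MHz
#  1.5 T ->   63.9 MHz
#  7.0 T ->  298.1 MHz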

More than 150 million MRI scans are conducted worldwide annually, according to the Organization for Economic Cooperation and Development. However, despite five decades of development, clinical MRI procedures remain out of reach for more than two-thirds of the world’s population, especially in low- and middle-income countries. For instance, whereas the United States has 40 scanners per million inhabitants, in 2016 there were only 84 MRI units serving West Africa’s population of more than 370 million.

This disparity largely stems from the high costs and specialized settings required for standard MRI scanners. They use powerful superconducting magnets that require a lot of space, power, and specialized infrastructure. They also need rooms shielded from radio interference, further adding to hardware costs, restricting their mobility, and hampering their availability in other medical settings.

Scientists around the globe have already been exploring low-cost MRI scanners that operate at ultra-low-field (ULF) strengths of less than 0.1 T. These devices may consume much less power and prove potentially portable enough for bedside use. Indeed, as the Hong Kong team notes, MRI development initially focused on low fields of about 0.05 T, until the introduction of the first whole-body 1.5 T superconducting scanner by General Electric in 1983.

The new MRI scanner (top left) is smaller than conventional scanners, and does away with bulky RF shielding and superconducting magnets. The new scanner’s imaging resolution is on par with conventional scanners (bottom). Ed X. Wu/The University of Hong Kong

Current ULF MRI scanners often rely on AI to help reconstruct images from what signals they gather using relatively weak magnetic fields. However, until now, these devices were limited to solely imaging the brain, extremities, or single organs, Udunna Anazodo, an assistant professor of neurology and neurosurgery at McGill University in Montreal who did not take part in the work, notes in a review of the new study.

The Hong Kong team has now developed a whole-body ULF MRI scanner in which patients are placed between two permanent neodymium iron boron magnet plates—one above the body and the other below. Although these permanent magnets are far weaker than superconducting magnets, they are low-cost, readily available, and don’t require liquid helium or cooling to superconducting temperatures. In addition, the amount of energy ULF MRI scanners deposit into the body is roughly one-thousandth that from conventional scanners, making heat generation during imaging much less of a concern, Anazodo notes in her review. ULF MRI is also much quieter than regular MRI, which may help with pediatric scanning, she adds.

The new machine consists of two units, each roughly the size of a hospital gurney. One unit houses the MRI device, while the other supports the patient’s body as it slides into the scanner.

To account for radio interference from both the outside environment and the ULF MRI’s own electronics, the scientists deployed 10 small sensor coils around the scanner and inside the electronics cabinet to help the machine detect potentially disruptive radio signals. They also employed deep learning AI methods to help reconstruct images even in the presence of strong noise. They say this eliminates the need for shielding against radio waves, making the new device far more portable than conventional MRI.
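The principle behind using reference sensors this way can be shown without any deep learning at all (a toy sketch of my own, not the team's published method): estimate how interference couples from the reference coils into the receive chain, then subtract the predicted interference.

import numpy as np

rng = np.random.default_rng(0)
n_samples, n_ref_coils = 10_000, 10

# What the reference sensors pick up (interference only, no MRI signal).
ref = rng.normal(size=(n_samples, n_ref_coils))
coupling = rng.normal(size=n_ref_coils)         # unknown coupling weights
true_signal = np.sin(np.linspace(0, 60, n_samples))
measured = true_signal + ref @ coupling         # contaminated receive channel

# Least-squares estimate of the coupling, then subtract the predicted EMI.
coupling_hat, *_ = np.linalg.lstsq(ref, measured, rcond=None)
cleaned = measured - ref @ coupling_hat
print(np.std(measured - true_signal), np.std(cleaned - true_signal))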

In tests on 30 healthy volunteers, the device captured detailed images of the brain, spine, abdomen, heart, lung, and extremities. Scanning each of these targets took eight minutes or less, at voxel sizes of roughly 2 by 2 by 8 millimeters. In Anazodo’s review, she notes the new machine produced image quality comparable to that of conventional MRI scanners.

“It’s the beginning of a multidisciplinary endeavor to advance an entirely new class of simple, patient-centric and computing-powered point-of-care diagnostic imaging device,” says Ed Wu, a professor and chair of biomedical engineering at the University of Hong Kong.

The researchers used standard off-the-shelf electronics. All in all, they estimate hardware costs at about US $22,000. (According to imaging equipment company Block Imaging in Holt, Michigan, entry-level MRI scanners start at $225,000, and advanced premium machines can cost $500,000 or more.)

The prototype scanner’s magnet assembly is relatively heavy, weighing about 1,300 kilograms. (This is still lightweight compared to a typical clinical MRI scanner, which can weigh up to 17 tons, according to New York University’s Langone Health center.) The scientists note that optimizing the hardware could reduce the magnet assembly’s weight to about 600 kilograms, which would make the entire scanner mobile.

The researchers note their new device is not meant to replace conventional high-magnetic-field MRI. For instance, a 2023 study notes that next-generation MRI scanners using powerful 7 T magnets could yield a resolution of just 0.35 millimeters. Instead, ULF MRI can complement existing MRI by going to places that can’t host standard MRI devices, such as intensive care units and community clinics.

In an email, Anazodo adds that this new Hong Kong work is just one of a number of exciting ULF MRI scanners under development. For instance, she notes that Gordon Sarty at the University of Saskatchewan and his colleagues are developing a device that is potentially even lighter, cheaper, and more portable than the Hong Kong machine, which they are researching for use in whole-body imaging on the International Space Station.

Wu and his colleagues detailed their findings online 10 May in the journal Science.

This article appears in the July 2024 print issue as “Compact MRI Ditches Superconducting Magnets.”





Microneedle Glucose Sensors Keep Monitoring Skin-Deep



For people with diabetes, glucose monitors are a valuable tool to monitor their blood sugar. The current generation of these biosensors detect glucose levels with thin, metallic filaments inserted in subcutaneous tissue, the deepest layer of the skin where most body fat is stored.

Medical technology company Biolinq is developing a new type of glucose sensor that doesn’t go deeper than the dermis, the middle layer of skin that sits above the subcutaneous tissue. The company’s “intradermal” biosensors take advantage of metabolic activity in shallower layers of skin, using an array of electrochemical microsensors to measure glucose—and other chemicals in the body—just beneath the skin’s surface.

Biolinq just concluded a pivotal clinical trial earlier this month, according to CEO Rich Yang, and the company plans to submit the device to the U.S. Food and Drug Administration for approval at the end of the year. In April, Biolinq received US $58 million in funding to support the completion of its clinical trials and subsequent submission to the FDA.

Biolinq’s glucose sensor is “the world’s first intradermal sensor that is completely autonomous,” Yang says. While other glucose monitors require a smartphone or other reader to collect and display the data, Biolinq’s includes an LED display to show when the user’s glucose is within a healthy range (indicated by a blue light) or above that range (yellow light). “We’re providing real-time feedback for people who otherwise could not see or feel their symptoms,” Yang says. (In addition to this real-time feedback, the user can also load long-term data onto a smartphone by placing it next to the sensor, like Abbott’s FreeStyle Libre, another glucose monitor.)

More than 2,000 microsensor components are etched onto each 200-millimeter silicon wafer used to manufacture the biosensors. Biolinq

Biolinq’s hope is that its approach could lead to sustainable changes in behavior on the part of the individual using the sensor. The device is intentionally placed on the upper forearm to be in plain sight, so users can receive immediate feedback without manually checking a reader. “If you drink a glass of orange juice or soda, you’ll see this go from blue to yellow,” Yang explains. That could help users better understand how their actions—such as drinking a sugary beverage—change their blood sugar and take steps to reduce that effect.
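The feedback loop Yang describes is simple enough to state as code (a sketch of mine; the threshold below is a placeholder, since Biolinq hasn't published its cutoffs):

def led_color(glucose_mg_dl, high=180):
    # Placeholder threshold; the actual cutoff is not public.
    return "yellow" if glucose_mg_dl > high else "blue"

print(led_color(110))  # blue: within the healthy range
print(led_color(210))  # yellow: that glass of orange juice showing up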

Biolinq’s device consists of an array of microneedles etched onto a silicon wafer using semiconductor manufacturing. (Other glucose sensors’ filaments are inserted with an introducer needle.) Each chip has a small 2-millimeter by 2-millimeter footprint and contains seven independent microneedles, which are coated with membranes through a process similar to electroplating in jewelry making. One challenge the industry has faced is ensuring that microsensors do not break at this small scale. The key engineering insight Biolinq introduced, Yang says, was using semiconductor manufacturing to build the biosensors. Importantly, he says, silicon “is harder than titanium and steel at this scale.”

Miniaturization allows for sensing closer to the surface of the skin, where there is a high level of metabolic activity. That makes the shallow depth ideal for monitoring glucose, as well as other important biomarkers, Yang says. Due to this versatility, combined with the use of a sensor array, the device in development can also monitor lactate, an important indicator of muscle fatigue. With the addition of a third data point, ketones (which are produced when the body burns fat), Biolinq aims to “essentially have a metabolic panel on one chip,” Yang says.

Using an array of sensors also creates redundancy, improving the reliability of the device if one sensor fails or becomes less accurate. Glucose monitors tend to drift over the course of wear, but with multiple sensors, Yang says that drift can be better managed.
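The article doesn't say how the readings are combined, but a robust aggregate over the seven microneedles illustrates why redundancy helps (a sketch under my own assumption of a simple median):

import statistics

def combined_reading(sensor_values):
    # Ignore dead sensors (None) and let the median shrug off one
    # drifting or failed microneedle.
    live = [v for v in sensor_values if v is not None]
    return statistics.median(live)

# Seven microneedles; one has failed, one is drifting high.
print(combined_reading([102, 104, 101, 150, 103, None, 102]))  # 102.5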

One downside to the autonomous display is the drain on battery life, Yang says. The battery life limits the biosensor’s wear time to 5 days in the first-generation device. Biolinq aims to extend that to 10 days of continuous wear in its second generation, which is currently in development, by using a custom chip optimized for low-power consumption rather than off-the-shelf components.

The company has collected nearly 1 million hours of human performance data, along with comparators including commercial glucose monitors and venous blood samples, Yang says. Biolinq aims to gain FDA approval first for use in people with type 2 diabetes not using insulin and later expand to other medical indications.

This article appears in the August 2024 print issue as “Glucose Monitor Takes Page From Chipmaking.”





Biocompatible Mic Could Lead to Better Cochlear Implants



Cochlear implants—the neural prosthetic cousins of standard hearing aids—can be a tremendous boon for people with profound hearing loss. But many would-be users are turned off by the device’s cumbersome external hardware, which must be worn to process signals passing through the implant. So researchers have been working to make a cochlear implant that sits entirely inside the ear, to restore speech and sound perception without the lifestyle restrictions imposed by current devices.

A new biocompatible microphone offers a bridge to such fully internal cochlear implants. About the size of a grain of rice, the microphone is made from a flexible piezoelectric material that directly measures the sound-induced motion of the eardrum. The tiny microphone’s sensitivity matches that of today’s best external hearing aids.

Cochlear implants create a novel pathway for sounds to reach the brain. An external microphone and processor, worn behind the ear or on the scalp, collect and translate incoming sounds into electrical signals, which get transmitted to an electrode that’s surgically implanted in the cochlea, deep within the inner ear. There, the electrical signals directly stimulate the auditory nerve, sending information to the brain to interpret as sound.

But, says Hideko Heidi Nakajima, an associate professor of otolaryngology at Harvard Medical School and Massachusetts Eye and Ear, “people don’t like the external hardware.” They can’t wear it while sleeping, or while swimming or doing many other forms of exercise, and so many potential candidates forgo the device altogether. What’s more, incoming sound goes directly into the microphone and bypasses the outer ear, which would otherwise perform the key functions of amplifying sound and filtering noise. “Now the big idea is instead to get everything—processor, battery, microphone—inside the ear,” says Nakajima. But even in clinical trials of fully internal designs, the microphone’s sensitivity—or lack thereof—has remained a roadblock.

Nakajima, along with colleagues from MIT, Harvard, and Columbia University, fabricated a cantilever microphone that senses the motion of a bone attached behind the eardrum called the umbo. Sound entering the ear canal causes the umbo to vibrate unidirectionally, with a displacement 10 times as great as other nearby bones. The tip of the “UmboMic” touches the umbo, and the umbo’s movements flex the material and produce an electrical charge through the piezoelectric effect. These electrical signals can then be processed and transmitted to the auditory nerve. “We’re using what nature gave us, which is the outer ear,” says Nakajima.

Why a cochlear implant needs low-noise, low-power electronics

Making a biocompatible microphone that can detect the eardrum’s minuscule movements isn’t easy, however. Jeff Lang, a professor of electrical engineering at MIT who jointly led the work, points out that only certain materials are tolerated by the human body. Another challenge is shielding the device from internal electronics to reduce noise. And then there’s long-term reliability. “We’d like an implant to last for decades,” says Lang.

In tests of the implantable microphone prototype, a laser beam measures the umbo’s motion, which gets transferred to the sensor tip. JEFF LANG & HEIDI NAKAJIMA

The researchers settled on a triangular design for the 3-by-3-millimeter sensor made from two layers of polyvinylidene fluoride (PVDF), a biocompatible piezoelectric polymer, sandwiched between layers of flexible, electrode-patterned polymer. When the cantilever tip bends, one PVDF layer produces a positive charge and the other produces a negative charge—taking the difference between the two cancels much of the noise. The triangular shape provides the most uniform stress distribution within the bending cantilever, maximizing the displacement it can undergo before it breaks. “The sensor can detect sounds below a quiet whisper,” says Lang.
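A toy illustration (mine) of why differencing the two PVDF layers cancels noise: bending drives the layers with opposite signs, while electrical interference appears on both with the same sign.

signal, interference = 1.0, 5.0           # arbitrary units
top_layer = +signal + interference        # positive piezoelectric charge + noise
bottom_layer = -signal + interference     # negative charge + the same noise
differential = top_layer - bottom_layer   # doubles the signal...
print(differential)                       # 2.0 -- the interference is gone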

Emma Wawrzynek, a graduate student at MIT, says that working with PVDF is tricky because it loses its piezoelectric properties at high temperatures, and most fabrication techniques involve heating the sample. “That’s a challenge especially for encapsulation,” which involves encasing the device in a protective layer so it can remain safely in the body, she says. The group had success by gradually depositing titanium and gold onto the PVDF while using a heat sink to cool it. That approach created a shielding layer that protects the charge-sensing electrodes from electromagnetic interference.

The other tool for improving a microphone’s performance is, of course, amplifying the signal. “On the electronics side, a low-noise amp is not necessarily a huge challenge to build if you’re willing to spend extra power,” says Lang. But, according to MIT graduate student John Zhang, cochlear implant manufacturers try to limit power for the entire device to 5 milliwatts, and just 1 mW for the microphone. “The trade-off between noise and power is hard to hit,” Zhang says. He and fellow student Aaron Yeiser developed a custom low-noise, low-power charge amplifier that outperformed commercially available options.

“Our goal was to perform better than or at least equal the performance of high-end capacitative external microphones,” says Nakajima. For leading external hearing-aid microphones, that means sensitivity down to a sound pressure level of 30 decibels—the equivalent of a whisper. In tests of the UmboMic on human cadavers, the researchers implanted the microphone and amplifier near the umbo, input sound through the ear canal, and measured what got sensed. Their device reached 30 decibels over the frequency range from 100 hertz to 6 kilohertz, which is the standard for cochlear implants and hearing aids and covers the frequencies of human speech. “But adding the outer ear’s filtering effects means we’re doing better [than traditional hearing aids], down to 10 dB, especially in speech frequencies,” says Nakajima.

Plenty of testing lies ahead, at the bench and on sheep before an eventual human trial. But if their UmboMic passes muster, the team hopes that it will help more than 1 million people worldwide go about their lives with a new sense of sound.

The work was published on 27 June in the Journal of Micromechanics and Microengineering.





Superconducting Wire Sets New Current Capacity Record



UPDATE 31 OCTOBER 2024: No. 1 no longer. The would-have-been groundbreaking study published in Nature Communications by Amit Goyal et al. claiming the world’s highest-performing high-temperature superconducting wires yet has been retracted by the authors.

The journal’s editorial statement that now accompanies the paper says that after publication, an error in the calculation of the reported performance was identified. All of the study’s authors agreed with the retraction.

The researchers were first alerted to the issue by Evgeny Talantsev at the Mikheev Institute of Metal Physics in Ekaterinburg, Russia, and Jeffery Tallon at the Victoria University of Wellington in New Zealand. In a 2015 study, the two researchers had suggested upper limits for thin-film superconductors, and Tallon notes follow-up papers showed these limits held for more than 100 known superconductors. “The Goyal paper claimed current densities 2.5 times higher, so it was immediately obvious to us that there was a problem here,” he says.

Upon request, Goyal and his colleagues “very kindly agreed to release their raw data and did so quickly,” Tallon says. He and Talantsev discovered a mistake in the conversion of magnetization units.

“Most people who had been in the game for a long time would be fully conversant with the units conversion because the instruments all deliver magnetic data in [centimeter-gram-second] gaussian units, so they always have to be converted to [the International System of Units],” Tallon says. “It has always been a little tricky, but students are asked to take great care and check their numbers against other reports to see if they agree.”
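For readers outside the field (my illustration, with invented numbers, not the retracted paper's data): magnetometers report magnetization in CGS units of emu/cm³, critical current density is typically extracted from the width of the magnetization loop with a Bean-model formula, and 1 emu/cm³ equals 10³ A/m in SI, so a botched conversion rescales the reported current density by exactly the misplaced factor.

# Bean critical-state model, a commonly quoted CGS "practical" form for a slab:
#   Jc [A/cm^2] = 20 * delta_M [emu/cm^3] / w [cm]
delta_m = 500.0   # emu/cm^3 -- hypothetical magnetization loop width
w = 0.4           # cm -- hypothetical sample width
jc = 20 * delta_m / w
print(f"Jc = {jc:.2e} A/cm^2")

# The SI conversion that has to happen first if data come out in emu:
# 1 emu/cm^3 = 1e3 A/m. A slipped factor here propagates straight into Jc.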

In a statement, Goyal notes he and his colleagues “intend to continue to push the field forward” by continuing to explore ways to enhance wire performance using nanostructural modifications. —Charles Q. Choi

Original article from 17 August, 2024 follows:

Superconductors have for decades spurred dreams of extraordinary technological breakthroughs, but many practical applications for them have remained out of reach. Now a new study reveals what may be the world’s highest-performing high-temperature superconducting wires yet, ones that carry 50 percent as much current as the previous record-holder. Scientists add this advance was achieved without increased costs or complexity to how superconducting wires are currently made.

Superconductors conduct electricity with zero resistance. Classic superconductors work only at super-cold temperatures below 30 kelvin. In contrast, high-temperature superconductors can operate at temperatures above 77 K, which means they can be cooled to superconductivity using comparatively inexpensive and less burdensome cryogenics built around liquid nitrogen coolant.

Regular electrical conductors all resist electron flow to some degree, resulting in wasted energy. The fact that superconductors conduct electricity without dissipating energy has long led to dreams of significantly more efficient power grids. In addition, the way in which rivers of electric current course through them means superconductors can serve as powerful electromagnets, for applications such as maglev trains, better MRI scanners for medicine, doubling the amount of power generated from wind turbines, and nuclear fusion power plants.

“Today, companies around the world are fabricating kilometer-long, high-temperature superconductor wires,” says Amit Goyal, SUNY Distinguished Professor and SUNY Empire Innovation Professor at the University of Buffalo in New York.

However, many large-scale applications for superconductors may stay fantasies until researchers can find a way to fabricate high-temperature superconducting wires in a more cost-effective manner.

In the new research, scientists have created wires that have set new records for the amount of current they can carry at temperatures ranging from 5 K to 77 K. Moreover, fabrication of the new wires requires processes no more complex or costly than those currently used to make high-temperature superconducting wires.

“The performance we have reported in 0.2-micron-thick wires is similar to wires almost 10 times thicker,” Goyal says.

At 4.2 K, the new wires carried 190 million amps per square centimeter without any externally applied magnetic field. This is some 50 percent better than results reported in 2022 and a full 100 percent better than ones detailed in 2021, Goyal and his colleagues note. At 20 K and under an externally applied magnetic field of 20 tesla—the kind of conditions envisioned for fusion applications—the new wires may carry about 9.3 million amps per square centimeter, roughly 5 times greater than present-day commercial high-temperature superconductor wires, they add.
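To translate the headline density into an absolute current (my arithmetic, and the 4-millimeter tape width is an assumption; the paper reports density, not total current):

jc = 190e6                # A/cm^2 at 4.2 K, self-field (reported)
thickness_cm = 0.2e-4     # the 0.2-micron REBCO film
width_cm = 0.4            # 4 mm tape width -- my assumption
print(jc * thickness_cm * width_cm, "A")  # ~1520 A through a film thinner than a hair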

Another factor key to the success of commercial high-temperature superconductor wires is pinning force—the ability to keep magnetic vortices pinned in place within the superconductors, where they could otherwise interfere with electron flow. (So in that sense higher pinning force values are better here—more conducive to the range of applications expected for such high-capacity, high-temperature superconductors.) The new wires showed record-setting pinning force densities of more than 6.4 trillion newtons per cubic meter at 4.3 K under a 7 tesla magnetic field. This is more than twice as much as results previously reported in 2022.

The new wires are based on rare-earth barium copper oxide (REBCO). The wires use nanometer-sized columns of insulating, non-superconducting barium zirconate at nanometer-scale spacings within the superconductor that can help pin down magnetic vortices, allowing for higher supercurrents.

The researchers made these gains after a few years spent optimizing deposition processes, Goyal says. “We feel that high-temperature superconductor wire performance can still be significantly improved,” he adds. “We have several paths to get to better performance and will continue to explore these routes.”

Based on these results, high-temperature superconductor wire manufacturers “will hopefully further optimize their deposition conditions to improve the performance of their wires,” Goyal says. “Some companies may be able to do this in a short time.”

The hope is that superconductor companies will be able to significantly improve performance without too many changes to present-day manufacturing processes. “If high-temperature superconductor wire manufacturers can even just double the performance of commercial high-temperature superconductor wires while keeping capital equipment costs the same, it could make a transformative impact to the large-scale applications of superconductors,” Goyal says.

The scientists detailed their findings on 7 August in the journal Nature Communications.

This story was updated on 19 August 2024 to correct Amit Goyal’s title and affiliation.




co

Bluetooth Microscope Reveals the Inner Workings of Mice



This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore.

Any imaging technique that lets scientists observe the inner workings of a living organism in real time provides a wealth of information compared with experiments in a test tube. But while many such imaging approaches exist, they require test subjects—in this case rodents—to be tethered to the monitoring device, which limits the animals' ability to roam freely during experiments.

Researchers have recently designed a new microscope with a unique feature: It can transmit real-time imaging from inside live mice via Bluetooth to a nearby phone or laptop. Once the device has been further miniaturized, the wireless connection will allow mice and other test animals to roam freely, making it easier to observe them in a more natural state.

“To the best of our knowledge, this is the first Bluetooth wireless microscope,” says Arvind Pathak, a professor at the Johns Hopkins University School of Medicine.

Through a series of experiments, Pathak and his colleagues demonstrate how the novel wireless microscope, called BLEscope, offers continuous monitoring of blood vessels and tumors in the brains of mice. The results are described in a study published 24 September in IEEE Transactions on Biomedical Engineering.

Microscopes have helped shed light on many biological mysteries, but the devices typically require that cells be removed from an organism and studied in a test tube. Any opportunity to study a biological process as it naturally occurs in the body (“in vivo”) tends to offer more useful and thorough information.

Several different miniature microscopes designed for in vivo experiments in animals exist. However, Pathak notes that these often require high power consumption or a wire to be tethered to the device to transmit the data—or both—which may restrict an animal’s natural movements and behavior.

“To overcome these hurdles, [Johns Hopkins University Ph.D. candidate] Subhrajit Das and our team designed an imaging system that operates with ultra-low power consumption—below 50 milliwatts—while enabling wireless data transmission and continuous, functional imaging at spatial resolutions of 5 to 10 micrometers in [rodents],” says Pathak.

The researchers created BLEscope using an off-the-shelf, low-power image sensor and microcontroller, which are integrated on a printed circuit board. Importantly, it has two LED lights of different colors—green and blue—that help create contrast during imaging.

“The BLE protocol enabled wireless control of the BLEscope, which then captures and transmits images wirelessly to a laptop or phone,” Pathak explains. “Its low power consumption and portability make it ideal for remote, real-time imaging.”
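As a rough illustration of what wireless control and image transfer over BLE involve on the host side, here is a minimal Python sketch using the bleak library. The characteristic UUIDs, command byte, and framing are hypothetical placeholders; the BLEscope's actual GATT protocol is not described in this article.

    import asyncio
    from bleak import BleakClient

    # Hypothetical GATT characteristic UUIDs, for illustration only.
    IMAGE_CHAR_UUID = "0000abcd-0000-1000-8000-00805f9b34fb"
    CTRL_CHAR_UUID = "0000abce-0000-1000-8000-00805f9b34fb"

    frame = bytearray()

    def on_chunk(_sender, data: bytearray):
        # Append each notification payload; a real protocol would carry
        # sequence numbers and an end-of-frame marker.
        frame.extend(data)

    async def main(address: str):
        async with BleakClient(address) as client:
            await client.start_notify(IMAGE_CHAR_UUID, on_chunk)
            # Hypothetical command byte asking the scope to capture a frame.
            await client.write_gatt_char(CTRL_CHAR_UUID, b"\x01")
            await asyncio.sleep(5.0)  # collect notifications
            await client.stop_notify(IMAGE_CHAR_UUID)
        print(f"received {len(frame)} bytes")

    # asyncio.run(main("AA:BB:CC:DD:EE:FF"))  # device address is a placeholder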

Pathak and his colleagues tested BLEscope in live mice through two experiments. In the first scenario, they added a fluorescent marker into the blood of mice and used BLEscope to characterize blood flow within the animals’ brains in real-time. In the second experiment, the researchers altered the oxygen and carbon dioxide ratios of the air being breathed in by mice with brain tumors, and were able to observe blood vessel changes in the fluorescently marked tumors.
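Analyses like these typically quantify how the fluorescence signal changes over time. One standard metric, shown below as a generic sketch rather than the authors' actual pipeline, is the fractional fluorescence change ΔF/F computed against a per-pixel baseline.

    import numpy as np

    # Generic delta-F/F computation for a fluorescence time series; a
    # common analysis, not necessarily the one used in the paper.
    def delta_f_over_f(stack, n_baseline=30):
        """stack: (time, height, width) array of fluorescence frames."""
        f0 = stack[:n_baseline].mean(axis=0)  # per-pixel baseline image
        return (stack - f0) / (f0 + 1e-9)     # fractional change, safe divide

    frames = np.random.rand(100, 64, 64)      # stand-in for real image data
    dff = delta_f_over_f(frames)
    print(dff.shape, float(dff.max()))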

“The BLEscope’s key strength is its ability to wirelessly conduct high-resolution, multi-contrast imaging for up to 1.5 hours, without the need for a tethered power supply,” Pathak says.
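Taking the article's figures at face value, a quick power-budget estimate shows why sub-50-milliwatt operation matters for an untethered device; the battery voltage below is an assumption for illustration.

    # Back-of-the-envelope power budget from the figures quoted above.
    POWER_W = 0.050     # reported ultra-low power draw, 50 mW
    RUNTIME_H = 1.5     # reported untethered imaging time
    BATTERY_V = 3.0     # assumed nominal battery voltage

    energy_wh = POWER_W * RUNTIME_H
    capacity_mah = energy_wh / BATTERY_V * 1000
    print(f"energy used: {energy_wh * 3600:.0f} J ({energy_wh:.3f} Wh)")
    print(f"minimum battery capacity: ~{capacity_mah:.0f} mAh at {BATTERY_V} V")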

However, Pathak points out that the current prototype is limited by its size and weight. BLEscope will need to be further miniaturized, so that it doesn’t interfere with animals’ abilities to roam freely during experiments.

“We’re planning to miniaturize the necessary electronic components onto a flexible light-weight printed circuit board, which would reduce weight and footprint of the BLEscope to make it suitable for use on freely moving animals,” says Pathak.

This story was updated on 14 October 2024, to correct a statement about the size of the BLEscope.




co

How Did Attendees at a Behavioral Health Conference React to Trump’s Victory?

When it comes to the effects that the upcoming Trump presidency will have on healthcare, attendees’ attitudes ranged from cautiously optimistic to fairly anxious. Some of the issues they highlighted included mental health parity, telehealth prescribing flexibilities, and the role of Robert F. Kennedy Jr.





co

The Startup Economy is Turbulent. Here’s How Founders Can Recognize and Avoid Common Pitfalls

While startups in highly regulated industries like healthcare and finance are almost certain to face heightened scrutiny, there are controllable factors that can offset these challenges.





co

FDA Takes Step Toward Removal of Ineffective Decongestants From the Market

The FDA has proposed removing oral phenylephrine from its guidelines for over-the-counter drugs due to inefficacy as a decongestant. Use of this ingredient in cold and allergy medicines grew after a federal law required that pseudoephedrine-containing products be kept behind pharmacy counters.





co

How Can Healthcare Organizations Earn Trust with Marginalized Communities?

Access to care isn’t enough. Healthcare organizations need to build trust in order to reach underserved communities, experts said on a recent panel.





co

Closing Staffing Gaps in Healthcare by Utilizing Diverse Pipelines of Contingent Talent

By adopting a contingent workforce model and investing in the right data tools to power better-informed decision-making and talent strategy, healthcare organizations can begin to address staffing challenges and turn their talent goals into reality.





co

How One Massachusetts Maternal Mental Health Program Scaled Across the Country

During a recent panel, experts discussed the Massachusetts Child Psychiatry Access Program (MCPAP) for Moms and how it achieved scale.





co

Unlocking the Future of Radioligand Therapy: From Discovery to Delivering at Scale

As radiopharmaceuticals enter a new phase, industry leaders must rethink external services and internal capabilities to master the complexities of delivering advanced therapies.





co

Medications for Opioid Use Disorder Improve Patient Outcomes

In 2018, opioid overdoses in the United States caused one death every 11 minutes, resulting in nearly 47,000 fatalities. The most effective treatments for opioid use disorder (OUD) are three medications approved by the Food and Drug Administration (FDA): methadone, buprenorphine, and naltrexone.




co

To Help Combat COVID-19, Federal Government Should Enforce Health Data Rules

Breaking COVID-19’s chain of transmission requires effective physical distancing, contact tracing and rapid analyses of demographic data to reveal illness clusters and populations at high risk, such as people older than 65, Latinos and Blacks.




co

Diagnostic Test Regulation Should Rank High on Agenda of New Congress

Faulty diagnostic tests can compromise both patient care and the nation’s response to infectious diseases—as made all too clear earlier this month when the Food and Drug Administration issued a safety alert about a COVID-19 test that carries a high risk of false negative results.




co

Extended Medicaid Coverage Would Help Postpartum Patients With Treatment for Opioid Use Disorder

Between 1999 and 2014, opioid use disorder (OUD) among pregnant women more than quadrupled, risking the health of the women—before and after giving birth—and their infants. As states grapple with COVID-19’s exacerbation of the opioid crisis, several are taking innovative steps to address the needs of high-risk groups, including low-income, postpartum patients with OUD.




co

Despite COVID-19 Challenges Dental Therapy Had a Watershed 2020 and Is Poised to Grow

2020 was a difficult year for dental providers as the COVID-19 pandemic swept across the country. When stay-at-home orders went into effect in the spring, dental offices closed their doors to all but emergency patients.




co

Standard Technology Presents Opportunities for Medical Record Data Extraction

Technology has revolutionized the way people live their lives. Individuals can use smartphones to access their bank account, shop from almost any store, and connect with friends and family around the globe. In fact, these personal devices have tethered communities together during the coronavirus pandemic, allowing many people to maintain much of their lives remotely.




co

Day Three Notes – JP Morgan Healthcare Conference, San Francisco

Yesterday’s conference sessions surfaced interesting questions and approaches regarding the post-acute sector, bundled payment, emergency medicine and anesthesia. Post-Acute Focus: With more and more focus on the need to rationalize and re-organize the post-acute sector, we have seen multiple industry leaders start to evolve their strategies. I blogged yesterday about AccentCare’s interesting strategy in the...




co

En Banc: Federal Circuit Provides Guidance on Application of On-Sale Bar to Contract Manufacturers

Pharmaceutical and biotech companies breathed a sigh of relief Monday when the Federal Circuit unanimously ruled in a precedential opinion that the mere sale of manufacturing services to create embodiments of a patented product is not a “commercial sale” of the invention that triggers the on-sale bar of 35 U.S.C. § 102(b) (pre-AIA).[1] The en banc opinion...




co

Looking Forward/Looking Backward – Day 1 Notes from the JPMorgan Healthcare Conference

A large amount of wind, much discussion about U.S. healthcare, and the public getting soaked again – if you were thinking about Washington, DC and the new Congress, you’re 3,000 miles away from the action. This is the week of the annual JP Morgan Healthcare conference in San Francisco, with many thousands of healthcare...




co

Food for Thought (and Health): Day 2 Notes from the JP Morgan Healthcare Conference

Addressing the Social Determinants of Health: Is the healthcare industry pushing a rock up a hill? We collectively are trying to provide healthcare with improved quality and reduced cost, but the structure of the nation’s healthcare system remains heavily siloed with the social determinants of health often falling wholly or partly outside the mandate and...




co

The Old and the New – Day 3 Notes from the JPMorgan Healthcare Conference

Day 3 of the JPMorgan healthcare conference was one of striking contrasts between the old and the new. (And, by the way, the rain finally stopped for a day, but it will be back tomorrow to finish off the last day of the conference). The Old: Sitting in the Community Health Systems (CHS) presentation and...




co

Notes on Day 4 of the JPMorgan Healthcare Conference

Some interesting presentations on the last day of the JPMorgan Healthcare Conference concentrated on common themes – the increasing importance of ancillary business lines to bolster core business revenue and of filling in holes to achieve scale and full-service offerings. Genesis Healthcare – The largest U.S. skilled nursing facility (SNF) provider, which also is...






co

Statistical Model Building for Large, Complex Data: Five New Directions in SAS/STAT Software

This paper provides a high-level tour of five modern approaches to model building that are available in recent releases of SAS/STAT.