Study Sites: Too Many Vendors, Too Little Time

By Laurie Meehan


“I can’t get the IWRS to assign a kit number.”

“My ECG reports take forever to come back from the Core Lab.”

“The eCRF won’t let me create a new subject.”

“This stupid machine is blinking an error code again.”

Sound familiar?  Sprinkle in some colorful adjectives and it probably does -- these problems are common enough at clinical research sites.  Equipment and systems have become increasingly technical and specialized, and study site staff has had to contend with more technology than ever before.  And because of the proliferation of niche vendors who provide the new tech, sites have had to deal with more vendors than ever before, too.  



And how are problems like these typically resolved?  Someone at the study site works their way through a list of maybe 20 or more vendor contact numbers, places a call, navigates a series of menu options, and hopefully gets directed to someone who can help.  And that assumes the site calls the right company; with tightly integrated systems, it’s not always obvious in which vendor’s system the problem lies.  This is frustrating for sites.  It takes time.  It costs money (since “vendor wrangling” is seldom sufficiently covered in the budget).  And it keeps study staff from doing what study staff does best -- run the study, work with the study volunteers, and keep them safe.

So what’s the solution? 

Hint: It’s Not Training
Calm down.  Of course, adequate training on equipment and systems is important.  But training doesn’t solve every problem.  Training doesn’t keep equipment from malfunctioning.  Training doesn’t ensure vendors deliver what they’ve promised, when they’ve promised it.  Training can’t anticipate every situation or address every unusual site circumstance.  And training doesn’t turn people into infallible little machines; we make mistakes.  So in all these cases, we’re back to site personnel interacting with perhaps scores of vendors, by phone or email, all over the world.

The Solution: a Single Point of Contact
Q: How do you help sites interact with dozens of vendors?
A:  You don’t.  You do it for them.  Establish a single point of contact within the Sponsor* organization for a site to call when vendor issues arise. 

Why is this a good idea when the expertise to resolve the issue lies with the vendor?  Why is this a good idea when the introduction of a middleman may result in some inefficiencies?

Excellent questions.  Here are our responses. 

  • Better Vendor Oversight.  When sites filter their vendor issues through the Sponsor, the Sponsor can more easily track vendor performance.  Are there vendors that provide low-quality solutions, are repeatedly late, or are difficult to deal with?  At best, these vendors are wasting time and money, and aren’t good for business (let alone site relations).  At worst, these vendors are jeopardizing subject safety or study data integrity, and require immediate Sponsor intervention.

  • Better Site Oversight.  When sites filter their vendor issues through the Sponsor, the Sponsor can more easily track site performance.  Are there sites that routinely use equipment and computer systems incorrectly?  (Yes, now’s the time for that training.)  Are there high-performing sites that are able to work independently?  This information has always been important, but in an RBM paradigm, it’s essential.  Adaptive monitoring plans rely on ongoing site performance measurements so Sponsors can adjust resources accordingly.  A reduction in monitoring visits means less opportunity to assess a site’s comfort level with study technology.  The corollary of “if it ain’t broke, don’t fix it” is “if you don’t know it’s broke, you can’t fix it.”
  • Ability to Identify Pervasive Problems. After the third or fourth site reports the same problem, it’s clear that this is not an isolated occurrence.  Knowing that, the Sponsor can work with the vendor to resolve the problem before other sites experience the same troubles.

  • Better Functioning Sites.  We have a saying: “The Site Comes First."™  In our experience, all things being equal, Sponsors that put their sites first -- make things as easy as possible for the study coordinators -- get the best results.  They also build the good relationships that keep the best sites coming back to work on future studies.

  • Better Functioning Vendors.  The efficiencies for the vendor here are clear.  Who wouldn’t rather interact with a single point of contact than field individual calls from multiple study sites?  Plus, with far fewer players, miscommunication of both problem descriptions and problem solutions is less likely.  The Sponsor contact and the vendor contacts will eventually settle into common terminology and build a shared history of past issues and resolutions.

What Do You Think?
We know that not everyone espouses this idea, and we recognize there are probably other effective processes out there.  Sponsors, how do you help your sites deal with multiple vendors?  Sites, do you have experiences and/or suggestions you can share?  (Be kind, anonymize!)  Leave a comment here, visit our website, or send us an email.




____________________
*When we use the term “Sponsors” in this post, we’re including CROs that take on Vendor Management responsibilities on behalf of Sponsors.




Study Sites: Show 'Em Your QC!

Sites frequently want to know how they can stand out to Sponsors and CROs to win more studies.
Our advice: Implement internal QC procedures.

Sponsors and CROs we work with consider a tight quality control program to be evidence that a site can be counted on to produce reliable data. It shows that managing quality at your site is a continual process, and doesn’t wait for monitors to arrive. In a risk-based monitoring environment, this is an increasingly compelling attribute.

Where to Start: The Usual Suspects
It makes sense to focus your QC efforts on those areas where you’ve historically had the most problems. If the phrase “trend analysis” makes you want to jump through a window -- it's okay -- you can climb back inside. You don't have to do a trend analysis. We've identified three areas in which audit findings are common, along with advice on how to avoid them.



Adverse Events (AEs) and Concomitant Medications (ConMeds). Often two sides of the same coin, AE and ConMed documentation needs to tell a consistent story. If source documents indicate a study participant had a sinus infection, it must be documented on an AE page, and any associated medications documented on the ConMeds page. A medication noted on the AE page must have a corresponding notation on the ConMed page. And all start and end dates must match across the source, AE, and ConMeds pages.

Drug Accountability Records. Calculating compliance percentages and counting pills are positively uninteresting tasks, easy to mess up, and involve math (which for some people triggers terrifying flashbacks of word problems about trains leaving stations). Is it any wonder that drug accountability records are frequent sources of error? Do some spot-checking: verify that the number of returned tablets matches the tallies recorded for them and recheck compliance calculations.
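The arithmetic behind that spot-check is simple enough to sketch. Here’s an illustrative example, assuming the common convention that doses taken are inferred as tablets dispensed minus tablets returned (the function name and inputs are ours, not from any particular accountability system or protocol):

```python
def compliance_percent(dispensed: int, returned: int, expected: int) -> float:
    """Percent of expected doses actually taken between visits.

    Doses taken are inferred as dispensed minus returned -- the same
    inference a reviewer makes when recounting returned tablets.
    """
    if expected <= 0:
        raise ValueError("expected doses must be positive")
    taken = dispensed - returned
    return 100.0 * taken / expected

# Example: 60 tablets dispensed, 6 returned, 56 expected per the
# dosing schedule -> (60 - 6) / 56 * 100, roughly 96.4% compliance.
print(round(compliance_percent(60, 6, 56), 1))
```

Rechecking a handful of these calculations per chart takes minutes, and it’s exactly the kind of error (a transposed count, a wrong expected-dose denominator) that monitors and auditors catch routinely.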

Essential Documents. Maintaining a complete, organized, uniform set of essential documents is an important, yet decidedly unsexy task. That’s why it’s a good indicator of your commitment to quality; a site that is disciplined enough to keep tight control over its essential documents is likely to carry that control into all aspects of trial execution. Make sure to file all documents associated with protocol amendments, such as IRB approvals and revised informed consent forms -- our auditors find these are the items most frequently missing from the essential document set. 

Write It All Down
Document your QC procedures in an SOP. It will serve as training material for site staff and a repository for worksheets and checklists.

There’s no magic organization for this QC SOP. A general set of instructions could outline how reviewers can verify that all documents follow ALCOA principles. For example, on (paper) source documents, are all pages and required signatures present? Are entries legible? Are corrections initialed, dated, and explained? Does the data make sense and lie within expected ranges? Have all data elements been populated? (Tip: turn the paper upside down to catch missing data.)

Checklists that are focused on particular types of documents should be as specific as possible. For example, QC reviews of source documents for screening visits would verify that the correct informed consent form was used, administration of consent was documented, medical release forms were sent if required, demographics were correct, all labs were received, reviewed, and signed, all protocol assessments were completed, and all inclusion/exclusion criteria were met and documented.

A Virtuous Cycle
Though QC is designed to control quality, performing it over time may actually improve quality. Results of QC reviews often suggest revisions you should make to your tools and operations to reduce error in the future.

Okay, you can climb back through the window again -- no one said CAPA. (But wouldn't that be impressive?)


Showcasing Site QC Processes
Does implementing a QC program require resources and time? Yes, and that’s the point. It’s evidence to Sponsors and CROs of your commitment to running a quality study. Not only that, but it demonstrates a proper respect for your study participants by ensuring their data can be used.

Oh, and make sure you highlight your QC program on feasibility questionnaires. It’s something to brag about.

________________________________________________________________________
A version of this article originally appeared in InSite, the Journal of the Society for Clinical Research Sites