Ensuring Quality Statistical Output

How do the experts at PharPoint produce quality statistical analyses?

It all starts with strategic planning of the analysis. This includes everything from how the study team is selected to how programming is implemented and reviewed.

PharPoint’s extensive use of independent verification and review of all statistical output, followed by an additional review from our highly reputable statisticians, helps ensure that nothing is delivered before it has been thoroughly vetted by our experienced team. Learn more about PharPoint’s process for ensuring the delivery of quality statistical output below.

Quality Statistical Output Begins with Team Selection and Kick-off

When a project is awarded, our managers collaborate within and across functional areas to identify the leads and team members best suited to the project. Once the team has been selected, a kick-off meeting is scheduled that brings together the team members for all contracted services.

An effective kick-off meeting

At kick-off, discussions should include:

Study team selection

Considerations for selecting a study team should include:

  • A team member’s previous experience with the client
  • The therapeutic area of the project
  • The phase of the project

 

Key Components for Statistical Programming

Once a project moves to the programming phase, PharPoint’s process ensures top quality output.

One key component of our statistical programming is the use of custom programming, which gives us maximum flexibility to tailor datasets and outputs to each study. Other CROs often rely instead on rigid tools or macros that generate outputs from a pre-set library of templates.

Independent programming is our preferred and most widely used method of verifying statistical output. With independent programming, one programmer, who’s referred to as the production programmer, creates the dataset or output based on the study documents and specifications.

Separately, another programmer, referred to as the validation programmer, is given the same documents and asked to programmatically recreate the results.

The validation programmer then compares their results against the original output using an electronic comparison procedure, while also visually inspecting the output. The two programmers work together to resolve any differences between the two sets of results. Once all differences are resolved, the results are considered independently verified by double programming.
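To make the comparison step concrete, here is a minimal sketch of what an electronic comparison can look like, assuming both programmers write their results to CSV files. The file names and key variables are hypothetical, and in a SAS environment this step is commonly performed with PROC COMPARE instead.

```python
# Minimal sketch of an electronic comparison between the production and
# validation programmers' outputs. File names and key variables below are
# hypothetical; in a SAS environment this step is typically PROC COMPARE.
import pandas as pd

KEY_VARS = ["USUBJID", "PARAMCD", "AVISIT"]  # assumed record keys

prod = pd.read_csv("production/adlb.csv").set_index(KEY_VARS).sort_index()
valid = pd.read_csv("validation/adlb.csv").set_index(KEY_VARS).sort_index()

# Structural checks: both programmers must produce the same records and columns.
assert prod.index.equals(valid.index), "Record-level mismatch between programmers"
assert list(prod.columns) == list(valid.columns), "Column mismatch between programmers"

# Value-level comparison; an empty result means the two outputs agree exactly.
diffs = prod.compare(valid)  # 'self' = production, 'other' = validation
if diffs.empty:
    print("Outputs match: independently verified by double programming.")
else:
    print(f"{len(diffs)} record(s) differ; the programmers resolve these together:")
    print(diffs)
```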

As an extra level of validation, if a study is contracted to map the raw data into SDTM (the Study Data Tabulation Model), the production analysis datasets are programmed from the SDTM datasets.

On the validation side, the analysis dataset is programmed from the raw data. This provides an extra level of quality control, confirming that the data, and therefore the results, are not altered when they are mapped into SDTM.

This same process is used any time SDTM datasets feed directly into an output. This is a unique part of PharPoint’s process, and we feel it is important for ensuring the quality of the outputs and the results.
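As an illustration of this dual-path check, the sketch below derives the same analysis records once from a mapped SDTM domain (the production path) and once directly from the raw data (the validation path), then flags any record or value that differs. All dataset and column names here are hypothetical.

```python
# Illustrative dual-path check; dataset and column names are hypothetical,
# and the example assumes one record per subject and parameter.
import pandas as pd

def derive_from_sdtm(vs: pd.DataFrame) -> pd.DataFrame:
    """Production path: analysis records built from the mapped SDTM VS domain."""
    out = vs.rename(columns={"VSTESTCD": "PARAMCD", "VSSTRESN": "AVAL"})
    return out[["USUBJID", "PARAMCD", "AVAL"]]

def derive_from_raw(raw: pd.DataFrame) -> pd.DataFrame:
    """Validation path: the same records built directly from the raw CRF extract."""
    out = raw.rename(columns={"subject_id": "USUBJID",
                              "test_code": "PARAMCD",
                              "result": "AVAL"})
    return out[["USUBJID", "PARAMCD", "AVAL"]]

production = derive_from_sdtm(pd.read_csv("sdtm/vs.csv"))
validation = derive_from_raw(pd.read_csv("raw/vitals.csv"))

keys = ["USUBJID", "PARAMCD"]
merged = production.merge(validation, on=keys, how="outer",
                          suffixes=("_prod", "_valid"), indicator=True)

# A record present in only one path, or carrying different values, points to a
# problem introduced during SDTM mapping or a derivation (missing values are
# treated as discrepancies in this simplified check).
mismatches = merged[(merged["_merge"] != "both") |
                    (merged["AVAL_prod"] != merged["AVAL_valid"])]
print(f"{len(mismatches)} potential mapping/derivation discrepancies")
```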

 

Review for Completeness & Accuracy

Once the data is final and the statistical outputs are complete (meaning they have already been independently verified), the study statistician reviews them for completeness and accuracy.

The goal of the review is to ensure that the outputs are internally consistent and make sense given the data provided. The review is also used to identify any potential data issues to feed back to data management and to confirm that the outputs are in line with the statistical analysis plan (SAP). Most importantly, we want to ensure the quality of the outputs.
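While this review is performed by the study statistician, simple scripted cross-checks can supplement it. The sketch below, with purely hypothetical file and column names, confirms that the treatment-group counts displayed in a summary table agree with the analysis dataset.

```python
# Hypothetical cross-check supporting the statistician's review: confirm the
# treatment-group Ns shown in a demographics table match the analysis dataset.
import pandas as pd

adsl = pd.read_csv("adam/adsl.csv")                # subject-level analysis dataset
table = pd.read_csv("outputs/t_demog_counts.csv")  # counts as displayed in the TLF

# Expected counts: unique subjects in the safety population per treatment arm.
expected = (adsl[adsl["SAFFL"] == "Y"]
            .groupby("TRT01A")["USUBJID"]
            .nunique())

for _, row in table.iterrows():
    n_dataset = int(expected.get(row["TRT01A"], 0))
    if row["N"] != n_dataset:
        print(f"{row['TRT01A']}: table shows N={row['N']}, dataset has {n_dataset}")
```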

About PharPoint Research:

PharPoint is a right-sized, client-focused, and award-winning CRO that works with sponsors of all sizes to improve global health.


RELATED RESOURCES

EBOOK

Standard Clinical Trial Timelines: A Sponsor’s Guide to Evaluating Biometrics CROs

This brief guide provides timeline benchmarks for sponsors evaluating biometrics contract research organizations (CROs).


Exploring Standard CRO Timeline Benchmarks

Preparing to work with a top biometrics contract research organization and wondering how its promised data management, biostatistics, and medical writing timelines compare to the industry average? To help sponsors dig into these details and confirm that the timelines they’re receiving are competitive, we’re providing PharPoint’s typical timelines alongside research that calculates industry-standard timelines, where available.

Our hope is that this document can help sponsors set realistic expectations, confidently ask the right questions of their vendors, and ultimately partner with a top biometrics CRO that keeps their study moving: because patients are waiting.

 

eBook contents include:

PharPoint’s short eBook, Standard Clinical Trial Timelines: A Sponsor’s Guide to Evaluating Biometrics CROs, includes the information below.

  • Standard database build timeline
    • The bigger picture: Considering site identification and study start-up
  • Standard mid-study database change timeline
  • Standard database lock time
    • Six strategies for a faster database lock
  • Evaluating database lock to top line results timeline
    • Ensuring a rapid delivery
  • Evaluating database lock to delivery of tables, listings, and figures (TLF) timeline
  • Standard clinical study report delivery timeline
  • About PharPoint Research


