The Journey of Clinical Trial Data Points

From Patient to Tables, Listings, and Figures (TLFs)

When you look at the package insert of a drug you're taking, have you ever wondered about the journey clinical trial data takes before it ends up in the tables, listings, and figures (TLFs)?

In this article, we cover:

  • Ensuring Compliance
  • The Journey from Patient to Raw Dataset
  • How Do Queries Get Issued?
  • Moving from Raw Data to SDTM Datasets
  • Creating an ADaM Dataset
  • Change from Baseline Vital Sign Table

 

Ensuring Compliance

Clinical trials are heavily regulated to ensure participant safety, good clinical practice, responsible research conduct, and quality data collection. We can’t properly explore the journey clinical trial data makes without first diving into some of the ways we ensure compliance.

What is a TLF?

Tables, listings, and figures (TLFs) are the outputs used when writing the Clinical Study Report (CSR), publishing results on ClinicalTrials.gov, and developing the package insert for approved products.

What is EDC?

EDC, or Electronic Data Capture, is a system used to store patient data collected in clinical trials. All EDC systems that PharPoint uses go through a validation process for both the system and the study-specific database, and are 21 CFR Part 11 compliant.

What is 21 CFR Part 11?

Put simply, 21 CFR Part 11 is the FDA’s regulation of electronic records and signatures.

Within an EDC system, all actions that create or modify electronic data records are maintained in a computer-generated, time-stamped audit trail that independently records the date and time of operator entries and actions.


Dataset and analysis programming is performed using SAS, which also goes through a validation process during installation. All TLF output produced at PharPoint is 21 CFR Part 11 compliant; each output includes a date and time stamp at the bottom, which is one of the key components of Part 11 compliance.

The programs used to produce the output and datasets are batch submitted, producing a log file that shows the date and time of program execution along with the executed code. These logs are reviewed to ensure there are no errors or warnings during the execution of each program.
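The log review described above can be sketched as a simple scan for flagged lines. The production checks run against SAS logs; the function name, log text, and matching rules here are illustrative assumptions, not PharPoint's actual review tooling:

```python
import re

def review_log(log_text):
    """Return log lines that begin with an ERROR or WARNING flag,
    in the style of a SAS log (illustrative sketch only)."""
    pattern = re.compile(r"^(ERROR|WARNING)[ :]")
    return [line for line in log_text.splitlines() if pattern.match(line)]

# A made-up log excerpt mimicking SAS's NOTE/WARNING prefixes.
log = (
    "NOTE: The SAS System started at 09:00.\n"
    "WARNING: Variable SYSBP not found in dataset RAW.VS.\n"
    "NOTE: PROCEDURE MEANS used (Total process time): 0.02 seconds.\n"
)
issues = review_log(log)
print(issues)  # the WARNING line is flagged for review
```

A clean run would return an empty list, which is the condition reviewers are looking for before a program's output is accepted.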

For each deliverable, all programs and logs used to generate the Study Data Tabulation Model (SDTM) datasets, Analysis Data Model (ADaM) datasets, and output are filed with the delivery in a data storage archive.

The Journey from Patient to Raw Dataset


In this journey, we're following clinical trial data through a hypothetical study that requires blood pressure information to be collected from patients.

While at the investigative site, our participant has his blood pressure taken; his systolic blood pressure is 120 mmHg. That data is recorded in the patient's source documentation.

Qualified site personnel then enter the information from that source documentation into the EDC system being used for the study. In this example, an error occurs while transferring the collected trial data into the EDC system: the participant's blood pressure is accidentally recorded as 86 mmHg instead of 120 mmHg.

An edit check fires in real time because the participant's systolic blood pressure was recorded as less than 90 mmHg. This opens a query stating that systolic blood pressure is typically greater than or equal to 90, and that the collected value was less than 90. With the error caught, the number is updated to the correct value: 120 mmHg.
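The edit check above can be sketched as a simple range rule. The 90 mmHg floor mirrors the check described in the text; the function name, query wording, and return convention are assumptions for illustration, not a real EDC rule set:

```python
def systolic_bp_edit_check(value_mmhg):
    """Return a query message if systolic BP is below the expected floor,
    or None if the value passes (illustrative sketch of an EDC edit check)."""
    if value_mmhg < 90:
        return (
            "Query: systolic blood pressure is typically >= 90 mmHg; "
            f"the collected value was {value_mmhg} mmHg. Please verify."
        )
    return None  # value passes; no query is issued

print(systolic_bp_edit_check(86))   # the mis-entered value opens a query
print(systolic_bp_edit_check(120))  # the corrected value passes silently
```

In a real EDC system such checks fire on page submission, and the resulting query stays open until site personnel respond and the data point is corrected or confirmed.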

An audit trail shows this change. This information includes the date and time of the change, the person making the change, and the reason the value was changed. In this scenario, the reason would be a Data Entry Error.
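A minimal sketch of such an audit entry, capturing the fields named above. The record layout, field names, and user ID are assumptions for illustration; a Part 11-compliant system generates these entries automatically and never allows them to be edited:

```python
from datetime import datetime, timezone

def record_change(audit_trail, field, old, new, user, reason):
    """Append an immutable, time-stamped audit entry for a data change."""
    audit_trail.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),  # date and time of the change
        "field": field,
        "old_value": old,
        "new_value": new,
        "changed_by": user,   # the person making the change
        "reason": reason,     # why the value was changed
    })

trail = []
record_change(trail, "SYSBP", 86, 120, "site_coordinator_01", "Data Entry Error")
print(trail[0]["reason"])  # Data Entry Error
```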

 

How Do Queries Get Issued?

In the case of our participant’s misrecorded blood pressure, a query was issued thanks to an edit check. There are a number of ways a query may be issued.

  • Edit checks from within the system, which fire upon submission of the eCRF page during data entry.
  • Data management writes edit checks outside of the system that are run periodically. A clinical data management team member may issue a query in the system based on these checks.
  • A CRA supporting the trial may issue a query in the system based on source document verification.
  • A clinical operations team member may issue a query in the system based on issues found during remote monitoring.

Moving from Raw Data to SDTM Datasets

Data is mapped into SDTM using SAS 9.4 or higher following a specified version of the SDTM implementation guide (SDTMIG). Both the implementation guide version and the SDTM version will be noted for each study.

What is SDTM? Study Data Tabulation Model (SDTM) is the CDISC standard for organizing and formatting the database of record, which includes data from the EDC, all vendor data collected outside of the EDC, and protocol deviations.

SDTM streamlines the process for data collection, management, analysis, and reporting. SDTM is required for data submission to the FDA (US) and PMDA (Japan).

PharPoint uses independent double programming to generate and validate the SDTM datasets. The final SDTM datasets for an NDA or BLA submission also include trial-level domains. These additional domains, which provide information about the study, can be added when preparing a submission or at the time the SDTM is being created. For a submission, item 11 documentation in the form of define.xml and a hyperlinked aCRF is generated. The SDTM is checked against the standards using Pinnacle 21. A reviewer's guide is also produced.
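To make the mapping step concrete, here is a sketch of one raw blood-pressure record mapped to SDTM Vital Signs (VS) domain variables. The SDTM variable names follow the SDTMIG VS domain, but the raw field names, study ID, and mapping function are assumptions for illustration (in practice this is done in SAS against the study's mapping specifications):

```python
def map_to_vs(raw, studyid):
    """Map one raw systolic blood-pressure record to SDTM VS domain
    variables (illustrative sketch, not a full mapping program)."""
    return {
        "STUDYID": studyid,
        "DOMAIN": "VS",
        "USUBJID": f"{studyid}-{raw['subject_id']}",  # unique subject ID
        "VSTESTCD": "SYSBP",
        "VSTEST": "Systolic Blood Pressure",
        "VSORRES": str(raw["sysbp"]),       # result as originally collected
        "VSORRESU": "mmHg",                 # original units
        "VSDTC": raw["visit_date"],         # ISO 8601 date, per SDTM
    }

# Made-up raw record from the EDC export.
raw_record = {"subject_id": "001", "sysbp": 120, "visit_date": "2019-06-04"}
vs_row = map_to_vs(raw_record, "ABC-101")
print(vs_row["USUBJID"])  # ABC-101-001
```

A real mapping also carries visit, position, and timing variables, and every derivation is documented in the define.xml so reviewers can trace each SDTM value back to its source.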

What is a reviewer’s guide? A reviewer’s guide is a document that provides regulatory reviewers with additional context for SDTM datasets that are received as a part of a regulatory submission.

Creating an ADaM Dataset

ADaM derived datasets are generated using SAS version 9.4 or higher following a pre-specified version of the ADaM implementation guide (ADaMIG). The ADaM version and implementation guide version will be noted for each study.

ADaM is the CDISC standard for analysis datasets and one of the required standards for data submission to the FDA (US) and PMDA (Japan). These datasets support traceability among analysis results, analysis data, and the data represented in SDTM.

PharPoint uses independent double programming to generate and validate the ADaM datasets. For a submission, item 11 documentation in the form of define.xml is generated. The ADaM is checked against the standards using Pinnacle 21. A reviewer's guide is also produced to provide regulatory reviewers with additional context for the ADaM datasets.
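A typical ADaM derivation for this article's example is change from baseline in a vital-signs analysis dataset, using the standard ADaM variable names AVAL, BASE, and CHG. The baseline rule below (first value on or before study day 1) and the record layout are assumptions for illustration; the actual baseline definition is study-specific and pre-specified:

```python
def derive_chg(records):
    """Derive ADaM-style BASE and CHG (CHG = AVAL - BASE) for one
    subject's longitudinal values. Baseline here is the first value
    on or before study day 1, purely for illustration."""
    baseline = next((r["AVAL"] for r in records if r["ADY"] <= 1), None)
    for r in records:
        r["BASE"] = baseline
        if baseline is not None and r["ADY"] > 1:
            r["CHG"] = r["AVAL"] - baseline  # post-baseline change
        else:
            r["CHG"] = None                  # no change at/before baseline
    return records

# One subject's systolic BP at baseline (day 1) and week 1 (day 8).
subj = [
    {"ADY": 1, "AVAL": 120},
    {"ADY": 8, "AVAL": 114},
]
derive_chg(subj)
print(subj[1]["CHG"])  # -6
```

Keeping AVAL, BASE, and CHG side by side on every record is what gives ADaM its traceability: a reviewer can recompute any analysis value directly from the dataset.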

Change from Baseline Vital Sign Table

SAS version 9.4 or higher is used to produce all output for the study.

The tables and listings are verified using independent double programming. The points on the figures are verified electronically. In addition, the figures are checked by hand against the outputs they support or against the SAS output from a Kaplan-Meier (KM) analysis.

Every output includes a footer that notes the location of the program and the date and time that it was executed. The verification of all output is documented on a verification form.
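As a sketch of how a summary row and its time-stamped footer fit together, the snippet below summarizes change-from-baseline values and stamps an execution footer. The layout, program name, and footer format are assumptions for illustration; the production tables are produced in SAS:

```python
from datetime import datetime
from statistics import mean, stdev

def change_from_baseline_row(chg_values, label, program="t_vs_chg.py"):
    """Build one summary-table row (n, mean, SD) plus a footer noting
    the program and execution time (illustrative layout only)."""
    row = (
        f"{label:<12} n={len(chg_values):<4} "
        f"mean={mean(chg_values):6.1f} SD={stdev(chg_values):5.2f}"
    )
    footer = f"Program: {program}  Executed: {datetime.now():%d%b%Y %H:%M}"
    return row, footer

# Made-up change-from-baseline values for four subjects at week 1.
row, footer = change_from_baseline_row([-6, -4, -10, 2], "Week 1")
print(row)
print(footer)
```

The footer line is the piece the article highlights: every delivered output carries the program location and execution date/time so it can be traced back to a reviewed log.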

 

Contributors

This article was originally written in 2019 with contributions from:

Amy Flynt, PhD – Sr. Director Strategic Consulting Operations, Biostatistics 

Baker Sharpe – Director, Data Management 

Sheri Holt – Manager, Database Programming 

 

About PharPoint Research

PharPoint Research is an award-winning contract research organization (CRO) that offers clinical operations and project management, biostatistics and statistical programming, and data management services to innovative clients of all sizes.

PharPoint has supported 1,000+ clinical trials across multiple therapeutic areas with a high repeat business rate. To learn more about how PharPoint Research can support your upcoming or current study, reach out to a representative.


RELATED RESOURCES

EBOOK

Standard Clinical Trial Timelines: A Sponsor’s Guide to Evaluating Biometrics CROs

This brief guide provides timeline benchmarks for Sponsors evaluating biometrics contract research organizations (CROs).


Exploring Standard CRO Timeline Benchmarks

Preparing to work with a top biometrics contract research organization and wondering how their promised data management, biostatistics, and medical writing timelines match up to the industry average? To help sponsors dig into these details and ensure the timelines they’re receiving are competitive, we’re providing PharPoint’s typical timelines alongside research that calculates industry standard timelines, when available.

Our hope is that this document can help sponsors set realistic expectations, confidently ask the right questions of their vendors, and ultimately, partner with a top biometrics CRO that keeps their study moving: because patients are waiting.

 

eBook contents include:

PharPoint’s short eBook, Standard Clinical Trial Timelines: A Sponsor’s Guide to Evaluating Biometrics CROs, includes the below information.

  • Standard database build timeline
    • The bigger picture: Considering site identification and study start-up
  • Standard mid-study database change timeline
  • Standard database lock time
    • Six strategies for a faster database lock
  • Evaluating database lock to top line results timeline
    • Ensuring a rapid delivery
  • Evaluating database lock to delivery of tables, listings, and figures (TLF) timeline
  • Standard clinical study report delivery timeline
  • About PharPoint Research

Compare Standard Clinical Trial Timelines


