Process Improvement – Cut Complexity Costs


Written By: Conor Smith, Technical Writer, Midwest

Quality Document (QD) organization and scaling is an important consideration. There are a number of reasons for establishing and maintaining QDs, and many of the challenges associated with managing QDs are consistent across different industries and applications.

One such challenge is complexity costs. In general, complicated systems are more difficult and expensive to maintain than simple systems. A set of complicated, inter-related documents requires much more time to update than a simple set of documents. In part, this is because QDs typically cross-reference other related QDs, so updating one document requires modifying other documents as well. As a system of documents becomes more inter-related, the time to modify elements of the system versus the total number of elements is more accurately modeled by an exponential curve than by a linear approximation. Taken to the extreme, an accurate model incorporates the fact that, in practice, very large numbers of documents become impossible to update. The example in Figure 1 is aggregated and simplified based on experience in the pharmaceutical industry:

Figure 1: A simple model of time vs. number of documents to edit. Document editing time is not linear (blue line) in the number of documents; the actual editing time is exponential (red line). The difference (dotted green line) between the curves can be interpreted as the true added value of harmonization. A. For a small number of documents (20) the difference in editing time is small: 40 days for ‘ideal’ vs. 49 days for ‘actual’, a difference of 9 days. B. For a larger number of documents (40) the difference is larger: 80 days for ‘ideal’ vs. 119 days for ‘actual’, a difference of 39 days.

Linear model (ideal): Assuming 2 days editing per document, decreasing the number of documents from 40 to 20 would save the company 40 days in editing time. In other words, ideally, a 50% decrease in the number of documents from 40 to 20 is a 50% decrease in editing time.

However, for a complicated system the editing time increases exponentially with the number of related documents.

Exponential model (actual): Decreasing the number of documents from 40 to 20 saves closer to 70 days of editing time. On the ‘difference’ curve in Figure 1, going from point ‘B’ (39 days difference at 40 documents) to point ‘A’ (9 days difference at 20 documents) is 30 days, so the complexity savings alone is roughly a month of editing time. With the exponential model, a 50% decrease in the number of documents from 40 to 20 is a 58% decrease in editing time.
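The two curves in Figure 1 can be reproduced with a short numerical sketch. This is a minimal Python illustration, assuming the exponential curve takes the form a·e^(b·n) fitted to the two points reported in the figure (49 days at 20 documents, 119 days at 40 documents):

```python
import math

def ideal_days(n_docs, days_per_doc=2):
    # Linear model: editing time grows proportionally with document count.
    return days_per_doc * n_docs

def actual_days(n_docs):
    # Exponential model a*e^(b*n), fitted to the two points in Figure 1:
    # 49 days at 20 documents and 119 days at 40 documents.
    n1, t1, n2, t2 = 20, 49, 40, 119
    b = math.log(t2 / t1) / (n2 - n1)
    a = t1 / math.exp(b * n1)
    return a * math.exp(b * n_docs)

for n in (20, 40):
    print(n, ideal_days(n), round(actual_days(n)),
          round(actual_days(n) - ideal_days(n)))
```

Under this fitted model, halving the document count from 40 to 20 cuts actual editing time from roughly 119 to 49 days, the 58% reduction described in the exponential model.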

All this really means is that there is value in reducing complexity costs and that, quantitatively, the value is greater than if there were no cost associated with complexity. In practice, there are many strategies for reducing the costs due to complexity; one simple example is reducing the number of documents.

Finally, the most cost-effective strategy is to mitigate complexity costs by organizing QD systems before they become overly complex. PSC Software’s highly configurable electronic Quality Management System (eQMS), ACE™, is designed to provide the framework for a highly organized and integrated QD system. Additional offerings, including consulting, project management, and execution services, will optimize your system processes and prevent runaway costs due to unnecessarily complex documentation. For more information and to schedule a free demo, visit https://www.pscsoftware.com/contact-us

How To Write An Effective Validation Master Plan

A Validation Master Plan, or VMP, summarizes how you will qualify the facility, equipment, process, or product. A VMP is part of your validation program, which includes process validation, facility and utility qualification and validation, cleaning and computer validation, equipment qualification, and so on. It is a key document in the current GMP (Good Manufacturing Practice) regulated pharmaceutical industry.

Validation Master Plans help organizations define validation strategies and maintain control over a particular process. The VMP is quite different from a validation procedure, which explains how to perform specific validation activities. Your VMP also helps you define anticipated resource needs and delivers key input into the scheduling of project timelines. It documents the scope of the validation effort: impacted products, processes, facilities, procedures, equipment, and utilities. Let’s take a look at its functions:

  1. Management education: Top management is not always aware of the real requirements for validations and qualifications. They generally focus on finances and business processes. The VMP helps educate management by presenting a summary assessment of what it will take to get the job done.
  2. Project monitoring and management: It includes validation schedules and the timeline for the completion of the project.
  3. Audit the validation program: It includes all activities related to the validation of processes and the qualification of manufacturing equipment and utilities.
  4. Planning purposes: It defines anticipated resource needs and offers key inputs into the scheduling of project timelines.
  5. Documenting the scope of the validation effort: It says what you plan to do and, most importantly, what you will not do.

How to Write a VMP

Have one lead author write your VMP, but use your experts. A team-writing approach can be beneficial because it draws on the skills and knowledge of people from different parts of the operation, which increases the assurance that all processes, utilities, equipment, and systems will be addressed. A good VMP is an easy-to-follow plan.

Just write down what you want to do, how you will do it, what you need to do it, what deliverables you will have, and when you will do it. Remember, the reader did not help you write your VMP, so say what you mean in plain language. Keep in mind that your VMP should be as long as required to present the plan in the necessary detail. A good VMP is your plan for validation success.

Why Gap Assessments are Important to Accelerate Success

Written by: Crystal Booth, M.M.- Regional Manager, Southeast USA at PSC Biotech

Are you finding yourself with repeated deviations or complex processes that slow you down? Did you go into an audit feeling ready, only to come out the other side blindsided by a 483?

Making multiple changes to processes over time, whether to prevent deviations or to adapt to changing regulations, may create hidden obstacles, unnecessary or redundant steps, and broken links in procedures. You may end up with standard operating procedures that no longer talk to one another properly, creating more deviations when an employee gets confused.

Defining your desired state of operations and charting a path to it can be overwhelming. How do you get to your future state without stopping and starting all over? A gap assessment can help identify broken links and streamline processes to accelerate your success.

A gap assessment compares your current state of operations to your desired state of operations. In doing so, the analysis identifies gaps and areas for improvement. In general, the steps of a gap assessment include:

  1. Identifying and documenting your future goals or desired future state of operations.
  2. Identifying and documenting your current state of operations.
  3. Comparing the current state of operations versus the future state of operations.
  4. Using gap analysis tools, such as Ishikawa, to find potential gaps and identify potential solutions.
  5. Evaluating the potential solutions by developing a plan to test one of the solutions.
  6. Testing one of the potential solutions with a small-scale study to see if processes improve.
  7. Analyzing the results of the study.
  8. Creating a plan to bridge the gap and implement the successful change into the routine process.
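Step 3 above, comparing the current state against the future state, can be sketched as a simple comparison of capability inventories. This is a toy Python illustration; the capability names are purely hypothetical:

```python
# Hypothetical capability inventories for the two states of operations.
current_state = {"paper batch records", "quarterly trending", "manual deviation log"}
desired_state = {"electronic batch records", "quarterly trending", "eQMS deviation tracking"}

gaps = desired_state - current_state       # capabilities still to build
to_retire = current_state - desired_state  # redundant steps to streamline or replace
aligned = current_state & desired_state    # processes already in their future state

print(sorted(gaps))
```

In practice the comparison is qualitative and documented, but the logic is the same: anything in the desired state that is missing from the current state is a gap to bridge.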

PSC Biotech™ provides custom-fit options to help companies perform gap assessments of their operations. Because regulations and guidance documents are periodically updated to help the industry adapt to current Good Manufacturing Practices, adjusting to the changes can be difficult for small and large companies alike. Experienced consultants are available to perform gap analyses to ensure your current processes are compliant.

PSC Biotech™ has a wide variety of solutions to ensure success for any size company. Some of our offerings include consulting, performing risk assessments, writing standard operating procedures, writing protocols, writing white papers, project management, and even executing projects to free up your company’s valuable resources. Whatever your need may be in the life science industry, PSC Biotech™ will be there to help. Give us a call today!

How Computer Systems Validation Can Make or Break Your Business

What Is Computerized System Validation (CSV)?

Computerized System Validation (CSV) is defined by the FDA as “confirmation by examination and provision of objective evidence that software specifications conform to user needs and intended uses, and that the particular requirements implemented through software can be consistently fulfilled” (General Principles of Software Validation: Final Guidance for Industry and FDA Staff).

In layman’s terms, CSV is the line of work where regulated companies validate their software applications by executing different validation projects in order to prove their software is working properly.

Why Is Computerized System Validation Important to My Business?

There are many reasons why Computerized System Validation is important, especially if you work in a highly regulated industry. If your business falls into that category, you are likely familiar with the validation of methods, processes, equipment, or instruments to ensure your science is of high quality. CSV is no different: it is integral to ensuring the quality and integrity of the data that supports the science. If the FDA or any other regulatory body inspects your company, you can guarantee they will check on this.

Common Computerized System Validation Mistakes

The goal of CSV is to prove that computers and software will work accurately and consistently in any situation, in compliance with the requirements of the relevant regulatory bodies.

CSV testing activities never truly end: CSV happens throughout the whole software development lifecycle (SDLC), from system implementation to retirement.

This leaves ample opportunity for error. Some of the more common mistakes in the industry include:

  1. Poor Planning – Insufficient resources and inaccurate timelines.
  2. Inadequate Requirements – Typically too few, too many, too detailed, or too vague.
  3. Test Script Issues – Execution errors, inadequate testing, poor test incident resolution, and over-reliance on vendor testing.
  4. Project Team Issues – Poor buy-in from stakeholders and unavailability of key personnel at key times.
  5. Inadequate Focus on the Project – Resources pulled back to their day jobs and insufficient managerial support.
  6. Wasting Time on Low-Value Testing Activities – Typically due to inadequate risk and criticality assessments.

Endotoxin OOS and the Quest for the Root Cause

Abstract

Out-of-specification endotoxin results are occasionally obtained. When this occurs, an investigation must be performed; a decision to reject the batch does not remove the requirement to investigate. Finding the root cause of endotoxin contamination early can aid the control, clean-up, corrective actions, and preventative actions that may be required to protect the company, the products, and the patient. This article discusses endotoxin out-of-specification results and root cause investigations.

Overview of the bacterial endotoxin assay

The bacterial endotoxins test (BET) is described in the United States Pharmacopeia (USP) <85> Bacterial Endotoxins Test, European Pharmacopoeia (EP) 2.6.14 Bacterial Endotoxins, and the Japanese Pharmacopoeia (JP) 4.01 Bacterial Endotoxins Test. The majority of these compendial chapters are harmonized with each other, and the portions that are not harmonized are marked as such [9].

The test is used to detect or quantify endotoxins from Gram-negative bacteria using amoebocyte lysate from the horseshoe crab (Limulus polyphemus or Tachypleus tridentatus) [9]. Simply put, coagulogens in the amoebocyte lysate clot in the presence of endotoxins to create a semi-solid mass or clot [11]. The BET assay is often used to test raw materials, water, components, parenteral finished products, medical devices and stability samples.

There are three techniques described in the compendial chapters for the test. The techniques include the gel-clot technique, the turbidimetric technique, and the chromogenic technique.

The gel-clot technique most resembles the first technique discovered and is often relied upon as the referee test in the compendial chapters [9]. The gel-clot test is manually intensive. Dilutions are made, mixed with limulus amoebocyte lysate (LAL) in a test tube, and incubated at 37°C ± 1°C for a period of 60 ± 2 minutes. Following the incubation period, the tubes are inverted and observed for clots. Anything other than a solid clot is considered negative. This test has earned the nickname of the wet-hand test because negative results have the potential to fall onto a technician’s hand. One must be cautious when reading this test because the gel clots could loosen if the tubes are jarred too much while analyzing the results.

The turbidimetric technique is based on the development of turbidity after cleavage of an endogenous substrate [9]. This test also involves dilutions and mixing the testing samples with LAL. The test samples, controls, and LAL are typically spiked into a 96 well microtiter plate as opposed to test tubes. A photometric instrument is required to incubate the plate and measure the rate of turbidity change during the assay [9].

The chromogenic technique requires dilutions and mixing the testing samples with a specialized LAL reagent. The test samples, controls, and specialized LAL are typically spiked into a 96 well microtiter plate as opposed to test tubes. The technique is based on the development of color after cleavage of a synthetic peptide-chromogen complex and requires the use of a spectrophotometer. The spectrophotometer is used to incubate the plate and measure the rate of the color change [9].

When setting appropriate specifications for the test, the material should be researched. The compendial monographs may be checked to see if specifications or methods are already established. Individual monographs for specific raw materials may not be harmonized with one another in the various regions. However, utilizing the most stringent criteria from the various regions will allow one method to be developed that is globally compliant.

The USP chapter on endotoxin testing, USP <85>, describes how to calculate endotoxin limits. The endotoxin limit (EL) is equal to K divided by M and is expressed as follows [9]:

EL = K/M

In the equation, K is a threshold pyrogenic dose of endotoxin per kilogram (kg) of body weight. Five (5) endotoxin units (EU)/ kg is used in the calculation for parenteral drugs and two (2) EU/kg is used in the calculation for intrathecal drugs [9]. M is equal to the maximum recommended bolus dose of product per kg of body weight. M can also be the maximum total dose received in a single hour period when the product is injected at frequent intervals or infused continuously [9].

When performing the calculation, the average human weight is typically considered to be 70 kg. However, some countries, such as Japan, utilize 60 kg as the average human weight. Using the calculation, 350 EU are allowed for a 70 kg human per hour. Many companies add an additional safety factor when calculating endotoxin limits to ensure that consumers are safe [4].

It is important to consider all contributing sources of endotoxin when setting a specification. For example, all components of the final product (e.g., water, raw materials, active pharmaceutical ingredient [API], etc.) will contribute some endotoxin, and this total amount of endotoxin should not exceed the maximum calculated value.
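The EL = K/M calculation above can be worked through in a few lines. This is a minimal sketch; the product dose used in the example is hypothetical:

```python
def endotoxin_limit(k, m):
    # EL = K / M, where K is the threshold pyrogenic dose of endotoxin
    # (EU per kg of body weight) and M is the maximum recommended dose
    # of product per kg of body weight.
    return k / m

K_PARENTERAL = 5.0   # EU/kg, used for parenteral drugs
K_INTRATHECAL = 2.0  # EU/kg, used for intrathecal drugs

# Hypothetical parenteral drug dosed at a maximum of 10 mg/kg per hour:
el = endotoxin_limit(K_PARENTERAL, 10.0)  # EU/mg of product

# Total endotoxin allowed per hour for a 70 kg patient:
total_eu = K_PARENTERAL * 70  # EU

print(el, total_eu)
```

The 0.5 EU/mg limit here is illustrative only; the second figure reproduces the 350 EU per hour allowance for a 70 kg human, before any company safety factor is applied.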

When performing the assay, invalid results may be obtained from time to time. Invalid results are different from out-of-specification (OOS) results and may be handled differently. However, there should not be an overabundance of invalid results; frequent invalid results may require an investigation to determine the root cause of the invalidity. Common causes of invalidity include the following:

  • Pipetting errors [10]
  • Incorrect well selection [10]
  • Subpotent endotoxin standards (particularly the lowest endotoxin concentration) [10]
  • Dilution errors [10]

The Food and Drug Administration (FDA) guidance document “Guidance for Industry – Pyrogen and Endotoxins Testing: Questions and Answers” provides information on commonly asked questions. The document states that “when conflicting results occur within a test run, firms should consult USP Chapter <85>, Gel Clot Limits Test, Interpretation, for guidance on repeat testing. As specified in Chapter <85>, if the test failure occurred at less than the maximum valid dilution (MVD), the test should be repeated using a greater dilution not exceeding the MVD. A record of this failure should be included in the laboratory results. If a test is performed at the MVD and an out-of-specification (OOS) test result occurs that cannot be attributed to testing error, the lot should be rejected” [6]. When OOS results are obtained in the laboratory, they should be properly investigated to determine the root cause and to prevent recurrences whenever possible.

Investigating the unexpected data

The handling of OOS results is discussed in many regulatory documents, and proper investigations are expected; several regulatory observations have been written regarding improper investigations of out-of-specification results. In addition, 21 CFR 211.192 of the Code of Federal Regulations (CFR) states that “any unexplained discrepancy (including a percentage of theoretical yield exceeding the maximum or minimum percentages established in master production and control records) or the failure of a batch or any of its components to meet any of its specifications shall be thoroughly investigated, whether or not the batch has already been distributed. The investigation shall extend to other batches of the same drug product and other drug products that may have been associated with the specific failure or discrepancy. A written record of the investigation shall be made and shall include the conclusions and follow up” [3]. If an OOS is obtained, it must be properly investigated even if the batch is rejected, and the investigation must look at other batches to examine the possible impact. The investigation must be properly documented and discuss the conclusions, corrective and preventative actions, and any effectiveness checks that may be required.

The FDA Guidance for Industry from October 2006, Investigating Out-of-Specification (OOS) Test Results for Pharmaceutical Production, provides expectations on investigating OOS results. The document describes an investigation process consisting of two phases: Phase 1, the laboratory investigation, and Phase 2, the full-scale OOS investigation.

The first phase of the investigation (Phase 1) is a short preliminary investigation within the laboratory. The goal during this phase is to look for and rule out any obvious errors that may have occurred [5]. Checklists are often helpful during this stage to help investigate data, equipment, and analysts. Generally, a re-measurement of the originally prepared sample, standard solutions, or dilutions is permitted during this stage if the supplies are not consumed or expired [5]. Everything that is investigated and observed must be properly documented. If clear evidence of an error is identified, the laboratory testing results may be invalidated. If meaningful errors that could explain the root cause are not discovered, or if the results of the investigation are unclear, the investigation must proceed to a full-scale investigation, Phase 2 [5].

The objective of Phase 2 is to identify the root cause and establish preventative and corrective actions. This part of the investigation should include manufacturing, process development, production process review, review of production sampling procedures, maintenance, engineering, and additional laboratory testing when applicable [5]. Retesting and re-sampling are permitted as part of Phase 2, but the maximum number of retests should be specified in advance to avoid the appearance of testing into compliance.

If laboratory error is identified, the retest results may substitute for the original test results [5]. However, all of the data must be retained and properly explained. If the OOS is confirmed and a root cause is identified, the product should be rejected. If the investigation is inconclusive, it is wise to err on the side of caution when making batch release decisions [5]. During the investigation, every action and decision must be documented, and the investigation should be expanded to examine the impact of the OOS results on other batches [5].

OOS investigation write-ups are often written similarly to quality deviations. Some companies have quality management systems designed to document OOS investigations, while other companies attach OOS investigations to deviations (or non-conformances) in their existing quality management system for tracking purposes.

When entering Phase 1 of the OOS investigation, the first steps usually include notifying management, notifying QA, and initiating the investigation documentation. The event should be described properly with a problem statement, written to encompass who, what, where, when, and why [2]. It is important to perform the investigation with an open mind and no preconceived assumptions as to what may have caused the unexpected endotoxin result [5].

Whenever possible, the original test preparations should be kept to aid in the investigation [5]. If the batch has already been distributed, a field alert report (FAR) should be submitted to the FDA within 3 working days of any OOS [5].

The next step is to begin gathering information regarding the endotoxin OOS [2]. An OOS checklist is extremely helpful during this stage. Pertinent information may include the following:

  • Product/ Sample Name
  • Product Lot Number
  • Method Name/Number
  • Date of Discovery
  • Date of Test
  • Result
  • Specification
  • Analyst
  • Analyst Training
  • Analyst Interview
  • Location of Occurrence
  • Areas Notified
  • Documentation
  • Data Location (e.g. notebook)
  • Raw Data
  • Calculations
  • Testing Controls
  • Reagent Information
  • Instrument and Equipment performance history
  • Instrument ID Number
  • Calibration Date
  • Calibration Due Date
  • Sample Handling
  • Sample Labeling
  • Glassware
  • Consumables
  • Certificates of Analysis
  • Historical Data/Trends

All of the gathered information should be examined, sorted, labeled, and readily retrievable for audits. If possible, attaching the information to the investigation is a good course of action. The data should be reviewed promptly for accuracy and to verify calculations [5]. Any hypotheses regarding what may have happened should be tested. All observations, decisions, corrective actions, and the assignable root cause, if determined, should be explained in detail and documented.

When investigating endotoxin results, it is wise to apply knowledge of the endotoxin assay and how endotoxins behave. This information may include possible endotoxin sources, the behavior of naturally occurring endotoxins versus controlled standard endotoxins, the potential for false positives, the reaction of beta-glucans, the potential for endotoxins to bind to surfaces, interference or enhancement factors, and the use of endotoxin-free consumables for the assay.

Phase 2 may begin when lab error is not determined to be the cause and the OOS appears to be accurate. At this point in the investigation, the problem has been identified, the problem statement has been written, and the data have been gathered. Production and sampling procedures should be reviewed for accuracy. Root causes for the OOS result should be investigated in all relevant areas, including but not limited to [5]:

  • Manufacturing
  • Facilities
  • Laboratory

Many root cause analysis (RCA) tools are available. Choosing the best RCA tool for the investigation will depend on the company’s standard operating procedures (SOPs), how complex the investigation is, the ease of use of the tool for the employees performing the investigation, the ability of the tool to help find the root cause, and how much time and resources are available to perform the investigation correctly. The main goal of using an RCA tool is to guide the user to possible root causes. All root cause categories should be considered when investigating probable root causes [1].

Some available RCA tools include:

  • Ishikawa (6Ms, Fishbone Diagram, etc.)
  • 5 Whys?
  • Is/Is Not
  • Failure Mode and Effects Analysis (FMEA)
  • Other tools are available (e.g. Kepner-Tregoe Problem Analysis)

Ishikawa is a diagram tool that is also known as the 6Ms or the Fishbone Diagram, among other names [1]. The goal of using Ishikawa is to sort the data and information gathered into categories. These categories typically include Man (Personnel), Method (Procedure), Machine (Equipment), Material, Mother Nature (Environment), and Miscellaneous (e.g. Process Design). This information is plotted onto a “fishbone” diagram to look for any information that is out of the ordinary [1].

The 5 Whys is another tool commonly utilized in root cause analysis investigations. The question “why” is asked several times to get a deeper understanding of what could have happened to cause the OOS [1]. For example, a question is asked of the problem statement. Then “why” is asked of the answer to that question, and so on, until a root cause is identified. It is acceptable to stop before the fifth “why” if a root cause is identified.
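A 5 Whys chain can be recorded as a simple sequence of question/answer pairs, each “why” probing the previous answer. The scenario below is invented purely for illustration:

```python
# Hypothetical 5 Whys chain for an endotoxin OOS; the last answer is the
# candidate root cause.
five_whys = [
    ("Why was the endotoxin result OOS?",
     "The dilution water was contaminated."),
    ("Why was the water contaminated?",
     "The storage container was not depyrogenated."),
    ("Why was the container not depyrogenated?",
     "The depyrogenation step was skipped."),
    ("Why was the step skipped?",
     "The SOP did not list the step for this container type."),
    ("Why did the SOP omit the step?",
     "The SOP was not updated when the container type changed."),
]

candidate_root_cause = five_whys[-1][1]
print(candidate_root_cause)
```

Note how each answer becomes the subject of the next “why”; in a real investigation the chain may stop earlier if a root cause is identified.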

The “Is/Is Not” tool allows a user to compare what the problem is with what the problem is not [1]. This can appear in chart form for compare-and-contrast purposes. Utilizing this tool should allow a user to home in on all of the impacted components of the investigation and guide the user to a root cause.

The Failure Mode and Effects Analysis (FMEA) tool allows the user to look at the severity of the problem, the probability that the problem may occur, and the probability that, if the problem occurs, it can be detected. These risk factors are assigned numeric values and multiplied to establish a Risk Priority Number (RPN) [1]. This tool is good for helping to identify and eliminate risks.
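The RPN arithmetic is simple to sketch. This assumes the common 1–10 rating scales, where a higher detection score means the failure is harder to detect; the example ratings are hypothetical:

```python
def risk_priority_number(severity, occurrence, detection):
    # Each factor is rated 1 (low) to 10 (high); the RPN is their product,
    # so possible values range from 1 to 1000. Higher RPNs are addressed first.
    return severity * occurrence * detection

# Hypothetical failure mode: severe (8), occasional (4), hard to detect (6).
print(risk_priority_number(8, 4, 6))
```

Teams typically rank failure modes by RPN and set a threshold above which corrective action is required.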

Additional laboratory testing may be required. This testing will require an investigational test plan prior to the testing being performed. The test plan should describe the method, acceptance criteria, number of replicates, and how the results will be reported [5].

During the course of the investigation, multiple trends should be pulled and analyzed. These trends may include:

  • Testing of other batches
  • Utility Monitoring
  • Validation, Calibration, and Maintenance History
  • Human Error, Method Error, and Instrument Error

Trending is typically performed periodically and documented in dedicated reports. Any negative trends that are identified during the course of the investigation should be investigated [5].

This may involve opening another investigation or deviation to properly address a separate or related issue. Reviewing and analyzing trends could help determine the impact on other products, the impact on the facility, whether the problem has occurred before, how often the problem occurs, potential root causes of the problem, and how the problem can be prevented in the future.

The most probable root cause has the fewest assumptions, the simplest assumptions, the most reasonable assumptions, and assumptions that make the most sense [1]. After the root cause analysis is complete, an impact and/or risk assessment should be completed on all of the potentially impacted batches. The impact of the OOS on already distributed batches should be evaluated [5].

Corrective and preventative actions (CAPA) should be properly documented and performed to contain, correct, and prevent the cause and spread of the endotoxin contamination. These actions should be monitored, and effectiveness checks (eChecks) should be performed at a later date to follow up on the CAPA items and ensure they were effective [2].

A clear summary and conclusion of the investigation should be written describing all of the results, findings, testing, decisions, and actions that were taken during the course of the investigation [2]. Everything should be documented: the good, the bad, and the ugly. An OOS report should be written in a clear, easy-to-follow format. The investigation should be:

  • Thorough [5]
  • Timely (Usually completed within 30 days) [5]
  • Unbiased [5]
  • Well-documented [5]
  • Scientifically sound [5]
  • Supported by facts and data [5]

It is desirable for the OOS investigation to be a standalone document. The investigation should contain a written report that documents everything that occurred in chronological order. This report should start with a clear statement of the reason for the investigation and its background [5]. Then the report should move to a thorough description of all the details, events, and information regarding the investigation, including logical and detailed descriptions of the decision-making processes. All relevant data, or references to where the data can be located, should be contained in the report [5]. The report should also discuss all of the RCA tools that were utilized during the investigation. In addition, the report should include a summary of all of the potential root causes or the most probable root cause. Any analyses that were performed during the investigation, and the results that were obtained, should be discussed and compared to the established acceptance criteria [5]. The report should also clearly indicate which results will be reported and provide a recommended batch disposition with rationale. The findings of the review process, including historical trends and any CAPAs that were established during the investigation, should be discussed [5]. The report should clearly distinguish between facts and assumptions and contain statements that are supported by the facts and data that were gathered during the investigation. Finally, the report should include a summary of the investigation, results, conclusions, impact assessments, and any planned effectiveness checks to monitor potential recurrences of the identified root cause [5].

Summary

In summary, endotoxin data deviations are investigated like out-of-specification (OOS) results, and the same governing regulations apply. Invalid assays are handled differently than OOS investigations but may still require an investigation, as invalid assays should be rare. An overabundance of invalid assays may be a signal of other laboratory control problems. Some companies have quality management systems that can track OOSs separately, while others find it beneficial to attach an OOS investigation to a deviation in their existing quality management system for tracking purposes. It is important to note that the rejection of a batch does not negate the responsibility to investigate the OOS result [5].

Phase 1 of an OOS investigation is a laboratory investigation. Checklists in Phase 1 are helpful. Generally, a re-measurement of the originally prepared sample, standard solutions, or dilutions is permitted during this stage if the supplies are not consumed or expired [5]. Everything that is investigated and observed must be properly documented. If clear evidence of an error is identified, the laboratory testing results may be invalidated. If a root cause is not found in Phase 1 and the results appear accurate, Phase 2 is initiated [5].

Phase 2 is a Full-Scale Investigation. This phase of the investigation looks to identify a root cause and examine the potential impact on other batches. Phase 2 expands beyond the laboratory and into the manufacturing facility where applicable [5]. The main goal of using an RCA tool is to guide the user to a possible root cause [1]. Root cause analysis tools in Phase 2 are helpful and should be chosen according to established SOPs, ease of use, and the ability to identify a root cause. All root cause categories should be considered when investigating probable root causes [1]. Additional laboratory testing may be required and will require an investigational test plan. The test plan should describe the method, acceptance criteria, number of replicates, and how the results will be reported [5]. CAPAs should be established, and effectiveness checks should be performed to monitor the corrective and preventative actions [5].

It is desirable for the investigation to be a standalone document so that it is audit ready. Well-written and well-managed OOS investigations should be thorough, timely, unbiased, well-documented, scientifically sound, and supported by facts and data [5].

Finding the root cause of endotoxin contamination early can aid with the control, cleanup, corrective actions, and preventative actions that may be required to protect the company, the products, and the patient.

Bibliography

  1. ASQ. Cause Analysis Tools.
  2. Carmody, Judy. “7 Steps to Properly Navigate an Event Investigation”. Carmody Quality Solutions, LLC. Pharmaceutical Online (2017).
  3. Code of Federal Regulations (CFR) Title 21: Food and Drugs.
  4. Dawson, M. “Endotoxin Limits”. LAL Update, Associates of Cape Cod, Inc., Woods Hole, Massachusetts 13.2 (1995).
  5. Food and Drug Administration (FDA). Guidance for Industry: Investigating Out-of-Specification (OOS) Test Results for Pharmaceutical Production. Food and Drug Administration, Rockville, MD, USA (2006).
  6. Food and Drug Administration (FDA). Guidance for Industry: Pyrogens and Endotoxins Testing: Questions and Answers. Food and Drug Administration, Rockville, MD, USA (2012).
  7. European Pharmacopeia (EP) 2.6.14 Bacterial Endotoxins.
  8. Japanese Pharmacopeia (JP) 4.01 Bacterial Endotoxins Test.
  9. United States Pharmacopeia (USP) <85> Bacterial Endotoxins Test.
  10. Schultz, J. “Testing Invalidities and Stage One OOS Laboratory Investigation”. Charles River LAL Workshop, Charleston, SC. Lecture (2008).
  11. Dubczak, J. “From Horseshoe Crabs to LAL Testing”. Charles River LAL Workshop, Charleston, SC. Lecture (2008).

PMP Part III – Spiraling Into a State of Control

If you have been following this blog series so far, I have discussed my perspective on why so many project managers are unsuccessful in their attempts to earn their CAPM® or PMP® certifications. In the previous edition of this blog, I expounded upon the importance of developing a plan at the outset of your CAPM® or PMP® journey. While excitement and drive may be enough to carry someone through the first phase of their journey, without a solid plan most people will fall short of their certification goals. Although I don’t have statistics to support my assertion, I would bet that there are more CAPM® and PMP® candidates who fail to earn their certification because of a failure to perform in the middle phase of their study plan – the phase in which they cover the “meat” of the PMBOK® material – than there are those who fail because of their performance on the certification exam itself. They will make a great showing at the start, where there is a lot of help available and many people are interested in setting them up for success. Similarly, if they manage to complete their training requirements and announce that they are ready to sign up for the exam, they will likely experience a wave of support and encouragement from their managers, trainers, and peers. However, if they enter the examination phase without the solid foundation provided by the middle phase of their study plan, their success on the certification exam will hang in the balance.

The key to success on the exam lies not in developing a bag of tricks to release on exam day, but in finding a way to bridge the gap between the first and last phases of your training – the phase which I refer to as “The Belly of the Beast” in my PMP® training plan. I believe this middle phase to be both the most difficult phase of the certification process and the phase where you can shine as a soon-to-be project manager. In this phase, your mental toughness and dedication will be challenged as you attempt to balance your study regimen with your other commitments at work, at school, and at home. Succeeding in this phase will require re-prioritizing and re-aligning your commitments in order to arrive at a regimen that is robust enough to be sustained despite the inevitable distractions that will arise over the course of your studies, yet malleable enough to accommodate new categories of information from the later parts of the PMBOK® Guide – both of which are skills expected of a certified PMP® / CAPM®.

So, let’s not delay it any longer – it’s time to get to work.

The Spiral Method: Studying with a Purpose

My high school psychology teacher, Mrs. McBride, first introduced me to what she termed “The Spiral Method” of teaching and studying during her Brain and Behavior / Advanced Placement Psychology course. With this teaching method, the teacher and students purposefully revisit the course material multiple times during a unit, engaging the material in a different manner or with a different learning modality in each pass. For example, in the introductory unit, which focused on brain anatomy, we started off by reading about the anatomy of the brain in the textbook. In the second pass, we created a table of the parts of the brain in our notebooks and developed mnemonics to help remember each of their functions. In the final pass, we watched a video series that detailed the equipment and methods scientists use to study the brain, as well as some experimental footage of subjects with traumatic brain injuries performing common, everyday activities while simultaneously being monitored by a functional magnetic resonance imaging (fMRI) machine.

The goal of this first unit was two-fold: to show the students that brain physiology was a very real topic – something that could be both observed and measured – and to prime us for the teaching methods that we would encounter for the remainder of the course. As the semester progressed, I came to appreciate Mrs. McBride’s teaching style more and more, never more so than when the College Board’s Advanced Placement (AP) tests came around later that year. After having spiraled through the course content multiple times, the AP Psychology exam felt just like another leg of the learning spiral, and expressing my understanding during the exam felt more like a formality than a chore.

That memory has stuck with me through all these years, so when I was staring down at my newly purchased PMBOK® Guide back in 2016, about to start my own PMP® journey, there were many things of which I was unsure in those first weeks, but one thing was certain: I wanted to experience that feeling again – the feeling of knowing that when I sat down in the exam room several months later and first opened that test booklet, earning my project management certification would be just a formality. I wanted to feel that I had given my best during my study of the PMBOK®, had spiraled through the material until I knew it inside and out, and had erased all feelings of doubt from my mind.

So, what does it take to implement a “Spiral Method” approach in your studies for the PMP® or CAPM® exam, and to earn the confidence boost that accompanies it? Let’s dive right in and examine the four legs of this exam preparation strategy in more detail.

The First Leg: An Introduction to the Course Material

Consider your first pass to be an introduction to the course material. You could start off by reading the chapter for one of the knowledge areas in the PMBOK® Guide and then summarizing the main points in your study journal. Make notes on the key concepts, the skills and techniques, the vocabulary and key terms, as well as the important formulae and figures as you read. Don’t worry if you don’t understand everything being presented in this first pass, but do be sure to flag items that were not clear and that require further explanation or research.

The Second Leg: Collaboration and Engagement

For your second pass at the course material, attend or watch the lecture on the PMBOK® knowledge area or topic as part of your classroom training. Collaborate and engage with the presenter in person during their “office hours” or through e-mail or another communication method to discuss and debate the material. In particular, be sure to ask about the items that you flagged in your first pass in order to clarify your doubts and to bolster your understanding. Update your study journal and expound upon your newfound knowledge and understanding.

The Third Leg: Application of Knowledge

Although the first and second passes in the spiral method are fairly basic and familiar to most people from their school days, the third pass is where things begin to get uncomfortable. Fear not, because the same rule applies in academia as in personal fitness: the improvement is in the struggle. In this third pass at the course material, the focus should move toward applying the knowledge you have learned, rather than simply re-reading content over and over again. This leg of the spiral demands a shift of focus away from the trainer and toward the student as the driver of their own learning – a sensation that is uncomfortable for most, and one that many professionals have forgotten since their school days.

When I take my third pass at the course material, it often begins something along these lines: imagine for a moment that you weren’t studying for an exam. (What a relief!) Instead, put yourself in the shoes of a project manager about to use this new information on a project, or in the shoes of a trainer about to teach the next group of project management recruits about the significance of this information. What tool or template would you distribute to your project team to be sure that they put the best practices from this knowledge area into use in their work? What reference document or study guide would you give to your trainees to help them grapple with this information? Try your hand at creating that tool, template, or summary document – perhaps something along the lines of a CliffsNotes® or SparkNotes summary. As you work your way through creating the reference guide, your ability to produce a succinct and accurate summary of each topic will let you gauge your level of understanding: you will quickly discover which topics you are already confident in, as well as those areas in which you still need clarification.

If you are stuck and not sure what type of tool or reference to create, you can always try the school teacher’s method for helping students learn to summarize large volumes of information in a very limited space – the “cheat sheet”. You could attempt to summarize everything a PMP®/CAPM® student could want to know about a particular topic or knowledge area on a single 3” x 5” index card, or the entire PMBOK® on one 8.5” x 11” piece of paper, to make a “cheat sheet” such as the one a high school teacher or college professor might let a student bring into a final exam.

Although most students think their teacher is letting them off easy, the teacher knows the student won’t be able to fit everything they need to know for a final exam in such a limited space. The challenge (and benefit) of this assignment lies in forcing the student to distill the target material into its most basic form and in creating memory cues that the student can use to recall or re-create the information they will need on the spot to work through a problem on that topic during the exam. When done correctly and with purpose, the most mindful students will barely need to refer to their card during the exam, save for a quick check of a formula, as it is the process of creating the reference card that helps to ingrain the information in the student’s memory. (And here you thought your teachers were throwing you a bone all these years!)

The Fourth Leg: The Broader Implications

In your final pass at the material, you should begin to widen your perspective and consider the broader implications of the material you are studying. One way to do this is to test out your study tool or reference by using it to teach another person about the PMBOK® topic. It could be a fellow project management student, your project management mentor, an existing PMI® credential holder, or someone else well-versed in the project management topic. Share your study tool with them and ask for their feedback: Is the tool easy to use? Are my explanations clear and accurate? Are there any important topics I have left out?

You could even carry out the exercise with a study buddy who is not so strong in project management, such as another project management student who is still studying for the certification exam themselves, or better yet, a colleague or an unsuspecting family member who is not involved in project management at all. Try explaining the PMBOK® concept to them using only the tool or reference that you created. If your cross-disciplinary study buddies are anything like the friends and family whom I called upon to assist me with my studies, you will find that they bring a perspective to the table that is quite refreshing. Rather than challenge your ability to memorize the nitty-gritty details, such as definitions, formulas, and calculations, they will likely ask you probing questions such as: “Developing a project management plan and all its smaller plans seems like a big waste of time that takes attention away from the actual project work – why should I even bother making any of those plans at all?”.  Although it may not seem like it at first, it is a very valid question that challenges your understanding of the logic, the purpose and the reasoning behind the project management plan and the role that the planning processes serve on a project.

After your mini-lesson, test your study buddy by having them reflect back to you what they learned, either through a skills demonstration or by having them try their hand at answering a few questions on the topic from the practice exam. If your study buddy is able to properly explain the concept or can demonstrate proficiency in using your tool to answer the practice exam questions, then you know that you are proficient on that topic and are ready to take the exam on that section. If, on the other hand, your study buddy struggles with the concept, or their explanation is incomplete, or they are confused by your reference tool, then you should consider it a reflection of your own understanding of the topic – they couldn’t demonstrate proficiency because you didn’t understand the material well enough when you presented it to them. You must devote more time to that topic, or try a different approach with it, to iron out the wrinkles in your own understanding prior to sitting for the certification exam.

Wrapping-Up

Although it may seem like a lot of work, after a bit of practice the “Spiral Method” of studying can become second nature. When approaching a new topic in the PMBOK®, especially one that is very large and complicated, start by breaking the topic into more manageable chunks and then apply your study approach. Customize the legs of the spiral and alter the number of passes you take at a topic as needed, based on your level of experience and familiarity with it. As many project management skills cut across multiple PMBOK® knowledge areas and the techniques used to analyze their performance are similar (e.g. time, cost, and scope management), consider double-dipping to incorporate several knowledge areas into a single activity in your later passes through the spiral. The payoff from following the spiral method is enormous, as it can give you a sense of control over the material that is elusive for many project management students.

In the next edition of this blog, I will explore how to build upon the knowledge you have gained through your studies and how to use your past experiences to boost your level of confidence and set yourself up for success on the certification exam. I hope you’ll join me.