Nutrition Screening Pediatrics

NSP: FAQs and Definitions (2018)

FREQUENTLY ASKED QUESTIONS (FAQS) - Validity, Reliability and Agreement

1.  What is validity of a screening tool? What does it mean to say that a screening tool is valid?

The validity of a nutrition screening tool is its ability to identify those who may have malnutrition versus those who may not have malnutrition. For this systematic review, the validity of a nutrition screening tool is evaluated by comparing it with an acceptable reference standard (i.e., criterion validity) using sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV). In the absence of a gold standard, a reference standard was used to measure criterion validity of the tools. An acceptable reference standard is defined as anthropometrics (growth parameters) at a given time or changes in anthropometrics over time.1,2 

2.  Why are four measures (sensitivity, specificity, PPV and NPV) considered, when determining the validity of a tool?
Sensitivity, specificity, PPV, and NPV provide a sense of how well a nutrition screening tool is able to identify those who may or may not have malnutrition, when compared with an acceptable reference standard. Sensitivity measures the ability of the tool to correctly identify patients with malnutrition (proportion of true positives). That is, patients identified to be at risk for malnutrition by the screening tool are malnourished according to the reference standard. Specificity measures the ability of the tool to correctly identify patients without malnutrition (proportion of true negatives). That is, patients identified to be at no or low risk for malnutrition by the screening tool are not malnourished according to the reference standard. PPV measures the probability that patients identified to be at risk for malnutrition (positive test on screening tool) are indeed malnourished, while NPV measures the probability that patients who are identified to be at no or low risk for malnutrition (negative test on screening tool) are truly not malnourished. PPV and NPV are affected by the malnutrition prevalence of the population, whereas sensitivity and specificity are not. See “Definitions” below for more details on the four measures.
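The four measures can be illustrated with a hypothetical 2x2 table comparing screening results against a reference standard. The counts below are invented for illustration only and do not come from the systematic review:

```python
# Hypothetical screening results versus a reference standard
# (counts are illustrative, not from the systematic review).
tp = 40   # screen positive, malnourished per reference standard (true positives)
fn = 10   # screen negative, but malnourished (missed cases, false negatives)
fp = 30   # screen positive, but not malnourished (false positives)
tn = 120  # screen negative, not malnourished (true negatives)

sensitivity = tp / (tp + fn)  # proportion of malnourished patients correctly flagged
specificity = tn / (tn + fp)  # proportion of well-nourished patients correctly cleared
ppv = tp / (tp + fp)          # probability a positive screen is truly malnourished
npv = tn / (tn + fn)          # probability a negative screen is truly not malnourished

print(f"Sensitivity = {sensitivity:.2f}")  # 0.80
print(f"Specificity = {specificity:.2f}")  # 0.80
print(f"PPV = {ppv:.2f}")                  # 0.57
print(f"NPV = {npv:.2f}")                  # 0.92
```

Note that if malnutrition prevalence in the sample changed (for example, fewer malnourished patients relative to well-nourished ones), PPV and NPV would shift even with sensitivity and specificity held constant, which is why the latter two are the prevalence-independent measures.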

3.  Why do sensitivity and NPV carry more weight than specificity and PPV in determining the degree of validity of a screening tool? 
During the screening process, it is crucial to not miss individuals with potential risk for nutrition problems. Otherwise, the patient is not referred to an RDN for nutrition assessment to determine need for nutrition intervention. When screening for a medical condition, such as malnutrition, sensitivity and NPV carry more weight than specificity and PPV, because the goal is to avoid missing any individual who may be at malnutrition risk (minimize false negatives and capture the greatest number of people at risk). In contrast, minimizing false positives (individuals identified to be at risk for malnutrition, but who are not malnourished) is of less consequence.  A positive screen for malnutrition risk enables access to nutrition care for malnourished individuals who need it (true positives).  

4.  How is validity different than agreement? 
Measures such as sensitivity, specificity, NPV and PPV are typically utilized to describe how a tool performs when compared to a gold standard or an acceptable reference standard, in order to measure to what degree the tool accurately measures what it is supposed to measure (e.g., malnutrition). The kappa statistic (agreement) is typically used to test consistency or reproducibility or reliability of a tool, either in repeat uses with one user (intra-rater reliability) or between users (inter-rater reliability). 

5.  Why is kappa the preferred measure in determining agreement and reliability of a screening tool? 
While correlation describes only the extent to which two tools agree, kappa also accounts for the agreement that would be expected by chance alone, thus providing a more conservative, robust estimate than either correlation or AUC (area under the receiver operating characteristic curve).

6.  Why is inter-rater reliability important in evaluating a screening tool? What does it mean to say that a screening tool is reliable? 
Inter-rater reliability describes the extent to which two different users of a tool produce the same results. If inter-rater reliability is low, it may mean that users of the tool interpret the questions in different ways, and this may result in different levels of validity in identifying the outcome of interest (e.g., malnutrition), depending on who is using the tool. For a tool to be considered valid, results must be consistent and reliable for all who use the tool. 

DEFINITIONS

Agreement:
 Measures the extent (kappa value) to which two sets of scores are identical (e.g., malnutrition risk classifications obtained from two different tools, or a tool's malnutrition risk classification compared with nutritional status as classified according to a reference standard).

Food insecurity: The United States Department of Agriculture (USDA) defines food insecurity as “the limited or uncertain availability of nutritionally adequate and safe foods or limited or uncertain ability to acquire acceptable foods in socially acceptable ways.” Food security status is most commonly reported on a continuum, with high and marginal food security indicating food security, and low or very low food security indicating food insecurity.3

Kappa (k): A statistical measure of agreement between two observers on a binary variable, defined as the agreement beyond chance divided by the amount of possible agreement beyond chance. The equation for kappa is: k = (Observed agreement - Expected agreement) / (1 - Expected agreement). When k is zero, agreement is at the level expected by chance; when k is negative, observed agreement is less than would be expected by chance alone.4
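The kappa calculation can be sketched for two raters applying the same screening tool to the same patients. The counts below are hypothetical, and expected agreement is derived from each rater's marginal totals, as in Cohen's kappa:

```python
# Cohen's kappa for two raters classifying the same patients as
# "at risk" / "not at risk" (counts are illustrative only).
a = 30  # both raters: at risk
b = 10  # rater 1: at risk, rater 2: not at risk
c = 5   # rater 1: not at risk, rater 2: at risk
d = 55  # both raters: not at risk
n = a + b + c + d

observed = (a + d) / n  # proportion of patients on whom the raters agree

# Expected agreement by chance, computed from each rater's marginal totals
p_risk_1 = (a + b) / n  # rater 1's overall "at risk" rate
p_risk_2 = (a + c) / n  # rater 2's overall "at risk" rate
expected = p_risk_1 * p_risk_2 + (1 - p_risk_1) * (1 - p_risk_2)

kappa = (observed - expected) / (1 - expected)
print(f"kappa = {kappa:.2f}")  # 0.68
```

Here observed agreement is 0.85 and chance-expected agreement is 0.53, so kappa is well below the raw agreement proportion, illustrating why kappa is the more conservative measure.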

Negative Predictive Value (NPV): The probability that a person with a negative test result does not have the disease being tested for. NPV = Number of True Negatives / (Number of True Negatives + Number of False Negatives)5

Nutrition Screening: The process of identifying patients, clients, or groups who may have a nutrition diagnosis and benefit from nutrition assessment and intervention by a registered dietitian (RD) or registered dietitian nutritionist (RDN).  Key considerations:

  • Nutrition screening may be conducted in any practice setting as appropriate.
  • Nutrition screening tools should be quick, easy to use, and valid and reliable for the patient population and setting.
  • The nutrition screening process and parameters are established by the institution through a multidisciplinary team (including RDNs).
  • Nutrition screening is generally carried out by medical professionals who have been trained in the screening process.
  • Nutrition screening and rescreening should occur within an appropriate timeframe.6

Positive Predictive Value (PPV): The probability that a person with a positive test result does have the disease being tested for. PPV = Number of True Positives / (Number of True Positives + Number of False Positives)5

Quick and Easy Nutrition Screening Tool: For purposes of the evidence analysis, the Nutrition Screening workgroup defined a quick and easy tool as one that can be completed in less than 10 minutes and requires minimal training.

Reliability: Measures the agreement between the results of the tool when administered by different users (inter-rater) or by the same user on different occasions (intra-rater).4

Sensitivity: The proportion of subjects with the disease being tested for in whom a test is positive; also called positive in disease. Sensitivity = True Positive / (True Positive + False Negative)5

Specificity: The proportion of subjects without the disease being tested for in whom a test is negative; also called negative in health. Specificity = True Negative / (True Negative + False Positive)5

Validity

  • Construct validity - Extent to which measures are correlated in accordance with theoretical expectations
  • Criterion validity - Comparison of a measure/tool with some other measure (usually a gold standard). 
  • Concurrent validity - Comparison of a tool with a criterion measured at a similar time as the tool
  • Predictive validity - Compares tool with a future criterion.7


REFERENCES:
1Becker P, Carney LN, Corkins MR, Monczka J, Smith E, Smith SE, Spear BA, White JV. Consensus statement of the Academy of Nutrition and Dietetics/American Society for Parenteral and Enteral Nutrition: indicators recommended for the identification and documentation of pediatric malnutrition (undernutrition). Academy of Nutrition and Dietetics; American Society for Parenteral and Enteral Nutrition. Nutr Clin Pract. 2015 Feb; 30 (1): 147-161. PMID: 25422273
2Mehta NM, Corkins MR, Lyman B, Malone A, Goday PS, Carney LN, Monczka JL, Plogsted SW, Schwenk WF. American Society for Parenteral and Enteral Nutrition Board of Directors. Defining pediatric malnutrition: a paradigm shift toward etiology-related definitions. J Parenter Enteral Nutr (JPEN). 2013 Jul; 37 (4): 460-481. Epub 2013 Mar 25. PMID: 23528324.

3United States Department of Agriculture.  Food Security in the U.S.  https://www.ers.usda.gov/topics/food-nutrition-assistance/food-security-in-the-us/measurement/.  Published August 20, 2018.  Accessed May 10, 2019.
4Chapter 5. Research Questions About One Group. In: Dawson B, Trapp RG. eds. Basic & Clinical Biostatistics, 4e New York, NY: McGraw-Hill; 2004. http://accessmedicine.mhmedical.com.proxy.libraries.rutgers.edu/content.aspx?bookid=356&sectionid=40086284. Accessed January 05, 2018.
5Field L, Hand R. Differentiating Malnutrition Screening and Assessment: A Nutrition Care Process Perspective. J Acad Nutr Diet 2015; 824-828.
6CDR Definition of Terms List. https://www.cdrnet.org/vault/2459/web/files/DefinitionofTerms.pdf. Updated 4/2014.
7Chapter 11. Survey Research. In: Dawson B, Trapp RG. eds. Basic & Clinical Biostatistics, 4e New York, NY: McGraw-Hill; 2004. http://accessmedicine.mhmedical.com.proxy.libraries.rutgers.edu/content.aspx?bookid=356&sectionid=40086290. Accessed January 05, 2018.