Nutrition Screening Pediatrics

NSP: Validity and Reliability Criteria (2018)

Criteria Used to Evaluate Validity, Reliability and Agreement of Pediatric Nutrition Screening Tools

VALIDITY

Individual Validity Results
As cited in Neelemaat et al. (2011), cut-offs for interpreting individual validity results [sensitivity (Se), specificity (Sp), and positive and negative predictive value (PPV; NPV)] were as follows:
 
Se, Sp, PPV, NPV Value Classification
90-100% Excellent
80-90% Good
70-80% Fair
60-70% Insufficient
50-60% Poor
Reference: Neelemaat F, Meijers J, Kruizenga H, van Ballegooijen H, van Bokhorst-de van der Schueren M. Comparison of five malnutrition screening tools in one hospital inpatient sample. J Clin Nurs. 2011 Aug; 20(15-16): 2144-2152. doi: 10.1111/j.1365-2702.2010.03667.x. Epub 2011 Apr 28. PMID: 21535274.
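
For illustration, the minimal Python sketch below (not part of the NSP criteria) computes the four measures from a hypothetical 2x2 screening-versus-reference-standard table and applies the Neelemaat cut-offs above; all counts are invented.

def validity_measures(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Se, Sp, PPV, NPV (as percentages) from true/false positives/negatives."""
    return {
        "Se": 100 * tp / (tp + fn),   # sensitivity: malnourished children correctly flagged
        "Sp": 100 * tn / (tn + fp),   # specificity: well-nourished children correctly cleared
        "PPV": 100 * tp / (tp + fp),  # positive predictive value
        "NPV": 100 * tn / (tn + fn),  # negative predictive value
    }

def classify_individual(value: float) -> str:
    # Boundary values are assigned the higher band here; this is an assumption,
    # since the published ranges overlap at 60/70/80/90%.
    for lower, label in [(90, "Excellent"), (80, "Good"), (70, "Fair"),
                         (60, "Insufficient"), (50, "Poor")]:
        if value >= lower:
            return label
    return "Below published cut-offs"  # values under 50% are not classified above

# Hypothetical example: a tool flags 45 of 50 malnourished children (tp=45, fn=5)
# and clears 85 of 100 well-nourished children (tn=85, fp=15).
for name, v in validity_measures(tp=45, fp=15, fn=5, tn=85).items():
    print(f"{name}: {v:.1f}% -> {classify_individual(v)}")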

OVERALL Validity Results
To interpret each individual validity measure based on the aggregate of all study results for each tool, overall cut-offs for Se, Sp, PPV and NPV were established:
 
Se, Sp, PPV, NPV Cut Offs Classification
90% to 100% HIGH
80% to 89% MODERATE
≤79% LOW
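
As a minimal illustration, the sketch below maps an aggregate measure (in percent) to these classifications; the same thresholding pattern, with the appropriate cut-offs substituted, applies to the overall kappa, alpha and ICC tables later in this document.

def classify_overall_validity(value_pct: float) -> str:
    # Overall cut-offs from the table above.
    if value_pct >= 90:
        return "HIGH"
    if value_pct >= 80:
        return "MODERATE"
    return "LOW"

print(classify_overall_validity(92.5))  # HIGH
print(classify_overall_validity(84.0))  # MODERATE
print(classify_overall_validity(71.3))  # LOW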

OVERALL Degree of Validity 

The sensitivity, specificity, PPV and NPV classifications were entered into the Validity Algorithm (below) to determine the OVERALL degree of validity of each tool. Sensitivity and NPV carried more weight than specificity and PPV in determining the degree of validity.
[Figure: Validity Algorithm]
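
The Validity Algorithm itself is given in the figure above and is not reproduced here. Purely as a hypothetical illustration of how Se and NPV could carry more weight than Sp and PPV, the sketch below lets the Se and NPV classifications cap the overall degree outright, while Sp and PPV can pull it down by at most one level; this is an assumed rule for illustration, not the actual NSP algorithm.

LEVELS = ["LOW", "MODERATE", "HIGH"]
RANK = {name: i for i, name in enumerate(LEVELS)}

def overall_degree(se: str, sp: str, ppv: str, npv: str) -> str:
    # Hypothetical weighting: Se and NPV set a hard ceiling on the result;
    # Sp and PPV can lower it by at most one level below that ceiling.
    primary = min(RANK[se], RANK[npv])
    secondary = min(RANK[sp], RANK[ppv]) + 1
    return LEVELS[min(primary, secondary)]

print(overall_degree(se="HIGH", sp="LOW", ppv="MODERATE", npv="HIGH"))  # MODERATE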
RELIABILITY AND AGREEMENT

Individual Kappa Results
Cut-offs for interpretation of individual reliability and agreement results using kappa were as follows:
 
Kappa Value Level of Agreement
Above 0.90 Almost Perfect
0.80-0.90 Strong
0.60-0.79 Moderate
0.40-0.59 Weak
0.21-0.39 Minimal
0-0.20 None
Reference: Adapted from Table 3, Interpretation of Cohen’s kappa, in McHugh ML. Interrater reliability: the kappa statistic. Biochem Med (Zagreb). 2012; 22(3): 276-282. PMID: 23092060.
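
For illustration, the sketch below computes Cohen's kappa for two raters applying a binary at-risk/not-at-risk screen to the same children and classifies the result with the adapted McHugh levels above; the ratings are invented.

from collections import Counter

def cohens_kappa(a: list, b: list) -> float:
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n               # observed agreement
    count_a, count_b = Counter(a), Counter(b)
    p_e = sum(count_a[c] * count_b[c] for c in count_a) / n**2  # chance agreement
    return (p_o - p_e) / (1 - p_e)

def mchugh_level(kappa: float) -> str:
    if kappa > 0.90:
        return "Almost Perfect"
    for lower, label in [(0.80, "Strong"), (0.60, "Moderate"),
                         (0.40, "Weak"), (0.21, "Minimal")]:
        if kappa >= lower:
            return label
    return "None"

# Hypothetical ratings: 1 = at risk, 0 = not at risk.
rater_a = [1, 1, 0, 1, 0, 0, 1, 0, 1, 1]
rater_b = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1]
k = cohens_kappa(rater_a, rater_b)
print(f"kappa = {k:.2f} ({mchugh_level(k)})")  # kappa = 0.58 (Weak)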

OVERALL Kappa Results
To interpret the degree of reliability and agreement based on the aggregate of all study results for each tool, overall kappa cut-offs were established:
 
Kappa Cut Offs Classification
0.80 to 1.0 HIGH
0.60 to 0.79 MODERATE
≤0.59 LOW

Individual Cronbach's Alpha Results
Cut-offs for interpretation of individual reliability (internal consistency) results using coefficient alpha were as follows:
 
Cronbach's Alpha Value Level of Internal Consistency
α ≥ 0.9 Excellent
0.9 > α ≥ 0.8 Good
0.8 > α ≥ 0.7 Acceptable
0.7 > α ≥ 0.6 Questionable
0.6 > α ≥ 0.5 Poor
α < 0.5 Unacceptable
Reference: Statistics How To. Rule of Thumb for Results. Cronbach’s Alpha: Simple Definition, Use and Interpretation. Accessed online 5/23/2018: http://www.statisticshowto.com/cronbachs-alpha-spss/.
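
For illustration, the sketch below computes coefficient alpha for a hypothetical tool whose total score is the sum of k items, using the standard formula α = k/(k-1) × (1 - Σ item variances / total-score variance), and applies the levels above; the score matrix is invented.

import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: one row per child, one column per screening item."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)     # variance of the total score
    return (k / (k - 1)) * (1 - item_vars / total_var)

def alpha_level(a: float) -> str:
    for lower, label in [(0.9, "Excellent"), (0.8, "Good"), (0.7, "Acceptable"),
                         (0.6, "Questionable"), (0.5, "Poor")]:
        if a >= lower:
            return label
    return "Unacceptable"

# Hypothetical 5 children x 3 items.
scores = np.array([[2, 2, 1], [1, 1, 1], [3, 2, 3], [0, 1, 0], [2, 3, 2]])
a = cronbach_alpha(scores)
print(f"alpha = {a:.3f} ({alpha_level(a)})")  # alpha = 0.896 (Good)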

OVERALL Cronbach's Alpha Results
To interpret the degree of reliability (internal consistency) based on the aggregate of all study results for each tool, overall coefficient alpha cut-offs were established:
 
Cronbach's Alpha Cut Offs Classification
α ≥ 0.8 HIGH
0.8 > α ≥ 0.7 MODERATE
α < 0.7 LOW

Individual Intraclass Correlation Results
Cut-offs for interpretation of individual test-retest reliability results using intraclass correlation coefficient (ICC) were as follows:
 
Intraclass Correlation Coefficient (ICC) Value Level of Test-Retest Reliability
>0.90 Excellent
0.75 to 0.90 Good
0.50 to 0.75 Moderate
<0.50 Poor
Reference: Koo TK, Li MY. A guideline of selecting and reporting intraclass correlation coefficients for reliability research. J Chiropr Med. 2016; 15(2): 155-163.
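
For illustration, the sketch below computes a two-way random-effects, absolute-agreement, single-rater ICC (the ICC(2,1) form discussed by Koo and Li) from the ANOVA mean squares of a hypothetical subjects x occasions matrix, then applies the cut-offs above; boundary values are assigned the higher band, an assumption, since the published ranges overlap at 0.5 and 0.75.

import numpy as np

def icc_2_1(ratings: np.ndarray) -> float:
    """ratings: one row per child, one column per rating occasion (e.g. test, retest)."""
    n, k = ratings.shape
    grand = ratings.mean()
    ss_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum()   # between subjects
    ss_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum()   # between occasions
    ss_total = ((ratings - grand) ** 2).sum()
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))  # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

def koo_li_level(icc: float) -> str:
    if icc > 0.90:
        return "Excellent"
    if icc >= 0.75:
        return "Good"
    if icc >= 0.50:
        return "Moderate"
    return "Poor"

# Hypothetical test-retest screening scores for 6 children.
ratings = np.array([[3, 3], [1, 2], [4, 4], [2, 1], [0, 1], [4, 3]])
icc = icc_2_1(ratings)
print(f"ICC = {icc:.2f} ({koo_li_level(icc)})")  # ICC = 0.83 (Good)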
 
OVERALL ICC Results
To interpret the degree of test-retest reliability based on the aggregate of all study results for each tool, overall ICC cut-offs were established:
 
ICC Cut Offs Classification
0.75 to 1.0 HIGH
0.50 to 0.75 MODERATE
<0.50 LOW