2. WHAT IS UTILITY?
• How useful a test is
• The practical value of a test as it aids the decision-making process and efficiency
• Utility Analysis: a family of techniques entailing a cost-benefit analysis designed to yield information relevant to a decision about the usefulness and/or practical value of a test
• “Does the benefit outweigh the cost?”
3. FACTORS AFFECTING TEST UTILITY
• Psychometric Soundness
• Reliability and validity are acceptably high
• The higher the criterion-related validity of test scores,
the higher the utility.
• Utility also depends on the behavior of the test's targeted users (e.g., whether they administer, score, and act on the test as intended)
• Costs
• Funds allocated to purchase a particular test
• Funds for a supply of blank test protocols
• Funds for computerized processing, scoring, and interpretation
• Costs may also include the following:
• Professional fees
• Charges for the testing facility
• Routine costs of doing business (e.g., legal, accounting, licensing)
• Benefits
• Justification of costs
• Increase in worker performance
• Reduction in training time
• Reduction in accidents
• Reduction in turnover
6. CONDUCTING UTILITY ANALYSIS
• Expectancy Data
• Provides an indication of the likelihood that a testtaker
will score within some interval of scores on a criterion
measure
• Brogden-Cronbach-Gleser Formula
• Measures productivity gains, i.e., the estimated increase in work output
• Estimates the monetary benefit of using a particular test or tool in the selection process
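The two techniques above can be sketched in plain Python. All data and parameter values below are hypothetical, and the Brogden-Cronbach-Gleser formula is shown in one common form (gain = number selected × tenure × validity × SD of performance in dollars × mean standardized score of those selected, minus the cost of testing all applicants):

```python
# Sketches of expectancy data and the Brogden-Cronbach-Gleser
# utility-gain formula; all inputs are hypothetical.

def expectancy(pairs, test_min, test_max, criterion_cut):
    """Proportion of testtakers scoring within [test_min, test_max]
    on the test who reached criterion_cut on the criterion measure."""
    band = [c for t, c in pairs if test_min <= t <= test_max]
    if not band:
        return 0.0
    return sum(1 for c in band if c >= criterion_cut) / len(band)

def bcg_utility_gain(n_selected, tenure_years, validity,
                     sd_performance, mean_z_selected,
                     n_tested, cost_per_applicant):
    """Estimated dollar gain from selecting with the test
    rather than at random."""
    benefit = (n_selected * tenure_years * validity
               * sd_performance * mean_z_selected)
    cost = n_tested * cost_per_applicant
    return benefit - cost

# (test score, criterion rating) pairs for past hires
history = [(85, 9), (80, 7), (62, 5), (55, 8), (40, 3)]
p_success = expectancy(history, 70, 100, criterion_cut=7)

gain = bcg_utility_gain(n_selected=10, tenure_years=2,
                        validity=0.40, sd_performance=12_000,
                        mean_z_selected=1.0,
                        n_tested=100, cost_per_applicant=50)
```

Here `p_success` is the expectancy-table entry for the top score band, and `gain` is the estimated benefit of test-based selection over random selection after testing costs.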
• Considerations:
• Pool of applicants
• Complexity of Job
• Cut scores – the reference point derived as a result of a
judgment and used to divide a set of data into
classifications
• Relative cut score – reference point in the distribution that classifies
the set of data based on norm-related considerations
• Fixed cut score - reference point in the distribution that classifies the
set of data based on judgments concerning a minimum level of
proficiency required to be included
• Multiple cut score – two or more cut scores with reference to one
predictor for categorizing testtakers
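As a small illustration of multiple cut scores, the sketch below uses two hypothetical cut scores (40 and 70) on a single predictor to sort testtakers into three categories:

```python
# Two hypothetical cut scores on one predictor divide
# testtakers into three categories.

def classify(score, low_cut=40, high_cut=70):
    if score >= high_cut:
        return "high group"
    if score >= low_cut:
        return "middle group"
    return "low group"

labels = [classify(s) for s in (85, 55, 30)]
# -> ["high group", "middle group", "low group"]
```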
8. SETTING CUT SCORES
• Angoff Method
• Can be applied to judgments about the presence or absence of a particular trait, attribute, or ability
• Experts estimate the probability that a testtaker with minimal competence would answer each item correctly
• The judgments of the experts are averaged to yield the cut score for the test
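The averaging step of the Angoff method can be sketched as follows; the per-item expert ratings below are hypothetical:

```python
# Angoff method sketch: each expert judges, item by item, the
# probability that a minimally competent testtaker answers correctly.
expert_ratings = [
    [0.9, 0.8, 0.6, 0.7, 0.5],  # expert 1, items 1-5
    [0.8, 0.7, 0.7, 0.6, 0.6],  # expert 2
    [0.9, 0.9, 0.5, 0.7, 0.4],  # expert 3
]

# Each expert's expected raw score for a minimally competent testtaker
expected_scores = [sum(r) for r in expert_ratings]

# Averaging across experts yields the recommended cut score
cut_score = sum(expected_scores) / len(expected_scores)
```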
• Known Groups Method
• Entails collection of data on the predictor of interest
from groups known to possess a trait, attribute, or
ability
• A known problem is determining where to set the cut score, since it depends on the composition of the contrasting groups
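One simple variant of the known groups method sets the cut score at the midpoint between the two group means on the predictor; the scores below are hypothetical:

```python
# Known groups sketch: predictor scores from a group known to have
# the ability and a group known to lack it (hypothetical data).
has_trait = [78, 82, 75, 88, 80]
lacks_trait = [55, 60, 52, 58, 61]

mean_has = sum(has_trait) / len(has_trait)
mean_lacks = sum(lacks_trait) / len(lacks_trait)

# A simple choice: the midpoint between the two group means
cut = (mean_has + mean_lacks) / 2
```

Note that a different pair of contrasting groups would yield a different cut score, which is exactly the problem noted above.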
• IRT-Based Methods
• Based on testtakers’ performance across all items on the
test
• A given number of the total items must be answered correctly
• Each item is associated with a particular level of difficulty
• To pass, the testtaker must correctly answer items deemed to be above some minimum difficulty level, as determined by experts
• Item mapping is used in licensing examinations
• The bookmark method is used in academic applications
• Discriminant Analysis
• A family of statistical techniques used to shed light on
the relationship between certain variables and two
naturally occurring groups
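A minimal pure-Python sketch of a two-group linear discriminant (Fisher's approach) on two hypothetical predictor variables; real applications would use statistical software:

```python
# Fisher two-group linear discriminant on two predictors.
# Group data are hypothetical (e.g., successful vs. unsuccessful hires).

def mean(vs):
    n = len(vs)
    return [sum(v[i] for v in vs) / n for i in range(len(vs[0]))]

def scatter(vs, m):
    # 2x2 within-group scatter matrix
    s = [[0.0, 0.0], [0.0, 0.0]]
    for v in vs:
        d = [v[0] - m[0], v[1] - m[1]]
        for i in range(2):
            for j in range(2):
                s[i][j] += d[i] * d[j]
    return s

def fisher_direction(a, b):
    # w = Sw^-1 (mean_a - mean_b), with Sw the pooled scatter matrix
    ma, mb = mean(a), mean(b)
    sa, sb = scatter(a, ma), scatter(b, mb)
    sw = [[sa[i][j] + sb[i][j] for j in range(2)] for i in range(2)]
    det = sw[0][0] * sw[1][1] - sw[0][1] * sw[1][0]
    inv = [[sw[1][1] / det, -sw[0][1] / det],
           [-sw[1][0] / det, sw[0][0] / det]]
    dm = [ma[0] - mb[0], ma[1] - mb[1]]
    return [inv[0][0] * dm[0] + inv[0][1] * dm[1],
            inv[1][0] * dm[0] + inv[1][1] * dm[1]]

successful = [[7, 8], [8, 7], [9, 9], [8, 8]]
unsuccessful = [[3, 4], [4, 3], [5, 4], [4, 5]]
w = fisher_direction(successful, unsuccessful)

def project(v):
    return w[0] * v[0] + w[1] * v[1]

# Classify a new testtaker by which group's projected mean is closer
threshold = (project(mean(successful)) + project(mean(unsuccessful))) / 2
label = "successful" if project([6, 7]) > threshold else "unsuccessful"
```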
12. WRAP-UP
• Creating psychological tests can also be a matter of business, but because we are concerned with the welfare of testtakers and test users, it is important to make utility information accessible by publishing utility studies
• The higher a test's utility, the more likely it is to be purchased, but proper consideration must be given to how that utility is determined