
Fitability Insights: Hiring & Diversity

Author: Randall H. Lucius, PhD

A very important article was recently published in the April issue of American Psychologist, titled "High-Stakes Testing in Employment, Credentialing, and Higher Education: Prospects in a Post-Affirmative Action World" (Sackett, Schmitt, Ellingson & Kabin, 2001). In it, the authors offer some illuminating insights into the problem of building a diverse workforce while using cognitively based tests, which include tests of knowledge, skills, and abilities, as selection tools.

Tests of knowledge, skills, and abilities are commonly used to help make employment decisions, and they all tap into a person's general intelligence, or "general cognitive ability," also known as g. These tests have long been popular because they are useful, although imperfect, predictors of future job performance (Sackett et al., 2001). Many researchers have long contended that measures of g are more useful than any other single type of assessment for predicting someone's future performance. This does not mean that other assessment tools, such as personality measures, bio-data, behavior-based interviews, and other non-cognitive tests, are not also useful; but if only one assessment tool is used, a g-based tool would probably be the best choice. This point is debatable, but it has been the predominant theme of a large body of research for some time.

The problem with using measures of g is that racial group differences are repeatedly observed (Sackett et al., 2001). Blacks tend to score lower than Whites by about one standard deviation, and Hispanics score lower than Whites by approximately two-thirds of a standard deviation. Asians typically score higher than Whites on measures of mathematical and quantitative ability and lower than Whites on measures of verbal ability and comprehension. These score differences can create adverse impact against protected groups when the tests are used in selection and credentialing decisions.
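
To see how a gap of this size plays out in practice, here is a minimal sketch in Python (not from the Sackett et al. article; the cutoff and group labels are hypothetical) showing how a one-standard-deviation difference in mean scores translates into very different passing rates at a single cutoff, and how that gap looks against the EEOC's four-fifths rule of thumb.

    from statistics import NormalDist

    # Assume test scores in each group are roughly normal with equal spread,
    # and that one group's mean sits one standard deviation below the
    # other group's mean (a hypothetical illustration).
    std_normal = NormalDist(mu=0.0, sigma=1.0)

    cutoff_z_group_a = 0.0   # cutoff set at the higher-scoring group's mean
    cutoff_z_group_b = 1.0   # same cutoff, one SD above the other group's mean

    pass_rate_a = 1.0 - std_normal.cdf(cutoff_z_group_a)   # about 0.50
    pass_rate_b = 1.0 - std_normal.cdf(cutoff_z_group_b)   # about 0.16

    impact_ratio = pass_rate_b / pass_rate_a
    print(f"Pass rates: {pass_rate_a:.2f} vs. {pass_rate_b:.2f}")
    print(f"Impact ratio: {impact_ratio:.2f} (values below 0.80 raise an "
          f"adverse-impact flag under the four-fifths rule)")

In this illustration, a cutoff that passes about half of one group passes only about one in six of the other, producing an impact ratio far below the four-fifths benchmark.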

Most organizations value racial and ethnic diversity in their workforce, with rationales ranging from a desire to mirror the composition of the community to a belief that academic experiences or workplace effectiveness are enhanced by exposure to diverse perspectives (Sackett et al., 2001). Because many g-based tests may disproportionately screen out minority applicants as "unqualified," their use conflicts with the desire for a diverse workforce.

Giving minorities some sort of preference in the selection process is probably not a viable alternative. A variety of recent developments indicate a growing trend toward bans on preference-based forms of affirmative action (Sackett et al., 2001). Several recent court cases, as well as the 1991 Civil Rights Act, make preference-based programs a poor choice for trying to remedy the ills of g-based selection procedures.

What to do?

Recent research on strategies for achieving diversity without minority preference has focused on combining g-loaded tests with other relevant tools that are not based on g. Several researchers have found that using personality, bio-data, and other types of measures, in addition to traditional g-based tests, significantly reduces the differences between groups on overall scores while enhancing the predictive power of the selection process (Sackett et al., 2001). In other words, the combination of measures more accurately predicts whether someone will be an effective employee.
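
As a rough illustration of why this works (a sketch with hypothetical numbers, not figures from the studies cited), the standardized group difference on a unit-weighted composite can be computed from each component's own difference and the components' average intercorrelation:

    from math import sqrt

    def composite_d(component_ds, avg_intercorrelation):
        """Standardized group difference (d) on a unit-weighted composite of
        standardized predictors, given each component's d and the average
        correlation among the components."""
        k = len(component_ds)
        composite_sd = sqrt(k + k * (k - 1) * avg_intercorrelation)
        return sum(component_ds) / composite_sd

    # Hypothetical battery: one g-loaded test with a large group difference
    # (d = 1.0) plus a personality scale and a bio-data form with negligible
    # group differences (d = 0.0), modestly intercorrelated (r = 0.20).
    print("g-loaded test alone:  d = 1.00")
    print(f"Three-tool composite: d = {composite_d([1.0, 0.0, 0.0], 0.20):.2f}")

With these illustrative numbers, the composite's group difference shrinks to roughly half a standard deviation, while the non-cognitive tools still contribute their own validity to the overall prediction.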

Some research even suggests that g-based measures might not be necessary at all. A recent study (Pulakos and Schmitt, 1996) compared the utility of a g-loaded test of verbal ability with a bio-data measure, a situational judgment test, and a structured interview. The predictive power the verbal ability test added beyond the three alternative predictors was only .02, while the verbal test substantially increased racial group differences in scores. In other words, the g-based test added practically nothing to the prediction of whether someone will be effective, but it did create the potential for greater adverse impact on minority applicants. Other studies that included personality measures have found similar results.
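
To make the idea of "predictive power added" concrete, here is a small sketch (with made-up correlations, not Pulakos and Schmitt's actual data) that computes the multiple correlation R for the three alternative predictors with and without the verbal ability test:

    import numpy as np

    # Hypothetical criterion validities, in order: bio-data, situational
    # judgment test, structured interview, verbal ability test.
    validities = np.array([0.30, 0.30, 0.30, 0.30])

    # Hypothetical intercorrelations among the four predictors.
    predictor_corr = np.array([
        [1.00, 0.30, 0.30, 0.35],
        [0.30, 1.00, 0.30, 0.35],
        [0.30, 0.30, 1.00, 0.35],
        [0.35, 0.35, 0.35, 1.00],
    ])

    def multiple_R(which):
        """Multiple correlation of the criterion with the chosen predictors,
        using R^2 = r' * Rxx^-1 * r for standardized variables."""
        sub = predictor_corr[np.ix_(which, which)]
        r = validities[which]
        return float(np.sqrt(r @ np.linalg.inv(sub) @ r))

    R_without = multiple_R([0, 1, 2])    # alternatives only
    R_with = multiple_R([0, 1, 2, 3])    # alternatives plus verbal test
    print(f"R without the verbal test: {R_without:.2f}")
    print(f"R with the verbal test:    {R_with:.2f}")
    print(f"Increment:                 {R_with - R_without:.2f}")

With these illustrative numbers, adding the verbal test raises R from about .41 to about .43, an increment on the order of the .02 reported in the study, even though that test would carry most of the adverse impact.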

Conclusion

What the above research suggests is that using non-cognitive measures, in addition to or in lieu of g-based measures, can significantly improve the prediction of whether someone will be effective, while at the same time reducing adverse impact. What a deal! By measuring qualities like personality and previous experience, and by using good interviewing practices, employers can get a much better idea of whether someone is the right fit for the job than by using g-based tests alone, as is so often the case in many personnel departments and recruiting firms (particularly for IT positions).

This does not suggest that g-based tests should no longer be used, for that would be throwing the baby out with the bathwater. But what it does suggest is that companies using g-based tests alone, which include traditional tests of knowledge, skills and abilities, are putting themselves at risk. They are likely to make a less accurate decision than they could have made had they used the test results in conjunction with other assessment tools. Furthermore, they are more likely to unfairly filter out minority applicants who may have been equally qualified for the position, which in turn could lead to costly legal problems.

Using a valid personality test is one way to address this problem. A personality test with demonstrated validity for predicting on-the-job performance can greatly assist an employer in making an accurate decision while being far less likely to adversely impact minority group members, since personality tests typically show little or no difference across racial groups.

Used alongside skills tests, a good interview, and/or other assessment tools, such measures give employers the best of both worlds: a highly productive workforce that is also ethnically diverse.

References

Sackett, P. R., Schmitt, N., Ellingson, J. E., & Kabin, M. B. (2001). High-stakes testing in employment, credentialing, and higher education: Prospects in a post-affirmative action world. American Psychologist, 56(4), 304-318.

Pulakos, E. D., & Schmitt, N. (1996). An evaluation of two strategies for reducing adverse impact and their effects on criterion-related validity. Human Performance, 9, 241-258.