
Research article

Using Invariance to Examine Cheating in Unproctored Ability Tests

Natalie A. Wright; Adam W. Meade; Sara L. Gutierrez

International Journal of Selection and Assessment • 2014

audience: factory-internal · audience: vela · People Analytics · bridge (3) · processed in meta-factory

Abstract

Despite their widespread use in personnel selection, there is concern that cheating could undermine the validity of unproctored Internet-based tests. This study examined the presence of cheating in a speeded ability test used for personnel selection. The same test was administered to applicants in either proctored or unproctored conditions. Item response theory differential functioning analyses were used to evaluate the equivalence of the psychometric properties of test items across proctored and unproctored conditions. A few items displayed different psychometric properties, and the nature of these differences was not uniform. Theta scores were not reflective of widespread cheating among unproctored examinees. Thus, results were not consistent with what would be expected if cheating on unproctored tests was pervasive.

Available formats

research_article

File instances

1

Extracted by meta-factory

Instruments (1)

  • Commercial Computer-Based Deductive Reasoning Test

    developer: SHL

    Constructs

    Deductive Reasoning Skills

    reliability: KR-20 coefficient was .92 for Sample 1 and .79 for Sample 2

Constructs (2)

  • Cheating in Unproctored Ability Tests

    ETH_001

    The act of using dishonest means to improve test scores in unproctored internet-based ability tests, potentially undermining the validity of the test.

    Domains

Ethics & Values · Performance Management

    Linked models

    Item Response Theory (IRT) Differential Functioning

    Cheating is assessed through differential item functioning analyses comparing proctored and unproctored test conditions.

  • Differential Item Functioning (DIF)

    MEA_001

    A lack of invariance at the item or scale level, indicating that the probability of answering an item correctly is not solely based on the respondent's ability.

    Domains

    Measurement & Assessment

    Linked models

    Item Response Theory (IRT)

    DIF is used to detect potential cheating by comparing item parameters across different testing conditions.
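
    The comparison described above can be sketched numerically. The snippet below is a minimal illustration, not the study's procedure: it assumes a 2PL item response model and illustrative item parameters (discrimination `a`, difficulty `b`) estimated separately in each condition, then computes the unsigned area between the two item characteristic curves (Raju's area measure) as a simple DIF index.

    ```python
    import numpy as np

    def p_correct(theta, a, b):
        """2PL item response function: probability of a correct response."""
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    # Hypothetical parameters for one item, fit separately per condition.
    # Values are illustrative only, not taken from the study.
    proctored   = {"a": 1.2, "b": 0.0}
    unproctored = {"a": 1.2, "b": -0.4}  # lower b: item "easier" unproctored

    # Unsigned area between the two item characteristic curves over a
    # theta grid, integrated with the trapezoid rule.
    theta = np.linspace(-4.0, 4.0, 801)
    gap = np.abs(
        p_correct(theta, proctored["a"], proctored["b"])
        - p_correct(theta, unproctored["a"], unproctored["b"])
    )
    area = float(np.sum(0.5 * (gap[:-1] + gap[1:]) * np.diff(theta)))

    print(f"Area between ICCs: {area:.3f}")  # larger area -> stronger DIF
    ```

    With equal discriminations, the area reduces to roughly the difficulty shift |b₁ − b₂|, so an item that is uniformly easier unproctored shows up directly in this index.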

Related

Source profile (V0). This page is a thin scaffold over the factory_documents registry; richer treatment lands once the source is ingested into Vela's editorial corpus.