Research article

Can knowledge tests and situational judgement tests predict selection centre performance?

Haroon Ahmed; Melody Rhydderch; Phil Matthews

Medical Education • 2012

audience: factory-internal · audience: vela · People Analytics · bridge (3) · processed in meta-factory

Abstract

Written tests are an integral part of selection into general practice specialty training in the UK. Evidence supporting their validity and reliability as shortlisting tools has prompted their introduction into the selection processes of other medical specialties. This study explores whether candidate performance on two written tests predicts performance on subsequent workplace-based simulation exercises.

Available formats

research_article

File instances

1

Extracted by meta-factory

Instruments (2)

  • Clinical Problem-Solving Test (CPST)

    Constructs

    Crystallised intelligence
    Declarative knowledge
  • Situational Judgement Test (SJT)

    Constructs

    Non-cognitive professional attributes

Constructs (2)

  • Clinical Problem-Solving Test (CPST)

    CPST_001

    A 100-item test that measures crystallised intelligence and tests declarative knowledge gained during previous medical training.

    Domains

    Learning & Development
    Performance Management

    Used as a shortlisting tool in the selection process for general practice specialty training.

  • Situational Judgement Test (SJT)

    SJT_002

    A 50-item test that presents applicants with written descriptions of job-related scenarios and asks them to select their actions from a list of predetermined responses.

    Domains

    Decision-Making & Judgment
    Performance Management

    Designed to assess non-cognitive professional attributes; it offers incremental validity over the CPST.

Related

Source profile (V0). This page is a thin scaffold over the factory_documents registry; richer treatment lands once the source is ingested into Vela's editorial corpus.