Assessment/Evaluation vs. Research

Research studies involving human subjects require IRB review; assessment/evaluation studies and activities do not. The distinction is not always easy to draw, and projects frequently contain elements of both. The decision about whether review is required should therefore be made in concert with the IRB and the Office of Institutional Effectiveness.

Research is based on a formal, testable question or hypothesis that arises from a review of the literature on the topic. Data to test the hypothesis or to develop new theories are gathered in a systematic and unbiased manner, with the purpose of producing generalizable information that can be disseminated in public forums, publications, or other media. Research most often requires extensive time, resources, and expertise, and its results can benefit the general public, a specific discipline, or multiple disciplines.

Assessment aims to improve a program or process by gathering data about current standards within the system. Assessments take a ‘snapshot’ of the company, institution, or program with no intention of generalizing beyond the immediate source of the data (e.g., a department, university, or business). Because the goal of assessment is to describe the organization so that improvements can be made, there are no control groups and no manipulation of variables. The data apply only to the place, person, or program in which they were collected and are not disseminated.

Elements commonly associated with assessment/evaluation and research projects are listed below. The list is not intended to be comprehensive, and not every element is required for a project to fall into one category or the other.

Common Elements

  • Assessment/Evaluation: Determines merit, worth, or value.
    Research: Strives to be value-free.

  • Assessment/Evaluation: Assesses how well a process, product, or program is working.
    Research: Aims to produce new knowledge within a field (designed to develop or contribute to generalizable knowledge).

  • Assessment/Evaluation: Focuses on a process, product, or program.
    Research: Focuses on a population (human subjects).

  • Assessment/Evaluation: Designed to improve a process, product, or program, and may include:
      • needs assessment
      • process, outcome, or impact evaluation
      • cost-benefit or cost-effectiveness analyses
    Research: May be descriptive, relational, or causal.

  • Assessment/Evaluation: Designed to assess the effectiveness of a process, product, or program.
    Research: Designed to be generalized to a population beyond those participating in the study, or to contribute broadly to knowledge or theory in a field of study (designed to develop or contribute to generalizable knowledge).

  • Assessment/Evaluation: Assesses a program or product as it would exist regardless of the evaluation.
    Research: May include an experimental or non-standard intervention.

  • Assessment/Evaluation: Rarely subject to peer review.
    Research: Frequently submitted for peer review.

  • Assessment/Evaluation: The activity will rarely alter the timing or frequency of standard procedures.
    Research: Standard procedures or normal activities may be altered by an experimental intervention.

  • Assessment/Evaluation: Frequently, the entity in which the activity takes place is also the funding source.
    Research: May have external funding.

  • Assessment/Evaluation: Conducted within a setting of changing actors, priorities, resources, and timelines.
    Research: Conducted in a controlled setting (interaction or intervention) or a natural setting (observation that may or may not include interaction or intervention).

Source material for this guidance was provided by the University of Connecticut.  KSU’s IRB gratefully acknowledges this support.