Evaluating with Validity

This reissued book is one of the key works that influenced and shaped the contemporary evaluation field. The book developed a new, expanded conception of the validity of evaluation studies, based on the broad criteria of truth, beauty, and justice. It also presented a widely used typology of evaluation approaches and critiqued those approaches against the validity criteria. The long-term influence of the book (originally published in 1980) and its criteria is demonstrated by their prominent place in the overall theme of the forthcoming American Evaluation Association annual conference in November 2010.

Revisiting Truth, Beauty, and Justice: Evaluating With Validity in the 21st Century

This issue discusses ways of constructing, organizing, and managing arguments for evaluation. Not focused solely on the logic of evaluation or predictive validity, it addresses the various elements needed to construct evaluation arguments that are compelling and influential by virtue of the truth, beauty, and justice they express. Through exposition, original research, critical reflection, and application to case examples, the authors present tools, perspectives, and guides to help evaluators navigate the complex contexts of evaluation in the 21st century. This is the 142nd issue in the New Directions for Evaluation series from Jossey-Bass. It is an official publication of the American Evaluation Association.

Advancing Validity in Outcome Evaluation: Theory and Practice

Exploring the influence and application of the Campbellian validity typology in the theory and practice of outcome evaluation, this volume addresses the strengths and weaknesses of this often controversial evaluation method and presents new perspectives for its use. Editors Huey T. Chen, Stewart I. Donaldson, and Melvin M. Mark provide a historical overview of the Campbellian typology's adoption, contributions, and criticism. Contributing authors propose strategies for developing a new perspective on validity typology for advancing validity in program evaluation, including: Enhance External Validity; Enhance Precision by Reclassifying the Campbellian Typology; and Expand the Scope of the Typology. The volume concludes with William R. Shadish's spirited rebuttal to the earlier chapters. A collaborator with Don Campbell, Shadish provides a balance to the perspective of the issue with a clarification and defense of Campbell's work. This is the 129th volume of the Jossey-Bass quarterly report series New Directions for Evaluation, an official publication of the American Evaluation Association.

Evaluating the Validity of Research Implications

This dissertation identifies a disconnect between science and decision making, develops a range of solutions, and uses them to identify significant problems in the existing literature. The disconnect occurs when scientists have to draw implications about actions well beyond the hypotheses they have been testing. Unfortunately, while there is rigor in the scientific testing of hypotheses, and some rigor in decision making, no similar process exists to evaluate the implications drawn from research. A concept of the 'validity' of research implications is created by modifying the educational and psychological testing concept of 'unitary validity' (Messick 1989). Three methods are presented for evaluating the validity of research implications: (1) An argument-based evaluation (Kane 1992) involves specifying the claims connecting an implication to the research results on which it is based, and evaluating them for clarity, coherence, and plausibility. (2) A graphical 'argument mapping' can extend this approach to more complicated arguments with multiple, interconnected parallel arguments and counterarguments. (3) If all the components of the argument can be represented with quantitative models, then a quantitative evaluation is possible, in which the implications of a given scientific result are the actions that maximize a posterior predictive expected utility function. These approaches are demonstrated by applying them to existing controversies in the forestry literature. The implications of a controversial study of post-fire salvage logging (Donato et al. 2006) are shown to be exactly the opposite of those claimed. Validity analysis also shows that each of the conflicting analyses of the H. J. Andrews logging-flooding data (Jones & Grant 1996, Thomas & Megahan 1998, Beschta et al. 2000) underestimates logging impacts on larger floods. These results imply that researchers can use this approach to evaluate the validity of their own implication statements and to reevaluate the implications drawn in published research.
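
The quantitative evaluation in point (3) can be made concrete with a minimal sketch of posterior predictive expected utility maximization; this is the generic Bayesian decision-theoretic form, not necessarily the dissertation's exact model:

a^{*} = \arg\max_{a \in A} \int U(a, \tilde{y}) \, p(\tilde{y} \mid y) \, d\tilde{y}

where y is the observed data, \tilde{y} a predicted outcome drawn from the posterior predictive distribution p(\tilde{y} \mid y), A the set of candidate actions, and U an explicitly stated utility function over actions and outcomes. Under this sketch, an implication statement is judged by how closely the action it recommends matches a^{*} given the stated models and utilities.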

Evaluating the Validity of the PEAK-E Assessment and the Efficacy of the PEAK-E Curriculum in a Single-Case Evaluation

The present study evaluated the utility of the methods outlined in the Promoting the Emergence of Advanced Knowledge Relational Training System Equivalence Module (PEAK-E) through a single-case evaluation. Validity, reliability, and effectiveness were the variables explored to assess the degree to which the assessment was able to identify appropriate skills for targeted intervention, and the degree to which the programs were efficacious in teaching the targeted skills. Baseline results suggested that the programs identified through the PEAK-E assessment were not within the participants' repertoires prior to the intervention. Following the implementation of nine programs across three participants with autism, mastery was achieved for all of the directly trained relations, and all targeted derived relations emerged for eight of the nine programs.

Validity Testing in Child and Adolescent Assessment: Evaluating Exaggeration, Feigning, and Noncredible Effort

Thoroughly covering the "why" and "how" of validity testing with children and adolescents, this book is edited by Michael W. Kirkwood and written by leaders in the field. Feigning or noncredible effort during psychological and neuropsychological assessments can have considerable repercussions for diagnosis, treatment, and use of resources. Practical guidance is provided for detecting and managing noncredible responding, including vivid case material. The reasons that children may feign during testing are also explored. Along with information relevant to all assessment settings, the book features specific chapters on educational, medical, sport-related, forensic, and Social Security Disability contexts.

Validity Assessment in Clinical Neuropsychological Practice

Practical and comprehensive, this is the first book to focus on noncredible performance in clinical contexts. Experts in the field discuss the varied causes of invalidity, describe how to efficiently incorporate validity tests into clinical evaluations, and provide direction on how to proceed when noncredible responding is detected. Thoughtful, ethical guidance is given for offering patient feedback and writing effective reports. Population-specific chapters cover validity assessment with military personnel; children; and individuals with dementia, psychiatric disorders, mild traumatic brain injury, academic disability, and other concerns. The concluding chapter describes how to appropriately engage in legal proceedings if a clinical case becomes forensic. Case examples and sample reports enhance the book's utility.

Validity of Educational Assessments in Chile and Latin America

This edited volume presents a systematic analysis of conceptual, methodological, and applied aspects related to the validation of educational tests used in Latin American countries. Inspired by international standards on educational measurement and evaluation, this book illustrates efforts that have been made in several countries to validate different types of educational assessments, including student learning assessments, measurements of non-cognitive aspects in students, teacher evaluations, and tests for certification and selection. It gathers the experience of validity studies from the main international assessments in Latin America (PISA, TIMSS, ERCE, and ICCS). Additionally, it shows the challenges that must be taken into account when evaluations are used to compare countries, groups, or trends of achievement over time. The book builds on the premise that measurements in the educational field should not be used if there are no studies that support the validity of the interpretation of their scores, or of the use made of such tests. It shows that, despite the recognition given to validity, relatively few educational assessments have accumulated enough evidence to support their interpretation and use. In doing so, this volume increases awareness of the relevance of validity, especially when assessments are a key component of educational policies.

Evaluating the Validity and Reliability of Psychological Flexibility Measures in Children and Adults

The current study evaluated the reliability and validity of psychological flexibility self-report measures in typically developing individuals under and over 18 years of age. An independent-samples t test and a correlational analysis were conducted to evaluate the relationships among the CPFQ, CAMM, AFQ-Y, and AAQ-II. The results demonstrated that three of the four measures are reliable and valid for assessing psychological flexibility across the two groups. This study adds to the literature on whether measures designed for younger populations can be used reliably by practitioners with older populations.
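
The statistical pairing described above, an independent-samples t test plus a correlational analysis, can be sketched as follows. This is an illustrative example only: the score arrays, group sizes, and measure pairing are hypothetical stand-ins named after the questionnaires in the abstract, not the study's actual data or script.

# Illustrative sketch: independent-samples t test plus a correlation,
# the two analyses named in the abstract. All scores are hypothetical.
import numpy as np
from scipy import stats

# Hypothetical totals on one flexibility measure for each age group.
under_18 = np.array([22, 25, 19, 30, 27, 24, 21, 28])
over_18 = np.array([26, 29, 23, 31, 28, 25, 30, 27])

# Independent-samples t test comparing the two groups' means.
t_stat, t_p = stats.ttest_ind(under_18, over_18)

# Pearson correlation between two measures completed by the same respondents
# (e.g. hypothetical CAMM and AFQ-Y totals), as a convergent-validity check.
camm = np.array([31, 28, 35, 22, 27, 30, 33, 26])
afqy = np.array([12, 15, 10, 20, 16, 13, 11, 17])
r, r_p = stats.pearsonr(camm, afqy)

print(f"t = {t_stat:.2f} (p = {t_p:.3f}); r = {r:.2f} (p = {r_p:.3f})")

Replacing the hypothetical arrays with the real questionnaire totals for each age group would reproduce the general shape of the reported analysis.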

Credibility, Validity, and Assumptions in Program Evaluation Methodology

This book focuses on assumptions underlying methods choice in program evaluation. Credible program evaluation extends beyond the accuracy of research designs to include arguments justifying the appropriateness of methods. An important part of this justification is explaining the assumptions made about the validity of methods. This book provides a framework for understanding methodological assumptions, identifying the decisions made at each stage of the evaluation process, the major forms of validity affected by those decisions, and the preconditions for and assumptions about those validities. Though the selection of appropriate research methodology is not a new topic within social development research, previous publications address only the advantages and disadvantages of various methods and when to use them. This book goes beyond other publications to analyze the assumptions underlying actual methodological choices in evaluation studies and how these eventually influence evaluation quality. The analysis offered is supported by a collation of assumptions collected from a case study of 34 evaluations. Due to its in-depth analysis, strong theoretical basis, and practice examples, Credibility, Validity, and Assumptions is a must-have resource for researchers, students, university professors, and practitioners in program evaluation. Importantly, it provides tools for the application of appropriate research methods in program evaluation.

Validity and Validation in Social, Behavioral, and Health Sciences

This book combines an overview of validity theory and trends in validation practices, and a review of standards and guidelines in several international jurisdictions, with a research synthesis of the validity evidence in different research areas. An overview of theory is both useful and timely, in view of the increased use of tests and measures for decision-making, ranking, and policy purposes in large-scale testing, assessment, and social indicators and quality-of-life research. Research synthesis is needed to help us assemble, critically appraise, and integrate the overwhelming volume of research on validity in different contexts. Rather than examining whether any given measure is "valid", the focus is on a critical appraisal of the kinds of validity evidence reported in the published research literature. The five sources of validity evidence discussed are: content-related evidence, response processes, internal structure, associations with other variables, and consequences. The 15 syntheses included here represent a broad sampling of psychosocial, health, medical, and educational research settings, giving an extensive evidential basis to build upon earlier studies. The book concludes with a meta-synthesis of the 15 syntheses and a discussion of current thinking on validation practices by leading experts in the field.