A guide for critiquing research articles
The following is a list of criteria for evaluating (critiquing) a research article. Please note that you should first summarize the paper and then evaluate its different parts.
Most of the evaluation section should be devoted to the internal validity of the conclusions. At the end, add a section entitled "Changes in the design/procedures if I want to replicate this study." Attach a copy of the original article to your paper.
Click here to see an example of a research critique (this is how you start).
Click here to see the original article.
The following list is a guide for organizing your evaluation, and it is recommended that you follow this order. The list is long; you do not have to address every question, but you should address the highlighted ones. Some questions may not be relevant to your article.
Problem
1. Is there a statement of the problem?
2. Is the problem “researchable”? That is, can it be investigated through the collection and analysis of data?
3. Is background information on the problem presented?
4. Is the educational significance of the problem discussed?
5. Does the problem statement indicate the variables of interest and the specific relationship between those variables that is investigated? Where necessary, are variables directly or operationally defined?
Review of Related Literature
1. Is the review comprehensive?
2. Are all cited references relevant to the problem under investigation?
3. Are most of the sources primary, i.e., are there only a few or no secondary sources?
4. Have the references been critically analyzed and the results of various studies compared and contrasted, i.e., is the review more than a series of abstracts or annotations?
5. Does the review conclude with a brief summary of the literature and its implications for the problem investigated?
6. Do the implications discussed form an empirical or theoretical rationale for the hypotheses which follow?
Hypotheses
1. Are specific questions to be answered listed or specific hypotheses to be tested stated?
2. Does each hypothesis state an expected relationship or difference?
3. If necessary, are variables directly or operationally defined?
4. Is each hypothesis testable?
Subjects (Sample)
1. Are the size and major characteristics of the population studied described?
2. If a sample was selected, is the method of selecting the sample clearly described?
3. Is the method of sample selection described one that is likely to result in a representative, unbiased sample?
4. Did the researcher avoid the use of volunteers?
5. Are the size and major characteristics of the sample described?
6. Does the sample size meet the suggested guideline for minimum sample size appropriate for the method of research represented?
Instruments
1. Is the rationale given for the selection of the instruments (or measurements) used?
2. Is each instrument described in terms of purpose and content?
3. Are the instruments appropriate for measuring the intended variables?
4. Is evidence presented that indicates that each instrument is appropriate for the sample under study?
5. Is instrument validity discussed and coefficients given if appropriate?
6. Is reliability discussed in terms of type and size of reliability coefficients?
7. If appropriate, are subtest reliabilities given?
8. If an instrument was developed specifically for the study, are the procedures involved in its development and validation described?
9. If an instrument was developed specifically for the study, are administration, scoring or tabulating, and interpretation procedures fully described?
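When checking the reliability questions above, it can help to know how a commonly reported coefficient is actually computed. The sketch below implements Cronbach's alpha (a widely used internal-consistency coefficient) for a hypothetical score matrix; the data and function name are illustrative, not taken from any particular study.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 3-item instrument administered to 5 respondents
scores = np.array([
    [4, 5, 4],
    [3, 4, 3],
    [5, 5, 5],
    [2, 3, 2],
    [4, 4, 4],
])
alpha = cronbach_alpha(scores)  # a value near 1 indicates high internal consistency
```

A critique should check not only whether such a coefficient is reported, but whether its size is adequate for the decisions being made from the scores.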
Design and Procedure
1. Is the design appropriate for answering the questions or testing the hypotheses of the study?
2. Are the procedures described in sufficient detail to permit them to be replicated by another researcher?
3. If a pilot study was conducted, are its execution and results described as well as its impact on the subsequent study?
4. Are the control procedures described?
5. Did the researcher discuss or account for any potentially confounding variables that he or she was unable to control for?
Results
1. Are appropriate descriptive or inferential statistics presented?
2. Was the probability level (α) at which the results of the tests of significance were evaluated specified in advance of the data analyses?
3. If parametric tests were used, is there evidence that the researcher avoided violating the required assumptions for parametric tests?
4. Are the tests of significance described appropriate, given the hypotheses and design of the study?
5. Was every hypothesis tested?
6. Are the tests of significance interpreted using the appropriate degrees of freedom?
7. Are the results clearly presented?
8. Are the tables and figures (if any) well organized and easy to understand?
9. Are the data in each table and figure described in the text?
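To make questions 2 and 6 concrete: a significance level should be fixed before the analysis, and the reported test statistic should come with the correct degrees of freedom. The sketch below computes Welch's t statistic and its degrees of freedom for two hypothetical independent groups; the group names and scores are invented for illustration.

```python
import numpy as np

ALPHA = 0.05  # significance level fixed before looking at the data

def welch_t(a, b):
    """Welch's t statistic and degrees of freedom for two independent samples."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    va = a.var(ddof=1) / len(a)   # squared standard error, group a
    vb = b.var(ddof=1) / len(b)   # squared standard error, group b
    t = (a.mean() - b.mean()) / np.sqrt(va + vb)
    # Welch-Satterthwaite approximation to the degrees of freedom
    df = (va + vb) ** 2 / (va**2 / (len(a) - 1) + vb**2 / (len(b) - 1))
    return t, df

# Hypothetical treatment and control scores
treatment = [78, 85, 92, 88, 75, 83]
control = [70, 72, 81, 68, 74, 77]
t, df = welch_t(treatment, control)
```

A critique can then check that the article's reported t and df are consistent with its sample sizes, and that the decision rule (compare p with the pre-specified α) was not adjusted after the fact.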
Discussion (Conclusions and Recommendations)
1. Is each result discussed in terms of the original hypothesis to which it relates?
2. Is each result discussed in terms of its agreement or disagreement with previous results obtained by other researchers in other studies?
3. Are generalizations consistent with the results?
4. Are the possible effects of uncontrolled variables on the results discussed?
5. Are theoretical and practical implications of the findings discussed?
6. Are recommendations for future action made?
7. Are the suggestions for future action based on practical significance or on statistical significance only? That is, has the author avoided confusing practical and statistical significance?
8. Are recommendations for future research made?
Additional general questions to be answered in your critique.
1. What is (are) the research question(s) (or hypothesis)?
2. Describe the sample used in this study.
3. Describe the reliability and validity of all the instruments used.
4. What type of research is this? Explain.
5. How were the data analyzed?
6. What is (are) the major finding(s)?