

J Athl Train. 2010 Jan–Feb; 45(1): 98–100.

Study/Experimental/Research Design: Much More Than Statistics

Kenneth L. Knight

Brigham Young University, Provo, UT

Abstract

Context:

The purpose of study, experimental, or research design in scientific manuscripts has changed significantly over the years. It has evolved from an explanation of the design of the experiment (ie, data gathering or acquisition) to an explanation of the statistical analysis. This practice makes "Methods" sections hard to read and understand.

Objective:

To clarify the difference between study design and statistical analysis, to show the advantages of a properly written study design on article comprehension, and to encourage authors to correctly describe study designs.

Clarification:

The role of study design is explored from the introduction of the concept by Fisher through modern-day scientists and the AMA Manual of Style. At one time, when experiments were simpler, the study design and statistical design were identical or very similar. With the complex research that is common today, which often includes manipulating variables to create new variables and the multiple (and different) analyses of a single data set, data collection is very different from statistical design. Thus, both a study design and a statistical design are necessary.

Advantages:

Scientific manuscripts will be much easier to read and comprehend. A proper experimental design serves as a road map to the study methods, helping readers to understand more clearly how the data were obtained and, therefore, assisting them in properly analyzing the results.

Keywords: scientific writing, scholarly communication

Study, experimental, or research design is the backbone of good research. It directs the experiment by orchestrating data collection, defines the statistical analysis of the resultant data, and guides the interpretation of the results. When properly described in the written report of the experiment, it serves as a road map to readers,1 helping them negotiate the "Methods" section, and, thus, it improves the clarity of communication between authors and readers.

A growing trend is to equate study design with only the statistical analysis of the data. The design statement typically is placed at the end of the "Methods" section as a subsection called "Experimental Design" or as part of a subsection called "Data Analysis." This placement, however, equates experimental design and statistical analysis, minimizing the effect of experimental design on the planning and reporting of an experiment. This linkage is inappropriate, because some of the elements of the study design that should be described at the beginning of the "Methods" section are instead placed in the "Statistical Analysis" section or, worse, are absent from the manuscript entirely.

Have you ever interrupted your reading of the "Methods" to sketch out the variables in the margins of the paper as you attempt to understand how they all fit together? Or have you jumped back and forth from the early paragraphs of the "Methods" section to the "Statistics" section to try to understand which variables were collected and when? These efforts would be unnecessary if a road map at the beginning of the "Methods" section outlined how the independent variables were related, which dependent variables were measured, and when they were measured. When they were measured is especially important if the variables used in the statistical analysis were a subset of the measured variables or were computed from measured variables (such as change scores).

The purpose of this Communications article is to clarify the purpose and placement of study design elements in an experimental manuscript. Adopting these ideas may improve your science and surely will enhance the communication of that science. These ideas will make experimental manuscripts easier to read and understand and, therefore, will allow them to become part of readers' clinical decision making.

WHAT IS A STUDY (OR EXPERIMENTAL OR RESEARCH) DESIGN?

The terms study design, experimental design, and research design are often thought to be synonymous and are sometimes used interchangeably in a single paper. Avoid doing so. Use the term that is preferred by the style manual of the journal for which you are writing. Study design is the preferred term in the AMA Manual of Style,2 so I will use it here.

A study design is the architecture of an experimental study3 and a description of how the study was conducted,4 including all elements of how the data were obtained.5 The study design should be the first subsection of the "Methods" section in an experimental manuscript (see the Table). "Statistical Design" or, preferably, "Statistical Analysis" or "Data Analysis" should be the final subsection of the "Methods" section.

Table. Elements of a "Methods" Section

[Table image not reproduced here; see object i1062-6050-45-1-98-t01.jpg in the original article.]

The "Study Blueprint" subsection describes how the variables and participants interacted. It begins with a general statement of how the study was conducted (eg, crossover trials, parallel, or observational written report).two The second element, which usually begins with the second sentence, details the number of independent variables or factors, the levels of each variable, and their names. A shorthand way of doing so is with a statement such as "A two × 4 × viii factorial guided information drove." This tells u.s. that there were three independent variables (factors), with 2 levels of the first factor, four levels of the 2d factor, and eight levels of the third factor. Following is a sentence that names the levels of each factor: for example, "The independent variables were sex (male or female), preparation plan (eg, walking, running, weight lifting, or plyometrics), and time (two, four, 6, eight, 10, 15, xx, or xxx weeks)." Such an arroyo conspicuously outlines for readers how the various procedures fit into the overall construction and, therefore, enhances their agreement of how the data were collected. Thus, the design statement is a road map of the methods.

The dependent (or measurement or outcome) variables are then named. Details of how they were measured are not given at this point in the manuscript but are explained later in the "Instruments" and "Procedures" subsections.

Next is a paragraph detailing who the participants were and how they were selected, placed into groups, and assigned to a particular treatment order, if the experiment was a repeated-measures design. And although not a part of the design per se, a statement about obtaining written informed consent from participants and institutional review board approval is usually included in this subsection.

The nuts and bolts of the "Methods" section follow, including such things as equipment, materials, protocols, etc. These are beyond the scope of this commentary, however, and so will not be discussed.

The last part of the "Methods" section and the final part of the study design is the "Data Analysis" subsection. It begins with an explanation of any data manipulation, such as how data were combined or how new variables (eg, ratios or differences between collected variables) were calculated. Next, readers are told of the statistical measures used to analyze the data, such as a mixed 2 × 4 × 8 analysis of variance (ANOVA) with 2 between-groups factors (sex and training program) and 1 within-groups factor (time of measurement). Researchers should state and reference the statistical package and the procedure(s) within the package used to compute the statistics. (Various statistical packages perform analyses slightly differently, so it is important to know the package and specific procedure used.) This detail allows readers to judge the appropriateness of the statistical measures and the conclusions drawn from the data.

STATISTICAL DESIGN VERSUS STATISTICAL ANALYSIS

Avoid using the term statistical design. Statistical methods are only part of the overall design. The term gives too much emphasis to the statistics, which are important, but only one of many tools used in interpreting data and only part of the study design:

The most important problems in biostatistics are not expressed with statistical procedures. The issues are inherently scientific, rather than purely statistical, and relate to the architectural design of the research, not the numbers with which the data are cited and interpreted.6

Stated another way, "The justification for the analysis lies not in the data collected but in the manner in which the data were collected."3 "Without the solid foundation of a good design, the edifice of statistical analysis is unsafe."7(pp4–5)

The intertwining of study design and statistical analysis may have been caused (unintentionally) by R.A. Fisher, "… a genius who almost single-handedly created the foundations for modern statistical science."8 Most research did not involve statistics until Fisher invented the concepts and procedures of ANOVA (in 1921)9,10 and experimental design (in 1935).11 His books became standard references for scientists in many disciplines. As a result, many ANOVA books were titled Experimental Design (see, for example, Edwards12), and ANOVA courses taught in psychology and education departments included the words experimental design in their course titles.

Before the widespread use of computers to analyze data, designs were much simpler, and often there was little difference between study design and statistical analysis. So combining the 2 elements did not cause serious problems. This is no longer true, however, for 3 reasons: (1) Research studies are becoming more complex, with multiple independent and dependent variables. The procedures sections of these complex studies can be difficult to understand if your only reference point is the statistical analysis and design. (2) Dependent variables are often measured at different times. (3) How the data were collected is often not directly correlated with the statistical design.

For example, assume the goal is to determine the strength gain in novice and experienced athletes as a result of 3 strength training programs. Rate of change in strength is not a measurable variable; rather, it is calculated from strength measurements taken at various time intervals during the training. So the study design would be a 2 × 2 × 3 factorial with independent variables of time (pretest or posttest), experience (novice or advanced), and training (isokinetic, isotonic, or isometric) and a dependent variable of strength. The statistical design, however, would be a 2 × 3 factorial with independent variables of experience (novice or advanced) and training (isokinetic, isotonic, or isometric) and a dependent variable of strength gain. Note that data were collected according to a 3-factor design but were analyzed according to a 2-factor design and that the dependent variables were different. So a single design statement, usually a statistical design statement, would not communicate which data were collected or how. Readers would be left to figure out on their own how the data were collected.
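
As a minimal sketch of the distinction this example draws, the following Python code (not part of the original article; the data values and column names are hypothetical, and it assumes the pandas and statsmodels packages) collects strength under the 2 × 2 × 3 design, computes the gain score, and analyzes only the 2 × 3 design.

    # Sketch only: data collected as 2 (time) x 2 (experience) x 3 (training),
    # but analyzed as 2 (experience) x 3 (training) on the computed gain score.
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    df = pd.DataFrame({
        "experience": ["novice", "novice", "advanced", "advanced"] * 3,
        "training": ["isokinetic"] * 4 + ["isotonic"] * 4 + ["isometric"] * 4,
        "pre": [100, 95, 140, 150, 98, 102, 145, 148, 97, 99, 142, 151],       # pretest strength
        "post": [118, 110, 150, 166, 112, 115, 160, 158, 105, 108, 149, 160],  # posttest strength
    })

    # The analyzed dependent variable is not strength itself but the derived gain score.
    df["gain"] = df["post"] - df["pre"]

    # 2 x 3 between-subjects factorial ANOVA on the gain scores.
    model = smf.ols("gain ~ C(experience) * C(training)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))

A statistical design statement alone would describe only the last two lines; the study design statement is what tells readers where the gain scores came from.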

MULTIVARIATE RESEARCH AND THE NEED FOR STUDY DESIGNS

With the advent of electronic data gathering and computerized data handling and analysis, research projects have increased in complexity. Many projects involve multiple dependent variables measured at different times, and, therefore, multiple design statements may be needed for both data collection and statistical analysis. Consider, for example, a study of the effects of heat and cold on neural inhibition. The variables of Hmax and Mmax are measured 3 times each: before, immediately after, and 30 minutes after a 20-minute treatment with heat or cold. Muscle temperature might be measured each minute before, during, and after the treatment. Although the minute-by-minute data are important for graphing temperature fluctuations during the procedure, only 3 temperatures (time 0, time 20, and time 50) are used for statistical analysis. A single dependent variable, the Hmax:Mmax ratio, is computed to illustrate neural inhibition. Again, a single statistical design statement would tell little about how the data were obtained. And in this example, separate design statements would be needed for the temperature measurements and the Hmax:Mmax measurements.
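
The following Python sketch (not part of the original article; all values and column names are invented for illustration) shows the two data manipulations described here: reducing the minute-by-minute temperatures to the 3 analyzed time points and computing the single Hmax:Mmax ratio from the two measured variables.

    # Sketch only: subset collected data for analysis and compute a derived ratio.
    import pandas as pd

    # Hypothetical minute-by-minute muscle temperature for one participant (0-50 min).
    temps = pd.DataFrame({
        "minute": list(range(51)),
        "temp_c": [36.5 - 0.1 * min(m, 20) + 0.05 * max(m - 20, 0) for m in range(51)],
    })
    analysis_temps = temps[temps["minute"].isin([0, 20, 50])]  # only times 0, 20, and 50 are analyzed

    # Hypothetical Hmax and Mmax before, immediately after, and 30 minutes after treatment.
    reflex = pd.DataFrame({
        "time": ["pre", "post", "post30"],
        "Hmax": [4.1, 2.6, 3.4],
        "Mmax": [8.0, 7.9, 8.1],
    })
    reflex["Hmax_Mmax"] = reflex["Hmax"] / reflex["Mmax"]  # the single analyzed variable

    print(analysis_temps)
    print(reflex)

Separate design statements for the temperature data and the reflex data would tell readers that both of these reductions happen before any statistics are computed.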

As stated earlier, drawing conclusions from the data depends more on how the data were measured than on how they were analyzed.3,6,7,13 So a single study design statement (or multiple such statements) at the beginning of the "Methods" section acts as a road map to the study and, thus, increases scientists' and readers' comprehension of how the experiment was conducted (ie, how the data were collected). Appropriate study design statements also increase the accuracy of conclusions drawn from the study.

CONCLUSIONS

The goal of scientific writing, or any writing, for that matter, is to communicate information. Including 2 design statements or subsections in scientific papers—1 to explain how the data were collected and another to explain how they were statistically analyzed—will improve the clarity of communication and bring praise from readers. To summarize:

  1. Purge from your thoughts and vocabulary the idea that experimental design and statistical design are synonymous.

  2. Study or experimental design plays a much broader role than simply defining and directing the statistical analysis of an experiment.

  3. A properly written study design serves as a road map to the "Methods" section of an experiment and, therefore, improves communication with the reader.

  4. Study design should include a description of the type of design used, each factor (and each level) involved in the experiment, and the time at which each measurement was made.

  5. Clarify when the variables involved in data collection and data analysis are different, such as when data analysis involves only a subset of a collected variable or a resultant variable from the mathematical manipulation of 2 or more collected variables.

Acknowledgments

Thanks to Thomas A. Cappaert, PhD, ATC, CSCS, CSE, for suggesting the link between R.A. Fisher and the melding of the concepts of research design and statistics.

REFERENCES

1. Knight K. L., Ingersoll C. D. Structure of a scholarly manuscript: 66 tips for what goes where. J Athl Train. 1996;31(3):201–206. [PMC free article] [PubMed] [Google Scholar]

2. Iverson C., Christiansen S., Flanagin A., et al. AMA Manual of Style: A Guide for Authors and Editors. New York, NY: Oxford University Press; 2007. 10th ed. [Google Scholar]

3. Altman D. G. Practical Statistics for Medical Research. New York, NY: Chapman & Hall; 1991. pp. 4–5. [Google Scholar]

4. Thomas J. R., Nelson J. K., Silverman S. J. Research Methods in Physical Activity. Champaign, IL: Human Kinetics; 2005. 5th ed. [Google Scholar]

5. Leedy P. D. Practical Research, Planning and Design. New York, NY: Macmillan Publishing; 1985. pp. 96–99. 3rd ed. [Google Scholar]

6. Feinstein A. R. Clinical biostatistics XXV: a survey of the statistical procedures in general medical journals. Clin Pharmacol Ther. 1974;15(1):97–107. [PubMed] [Google Scholar]

7. Schoolman H. M., Becktel J. M., Best W. R., Johnson A. F. Statistics in medical research: principles versus practices. J Lab Clin Med. 1968;71(3):357–367. [PubMed] [Google Scholar]

8. Hald A. A History of Mathematical Statistics. New York, NY: Wiley; 1998. [Google Scholar]

10. Fisher R. A. Statistical Methods for Research Workers. Edinburgh, Scotland: Oliver and Boyd; 1925. Cited by: Fisher RA, Bennett JH, eds. Statistical Methods, Experimental Design, and Scientific Inference: A Reissue of Statistical Methods for Research Workers, The Design of Experiments, and Statistical Methods and Scientific Inference. New York, NY: Oxford University Press; 1993. [Google Scholar]

11. Fisher R. A. The Design of Experiments. Edinburgh, Scotland: Oliver and Boyd; 1935. Cited by: Fisher RA, Bennett JH, eds. Statistical Methods, Experimental Design, and Scientific Inference: A Reissue of Statistical Methods for Research Workers, The Design of Experiments, and Statistical Methods and Scientific Inference. New York, NY: Oxford University Press; 1993. [Google Scholar]

12. Edwards A. L. Experimental Design in Psychological Research. New York, NY: Rinehart and Co; 1942. [Google Scholar]

13. Lang T. A., Secic M. How to Report Statistics in Medicine. Philadelphia, PA: American College of Physicians; 2006. p. 175. 2nd ed. [Google Scholar]


Articles from Journal of Athletic Training are provided here courtesy of National Athletic Trainers' Association



Source: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2808761/
