Speelman and McGann have published a paper in Frontiers titled “Statements About the Pervasiveness of Behavior Require Data About the Pervasiveness of Behavior.” This is a nice companion piece to our Persons as Effect Sizes paper. Generally, the argument we are all making is that one must be careful to focus on the individuals in one’s study. Aggregate statistics do not tell the entire story of one’s data. OOM can be used to analyze the data presented by Speelman and McGann, as their pervasiveness index is equivalent to the Percent Correct Classifications (PCC) index. They also discuss setting up thresholds for determining the number of people classified correctly according to expectation. In OOM this goal is accomplished with the Classification Imprecision option available in most analyses.
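To make the equivalence concrete, a Percent Correct Classifications (PCC) index is simply the percentage of persons whose observed response matches theoretical expectation. Below is a minimal sketch of that computation on hypothetical data; it illustrates the idea only and is not the OOM software's own implementation.

```python
# Sketch of a Percent Correct Classifications (PCC) computation on
# hypothetical data; the OOM program's algorithm may differ in detail.

def pcc(observed, expected):
    """Percentage of persons whose observed classification matches expectation."""
    matches = sum(o == e for o, e in zip(observed, expected))
    return 100 * matches / len(observed)

# Ten hypothetical persons: 1 = responded in the expected category, 0 = otherwise
observed = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]
expected = [1] * 10

print(pcc(observed, expected))  # 80.0
```

A pervasiveness index computed this way answers the person-centered question directly: 8 of 10 individuals behaved as expected.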
James Lamiell and Kate Slaney (Eds.) have published their new book, Problematic Research Practices and Inertia in Scientific Psychology: History, Sources, and Recommended Solutions. There are chapters on statistics, measurement, psychologists’ distaste for criticism, and the struggle to understand persons using aggregate methods. We have a chapter in which we use OOM to analyze data from a study on Dissociative Identity Disorder. We also address strategies to help connect mainstream researchers to OOM and to the ideas expressed in Lamiell and Slaney’s book.
The Personality Lab at OSU has published a paper titled “Persons as Effect Sizes” in Advances in Methods and Practices in Psychological Science. In this paper we demonstrate how OOM methods are used to answer the question “How many people in my study behaved or responded in a manner consistent with theoretical expectation?”
Dr. Frank Arocha has just published an article on scientific realism in the journal Theory & Psychology. The title of the article is “Scientific Realism and the Issue of Variability in Behavior.” Here’s a link to the abstract:
The paper is broad in scope and offers a clear exposition of important issues facing modern psychologists and how we might move forward from a realist perspective. This will be required reading in my courses at OSU.
A new version of the OOM software has been uploaded. A number of minor bugs have been fixed, and a new option for generating data from proportions and frequencies (contingency tables) has been added. A video demonstrating this new feature has been uploaded to the Instructional Videos page (see link to the right, or click here). Two new videos for editing multigrams have also been added. Please update your copy of the software, and let me know if you find any bugs or run into any issues while using it.
Here’s a pithy article (behind a paywall) by Kevin Weinfurt of Duke University in which he revisits Francis Bacon’s famous idols: https://science.sciencemag.org/content/367/6484/1312.full
Here’s my favorite quote: “And finally, the Idols of the Theater might be updated to include the uncritical adherence to systems of ritualized rules intended to automate the inductive activities of scientists” (p. 1312). One such system is of course Null Hypothesis Significance Testing (“p < .05”). I am hopeful OOM will encourage us to avoid statistical rituals and to instead always engage our data in a theoretically meaningful manner.
A sincere word of thanks to the faculty and staff of West Texas A&M University for hosting a talk on OOM last Friday, February 14th. I am particularly appreciative of John Richeson (an OSU alumnus!) and Mark Garrison for making the visit possible. West Texas A&M is growing and has a strong core of faculty…and, as a personally relevant fact, the university has an outstanding bowling program!
Thanks to Paul Barrett for alerting us to this newly published paper: Saylors, R., & Trafimow, D. (2020). Why the increasing use of complex causal models is a problem: On the danger sophisticated theoretical narratives pose to truth. Organizational Research Methods (in press), 1-14. https://doi.org/10.1177/1094428119893452 [paywall]
As pointed out by the authors, “As use of complex models increases, the joint probability a published model is true decreases.”
The paper comes with a calculator to compute said probability:
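The underlying arithmetic is easy to illustrate (this is a hedged sketch, not the authors' calculator): if each causal link in a model holds with some probability, and the links are treated as independent, the probability that the entire model is true is the product of the link probabilities, which shrinks rapidly as links are added.

```python
# Illustrative calculation (not the authors' calculator): the probability
# that an entire causal model is true, treating each link as independent.

def joint_truth_probability(link_probs):
    prob = 1.0
    for p in link_probs:
        prob *= p  # each additional link multiplies in another p <= 1
    return prob

# A model whose five links each hold with probability 0.8
# is entirely true with probability 0.8**5, roughly 0.33.
print(joint_truth_probability([0.8] * 5))
```

Even with individually plausible links, a five-link model under these assumptions is more likely false than true.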
An analogous concern in OOM is that as a path model increases in complexity, fewer and fewer individuals will be traceable through the model. It is easy to imagine a complex path model in which not a single person can be accurately traced through all of the links. What use would such a model be as an explication of causes and effects? Of course, this information can only be known if the researcher attempts to perform such person-centered analyses.
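In this person-centered spirit, one can simply count how many individuals conform to every link in a path model. A minimal sketch on hypothetical conformity data (the names and data here are invented for illustration; OOM's own tracing routines are more elaborate):

```python
# Hypothetical sketch: count persons traceable through every link of a
# path model. Each row is a person; entry j is True (1) if the person's
# observations conform to link j of the model.

def traceable_count(conformity):
    """Number of persons who conform to all links in the model."""
    return sum(all(links) for links in conformity)

persons = [
    [1, 1, 1],  # traceable through all three links
    [1, 0, 1],  # breaks down at link 2
    [1, 1, 0],  # breaks down at link 3
    [1, 1, 1],  # traceable through all three links
]

print(traceable_count(persons))  # 2
```

As the number of links grows, the count of fully traceable persons can only stay the same or fall, which is the person-level analogue of the shrinking joint probability.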
- October 27th, 2019. A special thanks to Chris Cunningham at the University of Tennessee-Chattanooga for his invitation to present at the 15th Annual River Cities I-O (RCIO) Psychology Conference, October 25-26, 2019. I was scheduled to present a talk titled “Person-centered data analyses: Observation Oriented Modeling as an alternative and rational data analytics approach.” Unfortunately, due to illness I was not able to attend, and our attempts to present via the internet were unsuccessful. The PowerPoint slides are nonetheless available upon request.
- Congratulations (!) to Valentine, Buchanan, Scofield, and Beauchamp on the publication of their paper “Beyond p values: Utilizing multiple methods to evaluate evidence” published in Behaviormetrika, 46(1), 121-144. They compare NHST, Bayes, and OOM methods for analyzing repeated measures data.
- March 20th, 2019. A special thanks (!) to David Trafimow and the faculty and students at New Mexico State University for hosting a talk on OOM. It was a pleasure and an honor to visit, especially given my affinity for the Desert Southwest. Godspeed to Dr. Trafimow and his colleagues as they continue to fight the good fight against NHST.
- January 22nd, 2019. Congratulations to Craig and Abramson on their recent publication covering Ordinal Pattern Analysis! It can be found in the International Journal of Comparative Psychology. Craig, D. P., & Abramson, C. I. (2018). Ordinal pattern analysis in comparative psychology – A flexible alternative to null hypothesis significance testing using an observation oriented modeling paradigm. International Journal of Comparative Psychology, 31. Retrieved from https://escholarship.org/uc/item/08w0c08s