Social Behavior Research and Practice – Open Journal
ISSN 2474-8927

Shedding the Quantitative Imperative

David Philip Arthur Craig*

*Corresponding author: David Philip Arthur Craig, PhD
Department of Psychology, Laboratory of Comparative Psychology, Oklahoma State University, 116 N. Murray, Stillwater, Oklahoma 74078, USA; Tel. (405) 744-6561; Cell: (352) 538-4269; E-mail: dpac007@gmail.com

I am, unfortunately, trained in the quantitative imperative associated with Pythagoreanism and adopted by scientists during the development of common scales for length, mass, temperature, and time. While interested in complex and social behaviors, I have found myself at odds with my chosen field of psychology due to what is considered a requirement of science: true quantitative measurement. My background in physics impressed upon me a different definition of measurement than the one I have become accustomed to within psychology, especially within the social and cognitive subfields. The quantitative imperative holds that knowledge requires measurement, and that measurement of an attribute requires the discovery of ratios of magnitudes on a continuous scale (i.e., a = rb, where b is a unit magnitude and r is a real number expressing the ratio of a to b). From this perspective, knowledge requires quantification, and quantification requires units that describe continuous scales.
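
Stated formally (a paraphrase of the classical ratio definition, with symbols chosen here only for illustration):

\[
a = r\,b, \qquad r \in \mathbb{R}^{+},
\]

where a is the magnitude being measured, b is a unit magnitude of the same attribute, and r is the ratio of a to b. To report a mass of 3.2 kg, for example, is to claim that the mass stands in the ratio 3.2 to the kilogram.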

In 1940, the Ferguson Committee, a group composed largely of physicists, concluded its evaluation of whether the measurement attempts of psychometricians constituted genuine science.1 The committee did not decide in the psychometricians’ favor, and psychology was officially rejected as a scientific field by the greater scientific community largely because of the quantitative imperative. Despite the modern widespread treatment of Likert or Item Response Theory scales as quantitative, the psychometrician’s measurement is not a discovery of ratios of magnitudes, does not use physical units, and thus departs from the historical definition of measurement that has been the scientific standard since Euclid. For an observation to be continuous, magnitudes of the measured attribute must satisfy the associative, commutative, transitive, and density properties, and every division of the scale into a lower and an upper set must determine a boundary magnitude.2 However, it is the density property (i.e., that a scale be infinitely divisible) that is critical for a scale to be not just additive but also continuous. It was the density property that drew Stevens’3 focus when, after the Ferguson Committee rejected psychology’s measurement as science, he redefined measurement as the assignment of numerals to objects or events according to rule.
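
In roughly Hölder’s terms (a compressed paraphrase of the conditions cited above, not the full axiom set), density requires that

\[
a < b \;\Rightarrow\; \exists\, c \ \text{such that}\ a < c < b,
\]

while continuity requires that every division of the magnitudes into a lower set and an upper set determine a boundary magnitude. A Likert-type score fails the density condition immediately: no admissible response lies strictly between the categories 3 and 4 of a 5-point item.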

Stevens understood the importance of the density requirement of continuity and separated his nominal and ordinal scales (which were allegedly additive) from his interval and ratio scales (which were allegedly continuous) before identifying the statistics mathematically permissible for each of his new scales of “measurement.” Psychology believed that the operationalism and representationalism of Stevens’ definition permitted true quantitative measurement, and it developed a series of quantitative instruments and analysis methods without ever testing the hypothesis that these scales (and the actual phenomena/constructs under investigation) were quantitative. Rather than test the quantitative hypothesis, as had previously been required within physics, psychologists and psychometricians, following Russell, Campbell, Nagel, and Stevens, redefined measurement within psychology in a manner that departed from reality and tradition in favor of positivist philosophy. Doing so shifted the scrutiny of psychology’s alleged quantification to questions of permissible statistics and meaningfulness while creating a culture that has systematically overlooked the critical assumption that the constructs and measurements within psychology are quantitative. With this new definition of quantitative measurement, psychology appeared to be a science, not because the quantitative hypothesis had been tested, but because psychology claimed itself to be a quantitative science by virtue of using quantitative methods of analysis. This is an obviously circular argument.

After the Ferguson Committee, psychology was faced with two obvious options: reject the quantitative imperative, or adhere to the standards of science. Skinner chose the latter, for inter-response time is a quantitative measure (with continuous units of time), and by working without proxies the radical behaviorists employed no operationalism and thus did not assume the construct under investigation to be quantitative. The radical behaviorists avoided the quantitative assumption made by the psychometricians and continued the scientific tradition of performing idemnotic rather than vaganotic measurement.4 Unlike the radical behaviorists, the neobehaviorists and eventually the cognitivists began inferring internal processes from their quantitative measurements (e.g., using reaction time to infer visual attention) but did not demonstrate that the internal processes were also quantitative. Using a quantitative measure to make inferences about what may well be qualitative internal events cannot be described as consistent theory or practice; indeed, the neobehaviorists’ methodological rigor had been compromised by the psychometricians’ choice of the former option: to reject the quantitative imperative. However, while the psychometricians rejected the quantitative imperative by turning to measures that are obviously not additive (let alone continuous), they recognized that acceptance by the greater sciences hinged on appearing to conform to the quantitative imperative.

The psychometricians selected a third, and rather damning, option: to partially reject the quantitative imperative by performing false quantitative measurement and then reaping the benefits of allegedly performing quantitative measurement. With poor measurement came poor data, and the resulting small effects required further elucidation and innovation. So began the psychometricians’ development of allegedly quantitative statistical analyses, which are now misused and misunderstood to the point that the majority of their users cannot accurately describe a p-value. For all its robustness, null hypothesis significance testing has coincided with a decrease in replicability, to the point that over 75% of major social psychology experiments fail to replicate. P-values have already been banned from at least one social psychology journal,5 possibly because we are beginning to realize that fraudulent quantified measurement is not a solution to the measurement concerns of the psychometrician.
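
For reference, the quantity in question is simply

\[
p = P\left(T \geq t_{\mathrm{obs}} \mid H_{0}\right),
\]

the probability, computed under the null hypothesis, of obtaining a test statistic at least as extreme as the one observed; it is neither the probability that the null hypothesis is true nor the probability that the result will replicate.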

I affectionately label the cognitive dissonance arising from the psychometricians’ actual versus supposed measurement as physics envy, though Michell’s6 use of the term pathology is far more compelling. Rather than develop measurement within the tradition of physics, psychology outside of radical behaviorism created a new tradition of measurement, fell victim to equivocation, began assuming itself to be a science, pioneered a series of quantitative analyses to further appear to be a quantitative science, labeled outside criticism from the greater scientific community as conspiracy, and cemented a pathology that has systematically overlooked the fact that psychology’s measurement is not continuous, additive, or quantitative. This thought disorder was likely elicited by the quantitative imperative; rather than accept the seemingly bleak prospects of genuine quantitative measurement, it was apparently easier to redefine quantitative measurement and break away from scientific tradition.

While trained in the quantitative imperative, I now recognize that psychology (especially its social subfields) must reject the quantitative imperative and Pythagoreanism. Michell7 makes an excellent point about the philosophy of realist science: any mandate that imposes limitations on investigation should not be accepted without critical inquiry, and the quantitative imperative has narrowed our conception of knowledge solely to quantitative measurement. Science is not merely experimentation and measurement; science is also a process of critical inquiry, and to define an enterprise as a science simply on the basis of its measurement is inappropriate. The main difference between quantification and qualification is instrumental; qualitative data are less precise estimates of quantifiable information. For example, a qualitative measure such as Skinner’s8 cumulative curve did not precisely measure the amount of time between responses, but Schneider9 eventually developed an apparatus that could precisely measure inter-response time and thereby allowed quantification of a previously qualitative measure. Thus, dismissing qualitative data may be inappropriate, because this information may eventually be used to develop quantitative information.
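
As a concrete illustration (the timestamps here are hypothetical), given response times t_1, t_2, ..., t_n recorded in seconds, the inter-response times are

\[
\mathrm{IRT}_{i} = t_{i+1} - t_{i}, \qquad i = 1, \ldots, n-1,
\]

so responses at t = 2.0, 3.5, and 7.1 s yield inter-response times of 1.5 s and 3.6 s, quantities expressed in a genuine physical unit rather than read qualitatively off the slope of a cumulative curve.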

However, quantum theory and Planck units complicate our understanding of continuity via the density property. The density property requires that a scale be infinitely divisible, yet, by this criterion, only measures of space-time can even appear continuous at the macro level. Planck units mark the smallest meaningful value of each measure (with the exception of the Planck temperature, which is a maximum) before quantum effects become relevant and smaller divisions become physically meaningless. When Planck units are considered, the density requirement cannot be met even by the measures generally interpreted as continuous at the macro level. Thus, quantum theory seemingly indicates that all scales should be interpreted as discrete. Regardless of the metaphysical definitions of measurement, measurement is simply counting. We may count milliseconds, kilometers, or micrograms, but the actuality of our observations is that they occur discretely and should be treated as such.
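
For concreteness, the standard expressions for the Planck length, time, and temperature are

\[
\ell_{P} = \sqrt{\frac{\hbar G}{c^{3}}} \approx 1.6 \times 10^{-35}\ \mathrm{m}, \qquad
t_{P} = \sqrt{\frac{\hbar G}{c^{5}}} \approx 5.4 \times 10^{-44}\ \mathrm{s}, \qquad
T_{P} = \sqrt{\frac{\hbar c^{5}}{G k_{B}^{2}}} \approx 1.4 \times 10^{32}\ \mathrm{K},
\]

where \hbar is the reduced Planck constant, G the gravitational constant, c the speed of light, and k_B the Boltzmann constant; these are the scales at which quantum (gravitational) effects are expected to dominate and below which (or, for temperature, above which) further subdivision has no accepted physical meaning.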

It is my hope that more social behavior journals develop cultures accepting of qualitative data and critically evaluate the assumptions inherent in quantitative measurement, especially when considering quantitative data analyses that depend on continuity assumptions. The quantitative imperative is archaic and should be rejected on the basis of quantum theory and its implications; any attempt to dress up truly qualitative data as quantitative introduces a flood of assumptions that affect not just the data analysis but also the implications and inferences drawn from it. It is my hope that more social behavior journals develop cultures accepting of individual analyses, for Stevens was correct about permissibility: qualitative data do not license the multiplicative operations underlying group and aggregate analyses. After all, analyses based on group averages are not permissible when the density property is not met by our observations. It is my hope that more social behavior journals develop cultures accepting of alternative data analysis methods, for null hypothesis significance testing was popularized within psychology under the pressure of the quantitative imperative. With the rise of null hypothesis significance testing, we have observed a decline in replicability, single-subject designs, and individual analyses. It is my hope that more social behavior journals develop cultures of consistent metaphysics and encourage the avoidance of hypocritical measurement and methods. Qualitative or discrete data should be analyzed appropriately; the robustness of a test does not address underlying theoretical inconsistencies and certainly does not justify the continued denial of untested quantitative hypotheses. At the very least, it is my hope that more social behavior journals develop cultures that embrace critical inquiry over the quantitative imperative.

1. Humphry S. Understanding measurement in light of its origins. Front Psychol. 2013; 4: 113. doi: 10.3389/fpsyg.2013.00113

2. Michell J, Ernst C. The axioms of quantity and the theory of measurement: Translated from Part I of Otto Hölder’s German Text “Die Axiome der Quantität und die Lehre vom Mass”. Journal of Mathematical Psychology. 1996; 40(3): 235-252. doi: 10.1006/jmps.1996.0023

3. Stevens SS. On the theory of scales of measurement. Science. 1946; 103: 677-680. doi: 10.1126/science.103.2684.677

4. Mace C, Kratochwill T. The individual subject in behavioral analysis research. In: Valsiner J, ed. The Individual Subject and Scientific Psychology. New York, USA: Plenum; 1986.

5. Woolston C. Psychology journal bans P values. Nature. 2015; 519(7541): 9. doi: 10.1038/519009f

6. Michell J. Quantitative science and the definition of measurement in psychology. Br J Psychol. 1997; 88: 355-383. doi: 10.1111/j.2044-8295.1997.tb02641.x

7. Michell J. The place of qualitative research in psychology. Qual Res Psychol. 2004; 1(4): 307-319. doi: 10.1191/1478088704qp020oa

8. Skinner BF. The Behavior of Organisms: An Experimental Analysis. New York, USA: Appleton-Century-Crofts; 1938: 457.

9. Schneider BA. A two-state analysis of fixed-interval responding in the pigeon. J Exp Anal Behav. 1969; 12: 677-687. doi: 10.1901/jeab.1969.12-677
