
Formal Minimal and Maximal Definitions for Characterizing Mis, Dis, and Malinformation

Published on Oct 09, 2024

Abstract

In the wake of the 2016 U.S. presidential election and the U.K. Brexit referendum, the field of misinformation studies has rapidly expanded, leading to increased scholarly activity and significant public attention. However, this growth has been accompanied by substantial criticisms, particularly regarding the need for greater consensus on defining key concepts, inadequate conceptual differentiation, and low coherence within the field. The ongoing replication crisis in the social sciences further underscores the urgent need for clearly defined phenomena to enhance reproducibility. This paper addresses these concerns by introducing new minimal and maximal definitions of misinformation, disinformation, and malinformation, providing a more structured and theoretically informed approach to concept formation. We propose a minimal definition that frames these phenomena as communicative acts generating perceived harm due to an error in the message. Extending from this minimal definition, our maximal definitions incorporate contextual elements, including the message provider, goals, strategies, and tactics, to better differentiate between the diverse instantiations examined by specific research programs. By refining these definitions, this paper aims to provide a clearer conceptual foundation for future research, enhancing the field's coherence and enabling more targeted and effective interventions in research, policy, and practice.

Funding Statement: This work was supported by the Air Force Office of Scientific Research, funding award FA9550-23-1-0453.

Disclaimer: The views expressed in this publication are those of the authors and do not reflect the official policy or position of the United States Air Force, Department of Defense, or the United States Government.

The field of misinformation studies expanded rapidly following the 2016 United States presidential election and the United Kingdom's Brexit vote, driven by increased public awareness and concern about misinformation and disinformation (Broda and Strömbäck 2024). In response to these concerns, researchers began examining key attributes of misinformation, such as message components, modes of spread, and factors influencing individual susceptibility (Broda and Strömbäck 2024; Lazer et al. 2018).

Building on these findings, subsequent research efforts aimed to detect misinformation (Molina et al. 2021), develop interventions that reduce susceptibility (Bachmann and Valenzuela 2023; Lees et al. 2023; Roozenbeek and Van Der Linden 2019), and offer policy recommendations (Wardle and Derakhshan 2017). Despite the field's rapid growth and research productivity, critics argue that it struggles with establishing cumulative knowledge across disciplines and rigorously defining the core concepts of misinformation, disinformation, and malinformation (Domenico et al. 2021; Douglas and Sutton 2023; Hameleers 2023; Musi and Reed 2022).

Critics, for example, have highlighted significant conceptual issues in misinformation studies, particularly the need for greater consistency, coherence, and differentiation in the definitions of misinformation, disinformation, and malinformation (El Mikati et al. 2023; Harsin 2024; Vraga and Bode 2020; Wardle 2018). These deficiencies are critical because, in other fields, they are known to lead to indeterminate concepts, which increase the risks of reification, replication failures, and ineffective interventions (Ermakoff 2017), problems currently observed in misinformation studies (Dentith, Husting, and Orr 2024; Kessler and Bachmann 2022; McPhedran et al. 2023).

Such indeterminacies often arise from weaknesses in both minimal and maximal definitions. Minimal definitions, designed to provide a broad and general understanding, may fail to sufficiently differentiate the field’s explanatory domain from others if they lack clarity (Gerring and Barresi 2003; Hameleers 2023; Harsin 2024). Conversely, maximal definitions, which add specificity by incorporating more attributes, can become overly narrow and disconnected from the larger conceptual framework, leading to internal inconsistency and confusion. A systematic analysis of these definitions' fundamental dimensions can help develop clearer, more precise concepts that enhance understanding of misinformation's prevalence and inform research, practice, and policy.

This paper addresses these concerns using a processual, cognitive pragmatics framework to generate minimal and maximal definitions. The minimal definition establishes a shared basic understanding of the explanatory domain of misinformation studies while also better differentiating it from other fields. From here, we derive a series of maximal definitions by adding attributes to the minimal definition, creating specific referents for misinformation, disinformation, and malinformation that more closely align with the varied work of research programs and disciplines within the field.

This approach not only enhances differentiation across fields but also improves coherence and consistency within misinformation studies. By focusing on the reciprocal communicative process inherent in misinformation, disinformation, and malinformation, the cognitive pragmatics framework avoids reification errors and provides a nuanced understanding of communication mechanisms. This specificity allows for more precisely targeted interventions, reducing the risk of backfire effects and enhancing the overall effectiveness of strategies to mitigate misinformation.

BACKGROUND

Minimal And Maximal Approaches to Definitions

A concept’s attributes—characteristics or properties of a concept—and referents—real-world instances or examples—can be differentially combined to generate different types of definitions serving different functions. Minimal definitions possess few attributes but many referents, making them broad in scope but low in detail (Gerring and Barresi 2003). These definitions facilitate transdisciplinary research by providing a consistent conceptual meaning across disciplines and establishing a basic shared understanding within a field (Gerring 2012). They also differentiate a concept from related concepts within its field and distinguish its explanatory domain from other closely related fields (Gerring 2012).

In misinformation studies, minimal definitions are used to define misinformation, disinformation, and malinformation as messages containing error with differing intents (Chadwick and Stanyer 2022; Hameleers 2023; Krishna 2021). Wardle (2018) introduced the term 'information disorder' to capture these phenomena collectively, though the concept lacks detailed differentiation, illustrating the challenges minimal definitions can pose when applied without sufficient specificity.

Maximal definitions, in contrast, include many attributes but few referents, adding concrete details and specificity to minimal definitions (Gerring and Barresi 2003). As definitions become more specific, they naturally reduce the number of real-world examples they encompass and can be refined further to generate more specialized research questions (Gerring and Barresi 2003). These definitions are particularly useful within the field’s constitutive disciplines, which often have a narrower focus than the broader field.

Figure 1 illustrates the relationship between minimal and maximal definitions. Crucially, attributes should be systematically added to minimal definitions rather than through ad hoc processes when generating maximal definitions. Such a systematic approach minimizes the risk of incorporating excessively idiosyncratic details that could overly differentiate definitions across disciplines or research programs, potentially transforming them into unrelated phenomena and undermining the field’s overall coherence and consistency.

In misinformation studies, maximal definitions include terms like fake news (Molina et al. 2021), propaganda (Tandoc Jr., Lim, and Ling 2018; Zannettou et al. 2019), conspiracy theories (Kapantai et al. 2021; Zannettou et al. 2019), and trolling (Kapantai et al. 2021). Unfortunately, these maximal definitions are often only loosely connected to their minimal counterparts, which, critics claim, results in low coherence and integration and impedes the development of a coherent theoretical framework within misinformation studies (El Mikati et al. 2023; Wardle and Derakhshan 2017). This lack of integration further hinders the field's ability to develop cumulative knowledge and effective interventions (Chadwick and Stanyer 2022; Douglas and Sutton 2023).

Figure 1. Minimal and Maximal Definitions


For example, consider the current minimal definition of misinformation as “false or inaccurate information” (Chadwick and Stanyer 2022; Hameleers 2023; Krishna 2021). One group of researchers might expand this minimal definition by adding attributes focusing on the context in which the error occurs, such as the involvement of official institutions, leading to a maximal definition resembling propaganda (Nerino 2023). Another group of researchers might instead add attributes related to the message content and the channel, resulting in a maximal definition of health-related misinformation spread via social media (Moran, Swan, and Agajanian 2024).

Similarly, a third group of researchers could add attributes related to the intent of the message provider and the channel, defining a maximal definition of cyber fraud, where misinformation is intentionally crafted to deceive for financial gain, typically distributed via digital platforms like phishing emails or fraudulent websites (Holt et al. 2024). As a result, while these definitions all stem from the same minimal concept of misinformation, the ad hoc addition of attributes based on individual focus areas transforms them into distinct and seemingly unrelated phenomena, each with its own theoretical implications and research questions. This divergence demonstrates how unsystematic attribute addition can fragment the field, making it challenging to build a coherent and cumulative body of knowledge (Adams et al. 2023; Domenico et al. 2021; Musi and Reed 2022).

Existing Communications Approaches to Mis and Disinformation

Misinformation, disinformation, and malinformation are fundamentally communicative processes, and neglecting the mechanisms of human communication hinders our understanding of how they are created and spread. This oversight contributes to conceptual ambiguities, compounding issues from poorly defined minimal and maximal definitions. Without considering the nuances of message transmission—such as the intent behind the message, audience receptivity, and contextual cues—definitions risk becoming overly simplistic. Disinformation, for example, might be framed merely as incorrect information, ignoring the relational dynamics that drive its spread: the intention to deceive, a pre-existing willingness to trust, and the strategic framing that enables it. Consequently, interventions may focus only on the message content, overlooking the deeper communicative processes that sustain its spread and magnify its impact. Failing to incorporate these critical factors risks creating definitions disconnected from real-world contexts, weakening both theoretical frameworks and practical responses in the field.

While both Nerino (2023) and Wardle (2018) identify the roles of message providers and receivers as crucial for understanding disinformation, few maximal definitions address the context or individuals involved in spreading and receiving messages (Domenico et al. 2021; Zhou and Zhang 2007). Some research has begun to address this gap by integrating theories of deception and communication into misinformation studies, offering more nuanced insights (French, Storey, and Wallace 2023; Hameleers 2023; Lelo 2024; Zhou and Zhang 2007). For instance, French, Storey, and Wallace (2023), Hameleers (2023), and Zhou and Zhang (2007) apply Information Manipulation Theory, which examines conversational maxims—such as information quality, quantity, relevance, and communication properties—to disinformation. This approach helps unpack how messages are structured to deceive. Given that individuals tend to assume others are truthful (as suggested by Truth-Default Theory), they are often poor at detecting deception, a vulnerability exploited by those spreading disinformation (French, Storey, and Wallace 2023; Hameleers 2023; Zhou and Zhang 2007).

Lelo (2024) offers a different perspective, employing a praxeological framework that emphasizes the social construction of meaning, focusing on how message understanding is co-created between provider and receiver through shared context. This highlights that contextual understanding is essential to grasping the intended message beyond the content itself, underscoring the need for more comprehensive definitions that capture these communicative dynamics.

While the integration of deception and communication theories has significantly advanced the field, there remain critical gaps that future work must address. For instance, applications of Information Manipulation Theory often overlook the reciprocal nature of communication in disinformation. Current models treat disinformation as a one-way transmission from the provider to the receiver, assuming the message is understood as intended without considering feedback loops or how interpretation might vary. This unidirectional view neglects the complexities of communicative interactions, where messages are constantly negotiated and reinterpreted.

Moreover, existing frameworks do not clearly define what differentiates misinformation, disinformation, and malinformation, often failing to specify when a deceptive act transitions between these categories. Compounding this issue is the assumption that all deceptive communication is inherently harmful, yet not all instances generate negative consequences. Current theories lack a nuanced approach to distinguishing between types of deception based on their outcomes, which limits the field’s ability to fully capture the range of deceptive practices and their implications (Harsin 2024). Addressing these gaps would enhance our understanding of the communicative dynamics underlying misinformation and provide a stronger foundation for developing targeted interventions. Accomplishing this requires systematically redefining minimal and maximal definitions of misinformation, disinformation, and malinformation and integrating communicative mechanisms to better differentiate these concepts and guide effective interventions.

USING COGNITIVE PRAGMATICS TO DEFINE THE EXPLANATORY DOMAIN OF MISINFORMATION STUDIES

We adopt the communicative act framework from cognitive pragmatics rather than information manipulation theory to account for the reciprocal nature of communication and to situate the exchange in a larger social context. This framework explains mis, dis, and malinformation as communicatively constituted. Bara, Douthwaite, and Bara (2010) define cognitive pragmatics as “the study of the mental states of people who are engaged in communication” (p. 1). Examining these mental states requires studying the beliefs, motivations, goals, desires, and intentions of those engaged in communication as well as how they are expressed (Bara, Douthwaite, and Bara 2010). The communicative act provides the framing to investigate the speech act at the level of detail required in cognitive pragmatics.

Using a communicatively constituted cognitive pragmatics approach eliminates issues of reification in minimal and maximal definitions of misinformation, disinformation, and malinformation. Reification involves assigning causal powers and “thingness” to phenomena that are not concrete. In misinformation studies, one common instance of this fallacy reduces the communicative process to properties inherent in the message.

In a reified view of the message as misinformation, attributes add detail to the properties of the message (e.g., format, content, language) but ignore its context or communicative attributes. Cognitive pragmatics and the communicative act framework instead model misinformation and disinformation as a process of socially constructing meaning during a speech act as it occurs. This eliminates reification by relocating the unit of analysis from the message to the speech act. Additionally, through the attributes of the communicative act (explored in detail below), the integration of contextual, individual, and message properties becomes possible while increasing the consistency, coherence, and differentiation of concepts. This connects the minimal and maximal definitions along a spectrum of attributes in a fundamental shift away from current definitional approaches.

Minimal Definition: Providing Basic Attributes of the Communicative Act

Given the numerous conceptual issues surrounding misinformation, disinformation, and malinformation—as well as calls for their remedy—we propose a minimal definition for these phenomena that integrates existing communications literature and theory by highlighting communicative constitution. The minimal definition characterizes misinformation, disinformation, and malinformation along the dimensions of communicative constitution, perceived harm, and error. Specifically, we define the phenomenon that includes misinformation, disinformation, and malinformation as a communicative act that results in perceived harm due to error in the message. Each component is described in detail below. Importantly, the communicative act can occur in one-to-one, one-to-many, many-to-many, or many-to-one relationships.

Notably, the minimal definition describes one type of communicative act. The distinctions regarding error and perceived harm establish boundaries between the phenomena that misinformation studies addresses and those of other fields such as linguistics, communications, and persuasion studies. Viewing the larger communicative act heuristic as a whole allows for clear differentiation between the explanatory domain of misinformation studies (including misinformation, disinformation, and malinformation) and other types of deception frequently cited as counter-examples (e.g., white lies are messages that contain error but are not disinformation, as they do not generate harm).

Figure 2 depicts the full communicative act heuristic. The broader heuristic of communicative acts provides a way to categorize non-harmful deception (e.g., white lies or parents telling Santa stories) as related but different from misinformation without ad hoc adjustments to the definition. It draws clear boundaries around the concepts of misinformation, disinformation, and malinformation to contribute coherence to the field and differentiate from other types of error-based communicative acts.

Figure 2. Communicative Acts and Misinformation Studies

Communicative Act

Communicative acts include context-dependent utterances, behaviors, gestures, etc. made by the speaker to convey some information to the receiver and have that intention recognized as such (Bara, Douthwaite, and Bara 2010). Communicative acts describe interactions composed of agents with the intent to engage in behavioral games to construct meaning through the interaction (Bara, Douthwaite, and Bara 2010). Since communication is largely the activity of individuals socially constructing meaning based on shared knowledge, at least two agents must participate (Bara, Douthwaite, and Bara 2010).

Both roles (i.e., actor and partner) must be filled for a communicative act to occur: a speaker without a receiver is not communicating, since the meaning of the message is not being actively and socially constructed by the individuals involved. These agents must intentionally engage together in the behavior game; inferring information from an individual who does not intend to convey it is information extraction, as it does not meet the standards of intent and cooperation required of communication (Bara, Douthwaite, and Bara 2010).

The behavior game reinforces, teaches, and socializes individuals into the social statuses, roles, norms, and values of society through shared knowledge of game structures and mental representations (Bara, Douthwaite, and Bara 2010). The word “game” is used intentionally to capture the playful nature of interaction and the learning embedded in play while drawing on the social games described by Goffman (Bara, Douthwaite, and Bara 2010). Behavioral games use the stages of the communicative act to constrain interpretations of the speaker’s meaning (i.e., illocutionary act), the outcome they desire (i.e., perlocutionary act), and the range of cooperative and noncooperative responses (Bara, Douthwaite, and Bara 2010). All behavioral games include a component of learning or socialization that can occur within a culture, group, or couple, as long as both agents are interested in engaging in the game and the conditions are right for enacting it (e.g., professional behavior games require a work setting during work hours) (Bara, Douthwaite, and Bara 2010). Once initiated, behavior games continue until they reach their natural end state unless an agent decides to stop participating, which results in a loss of trust (Bara, Douthwaite, and Bara 2010).

The communicative act comprises five stages in which the speaker and partner engage to construct meaning from the interaction. The first stage is the expression act, in which the partner verifies that the speaker intends to communicate with them and begins to visualize the speaker’s mental state, including their beliefs (Bara, Douthwaite, and Bara 2010). Stage two is the speaker meaning, in which the partner visualizes the illocutionary act—the speaker’s intention in communicating—and the perlocutionary act—the speaker’s desired effect—by drawing on presumed shared beliefs between the agents and the institutional context of the behavior game (Bara, Douthwaite, and Bara 2010). Stage three is the communicative effort, in which the partner processes the speaker’s communication and its intentions based on shared knowledge and decides whether to continue the interaction or cooperate, the desired response for a successful communicative act (Bara, Douthwaite, and Bara 2010). Deception, if present, occurs in stage three. Stage four, the reaction stage, involves the partner producing their communicative intentions (Bara, Douthwaite, and Bara 2010). In stage five, the response stage, the partner communicates their response to the speaker, which may include a neutralization or explanation if the response is not preferred in the behavior game (Bara, Douthwaite, and Bara 2010).

As demonstrated in Figure 3, the communicative act is embedded in context and constrained by the beliefs of the agents engaged in the behavior game. The context of the situation dictates which behavior games are appropriate given the situational norms. Shared and mutual beliefs are crucial to successful behavior games: held in common by both agents, they allow the behavior game to socially construct the interaction, including the situational norms, values, and the social statuses of the agents in relation to each other, scaffolding the communicative act and its mutual understanding (Bara, Douthwaite, and Bara 2010). The common ground, the larger set of shared and mutual beliefs held by society that generates cultural communities, is also crucial, situating the behavior game and the agents' shared beliefs inside a broader cultural context (Bara, Douthwaite, and Bara 2010).

However, shared beliefs and common ground constrain meaning but do not ensure that the communicative act will be interpreted correctly, a distinction from Lelo (2024), who suggests that meaning is correctly derived from context. The speaker may take advantage of the perceived individual beliefs of the partner to deceive them, either through the contents of the message or the context in which they are engaging. This is a key distinction from information manipulation theory as used in misinformation studies (French, Storey, and Wallace 2023; Hameleers 2023), which locates error in the message or in the relevance or amount of information shared.

Figure 3. The Communicative Act in Cognitive Pragmatics

Harm

We include harm as a key attribute of our minimal definition to address van Doorn's (2023) call to expand considerations of harm beyond belief in false information. Within the zemiology literature, the concept of harm captures the direct and indirect consequences (intended or unintended) of actions beyond their criminal liability (Hillyard and Tombs 2007). Harm includes multiple dimensions: physical, emotional/psychological, economic/financial, and cultural. The scale of harm additively measures the number of individuals affected, while the scope of harm multiplicatively measures the number of categories of harm experienced. The scale of harm also captures whether harm occurs at the individual, organizational, institutional, or structural level, or a combination of these.
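The additive/multiplicative distinction can be sketched formally. The notation below is our illustration and is not drawn from the zemiology literature; it simply makes explicit what "additive" and "multiplicative" mean here:

```latex
% Hypothetical notation (not from the cited sources):
% n_l = number of individuals affected at level l (individual,
%       organizational, institutional, structural);
% k   = number of harm categories experienced (physical,
%       emotional/psychological, economic/financial, cultural).
\mathrm{scale} = \sum_{\ell} n_{\ell}, \qquad
\mathrm{scope} = k, \qquad
\mathrm{overall\ burden} \propto \mathrm{scale} \times \mathrm{scope}
```

On this sketch, a message that financially harms 1,000 individuals has scale 1,000 and scope 1, while one that harms the same individuals both financially and psychologically keeps the same scale but doubles the scope factor.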

Figure 4 depicts the scale of harm with specific examples. These dimensions of harm have also been informally identified in misinformation studies literature as consequences (Adams et al. 2023; Chadwick and Stanyer 2022; Domenico et al. 2021; Jaster and Lanius 2021; Kapantai et al. 2021; van der Linden and Kyrychenko 2024).

Figure 4. Scale and Scope of Harm

Physical harm includes death, injury, illness, and natural disasters (Canning and Tombs 2021). A key component of physical harm is that it is preventable and can occur indirectly, through blocked access to a healthy diet, sufficient exercise, a good quality of life, health care, or proper and safe shelter and work environments (Canning and Tombs 2021; Hillyard and Tombs 2007). In misinformation studies, an example of individual-level physical harm is infection with a preventable disease as a result of non-vaccination after consuming vaccine misinformation (van der Linden and Kyrychenko 2024). Figure 5 illustrates the four dimensions of harm with misinformation examples.

Figure 5. Types of Harm Generated by Error Based Communicative Acts

Economic and financial harm relate to poverty or economic hardship at either the individual (i.e., financial harm) or social (i.e., economic harm) level (Canning and Tombs 2021). Economic and financial harm may be caused by malpractice or policy at a corporate or government level (Canning and Tombs 2021; Hillyard and Tombs 2007). At the individual level, financial harm includes both temporary conditions (e.g., theft, fraud, unemployment) and chronic conditions (e.g., prolonged unemployment, high cost of living, and blocked access to basic human needs including education, healthcare, and transportation) that compose poverty (Canning and Tombs 2021; Hillyard and Tombs 2007). Economic harm at the social level includes financial harm that has a broader social impact, affecting communities or societies, with a policy component implicated in the widespread poverty (Canning and Tombs 2021). In the misinformation studies literature, companies experience financial harm due to changes in the stock market and consumer behavior related to misinformation about products (Domenico et al. 2021).

Psychological and emotional harm include both acute traumatic experiences and chronic ongoing stressors that negatively impact mental health and wellness regardless of a mental health diagnosis (Canning and Tombs 2021; Hillyard and Tombs 2007). Specific psychological harms include mental health diagnoses such as anxiety, depression, post-traumatic stress disorder, self-harm, suicidal ideation, and phobias resulting from poor quality of life, including social isolation and loneliness, violent victimization or abuse, and insecurity (Canning and Tombs 2021). Further, poverty, housing insecurity, and poor working conditions can amplify psychological and emotional harm. During the COVID-19 pandemic, Asian Americans experienced increased hate crime and fearfulness as a group due to misinformation blaming them for the pandemic, an example of emotional harm in misinformation studies (Lantz and Wenger 2023).

Finally, cultural harm generally addresses “harms to culture, harms by culture, relational harms, and harms to cultural safety” (Boukli and Copson 2019). Cultural harm can be subdivided into two categories: harm to recognition and harm to autonomy. Harm to recognition includes the misrepresentation or denial of cultural identity, particularly for underrepresented groups, and relational harm, which excludes individuals from community experiences based on group membership (Canning and Tombs 2021; Hillyard and Tombs 2007). Exclusion from key social networks can have widespread effects, including an inability to obtain childcare or blocked educational access. Harm to autonomy includes blocked access to opportunities for self-actualization, including education, employment, and training (Canning and Tombs 2021). At a societal level, harm to democracy is an example of harm to autonomy that may result from disinformation (Adams et al. 2023; Chadwick and Stanyer 2022; Jaster and Lanius 2021).

Error

Error includes any inaccurate or false information inserted into the message by the message provider and can be deliberate (i.e., disinformation), non-deliberate (i.e., misinformation), or deliberate and contextual (i.e., malinformation). Error can also reside in the perception or understanding of the message receiver and in the context of the communicative act itself. Using the communicative act framework, we define error in relation to the common ground of the larger society. Thus, if the common ground does not contain “true knowledge,” then the communication of false information is not mis or disinformation under this definition.

In the communicative act literature, error is described both as error in the message and as error in the context or behavior game of the interaction. Error is the accidental communication of misleading information without the intent to deceive or be non-cooperative in the communicative act, and it may or may not be identified by the partner receiving the information (Bara, Douthwaite, and Bara 2010). Here, error constitutes misinformation, as it is the unintentional transmission of incorrect information. Figure 6 displays the location of error and knowledge for each instantiation. Misinformation localizes the error in the individual and shared beliefs of the speaker and partner as well as in the message itself. Alternatively, the message and speaker could hold knowledge that the partner interprets incorrectly, localizing the misinformation error in the individual beliefs of the partner.

Deceit is the intentional misrepresentation of information contained within the message or the context—the behavior game—in which the exchange occurs (Bara, Douthwaite, and Bara 2010). When the message provider uses deception successfully, they exploit the message receiver’s beliefs and lack of knowledge to convince them the deception is true (Bara, Douthwaite, and Bara 2010). Since deceit is the intentional transmission of incorrect information (or an incorrect representation of the context), it is disinformation (Bara, Douthwaite, and Bara 2010). Here, the error is localized in the message and in the individual and mutual beliefs of the partner. In malinformation, the error in the message is contextual, which leads to errors in the individual and mutual beliefs of the partner.

Figure 6. The Communicative Act and Mis, Dis, and Malinformation 

Maximal Definition: Adding Contextual Attributes to the Communicative Act

We add contextual attributes to the minimal definition of harmful error-based communicative acts to generate maximal definitions. The resulting heuristic is meant as a tool to organize communicative acts, including misinformation, disinformation, and malinformation, into similar categories with exemplar referents. It is not exhaustive, but it does help to view the full range of referents in relation to each other as logical derivations from the minimal definition. It is informed by current research, debates, and criticisms in the misinformation studies field, and the levels in the heuristic are grounded in the literature. Figure 7 depicts the full contextual heuristic of the harmful error-based communicative act. It adds attributes and context to the minimal definition to generate maximal definitions (Gerring and Barresi 2003). Individual disciplines can add further attributes and contextual details to generate ideal types of misinformation, disinformation, or malinformation relevant to their explanatory domain.

The first level of the heuristic addresses the institutional context in which the communicative act occurs. Bara, Douthwaite, and Bara (2010) note that individuals communicate to meet needs, much as institutions organize to meet the needs of societies (Turner 1997). The categories at the institutional level are the reward-based context; the political ideology-based context; the entertainment, informational, and instructional-based context; and the routine interaction-based context. The institutional context shapes many broad sociocultural factors, including the common ground and mutual beliefs of the message provider and message receiver. These institutional contexts are identified in the literature as motivations for sharing disinformation and misinformation.

Altay et al. (2023) and Douglas and Sutton (2023) note that a primary motivation for sharing conspiracy theories is to advance a political agenda, which fits neatly into the political ideology institutional context. Within the reward-based context, Domenico (2024), Molina et al. (2021), and Tandoc Jr., Lim, and Ling (2018) identify the pursuit of financial gain as a primary motivation for spreading disinformation. Adams et al. (2023), Deutschmann (2020), and Domenico (2024) discuss informational, educational, and entertainment motivations to spread disinformation. Finally, within the routine interaction context, Domenico (2024) and Douglas and Sutton (2023) note that disinformation may be shared for social reasons and is typically shared within one’s social group.

Figure 7. Harmful Error Based Communicative Act Typology

The next level of the heuristic categorizes the social status or category of the message provider. Within the reward-based context, the message provider may have the social status of an economic entity (e.g., clickbait; see Pengnate, Chen, and Young 2021) or a non-economic entity. Within the political ideology-based context, the message provider has either a government or non-government social status. In the entertainment, informational, and instructional-based context, the message provider has the social category of media, journalism, science, or education. Finally, in the routine interaction-based context, the social status of the message provider is either in-group or out-group (e.g., McDonald and Ma 2016). The social status of the message provider indicates general information about the interaction to the message receiver and feeds into the situational norms and values that dictate and constrain the interaction.

The next level of the heuristic categorizes the sociological role the message provider is taking. This includes the situational norms and values the role invokes and the situational means, strategies, and tactics available within that role. Within the reward-based context, the role of the message provider is either criminal or non-criminal, regardless of the social status categorization (i.e., economic or non-economic entity). In the political ideology-based context with a government social status, the message provider may take on either an administrative or individual role. Within the non-government social status, the message provider has either an organizational role (e.g., the Institute for Propaganda Analysis using techniques of propaganda; see Bauer 2024) or an individual role. In the entertainment, informational, and instructional-based context, the media social status includes roles for traditional media (e.g., movies, magazines, radio) and new media (e.g., podcasts, blogs, and deepfakes; see Vaccari and Chadwick 2020). The journalism social status divides into roles for traditional journalism (e.g., The New York Times, The New York Post) and alternative journalism (e.g., Buzzfeed News). Allen et al. (2020) note that disinformation spread is becoming more common across alternative journalism sources on social media channels. The science social status includes roles for professional science and pseudoscience. During the pandemic, actors in pseudoscientific social roles spread misinformation and disinformation regarding vaccination and the dangers of COVID-19 (Chavda et al. 2022). Finally, the education social status includes roles for institutional education and individual education. In the routine interaction institutional context, the available roles are personal and professional in both the in-group and out-group social status categories.
Routine use of social media channels, including sharing misinformation with family and friends, is an example of the routine interaction context with in-group social status and a personal role (Buchanan 2020). Invisible Rulers details the motivation, methods, and impact of individuals spreading disinformation in routine social media interactions (DiResta 2024). Both the social status and social role levels of the heuristic constrain the behavior games scaffolding the communicative act.

The final level of the heuristic differentiates based on the nature of the error in the message. Each social role has a category for misinformation, disinformation, and malinformation. Disinformation includes errors in the message that are intentionally inserted. Misinformation is an error in the message that is accidental. Finally, malinformation includes errors that are intentionally inserted and are contextual (i.e., partial facts that are intended to deceive the message receiver and misrepresent the situation or topic).

Thus far, the heuristic has assumed the error is localized in the message itself (the last level of the heuristic). In a wolf in sheep’s clothing scenario, however, the error can also be located in the institutional context as part of an intentional deception (Bara, Douthwaite, and Bara 2010). For example, a cybercriminal seeking economic gain (institutional context: reward-based; social status: non-economic entity; role: criminal) may approach the message receiver by posing as a colleague invoking a routine interaction-based institutional context, in-group social status, and professional role. In this case, the error is located both in the message itself and in the context in which the message provider and receiver are engaging in the communicative act.

Holt et al.’s (2024) assessment of ideologically motivated cyberattacks demonstrates how to navigate the heuristic by adding attributes to the minimal definition to reach a maximal definition. Specifically, the cyberattacks they examine fall under our minimal definition of communicative acts that generate perceived harm due to error in the message. They discuss ideologically motivated cyberattacks that carry the attributes of political ideology-based contexts. Within this context, Holt et al. (2024) differentiate between actors with government and non-government social status and individual or administrative/organizational social roles (e.g., state-sponsored hacker groups, animal rights-associated groups, and individual hackers who identify with terrorist ideologies). Each of the referents mentioned aligns with disinformation as the intentional distribution of error in messages. Holt et al. (2024) further instantiate the referents by adding situational attributes that describe the mechanism of the communicative act (i.e., online communication) and components of the message to differentiate between referents (e.g., phishing and doxing).
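The movement from the minimal definition to a maximal definition can also be expressed as a simple data structure, with each level of the heuristic adding one attribute. The Python sketch below is our own illustration only (all class, field, and label names are hypothetical and not part of any published framework or API); it shows how a research program might encode referents, such as the state-sponsored cyberattack described above, so they can be compared against the minimal definition and against each other:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class CommunicativeAct:
    """Hypothetical encoding of the harmful error-based communicative act.

    The first two fields capture the minimal definition; the optional
    fields are the contextual attributes that move toward a maximal
    definition (heuristic levels: institutional context, social status,
    social role, error type).
    """

    # Minimal definition: perceived harm generated by error in the message.
    perceived_harm: bool
    error_in_message: bool

    # Contextual attributes added to reach a maximal definition.
    institutional_context: Optional[str] = None  # e.g., "political-ideology"
    social_status: Optional[str] = None          # e.g., "government"
    social_role: Optional[str] = None            # e.g., "administrative"
    error_type: Optional[str] = None             # "misinformation" | "disinformation" | "malinformation"

    def matches_minimal_definition(self) -> bool:
        """A referent belongs to the field's explanatory domain only if it
        satisfies both components of the minimal definition."""
        return self.perceived_harm and self.error_in_message


# Illustrative instantiation (labels are ours): an ideologically motivated,
# state-sponsored attack encoded as a maximal-definition referent.
state_sponsored_phishing = CommunicativeAct(
    perceived_harm=True,
    error_in_message=True,
    institutional_context="political-ideology",
    social_status="government",
    social_role="administrative",
    error_type="disinformation",
)
```

Encoding referents this way makes the logical relationship explicit: every maximal definition is a strict extension of the minimal one, so any referent that fails `matches_minimal_definition` falls outside the explanatory domain regardless of its contextual attributes.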

DISCUSSION

Existing criticisms of misinformation studies have primarily pointed to semantic debates and the issue of face validity. However, these criticisms are symptoms of much deeper conceptual issues (i.e., low consistency, coherence, and differentiation) that generate indeterminate concepts, leading to the fallacy of reification within the literature. Each of these issues carries severe consequences, contributing to a replication crisis and to the failure and backfire effects of interventions, problems that misinformation studies is currently facing (Bachmann and Valenzuela 2023; Dentith, Husting, and Orr 2024; Douglas and Sutton 2023; Kessler and Bachmann 2022; McPhedran et al. 2023). Our approach of offering a minimal definition and moving towards maximal definitions through the addition of attributes and context systematically addresses these issues. Specifically, increasing conceptual consistency, coherence, and differentiation results in clear boundaries for concepts and clear distinctions between concepts that are used consistently (i.e., it eliminates indeterminate concepts), in addition to clearly differentiating the explanatory domain of misinformation studies. Generating a processual minimal definition avoids reification because the attributes are not limited to the properties of the message but encompass the whole communication process.

Benefits of this Approach

The presented characterization of misinformation, disinformation, and malinformation and the accompanying heuristic address the conceptual concerns noted in the literature (i.e., indeterminate concepts and the fallacy of reification) by strengthening the dimensions of concept construction (i.e., consistency, coherence, and differentiation). The broad characterization of misinformation, disinformation, and malinformation as a communicative act that generates perceived harm due to error in the message provides a minimal definition that clearly differentiates the explanatory domain of the misinformation studies field and can be adopted across disciplines within the field. This addresses Harsin’s (2024) critique that it is unclear what misinformation studies uniquely seeks to explain.

Part of the appeal of a broader or minimal definition of the phenomena studied in misinformation studies is that it provides a unifying function and a baseline understanding of the phenomenon that is not discipline-specific (Gerring and Barresi 2003). In misinformation studies, this is particularly relevant for unifying discipline-specific maximal definitions into an overarching minimal definition that promotes general understanding. To date, with the exception of Wardle (2018), minimal definitions exist for misinformation, disinformation, and malinformation but not for a broader encompassing phenomenon that can be adopted across linguistic domains. While Wardle (2018) introduces the term “information disorder” as encompassing misinformation, disinformation, and malinformation (p. 954), she does not provide a clear definition. Our minimal definition improves consistency, coherence, and differentiation across the explanatory domains that compose misinformation studies, while the maximal definitions improve consistency, coherence, and differentiation within disciplinary explanatory domains.

In addition, the heuristic’s inclusion of context and additional attributes (i.e., institutional context, social status, and social role) provides a starting place for disciplines to generate ideal types or prototypes based on the scope of their explanatory domain. For example, Linvill and Warren (2024) explore the strategies and tactics of state-backed online trolls engaged in disinformation campaigns. Within the presented heuristic, these findings provide additional detail regarding the sociological role of the message provider and examples of wolf in sheep’s clothing contextual error. Disciplines may also build on this work by uncovering additional attributes within their explanatory domain to generate fully maximal definitions. A key benefit of the minimal and maximal approach to defining concepts is the ability to logically connect maximal definitions to minimal definitions through the addition of attributes. This improves the consistency, coherence, and differentiation of concepts defined using this approach (Gerring and Barresi 2003).

While existing typologies address motivations for spreading disinformation (Kapantai et al. 2021; Molina et al. 2021; Zhou and Zhang 2007), they do not frame these motivations in a larger institutional context. Situating the message provider’s intentions in entering the error-based communicative act within an institutional context connects literature framing the message provider or message with literature focused on the message provider and the context. Zhou and Zhang (2007), for example, address the motivations of message providers and the spread of disinformation separately but do not provide a way to unify them into a single model. By characterizing misinformation, disinformation, and malinformation as a communicative act, our maximal definitions and heuristic locate the error-based communicative act in an institutional context, which shapes the behavior game of the agents involved.

Further, characterizing misinformation, disinformation, and malinformation as communicative acts, a process rather than a concrete thing, eliminates the reification fallacy present in the literature. It reorients the focus to the exchange of information and the construction of meaning between individuals rather than reducing the phenomenon to the message’s contents alone. Moving forward, it will be essential for future research to continue examining misinformation, disinformation, and malinformation processually to avoid the fallacy of reification. This has the additional benefit of informing creative interventions that intervene at multiple points in the process. Because the processual model more faithfully represents the communication process, interventions grounded in it should be more targeted and, ideally, more successful.

Hypothesis Generation

The approach of coupling a minimal definition with maximal definitions that add attributes and context both unifies the field of misinformation studies and opens avenues for future research for disciplines within the field. Specifically, disciplines can use the maximal definitions in the heuristic as a starting point to further refine their empirical domain and generate relevant ideal types. With the minimal definition, disciplines can read misinformation research outside of their discipline and understand the framework, empirical scope, and processes related to misinformation. As a result, they can import this knowledge into their discipline and build on existing knowledge without redundant effort.

Limitations

While the current research exhibits numerous strengths and makes strong contributions to the literature, it is not without limitations. First, as part of generating minimal and maximal definitions of misinformation, disinformation, and malinformation, we recognize that others could follow a similar procedure and arrive at different definitions (Gerring and Barresi 2003). Indeed, many definitions and typologies exist in misinformation studies, including some that incorporate deception (El Mikati et al. 2023). Given our systematic process, we believe our contributions fill existing gaps and offer conceptually robust definitions according to the dimensions of conceptual analysis. Since our maximal definitions are not fully developed, we call on disciplines to uncover additional attributes within their explanatory domains to build on the contextual attributes and the attributes of the communicative act. Finally, our discussion was limited to communicative acts that generate perceived harm due to error in the message. We recognize that other types of error-based communicative acts exist, as well as truthful communicative acts that generate harm. Future research should pursue a deep exploration of these types of communicative acts.

Future Research

Future research should build on the definitions offered here to establish cumulative knowledge in three important ways. First, future research should further the conceptual work begun here within disciplines. Specifically, future research should further instantiate the heuristic to generate discipline-specific ideal types with an appropriate scope for the explanatory domain, resulting in increased consistency, coherence, and differentiation. This work will include determining the additional attributes that should be specified within each contextual space, including unpacking the components and attributes of the communicative act and error. Second, future research should unpack the processes underlying the minimal definition theoretically and empirically. For example, future research should map the theoretical process of the error-based communicative act that generates harm and work to validate it empirically. Finally, future research should examine the communicative acts that were beyond the scope of this paper for a better understanding of the process as a traditional communicative act (i.e., no error) or as deception/deceit that does not generate harm.

CONCLUSION

Given the key role concepts play in theory construction and the consequences of poor theory and concepts for replication efforts and interventions, it is crucial for fields such as misinformation studies, with high stakes and relevance, to assess the state of their concepts. Critics of misinformation studies identify issues of indeterminate concepts and the fallacy of reification in addition to the lack of a unifying minimal definition. As a result, we generated a minimal definition to scope the phenomena of interest to misinformation studies and added contextual attributes to move from the minimal definition to maximal definitions. The minimal definition provides a shared baseline understanding of the concepts across the field of misinformation studies. Disciplines and research programs can add further attributes to the maximal definitions provided to identify ideal types and prototypes within their explanatory domains. Further conceptual work is necessary to generate a thorough understanding of the phenomena and of how to create successful interventions and findings that replicate.
