[For votes to count, referees must reasonably explain why they voted as they did. Thus, please explain your vote. If you voted to publish pending minor changes, specify each change, why it is needed, and, possibly, how it should/could be done.]
I appreciate the authors tackling the topic of misinformation. It is a significant problem and deserves our attention. That said, there are issues with this study which, from my view, limit its potential for publication. I will present detailed feedback below, but my comments can generally be summarized as:
Does not sufficiently engage with the literature
Does not sufficiently justify its analysis
Findings are limited to obvious conclusions about headlines and deception
Authors spend significant time in the manuscript speculating on points that the data are ill-equipped to address
Again, I want to thank the authors for the opportunity to review this manuscript and I am saddened that I cannot be more supportive of its publication. Below is more specific feedback on the manuscript.
The literature review needs to cover the available literature more succinctly and thoroughly. In the first two sections, the main bulk of the literature review, only about 10 sources are cited, not counting an off-handed citation to Cohen (1972). This is insufficient coverage of a literature that has exploded in the past few years. The topics of disinformation and cyberwarfare have been matters of significant scholarly interest in recent years.
For instance, a simple Google Scholar search on "disinformation" brings up pages of results on just studies published since 2017. I also don't see reference to some of the books published on the matter like Theresa Payton's recent book on cyberwarfare and misinformation.
Related to information warfare, there is also a serious lack of consideration of the literature in this area. I do not see, for instance, any mention of Thomas Rid's work nor Clarke and Knake's Cyber War book.
While it is impossible to ask authors to consider every work on a given topic, I would expect more than a few references on subjects as robust as mis/disinformation and information warfare, even if we narrow those topics to recent electoral events.
I thus recommend that the authors return to the literature to consider prior scholarship more thoroughly.
Further, the coverage of the literature provided needs to be more succinct. Entire paragraphs are dedicated to single studies when likely only a sentence or two would suffice.
The literature review is thus simultaneously "too much" and "too little." It does not cover sufficient ground while also spending too much time detailing individual sources.
The review of social constructionism seems awkwardly placed. It is also an insufficient examination of this theoretical domain. It does not even cover Berger and Luckmann's foundational book on the subject of social constructionism. It instead relies entirely on a single source that is not even principally focused on social constructionism.
The authors explain that "the file containing the raw fake news dataset was imported into Microsoft Excel." What does this raw data look like? Is it only the text information? If they only had 20 articles, why not store them in a more robust format that would preserve both the content of each article and its presentation (formatting, pictures, etc.)? These seem like important contextual elements for the study of misinformation and, given the small sample size, consideration of these factors would not be overly onerous for the analysis.
If the authors only collected 22 articles, I don't see why an automated tool was necessary. This seems like something that could have been accomplished by simply browsing the outlet's Facebook page.
The coding process is interesting, but the authors overstate what can and cannot be done with this form of analysis. For instance, they declare that "it enables the researcher the ability to describe the attitudinal or behavioral responses that occur within the communication method, reveal patterns and trends, and assist to uncover the emotional and psychological states that occur within the groups of interest." The data gathered in this study do not allow the researchers to examine responses or emotional/psychological states. If they were to examine reader comments associated with the articles or associated social media posts, then such claims might be supportable.
Authors need to strengthen their arguments as to why the Buffalo Chronicle is a source of interest. In other words, why would the reader care about what misinformation was circulated by this "news" outlet? This connection is never really established in the manuscript.
The "trigger topic" theme needs to be more thoroughly conceptualized. All news outlets use headlines to grab the attention of the reader. The use of "clickbait" style headlines, for example, has become a particularly acute blight for online social media news readers. Is there anything special about the use of triggers in this context? Because it seems that the theme hinges its importance on the presence of triggering intent itself, which is hardly novel or surprising.
It is unclear what is meant when the authors state "rather than direct coverage about the issue in question, a political perspective is influenced within the overarching message that is delivered." Do the authors mean that the political perspective of the reader is influenced, or that the article advances a particular political message? The latter is easier to demonstrate with this data while the former is purely speculative.
The authors state that "through forcing the reader's attention towards a separate issue where the message is politically motivated, a desired conclusion is manipulated." Again, are the authors suggesting the reader is manipulated, or that the article engages in misdirection? These are two related but separate claims, and it is important to be clear because the analysis can speak to one but not the other.
The authors state that "their attention to these concerns appear superficial and are used to push a personal agenda," but what exactly is this personal agenda?
The authors state "During election time, small inferences such as this can have significant consequences." Again, the data cannot support these kinds of statements. At least provide citations to external sources if a claim like this is going to be made.
Regarding finding 2, "the use of true facts in combination with unverifiable or false facts," this is another obvious/well-established strategy used in misinformation. Fiction is most believable when wrapped in fact; in literature, this might be referred to as verisimilitude. A good deal of recent research has been exploring the subject as well. Here is a news article addressing such research, for example: https://www.bbc.com/future/article/20171114-the-disturbing-art-of-lying-by-telling-the-truth. Not all work has to arrive at a completely novel insight, but it does not help make the case for the importance or necessity of the study if the results simply reaffirm what is already well-known or obvious, especially when the methodology is as limited as in the current study. At the very least, I would hope that such findings would be clearly connected to the robust literatures that they reaffirm--which I don't see in this analysis.
The first part of the discussion section mostly speculates about what behaviors and emotions the articles might incite in their readers. While there might be some room for such speculation, the authors would be better served by focusing on the content of the articles themselves, as the data do not speak to reader reactions. Speculation without adequate evidence tends to weaken the points made (interestingly, this is one of the criticisms the authors themselves level at the Buffalo Chronicle).
The entire section entitled "Evidence of Criminal Interference - Foreign Interference" seems unnecessary. The particular method used--content analysis--does not speak to the legal details surrounding the publication of the articles, nor does it offer any definitive evidence of criminal wrongdoing. As such, this section also appears to be largely speculative and outside the scope of the analysis.
The section entitled "Social Construction Theory and Information Trolls" also does not seem necessary in the discussion. It doesn't directly speak to any of the findings of the study. It reads like it belongs in the literature review.
The "recommendations" section feels forced. None of the recommendations provided seem to rely upon the findings of the study and could have just as easily been stated through an examination of the established literature. A stronger, more explicit link needs to be made between the findings of the study and the kinds of recommendations advanced.