We examine a new form of online fraud closely related to traditional online romance fraud and catfishing, but which is “industrialized” through enterprise business practices, software platforms, and customer service processes. We conducted an inductive analysis of publicly available testimonial and review data provided by current and former employees of a specific company in the online customer service contract space. Companies hire individuals online to work as “chat moderators” or “customer service providers” who are told that their role is to advance engagement on social media platforms. In fact, they are being recruited as “sexting” workers, paid on a per-text basis to engage in intimate chatting with clients who are led to believe the workers are female participants on a dating site. The process is mediated via client management processes that monitor employee productivity and monetize all interactions between “clients” and “workers.” The company executes these processes with great efficiency by algorithmically assigning multiple workers to individual clients and assembling background files on clients in real time. We refer to this corporatized form of fraud as Intimacy Manipulated Fraud Industrialization (IMFI) and find that workers serve both as exploiters of their clients and as victims of the company they work for.
Most types of online fraud have their origins in various forms of face-to-face “confidence games” perpetrated in person or via traditional mail. Such scams were conducted over time (days, weeks, or even months) on a one-at-a-time basis, often using handwritten letters, with varying degrees of success (see, e.g., Buse, 2005; Onyebadi and Park, 2012; Holt and Graves, 2007). The advent of computing systems and the rapid development of internet-based platforms have led to the proliferation and increased efficacy of these schemes. Examples include phishing (Dhamija, Tygar, & Hearst, 2006; Alkhalil, Hewage, Nawaf & Khan, 2021), online grooming (O’Connell, 2003; Nikolovska, 2020), and advance fee fraud scams (Chang, 2008; Tambe, Siponen, & Topalli, 2023). Of particular note are scams that rely on deception to foment an intimate relationship or connection between a scammer and an unsuspecting victim in order to defraud the victim of their financial resources or blackmail them. These are referred to as online romance fraud (ORF) or romance scams (Whitty, 2018), and they have increased at an alarming rate over the past decade1, due in large part to the exponential advancement of computer technology and the usability of social media platforms (see Topalli & Nikolovska, 2020).
Romance scammers use websites, social media platforms, or online forums to pose as potential partners, using engaging profiles with attractive photos and creating a charming persona to gain the trust and affection of their targets. Once a connection is established, the scammer gradually builds an emotional bond with the victim, often through regular messaging, phone calls, or video chats. They may express love, affection, sexual attraction, and a desire for a future together. Meanwhile, their goal is to systematically exploit the victim financially or obtain personal information for illegal purposes. To advance these goals, romance scammers invent elaborate stories to elicit sympathy and manipulate their victims (see Wang & Topalli, 2022). They might claim to be in financial distress, facing a medical emergency, or needing money to visit the victim. They often request money transfers or gift cards, and some scammers may even go as far as asking victims to launder money on their behalf (see Whitty, 2013 & 2015; Whitty and Buchanan, 2012).
Previous research (see Wang and Topalli, 2022) has demonstrated that romance scammers commonly apply impression management (Schneider, 1981) and interpersonal deception (Buller and Burgoon, 1996) as parts of a larger strategy of social engineering (Mitnick and Simon, 2003; Cialdini, 2001) to defraud their victims, exploiting to great effect the affordances provided by technological processes (i.e., the internet) and platforms (including social media sites and online communication tools). Romance scammers can thereby maximize the rate and efficiency of their criminal behaviors across many more victims in many more locations than would be possible in a traditional face-to-face (FtF) scam (Topalli & Nikolovska, 2020). Consequently, the amount of money obtained through romance scams has skyrocketed in recent years (up to $1.3 billion in the US alone according to the FTC)2, representing a significant, global form of financial crime.
Our research details the advancement of a new, related form of online fraud that relies on many of the same affordances (the internet and social media platforms) and techniques (impression management, deception, and social engineering) found in traditional ORF, but which is “industrialized” through enterprise business practices, software platforms, and customer service processes. This new form of fraud is perpetrated not by individual scammers but corporate entities marketing themselves to potential employees and clients as social media and customer service management companies. Companies hire individuals online to work as “chat moderators” or “customer service providers” with the expectation that workers will employ the company’s customer service database to guide “clients” through a customer service complaint or service request. In other cases, workers are told that they are being hired to advance engagement on new social media platforms.
In reality, they are being recruited as “sexting” workers, paid on a per-text basis to engage in sex chatting, charging roughly 1-2 euros per message with clients who are led to believe the workers are participants on a social media-based dating site. Workers are provided background files on clients, including photos, histories, and other social media intelligence, and present themselves as interested potential romantic or intimate partners. The entire process is mediated via algorithmic processes that monitor worker productivity and monetize all interactions between clients and workers. We refer to this form of fraud as Intimacy3 Manipulated Fraud Industrialization (IMFI). From a criminological standpoint, IMFI is unique in that it potentially victimizes both clients (who are deceived into spending money to interact with people they believe are legitimate intimate partners) and the workers themselves, who are unaware that their work will require them to deceive clients and engage in sexting for money, and who are often the victims of wage theft by their employers. As such, IMFI represents not only a potential escalation of online fraud but a new online iteration of the criminological concept of the victim/offender overlap (see Marcum et al., 2014; Van den Eynde et al., 2023). In the current paper, we draw upon the conceptual model employed in Wang and Topalli (2022) to elucidate the specific stages of IMFI that mirror those found in ORF as well as a variety of constituent scams and scamming techniques, such as catfishing, call center fraud, and advance fee fraud.
IMFI clients (victims) meet sexting employees via various websites, online advertisements, and chat rooms. They engage with advertised services promising mobile texting connections with desirable partners focused primarily on sexual or intimate interactions. These text exchanges are charged to the “client” at a rate of 1-2 euros per message and paid through the client’s phone bill. Clients learn about their chatting partners via profiles created to attract interest based on a variety of attributes and traits (race, language, geographical location, interests), complete with photographs. In IMFI, these profiles are in fact created by customer service companies employing customer service software and processes, with employees of the company posing as the women in the profiles provided.
Our own assessment of IMFI is that it draws on elements of a group of established forms of online fraud (including catfishing, call center fraud, and mobile payments fraud, to name but a few) but exponentiates its impact through organizational industrialization processes, moving the perpetration of fraud away from individuals or small groups of perpetrators to fully corporatized entities with significant resources and units of specialization (e.g., a software division, a marketing division, etc.). Corporate entities can leverage powerful customer service-oriented enterprise business platforms, machine learning, and their status as legitimate businesses to extract maximal financial gains from “clients” in IMFI. These platforms allow for algorithm-based optimization of fraud processes, worker surveillance and automated management, and the ongoing compilation of large client databases to convince clients to stay online longer and spend more money through sexting chats. Because these companies operate multi-nationally and because there is little in the way of regulatory or legal precedent for how to situate their impact, the current environment for IMFI is ripe for abuse. IMFI has its roots in a variety of existing fraudulent online practices (detailed below).
Online romance fraud (ORF) is a transnational crime, involving millions of victims worldwide. In the United Kingdom, victims of such scams lost £30.9 million (approximately $36.9 million) and case numbers were up 41% at 3,270 in 2022 (see, O’Malley, 2023). In the United States, the Federal Trade Commission in 2022 found that nearly 70,000 people reported a romance scam, with losses estimated at $1.3 billion, and a median reported loss of $4,400. The Canadian Anti-Fraud Centre in 2022 reported receiving 1,928 romance scam complaints in 2021, with losses of more than $64 million. These financial figures do not take into consideration the short- and long-term psychological and emotional consequences of having been duped and manipulated (O’Malley, 2023; Cross, et al., 2022). Victims of financial and romance fraud evidence a wide array of deleterious consequences that go beyond financial loss and may include depression, anxiety, suicidal ideation, and other forms of self-harm behaviors (see Buchanan and Whitty, 2014; Whitty and Buchanan, 2012).
A few studies have expanded the research scope of ORF by exploring its connections to other types of crimes. For example, Cross and colleagues (2022 & 2023) have conducted studies on the application of ORF to perpetrate “sextortion”, where the offender threatens to distribute intimate images or videos collected from the victim over the course of their “relationship” unless they comply with the offender’s financial demands (Cross et al., 2022; see also, Whitty, 2013 and 2015).
The term “catfishing” originated from the documentary “Catfish”, which explored online deception in which a person creates a fake virtual persona (complete with fake photos, videos, and a fake identity and backstory) to trick others into forming an emotional connection with them. Following its release, the film encouraged numerous “silent” victims to come forward to talk about their experiences (e.g., stalking, harassment, psychological abuse) (Reichart Smith et al., 2017; see also, Launder and March, 2023). The practice has been employed for a wide variety of motivations, including financial gain, revenge, thrills, romantic or sexual gratification (see, Simmons & Lee, 2020), and even self-identity revision (see Nolan, 2015). Some online users have adopted catfishing to explore concepts of gender and/or sexuality, for example (Cavalcante et al., 2014). There is also some question of whether having one’s likeness used as bait for a catfishing scam is itself a form of online abuse (see, Hartney, 2018). The primary operational contexts for catfishing are various social media platforms and dating apps (Kitzie, 2017 and 2018). Catfishing can also be employed to cyberbully others (Lauckner et al., 2019; Patchin, 2013), inflicting significant psychological distress on victims who are unable to identify the perpetrators.
In the United States, catfishing-based scams led to an average loss of $132.5 million per quarter in 2022, an 11.2% increase from figures recorded in 2021 (Koebert and McNally, 2023). Most victims of catfishing report experiencing depression, anxiety, paranoia, and embarrassment upon realizing they have been deceived by a completely fabricated identity (Lauckner et al., 2019; see also, Launder and March, 2023). We view catfishing, with its emphasis on deceiving victims into believing the perpetrator is someone else with an emotional or sexual interest in the victim, as a base strategy for ORF offenses, including IMFI. Techniques employed in catfishing (e.g., the presentation of a desirable suitor) emerge in IMFI alongside other established fraud strategies and platforms, such as call centers and romance-based fraudulent investment schemes.
We use the term “cyber-industrialization” to refer to the transformation of activities previously enacted by an individual or small groups of people into semi-automated or fully automated processes enabled through technology and business practices. These most often include the use of hardware or software platforms capable of enhancing or replacing human effort. Importantly, cyber-industrialization is not in and of itself negative or positive in terms of human impact. Technology-enabled automation via the use of machine learning (ML), AI, or expanded online network capabilities can improve efficiency and accuracy in a multitude of processes formerly subject to slower and more error-prone human performance (e.g., online banking, sales, etc.). But the same capabilities can also be implemented by those seeking unfair or fraudulent advantage over potential victims, because such processes create asymmetries between resourced perpetrators and unresourced victims, thereby exponentiating both the number of victims and the efficiency with which perpetrators extract financial value from them (Topalli & Nikolovska, 2020).
Call centers are a good example of cyber-industrialization as an enhancing process. These “businesses” are typically situated in geographical locations where labor costs are low. They are normally commissioned by various commercial, governmental, or research entities to engage in a variety of commerce- and research-based activities, from sales to customer support to marketing (see Gans, Koole & Mandelbaum, 2003). Centers have employed a variety of strategies to improve efficiency and maximize profits, including 24-hour staffing, just-in-time performance evaluations, incentive programs for rapid or high-quality output, and optimized queueing (see, Ta, et al., 2021; Borst, Mandelbaum & Reiman, 2004). More recently, call center performance has been enhanced and expanded via machine learning, AI, and deep learning processes (see, e.g., Woodcock, 2022; Deschamps-Berger, Lamel & Devillers, 2021).
Call center approaches have become a mainstay of fraud perpetration in recent years, with call service workers implicitly or explicitly employed to defraud unsuspecting victims through a variety of deceptive tactics (see, e.g., Miramirkhani, Starov, & Nikiforakis, 2016). In this type of scam, workers are provided leads on vulnerable victims (often via information purchased legitimately or on the dark web; see, e.g., Liu et al., 2020). They are responsible for their own tasks and report to a supervisor or team leader. In their country of residence, these organizations operate as “legitimate businesses”, renting space, paying wages and taxes, etc. (see Aziz, et al., 2020), making them difficult to regulate, particularly if they are targeting individuals who live in jurisdictions abroad (see Cross, 2020; Perloff-Giles 2018; Menon & Guan Siew, 2012). Commonly, they operate via a commission-based mechanism, wherein the worker receives a base salary alongside incentives contingent upon the number of “clients” they successfully deceive (Sheckels and Farer, 2018). The majority of these call centers are based in India (see, Malik & Choudhury, 2019), although other countries also play host to them, including Costa Rica, South Africa, and the Philippines (see Sallaz, 2019; Hunter & Achimi, 2012).
Call center workers use a variety of telephone-based strategies to defraud mainly vulnerable individuals in Western countries (primarily the US, Canada, and the UK), including the elderly and immigrants (The Economic Times, 2023). Workers rely on databases containing the information of potential victims to convince them to divulge passwords, send money, sign off on credit cards, or make payments for “back taxes” or other fabricated debts (see, U.S. Attorney’s Office, 2022). They may pose as employees of banks, businesses, government agencies, or charitable organizations. Their access to illegally obtained personal information from data brokers and the dark web allows them to build trust. They often provide false badge or ID numbers and are able to recite victims’ own information back to them, including home addresses, the last four digits of the victim’s social security number, former places of work, etc. (Sheckels and Farer, 2018). Once trust is established, operators proceed with an array of schemes relying on deceit (e.g., the promise of a cheap payday loan) or coercion (threats of arrest, imprisonment, or fines if the victims fail to pay “back taxes” or “penalties”). In some cases, operators may collaborate with a network of co-conspirators based in the victim’s country to complete the transaction. These co-conspirators assist in obtaining prepaid debit cards or providing wire transfer accounts under fictitious names.
Sheckels and Farer (2018) analyzed a series of FBI investigations of call centers, revealing an industrialized scamming process. There are six main roles in the center: runners, domestic managers, data brokers, call center operators, callers, and payment processors. Individuals are assigned one or more specific roles to advance the conspiracy. Specifically, runners operate in the victim’s home country (often, the United States) and are responsible for providing transactional options for operators in India (e.g., purchasing temporary prepaid cards). Domestic managers direct runner activities while providing them with resources and supplies. Data brokers collect phone numbers and other identifying information (e.g., names, addresses), facilitating payments for domestic runners. Call center operators manage the day-to-day operations of the call centers, which involves maintaining expense sheets and acquiring phone numbers. Callers, aided by the various other entities, are the individuals who ultimately make calls to the victims. Finally, payment processors handle all aspects related to victims’ payments, such as purchasing and activating prepaid cards, as well as transferring scam funds. The most recent estimates of the financial impact of call center scams indicate over $12 billion in losses in the US alone, representing a 47% increase from the previous year (Das, 2022).
"Sha Zhu Pan" – loosely translated as “Pig-Butchering Scam”4 – is an emerging form of ORF in China that employs an industrialized scheme to lure victims into investing in fraudulent platforms via romantic connections and interactions. According to Wang and Zhou (2022), Sha Zhu Pan employs some strategies and techniques – in particular, romantic deception – familiar to those who study ORF. However, rather than employing deception strategies focused on crisis narratives such as financial difficulties, business troubles, child injuries, or fabricated medical emergencies (Wang and Topalli, 2023), scammers in the Sha Zhu Pan scheme capitalize on romantic connection to convince the victim to “invest” in “opportunities” with the promise of large returns (similar to various financial fraud scams perpetrated against the elderly; see Siu & Hutchings, 2023; Roy & Sanyal, 2017). The process is facilitated by a mobile app capable of draining the victim’s account over time. Initially, victims are encouraged by the fraudster to invest small amounts and shown that they can easily withdraw those amounts when they choose. This confidence-building measure sets the stage for much larger investments by the victim. At a predetermined point, victims are suddenly informed by a "customer service" communication from the investment app that withdrawals are suspended due to misconduct by a third party or by the victims themselves. To unfreeze their accounts, victims are coerced into paying a substantial penalty. At this stage, victims face a choice between continuing to pay the scammers or realizing they have been deceived and deciding to report the incident to law enforcement.
Scammers involved in the "Sha Zhu Pan" scheme assume different roles to deceive their victims, operating with other scammers in a coordinated and collaborative manner. They "automate" the online fraud operation with pre-designed plans, scripted interactions, and carefully executed post-scam strategies, relying on a database of potential victims established independently or through identity brokers, and on a professionally developed mobile app to facilitate the scam. Wang and Zhou (2022) highlighted the presence of four main groups within the Sha Zhu Pan syndicates, similar to scam call centers. The "host" group plays a crucial role, as its members are responsible for engaging in daily communication with the victims. The "resources" group provides various support to the host, including fake customer service, victims' personal information, disposable phones and numbers, etc. The "IT/Telecom" group offers technological assistance to the entire syndicate, including sham investment platforms, VPNs to fake locations, and the identification of internet vulnerabilities. Lastly, the "money-laundering" group handles the transfer and laundering of illicit funds (Wang and Zhou, 2022; see also, 315 Consumer Association5, 2020).
Both call center fraud and Sha Zhu Pan schemes have in common the exploitation of victim emotions to defraud them of funds via rudimentary industrialization/automation processes. These scams expand the pool of potential victims numerically and rely on teamwork, division of labor, and specialization to evade law enforcement scrutiny while maximizing profits. Both forms of scamming transform traditional solitary online fraud into more organized, business-like operations. As such, they retain some general similarities to gang, syndicate, or mob behavior (see, Albanese, 2014; Britz, 2008).
IMFI takes this process to a new level however, because it situates these activities within the functions of a legitimate business operating in the open and relies on more advanced technology-enhanced processes to exponentiate the extraction of value and operate without fear of legal prosecution (for now). We identified three corporations engaging in the practice of IMFI in this way and focused on one – Cloudworkers, Inc. – for the purposes of this paper. Our initial foray into better understanding the perpetration of IMFI was based on data collected from workers’ public comments about their experiences working for this company and provided an important look into the employees’ standing as both perpetrators of IMFI and as exploited workers. What emerges from our analysis is a novel yet clear picture of how emerging technologies can be leveraged by actors to devise new forms of exploitation and (potential) crimes as well as the presence of the victim/offender overlap phenomenon, an important concept in criminology with implications for cybersecurity and information security scholarship.
This study relies on open-source data to analyze and elucidate the observed phenomena. We sought to identify online platforms that gathered employees' reviews, thoughts, or experiences regarding chat moderator companies. To do so, we initially conducted a Google search using the following keywords and terms: "chat moderator(s)," "chat moderator review," "chat moderator experiences," and "chat moderator forum/post." Through this initial search, we were able to identify Cloudworkers as one of the chat moderator platforms most associated with the IMFI phenomenon. A UK-based company, Cloudworkers hires chat moderators globally, and maintains the most consistent presence and most extensive market penetration across three platform types we targeted for the gathering of data. These include employment websites (LinkedIn, Indeed), social media sites (Facebook, Instagram), and review/knowledge sites (Reddit, Trustpilot).
Cloudworkers requires applicants to fill out a brief application on their main website (https://cloudworkers.company/en) to obtain basic personal information, including name, email address, date of birth, phone number, Skype handle, availability, and experience. After an initial screening process, the company’s team leaders contact applicants to proceed with the next stage of the hiring process. We sought out commentary from individuals who had previously worked or were considering working with Cloudworkers. We extracted data from their publicly posted reviews, questions, and narratives on review platforms and Reddit, which contained a significant amount of data itself but also pointed to other platforms – specifically Glassdoor.com and Trustpilot.com – that provided us with further valuable qualitative data exclusively related to Cloudworkers and their enactment of IMFI on a global scale.
Although these were not the only online platforms featuring commentary on Cloudworkers, we chose Glassdoor and Trustpilot because both review sites provided the largest number of employee reviews specifically related to Cloudworkers. Data obtained directly from them or via scraping has consistently appeared in previous research across different disciplines, such as business (Das Swain et al., 2020; Chinazzo, 2021) and health (Srivastava et al., 2022; Pavithra and Westbrook, 2022). The same is true of Reddit (see Lee et al., 2021; Chen and Tomblin, 2021; Helm et al., 2022). Moreover, online forums have consistently been recognized as a convenient and effective method for collecting qualitative data to generate meaningful analyses (Iqbal et al., 2021; Richard et al., 2021; Lee and Jang, 2023).
Reddit is a social news aggregation and discussion platform. It also provides users with the opportunity to express their opinions on various topics or phenomena anonymously. It is currently the largest and most prominent online forum in the United States, and has been featured in numerous interdisciplinary studies, including those in the field of criminology (e.g., Chen and Tomblin, 2021; Curiskis et al., 2020; Park and Conway, 2017; Lundmark and LeDrew, 2019; Hughes et al., 2021; Letico et al., 2022). Glassdoor is designed to provide an anonymous platform for current and former employees to review their companies. It also offers job search assistance, salary comparisons, and the ability to apply for jobs. Trustpilot is more focused on reviews, where users can anonymously leave evaluations for services and businesses, including those in the health, travel insurance, and automotive sectors. The latter two websites are regularly updated with new user-provided reviews.
The initial screening process revealed that the reviews and public forums about Cloudworkers were comprehensive and cross-checked consistently against other reviews. Previous and current employees provided detailed descriptions and accounts of their interactions with Cloudworkers’ management teams and clients. Additionally, we found a number of discussions and posts specifically related to working for Cloudworkers. Thus, following an in-depth assessment by both researchers, we decided to incorporate all reviews and public forums into a single analysis. As a result, the screening process on both websites allowed us to locate a total of 54 comprehensive reviews on Glassdoor (38 reviews) and Trustpilot (16 reviews). On Glassdoor specifically, we were able to locate 34 reviews focusing solely on Cloudworkers’ interview process. On Reddit, 15 discussion posts were selected for the final dataset.
We used NVivo to create a database comprised of reviews and public forum statements collected from the aforementioned websites (see Hooley, et al., 2012)6. All quotes were reviewed by both authors to generate domains, a model, and recurring themes. The primary author reviewed the entire database to identify preliminary, anchor themes (see, Sade-Beck, 2004; de Vries & Valadez, 2008). Subsequently, the primary author assigned each identified theme to a node in NVivo, and quotes for each category of themes were then populated into the database. Through this procedure, the primary author also developed a clear and logical diagram that reorganized and connected all identified themes (see Table 1 below). After this step, all coded files were provided to the second author, who replicated the same process. The flow diagram and quoted themes presented in the results section are the product of an in-depth co-analysis of the data by both authors after this review. Finally, all themes were checked manually by both authors to ensure that they were distinct from one another, without duplicates.
Our analysis of users' accounts identified several themes that depict the hiring process, operational characteristics, deceptive nature of chat moderation, and exploitative aspects of the recruitment process itself. The themes identified in our study align with concepts relevant to the theoretical frameworks of "impression management" (IM), "interpersonal deception theory" (IDT), and "social engineering" (SE) identified in earlier research on romance fraud by Wang and Topalli (2022). IMFI takes this process a step further, however, by relying on the industrialization capacity of a well-resourced IT-based company employing enterprise solutions and algorithmic processes to increased effect. Our analysis underscores how the domains related to these three theoretical frameworks evidence the industrialization of deception – of both the workers and the clients – within a corporate setting.
Specifically, we describe Cloudworkers' business operations, including their systematic hiring processes, salary structure, working environment, and commission model. We also identify the true nature of the “chat moderator” role through past employees' self-disclosures, identifying how the role foments deceptive and exploitative practices. At the same time, chat moderators are themselves subject to exploitation and deception by the company, as it employs questionable tactics and deceptive communications to lure employees into working for the company under false pretenses and to pay them less than advertised for their labor (for a treatment of this topic, see Gerber, 2021). As such, chat moderators are simultaneously exploited by the company as they themselves exploit their clients, representing an interesting variation on the criminological notion of a victim/offender overlap (see, Gottfredson, 1981; Wittebrood and Nieuwbeerta, 1999; Jennings, Piquero, & Reingle, 2012), wherein individuals who operate and participate in criminogenic contexts are at greater risk for being victimized themselves7.
We conducted a systematic analysis of employees' and applicants' reports on the above-mentioned online forums. This online scan produced a systematic set of qualitative data, revealing consistent themes and domains across the theoretical frameworks (IM, IDT, and SE) identified above and across two relational structures (company to chat moderator, and chat moderator to client). Our treatment revolves around a recursive model that outlines three significant stages workers go through when attempting to work for Cloudworkers. In this model, we typologize chat moderators into three main types. Additionally, we delve into each category and provide further details regarding the advantages and disadvantages of being a chat moderator. In doing so, we introduce a new form of systematic online deception that cleverly disguises its exploitative practices within legitimate business operations.
In previous research on romance fraud, Wang and Topalli (2022) identify a theoretical framework for how these offenses are perpetrated, focusing on the psychological measures and techniques that fraudsters employ to extract financial gains from victims. To do so, the framework integrates perspectives from three fields of study. We argue that their impact on the fraud process is further enhanced by the cyber-industrialization processes inherent in IMFI schemes.
First, the theory of impression management (with roots in social psychology; see Goffman, 1978; Schlenker, 1980; Schneider, 1981) describes the process of influencing how others perceive you, your ideas, or your products. It involves controlling the information you share and emphasizing certain aspects while downplaying or hiding others. People use impression management for various purposes, such as making a good impression, gaining social approval, achieving personal or professional goals, or protecting their self-image. IM can be employed by individuals for honest reasons (trying to impress a date or a job interviewer by being polite or well dressed, for example) or for dishonest ones (trying to convince someone that you like them by laughing at their jokes, even when they aren't funny). In the case of romance fraud, scammers employ IM to establish trust and familiarity with a potential victim through online communications to set the stage for deceptive interactions that lead to a fraudulent financial exchange.
In Wang and Topalli’s model (2022), IM sets the stage for the deployment of principles of interpersonal deception theory (IDT) which has its origins in communications research (see Buller and Burgoon, 1996). IDT predicts that individuals will rely on principles of IM as well as certain communication structures and tactics (verbal and non-verbal) to deceive individuals. These include the use of misleading or vague statements, repetition of statements, emotional triggers, information about a victim that can be used to manipulate them into agreeing to certain actions (like providing personal information), and the communication of familiar and unfamiliar concepts, terminology, and histories.
Both of these theoretical approaches outline measures that form the basis for social engineering (SE; see, Mitnick and Simon, 2003; Cialdini, 2001) a perspective outlined by researchers in computer science and information systems. SE references several techniques and processes contained within both IM and IDT but adds to the mix a consideration of the ways influence and deception are enhanced by the presence of technology, particularly those with the capacity to enhance deception and control of others and thereby exponentiate the amount and impact of an online-based fraud and crime. These include reliance on social distance afforded by online platforms, the availability of vast sources of information online, the use of images, the distributed nature of online communications, etc. (see also, Bullée, et al., 2018).
In the table below, we find that all three components of Wang and Topalli's (2022) model hold across both contexts, and we identify where the industrialization process intersects with each of its three components in the most impactful way for the perpetrating entity and those it targets.
Table 1. Identified Themes on Individual and Corporate Level

Impression Management | Interpersonal Deception Theory | Social Engineering
Individual Level (ORS): Techniques of ORS | Techniques of ORS | Techniques of ORS
Corporate Level (IMFI): Corporate to Workers (see stage graph below); Workers to Victims/Clients | Corporate to Workers; Workers to Victims/Clients | Corporate to Workers; Workers to Victims/Clients
As stated previously, IMFI represents a new, corporatized form of intimacy-based fraud that targets individual clients while also exploiting the workers who "service" those clients on behalf of the company. We delved into the process by which individuals become associated with the company and how their experiences with perpetrating IMFI influenced their associations with it. Our data highlight three distinct phases that workers underwent throughout this process (see Figure 1 below). Within each phase, a worker faced critical stages presenting decision-points that shaped the worker's course of action, determining whether they persisted with the company or chose to leave. We identified and analyzed these decision-points and explored the various factors that influenced workers' choices and reasons for staying or departing. By examining these phases and decision-points, we developed a better understanding of the complex interplay between individuals, the company, and potentially dishonest practices.
Figure 1. The Phase Chart for Chat Moderator Application
Phase one: pre-employment stage
Phase one comprises three stages: the application stage, the task revelation stage, and the employee/supervisor interaction stage. In the application stage, the company employs two main social engineering techniques – both facilitated by internet-based communication technologies and social media platforms – to target and promulgate their online advertisements. These include employment websites (e.g., LinkedIn, Indeed), social media platforms (e.g., Facebook, Instagram, and YouTube), and review websites (e.g., Glassdoor, Trustpilot). On those platforms, Cloudworkers employs IM and IDT principles to establish its legitimacy and authority, posting well-written, professional, and authoritative job advertisements:
job description: as a chat moderator, you[sic] task is to participate in text-based online chats on one or more social community platforms and maintain the conversation…
Your profile/what you bring: own computer with stable internet; proficient spelling…
What we offer: flexible and independent planning of working hours; work wherever you want (e.g., home office or abroad) …
The company’s legitimacy is further enhanced by current and former employees on LinkedIn who openly state on their profiles that they have worked for Cloudworkers:
Glassdoor review 3: “high pay and minimal work, freedom to work anywhere through Internet and anytime. Highly recommend working for”.
Reddit review 7: "I have been working with this company for a couple of years. So far, I enjoy this job. Please message me if you have any questions or concerns about the tasks as well as easy ways to join in."
Prospective employees drawn in by these recruitment strategies eventually initiate the application process online on Cloudworkers' official website. Accounts collected from several Reddit forums detail the process. For example, one employee stated, "I fill my personal info and my availability on their websites and waited for like two weeks before they respond to my form" (Reddit 1). Response times varied by region; individuals stating they were from Latin America or Africa tended to receive a relatively faster response from the company, typically "three days to one week" (Reddit 4).
Following the screening stage, a recruiter from the hiring team sends an email asking applicants to provide documents to verify their identities. The requests varied. For example, “they asked me to provide my passport or driver’s license to prove its actually me applying for this job” (Trustpilot 5). Others were asked for copies of their “citizenship and social security cards” (Glassdoor 18). Nearly all individuals in the sample testified that they had received multiple forms from this company requesting information such as “date of birth”, “Trade license”, “registration number for free lancing”, “profession”, “bank account number”, “bank code/swift code/name of the bank”, “IBAN code”, “PayPal email address” (Reddit 13).
1.1. Pre-entry information pathway: Application to revelation or vice versa
At various points during the application process, workers began to learn that the position they'd applied for did not match their expectations in terms of the kind of work advertised. Many applicants were disturbed by the amount of information requested by the company as well as its requests for sensitive financial information. Although this was not proven to be the case, some employees in the sample suspected that the company might engage in "identity theft" (Reddit 13). In some instances, the company did not respond promptly to queries or concerns of applicants, which, when combined with the requests for financial information, aroused suspicion. One individual stated, "there are no communications after I (the applicants) submitted all required identity documents including my bank accounts" (Trustpilot 14); another said, "I got the same answer 3 times with 5 days response time. Filled the requested information exactly as told, but they always kept saying I should upload it again cause the could'nt [sic] see it" (Trustpilot 8). Consequently, a certain subset of the sample represents applicants who abandoned the application process early, due to preliminary suspicions unrelated to the nature of the work (sexting) for which they were ultimately being recruited.
Trustpilot review 9: Company posts FRAUDULENT ads on FB promoting the hiring of "Chat Moderators." Once the applicant applies, surrendering their personal information, which is likely to be sold on the spam market online, the applicant receives an auto response that the job is no longer available. This is an International Scam. Read the reviews on social media sites. Highly UNRecommended.
The vast majority of early exiters, however, did so in response to discovering that they were to engage in sexting, a disclosure commonly made later in the application process. This technique of delaying such a revelation is in line with social psychological research on decision-making indicating that people are less likely to abandon a course of action (even one they may find morally or ethically objectionable) once they have invested time in it (see Haita-Falah, 2017). But clearly, the sunk costs of the application process were not weighty enough to override the distaste many applicants experienced upon learning they were being hired to engage in professional sexting. There was some degree of variance in candidates' willingness to continue the process and do this type of work.
Trustpilot review 7: I don't even want to give 1 star, that's even too much!!! This company I can't believe I gave my own information to them shame to this company!!! They wanted me to have erotic conversations with people or even have sex?? What in the world? I am really disgusted and shocked I was just looking for a normal work I am a married women with a baby I can't believe this. I blocked them immediately.
While in some cases, applicants waited until they were interviewed for the position to find out that they would be engaged in sexting, others paused or delayed their full application to research the company online (e.g., on platforms like Reddit, Glassdoor, and Trustpilot). Upon learning the true nature of the job, some decided to continue with their application, citing the allegedly attractive salary and flexible working hours. Others expressed discomfort and chose to withdraw from the application process.
Reddit 4: “Remember to post your experience pls. I already applied and they already answered, I'm guessing it works as a "real" side hustle, where you can make little money but for little effort, lot of time probably, but rn I just need money lmao. So, I do not really care about talking those things…”
Reddit 6: “I’ve emailed Cloudworkers and they have actually replied back, Im well aware of the 'erotic dialogues' and I have no moral qualms with it. Im asking to see if their projection that you can make around 7euro/hour is legit (0.07 cents per message, 100 messages an hour)…”
Reddit 7: “…Holy fuck, I would have never imaged this is the kind of work they do there. Is that all they do? surely they do other things? that's just straight up insane. I am glad I found this post. that is a nah for me.”
1.2. No information pathway: Initiate application to meet with a team leader
On the "no information" pathway, in contrast, applicants who resolved their doubts or concerns about the position chose to continue the hiring process by meeting with a team leader. After filling in paperwork, applicants reported a relatively straightforward process: "I was contacted later by a team leader, they explained the job in detail, making sure I was ok with the 'erotic' aspect before sending me the freelance employment contract" (Glassdoor interview review 31). In general, most applicants described the interaction between them and the team leader as "easy", "smooth", "crystal-clear", and "straight forward" (Glassdoor interview reviews).
Glassdoor review 1: The interview process was smooth and straight forward. Just do your research on the company and ask any relevant questions you may have during that time.
Glassdoor review 3: very easy and fast! I applied, and then someone reached out to me asking if I was still interested, and if so then we would move on to next steps. Within 30 minutes I sent over everything they needed.
Participants selected to join Cloudworkers then engaged in online training via Skype. After signing off on "terms and condition" (Glassdoor interview review 27), they were forwarded a "training manual"8 (Glassdoor interview review 25) to review. The team leader then asked the candidate to complete a practical exercise: candidates were monitored while sending roughly 30 messages on the actual chatting site to determine whether they met certain standards of the position.
Glassdoor review 13: You just needs [sic] to study the training manual, concentrate, have good grammar and typing skills. Study the instructions. We needed to know more about how the system works and prove that you can actually use and use all the things that you have learned on the manual. Then you have to go through a training session to be tested for your skill to talk.
Reddit 10: They have you send messages to see whether you're good enough to be on the team, you have to do it on a pc. Basically when I went through training one of the team leaders gave me contracts to sign and papers to read, after that they gave me a login for their site and monitored about 30 messages to see if I met their quality standard, My advice is to use good grammar, you get paid for the messages you send during training if you get accepted but it's only like 3$.
1.3. Post hoc information pathway: Meet with managers to revelation
Some workers did not fully realize the responsibilities of being chat moderators until they engaged with managers during the Skype call. Candidates posting on review platforms and professional channels in our dataset indicated that managers would ask during the call whether they were comfortable engaging in sex chatting or adult topics. For example, a review from Glassdoor indicated, "First you have to fill the application form online, than you going to get an e-mail after a few hours in my case, than they telling you about the job, and if you are open minded with sex stuff and capable of keeping the rules, you going to get hired" [sic] (Glassdoor interview review 30). A prior employee from the Reddit community also revealed, "I didn't take me long scrolling so it's not like they bury it. They call it chat moderation but it's really paid sexting. I have to admit that when they emailed info to me they really didn't hide that fact" [sic] (Reddit 2).
At this stage, we observe workers split into two groups, those deciding to withdraw themselves from working for the company and those willing to move forward. In the latter circumstance, individuals expressed indifference regarding the true nature of the job at hand, “Im well aware of the 'erotic dialogues' and I have no moral qualms with it” (Reddit 6). In the former however, applicants expressed shock and resentment after being informed of the duties of chat moderators. Many quit at this stage, “I received an e-mail clarifying what it was for and I’m a bit pissed of [sic] by their marketing in the website and internet…Its just disgusting and I feel sorry they have some of my informations. Give up immediately and Dont recommend…” (Trustpilot 4). Others complained that “This company I can't believe I gave my own information to them shame to this company!!! They told me they wanted me to have erotic conversations with people or even have sex?? What in the world? I am really disgusted and shocked…I blocked them immediately...” (Trustpilot 7).
Phase 2: employment stage (workers to clients)
For those who opted to move on to the employment stage, individuals were asked first to sign contracts and agreements while waiting to receive an "NLD number and login credentials". Following the completion of these tasks, they received training on using the "customer service platform" and immediately began working with "clients" online. At this stage, nearly all workers reported that various restrictions and deceptions imposed by the company began to emerge, none of which had been mentioned before signing the contract. Specifically, restrictions are manifested in working rules and the delivery of commissions. In addition, there were deceptions regarding commissions, tasks, and routine.
2.1. Strict working rules
Despite the remote nature of their role, employees’ online actions are constantly supervised and restrained through the Cloudworkers management system. Our data reveal at least five rules that employees must obey. As summarized, these five rules are presented below with appropriate quotes selected from the dataset.
Strict rules for numbers of characters in each message: “…. paid per send chat, but must be minimum x characters long, making that you have to answer chats with nonsense many times.”
No allowances for communication between employees: “they basically cut all connections and communications between employees in different place”, “there is no way to get togetherness between colleagues, there are ones who work for managers as spies”.
Strict rules for working hours: “you can be blocked for work at the wrong hour”, “you have to use a laptop though, book at least 12 shifts/week, no more than 40 and send at least 1500 messages in total a month to remain active. Each shift is an hour”.
Rigorous expectations for correct grammar in each message: “there are always someone looking at you, so make sure your grammars are always correct and English is frequent, or your accounts get instantly blocked”.
Requirements for constant availability and immediate shift response: “They email you at odd hours (3am for instance) if there's a queue in the chat. They desire you to work 24/7, and you have to reply fast to be active”.
Strict rules for chat content: “You should always use fake identity and never reveal your true one, making those men thinking they are talking to the real one”. “You should never give your number those guys or agree to meet with them. When they do so, you should string them along so they pay for more and more texts. They can never meet the women they fall for because they aren’t real”.
2.2. Rigid commission-based salary structure
Cloudworkers maintains a formalized payment system, albeit with several restrictions favorable to the company. Only five reviews in the sample commented that the pay was “good”, “reasonable” and “promptly by month”. For example, one reviewer from Trustpilot 15 said that “This job has saved my life during covid, when most people had little to no income and lost their jobs, I made so much money because the rand had dropped. I have helped so many of my friends to apply so they can sustain their kids because getting a job in South Africa is so hard. Uk1446 is a happy chappy”.
Prior employees in the sample stated that Cloudworkers' pay is based on the volume of sent messages. Individuals were usually paid "0.10€ or 10 cents per message we (employees) send" (Reddit 1). However, depending on the type of team, agent, or recruiter (Glassdoor 22) to which an employee was assigned, some were paid less than 10 cents per message, with the lowest reported rate being only 2 cents per message.
Importantly, the commissions are "extremely [sic]instable" (Glassdoor 17) because they depend on a combination of factors, including the consistency of the worker's internet connection, the worker's response time, the number of messages they are able and allowed to send per hour, and their creativity in producing sexually engaging content. Workers in the sample complained that "If there is no work, you have to sit there for zero dollars". Others said that "sometimes it takes an hour or even more for them [customers] to respond, the time can waste in this case". Consequently, pay scales and rates were unstable and unpredictable, with reported message volumes ranging from "up to 300 per hour" down to only "20 messages per hour". Workers generally found that payments were "higher for weekends and bank holidays, also maybe for hours after 00:00" (Glassdoor 19), but significantly lower at other times.
Cloudworkers also enforces rules on the quantity and quality of messages. For example, employees revealed that “Payments are made monthly, must be over 50€ which they should be bc they ask that you send a minimum of 1500 messages per month” (Reddit 1). Cloudworkers would demand both a sufficient quantity of messages and that they be of “high quality” (i.e., grammatically correct, sexually engaging, designed to prompt increased rate of message exchange). Both factors were critical. If the message count was satisfactory, they would still stress the importance of message quality and vice versa. According to a prior employee, “If you did 100 an hour the companies would fire you for not giving "good enough quality" messages. They demand high quality and originality while claiming you can send so many in a short time” (Reddit 6). Lastly, some employees expressed their concerns about the stability of payments resulting from the repetitive nature of their work, with frustrations over “the repeating boring topics…. plus, I was constantly running out of ideas about what to say so I wasn't getting paid much since I sent a lower amount of texts” (Reddit 6).
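The per-message economics reported in these accounts can be made concrete. The short sketch below uses only figures quoted by workers above (10 cents per message at the usual rate, a reported low of 2 cents, a 1,500-message monthly minimum, and a €50 payout floor); the function names and the exact behavior for sub-€50 months are our own illustrative assumptions, not a documented description of Cloudworkers' actual payment system.

```python
# Illustrative sketch of the commission model described in the reviews.
# All figures come from worker testimonials quoted above; the code and
# names are ours and purely hypothetical.

RATE_PER_MESSAGE = 0.10   # euros; some workers reported as little as 0.02
MONTHLY_MINIMUM = 1500    # messages required per month to remain "active"
PAYOUT_THRESHOLD = 50.0   # euros; payments "must be over 50€" (Reddit 1)

def monthly_earnings(messages_sent: int, rate: float = RATE_PER_MESSAGE) -> float:
    """Gross commission for one month at a flat per-message rate."""
    return messages_sent * rate

def payout(messages_sent: int, rate: float = RATE_PER_MESSAGE) -> float:
    """Amount actually paid out: nothing if gross falls under the threshold
    (an assumed reading of the 50-euro rule)."""
    gross = monthly_earnings(messages_sent, rate)
    return gross if gross >= PAYOUT_THRESHOLD else 0.0

# A worker just meeting the 1,500-message minimum at the advertised rate:
print(payout(1500))              # 150.0 euros
# The same volume at the lowest reported rate falls under the 50-euro
# threshold and, on this reading, yields nothing that month:
print(payout(1500, rate=0.02))   # 0.0
```

On these assumptions, the advertised rate clears the payout floor comfortably at the required minimum, while the lowest reported rate does not, which is consistent with workers' complaints about small balances never being paid out.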
2.3. Deceptions regarding required tasks
Once workers passed the screening process (as indicated in theme 1), they were provided one-on-one training by a Cloudworkers manager. It was here that they received the greatest amount of detail regarding the nature of the work, the way it was to be performed, the ins and outs of the client management platform, and payment processing. At this stage, workers' testimonials diverged once again as to how "up front" managers had been about the true nature of worker responsibilities and payment regimens.
The majority of workers shared that they were misled by the job advertisement and by the information provided by Cloudworkers' HR and managers, even during the interview and training stages. They reported that managers described the work as "moderating conversations on legitimate dating apps" (Reddit 5), "conversating with people who wants emotional comfort" (Reddit 6), or "holding conversations with individuals to solve their confusions" (Trustpilot 4). As soon as the initial onboarding process was complete, workers learned that their duties had nothing to do with moderating conversations or promoting engagement for social media start-ups; instead, they were essentially tasked with using sexually explicit messaging to deceive and catfish clients into believing they were having real conversations with women they might one day meet:
Reddit 5: “They offer chat moderating work but after you agree they tell you it's operating profiles for dating apps. Not a great gig but I had no other projects at the time and needed money so I decided to try and see what's up. Well, now I know what's up :( There are some pretty terrible glassdoor reviews on this company, ...and lots of glowing reviews which lay it on a bit thick, probably fake, which makes sense since the company's business is generating fake internet content”.
Trustpilot review 3: “According to their website you have to chat to various partners/communities. You would think it’s for business purposes, but NO they want you to do online erotic messages through Skype. NOWHERE ON THE WEBSITE DOES IT SAY SO!!”
Trustpilot review 4: “I received an e-mail clarifying what it was for and I’m a bit pissed off by their marketing in the website and internet, it looks like you have to moderate talks about specific subjects, like going to the gym, why people get frustrated, like themed conversations. In the site they say adult, not 18+ rated conversation”.
Trustpilot review 10: “I was completely weirded out because in the ad they were talking about customer service, and chatting about gardening and cooking tips, and other harmless things”.
2.4. Deceptions on the routine
Once workers settled into their regular routines and started completing their tasks as freelancers for the corporation, they quickly realized the job was not as easy as promised. Although workers were assured they could make their own hours and work at their own pace, they found they had to invest the attention and effort expected of a full-time job to earn satisfactory commissions. Moreover, their work hours were dictated by the company's algorithms and policies rather than by the worker's expected schedule. Primarily, this was due to variations in demand for their labor, dictated by client engagement patterns based on geographical location and time of day (accounting for time zone differences).
As a result, work demands fluctuated widely between high volume periods when they were pushed to produce at high rates to low volume periods when they were expected to be available for sexting even when there were not enough clients to support the supply of workers. One worker pointed out that, “…these adult chat companies lying and exaggerating about the amount of messages you can send and the amount of money you can make. 100 messages per hour is impossible” (Reddit 6). Some workers noticed that “there is higher pay for weekends and bank holidays, also maybe for hours after 00:00” (Glassdoor 19). Workers complained that the company encouraged workers to “work 24/7” (Reddit 2) and “not to even leave to go to the toilet” (Glassdoor 27) in order to earn the “minimum salary as it in United States” (Reddit 5).
Phase 3: turning points and potential relapse
In the final phase of the model, our analysis revealed that a few individuals experienced a turning point (see Nguyen & Loughran, 2018) that influenced their decision to either leave the job or persist despite its challenges. Those who chose to quit were primarily motivated by four factors: (1) deceptions involving salary, (2) psychological distress, (3) inadequate or insufficiently compensated salaries, and (4) negligent management style. Only the third constitutes potential fraud, but all of them represent abuse and deception to some degree.
The recruitment phase led to more concrete communications regarding the amount workers were to be paid. The extent to which pay rates and ultimate remuneration could be described as misleading varied, but complaints regarding false promises and bait-and-switch approaches to worker pay were among the most common. For example, "everything changed after we signed the contract, the salary and this work is not as good as we thought at first." As described briefly in theme 1, workers warned others that commissions can be extremely low and unstable. There were also complaints that worker payments were often late or sometimes did not occur at all. For example, a worker leaving a review on Trustpilot complained that "On top of that, the pay is terrible and many people have reported not receiving any payments at all" (Trustpilot 13).
Overall, the company was heavily criticized for a lack of transparency and clarity regarding when payments could be expected and how much they would ultimately provide. A common complaint was that workers had no way to record and document timesheets or the number of text messages they sent that were supposed to result in payment – no daily, weekly, or monthly accounting of the same. Such systems are a common feature of many "gig economy" work arrangements (see, e.g., Uber, Lyft, Bird; see Wood et al., 2019; Stewart & Stanford, 2017) and are designed precisely to avoid such negative outcomes. Workers communicated that one needed to be extremely attentive to the number of messages sent each month, in volumes that made tracking time-consuming and exhausting. Thus, most workers, especially those clearly stating they were from the US or Europe, complained that they could "barely survive with this little money" (Glassdoor 33) or that "this work is a waste of time" (Reddit 3).
Four employees decided to quit due to mounting psychological distress. They expressed feelings of guilt about being chat moderators and a deep sense of empathy regarding the emotional trauma experienced by clients. These employees described their experiences as emotionally draining, as they found themselves constantly engaging in a variety of excuse-making processes – such as neutralizations (Sykes and Matza, 1957; Maruna and Copes, 2004; Benson, 1985) and accounts (see Hewitt & Stokes, 1975; Hunter, 1984; Scott & Lyman, 1968) – both to convince themselves and others that it was acceptable to continue the work. Eventually, these forms of moral disengagement failed, and the workers quit. In the aftermath, they expressed sympathy and concern for the clients whom they had had a hand in financially and emotionally exploiting.
Glassdoor review 22: “this is a degrading work, makes you morally unsafe, scam people who think they are talking to “real” people”.
Reddit 7: “those clients are not talking to one person, but an entire company worth of people. I cannot bear such an emotional toll; it was lies and manipulation. I quite after a few hours”.
Glassdoor review 21: “a complete rip-off for client, I cannot image if they know the truth. you cheat men on the dating site and pretend that you want a relationship with him, but he will never meet you. men just pay 1 Euro(!) per message. A chat moderator get 10 cents...”
Glassdoor review 22: “destroying people lives, some go into deep depression”.
Reddit 6: “I'd be more than happy to pose as random women if the people talking were aware that it wasn't a real person and they just wanted a bit of cheeky fun, but the deception left a bad taste in my mouth that hardly seemed worth 7 cents a message”.
Consistent with the themes related to negligent or dishonest business practices listed above, trouble getting paid was a common occurrence and caused some workers to quit. Six prior employees disclosed that they quit because of irregularities with their payments from the company (being paid the wrong amount, getting paid late, or not receiving their wages at all). Even when they were paid, it was often only after multiple complaints to the company.
Reddit 5: I started working with them in mid-October and I still have not received payment for my work. The team leader hasn't responded to my emails, and I have contacted their general contacts email but so far there has been no response. They lied about how much work I will be able to have and now are not even paying what little I managed to earn.
Trustpilot review 6: I worked for this company for about 2 months and have not received my salary. I tried to contact the manager who hired me, then I wrote directly to the info email of Cloudworkers, but there was no reply. Moreover, the job is about talking with different men on the website, but the messages come once an hour. So, the maximum you can earn is 3 euros a day. But that's not a problem, because you will not see your money anyway. Just wasted my time. I have a contract, but who will go to court for unpaid 10 euros?
Glassdoor review 10: Email communication only, seemed like a scam from the beginning, which proved to be true. Read comments here and on Reddit. That's not freelance. That's smth totally else. No workiy hours discussed, salary is a subject not to be included in an agreement (!), nor paymnet process and how the salary is being accounted or paid.
Trustpilot review 13: “…you will never know the true nature of this work by talking to persons inside Cloudworkers. The truth is that the pay is terrible and many people have reported not receiving any payments at all. They have even closed their regular Trustpilot page (cloudworkers.company) for new reviews which is pathetic. Stay far away from this fraudulent company”.
In addition, at least six workers in our sample complained that Cloudworkers would automatically block and ban workers’ accounts without warning or notice. The reasons for such disruptions were seemingly random, sometimes referencing a violation of company policy but often coming without any explanation at all. In such instances, the commissions earned in the period prior to account deactivation were confiscated by the company. Although not directly revealed by the workers, we suspect a text-based algorithm was employed to automate the evaluation of worker performance in real time. Workers who underperformed over a specified period, in terms of both texting errors and text-production rate, were likely deactivated automatically. For example:
Glassdoor review 15: I am so sick of this job now. You can get banned without any notice for simply making a small mistake. You can also be blocked if you work in wrong hour.
Glassdoor review 21: Right Grammar all the time or account gets automatically blocked. After few month of work i was left without salary Team leader was very deceptive and rude. The salary is extremely unstable and bad.
As we pointed out earlier, workers occupied the statuses of exploited and exploiter simultaneously. While many workers expressed feelings of betrayal and abuse by the company, others (sometimes the same people) reported on their role as perpetrators of deception against clients. In keeping with Wang and Topalli’s (2022) model of ORF perpetration, workers employed specific IM and IDT techniques to attract client/victims to engage with them: using fictional profiles and identities and portraying themselves as having sexually driven personalities. Our data from worker accounts were replete with examples of such deceptive impression management:
Glassdoor review 10: “…no pros at all. you cheat men on the dating site and pretend that you want a relationship with him, but he will never meet you. men just pay 1 Euro(!) per message”.
Reddit 6: “you need to really good at communicate in adult topics to impress your clients”.
Reddit 5: “When I had my first shift I found out that you’re continuing conversations with men who think they’re paying to message real women. The men sound very in love and into the women. What the men don’t realize is that they never talk to the same person. I would get someone’s profile and have to read previous messages that sometimes as many as a dozen Cloudworkers had worked on. You have to keep the guys messaging as they pay per text they send. When the guys ask for your number or to meet, you string them along so they pay for more and more texts. They can never meet the women they fall for because they aren’t real. It’s catfishing. The photo and profile they see is a photo of someone they aren’t really talking to. They’re talking to an entire company worth of people. I was told to never reply without asking a question, as this prompted the men to keep responding. It was not sex chat, it was lies and manipulation. I quit after a few hours”.
As noted by one prior worker, “chat moderating can be deceiving for workers, but not a scam, but definitely a scam for clients as I did not realize so many clients have fallen in love with these fake profiles” (Reddit 6). To successfully initiate or maintain conversations on sexual topics with clients, workers in the sample used fictional identities and profile pictures provided by the company to bait victims into investing in continued messaging, believing “they are talking with actual attractive women on a dating site” or that “they have chance to meet up with you in real life when you do not” (Reddit 6). In the end, workers acknowledged that the clients were being duped not by one worker but by multiple individuals sharing the workload according to algorithms established by the company: “…what the men don’t realize is that they never talk to the same person. I would get someone’s profile and have to read previous messages that sometimes as many as a dozen Cloudworkers had worked on” (Reddit 7).
Peak of relationship
Workers who continued their employment with the company were able to take advantage of the company’s technological resources and algorithms to employ social engineering approaches (taught to them through the company’s training materials) steeped in impression management and deception techniques. These tools were indispensable to workers seeking to convince unsuspecting clients to maintain their engagement. Proprietary profiles provided by the company could be modified over time to incorporate new information the company collected on clients. This allowed for powerful levels of personalization, which were employed to achieve high levels of engagement through the illusion of authenticity and common interests. In this hyperpersonal form of communication, clients are led to believe they are conversing with a single woman who perfectly matches their ideal expectations. In reality, these conversations are sequential, involving an entire team of moderators engaging with each client.
Glassdoor review 5: “…as long as you are ok with adult topics, this work would not be considered illegal because it is on internet and no one will know the real you”.
Reddit 7: “those clients are not talking to one person, but an entire company worth of people…it was simply a lies and manipulation…”
The turning point and revelation
A “turning point” (see, Wang and Topalli, 2022) occurs after fraudsters make their initial financial proposition. In traditional online romance fraud, this takes place after a psychosocial investment period by the fraudster in the victim, during which they have employed impression management, deception, and social engineering to convince the victim of an emotional connection and the legitimacy of their intentions for a relationship. At this stage, victims may either comply with the requests or become suspicious and exercise caution, ultimately leading to the revelation regarding the true intent of the relationship and its termination. We refer to relationships that are terminated before funds are exchanged as evidencing a “preemptive turning point.” Those that end after some funds have been exchanged evidence a “reactive turning point.” The turning point typically leads to the revelation of the true nature of the interaction. ORF studies consistently detail that victims experience financial losses as well as tremendous emotional and psychosocial consequences of having been misled and duped. These consequences exist as well for victims of IMFI, with the caveat that because the sexting relationship initiates with pay-per-text chatting, by definition all of these engagements can evidence only reactive turning points.
Workers clearly understood the negative consequences for clients and their families (see above) regardless of whether they sympathized with them or not, with statements such as, “this scam can cause significant trouble to their families” (Reddit 4). In the absence of direct data from client/victims, we can only deduce from the commentary of workers that some significant proportion of clients experienced financial hardships due to the funds spent in the company’s moderated chat rooms, as well as the related psychological and emotional consequences evident in ORF scams.
It stands to reason that a large number of victims may have sought or are seeking restitution or assistance from law enforcement or other government consumer agencies in their home countries to hold the company accountable for its business practices. We are unaware of such efforts based on the data we collected for this paper, for two likely reasons. First, workers are unlikely to be privy to such actions unless those actions attracted media attention; currently there are no relevant legal cases posted online about chat moderating. Second, were such actions taken by clients, it is plausible (even likely) that the company took remediating actions (such as refunds) to head off more difficult legal entanglements before its business practices were exposed. Our sense is that it may be only a matter of time before a client/victim unsatisfied with mere financial compensation decides to pursue legal action to seek justice.
Wang and Topalli (2022) refer to “the relapse” as the stage in which victims who eventually discover the truth of their relationship with a fraudster engage in processes of self-deception and self-justification to reimagine their culpability in their own victimization, and then reengage with the same fraudster or a different one, believing that the new relationship is real. This emotional recidivism is common in abusive relationships, and it is clear that similar dynamics are at play not only in traditional ORF but in IMFI as well. In fact, research shows that one of the strongest predictors of repeat victimization is the presence of earlier abuse (see, e.g., Hindelang, et al., 1978; Feinberg, 1980).
In the current study, we also hypothesize the existence of a group of victims who, even after becoming aware that the entire situation is a scam, choose to continue investing money in sexting conversations to fulfill their romantic or sexual desires. This is particularly likely given the powerful cyber-industrialization and personalization tools available to the company and its workers, designed to ensnare and engage clients in extended financial relationships with the company. Prior research on victimization supports the notion that certain backgrounds or personality traits (see, e.g., Winkel, et al., 2003; Miano, et al., 2021) may make some individuals more susceptible to repeatedly engaging in such conversations. We would assume, in fact, that the processes described above were designed to achieve this effect with this specific population. Because individual workers did not develop their own specific fraudulent relationships with individual clients, it was not possible for them to divulge such effects in the data we gathered. That said, subsequent research involving victims of IMFI would shed light on this possibility and provide corroboration of the effects detailed by the workers in our sample.
Estimates are that there are over 163 million online workers in the world (Kässi, et al., 2021), with some non-trivial proportion of them engaging in fraudulent behavior of their own accord or as employees of businesses operating internationally. At the same time, it is well documented that platform workers are themselves often the target of exploitation (see, e.g., De Stefano, et al., 2022). Because of the criminogenic affordances provided by technology (including algorithmic processes, machine learning, distributed social and commerce networks, and customer-service-oriented enterprise solutions), many of these workers are likely operating as “exploited exploiters” of a large number of client/victims living primarily in the Western hemisphere. In this vein, our results introduce the reader to IMFI as part of this online commerce-enabled system of fraudulent business practices. IMFI is an adaptation of an existing form of crime (online romance fraud), itself an adaptation of a more traditional, physical-world crime (FTF romance fraud), demonstrating a common evolution of certain crimes from the FTF physical world to the online world (e.g., bank robbery has become online bank hacking). IMFI represents a further evolution of online crime by virtue of its exponentiation via technology and business practices. In this case, previous forms of online deviance (online romance fraud and catfishing) have been cyber-industrialized through automation, algorithms, and client-management platforms to produce a new type of potential crime. An important marker of this new form of crime is that one of its key tools – the fake profile implemented by workers to dupe clients – is fairly static in the case of ORF, relying on manual updating by a single perpetrator and incorporating little in the way of automated or technologically supplemented processes.
IMFI profiles, on the other hand, are continuously updated in real time, with contextual information added (for example, the local weather where a fake profile supposedly lives) for increased realism and authenticity. Because profiles are managed by a client management software system, the workload of scammers can be distributed across multiple actors for maximum efficiency. The development and application of these profiles by workers feature tried-and-true principles of impression management, deception theory, and social engineering to maximize value extraction from clients and profitability for the company. As such, our investigation demonstrates a key principle of the crime exponentiation hypothesis (Topalli & Nikolovska, 2020), wherein advances in technology expand the potential number of offenses and victims through principles of crime diversification/innovation and crime productivity/efficiency.
Although IMFI bears striking similarities to online romance scams, there are important differences. One of them is that in IMFI there is no prolonged time investment of the worker (fraudster) in the client (victim) because the exchanges are monetary from the very start. Unlike in traditional ORF, the question is not whether an individual will lose money but how much. As such, traditional ORF is subject to both preemptive and reactive turning points while IMFI is subject only to the latter. At the same time, the top-end losses for ORF are likely higher, since victims may share financial information with fraudsters in a way that facilitates full access to their bank accounts or set up payments with higher amounts, while losses from IMFI are processed in smaller increments (1-2 euros per text) via mobile payment systems that are more likely to have automated financial controls and pay limits (though this has yet to be measured). In addition, individuals perpetrating ORF scams are often proactive in coaxing victims into moving to private messenger platforms. This shift aims to ensure maximum privacy and lower the risk of exposure. However, our analysis of the guidelines set forth by the chat moderating company shows the opposite. Cloudworkers employees actively discourage migrating to cost-free private messengers, even if clients (the victims) make such requests. After all, if clients switch to free messengers, the economic model of IMFI breaks down. Finally, because IMFI is perpetrated by individuals working for a larger organization, the client is contending with both the company and a worker as perpetrators. Because the workers have access to the significant resources of the company, the level of exploitation they can perpetrate is greater than what they could achieve by enacting ORF on their own. At the same time, we have provided ample evidence that their own relationship with the company is fraught with deception and exploitation.
We would argue there are significant implications here for scholars studying labor rights and wage theft related to online work (see, e.g., Vallas & Schor, 2020; Bittle & Snider, 2018; De Stefano, 2016), with parallels to other industries where financial exploitation is common (e.g., predatory lending; see Lawson, 2013; Mesly et al., 2020).
From a criminological standpoint, IMFI represents an interesting example of the criminological principle of the victim/offender overlap. Unlike traditional models of the victim-offender overlap, which identify how the co-occurrence of offending and personal victimization accrues from geographical and social proximity (see, e.g., Lauritsen, et al., 1991; Sampson & Lauritsen, 1990), the overlap in IMFI is produced consciously by the business operating an IMFI scheme. The company's management uses IM and IDT techniques to attract employees, who then employ similar techniques, to maximum effect, to impress customers and persuade them to spend more money by leveraging the cyber-industrialization tools provided by the company itself (see Figure below).
We identified three distinct IM-related techniques used by the corporation in our sample: implementing a systematic hiring process, enforcing a commission-based salary structure, and imposing strict working rules. Furthermore, employees who successfully secure a job and continue to work for Cloudworkers, even after learning about the deceptive process, also use fictional profiles and IM techniques to impress customers and increase their commission rates. In this scenario, we can identify two additional IM techniques used by workers: using fictional identities and profiles, and portraying themselves as receptive to sexual/adult topics.
Workers in our study were diverse in their self-identification as victims of the company or as victimizers of clients (or both). In the absence of direct data, we can only speculate, but there seem to be two continua of self-perception among workers: one related to the extent to which they see themselves as victims of the company (from not a victim at all to totally exploited) and one related to the extent to which they see themselves as exploiters of the clients (from not at all exploitative to highly exploitative). We suspect these identities may be correlated but not necessarily causally related to one another. In other words, it is possible for a worker to see themselves as both exploited and exploitative, or as exploited and justified in their work, and vice versa. It is likely that variation in self-identification along these dimensions accrues from both situational and dispositional factors and may also change over time as workers become more or less comfortable with their activities for the company. A fruitful line of research would be to examine the extent to which workers successfully employ excuse-making strategies before (e.g., neutralizations; see Sykes and Matza, 1957), during (e.g., moral disengagement; see Bandura, 1999), or after (e.g., accounts; see Scott & Lyman, 1968) their exploitation of clients to manage their levels of internal (guilt) and external (shame) emotions. Understanding human behavior as both a private and public process is a common and parsimonious approach in both psychology and criminology (see, e.g., Carver and Scheier’s (1981) work on public and private self-consciousness, and Reckless’ (1967) formulation of containment theory).
These considerations – public vs. private self-assessments of the worker’s role in perpetuating exploitation while suffering exploitation at the same time – are well suited to those seeking to understand the ethical and moral dimensions of business decision-making, especially in the online world, where companies like Cloudworkers are able to leverage technology to achieve their aims.
While our findings provide valuable insights into the operations of IMFI, it is important to acknowledge the limitations of our work, which point to the need for further research in this area. First, this study relies on open-source data gathered from victims' reviews posted on review websites and public forums, rather than on interviews conducted directly with the victims themselves. This approach provides valuable information about an emerging form of online fraud that has yet to be thoroughly studied by academic researchers. Its strength lies in the fact that the data were not collected proactively. Wang and Topalli (2022) highlighted the importance of utilizing voluntary testimonials in studies, as they can address the limitations associated with leading questions and biased interview protocols. However, it is important to note that the insights gained from these reviews and public posts offer a limited view of the IMFI process, as they rely on individuals who self-selected into seeking out and working for Cloudworkers. While these datasets offer the aforementioned advantages, they still present challenges in terms of objectivity, as former employees may primarily focus on negative experiences and provide only partial information about their roles as chat moderators. We would likely obtain more varied and nuanced data from workers across a broader spectrum of companies, especially if we were to interview them directly rather than rely on their postings on the websites we included in our data search.
Additionally, verifying the authenticity of these testimonials and reports can be difficult, as it is possible for rival freelancer companies to deliberately leave negative reviews or post unfavorable comments about Cloudworkers, or for disgruntled former employees to (ironically) post multiple complaints under multiple profiles. Consequently, these limitations underscore the need for future qualitative research that involves conducting direct interviews with workers operating for different companies. By employing this approach, it would be possible to analyze the interview transcripts for consistency and determine whether the operational model observed in the current study should be revised, improved, or supplemented. Finally, direct interviews with both workers and clients would provide a fuller picture of how these practices operate in real time and the ways in which the cyber-industrialization of fraud represents a unique new form of online scamming. Such interviews would also inform the detection and prevention of such crimes.
The second limitation of the current study pertains to the lack of a comprehensive socio-legal assessment of the "offense" in question, as it remains unstudied in academia. It would be unreasonable to arbitrarily categorize IMFI as a crime, deviance, or mere exploitation, given the absence of actual legal case precedents or appeals from victims at a societal level. There are certainly distinctions to be made between what is ethical vs. what is legal vs. what is moral in this case. In the current paper, we have chosen to label IMFI practices as deceptive, but are they truly fraudulent in the legal sense? Clients of the company are, after all, voluntarily paying for their sexting sessions and receiving a service in return. The extent to which the value of this service is degraded by its deceptive nature is an important question. This limitation presents challenges in terms of how we should label individuals working for Cloudworkers and how we label the company itself. As time progresses, we anticipate that there may be legal appeals from both victims and workers who have been deceived by these so-called "freelancer" companies. This could lead to more social or legal discussion and policy on the subject.
We would highlight what we view as the most obvious and controversial implication of this work: the extent to which companies employing IMFI processes can eliminate human workers from the equation through the application of large language model artificial intelligence (LLM-AI) platforms like ChatGPT (currently in its fourth iteration). The interactions and communication between clients and workers in the Cloudworkers ecosystem rely heavily on gathering background data on clients over time and distributing those data across workers. But these same data can be used to train and automate sexting discussions via ChatGPT. As LLM-AI becomes more sophisticated9 it will be capable of developing and deploying algorithms able to maximize participation (and thus financial exploitation) of clients. The capacity of such systems to approximate human discourse is currently being studied (see, Aggarwal, et al., 2023; Guo, et al., 2023; Metz, 2023), and it is not difficult to imagine that training an AI to take over the sexting duties of a human would be fairly simple, and therefore inevitable, given what we would assume to be a vast digital library of sexting interactions owned by Cloudworkers and other companies. Because such systems do not possess the general-AI capacities of self-reflection and moral reasoning, it is likely that LLM-AIs implemented through customer service platforms will present thorny moral and legal issues for policymakers, consumer advocates, and governments. As we are only now becoming aware of IMFI, the potential for AI to advance IMFI is not yet being considered. This is an unfortunate side effect of the exponential nature of technological development racing ahead of our assumptions about the progression of online crime and its unanticipated exploitation by bad actors.
AI-enhanced IMFI represents a fourth-generation advancement of romance fraud (with FTF romance fraud, ORF, and IMFI representing the first three generations), for which we would seem to be ill-prepared. These same processes bear relevance for a multitude of other types of online commercial fraud (e.g., advance fee fraud scamming) and interpersonal criminal deception (e.g., online grooming of minors). As such, we conclude this paper with the caveat that further study into the nature and potential of future forms of AI-enhanced IMFI would seem well warranted.
315 Consumer Association. (2020). Sha Zhu Pan scam regains its force: The love of millions in debt is just a “misunderstanding.” https://weibo.com/ttarticle/x/m/show/id/2309404582430601248861? _wb_client_=1&object_id=1022%3A2309404582430601248861&extparam=lmid– 4582430600338773&luicode=10000011&lfid=1076035784133871
Aggarwal, N., Saxena, G. J., Singh, S., & Pundir, A. (2023). Can I say, now machines can think?. arXiv preprint arXiv:2307.07526.
Albanese, J. (2014). Organized crime: From the mob to transnational organized crime. Routledge.
Alkhalil, Z., Hewage, C., Nawaf, L., & Khan, I. (2021). Phishing attacks: A recent comprehensive study and a new anatomy. Frontiers in Computer Science, 3, 563060.
Aziz, S. J., Bolick, D. C., Kleinman, M. T., & Shadel, D. P. (2000). The national telemarketing victim call center: Combating telemarketing fraud in the United States. Journal of Elder Abuse & Neglect, 12(2), 93-98.
Bandura, A. (1999). Moral disengagement in the perpetration of inhumanities. Personality and social psychology review, 3(3), 193-209.
Berg, B. L. (2001). Qualitative research methods for the social sciences. Allyn & Bacon.
Bittle, S., & Snider, L. (2018). How employers steal from employees. Social Justice, 45(2/3), 119-146.
Borst, S., Mandelbaum, A., & Reiman, M. I. (2004). Dimensioning large call centers. Operations research, 52(1), 17-34.
Britz, M. T. (2008). A New Paradigm of Organized Crime in the United States: Criminal Syndicates, Cyber‐gangs, and the Worldwide Web. Sociology compass, 2(6), 1750-1765.
Buchanan, T., & Whitty, M. T. (2014). The online dating romance scam: causes and consequences of victimhood. Psychology, Crime & Law, 20(3), 261-283.
Buller, D. B., & Burgoon, J. K. (1996). Interpersonal deception theory. Communication theory, 6(3), 203–242. https://doi.org/10.1111/j.1468-2885.1996.tb00127.x
Bullée, J. W. H., Montoya, L., Pieters, W., Junger, M., & Hartel, P. (2018). On the anatomy of social engineering attacks—A literature‐based dissection of successful attacks. Journal of investigative psychology and offender profiling, 15(1), 20-45.
Buse, U. (2005). Africa’s city of cyber gangsters. Der Spiegel. Retrieved November 11, 2022.
Carver, C. S., & Scheier, M. F. (1981). The self-attention-induced feedback loop and social facilitation. Journal of Experimental Social Psychology, 17(6), 545-568.
Cavalcante, A., Slade, A. F., Narro, A. J., & Buchanan, B. P. (2014). Reality Television: Oddities of Culture. Lexington Books.
Chang, J. J. (2008). An analysis of advance fee fraud on the internet. Journal of Financial Crime, 15(1), 71-81.
Cialdini, R. B. (2001). The science of persuasion. Scientific American, 284(2), 76–81.
Chen, K., & Tomblin, D. (2021). Using data from reddit, public deliberation, and surveys to measure public opinion about autonomous vehicles. Public Opinion Quarterly, 85(S1), 289-322.
Chinazzo, G. (2021). Investigating the indoor environmental quality of different workplaces through web-scraping and text-mining of Glassdoor reviews. Building Research & Information, 49(6), 695-713.
Cross, C. (2020). ‘Oh we can’t actually do anything about that’: The problematic nature of jurisdiction for online fraud victims. Criminology & Criminal Justice, 20(3), 358-375.
Cross, C., Holt, K., & O’Malley, R. L. (2022). “If U Don’t Pay they will Share the Pics”: Exploring Sextortion in the Context of Romance Fraud. Victims & Offenders, 1-22.
Cross, C., Holt, K., & Holt, T. J. (2023). To pay or not to pay: An exploratory analysis of sextortion in the context of romance fraud. Criminology & Criminal Justice, 17488958221149581.
Curiskis, S. A., Drake, B., Osborn, T. R., & Kennedy, P. J. (2020). An evaluation of document clustering and topic modelling in two online social networks: Twitter and Reddit. Information Processing & Management, 57(2), 102034.
Das R. M. (2022). Indian scam call centres looted over $10 billion in 11 months from US senior citizens this year. Firstpost. https://www.firstpost.com/tech/news-analysis/indian-scam-call-centres-looted-over-10-billion-in-11-months-from-us-senior-citizens-this-year-11896001.html
Das Swain, V., Saha, K., Reddy, M. D., Rajvanshy, H., Abowd, G. D., & De Choudhury, M. (2020, April). Modeling organizational culture with workplace experiences shared on glassdoor. In Proceedings of the 2020 CHI conference on human factors in computing systems (pp. 1-15).
De Stefano, V. (2016). Introduction: Crowdsourcing, the gig-economy and the law. Comparative Labor Law & Policy Journal, 37(3).
De Stefano, V., Durri, I., Stylogiannis, C., & Wouters, M. (2022). Exclusion by default: Platform workers quest for labour protections. A Research Agenda for the Gig Economy and Society, 13.
Deschamps-Berger, T., Lamel, L., & Devillers, L. (2021, September). End-to-end speech emotion recognition: challenges of real-life emergency call centers data recordings. In 2021 9th International Conference on Affective Computing and Intelligent Interaction (ACII) (pp. 1-8). IEEE.
Dhamija, R., Tygar, J. D., & Hearst, M. (2006, April). Why phishing works. In Proceedings of the SIGCHI conference on Human Factors in computing systems (pp. 581-590).
Evans, A., Elford, J., & Wiggins, D. (2008). Using the internet for qualitative research. The Sage handbook of qualitative research in psychology, 315-333.
Feinberg, S. (1980). Statistical modelling in the analysis of repeat victimization. In Feinberg, S., and Reiss, A. (eds.), Indicators of Crime and Criminal Justice: Quantitative Studies, U.S. Department of Justice, Washington, DC, pp. 54-58.
Fieser, J. (1996). Do businesses have moral obligations beyond what the law requires?. Journal of Business Ethics, 457-468.
Gans, N., Koole, G., & Mandelbaum, A. (2003). Telephone call centers: Tutorial, review, and research prospects. Manufacturing & Service Operations Management, 5(2), 79-141.
Gerber, C. (2021). Community building on crowdwork platforms: Autonomy and control of online workers? Competition & Change, 25(2), 190-211.
Goffman, E. (1978). The presentation of self in everyday life (p. 56). London: Harmondsworth.
Gottfredson, M. G. (1981). On the etiology of criminal victimization. Journal of Criminal Law and Criminology, 72, 714–726.
Guo, B., Zhang, X., Wang, Z., Jiang, M., Nie, J., Ding, Y., ... & Wu, Y. (2023). How close is ChatGPT to human experts? Comparison corpus, evaluation, and detection. arXiv preprint arXiv:2301.07597.
Haita-Falah, C. (2017). Sunk-cost fallacy and cognitive ability in individual decision-making. Journal of Economic Psychology, 58, 44-59.
Hartney, T. (2018). Likeness used as bait in catfishing: How can hidden victims of catfishing reel in relief? Minnesota Journal of Law, Science & Technology, 19, 277.
Helm, B., Scrivens, R., Holt, T. J., Chermak, S., & Frank, R. (2022). Examining incel subculture on Reddit. Journal of Crime and Justice, 1-19.
Hindelang, M. J., Gottfredson, M. R., & Garofalo, J. (1978). Victims of personal crime: An empirical foundation for a theory of personal victimization. Cambridge, MA: Ballinger.
Holt, T. J., & Graves, D. C. (2007). A qualitative analysis of advance fee fraud e-mail schemes. International Journal of Cyber Criminology, 1(1), 137-154.
Hooley, T., Wellens, J., & Marriott, J. (2012). What is online research? Using the internet for social science research (p. 176).
Hughes, J., Chua, Y. T., & Hutchings, A. (2021). Too Much Data? Opportunities and Challenges of Large Datasets and Cybercrime. Researching Cybercrimes: Methodologies, Ethics, and Critical Approaches, 191-212.
Hunter, M., & Hachimi, A. (2012). Talking class, talking race: Language, class, and race in the call center industry in South Africa. Social & Cultural Geography, 13(6), 551-566.
Iqbal, T., Khan, M., Taveter, K., & Seyff, N. (2021, September). Mining reddit as a new source for software requirements. In 2021 IEEE 29th International Requirements Engineering Conference (RE) (pp. 128-138). IEEE.
Jennings, W. G., Piquero, A. R., & Reingle, J. M. (2012). On the overlap between victimization and offending: A review of the literature. Aggression & Violent Behavior, 17, 16–26.
Kässi, O., Lehdonvirta, V., & Stephany, F. (2021). How many online workers are there in the world? A data-driven assessment. arXiv preprint arXiv:2103.12648.
Kitzie, V. L. (2017). Affordances and constraints in the online identity work of LGBTQ+ individuals. Proceedings of the Association for Information Science and Technology, 54(1), 222-231.
Kitzie, V. (2018). "I pretended to be a boy on the Internet": Navigating affordances and constraints of social networking sites and search engines for LGBTQ+ identity work. First Monday.
Koebert and McNally (2023). Catfish Capitals: These Are the Places You’re Most Likely To Fall Victim to a Catfishing Scam. All About Cookies. https://allaboutcookies.org/catfishing-scams-by-state
Lauder, C., & March, E. (2023). Catching the catfish: Exploring gender and the Dark Tetrad of personality as predictors of catfishing perpetration. Computers in Human Behavior, 140, 107599.
Lauckner, C., Truszczynski, N., Lambert, D., Kottamasu, V., Meherally, S., Schipani-McLaughlin, A. M., ... & Hansen, N. (2019). “Catfishing,” cyberbullying, and coercion: An exploration of the risks associated with dating app use among rural sexual minority males. Journal of Gay & Lesbian Mental Health, 23(3), 289-306.
Lauritsen, J. L., Sampson, R. J., & Laub, J. H. (1991). The link between offending and victimization among adolescents. Criminology, 29(2), 265-292.
Lawson, A. (2013). Foreclosure stories: Neoliberal suffering in the great recession. Journal of American Studies, 47(1), 49-68.
Lee, C. S., & Jang, A. (2023). Sharing Experiences and Seeking Informal Justice Online: A Grounded Theory Analysis of Zoombombing Victimization on Reddit. Victims & Offenders, 1-20.
Lee, J. Y., Chang, O. D., & Ammari, T. (2021). Using social media Reddit data to examine foster families' concerns and needs during COVID-19. Child Abuse & Neglect, 121, 105262.
Letico, V., Iliadis, M., & Walters, R. (2022). De(a)fining consent: Exploring nuances of offering and receiving sexual consent among Deaf and Hard-of-Hearing people. Criminology & Criminal Justice, 17488958221120887.
Liu, Y., Lin, F. Y., Ahmad-Post, Z., Ebrahimi, M., Zhang, N., Hu, J. L., ... & Chen, H. (2020, November). Identifying, collecting, and monitoring personally identifiable information: From the dark web to the surface web. In 2020 IEEE International Conference on Intelligence and Security Informatics (ISI) (pp. 1-6). IEEE.
Lundmark, E., & LeDrew, S. (2019). Unorganized atheism and the secular movement: Reddit as a site for studying ‘lived atheism’. Social Compass, 66(1), 112-129.
Malik, J. K., & Choudhury, S. (2019). Privacy and surveillance: the law relating to cyber crimes in India. Journal of Engineering, Computing and Architecture, 9(12), 74-98.
Marcum, C. D., Higgins, G. E., Freiburger, T. L., & Ricketts, M. L. (2014). Exploration of the cyberbullying victim/offender overlap by sex. American Journal of Criminal Justice, 39, 538-548.
Sykes, G. M., & Matza, D. (1957). Techniques of neutralization: A theory of delinquency. American Sociological Review, 22(6), 664-670.
Menon, S., & Guan Siew, T. (2012). Key challenges in tackling economic and cyber-crimes: Creating a multilateral platform for international co‐operation. Journal of Money Laundering Control, 15(3), 243-256.
Mesly, O., Shanafelt, D. W., Huck, N., & Racicot, F. É. (2020). From wheel of fortune to wheel of misfortune: Financial crises, cycles, and consumer predation. Journal of Consumer Affairs, 54(4), 1195-1212.
Metz, C. (2023a). How smart are the robots getting? The New York Times. https://www.nytimes.com/2023/01/20/technology/chatbots-turing-test.html. Accessed on February 9, 2023.
Miano, P., Bellomare, M., & Genova, V. G. (2021). Personality correlates of gaslighting behaviours in young adults. Journal of Sexual Aggression, 27(3), 285-298.
Miramirkhani, N., Starov, O., & Nikiforakis, N. (2016). Dial one for scam: Analyzing and detecting technical support scams. In 22nd Annual Network and Distributed System Security Symposium (NDSS) (Vol. 16).
Mitnick, K. D., & Simon, W. L. (2003). The art of deception: Controlling the human element of security. John Wiley & Sons.
Nguyen, H., & Loughran, T. A. (2018). On the measurement and identification of turning points in criminology. Annual Review of Criminology, 1, 335-358.
Nikolovska, M. (2020). The Internet as a creator of a criminal mind and child vulnerabilities in the cyber grooming of children. JYU dissertations.
Nolan, M. P. (2015). Learning to circumvent the limitations of the written-self: The rhetorical benefits of poetic fragmentation and internet "catfishing". Persona Studies, 1(1), 53-64.
O’Connell, R. (2003). A typology of child cybersexploitation and online grooming practices. Cyberspace Research Unit, University of Central Lancashire.
O’Malley, R. L. (2023). Short-term and long-term impacts of financial sextortion on victim’s mental well-being. Journal of Interpersonal Violence, 38(13-14), 8563-8592.
Onyebadi, U., & Park, J. (2012). ‘I’m Sister Maria. Please help me’: A lexical study of 4-1-9 international advance fee fraud email communications. International Communication Gazette, 74(2), 181-199.
Park, A., & Conway, M. (2017). Tracking health related discussions on Reddit for public health applications. In AMIA Annual Symposium Proceedings (Vol. 2017, p. 1362). American Medical Informatics Association.
Patchin, J. (2013). Catfishing as a form of cyberbullying. Cyberbullying Research Center. https://cyberbullying.org/catfishing-as-a-form-of-cyberbullying.
Pavithra, A., & Westbrook, J. (2022). An assessment of organisational culture in Australian hospitals using employee online reviews. PLOS ONE, 17(9), e0274074.
Perloff-Giles, A. (2018). Transnational cyber offenses: Overcoming jurisdictional challenges. Yale Journal of International Law, 43, 191.
Reckless, W. (1967). The crime problem. New York: Appleton-Century-Crofts.
Reichart Smith, L., Smith, K. D., & Blazka, M. (2017). Follow me, what's the harm: Considerations of catfishing and utilizing fake online personas on social media. Journal of Legal Aspects of Sport, 27, 32.
Richard, B., Sivo, S. A., Ford, R. C., Murphy, J., Boote, D. N., Witta, E., & Orlowski, M. (2021). A guide to conducting online focus groups via Reddit. International journal of qualitative methods, 20, 16094069211012217.
Remmers de Vries, S., & Valadez, A. A. (2008). Let our voices be heard: Qualitative analysis of an Internet discussion board. Journal of Creativity in Mental Health, 3(4), 383-400.
Roy, S., & Sanyal, S. N. (2017). Perceived consumption vulnerability of elderly citizens: A qualitative exploration of the construct and its consequences. Qualitative Market Research: An International Journal, 20(4), 469-485.
Sade-Beck, L. (2004). Internet ethnography: Online and offline. International Journal of Qualitative Methods, 3(2), 45-51.
Sallaz, J. J. (2019). Lives on the line: how the Philippines became the world's call center capital. Oxford University Press.
Sampson, R. J., & Lauritsen, J. L. (1990). Deviant lifestyles, proximity to crime, and the offender-victim link in personal violence. Journal of Research in Crime and Delinquency, 27(2), 110-139.
Schlenker, B. R. (1980). Impression management (pp. 79–80). Monterey, CA: Brooks/Cole.
Schneider, D. J. (1981). Toward a broader conception. In J. T. Tedeschi (Ed.), Impression management theory and social psychological research (1st ed.). Academic Press.
Scott, M. B., & Lyman, S. M. (1968). Accounts. American Sociological Review, 33(1), 46-62.
Sheckels, J. M., & Farer, J. L. (2018). Investigating and Prosecuting Transnational Telefraud Schemes: The India-Based Call Center Scam and Costa Rica Telemarketing Fraud Cases. Dep't of Just. J. Fed. L. & Prac., 66, 213.
Simmons, M., & Lee, J. S. (2020). Catfishing: A look into online dating and impersonation. In Social Computing and Social Media. Design, Ethics, User Behavior, and Social Network Analysis: 12th International Conference, SCSM 2020, Held as Part of the 22nd HCI International Conference, HCII 2020, Copenhagen, Denmark, July 19–24, 2020, Proceedings, Part I 22 (pp. 349-358). Springer International Publishing.
Siu, G. A., & Hutchings, A. (2023, July). "Get a Higher Return on Your Savings!": Comparing adverts for cryptocurrency investment scams across platforms. In 2023 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW) (pp. 158-169). IEEE Computer Society.
Sparks, R. F. (1981). Multiple victimization: Evidence, theory, and future research. Journal of Criminal Law and Criminology, 72, 762.
Srivastava, A., Ayyalasomayajula, S., Bao, C., Ayabakan, S., & Delen, D. (2022). Relationship between electronic health records strategy and user satisfaction: a longitudinal study using clinicians’ online reviews. Journal of the American Medical Informatics Association, 29(9), 1577-1583.
Stewart, A., & Stanford, J. (2017). Regulating work in the gig economy: What are the options? The Economic and Labour Relations Review, 28(3), 420-437.
Ta, T. A., Chan, W., Bastin, F., & L’Ecuyer, P. (2021). A simulation-based decomposition approach for two-stage staffing optimization in call centers under arrival rate uncertainty. European Journal of Operational Research, 293(3), 966-979.
Tambe Ebot, A. C., Siponen, M., & Topalli, V. (2023). Towards a cybercontextual transmission model for online scamming. European Journal of Information Systems, 1-26.
The Economic Times (2023). Gig workers, call centre scams, ghost work: the dark side of India's tech sector. Retrieved from: https://economictimes.indiatimes.com/tech/technology/gigs-scams-ghost-work-the-dark-side-of-indias-tech-sector/articleshow/98896795.cms
Topalli, V., Wright, R., & Fornango, R. (2002). Drug dealers, robbery and retaliation: Vulnerability, deterrence and the contagion of violence. British Journal of Criminology, 42(2), 337-351.
Topalli, V., & Nikolovska, M. (2020). The future of crime: how crime exponentiation will change our field. The Criminologist, 45(3), 1-8.
U.S. Attorney’s Office (2022). Multiple India-based call centers and their directors indicted for perpetuating phone scams affecting thousands of Americans [Press release]. https://www.justice.gov/usao-ndga/pr/multiple-india-based-call-centers-and-their-directors-indicted-perpetuating-phone-scams.
Vallas, S., & Schor, J. B. (2020). What do platforms do? Understanding the gig economy. Annual Review of Sociology, 46, 273-294.
Van den Eynde, S., Pleysier, S., & Walrave, M. (2023). Non-consensual dissemination of sexual images: The victim-offender overlap. Social Sciences & Humanities Open, 8(1), 100611.
Wang, F., & Topalli, V. (2022). Understanding Romance Scammers Through the Lens of Their Victims: Qualitative Modeling of Risk and Protective Factors in the Online Context. American Journal of Criminal Justice, 1-37.
Wang, F., & Zhou, X. (2023). Persuasive schemes for financial exploitation in online romance scam: An Anatomy on Sha Zhu pan (杀猪盘) in China. Victims & Offenders, 18(5), 915-942.
Whitty, M. T., & Buchanan, T. (2012). The online romance scam: A serious cybercrime. CyberPsychology, Behavior, and Social Networking, 15(3), 181-183.
Whitty, M. T. (2013). The scammers persuasive techniques model: Development of a stage model to explain the online dating romance scam. British Journal of Criminology, 53(4), 665-684.
Whitty, M. T. (2015). Anatomy of the online dating romance scam. Security Journal, 28, 443-455.
Whitty, M. T. (2018). Do you love me? Psychological characteristics of romance scam victims. Cyberpsychology, Behavior, and Social Networking, 21(2), 105-109.
Whitty, M. T., & Buchanan, T. (2016). The online dating romance scam: The psychological impact on victims–both financial and non-financial. Criminology & Criminal Justice, 16(2), 176-194.
Winkel, F. W., Blaauw, E., Sheridan, L., & Baldry, A. C. (2003). Repeat criminal victimization and vulnerability for coping failure: A prospective examination of a potential risk factor. Psychology, Crime and Law, 9(1), 87-95.
Wittebrood, K., & Nieuwbeerta, P. (1999). Wages of sin? The link between offending, lifestyle and violent victimization. European Journal of Criminal Policy and Research, 7, 63–80.
Wood, A. J., Graham, M., Lehdonvirta, V., & Hjorth, I. (2019). Good gig, bad gig: Autonomy and algorithmic control in the global gig economy. Work, Employment and Society, 33(1), 56-75.
Woodcock, J. (2022). Artificial intelligence at work: The problem of managerial control from call centers to transport platforms. Frontiers in Artificial Intelligence, 5, 888817.
Young, S. (2017). “Kittenfishing is the new online dating term you've probably experienced”. The Independent. https://www.independent.co.uk/life-style/kittenfishing-catfishing-online-dating-term-trend-love-relationships-a7818056.html.