Considerations for Using Artificial Intelligence to Manage Authorized Push Payment (APP) Scams

Artificial Intelligence (AI)-based security intelligence modeling can be used to prevent, detect, and manage cyber threats. Data-driven AI solutions are the subject of active research and design refinement, but few scholars or practitioners frame authorized push payment (APP) scams as a distinct cybersecurity concern, or tailor technical solutions to local regulatory contexts. Drawing on a recent consultation publication by the UK Payment Systems Regulator on APP scams (November 2021), this article shows how AI can be leveraged to manage APP scams and explores some of the opportunities and risks one should consider when adopting such an approach. We highlight three scenarios: 1) liability on the payment service provider; 2) liability on the payor; and 3) liability on the payor with substantial public sector involvement. These examples illustrate how sociotechnical systems can inform design, and can thereby assist industry leaders and engineering management in prioritizing investment focus, strategic approaches, and technical solutions.


I. INTRODUCTION
Whether by raising workforce capacity and productivity or by offering data-driven predictions and solutions, Artificial Intelligence (AI) presents tangible benefits that drive our technological society forward [1]. Digital activities, human behavior, and consumer decisions can now be analyzed using programmable algorithmic models, signifying how AI has become inextricable from our digitized society.
Not only is AI capable of simplifying our lives, but it can also act as a gatekeeper that accurately predicts and proactively manages incoming risks. Industry leaders and technical professionals rely heavily on AI to understand risk exposure and prioritize resources accordingly. AI is one of the most viable options for tackling current technological and societal problems: from navigating the COVID-19 pandemic by predicting case numbers and monitoring treatment results [2], [3], to responding to natural disasters and building resilient public infrastructure [2], [4], to managing cyber threats by detecting suspicious activities and identifying risk patterns [5], [6], [7].
This article is primarily motivated by the changing regulatory environment in the United Kingdom regarding authorized push payment (APP) scams. Previously, financial institutions in the UK did not need to reimburse APP scam victims, but a recent regulatory proposal changes this: it shifts the liability for financial losses from APP scams from the victim to the payment service provider, and requires major financial institutions in the UK to reimburse APP scam victims.
What are APP Scams?
Authorized push payment (APP) scams happen when consumers are deceived into sending payments under false pretenses to bank accounts controlled by fraudsters. As payments made using real-time payment schemes are irrevocable, victims cannot reverse transactions once they realize they have been scammed.
Such a major regulatory change naturally introduces additional operational costs as financial institutions begin to pay out APP scam victims. These costs primarily come from financial reimbursements to APP scam victims, the hiring of additional bank employees to process APP scam claims, and the design and implementation of new policies and strategies to better manage APP scams.
This change raises numerous questions. What are some actions management teams can take or should consider within this new regulatory requirement? What are some precautions that should be built in when designing and redesigning strategic solutions to APP scams? Are there any existing technologies that management teams can utilize to better manage APP scams?
With these questions as a framework, and building on the experiences of the authors as academic researchers and professionals working in senior management teams in the Canadian banking industry, this article will guide readers through several considerations surrounding APP scams, both social and technological. We argue that: 1) APP scams should be tackled as a unique cybersecurity concern: contrary to traditional cybersecurity concerns, there is no "hacking" component or user authentication challenge involved in APP scams, since they all involve customers' authorized transactions [5], [6], [7]; 2) technology leaders can consider utilizing AI applications to combat APP scams [6], [7]; and 3) the design of strategic technical and socio-legal solutions for APP scams should consider local sociotechnical systems such as local regulatory contexts, regional research and development (R&D) priorities, and the global legal environment.
Before designing any cybersecurity technical solutions, it is crucial to consider human actors and local regulatory contexts through a sociotechnical lens, so that investment resources can be better prioritized to manage various types of cybersecurity concerns. Industry observations reveal that organizations typically juggle more than one key deliverable within a given timeframe, which makes such prioritization essential. We also offer some internationally applicable recommendations to improve AI performance in detecting and preventing APP scams.
In this article, we have identified two literature gaps: 1) a lack of cybersecurity-focused research for APP scams; and 2) inadequate application of sociotechnical system theory as a primary theoretical framework to address APP scams. Both literature gaps will be discussed throughout Sections IV and V.
Intended audiences for this article include business or technical professionals who hold managerial or strategic leadership responsibilities to mitigate cyber financial crimes and cybersecurity risks. Technical professionals who manage projects related to fraud prevention technology and fraud loss reduction tactics are also encouraged to read and review. This article aims to foster and forward leadership thinking in sustaining a balance between regulations, user experience, technology design, and strategic priorities.
The structure of this article is as follows: In Section II, we discuss the general application of artificial intelligence in cybersecurity, and some of the successful algorithmic models currently being applied in the industry; in Section III, we provide a summary of background and context for APP scams, including their definition and some classic scam schemes; in Section IV, we identify literature gaps in cybersecurity-focused research for APP scams and argue that APP scams should not be treated as a purely human problem, and that there are benefits to leveraging nonhuman actors when designing fraud management strategies; in Section V, we identify literature gaps including the inadequate application of sociotechnical system theory and recommend approaching APP scams through this theoretical lens; in Section VI, different regulatory contexts are highlighted and various strategic priorities are recommended; and in Section VII, we provide a conclusion reemphasizing the reasons why managers should consider using artificial intelligence to address APP scams.

II. THE ROLE OF ARTIFICIAL INTELLIGENCE IN CYBERSECURITY
Cybersecurity refers to work performed to protect networks, devices, and data from unauthorized access or criminal use, and to ensure the confidentiality, integrity, and availability of information [8]. The US National Institute of Standards and Technology (NIST) lists ransomware, spyware, rootkits and botnets, denial-of-service attacks, phishing, and website and wireless network security as examples of cybersecurity risks [9]. Malicious cyber threat activities can include exploiting technical vulnerabilities, employing social engineering techniques, and obfuscating penetration processes to avoid detection [10].
AI can help detect, prevent, and manage cyber risks through engaging with various algorithmic models [11], [12], [7]. Some examples of AI-based security intelligence modeling provided by [6] include: 1) Machine learning (ML)-based modeling: using supervised learning, unsupervised learning, security feature optimization, and deep learning to classify malicious activities, detect hidden behavioral patterns, and identify malware traffic [13], [14]; 2) Natural language processing (NLP)-based modeling: using lexical analysis, syntactic analysis, and semantic analysis to expose malicious domain names, examine code vulnerabilities, and recognize phishing attempts and malware attacks [15], [16], [17]; 3) Knowledge representation and conceptual modeling: using logical representation, semantic network representation, frame representation, and production rules to apply cybersecurity knowledge to the real world and train AI to solve complex security problems in a human-like capacity [18], [19]; 4) Cybersecurity expert system modeling: using classification learning rules, association learning rules, fuzzy logic-based rules, and conceptual semantic rules to engage with decision trees, frequent IF-THEN pattern data, and computing based on "degrees of truth" [20], [21], [22].
We provide these examples of AI-based security intelligence modeling to illustrate some of the common practices currently used to manage cyber risks. This nonexhaustive list serves as a representation of the applicability of AI adoption in the industry. AI is well suited to managing cyber risks: cybersecurity risks are constantly evolving and inherently unpredictable, but AI-based security solutions equip organizations with the necessary tools and mechanisms to avoid exploitation by malicious actors in cyberspace.

III. BACKGROUND AND CONTEXT FOR AUTHORIZED PUSH PAYMENT (APP) SCAMS
Among all cybersecurity risks, cyber fraud remains one of the most prevalent yet complex threats to manage. High-risk behaviors can also be difficult to identify if users are legitimately granting authorization for fraudulent transactions, such as in the example of APP scams [23], [24], [25]. The UK's Payment Systems Regulator (PSR) defines push payment as "when someone authorizes their payment service providers (PSP) to send money to a payee's account," while APP scams refer to "when someone is tricked into making a push payment to a fraudster" [26].
APP scams present in various forms, reflecting the high creativity and adaptability of cyber fraudsters. Purchase scams, investment scams, romance scams, advance fee scams, invoice and mandate scams, CEO fraud, impersonation scams, business email compromise, emergency and extortion scams, lottery and money offer scams, and tech scams are all examples of APP scams [27], [28], [29], [30].
It is important to note that APP scams can target both machine and human vulnerabilities, meaning attacks can aim at flaws in both cyber infrastructure and psychological or behavioral weaknesses [10], [31]. Scam strategies often leverage a combination of social engineering tactics (or behavioral and persuasion tactics) and cyber-attacks, including leveraging victims' leaked data from data breaches or using malware to track victims' activities [27], [28], [6], [31], [26]. For example, malicious actors may select potential victims from compromised databases, approach them using available personal information from the data breaches, pretend they are authority figures, intimidate victims using personal information (e.g., date of birth, social security number, and residential address), then trick victims into sending money to complete the APP scam [17], [23], [24]. The combination of technological vulnerabilities and human vulnerabilities makes APP scams hard to detect, hard to govern, and hard to reduce.

IV. APP SCAMS: AN UNDERRESEARCHED CYBERSECURITY CONCERN
Research surrounding unauthorized transactions as a cybersecurity concern has blossomed in recent decades. Subsequently, debate and discussion around using AI, public policies, and law and regulations to manage unauthorized access to financial services has also grown exponentially. Most research, however, focuses primarily on unauthorized rather than authorized transactions, which positions unauthorized access as the more pressing cybersecurity concern [32], [33], [34], [35]. Such a presupposition has led to an imbalance in research focus, meaning that there remains a significant lack of research surrounding APP scams, especially from the technological perspective.
Because of the involvement of human psychological components and transaction authorizations by users, APP scams are often considered to be a human rather than technological problem. For instance, romance scam perpetrators may persuade their targets by building personal rapport, establishing intimate emotional relationships, and fostering a sense of dependence, after which they can easily install ransomware on the victim's machine [36]. Many governments and regulators devote substantial resources to educating consumers about APP scams because the standard belief is that public education can lead to lower victimization rates [31], [36]. They also believe that if customers build fraud awareness, they will be less likely to authorize fraudulent transactions [26], [36]. The first literature gap we identify is that most cybersecurity researchers look solely at unauthorized transactions and may bypass research related to authorized transactions [32], [33], [34], [35].
We argue that APP scams, although often entailing the intricacies of human interactions, should still be considered a cybersecurity threat because of the presence of cyberattack elements at various stages of the process [10], [28], [26]. Sometimes, approaching the scam victim is just the beginning; malicious actors may ask victims to download malware in order to constantly monitor victims' activities through backdoors or keyloggers for future exploitation [9], [10]. While it is true that the security incident may be initiated by authorized transactions, that does not mean there are no subsequent cybersecurity concerns.
In addition, there are several technological aspects of APP scams worth noting from the APP scam detection and cybersecurity defense perspectives. First, APP scams involve suspicious payments that victims do not typically make, making them possible to detect from an algorithmic model perspective, as the transaction may be deemed abnormal based on historical transaction patterns [13], [14]. Second, some APP scams, such as tech support scams, may involve suspicious remotely controlled login sessions. In these scams, malicious actors may instruct victims to download remote control software so that the malicious actors can gain access to and control over the device. Device telemetry and monitoring of login patterns can be useful in detecting APP scams from a behavioral analytics perspective [26], [28]. Finally, APP scams may involve suspicious deposits. For example, romance scam perpetrators may send victims fraudulent cheques and ask the victim to send back the money [24], [28]. In this case, the fraudulent deposit becomes a detectable suspicious activity that can be identified using machine learning algorithms.
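To make the first point concrete, consider the following minimal sketch (in Python, using scikit-learn) of how a transaction that departs from a customer's historical pattern could be surfaced for review. The two features, the data, and the thresholds are illustrative assumptions, not a production design.

```python
# Minimal sketch: flag transactions that deviate from one customer's
# historical pattern. Features and parameters are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical history for one customer: modest daytime payments.
history = np.column_stack([
    rng.normal(80, 20, 500),   # typical payment amount
    rng.normal(14, 3, 500),    # typical hour of day
])

detector = IsolationForest(contamination=0.01, random_state=42).fit(history)

# A large transfer initiated at 3 a.m. resembles many APP scam payments.
candidate = np.array([[4500.0, 3.0]])
if detector.predict(candidate)[0] == -1:   # -1 marks an anomaly
    print("Hold payment and trigger a scam warning or step-up check")
```

In a real deployment, the feature set would be far richer (payee history, device telemetry, session behavior), but the principle is the same: the authorized nature of the payment does not prevent it from being algorithmically anomalous.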
Technology leaders and policy makers should also consider how technology impacts users' judgment and decision-making: for example, generative AI such as ChatGPT and deepfake technologies can mislead APP scam victims. When romance scam victims think they are chatting with genuine lovers online, they may be interacting with generative AI. Therefore, fraud awareness education may not always work if customers fail to identify the legitimacy of the person they are interacting with.
APP scams are neither a purely human problem, nor a purely technological problem. As such, technology leaders and policy makers should consider APP scams to be a unique cybersecurity concern.
V. THROUGH THE LENS OF SOCIOTECHNICAL SYSTEMS
Can APP scams be researched using purely traditional cybersecurity research methodology? Unfortunately, the classic cybersecurity framework is not without its flaws: most cybersecurity research is constructed based on the traditional confidentiality-integrity-availability (CIA) triad, and little work in the field has thus far investigated the complex impacts of external legal, social, and economic environments [37], [38], [39]. Using only traditional cybersecurity research methods can omit important factors in APP scams such as human emotions, language discourse, and digital culture [40], [41], [42].
Approaching APP scams as a traditional cybersecurity concern may lead to resources wasted on customer authentication or transaction verification. Technology resources used to prevent, for example, distributed denial of service (DDoS) attacks will not be helpful in addressing APP scam concerns because the root causes of the two problems are fundamentally different: in APP scams, victims willingly conduct transactions using their legitimate identities and authentication credentials, while in DDoS attacks, cybercriminals may pursue entirely different agendas, such as hacktivism. APP scams are technological challenges that involve significant human intricacies. Therefore, we recommend using sociotechnical systems as a theoretical framework to better understand the complex issues at hand.
The concept of a sociotechnical system stresses the reciprocal interrelationship between humans and machines and argues that such a system contains both the social and the technical as independent yet interacting forces [43], [44]. Sociotechnical systems represent a wide network of complex actors, and the model investigates the development process of a technological artifact, its variations and selections, its associated social groups, and the problems and solutions that can arise [45].
The authors of this article have also performed a brief literature review of sociotechnical systems. Scholars from the field of Science and Technology Studies (STS) and public policy makers have worked to understand how social and technical systems are inseparable [45], [46], [47]. STS scholarship links scientific knowledge, technological artifacts, human-machine interaction, cyborgs and hybrids, and social order through interconnected networks, which leads to the mutual shaping and framing of materiality and the coproduction of knowledge in a dynamic political economy [47]. Even though STS scholars have examined sociotechnical systems for various technologies, such as in the nanotechnology and biomedical fields, they seldom apply sociotechnical systems as a theoretical framework to understand APP scams. We argue that deconstructing the sociotechnical system of APP scams using STS approaches helps policy makers disentangle the material, cultural, political, social, and rhetorical components of APP scam networks.
Approaching APP scams using a sociotechnical system design can be particularly useful for industry leaders because such a multidirectional model engages with leaders from various other social groups rather than only working with internal stakeholders within the given organization. The resulting analysis produces a reflexive understanding of the translation and interpretation of the sociotechnical system, therefore helping to provide a solution to the issue at hand [45], [46], [47]. Governance and regulatory controls in our interconnected digital world have become multilevel, multinodal, and multilateral, accompanying the traits of coevolution, cospecialization, and co-opetition [48], [49]. In the following discussion, we will provide examples of using AI to manage APP scams in different regulatory contexts with the goal of providing industry insights to technical practitioners and strategic thinkers.

VI. APP SCAMS REGULATORY CONTEXTS OVERVIEW
In November 2021, the UK-based Payment Systems Regulator (PSR) published a consultation paper eliciting public opinion on regulating APP scams. Under this proposal, it becomes mandatory for major financial institutions in the UK to reimburse scam victims. In addition, payment service providers (PSPs) also need to share data intelligence to improve the detection and prevention of APP scams, such as data related to the performance of APP scam management, the reimbursement levels for APP scam victims, and the PSPs receiving fraudulent payments [26]. The UK Economic Secretary to the Treasury, John Glen MP, announced in 2021 that the UK Government will legislate accordingly to address any regulatory barriers that may arise while enacting these measures [26].
The Automated Clearing House, a US financial network used for electronic payments and money transfers, has a series of laws and regulations protecting consumers, including limiting consumer liability when unauthorized transactions occur [50]. However, when a payor agrees to the payment and authorizes the transaction in APP scenarios, the liability then shifts [50], [51]. Similarly, Canada, Australia, and many European countries place liability on payors, so victims often must absorb financial losses from APP scams [52]. At the time of writing in 2023, no regulatory requirements in the US, Canada, or Australia mandate PSPs to reimburse APP scam victims. Although the authors suspect regulatory requirements may shift going forward, for the purposes of this article, we consider payors in these countries to be liable. In such cases, victims remain responsible for their authorized transactions, even if they later claim the transactions to be a scam. While victims have the option of filing police reports, law enforcement agencies generally do not carry out a guardian function to keep victims away from APP scams in the first place.
In countries and states where law enforcement agencies carry more authority and have more resources, such as China and Singapore, the public sector plays a larger role in the prevention and management of APP scams. While the liability still lies with the payor, law enforcement agencies devote large amounts of time, resources, and money to identifying risky financial interactions and tracing scammed funds [53], [54]. For example, China encourages the public to download a mobile phone app called the "National Anti-fraud Center" designed to prevent APP scams [55]. To use this antifraud app, however, users need to grant information access to law enforcement agencies so that the police can constantly monitor potential APP scam risks, such as suspicious text messages and high-risk caller IDs. In Singapore, the Singapore Police Force's dedicated Anti-Scam Center, established in 2019, has partnered with more than 20 external stakeholders including banks, Fintech companies, telecommunication companies (telcos), and online marketplaces, so that at the onset of an APP scam, the center has the resources and discretion to immediately freeze bank accounts and launch an investigation in a timely manner [56].

In each situation, we highlight how leaders across public and private institutions can consider various factors and develop strategies to tackle APP scams. For example, leading detection refers to managers who are responsible for detecting APP scams; leading incident response refers to managers who are constantly monitoring cyber-attacks and security intrusions; and leading data design refers to managers who are responsible for consolidating investigative data for algorithmic modeling. We also include discussion aimed at managers in the areas of user interface/user experience design, as well as those responsible for customer education.
We break down different aspects of management in this way because each organization may assign different labels to its teams, which would be difficult to translate into a research paper; nevertheless, the central functions of these teams remain similar. We use key roles and responsibilities instead of team names to best signal the intended audience. We also encourage managers from different areas of business to learn from each other, which can broaden strategic vision and potentially spark opportunities for collaboration.
A. Situation I: Liability on PSP
In scenarios that follow regulations in the UK, where APP scam liability is placed on the PSP, we recommend that PSPs invest in sophisticated AI infrastructure, including robust production rules and unsupervised learning techniques, to detect suspicious activities potentially related to APP scams. We also highlight the risks of not including sufficient qualitative data from the operations team when designing AI, such as APP scam investigation details and victim interview data. We hope to remind managers and leaders that a lack of qualitative data makes designing effective models difficult.
Leading Detection-Developing Robust Production Rules (Technical Focused): Production rules can be well illustrated with the phrase "If Condition then Action." Production rule models usually involve pairs of conditions and corresponding actions [6]. Assorted production rules combine to create a knowledge base that can eventually contribute to a knowledge representation model. PSPs can hire in-house developers to design production rules that create dynamic fraud detection algorithms, including device monitoring, biometrics evaluation, and behavioral pattern analytics [51]. When a device ID is inconsistent with historical records, remote access is detected, or a session lasts longer than normal, rules can be triggered to prompt two-step verification (TSV) for authentication, or to block or suspend accounts [57], [58], [51]. If PSPs use an accelerometer, a tap gesture (light and steady versus strong and shaky) can be used as an indicator to determine whether the customer is under distress [51].
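A minimal sketch of such production rules, assuming hypothetical session attributes, thresholds, and action names, might look as follows; a production deployment would of course maintain a far richer and continuously tuned rule base.

```python
# Minimal "If Condition then Action" sketch. Session fields, thresholds,
# and action names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Session:
    device_id: str
    known_devices: set
    remote_access_detected: bool
    duration_minutes: float
    actions: list = field(default_factory=list)

RULES = [
    # (condition, action) pairs combine into a simple knowledge base.
    (lambda s: s.device_id not in s.known_devices, "prompt_two_step_verification"),
    (lambda s: s.remote_access_detected,           "suspend_account"),
    (lambda s: s.duration_minutes > 45,            "prompt_two_step_verification"),
]

def evaluate(session: Session) -> list:
    for condition, action in RULES:
        if condition(session):
            session.actions.append(action)
    return session.actions

# Unknown device, remote access, and a long session fire all three rules.
s = Session("dev-999", {"dev-001", "dev-002"},
            remote_access_detected=True, duration_minutes=50)
print(evaluate(s))
```

Because each rule is an independent condition-action pair, fraud teams can add, retire, or retune rules without retraining a model, which is one reason production rules remain a workhorse alongside statistical methods.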
Leading Incident Response-Unsupervised Learning for Incident Response (Technical Focused): Unsupervised learning is a machine learning technique used for uncovering hidden patterns and discovering structures of unlabeled datasets [59], [60]. Through unsupervised learning, the machine can build representations from purely unstructured data input and generate decisions driven by data results from AI-based modeling. Unlike supervised learning, where the data are defined and classified, unsupervised learning leverages clustering techniques to group data and measure similarities [6]. Unsupervised learning has been adopted for incident response in various ways, such as the unsupervised classification of unknown web access logs and the training of extracted rules and raw data using unsupervised learning [61], [62]. We suggest leveraging unsupervised learning to identify unknown APP scam patterns. This is similar to using machine learning-based modeling in typical cybersecurity practices; [6] identifies multiple examples of utilization, including partitioning methods (K-means, K-medoids, CLARA, etc.), density-based methods (DBSCAN), distribution-based clustering (Gaussian mixture models), and hierarchical methods, both agglomerative and divisive (single linkage, complete linkage, BOTS, etc.). Unsupervised learning prepares PSPs to combat unknown cyber-attacks and new APP scam trends. It can also help to build a flexible but effective early warning system that can contribute to the implementation of critical fraud strategies and detection mechanisms that fit the corresponding sociotechnical system.
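As a minimal illustration of this approach, the sketch below clusters hypothetical payment features with DBSCAN; points left outside any cluster (label -1) become candidates for analyst review as potentially novel scam patterns. The three features are assumptions chosen for illustration only.

```python
# Minimal unsupervised-learning sketch: density-based clustering to
# surface payments that fit no known behavioral group.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)

# Hypothetical features: amount, payee account age (days), payments/day.
normal = rng.normal([100, 400, 2], [30, 120, 1], size=(300, 3))
odd = np.array([[5000, 2, 9], [4800, 1, 11]])   # new-payee, high-value bursts
X = StandardScaler().fit_transform(np.vstack([normal, odd]))

labels = DBSCAN(eps=0.8, min_samples=5).fit_predict(X)
print("points flagged as noise:", int((labels == -1).sum()))
```

The same pipeline could be swapped to K-means, Gaussian mixtures, or hierarchical methods; the managerial point is that no labeled fraud data is required, which is exactly the situation when a scam pattern is new.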

Leading Data Design-Considerations on Inadequate Qualitative Data (Social Focused): Using AI to limit incidents of APP scams can be efficient, but it introduces the risk of overlooking results found through human investigation. As most PSPs have both in-house fraud detection algorithm developers and fraud investigators, we strongly recommend that PSPs adopt an interdisciplinary approach when developing AI models and performing fraud data analysis [63]. While most algorithm developers have extensive STEM training, not all have a social science background in areas such as criminal justice and crime investigation. A lack of an interdisciplinary approach when designing AI algorithms can miss qualitative data, resulting in incomplete information input when developing frameworks [64].
Qualitative data can add additional information related to investigation details, fraud trend root causes, and victimization processes to the model [63]. We encourage using an interdisciplinary approach when PSPs design their models, so that the tailored algorithms can be customized to respond to the changes and challenges brought by unique sociotechnical systems.
B. Situation II: Liability on Payor
As mentioned, most countries, including the US, Canada, and Australia, place APP scam liability on payors [52]. In these cases, reimbursement of APP scams is not regulated and therefore PSPs are not required to reimburse APP scam losses to victims [52]. Since consumers must absorb losses themselves, AI-based security intelligence modeling can help payors to better recognize APP scams before becoming victimized. We provide two examples: fuzzy logic-based scam warning messages, and supervised learning to tailor scam education. We also highlight the importance of managing technological biases and inequalities when designing AI to stress the component of AI ethics.
Leading User Interface-Fuzzy Logic-Based Scam Warning Messages (Technical Focused): Fuzzy logic is an algorithmic approach to creating a fuzzy expert system. Instead of predicting results based on "true or false" dichotomies, this approach looks for "degrees of truth" [22]. We propose using fuzzy logic-based rules to create an APP scam warning system so that payors can make sound judgments before authorizing a payment to a scammer [51]. The advantage of using fuzzy logic-based rules is that the scam warning message system will be triggered immediately when user behavior resembles the patterns of other APP scam victims. Instead of relying on production rules that set out specific requirements for triggering an action, fuzzy logic-based rules are more flexible in sending out messages specific to each APP scam scenario.
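The following sketch illustrates the "degrees of truth" idea with hand-written membership functions; the signals, thresholds, and combination rules are illustrative assumptions rather than a validated warning policy.

```python
# Minimal fuzzy-logic sketch: warning intensity grows with the degree
# of truth of risk signals rather than flipping on a hard threshold.

def ramp(x: float, low: float, high: float) -> float:
    """Membership rising linearly from 0 (at low) to 1 (at high)."""
    return max(0.0, min(1.0, (x - low) / (high - low)))

def scam_risk(amount: float, payee_age_days: float, distress: float) -> float:
    large_payment = ramp(amount, 500, 5000)       # how "large" is the payment?
    new_payee = 1.0 - ramp(payee_age_days, 0, 30) # how "new" is the payee?
    # Fuzzy AND (min) across payment signals, fuzzy OR (max) with a
    # hypothetical 0..1 distress score from behavioral analytics.
    return max(min(large_payment, new_payee), distress)

risk = scam_risk(amount=3000, payee_age_days=1, distress=0.2)
if risk > 0.7:
    print("Strong interstitial warning before the payment is released")
elif risk > 0.4:
    print("Soft in-context warning message")
```

Because the output is a continuous score, the interface can scale the warning's prominence to the risk level instead of interrupting every payment equally, which helps preserve user experience for low-risk transactions.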

Leading Customer Education-Supervised Learning to Tailor Scam Education (Technical Focused): Supervised learning is one of the most prevalent techniques of machine learning modeling. This approach leverages distinct data from a finite set of values and labels them for classification [65]. Through supervised learning, models can produce output based on given labeled data, and continue to compute new output values for new inputs [66]. We suggest using supervised learning to classify different types of APP scams, labeling the user groups that are most likely to fall for each type of APP scam, and then tailoring scam education based on the likelihood of victimization for each social group [6], [31]. Such APP scam education can be effective as the information is tailored to each user group's psychological and behavioral weaknesses, making them more resilient to APP scams and increasing public understanding of APP scams as a cybersecurity threat. Combatting APP scams requires developers and regulators to collaborate and consider both technological and human factors.
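A minimal sketch of this idea, assuming a small hypothetical labeled dataset of past cases, is shown below; the feature names, encodings, and labels are illustrative assumptions, and a real model would train on large, carefully governed case data.

```python
# Minimal supervised-learning sketch: predict the scam type a customer
# segment is most exposed to, then tailor education accordingly.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Hypothetical past cases: (age_band, channel, avg_txn) -> scam_type.
df = pd.DataFrame({
    "age_band":  [1, 1, 3, 3, 2, 3, 1, 2],
    "channel":   [0, 0, 1, 1, 0, 1, 0, 1],   # 0 = social media, 1 = phone
    "avg_txn":   [60, 75, 300, 280, 120, 350, 50, 140],
    "scam_type": ["romance", "romance", "impersonation", "impersonation",
                  "purchase", "impersonation", "romance", "purchase"],
})

clf = RandomForestClassifier(random_state=0).fit(
    df[["age_band", "channel", "avg_txn"]], df["scam_type"])

# Which scam type is a given user segment most exposed to?
segment = pd.DataFrame([[3, 1, 320]], columns=["age_band", "channel", "avg_txn"])
print(clf.predict(segment))   # likely ['impersonation'] on these toy data
```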

Leading Data Design-Considerations Regarding Technological Biases and Inequalities (Social Focused): Creating AI-based security intelligence models can sometimes cause unintentional technological biases and inequalities [67], [63], [68], [69]. When using AI to manage APP scams, over-blocking profiles stemming from high false positive rates can induce human-machine interaction frictions such as missed payment due dates, excessive step-up authentication, and decreased usage of digital financial services [70]. Additionally, we want to flag that all AI applications should exercise caution when utilizing blacklist or whitelist mechanisms. While various types of machine learning techniques such as K-nearest neighbors, Random Forest learning, and Naive Bayes can be helpful in determining the level of APP scam risk, arbitrarily adopting blacklist or whitelist mechanisms can cause serious consequences related to technological biases and inequalities [68], [69]. Strategic leaders should consider factors contributing to algorithmic inequalities and devise prevention and management strategies, as well as appropriate responses and business continuity plans.
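One simple check managers can request is a comparison of false positive rates across customer groups, as in the sketch below; the data and group labels are illustrative assumptions, and real reviews would use fuller fairness audit suites.

```python
# Minimal fairness-check sketch: compare false positive (over-blocking)
# rates of a scam-blocking model across customer groups.
import numpy as np

def false_positive_rate(y_true, y_pred):
    negatives = (y_true == 0)
    return ((y_pred == 1) & negatives).sum() / max(negatives.sum(), 1)

y_true = np.array([0, 0, 0, 0, 1, 0, 0, 1, 0, 0])   # 1 = actual scam
y_pred = np.array([1, 0, 1, 0, 1, 0, 0, 1, 1, 0])   # 1 = payment blocked
group  = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

for g in np.unique(group):
    mask = group == g
    print(g, "FPR:", false_positive_rate(y_true[mask], y_pred[mask]))
# A materially higher FPR for one group signals over-blocking risk and
# should trigger model review before deployment.
```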

C. Situation III: Liability on Payor With Substantial Public Sector Involvement
Not all nation states mandate law enforcement agencies to actively participate in managing APP scams [52]. In fact, public sector actors often face enforcement challenges due to local privacy laws, as most sensitive data held by private actors is considered privileged information [71]. Depending on local regulatory contexts, police officers may encounter significant difficulties when investigating APP scams. Therefore, countries like China and Singapore grant additional resources and authority to their law enforcement agencies to combat APP scams. In these situations, AI can be used to improve the public sector's detection capabilities. Techniques can include using NLP-based modeling to detect scam attempts and using deep learning to classify evolving cyber risks [72]. We also raise some concerns related to data privacy, so that policy makers in these regions can build a more democratic AI model that commands a higher level of public confidence.
Leading Detection-NLP-Based Modeling to Detect Scam Attempts (Technical Focused): NLP-based modeling leverages patterns of human language for knowledge interpretation from unstructured information, including interpretation, deciphering, comprehension, and sense making [52]. Law enforcement agencies can partner with telecommunication service providers to perform lexical analysis on suspicious text messages and caller IDs. Another option is to work with social media platforms to use semantic analysis to detect high-risk persuasion phrases commonly used in APP scams, such as phrases signaling urgency, stress, and other emotions that target victims' psychological weaknesses [24], [56], [53]. Strategic leaders and consulting advisors working in financial institutions can also consider building such technological functions into their digital financial service platforms, such as telephone banking and banking chatbots.
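A minimal sketch of such lexical screening, assuming a toy labeled corpus, could pair TF-IDF features with a linear classifier; production systems would train on large labeled datasets gathered across messaging channels.

```python
# Minimal NLP sketch: flag high-pressure scam language from text.
# The tiny corpus below is an illustrative assumption.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "act now or your account will be frozen",         # scam
    "urgent: confirm payment immediately to police",  # scam
    "your parcel could not be delivered, pay a fee",  # scam
    "hi mum, new number, need money fast",            # scam
    "your statement for march is ready",              # legitimate
    "reminder: appointment on tuesday at 3pm",        # legitimate
    "thanks for lunch, see you next week",            # legitimate
    "your card ending 1234 was used at a store",      # legitimate
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["urgent, send the money now or be arrested"]))  # likely [1]
```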
Leading Incident Response-Deep Learning to Classify Cyber Risks (Technical Focused): Cyber risks continue to evolve as different digital products and services are developed. For example, extensive research has been devoted to AI applications in wearable technologies [73], [74], [75], [76]. As usage of wearable technologies diversifies, however, cyber risks also increase. For example, many service providers now offer payment options on smartwatches [77], creating a potential new target for APP scams. We recommend using deep learning techniques to continuously classify new cyber risks that may be unknown to regulators. Deep learning leverages artificial neural network modeling to consolidate input layers, hidden layers, and output layers in order to capture cyber anomalies [75]. Deep learning offers useful insights driven by data, including on unseen test cases [6]. Using deep learning to predict and capture new APP scam trends can be useful for improving the technical resilience and incident preparedness of relevant stakeholders, especially when public sector actors have access to large quantities of unstructured data and face ongoing and unknown challenges.
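The sketch below mirrors the input/hidden/output structure described above using a small PyTorch network; the feature dimensions, class count, and randomly generated data are illustrative assumptions, not a production risk classifier.

```python
# Minimal deep-learning sketch: a small feed-forward network that maps
# transaction/telemetry features to risk classes.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(16, 32),   # input layer: 16 hypothetical features
    nn.ReLU(),
    nn.Linear(32, 16),   # hidden layer
    nn.ReLU(),
    nn.Linear(16, 4),    # output layer: 4 hypothetical risk classes
)

X = torch.randn(64, 16)          # hypothetical feature batch
y = torch.randint(0, 4, (64,))   # hypothetical risk labels
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for _ in range(100):             # short illustrative training loop
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

print("final loss:", float(loss))
```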
Leading Data Privacy-Considerations Regarding Data Privacy Issues (Social Focused): Substantial public sector involvement in regulating APP scams is not a perfect solution, mainly because of subsequent data privacy issues. While the public sector can potentially act more efficiently if resources are centralized and prioritized, users must grant information access to the government to receive the reciprocal protection. It has been reported that the Chinese government may use mobile phone apps to monitor users' online activities and track access to banned foreign websites [55]. While we believe active involvement by the public sector can lead to effective regulation of APP scams, we recommend building a more intensive control system to govern data usage. One option is a data trust: a collection of privileged data stored in a data warehouse and accessed only on a subject-oriented, integrated, time-variant, and nonvolatile basis [78]. Public sector actors can consider leveraging a data trust system to build a democratic interaction model with the public and gain more trust and confidence from its users.
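As a minimal illustration of the governance idea, the sketch below gates every read of privileged records behind a declared purpose and an audit log; the record fields, requester names, and permitted purposes are illustrative assumptions.

```python
# Minimal data-trust sketch: purpose-bound, audited access to
# privileged records.
import datetime

class DataTrust:
    def __init__(self, records):
        self._records = records
        self.audit_log = []
        self.allowed_purposes = {"app_scam_investigation", "court_order"}

    def read(self, subject_id: str, requester: str, purpose: str):
        if purpose not in self.allowed_purposes:
            raise PermissionError(f"purpose '{purpose}' not permitted")
        # Every access is recorded for later independent review.
        self.audit_log.append((
            datetime.datetime.now(datetime.timezone.utc).isoformat(),
            requester, subject_id, purpose,
        ))
        return self._records.get(subject_id)

trust = DataTrust({"user-42": {"suspicious_transfers": 3}})
print(trust.read("user-42", requester="anti-scam-centre",
                 purpose="app_scam_investigation"))
print(trust.audit_log)
```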

Summary for Management Insights:
In this section, we have offered various technological and human approaches to APP scam solutions, making clear whether each solution is technical or social in nature. We argue that considering the local regulatory context is crucial for prioritizing investment resources; for example, when deciding whether to hire technology developers or policy strategists, and whether to design technical or social solutions for APP scams.
When liability falls on PSPs, we recommend developing robust production rules and using unsupervised learning for efficient incident response; when liability falls on the payor, we recommend institutions invest in fuzzy logic-based scam warning message systems and engage supervised learning models to tailor consumer education and public awareness; and when liability falls on the payor but substantial public actor involvement is present, we recommend using NLP-based modeling to detect scam attempts and leveraging deep learning models to classify unknown cyber risks. We also highlight some risks for consideration, including inadequate qualitative data resulting in deficient AI design, technological biases and inequalities, as well as data privacy issues stemming from extensive monitoring of consumer activities.
As the authors are both industry practitioners and academic researchers, we are well positioned to observe gaps between practice and research. The technical and social solutions proposed in this article cannot be implemented without support from management teams.
Managers are central in rolling out new technology and policies in any organization. From robust production rules for APP scam detection and unsupervised learning to detect anomalies, to fuzzy logic-based scam warning message systems and deep learning models to classify unknown cyber risks, nothing can be developed without management's support. For example, developing algorithmic models such as unsupervised learning may raise AI governance concerns, which managers need to coordinate; using NLP-based modeling to detect scam attempts may raise privacy concerns, and managers are typically required to evaluate risks and supervise implementation of any regulations or strategies.
Managers are often required to put substantial time and effort into paperwork and ethics governance to create a compliant AI model. If AI governance is not undertaken thoroughly and carefully, including review of any technological biases and inequality concerns (as we have highlighted), the risks are high. We advocate for a more efficient collaboration model for AI governance, and for more productive partnerships between governance and technology development teams.
We offer some examples of interaction models, such as creating a checklist-style template for developers to self-declare risks (as sketched below); establishing an automatic testing mechanism to ensure the fairness of the model; or creating coworking sessions that include AI governance teams to ensure fair design during the project planning process.
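A self-declaration template of this kind could be as simple as the following sketch; the checklist fields and the review rule are illustrative assumptions about what a governance team might require.

```python
# Minimal sketch of a developer self-declaration checklist that a
# governance team reviews before model deployment.
RISK_CHECKLIST = {
    "model_name": "app_scam_detector_v2",            # hypothetical model
    "training_data_sources_documented": True,
    "protected_attributes_excluded_or_justified": True,
    "false_positive_rate_by_group_reviewed": True,
    "blacklist_whitelist_mechanisms_used": False,    # True triggers extra scrutiny
    "privacy_impact_assessment_completed": True,
    "known_limitations": "degrades on first-time payees with <7 days history",
}

def ready_for_review(checklist: dict) -> bool:
    # All boolean gates except the blacklist flag must be True; a True
    # blacklist flag routes the model to a deeper review rather than failing.
    gates = {k: v for k, v in checklist.items()
             if isinstance(v, bool) and k != "blacklist_whitelist_mechanisms_used"}
    return all(gates.values())

print(ready_for_review(RISK_CHECKLIST))   # True for this declaration
```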
With the understanding that each algorithm may be unique, managers should help expedite algorithm deployment timelines, especially for defenses against cyberattacks and APP scams. The time-sensitive nature of cyberattacks and APP scams means industry actors are constantly racing against malicious actors, and any period in which fraudsters retain control can result in significant financial losses. Therefore, exploring how managers can efficiently drive technological development in an ethical and responsible way becomes a vital social concern and a critical organizational challenge that should be prioritized.

VII. CONCLUSION
This article discusses the role of AI-based security intelligence modeling in managing APP scams and offers considerations and recommendations on the risks and opportunities associated with each use case for industry leaders and policy makers.
Broadly speaking, management teams can derive the following takeaways from this article. First, this research-based article aims to offer feasible APP scam management solutions to industry practitioners and policy leaders; however, managers can apply a similar methodology to analyze any emerging cyber risk by identifying available technical solutions, understanding local regulatory contexts, and prioritizing strategies based on the qualities of each cyber risk.
Second, not every organization has sufficient funding or resources to design technical solutions. Prioritizing and customizing technological solutions based on the local sociotechnical system can be one approach when planning business investments.
Finally, AI solutions can provide a viable approach to managing APP scams across regulatory contexts. AI solutions can be used to address many other complex social issues introduced by human actors. While these technical solutions will not be able to fix every security problem at once, breaking down the issues and targeting various pain points can address aspects of the challenge. Instead of declaring that AI cannot solve cybersecurity threats, leaders in the field should begin to think about how it can be utilized to mitigate parts of this critical social and financial problem.