Latif U. Khan

and 5 more

In contrast to methods relying on centralized training, emerging Internet of Things (IoT) applications can employ federated learning (FL) to train a variety of models with improved performance and privacy preservation. FL calls for the distributed training of local models at end-devices, which requires significant processing power (i.e., CPU cycles/sec). However, many end-devices, such as IoT temperature sensors, have limited computing power. One solution to this problem is split FL. However, split FL has its own problems, including a single point of failure, fairness issues, and a poor convergence rate. To overcome these issues, we propose a novel framework called hierarchical split FL (HSFL). Our HSFL framework is built on grouping. Within each group, partial models are computed at the devices, with the remaining computation carried out at the edge servers. After computing the local models, each group performs local aggregation at the edge. The resulting edge-aggregated model is sent back to the end-devices so they can update their local models. After a set number of rounds, this procedure produces a distinct edge-aggregated HSFL model for each group. These edge-aggregated HSFL models are then shared among the edge servers and aggregated to produce a global model. Additionally, we formulate an optimization problem that accounts for the relative local accuracy (RLA) of devices, transmission latency, transmission energy, and the computing latency of edge servers in order to minimize the cost of HSFL. The formulated problem is a mixed-integer non-linear programming (MINLP) problem and cannot be solved easily. To tackle this challenge, we decompose the formulated problem into two sub-problems: an edge computing resource allocation sub-problem and a joint RLA minimization, wireless resource allocation, task offloading, and transmit power allocation sub-problem. Due to its convex nature, the edge computing resource allocation sub-problem is solved using a convex optimizer, whereas a block successive upper-bound minimization (BSUM) based approach is used for the joint RLA minimization, wireless resource allocation, task offloading, and transmit power allocation sub-problem. Finally, we present performance evaluation results for the proposed HSFL scheme.
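As a rough illustration of the two-tier aggregation flow described above, the sketch below applies FedAvg-style weighted averaging first at the edge (per group) and then across edge servers. The function names, weighting choice, and toy data are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of two-tier (edge, then cloud) model aggregation, assuming
# FedAvg-style weighted averaging at both tiers. Names and data are illustrative.
import numpy as np

def weighted_average(models, weights):
    """Aggregate parameter vectors by a weighted mean."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return sum(w * m for w, m in zip(weights, models))

def edge_round(group_models, group_samples):
    """Edge aggregation: average the device-side (partial) models of one group."""
    return weighted_average(group_models, group_samples)

def global_round(edge_models, group_sizes):
    """Cloud-level aggregation of the per-group edge models into a global model."""
    return weighted_average(edge_models, group_sizes)

# Toy example: 2 groups, 3 devices each, 4-dimensional parameter vectors.
rng = np.random.default_rng(0)
groups = [[rng.normal(size=4) for _ in range(3)] for _ in range(2)]
samples = [[100, 50, 150], [80, 80, 40]]

edge_models = [edge_round(m, s) for m, s in zip(groups, samples)]
global_model = global_round(edge_models, [sum(s) for s in samples])
print(global_model)
```

In the actual framework, the edge aggregation would be repeated for several rounds per group before the edge models are exchanged and fused into the global model.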

Shawqi Al-Maliki

and 3 more

Protecting the privacy of personal information, including emotions, is essential, and organizations must comply with relevant regulations to ensure privacy. Unfortunately, some organizations do not respect these regulations, or they lack transparency, leaving human privacy at risk. These privacy violations often occur when unauthorized organizations misuse machine learning (ML) technology, such as facial expression recognition (FER) systems. Therefore, researchers and practitioners must take action and use ML technology for social good to protect human privacy. One emerging research area that can help address privacy violations is the use of adversarial ML for social good. Evasion attacks, which are used to fool ML systems, can be repurposed to prevent misused ML technology, such as ML-based FER, from recognizing true emotions. By leveraging adversarial ML for social good, we can prevent organizations from violating human privacy by misusing ML technology, particularly FER systems, and protect individuals' personal and emotional privacy. In this work, we propose an approach called Chaining of Adversarial ML Attacks (CAA) to create a robust attack that fools the misused technology and prevents it from detecting true emotions. To validate our proposed approach, we conduct extensive experiments using various evaluation metrics and baselines. Our results show that CAA significantly contributes to emotional privacy preservation, with the emotion fool rate increasing in proportion to the chaining length. In our experiments, the fool rate increases by 48% in each subsequent chaining stage of the chaining targeted attacks (CTA) while keeping the perturbations imperceptible (ε = 0.0001).
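The chaining idea can be sketched as repeatedly perturbing the output of the previous attack stage. The example below uses a targeted FGSM step as a stand-in per-stage attack on a toy classifier; the actual attack composition, models, and hyperparameters used for CAA may differ.

```python
# Illustrative sketch of chaining evasion attacks: each stage perturbs the output
# of the previous stage. FGSM is a stand-in per-stage attack, not the paper's exact CAA.
import torch
import torch.nn.functional as F

def fgsm_step(model, x, target, eps):
    """One targeted FGSM step: nudge x toward a chosen (wrong) emotion label."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), target)
    loss.backward()
    # Targeted attack: descend the loss w.r.t. the chosen target label.
    return (x - eps * x.grad.sign()).clamp(0.0, 1.0).detach()

def chained_attack(model, x, target, eps=1e-4, stages=3):
    """Apply the per-stage attack repeatedly, feeding each output into the next stage."""
    x_adv = x
    for _ in range(stages):
        x_adv = fgsm_step(model, x_adv, target, eps)
    return x_adv

# Usage with a toy classifier (7 emotion classes, 48x48 grayscale inputs).
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(48 * 48, 7))
x = torch.rand(1, 1, 48, 48)
target = torch.tensor([3])  # push the prediction toward an arbitrary wrong class
x_adv = chained_attack(model, x, target)
```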

Hassan Ali

and 3 more

While Deep Neural Networks (DNNs) have been instrumental in achieving state-of-the-art results for various Natural Language Processing (NLP) tasks, recent works have shown that the decisions made by DNNs cannot always be trusted. Recently, Explainable Artificial Intelligence (XAI) methods have been proposed as a means of increasing the reliability and trustworthiness of DNNs. These XAI methods, however, are open to attack and can be manipulated in both white-box (gradient-based) and black-box (perturbation-based) scenarios. Exploring novel techniques to attack and robustify these XAI methods is crucial to fully understanding these vulnerabilities. In this work, we propose Tamp-X, a novel attack that tampers with the activations of robust NLP classifiers, forcing state-of-the-art white-box and black-box XAI methods to generate misrepresented explanations. To the best of our knowledge, we are the first in the current NLP literature to attack both white-box and black-box XAI methods simultaneously. We quantify the reliability of explanations using three different metrics: the descriptive accuracy, the cosine similarity, and the Lp norms of the explanation vectors. Through extensive experimentation, we show that the explanations generated for the tampered classifiers are not reliable and disagree significantly with those generated for the untampered classifiers, even though the output decisions of the tampered and untampered classifiers are almost always the same. Additionally, we study the adversarial robustness of the tampered NLP classifiers and find that tampered classifiers that are harder for the XAI methods to explain are also harder for adversarial attackers to attack.
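Two of the three reliability metrics mentioned above, the cosine similarity and the Lp distance between explanation (attribution) vectors of tampered and untampered classifiers, can be computed as in the sketch below; the exact definitions and normalization used in the paper may differ.

```python
# Sketch of comparing explanation vectors from tampered vs. untampered classifiers
# using two of the metrics mentioned above; the paper's exact definitions may differ.
import numpy as np

def cosine_similarity(e1, e2):
    """Cosine similarity between two explanation (attribution) vectors."""
    return float(np.dot(e1, e2) / (np.linalg.norm(e1) * np.linalg.norm(e2) + 1e-12))

def lp_distance(e1, e2, p=2):
    """Lp norm of the difference between two explanation vectors."""
    return float(np.linalg.norm(e1 - e2, ord=p))

# Toy attribution scores over the same tokens for both classifiers.
untampered = np.array([0.7, 0.1, -0.2, 0.4])
tampered = np.array([-0.3, 0.6, 0.5, -0.1])
print(cosine_similarity(untampered, tampered), lp_distance(untampered, tampered, p=1))
```

Low cosine similarity (and a large Lp distance) between the two attribution vectors indicates that the explanations disagree even when the classifiers' output decisions match.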

Shawqi Al-Maliki

and 6 more

Deep Neural Networks (DNNs) have shown vulnerability to well-designed adversarial examples. Researchers in industry and academia have proposed many defense techniques against adversarial examples. However, none provides complete robustness; even cutting-edge defense techniques offer only partial reliability. Thus, complementing them with another layer of protection is a must, especially for mission-critical applications. This paper proposes a novel Online Selection and Relabeling Algorithm (OSRA) that opportunistically utilizes a limited number of crowdsourced workers to maximize the robustness of the ML system. OSRA strives to use crowdsourced workers effectively by selecting the most suspicious inputs and forwarding them to the crowdsourced workers for validation and correction. As a result, the impact of adversarial examples is reduced and the ML system becomes more robust. We also propose a heuristic threshold selection method that contributes to enhancing the reliability of the prediction system. We empirically validate our proposed algorithm and find that it can efficiently and optimally utilize the allocated crowdsourcing budget. It also integrates effectively with a state-of-the-art black-box defense technique, resulting in a more robust system. Simulation results show that OSRA outperforms a random selection algorithm by 60% and achieves performance comparable to an optimal offline selection benchmark. They also show that OSRA's performance is positively correlated with system robustness.
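The core selection idea can be sketched as scoring incoming inputs by a suspiciousness heuristic and forwarding the highest-scoring ones to crowd workers until the budget runs out. In the illustrative sketch below, suspiciousness is approximated by low top-class confidence and the threshold is fixed; the actual OSRA scoring rule and threshold adaptation are not reproduced here.

```python
# Illustrative sketch of online, budget-limited selection of suspicious inputs for
# crowdsourced relabeling. The scoring rule and fixed threshold are placeholders,
# not the paper's exact OSRA algorithm.
import numpy as np

def suspiciousness(probs):
    """Higher when the classifier is less confident in its top prediction."""
    return 1.0 - np.max(probs)

def select_for_crowd(prob_stream, budget, threshold=0.5):
    """Online pass over a stream of softmax outputs; return indices sent to the crowd."""
    sent = []
    for i, probs in enumerate(prob_stream):
        if budget == 0:
            break
        if suspiciousness(probs) >= threshold:
            sent.append(i)
            budget -= 1
    return sent

# Toy stream of softmax outputs for a 3-class classifier.
stream = [np.array([0.9, 0.05, 0.05]), np.array([0.4, 0.35, 0.25]), np.array([0.55, 0.3, 0.15])]
print(select_for_crowd(stream, budget=1))  # the low-confidence input is forwarded
```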
Next generation wireless networks are expected to be extremely complex due to their massive heterogeneity in terms of the types of network architectures they incorporate, the types and numbers of smart IoT devices they serve, and the types of emerging applications they support. In such large-scale and heterogeneous networks (HetNets), radio resource allocation and management (RRAM) becomes one of the major challenges encountered during system design and deployment. In this context, emerging Deep Reinforcement Learning (DRL) techniques are expected to be one of the main enabling technologies to address RRAM in future wireless HetNets. In this paper, we conduct a systematic, in-depth, and comprehensive survey of the applications of DRL techniques in RRAM for next generation wireless networks. Towards this, we first overview the existing traditional RRAM methods and identify the limitations that motivate the use of DRL techniques in RRAM. Then, we provide a comprehensive review of the most widely used DRL algorithms to address RRAM problems, including value- and policy-based algorithms. The advantages, limitations, and use-cases of each algorithm are provided. We then conduct a comprehensive and in-depth literature review and classify existing related works based on both the radio resources they address and the types of wireless networks they investigate. To this end, we carefully identify the types of DRL algorithms utilized in each related work, the elements of these algorithms, and the main findings of each related work. Finally, we highlight important open challenges and provide insights into several future research directions in the context of DRL-based RRAM. This survey is intentionally designed to guide and stimulate more research endeavors towards building efficient and fine-grained DRL-based RRAM schemes for future wireless networks.
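As a toy illustration of the value-based family of DRL methods covered by such surveys, the sketch below runs tabular Q-learning on an invented channel-selection task; realistic RRAM problems require deep function approximation (e.g., DQN) over much larger state and action spaces.

```python
# Toy sketch of a value-based RL method for resource allocation: tabular Q-learning
# on an invented channel-selection task. The environment is purely illustrative.
import numpy as np

n_states, n_channels = 4, 3              # e.g., interference levels x available channels
good_channel = np.array([0, 2, 1, 0])    # toy mapping: best channel per state
Q = np.zeros((n_states, n_channels))
alpha, gamma, eps = 0.1, 0.9, 0.1
rng = np.random.default_rng(0)

state = 0
for _ in range(5000):
    # epsilon-greedy action selection
    if rng.random() < eps:
        action = int(rng.integers(n_channels))
    else:
        action = int(np.argmax(Q[state]))
    reward = 1.0 if action == good_channel[state] else 0.0  # throughput proxy
    next_state = int(rng.integers(n_states))                 # random traffic/fading change
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print(np.argmax(Q, axis=1))  # learned channel per state (expected to recover good_channel)
```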

Asad Ali

and 5 more

In response to various privacy risks, researchers and practitioners have been exploring different paradigms that can leverage the increased computational capabilities of consumer devices to train machine learning (ML) models in a distributed fashion without requiring the uploading of training data from individual devices to central facilities. For this purpose, federated learning (FL) was proposed as a technique that can learn a global ML model at a central master node by aggregating models trained locally on private data. However, organizations may be reluctant to train models locally and to share these local ML models, both because of the computational resources required for model training at their end and because of privacy risks that may result from adversaries inverting these models to infer information about the private training data. Incentive mechanisms have been proposed to motivate end users to participate in the collaborative training of ML models (using their local data) in return for certain rewards. However, designing an optimal incentive mechanism for FL is challenging due to its distributed nature and the fact that the central server has no access to clients' hyperparameter information or to the amount and quality of data used for training, which makes it difficult to determine rewards based on the contributions of individual clients in an FL environment. Even though several incentive mechanisms have been proposed for FL, a thorough, up-to-date systematic review is missing, and this paper fills this gap. To the best of our knowledge, this paper is the first systematic review that comprehensively enlists the design principles required for implementing these incentive mechanisms and then categorizes various incentive mechanisms according to their design principles. In addition, we provide a comprehensive overview of the security challenges associated with incentive-driven FL. Finally, we highlight the limitations and pitfalls of these incentive schemes and elaborate upon open research issues that require further attention.
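One simple design principle that recurs in this literature is rewarding clients in proportion to a measured contribution score, e.g., the marginal validation-accuracy gain attributable to a client's update. The sketch below shows such a proportional split of a fixed budget; the scoring rule and names are illustrative and not taken from any specific scheme in the review.

```python
# Minimal illustration of a contribution-proportional reward split among FL clients.
# The contribution scores and names are illustrative, not from any specific scheme.
def proportional_rewards(contributions, budget):
    """Return each client's payout, proportional to its non-negative contribution."""
    clipped = {cid: max(c, 0.0) for cid, c in contributions.items()}
    total = sum(clipped.values())
    if total == 0:
        return {cid: 0.0 for cid in clipped}
    return {cid: budget * c / total for cid, c in clipped.items()}

# Toy contribution scores (e.g., accuracy gain when each client's update is included).
print(proportional_rewards({"client_a": 0.02, "client_b": 0.05, "client_c": -0.01}, budget=100.0))
```

The difficulty highlighted above is precisely in obtaining trustworthy contribution scores when the server cannot observe clients' data quality or training hyperparameters.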