Adnan Qayyum and 5 more

Classical computing works by processing bits, 0s and 1s that represent on and off electrical signals. Quantum computing employs a very different technique for information processing. It uses qubits, which can exist as both 1 and 0 at the same time, and exploits quantum-mechanical properties such as interference, entanglement, and superposition to extend computational capabilities to unprecedented levels. The efficacy of quantum computing is not yet fully explored for important verticals such as healthcare, where it could enable breakthroughs in developing life-saving drugs, performing rapid DNA sequencing, detecting diseases at early stages, and carrying out other compute-intensive healthcare tasks. Furthermore, implementations of quantum computing for healthcare scenarios such as these have their own unique set of requirements. Unfortunately, the existing literature that addresses all of these dimensions is largely unstructured. This research is intended to be the first systematic analysis of the capabilities of quantum computing for enhancing healthcare systems. The article is organized around taxonomies developed from the existing literature to provide a panoramic view of the background and enabling technologies, applications, requirements, architectures, security and open issues, and future research directions. We believe the paper will aid both new and experienced researchers working in the quantum computing and healthcare domains in visualizing the diversity of current research, better understanding both pitfalls and opportunities, and making informed decisions when designing new architectures and applications for quantum computing in healthcare.
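
As a brief illustration of the superposition property mentioned above, a single qubit's state can be written as a weighted combination of the two classical basis states (standard textbook notation, not material from the article itself):

    \[
      |\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
      \qquad |\alpha|^2 + |\beta|^2 = 1 .
    \]

Measuring such a qubit yields 0 with probability $|\alpha|^2$ and 1 with probability $|\beta|^2$, and a register of $n$ entangled qubits spans a $2^n$-dimensional state space, which is the source of the extended computational capability referred to above.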

Hassan Ali and 7 more

Recent works have highlighted how misinformation is plaguing our online social networks. Numerous algorithms for automated misinformation detection are centered around deep learning (DL), which requires large amounts of data for training. However, privacy and ethical concerns reduce data sharing by stakeholders, impeding data-driven misinformation detection. Current data encryption techniques that provide privacy guarantees cannot be naively extended to text inference with DL models, mainly due to the errors induced by stacked encrypted operations and by the polynomial approximations of otherwise encryption-incompatible non-polynomial operations. In this paper, we show, formally and empirically, the effectiveness of (1) $L_2$-regularized training in reducing the overall error induced by approximate polynomial activations, and (2) the sigmoid activation in regulating the error accumulated by cascaded operations over encrypted data. We assume a federated learning-encrypted inference (FL-EI) setup for text-based misinformation detection as a secure and privacy-aware cloud service, where classifiers are securely trained in an FL framework and inference is performed on homomorphically encrypted data. We evaluate three architectures, namely Logistic Regression (LR), Multilayer Perceptron (MLP), and Self-Attention Network (SAN), on two public text-misinformation datasets and report some interesting results; for example, by simply replacing the ReLU activation with sigmoid, we reduce the output error by $1750\times$ in the best case and $43.75\times$ in the worst case.
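
As a rough illustration of the polynomial substitution discussed above, the sketch below fits low-degree least-squares polynomials to sigmoid and ReLU and prints the worst-case approximation error on the fitting interval; the degree, interval, and helper names are illustrative assumptions, not the paper's implementation:

    # Minimal sketch: homomorphic encryption schemes support only additions and
    # multiplications, so non-polynomial activations must be replaced by polynomial
    # approximations. Degree and interval are illustrative choices.
    import numpy as np

    def poly_approx(fn, degree=3, interval=(-5.0, 5.0), n_points=1000):
        """Least-squares polynomial fit to `fn` over the given interval."""
        x = np.linspace(*interval, n_points)
        return np.poly1d(np.polyfit(x, fn(x), degree))

    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    relu = lambda x: np.maximum(0.0, x)

    x = np.linspace(-5.0, 5.0, 1000)
    for name, fn in [("sigmoid", sigmoid), ("relu", relu)]:
        p = poly_approx(fn)
        print(name, "max approximation error:", float(np.max(np.abs(fn(x) - p(x)))))

Because sigmoid is smooth and bounded while ReLU has a kink and grows without bound, the low-degree fit tracks sigmoid far more closely, which is consistent with the intuition behind the activation swap reported above.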

Asad Ali and 5 more

In response to various privacy risks, researchers and practitioners have been exploring paradigms that leverage the increased computational capabilities of consumer devices to train machine learning (ML) models in a distributed fashion, without requiring individual devices to upload their training data to central facilities. For this purpose, federated learning (FL) was proposed as a technique that learns a global ML model at a central master node by aggregating models trained locally on private data. However, organizations may be reluctant to train models locally and to share these local ML models, both because of the computational resources required for model training on their end and because of the privacy risks that may result from adversaries inverting these models to infer information about the private training data. Incentive mechanisms have therefore been proposed to motivate end users to participate in the collaborative training of ML models (using their local data) in return for certain rewards. However, designing an optimal incentive mechanism for FL is challenging because of its distributed nature and because the central server has no access to clients' hyperparameter settings or to the amount and quality of data used for training, which makes it difficult to determine rewards based on the contribution of individual clients in an FL environment. Even though several incentive mechanisms have been proposed for FL, a thorough, up-to-date systematic review is missing, and this paper fills this gap. To the best of our knowledge, this paper is the first systematic review that comprehensively lists the design principles required for implementing these incentive mechanisms and then categorizes the various incentive mechanisms according to those design principles. In addition, we provide a comprehensive overview of the security challenges associated with incentive-driven FL. Finally, we highlight the limitations and pitfalls of these incentive schemes and elaborate upon open research issues that require further research attention.
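
To make the aggregation step described above concrete, here is a minimal FedAvg-style sketch in which the server combines locally trained models into a global model; the function names and the use of local sample counts as aggregation weights are illustrative assumptions rather than any specific scheme from the surveyed literature:

    # Minimal sketch of server-side federated aggregation: the global model is the
    # data-size-weighted average of per-client model parameters. In incentive-driven FL,
    # rewards would additionally depend on an estimate of each client's contribution.
    import numpy as np

    def aggregate(client_weights, client_num_samples):
        """client_weights: one list of per-layer numpy arrays per client."""
        total = float(sum(client_num_samples))
        num_layers = len(client_weights[0])
        return [
            sum(w[layer] * (n / total)
                for w, n in zip(client_weights, client_num_samples))
            for layer in range(num_layers)
        ]

    # Example: three clients, a single 2x2 weight matrix each, different data sizes.
    clients = [[np.full((2, 2), v)] for v in (1.0, 2.0, 3.0)]
    print(aggregate(clients, client_num_samples=[10, 30, 60])[0])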

Adnan Qayyum and 4 more

Retinal images acquired using fundus cameras are often visually blurred due to imperfect imaging conditions, refractive medium turbidity, and motion blur. In addition, ocular diseases such as cataracts also result in blurred retinal images. The presence of blur in retinal fundus images reduces the effectiveness of diagnosis by an expert ophthalmologist or by a computer-aided detection/diagnosis system. In this paper, we put forward a single-shot deep image prior (DIP)-based approach for retinal image enhancement. Unlike typical deep learning-based approaches, our method does not require any training data. Instead, our DIP-based method can learn the underlying image prior from a single degraded image. To perform retinal image enhancement, we frame it as a layer decomposition problem and investigate the use of two well-known analytical priors, i.e., the dark channel prior (DCP) and the bright channel prior (BCP), for atmospheric light estimation. We show that both untrained and pretrained neural networks can be used to generate an enhanced image while using only a single degraded image. We evaluate our proposed framework quantitatively on five datasets using three widely used metrics and complement this with a subjective qualitative assessment of the enhancement by two expert ophthalmologists. We compare our method with a recent state-of-the-art method, cofe-Net, using synthetically degraded retinal fundus images and show that our method outperforms it, providing gains of 1.23 and 1.4 in average PSNR and SSIM, respectively. Based on the reported results, our method also outperforms other works in the literature that evaluated their performance on non-public proprietary datasets.
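
For readers unfamiliar with the analytical priors mentioned above, the sketch below shows a standard dark channel prior (DCP) computation and a common way of using it for atmospheric light estimation; the patch size, top-pixel fraction, and function names are illustrative assumptions, not the paper's implementation, and the bright channel prior (BCP) is the analogous per-patch maximum:

    # Minimal sketch: the dark channel is the per-patch minimum over all color channels;
    # atmospheric light is then estimated from the image colors at the brightest
    # dark-channel locations. Values below are illustrative defaults.
    import numpy as np
    from scipy.ndimage import minimum_filter

    def dark_channel(image, patch_size=15):
        """image: HxWx3 float array in [0, 1]; returns the HxW dark channel."""
        return minimum_filter(image.min(axis=2), size=patch_size)

    def estimate_atmospheric_light(image, patch_size=15, top_fraction=0.001):
        """Average the image colors at the brightest dark-channel pixels."""
        dc = dark_channel(image, patch_size)
        n_top = max(1, int(dc.size * top_fraction))
        flat_idx = np.argsort(dc.ravel())[-n_top:]
        return image.reshape(-1, 3)[flat_idx].mean(axis=0)  # one value per channel

    # Usage with a random stand-in for a degraded fundus image:
    print(estimate_atmospheric_light(np.random.rand(256, 256, 3)))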