MCLA Task Offloading Framework for 5G-NR-V2X-Based Heterogeneous VECNs

Ensuring dependable quality of service (QoS) and quality of experience (QoE) for computation-intensive and delay-sensitive vehicular applications is a challenging task that impacts performance. While multi-access edge computing (MEC) based vehicular edge computing networks (VECNs) and vehicular cloudlets (VCs) enable task offloading, their prompt and optimal accessibility remains a challenge. Conventional wireless technologies may not suffice to meet the stringent ultra-low latency and cost constraints of such applications; however, combining different wireless technologies can enhance network performance and satisfy these requirements. Focusing on the computational efficacy of VECNs, this paper proposes a mobility, contact, and computational load-aware (MCLA) task offloading scheme for heterogeneous VECNs. The MCLA scheme dynamically considers the mobility, contact, and computational load of vehicles when making task offloading decisions. To optimize performance, it integrates Mode-1 and Mode-2 of the 5G-NR-V2X standard along with mmWave communications, and provides an opportunistic switching mechanism between these modes and heterogeneous radio access technologies (RATs) to reduce communication delays and costs. Moreover, the MCLA scheme leverages the computational power of public vehicles (i.e., public buses) in proximity to manage computational latency and cost. It also aggregates the shareable computation resources of passengers' mobile equipment within a public vehicle to improve the vehicle's computation capacity. Extensive evaluations and numerical results show that the proposed MCLA scheme significantly improves the task turnover ratio by 4%-15% with 4.7%-29.8% lower transmission and computation costs.


I. INTRODUCTION
The term vehicular ad-hoc network (VANET) was coined in 2001, as car-to-car communications gained popularity [1]. At first, the scope of VANETs was limited to sharing vehicular kinetic information. However, with continuous technological advancement, a variety of advanced services have been added, making VANETs an indispensable component of the modern intelligent transportation system (ITS) [2]. Today, the integration of diverse information in vehicles and infrastructure nodes, including a vehicle's operational information, proximity vehicle information, and road and traffic conditions, facilitates the transition to intelligent and connected vehicles (ICVs). This transformation has given rise to a wide range of innovative applications, such as vehicle navigation, collision warnings, autonomous driving, and in-vehicle entertainment. However, these applications have stringent computing requirements that often exceed the computational resources available in ICVs. This growing demand for applications and the limited computation resources of ICVs pose a significant challenge to the development of ITS [3].
To overcome the rising demands for computing and storage resources in VANETs, computation offloading and cloud computing have emerged as solutions [4]. However, the distance between cloud servers and the network introduces transmission delays and reliability issues, posing challenges for applications with ultra-low latency requirements. Meeting the needs of real-time audiovisual analytics, immersive gaming, self-driving, and VR/AR entertainment using remote cloud servers becomes challenging. To address the latency constraints in vehicular networks, the concept of multi-access edge computing (MEC) has been introduced. MEC involves deploying servers at roadside units (RSUs) or serving evolved-node-Bs (eNBs) to enhance the capabilities of vehicular networks. This deployment brings computation and storage resources closer to the network edge, resulting in improved efficiency and faster response times for resource-intensive applications [5]. By converting vehicular networks into vehicular edge computing networks (VECNs), MEC minimizes costs and enhances performance [6].
Moreover, various radio access technologies (RATs) are available in VECNs to access different computing paradigms. Both dedicated short-range communications (DSRC) and cellular networks, such as 4G/LTE and 5G new radio (NR) networks, support computation offloading, but their support for different vehicle-to-everything (V2X) use cases can vary [7]. The evolving nature of vehicular RATs can have a significant impact on next-generation VECNs and their applications, affecting factors such as throughput, ultra-low latency, reliability and accuracy, and mobility support [8]. The 5G-NR-V2X technology utilizes mmWave communication, offering high throughput, low latency, and reliability. However, achieving beam alignment between transmitter and receiver antennas can be challenging, especially when vehicles are moving at different speeds. Techniques such as channel estimation, prediction, and tracking can help address this issue. However, factors like vehicle density, interference, and line-of-sight blockages can still impact beamforming performance. To enhance mmWave communications, efficient beam tracking algorithms and hardware acceleration are essential [9].
Beamforming involves directing a focused beam of RF energy towards a specific target, such as another vehicle or an RSU. This process requires complex signal processing and beamforming algorithms, which may introduce some latency or delay in the system [10]. The beamforming process can take a few milliseconds to tens of milliseconds depending on several factors, such as the hardware and software implementation, the complexity of the beamforming algorithm, the speed and direction of the vehicles, and the number of antennas used [11].
Latency in beamforming has significant implications for system responsiveness and accuracy in real-time applications, such as vehicle-to-vehicle communication and collision avoidance systems. Additionally, high latency can lead to dropped packets and data loss, impacting system reliability. Minimizing latency in beamforming is vital for ensuring efficient and reliable communication in mmWave systems [12]. Techniques such as optimizing the beamforming algorithm, reducing signal processing time, employing predictive techniques, utilizing multiple beams, and optimizing the network infrastructure can help achieve this goal [13]. The high mobility of vehicles and the resource-constrained nature of edge nodes also make it challenging to manage high-speed transportation systems with IoT devices acting as edge nodes in ITS [14].
Moreover, vehicles can also communicate with nearby vehicles, vehicular cloudlets, and edge servers, all of which have considerable computation capabilities [5]. However, making the right offloading decision is critical because the time taken to complete a task within a defined threshold depends heavily on this decision. Furthermore, if a vehicle decides to offload its task, it also incurs transmission and computation costs [15]. In other words, resource-demanding vehicles in VECNs face the challenge of making quick and informed decisions to manage delay and cost constraints while also choosing the most appropriate communication technology to access the available computing paradigms.

A. Related Works
Computation offloading in VECNs has attracted significant research attention in recent years. Plenty of research has suggested various game-theoretic approaches, convex/non-convex optimization-based schemes, and other heuristic solutions. Some schemes focus on optimizing processing and transmission delays, some on resource management, and others on reducing offloading costs.
For instance, Gu and Zhao [16] proposed a context-aware, delay-optimizing computation offloading scheme considering vehicles' mobility and energy consumption constraints. In another article, Misra and Bera [17] presented a mobility-aware, processing-delay-curtailing task offloading scheme, segregated into task offloading and fog node selection mechanisms. Aiming to reduce overall system latency, Tang et al. [18] proposed a greedy heuristic task scheduling and offloading mechanism that treats VECNs as a three-layered architecture. Utilizing parked vehicles, Ma et al. [19] provided an offloading scheme focusing on reliability and low latency; the concept is to form virtual servers from clusters of off-street and on-street parked vehicles to assist vehicular edge servers in handling offloaded tasks. Focusing on delay minimization, Chen et al. [20] introduced a greedy and bat-algorithm-based solution for task offloading in a k-hop vehicular wireless environment.
In another work, Lakhan et al. [21] discussed the increasing usage of E-Transport applications, including E-Bus, E-Taxi, self-autonomous cars, E-Train, and E-Ambulance, and the challenges encountered in assigning workloads to optimal computing nodes in cloudlet-based cloud networks. To address this issue, a multi-layer latency-aware workload assignment strategy (MLAWAS) is introduced to minimize the average response time of applications. The proposed solution involves a mobile cloudlet-based cloud system, and the MLAWAS framework comprises various components, such as an improved genetic algorithm, simulated annealing, and a Q-learning-aware migration technique, to tackle the workload assignment problem in a distributed multi-carrier burst contention (MCBC) network. In a separate study, Sodhro et al. [22] presented a novel ML-driven mobility management approach for effective communication in industrial network-in-box (NIB) applications. They provide a 6G-based intelligent QoE and QoS optimization architecture, a 6G-based NIB framework, and a use-case scenario for 6G-enabled industrial NIB that enables energy-efficient communication. In another work, Zardari et al. [23] highlighted the significance of the healthcare vehicular ad-hoc network (H-VANET) in ITS for the remote healthcare of elderly patients. A V2V platoon-based system architecture is introduced for mobility-aware routing protocols to enhance the QoS performance of vehicular communication for healthcare purposes. They claimed the proposed adaptive mobility-aware routing protocols can deliver acceptable mobility, high reliability, a high packet delivery ratio, a low packet loss ratio, and low end-to-end delay to ITS. Moreover, Sodhro et al. [24] explored the use of artificial-intelligence-driven fog computing (FC) in VECNs to enhance the QoS for passengers. The proposed solution is a reliable and interference-free mobility management algorithm (RIMMA), designed for FC intra-vehicular networks. RIMMA incorporates a reliable and delay-tolerant wireless channel model that enhances QoS, as well as a novel multi-layer fog-driven inter-vehicular framework that improves computation, communication, cooperation, and storage space. The RIMMA algorithm is also self-adaptive, reliable, intelligent, and mobility-aware, which allows it to effectively monitor sporadic contents in highly mobile vehicles. Additionally, in [25], Sodhro et al. discussed a blockchain-driven industrial internet of things (IIoT) system in terms of reliability, convergence, and interoperability.
Besides, [26], [27], and [28] used the concept of an additional server to uplift MEC server capacity and availability by using public vehicles, i.e., public buses. Ye et al. [26] explored a scalable fog computing paradigm that leverages the characteristics of buses to extend the computing capability of RSU cloudlets while minimizing costs and maintaining mobile users' experience. Pham et al. [27] introduced a scalable vehicle-assisted MEC (SVMEC) paradigm, which uses public buses as fog servers to offload computing tasks, resulting in lower delay and cost-effective offloading. In addition, Liu et al. [28] proposed the use of buses as pre-designated servers with a stable observation set to address the highly dynamic and uncertain moving routes of vehicular servers in VEC systems. A fluctuation-aware learning-based computation offloading (FALCO) algorithm based on multi-armed bandit (MAB) theory is introduced. FALCO enables base stations (BSs) to learn the state of a moving server and construct a stable observation set. The proposed algorithm guides the computation offloading decisions of BSs and minimizes the average offloading delays.
The overall system cost is also affected by transmission and computation delays. Focusing on the minimization of transmission and computation costs, Yang et al. [29] provided mobility-aware task offloading mechanisms for independent and cooperative MEC server scenarios. Traditionally, utilizing only MEC resources for task offloading not only increases system costs but also, in certain cases, fails to meet the offloaded application's latency requirements. Pointing out this issue, Deng et al. [30] introduced a link-correlation-theory-based binary search algorithm for a multi-hop relay offloading mechanism in VANETs. Raza et al. [31] also worked on partial task offloading to reduce system costs while satisfying delay constraints. They extended their work in [32] and introduced a 5G-NR-V2X based mobility-aware computational efficiency-based task offloading and resource allocation (MACTER) scheme for delay and energy optimization, following a game-theoretical approach for making offloading decisions and the Lagrange multiplier technique for resource allocation. While considering VEC resources and vehicle mobility, Liu et al. [33] proposed an optimization-based multi-hop task offloading scheme aimed at reducing costs and delays. In another work, Sun et al. [34] proposed a framework that integrates task offloading and service caching. A distributed task offloading algorithm based on non-cooperative game theory determines whether to process locally or offload the task to an edge server, and a 0-1 knapsack algorithm achieves dynamic service caching based on task popularity. If the services necessary for a task are not cached by the edge server, the task is either executed locally or sent to the cloud. Moreover, de Souza et al. [35] proposed a bee-colony-based task offloading in VEC (BTV) algorithm that employs contextual parameters and wireless access to provide task scheduling solutions to different servers in a feasible time. Additionally, the BTV algorithm uses mmWave and 5G technologies to take full advantage of the technological potential available in VEC systems.
Exploiting the bridge between mmWave and vehicular communication technologies, and aiming to maintain higher throughput, Du et al. [9] worked on mmWave-based routing in vehicular communications. A geographical-information and hierarchy-based greedy algorithm is proposed for mmWave-based V2V communications. Global network information helps identify both line-of-sight (LOS) and non-LOS scenarios of vehicles to obtain high-throughput relay links. In another article, Xiong et al. [36] proposed a stochastic network calculus-based task offloading framework utilizing mmWave and dedicated short-range communication (DSRC) for V2V communications and cellular-based V2V and V2I (C-V2X) communications.

B. Motivation & Contributions
The objective of this paper is to enhance computational efficacy while minimizing transmission and processing delays and their associated costs. Although some schemes addressing computation efficiency are available, they differ from ours: the works in [16], [17], [18], [19], and [20] only discussed the optimization of transmission and processing latencies and resource allocation, either by using only VEC computation or through vehicular cloudlets.
In contrast, [29], [31], [32], and [33] cooperatively optimized costs, latencies, and resource allocations. However, none of these works consider NR-V2X RATs for transmission latency minimization. The works in [9], [36], and [37] did consider NR-V2X RATs, but limited them to V2V-only or V2I-only communications. The authors in [26] and [28] took advantage of public buses in task offloading. However, these schemes only address V2I computation sharing and do not consider V2V computation sharing. Additionally, none of these works account for the accumulation of idle, shareable resources of in-vehicle passengers. Moreover, they only consider C-V2X based communications, with no mmWave communications for the V2V or V2I modes. Given the dynamic nature of VANETs, it is challenging to make offloading decisions for tasks with different CPU cycles, sizes, and threshold time requirements. Focusing on the gaps found in the aforementioned literature and considering vehicles' dynamic computation demands and timelines, we propose a 5G-NR-V2X based mobility, contact, and computation load-aware cost-optimized task offloading scheme. An in-depth comparative analysis of related studies and our research, emphasizing distinctions and contributions, is given in Table I. The salient features of our proposed scheme are as follows:
• We propose a hybrid heuristic mobility, contact, and load-aware (MCLA) scheme to minimize transmission and processing delays and associated costs. The offloading decision-making process runs heuristically in a distributed manner on vehicles, while coordinating with RSU controllers. The MCLA algorithm makes optimal offloading decisions by utilizing vehicle mobility, headway, V2V and V2I contacts and their durations, and computation load information extracted from RSU controllers.
• Our approach uses 5G-NR-V2X technology for both V2V and V2I communication, over both sub-6 GHz and mmWave bands. We use 5G's C-V2X and mmWave communication modes in an interleaved manner to improve throughput and overcome transmission delay and cost constraints.
• To reduce computation latency and costs, we use public vehicles (i.e., buses in our case) as high-capacity processing units by aggregating shareable CPU cycles from passengers' mobile devices. This turns buses into mobile processing units that can provide computation sharing to nearby resource-demanding vehicles within their communication range.
• The experimental results demonstrate that the proposed MCLA scheme significantly enhances computational efficiency compared to the baseline approaches while meeting the cost and latency constraints.

C. Paper Organization
The rest of this article is structured as follows. In Section II, we present the system model, which consists of network, communication, and computation models. The problem formulation is discussed in Section III-A, and the MCLA task offloading scheme is discussed in Section III-B. The numerical results and discussions are provided in Section IV. Finally, Section V concludes this article.

II. SYSTEM MODEL
This section presents the system model of our task offloading framework, as depicted in Fig. 1. The 3-layered VECN architecture we follow is presented in Fig. 3-B. First, we describe the vehicular network topology along with the vehicular communication modes, followed by the computation models. Table II lists the key notations.

A. Network Model
We consider the 5G NR-V2X RAT, as it is specially designed to support applications demanding ultra-low latency, ultra-high accuracy and reliability, and high throughput [7]. Moreover, both communication modes of NR C-V2X are utilized, i.e., Mode-1 (between eNB/gNB and vehicles, called the Uu link) and Mode-2 (among vehicles, called the PC5 link), along with mmWave radio communications. We consider a unidirectional urban road with two lanes, each of lane width l_w. MEC-server-equipped gNB-type RSUs R_j = {R_1, R_2, ..., R_N} are installed at perpendicular distance d_{r_j} away from the road, where N is the maximum number of RSUs. The R_j are installed an equal distance d_j apart, each with communication range radius r_j. Each RSU is connected to the core network through the eNB. The set of resource-demanding vehicles is categorized as task offloading vehicles, denoted as v_n = {v_1, v_2, ..., v_M}, where M is the maximum number of offloading vehicles. In our proposal, we consider two types of resourceful vehicles: common vehicles v_i^com having sufficient resources to share, and public buses v_i^pub, cumulatively represented as v_i. The idea behind considering public buses is to exploit passenger devices' idle resources (e.g., storage and computation) against a certain reward or incentive. This reward may take the form of in-travelling services, such as a percentage off travelling tickets, Internet resources, or media access, against the unit resources provided by a passenger [38]. Therefore, in addition to v_i^pub's own resources, the resources shared by passengers' mobile equipment (ME) make v_i^pub a server-on-road serving other vehicles in proximity [18].
RSU controllers gather information from nearby vehicles, including task queues, computation status, available resources, vehicle location, speed, and inter-RSU and vehicle distances. This information is then utilized in the communication mode selection and task offloading decision processes.
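As a concrete illustration, the topology described above can be sketched with simple data structures. The class and field names below are our own (the paper defines symbols, not code), and all numeric values are placeholders:

```python
import math
from dataclasses import dataclass

# Hypothetical sketch of the network-model entities; names and values
# are illustrative assumptions, not taken from the paper.

@dataclass
class RSU:
    x: float      # position along the road (m)
    d_r: float    # perpendicular offset from the road (m)
    r: float      # communication range radius r_j (m)

@dataclass
class Vehicle:
    x: float       # position along the road (m)
    speed: float   # scalar speed (m/s)
    cpu_hz: float  # computation capacity C (cycles/s)

def in_rsu_range(v: Vehicle, rsu: RSU) -> bool:
    # Effective V2I communication requires the distance d_{n,j} <= r_j.
    d_nj = math.hypot(v.x - rsu.x, rsu.d_r)
    return d_nj <= rsu.r

print(in_rsu_range(Vehicle(x=120.0, speed=20.0, cpu_hz=2e9),
                   RSU(x=0.0, d_r=10.0, r=300.0)))  # True
```

The RSU controller's role is then to keep such per-vehicle records fresh so the offloading decision logic can query them.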

B. Communication Model
The communication model is divided into two major categories according to the vehicle communication modes: V2V and V2I. Moreover, each mode is further sub-categorized into PC5/Uu and mmWave links. Therefore, the vehicles and RSUs are considered to have both cellular and mmWave connectivity options.
1) V2V Mode: In the V2V communication mode, vehicles interact with each other either through the PC5 link or the mmWave link. The following sub-sections discuss both PC5 and mmWave based links under the V2V mode.
a) PC5 based V2V mode: The PC5 mode, also called Mode-2 of NR C-V2X, works autonomously, irrespective of cellular connectivity, in the 5.9 GHz band. Whenever a vehicle comes into the communication range of another vehicle, both vehicles start communicating directly under Mode-2 in a fully decentralized way. Following Shannon's capacity, the transmission rate R_{n,i} between vehicle v_n and vehicle v_i can be calculated as:

R_{n,i} = B log2(1 + (P_n |h|^2) / (N_0 L_n)),    (1)

where B is the bandwidth among vehicles, P_n is the transmission power of the vehicle's OBU, h is the complex Gaussian channel fading coefficient from v_n to v_i, N_0 is the white Gaussian noise power, and L_n is the V2V path loss under PC5 links, which can be calculated as in [39]. h is assumed to follow the complex normal distribution CN(0, 1). It is a random variable that captures the stochastic behavior of the wireless channel, encompassing path loss and fading. The complex normal distribution characterizes h with a zero mean and unit variance, as specified in [40] and [41].
L_n = 63.3 + 17.7 log10(d_{n,i}).    (2)

Here d_{n,i} is the inter-vehicle distance (in meters). Vehicles move at a constant speed, but due to different relative speeds, d_{n,i} varies with time, and so does v_n's stay time t_{n,i} under v_i's communication range. The time-varying inter-vehicle distance d_{n,i}(t) can be calculated as:

d_{n,i}(t) = | d_{n,i}(0) - ||μ_n - μ_i|| t |,  d_{n,i}(t) ≤ r_n,    (3)

where r_n is the maximum communication range (in meters) between v_n and v_i.
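The PC5 rate and path-loss relations above can be sketched numerically. This sketch assumes the Shannon form with the dB path loss converted to linear scale; the parameter values are placeholders rather than the paper's simulation settings:

```python
import math

def pathloss_pc5_db(d_m: float) -> float:
    # V2V PC5 path loss: L_n = 63.3 + 17.7 * log10(d)
    return 63.3 + 17.7 * math.log10(d_m)

def pc5_rate_bps(bw_hz: float, p_tx_w: float, h_gain: float,
                 noise_w: float, d_m: float) -> float:
    # Shannon rate R = B log2(1 + P|h|^2 / (N0 * L)), with L in linear scale
    loss_lin = 10.0 ** (pathloss_pc5_db(d_m) / 10.0)
    snr = p_tx_w * h_gain / (noise_w * loss_lin)
    return bw_hz * math.log2(1.0 + snr)
```

As the inter-vehicle distance grows, the rate falls, which is what makes R_{n,i} time-varying and motivates averaging it over the stay time.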
The sign of the relative-speed term depends on the relative positions of v_n and v_i. The stay time, in terms of time slots, of v_n under v_i can be calculated as:

t_{n,i} = (r_n - d_{n,i}(0)) / ||μ_n - μ_i||,    (4)

where μ_n and μ_i are the vector speeds of v_n and v_i. Here, we assume that vehicles maintain a constant speed when determining the contact duration with other communicating nodes, whether they are vehicles or RSUs. Furthermore, if two vehicles have the same vector speeds, they will remain in contact indefinitely unless their speeds change. Due to the change in distance between vehicles, R_{n,i} also varies, and therefore it can be expressed as R_{n,i}(t). The average transmission rate between vehicles v_n and v_i can be calculated as:

R̄_{n,i} = (1 / t_{n,i}) Σ_{t=1}^{t_{n,i}} R_{n,i}(t).    (5)

b) mmWave based V2V mode: The effective communication range of the V2V-PC5 mode is larger than the mmWave range. Therefore, before the start of mmWave communication, vehicles v_n and v_i must already be in contact under the PC5 mode, and they have to align their antennas accordingly before starting the transmission of data/tasks. The control channel of the PC5 mode can be used for mmWave antenna and beam alignment. RTS-like and CTS-like beacon messages are delivered in this control channel and contain the vehicle's kinetic information [42]. Using this kinetic information, v_n transmits an RTS-like beacon to vehicle v_i in the PC5 mode to initiate beam alignment. In sequence, v_i receives the RTS-like beacon and responds to vehicle v_n with a CTS-like beacon transmitted in the PC5 mode (if and only if both vehicles satisfy the communication range condition). Vehicle v_n receives the CTS-like beacon and starts communication over the mmWave link [43]; the conceptual scenario of link establishment is shown in Fig. 2.
Beam alignment is part of beamforming; antenna arrays exploit the benefits of directivity and interference isolation. In the beamforming process, sector alignment is done in the first phase, and then beam alignment is done within the selected sector for fine granularity through t_p pilot transmissions [44]. W^sl_rx and W^sl_tx are the sector-level, and W^bl_rx and W^bl_tx the beam-level, beam widths of the receiver and transmitter, respectively. Sector widths and beam widths are set to 45° and 15°, respectively. The beam alignment latency τ^a_{n,i} between vehicles v_n and v_i is calculated using (6), taking one t_p interval equal to 0.2 ms [45]. τ^t_{n,i} is the mmWave transmission time and t_t is the total transmission time, where t_t = τ^a_{n,i} + τ^t_{n,i}, and τ^a_{n,i} is derived as:

τ^a_{n,i} = (W^sl_rx W^sl_tx) / (W^bl_rx W^bl_tx) · t_p.    (6)

The antenna gain G_{rx,tx}(θ) of a generic mmWave receiver (Rx) and transmitter (Tx) pair after beamforming can be formulated as:

G_{rx,tx}(θ) = G^main_{rx,tx} if |θ| ≤ w°/2;  G^side_{rx,tx} otherwise,    (7)

where θ is the angle off the boresight direction, G^main_{rx,tx} and G^side_{rx,tx} are the array gains of the main and side lobes, respectively, and w° is the main lobe's beam width.
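Eq. (6) reduces to a simple ratio of sector-level to beam-level widths; a minimal sketch with the settings stated above (45° sectors, 15° beams, t_p = 0.2 ms) follows:

```python
def beam_alignment_latency_ms(w_sl_rx: float = 45.0, w_sl_tx: float = 45.0,
                              w_bl_rx: float = 15.0, w_bl_tx: float = 15.0,
                              t_p_ms: float = 0.2) -> float:
    # Eq. (6): number of beam-pair probes within the aligned sectors,
    # each costing one pilot transmission of t_p
    return (w_sl_rx * w_sl_tx) / (w_bl_rx * w_bl_tx) * t_p_ms

print(beam_alignment_latency_ms())  # 1.8 (ms): 9 probes of 0.2 ms each
```

This 1.8 ms alignment overhead is what the scheme must amortize before any mmWave data transfer pays off.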
In this work, we consider that each vehicle is equipped with two mmWave antennas (one for horizontal polarization and the other for vertical polarization) operating at 28 GHz, as used in [46]. mmWave communication performs best with line-of-sight (LOS) Rx and Tx pair alignments. It also works in non-line-of-sight (NLOS) Rx and Tx pair alignments under a specified range and conditions, but it does not perform as well as in LOS alignments due to highly attenuated signals. Keeping this in mind and following the θ ≤ w°/2 beam alignment, the signal-to-noise ratio (SNR) of a typical mmWave based V2V link is calculated as [45]:

SNR_{n,i} = (P^mm_n G_{rx,tx}(θ) ρ_α (d^mm_{n,i})^{-ς}) / (N^mm_0 B^mm),    (8)

where P^mm_n is the OBU's transmit power for mmWave, N^mm_0 is the noise power in the mmWave link, B^mm is the mmWave bandwidth, ς is the path loss exponent, and d^mm_{n,i} is the distance between vehicles v_n and v_i under the vehicle's mmWave coverage. The shadow fading effect, denoted by ρ_α, is modeled as a zero-mean log-normal random variable with variance σ^2. In the LOS case, σ is set to 3 dB, while for the NLOS case it is set to 4 dB, according to [47]. To calculate the time-varying distance between v_n and v_i, we can follow (3). The effective data exchange between v_n and v_i starts after the beam alignment process. Therefore, the data rate R^mm_{n,i} on such a mmWave V2V link is calculated as:

R^mm_{n,i} = B^mm log2(1 + SNR_{n,i}).    (9)

The change in the time-varying distance d^mm_{n,i}(t) not only affects the vehicle's stay time under other vehicles, but also affects the SNR. If SNR_{n,i} is affected, the transmission rate is affected as well; therefore, the average transmission rate R̄^mm_{n,i} based on the time-varying d^mm_{n,i}(t) and SNR_{n,i}(t) can be calculated by following (5), putting R_{n,i} = R^mm_{n,i} and t_{n,i} = t^mm_{n,i}.

2) V2I Mode: In the V2I mode, we leverage Mode-1's Uu links of 5G NR-V2X and the mmWave links. The following sub-sections discuss both Uu and mmWave based V2I communication modes.
a) Uu based V2I mode: The Uu links in Mode-3 of C-V2X and in Mode-1 of NR-V2X are the same for V2I communications [7]. Since the vehicles communicate with RSUs in the V2I mode, the stay time t_{n,j} of vehicles under the corresponding RSU R_j can be computed as:

t_{n,j} = d_{n,j} / ||μ_n||,    (10)

where d_{n,j} is the distance between the vehicle and the RSU, which is calculated by (11), and the rate between vehicles and R_j is calculated by following (12):

d_{n,j} = sqrt((x_n - x_j)^2 + (y_n - y_j)^2),    (11)

where (x_n, y_n) and (x_j, y_j) are the coordinates of the vehicle v_n and RSU R_j, respectively. The constraint d_{n,j} ≤ r_j must hold for effective communication.
R_{n,j} = B_j log2(1 + (P_n |h|^2 d_{n,j}^{-ξ}) / (N_0 L_j)),    (12)

where B_j is the bandwidth between vehicles and R_j, ξ is the path loss exponent, set equal to 2, and L_j is the path loss for V2I under Uu links, which can be calculated as in [48]. In this work, we consider that the vehicles on the road move at a constant speed; consequently, d_{n,j} varies over time, and so does t_{n,j}. Therefore, the time-varying stay can be calculated as d_{n,j}(t)/||μ_n||. As the stay time changes over time, the rate can also be affected. The average rate can then be calculated as:

R̄_{n,j} = (1 / t_{n,j}) Σ_{t=1}^{t_{n,j}} R_{n,j}(t).    (14)

b) mmWave based V2I mode: mmWave communication for the V2I mode is established in the same way as for the mmWave V2V mode, but with one communicating party being the RSU instead of another vehicle. The beam alignment process starts by exchanging RTS-like and CTS-like messages, after which the mmWave communication starts, as shown in Fig. 2. The beam alignment latency between vehicle v_n and RSU R_j is τ^a_{n,j} = (W^sl_rx W^sl_tx)/(W^bl_rx W^bl_tx) · t_p, as defined in (6). Furthermore, the same antenna gain model expressed in (7) is also used here in the V2I mode. The vehicle's stay time under a particular R_j's mmWave communication range r^mm_j depends on the vehicle's speed and the distance between them. The distance d^mm_{n,j} between vehicle v_n and R_j can be calculated by following (15).
The constraint d^mm_{n,j} ≤ r^mm_j must hold for effective communication. Moreover, the stay time t^mm_{n,j}, in terms of time slots, and the link rate R^mm_{n,j} of vehicles under R_j's mmWave communications can be calculated by following (16) and (17), respectively:

t^mm_{n,j} = d^mm_{n,j}(t) / ||μ_n||,    (16)

R^mm_{n,j} = B^mm_j log2(1 + SNR_{n,j}),    (17)
where B^mm_j is the mmWave bandwidth in the V2I mode and SNR_{n,j} is the SNR between vehicles and the RSUs under mmWave links. SNR_{n,j} is calculated as:

SNR_{n,j} = (P^mm_n G_{rx,tx}(θ) ρ_α (d^mm_{n,j})^{-ς}) / (N^mm_0 B^mm_j).

The shadow fading is set to 5.0 dB for LOS and 7.6 dB for NLOS scenarios [49] at 28 GHz. Moreover, the average mmWave based V2I link transmission rate R̄^mm_{n,j} is calculated by using R^mm_{n,j} and t^mm_{n,j} in (14).
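The mmWave link budget shared by the V2V and V2I modes (directional gain, distance raised to the path-loss exponent, then the Shannon rate) can be sketched as follows. The shadowing draw is fixed at 1.0 for determinism, and all parameter values are illustrative assumptions:

```python
import math

def mmwave_snr(p_tx_w: float, gain_lin: float, d_m: float,
               exponent: float, noise_w: float,
               shadow_lin: float = 1.0) -> float:
    # SNR proportional to P * G(theta) * rho * d^(-exponent), over the
    # mmWave noise power; shadow_lin would normally be a log-normal draw.
    return p_tx_w * gain_lin * shadow_lin * d_m ** (-exponent) / noise_w

def mmwave_rate_bps(bw_hz: float, snr: float) -> float:
    return bw_hz * math.log2(1.0 + snr)

# Rate degrades quickly with distance for a path-loss exponent of 2.5:
near = mmwave_rate_bps(400e6, mmwave_snr(0.5, 100.0, 20.0, 2.5, 1e-9))
far = mmwave_rate_bps(400e6, mmwave_snr(0.5, 100.0, 80.0, 2.5, 1e-9))
```

This steep distance sensitivity is why the scheme falls back to PC5/Uu links once a vehicle leaves the short mmWave coverage.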

C. Computation Model
Each offloading vehicle v_n generates a task tk_m characterized by its input data size s^in_n, output data size s^out_n, required CPU cycles c_n, and a deadline T^max_m, which is the maximum threshold time to execute tk_m. The decision variable d_m denotes where tk_m is executed.
For better understanding, the computation model is further classified according to the offloading decisions. In the following subsections, we discuss local computing and nearby resourceful vehicle computing, followed by MEC computing.
1) Local Computing: Vehicles can perform local computing or offload their tasks to other computing paradigms. Since every modern vehicle has some processing unit, let C_n be the computation capacity of vehicle v_n's CPU (measured in cycles per second). When vehicle v_n decides to perform local computation, i.e., d_m = 0, the local execution time for task tk_m is T_n^e = c_n/C_n. Each vehicle maintains a queue q_n for task scheduling and resource management. The queue includes both local tasks (tk_l) and tasks offloaded from other vehicles (tk_m). Tasks in the queue experience both processing latency and waiting time. To estimate these times accurately, we adopt the widely used M/M/1-FCFS queue processing model, which predicts task processing and waiting times in the task offloading process [50]. The queue and processing latency for the last task in q_n at time slot t is given in (18), where c_n^{tk_l} is the sum of CPU cycles required to execute the other local tasks in q_n at time slot t. Since the vehicle decides to compute locally, the total local latency equals the total queue latency at time slot t, i.e., T_n = T_n^q. Given that the task executes locally, it depends only on vehicle v_n's computation capacity; the task transfer latency can therefore be neglected [51]. However, each task execution incurs a cost; let C_n be the per-second unit cost of the execution time at vehicle v_n. The total local execution cost C_n is then the product of this unit cost and the execution time.
2) Nearby Resourceful Vehicle Computing: When d_m = 1, vehicle v_n offloads task tk_m to a nearby resourceful vehicle v_i's OBU. The computation latency for task tk_m at vehicle v_i is T_i^e = c_n/C_i. Since vehicle v_i maintains a service queue in the same way as v_n, the queue and processing latency of task tk_m at vehicle v_i at time slot t can be calculated similarly to (18), as in (21). The total task offloading latency in normal conditions is the sum of the transmission, antenna alignment, queue, and execution latencies, i.e., T_i = T_i^up + T_i^q + T_i^dn + τ_{n,i}^a at time slot t. Here, the normal condition refers to the vehicle receiving the results while staying within the communication range of the exact vehicle to which it uploaded the task. Given that, the condition in (22) must be satisfied in order to avail the nearby vehicular computation.
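The M/M/1-FCFS-style queue estimate used in both the local and V2V cases can be sketched as follows. This is an illustrative sketch, not the paper's implementation; the function and variable names are our own assumptions.

```python
# Illustrative sketch (assumption: FCFS service of queued CPU cycles),
# estimating the queue + processing latency of a task and the resulting
# local execution cost, per the definitions in this subsection.

def queue_latency(c_m: float, queued_cycles: float, capacity: float) -> float:
    """Latency seen by the last task in the queue: the CPU cycles of
    already-enqueued tasks plus the task's own cycles, served at
    `capacity` cycles per second (T_n = T_n^q for local computing)."""
    return (queued_cycles + c_m) / capacity

def local_cost(exec_time: float, unit_cost_per_sec: float) -> float:
    """Total local execution cost: per-second unit cost times time."""
    return exec_time * unit_cost_per_sec

# Example: a 150 M-cycle task behind 300 M queued cycles on a 3 GHz OBU.
T_n = queue_latency(150e6, 300e6, 3e9)  # 0.15 s
C_total = local_cost(T_n, 5)            # 0.75 cost units
```

The same `queue_latency` form applies at a candidate vehicle v_i by substituting its capacity C_i and queued cycles.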
Moreover, the total cost of task offloading C_i in the V2V scenario includes the per-byte transmission cost C_i^t and the per-time-unit execution cost C_i^e, and can be expressed as the sum of these two components. 3) MEC Computing: Vehicle v_n avails the MEC offloading, i.e., d_m = −1, when there is no vehicle v_i in the proximity range or the condition in (22) is not satisfied. The MEC offloading follows the same spirit as Section II-C.2; the up-link T_j^up and down-link T_j^dn transmission latencies are formulated as T_j^dn = s_n^out / R_{n,j} under PC5 range, and T_j^dn = s_n^out / R_{n,j}^mm under mmWave range. The queue at the MEC includes not only the tk_l and tk_m tasks but also tasks tk_j received from adjacent RSUs on account of cooperative load balancing. The execution time of task tk_m at the MEC is calculated with C_j as the computation capacity of the MEC. Considering M/M/1-FCFS as the queue processing model, the queue and processing latencies that tk_m faces at time slot t at the MEC can be modeled by following (21). Here, c_j^{tk_l} is the sum of CPU cycles required by tasks tk_l, and c_j^{tk_j} is the sum of CPU cycles required by tk_j. In normal conditions, the total task offloading latency T_j is the sum of the uploading and downloading transmission latencies and the antenna alignment latency, in addition to the execution and queuing latencies, i.e., T_j = T_j^up + T_j^q + T_j^dn + τ_{n,j}^a. Here, the normal condition means that the offloading vehicle receives the results back from the exact RSU to which it uploaded the task. A condition similar to (22) also applies in the MEC scenario. The total cost of offloading a task to the MEC includes the transmission and execution costs, where C_j^t and C_j^e are the per-unit transmission and execution costs in the MEC scenario. Given that, the total system cost of all tasks tk_m from every vehicle v_n can be formulated as the sum of these per-task costs over all vehicles.
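The MEC (V2I) latency and cost terms above can be sketched numerically as follows. This is a hedged sketch of T_j = T_j^up + T_j^q + T_j^dn + τ^a and the transmission-plus-execution cost; the function and parameter names are illustrative assumptions, not the paper's notation.

```python
# Hedged sketch of the MEC (V2I) offloading latency and cost terms;
# all rates are in units per second, sizes in bytes, cycles in Hz.

def mec_latency(s_in, s_out, rate_up, rate_dn,
                c_m, queued_cycles, C_j, tau_align=0.0):
    t_up = s_in / rate_up              # upload input data to RSU R_j
    t_q = (queued_cycles + c_m) / C_j  # FCFS queue + execution at the MEC
    t_dn = s_out / rate_dn             # download the result from R_j
    return t_up + t_q + t_dn + tau_align

def mec_cost(s_in, s_out, exec_time, cost_per_byte, cost_per_sec):
    # Total cost = transmission cost (per byte) + execution cost (per s).
    return (s_in + s_out) * cost_per_byte + exec_time * cost_per_sec
```

For example, a 1 MB input with a 100 kB result over 10 MB/s links, 200 M total cycles on a 10 GHz MEC CPU, and a 10 ms alignment delay gives a total latency of 0.14 s.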

III. PROBLEM FORMULATION AND THE SOLUTION
In this section, we formulate the cost, transmission, and processing delay minimization problem and present our solution. We employ 5G NR-V2X cellular and mmWave for both the V2V and V2I communication modes to minimize transmission delays. Moreover, we introduce a new way to use public vehicles as high-capacity processing vehicles to reduce processing delays and avoid RSU overloading.

A. Problem Formulation
A vehicle may process a task locally or offload it to a nearby vehicle or an RSU; each option incurs a different processing delay owing to its processing capacity. Moreover, the offloading process includes the transmission of tasks and their metadata; consequently, using the PC5, Uu, or mmWave RAT brings different transmission delays. Observing the whole system, delay is not the only factor to be optimized: the system also incurs transmission and processing costs. Therefore, selecting a processing node under the maximum threshold delay at the lowest cost is another problem.
Furthermore, the MEC servers coupled with RSUs do not only perform offloading tasks but also ITS operations; therefore, the MEC server is always busy. In this case, MEC overloading is possible, and the server will not provide the necessary processing resources. Aiming to minimize the overall transmission and processing delays and costs, under constraints of maximum permissible delays and computation capacities, we formulate the problem as a mixed-integer nonlinear programming problem P1. Constraint C1 bounds the total processing and transmission latency of a task: the latency, whether the task is processed locally, at a vehicle in proximity, or at the MEC at time slot t, must be less than or equal to the maximum tolerable latency indicated by the routine of the same task. Constraint C2 indicates that a task and its belonging data must be entirely transmitted before the vehicle runs out of the communication range of the other vehicle. Constraint C3 indicates the same case as C2 but for an RSU's communication range. Constraint C4 requires the total task processing load at an RSU to be less than the computation capacity of the linked MEC server. Constraint C5 is coupled with C4 and states that the total load tk_j^R is the sum of the local tasks, the tasks offloaded from vehicles, and the tasks offloaded from adjacent cooperating RSUs. Finally, Constraint C6 defines the offloading decision set.
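The per-task constraints above can be illustrated with a simple admissibility check. This is an illustrative sketch mirroring constraints C1-C4 and C6 of problem P1; the scalar inputs are simplified stand-ins for the paper's per-slot quantities.

```python
# Illustrative feasibility check for an offloading decision d_m in
# {0 (local), 1 (V2V), -1 (MEC)}; all names are our own assumptions.

def feasible(d_m, T_total, t_max, contact_time, mec_load, mec_capacity):
    if d_m not in (0, 1, -1):                 # C6: valid decision set
        return False
    if T_total > t_max:                       # C1: latency within budget
        return False
    if d_m in (1, -1) and contact_time < T_total:
        return False                          # C2/C3: stay within range
    if d_m == -1 and mec_load > mec_capacity:
        return False                          # C4: MEC not overloaded
    return True
```

A solver or heuristic would evaluate such a predicate for every candidate node before comparing costs.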
Problem P1 belongs to the class of non-convex problems because it has non-convex constraints and involves a set of discrete resources, such as proximity vehicles and RSUs. Furthermore, the objective function minimizes a sum of costs, which classifies the problem as mixed-integer programming, known to be NP-hard. As a result, it is not feasible to find the optimal solution using a polynomial-time algorithm. Therefore, we propose a heuristic offloading scheme that balances performance and computation complexity and offers a near-optimal solution. The proposed scheme has a time complexity of O(N^2 log N), as discussed in Section III-C, and is therefore computationally feasible for large-scale problems with a significant number of resources.

B. Proposed Mobility, Contact, and Load Aware (MCLA) Task Offloading Scheme
A well-timed offloading decision strongly impacts the VECN's computational efficiency. In this subsection, we propose the MCLA scheme to maximize the VECN's computational efficiency. The offloading decision process of MCLA comprises a selection between the available computation resources and dynamic RAT selection while satisfying the maximum latency and cost constraints. The MCLA scheme is mainly divided into three parts: the main MCLA algorithm (Algorithm 3), the resourceful vehicle selection algorithm (Algorithm 1), and the RSU/MEC-server selection algorithm (Algorithm 2). Fig. 3 also depicts the task handling mechanism for both the V2V and V2I modes. In the MCLA scheme, we suppose that vehicles on the road generate tk_m tasks in time slots t. These tasks need to be processed either by the vehicle itself or offloaded when the vehicle runs out of computation resources.
Whenever an offloading request is generated, tk_m's tuple is forwarded to the RSU controllers. Meanwhile, two parallel processes are initiated: one locates resource-rich v_i's in the proximity of vehicle v_n in a distributed fashion, while the other examines the MEC server resources residing at the serving R_j and at the adjacent RSUs of R_j.
Algorithm 3, in its first phase, searches for resource-rich v_i's in the proximity of vehicle v_n. The selection of these resourceful v_i's is made through Algorithm 1, specially constructed for V2V communication and computation. We suppose that the RSU controller forwards the kinetics of one-hop-distant v_i^com and v_i^pub vehicles to Algorithm 1, if they exist.

Algorithm 2 RSU Selection Algorithm
Input: RSU set r with their locations, tk_m's tuple.
Output: t_n^max-complying RSU/MEC-server set r.
1: initialization: calculate d_{n,j} and d_{n,j}^mm for vehicle v_n using (11) and (15)
6: if r_j^mm < d_{n,j} ≤ r_j then
7: calculate t_{n,j} for v_n using (10) and (11)
8: calculate R_{n,j} for v_n using (12) and (13)
... move to r_{k+1} of r
19: end
20: return r
The distance between vehicle v_n and the proximity vehicles v_i is calculated using their initial locations, and then their communication mode (mmWave or PC5) is selected.
Whenever the distance condition r_n^mm ≤ (distance between vehicles v_n and v_i) ≤ r_n holds, the PC5-based V2V communication mode is selected between v_n and v_i. Otherwise, if 1 ≤ (distance between vehicles v_n and v_i) ≤ r_n^mm holds, the mmWave-based V2V communication mode is selected. Once the V2V communication mode is selected, v_n's stay time under vehicle v_i, the average transmission rate, task tk_m's upload and processing time, and the queue latency are calculated. Then, if the condition (T_i ≤ t_n^max) ∧ (t_{n,i} ≥ T_i) is satisfied, the vehicle is said to be a qualified resourceful vehicle. A set v of these qualified one-hop-distant resourceful vehicles is maintained. Finally, a joint transmission- and computation-cost-effective vehicle is selected by (32), and thus the offloading decision becomes d_m = 1.
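The qualification and selection step just described can be sketched as follows. This is a minimal sketch of the resourceful-vehicle selection (cf. Algorithm 1); the candidate record fields and the scalar cost model are our own assumptions.

```python
# Minimal sketch of resourceful-vehicle qualification and selection:
# keep candidates whose total offloading latency T_i fits the deadline
# and whose stay time covers T_i, then pick the cheapest (cf. (32)).

def select_vehicle(candidates, t_max):
    """candidates: dicts with estimated latency 'T_i', 'stay_time', and
    joint transmission + computation 'cost'. Returns the cheapest
    qualified vehicle, or None when no vehicle qualifies."""
    qualified = [v for v in candidates
                 if v["T_i"] <= t_max and v["stay_time"] >= v["T_i"]]
    if not qualified:
        return None   # fall through to MEC offloading (d_m = -1)
    return min(qualified, key=lambda v: v["cost"])

# Example: the second candidate is cheaper but leaves range too soon.
cands = [{"id": "v1", "T_i": 0.2, "stay_time": 0.5, "cost": 4.0},
         {"id": "v2", "T_i": 0.2, "stay_time": 0.1, "cost": 2.0}]
best = select_vehicle(cands, t_max=0.3)   # selects "v1"
```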
The MCLA algorithm examines the condition in (33); if it is satisfied, V2V offloading is started. Whenever the condition in (33) becomes false, the MCLA algorithm calls Algorithm 2 to fetch the sorted set r of readily available resourceful RSUs/MEC-servers complying with the maximum remaining t_n^max. In the initial state of the r-set generation, the distance between v_n and the serving R_j is calculated after checking the computation time under the t_n^max time constraint.
selectRsu Function Definition: Function for the Selection of an RSU Under the Maximum Task Threshold Time Constraint at Time Slot t
1: Function selectRsu(t_up^{trans,mm}, t_up^{trans,Uu}, r_k)
2: calculate T_j^q using (26) and (27); compute C_j of r_i using (28) and (29)
... get C_{j+1} of r_{k+1} and calculate T_{j+1}^q using (26) and (27); compute C_j of r_{k+1} using (28) and (29)
Once the distance is calculated, the RAT for the V2I communication mode is selected dynamically: if the condition r_j^mm ≤ (distance between v_n and R_j) ≤ r_j becomes true, the Uu-link-based V2I RAT is selected. Otherwise, if 1 ≤ d_{n,j}^mm ≤ r_j^mm becomes true, the mmWave-based V2I RAT is selected for inter-vehicle and RSU communications. After this segregation, v_n's stay time under R_j, their average transmission rate, tk_m's upload and processing time, and the MEC queue latency are calculated. Then, if (T_j ≤ t_n^max) ∧ ((t_{n,j} + t_{n,j}^mm) ≥ T_j) is satisfied, the transmission and computation cost is calculated for R_j and added to the set r. If R_j does not satisfy this condition, R_{j+1} in the headway of vehicle v_n is selected, tested under the t_n^max time constraint, and added to set r if it complies.
The second half of the RSU/MEC-server selection function (Lines 11 to 20 of the selectRsu function) also serves as RSU/MEC-server load balancing. The function selectRsu returns R_j to Algorithm 2, where set r is maintained. The MCLA algorithm fetches a computation- and transmission-cost-effective r_i from set r by following (34). This fetching of r_i starts only if the condition in (35) becomes true, whereupon the task offloading status for task tk_m becomes d_m = −1.
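The RSU-set construction and cost-effective fetch can be sketched as follows. This is a hedged sketch of the RSU/MEC-server selection (cf. Algorithm 2 and (34)); the RSU record fields are illustrative assumptions.

```python
# Hedged sketch: build the complying RSU set r, sorted by joint
# transmission + computation cost, and take its head (cf. (34)).

def build_rsu_set(rsus, t_max):
    """Keep RSUs whose total latency T_j meets the deadline and whose
    combined PC5 + mmWave contact time covers T_j."""
    complying = [R for R in rsus
                 if R["T_j"] <= t_max and R["contact_time"] >= R["T_j"]]
    return sorted(complying, key=lambda R: R["cost"])

def select_rsu(rsus, t_max):
    r = build_rsu_set(rsus, t_max)
    return r[0] if r else None   # None: queue tk_m for time slot t + 1
```

Returning `None` here corresponds to the "insufficient resources" branch, where the task waits for the next time slot.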

Algorithm 3
The Distributed MCLA Algorithm. Input: task tk_m's tuple, v_n. Output: offloading decision d_m. When no qualifying node is found, the RSU controller responds to v_n that there are currently insufficient resources available to compute tk_m. The task tk_m is then added to the waiting queue, to be processed in time slot t + 1 if v_n allows it. Results dissemination starts once this task offloading and processing phase is over. In V2V offloading, vehicle v_n receives the results directly from vehicle v_i if the condition at Line 25 of the MCLA algorithm becomes true; otherwise, the output is transferred to the MEC server, and vehicle v_n obtains it from the serving R_j or the RSU in its headway. Whenever V2I offloading is availed, the results are forwarded directly to vehicle v_n if the condition at Line 30 of MCLA is satisfied; otherwise, it gets the results from the upcoming RSU in its headway.

C. Computational Complexity
The MCLA algorithm initially attempts local processing with a constant time complexity of O(1). If local execution is not possible, suitable nearby vehicles are selected, involving a loop over the vehicle set of size N with an overall complexity of O(N). RSU selection, with M total RSUs, employs a search within a sorted set, resulting in O(log M) complexity per iteration and an overall complexity of O(M log M). The distributed MCLA algorithm processes each task in a loop with constant-time calculations and includes a call to Algorithm 1 with complexity O(N). The overall complexity is the product of the individual steps' complexities, i.e., O(N · M log M). Considering the worst-case scenario and assuming M = N, the simplified complexity is O(N^2 log N), reflecting a quadratic-logarithmic complexity.

IV. EXPERIMENTAL SETUP, FINDINGS, AND DISCUSSION
In this section, we present our simulation setup along with the numerical results and discussion.

A. Experimental Setup
To simulate our proposed system, we carefully selected a specific road section. The road is 2 kilometers long and has two lanes, each 4 meters wide. To ensure efficient and effective communication throughout the system, we placed six RSUs equipped with MEC servers at even intervals along the road. The cellular communication ranges of the v_n, v_i^com, v_i^pub, and RSU entities are set to 100 m, 100 m, 120 m, and 200 m, and their mmWave ranges to 70 m, 70 m, 90 m, and 150 m, respectively [32], [47]. Fig. 3 gives a comprehensive overview of the MCLA scheme, showcasing both the V2V and V2I offloading mechanisms: the flow chart in Fig. 3A illustrates how the MCLA scheme handles task offloading in both V2V and V2I scenarios, and Fig. 3B presents the three-layered architectural structure of the VECN that underlies the MCLA scheme. Furthermore,
we have established specific data size and CPU cycle requirements for tk_m, with data sizes ranging from 50 to 100 Mb and required CPU cycles ranging from 10 to 200 M-cycles. The maximum allowable processing time t_n^max is between 200 and 500 ms, and C_n is set to 5 units. Unless otherwise noted, all other simulation parameters are as listed in Table III.

B. Baseline Approaches and Evaluation Metrics
The proposed MCLA scheme is simulated in two variations, MCLA-I and MCLA-II, which are extensively evaluated in different contexts against the following baseline schemes, keeping the base simulation parameter values the same for all schemes unless otherwise mentioned.
• MCLA-I: It is the exact proposed scheme.
• MCLA-II: It is the same as MCLA-I, except that it does not include v_i^pub vehicles.
• MACTER: Mobility-aware computational efficiency-based task offloading and resource allocation (MACTER) is a VEC-based task offloading scheme in which vehicles process a task locally or offload it to the VEC computing network. This scheme optimizes the offloading decision and uses the 5G-NR-V2X RAT for vehicular communications [32].
• MAP: The mobility-aware partial (MAP) task offloading scheme is an LTE-C-V2X-based computation offloading and resource allocation scheme that considers partial task offloading. This scheme optimizes the offloading decision and includes local, V2V, and V2I computation offloading scenarios [31].
• MAP-I: It is the same MAP scheme, slightly modified by changing its partial offloading mechanism to binary offloading.
• Conventional: It is identical to the MEC offloading scheme, where vehicles process their tasks either locally or offload them to the MEC server.

C. Findings and Discussion
The turnover ratio of offloaded tasks, the costs, and the latencies are the key metrics for evaluating a task offloading scheme in VECNs. Focusing on the joint optimization of transmission and computation delays and costs, we use the average delay, average total cost, and number of completed tasks as the performance indicators in our experiments.
We conducted multiple assessments to validate our proposed MCLA scheme. Figs. 4 and 5 illustrate the comparative results and the impact of task size on system performance relative to the other baseline algorithms. We collected the results by maintaining uniform simulation parameters for all schemes, including our proposed MCLA algorithms and the other benchmark schemes. Fig. 4 exhibits the correlation between task size and average system cost (transmission + processing), and the cost convergence of the task offloading schemes; Fig. 5 shows the relationship between task size and the average total delay (transmission + processing) per task, as well as the delay convergence of the task offloading schemes. Our proposed MCLA algorithms provide a much more cost-effective offloading solution than the other baseline schemes. As expected, the average system cost increases as the task size grows. For very small task sizes, all schemes are relatively similar until the task size reaches approximately 80 MB.
In cases where the task size exceeded 80 MB, the conventional scheme exhibited the poorest performance, ultimately converging at the highest cost. The MAP-I scheme showed improved performance compared to the conventional scheme, although it was not superior to all the other schemes. The MAP scheme converged at a better position than the conventional and MAP-I schemes, but not as favorably as the MCLA and MACTER schemes. The MACTER scheme remained a strong competitor to both MCLA schemes. However, as the task size increased, the MCLA schemes gradually converged at lower costs, while the other schemes converged at considerably higher cost levels. Furthermore, the MCLA-I and MCLA-II schemes achieve average costs of 43.55 and 45.71, respectively. In comparison, the MACTER, MAP, MAP-I, and conventional schemes attain average costs of 46.52, 53.03, 56.40, and 62.09, respectively. These results indicate that our proposed MCLA schemes offer a much more cost-effective solution for offloading tasks than the other benchmark algorithms.
Fig. 5 shows the relationship between task size and the average execution delays, including transmission delays. For small task sizes, all schemes exhibit nearly identical average delays, which increase as the task size grows. The average delays of the conventional, MAP-I, and MAP schemes follow a similar convergence trend, resulting in much higher delays compared to the MACTER and MCLA schemes. The MACTER scheme performs similarly to MCLA-II, while both the heuristic and conventional schemes exhibit significantly poorer performance. On the other hand, our proposed MCLA schemes converge at a much lower delay than the other baseline schemes. Specifically, the MCLA-I and MCLA-II schemes keep the overall delay at 204.70 ms and 222.17 ms, respectively, while the MACTER, MAP, MAP-I, and conventional schemes incur average delays of 228.71 ms, 371.61 ms, 381.53 ms, and 387.14 ms. These results suggest that the proposed MCLA schemes provide a more efficient solution in terms of execution and transmission delays.
There are several reasons for the significant performance gap between our proposed MCLA schemes and the other baseline algorithms. One reason is that the MCLA schemes use a dynamic approach to collect and share vehicular kinetic information from on-road vehicles through the RSU controllers in the V2I mode, as well as through the PC5 direct V2V mode. This information sharing occurs at a high frequency of 10 Hz, using maximum-sized 1,400-byte beaconing messages [7]. The MEC servers are connected to each other either through an optical fiber link or through the mmWave RAT, making this information sharing process negligible in terms of time. However, this continuously running process can become an overhead for other schemes that do not consider mmWave communications.
Furthermore, the proposed scheme exhibits superior performance due to its ability to calculate the contact and stay time before making any offloading decision other than d_m = 0. The mmWave communication range is limited compared to Mode-1 and Mode-2 of the NR C-V2X RATs. Therefore, vehicles are already engaged in communication activities before the mmWave communication range begins. This prior C-V2X communication activity helps align the mmWave antennas by exchanging antenna alignment instructions as part of normal V2V or V2I C-V2X-based communication. Moreover, the contact duration and stay time can be easily calculated from this prior C-V2X communication. This initial C-V2X communication and contact information assist the MCLA algorithm in its dynamic RAT selection, making it superior to the other schemes.
We assume that the processing power and loads of the vehicles and MEC servers are also shared through the beaconing messages. Therefore, the MCLA scheme knows the processing power and load of the destination offloading node before taking the offloading decision. This, in turn, enables the MCLA scheme to choose an optimal destination offloading node. In this manner, our proposed MCLA algorithm saves processing costs and time, in addition to saving transmission costs and time through the dynamic RAT selection mechanism.
A comparative analysis of the MCLA schemes with the other baseline schemes is shown in Fig. 6 with fixed t_n^max values; the other simulation parameters are kept variable, as specified in Table III. The task turnover ratio is directly impacted by t_n^max, as are the system costs and delays. Fig. 6(a) shows the task turnover ratio for different fixed t_n^max values, while Figs. 6(b) and 6(c) illustrate the effect of varying t_n^max on the transmission and computation costs and on the transmission and processing delays, respectively. When t_n^max is set between 200 and 500 ms, the local task processing ratio increases by around 13%-24%, and offloading reduces from 87% to 76%. However, when t_n^max is further increased and tested at 800, 1,000, and 1,500 ms, the local processing turnover remains around 30%, and offloading settles at 70% for all three cases. This unchanging local processing turnover is due to v_n's OBU reaching its total processing capacity, at which point offloading becomes the only option to complete tk_m. Furthermore, extending t_n^max allows more flexible ways to process offloaded tasks, leading to increased turnover but also increased delays and costs in transmission and processing. The average cost-to-task-turnover ratio for the MCLA-I and MCLA-II schemes is 0.037 and 0.043, while for MACTER it is 0.042, and for the MAP, MAP-I, and conventional schemes it is 0.041, 0.048, and 0.049, respectively. It is worth noting that a lower ratio indicates higher efficiency of the task offloading algorithm. In terms of the task turnover-to-delay ratio, MCLA-I and MCLA-II have ratios of 0.14 and 0.15, respectively, while MACTER has a ratio of 0.15; MAP, MAP-I, and conventional have ratios of 0.24, 0.25, and 0.23, respectively. In summary, the MCLA schemes outperform the other schemes in terms of the task turnover-to-cost ratio and the task turnover-to-delay ratio in all scenarios of t_n^max.
The speed of vehicles can impact the performance of VECNs, particularly in terms of task offloading. The primary aim of the MCLA scheme is to optimize the task offloading process by maximizing the task turnover ratio while minimizing the associated costs. Therefore, we conducted several experiments on vehicle speed and present the results in Fig. 7(a)-(f), where we fixed the speed at 30 kph, 45 kph, and 60 kph and evaluated its impact on the task turnover ratio and the average cost of the offloading schemes. Figs. 7(a)-(c) show that as the speed increases, the task turnover ratio decreases. This is because the increased speed results in larger distances between the vehicles, leading to shorter stay times and contact durations and, ultimately, lower task turnover ratios. It is noteworthy, however, that the speed of v_i^com has a more significant impact on the task turnover ratio than the speeds of the v_n and v_i^pub vehicles. On the other hand, Figs. 7(d)-(f) demonstrate the impact of speed on the average cost. As the task turnover ratio is affected by the speed, the average cost is evidently affected as well. For variations in the speed of v_n, the average cost-to-task-turnover ratio for MCLA-I and MCLA-II is 0.037 and 0.045, respectively; MACTER, MAP, and MAP-I have ratios of 0.044, 0.045, and 0.054, while the conventional scheme has a ratio of 0.053. For variations in the speed of v_i^com, the ratio for MCLA-I and MCLA-II is 0.039 and 0.047, respectively; the MACTER, MAP, MAP-I, and conventional schemes have ratios of 0.046, 0.047, 0.055, and 0.056, respectively. For variations in the speed of v_i^pub, the ratio for MCLA-I and MCLA-II is 0.037 and 0.044, respectively; the MACTER, MAP, MAP-I, and conventional schemes have ratios of 0.043, 0.045, 0.052, and 0.053, respectively. In summary, in all cases of speed variations, the MCLA-I scheme has a higher task turnover
with the lowest cost and outperforms all the other task offloading schemes in terms of the average cost-to-task-turnover ratio. Fig. 8 shows the impact of varying vehicle densities on the different offloading schemes. Initially, v_i^com's density was set to 5, 15, and 30, and then v_i^pub's density was set to 4, 8, and 16, respectively; the other parameters were kept as described in Section IV-A and Table III. As shown in Figs. 8(a) and (c), the task turnover ratio increases as the vehicle densities increase, resulting in a decrease in the average system cost, as demonstrated in Figs. 8(b) and (d), respectively. In the case of v_i^com, the average cost-to-task-turnover ratio for MCLA-I and MCLA-II is 0.037 and 0.044, respectively; MACTER, MAP, and MAP-I offer ratios of 0.043, 0.044, and 0.053, while the conventional scheme has a ratio of 0.052. Similarly, in the case of v_i^pub, the ratio for MCLA-I and MCLA-II is 0.031 and 0.045, respectively; MACTER, MAP, and MAP-I have ratios of 0.044, 0.037, and 0.054, and again the conventional scheme has a ratio of 0.053. These results can be attributed to the fact that resource-hungry vehicles can find resource-rich vehicles nearby, and V2V offloading is often preferred over V2I due to the higher cost of the latter. Vehicle density has a direct relationship with task turnover and an inverse relationship with the average system delays and costs. Additionally, lower densities of public vehicles v_i^pub result in higher task turnover than higher densities of common v_i^com vehicles. Each density variation case demonstrates that the proposed MCLA-I scheme consistently offers higher task turnover, lower average cost, and the lowest average cost-to-task-turnover ratio compared to all the other schemes. MACTER also performs well, but both the heuristic and conventional
schemes offer higher cost solutions with lower task turnover.
The average task turnover in relation to the average delay per task is illustrated in Fig. 9. In this investigation, we treated t_n^max, c_n, the task size, and the vehicle speeds as random variables to evaluate the performance of each scheme. The results indicate that our proposed MCLA-I and MCLA-II schemes demonstrate 89% and 87% task offloading and processing efficiency, respectively, in contrast to the MAP, MAP-I, MACTER, and conventional offloading schemes, which achieve 77%, 75%, 83%, and 74%, respectively. The task turnover ratio of the MCLA schemes is substantially higher than that of the other baseline schemes, as evidenced by Fig. 9. Moreover, the average task processing delays of the MCLA schemes are significantly lower than those of the MACTER, heuristic, and conventional schemes.
In VECNs, MEC servers are not only responsible for handling vehicular task offloading but also perform other functions such as content delivery, caching, positioning, intelligent routing, and connected-car services. If vehicles offload their computation tasks to MEC servers, this may overload the servers; therefore, load balancing of MEC servers is crucial to maintaining QoS. Fig. 10 presents a comparative analysis of our proposed MCLA scheme with and without load balancing. In a non-load-balancing environment, MEC servers can handle vehicular offloading activities when their computational load is 60% or below. However, if the load increases to 70%, their availability for vehicular offloading activities decreases by about 10-11%. Furthermore, when the load reaches 80% and 90%, their availability decreases by nearly 31% and 73%, respectively. With load balancing mechanisms, the availability of MEC servers for vehicular task offloading increases by approximately 5%, 9%, and 7% when the computational loads are at 90%, 80%, and 70%, respectively.

D. Strengths, Limitations, and Future Recommendations
The MCLA task offloading scheme is a promising approach to minimizing offloading cost and delay in VECNs. It considers mobility and load factors to make informed offloading decisions. However, some limitations need to be acknowledged. First, mobility prediction introduces uncertainty, and inaccurate predictions may result in suboptimal offloading decisions. Second, the frequent exchange of messages between a vehicle's OBU and the edge server can increase communication overhead and latency. Third, the MCLA scheme is primarily designed for offloading compute-intensive tasks and may be less suitable for other task types, such as data-intensive tasks. Additionally, the MCLA scheme considers only the load on the mobile device and edge servers, not the network congestion level, which could lead to prolonged delays in high-congestion scenarios.
Despite these limitations, the MCLA scheme holds promise for enhancing the efficiency and performance of VECNs by reducing offloading cost and delay. It is suitable for large-scale VECNs, does not require centralized decision-making, and is compatible with different radio access technologies and cloud/edge computing paradigms.
Future recommendations for the MCLA scheme include employing advanced mobility prediction techniques, such as machine learning algorithms, to enhance offloading decision accuracy. Incorporating network congestion awareness, by considering factors such as network traffic load and bandwidth availability, can optimize performance in congested scenarios. Furthermore, exploring the integration of the MCLA scheme with distributed controllers for edge resource virtualization holds promise for enhancing scalability.

V. CONCLUSION
In this paper, we presented a study on the performance enhancement of VECNs. An MCLA task offloading scheme with heterogeneous access technologies under 5G-NR-V2X is proposed to enhance the computation efficiency of VECNs. The MCLA scheme also includes public vehicles as high-capacity processing vehicular nodes to reduce computation latencies and costs; these vehicles can add allowed-shareable CPU cycles from passengers' mobile equipment to their computation capacity. To minimize transmission costs and latencies, the MCLA scheme opportunistically switches between the NR-C-V2X and mmWave RATs. In the first phase, resource-rich destination offloading nodes are shortlisted by assessing their computation load and observing their mobility and contact correlation with the source offloading node. A node complying with the offloading task requirements, with low cost and a high-throughput link, is then selected from the shortlisted nodes. We also provide a collaborative RSU load balancing mechanism. The results show that the MCLA scheme outperforms the baseline schemes with a 4% to 15% increase in task turnover ratio and 4.7%-29.8% lower transmission and computation costs. The MCLA scheme also improves MEC server availability to vehicular nodes by 5% to 9% through its load balancing mechanism.

Fig. 1. A comprehensive overview of the system model, showcasing the framework mechanism employed for efficient task offloading under both NR C-V2X and mmWave technologies.

Fig. 2. A conceptual illustration showcasing the initiation of mmWave communications between resource-demanding vehicles v_n and resource-rich vehicles v_i, and between resource-demanding vehicles v_n and an in-range roadside unit R_j.
(3) by setting d_n,i(t) = d^mm_n,i(t) and r_n = r^mm_n. Moreover, V2V communication in the mmWave mode must satisfy the condition 1 ≤ d^mm_n,i ≤ r^mm_n, and for the PC5 mode r^mm_n ≤ d_n,i ≤ r_n. The stay time in terms of time slots t^mm_n,i of vehicle v_n under v_i's mmWave coverage can be calculated by putting d_n,i(t) = d^mm_n,i(t) in (…). Offloading vehicles v_n generate tk_m tasks, where m = {1, 2, . . ., N}, with an offloading decision d_m ∈ {0, 1, −1} associated with each task. d_m = 0 means vehicle v_n decides to compute tk_m locally; when d_m = 1, vehicle v_n decides to upload its task to a nearby resourceful vehicle; and when d_m = −1, the vehicle decides to offload its task to the MEC server. Each task of vehicle v_n consists of a tuple {c_n, s^in_n, s^out_n, t^max_n}, where c_n is the required CPU cycles, s^in_n and s^out_n are the input and output sizes of tk_m, and t^max_n is the maximum tolerable delay.
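The task tuple and the three-valued offloading decision described above can be sketched in code. This is a minimal illustration, not the paper's implementation: the `Task` fields mirror the tuple {c_n, s^in_n, s^out_n, t^max_n}, and the toy `decide` rule (compute locally only when the local CPU meets the deadline) is a hypothetical simplification of the full MCLA decision logic.

```python
from dataclasses import dataclass

@dataclass
class Task:
    c: float      # c_n: required CPU cycles
    s_in: float   # s^in_n: input size (bits)
    s_out: float  # s^out_n: output size (bits)
    t_max: float  # t^max_n: maximum tolerable delay (seconds)

# The three decision values d_m from the text
LOCAL, VEHICLE, MEC = 0, 1, -1

def decide(task: Task, f_local: float) -> int:
    """Toy decision rule: keep the task local when the local CPU
    (f_local cycles/s) can finish it within the deadline, otherwise
    offload to a nearby resourceful vehicle."""
    t_local = task.c / f_local
    return LOCAL if t_local <= task.t_max else VEHICLE

tk = Task(c=2e9, s_in=1e6, s_out=1e5, t_max=0.5)
print(decide(tk, f_local=5e9))  # local time 0.4 s <= 0.5 s, prints 0
```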

if d_m = 0 then compute tk_m locally
else if v ≠ φ then
    select v_i from v by following (32); d_m = 1 (compute tk_m at v_i)
else if r ≠ φ then
    select r_k from r by following (34); d_m = −1 (compute tk_m at r_k)
else
    not enough resources to compute tk_m at t; put tk_m in the waiting queue if allowed
end
if ((t_n,i + t^mm_n,i) > T_i) ∧ (d_m = 1) at t + T_i then
    transfer the output directly to v_n
else
    upload the output to R_j or R_j+1 in the headway of v_n
end
if ((t_n,j + t^mm_n,j) > T_j) ∧ (d_m = −1) at t + T_j then
    transfer the output directly to v_n
else
    transfer the output to the nearest R_j+1 in the headway of v_n
end
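The two-phase flow in the algorithm fragment above (shortlist candidate vehicles, then pick one by the paper's criteria, falling back to an RSU/MEC node or the waiting queue) can be sketched as follows. The shortlist and scoring criteria, which the paper defines in Eqs. (32) and (34), are abstracted as callables here; every name in this sketch is illustrative, not taken from the paper's code.

```python
def offload_decision(task, vehicles, rsus, shortlist, score_v, score_r):
    """Return (d_m, node): 1 = offload to a vehicle, -1 = offload to an
    RSU/MEC server, None = queue the task (no resources at this slot)."""
    # Phase 1: shortlist resource-rich vehicles in proximity
    v = [x for x in vehicles if shortlist(task, x)]
    if v:
        # Phase 2: pick the best shortlisted vehicle (Eq. (32)-style score)
        best = max(v, key=lambda x: score_v(task, x))
        return 1, best
    if rsus:
        # Fall back to the best RSU/MEC node (Eq. (34)-style score)
        best = max(rsus, key=lambda x: score_r(task, x))
        return -1, best
    return None, None  # put tk_m in the waiting queue if allowed

# Toy usage: shortlist by CPU capacity, score by capacity
vehicles = [{"id": "v1", "cap": 3e9}, {"id": "v2", "cap": 6e9}]
d, node = offload_decision(
    {"c": 2e9}, vehicles, rsus=[],
    shortlist=lambda t, x: x["cap"] >= t["c"],
    score_v=lambda t, x: x["cap"],
    score_r=lambda t, x: 0)
print(d, node["id"])  # prints: 1 v2
```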

Fig. 6. Comparison of the MCLA scheme versus other baseline schemes in terms of completed tasks, average cost, and average delay. Figs. (a), (b), and (c) show the comparison with t_n fixed at 200 ms, 350 ms, and 500 ms, respectively. All other simulation parameters are set as mentioned in Section IV-A and Table III.

Fig. 7. Comparison of the MCLA scheme and other baseline schemes in terms of completed tasks and average cost with fixed speeds of 30 kph, 40 kph, and 60 kph, respectively. Figs. (a) and (d) pertain to the speed of v_n vehicles, Figs. (b) and (e) correspond to v^com_i vehicles, and Figs. (c) and (f) correspond to v^pub_i vehicles.

Fig. 8. Comparative depiction of the impact of vehicle density on task turnover and average cost. Figs. (a) and (b) are associated with v^com_i vehicles.

Figs. 8(a)-(d) illustrate the effect of the v^com_i and v^pub_i vehicle densities.

Fig. 9. Comparison of task offloading schemes with respect to the average number of completed tasks and the average delay.

Fig. 10. The effects of load balancing across various MEC-server load levels for enhanced system performance.

TABLE I. AN IN-DEPTH COMPARATIVE ANALYSIS OF RELATED STUDIES AND OUR RESEARCH, EMPHASIZING DISTINCTIONS AND CONTRIBUTIONS

TABLE II. LIST OF FREQUENTLY USED KEY NOTATIONS

The vehicle goes for d_m = 1 when the condition T_n ≤ t^max_n is not satisfied. Vehicle v_n searches for a nearby resource-sharing vehicle v_i, where v_i ∈ {v^com_i, v^pub_i}. When a vehicle v_i is found, s^in_n is transferred to it from v_n along with c_n and t^max_n, which incurs an additional transmission latency. The V2V up-link (i.e., transmitting the data and program) transmission latency for the PC5 and mmWave modes is calculated as s^in_n/R_n,i and s^in_n/R^mm_n,i, respectively, while the V2V down-link (i.e., back transmission of results) transmission latency T^dn_i for the PC5 and mmWave modes is calculated as s^out_n/R_n,i and s^out_n/R^mm_n,i, respectively.

Algorithm 1. Selection of Suitable Vehicles Subset v From Proximity Vehicles. Input: one-hop distant in-range vehicle set N at time t, tk_m's tuple. Output: suitable set of resource-rich vehicles v.
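The up-link and down-link latency expressions above (s^in_n/R and s^out_n/R for whichever RAT carries the transfer) amount to a simple rate division. The sketch below illustrates this with assumed rate values; the 10 Mb/s PC5 and 1 Gb/s mmWave figures are illustrative examples, not values from the paper.

```python
def v2v_latency(s_in_bits: float, s_out_bits: float, rate_bps: float):
    """Return (uplink, downlink) transmission latencies in seconds,
    following the s_in/R and s_out/R expressions in the text."""
    return s_in_bits / rate_bps, s_out_bits / rate_bps

# Assumed rates: a PC5 link at 10 Mb/s vs. an mmWave link at 1 Gb/s,
# for an 8 Mb task input and a 0.8 Mb result
up_pc5, dn_pc5 = v2v_latency(8e6, 0.8e6, 10e6)   # 0.8 s up, 0.08 s down
up_mm, dn_mm = v2v_latency(8e6, 0.8e6, 1e9)      # 8 ms up, 0.8 ms down
print(up_pc5, dn_pc5, up_mm, dn_mm)
```

The two orders of magnitude between the links show why the MCLA scheme's opportunistic switch to mmWave, when the distance condition permits it, cuts transmission latency so sharply.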

TABLE III. PARAMETRIC VALUES OF SIMULATION PARAMETERS