Trust in Vehicles: Toward Context-Aware Trust and Attack Resistance for the Internet of Vehicles


Abstract—Trust evaluation and management schemes have been extensively employed in a bid to alleviate diverse insider attacks. These trust values are often ascertained by taking into consideration trust parameters that determine the honesty and reliability of a vehicle and evaluating a weighted sum of the said parameters. To achieve precision and to reflect a reasonable impact of these parameters, rational weight values are imperative. Accordingly, this research primarily emphasizes the quantification of the weights associated with the contributing trust attributes by proposing a novel trust management mechanism that utilizes contextual information, in addition to employing relevant influencing quantities as weights, to formulate trust evaluations. Moreover, the envisaged trust management model incorporates 1) attack resilience while constituting certain parameters and 2) an adaptive and flexible threshold to mitigate malevolent behaviors. The simulation results depict that the devised parameters and the formulated trust aggregation cater to the dynamic nature of vehicular networks, demonstrating the rationality of the weights' quantification, and that the introduced adaptive threshold for misbehavior detection aligns well with the requirements of the ever-changing vehicular networks.
Index Terms—Vehicular ad hoc networks, Internet of Vehicles, context-aware trust management, weight quantification, attack resistance, adaptive threshold.

I. INTRODUCTION
The evolving needs and growing mobility demands have resulted in an exponential increase in the number of cars on the road [1]. The number of registered road vehicles in Australia alone was recorded to be 20.1 million as of 31 January 2021, according to the Australian Bureau of Statistics [2]. The tremendous growth in the number of vehicles leads to a rising fatality rate, estimated at about 3,700 deaths daily worldwide due to road accidents [3]. Lately, the development of innovative traffic control and road safety techniques has attracted much interest from researchers in both industry and academia [4]. Vehicular ad hoc networks (VANETs) hold paramount importance in alleviating traffic-related issues in urban areas. The amalgamation of cloud and edge computing with big data and the Internet of Things (IoT) is impelling the evolution of VANETs to bring forth the notion of the Internet of Vehicles (IoV) [5]. Smart connected vehicles relying on vehicle-to-everything (V2X) communications aid in providing safe and efficacious traffic flows, thereby supporting next-generation road mobility and transport [6]. V2X communications encompass vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), vehicle-to-sensor (V2S), vehicle-to-pedestrian (V2P), and vehicle-to-cloud (V2C) communications, wherein vehicles utilize wireless media to exchange information with other vehicles, surrounding infrastructure, onboard sensors, personal devices, and the cloud computing environment, respectively [7], [8]. The V2X-based application scenarios generally include i) safety-critical applications, e.g., crash avoidance and collision notification, ii) non-safety applications, e.g., navigation and anti-theft, and iii) entertainment [9]. The recent breakthroughs in Intelligent Transportation Systems (ITS) are primarily related to the acquisition and processing of high volumes of sensor data [10], [11].
The data acquired by the embedded sensors are exchanged with other onboard sensors and with diverse sensors in the vicinity to provide real-time traffic management and ensure road safety [12]. It is, therefore, crucial that both the information itself and its exchange are secure and reliable. However, vehicular networks are susceptible to attacks, wherein dishonest entities can modify legitimate safety messages, spread counterfeited information, or forward messages with a delay, thereby endangering human lives [13]. The ever-evolving topology owing to the highly mobile nature of vehicular networks, the decentralized architecture, pervasive operation, and open infrastructure make it challenging to ensure security and leave vehicular networks vulnerable to both insider and outsider attacks [14], [15]. A comprehensive review of the literature demonstrates that numerous cryptography-based security solutions have been suggested over the years. However, these techniques alone have only proven useful for mitigating outsider attacks, wherein the attackers are unauthorized users of a vehicular network [16]. In an IoV environment, vehicles exchange information that is fundamental to achieving safety and security objectives. These objectives can be realized in the form of collision avoidance warnings, emergency brake notifications, and right/left turn assistance. Moreover, this data sharing also supports non-safety (infotainment) applications such as navigation systems, Internet services, and file sharing services.
To tackle insider attacks on vehicular networks, the notion of trust has lately been introduced and several trust management models have been proposed [17]. Researchers from diverse domains have presented multiple definitions of trust and the respective trust attributes over the years [18], [19], [20], [21]. Trust is defined as the belief of a vehicle (referred to as a trustor) in its peer vehicle (referred to as a trustee), relying on the past interactions between the two and on the opinions toward the trustee acquired from the trustor's neighboring vehicles.
Trust computation in the said trust management models takes into account numerous parameters, e.g., quality of past interactions (i.e., packet delivery ratio), neighbor recommendations, time, distance, familiarity, and frequency of interactions, and amalgamates these parameters to compute the final trust value. While accumulating these parameters, weights are often associated with the individual parameters to reflect their significance in the final trust score. Depending on the model, the final trust scores are often termed local or global trust, and the same practice is witnessed not only in vehicular networks but in other domains as well [22]. In order to decide which vehicles are honest and which are not, a threshold is defined, and vehicles having a trust value above this predefined threshold are categorized as trustworthy. If the trust score of a vehicle falls below the said threshold, that vehicle is identified as malicious, and only information received from trusted vehicles is accepted.
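The weighted-sum aggregation and threshold classification described above can be sketched as follows; the parameter names and weight values are illustrative placeholders, not taken from any particular model:

```python
def weighted_trust(params: dict[str, float], weights: dict[str, float]) -> float:
    """Combine normalized trust parameters (each in [0, 1]) into one trust score."""
    total_weight = sum(weights.values())
    return sum(weights[name] * value for name, value in params.items()) / total_weight

def classify(trust: float, threshold: float = 0.5) -> str:
    """Vehicles at or above the threshold are treated as trustworthy."""
    return "trustworthy" if trust >= threshold else "malicious"

# Illustrative parameters: packet delivery ratio, neighbor recommendation, familiarity.
params = {"pdr": 0.9, "recommendation": 0.7, "familiarity": 0.5}
weights = {"pdr": 0.5, "recommendation": 0.3, "familiarity": 0.2}
score = weighted_trust(params, weights)  # 0.5*0.9 + 0.3*0.7 + 0.2*0.5 = 0.76
```

With these placeholder weights, the vehicle's score of 0.76 sits above the threshold of 0.5 and the vehicle is classified as trustworthy; the whole point of the paper is that such weights should be quantified rationally rather than picked by hand.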
Context can be defined as the knowledge that can prove useful in determining the circumstances of a target entity. The said entity can be an individual, a location, or an object (e.g., a vehicle) that has specific relevance to the interaction between an end-user and an application, including the end-user and the application themselves [23]. Similarly, context-aware security is defined as a collection of context-specific supplementary information relevant to the security practices concerning a certain task, which aids in enhancing security-related decision making [24], [25]. Accordingly, we define context as the category of the communication or message exchange among vehicles, i.e., safety-critical or non-safety (infotainment), in vehicular applications during the trust evaluation process, as depicted in Fig. 1. Safety-critical applications such as collision warning primarily focus on improving road safety and alleviating road accidents by preventing collisions, including collisions of vehicles with other vehicles and with vulnerable pedestrians. These collisions can occur as a result of stop sign violations, traffic signal violations, and a lack of pedestrian crossing information. Non-safety applications such as file sharing services enable passengers to exchange data with one another, e.g., pictures, audio, and video. Such applications include peer-to-peer (P2P) applications such as BitTorrent, CarTorrent, and FleaNet, which help improve the passengers' experience and comfort [26].
The massive increase in smart connected vehicles mandates reliance on information sharing among vehicles in a bid to make independent judgements and decisions for their own safety and efficiency. The accuracy of these decisions is entirely dependent on the credibility of the shared information. For instance, imagine a vehicle receiving altered forward collision warning information, resulting in an accident, or a vehicle receiving false incident reports, causing unnecessary rerouting.
Vehicular networks, analogous to mobile ad hoc networks [27] and cloud computing environments [28], are susceptible to a variety of attacks, including but not limited to the on-off attack, selective node attack, man-in-the-middle attack, and collusion attack [29]. Vehicles acting selfishly, counterfeiting or altering messages, introducing delays, dropping messages, or assigning false ratings to messages as well as to peer vehicles may cause life-threatening accidents in safety-critical applications, whereas for non-safety applications such misbehavior may lead to inconvenience, discomfort, and a waste of resources.
The research at hand addresses the challenges of weight quantification (i.e., for the contributing parameters), trust quantification, and dynamic misbehavior detection in the ever-changing vehicular networks for safety-critical and non-safety applications by incorporating context information. This is essential in order to realize safe and reliable vehicular traffic flows, as rational weight assignment and context-awareness lead to an accurate trust assessment. Subsequently, these trust evaluations assist in deciding a precise misbehavior detection threshold, since setting the threshold too low or too high leads to inaccurate categorization of honest and dishonest vehicles. Moreover, it is equally important for this threshold to be adaptive, i.e., well aligned with the dynamic nature of vehicular networks, to ensure timely detection of malevolent vehicles before they can damage network operations. To the best of our knowledge, the problems of rational weight assignment and threshold quantification that incorporate application-specific context information and attack resistance under the evolving characteristics of vehicular networks have not been catered for, despite their importance.

A. Our Contributions
In this paper, we make the following main contributions:
• Orchestrating a trust management framework that incorporates the context of the communication among vehicles (i.e., information on whether it pertains to a safety-critical or a non-safety (infotainment) application) and explores diverse contributing attributes to guarantee a rigorous criterion for evaluating trust scores that meets the stringent demands of these vehicular applications.
• Addressing the problem of weight quantification by employing suitable and rational influencing parameters as weights; catering for resilience against multiple attacks, e.g., on-off attacks and selective node attacks, when formulating the contributing parameters; and integrating historical behavior with a time-aware impact, besides introducing the influence of a vehicle's historical misconduct for penalizing purposes, to reflect the network dynamics.
• Formulating a flexible and adaptive misbehavior detection threshold to alleviate malicious conduct while accommodating the dynamic nature of vehicular networks, and carrying out extensive simulations that demonstrate the rationality of the context-awareness, the devised contributing trust parameters, the weights' quantification, and the proposed adaptive threshold in catering to the requirements of the ever-changing vehicular networks.

B. Organization of the Paper
The rest of the paper is organized as follows. Section II provides an overview of the state-of-the-art trust management models. Section III discusses the system architecture and covers the details of our proposed trust management model. Section IV presents the misbehavior detection mechanism, whereas Section V reports the simulation setup and the experimental results. Finally, Section VI offers some concluding remarks.

II. RELATED WORK
An extensive review of the existing literature suggests that a diverse range of trust management models in VANETs has been proposed [30], [31], [32], [33], [34], [35]. Rational weight allocation and formulation of a precise threshold along with the consideration of context information are crucial to cater to the dynamic nature of vehicular networks. Nonetheless, the existing literature has significant shortcomings in addressing the same. Chen et al. [30] propose a trust management model reliant on blockchain for decentralized trust computation of vehicles. The proposed scheme calculates the global trust for every vehicle by computing the weighted sum of the vehicle's previous trust score, and its message sending and rating behaviors. Nevertheless, the values of the weights associated with these parameters remain unexplained. On the contrary, Keshavarz et al. [31] present a trust management scheme, wherein the trust scores are computed in a centralized manner for each unmanned aerial vehicle (UAV) by calculating the weighted sum of a trustee's energy consumption, its task success rate, and the path deviation. Nonetheless, the quantification of the said weights remains unexplored.
Alnasser et al. [32] propose a trust assessment scheme, wherein the current trust and the indirect trust of a vehicle are amalgamated to compute the global trust of that vehicle.
The mean of the direct trust and that of the past trust is taken as the current trust, whereas the indirect trust is the weighted sum of both negative and positive recommendations, resulting from combining the confidence score and the global trust. This suggests that both the direct and the past trust have equal importance in the current trust evaluation. The global trust computation employs the notion of weights; however, the quantification of these weights remains unexplained. The trust management model proposed by Hasrouny et al. [33] calculates the trust score of a vehicle by aggregating the direct and the indirect trust of the vehicle evaluated by peer vehicles, the group leader, and the Roadside Unit (RSU). The said assessments are performed under two scenarios, normal and event-triggered. While computing the total trust score at the vehicle level, manually assigned (i.e., predefined) weights are associated with the direct trust, whereas the trust score calculated at the group-leader level is the mean of all the trusts computed at the vehicle level, which implies that the trust assigned by each peer has an equal influence. Moreover, the quantification of the predefined weights remains unexplained.
Wang et al. [34] present a trust computation scheme, wherein the neighbor trust is computed as the weighted sum of the success rate and that of the packet number trust. The proposed model aggregates the sensing capability of a vehicle, its communication behaviors, and its weighted energy trust to assess the total trust of a vehicle. However, the authors do not discuss the quantification of the associated weights. Dewanta et al. [35] evaluate the trust between a fog client vehicle and a fog service provider vehicle as the weighted sum of parameters, e.g., entity type, bidding number, and record of the transaction. Nevertheless, the quantification of these parameters and their respective weights has not been discussed.
Kang et al. [36] associate predefined weight parameters with positive and negative interactions, historical and current interactions, reputation computation, and verifier's incentive without shedding any light on the quantification of the values of these weights. Similarly, predefined values for weighted factors assigned to data and control trusts have been utilized by Zhang et al. [37], nevertheless, the manuscript lacks a discussion on the quantification of the same. Furthermore, the assigned weight values suggest an equal preference, which makes the idea of introducing weights meaningless.
Luo et al. [38] employ equal weights for past and current behaviors to prevent trust boosting by relying on recent interactions. However, assigning equal preference to both may not be a good idea for newcomers as they will not have any past behaviors to rely on, resulting in honest vehicles getting lower trust scores. Moreover, the quantification of the values defined as balancing factors has not been discussed by the authors. Li et al. [39] introduce cold start and equilibrium parameters while computing a vehicle's behavior. However, the quantification of these parameters has not been addressed.
In contrast to the state of the art, our envisaged approach primarily focuses on the allocation and quantification of rational weights and on the definition of a precise adaptive threshold, along with the consideration of contextual information, to cater to the dynamic nature of vehicular networks.

III. TRUST EVALUATION AND MANAGEMENT FOR IOV
The overall system architecture of the envisaged trust management model comprises vehicles interacting with other vehicles within a vehicular cluster and forming opinions about one another. Furthermore, the context of the interaction among vehicles (i.e., safety-critical or non-safety application) is also considered while performing these evaluations. The said opinions are integrated, and the resulting pairwise (i.e., from a trustor to a trustee) local trust, along with the context-dependent trust, is reported to the local authority, i.e., the roadside unit, where these assessments are combined to obtain an aggregated pairwise trust value. Subsequently, a global trust score is computed for each vehicle by accumulating these aggregated pairwise trust values to establish a single belief about each vehicle. Fig. 2 sketches the system architecture, whereas Fig. 3 presents the detailed system framework of the proposed trust management model.
We define a set of vehicles $V_v$, where $v = 1, \ldots, V$. At every time instance $k$, each vehicle interacts (i.e., communicates) with the other vehicles in its vicinity and, accordingly, the vehicles assess each other on the basis of the quality of the interaction among them. The assessment takes place in pairs, i.e., the vehicle assessing the other vehicle is the trustor $i$ and the one being assessed is the trustee $j$ ($i \neq j$), and is termed the local trust $LT_{i,j,k}$. Consequently, every trustee $j$ is evaluated by all of its $v-1$ neighboring trustors to compute the global trust $GT_{j,k}$ of the said trustee $j$. Table I summarizes the notations employed in the proposed system model.

A. Local Trust ($LT_{i,j,k}$)
The local trust encompasses the weighted sum of i) the direct trust, i.e., the direct observation of a trustor $i$ towards a trustee $j$ at time instance $k$, and ii) the indirect trust, i.e., the opinions of all the $v-2$ neighbors of a trustor $i$ towards a trustee $j$ at time instance $k$.
1) Direct Trust ($DT_{i,j,k}$): The direct trust represents the direct opinion of a trustor $i$ towards a trustee $j$ based on the quality of the interaction between the two. The quality of the said interaction is measured by the packet delivery ratio (PDR) between a trustor $i$ and a trustee $j$.
Packet Delivery Ratio ($PDR_{i,j,k}$) — The packet delivery ratio ($0 \leq PDR_{i,j,k} \leq 1$) represents the proportion of the successful interactions to the total number of interactions between a trustor $i$ and a trustee $j$, and is computed as:

$$PDR_{i,j,k} = \frac{s_{i,j,k}}{s_{i,j,k} + u_{i,j,k}} \qquad (1)$$

where $s_{i,j,k}$ represents the successful interactions from $j$ to $i$ at time instance $k$, whereas $u_{i,j,k}$ represents the unsuccessful interactions from $j$ to $i$ at time instance $k$.
Time Decay — The time decay factor (bounded in $[0, 1]$) represents how recent the interaction between a trustor $i$ and a trustee $j$ is, and is computed (Eq. (2)) from $k^{int}_{i,j}$, the time instance when the said interaction took place, and $k_{current}$, the current time instance.
Forgetting Factor ($\lambda_{i,j,k}$) — The forgetting factor ($0 \leq \lambda_{i,j,k} \leq 1$) ensures that the dishonest behavior of a trustee is not easily forgotten, and is computed (Eq. (3)) from $LT_{i,j,k-1}$, the local trust of trustor $i$ towards trustee $j$ at the previous time instance $k-1$. The local trust is initialized as $LT_{i,j,k} = 0.5$ at $k = 0$.
As depicted in Algorithm 1, the direct trust $DT_{i,j,k}$ of a trustor $i$ towards a trustee $j$ at the time instance $k$ takes into account the PDR between the two at the said time instance and the weighted sum of the PDRs between the two at the earlier time instances (Eq. (4)). The PDR of the historical interactions is weighted w.r.t. the freshness of the particular time instances, i.e., the recent interactions are assigned higher weights, as they are deemed more significant than the old interactions while computing the direct trust. Furthermore, it is also ensured that the untrustworthy behavior of a trustee is not forgotten easily by introducing the forgetting factor, i.e., the more untrustworthy the behavior a trustee shows at an earlier time instance, the longer it is remembered by assigning that instance a higher weight, and vice versa. In Eq. (4), $PDR_{i,j,k}$ represents the PDR between a trustor $i$ and a trustee $j$ at time instance $k$ (Eq. (1)), whereas $PDR_{i,j,l}$ represents the PDRs of the historical interactions (i.e., $l = 1, \ldots, k-1$). Moreover, the time decay factor (Eq. (2)) acts as the weight of the specific historical interaction, whereas $\lambda_{i,j,l}$ is the forgetting factor (Eq. (3)) that includes the impact of previous untrustworthiness.
The advantage of taking historical interactions into account is that it helps prevent the on-off attack, wherein a vehicle switches between the attacking and the disguised modes to avoid detection and possible elimination from the network. Moreover, it is worth noting that if a vehicle is unable to communicate, even for unintentional reasons, e.g., a jammed/blocked link, that particular vehicle is unreliable for a trustor in that time instance, as the trustor cannot trust that specific vehicle to relay messages, particularly safety-critical messages.
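The direct-trust computation can be illustrated with a minimal sketch. Since the displayed forms of Eqs. (2)-(4) are not reproduced here, the time decay (ratio of the interaction's time instance to the current one) and the forgetting factor (complement of the previous local trust) below are assumed forms, chosen only to match their stated behavior of favoring recent interactions and remembering past untrustworthiness:

```python
def pdr(successes: int, failures: int) -> float:
    """Packet delivery ratio in [0, 1]: successful over total interactions."""
    total = successes + failures
    return successes / total if total > 0 else 0.0

def time_decay(k_int: int, k_current: int) -> float:
    """Assumed recency weight: instances closer to the present weigh more."""
    return k_int / k_current

def forgetting_factor(prev_local_trust: float) -> float:
    """Assumed form: the lower the previous local trust, the heavier that
    instance weighs, so dishonest behavior is not easily forgotten."""
    return 1.0 - prev_local_trust

def direct_trust(pdr_history: list[float], lt_history: list[float], k: int) -> float:
    """Current PDR combined with decay- and forgetting-weighted historical PDRs,
    normalized so the result stays in [0, 1]."""
    current = pdr_history[-1]
    weights = [time_decay(l, k) * forgetting_factor(lt_history[l - 1])
               for l in range(1, k)]
    weighted_hist = sum(w * pdr_history[l - 1] for l, w in zip(range(1, k), weights))
    return (current + weighted_hist) / (1 + sum(weights))
```

With no history ($k = 1$) the direct trust reduces to the current PDR; as history accumulates, poorly trusted past instances pull the score down more strongly than well-trusted ones.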
2) Indirect Trust ($IDT_{i,j,k}$): The indirect trust represents the recommendations/opinions of the $v-2$ neighbors of a trustor $i$ towards a trustee $j$ at time instance $k$. The said recommendation/opinion relies on the direct trust computed by a neighbor $n$ towards a trustee $j$ at time instance $k$.
Confidence Factor ($\theta_{i,n,k}$) — The confidence factor ($0 \leq \theta_{i,n,k} \leq 1$) represents how reliable a trustor considers its neighbor, and is computed as:

$$\theta_{i,n,k} = \frac{s_{i,n,k}}{\sum_{j=1, j \neq i}^{v} s_{i,j,k}} \qquad (5)$$

where $s_{i,n,k}$ is the successful interactions between a trustor $i$ and its neighbor $n$, whereas $\sum_{j=1, j \neq i}^{v} s_{i,j,k}$ represents the successful interactions between a trustor $i$ and all of its $v-1$ neighbors.
As delineated in Algorithm 2, the indirect trust $IDT_{i,j,k}$ of a trustor $i$ towards a trustee $j$ at time instance $k$ takes into consideration the weighted sum of the direct trusts computed by each of the $v-2$ neighbors of the trustor towards the trustee at the said time instance. Each direct trust is weighted w.r.t. the confidence level of the trustor towards the corresponding neighbor, i.e., the higher the confidence of the trustor in a neighbor, the more weight is assigned to the said neighbor's opinion. The indirect trust is computed as:

$$IDT_{i,j,k} = \sum_{n=1, n \neq i,j}^{v} \theta_{i,n,k} \, DT_{n,j,k} \qquad (6)$$

where $\theta_{i,n,k}$ is the confidence factor of a trustor $i$ towards its neighbor $n$ (Eq. (5)), whereas $DT_{n,j,k}$ represents the opinion, i.e., the direct trust, of the said neighbor $n$ towards a trustee $j$ at time instance $k$ (Eq. (4)).
The advantage of taking into account the opinion of all the neighbors is that it prevents the selective node attack, wherein a node (i.e., a vehicle) switches between the honest and the dishonest behaviors while interacting with different vehicles.
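A sketch of the confidence-weighted neighbor aggregation described above; the normalization of the confidence factors over the trustor's successful interactions, and the neutral fallback score of 0.5 when no evidence exists, are assumptions of this sketch:

```python
def confidence(s_with_neighbor: int, s_with_all: int) -> float:
    """Confidence of trustor i in neighbor n: the share of i's successful
    interactions that involved n."""
    return s_with_neighbor / s_with_all if s_with_all > 0 else 0.0

def indirect_trust(neighbor_dt: dict[str, float], successes: dict[str, int]) -> float:
    """Confidence-weighted sum of each neighbor's direct trust toward the trustee.
    Because the confidence weights sum to one, the result stays in [0, 1]."""
    total = sum(successes.values())
    if total == 0:
        return 0.5  # assumed neutral prior when no evidence is available
    return sum(confidence(successes[n], total) * neighbor_dt[n] for n in neighbor_dt)
```

Taking every neighbor's opinion into account in this way is what blunts the selective node attack: a vehicle that behaves honestly toward some peers and dishonestly toward others is still exposed by the aggregated view.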
3) Aggregation of A.1 and A.2: The direct trust $DT_{i,j,k}$ and the indirect trust $IDT_{i,j,k}$ are aggregated to compute the local trust $LT_{i,j,k}$ of a trustor $i$ towards a trustee $j$ at time instance $k$. The said aggregation is the weighted sum of the two.
Frequency of Interaction ($\beta_{i,j,k}$) — The frequency of interaction ($0 \leq \beta_{i,j,k} \leq 1$) represents how regularly a trustor and a trustee have communicated, i.e., interacted, and is computed as:

$$\beta_{i,j,k} = \frac{\sum_{l=1}^{k} x_{i,j,l}}{\sum_{j=1, j \neq i}^{v} \sum_{l=1}^{k} x_{i,j,l}} \qquad (7)$$

where $\sum_{l=1}^{k} x_{i,j,l}$ represents the interactions between a trustor $i$ and a trustee $j$ at the current and all the previous time instances, whereas the denominator represents the interactions between the trustor and all of its $v-1$ neighbors at the current and all the previous time instances.
The local trust $LT_{i,j,k}$ employs the frequency of interactions as the weighting factor while amalgamating the direct and the indirect trust from a trustor towards a trustee at any time instance, as outlined in Algorithm 2. The rationale behind introducing the frequency of interactions as the weight is that a trustor that has had significant interactions with a trustee is able to form an accurate first-hand opinion of the trustee, whereas a trustor that has not interacted much with the trustee relies more on the recommendations from its neighbors. The local trust of a trustor $i$ towards a trustee $j$ at time instance $k$ is computed as:

$$LT_{i,j,k} = \beta_{i,j,k} \, DT_{i,j,k} + (1 - \beta_{i,j,k}) \, IDT_{i,j,k} \qquad (8)$$

where $\beta_{i,j,k}$ is the frequency of interactions between a trustor $i$ and a trustee $j$ at time instance $k$ (Eq. (7)), whereas $DT_{i,j,k}$ and $IDT_{i,j,k}$ are the direct (Eq. (4)) and the indirect (Eq. (6)) trusts, respectively. Equation (8) indicates that if the trustor has had frequent interactions with the trustee, the direct trust is assigned more weight, whereas if the trustor has had very little interaction with the trustee, it relies more on the neighbors' opinion and, consequently, the weight associated with the indirect trust takes a higher value.

Algorithm 2: Indirect and Local Trust Computation at Time k
  Input: successful interactions $s_{i,j,k}$, pairwise direct trust $DT_{i,j,k}$, frequency of interaction $\beta_{i,j,k}$
  Output: pairwise indirect trust $IDT_{i,j,k}$, local trust $LT_{i,j,k}$
  for k ← 1 to K do
    for i ← 1 to V do
      for j ← 1 to V do
        if i ≠ j then
          for n ← 1 to V do
            if n ≠ i and n ≠ j then
              $\theta_{i,n,k}$ ← ConfidenceFactor($s_{i,n,k}$, $s_{i,j,k}$)    ▷ Eq. (5)
              $DT_{n,j,k}$ ← NeighRecom($DT_{i,j,k}$)    ▷ Eq. (4)
              $IDT_{i,j,k}$ ← IndirectTrust($\theta_{i,n,k}$, $DT_{n,j,k}$)    ▷ Eq. (6)
            end
          end
          update $IDT_{i,j,k}$
          $LT_{i,j,k}$ ← LocalTrust($DT_{i,j,k}$, $\beta_{i,j,k}$, $IDT_{i,j,k}$)    ▷ Eq. (8)
        end
      end
    end
  end

Algorithm 3: Context-Dependent and Global Trust Computation at Time k
  Input: distance $D_{i,j,k}$, propagation speed $S_p$, neighbors $N$, context $Ctxt_{i,j,k}$
  Output: pairwise context-dependent trust $CDT_{i,j,k}$, global trust $GT_{j,k}$
  for k ← 1 to K do
    for i ← 1 to V do
      for j ← 1 to V do
        if i ≠ j then
          $PD_{i,j,k}$ ← PropagationDelay($D_{i,j,k}$, $S_p$)    ▷ Eq. (9)
          $Co_{j,k}$ ← Cooperativeness($N_{j,k}$, $v$)    ▷ Eq. (10)
          $F_{i,j,k}$ ← Familiarity($N_{i,k}$, $N_{j,k}$)    ▷ Eq. (11)
          $CDT_{i,j,k}$ ← CntxtDepTrust($PD_{i,j,k}$, $Co_{j,k}$, $F_{i,j,k}$)    ▷ Eq. (12)
          $TLT_{i,j,k}$ ← TotLocalTrust($LT_{i,j,k}$, $Ctxt_{i,j,k}$, $CDT_{i,j,k}$)    ▷ Eq. (14)
        end
      end
    end
    $GT_{j,k}$ ← GlobalTrust($GT_{i,k-1}$, $TLT_{i,j,k}$)    ▷ Eq. (15)
  end
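The frequency-weighted amalgamation of the direct and indirect trust reads naturally as a convex combination (one natural reading of Eq. (8)); a minimal sketch:

```python
def frequency_of_interaction(with_trustee: int, with_all_neighbors: int) -> float:
    """beta in [0, 1]: the share of the trustor's interactions spent on this trustee."""
    return with_trustee / with_all_neighbors if with_all_neighbors > 0 else 0.0

def local_trust(dt: float, idt: float, beta: float) -> float:
    """Convex combination of direct and indirect trust: frequent first-hand
    contact shifts weight to DT, sparse contact shifts it to the neighbors'
    aggregated opinion (IDT)."""
    return beta * dt + (1.0 - beta) * idt
```

At the extremes, a trustor that interacted exclusively with this trustee (beta = 1) uses only its own observation, while a newcomer pair (beta = 0) falls back entirely on neighbor recommendations.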

B. Context-Dependent Trust ($CDT_{i,j,k}$)
The context-dependent trust encompasses the weighted sum of (a) the propagation delay, i.e., the time it takes a message to traverse from a trustee to a trustor, (b) the cooperativeness, i.e., how cooperative a trustee is with the other vehicles, and (c) the familiarity, i.e., how well a trustor knows a trustee, as summarized in Algorithm 3.
1) Propagation Delay ($PD_{i,j,k}$): The propagation delay ($0 \leq PD_{i,j,k} \leq 1$) represents how long it takes messages to traverse from a trustee to a trustor and is computed as:

$$PD_{i,j,k} = \frac{D_{i,j,k}}{S_p} \qquad (9)$$

where $D_{i,j,k}$ represents the distance between a trustor $i$ and a trustee $j$ at time instance $k$, whereas $S_p$ is the propagation speed of the messages from a trustee $j$ to a trustor $i$. The propagation delay is employed to indicate the preference of a trustor towards a trustee.
2) Cooperativeness ($Co_{j,k}$): The cooperativeness ($0 \leq Co_{j,k} \leq 1$) represents how interactive, selfish, or cooperative a trustee is, and is computed (Eq. (10)) from $N_{j,k}$, the set of neighbors/vehicles that a trustee $j$ interacts with at time instance $k$, and $v$, the number of vehicles in the network. It should be noted that a vehicle does not necessarily communicate/interact with all of its neighbors at every time instance. Cooperativeness is considered to reduce the preference for a trustee behaving selfishly, wherein vehicles preserve their resources by not interacting with other vehicles and by not relaying the received messages. The more cooperative a vehicle is, the faster a message is disseminated to a large number of vehicles, resulting in a safer network.
3) Familiarity ($F_{i,j,k}$): The familiarity ($0 \leq F_{i,j,k} \leq 1$) represents the proportion of common neighbors a trustor and a trustee have and is computed as:

$$F_{i,j,k} = \frac{|N_{i,k} \cap N_{j,k}|}{|N_{i,k}|} \qquad (11)$$

where $N_{i,k} \cap N_{j,k}$ is the set of common neighbors of a trustor $i$ and a trustee $j$ at time instance $k$, whereas $N_{i,k}$ is the set of neighbors of a trustor $i$ at time instance $k$. The concept of familiarity is taken from social networks, wherein entities tend to trust other entities that are more familiar. Furthermore, having more common neighbors ensures that there are at least two links (one through the trustor and one through the trustee) present to disseminate messages to the neighboring vehicles.
4) Aggregation of B.1, B.2, and B.3:

$$CDT_{i,j,k} = \frac{(1 - PD_{i,j,k}) + Co_{j,k} + F_{i,j,k}}{3} \qquad (12)$$

where $1 - PD_{i,j,k}$ is the complement of the propagation delay, $Co_{j,k}$ is the cooperativeness, and $F_{i,j,k}$ is the familiarity among vehicles, computed using Eqs. (9), (10), and (11), respectively. This way, the lower the propagation delay and the higher the cooperativeness and familiarity between two vehicles, the higher the context-related trust. The assignment of equal weights to all three constitutive elements (i.e., $PD$, $Co$, and $F$) reflects the equivalent significance of each element in computing $CDT$, as they all play an equal role in expediting the propagation of information.
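The equal-weight aggregation of the three context-related elements can be sketched as follows; the familiarity ratio follows the text (common neighbors over the trustor's neighbors), and the equal-weight combination is taken to be a plain mean so the result stays in [0, 1]:

```python
def familiarity(neighbors_i: set, neighbors_j: set) -> float:
    """Fraction of the trustor's neighbors that the trustee also knows."""
    if not neighbors_i:
        return 0.0
    return len(neighbors_i & neighbors_j) / len(neighbors_i)

def context_dependent_trust(pd: float, co: float, fam: float) -> float:
    """Equal-weight mean of the complement of the propagation delay,
    the cooperativeness, and the familiarity, each normalized to [0, 1]."""
    return ((1.0 - pd) + co + fam) / 3.0
```

A nearby (low-delay), cooperative, and well-known trustee thus scores close to 1, while a distant stranger that relays nothing scores close to 0.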

C. Total Local Trust ($TLT_{i,j,k}$)
The total local trust is the pairwise measure that takes into consideration the pairwise local trust (i.e., the amalgamation of the direct and the indirect trust), the context-dependent trust (i.e., the accumulation of the propagation delay, cooperativeness, and familiarity measures), and the contextual information, as depicted in Algorithm 3.
In this paper, we consider two cases of context, i.e., safety-critical applications and non-safety (infotainment) applications:

Ctxt_{i,j,k} = 0.5, if safety-critical application; 0, otherwise. (13)

where Ctxt_{i,j,k} is the context of the messages exchanged between a pair of a trustor and a trustee. The pairwise Total Local Trust TLT_{i,j,k} assigned by a trustor i to a trustee j at time instance k is then computed via Eq. (14), where LT_{i,j,k}, CDT_{i,j,k}, and Ctxt_{i,j,k} are as defined in Eqs. (8), (12), and (13), respectively.
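A minimal sketch of the context coefficient and one plausible form of the total local trust follows. The combination in total_local_trust is an assumption consistent with the description above (a non-safety context, Ctxt = 0, leaves only the local trust), not the paper's authoritative Eq. (14):

```python
def context_coefficient(safety_critical):
    # Eq. (13): 0.5 for safety-critical exchanges, 0 otherwise.
    return 0.5 if safety_critical else 0.0

def total_local_trust(lt, cdt, ctxt):
    # Assumed combination: the context coefficient shifts weight from
    # the local trust LT to the context-dependent trust CDT; with
    # ctxt = 0 (non-safety), TLT reduces to LT alone.
    return (1 - ctxt) * lt + ctxt * cdt
```

Under this reading, a safety-critical exchange (Ctxt = 0.5) weights the local trust and the context-dependent trust equally, which matches the stricter criteria applied to safety-related communication.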

D. Global Trust
The global trust is the overall trust of a trustee j across the entire network at time instance k and is computed by aggregating all the pairwise total local trusts assigned by each trustor i to the target trustee j at a specific time instance k, forming a single belief about the said trustee at the specified time instance, as mentioned in Algorithm 3.
In Eq. (15), TLT_{i,j,k} is the pairwise Total Local Trust assigned by a trustor i to a trustee j at time instance k (Eq. (14)), whereas GT_{i,k−1} is the global trust of a trustor i at the previous time instance k − 1. An algorithm flowchart depicting the relationships between the individual algorithms discussed in this work is presented in Fig. 4.
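One plausible reading of this aggregation, weighting each trustor's opinion by that trustor's own global trust from the previous time instance, can be sketched as follows. This weighted-average form is an assumption for illustration; the authoritative definition is Eq. (15):

```python
def global_trust(tlt_to_j, gt_prev):
    # tlt_to_j[i]: pairwise total local trust TLT_{i,j,k} assigned by
    # trustor i to trustee j; gt_prev[i]: GT_{i,k-1}, the trustor's own
    # global trust at the previous instance, used as its credibility.
    num = sum(g * t for g, t in zip(gt_prev, tlt_to_j))
    den = sum(gt_prev)
    return num / den if den else 0.0
```

Weighting opinions by each trustor's prior global trust would discount recommendations from vehicles that were themselves untrustworthy at k − 1, which is the usual motivation for carrying GT_{i,k−1} into the aggregation.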

IV. MISBEHAVIOR DETECTION
In this section, a misbehavior detection mechanism is instituted, wherein vehicles having a global trust score of less than 0.5 (i.e., the mean of the lowest (0) and the highest (1) possible trust values, otherwise stated as the neutral trust score) are considered suspicious vehicles V^s_k. An adaptive and flexible threshold is introduced to accommodate the dynamic and ever-changing nature of vehicular networks for the purpose of identifying malicious vehicles V^m_k among the suspicious vehicles and evicting these dishonest vehicles from the network using Eqs. (16) and (17). The envisaged scheme offers flagged malicious vehicles an opportunity to redeem themselves, i.e., such vehicles are not eradicated from the network the first time they are flagged. However, any subsequent flagging as malicious eliminates such vehicles from the network. This ensures that the designed model adapts to the network dynamics and does not evict any vehicle before taking the time to observe whether the vehicle has improved its behavior.
In Eqs. (16) and (17), V^s_k and V^m_k are the sets of suspicious and malicious vehicles, respectively, at time instance k, GT_{j,k} is the global trust of a trustee j at time instance k (Eq. (15)), and |v| is the number of vehicles in the network. If the global trust score GT_{j,k} of a vehicle falls below 0.5, the said vehicle is considered suspicious, i.e., it can be either trustworthy or malicious. To verify whether the said vehicle is trustworthy, a threshold Th^adapt_k is computed as defined in Eq. (16), and vehicles having a global trust value above this threshold are classified as trustworthy.
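The detection logic described above can be sketched as follows. The specific choice of the network-wide mean global trust as the adaptive threshold is an assumption for illustration, consistent with the described behavior that the threshold tracks the overall trust level; the authoritative definition is Eq. (16):

```python
def detect_malicious(gt, flagged_before):
    # gt: {vehicle_id: GT_{j,k}} at time instance k.
    # Suspicious: global trust below the neutral score of 0.5.
    suspicious = {j for j, g in gt.items() if g < 0.5}
    # Adaptive threshold (assumed here as the mean global trust over
    # all |v| vehicles): it falls when the whole network's trust falls,
    # sparing honest vehicles, and rises when overall trust rises.
    th_adapt = sum(gt.values()) / len(gt)
    malicious = {j for j in suspicious if gt[j] < th_adapt}
    # Redemption: evict only vehicles already flagged in a prior round.
    evicted = malicious & flagged_before
    return malicious, evicted
```

Passing the previously flagged set back in on the next time instance implements the two-strike policy: a first flag is a warning, a repeat flag leads to eviction.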

V. SIMULATION SETUP AND RESULTS
The dataset utilized in this paper is generated via an IoV-based simulator [40], wherein vehicles traverse the road at random speeds between 50 km/h and 70 km/h, in line with [41]. For visualization purposes, the simulation runs for a duration of 30 minutes (i.e., 1,800 seconds), three times the duration used in a recent paper published in IEEE Transactions on Vehicular Technology [42]. The stated duration is divided into 10 time instances (each corresponding to 3 minutes). The simulation time can easily be increased; however, plotting a large number of time instances would make the results difficult to visualize. This issue could be addressed by plotting only selected time instances, but doing so would affect the time-based analyses of the said results. During this time, the vehicles exchange packets with other vehicles in the network. For every interaction among a pair of vehicles (i.e., between a trustor and a trustee), the number of packets originating from the source vehicle is kept constant, whereas the number of packets successfully received at the destination vehicle varies, yielding the packet delivery ratio. The context of the said interactions is also captured in terms of either safety-critical or non-safety (infotainment) applications, and the context of communication among a specific pair of vehicles (i.e., a particular trustor and a particular trustee) is assumed to be constant at a given time instance. Moreover, the time instances when a specific interaction happened and the distance between a trustor and a trustee are also gathered, assuming the distance between a pair of vehicles remains constant throughout a given time instance. The generated dataset has 1,084 interactions arranged across 7 attributes (i.e., time, trustor, trustee, successful interactions/messages, total interactions/messages, delay, and context). Table III provides the structure of the said dataset. Fig. 5 presents an illustration of the said IoV-based simulator implemented in Java. The remaining simulations are carried out in MATLAB.

Fig. 6 depicts the effects of time decay while computing the Direct Observation for Trustee 2 (Fig. 6(a)) and Trustee 6 (Fig. 6(b)) assigned by all trustors at time instance 5. Direct trust is not merely equal to the successful message transmission rate between a trustor and the target trustee; it also takes into consideration the forgetting factor and time decay, resulting in a much more constricted direct trust. Taking into account the time decay factor, as opposed to only computing the current successful transmission ratio between a trustor and the target trustee or simply averaging the current and all past PDRs, restricts the value of the computed direct trust by reflecting the impact of the transmission history with a reasonable influence for each entry. Fig. 7 illustrates a comparison of the global trust computed by introducing the forgetting factor, i.e., the untrustworthiness of a trustee in the past, against the global trust calculated without taking into account the previous bad behavior of a trustee. It is evident from the figure that the injection of the past untrustworthiness lowers the global trust of a trustee in the event of the target trustee depicting unreliable behavior. Fig. 8 exhibits the effect of employing context on the trust assigned to Trustee 2 (Fig. 8(a)) and Trustee 6 (Fig. 8(b)) by all trustors at time instance 5. The trust computed for safety-critical message exchange among a pair of a trustor and a trustee incorporates additional parameters specific to the sensitive nature of such communication, e.g., propagation delay, familiarity, and cooperativeness, resulting in a lower or a higher trust score between the said pair.
For instance, Trustee 2 is deemed more trustworthy by Trustors 3 and 5 when exchanging safety-related information, while Trustors 4 and 6 deem it less trustworthy. This is because of the delay between a pair, as shown in Fig. 9, i.e., the safety-related trust is inversely proportional to the delay or distance between a trustor and a trustee. Fig. 10 depicts the familiarity computed for Trustee 2 vis-à-vis all the trustors at time instance 5, along with the corresponding pairwise context-dependent trust. It can be observed that the familiarity has an influence on the context-dependent trust. For instance, Trustor 4 has a higher familiarity with Trustee 2 than Trustor 5 does, whereas Trustor 6 shares a higher familiarity with Trustee 2 in contrast to Trustor 5, and the same is reflected in the context-dependent trust as well. However, the drop in familiarity from Trustor 2 to Trustor 3, or from Trustor 6 to Trustor 7, is not seen in the corresponding context-dependent trust. This is because of a significant drop in the message delay among the said pairs, resulting in an improved context-dependent trust. It has been observed that despite assigning equal weights to familiarity and delay in the context-dependent trust, the influence of delay is more noticeable than that of familiarity.
Analogous to Fig. 8, Fig. 11 also demonstrates the impact of contextual information, but on the overall, i.e., global, trust of a trustee and in the form of heat maps (i.e., Fig. 11(a) for global trust with context and Fig. 11(b) for global trust without context). The introduction of context information employs a different set of parameters to compute the trust of a non-safety-related message exchange as compared to a safety-related interaction. This translates into a much more stringent criterion when dealing with communication regarding road safety, leading to higher/lower trust scores depending on the additional context parameters. For ease of illustration, the difference between the two heat maps is depicted in Fig. 11(c). It is worth noting that the residual map consists of all positive values, i.e., it shows the difference in the global trust values in terms of positive values only. Fig. 12 and Table IV delineate a comparison of the direct and indirect trust aggregation carried out using three different approaches: BTCMV [43], the baseline [44], and the proposed scheme. BTCMV [43] is a recently published article in IEEE Transactions on Intelligent Systems in 2021. The said work utilizes Bayesian inference to compute the direct trust, which is combined with peer recommendations to determine the trustworthiness of a vehicle. The parameters considered in the referenced work are relevant to the ones employed in the envisaged scheme for comparison purposes. The baseline employs the most commonly used method for trust aggregation, i.e., associating equal weights with the contributing parameters [44]. The cited work has significant relevance for comparison, as one of the major contributions of the proposed framework is the assignment of rational weights to the contributing parameters.
The aggregated trust for BTCMV is computed by utilizing our dataset (i.e., due to the unavailability of the same dataset applied in BTCMV) on Eqs. (2), (7), and (9) in [43], i.e., the equations for the aggregate, direct, and recommendation trusts, respectively, employed in the referenced work. Moreover, utmost efforts have been made to replicate the parameters and conditions in conformity with the said referenced work.
The aggregated trust for the baseline is computed by utilizing our dataset and our Eqs. (4) and (6) for the direct and indirect trusts, respectively. However, equal weights have been assigned to each of these components (as mentioned in Eq. (25) and Table II of [44]) instead of applying our computed weights, to highlight the importance of dynamic weight assignment. The primary focus of this comparison is to take the notion of equal weights and employ it in the envisaged framework. It is worth noting that while computing the aggregated trust using BTCMV and the proposed scheme, the penalty factor and the forgetting factor have been ignored. Significant efforts have been made to calculate accurate values for these schemes considering the lack of, or insufficient, information in the referenced works. The pairwise aggregated trust evaluated by BTCMV shows that the values assigned by the trustors to Trustee 2 (Fig. 12(a)) and Trustee 6 (Fig. 12(b)) are significantly high, reflecting non-stringent criteria for trust computation. Moreover, it evaluates a trustee even if no packets are exchanged, e.g., it computes an aggregated trust assigned from Trustor 2 to Trustee 2, ignoring the fact that a vehicle will not be exchanging messages with itself. On the other hand, the baseline scheme exhibits rather unstable behavior by reflecting the interaction quality in terms of radical trust values. Furthermore, it assigns the same importance to the indirect observation as to the direct observation regardless of the number of messages exchanged among a pair of a trustor and a trustee, which is neither logical nor practical. On the contrary, the aggregated trust assessed by the proposed scheme is more rational and reasonable, and caters to the dynamic behavior of vehicles while maintaining subtle and persistent behavior.

Fig. 13 and Table V sketch a comparison of the aggregated trust evaluated using the BTCMV scheme with the predefined weights suggested in [43] against the same scheme with the proposed weights (i.e., relying on the frequency of interaction among a pair of a trustor and a trustee) as recommended in this manuscript. It is evident from the figure that introducing the proposed weights into BTCMV makes the aggregated trust stable, dynamic, and stringent by reflecting the quality of interactions among vehicles, even when the penalty factor is ignored.
Table VI outlines a comparison of the direct trust computed by RFSN [45], as reported in [43], and the proposed scheme, relying on a selected number of successful and unsuccessful interactions. RFSN [45] is a highly cited research work in the subject area, having 1,594 citations, and has been referenced in BTCMV [43] for comparative performance evaluation. Moreover, the influencing attributes utilized in the said work are relevant to the trust management model developed in this manuscript. It is apparent that RFSN relies solely on the current successful packet transmission rate between two vehicles, irrespective of the increasing number of successful or unsuccessful interactions, whereas the proposed scheme also takes into account other factors, e.g., the past packet delivery ratio, time decay, and the forgetting factor, to build a much more dynamic and realistic model. Fig. 14 presents a comparison of the steady threshold utilized for malicious vehicle detection (Fig. 14(a)) against the proposed adaptive threshold (Fig. 14(b)). The steady threshold does not cope with the dynamic demands of the vehicular network and eliminates any vehicle having a trust score below the predefined threshold without taking into consideration the overall network situation regarding communications. On the contrary, the proposed adaptive threshold caters to the dynamic nature of the network by accounting for the global conditions; e.g., in the event that the global trust value of each vehicle is declining owing to a low communication rate in the entire network, the threshold value also decreases to avoid the eradication of honest vehicles, while in the case of a rise in the overall global trust, the threshold value also increases. Moreover, the proposed malicious vehicle detection mechanism provides vehicles with an opportunity to recover from bad behavior and only eliminates them from the network in the case of subsequent misdemeanors. It is evident from Fig. 14(b) that the vehicles flagged at time instance 2 are in fact improving their behavior and, accordingly, their trust scores are recovering as well. This behavior confirms that it is of great significance to allow the flagged vehicles to rectify their situation. Another issue with a predefined threshold is the quantification rationale. If the said threshold is kept too high, it eliminates honest vehicles along with the dishonest ones. On the other hand, if it is kept too low, malicious vehicles can remain in the network undetected. Some existing research studies have suggested a threshold of 0.5 [42], which is quite high, whereas some have defined it at 0.1 [46], which is quite low. For demonstration purposes, the steady threshold in Fig. 14(a) is kept at 0.3, i.e., the mean of the previously discussed threshold values from [42] and [46].

A. On-Off Attack Detection
In order to realize resistance against an on-off attack, a malicious vehicle is injected into the network. It launches an on-off attack by behaving honestly and interacting actively with peer vehicles between time instances 1 and 3, and then starts behaving selfishly, reducing the number of interactions with neighboring vehicles to preserve its resources from time instance 3 onward. Fig. 15(a) illustrates the detection of the on-off attacker by employing the proposed adaptive threshold. The attack begins at time instance 3, and the proposed system is able to detect the attacker at time instance 5 due to 1) the formulation of the trust attributes that results in the trust drop, and 2) the adaptive threshold.
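The reported detection timeline can be reproduced qualitatively with a toy model. All numbers here, including the interaction rates, the forgetting factor, the initial trust, and the fixed 0.5 threshold, are illustrative assumptions, not the paper's parameters:

```python
def onoff_detection(rates, threshold=0.5, forget=0.7, initial=0.9):
    # rates[t-1]: the attacker's interaction quality at time instance t;
    # trust is smoothed with a forgetting factor, so past honest
    # behavior decays gradually rather than vanishing at once.
    trust, flagged_at = initial, None
    for t, r in enumerate(rates, start=1):
        trust = forget * trust + (1 - forget) * r
        if flagged_at is None and trust < threshold:
            flagged_at = t
    return flagged_at

# Honest at instances 1-2, selfish from instance 3 onward.
print(onoff_detection([0.9, 0.9, 0.2, 0.2, 0.2]))  # prints 5
```

With these illustrative values the smoothed trust stays above 0.5 at instances 3 and 4 (the memory of past honesty) and crosses it at instance 5, mirroring the two-instance detection lag observed in Fig. 15(a).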

B. Selective Node Attack Detection
To demonstrate resistance against a selective node attack, a misbehaving vehicle is injected into the network that launches a selective node attack from the beginning, i.e., starting at time instance 1, wherein it interacts honestly and actively with one group of vehicles while not interacting with the remaining vehicles to the extent that it should under normal conditions. The attack shown in Fig. 15(b) begins at time instance 1, and the attacker is successfully flagged by the proposed system at time instance 5 due to 1) the recommendation-based trust formulation, and 2) the adaptive threshold.

VI. CONCLUSION AND FUTURE DIRECTIONS
This paper focused on developing a trust management scheme relying on diverse influencing parameters coupled with the context of the messages exchanged between vehicles. Moreover, it addressed the challenging issue of weight quantification by associating rational weights computed via contributing attributes related to the network and communication dynamics. Furthermore, it catered for resilience against misbehavior while formulating the constituents of trust, and it employed a flexible and adaptive threshold to mitigate dishonest vehicles. In the future, we intend to extend this research work by addressing the trade-off between trust-based security and the privacy concerns arising from the same. Moreover, we intend to add the notion of a sliding time window that takes into consideration a reasonable number of historical interactions. Analogous to other networks [47], it is of great significance that a generic adversarial model, along with attack-specific adversarial models, be developed to strengthen the resilience of vehicular networks [48]. Accordingly, designing a separate adversary model able to identify a variety of specific attacks and evict the corresponding attackers is also part of the subsequent research work. Furthermore, the firmware and software security [49] of the individual components in the IoT, and specifically the IoV, infrastructure will also be investigated.