Network programmability is crucial for addressing the multiplicity and heterogeneity of network services, the diversity of the underlying infrastructure of Sixth Generation (6G) communication systems, and the requirement for maximum efficiency. The programmability of a service platform enables algorithmic network management by leveraging contemporary software virtualization technologies. Moreover, as the number of local breakouts (both public and private) is anticipated to grow exponentially, network programmability will abstract the essential network/service and resource configuration as well as the production and administration of policy lifecycles. Network programmability is a central point of interest for Hexa-X, the European 6G flagship project, which aims to facilitate dynamic adaptation to changing network situations and requirements for the most efficient use of available resources. To explore such a critical enabler of future mobile networks, this article addresses the role of network and service programmability and its impact on various aspects of 6G within the context of Hexa-X. To this end, the article begins by discussing Hexa-X's proposed service Management and Orchestration (M&O) framework for 6G. Based on this framework, it identifies and explores in greater detail the programmability of four primary processes in 6G: expressing application and service requirements; service description models and profiling; monitoring and diagnostics; and reasoning. Beyond the scope of Hexa-X, this article aims to serve as a foundation for future research into network and service programmability in 6G.
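As a purely illustrative aside (not part of the Hexa-X specification), the following minimal Python sketch shows one way an application could express its service requirements as a declarative profile that a programmable M&O loop can monitor and reason about. All names and fields here are hypothetical.

```python
# Hypothetical sketch: a declarative service profile expressing application
# requirements, plus a trivial reasoning check against observed KPIs.
from dataclasses import dataclass, field
from typing import List


@dataclass
class ServiceProfile:
    """Declarative description of an application's service requirements."""
    name: str
    max_latency_ms: float           # end-to-end latency budget
    min_throughput_mbps: float      # sustained throughput target
    availability: float = 0.999     # required service availability
    scope: str = "edge"             # preferred deployment scope (edge/core/cloud)
    monitored_kpis: List[str] = field(
        default_factory=lambda: ["latency", "throughput"])


def violates_profile(profile: ServiceProfile,
                     observed_latency_ms: float,
                     observed_throughput_mbps: float) -> bool:
    """Minimal reasoning step: compare observed KPIs with the declared profile."""
    return (observed_latency_ms > profile.max_latency_ms
            or observed_throughput_mbps < profile.min_throughput_mbps)


if __name__ == "__main__":
    profile = ServiceProfile(name="ar-offload",
                             max_latency_ms=10.0,
                             min_throughput_mbps=50.0)
    # A monitoring/diagnostics component would supply real measurements here.
    print(violates_profile(profile,
                           observed_latency_ms=12.3,
                           observed_throughput_mbps=80.0))
```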

Matteo Pergolesi et al.

Fifth generation (5G) and beyond systems require flexible and efficient monitoring platforms to guarantee optimal key performance indicators (KPIs) in various scenarios. Their applicability in Edge computing environments requires lightweight monitoring solutions. This work evaluates different candidate technologies for implementing a monitoring platform for 5G and beyond systems in these environments. For the monitoring data plane, we evaluate different virtualization technologies, including bare-metal servers, virtual machines, and orchestrated containers. We show that containers not only offer superior flexibility and deployment agility but also achieve better throughput and latency. In addition, we explore the suitability of the Function-as-a-Service (FaaS) serverless paradigm for deploying the functions used to manage the monitoring platform. This is motivated by the event-oriented nature of those functions, which are designed to set up the monitoring infrastructure for newly created services. When the FaaS warm start mode is used, the platform gives users the perception of resources that are always available. When the cold start mode is used, the containers running the application's modules are automatically destroyed when the application is not in use. Our analysis compares both modes with a standard microservice deployment. The experimental results show that the cold start mode produces a significant latency increase, along with potential instabilities; its use is therefore not recommended despite the potential savings in computing resources. Conversely, when the warm start mode is used to execute configuration tasks of the monitoring infrastructure, it provides execution times similar to those of a microservice-based deployment. In addition, the FaaS approach significantly simplifies the code logic in comparison with microservices, reducing the lines of code to less than 38% and thus shortening development time. FaaS in warm start mode therefore represents the best candidate technology to implement such management functions.
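To make the FaaS-based management concrete, here is a minimal, hypothetical sketch (not the authors' implementation) of an event-driven handler that sets up monitoring for a newly created service. The event schema, field names, and the returned configuration format are illustrative assumptions only.

```python
# Hypothetical sketch: a FaaS handler invoked on a "service created" event that
# prepares the monitoring configuration (e.g. a scrape target) for that service.
import json


def register_monitoring_target(event: dict) -> dict:
    """Configure KPI monitoring for the service described in the event."""
    service = event["service_name"]
    endpoint = event["metrics_endpoint"]          # e.g. "10.0.0.5:9100/metrics"
    kpis = event.get("kpis", ["latency", "throughput"])

    # In a real platform this configuration would be pushed to the monitoring
    # system's API; here we only assemble it to illustrate the function's logic.
    scrape_config = {
        "job_name": f"monitor-{service}",
        "targets": [endpoint],
        "metrics": kpis,
    }
    # With warm starts, the runtime keeps this function's container alive between
    # events, so repeated invocations avoid the cold-start latency discussed above.
    return {"status": "configured", "config": scrape_config}


if __name__ == "__main__":
    sample_event = json.dumps({"service_name": "urllc-slice-1",
                               "metrics_endpoint": "10.0.0.5:9100/metrics"})
    print(register_monitoring_target(json.loads(sample_event)))
```

Because the handler is stateless and triggered per event, the same logic can run unchanged under a FaaS runtime or inside a long-lived microservice; the difference discussed in the abstract lies in when the hosting container is created and destroyed.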