
AI-based orchestration for cloud-native ultra low latency applications

Gent

Enable next-generation services in immersive telepresence, automotive, eHealth, retail and industry, meeting their ultra-low latency and high-throughput requirements through advanced orchestration techniques.

Applications that rely on remote communication, collaboration and sensing are becoming increasingly distributed across large networks and demand ever-growing amounts of computational and networking resources. Next-generation services in eHealth, immersive telepresence, automotive, retail and industry all face ultra-low latency and high-throughput requirements that cannot be met by monolithic deployments. Such applications are instead instantiated as Service Function Chains (SFCs), involving multiple multi-tenant components that are often themselves AI-based.
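To make the SFC notion concrete, the sketch below models a chain as an ordered list of virtual network functions and sums per-function processing latency with the latency of the links between them. All names and numbers (the `VNF` class, the telepresence chain, the millisecond values) are illustrative assumptions, not part of the project description:

```python
from dataclasses import dataclass

@dataclass
class VNF:
    """One virtual network function in the chain (hypothetical model)."""
    name: str
    proc_ms: float  # processing latency on its host, in milliseconds

def chain_latency(chain: list[VNF], link_ms: list[float]) -> float:
    """End-to-end latency of an SFC: the processing time of every
    function plus the latency of each link between consecutive functions."""
    assert len(link_ms) == len(chain) - 1, "one link per consecutive pair"
    return sum(f.proc_ms for f in chain) + sum(link_ms)

# A toy telepresence chain: decode -> AI inference -> encode
sfc = [VNF("decode", 2.0), VNF("infer", 8.0), VNF("encode", 2.5)]
print(chain_latency(sfc, [1.0, 1.5]))  # 15.0
```

Even this toy model shows why placement matters: the link terms depend entirely on where the orchestrator deploys each function.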

Potential breakthroughs at the network level include novel paradigms for higher flexibility, precision and scalability, such as Deterministic Networking (DetNet), Time-Sensitive Networking (TSN) and Segment Routing (SR), each with its own latency-aware design features. In addition, Intent-based Networking (IBN) holds promise for enforcing networking rules without relying on explicit system details.

Efficient orchestration strategies for computational resources are essential to complement these network management solutions and to provide efficient, high-throughput data processing. Highly dynamic, AI-based micro-service provisioning is required to overcome current resource fragmentation among multiple network and service providers and to reduce execution times along the end-to-end path of the service chain, while ensuring reliability, scalability, security and energy efficiency.

Many of these networking and orchestration solutions remain largely unexplored, especially their interactions and control loops in cloud-native environments that span multiple domains. Integrating Deep Learning (DL) methods into orchestration practices and designing Reinforcement Learning (RL) systems capable of performing service scheduling are among the major research challenges in the network and cloud management domain.
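As a minimal illustration of RL-driven service scheduling, the sketch below uses a bandit-style Q-learning loop to learn where to place a service: the reward is negative observed latency, so the learned policy converges on the lowest-latency node. The node names and latency figures are invented for illustration; a real scheduler would face a far richer state space (load, chains, multi-domain constraints):

```python
import random

# Hypothetical per-node latencies (ms) observed when the service runs there.
NODE_LATENCY_MS = {"edge-a": 4.0, "edge-b": 6.5, "cloud": 20.0}
ACTIONS = list(NODE_LATENCY_MS)

def train(episodes: int = 2000, alpha: float = 0.1, eps: float = 0.2) -> dict:
    """Bandit-style Q-learning: epsilon-greedy exploration, reward is
    negative latency, single-step update (no successor state)."""
    q = {a: 0.0 for a in ACTIONS}
    for _ in range(episodes):
        a = random.choice(ACTIONS) if random.random() < eps else max(q, key=q.get)
        reward = -NODE_LATENCY_MS[a]
        q[a] += alpha * (reward - q[a])
    return q

random.seed(0)
q = train()
print(max(q, key=q.get))  # edge-a
```

The same structure scales conceptually to chains of services: the action becomes a joint placement decision and the reward the end-to-end SFC latency.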


Required background: Engineering Science, Computer Science or equivalent, Engineering Technology

Type of work: 60% modeling/simulation, 30% experimental, 10% literature

Supervisor: Filip De Turck

Co-supervisor: Bruno Volckaert

Daily advisor: Filip De Turck

The reference code for this position is 2025-099. Mention this reference code on your application form.

