Multi-view sensing in dense 6G networks using distributed deep learning

Antwerpen

Exploit distributed deep learning and data fusion techniques to sense the environment using wireless signals of widely distributed and dense 6G networks

The upcoming sixth-generation (6G) wireless communication system promises support for a wide range of challenging application scenarios that combine the virtual and physical world into a seamless cyber-physical continuum. Enabling synchronized interaction across this continuum requires not only reliable and low-latency connectivity, but also highly accurate sensing of the world around us. Therefore, integrated sensing and communications (ISAC) is regarded as one of the most promising potential features of 6G networks. ISAC will allow wireless signals to simultaneously provide both conventional communication services and environmental sensing services. This will support gesture recognition, pose estimation, tracking, and 3D environment reconstruction at an unprecedented scale.

 

The 6G network architecture is expected to be widely distributed, consisting of distributed remote radio units that make use of distributed massive MIMO and operate in a cell-free manner. Combined with the highly dense deployment of base stations and user equipment, this creates unique opportunities for distributed multi-view sensing, where sensing data from many different angles and locations is fused to create a dynamic 3D view at city scale or beyond. Processing and combining such immense amounts of channel information requires the development of resource-efficient, real-time distributed deep learning methods that can derive relevant features from raw channel data at the source and fuse them into a coherent 3D view of the environment.
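The extract-at-the-source-then-fuse idea can be sketched in a few lines. This is a minimal illustration only, not the project's actual pipeline: the tiny one-layer feature extractor, the random stand-in for channel state information (CSI), and the mean-pooling fusion step are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(csi, weights):
    """Per-radio-unit feature extraction: a tiny one-layer network that maps
    a raw CSI snapshot to a compact feature vector, so only features
    (not raw channel data) need to leave the radio unit."""
    return np.tanh(csi @ weights)

def fuse_views(feature_list):
    """Central fusion: mean-pool the per-view feature vectors into a single
    environment representation (a stand-in for a learned fusion network)."""
    return np.mean(np.stack(feature_list), axis=0)

n_units, n_subcarriers, feat_dim = 4, 64, 16
weights = rng.standard_normal((n_subcarriers, feat_dim)) * 0.1

# Simulated per-subcarrier CSI magnitudes at each distributed radio unit.
views = [rng.standard_normal(n_subcarriers) for _ in range(n_units)]

features = [extract_features(v, weights) for v in views]
fused = fuse_views(features)
print(fused.shape)  # (16,)
```

The point of the sketch is the bandwidth asymmetry: each unit transmits a 16-dimensional feature vector instead of its 64-value raw snapshot, and the fusion step scales to any number of views.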

 

The first goal of this PhD project is to collect multi-view wireless sensing data using a distributed 6G testbed, consisting of several SDR-based mmWave radio transceivers with beamforming and MIMO capabilities. Subsequently, the collected data will be used to design, train, and test a distributed deep learning pipeline for real-time multi-view sensing in dynamic 3D environments.


Required background: Computer Science, Electrical Engineering, Telecommunications Engineering, or equivalent. Prior knowledge of signal processing, machine learning, and wireless communications is a plus.

Type of work: 10% literature, 40% modelling, 50% implementation/experimentation

Supervisor: Jeroen Famaey

Co-supervisor: Nazar Idrees

Daily advisor: Raf Berkvens

The reference code for this position is 2025-084. Mention this reference code on your application form.
