Physics-informed generative vision models for improved sample efficiency

Leuven

Making machines see better through the power of generative AI

Machine perception, vision in particular, is a cornerstone of many recent technological advances (e.g., in autonomous driving and medical imaging). Its success depends both on the development of high-fidelity sensor hardware and on powerful AI models that interpret the sensory data and produce accurate predictions or sound decisions. An active area of research focuses on the benefits of incorporating sensory modalities other than traditional RGB imaging into machine vision (e.g., capturing light at wavelengths not visible to humans). A key challenge, however, is the scarcity of data in these domains, as data capture with new candidate sensors tends to be costly.

The main goal of this PhD research is to overcome this challenge by studying and developing generative AI-based methods to synthesize data of a new target sensory modality. The focus will be on how to generate highly realistic, yet diverse data with a minimum of examples to learn from. We envision several avenues for exploration. One is to systematically compare the sample efficiency of current generative model architectures and test proposed architectural improvements. Another is to adopt transfer learning techniques to optimally leverage knowledge from less scarce modalities. Finally, a key part of the efforts will be devoted to physics-informed approaches. That is, can we limit the number of required training samples by building in strong priors about the physical laws and regularities governing the target sensory modality? For example, the literature on infrared image generation describes loss functions that encourage outputs whose physical components (temperature, emissivity, thermal texture) adhere to radiation laws.
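To make the physics-informed idea concrete, the sketch below shows one possible form such a loss term could take. It is an illustration only, not a method prescribed by this project: it assumes a hypothetical generator that, alongside the synthetic infrared image, predicts per-pixel temperature and emissivity maps, and it adds a penalty that nudges the generated radiance towards the Stefan-Boltzmann law on top of a standard reconstruction loss.

```python
# Minimal sketch of a physics-informed loss for infrared image synthesis.
# All tensor names and shapes are illustrative assumptions, not part of the
# posting: generated_ir/target_ir are (B, 1, H, W) images in normalised
# radiance units, temperature is a (B, 1, H, W) map in kelvin, emissivity
# is a (B, 1, H, W) map in [0, 1].
import torch
import torch.nn.functional as F

STEFAN_BOLTZMANN = 5.670e-8  # W / (m^2 K^4)

def physics_informed_loss(generated_ir, target_ir, temperature, emissivity,
                          radiance_scale=1.0, lambda_phys=0.1):
    # Data term: match the (scarce) real infrared examples.
    recon = F.l1_loss(generated_ir, target_ir)

    # Physics prior: the radiance implied by the predicted physical
    # components (Stefan-Boltzmann law, L = eps * sigma * T^4) should
    # agree with the generated image, up to a fixed scaling factor.
    expected_radiance = emissivity * STEFAN_BOLTZMANN * temperature.pow(4)
    phys = F.mse_loss(generated_ir, radiance_scale * expected_radiance)

    return recon + lambda_phys * phys
```

In such a setup, the weight lambda_phys would control the trade-off between fitting the few available real examples and obeying the physical prior, which is exactly the knob that could buy sample efficiency.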

Together, the findings of this PhD research will be of direct relevance to ongoing imec projects in several application domains, including but not limited to developing next-gen automotive sensors and improving tumor detection with hyperspectral imaging.

We offer a challenging, stimulating and pleasant research environment, where PhD students can engage in international research on artificial intelligence with a close link to the underlying hardware. A PhD student working on this topic will be part of the AI & Algorithms department, but will also collaborate closely with imec hardware, sensor development and university teams to produce novel solutions together.

Our ideal candidate for this position has the following qualifications: 

  • You have a Master’s degree in Computer Science, Informatics, Physics, Engineering, Electronics or a related field.
  • You have experience with deep learning, ideally in the visual domain.
  • You have strong Python skills and familiarity with deep learning libraries such as PyTorch.
  • Knowledge of physics and/or sensors is considered a plus.
  • You are able to plan and carry out your tasks independently.
  • You have strong analytical skills and the ability to think critically about research results.
  • You are a responsible, communicative and flexible person.
  • You are a team player.
  • You are fluent in English (speaking and writing).



Required background: Master’s degree in Computer Science, Informatics, Physics, Engineering, Electronics, or a related field, with knowledge of artificial intelligence and deep learning

Type of work: Modelling, algorithmic and system design, experimentation, literature study

Supervisor: Steven Latré

Co-supervisor: Tom De Schepper

Daily advisors: Lore Goetschalckx, Kaili Wang, Siri Willems

The reference code for this position is 2025-113. Mention this reference code on your application form.

