Leuven
Develop cutting-edge machine learning algorithms for the acceleration of numerical thermal analysis of automotive chiplets and 3D packages.
Following Moore’s Law, the size of electronic devices in integrated circuits has been decreasing ever since the invention of the transistor. Today, we are reaching dimensions in the nanometer range. This area scaling results in high power densities and, consequently, high operating temperatures. Another prominent packaging trend is heterogeneous integration, where multiple chips are combined in a single package. The most important applications include automotive packages, where different ‘chiplets’ – modular heterogeneous chips mounted on an interposer – enable the high-performance compute capabilities required for autonomous driving, and high-performance 3D chip packages that provide the high bandwidth needed for efficient training and evaluation of machine learning and AI models.
The goal of the thermal simulation of these packages is to obtain the temperature distribution inside the chips and to study methods for improving the thermal design. The current challenge in thermal simulation is the multi-scale aspect in both time and space: any change in the cooling solution of the chip package (cm scale) will influence the device performance (nm scale). Widely used modelling tools such as finite element (FE) models for thermal simulation are lacking in this regard, as the number of required elements, and hence the computational time, increases rapidly when details at the smallest scales must be captured.
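To make the cost argument concrete, the sketch below solves a steady-state heat conduction problem on a 1-D rod with a finite-difference scheme (a simple stand-in for the FE models mentioned above; all material and geometry values are illustrative assumptions, not taken from the text). The dense solve scales as O(n^3) in the number of degrees of freedom, which is why resolving nm-scale detail inside a cm-scale package quickly becomes intractable:

```python
import numpy as np

# Steady-state heat conduction -k * d2T/dx2 = q on a 1-D rod with
# fixed-temperature ends, discretised with central finite differences.
# Illustrative values only: rod length, conductivity, heating, boundary temp.
n = 1000                                   # interior nodes (DOFs)
L, k, q, T_end = 0.01, 150.0, 1e9, 25.0    # m, W/(m K), W/m^3, degC
dx = L / (n + 1)

# Tridiagonal system A T = b from the stencil k*(2T_i - T_{i-1} - T_{i+1})/dx^2 = q.
A = (np.diag(np.full(n, 2.0))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) * k / dx**2
b = np.full(n, q)
b[0] += k / dx**2 * T_end                  # left boundary contribution
b[-1] += k / dx**2 * T_end                 # right boundary contribution

T = np.linalg.solve(A, b)                  # dense O(n^3) solve: cost explodes with n
print(f"peak temperature: {T.max():.1f} degC")
```

The analytic solution for this case is T(x) = T_end + q*x*(L-x)/(2k), so the numerical peak can be checked against q*L^2/(8k) above the boundary temperature. In practice, FE packages use sparse solvers, but the unfavourable scaling with mesh refinement remains the core bottleneck.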
The objective of this PhD work is to develop accurate and efficient machine learning algorithms for the steady-state and transient multi-scale thermal analysis of multi-chip package assemblies. There have been many efforts in recent studies to make the simulation process more efficient. The general idea is to build a reduced-order model in which the total number of degrees of freedom (DOFs) is reduced to a minimum, which in turn speeds up the simulation process. Machine learning (ML) methods for regression are a popular choice, as they are computationally cheap to evaluate once trained. One possible ML algorithm for this purpose is the feed-forward artificial neural network (ANN), which can be trained on either experimental or numerical simulation data. The trained ANN can then be used as a black-box tool to replace the traditional, time-consuming FE simulations [1]. One of the main remaining challenges for ANNs is incorporating known physics during training [2]. These so-called physics-informed neural networks are a challenging but very promising route for multi-scale thermal simulation. Recent advances in graph neural networks (GNNs) [3] also show promising results in learning mesh-based physical simulation data.
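As a minimal illustration of the surrogate idea (a sketch, not part of the position description), the snippet below trains a tiny feed-forward ANN by hand in numpy to regress a 1-D steady-state temperature profile. The analytic target T(x) = x*(1-x) (normalised units) stands in for FE simulation data, which is what the actual training set would be:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "simulation" data: normalised temperature rise in a uniformly
# heated rod with fixed ends. A real surrogate would use FE results instead.
x = rng.uniform(0.0, 1.0, size=(256, 1))
t = x * (1.0 - x)

# Tiny feed-forward network: 1 -> 16 -> 1, tanh hidden activation.
W1 = rng.normal(0.0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)

lr = 0.05
for step in range(5000):
    h = np.tanh(x @ W1 + b1)          # hidden activations
    pred = h @ W2 + b2                # linear output layer
    err = pred - t
    # Manual backpropagation of the mean-squared-error loss.
    g_pred = 2.0 * err / len(x)
    gW2 = h.T @ g_pred; gb2 = g_pred.sum(axis=0)
    g_h = (g_pred @ W2.T) * (1.0 - h**2)   # tanh derivative
    gW1 = x.T @ g_h; gb1 = g_h.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((np.tanh(x @ W1 + b1) @ W2 + b2 - t) ** 2))
print(f"final MSE: {mse:.2e}")
```

Once trained, evaluating the network is a handful of matrix products, which is the efficiency gain over rerunning the full numerical model. A physics-informed variant would add the residual of the heat equation to this loss, and a GNN would replace the fixed input vector with the simulation mesh itself.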
In this PhD work, the following activities are foreseen:
[1] D. Coenen, et al., "Benchmarking of Machine Learning Methods for Multiscale Thermal Simulation of Integrated Circuits," IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, Vol.42, No.7, pp.2264-2275, July 2023.
[2] M. Raissi et al., "Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations," Journal of Computational Physics, Vol. 378, pp. 686-707, Feb. 2019.
[3] M. Maurizi et al., "Predicting stress, strain and deformation fields in materials and structures with graph neural networks," Scientific Reports, Vol. 12, Article 21834, 2022, https://doi.org/10.1038/s41598-022-26424-3.
Required background: Engineering Science (Electrical, Mechanical, Computer), Physics, Mathematics
Type of work: 70% simulation/coding, 20% experimental, 10% literature
Supervisor: Houman Zahedmanesh
Co-supervisor: Herman Oprins
Daily advisor: David Coenen
The reference code for this position is 2025-095. Mention this reference code on your application form.