Portfolio

Neural Network-Based Modeling of Ultrafast Laser-Induced Heat Transfer in Thin Films

In this project, I developed an artificial neural network (ANN) framework to simulate ultrafast heat conduction in double-layered metallic thin films subjected to ultrashort-pulsed laser heating. Traditional approaches rely on solving the parabolic two-temperature model (TTM), which captures the coupled dynamics of electron and lattice temperatures at micro/nanoscale time scales. Our ANN-based method offers a data-driven alternative capable of approximating the TTM solution with strong theoretical convergence guarantees. I contributed to the design and implementation of the ANN architecture and helped evaluate its performance against analytical solutions. The model accurately predicted the thermal response of a gold-on-chromium thin-film system under femtosecond laser excitation, demonstrating the potential of physics-informed ML for nanoscale thermal analysis in advanced materials and laser processing applications.
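
For context, the parabolic TTM couples the electron temperature T_e and the lattice temperature T_l through an electron-phonon coupling factor G. A standard one-dimensional form is sketched below in generic notation (not the exact equations or parameters used in the project), with lattice heat conduction neglected as is common for metals:

```latex
C_e(T_e)\,\frac{\partial T_e}{\partial t}
  = \frac{\partial}{\partial x}\!\left(k_e\,\frac{\partial T_e}{\partial x}\right)
  - G\,(T_e - T_l) + S(x,t),
\qquad
C_l\,\frac{\partial T_l}{\partial t} = G\,(T_e - T_l)
```

Here C_e and C_l are the electron and lattice volumetric heat capacities, k_e is the electron thermal conductivity, and S(x, t) is the absorbed laser source term; in a double-layered film these equations typically hold in each layer with material-specific parameters and interface conditions.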

Enhancing Neural Operator Surrogates with Diffusion Models for High-Frequency Turbulence

In this collaborative project, we addressed a key limitation of neural operators in modeling turbulent flow: their inability to capture fine-scale, high-frequency structures. While neural operators such as Fourier Neural Operators (FNO) and DeepONets offer scalable and efficient surrogate modeling, their outputs tend to be overly smooth, failing to reproduce the rich spectral content of turbulence. To overcome this, we developed a hybrid framework in which generative diffusion models are conditioned on neural operator predictions, allowing the diffusion model to restore the high-frequency components lost during surrogate approximation. I contributed to the development of the hybrid architecture and led the validation across diverse datasets, including high-Reynolds-number jet flow simulations and experimental Schlieren velocimetry. Our method achieves markedly improved energy-spectrum alignment and enables temporally stable autoregressive rollouts. Spectral analysis via Proper Orthogonal Decomposition (POD) further confirms enhanced fidelity in both space and time. The framework offers a generalizable approach to physics-informed generative enhancement, applicable to scientific systems that require fine-scale resolution.
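
A minimal sketch of the conditioning idea, assuming a DDPM-style denoiser in PyTorch; the network, noise schedule, and tensor shapes below are illustrative assumptions, not the project's actual architecture. The operator prediction is fed to the denoiser as an extra input channel, so the learned reverse process adds back the high-frequency content the surrogate smooths out:

```python
# Illustrative sketch (not the project code): a denoiser conditioned on a
# neural-operator prediction, trained with the standard DDPM noise-prediction loss.
import torch
import torch.nn as nn

class SimpleDenoiser(nn.Module):
    """Tiny conv net: inputs = [noisy field, operator prediction, step], output = noise estimate."""
    def __init__(self, channels=1, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * channels + 1, hidden, 3, padding=1), nn.GELU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.GELU(),
            nn.Conv2d(hidden, channels, 3, padding=1),
        )

    def forward(self, x_noisy, x_operator, t_frac):
        # Broadcast the normalized diffusion step as an extra channel.
        t_map = t_frac.view(-1, 1, 1, 1).expand(-1, 1, *x_noisy.shape[-2:])
        return self.net(torch.cat([x_noisy, x_operator, t_map], dim=1))

# Linear beta schedule for the forward (noising) process.
T = 200
betas = torch.linspace(1e-4, 2e-2, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

def training_step(model, x_true, x_operator):
    """One denoising step: predict the noise added to the true (high-fidelity) field."""
    b = x_true.shape[0]
    t = torch.randint(0, T, (b,))
    noise = torch.randn_like(x_true)
    a_bar = alphas_bar[t].view(-1, 1, 1, 1)
    x_noisy = a_bar.sqrt() * x_true + (1 - a_bar).sqrt() * noise
    pred = model(x_noisy, x_operator, t.float() / T)
    return nn.functional.mse_loss(pred, noise)

# Toy usage with random tensors standing in for reference snapshots and FNO predictions.
model = SimpleDenoiser()
loss = training_step(model, torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64))
loss.backward()
```

At inference time, the trained denoiser would be run through the usual reverse-diffusion loop with the operator output held fixed as the condition, so the same correction can be applied on top of autoregressive surrogate rollouts.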

Data-Efficient Inverse Design of Architected Metamaterials using Neural Operators

In this collaborative project, we developed a scientific machine learning framework for the inverse design of micro-architected metamaterials from sparse, high-fidelity experimental data. Traditional design methods for such materials often rely on dense simulations or costly lab experiments, which is particularly challenging for nonlinear and stochastic microstructures. Our approach leverages deep neural operators, including DeepONet and its variants, to learn the complex mappings between microstructural features and their mechanical responses directly from data. I contributed to the implementation and evaluation of the neural operator models and supported the comparative analysis between standard neural networks and operator-based architectures. Our results on spinodal microstructures fabricated via two-photon lithography demonstrated predictive accuracy within 5–10%, highlighting the method’s viability under data-constrained conditions. This work illustrates the power of integrating advanced ML with nanoscale experimentation to accelerate the design of next-generation mechanical metamaterials.
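
To illustrate the operator-learning setup, here is a minimal DeepONet sketch in PyTorch. The split into a branch network (encoding a sampled microstructure descriptor) and a trunk network (encoding the query point, e.g., an applied strain level) follows the standard DeepONet construction; the dimensions, feature choices, and variable names are assumptions for illustration, not the project's implementation:

```python
# Illustrative DeepONet sketch (not the project code): branch net encodes the
# microstructure, trunk net encodes the query point, and their dot product
# gives the predicted mechanical response.
import torch
import torch.nn as nn

def mlp(sizes):
    """Small fully connected network with Tanh activations between hidden layers."""
    layers = []
    for i in range(len(sizes) - 1):
        layers.append(nn.Linear(sizes[i], sizes[i + 1]))
        if i < len(sizes) - 2:
            layers.append(nn.Tanh())
    return nn.Sequential(*layers)

class DeepONet(nn.Module):
    def __init__(self, n_sensors=32, n_query=1, width=64, p=32):
        super().__init__()
        self.branch = mlp([n_sensors, width, width, p])  # microstructure descriptor -> coefficients
        self.trunk = mlp([n_query, width, width, p])     # query coordinate -> basis functions

    def forward(self, u, y):
        # u: (batch, n_sensors) sampled microstructure features
        # y: (batch, n_query)   query point, e.g. applied strain
        return (self.branch(u) * self.trunk(y)).sum(dim=-1, keepdim=True)

# Toy usage: predict a scalar response (e.g., stress) at one query strain per sample.
model = DeepONet()
u = torch.randn(8, 32)   # stand-in for sampled spinodal microstructure features
y = torch.rand(8, 1)     # stand-in for applied strain
pred = model(u, y)       # shape (8, 1)
loss = nn.functional.mse_loss(pred, torch.randn(8, 1))
loss.backward()
```

The dot product of branch and trunk outputs lets a single trained model be evaluated at arbitrary query points, which is the property that distinguishes operator-based architectures from the standard neural networks used in the comparison.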