The learning dynamics of modern neural networks remain an open problem in deep learning. The Neural Tangent Kernel (NTK) offers an elegant description of training dynamics in the infinite-width limit, yet its classical formulation assumes a static dataset. Modern training practice departs from this strong assumption through on-the-fly data augmentations (e.g., additive noise). In this work, we conduct an NTK-driven analysis of how data transformations affect a neural network's evolution in function space. Our theoretical contributions characterize how repeated Gaussian perturbations drawn from NTK-derived covariances can steer neural-network optimization toward user-specified behavior. We validate these theoretical insights empirically in controlled experiments. Taken together, our results lay the foundation for a promising research direction that turns the NTK from a descriptive into a prescriptive tool, enabling grounded interventions that control training trajectories and generalization behavior at inference.
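The abstract does not spell out how the NTK-derived covariance enters the augmentation, so the following is only a minimal illustrative sketch under one plausible reading: at each step, draw a correlated Gaussian perturbation with covariance proportional to the empirical NTK Gram matrix of the current batch and apply it on the fly in function (target) space. All names here (`init_mlp`, `empirical_ntk`, the noise scale `sigma`) are hypothetical, not the paper's actual construction.

```python
# Hypothetical sketch: on-the-fly Gaussian augmentation with an
# NTK-derived covariance, applied in function/target space.
import jax
import jax.numpy as jnp

def init_mlp(key, sizes):
    """Initialize a small MLP as a list of (W, b) pairs."""
    params = []
    for din, dout in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        params.append((jax.random.normal(sub, (din, dout)) / jnp.sqrt(din),
                       jnp.zeros(dout)))
    return params

def mlp(params, x):
    for W, b in params[:-1]:
        x = jnp.tanh(x @ W + b)
    W, b = params[-1]
    return (x @ W + b).squeeze(-1)  # scalar output per example

def empirical_ntk(params, x):
    """K[i, j] = <grad_theta f(x_i), grad_theta f(x_j)>, via flattened Jacobians."""
    jac = jax.jacobian(mlp)(params, x)  # pytree: each leaf (batch, *param_shape)
    flat = jnp.concatenate([j.reshape(x.shape[0], -1)
                            for j in jax.tree_util.tree_leaves(jac)], axis=1)
    return flat @ flat.T

key, kx, kp = jax.random.split(jax.random.PRNGKey(0), 3)
x = jax.random.normal(kx, (32, 4))        # toy batch
y = jnp.sin(x.sum(axis=1))                # toy regression targets
params = init_mlp(kp, [4, 64, 64, 1])

lr, sigma = 1e-2, 0.05                    # assumed hyperparameters
loss = lambda p, xb, yb: jnp.mean((mlp(p, xb) - yb) ** 2)

for step in range(200):
    # NTK-derived covariance for this batch, with jitter for numerical stability.
    K = empirical_ntk(params, x)
    cov = sigma**2 * K + 1e-6 * jnp.eye(K.shape[0])
    key, sub = jax.random.split(key)
    # eps ~ N(0, cov): a correlated perturbation regenerated every step,
    # here added to the targets as the on-the-fly augmentation.
    eps = jnp.linalg.cholesky(cov) @ jax.random.normal(sub, (K.shape[0],))
    grads = jax.grad(loss)(params, x, y + eps)
    params = jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)
```

Because the perturbation's covariance is tied to the network's own tangent kernel, noise is injected preferentially along directions the optimizer is most sensitive to, which is one way such perturbations could steer the training trajectory; whether this matches the paper's mechanism is an assumption of this sketch.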