Machine learning is widely used across many domains. However, standard machine learning practice remains vulnerable to issues such as adversarial attacks, fairness violations, and data leakage. These problems are not adequately captured by fitting models to collected data and judging them by test performance metrics alone, such as accuracy or F1-score. In practice, machine learning tasks often involve additional quantities of interest, which turn an originally unconstrained optimisation problem (optimising for accuracy alone) into a constrained one. This thesis formally studies machine learning under several commonly encountered types of constraint, namely robustness, fairness, and privacy. I first focus on how the formal machine learning framework can be extended to incorporate robustness, a critical factor for safety. I then turn to the more ethics-related aspects of fairness and privacy, exploring how they can be formally integrated into machine learning. My approach differs from empirically pushing up multiple metrics; instead, it emphasises fundamental ways to understand and address the underlying challenges.
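The shift from unconstrained to constrained optimisation described above can be illustrated with a minimal sketch. The functions below are hypothetical stand-ins (not the thesis's actual method): `f` plays the role of a training loss and `c` the role of an additional quantity of interest (for instance, a fairness or robustness gap that must stay below a threshold). The constraint is enforced with a quadratic penalty, one standard way to fold a constraint into a differentiable objective.

```python
def f(theta):
    # Primary objective: a stand-in for training loss, minimised at theta = 3.
    return (theta - 3.0) ** 2

def c(theta):
    # Constraint function: require c(theta) <= 0, i.e. theta <= 1.
    # A stand-in for e.g. a fairness-gap or robustness requirement.
    return theta - 1.0

def penalised_minimise(mu, lr=0.001, steps=5000):
    """Gradient descent on f(theta) + mu * max(c(theta), 0)^2."""
    theta = 0.0
    for _ in range(steps):
        grad = 2 * (theta - 3.0)          # gradient of f
        if c(theta) > 0:                  # penalty is active only when violated
            grad += 2 * mu * c(theta)     # gradient of the quadratic penalty
        theta -= lr * grad
    return theta

unconstrained = penalised_minimise(mu=0.0)    # ~3.0: violates theta <= 1
constrained = penalised_minimise(mu=100.0)    # ~1.02: near the constraint boundary
print(round(unconstrained, 2), round(constrained, 2))
```

With the penalty switched off, the optimiser settles at the loss minimum and violates the constraint; with a large penalty weight it converges close to the constraint boundary, trading a little loss for feasibility. This is exactly the tension between test accuracy and the additional quantities of interest that the thesis studies formally.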
