News

2026-02-26
[Invited Talk] March 5, 2026 (Thu.) 14:00–15:00, Dr. 施智涵

Modern statistics often contrasts frequentist and Bayesian inference, where uncertainty is quantified through sampling distributions or posterior distributions, typically under an explicit likelihood. Extended fiducial inference (EFI) offers a different, ML-friendly perspective: it constructs parameter uncertainty directly from the optimization objective, without requiring a tractable likelihood. In this talk, I will introduce the EFI framework and show how it turns common loss functions into practical "inferential engines," producing uncertainty quantification that is easy to deploy in modern pipelines. I will highlight applications in high-dimensional regression and logistic regression, and close with why EFI is timely for the future of statistics in machine learning—enabling uncertainty quantification for deep models such as CNNs, LSTMs, and physics-informed neural networks, and pointing toward scalable uncertainty-aware learning in large architectures.