-
From MLE to Neyman-Pearson to Reward Models
A broad roadmap of statistical inference, inspired by Data 145, and a short bridge to modern reward-based AI.
-
Explainable AI (XAI) and Model Interpretability (SHAP, Integrated Gradients, and Sparse Autoencoders)
A future-me-friendly toolbox of interpretability methods: feature attribution (Shapley/SHAP, Integrated Gradients), perturbation tests, and representation-level methods like Sparse Autoencoders.
-
Diffusion Language Models Deep Dive (Part 1: Method)
This post traces the general development of diffusion language models (DLMs), including Discrete Diffusion and Simple and Effective Masked Diffusion Language Models.
-
Minimum Math Review for Diffusion Language Models
A review of the diffusion objective, variational inference, and the KL divergence.
-
More on Parallelism
A deeper look at sharding and parallelism strategies.