Last active: March 7, 2025
A derivation of the bias-variance decomposition of test error in machine learning.
The analysis presented in this gist has also been published on Cross Validated: https://stats.stackexchange.com/a/287904/146385
See also the section entitled "The Bias-Variance Decomposition" in Christopher Bishop's 2006 book, Pattern Recognition and Machine Learning: https://link.springer.com/book/9780387310732
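The decomposition derived in the gist can be checked numerically: averaging a learner's predictions over many independent training sets lets one estimate the bias-squared, variance, and irreducible-noise terms separately and compare their sum to the expected squared test error. Below is a minimal Monte Carlo sketch; the target function, noise level, learner (a cubic polynomial fit), and all sample sizes are illustrative assumptions, not taken from the gist.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (assumed, not from the gist)
f = lambda x: np.sin(2 * np.pi * x)  # true regression function
sigma = 0.3                          # std. dev. of observation noise
n_train = 30                         # training-set size
n_datasets = 500                     # number of independent training sets
degree = 3                           # polynomial degree of the learner

x_test = np.linspace(0, 1, 50)       # fixed test inputs

# Collect the learner's predictions across many training sets
predictions = np.empty((n_datasets, x_test.size))
for i in range(n_datasets):
    x = rng.uniform(0, 1, n_train)
    y = f(x) + rng.normal(0, sigma, n_train)
    coeffs = np.polyfit(x, y, degree)
    predictions[i] = np.polyval(coeffs, x_test)

mean_pred = predictions.mean(axis=0)

# Bias^2, variance, and irreducible noise, averaged over the test inputs
bias_sq = np.mean((mean_pred - f(x_test)) ** 2)
variance = np.mean(predictions.var(axis=0))
noise = sigma ** 2

# Expected squared test error, estimated directly on noisy test targets
y_test = f(x_test) + rng.normal(0, sigma, (n_datasets, x_test.size))
test_error = np.mean((predictions - y_test) ** 2)

# The two quantities should agree up to Monte Carlo error
print(bias_sq + variance + noise, test_error)
```

With enough training sets, `bias_sq + variance + noise` and `test_error` agree to within a few percent, which is the content of the decomposition.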
In active learning, one often assumes the learner is unbiased and focuses on algorithms that minimize the learner's variance, as in Cohn et al. (1996): https://arxiv.org/abs/cs/9603104 (though Eq. 4 is difficult to interpret precisely without further reading).
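One common way to act on this idea is to query the unlabeled input where the learner's predictive variance is largest, estimated here with a bootstrap ensemble. The sketch below illustrates that general strategy only; it is not an implementation of Cohn et al.'s Eq. 4, and the target function, noise level, pool, and ensemble settings are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative setup (assumed, not from the paper)
f = lambda x: np.sin(2 * np.pi * x)
sigma = 0.3
x_pool = np.linspace(0, 1, 100)             # unlabeled candidate inputs
x_train = rng.uniform(0, 1, 8)              # small initial labeled set
y_train = f(x_train) + rng.normal(0, sigma, x_train.size)

# Bootstrap ensemble of cubic polynomial fits; the spread of the
# ensemble's predictions is a proxy for the learner's variance
ensemble_preds = []
for _ in range(50):
    idx = rng.integers(0, x_train.size, x_train.size)
    coeffs = np.polyfit(x_train[idx], y_train[idx], 3)
    ensemble_preds.append(np.polyval(coeffs, x_pool))
ensemble_preds = np.array(ensemble_preds)

# Query the pool point where the ensemble disagrees most
query = x_pool[np.argmax(ensemble_preds.var(axis=0))]
print(query)
```

Labeling the queried point and refitting, then repeating, gives a simple variance-driven active-learning loop.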