Question

A use case for this method is to fit a mixture of Gaussians to data when the Gaussians to which observations belong are unobservable. For 10 points each:
[10h] Name this class of iterative optimization algorithms that alternate between two namesake steps by first probabilistically defining a latent variable and then performing MLE to update parameter values.
ANSWER: EM algorithms [or expectation–maximization algorithms] (Andrew Ng's CS229 lecture notes use this motivating example.)
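
For readers curious about the mechanics behind the clue, here is a minimal sketch of EM fitting a two-component one-dimensional Gaussian mixture. This is an illustrative implementation under standard assumptions, not code from the question or from the CS229 notes; all names and initialization choices are made up for the example.

import numpy as np

def normal_pdf(x, mu, var):
    # Density of a univariate Gaussian with mean mu and variance var.
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def em_gmm(x, n_iter=100):
    # Initialize mixing weights, component means, and variances.
    pi = np.array([0.5, 0.5])
    mu = np.array([x.min(), x.max()])
    var = np.array([x.var(), x.var()])
    for _ in range(n_iter):
        # E step: posterior responsibilities for the latent component,
        # computed via Bayes' theorem from the current parameters.
        lik = pi * normal_pdf(x[:, None], mu, var)    # shape (n, 2)
        resp = lik / lik.sum(axis=1, keepdims=True)
        # M step: weighted maximum-likelihood updates per component.
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 200)])
print(em_gmm(data))

The two namesake steps appear directly in the loop: the E step assigns each observation a posterior probability of belonging to each Gaussian, and the M step re-fits weights, means, and variances by responsibility-weighted maximum likelihood.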
[10m] The "E" step uses this statement to get posterior probabilities for the latent variable from prior probabilities. This statement names a statistical approach contrasted with frequentism.
ANSWER: Bayes’ theorem [or Bayes’ law or Bayes’ rule; prompt on Bayes or Bayesian statistics]
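
In the mixture-of-Gaussians setting above, the E step is exactly an application of Bayes' theorem. In one standard notation (a sketch, not quoted from the question), the posterior "responsibility" of component k for observation x_i is

\gamma_{ik} = \frac{\pi_k \, \mathcal{N}(x_i \mid \mu_k, \sigma_k^2)}{\sum_j \pi_j \, \mathcal{N}(x_i \mid \mu_j, \sigma_j^2)}

where the mixing weight \pi_k plays the role of the prior and \gamma_{ik} is the posterior probability that x_i was generated by component k.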
[10e] "E" stands for expectation, which is a name for this quantity that equals the sum of n values divided by n.
ANSWER: mean [accept sample mean or arithmetic mean or average; prompt on mu]
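
As a formula, the sample mean of values x_1, \dots, x_n is

\bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i,

the "sum of n values divided by n" the clue describes.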
<Science - Other Science - Computer Science>


Data

Summary

Tournament     Edition     Exact Match?  Heard  PPB    Easy %  Medium %  Hard %
2025 PACE NSC  06/07/2025  Y             42     12.86  88%     41%       0%