Jun 3, 2020
Good work, Maarten.
A small correction: in the statement "Random Forests where the quality of prediction is improved by combining, typically, weak models," "weak models" should be "strong models". A random forest assumes each base learner is overfit (high variance, i.e. a strong model such as a full-depth decision tree), and that high variance is reduced by bootstrap sampling and aggregation.
In boosting, by contrast, each base learner is a weak learner (high bias, typically a decision stump), and the goal is to reduce that bias.
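As a minimal sketch of the contrast (assuming scikit-learn; the synthetic dataset and parameter values are purely illustrative): the random forest averages full-depth, high-variance trees over bootstrap samples, while the boosted ensemble stacks depth-1 stumps and reduces their bias sequentially.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Illustrative synthetic data; any classification dataset would do.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Bagging of strong (overfit) learners: max_depth=None grows each tree to full depth
# (high variance), and averaging over bootstrap samples reduces that variance.
rf = RandomForestClassifier(n_estimators=200, max_depth=None, bootstrap=True, random_state=0)

# Boosting of weak learners: max_depth=1 makes each base tree a decision stump
# (high bias), and the sequential ensemble works to reduce that bias.
boost = GradientBoostingClassifier(n_estimators=200, max_depth=1, random_state=0)

print("Random forest (deep trees) CV accuracy:", cross_val_score(rf, X, y, cv=5).mean())
print("Gradient boosting (stumps) CV accuracy:", cross_val_score(boost, X, y, cv=5).mean())
```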