Random Forests work by training many Decision Trees on random bootstrap samples of the training data (and typically on random subsets of the features as well), then aggregating their predictions: averaging for regression, majority vote for classification.

In practice, the only hyperparameter we really need to care about is the number of trees k (step 3) that we choose for the random forest.
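To make the procedure concrete, here is a minimal from-scratch sketch (not a production implementation): k depth-1 trees (decision stumps), each trained on a bootstrap sample and a randomly chosen feature, combined by majority vote. All names (`train_forest`, `forest_predict`, etc.) are illustrative, and the toy dataset is invented for the example.

```python
import random
from collections import Counter

def train_stump(X, y, feature):
    """Fit the best single-feature threshold split, with a majority-class
    prediction on each side."""
    best = None
    for t in sorted(set(row[feature] for row in X)):
        left = [yi for row, yi in zip(X, y) if row[feature] <= t]
        right = [yi for row, yi in zip(X, y) if row[feature] > t]
        if not left or not right:
            continue
        lmaj = Counter(left).most_common(1)[0][0]
        rmaj = Counter(right).most_common(1)[0][0]
        err = sum(yi != (lmaj if row[feature] <= t else rmaj)
                  for row, yi in zip(X, y))
        if best is None or err < best[0]:
            best = (err, t, lmaj, rmaj)
    if best is None:  # degenerate bootstrap: fall back to a constant predictor
        maj = Counter(y).most_common(1)[0][0]
        return (feature, float("-inf"), maj, maj)
    _, t, lmaj, rmaj = best
    return (feature, t, lmaj, rmaj)

def stump_predict(stump, row):
    f, t, lmaj, rmaj = stump
    return lmaj if row[f] <= t else rmaj

def train_forest(X, y, k, seed=0):
    """Train k stumps, each on a bootstrap sample and a random feature."""
    rng = random.Random(seed)
    n, d = len(X), len(X[0])
    forest = []
    for _ in range(k):
        idx = [rng.randrange(n) for _ in range(n)]  # sample with replacement
        feat = rng.randrange(d)                     # random feature per tree
        forest.append(train_stump([X[i] for i in idx],
                                  [y[i] for i in idx], feat))
    return forest

def forest_predict(forest, row):
    """Majority vote over all trees in the forest."""
    votes = Counter(stump_predict(s, row) for s in forest)
    return votes.most_common(1)[0][0]

# Toy dataset: class 0 in the lower-left corner, class 1 in the upper-right.
X = [[0, 0], [1, 0], [0, 1], [1, 1], [2, 2], [3, 2], [2, 3], [3, 3]]
y = [0, 0, 0, 0, 1, 1, 1, 1]
forest = train_forest(X, y, k=25)
print(forest_predict(forest, [0, 0]), forest_predict(forest, [3, 3]))
```

Any single stump here is a weak, high-bias learner, but the vote over 25 of them is already reliable on this toy data; increasing k mainly reduces the variance of the ensemble, which is why k is the main knob to tune.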

Tree learning "come[s] closest to meeting the requirements for serving as an off-the-shelf procedure for data mining", say Hastie et al., because it is invariant under scaling and various other transformations of feature values, is robust to the inclusion of irrelevant features, and produces inspectable models. However, single trees are seldom accurate.
