gptkbp:instanceOf
|
gptkb:model
ensemble learning method
|
gptkbp:advantage
|
robust to noise
works with categorical and numerical data
|
gptkbp:limitation
|
can be computationally intensive
less interpretable than single decision trees
|
gptkbp:basedOn
|
decision trees
|
gptkbp:category
|
supervised learning
non-parametric method
|
gptkbp:citation
|
gptkb:Breiman,_L._(2001)._Random_Forests._Machine_Learning,_45(1),_5-32.
|
gptkbp:developedBy
|
gptkb:Leo_Breiman
|
gptkbp:handles
|
high-dimensional data
missing values
|
https://www.w3.org/2000/01/rdf-schema#label
|
Random Forests
|
gptkbp:hyperparameter
|
maximum tree depth
minimum samples per leaf
number of features per split
number of trees
|
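The four hyperparameters listed above map directly onto scikit-learn (named under gptkbp:implementedIn below). A minimal sketch, using scikit-learn's own parameter names (`n_estimators`, `max_depth`, `min_samples_leaf`, `max_features`) and an illustrative synthetic dataset:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Illustrative synthetic data; values chosen only for demonstration.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

clf = RandomForestClassifier(
    n_estimators=100,     # number of trees
    max_depth=8,          # maximum tree depth
    min_samples_leaf=2,   # minimum samples per leaf
    max_features="sqrt",  # number of features considered per split
    random_state=0,
)
clf.fit(X, y)
print(len(clf.estimators_))  # 100 fitted trees
```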
gptkbp:implementedIn
|
gptkb:Spark_MLlib
gptkb:Weka
gptkb:scikit-learn
R
|
gptkbp:improves
|
prediction accuracy
|
gptkbp:introducedIn
|
2001
|
gptkbp:provides
|
feature importance
|
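The feature importance scores a random forest provides can be read off a fitted model; a sketch using scikit-learn's impurity-based `feature_importances_` attribute (the dataset is synthetic and the feature labels are made up):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(
    n_samples=300, n_features=5, n_informative=3, random_state=1
)
forest = RandomForestClassifier(n_estimators=50, random_state=1).fit(X, y)

# One importance score per feature; the scores sum to 1.0.
for i, score in enumerate(forest.feature_importances_):
    print(f"feature_{i}: {score:.3f}")
```

These scores are what make random forests usable for feature selection (listed under gptkbp:usedFor below).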
gptkbp:reduces
|
overfitting
|
gptkbp:relatedTo
|
gptkb:AdaBoost
gradient boosting
bagging classifier
|
gptkbp:technique
|
bagging
|
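Bagging (bootstrap aggregating) can be sketched in a few lines: train each tree on a bootstrap sample drawn with replacement, then aggregate predictions by majority vote. This is plain bagging only; a full random forest additionally subsamples features at each split. Dataset and tree count here are illustrative assumptions:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=200, random_state=0)

trees = []
for _ in range(25):
    # Bootstrap sample: draw n indices with replacement.
    idx = rng.integers(0, len(X), size=len(X))
    trees.append(DecisionTreeClassifier(random_state=0).fit(X[idx], y[idx]))

# Aggregate: majority vote across the 25 trees.
votes = np.stack([t.predict(X) for t in trees])
pred = (votes.mean(axis=0) > 0.5).astype(int)
print((pred == y).mean())  # ensemble accuracy on the training data
```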
gptkbp:usedFor
|
classification
regression
feature selection
|