Uses simple Bayesian inference to segment the data given the conditional features, then estimates a density over the remaining values of the target feature and returns the most likely value, using a maximum a posteriori estimate of the kernel (i.e., returning its mode).
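The idea above can be sketched in plain base R. This is only a conceptual illustration, not mmb's actual implementation: the segmentation condition used here (keeping rows whose feature values lie within an arbitrary ±0.5 window of the conditioned values) is a hypothetical stand-in for mmb's segmentation logic.

```r
# Sketch of the overall idea (NOT mmb's exact implementation):
# 1) segment the data by conditioning on the given features,
# 2) estimate a density over the target in the remaining rows,
# 3) return the density's mode (a maximum a posteriori estimate).

# Hypothetical segmentation: keep rows whose feature values are close
# to the conditioned values (the window of 0.5 is arbitrary).
seg <- iris[abs(iris$Sepal.Length - mean(iris$Sepal.Length)) < 0.5 &
            abs(iris$Sepal.Width  - mean(iris$Sepal.Width))  < 0.5, ]

# Kernel density estimate over the target feature in the segment:
d <- density(seg$Petal.Length)

# The mode of the estimated density is the regressed value:
mapEstimate <- d$x[which.max(d$y)]
mapEstimate
```

The real function additionally guards the segmentation with `retainMinValues`, so it never conditions the data down to fewer rows than that threshold.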
```r
bayesRegressSimple(
  df,
  features,
  targetCol,
  selectedFeatureNames = c(),
  retainMinValues = 2,
  regressor = NULL
)
```
| Argument | Description |
|---|---|
| `df` | data.frame that holds the data to regress on. |
| `features` | data.frame with Bayes-features. One of the features needs to be the label column (it is not required to have a value). |
| `targetCol` | string with the name of the feature that represents the label (here, the target variable for the regression). |
| `selectedFeatureNames` | vector of names of features to use for segmenting; defaults to `c()`, in which case all features except the target are used. |
| `retainMinValues` | integer to require a minimum amount of data points to be retained when segmenting the data feature by feature (default: 2). |
| `regressor` | function that is given the collected values for regression and is thus finally used to select a most likely value. Defaults to the built-in estimator of the empirical PDF, which returns its argmax. However, any other function can be used as well, such as `min`, `max`, `median`, `mean`, etc. You may also use this argument to obtain the raw values for further processing. |
```r
feat1 <- mmb::createFeatureForBayes(
  name = "Sepal.Length", value = mean(iris$Sepal.Length))
feat2 <- mmb::createFeatureForBayes(
  name = "Sepal.Width", value = mean(iris$Sepal.Width))

# Note how we do not require "Petal.Length" among the features when regressing:
mmb::bayesRegressSimple(df = iris, features = rbind(feat1, feat2),
  targetCol = "Petal.Length")
#> [1] 4.04962
```