5 Comments
Guest *88308069* @ 2008-11-28 06:25:58 wrote:
The phrase 用得不好会割到自己 was rendered as "too sharp to cut yourself", but that seems off: "too sharp to cut yourself" would actually mean "so sharp that it cannot cut you" (太锋利了割不到自己). Heh.
Thanks for pointing that out -- it has been corrected :) bow~
-- Originally posted on 2008-11-28 07:50:08
Guest *123* @ 2009-01-04 17:14:00 wrote:
Hehe, nice 👍
Guest *Liang* @ 2010-05-04 00:57:36 originally posted:
Boosting algorithms CAN be applied to regression problems, and so can AdaBoost. In this case, simply use a stump on the continuous independent variables as the weak learner. The program failure may be because the adabag package doesn't allow continuous numerical inputs to a tree, but users can code their own implementation without much difficulty.
Your feedback is warmly welcomed.
Thanks for your comment. My point was to suggest that she read the documentation carefully -- don't just throw everything into an R function and wait for the output; that is kind of ridiculous.
You are absolutely correct about the continuous case. In fact, we do not have to restrict the base classifier to a stump, though that requires users to do the programming themselves. There might be such packages on CRAN; I have not checked the "MachineLearning" task view.
Originally posted on 2010-05-05 08:13:53
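Liang's point above can be sketched in code. The thread is about R's adabag package, but as a hedged illustration in a different ecosystem, here is AdaBoost applied to a regression problem using scikit-learn (an assumption on my part, not something from the original discussion), with a depth-1 regression tree -- a stump -- over a continuous input as the weak learner:

```python
# Illustration (scikit-learn, not adabag): AdaBoost for regression,
# with a decision stump on a continuous independent variable.
import numpy as np
from sklearn.ensemble import AdaBoostRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(0)
X = rng.uniform(0, 6, size=(200, 1))             # continuous input
y = np.sin(X).ravel() + rng.normal(0, 0.1, 200)  # noisy continuous target

# Pass the stump as the first positional argument (the base-estimator
# parameter name changed across scikit-learn versions).
model = AdaBoostRegressor(
    DecisionTreeRegressor(max_depth=1),  # the stump
    n_estimators=100,
    random_state=0,
)
model.fit(X, y)
print(round(model.score(X, y), 3))  # in-sample R^2
```

A single stump can only fit a step function, but boosting many of them yields a flexible regressor, which is exactly why no continuous-input restriction is inherent to the algorithm itself.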
Guest *wang* @ 2015-01-17 06:42:00 wrote:
I'd like to ask a question about the adabag package. I trained a model with the boosting function and then used the predict function on it, but it keeps requiring that the newdata also contain the class label column. Yet the whole point of predicting on new data is to obtain the labels -- I don't quite understand the reasoning here.
Guest *御宅暴君* @ 2016-04-16 07:07:08 wrote:
Treating a regression problem as a classification problem, haha.