AdaBoost (Adaptive Boosting) Classifier
Boosting algorithms aggregate a number of weak classifiers in order to build a powerful one. They assign a weight to every labeled sample: whenever a weak classifier fails to classify a sample correctly, that sample's weight is boosted, and the next weak classifier is trained with these updated weights (see the short sketch below).
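As a minimal illustration of the reweighting idea (a sketch with made-up numbers; the variable names here are assumptions and are not part of the listing further below, which uses the multiplicative form of the weight update):

w     = ones(5,1)/5;             % start with uniform sample weights
y     = [ 1;  1; -1;  1; -1];    % true labels
yhat  = [ 1; -1; -1;  1;  1];    % predictions of one weak classifier
theta = 0.5;                     % weight given to this weak classifier
w = w.*exp(-theta*(yhat.*y));    % misclassified samples (2 and 5) are boosted
w = w/sum(w)                     % renormalize so the weights sum to one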
Let’s take the AdaBoost algorithm as an example; in the code below, the weak classifiers are decision stumps (a single threshold on one input dimension):
- For the training set $\{(x_i, y_i)\}_{i=1}^{n}$, initialize the weights $\{w_i\}_{i=1}^{n}$ as $1/n$, and let $f \leftarrow 0$.
- For $j = 1, \ldots, b$:
    - Based on the current sample weights $\{w_i\}_{i=1}^{n}$, pick the classifier with the smallest weighted error rate $R$:
      $$\varphi_j = \mathop{\mathrm{argmin}}_{\varphi} R(\varphi), \qquad R(\varphi) = \sum_{i=1}^{n} \frac{w_i}{2}\bigl(1 - \varphi(x_i)\,y_i\bigr)$$
    - Calculate the weight of classifier $\varphi_j$:
      $$\theta_j = \frac{1}{2}\log\frac{1 - R(\varphi_j)}{R(\varphi_j)}$$
    - Update the aggregated classifier $f$:
      $$f \leftarrow f + \theta_j \varphi_j$$
    - Update the weights of the samples $\{w_i\}_{i=1}^{n}$:
      $$w_i \leftarrow \frac{\exp\bigl(-f(x_i)\,y_i\bigr)}{\sum_{k=1}^{n} \exp\bigl(-f(x_k)\,y_k\bigr)}, \qquad i = 1, 2, \ldots, n$$
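Two quick observations (not spelled out above, but they follow directly from the formulas) help explain why these updates make sense. First,
$$\theta_j = \frac{1}{2}\log\frac{1 - R(\varphi_j)}{R(\varphi_j)} > 0 \iff R(\varphi_j) < \frac{1}{2},$$
so a weak classifier only receives a positive weight if it beats random guessing. Second, since $f \leftarrow f + \theta_j \varphi_j$, the normalized weight update is equivalent to a multiplicative one:
$$w_i \leftarrow \frac{\exp\bigl(-f(x_i)\,y_i\bigr)}{\sum_{k=1}^{n} \exp\bigl(-f(x_k)\,y_k\bigr)} \propto w_i^{\text{old}} \exp\bigl(-\theta_j\,\varphi_j(x_i)\,y_i\bigr),$$
so each sample misclassified by $\varphi_j$ (i.e. $\varphi_j(x_i)\,y_i = -1$) has its weight multiplied by $e^{\theta_j} > 1$, while correctly classified samples are multiplied by $e^{-\theta_j} < 1$; this is the multiplicative form used in the sketch above.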
% Generate n two-dimensional samples; the true label is +1 when x1 > x2
% and -1 otherwise.
n=50; x=randn(n,2);
y=2*(x(:,1)>x(:,2))-1;

% b boosting rounds; an a-by-a grid for visualizing the decision regions.
b=5000; a=50; Y=zeros(a,a);
yy=zeros(size(y)); w=ones(n,1)/n;   % yy accumulates f(x_i); weights start uniform
X0=linspace(-3,3,a);
[X(:,:,1), X(:,:,2)]=meshgrid(X0);

for j=1:b
  % Decision stump on a random dimension d: find the threshold c and
  % orientation s with the smallest weighted misclassification rate.
  wy=w.*y; d=ceil(2*rand); [xs,xi]=sort(x(:,d));
  el=cumsum(wy(xi)); eu=cumsum(wy(xi(end:-1:1)));
  e=eu(end-1:-1:1)-el(1:end-1);
  [em,ei]=max(abs(e)); c=mean(xs(ei:ei+1)); s=sign(e(ei));

  % Weighted error rate R and the classifier weight theta (t).
  yh=sign(s*(x(:,d)-c)); R=w'*(1-yh.*y)/2;
  t=log((1-R)/R)/2;

  % Update the aggregated classifier on the training samples (yy) and on
  % the grid (Y), then recompute and renormalize the sample weights.
  yy=yy+yh*t; w=exp(-yy.*y); w=w/sum(w);
  Y=Y+sign(s*(X(:,:,d)-c))*t;
end

% Decision regions of the boosted classifier with the training samples.
figure(1); clf; hold on; axis([-3,3,-3,3]);
colormap([1 0.7 1; 0.7 1 1]);
contourf(X0,X0,sign(Y));
plot(x(y==1,1),x(y==1,2),'bo');
plot(x(y==-1,1),x(y==-1,2),'rx');
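The contour plot shows the two decision regions of the boosted classifier together with the positive ('bo') and negative ('rx') training samples. As a quick sanity check (this line is not part of the original listing), the training error of the aggregated classifier can be read off from yy, which holds f evaluated at the training samples:

err = mean(sign(yy)~=y)   % training error; typically 0 on this separable toy data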