Contents
1.0 Introduction
2.0 Project Objectives
3.0 Adaboost
3.1 Pseudocode
3.2 Classifier Weight α_t vs. Error Rate
3.3 Weight Update
3.4 Applications
3.5 Popular Variants
4.0 Experimental Setup
4.1 Adaboost-CNN (ACNN)
4.2 ACNN Model
4.3 ACNN Flowchart
4.4 CNN Model
5.0 Self-driving Car Simulation
5.1 Data Generation
5.2 Training Mode
5.3 Testing Mode
6.0 Data Visualization
7.0 Project Demonstration
8.0 Result Analysis
9.0 Conclusion
10.0 Source Code & Data
Bibliography
Abstract
Recently, Udacity has open-sourced its self-driving car simulation project, in which a developer has access to the left, center, and right images from the car's cameras and to driving parameters such as steering angle, throttle, speed, and brake, simulating a real-life driving environment. Other projects simply recorded gameplay video to capture training data and trained a model to mimic keyboard button presses. All of them used a convolutional neural network (CNN) approach to train the model. This project instead targets Adaboost-CNN, a variant of the popular boosting algorithm Adaboost, to boost CNN classification.
Keywords:
Boosting Algorithm, Adaboost, Multi-class Adaboost, Convolutional Neural Network, Adaboost-CNN(ACNN), Self-driving Car Simulation.
1.0 Introduction
People spend hours driving their cars from place to place. What if a person could set a destination and go to sleep while the car drives itself there? It would save plenty of time. Tesla has already started selling cars with an autopilot feature, but although such a car can drive itself, it is trustworthy only on roads of a certain quality. This means research on self-driving cars should still be carried out. Existing self-driving car simulation projects have used a convolutional neural network (CNN) as the learning method. Although Adaboost is mostly used with binary classification problems, a variant can be developed to adapt Adaboost to a convolutional neural network.
2.0 Project Objectives:
- To study CNN, ACNN, and existing self-driving car simulation projects
- To simulate the self-driving car game using ACNN
- To simulate the self-driving car game using CNN
- To compare ACNN with CNN
3.0 Adaboost:
Adaboost is one of the most popular boosting algorithms, developed by Freund and Schapire in 1997. It is widely applied in data classification and object detection, and it is used with many other learning algorithms to improve performance. It combines the results of weak classifiers into a single strong classifier. A weak classifier is simply a classifier that performs better than random guessing.
[Figure not included in this excerpt]
Figure-1: Adaboost classification (RAY, 2015)
In Figure-1, assume D1, D2, and D3 represent successive weak classifiers (in this example, CNN, KNN, and CNN respectively). The task is to classify plus (+) and minus (-) samples correctly, and the font size of each symbol represents its weight. Initially, every sample is assigned the same weight. After training, D1 misclassifies three plus symbols, so in the next round Adaboost focuses on classifying them correctly: it increases the weight of every misclassified sample and decreases the weight of the correctly classified ones. The D2 classifier is then trained with the new weights; it classifies all three plus samples correctly but misclassifies two minus samples. Adaboost therefore increases the weight of those two samples, decreases the weight of the correctly classified ones, and feeds the new weights to D3. D3 classifies only the top three plus samples correctly. If the iteration stops here, the results of the weak classifiers D1, D2, and D3 are combined by voting to produce the final grouping shown in box 4. For example, if D1 classifies a sample as minus while D2 and D3 classify it as plus, the majority vote is for plus and the sample is assigned to the plus group. (Shuo Yang, 2017)
3.1 Pseudocode
[Adaboost pseudocode not included in this excerpt]
At first, each of the m samples' weights is initialized to 1/m, so the weights sum to 1. Here T is the number of iterations. h_t is the classifier whose error rate is lower than that of any other classifier for the current weights. The error rate e_t is the sum of the weights of the misclassified samples divided by the total weight of all samples. α_t, the weight given to the classifier, is computed as α_t = ½ · ln((1 - e_t) / e_t). D_{t+1} holds the weights of the m samples to be used in the next iteration. After T iterations, each classifier votes on the final output for an input x, with its output multiplied by its α_t. (Schölkopf B., 2013)
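Since the pseudocode figure itself is not included in this excerpt, the following minimal Python sketch reconstructs the loop just described, using a decision stump as the weak classifier. The stump, the function names, and the stopping rule are illustrative assumptions; the project itself uses CNNs as weak learners.

# Minimal Adaboost sketch for binary labels y in {-1, +1}, following the
# steps described above. The decision-stump weak learner is an assumption
# for illustration only.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_train(X, y, T=10):
    m = len(y)
    D = np.full(m, 1.0 / m)                 # D_1(i) = 1/m for every sample
    classifiers, alphas = [], []
    for t in range(T):
        # Fit the weak classifier h_t on the current weight distribution.
        h = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=D)
        pred = h.predict(X)
        e = D[pred != y].sum()              # e_t: total weight of misclassified samples
        if e >= 0.5:                        # no better than random guessing: stop
            break
        alpha = 0.5 * np.log((1 - e) / max(e, 1e-10))   # classifier weight alpha_t
        # D_{t+1}(i) = D_t(i) * exp(-alpha_t * y_i * h_t(x_i)) / Z_t
        D = D * np.exp(-alpha * y * pred)
        D = D / D.sum()                     # dividing by Z_t keeps the sum at 1
        classifiers.append(h)
        alphas.append(alpha)
    return classifiers, alphas

def adaboost_predict(X, classifiers, alphas):
    # Final output: sign of the alpha-weighted vote of the weak classifiers.
    votes = sum(a * h.predict(X) for h, a in zip(classifiers, alphas))
    return np.sign(votes)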
3.2 Classifier Weight α_t vs. Error Rate:
[Figure not included in this excerpt]
Figure-2: α_t vs. error rate (McCormick, 2013)
There are three bits of intuition to take from this graph:
1- The classifier weight grows exponentially as the error approaches 0. Better classifiers are given exponentially more weight.
2- The classifier weight is zero if the error rate is 0.5. A classifier with 50% accuracy is no better than random guessing, so we ignore it.
3- The classifier weight grows exponentially negative as the error approaches 1. We give a negative weight to classifiers with worse than 50% accuracy. “Whatever that classifier says, do the opposite!”
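The curve in Figure-2 corresponds to the standard Adaboost classifier-weight formula α_t = ½ · ln((1 - e_t) / e_t). The short snippet below evaluates it at a few assumed error rates to make the three points above concrete:

# Evaluating alpha_t = 0.5 * ln((1 - e_t) / e_t) at illustrative error rates.
import numpy as np

for e in [0.01, 0.3, 0.5, 0.7, 0.99]:
    alpha = 0.5 * np.log((1 - e) / e)
    print(f"error = {e:.2f}  ->  alpha = {alpha:+.2f}")
# error = 0.01 -> alpha = +2.30 (near-perfect classifier, large positive weight)
# error = 0.50 -> alpha = +0.00 (random guessing, effectively ignored)
# error = 0.99 -> alpha = -2.30 (worse than random, its vote is inverted)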
3.3 Weight Update:
D_{t+1}(i) = D_t(i) · exp(-α_t · y_i · h_t(x_i)) / Z_t    (1) (McCormick, 2013)
Equation (1) shows how the weights are updated. D_t is a vector containing the weight of each sample in the training set at the t-th iteration. Each sample's weight is divided by Z_t to ensure the weights sum to 1. D_t(i) represents the probability of the i-th sample being picked for the next training set.
Here, Z_t = Σ_{i=1}^{m} D_t(i) · exp(-α_t · y_i · h_t(x_i)), the normalization factor that makes the updated weights sum to 1.
[Figure not included in this excerpt]
Figure-3: exp(x) function (McCormick, 2013)
The function exp(x) returns a fraction for negative values of x and a value greater than one for positive values of x. The exponent -α_t · y_i · h_t(x_i) is negative when the prediction agrees with the true label (y_i · h_t(x_i) = +1) and positive when it does not, so the weight of training sample i is decreased when it is classified correctly and increased when it is misclassified.
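A small numeric demonstration of one such update, with assumed labels, predictions, and α_t:

# One weight-update step with assumed values: correctly classified samples
# (y * h(x) = +1) shrink, misclassified ones (y * h(x) = -1) grow, and the
# division by Z_t renormalizes the weights to sum to 1.
import numpy as np

D = np.full(5, 0.2)                    # current weights D_t
y = np.array([+1, +1, -1, -1, +1])     # true labels
h = np.array([+1, -1, -1, +1, +1])     # weak classifier outputs h_t(x)
alpha = 0.42                           # classifier weight alpha_t

D_new = D * np.exp(-alpha * y * h)     # exp(...) < 1 when correct, > 1 when wrong
D_new = D_new / D_new.sum()            # divide by Z_t
print(D_new)                           # the two misclassified samples now carry more weight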
3.4 Applications:
Adaboost and its variants have been used in Viola-Jones face detection, hand detection, and human detection.
3.5 Popular Variants:
Real AdaBoost, LogitBoost, Gentle AdaBoost, ACNN.
4.0 Experimental Setup:
As Adaboost does not support CNNs directly, we adapt the CNN to Adaboost using the Adaboost variant named ACNN. The following is a brief overview of the ACNN algorithm:
4.1 Adaboost-CNN (ACNN):
Adaboost-CNN, or ACNN, is a variant of the Adaboost algorithm that is mainly targeted at improving per-class accuracy rather than overall classifier accuracy. Because car driving is highly safety-sensitive, this paper focuses on improving per-class accuracy in addition to classifier accuracy: overall classifier accuracy is handled by the CNN, while per-class accuracy is handled by ACNN.
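As a rough illustration only, the sketch below shows one way an Adaboost-style outer loop can wrap a CNN weak learner. The tiny Keras-style architecture, the SAMME-style multi-class classifier weight, and the reuse of one network's weights across boosting rounds (a simple form of transfer learning) are all assumptions made for this sketch, not the paper's exact ACNN implementation.

# A heavily simplified Adaboost-CNN (ACNN) sketch; architecture, weighting
# scheme, and training schedule are illustrative assumptions.
import numpy as np
import tensorflow as tf

def build_cnn(input_shape, n_classes):
    # Small CNN used as the weak learner in every boosting round.
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=input_shape),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])

def acnn_train(X, y, n_classes, T=3, epochs=5):
    m = len(y)
    D = np.full(m, 1.0 / m)                  # sample weights, as in Adaboost
    cnn = build_cnn(X.shape[1:], n_classes)
    cnn.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    cnns, alphas = [], []
    for t in range(T):
        # Weighted training focuses the CNN on previously misclassified samples;
        # reusing the same network across rounds transfers the learned weights.
        cnn.fit(X, y, sample_weight=D, epochs=epochs, verbose=0)
        pred = np.argmax(cnn.predict(X, verbose=0), axis=1)
        e = max(D[pred != y].sum(), 1e-10)
        # SAMME multi-class classifier weight (reduces to Adaboost for 2 classes).
        alpha = np.log((1 - e) / e) + np.log(n_classes - 1)
        D = D * np.exp(alpha * (pred != y))  # boost the misclassified samples
        D = D / D.sum()
        snapshot = tf.keras.models.clone_model(cnn)
        snapshot.set_weights(cnn.get_weights())
        cnns.append(snapshot)
        alphas.append(alpha)
    return cnns, alphas

def acnn_predict(X, cnns, alphas, n_classes):
    # Each round's CNN casts an alpha-weighted vote for its predicted class.
    votes = np.zeros((len(X), n_classes))
    for cnn, a in zip(cnns, alphas):
        pred = np.argmax(cnn.predict(X, verbose=0), axis=1)
        votes[np.arange(len(X)), pred] += a
    return np.argmax(votes, axis=1)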
[...]