MACHINE LEARNING

Paper Code: MCS 327B
Credits: 04
Periods/week: 04
Max. Marks: 100
Objective: 

This course focuses on the basic concepts of machine learning, studied through its principal algorithms and the theory that underlies them.

Unit I (12 Periods):

Overview and Introduction to Bayes Decision Theory: Machine intelligence and applications; pattern recognition concepts: classification, regression, feature selection, supervised learning; class-conditional probability distributions; examples of classifiers: the Bayes optimal classifier and its error; learning classification approaches.
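
As an illustration of the unit's central idea, a minimal Python sketch of the Bayes optimal decision rule: pick the class maximizing p(x | class) P(class). The priors and Gaussian class-conditional parameters below are hypothetical, chosen only for the example.

    import math

    # Hypothetical 1-D two-class problem: Gaussian class-conditional
    # densities with assumed priors and parameters (illustrative only).
    PRIORS = {0: 0.6, 1: 0.4}                  # P(class)
    PARAMS = {0: (0.0, 1.0), 1: (2.0, 1.5)}    # (mean, std) of p(x | class)

    def gaussian_pdf(x, mean, std):
        """Class-conditional density p(x | class) under a Gaussian model."""
        z = (x - mean) / std
        return math.exp(-0.5 * z * z) / (std * math.sqrt(2 * math.pi))

    def bayes_classify(x):
        """Bayes optimal decision: argmax over classes of p(x | c) * P(c)."""
        return max(PRIORS, key=lambda c: gaussian_pdf(x, *PARAMS[c]) * PRIORS[c])

    print(bayes_classify(0.5), bayes_classify(3.0))  # -> 0 1

No classifier can achieve lower expected error than this rule when the class-conditional densities and priors are known exactly; the Bayes error is the benchmark against which learned classifiers are measured.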


Unit II (12 Periods):

Linear Machines: General and linear discriminants, decision regions, single-layer neural networks, linear separability, general gradient descent, perceptron learning algorithm, mean-square criterion and the Widrow-Hoff learning algorithm. Multi-layer perceptrons: two-layer networks as universal approximators, backpropagation learning, on-line and off-line training, the error surface, important parameters.
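
A minimal sketch of the perceptron learning algorithm in Python, assuming labels in {-1, +1} and a small linearly separable toy set (both are illustrative assumptions):

    def perceptron_train(samples, labels, lr=1.0, epochs=100):
        """Return (weights, bias) after cycling through the training set."""
        w = [0.0] * len(samples[0])
        b = 0.0
        for _ in range(epochs):
            errors = 0
            for x, y in zip(samples, labels):      # y in {-1, +1}
                activation = sum(wi * xi for wi, xi in zip(w, x)) + b
                if y * activation <= 0:            # misclassified: update
                    w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                    b += lr * y
                    errors += 1
            if errors == 0:                        # converged on separable data
                break
        return w, b

    # Toy AND-style data, which is linearly separable
    X = [(0, 0), (0, 1), (1, 0), (1, 1)]
    Y = [-1, -1, -1, 1]
    print(perceptron_train(X, Y))

The perceptron convergence theorem guarantees that this loop terminates on linearly separable data; on non-separable data it cycles, which motivates the mean-square criterion and Widrow-Hoff rule covered in this unit.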

Unit III (12 Periods):

Learning Decision Trees: Inference model, general domains, symbolic decision trees, consistency; learning trees from training examples: entropy, mutual information, the ID3 algorithm and its splitting criterion; the C4.5 algorithm: continuous test nodes, confidence, pruning; learning with incomplete data. Instance-Based Learning: Nearest-neighbor classification, k-nearest neighbors, nearest-neighbor error probability.
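
The ID3 splitting criterion can be illustrated with a short Python sketch computing entropy and information gain; the toy labels and split below are hypothetical:

    import math
    from collections import Counter

    def entropy(labels):
        """Shannon entropy H(S) of a label multiset, in bits."""
        n = len(labels)
        return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

    def information_gain(labels, split_groups):
        """ID3 criterion: Gain = H(S) minus the weighted entropy of the
        children produced by splitting on an attribute."""
        n = len(labels)
        remainder = sum(len(g) / n * entropy(g) for g in split_groups)
        return entropy(labels) - remainder

    # Toy example: 10 labels split into two groups by some attribute
    labels = ['+'] * 6 + ['-'] * 4
    split = [['+'] * 5 + ['-'] * 1, ['+'] * 1 + ['-'] * 3]
    print(information_gain(labels, split))  # about 0.256 bits

ID3 greedily selects the attribute with the highest gain at each node; C4.5 extends this with gain ratio, continuous test nodes, and pruning.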

Unit IV (12 Periods):

Machine Learning Concepts and Limitations: Learning theory, a formal model of the learnable, sample complexity, learning in the zero-Bayes and realizable cases, VC dimension; fundamental algorithm-independent concepts: hypothesis class, target class, inductive bias, Occam's razor, empirical risk; limitations of inference machines, approximation and estimation errors and their tradeoff.
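
As a worked example of sample complexity in the realizable case, the standard PAC bound m >= (1/epsilon)(ln|H| + ln(1/delta)) for a finite hypothesis class can be evaluated in Python (the numbers below are illustrative):

    import math

    def pac_sample_complexity(h_size, epsilon, delta):
        """Sufficient training-set size for a consistent learner over a
        finite hypothesis class H in the realizable case:
        m >= (1/epsilon) * (ln|H| + ln(1/delta))."""
        return math.ceil((math.log(h_size) + math.log(1 / delta)) / epsilon)

    # Illustrative numbers: |H| = 2**20 hypotheses, 5% error, 99% confidence
    print(pac_sample_complexity(2 ** 20, epsilon=0.05, delta=0.01))  # 370

The bound grows only logarithmically in |H| and 1/delta but linearly in 1/epsilon; for infinite hypothesis classes, ln|H| is replaced by a term depending on the VC dimension.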

Unit V (12 Periods):

Machine Learning Assessment and Improvement: Statistical model selection, structural risk minimization, bootstrapping, bagging, boosting. Support Vector Machines: Margin of a classifier, the dual perceptron algorithm, learning non-linear hypotheses with perceptrons and kernel functions, the implicit non-linear feature space; theory: zero-Bayes and realizable infinite hypothesis classes, finite covering, margin-based bounds on risk, the maximal margin classifier.
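
A sketch of the dual perceptron with a kernel function, showing how a non-linear hypothesis is learned through an implicit feature space; the RBF kernel and XOR-style toy data are illustrative assumptions:

    import math

    def rbf_kernel(a, b, gamma=1.0):
        """Gaussian (RBF) kernel: inner product in an implicit feature space."""
        return math.exp(-gamma * sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

    def dual_perceptron(samples, labels, kernel, epochs=50):
        """Dual perceptron: keep a mistake count alpha_i per training point;
        the hypothesis is sign(sum_i alpha_i y_i K(x_i, x))."""
        alpha = [0] * len(samples)
        for _ in range(epochs):
            mistakes = 0
            for j, (x, y) in enumerate(zip(samples, labels)):
                score = sum(a * yi * kernel(xi, x)
                            for a, xi, yi in zip(alpha, samples, labels))
                if y * score <= 0:         # mistake: count it against point j
                    alpha[j] += 1
                    mistakes += 1
            if mistakes == 0:
                break
        return alpha

    # Toy XOR data: not linearly separable in the input space,
    # but separable in the feature space induced by the RBF kernel
    X = [(0, 0), (0, 1), (1, 0), (1, 1)]
    Y = [-1, 1, 1, -1]
    print(dual_perceptron(X, Y, rbf_kernel))

Because the algorithm touches the data only through kernel evaluations, the feature mapping never needs to be computed explicitly; the support vector machine refines this idea by choosing, among all consistent separators, the one with maximal margin.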

ESSENTIAL READINGS: 
1. E. Alpaydin, “Introduction to Machine Learning”, Prentice Hall of India, 2006.
2. R. O. Duda, P. E. Hart, and D. G. Stork, “Pattern Classification”, John Wiley and Sons, 2001.
3. Vladimir N. Vapnik, “Statistical Learning Theory”, John Wiley and Sons, 1998.
4. J. Shawe-Taylor and N. Cristianini, “An Introduction to Support Vector Machines”, Cambridge University Press, 2000.
REFERENCES: 

1. T. M. Mitchell, “Machine Learning”, McGraw-Hill, 1997.

2. C. M. Bishop, “Pattern Recognition and Machine Learning”, Springer, 2006.
