An Elementary Introduction to Statistical Learning Theory

Kulkarni, Sanjeev
Harman, Gilbert

€86.85 (VAT incl.)

CONTENTS:
Preface.
1. Introduction: Classification, Learning, Features, Applications. 1.1 Scope. 1.2 Why Machine Learning? 1.3 Some Applications. 1.4 Measurements, Features, and Feature Vectors. 1.5 The Need for Probability. 1.6 Supervised Learning. 1.7 Summary. 1.8 Appendix: Induction. 1.9 Questions. 1.10 References.
2. Probability. 2.1 Probability of Some Basic Events. 2.2 Probabilities of Compound Events. 2.3 Conditional Probability. 2.4 Drawing Without Replacement. 2.5 A Classic Birthday Problem. 2.6 Random Variables. 2.7 Expected Value. 2.8 Variance. 2.9 Summary. 2.10 Appendix: Interpretations of Probability. 2.11 Questions. 2.12 References.
3. Probability Densities. 3.1 An Example in Two Dimensions. 3.2 Random Numbers in [0, 1]. 3.3 Density Functions. 3.4 Probability Densities in Higher Dimensions. 3.5 Joint and Conditional Densities. 3.6 Expected Value and Variance. 3.7 Laws of Large Numbers. 3.8 Summary. 3.9 Appendix: Measurability. 3.10 Questions. 3.11 References.
4. The Pattern Recognition Problem. 4.1 A Simple Example. 4.2 Decision Rules. 4.3 Success Criterion. 4.4 The Best Classifier: Bayes Decision Rule. 4.5 Continuous Features and Densities. 4.6 Summary. 4.7 Appendix: Uncountably Many. 4.8 Questions. 4.9 References.
5. The Optimal Bayes Decision Rule. 5.1 Bayes Theorem. 5.2 Bayes Decision Rule. 5.3 Optimality and Some Comments. 5.4 An Example. 5.5 Bayes Theorem and Decision Rule with Densities. 5.6 Summary. 5.7 Appendix: Defining Conditional Probability. 5.8 Questions. 5.9 References.
6. Learning from Examples. 6.1 Lack of Knowledge of Distributions. 6.2 Training Data. 6.3 Assumptions on the Training Data. 6.4 A Brute Force Approach to Learning. 6.5 Curse of Dimensionality, Inductive Bias, and No Free Lunch. 6.6 Summary. 6.7 Appendix: What Sort of Learning? 6.8 Questions. 6.9 References.
7. The Nearest Neighbor Rule. 7.1 The Nearest Neighbor Rule. 7.2 Performance of the Nearest Neighbor Rule. 7.3 Intuition and Proof Sketch of Performance. 7.4 Using More Neighbors. 7.5 Summary. 7.6 Appendix: When People Use Nearest Neighbor Reasoning. 7.7 Questions. 7.8 References.
8. Kernel Rules. 8.1 Motivation. 8.2 A Variation on Nearest Neighbor Rules. 8.3 Kernel Rules. 8.4 Universal Consistency of Kernel Rules. 8.5 Potential Functions. 8.6 More General Kernels. 8.7 Summary. 8.8 Appendix: Kernels, Similarity, and Features. 8.9 Questions. 8.10 References.
9. Neural Networks: Perceptrons. 9.1 Multilayer Feed Forward Networks. 9.2 Neural Networks for Learning and Classification. 9.3 Perceptrons. 9.4 Learning Rule for Perceptrons. 9.5 Representational Capabilities of Perceptrons. 9.6 Summary. 9.7 Appendix: Models of Mind. 9.8 Questions. 9.9 References.
10. Multilayer Networks. 10.1 Representation Capabilities of Multilayer Networks. 10.2 Learning and Sigmoidal Outputs. 10.3 Training Error and Weight Space. 10.4 Error Minimization by Gradient Descent. 10.5 Backpropagation. 10.6 Derivation of Backpropagation Equations. 10.7 Summary. 10.8 Appendix: Gradient Descent …

  • ISBN: 978-0-470-64183-5
  • Publisher: John Wiley & Sons
  • Binding: Hardcover
  • Pages: 232
  • Publication date: 03/06/2011
  • Number of volumes: 1
  • Language: English