# 统计学习基础 (The Elements of Statistical Learning, English edition)

2009-01

ISBN: 9787506292313

• ##### Synopsis:
The learning problems that we consider can be roughly categorized as either supervised or unsupervised. In supervised learning, the goal is to predict the value of an outcome measure based on a number of input measures; in unsupervised learning, there is no outcome measure, and the goal is to describe the associations and patterns among a set of input measures.
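As a concrete illustration of the supervised-learning idea described above, here is a minimal sketch (not code from the book) of the two "simple approaches to prediction" named in the book's Section 2.3, least squares and nearest neighbors, for a single input variable. The function names `least_squares_fit` and `nearest_neighbor_predict` are this sketch's own, not the book's.

```python
def least_squares_fit(xs, ys):
    """Return (intercept, slope) minimizing the sum of squared residuals."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope


def nearest_neighbor_predict(xs, ys, x0):
    """1-NN: predict by copying the outcome of the closest training input."""
    i = min(range(len(xs)), key=lambda i: abs(xs[i] - x0))
    return ys[i]


# Toy training data lying exactly on y = 1 + 2x.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]

b0, b1 = least_squares_fit(xs, ys)            # recovers (1.0, 2.0)
y_hat = nearest_neighbor_predict(xs, ys, 2.2)  # closest x is 2.0, so 5.0
print(b0, b1, y_hat)
```

Least squares commits to a global linear model, while the nearest-neighbor rule makes no model assumption at all; the book's Section 2.3.3 discusses the spectrum between these two extremes.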
• ##### About the Author:
Author: Trevor Hastie
• ##### Contents:
Preface
1 Introduction
2 Overview of Supervised Learning
2.1 Introduction
2.2 Variable Types and Terminology
2.3 Two Simple Approaches to Prediction: Least Squares and Nearest Neighbors
2.3.1 Linear Models and Least Squares
2.3.2 Nearest-Neighbor Methods
2.3.3 From Least Squares to Nearest Neighbors
2.4 Statistical Decision Theory
2.5 Local Methods in High Dimensions
2.6 Statistical Models, Supervised Learning and Function Approximation
2.6.1 A Statistical Model for the Joint Distribution Pr(X, Y)
2.6.2 Supervised Learning
2.6.3 Function Approximation
2.7 Structured Regression Models
2.7.1 Difficulty of the Problem
2.8 Classes of Restricted Estimators
2.8.1 Roughness Penalty and Bayesian Methods
2.8.2 Kernel Methods and Local Regression
2.8.3 Basis Functions and Dictionary Methods
Bibliographic Notes
Exercises

3 Linear Methods for Regression
3.1 Introduction
3.2 Linear Regression Models and Least Squares
3.2.1 Example: Prostate Cancer
3.2.2 The Gauss-Markov Theorem
3.3 Multiple Regression from Simple Univariate Regression
3.3.1 Multiple Outputs
3.4 Subset Selection and Coefficient Shrinkage
3.4.1 Subset Selection
3.4.2 Prostate Cancer Data Example (Continued)
3.4.3 Shrinkage Methods
3.4.4 Methods Using Derived Input Directions
3.4.5 Discussion: A Comparison of the Selection and Shrinkage Methods
3.4.6 Multiple Outcome Shrinkage and Selection
3.5 Computational Considerations
Bibliographic Notes
Exercises

4 Linear Methods for Classification
4.1 Introduction
4.2 Linear Regression of an Indicator Matrix
4.3 Linear Discriminant Analysis
4.3.1 Regularized Discriminant Analysis
4.3.2 Computations for LDA
4.3.3 Reduced-Rank Linear Discriminant Analysis
4.4 Logistic Regression
4.4.1 Fitting Logistic Regression Models
4.4.2 Example: South African Heart Disease
4.4.4 Logistic Regression or LDA?
4.5 Separating Hyperplanes
4.5.1 Rosenblatt's Perceptron Learning Algorithm
4.5.2 Optimal Separating Hyperplanes
Bibliographic Notes
Exercises

5 Basis Expansions and Regularization
5.1 Introduction
5.2 Piecewise Polynomials and Splines
5.2.1 Natural Cubic Splines
5.2.2 Example: South African Heart Disease (Continued)
5.2.3 Example: Phoneme Recognition
5.3 Filtering and Feature Extraction
5.4 Smoothing Splines
5.4.1 Degrees of Freedom and Smoother Matrices
5.5 Automatic Selection of the Smoothing Parameters
5.5.1 Fixing the Degrees of Freedom
5.6 Nonparametric Logistic Regression
5.7 Multidimensional Splines
5.8 Regularization and Reproducing Kernel Hilbert Spaces
5.8.1 Spaces of Functions Generated by Kernels
5.8.2 Examples of RKHS
5.9 Wavelet Smoothing
5.9.1 Wavelet Bases and the Wavelet Transform
Bibliographic Notes
Exercises
Appendix: Computational Considerations for Splines
Appendix: B-splines
Appendix: Computations for Smoothing Splines

6 Kernel Methods
6.1 One-Dimensional Kernel Smoothers
6.1.1 Local Linear Regression
6.1.2 Local Polynomial Regression
6.2 Selecting the Width of the Kernel
6.3 Local Regression in ℝ^p
6.4 Structured Local Regression Models in ℝ^p
6.4.1 Structured Kernels
6.4.2 Structured Regression Functions
6.5 Local Likelihood and Other Models
6.6 Kernel Density Estimation and Classification
6.6.1 Kernel Density Estimation
6.6.2 Kernel Density Classification
6.6.3 The Naive Bayes Classifier
6.8 Mixture Models for Density Estimation and Classification
6.9 Computational Considerations
Bibliographic Notes
Exercises

7 Model Assessment and Selection
7.1 Introduction
7.2 Bias, Variance and Model Complexity
7.3 The Bias-Variance Decomposition
7.4 Optimism of the Training Error Rate
7.5 Estimates of In-Sample Prediction Error
7.6 The Effective Number of Parameters
7.7 The Bayesian Approach and BIC
7.8 Minimum Description Length
7.9 Vapnik-Chervonenkis Dimension
7.9.1 Example (Continued)
7.10 Cross-Validation
7.11 Bootstrap Methods
7.11.1 Example (Continued)
Bibliographic Notes
Exercises

8 Model Inference and Averaging
8.1 Introduction
8.2 The Bootstrap and Maximum Likelihood Methods
8.2.1 A Smoothing Example
8.2.2 Maximum Likelihood Inference
8.2.3 Bootstrap versus Maximum Likelihood
8.3 Bayesian Methods
8.4 Relationship Between the Bootstrap and Bayesian Inference
8.5 The EM Algorithm
8.5.1 Two-Component Mixture Model
8.5.2 The EM Algorithm in General
8.5.3 EM as a Maximization-Maximization Procedure
8.6 MCMC for Sampling from the Posterior
8.7 Bagging
8.7.1 Example: Trees with Simulated Data
8.8 Model Averaging and Stacking
8.9 Stochastic Search: Bumping
Bibliographic Notes
Exercises

9.1.3 Summary
9.2 Tree-Based Methods
11 Neural Networks
12 Support Vector Machines and Flexible Discriminants
13 Prototype Methods and Nearest-Neighbors
14 Unsupervised Learning
References
Author Index
Index

