The English edition of Andrew Ng's *Machine Learning Yearning* is complete: [Chapters 1–58](Machine Learning Yearning 1-58(by Andrew NG).pdf)
Official sites:
- http://www.mlyearning.org/
- https://www.deeplearning.ai/machine-learning-yearning/
- https://d2wvfoqc9gyqzf.cloudfront.net/content/uploads/2018/09/Ng-MLY01-13.pdf
Original author: Andrew Ng
Disclaimer: This repository is intended solely to share knowledge; it has no commercial purpose.
TODO
- Due to copyright issues, the Chinese translation cannot be updated for now.
Chapter 1. Why Machine Learning Strategy
Chapter 2. How to use this book to help your team
Chapter 3. Prerequisites and Notation
Chapter 4. Scale drives machine learning progress
Chapter 5. Your development and test sets
Chapter 6. Your dev and test sets should come from the same distribution
Chapter 7. How large do the dev/test sets need to be?
Chapter 8. Establish a single-number evaluation metric for your team to optimize
Chapter 9. Optimizing and satisficing metrics
Chapter 10. Having a dev set and metric speeds up iterations
Chapter 11. When to change dev/test sets and metrics
Chapter 12. Takeaways: Setting up development and test sets
Chapter 13. Build your first system quickly, then iterate
Chapter 14. Error analysis: Look at dev set examples to evaluate ideas
Chapter 15. Evaluate multiple ideas in parallel during error analysis
Chapter 16. Cleaning up mislabeled dev and test set examples
Chapter 17. If you have a large dev set, split it into two subsets, only one of which you look at
Chapter 18. How big should the Eyeball and Blackbox dev sets be?
Chapter 19. Takeaways: Basic error analysis
Chapter 20. Bias and Variance: The two big sources of error
Chapter 21. Examples of Bias and Variance
Chapter 22. Comparing to the optimal error rate
Chapter 23. Addressing Bias and Variance
Chapter 24. Bias vs. Variance tradeoff
Chapter 25. Techniques for reducing avoidable bias
Chapter 26. Techniques for reducing Variance
Chapter 27. Error analysis on the training set
Chapter 28. Diagnosing bias and variance: Learning curves
Chapter 29. Plotting training error
Chapter 30. Interpreting learning curves: High bias
Chapter 31. Interpreting learning curves: Other cases
Chapter 32. Plotting learning curves
Chapter 33. Why we compare to human-level performance
Chapter 34. How to define human-level performance
Chapter 35. Surpassing human-level performance
Chapter 36. Why train and test on different distributions
Chapter 37. Whether to use all your data
Chapter 38. Whether to include inconsistent data
Chapter 39. Weighting data
Chapter 40. Generalizing from the training set to the dev set
Chapter 41. Identifying Bias, Variance, and Data Mismatch Errors
Chapter 42. Addressing data mismatch
Chapter 43. Artificial data synthesis
Chapter 44. The Optimization Verification test
Chapter 45. General form of Optimization Verification test
Chapter 46. Reinforcement learning example
Chapter 47. The rise of end-to-end learning
Chapter 48. More end-to-end learning examples
Chapter 49. Pros and cons of end-to-end learning
Chapter 50. Choosing pipeline components: Data availability
Chapter 51. Choosing pipeline components: Task simplicity
Chapter 52. Directly learning rich outputs
Chapter 53. Error Analysis by Parts
Chapter 54. Attributing error to one part
Chapter 55. General case of error attribution
Chapter 56. Error analysis by parts and comparison to human-level performance
Chapter 57. Spotting a flawed ML pipeline
Chapter 58. Building a superhero team - Get your teammates to read this