| No. | Title | Author | Date | Views |
| --- | --- | --- | --- | --- |
| 30 | 027. Logistic Regression: Calculating a Probability | 사상최강 | 2021-01-10 | 9 |
| 29 | 026. Regularization for Simplicity: Lambda (λ) | 사상최강 | 2020-12-20 | 28 |
| 28 | 025. Regularization for Simplicity: L2 Regularization | 사상최강 | 2020-12-13 | 20 |
| 27 | 024. Feature Crosses | 사상최강 | 2020-12-07 | 25 |
| 26 | 023. Representation: Cleaning Data: Handling Missing Values | 사상최강 | 2020-11-23 | 44 |
| 25 | 022. Representation: Cleaning Data: Scaling (Supplement) | 사상최강 | 2020-11-15 | 36 |
| 24 | 021. Representation: Cleaning Data | 사상최강 | 2020-11-08 | 24 |
| 23 | 020. Representation – Requirements for Creating High-Quality Features | 사상최강 | 2020-11-01 | 52 |
| 22 | 019. Representation – Feature Engineering | 사상최강 | 2020-10-25 | 56 |
| 21 | 018. Validation Set | 사상최강 | 2020-10-19 | 33 |
| 20 | 017. Training Set and Test Set | 사상최강 | 2020-10-11 | 33 |
| 19 | 016. Generalization | 사상최강 | 2020-10-04 | 50 |
| 18 | 015. First Steps with TensorFlow 2.0 – Synthetic Features and Outliers | 사상최강 | 2020-09-20 | 72 |
| 17 | 014. Introduction to Python pandas | 사상최강 | 2020-09-13 | 102 |
| 16 | 013. Introduction to TensorFlow | 사상최강 | 2020-08-30 | 87 |
| 15 | 012. Epoch, Step, and Batch Size Concepts | 사상최강 | 2020-08-23 | 89 |
| 14 | 011. Reducing Loss – Types of Gradient Descent by Batch Size | 사상최강 | 2020-08-08 | 99 |
| 13 | 010. Reducing Loss – Learning Rate | 사상최강 | 2020-07-26 | 87 |
| 12 | 009. Reducing Loss – Gradient Descent | 사상최강 | 2020-07-12 | 92 |
| 11 | 008. Reducing Loss – Iterative Approach | 사상최강 | 2020-07-05 | 79 |