No. | Title | Author | Date | Views |
--- | --- | --- | --- | --- |
31 | 028. Logistic Regression: Loss and Regularization | 사상최강 | 2021-01-17 | 11 |
30 | 027. Logistic Regression: Calculating a Probability | 사상최강 | 2021-01-10 | 12 |
29 | 026. Regularization for Simplicity: Lambda (λ) | 사상최강 | 2020-12-20 | 30 |
28 | 025. Regularization for Simplicity: L2 Regularization | 사상최강 | 2020-12-13 | 21 |
27 | 024. Feature Crosses | 사상최강 | 2020-12-07 | 28 |
26 | 023. Representation: Cleaning Data: Handling missing values | 사상최강 | 2020-11-23 | 51 |
25 | 022. Representation: Cleaning Data: Scaling (Supplement) | 사상최강 | 2020-11-15 | 43 |
24 | 021. Representation: Cleaning Data | 사상최강 | 2020-11-08 | 25 |
23 | 020. Representation - Requirements for Creating Good Features | 사상최강 | 2020-11-01 | 54 |
22 | 019. Representation - Feature Engineering | 사상최강 | 2020-10-25 | 59 |
21 | 018. Validation Set | 사상최강 | 2020-10-19 | 35 |
20 | 017. Training Set and Test Set | 사상최강 | 2020-10-11 | 34 |
19 | 016. Generalization | 사상최강 | 2020-10-04 | 56 |
18 | 015. First Steps with TensorFlow 2.0 – Synthetic Features and Outliers | 사상최강 | 2020-09-20 | 76 |
17 | 014. Introduction to Python pandas | 사상최강 | 2020-09-13 | 107 |
16 | 013. Introduction to TensorFlow | 사상최강 | 2020-08-30 | 90 |
15 | 012. Epoch, Step, and Batch Size Concepts Explained | 사상최강 | 2020-08-23 | 93 |
14 | 011. Reducing Loss – Gradient Descent Variants by Batch Size | 사상최강 | 2020-08-08 | 103 |
13 | 010. Reducing Loss - Learning Rate | 사상최강 | 2020-07-26 | 90 |
12 | 009. Reducing Loss - Gradient Descent | 사상최강 | 2020-07-12 | 96 |