๋ณธ๋ฌธ ๋ฐ”๋กœ๊ฐ€๊ธฐ


Machine Learning/๋”ฅ๋Ÿฌ๋‹

(11)
MDN MDN์—์„œ๋Š” ์ถœ๋ ฅ๊ฐ’์„ ๋ช…์‹œ์ ์œผ๋กœ ์ƒ์„ฑํ•˜์—ฌ x->y ๋งคํ•‘์„ ๋ชจ๋ธ๋งํ•˜๋Š” ๋Œ€์‹  ๊ฐ ๋Œ€์ƒ์˜ ํ™•๋ฅ  ๋ถ„ํฌ๋ฅผ ํ•™์Šตํ•˜๊ณ  ์ถœ๋ ฅ์„ ์ƒ˜ํ”Œ๋งํ•œ๋‹ค. ๋ถ„ํฌ ์ž์ฒด๋Š” ์—ฌ๋Ÿฌ ๊ฐ€์šฐ์‹œ์•ˆ(๊ฐ€์šฐ์Šค ํ˜ผํ•ฉ) ์œผ๋กœ ํ‘œ์‹œ๋œ๋‹ค. ๋ชจ๋“  ์ž…๋ ฅ x์— ๋Œ€ํ•ด distribution parameters๋ฅผ ํ•™์Šตํ•œ๋‹ค. mean, variance, mixing coefficient k : ๊ฐ€์šฐ์‹œ์•ˆ ์ˆ˜ l : ์ž…๋ ฅ ํ”ผ์ฒ˜ ์ˆ˜ (l + 2) k ์ถœ๋ ฅ๊ฐ’: the mixing coefficients์™€ component density parameters๋ฅผ ํ•™์Šตํ•œ๋‹ค. # In our toy example, we have single input feature l = 1 # Number of gaussians to represent the multimodal distribution..
[Keras] Noise Regularization https://machinelearningmastery.com/how-to-improve-deep-learning-model-robustness-by-adding-noise/ How to Improve Deep Learning Model Robustness by Adding Noise Adding noise to an underconstrained neural network model with a small training dataset can have a regularizing effect and reduce overfitting. Keras supports the addition of Gaussian noise via a separate layer called the GaussianNoise laye..
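GaussianNoise is a standard Keras layer; the small regression model below is an illustrative sketch of how the linked post's idea is typically wired in, with the architecture and noise stddev chosen arbitrarily.

from tensorflow import keras
from tensorflow.keras import layers

# GaussianNoise adds zero-mean Gaussian noise to its inputs during training only;
# like Dropout, it is inactive at inference time.
model = keras.Sequential([
    layers.Dense(64, activation="relu", input_shape=(20,)),
    layers.GaussianNoise(0.1),   # stddev of the added noise (illustrative value)
    layers.Dense(64, activation="relu"),
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")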
ํ›ˆ๋ จ์…‹, ๊ฒ€์ฆ์…‹, ์‹œํ—˜์…‹ ํผ์˜ด: https://tykimos.github.io/2017/03/25/Dataset_and_Fit_Talk/ ๋ฐ์ดํ„ฐ์…‹ ์ด์•ผ๊ธฐ ๋”ฅ๋Ÿฌ๋‹ ๋ชจ๋ธ์„ ํ•™์Šต์‹œํ‚ค๋ ค๋ฉด ๋ฐ์ดํ„ฐ์…‹์ด ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ํ’€๊ณ ์ž ํ•˜๋Š” ๋ฌธ์ œ ๋ฐ ๋งŒ๋“ค๊ณ ์ž ํ•˜๋Š” ๋ชจ๋ธ์— ๋”ฐ๋ผ ๋ฐ์ดํ„ฐ์…‹ ์„ค๊ณ„๋„ ๋‹ฌ๋ผ์ง‘๋‹ˆ๋‹ค. ๋ฐ์ดํ„ฐ์…‹์„ ์–ด๋–ป๊ฒŒ ๊ตฌ์„ฑํ•˜๊ณ  ๋ชจ๋ธ์„ ์–ด๋–ป๊ฒŒ ๊ฒ€์ฆํ•  ์ง€ ์•Œ์•„๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ํ›ˆ๋ จ์…‹, ๊ฒ€์ฆ์…‹, ์‹œํ—˜์…‹ ๋‹น์‹ ์ด ๊ณ ๋“ฑํ•™๊ต ๋‹ด์ž„์„ ์ƒ๋‹˜์ด๊ณ  ์ˆ˜๋Šฅ ๋ณผ ํ•™์ƒ์ด 3๋ช…์ด ์žˆ๋‹ค๊ณ  ๊ฐ€์ •์„ ํ•ด๋ด…์‹œ๋‹ค. ์ด ์„ธ ๋ช… ์ค‘ ๋ˆ„๊ฐ€ ์ˆ˜๋Šฅ์„ ๊ฐ€์žฅ ์ž˜ ๋ณผ์ง€ ์•Œ์•„ ๋งžํ˜€๋ณด๋„๋ก ํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค. ๋‹น์‹ ์—๊ฒŒ๋Š” ๋ชจ์˜๊ณ ์‚ฌ 5ํšŒ๋ถ„๊ณผ ์ž‘๋…„ ์ˆ˜๋Šฅ ๋ฌธ์ œ 1ํšŒ๋ถ„์„ ๊ฐ€์ง€๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ๊ณผ ๊ฐ™์ด ๋น„์œ ๋  tykimos.github.io https://book.coalastudy.com/data-science-lv1/week3/..
Validation, Test ๋ฐ์ดํ„ฐ์„ธํŠธ ๋น„๊ต Validation ๋ฐ์ดํ„ฐ์„ธํŠธ๋Š” ๋ชจ๋ธ์˜ ์„ฑ๋Šฅ์„ ํ‰๊ฐ€ํ•˜์—ฌ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ ํŠœ๋‹์— ์“ฐ์ผ ์ˆ˜ ์žˆ๋‹ค. (์ธต์˜ ์ˆ˜, ์ธต์˜ ์œ ๋‹› ์ˆ˜ ๋“ฑ) ๊ฒ€์ฆ ์„ธํŠธ์˜ ์„ฑ๋Šฅ์„ ๊ธฐ๋ฐ˜์œผ๋กœ ๋ชจ๋ธ์˜ ์„ค์ •์„ ํŠœ๋‹ํ•˜๋ฉด ๊ฒ€์ฆ ์„ธํŠธ๋กœ ๋ชจ๋ธ์„ ์ง์ ‘ ํ›ˆ๋ จํ•˜์ง€ ์•Š๋”๋ผ๋„ ๋น ๋ฅด๊ฒŒ ๊ฒ€์ฆ์„ธํŠธ์— ๊ณผ๋Œ€์ ํ•ฉ ๋  ์ˆ˜ ์žˆ๋‹ค. -> ํ•œ ๋ฒˆ ํŠœ๋‹ํ•˜๊ณ  ๋‚˜์„œ ๊ฒ€์ฆ์„ธํŠธ์— ํ‰๊ฐ€ํ•œ ๊ฒฐ๊ณผ๋ฅผ ๊ฐ€์ง€๊ณ  ๋‹ค์‹œ ๋ชจ๋ธ์„ ์กฐ์ •ํ•˜๋Š” ๊ณผ์ •์„ ์—ฌ๋Ÿฌ ๋ฒˆ ๋ฐ˜๋ณตํ•˜๋ฉด ๊ฒ€์ฆ์„ธํŠธ์— ๊ด€ํ•œ ์ •๋ณด๋ฅผ ๋ชจ๋ธ์— ์•„์ฃผ ๋งŽ์ด ๋…ธ์ถœ์‹œํ‚ค๊ฒŒ ๋˜์–ด ๊ณผ๋Œ€์ ํ•ฉ ๋  ์ˆ˜ ์žˆ๋‹ค. Test ๋ฐ์ดํ„ฐ์„ธํŠธ๋Š” ์™„์ „ํžˆ ์ƒˆ๋กœ์šด ๋ฐ์ดํ„ฐ์ด์–ด์•ผ ํ•˜๋ฉฐ ๋ชจ๋ธ์€ ๊ฐ„์ ‘์ ์œผ๋กœ ์–ด๋– ํ•œ ์ •๋ณด๋„ ์ฃผ๋ฉด ์•ˆ๋จ -> ํ…Œ์ŠคํŠธ ์„ธํŠธ ์„ฑ๋Šฅ์— ๊ธฐ์ดˆํ•˜์—ฌ ํŠœ๋‹ํ•œ ๋ชจ๋ธ์˜ ๋ชจ๋“  ์„ค์ •์€ ์ผ๋ฐ˜ํ™” ์„ฑ๋Šฅ์„ ์™œ๊ณก์‹œํ‚ฌ ๊ฒƒ์ด๋‹ค. * ๋ฐ์ดํ„ฐ๊ฐ€ ์ ์„ ๊ฒฝ์šฐ ์‚ฌ์šฉํ•˜๋ฉด ์ข‹์€ ๊ณ ๊ธ‰ ๊ธฐ๋ฒ• - ๋‹จ์ˆœ ํ™€๋“œ์•„์›ƒ ๊ฒ€์ฆ(ho..
๋”ฅ๋Ÿฌ๋‹ ๋ชจ๋ธ์˜ ๊ต์ฐจ๊ฒ€์ฆ (Cross Validation) ์ถœ์ฒ˜: https://3months.tistory.com/321 [Deep Play] ๋”ฅ๋Ÿฌ๋‹ ๋ชจ๋ธ์˜ ๊ต์ฐจ๊ฒ€์ฆ (Cross Validation) ๋”ฅ๋Ÿฌ๋‹ ๋ชจ๋ธ์˜ ๊ต์ฐจ๊ฒ€์ฆ (Cross Validation) ๊ต์ฐจ๊ฒ€์ฆ์ด๋ž€? Keras๋กœ ๋”ฅ๋Ÿฌ๋‹ ๋ชจ๋ธ์„ ๋งŒ๋“ค๊ณ  ์ด๋ฅผ ๊ต์ฐจ ๊ฒ€์ฆ ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ํฌ์ŠคํŒ…ํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค. ์šฐ์„  ๊ต์ฐจ ๊ฒ€์ฆ(Cross validation) ์ด ๋ฌด์—‡์ธ์ง€์— ๋Œ€ํ•ด ์„ค๋ช…์ด ํ•„์š”ํ•  ๊ฒƒ.. 3months.tistory.com
[Machine Learning] Train data normalization — Should test data be normalized with the training set's mean and std? - If you have thousands of data points and the test data represents the training data perfectly, the statistics of either set would do (though this is hard to verify in practice). https://www.researchgate.net/post/If_I_used_data_normalization_x-meanx_stdx_for_training_data_would_I_use_train_Mean_and_Standard_Deviation_to_normalize_test_data
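A minimal sketch of the usual practice the question points at: compute the statistics on the training set only and reuse them for the test set. The arrays here are illustrative placeholders.

import numpy as np

# Hypothetical train/test feature matrices.
x_train = np.random.rand(800, 5)
x_test = np.random.rand(200, 5)

# Compute normalization statistics on the training set only...
mean = x_train.mean(axis=0)
std = x_train.std(axis=0)

# ...and reuse them for the test set, so no test-set information leaks into preprocessing.
x_train_norm = (x_train - mean) / std
x_test_norm = (x_test - mean) / std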
Keras Tuner ์„ค์น˜ ํ•„์š”: Python 3.6 TensorFlow 2.0 Beta pip install git+https://github.com/keras-team/keras-tuner.git ๊ธฐ๋ณธ ์‚ฌํ•ญ random search๋ฅผ ์‚ฌ์šฉํ•ด์„œ a single-layer dense neural network ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ ํŠœ๋‹์„ ํ•ด๋ณด์ž. ๋จผ์ €, model-building ํ•จ์ˆ˜๋ฅผ ์ •์˜ํ•œ๋‹ค. hp๋Š” hyperparameter๋ฅผ ์ƒ˜ํ”Œ๋ง ํ•  ์ˆ˜ ์žˆ๋Š” ์ธ์ˆ˜์ด๋‹ค. ex) hp.Range('units', min_value=32, max_value=512, step=32) (ํŠน์ • ๋ฒ”์œ„์˜ ์ •์ˆ˜) return์€ ์ปดํŒŒ์ผ๋œ model from tensorflow import keras from tensorflow.keras import laye..
MDN https://www.katnoria.com/mdn/ Mixture Density Networks — Supervised machine learning models learn the mapping between the input features (x) and the target values (y). Regression models predict continuous output such as house price or stock price, whereas classification models predict.. https://kangbk0120.github.io/articles/2018-05/MDN Mixture Densit..