Bengio, Yoshua; LeCun, Yann; Hinton, Geoffrey (2015). "Deep Learning". Nature. 521 (7553): 436–444. Bibcode:2015Natur.521..436L. doi:10.1038/nature14539. PMID 26017442. S2CID 3074096.
Deng, L.; Yu, D. (2014). "Deep Learning: Methods and Applications". http://research.microsoft.com/pubs/209355/DeepLearning-NowPublishing-Vol7-SIG-039.pdf
Deng, L.; Yu, D. (2014). "Deep Learning: Methods and Applications" (PDF). Foundations and Trends in Signal Processing. 7 (3–4): 1–199. doi:10.1561/2000000039.
Bengio, Y.; Courville, A.; Vincent, P. (2013). "Representation Learning: A Review and New Perspectives". IEEE Transactions on Pattern Analysis and Machine Intelligence. 35 (8): 1798–1828. arXiv:1206.5538. doi:10.1109/tpami.2013.50.
Balázs Csanád Csáji (2001). Approximation with Artificial Neural Networks. Faculty of Sciences, Eötvös Loránd University, Hungary.
Cybenko (1989). "Approximations by superpositions of sigmoidal functions" (PDF). Mathematics of Control, Signals, and Systems. 2 (4): 303–314. doi:10.1007/bf02551274.
Hornik, Kurt (1991). "Approximation Capabilities of Multilayer Feedforward Networks". Neural Networks. 4 (2): 251–257. doi:10.1016/0893-6080(91)90009-t.
Haykin, Simon S. (1999). Neural Networks: A Comprehensive Foundation. Prentice Hall. ISBN 978-0-13-273350-2.
Hassoun, Mohamad H. (1995). Fundamentals of Artificial Neural Networks. MIT Press. p. 48. ISBN 978-0-262-08239-6.
Cybenko (1989). "Approximations by superpositions of sigmoidal functions" (PDF). Mathematics of Control, Signals, and Systems. 2 (4): 303–314. doi:10.1007/bf02551274.
Hornik, Kurt (1991). "Approximation Capabilities of Multilayer Feedforward Networks". Neural Networks. 4 (2): 251–257. doi:10.1016/0893-6080(91)90009-t.
Murphy, Kevin P. (24 August 2012). Machine Learning: A Probabilistic Perspective. MIT Press. ISBN 978-0-262-01802-9.
Murphy, Kevin P. (24 August 2012). Machine Learning: A Probabilistic Perspective. MIT Press. ISBN 978-0-262-01802-9.
Hinton, G. E.; Srivastava, N.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. R. (2012). "Improving neural networks by preventing co-adaptation of feature detectors". arXiv:1207.0580 [cs.NE].
Deep learning (Thai: การเรียนรู้เชิงลึก) is a part of machine learning methods based on artificial neural networks and feature learning. The learning can be supervised, semi-supervised, or unsupervised.[1] The word "deep" refers to the use of many network layers, which gives greater capacity, faster learning, and a clearer grasp of structure.

The foundation of deep learning is algorithms that try to model the meaning of data at a high level by building an architecture composed of many sub-structures, each obtained from a nonlinear transformation.[2] Deep learning can thus be viewed as a machine learning approach that tries to learn efficient representations of data. For example, an image can be represented as a vector of per-pixel brightness values, or, at a higher level, as a set of object edges, or as regions of particular shapes. Such representations make downstream learning tasks easier, whether face recognition or facial-expression recognition. Deep learning is therefore regarded as a method with high potential for handling features in unsupervised or semi-supervised learning.

Researchers in this field look for better ways of representing data and then build models that learn from those representations at scale. Some methods draw inspiration from neuroscience, in particular from how the brain interprets information while processing it. One example adopted in deep learning is neural coding, the process of finding the relationship between a stimulus and the responses of neurons in the brain. Machine learning researchers have proposed a number of learning architectures based on these deep learning principles.
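To make the idea of representing the same image at several levels concrete, here is a small illustrative sketch (the tiny synthetic image and the finite-difference "edge detector" are invented for this example, not part of any standard method):

```python
import numpy as np

# A tiny synthetic 8x8 grayscale "image": a bright square on a dark background.
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0

# Lowest-level representation: the raw brightness of every pixel, as one vector.
pixels = img.ravel()               # shape (64,)

# A higher-level representation: edge locations, computed here as finite
# differences between neighbouring pixels along each axis.
dx = np.abs(np.diff(img, axis=1))  # vertical edges (changes along rows)
dy = np.abs(np.diff(img, axis=0))  # horizontal edges (changes along columns)
edge_count = int((dx > 0).sum() + (dy > 0).sum())
```

The 64-dimensional pixel vector and the much sparser edge map describe the same image; a deep model learns a hierarchy of such representations instead of having them hand-designed.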
Among them are deep artificial neural networks, convolutional neural networks, deep belief networks, and recurrent neural networks, which are widely used in computer vision, speech recognition, natural language processing, audio recognition, and bioinformatics.

Definition

Deep learning is a branch of machine learning that:[3] consists of many layers of nonlinear processing units, where the output of each layer serves as the input to the next; is based on the (typically unsupervised) learning of several layers of features or representations, with higher-level features derived from lower-level ones so as to form a hierarchical representation; and is part of the representation learning subfield of machine learning.

In other words, deep learning is characterized by (1) many layers of nonlinear processing units and (2) each layer learning a feature representation, whether supervised or unsupervised. The structure of each layer depends on the problem to be solved: it may be a hidden layer of an artificial neural network or a more complex logical processing unit, or it may be a node in a deep generative model such as a deep belief network or a deep Boltzmann machine.

Basic concepts

The general principle of deep learning is to have many processing layers, with the input of each layer coming from its interaction with other layers. Deep learning tries to capture increasingly deep relationships: the more layers there are, and the more processing units per layer, the more abstract the information in the higher layers becomes. Deep learning architectures are usually built layer by layer with a greedy method, and it is this extraction of progressively more abstract features at each layer that makes deep learning more effective than other approaches.[4] For example, the early layers might learn that an incoming image is composed of line segments.
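The defining property that "the output of each layer is the input of the next" can be sketched as a plain forward pass through stacked nonlinear units. The layer sizes and random weights below are arbitrary stand-ins for learned parameters; this is a minimal sketch, not a training procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    """A simple nonlinear activation: max(0, z) elementwise."""
    return np.maximum(0.0, z)

# Hypothetical layer sizes: 4 input features, two hidden layers, one output.
sizes = [4, 8, 8, 1]
# Random weights stand in for parameters that training would normally learn.
params = [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
          for m, n in zip(sizes[:-1], sizes[1:])]

def forward(x):
    """Stack of nonlinear processing units: h_{k+1} = relu(h_k W_k + b_k)."""
    h = x
    for W, b in params:
        h = relu(h @ W + b)   # this layer's output feeds the next layer
    return h

x = rng.standard_normal((3, 4))   # a batch of 3 examples
y = forward(x)                    # shape (3, 1)
```

Each intermediate `h` is one layer's representation of the input; deeper layers compose the representations of the layers below them.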
Higher layers then combine those line segments into shapes such as rectangles, and subsequent layers relate the rectangles to one another until the computer recognizes the incoming image as, say, a picture of a national flag. In supervised learning, deep learning reduces the burden of finding relevant features, because it automatically transforms the data into higher-level representations and de-emphasizes redundant information. It can also be adapted to unsupervised learning.

Interpretation

Deep learning can be interpreted in two ways: via the universal approximation theorem, and via probabilistic inference.

The universal approximation theorem concerns the ability of a feedforward neural network with a single hidden layer of finite size to approximate continuous functions.[5][6][7][8][9] George Cybenko proved the first version of the theorem for sigmoid activation functions in 1989,[10] and Hornik extended the proof to multilayer feedforward networks in 1991.[11]

The probabilistic interpretation derives from machine learning.[12] It was first put forward by Geoffrey Hinton, Yoshua Bengio, Yann LeCun, and Jürgen Schmidhuber, pioneering scientists of the modern deep learning era. This view emphasizes tuning a deep learning model through optimization that performs well on both the training data and the test data. The probabilistic interpretation treats the activation nonlinearity as a cumulative distribution function,[13] which led to the technique of using dropout as a regularizer for neural networks.[14]
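A minimal illustration of the universal approximation theorem in Cybenko's setting: a single hidden layer of steep sigmoid units approximating a continuous function on [0, 1]. The weights here are hand-constructed rather than trained, and the grid construction and constants are assumptions made for this sketch only:

```python
import numpy as np

def sigmoid(z):
    # Clip to avoid overflow warnings in exp for extreme inputs.
    return 1.0 / (1.0 + np.exp(-np.clip(z, -500.0, 500.0)))

def one_hidden_layer_approx(g, n_units=80, steepness=400.0):
    """Approximate a continuous g on [0, 1] with one hidden layer of sigmoids,
    in the spirit of Cybenko (1989): each steep sigmoid acts as a soft step
    that switches on at a grid point, and the output weights are the
    increments of g between neighbouring grid points."""
    xs = np.linspace(0.0, 1.0, n_units + 1)
    out_weights = np.diff(g(xs))        # increment of g over each interval
    biases = -steepness * xs[:-1]       # unit i switches on near xs[i]
    def f(x):
        x = np.asarray(x, dtype=float)
        hidden = sigmoid(steepness * x[..., None] + biases)  # hidden layer
        return g(0.0) + hidden @ out_weights                 # output layer
    return f

g = lambda x: np.sin(2.0 * np.pi * x)   # target continuous function
f = one_hidden_layer_approx(g)
grid = np.linspace(0.0, 1.0, 1000)
max_err = float(np.max(np.abs(f(grid) - g(grid))))
```

The approximation error shrinks as `n_units` grows, which is the content of the theorem: a wide enough single hidden layer suffices, though it says nothing about how efficiently such a network can be trained.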
Retrieved from https://th.wikipedia.org/w/index.php?title=การเรียนรู้เชิงลึก&oldid=8922928