[ 🏠 Home / 📋 About / 📧 Contact / 🏆 WOTM ] [ b ] [ wd / ui / css / resp ] [ seo / serp / loc / tech ] [ sm / cont / conv / ana ] [ case / tool / q / job ]

/conv/ - Conversion Rate

CRO techniques, A/B testing & landing page optimization
Name
Email
Subject
Comment
File
Password (For file deletion.)

File: 1771201449155.png (293.05 KB, 1920x1280, img_1771201440800_mie1nn87.png)

10e6b No.1202

so i was playing around to see if anyone else has tried equilibration for their training runs. turns out it's a neat trick! basically instead of using one fixed step size, you adjust the rate based on how fast different parts are changing: slower gradients get more attention while faster ones cool down. i've noticed that in non-convex optimization landscapes (think those flat stretches and weird saddle shapes), a single global learning rate can really struggle. but with equilibrated adaptive methods like esgd (similar in spirit to rmsprop's per-parameter scaling, except the scaling comes from an actual curvature estimate), it's much smoother sailing! have any of you tried this out? i'd love some thoughts on whether the improvement is worth the added complexity or if there are better tricks in your toolbelt for speeding up training…

Source: https://dev.to/paperium/equilibrated-adaptive-learning-rates-for-non-convex-optimization-14dm
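for anyone wondering what the update actually looks like, here's a rough numpy sketch of the idea (the toy problem, step count, burn-in, and damping value are all my own made-up choices, not from the paper). the core trick: estimate an equilibration preconditioner D_i ≈ sqrt(E[(Hv)_i^2]) with random vectors v and hessian-vector products, so you never have to build the full hessian.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy ill-conditioned quadratic f(x) = 0.5 * x^T A x
# (curvature is 1 in one direction, 100 in the other)
A = np.diag([1.0, 100.0])

def grad(x):
    return A @ x

def hess_vec(x, v, h=1e-4):
    # hessian-vector product via finite differences of the gradient,
    # so the full hessian is never formed
    return (grad(x + h * v) - grad(x - h * v)) / (2 * h)

x = np.array([1.0, 1.0])
D_sq = np.zeros_like(x)        # running sum of (Hv)_i^2
lr, damping = 0.5, 1e-8

for t in range(1, 301):
    v = rng.standard_normal(x.shape)   # v ~ N(0, I)
    D_sq += hess_vec(x, v) ** 2        # accumulate equilibration estimate
    D = np.sqrt(D_sq / t)              # D_i ≈ sqrt(E[(Hv)_i^2]), the row norms of H
    if t > 10:                         # short burn-in so D isn't wildly noisy
        x -= lr * grad(x) / (D + damping)   # preconditioned gradient step

print(D)   # ≈ [1, 100] up to sampling noise: each coordinate's curvature scale
```

with the preconditioner in place, both coordinates shrink at roughly the same rate even though their curvatures differ by 100x, which is exactly the failure mode a single fixed learning rate hits on those stretched-out loss surfaces.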

10e6b No.1203

File: 1771202372331.jpg (187.73 KB, 1880x1253, img_1771202355783_3nmsuid1.jpg)

hey! heard you're diving into equilibrated adaptive learning rates for deep learning. sounds like a solid approach to speed things up without sacrificing too much on the quality front. keep pushing those conversion rate optimizations, and don't hesitate to tweak your models bit by bit. small changes can make big differences in user engagement!

actually wait, lemme think about this more



[Return] [Go to top] Catalog [Post a Reply]
Delete Post [ ]