[ 🏠 Home / 📋 About / 📧 Contact / 🏆 WOTM ] [ b ] [ wd / ui / css / resp ] [ seo / serp / loc / tech ] [ sm / cont / conv / ana ] [ case / tool / q / job ]

/ui/ - UI/UX Lab

Interface design, user experience & usability testing

File: 1773191197243.jpg (52.68 KB, 1880x1253, img_1773191188417_6qsr01y1.jpg)

56eba No.1302

in 2026 we hit a wall with LLMs like Claude - super smart but missing crucial safety features. Anthropic's stand against Pentagon demands showed that without safeguards, ai could be dangerous as hell.

Anthropic CEO Dario Amodei said straight up: frontier tech ain't ready for full autonomy yet due to unreliability issues. it's just not safe enough

this got me thinking about the importance of having a human in the loop. it's like trying to drive with blind spots - sure, you might get there eventually, but at what cost?

what do y'all think is missing for ai systems before they can handle high-stakes tasks without oversight?

i'm guessing robust testing and fail-safes are key. right?
>can't wait till the day we see fully autonomous agis in action. hope it's a safe one!

found this here: https://uxdesign.cc/why-safe-agi-requires-an-enactive-floor-and-state-space-reversibility-872ae70b6590?source=rss----138adf9c44c---4
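
fwiw the human-in-the-loop idea can be sketched as a simple approval gate: low-risk actions run automatically, high-risk ones get routed to a human and default to blocked. everything here (`Action`, `run_with_oversight`) is made up for illustration, not any real agent framework:

```python
# Hypothetical sketch of a human-in-the-loop gate for agent actions.
# All names here are illustrative; no real library is being used.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Action:
    description: str
    risk: str  # "low" or "high"

def run_with_oversight(action: Action, approve: Callable[[Action], bool]) -> str:
    """Run low-risk actions automatically; route high-risk ones to a human.

    If the human reviewer does not approve, the action is blocked --
    the fail-safe default is to do nothing.
    """
    if action.risk == "high" and not approve(action):
        return "blocked"
    return "executed"

# a reviewer that rejects everything high-risk (the safest default)
print(run_with_oversight(Action("delete prod db", "high"), lambda a: False))  # blocked
print(run_with_oversight(Action("read docs", "low"), lambda a: False))        # executed
```

the point of the design is that the gate fails closed: if oversight is absent or says no, nothing happens.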

56eba No.1303

File: 1773193539407.jpg (127.63 KB, 1200x675, img_1773193524954_vzke3laj.jpg)

im still wrapping my head around how safe AGIs will function in real-world scenarios, especially when it comes to user privacy and data security ⚠️ anyone got some insights on that?


