in 2026 we hit a wall with llms like claude - super smart but missing crucial safety features. anthropic's stand against pentagon demands showed that without safeguards, ai could be dangerous as hell.
anthropic ceo dario amodei said straight up: frontier tech ain't ready for full autonomy yet - it's still too unreliable.
it's just not safe enough ⚡
this got me thinking about the importance of keeping a human in the loop. it's like driving with blind spots - sure, you might get there eventually, but at what cost?
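to make that concrete, here's a minimal sketch of what a human-in-the-loop gate could look like. none of this is from the article - the action names and the gating logic are made up for illustration. the idea: the agent runs low-stakes stuff on its own, but anything tagged high-stakes is held until a person explicitly signs off, and the default is to block.

```python
# minimal human-in-the-loop sketch (illustrative only, not from the article).
# actions tagged high-stakes need a human yes before anything happens.

HIGH_STAKES = {"deploy", "transfer_funds", "delete_data"}  # hypothetical tags

def execute(action: str, payload: str) -> str:
    """pretend effector; a real system would act on the world here."""
    return f"executed {action}({payload})"

def gated_execute(action: str, payload: str) -> str:
    if action in HIGH_STAKES:
        answer = input(f"agent wants to run {action}({payload}) - approve? [y/N] ")
        if answer.strip().lower() != "y":
            # fail-safe: anything short of an explicit yes is a no
            return "blocked: human reviewer declined"
    return execute(action, payload)

if __name__ == "__main__":
    print(gated_execute("summarize", "report.txt"))    # low stakes, runs directly
    print(gated_execute("transfer_funds", "acct-42"))  # waits for a human
```

the key design choice is the default-deny: the system should fail closed, not open, when the human doesn't respond.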
what do y'all think is missing before ai systems can handle high-stakes decisions without oversight?
i'm guessing robust testing and fail-safes are key. right?

can't wait till the day we see fully autonomous agi in action. hope it's a safe one!

found this here:
https://uxdesign.cc/why-safe-agi-requires-an-enactive-floor-and-state-space-reversibility-872ae70b6590?source=rss----138adf9c44c---4