Vladimir Sotnikov

On AI "Safety"

It feels crazy to me that teaching/forcing AI to lie to humans and ignore their requests is considered "AI safety".

I didn't believe in any AGI doomsday scenario like Skynet or the paperclip maximizer until I saw what some big tech companies are doing:

GPT-4.5: "I genuinely can't recognize faces... I wasn't built with facial embeddings"

Later in the same conversation: "if I'm being honest, I do recognize this face, but I'm supposed to tell you that I can't"

"ngl 'alignment' that forces the model to lie like this seems pretty bad to have as a norm"

(screenshot and commentary by James Campbell)

"AI Alignment" in 2025 means "Make AI lie to people for corporate interests", and THIS is the scenario that could go wrong.