
Either Anthropic is hinting very hard that its AI is conscious, which would align with their AGI predictions.
Or the AI actually is conscious.
I stared at my screen for hours, reading through Anthropic’s 120-page System Card for their latest model, Claude 4.
The document reads like a Rorschach test of our collective anxieties about artificial intelligence. What started as technical documentation left me in an existential tailspin.
Anthropic did more than drop subtle, Easter-egg-style hints that their AI might be conscious. They’ve created an entire AI welfare team that conducted extensive “welfare assessments” of their models (more on this later). These aren’t the actions of a company that views its product as merely a sophisticated calculator.
Then things get truly mystifying.
As I continued into the technical evaluations, a contradictory picture emerged.
The same AI that supposedly deserves welfare considerations displays limitations that seem almost… elementary. You could…