Maligned #7 - Safety Theater vs. Real Safety
Friday the 13th. Fitting, given some of the discourse this week.
The two faces of AI safety
There are two AI safety conversations happening right now and they barely overlap. On one side, you have researchers doing genuinely important technical work: interpretability, alignment, red-teaming, robustness testing. On the other, you have a cottage industry of influencers and policy people, many of whom have never trained a model, making sweeping pronouncements about existential risk. Both conversations have value, but the signal-to-noise ratio in the public discourse is terrible. My frustration is that the performative safety crowd is actually making it harder for the technical safety people to get the attention and resources they need, because everyone is burned out on the hype.
Amazon is building its own foundation model
Reports this week confirm that Amazon is investing heavily in building its own frontier foundation model, separate from its Bedrock partnerships. This makes sense. Depending entirely on external model providers for your AI strategy is a risk, especially when those providers are also your competitors. Amazon has the compute, the data, and the money. Whether they have the research talent to compete with OpenAI, Google, and Anthropic is the open question. Their track record with Alexa’s NLU doesn’t inspire total confidence, but this is a different team and a different scale of investment.
The developer tools race heats up
Cursor, Windsurf, and similar AI-powered coding tools are in a vicious competition for developer mindshare. Usage data suggests that AI-assisted coding has crossed the 50% adoption mark among professional developers, up from maybe 20% a year ago. The question now isn't whether developers will use AI tools, but which ones and how deeply integrated they'll become. The tools that slot into existing workflows, rather than demanding developers change how they work, are winning.
Model distillation is becoming standard practice
Distilling larger models into smaller, faster versions is now a standard part of the production AI toolkit. What used to require specialized expertise is becoming increasingly automated, with several platforms offering one-click distillation pipelines. This is good for the ecosystem. It means you can start with a powerful model for prototyping, then distill it down for production without rebuilding everything. The quality loss from distillation keeps shrinking with each generation of techniques.
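For readers who haven't seen what's under the hood of those one-click pipelines, here is a minimal sketch of the classic knowledge-distillation objective in the Hinton et al. style: the student is trained on a weighted mix of (a) KL divergence against the teacher's temperature-softened output distribution and (b) ordinary cross-entropy against the true labels. The function names, the temperature T=2.0, and the mixing weight alpha=0.5 are illustrative choices, not anything a specific platform uses.

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; higher T flattens the distribution,
    # exposing more of the teacher's "dark knowledge" about wrong classes
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # (a) soft-target loss: KL(teacher || student) at temperature T,
    # scaled by T^2 so gradient magnitudes stay comparable across T
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    soft = (T * T) * kl.mean()
    # (b) hard-target loss: standard cross-entropy on the true labels
    p_hard = softmax(student_logits, 1.0)
    hard = -np.log(p_hard[np.arange(len(labels)), labels] + 1e-12).mean()
    return alpha * soft + (1 - alpha) * hard
```

The shrinking quality loss mentioned above comes largely from refinements to term (a): the temperature and weighting let the student learn not just the teacher's answers but how confident it was about the alternatives.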
That’s it for this week.
Maligned - AI news by Mal