I've long dreaded writing this piece, because there's a surprisingly vocal community out there that dislikes Rust, and to this day I don't fully understand their position, but it gives me the itch just to think about engaging with them.
Sure, Rust may not be as fast as bare C or assembly, provided you can write code as high-quality as FFmpeg's, but for the rest of us it ticks all the right boxes: it's fast enough, it's memory safe, its syntax is beautiful, and its ecosystem is great (package manager, formatter, linter, debugger, test runner, documentation, and all the other supporting infrastructure). I could go on for pages about all the things Rust has done right, but a significant part of that would just be my flaming of C turned inside out, so perhaps I should talk about something else here and leave the praising to the equally vocal Rust evangelist community.
Following the tradition of this series, let's instead engage with Rust on a meta-level and focus on one specific aspect of its current state: while most PL theorists laud Rust as an engineering success, why do some devs actively repudiate it?
The main selling point, and also the main controversy, of Rust is its ownership system and, more generally, its philosophy of shifting invariants from runtime to compile time. In doing so, it asks developers to confront constraints earlier and more explicitly than most other languages do. And not everyone enjoys writing code in a constrained environment where "I have 5 equally correct ways to write this function, but only 1 of them will convince the compiler of its correctness"; they instead think, "I got it working in my head; I don't need to prove to someone else all over again that it works."
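A tiny sketch of that friction (the `Counter` type here is made up for illustration): code that is observably sound, because it only touches two disjoint fields, still gets rejected when the borrows go through methods, and you have to rephrase it the one way the compiler can verify.

```rust
struct Counter {
    hits: u32,
    misses: u32,
}

impl Counter {
    // Only appears in the rejected snippet below, hence the allow.
    #[allow(dead_code)]
    fn hits(&self) -> &u32 {
        &self.hits
    }

    fn bump_misses(&mut self) {
        self.misses += 1;
    }
}

fn main() {
    let mut c = Counter { hits: 7, misses: 0 };

    // Perfectly sound in practice (the two fields are disjoint),
    // yet rejected, because `hits()` borrows *all* of `c`:
    //
    //     let h = c.hits();   // immutable borrow of `c`
    //     c.bump_misses();    // error[E0502]: also needs `&mut c`
    //     println!("{h}");
    //
    // The phrasing the compiler accepts: copy the value out (or
    // borrow the field directly), so no borrow of `c` survives.
    let h = c.hits;
    c.bump_misses();
    println!("hits = {h}, misses = {}", c.misses);
}
```

Both versions compute the same thing; only one of them is provable to the borrow checker, and that gap is exactly what this crowd resents.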
More generally, this is a question of how much people are willing to defer to tools that enforce strictness. The conclusion I've reached from observing the dev community is that people really hate to admit how average they are. This applies to linters, formatters, type checkers, whole programming languages, and more recently, AI coding. A non-trivial portion genuinely believes that they know better than the tools, and that the tools therefore do nothing but slow them down. What they hate even more than realizing they are mid is other people calling them out as mid, so a tool that automatically flags certain patterns is bound to trigger an allergic reaction. Working on typescript-eslint, I deal with this phenomenon on a weekly basis, and I've grown nonchalant about it: "sure, if you don't like this rule, just turn it off, and good luck with that". Now imagine the same problem, but non-granular: a whole compiler whose main mission is to work against you, that assumes by default that you are writing bad code and don't know what you are doing, and that forces you to bend your code a certain way to convince it otherwise, all with no little toggles to silence it here and there (other than, of course, unsafe). That is surely going to rub some people the wrong way.
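For completeness, here's roughly what that one escape hatch looks like. Note that `unsafe` doesn't switch the borrow checker off; it only unlocks a handful of extra operations, such as dereferencing raw pointers, inside an explicitly marked block:

```rust
fn main() {
    let x: u32 = 42;

    // Creating a raw pointer is allowed in safe code...
    let p = &x as *const u32;

    // ...but dereferencing one requires an explicit opt-in, and this
    // block is as granular as the silencing ever gets:
    let y = unsafe { *p };
    assert_eq!(y, 42);
}
```

Compare that with a linter's per-line disable comments: the escape hatch exists, but it's a scalpel for a few specific operations, not a mute button for the compiler's judgment.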
But let's go down that path a bit further. Why do people care so much? This is not just a question about Rust. Why do people care so much about their hand-aligned code getting formatted, or their clever little tricks tripping up the linter? I'm not one of these people and haven't talked in depth with many of them (they also tend to be less than agreeable to interact with), but my theory is this: they view code as a craft, like poetry, where every word counts and every word expresses a sense of self. The rest of us sit here like lawyers, writing soulless documents that just need to be correct and readable, but they are truly "crafting" something that needs to be token-perfect. For that, you surely don't want any crutch standing between you and your vision.
It goes without saying that this view is incompatible with a collaborative, industrial setting, where standardization is valued far above individuality and code is merely an instrument: the messy wires behind the shiny façade that is actually sold to the user. But I'd argue that it's incompatible even with individual projects, unless you are also the only user of your software, in which case, you do you. If you ever intend the software to be used by others, then the behavior of your code matters 1000x more than its appearance, which is visible to no one but you.
And so the question becomes: does Rust help you write better code than [language X]? That ties back to where we started: people always overestimate their own abilities. You think you can avoid all the memory leaks, buffer overflows, null accesses, dangling pointers, etc., in C, but you do so at the cost of significantly higher mental overhead, if not more time spent debugging and testing. The same applies to types. Some people say "I don't need types because types can't replace tests", but types already prevent a whole class of bugs, and therefore tests, solely via static analysis, and static analysis is far superior to dynamic analysis (a.k.a. tests) in terms of soundness and speed. Rust, by baking ownership into the type system, moves yet another class of bugs into the compile-time phase. Sure, it's not the end of the journey, because we still can't verify logic or performance, but every step toward static guarantees is a step toward better, safer, and more maintainable code.
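To make the "types prevent a whole class of bugs" point concrete, here's a small sketch (the `find_user` lookup is made up for illustration). Rust has no null: absence is encoded as `Option<T>`, and the compiler refuses any code path that touches the value without handling the `None` case, which is exactly a class of tests you no longer need to write.

```rust
// Hypothetical lookup: absence is a value in the type, not a null.
fn find_user(id: u32) -> Option<&'static str> {
    match id {
        1 => Some("alice"),
        2 => Some("bob"),
        _ => None,
    }
}

fn main() {
    // There is no way to "just dereference" the result; the compiler
    // forces every caller to decide what None means at this call site.
    match find_user(3) {
        Some(name) => println!("found {name}"),
        None => println!("no such user"),
    }
}
```

The null-access bug simply has no representation here: a program that forgets the `None` arm doesn't fail a test, it fails to compile.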
The question of adoption is also about far more than the technology itself; in fact, the technology is one of the least significant factors in the equation. Programming languages, like all technologies, are governed by Douglas Adams's rules:
> I've come up with a set of rules that describe our reactions to technologies:
>
> 1. Anything that is in the world when you're born is normal and ordinary and is just a natural part of the way the world works.
> 2. Anything that's invented between when you're fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.
> 3. Anything invented after you're thirty-five is against the natural order of things.
This isn't just about individual mentality; think about job security, power dynamics, learning curves, etc. Those whose decisions matter are exactly the ones governed by the third rule. It's not that new technologies never win: see Webpack vs. Vite, or JavaScript vs. TypeScript. But to win, the benefit must outweigh the cost by such a margin (assuming a quantification even exists) that even the most stubborn cannot deny its value. Rust does not seem to be winning on this front, because when you put it down in numbers, how do you even quantify the gains of avoiding memory bugs or type bugs? How do you weigh battling the compiler against battling Valgrind? How do you prove that it's even a net gain in the long term, let alone in the short term, with migration and learning costs on top?
So in the end, the answer is always the same: we don't migrate; we replace. We don't need to convince the old guard to change; we just need to move on ourselves and build something newer and better. Rust is already on this path: infrastructure, systems tooling, performance-critical services. And at the end of the day, we'll still have those who perfect their code like craftsmen, while the rest of us make things work.