There’s nothing more appealing than thinking you’re just that good. Our brains are optimized for fondly remembering our successes and quietly downplaying our failures.
The casinos in Vegas are primed for this by making it relatively likely you’ll win something early on. By the time you realize that the house’s slight edge means it will win everything back and more over the long run, well, you’ve probably spent more than your first ATM stop and then some.
Future coding is a lot like playing roulette. If you can guess today where tomorrow’s requirements will land, you’ve just scored the ultimate programmer’s prize: looking like a wizard. You saw the future and you were right! High five, brain!
That’s the almost irresistible draw of “flexibility”—often just a euphemism for building half or whole features before you know how they’re supposed to work or whether you need them. Just like in Vegas, the worst thing that can happen is to be right about this once in a while.
Because like the casinos, the house of program complexity will take all your little bets on the future and add them up until you just don’t know how this damn program got so deep into technical debt!
Running up the debt on your code is not just about the quick hacks and dirty commits you know you really should clean up (but just don’t). No, the far more insidious kind of debt is that acquired in the name of “flexibility”.
Every little premature extraction or abstraction, every little structure meant for “when we have more of this kind”, every little setting and configuration point for “things we might want to tweak in the future”. It all adds up, and at a dangerous rate.
The road to programming hell is paved with “best practices” applied too early. Very few abstractions, extractions, or architectures are inherently bad (although I’ll contend that a few are). But most of them are downright evil if applied before they’re called for.
To take the scare quotes off flexibility, you have to be willing to write no code at all. The “Simplest Thing That Could Possibly Work” and “You’re Just Not Going To Need It” are both sayings to help you get there.
This really does mean not a single parameter to a method that has not yet found a use. No needless public methods that might get reused later (not even for testing!).
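To make that concrete, here’s a minimal sketch of the difference; the function and every parameter in it are hypothetical, invented purely for illustration:

```python
# A speculative, "flexible" version: every unused parameter and hook is a
# small bet on a future requirement that may never arrive (all names are
# hypothetical).
def format_price_flexible(amount, currency="USD", locale=None, rounding=None, on_format=None):
    if on_format is not None:          # a hook nobody calls yet
        amount = on_format(amount)
    return f"{currency} {amount:.2f}"  # locale and rounding still unused


# The simplest thing that could possibly work today: one caller, one need.
def format_price(amount):
    return f"${amount:.2f}"


print(format_price(19.99))  # "$19.99" is all the current requirement asks for
```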
The hardest—and yet most important—thing in the world of design is the conviction to say no.
David Christiansen
on 03 Jan 13
I have no idea what this phrase means: “To take the scare quotes of flexibility,”. It’d be nice if you could clarify or just rephrase if it’s a typo.
DHH
on 03 Jan 13
DC, I mention “flexibility” (” = scare quotes) twice and then I remove said quotes in the 3rd to last paragraph with a more fitting definition.
Stanislas Marion
on 03 Jan 13
Interesting take on coding as making bets. I had never thought of it as such, even though I have a background as an online poker player. It actually opens up a lot of reasoning in my head about the expected value of code, and how we could try to guesstimate it in order to make the most positive-EV decisions possible.
Ben Vitale
on 03 Jan 13
s/of/off
Tony
on 03 Jan 13
I couldn’t agree with this post more. I can’t tell you how many coders I’ve worked with who built massive frameworks and unnecessary hooks into a simple application. When I asked why, they looked at me like I was crazy.
“Best practice”, “Forward thinking”, “Futures”, etc. are phrases they would toss around without any real basis in fact, need, or requirements gathering. The reality is you can’t guess what will be needed in an application, and the time spent building all that unnecessary code eats into making a good, performant interface.
Most of the coders I’ve managed who had this mindset ended up losing their positions in the company I work for. When you face the reality of budgets and deadlines, you can’t spend money you don’t have. They left the company believing that they lost their jobs because they weren’t allowed to ‘build things properly’. But while well-constructed, modular code is important, unused and overbuilt code is just foolish.
I think there will be those who have a tough time accepting the concept in your post, David. They’ve convinced themselves that an “application” is a verbosity of code, as opposed to something people use.
You get what software is. Thank you.
marc
on 03 Jan 13
I think you’re blaming a coding style for inexperience eliciting requirements. You never know your requirements fully. Further, your customers never really know their requirements. Following the open/closed principle is an excellent idea, especially in production. The middle ground, which works for all, is to be patient in your design. Creating interfaces and abstractions for extension is not inherently a bad idea, and adds little time to design. The problem originates when people go overboard, and I believe that’s a measure of experience in software engineering (specifically requirements analysis). Underdesigning and overdesigning are both bad, in my opinion, and it’s not a measure of design style. Object-oriented design should enable extension, and I think the major problem you outline is that we never design to allow this. We must always refactor large amounts of code. Follow the principles, and you will be fine.
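As a rough illustration of that patient middle ground (a made-up example, not anything from the post): keep the first implementation concrete, and extract the interface only when a real second case shows up.

```python
from abc import ABC, abstractmethod

# Day one: one concrete class is all the requirement asks for.
class EmailNotifier:
    def send(self, message: str) -> None:
        print(f"emailing: {message}")


# Later, when a second kind of notification actually shows up, extract the
# abstraction and move both implementations behind it; the extension point
# (open/closed style) now pays for itself instead of being a guess.
class Notifier(ABC):
    @abstractmethod
    def send(self, message: str) -> None: ...


class SmsNotifier(Notifier):
    def send(self, message: str) -> None:
        print(f"texting: {message}")
```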
Mathew Sanders
on 03 Jan 13
The same applies to visual design. I often get push back on a design when it fails some hypothetical edge case that may (or may not) exist in 3/6/12 months, where my response is: let’s deal with that problem when (if) it actually happens.
Sometimes I worry that people might see this approach as lazy (or worse, ignorant, which is ironic since off the bat I can easily name dozens more potential edge cases they could be worrying about), especially in my current agency/client situation, but I can sometimes soften the mood by reminding people that these problems will come with scaling up, so they will actually be cause for celebration!
I have to admit, I don’t think there is anything wrong with thinking about possible futures (I certainly do, I guess from some boy-scout desire to always be prepared), just as long as you don’t become a slave to that speculation.
Matt Henderson
on 03 Jan 13
The most insidious examples I’ve seen of “future coding” were when working in the context of publicly funded software development projects.
I remember one “monitoring and control” system designed to support equipment that didn’t yet exist, and mission types that hadn’t been envisioned. Everything was an abstraction.
Five years later, and with more spent than the development costs of a Boeing 747, the most over-engineered system you can imagine was released.
The very first post-release maintenance activities were to develop an expensive translation layer to support relational databases (since the design assumption five years earlier was that the world would converge on object databases), and to add support for TCP/IP interfaces (since the design assumption five years earlier was that the world would standardize on OSI).
I wonder whether the world of publicly-funded systems will ever adapt their processes to embrace saying “No” and implementing the “Simplest Thing That Could Possibly Work”.
GeeIWonder
on 03 Jan 13
While this conveniently sets up your argument, I don’t think it’s true. Rather, we learn much differently from victories than from failures—where we emphasize the unusual, low-probability but high-impact events.
There are ways in which pathways are favoured and rewarded, and this has a lot to do with your previous post about thinking as well. The point you missed in that one is that much modern thinking is entirely dogmatic (not even personal dogma), so the presumption (implied, in this case) of a level of critical thinking to evaluate and generate ideas is increasingly false.
Ryan Cromwell
on 03 Jan 13
Part of the problem is fear of deployment. Too many organizations and teams have neglected deployment or been bitten by painful deployments, so they are afraid. Instead of learning from that pain, they stuff the sausage. We must practice the hard things so that we can account for new learning with frequent/just-in-time releases to production. If it’s important and it hurts… do it again.
Richard Moyles
on 03 Jan 13
Good article, but a comment on the new design: when you read through the text on the homepage and want to view more or see the comments, you have to scroll back to the top to click the link. Maybe this could be resolved.
Ryan Eccles
on 03 Jan 13
I don’t know who to attribute this quote to—it might be Kent Beck. But this article reminds me of: “Premature abstraction is the yin to premature optimization’s yang”.
Ryan
on 03 Jan 13
This post is wrong by the second sentence. That must be some kind of new record.
Adam
on 03 Jan 13
There is a caveat to YAGNI that is frequently overlooked. In fact, I can’t remember the last time I read an article on this topic that brought it up.
Before you can say “no” you had better know what is needed first. I have often found that developers keen on keeping things lean, rightly so, can also put on blinders to the whole picture.
Some problems are, for better or worse, actually complex, and I am gonna need it, even if you don’t see how that might be.
I’m not saying this isn’t good advice for the vast majority of projects, but like all best practices, YAGNI shouldn’t be followed blindly.
Ryan
on 03 Jan 13
http://en.wikipedia.org/wiki/Negativity_bias
David Andersen
on 03 Jan 13
I wonder whether the world of publicly-funded systems will ever adapt their processes to embrace saying “No” and implementing the “Simplest Thing That Could Possibly Work”.
Much harder when someone else’s money is being spent. Simple takes on new meaning when spending your own money.
Sebastian
on 03 Jan 13
YAGNI is a loaded acronym used by both sides of the argument. Knowing that, it is absolutely useless in helping us get anywhere.
The end of the article points it out nicely: no needless argument or public method. Yet we clearly prefer the interpretation of YAGNI meaning “I don’t want to think about it right now just because”, thus spreading business logic all over our controllers and AR models to the point where we have no discernible business layer.
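For instance, a rough sketch of what that drift can look like (all the names here are made up for illustration):

```python
# All names here are invented for illustration.
class Order:
    def __init__(self, items):
        self.items = items          # list of (name, price) tuples

    def subtotal(self):
        return sum(price for _, price in self.items)


# The business rule buried in the request handler: nobody can find the
# pricing policy later, because there is no business layer to look in.
def create_order(request):
    order = Order(request["items"])
    total = order.subtotal()
    if len(order.items) > 10:       # a domain rule hiding in web plumbing
        total *= 0.9
    return total


# The same rule pulled into the domain once it is genuinely needed.
def discounted_total(order):
    subtotal = order.subtotal()
    return subtotal * 0.9 if len(order.items) > 10 else subtotal
```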
I suspect premature or failed attempts at abstraction are more closely related to the mindset which just doesn’t bother to fix all those hacks and dirty commits.
I’m not sure we should separate the two and make it difficult to introduce an abstraction when it’s actually needed, just because we should not violate YAGNI.
Daniel
on 03 Jan 13
I agree.
Jaap Taal
on 03 Jan 13
I don’t entirely agree with this article; if we’d followed it 20 years ago, we’d still be programming in asm or C if you’re lucky. Of course you should not make unnecessary abstractions, but it is still up to the programmer to check and verify which abstraction is in fact necessary. A bad programmer either makes too few abstractions or too many. A good programmer makes the right abstractions. And then there is also style. Some programmers tend to foresee an abstraction coming, others refactor it in later. Other than that, discussion on the subject is always good; DRY, KISS, and YAGNI are all sound principles. It’s just that I hear a lot of “that’s premature optimization”, while in fact I think it’s an abstraction that is going to avoid lots of violations of said principles in the near future.
Mikhail
on 03 Jan 13
David, could you please point out the exact “best practices” that you would not recommend at the early stages? Could you please also be a bit more specific about what you mean by the “early stage”? Thanks.
Robert Annett
on 04 Jan 13
“Every little premature extraction or abstraction, every little structure meant for “when we have more of this kind”, every little setting and configuration point for “things we might want to tweak in the future”. It all adds up, and at a dangerous rate.”
Completely true – it makes code impossible to navigate and understand. I describe these as the “but what if” programmers, who build in every possible scenario and extension no matter how improbable they are.
TVD
on 04 Jan 13
In my experience, folks who optimize prematurely generally have their hearts in the right place. They just lack the internal barometer that says, “Hey Joe, this is too much for now.” Or worse, they’re so invested in their code that it becomes an extension of their identity.
That’s why strong project leadership is so critical to project success. Someone has to be vested with the authority to make the final call and simply say, “We’re not going to do that because…”
Rex Hoffman
on 04 Jan 13
Certain premature abstractions that lead to inherent flexibility without incurring extra cost, such as dependency injection containers and impedance-mismatch mitigation tools (web frameworks and ORM mappers), are almost always net wins.
Saying no to a capability in your application is fine, and will always win right now, but knowing the right things to say yes to is the key. My rule of thumb is that anything that benefits the twin causes of loose coupling and cohesive capabilities wins.
Often devs are just looking at a problem wrong: Java devs get locked into thinking about classes; Ruby devs often think too much about objects, but not about how they relate to each other at run time. Both are just vehicles for curried functions. The real trick is being able to clearly define scopes for the curried functions that make up your application.
This is a mental exercise, totally in the developer’s head. All other abstractions, including classes and objects, help build toward this, but they can tie you to incorrect thinking about your functions and functionality, which leads devs to generate unintended complexity. Simplicity is the art of ignoring or working around the parts of a language or framework that prevent you from writing clean (side-effect-free) curried functions in a small number of well-defined scopes. My 2 cents.
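A tiny made-up sketch of the idea, with both forms side by side (every name is invented for illustration):

```python
from functools import partial

# A class used purely as a vehicle for one behavior...
class TaxCalculator:
    def __init__(self, rate):
        self.rate = rate

    def apply(self, amount):
        return amount * (1 + self.rate)


# ...is morally the same as a curried function whose first argument is
# bound in an enclosing scope.
def with_tax(rate):
    def apply(amount):
        return amount * (1 + rate)
    return apply


vat = with_tax(0.20)                                # the scope is defined once...
assert vat(100) == TaxCalculator(0.20).apply(100)   # ...and behaves like the class

# functools.partial expresses the same partial application without the nesting.
vat_partial = partial(lambda rate, amount: amount * (1 + rate), 0.20)
assert vat_partial(100) == vat(100)
```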
Tim Spangler
on 04 Jan 13
As long as we’re on the topic of Las Vegas, let me chime in with something positive and put in a plug for #vegastech. Our city and our tech scene are absolutely booming right now, and it’s a lot more interesting than the bullshit coming out of San Francisco.
See also: Downtown Project.
Nick
on 05 Jan 13
Sounds like marriage – get some at the beginning; then baddow hell begins
Luke
on 06 Jan 13
Although I agree with the general sentiment of this post, I find it a little annoying. As someone who is still in college and working at the same time, maybe I see both sides of this argument.
In college, the “good practices” are almost holy and every prof I have harps on them. In contrast, at work, it’s all about productivity and completing the work for the next sprint in time.
There (at work), I have seen both the good and the ugly side of YAGNI. And this is what annoys me: I think it’s problematic to send such one-dimensional signals out unqualified. While this article makes perfect sense to someone with work experience, because he/she can put it into perspective, someone without it could very well find out the hard way why ‘the good principles’ exist in the first place.
It’s all about evaluating the probability of change. Too often I have seen YAGNI used as an argument to cover laziness or a lack of motivation.
Alon
on 07 Jan 13
Nice post.
Alex
on 07 Jan 13
I totally agree.
Dave E
on 07 Jan 13
The goal is not to see the future but to be flexible enough so that when the future arrives, you can adapt to it. I can create a piece of minimalist code that does exactly what I want and nothing more, or I can create an architecture flexible enough that it does what I need today and can still be easily modified in the future to accommodate whatever comes along. Sure, the first option is quicker initially, and for ‘throw away’ code it’s probably fine, but the extra expense of the latter method reaps benefits later on. I’m all for “doing just enough to get the job done” (less work for me), but sometimes you have to think beyond today. Sometimes, though, just saying “no” works too!
Anonymous Coward
on 08 Jan 13
Scientifically, behavioral psychologists found/proved this to be true starting 70 years ago. B.F. Skinner and other scientists found reliably and repeatedly that intermittent positive reinforcement is the most effective feedback for maintaining a behavior on a long-term basis. It works on humans gambling in Las Vegas or on mice in a lab.
This discussion is closed.