[An old article I just found whilst trawling through some old notes]
Enterprise solutions are often described as 3-tiered, those tiers being the Client, Application and Data layers. The developer may see these as equivalent software products: the UI, BLL and DAL. But the DAL is the means by which the software deals with the Data (persistence) layer and is really just a software abstraction rather than an architectural tier. And it won’t be the only abstraction layer existing within the other tiers.
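To make that distinction concrete, here is a minimal sketch of a DAL as a software abstraction – in Java, with hypothetical names, since the article itself names no technology. The business logic depends only on an interface, and the persistence detail behind it can be swapped without the layers above knowing:

    // Hypothetical domain object, for illustration only.
    record Customer(long id, String name) {}

    // A minimal sketch of a DAL abstraction: the business logic layer depends
    // only on this interface, never on a particular persistence technology.
    interface CustomerRepository {
        Customer findById(long id);
        void save(Customer customer);
    }

    // One possible implementation; a JDBC- or ORM-backed version could be
    // swapped in without the calling code changing.
    class InMemoryCustomerRepository implements CustomerRepository {
        private final java.util.Map<Long, Customer> store = new java.util.HashMap<>();

        public Customer findById(long id) { return store.get(id); }
        public void save(Customer customer) { store.put(customer.id(), customer); }
    }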
And what is high-level vs. low-level in this case? Are we talking about the big-picture vs. fine-detail views of the solution? Or does this refer to the high-level Client-side vs. low-level (close to the metal) implementations?
Developers have a common parlance despite their various domains, stacks and specialities often overlapping only in generic function. This has understandably led to developers often speaking in an abstracted language about most things – to enable conceptual discussion whilst avoiding domain/implementation/language differences. A house is a house whether it be made of straw, sticks or bricks.
Are we therefore on the same page when we are discussing the tiered breakdown of our solutions? What is in a tier or a layer? How many should there be and what should we call them?
The answer is the usual one – there are as many as you need, and they are called whatever is contextually meaningful without being either too specific or too generic. It’s the answer to everything and nothing – “it depends”.
At its simplest, my advice is to break the coupling and add abstraction in every place where you can bear to. The better a developer you become, the better your judgement will be on whether this helps or hinders future development, and that is one of the key drivers. What will be the impact of your next phase of development? You don’t want to re-code more than is necessary when making a feature change, but that could be the case with either too little or too much abstraction.
When building an enterprise application you would struggle to reduce to fewer than 3 tiers, as these will likely be running on different hardware, in different security zones. But splitting software across these dividers can still be done poorly, with strongly coupled dependencies and poor encapsulation of functionality (e.g. splitting business rules between server features running in the application and data tiers).
Any good idea can be implemented badly.
Also, an experienced software engineer won’t necessarily make fewer or smaller mistakes than a junior one. The risks increase in proportion to the scale of our endeavours, and a more experienced engineer will be playing a higher-stakes game – and is just as likely to be learning something new, but in less well-charted territory than a junior is working in.
Patterns are great, and so are CONTRIVANCEs (Contrivance Of Nebulous, Tenuous, Recursive, In-Vogue Acronyms Nobody Can Explain). Give anything a name and you can then refer to it in fewer words. Patterns contribute to our vocabulary. They are also a leg-up to juniors who may not yet have reinvented these solutions and can learn them as a ready-made toolset. But needing is understanding, so beware collecting knowledge for the sake of it. Necessity is the mother of invention whereas a solution looking for a problem is currently unemployed!
Whilst it’s great to know how to code a Singleton, knowing when to code one is more useful. When? It depends!
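For the “how”, here is one minimal Singleton sketch in Java – the class name is hypothetical, and this is just the common initialization-on-demand holder idiom, one of several ways to write the pattern:

    // A Singleton via the initialization-on-demand holder idiom: the JVM's
    // class loading guarantees lazy, thread-safe creation of the one instance.
    class ConfigService {                  // hypothetical name, for illustration
        private ConfigService() {}         // private constructor blocks outside instantiation

        private static class Holder {
            static final ConfigService INSTANCE = new ConfigService();
        }

        static ConfigService getInstance() {
            return Holder.INSTANCE;
        }
    }

The “when” remains the hard part: a Singleton is effectively a global, and globals couple everything that touches them – exactly the trade-off this article keeps circling back to.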
The first skills that should be learnt are how to identify the gaps in your knowledge, and then how to fill them. With those abilities a developer is ready to begin.
If Industry-Best-Practice were as obvious as it sounds then there would be only one way of solving a problem. We’ve seen browser wars, OS wars, and no end of my-language-is-better-than-yours disputes, and we’re not yet ready to settle on a single set of principles to codify our coding.
OOP, SOAP, XML, REST, JSON, RDBMS, DAL, ORM, SOLID, GRASP, TDD, DDD, XP, etc.