I’ve recently been asked to be part of a group looking at optimization: specifically, how the IT parts of the University can help both themselves and the rest of the University behave more “optimally” or “efficiently”.
We have just begun to meet, but are already struggling, to some degree, to understand what we mean by this. There is a tendency to frame it in terms of outcomes, e.g., “reduce the number of servers by 300” or “improve operational time to market”, where presumably metrics could be gathered to determine success, or at least movement towards the goal. Here, clarity in defining the purpose, the metric, and the method of measurement is of primary importance.
In the example above, does “server reduction” mean “server consolidation”? Does it mean virtualizing physical servers under a hypervisor? What is the intended goal of the reduction: more virtual servers and fewer physical ones? Fewer database/file servers? Depending on the underlying business reason for the goal, the actual metric for demonstrating success could be quite different.
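To make that concrete, here is a minimal sketch (in Python, against an entirely hypothetical inventory structure of my own invention) of how the same data supports quite different success metrics depending on which business reason is driving “server reduction”:

```python
# Hypothetical inventory: each record notes whether a server is physical
# or virtual, and which hypervisor host (if any) it runs on.
inventory = [
    {"name": "db01",  "kind": "physical", "host": None},
    {"name": "web01", "kind": "virtual",  "host": "hv01"},
    {"name": "web02", "kind": "virtual",  "host": "hv01"},
]

# Business reason A: cut power/cooling costs -> count physical boxes.
physical_count = sum(1 for s in inventory if s["kind"] == "physical")

# Business reason B: drive virtualization -> measure the virtual ratio.
virtual_ratio = sum(1 for s in inventory if s["kind"] == "virtual") / len(inventory)

# Business reason C: consolidate workloads -> guests per hypervisor host.
hosts = {s["host"] for s in inventory if s["host"]}
consolidation = (len(inventory) - physical_count) / max(len(hosts), 1)

print(physical_count, virtual_ratio, consolidation)
```

Three different numbers, from the same inventory, each “demonstrating success” for a different purpose.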
If one approach is based on desired outcomes, another looks at improving organizational maturity as a way of bootstrapping the overall effectiveness of the organization.
What do we mean by organizational maturity? For us in the IT space, it speaks to an organization’s capabilities around particular technologies and architectural design patterns. While there is no “one best” approach to laying out an infrastructure or architecture, within each area of expertise one can describe what comprises a richer, more complex, more adaptable set of capabilities, and set these out as “tiers” in much the same way that data center capabilities have been formally codified into tiers.
Let me use software application development — one of my niche occupations — as an example (at the risk of being overly technical):
| Tier Level | Requirements |
| --- | --- |
| 1 | Single non-redundant application server · No HA/DR · Minimal control over source code · Minimal, manual end-user testing · Minimal/no automated testing |
| 2 | Single non-redundant application server · Application source code under source code control · Separate staging environment · Manual end-user testing against formally developed test plans · White-box testing · Minimal automation of code promotion from staging to production |
| 3 | Multiple application servers configured for high availability / load balancing (configuration dependent on the application, but could be hardware- or software-based) · Application source code under source code control · Separate testing and staging environments; staging code-identical to production except for configuration details · Manual end-user testing against formally developed test plans · Automated testing for load and other cases as required by the business · Black-box and white-box testing · Scripted deployment of production code from the staging environment |
| 4 | As tier 3, plus: Staging and production both code-equivalent and data-equivalent · Scripted build and deployment of production code/data from the staging environment · Production environment locked down to the production deployment team · Formal software development life cycle (SDLC) plan |
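As a thought experiment on how such tiers could serve as a roadmap, here is a minimal sketch (Python; the capability names and the sample application are hypothetical stand-ins for the table rows above, not any agreed standard) that scores an application against a simplified version of the tiers:

```python
# Simplified, cumulative capability requirements per tier.
# The capability names are illustrative only.
TIER_REQUIREMENTS = {
    1: set(),  # baseline: any running application qualifies
    2: {"source_control", "staging_env", "test_plans"},
    3: {"source_control", "staging_env", "test_plans",
        "load_balancing", "automated_tests", "scripted_deploy"},
    4: {"source_control", "staging_env", "test_plans",
        "load_balancing", "automated_tests", "scripted_deploy",
        "data_equivalence", "locked_production", "formal_sdlc"},
}

def assess_tier(capabilities: set) -> int:
    """Return the highest tier whose requirements are all met."""
    tier = 1
    for level in sorted(TIER_REQUIREMENTS):
        if TIER_REQUIREMENTS[level] <= capabilities:  # subset test
            tier = level
    return tier

# A hypothetical application: under source control, with a staging
# environment and test plans, but no scripted deployment yet.
app_capabilities = {"source_control", "staging_env", "test_plans"}
print(assess_tier(app_capabilities))                 # -> 2
print(TIER_REQUIREMENTS[3] - app_capabilities)       # the gap to tier 3
```

The gap between an application’s current capabilities and the next tier’s requirements is precisely the roadmap the next paragraph has in mind.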
Whether or not we are able to formally put forward how we think about tiers, the exercise of drawing them up, especially where we then use them as roadmaps for organizational development, may prove particularly useful.
Where will we end up? The process has just begun. We will see how it evolves, and what appetite we have for definition and change. I’ll report back later.