Reduce, Reuse, Recycle

So, it might be more fair to call this post a "pre-cycle". This is the text of a magazine article I submitted to Global Services Magazine. The article was titled something innocuous like "Sourcing Drivers in 2010." It should hit the press in December of this year.


Historically companies have outsourced product development for some combination of three reasons: Faster, Cheaper, and Better.

By 2010, a fourth driver will become prevalent, if not dominant. With the flattening of the world and the massive economic boom fueled by technology outsourcing, companies will start to evaluate their sourcing decisions around the question of Closer - closer to talent and closer to customers.

The landscape and promise around each of these drivers is evolving. By looking at each in detail, you can position your company for success today and into 2010.


Faster

The outsourced product development (OPD) industry certainly faces major consolidation in the coming years. As this plays out, execution will differentiate “the best” from “the rest.” Economic Darwinism will prevail, and the strongest and best firms will come out on top.

The high process orientation that Indian OPD firms use as a market differentiator will continue to evolve across this sector, and that evolution should translate into a promise, and eventually a contractual obligation, of timely delivery.

Lastly, in order to stay competitive against emerging low-cost markets, Tier-1 providers will have to develop reusable methodologies and frameworks, free of client IP, to bootstrap the development and delivery of new solutions. By 2010 you should be able to find a vendor that offers not only talent and process, but also a significant head start against your time-to-market requirements.


Cheaper

I recently discussed a “Rural Business Process Outsourcing (RBPO)” strategy with an Indian BPO and OPD firm. They described a situation where they can hire computer-literate rural workers in remote villages for something in the range of a few hundred rupees a day – that’s maybe $5 a day for work that would cost at least minimum wage in the US.

Those kinds of economics are very seductive. However, data entry is a long way away from software development and testing.

Highly skilled, experienced, credentialed software engineering roles are quickly approaching global price parity. With 15+ percent annual wage inflation in the India tech sector[1], it’s only a matter of time before the price differential disappears.
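To make that “matter of time” concrete, here’s a back-of-the-envelope sketch. The 15 percent offshore inflation figure is the one cited above; the starting salaries and the 3 percent onshore inflation rate are purely my own illustrative assumptions.

```python
# Back-of-the-envelope: years until offshore/onshore wage parity,
# assuming constant annual wage inflation on both sides.
# All dollar figures below are illustrative assumptions, not data.

def years_to_parity(offshore_wage, onshore_wage,
                    offshore_inflation=0.15, onshore_inflation=0.03):
    """Count full years until the offshore wage meets the onshore wage."""
    years = 0
    while offshore_wage < onshore_wage:
        offshore_wage *= 1 + offshore_inflation
        onshore_wage *= 1 + onshore_inflation
        years += 1
    return years

# e.g., a hypothetical $35,000 offshore salary vs. $105,000 onshore:
print(years_to_parity(35_000, 105_000))  # 10 years under these assumptions
```

Even with generous assumptions, the gap closes within roughly a decade, which is the point: “cheaper” is a wasting asset.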

There will always be someplace on the planet where engineers will work on the cheap, but that place will never stay a secret, and it will always suffer the consequences and benefits of supply and demand. If you care about “cheaper” you need to look at cost in concert with quality and speed, not as a measure of simple labor arbitrage.

Savvy companies are looking at short-term financial savings as a way to justify and fund their global OPD campaigns. You can still effectively create a two to three year “self-funding development center”. This can allow you to set up shop with a partner by focusing on “cheap”, and then evolve your program into higher value as the economic situation in your sourcing locale inevitably changes away from your favor.


Better

To most companies, the promise of building and delivering a better solution through OPD is even more seductive than the promise of massive cost savings.

Process doesn’t guarantee quality, but it goes a long way toward enabling it. In the early days after Y2K, Indian IT outsourcing firms set the bar high with respect to process orientation. The rest of the industry has followed, and a CMMI Level 5 rating is almost table stakes for winning sizeable OPD deals.

It stands to reason that a company that works on 1500 software releases a year should be better at it than a company that puts out 2 or 3 releases a year. A company with 5000 software development engineers will have much more core technology expertise than a company with 15 engineers. As the OPD landscape consolidates only the best will survive. The better OPD firms will offer their core expertise as a sustainable advantage, and they will partner with their clients to “build a better mousetrap”.

Closer to Talent

Sources vary wildly, but it seems safe to estimate that India graduates more than 200,000 engineers each year, with that number rising.

China claims to produce 600,000 new engineers annually, with actual numbers probably just north of 450,000.

The US, still the largest single economy on the planet, produces 100,000 engineers (or less) each year.

That means that there will eventually be a global disparity in the talent pool, with the favor going to India and China.

In order to find qualified engineers to work on your projects, you’ll need operations in India, China, or a second-tier locale – not to save money, but simply to be able to hire engineering talent. The prospect of outsourced product development offers the upside of having operations in proximity to a global talent “hot-spot”, without the cost, trouble or risk associated with setting up captive development centers.

Closer to Customers

The traditional “big four” destinations for outsourcing – Brazil, Russia, India, and China – represent the 10th, 9th, 4th, and 2nd largest economies on the planet[2]. Where there is technical proficiency and an economic force providing low cost labor, there will be economic growth. Where there is economic growth, there is economic opportunity.

For global companies, where there is economic opportunity, there should be presence. As a means to understand local market forces, and as a means to begin servicing local customers, OPD firms can and will provide that presence to their clients, as an extension of their product development practice.

Pulling it all together for your enterprise

The promise of the OPD landscape changes as we approach 2010. By weighing your priorities against the traditional drivers – faster, cheaper, and better – you can plan your OPD efforts now, positioning yourself to capitalize on the emerging “closer” driver.

A window remains wherein companies can use cost savings to set up operations, and ready those operations for higher efficiency and global excellence as the cost savings evaporate. The continued evolution of the OPD industry will strengthen the companies that survive, and should virtually guarantee that great OPD firms can continue to deliver Faster and Better, as long as your company continues to care about it.

Back story

Someone recently asked me in an e-mail message what we do for background checks on our partner teams in India. I had some time on my hands, so my response was actually decent... I've de-identified it and reproduced it below.

First, it’s important to understand that India as a country is a federation of strong states, not a strong Republic like the USA. My understanding, for what it’s worth, is that Indian Federal law, including employment law, is often weaker and less well defined than their State laws. So, what is true in Karnataka may not be true in Tamil Nadu.

It has been alleged to me, by both our partners and by my peers in the industry, that substance screening in India is not legal. It is certainly not a matter of normal practice. FYI, there are certain organic substances that are deemed sacramental to Hindu festivals (particularly Holi), the use of which would disqualify a candidate in the US. There are also over-the-counter meds in India (as in most countries) that are on our restricted list, and which would show up as red flags (or at least amber flags) in a US substance screen. We don’t substance screen our third party contractors.

We do, however, send on-site contractors from India through screening when they come to the US for extended stays.

Background checks are also quite different. As I mentioned above, India is a federation of states. There is not a unified federal ID database. That is, they don’t have anything analogous to Social Security numbers. There is an evolving federal tax ID, but that’s still of only marginal utility.

Through our partner, we’ve settled on a third party service, through a firm called ONICRA Credit Rating Agency of India Ltd. They screen and verify the following:

* Employment check for last 2 years
* Address verification
* Reference checks
* Education verification
* Police and criminal verification

This allegedly costs around $75 per employee, and is done with human labor rather than databases. They apparently will often go to former residences and ask neighbors whether the person in question is known to them, and will ask local magistrates whether the person is known to them as a “bad guy” (in addition to checks against whatever local data repositories might exist).

One approach that other companies have used, but that I don’t support, is the “passport proxy”. To get a passport in India you have to go through a detailed screening process and an in-person interview. So, for the price of the passport application, you can get the Indian Government to do the background check and criminal check for you.

The problem with this approach is that passports are good for 10 years, and renewals are done with lower scrutiny. So all you get for this is the assertion that the employee wasn’t a bad guy some number of years ago. You also don’t get a “report” to keep on file with this methodology, so it’s not really auditable. Better to have a real company do this, in real time.

PS: As editorial, my standard sound bite when I do my all-hands in India is along the lines of: “MY COMPANY is a service provider to the most highly regulated companies on the planet. If there is a compliance regulation in any jurisdiction on this blue orb, you can bet we’re subject to it by proxy, through our customers. And we sell one thing and one thing only to those customers – Trust. So MY COMPANY will be the first employer to ask you to go through a comprehensive criminal background check. But if you stay in this industry for any length of time, I can guarantee you we won’t be the last employer to ask this of you.”

So far, no one has protested, and to the best of my knowledge, no one has been flagged as an employment risk. Either the engineers in India are too timid to protest, or they really don’t mind submitting to this process. (Basically, my opinion is that there aren’t as many aggressive libertarians in the software development world in India…)


necessary but not sufficient

This is an old list I wrote out, on what is necessary for a successful offshore partnership.

  1. Dedicated on-shore resources
  2. Investment, not a quick fix
  3. Additional hardware and bandwidth, per resource
  4. Formalized KT (knowledge transfer) plan, both sides
  5. Formalized communication plan, both sides
  6. External oversight and goals
  7. Documented criteria for success
  8. Face time (both ways)
  9. Start a wiki
  10. Have the infrastructure in place
  11. Have escalation plans in place (24x5)
  12. Interview the leads
  13. Hire high on the food chain

Self Funding

So, I had an idea the other day – it may in fact have been given to me, but it’s hard to say. One of my guys and I were talking about a way to build a contract where we’d get a pro-rated discount, credited at the end of a billing cycle. So if you had 50 people on staff, you’d pay $x; if you got to 60 people, the vendor would give you a credit for the next month of, say, $12,000; 80 people gets you $25,000, and so on. It comes in as a credit.
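A sketch of that tiered-credit idea, using the headcount and dollar figures above; the tier structure itself is just one illustrative way to express such a contract.

```python
# Illustrative sketch of a tiered monthly volume credit.
# Headcounts and credit amounts are the examples from the post.

CREDIT_TIERS = [   # (minimum headcount, monthly credit in USD)
    (80, 25_000),
    (60, 12_000),
    (50, 0),       # baseline staffing level: no credit yet
]

def monthly_credit(headcount):
    """Return the credit applied against next month's invoice."""
    for minimum, credit in CREDIT_TIERS:
        if headcount >= minimum:
            return credit
    return 0  # below the baseline: no credit

print(monthly_credit(65))  # 12000 -- the 60-person tier applies
```

The credits this returns are what would seed the self-funding incentive pool described below.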

The clever bit is that you can use this to build an incentive structure for your global sourcing team – the guys who run the program. If you set up the vendor partnerships with this in mind, you can create a system that self-funds growth. The lines of business will pay a set cost, get labor arbitrage, and all the goodness of the “program”, and the program folks will have incentive to grow the team, in order to get more discount funding, so they can implement new programs. You’d probably have to give the program seed money, but you could get scale that would spin off “profit” internally, and use it to fund more global heads, fund more programs, pay T&E, etc., or eventually, spin discounts back into the business.

It’s perverse, but big corporations breed perversity, so it’s really just a clever adaptation to a perverse environment.



In working through the details of a "best practices" program I'm implementing with my team, I had an "Eventual Master of the Obvious" moment this week.

In the phrase “I manage a remote team,” the emphasis breaks down like this:

Manage: 80%
Remote Team: 20%

I drew a nice chart that looks like Pac-Man. (I'll see if I can do it electronically and post it here sometime soon...)

This is simple, but it's a profound key to the problems I've been trying to resolve with the management of remote teams. The place where most remote teams, and most outsourced efforts, fail is management. When I look back at all the failed projects and failed partnerships I've picked apart over the years, the common thread is bad management, not bad staff, bad partners, or bad engineers.

Years ago, The Mythical Man-Month talked about a common problem in our industry - the promotion of talented individual contributors into management positions. Skill at writing code or designing product is not in the least predictive of skill at managing other people who do the same thing.

Many of the remote teams I've put together through time have been managed by people who were great individual contributors, but who hadn't mastered the 80th percentile of their job -- effectively managing.

Most of the "best practices" I've been pushing are in point of fact "management" best practices.

In the 20% of stuff that changes because of remote or outsourced staff, it probably breaks down equally along cultural differences (10%) and practical differences (10%). But it's not the major part of the job, and if you're in charge of remote teams, you shouldn't act as if it is...


Esprit de Corps

If you’re collaborating with other people on a project, you are de facto part of a team. This is true whether you’re all in the same room every day, or whether you work half a world apart.

For teams of people all located in the same building, team identity and dynamics kind of take care of themselves. People go out to lunch together, hang out at the coffee machine in the morning, and generally feel like they are part of a team, with a shared fate and shared accountability. That team identity is a powerful force, and shared fate really does make a difference in the performance of any team.

For global teams though, it’s much harder to build and maintain that esprit de corps.

One quick thing you can do is establish a regular communication rhythm about non-transactional stuff. That is, you can build esprit de corps by paying attention to it.

This is an easy “best practice” to implement.

  • Get your local and remote teams to take and swap team pictures. (with a caption saying who's who, so people can put faces with names)
  • If you’re a manager, commit to putting together a quick “blurb” once a month.
  • Highlight your joint accomplishments and achievements.
  • Send it to both your local and your remote staff.

This little bit of work will go a long way toward making two groups of engineers, on two different sides of the planet, feel and act like one team. This simple bit of management will pay back massive returns for you when it’s time for your team to dig deep on a big milestone or deliverable.


KLOCs redux

So, the one person aside from my wife who regularly reads this blog asked, either in e-mail or in a "comment" to my KLOC blog entry, why I was so down on KLOCs, and then referenced "defects per KLOC" as a valid metric.

I agree with him.

KLOCs are a great way to normalize other things you might measure. That is, I find KLOCs perfectly acceptable in the denominator. But I find KLOCs utterly abhorrent in the numerator.

Put differently: bugs per KLOC, good. KLOC per day, bad.


We don't want no foreign rulers

So, back in the '70s, when the Carter administration was trying to drag the country kicking and screaming into modernity, I remember seeing bumper stickers saying:

Down with metrics -- We don't want no foreign rulers

We clearly still don't want no foreign rulers, as we keep crashing spaceships into the Martian terra firma because we still don't use a consistent universal system of weights and measures.

But that's a different story, and not really what I wanted to talk about today. I just thought it was a clever title for a second meditation on the topic of metrics.

Along with my teams, I've been putting a lot of thought into the topic of metrics lately. I think there are a few basic metrics that are absolutely required to provide executives, managers, and leaders the high-level data necessary to follow trends within a team, to highlight any areas of concern, and to shine light on areas where the teams have done exceptional work.

The first area is cost. This all presumes that you care about cost, and that you are in a partnership with a negotiated rate structure that varies based on skill set. This won't always be the case, of course, but it is for me, so here's what I think makes sense:

  • Average blended labor rate -- This is the overall cost, on the "buy" side, of a person-day (or whatever your standard is) of work. This should be averaged across your whole partner team, including management.
  • Average blended labor rate, fully burdened -- This is the same as above, but includes hidden costs like air fare, capital equipment, software licenses, bandwidth, T&E, and telco costs. This is important, because in any outsourced partnership or engagement, these hidden costs can represent a significant pile of money.
  • Performance against budget -- this is pretty straightforward, though measuring it can be tough in any organization. Basically, you'll want to know if your project, partnership, virtual team, etc. is on track -- what's your budget, and how much of it have you burned up so far?
  • Run-Rate - Run rate is your "latest" monthly (or any other period of time) invoice amount. This ends up being an important predictor, since it's very easy in offshore partnerships to miss big spikes of growth, particularly in distributed relationships. You'll need to use this metric as an input to a forecast model that will give you a good guess on where you'll land at the end of your budget cycle, and also what your one to three year spend forecast should be for your team or project. (with the understanding that this isn't applicable for all situations...)
  • Labor Cost Differential -- This one is only important if you think you care about "cheaper," as in "cheaper than US-based engineers". For this one, you'd use your US fully burdened labor rate (for years, I've used $140,000 a year, and no one has been able to give me a better number, so it's probably pretty close for most East Coast teams...). You'd then take the quotient: (offshore rate) divided by (onshore rate). In most partnerships, and in most offshore situations, it will be favorable. Please note, this is not a measurement of cost effectiveness, just of how well you've negotiated. To get to cost effectiveness, you have to factor in productivity, which makes it much tougher.
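These cost metrics reduce to simple arithmetic. Here's a minimal sketch; every input figure is an illustrative assumption, except the $140,000 onshore rate, which is the number I quoted above.

```python
# Sketch of the first-area cost metrics. Inputs are illustrative.

def blended_daily_rate(invoice_total, person_days):
    """Average blended labor rate: buy-side cost per person-day."""
    return invoice_total / person_days

def burdened_daily_rate(invoice_total, hidden_costs, person_days):
    """Same, folding in air fare, equipment, licenses, telco, T&E."""
    return (invoice_total + hidden_costs) / person_days

def labor_cost_differential(offshore_annual, onshore_annual=140_000):
    """Offshore/onshore quotient -- a negotiation measure, not productivity."""
    return offshore_annual / onshore_annual

rate = blended_daily_rate(220_000, 880)                # 250.0 per person-day
burdened = burdened_daily_rate(220_000, 44_000, 880)   # 300.0 per person-day
print(rate, burdened, labor_cost_differential(56_000))  # differential 0.4
```

Note how much the "hidden" burden moves the number -- which is exactly why the fully burdened rate is worth tracking separately.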

The second area is team composition. This isn't a dynamic metric, inasmuch as it will only change slowly through time, but it's important to understand the following:

  • Team Composition, by function - I think it's important to follow how many people you have on your team, by function. It's just something you should know off the top of your head. For example: I have 16 QA engineers, 2 SysAdmins, and 12 C++ developers.
  • Team Composition, by seniority - This is also important. If you're working with or for a really well managed company, with a mature PMO function, you might have tools at your disposal to produce staffing pyramids that graphically display this information. If not, Excel works well too. You basically want to know how many "freshers" you have, how many engineers with one to three years experience, three to five, etc. It doesn't matter how you slice up the bands (though it helps if they match the terms of your contract, or your internal "career ladder"), but you need to pay attention to this. I won't opine about the "right" mix, but you need to put some thought into what that right mix is, and you need to staff accordingly. Obviously, a team of 45 engineers straight out of college isn't going to give you great product, and just as obviously, a team of 45 engineers with 20 years experience might be tough to build, keep, and wrangle. Seek balance with this one.
  • Management structure and hierarchy - It's odd to think of an org chart as a metric, but it kind of is. It's easy, in my experience, to find yourself in a situation where one senior manager may have 17 direct reports. If your team is in India, they may tell you that this is perfectly normal. It might be, but it still feels risky to me, and I still don't support a staff-to-manager ratio that high. I like to watch the team hierarchy closely.
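A quick sketch of the seniority-band bucketing; the band edges here are my own illustrative choice, and should really match your contract terms or career ladder.

```python
# Bucket engineers into experience bands to build a "staffing pyramid".
# Band edges are illustrative assumptions.

from collections import Counter

def seniority_band(years):
    if years < 1:
        return "fresher"
    if years < 3:
        return "1-3 yrs"
    if years < 5:
        return "3-5 yrs"
    return "5+ yrs"

def staffing_pyramid(years_of_experience):
    """Count of team members per band -- the pyramid's layers."""
    return Counter(seniority_band(y) for y in years_of_experience)

# A hypothetical ten-person team:
team = [0.5, 2, 2, 4, 1, 7, 0.2, 3, 6, 2]
print(staffing_pyramid(team))
```

A glance at the resulting counts tells you immediately whether you're bottom-heavy with freshers or top-heavy with (expensive, hard-to-keep) senior staff.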

The third area is hiring and attrition. This is either a critical success factor, or a red herring, depending on the situation. Either way, it's important to measure.

  • On the hiring front, I don't generally care about how many resumes my partner sources, or what their phone-screen-to-interview or interview-to-hire ratios are. They should care, so they can become efficient at hiring. What I care about is hiring latency. This is critical to understand, as it impacts how much advance notice I need in order to have a team up and learning. I like to keep an eye on a running average -- how long does it take, in my market, to go from "start recruiting" to "start date" for a person with the skill set I usually try to hire.
  • Hiring ratio -- Contrary to what I wrote above, there is one case where I do care about efficiency in hiring -- that's when I have my front-line managers interviewing all the candidates we bring on to the teams. (As an aside, this is expensive and time consuming, but sometimes it is very useful to do this, as it builds buy-in with the onshore management team, and correctly sets the bar with the offshore management team.) If I do have my managers interviewing, I like to know how many of the people my partner "short-lists" end up getting thumbs-up from my staff. This is a measure of how well my vendor "gets" the staffing requirements my teams are giving them.
  • Hire to "on-boarding" ratio - I care about this one because of a problem I've experienced. Often, in my recent experience, my partner will extend an offer to an engineer (in India, by the way), the engineer in question will accept the offer and give us a start date, then "no-show". Everyone stops recruiting, expecting that we've got the talent we need; then the hire fails to show (usually using the offer they got to drive up their wage with another company). It's not a great practice, and I hope the engineers in question end up somehow punished, if only karmically, for their misrepresentation. But I like to measure this, as it illustrates 1) how well my partners are selling themselves, and 2) how well they're reading their prospects.
  • New Hire Attrition -- If someone quits in their first 90 days on the job, it's usually because they took the wrong job. It represents an annoyance, a setback, and a loss of productivity, but it is a different problem than losing someone who's been with you for three years. I measure this separately, because I think it reflects the partner's ability to find the right person for the job. It's really a late-stage "no-show" measurement, and I like to cluster these two metrics together.
  • Rank-and-File Attrition -- Obviously, you never tell the rank-and-file that they're rank-and-file, but in any team, there are "standouts" and there's "the team". Attrition from this portion of the team is "just bad" -- meaning there are worse people to lose than rank-and-file engineers. This needs to be treated as a vector, to the extent possible. That means you need to track it monthly, forever, so you can watch for trend lines.
  • Key-staff Attrition -- Key-staff can, very privately, know that they are key staff. These are your leads, your high-fliers, your architects, your managers. Losing one of these folks is tough on your whole organization. This is also a vector.
  • Managed attrition -- Knowing someone is going to leave, having a relationship with them that allows them to tell you this, and having more than two weeks to find and groom a replacement is really the best case scenario, aside from keeping someone on your team, happy and productive, forever. People change jobs. If they do it professionally, I can live with it. This is a measure of churn on the team, but also of how well the managers build rapport with the team members.
  • Surprise Attrition -- In India, and in much of the world, 2-weeks notice is considered "professional". In my world, this is both unprofessional and irresponsible. So, if someone surprises us with a "today's my last day", or "I'm giving my 2-weeks", I don't like it. I count these very carefully, and I like to get root cause on each of them.
  • Attrition root cause -- It's not very helpful to look at a high attrition number, declare that it should be lower, and then consider your job as a leader "done". You need to know why your people are leaving. Is it repetitive work? Is it poor treatment by the onshore team? Is it a sub-par work environment? Salary and benefits? Don't like their boss? All these are different problems, requiring different solutions. Measure root cause, and commit yourself to fixing the problems reflected therein.
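The attrition categories above lend themselves to simple monthly bookkeeping. Here's a minimal sketch -- the category labels mirror the bullets, and everything else (months, causes) is illustrative.

```python
# Track monthly attrition per category, plus a root-cause tally,
# so the "vectors" above can be watched as trend lines.

from collections import defaultdict

ATTRITION_CATEGORIES = {"new-hire", "rank-and-file", "key-staff",
                        "managed", "surprise"}

class AttritionTracker:
    def __init__(self):
        self.monthly = defaultdict(lambda: defaultdict(int))
        self.root_causes = defaultdict(int)

    def record(self, month, category, root_cause):
        assert category in ATTRITION_CATEGORIES
        self.monthly[month][category] += 1
        self.root_causes[root_cause] += 1

    def trend(self, category):
        # Per-month counts, oldest first -- the vector to watch.
        return [counts[category] for _, counts in sorted(self.monthly.items())]

t = AttritionTracker()
t.record("2007-01", "surprise", "salary")
t.record("2007-02", "surprise", "repetitive work")
t.record("2007-02", "key-staff", "poor treatment by onshore team")
print(t.trend("surprise"))  # [1, 1]
```

The root-cause tally is the part that actually tells you which of the "different problems" above you're paying for.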

The fourth area is performance and execution. There isn't a lot I can say here, except in the abstract. This will change with the team, the function, the product, the SDLC, and every other variant of work and work norms imaginable. Suffice it to say you should measure performance, as objectively as possible.

  • Performance against expectation - The method will vary, but it's important to know if your team did what you expected them to do, asked them to do, or what they committed to do. Measure this as a scalar (i.e., discrete tasks over discrete time intervals), but watch it as a trend.
  • Quality of work product - Again, a million ways to measure this, but it's critical to measure. Maybe you look at code-review comments, maybe you look at bugs inserted, maybe you look at call resolution time, but look at something, consistently. It helps if you have a target here, so if you don't have a target, make one up, and make it 5 or 10% better than what you're currently measuring, and keep the quality trend improving.

On this point, it is worth a bit of soap-boxing. If you have a "virtual" team, i.e., a co-sourced effort where some of "your" employees are working together with some of your outsourced staff, you need to measure both sides of the partnership equally. You can't measure your offshore team and let your onshore team coast. It isn't fair, and it will never produce a favorable outcome.

The last area is very subjective. You want to measure how your team "feels" about the offshore partner or staff. This is mushy, by necessity. What you care about is consistency in measurement, and the trend line. Here are the areas where I think it makes sense to have a consistently measured but utterly subjective metric and trend line.

  • Trust - Does your management team trust their offshore staff? This can be a simple question you ask every month (Survey Monkey is great for this). It should probably be a slider scale. Some people like a five point scale, some people like a 100 point scale. I don't think it matters. You should just consistently ask the stakeholders if they trust the partner. You should de-identify the results, average them, and watch the trend line. If there's an outlier, that is, if someone is way off scale, you should break the "anonymity" rule, intervene, and figure out what's going on.
  • Confidence - How confident is your management team in their offshore team? Do they believe their team "gets it"? Again, on this one, the scalar is arbitrary and subjective. What matters is the trend, and outliers from the trend.
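A sketch of that survey roll-up: de-identify, average, and flag the outliers worth intervening on. The five-point scale and the outlier threshold are arbitrary choices of mine, per the point above that the scale doesn't matter.

```python
# Roll up de-identified monthly trust/confidence scores:
# average for the trend line, and flag outliers for follow-up.

def summarize_survey(scores, outlier_threshold=2.0):
    """Scores on a 5-point scale. Returns (average, outlier scores)."""
    average = sum(scores) / len(scores)
    outliers = [s for s in scores if abs(s - average) >= outlier_threshold]
    return average, outliers

# Six hypothetical stakeholder responses; one is way off scale.
avg, outliers = summarize_survey([4, 4, 5, 3, 4, 1])
print(avg, outliers)  # 3.5 [1] -- someone needs a conversation
```

You'd append each month's average to a running series; the series, not any single number, is the metric.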

Last bit of proselytizing for this post -- If you choose to measure trust and confidence, you need to measure both sides of the relationship, and get a subjective analysis of how much your offshore team trusts you, and how confident they are in your leadership, work, and partnership activities.

There's a lot more stuff you could measure, and probably a lot more stuff you should measure, but this is my short list, at the moment, of stuff I think you must measure.



Heisenberg’s uncertainty principle is an interesting starting point.

Better than Deming.

You can not measure without influencing. This starting point presumes that the act of measurement is an intrusive act, even at a quantum level. Deming celebrates this; Heisenberg understands it to be a bit of a messy problem.

I agree with Heisenberg.

In our business, measurement and metrics are periodically hot buzzwords. You can not measure what you can not manage. Oh, wait. Strike that. Reverse it.

A common metric used in the software development business (especially OPD) is KLOCs. Units of one thousand lines of code. Usually measured as KLOCs per person day, or some such temporal normalization.

If I tell an engineering team that they are going to be measured on KLOCs per day, and that their perceived success is going to be based on their output measured against this deliverable, and if the team is worth its salt, they will start to raise their KLOC per day output. There are a few ways they could do this:

  • They could work longer hours, or remove distractions and work more diligently.
  • They could write shite code. Lots and lots of shite code.

There are probably other ways they could increase their output, but let’s presume the ways boil down to the two bullets above.

With a KLOC metric like this, innocently implemented and purely well-intentioned, you’ve got a 50% chance of driving your project off a cliff and seriously degrading the integrity, manageability, and comprehensibility of your code, because you’ve built an incentive structure that discourages abstraction and rewards copy-paste linearity. You’ll get a system that is impossible to maintain, very quickly, if human nature takes over and engineers start padding their KLOC count.

I don’t have a ready proposal for a good alternative to KLOCs, but certainly, there must be something that increases behavior you want, without inducing behavior you abhor.

What would you like your engineers to do? Write shite code? Certainly not. Write elegant code? Maybe. Let’s go with that.

What if you measured something that got at how “good” the code was, not how much of it there was? Take, for instance, code review comments. You could put a process in place to have all code inspected (a good and proven idea, for a number of reasons) and then count the number and severity of the review comments. Obviously, comments indicate a need for revision, so big numbers here are bad. Start counting these events, tell your engineers that their count should be low and should get lower through time, and suddenly you’ve created an incentive for behavior you believe is intrinsically good. This is harder to do, but it doesn’t carry the same risk of crippling your project because of the metric you chose.
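A sketch of that counting scheme, with KLOCs kept safely in the denominator as argued earlier; the severity labels and weights are my own illustrative choices.

```python
# Weighted review comments per KLOC -- lower is better, and the
# incentive pushes toward cleaner code rather than more code.
# Severity weights are illustrative assumptions.

SEVERITY_WEIGHT = {"minor": 1, "major": 3, "critical": 10}

def review_score_per_kloc(comment_severities, lines_of_code):
    """Severity-weighted review comments per thousand lines of code."""
    weighted = sum(SEVERITY_WEIGHT[s] for s in comment_severities)
    return weighted / (lines_of_code / 1000)

# A hypothetical review: 15 weighted points over 5 KLOC.
comments = ["minor", "minor", "major", "critical"]
print(review_score_per_kloc(comments, 5000))  # 3.0
```

Padding the line count now *hurts* the metric only if the padding draws comments, which is exactly the pressure you want.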

The reason I agree with Heisenberg, instead of Deming, is that I see it as much more common, pandemic almost in our business, that metric programs create the wrong behavior.

Ask your team to get predictable, and they will do the same thing over and over again, without the variance necessary to adjust for “conditions on the ground”. Quality will suffer. (Ask me how I know?)

Ask your team for schedule adherence, and they’ll throw quality away, and you’ll ship shite code. (Again, ask me how I know?)

Measurement influences, so it’s critical to think about what behavior you want before you start measuring anything.

I’ve been working through a “balanced scorecard” approach for a big team of OPD contractors I manage. One interesting metric we’re working on is, broadly put, attrition.

This one is real interesting. The job market in India, where this team is located, is very hot. So take it as a given that there will be high attrition. We also hire low on the food chain – young engineers with only 2 to 4 years experience. This population job hops, no matter where they are on the planet. So, again, take it as a given that there will be “industry average” attrition.

So, what do you measure here? Do you count attrition, and hold the team accountable to keep attrition levels below the industry average? Sure. That’s where I started. But it doesn’t really get you the behavior you want.

Let’s talk about what a “good” behavior would be here…

Start a few steps back -- why is attrition viewed as bad?

Because you lose the investment you’ve made in ramping someone up.

Would attrition be bad if the ramp time was zero?

Only a little.

So, what’s bad isn’t attrition itself, but the lowered productivity associated with staff churn, right? Right.

If you accept that assertion, then what’s important is not attrition, but resiliency. The ability to absorb staff loss without a decrease in productivity.

If that is true, measuring attrition will result in a sub-optimal behavior. If your management team focuses on “keeping people”, you’ll probably lower your attrition, maybe even below the industry standard. But what of it? When you do lose someone, can the organization absorb the hit? My assertion is that 1 staff turnover with no resiliency planning is probably worse than 10 staff turnovers in a highly resilient organization. I’d also assert that “presaged” attrition is easier to manage than “surprise” attrition. Lastly, I’d offer that attrition of “freshers” is way less damaging than attrition of key senior staff.

So, what you want is for your team to keep key staff around, and to have a way to manage knowledge-capture and process definition. You have to pay attention to new staff induction, so that in the face of inevitable attrition, you still keep your team cranking out piles and steaming piles of the shite code you’ve made inevitable with your idiotic KLOC metric.

So, a good metric program here would be something like:

  • Key Staff – Managed Attrition
  • Key Staff – Surprise Attrition
  • Fresher – Managed Attrition
  • Fresher – Surprise Attrition
  • Staff Recruiting Latency (how long to fill an open req)
  • Induction Efficiency (how long to “perceived” contribution for a new staffer)
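That instrument panel could be as simple as a monthly snapshot. A sketch, with field names mirroring the bullets and all values illustrative:

```python
# Minimal monthly snapshot of the attrition instrument panel above.
# All values are illustrative.

from dataclasses import dataclass

@dataclass
class AttritionPanel:
    key_staff_managed: int
    key_staff_surprise: int
    fresher_managed: int
    fresher_surprise: int
    recruiting_latency_days: float   # "start recruiting" to "start date"
    induction_days: float            # hire to "perceived" contribution

    def surprise_ratio(self):
        """Share of departures that arrived with no warning."""
        total = (self.key_staff_managed + self.key_staff_surprise
                 + self.fresher_managed + self.fresher_surprise)
        surprises = self.key_staff_surprise + self.fresher_surprise
        return surprises / total if total else 0.0

# A hypothetical month: four departures, one of them a surprise.
march = AttritionPanel(1, 0, 2, 1, 45.0, 30.0)
print(march.surprise_ratio())  # 0.25
```

Keep one of these per month, and the surprise ratio becomes the trend line that tells you whether managers are actually building the rapport that turns surprise attrition into managed attrition.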

Also, it’s obvious, but attrition is when people quit, not when they’re fired. Firing people is good for teams, when it’s done fairly. Counting firings as attrition means you create an incentive program that encourages managers to keep bad hires on the team forever.

Maybe you create a modification of the above instrument panel, and add something about quits or fires in the first 90 days, which could get at whether the management team is hiring the right people in the first place.

Anyway, my point, expressed succinctly, is that you can’t measure people without influencing their behavior (presuming they know you’re measuring them and that they give a shit). If you keep this in mind, you can steer them in a direction you presume to be subjectively “good”. If you forget this, you run a big risk that your measurement will induce behavior antithetical to your intent… Because, as Deming said, if you measure something, it will generally “improve”. (For very loose definitions of “improve”)