29.5.08

A bit more on home improvement

The saga of my kitchen counter outsourcing project continues. The contractor in question has committed to "stand by his work." My wife ultimately came up with the best vendor intervention tactic imaginable. Her stance was something like this:

I want you to do a quality job here, such that you would bring any prospective customer into this kitchen and show them this counter top as an example of your work. If you did that right now, do you think you'd ever book another job?

After she said that, they quietly agreed to start over and do it right this time.

This ultimately comes down to referenceability, and the threat of a destroyed reputation. It's not what you want as your first line of defense in a vendor-client relationship, but it's a good fallback if you don't have all your contract and vendor management ducks in a row...

Stay tuned for more on this non-high-tech outsourcing experiment...

27.5.08

Home Improvement – just another kind of outsourcing...

I do most of my own home improvement projects. I enjoy the occasional carpentry project, and god knows I can use the exercise.

But periodically I'll outsource a big project to a contractor. One such job I outsourced recently was the installation of new granite counter tops in my kitchen. Moving granite is a job for large burly men with monosyllabic names. I am too frail, and my name has too many syllables, for me to take on that job myself. So my wife and I found a local kitchen design store with a good reputation. We had worked with them before on a small job sourcing new cabinet doors, so they were known to us. Without a lot of scrutiny, without reference checks, and without anything but their boilerplate contract, we signed up to have them remove our old counter tops and sink, get rid of all the old junk, and install a new sink and a new granite counter top.

We expected the whole process to take about 10 days start to finish, and we expected the results to be a lovely long-lasting and beautiful addition to our home.

We are 21 days into the process, and we still do not have granite counter tops that meet our expectations. We are at a point where we regret both the vendor we selected for the process and indeed the decision to outsource the installation to begin with.

In short, I have never installed granite counter tops before and I think I could do a better job. It's that bad.

So I thought that decomposing the chain of events that got me to this level of dissatisfaction would be an interesting test for my outsourcing best practices. Here are the problems that we've encountered along the way:

  • The sink, a double-bowl with two holes, arrived with only one "drain" unit (the other one was missing, and during the first install they tried to foist a cheap generic unit on us).
  • The hole they drilled in the granite for the sink had a little inset over part of the sink that we did not expect.
  • The hole they drilled in the granite for the faucet was not centered to the sink.
  • The counters, when initially installed, were not centered to the window, as we had discussed with the "layout" guy.
  • The first installation had excessive "ledges" in the seam between each of the three pieces of granite.
  • The first installation did not fit against the wall, leaving big gaps.
  • The second installation (they came back, uninstalled everything, and tried again) still had excessive ledges in the seams, though it was a slightly better job than the first try.
  • The second installation left us with a piece of granite counter top that dropped almost 1/2 inch over a 36-inch run. (That is, it ran downhill from the seam, resulting in more of a peaked roof than a flat counter top.)
  • The second installation included two pieces of granite (both smaller than the main run) that were about 1/8 inch thicker than the main piece (enough to be visible from any angle).
  • The second installation arrived cut wrong, and had to be hand-cut on the installer's tailgate in my front yard. This (in my opinion) resulted in a bad fit in one corner that will leave a gap between the counters and the wall.
  • In each case, the installers did not "walk me through" the installation with any care. They pointed to the seams, asked me if they were OK, and bolted for their truck.
  • During each installation they showed up without all the tools they needed for the job.
  • In one case, they borrowed a tool, then (inadvertently or not) stole it from me when they packed up their tools and left.

Let's presume that was a mistake, and that they'll bring the tool back. Either way, reading this list, I'd be hard pressed to describe these guys as "professional."

Let's talk about best practices for managing outsourced technology providers, and see whether what I recommend and teach would have helped my wife and me avoid this fiasco:

  • SLA-driven contracts: Well, right now, we're probably obliged to pay these guys. The contract states "balance due on delivery". We contend that they have not delivered; they probably contend that they have, since there is an 800-pound hunk of granite sitting on top of our cabinets, masquerading as a counter top. A better contract would have defined acceptance criteria and a service-level sign-off process, so they would know what they had to "get through" before we paid them.
  • Full and detailed specifications: The contract we put in place did not have spec terms in it. There was a verbal discussion of "it will look like this", but that was between the sales force (effectively the general contractor for the job) and us. The supplier (a sub-contractor) was not present for that conversation, and apparently contends that "suck" is par for the course and within spec. It would have been better to spell out exactly what our expectations were:
    • There shall be no ledges in a seam that would cause a flat-bottomed glass to wobble back and forth.
    • Counter tops shall be the same height throughout, to a tolerance of 1/32 of an inch.
    • Counter tops shall be level throughout.
    • Counter tops shall fit snug against all walls they touch, within 1/32 of an inch throughout.
    • Counter tops shall be centered to this scribe mark exactly.
    • Distinct specifications about where we wanted the seams.
      (This all would have helped tremendously.)
  • Communication plan: There have been some "triangulation" problems between me, my wife, the kitchen design company, and the granite installer on this project. We should have had one designated point of contact on each side of the project.
  • Roles and responsibilities: At times, the installer was trying to work directly through us, as if we were the GC on the job and had contracted directly with them, which we hadn't. It was not clear that the GC added any value until we pressed them on this and they "stepped up". They should have taken on more responsibility, since we're the ones who wrote THEM the check (or at least the check for the deposit). We should have defined that our responsibility consisted of writing the check and signing off on the acceptance criteria, period.
  • Contractual protection for recourse: We should have had terms defined that would lay out what we expected in the event that the installation failed to meet our expectations. At this point, we'd like them to get a new granite installer, but that's going to be a tough argument to have with them because they've probably already paid their sub contractor, even though the job was botched.
  • Selection of sub-contractors: I'm adamant about this in my sourcing contracts -- no sub-contractors without approval. We should have put a term in that defined our rights to reject a supplier. Oh well...

Now, if we had done all this, it's highly likely that they would have politely asked us to take our business elsewhere. Maybe. It's a lot to have to go through for a counter top. Ideally, you'd like the vendor you select to be competent, and for things to go smoothly. But when I read my complaints above, about basic levels of professionalism and competence not always being "business as usual", it sounds a lot like the complaints I hear when outsourced product development goes badly...

20.5.08

Another role for program management

I'll start this post with a story:

Not too long ago, when I was in charge of a multi-vendor global sourcing program, I faced a lot of resistance to a particular vendor. None of the delivery heads were happy with this vendor's performance. Each of them had, privately, complained to me that the vendor wasn't meeting expectations, and that we needed a "shake up."

So I initiated a shake up.

I told the staff at my company that we were going to do a shake up. I told the vendor that they weren't meeting expectations, and that I was beginning a process to fix that.

My first step was to determine what expectations weren't being met.

It turned out, as is often the case, that the "failure to meet your expectations" was really a "failure to read your mind."

That is, the expectations had never been articulated, much less written down, reviewed and socialized.

So we went through an arduous process wherein we attempted to define measurable goals and objectives for each of several delivery teams we had in place with this vendor.

After five weeks of back and forth with my own coworkers, we completed a list of expectations, had a conference call to review them, and then socialized the goals and objectives with the vendor, and subsequently with the offshore team. Good good, I thought. Job well done.

Not quite.

I committed to doing a mid-quarter performance review, wherein I asked all my coworkers how the vendor in question was doing against objectives. I got mostly decent feedback on the vendor, but in one particularly important case, I got a statement to the effect that "they didn't do what I expected in this situation, and I'm very disappointed in their behavior..."

I ended the conversation abruptly by saying something along the lines of "Hmm. I'm looking through the quarterly objectives, and I don't see any place where you called out your expectation on this apparently very important behavioral criterion. So, you don't get to 'ding' the vendor on this. You can express your disappointment, and formally document this expectation as a standing behavioral requirement. And henceforth the vendor can be 'on notice' that you expect this particular behavior out of them. But you didn't ask them to do this thing. You asked for 30 other deliverables and behaviors, all of which they delivered. So, they met your explicit expectations. They failed to read your mind, but so do we all."


The moral of this story is that outsourcing program management, done well, stands between the retained team and the vendor as a mostly neutral third-party arbiter. That is, the program manager is the impassive voice of reason, clearly articulating expectations and eliminating emotion in the presence of either poorly articulated expectations or execution failure on the part of the vendor.

If you're in anything but a short-term project with an offshore service provider, it's important to have someone playing this role. It could be your vendor. Or you could be lucky and have excellent leadership in your retained team, who are perfect at articulating their expectations. But if you're like most companies, you need someone in the program management role.

19.5.08

Offshore Outsourcing Program Management (a job description)

As I thought further on this "program management" theme this morning, I realized I had a document that summed up my thoughts on the matter. This is a job description I wrote once, and it is effectively in the public domain (it was posted on Monster and various other job boards), so I think it's safe to de-identify it and post it here...

The Global Sourcing Office is responsible for technology staffing and outsourcing partnerships company-wide, focusing on four key areas:

People: Recruit, build and train world-class teams on both sides of our global sourcing partnerships.

Process: Establish, document and implement secure, compliant and repeatable processes in support of our global sourcing efforts.

Partnerships: Establish, nurture and govern strong symmetric partnerships with our global staffing vendors.

Performance: Identify and implement performance metrics and targets to measure the efficiency, effectiveness and pervasiveness of our global sourcing initiative.


Responsibilities will include:

· Accountability for on-budget performance across the partner portfolio

· Accountability for the performance of the partner teams against agreed upon performance metrics

· Daily care and feeding of global teams including escalation and problem resolution

· Organizational strategy and succession planning within our partner teams

· Coaching and training of retained-staff team coordinators working with offshore teams

· Development, documentation and implementation of security, compliance and operational processes and best practices

· Discovery and analysis of new business opportunities for global sourcing

· On-site leadership with global partner teams (up to 40% international travel)

· Oversight of executive “governance” program for partner relationships

· Overall evangelism for global sourcing methodology and initiative

Required Skills & Experience:

· High energy, positive attitude, strong work ethic

· Passion for developing and delivering secure and robust software (and/or technology enabled services)

· Passion for delivering solutions that protect and manage critical enterprise information assets

· Strong technical background, with formal engineering training or equivalent

· Strong business and financial acumen, with formal business training or equivalent experience

· Strong leadership skills and experience

· Experience developing and delivering enterprise software products

· Experience developing and delivering captive xSP services and products

· 10+ years of direct or matrix management experience

· 4+ years direct experience managing offshore teams

· Experience working on the “vendor” side of global sourcing partnerships

· Experience managing offshore teams in a variety of countries, with specific emphasis on [insert your sourcing locale here]

· Passion for working with global teams

· Passion for security and regulatory compliance

· Creativity in applying security and compliance mandates to global sourcing models

· Understanding of and interest in economic and geopolitical factors impacting global sourcing

LinkedIn Q & A

I think fewer people read the "Offshoring and Outsourcing" forum on LinkedIn than read this blog, or at least the overlap is low, so I thought I'd cross-post a question and answer from there today. You can read the original question, and a few more answers, here.

The question, which I think quickly gets to the core of a "sourcing program" was this:

What part does offshore program management play in the success of a project?

I think this is such a good question that I'm going to make it my theme for the week. My first answer, from LinkedIn, follows below:

If you are talking about a small-scale, well-contained PROJECT, then program management may be overkill, and may in fact be detrimental to the success of your endeavors, since it adds a lot of second-order work not directly related to the making of software or the delivery of service (presuming you're talking about IT outsourcing).

However, in my own experience, small-scale projects are seldom worth the effort, travel and organizational turmoil required to set up an offshore team. So my advice to firms contemplating a first offshore project is to "go big."

This involves planning from day one to achieve the largest scale they can conceive of, and setting goals and decision points that would allow them to grow their offshore presence based on successful execution. In practice, I counsel organizations to start with a small, well-contained, low-risk project, then grow if and when that project meets or exceeds documented expectations.

With this "growth through small victories" model in mind, program management is critical. It will be the program manager (our Sourcing Manager, or some business leader only tangentially involved with the project), not the project staff, who will keep front-of-mind all the meta-goals around long-term staffing strategy. Your program managers (what ever their job title) should be concerned about:

- managing the cultural impact of introducing an outsourced offshore team to the mix.
- determining and documenting performance goals for the entire project, or at minimum for the offshore team.
- measuring and documenting performance against these goals.
- performance correction where necessary.
- communication and socialization of performance.
- creation of "esprit de corps"
- management of the business relationship between the client and the offshore vendor.
- a whole host of due diligence activities, such as security and background checks, data privacy, network security, IP protection, etc. (all the boilerplate stuff you'll want in your contract).

With this in mind, and with the notion that offshore outsourcing is not worth the effort at a small scale, I'd say that qualified, experienced and talented program management is critical to the success of a program, but not explicitly for a given project within a program. (that is, you might get lucky and have a successful project without a program manager...)

14.5.08

More on DR planning - Overkill?

I had an interesting conversation about disaster readiness and preparation yesterday. The gentleman I was talking with said he found it funny how much scrutiny offshore operations centers get for DR planning. I had mentioned that my "multi-vendor" strategy always includes contingencies for wide-scale long-term power and network outage. He found that funny, and when I asked him why, what he said really made me think.

His point was that many (possibly most) firms do put a lot of effort into DR plans for their offshore centers. They come up with elaborate schemes to handle the situation wherein their city of choice loses power or network, floods, has an earthquake, etc.

Many of those same companies have no such plan for themselves. They have no way to handle the relatively likely event of an earthquake in their Silicon Valley development center, or a long-term power outage in their Omaha data center.

So I thought a little bit about the large- and small-scale disasters from my own experience that I listed in yesterday's post. Here's the breakdown of "insourced" versus "outsourced" impact:

  • Ice storms in Pennsylvania. That was the insourced team.
  • Tamil rebel bombings in Sri Lanka. That was the outsourced team.
  • Large scale demonstrations in India. That was the outsourced team.
  • Flooding in Mumbai, India. That was the outsourced team.
  • The 2004 tsunami in Asia. That was the outsourced team.
  • Virus infection in a lab in Massachusetts. That was the insourced team.
  • Power outage in a lab in Massachusetts. That was the insourced team.
  • Flooding in New Orleans. That was the insourced team.
  • Fires in Southern California. That was the insourced team.
The final tally:

  • DR impact for Insourced Team: 5 events in recent memory.
  • DR impact for Outsourced Team: 4 events in recent memory.
So, by my small unscientific sample set, you're about as likely to have a disaster in your offshore center as you are in your retained onshore center.

So, a point to ponder for all the readers on the client side of global sourcing partnerships -- why do you put so much more effort into your offshore DR plan than you do into a general disaster readiness plan that encompasses all your sites?

Seriously, e-mail me or add comments, because I'd love to know your thoughts on this.

13.5.08

Best Practice - DR Planning

I don't want to cheapen or minimize the human suffering from the recent cyclone in Myanmar, or yesterday's earthquake in China, but these events underscore a necessary component of global software and service delivery teams -- Disaster Readiness.

I'll be short and to the point in this post. If you work with a global team, you need to prepare a plan to address continuity of service or of productivity in the event of a disaster in any of your locations. In my own experience, I've been involved in or directly observed implementation of portions of the DR plan because of:
  • Ice storms in Pennsylvania.
  • Tamil rebel bombings in Sri Lanka.
  • Large scale demonstrations in India.
  • Flooding in Mumbai, India.
  • The 2004 tsunami in Asia.
  • Virus infection in a lab in Massachusetts.
  • Power outage in a lab in Massachusetts.
  • Flooding in New Orleans.
  • Fires in Southern California.
These are large and small scale disasters. In some, many thousands of people died. In others, a few people were inconvenienced. In each case, business went on, and we did have to start execution of the DR plan.

I can't imagine that service delivery managers in Chengdu are worried about call queue wait times right now. With 12,000 or more dead, and with most infrastructure seriously damaged, it will be a long time before things are back to business-as-usual. But in the face of this tragedy, business will go on. This gets to the actionable best practice:
  • Have a DR plan.
  • Review it with your staff and your managers at least quarterly.
  • Have a plan for "stand-by" sites.
  • Have a plan to account for and assist your staff in the event of an emergency or disaster.
  • Hope you never have to use the plan.

7.5.08

A point of clarification - "High Expectation Tasks"


Several times in my best practice recommendations I have referenced "high expectation tasks."

A few people have asked me what I mean by that, so I'll offer this point of clarification.

By "high expectation tasks" I mean those tasks that must be done precisely (hence the vernier caliper above) in a certain way.

Often in engineering, something has to get done, but you don't really care how it gets done. For example, you may need a test harness built, but you may not care what language it's built in, or how elegant the design of the harness is, as long as it will pick up unit tests, run them, and show you results. The actual building of this test harness could be viewed as a low expectation task. Someone still has to get it done, but it isn't necessary to micro-manage how.
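
To make this concrete, here is a minimal sketch of such a "low expectation" test harness, written in Python against the standard unittest module. The tests/ directory layout is an assumption; any language or framework that picks up the unit tests, runs them, and shows results would satisfy the task equally well.

    # Hypothetical minimal test harness: discover unit tests, run them,
    # and report the results. How it is built matters far less than the
    # fact that it works -- a classic low expectation task.
    import sys
    import unittest


    def run_all_tests(test_dir: str = "tests") -> int:
        # Assumed layout: test modules named test_*.py live under test_dir.
        suite = unittest.defaultTestLoader.discover(test_dir)
        result = unittest.TextTestRunner(verbosity=2).run(suite)
        # A non-zero exit status lets a build script react to failures.
        return 0 if result.wasSuccessful() else 1


    if __name__ == "__main__":
        sys.exit(run_all_tests(sys.argv[1] if len(sys.argv) > 1 else "tests"))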

Just as often, you'll care about not only what gets done, but how it gets done. For instance, if you already have a test harness, and you need modification to it, you will care what language the mod is done in. You will care about syntax, and comments, and other coding standards, because you are adding to an already extant tool, and you don't want to make it needlessly complex or messy. In this case, you have a high expectation task on your hands.

When you have a high expectation task, it's important to take the time to document your expectations in clear, concise language. Very often when I debug "broken" global teams, there is a feeling on one side or the other that expectations aren't being met. In almost 100% of these cases, the expectation was strongly held, but implicit. That is, there was a high expectation task, but no one had written down or otherwise communicated the expectation.

A few quick examples of high expectation tasks might be:
  • Coding standards
  • Check-in procedures
  • Test execution procedures
  • Conference call "protocol"
Hope this helps clear up what I mean by "high expectation tasks."

5.5.08

Best Practice - Checklists


There is a great article called The Checklist by Atul Gawande in the December 2007 New Yorker magazine. The full text of the article is available for free on-line. If you count yourself serious about software engineering you should probably take fifteen minutes and read it.

Here's the bit I read that made me have an epiphany - and resulted in me xeroxing the article and adding it to my required reading list for my managerial best practices training:

"Intensive Care medicine has become the art of managing extreme complexity - and a test of whether such complexity can, in fact, be humanly mastered."

The article goes on to provide fascinating detail about the complexity of modern medicine, as well as some old-school engineering analogs to this complexity. It then states a simple truth - checklists reduce rates of error.

Making software is a complex task with hundreds, maybe thousands of highly error-prone steps and tasks. As the article points out, humans have an annoying tendency to fail in their efforts to navigate such complexity.

The task of good engineering is to eliminate complexity and to break problems down into simpler, more manageable sub-problems.

If you work in software engineering you should do this as a matter of course.

If you work in software engineering in the context of a global team, with language and cultural barriers, you should do this as a matter of dire necessity.

Here are a couple of examples of complex procedures or sub-procedures and a partially developed sample checklist for each:

Sample Checklist #1: Stuff to do before you check in code

I've had to debug several sub-par offshore development teams through the years... In my experience, the situation is usually that US-based management is unhappy with the performance of remote engineers. They break the build too often, and don't do what "we" expect them to do. Often, those expectations have been conveyed only verbally, if at all. This list (or one like it, tailored to your shop) can help, and parts of it can even be automated (see the sketch after the list).
  1. Compile and link locally
  2. Ensure all new functions have Unit Test coverage
  3. Run Unit Tests locally
  4. Ensure peer code review, using a Code Review Form (that's another checklist all to itself)
  5. Archive Code Review Forms
  6. Do Check-In
  7. Manually check to ensure you didn't miss files
  8. Watch automated build mail, to ensure you didn't break the build
  9. Fix it if you did
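
As a sketch of what automating part of this list might look like, here is a small Python wrapper that gates check-in on steps 1 and 3. The make targets are placeholders; substitute whatever build and test commands your shop actually uses.

    # Hypothetical pre-check-in gate covering steps 1 and 3 of the checklist.
    # The build and test commands are assumptions -- swap in your own.
    import subprocess
    import sys

    CHECKS = [
        ("Compile and link locally", ["make", "all"]),
        ("Run unit tests locally", ["make", "test"]),
    ]


    def main() -> int:
        for label, command in CHECKS:
            print(f"[pre-check-in] {label}: {' '.join(command)}")
            if subprocess.run(command).returncode != 0:
                print(f"[pre-check-in] FAILED: {label}. Do not check in.")
                return 1
        print("[pre-check-in] Automated checks passed. Remember the manual steps:")
        print("[pre-check-in] code review form, file list, and the build mail.")
        return 0


    if __name__ == "__main__":
        sys.exit(main())

Hooked into a pre-commit step in your version control system, something like this gets the checklist followed even on a bad day.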

Sample Checklist #2: Stuff to include in a defect report

Incomplete defect reports have always infuriated me. I've worked with QA engineers who routinely wrote defect reports like "software crashed, fix it." Great. Very helpful. Thanks. To eliminate such useless clutter in the bug database, I've always requested that my QA managers develop checklists of the necessary information to include with a defect report. Here's an example for the back-end server of a client-server product:

If the error is on the Server:
  1. Server configuration (standalone, mirrored, clustered, etc.)
  2. Server version and build number
  3. Server Operating System and patch level
  4. Steps to reproduce, if applicable.
  5. If the server issue involves a connection / session with a client, turn on session trace logging on both the server and the client. Include both client and server session logs.
  6. Userdump files.
  7. Zipped export of server logs (or of specific log entries)
If the error is on the Web management console:
  1. Server Operating System for core Server and Web Server (if hosted separately)
  2. Web management console configuration type (i.e., where is it installed?)
  3. Server version and build number
  4. IIS version and patch level
  5. IIS security setting details
  6. Internet Explorer (or other browser) version.
  7. Java VM version(s)
  8. Steps to reproduce, if applicable.
  9. Event logs from the server where IIS lives
  10. Userdump files, if applicable

These are just simple examples. These lists don't seem to represent highly complex tasks. But even these sample lists represent nine, seven, and ten discrete things I expect an engineer to remember to do. Given the innate fallibility of busy humans, that's a lot to ask. People forget steps, or, to be honest, they sometimes decide not to do a step they dislike. A checklist - something each engineer can print out and leave on his or her cube wall - can go a long way toward eliminating goofs and unmet expectations alike.
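
The same lists can be pushed into tooling. Here is a hypothetical sketch, in Python, of the server-side list above expressed as a structured defect report; the field names simply mirror the checklist and do not come from any real bug-tracking product.

    # Hypothetical structured defect report mirroring the server checklist above.
    # A report with empty required fields can be bounced before it ever
    # clutters the bug database.
    from dataclasses import dataclass, field
    from typing import List, Optional


    @dataclass
    class ServerDefectReport:
        summary: str
        server_configuration: str          # standalone, mirrored, clustered, ...
        server_version_and_build: str
        server_os_and_patch_level: str
        steps_to_reproduce: Optional[str] = None
        session_logs: List[str] = field(default_factory=list)    # client and server traces
        userdump_files: List[str] = field(default_factory=list)
        zipped_server_logs: Optional[str] = None

        REQUIRED = ("summary", "server_configuration",
                    "server_version_and_build", "server_os_and_patch_level")

        def missing_fields(self) -> List[str]:
            """Required fields left blank -- if any, send the report back."""
            return [name for name in self.REQUIRED if not getattr(self, name).strip()]

A report that says only "software crashed, fix it" would come up empty on three of the four required fields, and could be rejected automatically.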

Best Practice:
  • Turn your complex, high-expectation tasks and processes into checklists.
  • Create a culture that understands that checklists are tools, and that it's acceptable and encouraged to refer to them every single day.

1.5.08

Time Zones




Everyone who works with global teams has faced the issue of how to calculate the time zone offset between themselves and their remote teammates.

Lots of tools exist, but none that do exactly what I want. I currently use "FoxClocks" (a free plug-in for Firefox) to solve the problem of "what time is it right now in Moscow?"

(Of course, this presumes you use Firefox as your browser; the plug-in also runs in Thunderbird and Sunbird.)

The tool puts a series of city:day:time readouts along the bottom of the browser window. You can configure the cities. It gives you a quick "at-a-glance" readout of time differences. It's not as handy for figuring out what time it's going to be in Hong Kong at 09:30 on 3/1 (with this tool, you still have to do the math for that), but you can at least see the differences in real time. Here's the About FoxClocks page. Some readers might find this useful.

For the "when is my meeting?" problem, I have built a little spreadsheet that I use for time offset calculation.



Here's a screen capture of it (above), with the offsets for the summer months... I keep this on my desk, so I have a shortcut any time I'm trying to figure out when a meeting is supposed to be. Just find your city along the top row and the city you're calling along the columns. Then add the indicated offset to your time, and you'll have the target time (it's easier to do this in 24-hour notation).
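
For what it's worth, the same calculation is easy to script. Here is a small sketch using Python's standard zoneinfo module; the city list and the 09:30 Boston meeting are just examples. Unlike a static offset table, the tz database tracks DST transitions for you.

    # Hypothetical "when is my meeting?" helper: given a meeting time in a
    # host city, print the local wall-clock time in every other city.
    from datetime import datetime
    from zoneinfo import ZoneInfo

    CITIES = {
        "Boston": "America/New_York",
        "Moscow": "Europe/Moscow",
        "Mumbai": "Asia/Kolkata",
        "Hong Kong": "Asia/Hong_Kong",
    }


    def meeting_times(start: datetime, host: str = "Boston") -> None:
        start = start.replace(tzinfo=ZoneInfo(CITIES[host]))
        for city, zone in CITIES.items():
            local = start.astimezone(ZoneInfo(zone))
            print(f"{city:>10}: {local:%a %Y-%m-%d %H:%M} ({local.tzname()})")


    # Example: a 09:30 call on 1 March, scheduled Boston time.
    meeting_times(datetime(2025, 3, 1, 9, 30))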

Some useful timezone facts to remember:
  • There are roughly 40 time zones in use, not 24 (several countries run on 30- or 45-minute offsets), so figuring out when your conference call is supposed to be is a big messy problem.
  • Not all countries put themselves through the lunacy of "daylight saving time". Here's a great info-graphic from Wikipedia about countries that do and don't use DST.
  • India is all in one big time zone (UTC +5:30), called "India Standard Time" or IST. India doesn't use DST.
  • China is also one big time zone at UTC +8 (CST).
  • Russia has 11 time zones.
  • Brazil is either 1, 2, or 3 hours ahead of the US East Coast, depending on the time of the year.