A possible end to the fighting in Sri Lanka

Sri Lanka doesn't often sort to the top of the list of outsourcing hot spots, but I have been lucky enough to work with several dozen bright, dedicated, excellent engineers at a great OPD company with a center in Colombo. I've visited Colombo twice, and enjoyed both visits in spite of the then-heavy military presence. So I've always had a bit of a personal interest in the long-running civil war there.

As anyone who follows global news knows, fighting between Sri Lanka's government and the LTTE (the Tamil Tigers) has been on a steady rise over the last several months. The LTTE have been fighting from an ever-decreasing territory in the northeast of the country, and there has been a lot of international pressure on the Sri Lankan government over civilian fatalities and displacement caused by the fighting.

Apparently the military capability of the LTTE was eradicated this week, facilitating a significant move toward peace.


Earlier this week, on May 19, 2009, the President of Sri Lanka declared a national holiday in celebration of the end of the 25-year civil war. Sri Lankan army troops have reportedly killed the leader of the LTTE, and the remaining leadership of the Tamil separatists has agreed to lay down arms and begin peace talks.


While it seems premature to declare a single-day victory in a 25 year insurgency, such are the events on the ground in Colombo this week. 


Apparently the general mood in the country is one of elation to finally be leaving a bad chapter in the past. There's a good article on this here, at Financial Times.  

I wish the country luck in its peace, and in its rebuilding and relocation of displaced Tamil citizens.



Book Review - The Services Shift

For the last eight weeks or so I've been carrying around and reading an excellent new book on outsourcing by Robert E. Kennedy. Kennedy runs the William Davidson Institute, which I mentioned in a post in April. His book, co-authored with Ajay Sharma, is called The Services Shift: Seizing the Ultimate Offshore Opportunity.

I like the book, and I recommend it. But I have to admit that one of my primary reactions to it is jealousy - the jealousy of an unpublished author for a published one. To explain that comment, I have to digress into a little personal narrative.

When I started my own outsourcing adventure in 2000 or 2001 (a small project doing a Linux port of a server product) I was largely uninterested in the nuances of the outsourcing market. In 2003 I was charged with selecting a vendor for a larger QA outsourcing project, and at that point I put a little more thought and effort into the endeavor. I reasoned that much of what I was having to learn (about vendor selection, locale selection, work selection, knowledge transfer practices, global team management, cultural nuances in the global workplace, etc.) was, or should have been, known science. So I expected to be able to go to Amazon.com, find several excellent books on outsourcing, study up, and be an instant master of the art.

In 2003 that was not the case.  There really weren't any good books on outsourcing, or if there were, they were very difficult to find.

Fast forward to 2008, in which year I dedicated a significant amount of my life to researching, writing, and trying to sell a book on outsourcing. The book I envisioned (and partially completed) was positioned for engineers and managers in the USA who are struggling to understand and cope with the global services shift. I intended it to be somewhere between Daniel Pink's A Whole New Mind and Thomas Friedman's The World is Flat. After unsuccessfully pitching my proposal to a dozen or so publishers, I came to the conclusion that the market for books on global outsourcing was pretty well saturated. I have ten such books on a shelf within easy reach of my desk. I have probably read or skimmed another twenty. And there are literally dozens more that I haven't bothered to look at, for various reasons. Publishers liked my proposal, but unanimously viewed the market for this topic as supersaturated, and either had a book in the space already, or didn't see a way to make any money on a new one. (And by the way, I think in many ways it is easier to raise venture capital for a software startup than it is to get a publishing deal for a business book.)

So, getting back to The Services Shift, it might be enough of a recommendation to point out that it was published by FT Press, the book imprint of the Financial Times newspaper. (As a further aside, if you don't already read the FT, you should start; I think it uniformly offers a good global perspective to balance the WSJ or NYT.) This is a serious publishing company, and as you'd expect, The Services Shift is a serious book.

What I like most about this book is that it does a nice job bridging theory and practice. It's an excellent, scholarly book without being too abstract, and without being overly burdened by the business-school jargon and theory-bloat that plague many business books. Think of it as The World is Flat for a more analytic and intellectual audience.

Early in the book, Kennedy sums up what has been the guiding principle of my work for the last several years.  The long quote that follows will give you a sense for the direction the book takes, and for the writing style:

"...[Companies] have to be better and cheaper and faster - all at the same time! ...  We contend that most of these forces compel companies to look offshore for solutions to at least some of their problems.  From a defensive standpoint, they need to lower their costs to compete -- and offshoring certainly offers that prospect.  But offshoring also allows companies to be proactive in shaping their futures: by improving the quality of products and services, developing new offerings, and -- over the long term -- creating toeholds in the economies and markets that will be most important years and decades down the road."

This is the "faster, cheaper, better, closer" argument I've made many times, in this blog and elsewhere. Kennedy understands this new imperative, explains it well, and provides keen insights into both the macro and micro level issues in the services outsourcing space.

There are a lot of books on outsourcing. I have a hard time saying that any of them are must-reads. But this is a very good book. It earns a place next to The World is Flat and Multisourcing (by Linda Cohen and Allie Young) on my bookshelf.

If you're just starting out on an outsourcing epic, or trying to grasp the shift toward globalization and the devaluation of knowledge work, this book is a great primer. And if you're already a practitioner, it's chock-full of facts, frameworks, and complex business theory explained in a way front-line technology managers can understand.

And perhaps the highest praise I can offer: The Services Shift is very close in concept and execution to the book I spent a good deal of 2008 researching and writing.  And it's exactly the book I wish I could have found in 2003 when I started shifting routine and repetitive work to teams I built in India.

Buy it, read it, reference it.


A bit about oDesk, and props for Inside Outsource

I've posted several times about the growing list of freelance outsourcing "clearing houses." This week I discovered another, and possibly the most mature, of these sites - oDesk. They take an interesting approach to global outsourcing: allow direct brokerage between people with jobs and people with talent, without a services vendor standing in the middle of the transaction adding margin.

They have over 200,000 freelance providers bidding on jobs, and over 6000 active jobs.  The oDesk manifesto says better than I could what they do and why they do it. They've assembled a serious management and advisory team that reads like a Silicon Valley fantasy league roster.  They're on to what seems like a great idea, and I wish them much success.

The reason I found out about them is even more interesting, to me at least.  They recently published a "Top-100" list for resources and blogs for outsourcing.  They ranked the Inside Outsource blog in their top-100 list.  (I'm number 23, but the list ranking is topical and alphabetic, not hierarchic.)  That's great recognition, and certainly a dose of encouragement for me to pick up my pace and post more frequently.  Thanks oDesk, for the recognition, and I hope your readers enjoy both the archive and any future insights.   


The William Davidson Institute

I am the first to admit that since late 2008 a shadow of doubt has fallen over the entire broad field of globalization. From presidential politics to questions about culpability in the global economic meltdown, it hasn't been a good time to be a proponent of, and active participant in, globalization.

My negligible blog output reflects both this pall, and the profound uncertainty surrounding the business on which I generally comment. 

The bad news is that this era of uncertainty about where the global services economy is headed seems to be a new constant, and isn't heading toward clarity any time soon.  

The good news is that in spite of this uncertainty, I've got a lot of new observations and ideas to share in the coming weeks.

The first of these is a link to a very interesting and useful resource at the University of Michigan. Check out the William Davidson Institute web site (best viewed with Internet Explorer, as some of their menu widgets don't work in Chrome).

I stumbled across this institute because I'm reviewing a very good book by Robert E. Kennedy, their Executive Director (more on the book later this week). The institute is an interesting and useful resource for anyone interested in the academic side of globalization, or the broader areas of sustainable development, poverty alleviation, and general business trends in emerging markets.

With events in Chile, Colombia, Algeria, Rwanda, and Latvia on their Q2 calendar, these guys clearly get around.  Their site is worth some time, particularly for anyone who writes or does research in the field of globalization.  Their web site has an archive of policy briefs, business briefs, and academic working papers available for download.  Most interestingly, the working papers seem to be collected from universities all over the world, and should give an interesting "global" perspective on globalization.

Read and enjoy.


A framework for quality / maturity analysis

I was looking through some old files recently and stumbled across a "quality" framework I wrote that also works quite well as a pre-engagement "maturity" framework - one that might help you decide whether a particular organization or company is ready to outsource any portion of its engineering or product development work. It's a long list of questions and concerns, but I figured it might be useful to someone, so I've posted the framework in its entirety below.

As always, while I have found this framework to be useful, your mileage may vary.

Notes on using this framework

This framework can be employed through technical "interviews" with key engineering team staff.  Primary targets for interviews are:
  • Development engineering management
  • Development engineering staff (small sample set)
  • QA engineering management
  • QA engineering staff (small sample set)
  • Functional area managers for any other engineering disciplines

This framework is intended to get at core quality concerns primarily through "gut feel".  This approach is admittedly insufficient for comparative analysis.

Project methodology

Regardless of the software development methodology used (waterfall, agile, XP, scrum, etc.) there are several common constructs that usually have a significant and direct impact on overall quality.  These items should be considered in any analysis of product quality engineering practices.

Process definition and documentation

  • Is the software development methodology well understood by the engineers building the software? (as measured by correlation between various interviewees)
  • Is the understanding consistent across the team?  (as measured by correlation between various interviewees)
  • Has the team implemented projects using this process before?
  • Is the process well documented? 

Clarity and purity of role
  • Are roles defined for team members?  (for example, "QA Engineer" vs. "Development Engineer")
  • What are the roles?
  • Are the roles well documented and understood by all parties?  (as measured by correlation between various interviewees)
  • Do individual team members serve more than one role based function?  Do they do so simultaneously?

Process phases
  • Are process phases formally defined? (for example, Requirements, Design, Implementation)
  • Are entry and exit criteria defined and documented?
  • Are formal process phase gate reviews performed?  
  • Are results of phase gate reviews documented and shared with the full team?
  • What is the theoretical impact of a failed phase gate review?
  • What is the real-world impact of a failed phase gate review?

  • What project artifacts (internal documentation such as Requirements, Use Cases, Specifications) are produced?
  • Are project artifacts common across projects (i.e., are they standardized as part of the development methodology)?
  • Are project artifacts stored in a common repository under version control?
  • What change control methodology is used for project artifacts?
  • What traceability methodology is used to track dependencies among various project artifacts?
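To make the traceability question concrete, here is a minimal sketch in Python of the kind of check a traceability matrix enables - finding requirements that no test case covers. The artifact IDs and mapping are hypothetical, chosen only for illustration:

```python
def uncovered_requirements(requirements, trace):
    """Return requirement IDs that no test case traces back to.

    `trace` maps a test-case ID to the list of requirement IDs it covers.
    """
    covered = {req for reqs in trace.values() for req in reqs}
    return sorted(set(requirements) - covered)

# Hypothetical artifact IDs for illustration only
requirements = ["REQ-1", "REQ-2", "REQ-3"]
trace = {"TC-10": ["REQ-1"], "TC-11": ["REQ-1", "REQ-3"]}

print(uncovered_requirements(requirements, trace))  # prints ['REQ-2']
```

A team with real traceability discipline can answer this query mechanically; a team without it usually can't answer it at all, which is itself a useful signal.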

Project complexity
  • Has the team implemented and successfully delivered a project of this size and scope before?

Development engineering methodology

Quality engineering starts well upstream of "QA".   The following common aspects of software engineering can have a significant impact on product quality.

Repository and Build
  • Is the software source stored in a source control system?  If so, what repository?
  • Does the repository allow highly granular version control?
  • Does the product / project in question build every day?  More often?
  • Is the build fully automated, and does it proceed from "start" without any human intervention?
  • What is the outcome of a failed build?
  • Does the repository and build system allow 100% reproducible builds (i.e., can any previous build number be reproduced from the repository without human intervention)?

Code Inspection
  • Is peer inspection of software source code performed?
  • If so, how often?  (all source, only new source, only tricky source, etc.)
  • Are code inspections required prior to check-in, or simply suggested?
  • Are code inspection artifacts produced? (code inspection forms, etc.)
  • What is the outcome of a "failed" code inspection?
  • Are defect reports entered against code inspection "failures"?

Developer Unit Test
  • What (if any) unit test framework is used?
  • Do developers follow a "test first, then code" methodology?
  • How are unit tests run (automatically, manually, ad hoc)?
  • How often are unit tests run?
  • What is the result of a failed unit test?
  • Are defect reports entered against failed unit tests?
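To make the "test first, then code" question concrete, here is a minimal sketch of that workflow in Python. The function and its behavior are hypothetical, chosen only to show the shape of the practice:

```python
# Step 1: write the test first. At this point normalize_phone doesn't
# exist yet, so running the test fails - and that failure is the point.
def test_normalize_phone():
    assert normalize_phone("(555) 123-4567") == "5551234567"
    assert normalize_phone("555.123.4567") == "5551234567"

# Step 2: write the simplest implementation that makes the test pass.
def normalize_phone(raw):
    """Strip a phone number down to its digits."""
    return "".join(ch for ch in raw if ch.isdigit())

test_normalize_phone()  # passes silently once the implementation exists
```

In an interview, asking a developer to walk through their last real test-first cycle quickly reveals whether the practice is lived or merely claimed.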

Developer Integration Test
  • Are developers required to build and test the project / product prior to checking in source?
  • What is the result of a failed integration test?
  • Are defect reports entered against failed integration tests?

Build Yield
  • Is build yield measured through the life of the project?  (I.e., X attempts to build, Y successful builds, breaks for reason Z, etc.)
  • If so, how often does the build break?
  • What is the outcome of a failed build?
  • Is there a quality feedback loop from build failures (i.e., is root cause analysis done and corrected?)
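The build-yield questions above lend themselves to a very simple measurement. A sketch, assuming each build attempt is logged as a success or as a break with a reason (the log format here is invented for illustration):

```python
from collections import Counter

def build_yield(build_log):
    """Compute (yield ratio, break reasons) from a list of build records.

    Each record is (True, None) for a successful build, or
    (False, reason) for a broken one.
    """
    successes = sum(1 for ok, _ in build_log if ok)
    reasons = Counter(reason for ok, reason in build_log if not ok)
    return successes / len(build_log), reasons

# A hypothetical week of nightly builds
log = [(True, None), (False, "compile error"), (True, None),
       (False, "compile error"), (True, None), (True, None),
       (False, "missing dependency")]

ratio, breaks = build_yield(log)
print(round(ratio, 2))        # prints 0.57
print(breaks.most_common(1))  # prints [('compile error', 2)]
```

The most-common-break-reason tally is what feeds the root-cause feedback loop asked about above.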

Code Coverage Tools
  • Are code coverage tools in use?
  • If so, where and when is code coverage measured?

Advanced Diagnostic Tools
  • Are advanced diagnostic tools in use?  (for example, Bounds Checker?)
  • Who uses the tools, and how often?

Development Environment
  • Does development engineering have its own systems to develop and test the product in question?
  • Does development engineering have autonomy within this environment?
  • Is the development engineering team measured on product quality?

Quality assurance methodology

While quality engineering is significantly influenced by upstream processes and events, as discussed in the previous two sections, the execution within the QA organization is of primary importance to this audit framework. 

General QA Methodologies
  • Is "QA" a separate function?
  • How empowered is QA?  
  • Does the QA team have a separate reporting structure?
  • Does QA perform all, some or none of the "testing"?
  • What is the development engineer: QA engineer ratio?
  • Does QA staff participate in product design?  
  • Does QA staff participate in project phase gate reviews (if applicable)?

Overall testing approach
  • What phases or test levels does the QA function define? (for example, black box, integration, performance, beta)

Task decomposition
  • How does QA decompose tasks in order to completely test all components, sub-systems and functions?  (for example, by use case, by feature, by architectural component, etc.)
  • Is this task decomposition reviewed outside of the QA team?

Smoke Test
  • Is each build tested for "happy path" to ensure basic functionality?
  • Are the smoke test "test cases" documented and well understood?
  • Does the smoke test run automatically?
  • Is Smoke yield tracked through the life of the project?
  • What is the result of a failed smoke test?
  • Are defect reports entered against failed smoke test?
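A smoke test is usually just a short, ordered list of happy-path checks run against every build. A minimal sketch of such a harness in Python (the check names and probes are hypothetical stand-ins for real probes against the build under test):

```python
def run_smoke_test(checks):
    """Run named happy-path checks in order; return (passed, failures)."""
    failures = []
    for name, check in checks:
        try:
            check()
        except AssertionError as exc:
            failures.append((name, str(exc)))
    return not failures, failures

# Hypothetical probes; in practice each would exercise the real build.
checks = [
    ("server starts",     lambda: None),
    ("login succeeds",    lambda: None),
    ("home page renders", lambda: None),
]

passed, failures = run_smoke_test(checks)
print(passed)  # prints True
```

Any failure recorded here answers the "result of a failed smoke test" question directly: typically the build is rejected and a defect report is entered.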

Correctness testing
  • Irrespective of task decomposition, is test strategy documented?
  • Is this test strategy reviewed outside of QA?
  • Are all individual test cases documented and reviewed?
  • Are validation methodologies and data sets documented along with the test cases?
  • Is any attempt made to "normalize" test cases (i.e., one function per test case, or "about an hour of setup and verification")
  • Who authors test cases?
  • Who reviews the test cases for correctness?
  • What approach is taken in the presence of complex combinatorial features (for example, testing a function on multiple operating systems)?
  • Reproducible data sets?
  • Reproducibility of results?
  • Feedback loop for defects found outside of test cases?
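The combinatorial-features question is worth making concrete, because the size of the problem is easy to underestimate. A sketch with hypothetical test axes:

```python
import itertools

# Hypothetical axes for one feature under test
oses    = ["Windows", "Linux", "macOS"]
dbs     = ["MySQL", "Postgres"]
locales = ["en", "ja"]

# The exhaustive configuration matrix is the cross product of all axes.
full_matrix = list(itertools.product(oses, dbs, locales))
print(len(full_matrix))  # prints 12

# Coverage grows multiplicatively with each new axis, which is why
# teams often sample the matrix (pairwise selection, risk-based picks)
# rather than execute every combination.
```

How an organization answers this question - exhaustive, sampled, or "we hadn't thought about it" - says a lot about its test-planning maturity.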

White/Black Box?
  • What verification methods are used across the product / project testing?
  • Are verification methods reviewed as part of test case review?

Function coverage
  • What estimated level of "function coverage"  (i.e., use cases, features) is covered during a given QA test cycle?
  • What methodology was used to arrive at this estimate?

System (integration) testing
  • Is System Testing viewed as a separate function from general correctness testing?
  • Is the full system tested "as deployed", or are only sub-components tested?
  • Are negative "scenario tests" performed? (for example, pull out a disk while the server is doing some function.)

Load testing
  • Is the full (integrated) system or product tested in a variety of "over-clocked" scenarios?
  • Are system components isolated and tested under "over-clocked" scenarios?

Performance testing
  • Is a distinction made between load testing and performance testing?
  • Is performance testing done on full-scale systems?  
  • If not, what analysis has been done to facilitate scaled down performance measurement?
  • How are performance requirements and scenarios determined?
  • Is the performance testing modeled after customer deployments?

Network Simulation
  • If the product / project has any network layer dependencies, is any modeling done to determine possible network-layer perturbations?
  • Is any testing done using WAN / LAN simulation software?

Beta / EFT Testing
  • Are customer beta programs or "early field trials" part of the QA process?
  • If so, how many customers participate?
  • If so, what is the typical duration of the Beta?
  • If so, what is the typical number of defects discovered during Beta?
  • If so, what is the typical response to defects discovered in Beta phase?

Pre-release Regression Testing
  • What methodology is employed to ensure validity of test results through the "end-game" of a release?
  • How much of the final "gold candidate" is tested pre-release?

Defect Tracking & Measurement
  • Is a defect tracking tool (for example, Bugzilla, MKS Integrity Manager, etc.) used?  If so, what flavor?
  • Is the defect workflow documented and well understood by all members of the team?
  • Is the defect schema well documented and well understood by all members of the team?
  • In which project phase are defects tracked?
  • Can defects be expunged by the system?
  • Who gets to close defects?
  • What defect metrics are tracked through the course of a project?
  • Are defect trends for existing projects compared to historic projects?
  • How many open defect reports are there?

Defect Verification
  • How are defects "committed" for fix?
  • How are defects verified for fix?
  • How are defects analyzed and verified for potential "collateral damage"?

Statistical Methods
  • Are any statistical methods in place to extrapolate product quality (and hence customer satisfaction) from software defect data?
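One very crude example of the kind of extrapolation this question is after: comparing the defect arrival rate late in the cycle to the rate early on, as a stabilization signal. This is a heuristic sketch with made-up weekly counts, not a real reliability-growth model:

```python
def arrival_trend(weekly_new_defects):
    """Ratio of defects found in the last third of the period to those
    found in the first third. Well under 1.0 suggests the discovery
    rate is tailing off; near or above 1.0 suggests it isn't.
    """
    third = max(1, len(weekly_new_defects) // 3)
    early = sum(weekly_new_defects[:third])
    late = sum(weekly_new_defects[-third:])
    return late / early if early else float("inf")

# Hypothetical weekly counts of newly reported defects over 9 weeks
trend = arrival_trend([40, 35, 31, 22, 15, 9, 6, 4, 2])
print(round(trend, 2))  # prints 0.11
```

Teams using genuine statistical methods go much further than this, but even asking whether anyone looks at the arrival curve is a revealing interview question.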

Test case management
  • How are test cases managed?  
  • Are they under change control?

Test Environment 
  • Does QA team have its own systems to develop and test the product in question?
  • Does QA team have autonomy within this environment?

Release Authority
  • Can QA stop a release?  If so, how and why?
  • Has QA stopped releases?  If so, how and why?

Customer Experience

Overall quality is best determined by overall customer satisfaction.  These parts of the framework attempt to get at the customer experience:

  • What is the size of customer base?
  • What is the largest single deployed instance?
  • What is the diversity of customer base?  (range of sizes, market verticals, etc.)
  • What is the average number of tech support calls per customer per time period?
  • What is the average number of defects reported per customer (over some time period)?
  • What is the "work flow" for customer-reported defects?
  • Is the customer-reported defect work flow well understood and documented?

Special considerations

Product Complexity
  • How many lines of code in the product?
  • Are there any measurements of the complexity of the product?  (Rose, etc.)
  • How many components or systems comprise the "system as deployed"?

Engineering Team Scalability
  • How big is the engineering team?  
  • How many releases (and of what scale) does the team deliver a quarter/year?
  • How many releases / projects does the team work in at any given point?
  • Are there any significant staffing or resource bottlenecks?

Sustaining Engineering Methodology 
  • How are customer "issues" handled?  
  • Is there a dedicated "sustaining" team?  If so, what disciplines are involved?  How many people in each respective discipline?
  • What is the support model?
  • What is the triage / prioritization model?
  • How fast, after a fix has been coded, can a "hot fix" be released?
  • What level of QA is done on bug-fix releases?
  • How much of the product / project must be built and released in order to deliver a bug fix?


Too busy to blog, but not too busy to cross-post...

I've been too busy trying to keep my own little consulting shop fully booked to write much lately, but I saw this today, and thought it relevant, from a cultural perspective, to anyone doing business in Japan.  It also underlines a lot of what I've previously written about understanding the cultural norms of the place where part of your team lives...

Read and enjoy.


The Story of India

"India's history is a ten thousand year epic but for over two millennia, India has been at the center of world history."

The Story of India is a new six-hour documentary from PBS and the BBC.  It purports to trace the history of India from 70,000 BCE to 2007.  When I first read this, I thought the goal was too ambitious, and figured the show might not even be worth watching.

But... I was very much mistaken.  This PBS and BBC joint production, narrated by Michael Wood, is the first TV or film production I've found that captures the feel of India.  I've watched the first four installments, and have certainly learned a great deal; but more importantly, I've been transfixed by the beauty and intelligence of this documentary.

And the good news is that you can still (in the US anyway) see it on TV for free, on PBS.  

If that doesn't work for you, or if your DVR is all filled up with footage of the US presidential inauguration, then you can buy the movie on DVD here.

I won't even try to summarize the content of the first four installments of this six-hour film, but I will say that the photo slide-show here really captures the feel of the film, and the film really captures the color and energy of modern India.

Take a look through the PBS site, and if it looks compelling to you, do try to watch this movie.  You won't be disappointed!


Old news, but interesting

If I were more diligent in keeping up with commentary about the outsourcing industry, I'd have posted this last week.  But alas, I was traveling and working, and this is the first chance I've had to summarize the latest big news in Indian Outsourcing.  

Satyam Computer Services, formerly India's third largest IT outsourcing company (53,000 employees) is now known as "India's Enron."  The company has been embroiled in scandal since the end of December, and as facts unravel, it appears that this publicly traded company, once a winner of prestigious awards for corporate governance, has been manufacturing earnings reports for some time.  

So far, the founder has been arrested, the CFO has been taken in for questioning by Indian authorities, the stock has been delisted, and the company's external auditor may be under investigation for complicity.

This is interesting for a few reasons:
  • It's a fair bet that Satyam won't weather this as a company, meaning their assets, contracts, and more importantly talent might be acquired by one of the remaining Indian outsourcing companies.
  • This scandal calls into question conventional wisdom about corporate governance.  Satyam was publicly traded, and gave the appearance of being in compliance with all regulations, standards and best practices for corporate governance. Lots of companies in the US seek to partner with publicly traded companies on the potentially mistaken assumption that SOX compliance (or other similar international standards for governance) will reduce risk and prevent the kind of fraud Satyam is accused of committing. There's still more investigation required in the Satyam case, but being publicly traded is clearly no guarantee of being well governed.
  • I suppose it's not news that the US doesn't have a monopoly on corruption, though US companies are probably still statistically over-represented in recent corruption cases.
More on this story in these articles from the Financial Times:



I posted some time back about a freelance aggregation site called Scriptlance. I just stumbled across another reference, in a blog I read called "Cool Tools." The post in question talks about an aggregation service called Elance. They've apparently brokered $60M in contracted freelance jobs in the last year, with a very small dispute ratio. According to Cool Tools author Kevin Kelly:

Elance's escrow service holds the payment and protects both the work provider and you the employer. The site provides status updates on work done, and plenty of communication between the parties. Workers must pass a competency test to qualify to be listed. Some freelancers can also pass expertise tests in a mild form of certification, say for working on java or ajax, etc. Elance freelancers did about $60 million of work last year and less than 1% of the jobs had any kind of dispute, and most of those were self-resolved by the fact that the entire transaction correspondence is logged. (quoted from Cool Tools.)

I really like this business model. I'm on the verge of bidding out a logo design, and a web site redesign, and I'm going to try Elance. More news on this project and business model as I get it.


Outsourcing and process

The CMMI Maturity Levels, from Wikimedia Commons 

I have had dozens of conversations through the years about outsourcing and process. They usually start with questions about "what process is best" for a given team structure. I have a very pragmatic approach to this problem that sometimes comes across as heretical.

The process doesn't matter.

I've worked in shops that spent a lot of time focusing on process. I've worked in shops that spent next to no time on process. I've come to the conclusion that process models and frameworks alone have very little impact on productivity or quality.

What matters is having smart people who know how to work well together.

The paradox in my heretical belief system comes in the nearly universal truth that smart people working together will develop and document a process, and will often optimize that process, write it down, diagram it with swim lanes and flow charts, and update it when they change it.  It's human nature to develop process.

The best teams I've been a part of developed process communally and adapted the process perpetually.  

The lesson for leaders in this observation is this:
  • Don't spend a lot of time focusing on your process model, whether it's for software development or IT service delivery.  Certainly don't do this at the expense or exclusion of first-order problems.  
  • Instead, give your team the mission of determining their own process definitions.  Make the mission of developing process definitions subservient to the mission of building software, or delivering service, or doing what ever it is your customers pay you to do for them.
  • Make sure your team writes their process or processes down.  Make sure it's reviewed by some contingent of your leadership team or executives.  (That's less to get input from the reviewers, and more to make the staff structure the way they talk about the process.)  
  • Make sure the staff has some way to train new hires or new team members on the expectations around how the team works together.
  • Make sure there's a feedback mechanism to get inputs and good ideas into the process.
  • Make sure the process is malleable.  
You'll notice that the CMMI process model doesn't define what kind of work flow you should use.  It doesn't mandate what format your functional specification takes.  It doesn't demand that you make your meetings 90 minute seated affairs with parliamentary rules of order.  Neither does it give guidelines that your meetings be held in conference rooms without chairs, to make them go faster.  The CMMI process model defines some broad, universal concepts that all projects go through.  As a process definition, it really just maps the high points.  It's up to you and your teams to fill in the details.  I think this is appropriate.  

So my ultimate guidance is that the process doesn't matter.  What matters is having one, and communicating what it is.  (In this, your SDLC can be thought of as just another high-expectation task you need to document.)  All a process model is is a statement of expected behavior of people in groups solving problems. 

I will add one caveat, because I've seen a lot of teams attempt to adapt their process to integrated global teams -- Agile software development methodology, with daily scrum meetings, does not work well if your scrums have to involve teleconferences with remote teams in different timezones.  It's too cumbersome to give the kinds of productivity lifts for which this particular process trick was designed.  Instead of doing one big scrum,  consider decomposing your project a bit more, and allowing the remote teams to each work on discrete release components, so they can have their scrum meetings be self-contained in their local centers.  It won't always work, because you won't always be able to break the project into discrete components, but it's a good hack to allow a bit more productivity across two lobes of a global team.