Awash in White Papers

We are inundated with “white papers.” They arrive daily in our email inboxes, promising expertise. What exactly is a white paper, and why are there suddenly so many of them?

My context for white papers is based on what I saw as a child growing up in India. Whenever there was a thorny issue facing the country, the government would invariably publish a white paper on the subject authored by an eminent person or panel. I didn’t actually read any of the white papers (there were none on cricket or boy scouts), but I got the impression that the government was trying to educate people on a multi-faceted, complicated issue so that they could act from a fully informed point of view.

Of course, as I grew older and more skeptical of government, I realized there had to be some spin in there somewhere. Without question, there was “white-washing” going on at some level. I like to think that the intent of a white paper is to simultaneously (and often artfully) investigate, educate and advocate. Unfortunately, these days the “advocate” intent is really all about marketing. It’s no wonder, then, that we are awash in white papers. The pity is that the investigation and education dimensions are being diluted, if not ignored altogether.

Why white and not any other color? A couple of possibilities come to mind. White is associated with purity, so it lends an air of objectivity. When you start with a blank (and in most cases white) sheet of paper, you are starting from scratch, without pre-conceived notions and open to all facts and points of view. White is perhaps the least objectionable choice as well, since it is neutral and ideologically unencumbered.

Why “paper”? Maybe we are just used to it now, but “white paper” sounds much more authoritative and official than white memorandum, white document, or anything else. A paper has an academic feel to it and lends further credibility, suggesting the work is neither biased nor merely subjective. You feel someone has taken a lot of trouble to research and write it. You feel obliged to take a look and take note.

I don’t know about your email inbox, but mine is flooded daily with white papers written or sponsored by consultants. Either way, it is not difficult to infer the underlying agenda. The intent is usually to establish expertise with a view to marketing a product or service. There is nothing unreasonable about using white papers commercially. However, I find that commercial white papers vary widely in length, content, style, editorial slant, quality and usefulness.

Maybe I’m old-fashioned, but here are my thoughts on what a white paper ought to be.

A white paper needs to be comprehensive. Once you’ve read the white paper, you should have a complete and thorough picture of the subject in question. It should deal with every angle and every situation. You should be able to have a press conference and hold forth on the subject, taking any and all questions in your stride.

Since it must be comprehensive, a white paper needs to tackle a specific issue. It ought to define the boundaries of the issue and be explicit about what it is not covering. There should always be a date on it, since situations change with time and new research emerges that makes previous views obsolete.

The topic needs to be substantive and of current interest. It need not be a controversial topic, but it ought to have different sides to it so that the white paper addresses and informs a debate. A white paper needs to be longer than one page. Make it as long as you want. It’s a white paper – comprehensive and authoritative. If it’s a couple of pages or less, either the topic does not warrant a white paper or the white paper is incomplete.

A long document needs to be well organized so that the reader can easily navigate through its contents. An executive summary is a must, as is a table of contents if it is more than say five pages long. A bibliography shows that some homework has been done, signals any biases based on sources, and provides a reading list for people who want to investigate further. An index would be helpful. Hyper-links would be icing on the cake.

No one wants to read a poorly written white paper. Verbosity and pomposity need to be avoided. A bullet-point list does not constitute a white paper. Illustrations are a plus; relevant photographs, tables, charts and schematics add color (literally!) and flavor. However, there’s no room for clip art. That’s just cheesy and distracting. White paper titles should be sober and informative, not breathless and crass.

Some would argue that it does not matter what you call it – a white paper, a research report, a study, a product description, etc. I don’t agree. We use different documents for different things and they have different characteristics.  When presented with a document, we have certain expectations regarding what it is, what its purpose is and how it might be useful to us.

Calling everything a white paper to lend it credence and authority is potentially manipulative and deceptive.  That’s not how you want to introduce yourself or your services.

 


Getting Started in Human Capital Analytics

Last week I attended the NCHRA (Northern California HR Association) HR Metrics and Analytics conference in Santa Clara, California.  There was a turnout of over 60 Bay Area HR professionals representing a number of diverse organizations.

Based on the dialog at the conference, I realized that although there is a broad and deep interest in workforce planning and analytics, many HR professionals are still trying to figure out how to get into the game.

We are witnessing a renaissance in workforce planning and a new surge in analytics, as indicated by a number of tell-tale signs. My good friend Allan Brown, Director of Compensation and Analytics at Marvell Semiconductor, and I presented a session entitled “From Historical Analytics to Predictive Analytics.”

Our presentation included the following slide.


I enjoyed the NCHRA conference and was encouraged by the participants’ enthusiasm and engagement. I jotted down a few observations on the plane home.

What are your thoughts on the following seven observations from my California trip?

1. The interest in human capital analytics is gaining tremendous momentum. It’s hard to believe that the word “analytics” was only put into play a few years ago when it became a buzzword in the management literature. It wasn’t long before analytics were being applied to human capital issues. A new sub-function within HR is being created and many large firms have set up internal human capital analytics capabilities. Distinct human capital analyst roles are emerging and it is very difficult to fill them, especially at the middle and senior levels due to the need for both HR experience and technical proficiency.

2. The interest is broad and deep. Attendance at the conference was broad in terms of industries and management level. There was a healthy representation of hardware and software companies, but also government agencies and educational institutions. Positions were diverse as well, ranging all the way up to VPs of HR. From what I’m told by friends, colleagues and competitors, this variety is not unusual. Unless attendance or membership is restricted to leadership roles, all levels of the company are attracted to such events.

3. People and companies are in a tearing hurry to get started with human capital analytics. Many of the presentations dealt with broad themes and perspectives.  And while the audience appreciated the contextual background and big picture landscape, there was a palpable urgency to the audience’s questions and concerns. When asked by a speaker why they were attending the conference, a number of participants stated that they were looking for actual analytical models that they could apply to their situation. One participant came up to us after our presentation and enquired whether we could share a spreadsheet template of a turnover model we had described since she wanted to show some analytical outputs to her management right away.

4. HR professionals trained in compensation, industrial/organizational (I-O) psychology or other quantitative fields have a competitive advantage in human capital analytics. When discussing a statistical model involving logistic regression that analyzed gender bias in equity grants, a participant asked where one could find the required “technical” talent. There are plenty of people trained in statistics, but few combine that technical expertise with experiential insight into HR. We have discussed in a previous post how compensation professionals have a competitive advantage in practicing human capital analytics. Another participant, independent consultant Bonnie Pollack, suggested we not forget those HR professionals trained in I-O psychology, many of whom have the requisite statistical and research methodology training. You can learn more about these professionals through the Society for Industrial and Organizational Psychology. (A minimal sketch of what such a logistic regression model might look like appears after this list.)

5. There is a scarcity of good training opportunities. While there are many conferences on human capital analytics, there is a gap when it comes to training in human capital analytics. One reason is that you just can’t get away from having to do some statistics, which is not an attractive proposition on either the demand or the supply side of the equation. Another is that much of the human capital analytical work done within companies is very targeted and does not lend itself to generalization. One cannot ignore that companies have a competitive interest in not divulging analytical experiences and methodologies. Some training material has emerged, but it is typically focused on HR metrics or getting comfortable with data.

6. There is a need for an in-depth handbook of human capital analytics. I reviewed the extant HR Analytics Handbook in a previous post. As I said in the review, it is a timely publication that provides a quick and concise summary of the current state of affairs. However, it does not go into enough detail. What’s needed is a “how-to” guide that describes the actual models and how to build them from scratch. It’s delightful to read about all the great results companies have achieved through human capital analytics, but how about the details on what exactly they did?

7. Workforce planning is not a new concept; it has just been re-invented to fit the new world of “talent management.” While doing some research recently, I pulled out an old book from the 1990s (with an ancient Amazon receipt in it!) published by the HR Planning Society (now known as HR People & Strategy). It covered workforce planning in great detail, presenting many alternative mathematical approaches, some of which had been used for decades. It was all there – Leontief input-output analysis from the early 1900s, Markov analysis from the mid-1900s, etc. For some reason the mathematical approach to workforce planning went into hibernation for a couple of decades as HR’s focus shifted to talent management.
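
Returning to the logistic regression mentioned in observation 4, here is a minimal sketch of what such a model might look like. It is my own illustration, not the model discussed at the conference; the data, column names and coefficients are simulated.

```python
# A sketch of a gender-bias check on equity grants: logistic regression of
# grant receipt on gender plus legitimate drivers (job level, tenure).
# All data below are simulated for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
female = rng.integers(0, 2, n)        # 1 = female, 0 = male
job_level = rng.integers(1, 6, n)     # grade 1 (junior) to 5 (senior)
tenure = rng.uniform(0, 15, n)        # years of service

# Simulated "truth": grants driven by level and tenure, plus a small
# unexplained penalty for women -- the effect the model should surface.
log_odds = -4 + 0.8 * job_level + 0.15 * tenure - 0.6 * female
received_grant = rng.random(n) < 1 / (1 + np.exp(-log_odds))

X = np.column_stack([female, job_level, tenure])
model = LogisticRegression(max_iter=1000).fit(X, received_grant)

# A clearly negative coefficient on "female", after controlling for level and
# tenure, flags a potential bias worth investigating -- not a verdict in itself.
print(dict(zip(["female", "job_level", "tenure"], model.coef_[0].round(2))))
```

In practice you would use actual HRIS data, check statistical significance and sample sizes, and involve counsel before drawing any conclusions.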

What is Nelson Touch Consulting doing in response to the above?

We have already developed a human capital analytics curriculum that has been tested with HR practitioners. It is targeted at HR professionals and requires no background in statistics (though the curriculum does not shy away from it). The course is ideally delivered over two days in an in-person seminar format.

We will be delivering a 3-hour pre-conference workshop on Strategy, Planning and Analytics at The Talent Management Academy’s Workforce Planning conference in Boston, June 13-16, 2011. The course introduces each of the three elements, explains how they inter-relate and provides concrete examples of how to go about achieving your HR strategy through planning and analytics.

I have received positive feedback from publishers on an outline for a book that will introduce human capital analytics to HR professionals. The working title is “Getting Started with Human Capital Analytics – A Guide for Human Resources Professionals.” The intent is to cover the entire employee life-cycle through human capital analytic models – e.g., staffing, development, rewards, turnover, etc. Readers will be gently introduced to the required math and provided model templates to populate with their own company data. Publication is expected in 2012.

Stay tuned to this blog and our Twitter account, @TheNelsonTouch, for updates.


The Rule of 72

Did you know that you can instantly calculate in your head the number of years in which a quantity will double, given its annual rate of growth?

Picture yourself in a conference or discussion where, for example, someone says that salaries in India are growing at an annual rate of 12% and you can say in a heartbeat “but that means salaries in India will double in about 6 years!”

As people wonder at your brilliance, you can pat yourself on the back for having learned “The Rule of 72.” No one need know that all you did was to divide 72 by 12, the annual growth rate, to arrive at 6, the approximate number of years in which the salaries would double.

I learned this trick, oddly enough, through a footnote in a high school economics textbook (Economics, by Lipsey and Steiner, now in its 13th edition). I was under the impression that everyone was in on this trick. However, I’ve come to realize that it is not as widely known as I expected. I decided to share it with my readership as a reward for your interest in my blog so that you, too, can amaze friends and colleagues.

In summary, you can build a table showing the number of years in which something will double, for annual growth rates ranging from 1% to 20%. All you need to do is divide 72 by the annual growth rate.
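
Here is a minimal sketch that builds that table – the Rule of 72 estimate for each rate from 1% to 20%.

```python
# Build the "years to double" table: annual growth rates from 1% to 20%,
# doubling time estimated by the Rule of 72.
for rate in range(1, 21):      # annual growth rate, in percent
    years = 72 / rate          # Rule of 72 estimate of years to double
    print(f"{rate:>3}%  doubles in about {years:.1f} years")
```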

You will notice that most of the numbers in the “years to double” column are whole numbers, not fractions. This is why the Rule of 72 is so useful. In most cases, the division is very simple, since 72 has so many factors (2, 3, 4, 6, 8, 9, etc.).

Many users of the Rule of 72 don’t know that it should really be The Rule of 69. We use 72 because of this neat feature of easy divisibility.  You wouldn’t appear so sharp if you had to calculate 69 divided by 12 in your head, for example.

Why does The Rule of 72 work? It has to do with certain properties of natural logarithms. For those who are interested, here is the story.

We start with the formula for compound interest:

FV = PV (1+r)^n

where FV is the future value, PV is the present value, r is the annual interest rate (or the annual rate of growth for our purposes) and n is the number of periods. In our example, a period is a year. Knowing that the future value is twice the present value, the equation reduces to:

2 × PV = PV (1+r)^n

This in turn reduces to

2 = (1+r)^n

Taking natural logarithms on both sides of the equation, we get

ln [2] = ln [(1+r)^n]

Since we know from the properties of logarithms that ln(a^b) = b × ln(a), and that the natural logarithm of 2 is approximately equal to 0.69, the equation reduces to

0.69 = n × ln(1+r)

Isolating n on the left hand side of the equation gives us

n = 0.69 / ln(1+r)

Now we take advantage of another property of natural logarithms, i.e., ln (1+r) is approximately equal to r when r is relatively small, to get

n = 0.69/r

Since 0.69 is not so nice to divide into (as discussed above), we replace it with 0.72 and then multiply the numerator and denominator by 100 so that we are dealing with whole numbers and the interest rate r can be expressed as a percentage rather than a decimal value.

n = 72/r
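
If you want to check how close the approximation comes, here is a quick sketch comparing the Rule of 72 with the exact answer for the 12% salary-growth example above.

```python
# Compare the Rule of 72 estimate with the exact doubling time n = ln(2)/ln(1+r)
# for the salary example: 12% annual growth.
import math

r = 0.12
exact = math.log(2) / math.log(1 + r)   # about 6.12 years
rule_of_72 = 72 / (r * 100)             # exactly 6 years
print(f"Exact: {exact:.2f} years; Rule of 72: {rule_of_72:.2f} years")
```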

Remember that this is an approximation. If you do the actual math using the compound interest formula you will get the exact answer. However, for most purposes, the Rule of 72 should work just fine. Enjoy!

 

 


“Unalytics” Awards – Nominee #1

The growing appetite for human capital analytics is drawing more professionals into the wider “market” for such services, including some players who ought to have done a little more homework.

We hereby announce the Nelson Touch Consulting Awards for Human Capital Unalytics – for the most egregious displays of half-baked notions or products. If analytics represent the use of data, analysis and systematic reasoning to make human capital decisions, then these awards are for the opposite – the misuse of data, poor analysis and fuzzy reasoning to make human capital decisions.  Hence:  “unalytics” – short for un-analytics.

Nominees will be named throughout the year and a 2011 winner will be chosen from among 10 finalists through a reader poll.

We certainly don’t want to embarrass or discourage thought-leaders and vendors who are nominated. After all, it’s the ‘mad’ inventors who eventually end up with game-changing ideas! But at this nascent stage in the evolution of human capital analytics, HR professionals should not be misled by half-baked models that stifle quality and dilute standards.

We welcome defense from the nominees (who may choose to remain anonymous) and debate from the blog’s readership.

Nominee #1 is a human capital analytics software vendor. The company offers customers a dashboard that seeks to warn management of a broad variety of talent management issues, based on analytics calculated using the company’s employee database.  In addition to the analytics, which are presented in graphical format, the dashboard provides an associated color-coded risk assessment.

One of the dashboard items is an exhibit that depicts differences in pay between men and women. On this count, Nominee #1 is to be commended for attempting to throw light on a very important issue. Gender pay disparities have lingered for far too long without appropriate redress.

The passage of the Lilly Ledbetter Fair Pay Act (2009) renewed interest in the issue, which is probably why pay disparity analysis is showing up in human capital analytics products. Of course, due to the sensitive nature of the topic, analysis is done privately and there is little scope for benchmarking.

However, Nominee #1 doesn’t give us a practical or accurate approach to investigate gender pay disparities. There is no happy intersection between the surge in human capital analytics (the tool) and the addressing of gender wage dynamics (the issue). Instead, here is Nominee #1’s ham-handed solution, depicted below.

The graphic is confusing and, stunningly, manages to be both overly complicated and overly simplified at the same time.

It is overly complicated because what’s important here is the differential between male and female pay. This can easily be shown as one number: average female pay as a proportion of average male pay. This happens to be 60% in this example. In one number, 60%, you immediately see the overall problem (it’s not close to 100%) and the extent of it (it’s not even close). No need for a chart at all!

The graphic is overly simplified because comparing gross averages of men’s and women’s pay does not provide much useful information. Certainly, knowing the gross extent of the gap is a start. However, the gap needs to be decomposed into what is explainable and what is not. Differences in labor market experience, educational qualifications and specialized skills (i.e., individuals’ human capital stock) might account for some of the gap. The unexplained portion of the gap is the problem, and most of it is attributed to gender wage discrimination.

The actual problem of unjustified pay disparity might be different from what one might assume looking at just the gross difference. Furthermore, there might be important insights to be drawn from the more detailed analysis. It could be that men and women earn different rates of return on their individual human capital (which needs further exploration). The main point here is that it is misleading to try and boil down this important issue into one simplistic dashboard statistic.
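
For readers who want to see what the decomposition looks like in practice, here is a minimal sketch – my own illustration, not Nominee #1’s method or any particular product. It regresses log pay on legitimate drivers plus a gender indicator; the data and effect sizes are simulated.

```python
# Decomposing a pay gap: regress log pay on experience and education plus a
# gender indicator. The coefficient on the indicator estimates the portion of
# the gap those controls cannot explain. All data are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1000
female = rng.integers(0, 2, n)
experience = rng.uniform(0, 25, n)     # years in the labor market
education = rng.integers(12, 21, n)    # years of schooling

# Simulated pay: returns to experience and education, plus a 5% unexplained
# gap for women -- the part a comparison of raw averages cannot isolate.
log_pay = (9 + 0.02 * experience + 0.06 * education
           - 0.05 * female + rng.normal(0, 0.15, n))

X = sm.add_constant(np.column_stack([female, experience, education]))
result = sm.OLS(log_pay, X).fit()

# Parameters in order: intercept, female, experience, education. The "female"
# coefficient comes out around -0.05, i.e. roughly a 5% unexplained gap.
print(result.params.round(3))
```

A single dashboard number cannot carry this much nuance, which is precisely the point.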

And then there are some minor irritants in terms of the graphical representation.  Why are we burdened with two decimal places of significance when the gross difference between the categories is so large? Reporting no decimals or at most one decimal place would be sufficient. Why is the pay differential denominated in 22% increments on the y-axis? Not adjusting what seems to be an automatic scaling setting shows a disregard for the numbers and our analytic sensibilities. What elements are included in “Pay”? Is this for a specific position or is it an average over all positions? Why are we not provided trend information on a statistic that we want to improve over time?

Finally, we come to the risk assessment, represented alongside the graphic as follows.

The differential represented here is that between average female pay and overall average pay, when the crux of the matter is the differential between average female pay and average male pay. One explanation for this approach might be that builders of dashboards need to provide a baseline or target and, in this case, overall average pay appears to have fit the bill.

The differential, again to an unnecessary two decimal places of significance, is flagged as a “Severe Risk.” Once more, I think the dashboard construct has trumped thoughtfulness about the issue, the metric and the risks posed.

Have I missed anything? Please join the discussion and stay tuned for Nominee #2 for the Nelson Touch Consulting Awards for Human Capital Unalytics. You are invited to submit “unalytic” gems that you come across for deconstruction and debate.

 


Leadership vs. Management

Are these two notions distinct, synonymous or complementary? Many views prevail in the literature.

I find the views of John Kotter and Peter Northouse particularly compelling, based on my own experience with leaders and managers at all levels of organizations, especially during times of change.

Kotter argues that leadership and management involve two distinct but complementary sets of action. Leadership is about coping with change while management is about coping with complexity. Here is a summary* that I keep handy to distinguish between the two.

* From Peter G. Northouse’s Leadership: Theory and Practice, Fourth Edition (2007), in which he draws from John Kotter’s A Force for Change: How Leadership Differs from Management (1990).

 


Text Analyze This

What are the different connotations ascribed to the term “human capital” by HR professionals and what is a simple way to illustrate the variety?

The opportunity to examine these two questions came up recently as part of some work with the Society for Human Resource Management’s (SHRM) Measures and Metrics task force.

The Measures and Metrics task force is one of many that have been convened to establish standards around HR metrics. The innovation over past attempts is the use of American National Standards Institute (ANSI) protocols for standards development.

The Measures and Metrics task force is drawn from HR practitioners, consultants and other interested parties from around the world. Its remit is to develop measures and metrics that will be useful for investors – ones that might become standard elements of the United States Securities and Exchange Commission’s (SEC) Form 10-K, for example.

The term “human capital” is ubiquitous, but it often means different things to different people. The term was invented by economists (see our previous blog post, “A Capital Idea?”), but has now entered the business lexicon and is often championed by HR. It was a natural choice as the basis for the task force’s metrics nomenclature.

In order to ensure that everyone was talking about the same thing, members of the task force were asked to write down their views on what they understood by the term “human capital.” Many members responded. Reading through the definitions, it became clear that there were disparate views on what human capital meant or ought to mean. As is to be expected with such a popular term, there were some common themes and words – education and experience, for example – but also some new words and ideas.

I thought it would be a good idea to create a “word cloud” (similar to the “category cloud” on the right margin of this blog) of the submissions. I used a free web-based application called Wordle, though there are many such free text analytic applications available.

The image you see on the left of the page shows the words used in the submissions to describe “human capital.” The size of each word is proportional to the number of times it is used in the document. This particular application arranged the words within a footprint outline (a customizable feature – you can choose from a variety of shapes and styles).

Right away, you can tell which are the most common words used. Such text analytics can be very useful. From this blog’s tag cloud, for example, you can see which are the most popular topics discussed.
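
For those who want to try it themselves, here is a minimal sketch of the counting that sits behind any word cloud; the sample text and stop-word list below are placeholders of my own, not the task force submissions.

```python
# Tally word frequencies in a block of text -- the raw counts that a tool like
# Wordle turns into font sizes. Sample text and stop words are placeholders.
from collections import Counter
import re

submissions = """
Human capital is the stock of skills, knowledge, education and experience
that employees bring to the organization. Education and experience build
skills; skills drive performance.
"""

stop_words = {"is", "the", "of", "and", "that", "to"}
words = re.findall(r"[a-z]+", submissions.lower())
counts = Counter(w for w in words if w not in stop_words)

for word, count in counts.most_common(5):
    print(word, count)
```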

There are a number of other possible uses.  HR leaders can look at HR communications to ensure that the message is not being overwhelmed by certain words and phrases. Compensation professionals can examine job descriptions to check for an appropriate balance in the verbiage between strategic and operational responsibilities. Team leaders can ascertain the common themes among team members’ inputs on various topics. Job seekers can ensure that their resumes are hitting the right notes in terms of key words and capabilities.

Text analytics is a growing field and has come a long way. It will grow in importance as the web evolves towards a semantic structure and capability (the “semantic web”). Words in a document are now akin to numbers in a spreadsheet. Evidence-based HR enthusiasts should familiarize themselves with text analytics and leverage it as they would quantitative analytics – to help answer critical questions and make better decisions.

 


It’s Probable: You Have a Chance

As HR professionals wrap their heads around predictive human capital analytics, one of the capabilities required will be a firm grasp of probability. Not just relevant probability theory, but a feel for the numbers as well.

Why is this important? The outputs of predictive human capital models are typically probabilities, likelihoods and odds. If we are going to use predictive models, we need to get comfortable with these special numbers. This post deals with probabilities. A probability is simply a way of expressing knowledge or a belief that an event will occur or has occurred. Subsequent posts will cover likelihoods and odds to complete the discussion.

In most cases, we will have an inkling or some sort of gut feel for what the prediction of a predictive human capital analytical model will be, based on training and experience. For example, if we are trying to model the probability of a new hire’s success in the company, based on his or her attributes, we typically have a feel for what sorts of candidates succeed in different business units, based on observations over the years.  Our predictive model will quantify this probability so that we can judge the impact of various factors.

Unless we are familiar with the mathematics underpinning the model, however, we might be surprised by the output. Unexpected results should always be examined to unearth the source of the discrepancy from our expectations. Sometimes the problem is the model; sometimes it is our expectations that are awry.

Two examples have emerged in the popular literature to illustrate how we cannot always trust our initial instincts with regard to probabilities.

The Birthday Problem

The first example is known as “the birthday problem.” It has been around for a long time, but I was first confronted with it in an applied mathematics course at graduate school. It was one of the problems in the problem set assigned at the very first class. Fortunately, the solution was discussed by the professor in the next class (and fortunately for current students, is now widely available on the internet).

The problem is to figure out the probability that at least two people in a group of randomly chosen individuals share the same birthday (day and month). Obviously, the larger the group, the larger the probability.

Most people underestimate the likelihood that two people will share a birthday. We have a prior notion that birthdays are rare (you have to wait 364 long days until your next one), and we seldom encounter people with the same birthday as ours.

If you do the math right (there are a number of ways to arrive at the result), you get some fairly counter-intuitive results. With 20 people in a room, the probability of a shared birthday is as high as 41%. Increase the number to just 23 and you have even odds of a shared birthday (50% probability).

Looking at it another way, you only need 57 people to reach a 99% probability of a shared birthday. The relationship is easy to tabulate for any group size.
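
Here is a short sketch that does exactly that, ignoring February 29 and assuming birthdays are spread evenly across the 365 days of the year.

```python
# P(at least one shared birthday among n people) = 1 - P(all n birthdays differ)
def shared_birthday_probability(n):
    p_all_different = 1.0
    for k in range(n):
        p_all_different *= (365 - k) / 365
    return 1 - p_all_different

for n in (10, 20, 23, 30, 57):
    print(f"{n:>3} people: {shared_birthday_probability(n):.1%}")
```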

Try it out at your next meeting. If you don’t get a shared birthday, remember this was a probability – you are not guaranteed the result. Even with 366 people in the room, if one person has a birthday on February 29, all bets are off.

The Drug Test Problem

The second example is typically framed as a drug test situation. Imagine a drug test that is known to correctly identify a drug user as testing positive 99% of the time and correctly identify a non-user as testing negative 99% of the time. That’s quite accurate by any measure. Now let’s assume that 0.5% of employees are drug users.

When the math is done right (using Bayes’ Theorem), the probability that someone who tests positive for the drug is actually a drug user is – hold your breath – 33%!  It’s more likely that the person is not a drug user!! Surely this can’t be – what’s going on here?

Despite the apparent accuracy of the test, the low rate of drug use undermines the result. With so few actual users, the false positives generated by the large population of non-users outnumber the true positives.
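
Here is the Bayes’ Theorem arithmetic spelled out as a short sketch, using the numbers from the example above.

```python
# Bayes' Theorem for the drug test example:
# P(user | positive) = P(positive | user) * P(user) / P(positive)
sensitivity = 0.99   # P(positive | user)
specificity = 0.99   # P(negative | non-user)
prevalence = 0.005   # P(user): 0.5% of employees

p_positive = (sensitivity * prevalence
              + (1 - specificity) * (1 - prevalence))   # all positive results
p_user_given_positive = sensitivity * prevalence / p_positive

print(f"P(user | positive) = {p_user_given_positive:.0%}")   # about 33%
```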

Both the birthday example and the drug test example are covered in greater detail in Wikipedia, including all the underlying math.

What are the conclusions? Here are two.

First, you need to have some understanding of basic probability theory. Being able to compute an expected value is essential. Knowing the difference between a probability and a conditional probability is important. One easy example is estimating turnover. Most people would measure turnover in a period as a percentage (say 17%, if 17 people out of 100 left the organization). They would perhaps estimate that the probability of turnover is 17%. Fair enough. However, to get an accurate estimate, one needs to look at individuals’ tenure and compute the probability of turnover after x years. This probability needs to be conditioned on the fact that the individual has not terminated for x years. I’ll cover this “life expectancy” notion of turnover in a subsequent post.
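
As a small preview of that post, here is a minimal sketch of the conditioning idea using hypothetical headcounts: the chance of leaving during a given year of service, given that the employee has stayed that long already.

```python
# Conditional turnover: P(leave during year x | still employed at start of
# year x). The headcounts below are hypothetical.
headcount_at_start = {1: 100, 2: 80, 3: 68, 4: 61}   # still employed entering year x
leavers_during_year = {1: 20, 2: 12, 3: 7, 4: 5}     # left during year x

for year, at_risk in headcount_at_start.items():
    left = leavers_during_year[year]
    print(f"Year {year}: {left / at_risk:.0%} chance of leaving, "
          f"given {year - 1}+ years of service")
```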

Second, gut feel for probabilities only works some of the time. It is advisable to do the math or compute the model and come up with the actual probability. Of course, if the number does not agree with your hunch, you need to be able to get under the hood and work out the probabilities formally in order to convince yourself which number is right. With multiple variables and complicated or “advanced” predictive models, it is going to be very difficult to do the math yourself. At some point you have to trust the model. That only means that you need to understand how the model works; it’s not enough to press a button, get the output and stick it onto a PowerPoint chart. Odds are that someone will call you on the probability underlying the prediction.

 
