Tech Industry Insights: People Analytics Explained

People analytics has been all over the news as of late – from Google’s Project Oxygen to Don Peck’s article about the future of the hiring process.

But what are people analytics, what have they achieved so far, who’s working on them, what could they become, and what are the possible issues?

What Are People Analytics?

According to the Wharton People Analytics Conference:

“People analytics is a data-driven approach to managing people at work. Those working in people analytics strive to bring data and sophisticated analysis to bear on people-related issues, such as recruiting, performance evaluation, leadership, hiring and promotion, job and team design, and compensation.”

In other words, it’s an attempt to take human bias out of recruiting, managing, and retaining employees.

What Have They Achieved So Far?

Most of the high profile people analytics work so far has come out of Google, so let’s talk about how people analytics have assisted in recruitment, performance, and retention at Google.

1. Recruitment. Until recently, Google was notorious for putting applicants through a battery of interviews (a practice dating back to its founding, when new recruits were interviewed by the entire company).

However, Google’s People Operations division discovered that four interviews were enough to get an accurate picture of an applicant, and that anything beyond that yielded diminishing returns. Capping the process significantly sped up hiring and quelled complaints that it was too time-consuming.
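Google hasn’t published the underlying analysis, but the diminishing-returns pattern is easy to reproduce with a toy model: treat each interview as a noisy measurement of a candidate’s true ability and watch how little each additional interviewer improves the estimate. Everything below (the noise level, the candidate pool) is an assumption for illustration, not Google’s data.

```python
import numpy as np

rng = np.random.default_rng(0)

n_candidates = 10_000
true_ability = rng.normal(0, 1, n_candidates)  # latent job performance

def estimate_after_k_interviews(k, noise_sd=1.0):
    """Average of k noisy interview scores per candidate."""
    noise = rng.normal(0, noise_sd, (n_candidates, k))
    return (true_ability[:, None] + noise).mean(axis=1)

for k in (1, 2, 3, 4, 6, 8):
    r = np.corrcoef(estimate_after_k_interviews(k), true_ability)[0, 1]
    print(f"{k} interview(s): correlation with true ability = {r:.3f}")
```

Under these assumptions, most of the signal arrives by the third or fourth interview; doubling the panel from four to eight buys only a few extra points of correlation.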

Additionally, Laszlo Bock (Google’s VP of People Operations) recently spoke to the New York Times about some findings from their interview process. Apparently, success on brain teasers and market-sizing questions, and having a high GPA or high test scores, had no correlation with success on the job.

Anyway, I wanted to say a little more about the GPA / test score / brain teaser finding. On its face it seems pretty surprising that there’s no correlation between these and success on the job. But let’s dig into it.

Ultimately GPA, test scores, and brain teasers are all indirect ways of measuring future performance (since obviously you can’t measure it directly). These metrics are meant to give some indication of intelligence, hard work, creativity, persistence, etc.

However, I’d argue that they fail as good indicators of talent because:

They Lack Sensitivity. What’s the difference between a 3.8, a 3.85, and a 3.9 GPA? What’s the difference between a 2350 and a 2400 SAT score or a 35 and a 36 on the ACT? What’s the difference between a market sizing estimate that is 5% off vs. 8% off? What’s the difference between two excellent case interview performances?

At some point the metrics break down. For example, I’ve given hundreds of case interviews, and I’ve noticed that once people reach a certain threshold of skill it’s almost impossible for me to distinguish their performance from that of anyone else above the threshold.

As for GPA, it’s difficult enough comparing two people’s GPAs when they come from different majors (apparently Chemistry majors have the lowest average GPAs and Language majors the highest, so it’s hard to separate hard work from easy classes). Once you add in the fact that grade inflation is rampant at top universities (the median grade at Harvard is reportedly an A-, and the most common grade is an A), you can’t use GPA to distinguish two people, because everybody has a high one.
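This loss of sensitivity at the top is a textbook statistical effect (restriction of range plus a ceiling). Here’s a minimal sketch with made-up numbers showing how a metric that tracks talent reasonably well across the whole population says almost nothing once every applicant in the pool already has a high score:

```python
import numpy as np

rng = np.random.default_rng(1)

n = 100_000
talent = rng.normal(0, 1, n)
# GPA as a noisy, ceiling-capped proxy for talent (all numbers invented).
gpa = np.clip(3.0 + 0.4 * talent + rng.normal(0, 0.25, n), 0.0, 4.0)

everyone = np.corrcoef(gpa, talent)[0, 1]
elite = gpa >= 3.8  # an applicant pool where everyone has a high GPA
among_elite = np.corrcoef(gpa[elite], talent[elite])[0, 1]

print(f"correlation over the whole population: {everyone:.2f}")
print(f"correlation among 3.8+ GPAs:           {among_elite:.2f}")
```

Both problems from the text show up: the 4.0 cap compresses the top scorers into the same number, and selecting only high-GPA applicants throws away most of the variance the metric had to work with.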

They’re Easy To Game. Once people know what is supposed to correlate with success, they will game the system. Normally this wouldn’t be a bad thing, but the problem is that what correlates with success does not create success.

As established above, if someone knows that a high GPA is necessary, it’s very easy to game the system to get the highest GPA possible.

If someone knows demonstrated leadership is necessary, it isn’t difficult to join a few organizations and just consistently show up to get a leadership position (most organizations are built to have as many unique leadership positions as possible to let people do this).

The problem is that, unlike doing your homework in school, gaming these metrics may not actually improve your talent.

There’s Only a Weak Link Between Past & Future Performance. One of the most interesting premises of behavioral psychology is that “past performance is the best indicator of future performance.” It’s simple, easy to understand, and you hear it everywhere so it must be true.

And to a certain extent, it is true. But only if the past and future actions you’re tracking and trying to predict involve high-frequency, habitual behaviors, occur in exactly the same context, receive no corrective feedback, and only if the person themselves remains consistent in their behavior.

So for example, someone who exercises frequently today will probably exercise frequently in the near future. In a few years or decades? Hard to say.

But as a counterpoint, consider paroled murderers. Based on past behavior, you’d expect them to be more likely to commit murder than the average member of the population. Yet a study that tracked the recidivism rates of parolees in California over a 20-year period found that 0% of them repeated their crimes.

Why? Because the past action and the future action are a long time apart, the contexts are different, there is strong negative feedback for the action, and so on.

And although people’s actions are usually ascribed to their character, it turns out they’re much more related to the situation.

Anyway, the point is that past performance isn’t a great predictor of future performance in general, so trying to use GPA or test scores to predict future performance isn’t going to work.

(Note: If you are interested in more on the past performance / future performance paradigm, read this article by Karen Franklin)

Anyway, thus far research from people analytics has shown us that it’s very difficult to predict future performance from past performance. Now let’s see how Google’s Project Oxygen tried to improve performance from its existing employees.

2. Performance.

Project Oxygen

[Image: Project Oxygen’s list of the behaviors of Google’s best managers (credit: Coert Visser and the New York Times)]

Reading the above list, my first thought was that you could Google “what makes a good manager” and get a pretty similar list.

It isn’t groundbreaking, but (again) it does have authority because it was derived from “10,000 observations about managers — across more than 100 variables, from various performance reviews, feedback surveys and other reports.”

The other advantage of Project Oxygen is that they gave each manager detailed feedback on their own performance. As it turns out, this might be one of the only ways to counteract the Dunning-Kruger effect (that the least skilled at something tend to overestimate their own abilities).

It’s pretty easy to ignore what your subordinates say at the water cooler, but a report showing you’re a 25th-percentile manager is a little more convincing.

Of course, as with the previous point, once the metrics that determine performance are known, they can be gamed. Unfortunately, in many cases how good you are at your work is judged by a group of your peers (to eliminate bias), many of whom don’t see your work directly or aren’t capable of judging it directly.

What ends up happening is that to score well in these rankings you have to spend a significant amount of time convincing others that you’re effective instead of actually being effective.

(see Kurt Eichenwald’s article about how this caused Microsoft to lose its way after Steve Ballmer instituted stack ranking).

Anyway, it seems like people analytics could help judge and improve performance, but only if organizations find a way to gather data that isn’t contaminated by human bias. I’m not sure how that could be done.

3. Retention. A few years ago, Google’s People Operations division discovered that the attrition rate among women was unusually high compared to the company average. When they dug in further, they realized that most of the attrition was coming from new mothers.

At the time, Google had a standard three-month maternity leave package, which they extended to five months while allowing expectant mothers to split the time before and after the birth on their own schedule. How they decided on this is unclear, but according to the article, attrition among women dropped by half, falling to the average for all employees.

Unfortunately the article doesn’t go into detail about exactly how they noticed this trend or how they came up with the solution. However, this is a pretty well-known problem and probably could have been figured out just by making a few phone calls to new mothers who had left.

However, I’d argue the advantage of the analytics here is confidence. If you were to argue that the attrition rate among women in your organization is higher than normal because new mothers are leaving, and that the maternity policy should therefore change, it’s unclear what the reaction would be. People might take offense; it might trigger disagreement, debate, and so on.

However, if you say that the data shows attrition among expectant mothers is a problem and that changing the maternity policy has a high probability of success, the reaction will probably be different.

Who’s Working on People Analytics?

1. Evolv – A Workforce Optimization Startup

From The Economist:

“Evolv mines mountains of data. If a client operates call centers, for example, Evolv keeps daily tabs on such things as how long each employee takes to answer a customer’s query. It then relates actual performance to traits that were visible during recruitment.

Some insights are counter-intuitive. For instance, firms routinely cull job candidates with a criminal record. Yet the data suggest that for certain jobs there is no correlation with work performance. Indeed, for customer-support calls, people with a criminal background actually perform a bit better. Likewise, many HR departments automatically eliminate candidates who have hopped from job to job. But a recent analysis of 100,000 call-centre workers showed that those who had job-hopped in the past were no more likely to quit quickly than those who had not.

Working with Xerox, a maker of printers, Evolv found that one of the best predictors that a customer-service employee will stick with a job is that he lives nearby and can get to work easily. These and other findings helped Xerox cut attrition by a fifth in a pilot program that has since been extended. It also found that workers who had joined one or two social networks tended to stay in a job for longer. Those who belonged to four or more social networks did not.”

This is an interesting set of insights.

First, they discovered that, for call centers, people with a history of changing jobs are no more or less likely to leave than anyone else. It’s good to see this bias finally put to rest, at least for call center workers.

Second, they discovered that people with criminal records perform slightly better than average (most likely because they’re a smaller group, so it’s easier for their average to deviate based on a few extreme cases, but it’s still good to see this bias against people with criminal records contradicted).

Third, they discovered that employees who live closer to work are less likely to quit. Nothing groundbreaking here, but I’m not sure how it helps with recruiting: using where someone lives as a basis for employment would be discrimination (in many cities neighborhoods are segregated by ethnicity and stratified by wealth).

Fourth, they discovered that employees who belonged to one or two social networks tended to stay in a job longer, while those on four or more did not.

Again, this is probably a matter of group size. There probably aren’t that many people on four or more social networks, so it’s easier for their group average to deviate from the overall average. Concluding anything more would be reading too much into it; the quick simulation below shows why.
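A minimal sketch of the small-group effect, with invented group sizes: even when group membership carries no information at all, the average of a small group wanders much further from the true mean than the average of a large one.

```python
import numpy as np

rng = np.random.default_rng(2)

# Everyone's performance comes from the SAME distribution, so group
# membership genuinely doesn't matter. Group sizes are made up.
groups = {"1-2 social networks": 20_000, "4+ social networks": 300}

for label, size in groups.items():
    # Resample the group average many times to see how far it can stray.
    averages = rng.normal(100, 15, (200, size)).mean(axis=1)
    print(f"{label}: group average ranges from "
          f"{averages.min():.1f} to {averages.max():.1f}")
```

The large group’s average barely moves; the small group’s average can swing by several points on a few extreme cases alone.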

By the way, this reminds me of a finding that people who apply for a job using a browser other than the one their computer shipped with make better workers (at least as call center employees).

I have a feeling that whoever came up with that finding doesn’t understand that correlation doesn’t imply causation. Perhaps there is a correlation between browser choice and job success, but that doesn’t mean installing Chrome or Firefox makes someone a better worker.

Anyway, Evolv’s work seems to focus mainly on high-frequency, repetitive jobs where performance is easy to track and past performance is somewhat more indicative of future performance.

I don’t think their findings generalize to all types of jobs, and I’m not sure their techniques extend to more creative work either.

2. Bersin by Deloitte – A leading provider of research-based membership programs and advisory services in the human resources, talent and learning market.

From Forbes:

“Let me cite one of the examples from the research. One company studied the turnover and retention behavior of employees based on pay raises. Their traditional approach was to pay based on a normal curve and give top performers slightly higher raises than second-tier performers, they received slightly more than the next group, and so on.

It turns out, as much of our other research shows, that this “normal distribution” curve of pay is a big mistake. What the research found was that employees in the second and third quintile of performance (good solid performers) would stay with the company even if their raise was as low as 91% of average increases in their job class.  So these folks were being overpaid.

On the other hand, people at the top of the performance curve would leave the company unless they received 115-120% of the average pay increase for their job class, indicating that the payroll money should go here.

As most managers know, top performers out deliver mid-level performers by a wide margin, so paying top people “much more” is a huge advantage if it prevents them from leaving.

In this particular case the findings did not solve the problem. Even after being informed, managers continued to pay their people the old way (belief systems die hard and managers don’t like to make waves). So the company had to roll out a massive training program and a new tool set for compensation distribution based on the data science, essentially over-riding typical manager thinking.”

Now this is an interesting insight. I’m not sure how they arrived at it, so I won’t comment on its accuracy.

However, I’m not sure that changing how the company pays its employees is really the best way to improve retention of talent. If retaining talent is a real problem, why not just ask people what would make the working environment better and do that?

It might even be little things such as fixing the copy machine and eliminating TPS reports.

An interesting insight to be sure, but is compensation really the most important thing when deciding whether or not to leave your job? It might be, but usually it’s something else.
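That said, the arithmetic behind the quoted thresholds is easy to sketch. Here’s a toy raise-budget reallocation; the headcounts and average raise are invented, and only the 91% and 115-120% figures come from the quote:

```python
# Toy raise-budget reallocation using the thresholds quoted above.
# Headcounts and the average raise are invented for illustration.
avg_raise = 3_000                      # average raise per employee ($)
headcount = {"top": 20, "second": 20, "third": 20,
             "fourth": 20, "bottom": 20}
budget = avg_raise * sum(headcount.values())

# Pay the solid middle at its retention floor (91% of the average raise,
# per the research above) and everyone else at the average...
multipliers = {"second": 0.91, "third": 0.91, "fourth": 1.0, "bottom": 1.0}
spent = sum(avg_raise * m * headcount[q] for q, m in multipliers.items())

# ...then hand the freed-up budget to the top quintile.
top_raise = (budget - spent) / headcount["top"]
print(f"top-quintile raise: ${top_raise:,.0f} "
      f"= {top_raise / avg_raise:.0%} of the average")
```

Under these toy numbers, trimming two middle quintiles to their 91% retention floor frees enough budget to pay the top quintile about 118% of the average raise, right inside the 115-120% band the research says top performers require.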

3. LinkedIn – The world’s largest business networking website for people in professional occupations.

Examples of Their Services Include:

LinkedIn’s Talent Brand Index:

“LinkedIn’s Talent Brand Index (TBI) is an amazing tool provided for free to companies with a LinkedIn careers page and therefore a paid account. In a nutshell, TBI lets you know how attractive your company is to the talent you want to hire. According to LinkedIn, 83% of employers believe their brand directly impacts their hiring, but only a mere 33% of them actually measure how their brand is perceived. This is a figure LinkedIn are looking to change with TBI.

So how does TBI work? LinkedIn measure the interactions between it’s millions of users, analyze those interactions and determine what makes people interested in companies as places to work. They do this firstly by measuring two elements:

1. Reach – the number of people that are familiar with your company as an employer e.g. company page profile views and users who have connected with your company.
2. Engagement – the number of people who show interest in your brand e.g. career page views, company page followers and job viewers.

Then, in order to determine your company’s Talent Brand Index, they divide the level of engagement with the level of reach. For example, if your total reach amounts to 336,000 people and the number of engaged users comes to 55,000 then your TBI percentage will be 16.4% which is considered high. This figure represents the people who know about your company as an employer and also express an interest in it. In other words, how well you attract candidates in your talent pool. The higher your percentage, the easier it is for you to attract talent to your available role. Then depending on your TBI score, LinkedIn will help you improve it.

Your TBI can be benchmarked against that of your competitors, it helps you understand how different types of professionals (marketing, finance, sales etc.) view your company and it also keeps tabs on how your TBI measures over time.

For those of you without a paid company account, LinkedIn still provide some big data smarts in the form of a comprehensive list of “LinkedIn’s Most In Demand Employers“. By visiting the link you can view the top rated employers in the world by geographic location and most helpfully by function e.g. Deloitte is the most popular employer among finance and accounting professionals.”

This one’s a bit different. Instead of trying to give insights about what makes an attractive employee, it tries to rank how attractive your company is as an employer. It’s useful for growing companies to see how they stack up by industry and by region.
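The math behind the index is just the division described in the quote; as a quick sanity check of the 16.4% figure:

```python
# Recomputing the TBI example from the quote: engagement divided by reach.
reach = 336_000       # people familiar with the company as an employer
engagement = 55_000   # people actively engaging (followers, job viewers)

tbi = engagement / reach
print(f"Talent Brand Index: {tbi:.1%}")  # 16.4%, matching the quoted figure
```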

LinkedIn’s Corporate Recruiter Tools

LinkedIn has a whole suite of tools for recruiting as well, which allows you to search through the profiles of everybody on the site for specific skills / education / prior work experiences.

This gives companies access to people who aren’t actively looking for jobs right now, provided they know exactly what they’re looking for.

4. Gild – A Tech Recruiting Solutions Startup

From the New York Times:

“The concept is to focus less than recruiters might on traditional talent markers — a degree from M.I.T., a previous job at Google, a recommendation from a friend or colleague — and more on simple notions: How well does the person perform? What can the person do? And can it be quantified?       

The technology is the product of Gild, the 18-month-old start-up company of which Mr. Bonmassar is a co-founder. His is one of a handful of young businesses aiming to automate the discovery of talented programmers — a group that is in enormous demand. These efforts fall in the category of Big Data, using computers to gather and crunch all kinds of information to perform many tasks, whether recommending books, putting targeted ads onto Web sites or predicting health care outcomes or stock prices.       

Of late, growing numbers of academics and entrepreneurs are applying Big Data to human resources and the search for talent, creating a field called work-force science. Gild is trying to see whether these technologies can also be used to predict how well a programmer will perform in a job. The company scours the Internet for clues: Is his or her code well-regarded by other programmers? Does it get reused? How does the programmer communicate ideas? How does he or she relate on social media sites?”

and it continues:

“Everybody can pretty much agree that gender, or how people look, or the sound of a last name, shouldn’t influence hiring decisions. But Dr. Ming takes the idea of meritocracy further. She suggests that shortcuts accepted as a good proxy for talent — like where you went to school or previously worked — can also shortchange talented people and, ultimately, employers. “The traditional markers people use for hiring can be wrong, profoundly wrong,” she said.       

Dr. Ming’s answer to what she calls “so much wasted talent” is to build machines that try to eliminate human bias. It’s not that traditional pedigrees should be ignored, just balanced with what she considers more sophisticated measures. In all, Gild’s algorithm crunches thousands of bits of information in calculating around 300 larger variables about an individual: the sites where a person hangs out; the types of language, positive or negative, that he or she uses to describe technology of various kinds; self-reported skills on LinkedIn; the projects a person has worked on, and for how long; and, yes, where he or she went to school, in what major, and how that school was ranked that year by U.S. News & World Report.”

Gild is trying to help companies recruit “diamonds in the rough”: people who, for whatever reason, didn’t follow the standard path but may still be very talented programmers.

It’s a peek into the future, where your online footprint will be used to determine your personality, skills, and suitability for different jobs.

With programming, you can actually examine someone’s work and there are open source communities that people can use to get their name out there. This isn’t the case for every job.

The difficulty ends up being: can Gild’s algorithms be used to determine how much potential people have for jobs other than programming?

5. TalentBin and Gozaik – San Francisco- and Boston-based startups that were recently acquired by Monster.com

TalentBin basically created a dossier of all of someone’s online activity (Quora, LinkedIn, Facebook, Meetup, GitHub, Stack Overflow, blogs, etc.). Gozaik allowed recruiters to create job advertisements targeted at specific candidates.

The themes are similar to Gild’s: use online activity to identify talented developers who were overlooked by the usual channels.

6. Entelo – A startup that also aggregates information about candidates’ online activity and can also predict / alert recruiters when someone is ready to leave a job and may be looking for their next one.

From Gigaom:

“Professionals are generating an ever-growing pool of public data that sends signals about their skills — and their availability. A start-up has made a business of parsing that data for tech firms, and now wants to expand to academe and other professions.

Workers in fields like technology and academia are posting more information about their professional lives online, creating a pool of public data that can be machine-sifted to find job candidates.

That’s the idea behind Entelo, a start-up that believes algorithms can replace much of the heavy lifting performed by recruiters and HR departments. The San Francisco-based company, whose clients include Yelp and Square, parses millions of data points to create what amounts to a “professional graph” for thousands of skilled employees.

As my colleague Derrick Harris explained, Entelo’s data-aggregation software combs through sites like Github and LinkedIn to find job candidates who are likely to be not just qualified, but also available (a burst of online activity is one of the strongest signals someone is ready to move).

Entelo now has over 80 paying clients and ten full-time employees, and on Wednesday it announced a $3.5 million funding round led by Battery Ventures with the participation of Menlo Ventures. The company will use the cash to expand its engineering operations and, eventually, to push into new professional verticals beyond tech.”

According to that article, Entelo is thinking about moving into academia next (because academics generate a significant amount of public work), and also helping companies comb through people they rejected previously who could still be successful at their companies.
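The article doesn’t say how Entelo actually detects an “activity burst,” but a minimal version of the idea is a z-score of the latest week’s public activity against the person’s own recent baseline. The window, threshold, and sample data below are all assumptions:

```python
import statistics

def is_bursting(weekly_counts, window=8, z_threshold=2.0):
    """Flag a candidate whose latest week of public activity
    (commits, profile edits, posts...) is far above their own baseline."""
    baseline = weekly_counts[-window - 1:-1]
    latest = weekly_counts[-1]
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline) or 1.0  # guard against zero variance
    return (latest - mean) / stdev > z_threshold

# Hypothetical candidate: quiet for two months, then a spike.
activity = [2, 3, 1, 2, 4, 2, 3, 2, 14]
print(is_bursting(activity))  # True -> maybe ready to move
```

A real system would presumably weight different activity types differently and tune the threshold per profession, but the core signal is this simple.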

7. Kenexa – A talent and workforce management company recently acquired by IBM.

Josh Bersin recently wrote an excellent article about why IBM bought Kenexa and what its product offerings are.

8. Knack – A Palo Alto based startup that uses games to assess employability.

From Don Peck’s They’re Watching You at Work article:

“Knack makes app-based video games, among them Dungeon Scrawl, a quest game requiring the player to navigate a maze and solve puzzles, and Wasabi Waiter, which involves delivering the right sushi to the right customer at an increasingly crowded happy hour.

These games aren’t just for play: they’ve been designed by a team of neuroscientists, psychologists, and data scientists to suss out human potential. Play one of them for just 20 minutes, says Guy Halfteck, Knack’s founder, and you’ll generate several megabytes of data, exponentially more than what’s collected by the SAT or a personality test.

How long you hesitate before taking every action, the sequence of actions you take, how you solve problems—all of these factors and many more are logged as you play, and then are used to analyze your creativity, your persistence, your capacity to learn quickly from mistakes, your ability to prioritize, and even your social intelligence and personality.

The end result, Halfteck says, is a high-resolution portrait of your psyche and intellect, and an assessment of your potential as a leader or an innovator.”

Pretty interesting idea, but only time will tell how accurate the predictions are. The problem is that these games are likely easy to figure out and game once you know what you’re doing. If they get widely adopted, we’ll see an entire industry arise around how to beat them (just as there are now admissions consultants for high-end kindergartens in New York).

9. Workday – An on‑demand (cloud-based) human capital management and financial management software vendor

hrlab wrote this very extensive review and explanation of what Workday does. Basically, it allows you to manage the whole employment life cycle (from getting hired to termination) in one system. It allows you to manage benefits, compensation, payroll, absences, etc.

This is a different segment of the market than most of the other companies we’ve discussed since it’s about managing employee data as opposed to recruiting.

10. Sociometric Solutions – A startup born out of the MIT Media Lab that focuses on measuring real-world social behavior using Sociometric Badges.

From Don Peck’s Article:

“Pentland’s initial goal was to shed light on what differentiated successful teams from unsuccessful ones. As he described last year in the Harvard Business Review, he tried the badges out on about 2,500 people, in 21 different organizations, and learned a number of interesting lessons.

About a third of team performance, he discovered, can usually be predicted merely by the number of face-to-face exchanges among team members. (Too many is as much of a problem as too few.) Using data gathered by the badges, he was able to predict which teams would win a business-plan contest, and which workers would (rightly) say they’d had a “productive” or “creative” day.

Not only that, but he claimed that his researchers had discovered the “data signature” of natural leaders, whom he called “charismatic connectors” and all of whom, he reported, circulate actively, give their time democratically to others, engage in brief but energetic conversations, and listen at least as much as they talk.

In a development that will surprise few readers, Pentland and his fellow researchers created a company, Sociometric Solutions, in 2010, to commercialize his badge technology.”

Very Star Trek. Also, it seems like these badges eliminate privacy at the workplace completely.
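The “too many is as much of a problem as too few” finding describes an inverted U, which you’d typically capture by adding a quadratic term to the regression. A sketch on fabricated team data (Pentland’s actual data and model aren’t public):

```python
import numpy as np

rng = np.random.default_rng(3)

# Fabricated teams: performance peaks at a moderate number of
# face-to-face exchanges and falls off on either side.
exchanges = rng.uniform(0, 100, 60)
performance = -0.02 * (exchanges - 50) ** 2 + 70 + rng.normal(0, 5, 60)

# Fit performance = a*x^2 + b*x + c; a < 0 confirms the inverted U.
a, b, c = np.polyfit(exchanges, performance, 2)
peak = -b / (2 * a)
print(f"quadratic coefficient a = {a:.3f} (negative -> inverted U)")
print(f"estimated sweet spot: ~{peak:.0f} exchanges")
```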

11. Oracle and SAP Human Resources Management Software – These two are what most companies use for human capital management software (they’re similar in function to Workday, but Oracle and SAP probably have 50x as many users as Workday).

What’s Next for Workplace Analytics?

We’ve seen some of the future in the previous section, so I’ll just pull out what I think are the most important trends:

1. Recruiters Will Find You. As data mining techniques become more common, everything you’ve ever put on the web will be assembled into a dossier about you. You will be ranked alongside every other person in your field and geographic area and recruiters will target you based on this information. Right now this is only in its infancy and exists mainly for software jobs but it will become more common.

2. The Process Will Be (Even) More Automated. Employers will be able to generate customized written and practical tests, psychological evaluations, and games to test your cognitive ability, social skills, etc.

3. Organizations Will Study Themselves and Hire Based on Specific Criteria. They’ll try to figure out what correlates with success in their organization and use this as criteria for hiring. Many employers are already doing this, but with time it’ll become pervasive.

What Are The Critiques of People Analytics?

1. It Will Reduce Social Mobility. As people analytics become more mature, even more tacit knowledge will be required to gain employment in high end jobs. The children of the rich will learn from a young age how to beat employment tests, navigate the complexities of the recruiting processes, and manage their online profiles. The children of the poor will not.

2. It Provides False Certainty. When it comes down to it, the data used by workplace analytics will suffer from the same human biases our decisions suffer from today. The problem is that because insights will be based on “data,” people will be less likely to question them.

As the educator Alfie Kohn once wrote about grading, “What grades offer is spurious precision—a subjective rating masquerading as an objective evaluation.”

3. It’s Easy to Game. Once people analytics becomes popular, people will figure out how to game it. For example, if employers start using your online footprint to evaluate your employability, I would bet you anything that an entire industry pops up overnight to manage people’s online images, from cradle to grave.

4. It Requires Human Beings to Provide Feedback. For there to be people analytics you need data from people in the first place. The problem is people don’t want to give others accurate feedback, anonymous surveys or not. People are so biased that they give negative feedback to people they don’t like / are threatened by (even if these people are capable) and positive feedback to people they like (because they don’t want to hurt their feelings). No amount of statistical number crunching can eliminate these biases.

5. It Can be Used to Lie. Many people don’t understand statistics and can be easily manipulated.

6. Correlation is not Causation. This one is so huge I can’t stress it enough. Many things may correlate with success (for example, one analysis found that a group of good programmers all liked a particular manga site), but that doesn’t mean reading manga causes one to become a strong programmer.

Unfortunately, there’s a real risk that people may confuse correlations with causations and completely screw up their hiring process.
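The manga-site example is also partly a multiple-comparisons problem: screen enough irrelevant attributes against any outcome and some will “predict” it by pure chance. A sketch with completely random data:

```python
import numpy as np

rng = np.random.default_rng(4)

n_employees, n_attributes = 50, 300   # small sample, many attributes
success = rng.normal(0, 1, n_employees)                     # the outcome
attributes = rng.normal(0, 1, (n_attributes, n_employees))  # pure noise

# Correlate every attribute with success and keep the "strongest signals."
corrs = np.array([np.corrcoef(a, success)[0, 1] for a in attributes])
hits = np.abs(corrs) > 0.3
print(f"{hits.sum()} of {n_attributes} purely random attributes "
      f"'predict' success at |r| > 0.3")
```

None of these attributes carries any information, yet a naive screen would still “discover” a handful of them.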

7. It Depends on Your Sample. First, for analytics to be accurate you need a large enough sample size. People may forget this and get all sorts of strange findings as a result.

The other problem is that if you study your present workforce to figure out what correlates with success, you’ll introduce systematic bias (because everyone there was recruited through the same system, which makes them similar). You may find that certain things correlate with success in your organization not because they’re intrinsically good characteristics, but simply because all of your workers resemble each other; the sketch below shows how that plays out.
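Here’s a classic selection-effect sketch of that systematic bias: two traits that are completely independent among applicants become negatively correlated among hires, purely because hiring selected on their sum. All parameters are invented:

```python
import numpy as np

rng = np.random.default_rng(5)

# Applicant pool: test scores and interview charm are independent traits.
test = rng.normal(0, 1, 100_000)
charm = rng.normal(0, 1, 100_000)

# Hire anyone whose combined score clears a bar (a stand-in for
# "recruited through the same system").
hired = (test + charm) > 2.0

print(f"correlation among applicants: {np.corrcoef(test, charm)[0, 1]:+.2f}")
print(f"correlation among hires:      "
      f"{np.corrcoef(test[hired], charm[hired])[0, 1]:+.2f}")
```

Study only your current workforce and you’d conclude that test scores and interview charm trade off against each other, which is an artifact of the hiring bar, not a fact about people.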

8. It Creates Homogeneity. If you use your present workforce as a basis for your future workforce you get the same thing as you always had. There’s nothing wrong with this necessarily, but overlooking people because they’re different may cause your organization to miss out on some serious talent.

9. It Creates an Algorithmic Arms Race. One of the main purposes of People Analytics is to find undervalued talent. The problem is, as analytics get better previously undervalued talent will become overvalued and you’ll need new algorithms to find new diamonds in the rough (read about high frequency trading for an understanding of why this happens).

10. It Encourages the Fundamental Attribution Error. Ben Dattner wrote a great piece in HBR about this. To paraphrase one of his examples, say a company sees high turnover among its entry-level employees. Most people would commit the Fundamental Attribution Error and conclude that something is wrong with that crop of employees. With people analytics, they’d crunch the data and, lo and behold, come up with a “convincing story” (read: rationalization) that there’s an error in the recruiting process and they’re looking for the wrong things in new employees.

The correct response would be to look at the organization itself and try to figure out what’s going on, but people are good at lying to themselves and you can prove anything with statistics if you want to.

11. It’s Probably a Violation of Privacy. Collecting every available piece of data about someone’s online history? Making people undergo a battery of psychometric tests? It’s probably violating the spirit (if not the letter) of privacy laws.

12. It Assumes Past Performance Determines Future Performance. Many of the recruiting startups are finding ways to rate / rank / find people based on past work. But, as we discussed earlier there may not be a great link between past and future performance unless the work and a whole suite of other factors are exactly identical.

13. It’s Aimed Toward the Top. All of the recruiting / matching of talent is aimed toward the top of the population, but it does nothing to assist people who are poor, lack skills, or don’t have jobs.

Conclusion

People analytics holds great promise for matching people to the right jobs, helping companies recruit talent and find hidden gems, helping companies improve their employees’ performance, and improving retention of talent.

Of course, there are tremendous risks. It’s unclear whether organizations can collect unbiased data in the first place, and even if they can, whether we can adequately use that data to inform our decisions is an open question.

I see great promise here, but only if we can control our natural inclinations toward the fundamental attribution error and other sources of bias and understand exactly what the data is telling us with proper confidence limits.
