The poison apple of “easy” skills data – are you ready to give up that sweet taste?

This is the third in a series of posts about skills. If you haven’t already, read the other posts first:
Cutting through the BS and Sorry folks, but “Microsoft Office” is NOT a skill.

In the second post of this series, we discussed skills and the issues around defining and specifying them. Assuming we can reach some kind of common understanding of this valuable new currency, the next step is to find a way to generate meaningful skills and job data.

 

Shaky data – shaky results

Big data from online job platforms or professional networking sites can yield a wealth of information at a much higher granularity than the data national statistics offices typically gather in surveys – especially regarding skills. One reason is that, unlike with printed advertisements, employers do not pay by space for online job postings and can thus provide more detailed information on the knowledge and skills they require. Online data also allows a much larger sample to be monitored in real time, which can be highly valuable for analysts and policy makers seeking a timely and more detailed understanding of conditions and trends in the labor market.

However, when working with the data that is available online, such as online job advertisements (OJA) or professional profiles (e.g., LinkedIn profiles), we need to be clear on the fact that this data is neither complete nor representative, and any results must therefore be interpreted with caution. Not just because of the obvious fact that the results will be distorted, but more importantly because of the implications. Promoting certain skills based on distorted data can be harmful to the labor market: if workers focus on obtaining these skills – which by nature tend to be derived from data biased towards high-skilled professionals in sectors such as IT and other areas involving higher education – they are less likely to opt for career paths involving other skills that actually are in high demand, e.g., vocational careers in skilled trades, construction, healthcare, manufacturing, etc. And this despite the fact that digitalization will primarily affect better educated workers with high wages in industrialized countries, simply because it is much easier to digitalize or automate at least some of the tasks in these jobs than in many blue-collar and vocational occupations such as carpentry or care work. The last thing any labor market policymaker would want is to accentuate the already critical skill gap in this area, or to create an even tighter labor market for certain professions, say, IT professionals [1]. Similarly, education providers seeking to align their curricula with market demand need reliable data so as not to amplify skill gaps instead of alleviating them. And yet, a growing number of public employment services (PES) are relying on this often shaky data for decision making and ALMP design.

For instance, several projects aim to gather and analyze all available OJA from all possible sources in a given labor market and use these aggregated data to make recommendations, including forecasts of future employability and skills demand. But the skills are typically processed and presented without any semantic context, which can be extremely misleading.

Challenges of OJA data

In 2018, the European statistical system’s ESSnet Big Data project issued a report [2] on the feasibility of using OJA data for official statistics. Their conclusion was: «the quality issues are such that it is not clear if these data could be integrated in a way that would enable them to meet the standards expected of official statistics.»

Let us take a look at some of the basic challenges of OJA data.

  1. Incomplete and biased: Not all job vacancies are advertised online. A significant proportion of positions are filled without being advertised at all (some say around 20%, others claim up to 85% of vacancies). Of those that are advertised, not all are posted online. CEDEFOP reported that in 2017 the share of vacancies published online in EU countries varied substantially, ranging from almost 100% in Estonia, Finland and Sweden down to under 50% in Denmark, Greece, and Romania. [3] In addition, some types of jobs are more likely to be advertised online than others. And large companies, or those with a duty to publish vacancies, are typically statistically overrepresented, while small businesses, which often prefer other channels such as print media, word of mouth, or signs in shop windows, are underrepresented. Another relevant point is that certain markets are so tight that advertising vacancies is simply not worthwhile, and specialized headhunters are used instead. In summary, this means that OJA data not only fail to capture many job vacancies, but are also not representative of the overall job market. [4]
  2. Duplicates: In most countries, there is no single source of OJA data. Each country has numerous online job portals, some of which publish only original ads, others that republish ads from other sources, hybrid versions, specialized sites for certain sectors or career levels, etc. So, to ensure adequate coverage, OJA data generally need to be obtained from multiple sources. This inevitably leads to many duplicates, which must be dealt with effectively in order to reliably measure labor market trends in the real world. For instance, in a 2016 project the UK national statistics institute (NSI) reported duplicate percentages of 8-22% depending on the portal, and an overall duplication rate of 10%. [5] In the ESSnet Big Data project, the Swedish NSI identified 4-38% duplicates per portal and 10% in the merged data set [6].
  3. Inconsistent level of detail: Certain job postings provide much more explicit information on required skills than others, for instance depending on the sector (e.g., technical/IT) or country (e.g., due to legislation or cultural habits). Moreover, implicit information is recorded only to a limited extent and is statistically underrepresented, despite its high relevance. One reason for this is that US data providers often fail to recognize how uniquely detailed US OJA are, and base their methods on the assumption that this level of detail is the norm everywhere. It is not. For instance, a job description like the one below, which is fairly typical in the US, will often be condensed to «carry out all painting work in the areas of maintenance, conversions and renovations; compliance with safety and quality regulations; minimum three years of experience or apprenticeship» in European countries. Moreover, in job ads like this, many of the required skills must be derived from the listed tasks or responsibilities. This shows just how important it is to extract implicit information.

[Image: example of a typical, detailed US job posting]

So, the question is, can these issues be dealt with in a way that can nonetheless generate meaningful data?

The answer: sort of. Limitations on representativeness can be addressed using various approaches. There is no one-size-fits-all solution, but depending on the available data and the specific labor market, statistically weighting the data according to the industry structure derived from labor force surveys could be promising; as could comparing findings from several data sources to perform robustness checks, or simply focusing on those segments of the market with less problematic coverage bias. [7]
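
To make the first of these approaches concrete, here is a minimal sketch of industry-based reweighting, assuming we have OJA counts per industry and employment shares from a labor force survey. All numbers, industries and variable names are illustrative, not real data:

```python
# Minimal sketch: post-stratification weighting of OJA counts by industry.
# The shares and counts below are made-up illustrations, not real data.

# Share of total employment per industry, e.g., from a labor force survey
lfs_share = {"construction": 0.12, "healthcare": 0.18, "IT": 0.05, "retail": 0.15}

# Raw counts of collected online job ads per industry
oja_count = {"construction": 800, "healthcare": 2000, "IT": 9500, "retail": 1700}

total_ads = sum(oja_count.values())

# Weight each industry so that its share of the weighted sample
# matches its share of actual employment
weights = {
    industry: lfs_share[industry] / (count / total_ads)
    for industry, count in oja_count.items()
}

for industry, w in sorted(weights.items(), key=lambda kv: kv[1]):
    print(f"{industry:<12} weight {w:.2f}")
# IT ads are heavily down-weighted (w << 1), construction ads up-weighted
# (w > 1), mirroring the coverage bias discussed above.
```

In practice, of course, the industry of each ad must itself be inferred from noisy text first, which is anything but trivial.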

Deduplication issues can be solved technically to a certain extent, and there is a lot of ongoing research in this area. Essentially, most methods entail matching common fields, comparing text content and then calculating a similarity metric to determine the likelihood that two job postings are duplicates. Some job search aggregators also attempt to remove duplicates themselves – with varying success. Identifying duplicates is fairly straightforward when OJAs contain backlinks to an original ad, as these links will be identical. Job ads that have been posted independently on multiple job boards, on the other hand, pose more of a challenge. Ideally, multiple robust quality assurance checks should therefore be put in place, such as manual validation on smaller data sets.
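
As a rough illustration of this generic approach, the sketch below flags two ads as likely duplicates if cheap field checks pass and a text-similarity score exceeds a threshold. The field choices, the shingle-based Jaccard metric and the threshold are assumptions for the example, not a validated pipeline:

```python
# Hedged sketch of the generic dedup recipe described above:
# match common fields, then compare text content via a similarity metric.
import re

def shingles(text: str, n: int = 3) -> set:
    """Lowercase word n-grams of the ad text."""
    words = re.findall(r"\w+", text.lower())
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def likely_duplicate(ad1: dict, ad2: dict, threshold: float = 0.8) -> bool:
    # Cheap field checks first: same employer and same normalized title
    if ad1["employer"].lower() != ad2["employer"].lower():
        return False
    if ad1["title"].lower() != ad2["title"].lower():
        return False
    # Then text similarity on the full ad text
    return jaccard(shingles(ad1["text"]), shingles(ad2["text"])) >= threshold
```

Note that pairwise comparison does not scale to millions of ads; in practice, candidate pairs are usually found first with techniques such as MinHash or other forms of locality-sensitive hashing.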

Seriously underestimated: the challenge of skills extraction

The third challenge, the level of detail, seems to be the most underestimated. OJA from the US are typically much more detailed than elsewhere. A lot of information is set out explicitly that is only implicitly available in OJA data from the UK and other countries (e.g., covered by training requirements or work experience) – or not given at all. But even within the US, this can vary greatly.


Clearly, even if we can resolve the issues concerning representativeness and duplicates, simply recording the explicit data will still result in highly unreliable nowcasts or forecasts. Instead, both the explicit and implicit data need to be extracted – together with their context. To reduce the distortions in the collected data, we need to map the extracted information accurately and semantically. This can be done with an extensive knowledge representation that includes not only skills or jobs but also education, work experience, certifications, and more, as well as required levels and the complex relations between the various entities. In this way, we can capture more of the implicit skills hidden in stipulations about education, qualifications, and experience. In addition, the higher granularity of OJA data is only truly useful if the extracted skills are not clustered or generalized too much in subsequent processing, e.g., into terms like “project management”, “digital skills” or “healthcare” (see our previous post). This typically happens when overly simplified classifications or taxonomies are used instead of comprehensive, highly detailed ontologies.
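
To make this concrete, here is a toy sketch of how an ontology can surface implicit skills behind a stated qualification, picking up the condensed painter ad from above. All concepts and relation names are invented for illustration; a production ontology is vastly larger and richer:

```python
# Toy knowledge graph as (subject, relation, object) triples.
ONTOLOGY = [
    ("painter apprenticeship", "teaches", "surface preparation"),
    ("painter apprenticeship", "teaches", "applying coatings"),
    ("painter apprenticeship", "teaches", "workplace safety regulations"),
    ("registered nurse", "requires_certification", "nursing license"),
]

def implied_skills(qualification: str) -> set:
    """Skills implied by a stated qualification via 'teaches' relations."""
    return {o for s, r, o in ONTOLOGY if s == qualification and r == "teaches"}

# A terse, European-style ad states only the qualification...
ad = {"qualifications": ["painter apprenticeship"], "skills": []}

# ...but the ontology lets us recover the implicit skills behind it.
for q in ad["qualifications"]:
    ad["skills"].extend(sorted(implied_skills(q)))

print(ad["skills"])
# ['applying coatings', 'surface preparation', 'workplace safety regulations']
```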

And then of course, there is the question of how to analyze the data. We will delve deeper into this in the next post, but for now, this much can be said: even if we are able to set up the perfect system for extracting all relevant data from OJAs (and candidate profiles, for that matter), we are still faced with the challenge of interpreting the results – or even just asking the right questions. When it comes to labor market analyses, nowcasting and forecasting, e.g., of skills demand, combining OJA data with external data, such as from NSI surveys, promises more robust results, as the OJA data can be cross-checked and thus better calibrated, weighted and stratified. However, relevant and timely external data is extremely rare. And we may well be facing yet another issue: it is much easier and cheaper to up- or reskill jobseekers with, say, an online SEO course than with vocational or technical training in MIG/MAG welding. So maybe, just maybe, some of us are not that interested in the true skills demand…

 

[1] According to the 2020 ManpowerGroup survey, IT positions are high on the list of hardest-to-fill positions in the US, but not everywhere else. In some countries, including developed ones such as the UK and Switzerland, IT professionals do not appear in the top 10 list at all.
[2] https://ec.europa.eu/eurostat/cros/sites/crosportal/files/SGA2_WP1_Deliverable_2_2_main_report_with_annexes_final.pdf
[3] The feasibility of using big data in anticipating and matching skills needs, Section 1.1, ILO, 2020, https://www.ilo.org/wcmsp5/groups/public/---ed_emp/---emp_ent/documents/publication/wcms_759330.pdf
[4] The ESSnet Big Data project also investigated coverage, for the detailed results see Annexes C and G in the 2018 report.
[5] https://ec.europa.eu/eurostat/cros/content/WP1_Sprint_2016_07_28-29_Virtual_Notes_en
[6] https://ec.europa.eu/eurostat/cros/sites/crosportal/files/WP1_Deliverable_1.3_Final_technical_report.pdf
[7] See for example Kureková et al.: Using online vacancies and web surveys to analyse the labour market: a methodological inquiry, IZA Journal of Labor Economics, 2015, https://izajole.springeropen.com/track/pdf/10.1186/s40172-015-0034-4.pdf

Would you buy a wheel if someone told you it was a bicycle?

After recently stumbling upon this Forbes post from 2019, and with skills ontologies entering the Gartner HCM Tech hype cycle, we decided it’s high time to discuss the difference between taxonomies and ontologies again. Although we have been developing and explaining our ontology for over 10 years, many HR and labor market professionals still let themselves be sold on the idea that a taxonomy is good enough for jobs and skills matching. Now, finally, after trying out one disappointing solution after another, it is slowly dawning on them that they have been duped. And as ontologies start to trend, many providers are beginning to use this “more fashionable” term. But do not be fooled: their product hasn’t changed. It is still just a taxonomy. Speaking from experience, it takes years to build a true ontology. Why care? Because the difference in performance is massive.

As a reminder, a taxonomy is a hierarchical structure used to classify information, i.e., the only possible relation between concepts in a taxonomy is of the type “is a”. Think Yellow Pages or animal taxonomies. An ontology is a framework that describes information by establishing the properties of and relationships between concepts. Now, if we want a machine to perform a task like job matching for us, we need to share our contextual knowledge with it, meaning we need to find a way to represent our knowledge in a machine-readable way. Given a specific domain, say, jobs and skills, which do you think could represent human knowledge better: a taxonomy or an ontology?

When we think about things or concepts, we automatically associate them with other things or concepts. Based on our knowledge, we make connections and set up a context. We think of a bicycle and know that it is made up of components (wheels, handlebars, seat, etc.), and is a vehicle. But we also know that some people can ride a bicycle, that this skill has to be learnt, that bicycles can go on roads or across fields and that cycling is good exercise. It’s more eco-friendly than a car, doesn’t need fuel, and so on and so forth. We can reason that in terms of use, a bicycle is similar to a tricycle for small children, but not for adults. We may know that in some countries, we are required to wear a helmet when cycling. Our knowledge about bicycles is not hierarchical: the concepts we can connect it with do not just fit into a cascade of “is a” relations, but instead satisfy relations like “has a”, “can”, “is similar to”, “requires”, etc. By definition, this knowledge cannot be represented by a taxonomy. But it can be represented in an ontology.
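
Here is a minimal sketch of this difference in machine-readable form, using the bicycle example; all concept and relation names are illustrative:

```python
# A taxonomy can only say "is a":
taxonomy = {
    "bicycle": "vehicle",
    "tricycle": "vehicle",
}

# An ontology can hold typed relations of many kinds:
ontology = [
    ("bicycle", "is_a", "vehicle"),
    ("bicycle", "has_part", "wheel"),
    ("bicycle", "has_part", "handlebars"),
    ("cycling", "uses", "bicycle"),
    ("cycling", "requires", "learned skill"),
    ("bicycle", "is_similar_to", "tricycle"),  # context: children's use only
    ("cycling", "may_require", "helmet"),      # context: some countries
]

# The taxonomy can only answer "what is X?". The ontology supports
# queries across arbitrary relations, e.g., what a bicycle is made of:
parts = [o for s, r, o in ontology if s == "bicycle" and r == "has_part"]
print(parts)  # ['wheel', 'handlebars']
```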

The same is true when we think about jobs and skills. Even with little knowledge of medical care, we know that ICU nurses have skills in common with psychiatric nurses, but that there are also must-have skills for ICU nurses that psychiatric nurses do not need, and vice versa. So, we can draw on common or contextual knowledge to determine that these two occupations are similar, but probably not similar enough for a good job-candidate match. We can infer other important information as well. For example, that a registered nurse requires official certification. Or that a head nurse will need additional skills like leadership.
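
In machine-readable terms, this kind of reasoning could look something like the following sketch, in which occupations carry skill profiles and must-have skills. The skill sets are toy data for illustration only:

```python
# Illustrative skill profiles per occupation (toy data).
profiles = {
    "icu nurse": {"patient monitoring", "ventilator management",
                  "medication administration", "documentation"},
    "psychiatric nurse": {"de-escalation", "therapeutic communication",
                          "medication administration", "documentation"},
}
# Skills a candidate must have for the occupation, no matter how
# similar their overall profile is.
must_have = {"icu nurse": {"ventilator management"}}

def similarity(a: str, b: str) -> float:
    """Overlap of two occupations' skill profiles (Jaccard)."""
    sa, sb = profiles[a], profiles[b]
    return len(sa & sb) / len(sa | sb)

def good_match(candidate_skills: set, occupation: str) -> bool:
    # Similar is not enough: the must-have skills need to be covered.
    return must_have.get(occupation, set()) <= candidate_skills

print(round(similarity("icu nurse", "psychiatric nurse"), 2))  # 0.33: related...
print(good_match(profiles["psychiatric nurse"], "icu nurse"))  # False: ...but no match
```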

We also know intuitively that the skills and tasks of a software test engineer have nothing to do with those of a nurse, even if the software company’s description states that it provides solutions that help nurses (amongst others). But without an ontology, such matches will still show up in the results.


To truly capture the complexity of this domain of jobs and skills, together with its many interdependencies, such as similarities and differences between various specialties, inferred skills, knowledge of various computer applications, or required certifications or training, there is simply no way around an extensive ontology that describes all these various aspects of job and skills-related data. So next time someone tries to sell you an ontology, make sure you’re not just getting a souped-up taxonomy.

If you’re interested in a more in-depth explanation of the difference between taxonomies and ontologies, read the Forbes post for the general concepts, or this JANZZ post for a discussion in the context of matching. Or you can experience the difference by comparing your current matching solution against ours. Contact us at info@janzz.technology.

Sorry folks, but “Microsoft Office” is NOT a skill.

One of the most prominent buzzwords around employment, employability and workforce management is skills. There is a lot of noise surrounding this concept and its fellow buzzwords like reskilling, upskilling, skills matching, skills alignment, skill gaps, skills anticipation, skills prediction, and so on. One can find myriad publications and posts explaining why skills are so important, how to analyze skills supply and demand, how to develop active labor market policies based on skills, how to manage and develop employee skills – as well as the many sites listing the “most in-demand skills” of the year. We certainly agree that skills are increasingly important, or as stated in one of the Gartner Hype Cycles 2020,

Skills are […] the new currency for talent. They are a foundational element for managing the workforce within any industry. Improved and automated skills detection and assessment allows for significantly greater organizational agility. In times of uncertainty, or when competition is fierce, organizations with better skills data can adapt more quickly […]. This improves productivity and avoids costs through improved planning cycles. [1]  

This applies not only to HCM in businesses, but also to labor market management by government institutions. Considering how globally important these concepts are, there should be a clear, or at least common, idea of what this valuable currency is. However, in much of the skills-related content posted online, there is a pervasive pattern of conceptual ambiguity, lack of specificity and lack of concision. So, in the last post, where we discussed a few examples of the noise surrounding jobs and skills, we called for a more fact-based discussion. In this post, we want to lay the groundwork for such a discussion.

Statistics 101

As a reminder from the last post: whenever you try to generalize, you run the risk of losing relevance. Despite all the globalism going on, the world is divided into regions. And each region has its own distinct economic landscape and its individual skills demand. Some regions are more focused on certain industries than others, and even when comparing regions with similar industries, skills demand and gaps can vary significantly, as has been shown in various studies and reports (for example here and here). So there will never be a meaningful list of top skills on a global level. Problem solving skills, blockchain, app development and other “top skills” propagated on various websites are simply not relevant for all activities across the globe. On top of this, it is extremely challenging to generate meaningful, representative data from online profiles and job postings. In general, the data collected online is biased: certain groups are underrepresented, others massively overrepresented. For instance, despite all the noise about apparently all-important, accelerating “digital skills”, most representative surveys highlight that EU and US labor markets require a generally low to moderate level of digital skills: in about 55 to 60 percent of jobs, these amount to simple word processing, data entry and emailing; 10–15 percent of jobs need no ICT skills at all; and only about 10–15 percent require an advanced ICT level. [2] This alone shows that all these publications about the most important skills of the future etc. are at best very misleading.

To perform sound analyses and anticipate the skills that will be required in the future, to predict how these requirements will change (which skills will gain in importance and which will become obsolete), or to perform target-oriented skills matching, we first need to be able to correctly recognize, understand, assign, and classify today’s skills. We will discuss the challenges (and strengths!) of skills and job data in more detail in the next post. First, we need to focus on an even more basic but absolutely crucial aspect: we need to clarify what we mean by skills. Or abilities and competencies.

Truth be told, there are so many different definitions floating around, it is quite hard to keep up, and this is one of the key reasons why most approaches and big data evaluations fail miserably. It is therefore all the more important that we agree on a common understanding of this new currency.

What exactly is a skill?

O*NET defines skills as developed capacities that facilitate learning, the more rapid acquisition of knowledge, or the performance of activities that occur across jobs [3], and distinguishes skills from abilities, knowledge, and technology skills and tools, referring only to directly job-related or transferable skills and knowledge. ESCO, on the other hand, defines a skill as the ability to apply knowledge and use know-how to complete tasks and solve problems. Moreover, ESCO knows only the two main categories skills and competences, which – unlike in O*NET – also include attitudes and values. In both classification systems, there is significant overlap between the categories. The job site Indeed, in turn, simply summarizes all these concepts under the term skill:

Skill is a term that encompasses the knowledge, competencies and abilities to perform operational tasks. Skills are developed through life and work experiences and they can also be learned through study. [4]

Clearly, these discrepancies in the definition of a skill will cause discrepancies in data collection and analysis, which in turn will affect the robustness of any extrapolation based on these data. But, for the sake of argument, let us assume there is a universal definition of a skill. In a nutshell, we shall think of a skill as some kind of ability that is useful in a job.

Analyzing generic skills yields generic answers

Just having a written definition of a skill is far from enough. Apart from the fact that it still leaves a lot of room for interpretation, we also face many issues at the level of individual skills. One is granularity, which varies enormously between the various collections. For instance, the ESCO taxonomy currently includes around 13,500 skills concepts, O*NET under 9,000 (in fact, only 121 of these are not skills of the type “can use a certain tool/machine/software/technology”) and our ontology JANZZon! over 1,000,000. Of course, the desired level of detail depends on the context. But for many modern applications of skills analysis, such as skill-based job matching, career guidance, etc., a certain level of detail is crucial to achieve meaningful results. Take the list of “top 10 skills for 2025” published by the World Economic Forum [5]:

  1. Analytical thinking and innovation
  2. Active learning and learning strategies
  3. Complex problem-solving
  4. Critical thinking and analysis
  5. Creativity, originality and initiative
  6. Leadership and social influence
  7. Technology use, monitoring and control
  8. Technology design and programming
  9. Resilience, stress tolerance and flexibility
  10. Reasoning, problem-solving and ideation

Depending on the context, e.g., industry or activity, these skills are understood very differently. They are thus too generic and unspecific to be of any use in matching or for meaningful statistics. In fact, for many occupations they are barely relevant at all. After all, how often do you see these skills in job postings? Other generic skills we often see in predictive top 10 lists and recommendations have similar issues, for instance:

Digital skills: What exactly are these skills? Does this include operating digital devices such as smartphones or computers or dealing with the Internet? Do we expect someone with these skills to be able to post on social media, or really know how to handle social media accounts professionally? Is there any sense in summarizing skills such as knowledge of complex building information modelling applications in real estate drafting and planning under digital skills?

Project management skills: This too is almost completely useless when taken out of context like this. A large proportion of workers have project management skills on some level, but it is very difficult to compare or categorize this knowledge across roles or industries. For example, project management knowledge differs substantially between a foreperson on a large tunnel construction site, a project manager for a small-scale IT application, a campaign manager in the public sector, and a process engineer or event manager. Clearly, if the event industry comes to a halt, a project manager cannot just switch to the construction industry. So, it is nonsensical to compress all these variations into a single “matchable” skill.

 

[Image: JANZZsme! semantic precision for skills/competence matching]

Think multidimensional

Being precise about skills does not just entail clearly identifying the skill and its context; the level of capability is equally relevant. The level of English required of a laborer on a construction site is certainly not the same as that required of a translator. However, constructing a robust definition of levels also poses challenges: What does “good” or “very good” knowledge mean, and what distinguishes an “expert” in a certain skill? Is it theoretically acquired knowledge, for example, or knowledge already applied in a real professional environment? In contrast to other areas of big data, scales and validations – if they exist at all – are not necessarily binding. Thus, many providers of this type of data simply resort to disregarding levels entirely. In doing so, we lose a huge amount of information that would be highly relevant, not only for job matching and career guidance, but also for analyzing skills demand, say, as a basis for workforce or labor market management. Do we have a shortage of highly skilled experts or of employees with a basic working knowledge? Clearly, appropriate measures will differ strongly depending on the answer.
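
One way to preserve this information is to model a skill as a multidimensional object rather than a bare label, as in the following minimal sketch. The level scale and the choice of fields are our own illustrative assumptions:

```python
from dataclasses import dataclass
from enum import IntEnum

class Level(IntEnum):
    BASIC = 1
    GOOD = 2
    VERY_GOOD = 3
    EXPERT = 4

@dataclass(frozen=True)
class Skill:
    concept: str          # the skill itself
    context: str          # where it is required or was applied
    level: Level          # required or demonstrated capability
    applied_in_job: bool  # theory only, or used in a real work setting?

# The same concept with very different requirements:
laborer_english = Skill("English", "construction site", Level.BASIC, True)
translator_english = Skill("English", "translation agency", Level.EXPERT, True)

# Dropping the level collapses both into "the same" skill and loses exactly
# the information a skills demand analysis would need.
print(laborer_english.level < translator_english.level)  # True
```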

Say what you mean

Granularity in terms of identifying the context and level of a skill is certainly important. The main issue, however, is clarity. One of the recurring entries in the top 10 skills required in job postings almost anywhere on the planet is Microsoft Office, which at first glance may seem fairly specific. But what does this really mean? Technically, MS Office is a family of software, available in various packages comprising a varying selection of applications, which evolve over time. Currently, it consists of 9 applications: Word, Excel [6], PowerPoint, OneNote, Outlook, Publisher, Access, InfoPath and Skype for Business. So, if someone “has MS Office skills”, does this mean they can use all those apps? Hardly. And what does it mean to be able to use an app? According to ESCO, someone who can “use Microsoft Office” can

work with the standard programs contained in Microsoft Office at a capable level. Create a document and do basic formatting, insert page breaks, create headers or footers, and insert graphics, create automatically generated tables of contents and merge form letters from a database of addresses (usually in Excel). Create auto-calculating spreadsheets, create images, and sort and filter data tables. [7]

Many people may think they can use MS Office – until they read that definition. It seems that the less one knows about the full potential of an application, the more likely one is to identify as a capable user. This becomes even more apparent when we consider PowerPoint, which, surprisingly, is not included in ESCO’s “use Microsoft Office” skill. Instead, there is a separate skill called “use presentation software”. There are thousands of applications for creating presentations, many of which work quite differently from PowerPoint and thus require different knowledge or additional skills: Prezi, Perspective, Powtoon, Zoho Show, Apple Keynote, Slidebean, Beautiful.ai, just to name a few. And yet, the skill of “using presentation software” is only vaguely described in ESCO as:

“Use software tools to create digital presentations which combine various elements, such as graphs, images, text and other multimedia.” [8]

Putting aside the fact that there are many instances of presentation software, if this is a skill in the sense of an ability that is useful in a job, then one should expect “creating presentations” to imply that the person can create usable or even good presentations. Among many other skills, this includes the ability to distill information into key points, as well as a sense of aesthetics and storytelling skills. Yet, with enough self-confidence, a person lacking these implicit skills may still think that they are capable of creating great presentations.

And apart from this, what an employer means when they ask for these skills varies substantially. Someone looking for an office assistant in an old-school micro business may have a very different idea of MS Office skills than a large corporation looking for a marketing specialist. When it comes down to it, trying to interpret the expression “Microsoft Office” as a skill results in so much guesswork that the informative value of “Microsoft Office skills” becomes comparable to that of “hammering skills”. Everyone can use a hammer, but does that mean anyone can work in any profession that involves hammers? Of course not.

 

[Image: JANZZsme! occupations that involve hammers]

My Math teacher used to say: If you mean something else, say something else. That could be a good place to start.

(Self-)assessment vs. reality

As mentioned above, many people’s self-image deviates from reality, resulting in under- or overestimation of their skills (hammering, creating presentations or any other skill). In addition, completing a course or education that should teach a set of skills does not automatically mean that we have acquired that skill set, i.e. that we can apply it productively in a job. Also, many unused skills have an expiration date. And yet, once we get used to listing a certain skill on our resume, we rarely take it off again, no matter how long we haven’t used it. Just asking ourselves the question can I apply this productively in my job? could go a long way in moving our projected image closer to reality. If we wanted to. Just as agreeing on a definition of a skill, standardizing skill designations and levels, or simply being more specific and accurate could give us a clearer common understanding of this valuable currency. If we wanted to. And then we can turn to the challenges of generating smart data – which we will investigate in the next post.

 

[1] Poitevin, H., “Hype Cycle for Human Capital Management Technology, 2020”, Gartner. 2020.
[2] Thanks to Konstantinos Pouliakas at Cedefop for pointing this out.
[3] https://www.onetcenter.org/content.html
[4] https://www.indeed.com/career-advice/career-development/what-are-skills
[5] http://www3.weforum.org/docs/WEF_Future_of_Jobs_2020.pdf
[6] Read the previous post for our view on Excel.
[7] http://data.europa.eu/esco/skill/f683ae1d-cb7c-4aa1-b9fe-205e1bd23535
[8] http://data.europa.eu/esco/skill/1973c966-f236-40c9-b2d4-5d71a89019be

Cutting through the BS

Adaptability and flexibility, digital skills, creativity and innovation, emotional intelligence… Since the pandemic went global, everyone has been talking about the top post-COVID skills employees will need. Going through numerous posts, from Forbes and Randstad to EURES, it seems that the key point they have in common is that they are opaque if not completely unfounded. Despite all the noise they generate, none of these posts give any insight into what data their claims are based on – or whether they have any data at all. Here at JANZZ, we have been analyzing over 500,000 job postings from the last few months for a project in Australia and New Zealand. In this data, just as in our data from similar projects in completely different markets and regions of the world, there is no indication of increased demand for creativity and innovation or for digital skills – which, by the way, should not include the ability to participate in a video call, just as using Excel does not turn an economist into a STEM professional. The skills most in demand across all professions, from waiters to senior policy officers, were in fact ambition, self-motivation, and the ability to work under pressure, independently and in fast-changing environments.

But it is not just about skills analysis. When it comes to… well, anything related to jobs and skills, there is an unbelievable amount of BS out there. Here are just a few examples.

Future jobs. According to the WEF’s Future of Jobs Survey 2020, among the top 20 job roles in increasing and decreasing demand across industries, Mechanics and Machinery Repairers are listed as both increasing (#18) and decreasing (#9). The same is true for Business services and administration managers (up #12, down #6). This apparent contradiction is simply stated with no explanation in the text. And yet, this information is just reproduced blindly in numerous blogs and posts. [1]

LinkedIn skills reports. The same is true for all the buzz generated by LinkedIn reports on in-demand skills. Countless articles and posts simply reproduce these lists, completely disregarding the fact that they are based on the data captured in LinkedIn profiles [2], which is strongly biased. For instance, blue-collar professions and industries are massively underrepresented in this data. By contrast, according to the ManpowerGroup Talent Shortage surveys, skilled trades have been the hardest positions to fill for 7 consecutive years, globally and in almost all countries; this year’s list also features drivers (especially truck/heavy goods, delivery/courier and construction drivers), manufacturing workers (production & machine operators), construction laborers and healthcare professionals. Shouldn’t the skills associated with these professions be in higher demand than blockchain or cloud computing?

Skills demand. A Canadian institute created a report based on data and skills taxonomies from a large labor market analytics provider. They introduce the report with the statement “Telling Canadians they need digital skills is not enough; we must be specific.” The report then goes on to identify the top 10 digital skills by number of job postings. Among the top 10 skills are Microsoft Excel and Spreadsheets. There is nothing specific about these “skills”. First off, the term “Microsoft Excel” says absolutely nothing about the skills that are actually needed. Is the candidate expected to just be able to open the application and enter data? Or should they be capable of creating formulas? How complex are the formulas supposed to be? What about charts? Also, what exactly is the difference between the two skills Microsoft Excel and Spreadsheets?

Upskilling. Within a business, upskilling can be very useful. An individual company is fairly fixed in its position and should have a clear strategy, which will also largely determine its skill needs and thus its upskilling priorities. However, developing sustainable upskilling strategies as part of an active labor market policy (ALMP) is a very different challenge. Contrary to what many posts and tech providers claim, simply upskilling all the unemployed will not lower unemployment numbers sustainably and does not necessarily meet market demands. For instance, in a country where a lot of low-skilled work is on offer, upskilling a jobseeker who is already overqualified will not be of any use. The same is true if the available training is of poor quality or not aligned with market needs. This is the reality in many countries.

Job matching. A Dutch tech provider for employment services claims that its software solutions can help PES “reduce unemployment figures”. As if that could happen just by using the right job matching tool. For instance, this article (in Italian) in Italy’s renowned newspaper Corriere della Sera illustrates just a few of the issues that need to be resolved before, or at least while, implementing a software solution for the PES: there are currently 730K job vacancies in Italy, compared with 2.5M active jobseekers plus 13.5M inactive and discouraged. The skills of jobseekers in Italy are not aligned with labor market demand. Training, particularly for the unemployed, is inadequate, of poor quality and disconnected from market needs. PES have insufficient and inadequately trained staff. Italy invests extremely little in ALMPs. There are plans to increase this budget, but no strategy on how to spend the additional funds. A change of direction would require a vision that does not expire after the next elections, which is an extremely high ask given Italy’s political landscape and history. And yet, the Dutch tech provider still argues that their job portal solutions will make the crucial difference.

Most of what is out there is basically gut feelings and creative marketing. So how about cutting through the BS and finding our way back to an honest, fact-based discussion? Well, to do that, we need to find out what the facts are. But for that, we first need to agree on the basics (for instance, define what constitutes a skill) and then generate reliable data based on these definitions. More about this in the next post…

 

[1] If you are interested in learning more about the issues surrounding predictive analyses based on occupational data (e.g. skills anticipation), read our CEO’s talk featured in a recent ILO report.

[2] According to LinkedIn: The most in-demand skills were determined by looking at skills that are in high demand relative to their supply. Demand is measured by identifying the skills listed on the LinkedIn profiles of people who are getting hired at the highest rates.
Source: https://business.linkedin.com/talent-solutions/blog/trends-and-research/2020/most-in-demand-hard-and-soft-skills