Building the AI-ready workforce: A collaborative effort between government, tech companies and education providers

As part of our series on the future of work, upskilling and reskilling, and digital transformation, we are analyzing government policies and practices to learn how countries are devising strategies to prepare their workforces for these challenges. In the previous article, we examined how Singapore is helping mid-career PMETs switch to the tech sector. Our second stop is Saudi Arabia.

The Kingdom of Saudi Arabia has shown a strong commitment to the development and implementation of AI as it seeks to diversify the economy, reduce its dependence on oil and shift away from public sector-driven welfare within a strategic framework called Saudi Vision 2030. According to estimates by PwC, AI will contribute US$135.2 bn to the Saudi economy by 2030, around 12.4% of its total GDP.

Saudi Vision 2030 comprises a host of ambitious targets for the labor market, including lowering the unemployment rate from 11.6% to 7% [1], increasing female participation from 22% to 30% [2], and bolstering the private sector with a special emphasis on small and medium-sized enterprises (SMEs) to increase their contribution to GDP from 20% to 35% by 2030 [3].

Under Saudi Vision 2030, Saudi Arabia is spending heavily on the information and communication technology (ICT) sector. The International Data Corporation (IDC) predicts that spending on IT in Saudi Arabia will exceed $11 billion in 2021. In addition to Saudi Vision 2030, the Kingdom has launched its National Strategy for Data and Artificial Intelligence (NSDAI) and last year signed a series of partnership agreements with international tech companies to advance AI in Saudi Arabia. One of the key pillars of the NSDAI is to build an AI-ready workforce in the Kingdom.

The growing industry is putting high pressure on a domestic labor market that lacks a sufficiently large and experienced national ICT workforce. Saudi Arabia is highly reliant on expat workers, and many of its professionals, including most in ICT, are sourced internationally. This has hampered many private companies, especially SMEs, due to high recruiting costs. Furthermore, as skilled talent is increasingly in short supply globally, Saudi Arabia needs to focus on its young domestic population.

With 60% of the population under 30 years of age in 2020, Saudi Arabia has one of the world’s youngest populations. However, youth employment remains low in the Kingdom, and 16% of Saudi youth between 20 and 24 are classified as NEET (not in education, employment or training) [4]. Several reports indicate that there is a notable gap between the future-proof skills requested in the labor market and those possessed by young people, and that the Kingdom lacks an effective skill-matching mechanism. Current skills gaps include STEM skills (science, technology, engineering and math), soft skills, and industry-specific international skills due to a lack of vocational training [5]. Therefore, part of the NSDAI agenda is to provide professional AI training to Saudi university students, researchers and developers, as well as up- and reskilling opportunities that enable Saudis to utilize AI and data in both the public and private sectors. Programs such as the National AI Capability Development Program and the National AI Talent Cultivation and Onboarding Program have already been set up by the government in collaboration with tech partners and education providers to attract, develop and retain AI talent in Saudi Arabia, with a target of creating 20,000 AI and data specialists and experts by 2030.

As countries embark on the journey of digital transformation, their governments and public employment services (PES) seek suitable solutions to support them in the increasingly important role they play in job matching, enhancing employability, addressing skill gaps and aligning education offerings with market needs. To learn how JANZZ assists PES in tackling these challenges, please visit our product site for PES or contact us at


[1] Readiness for the Future of Work, A.T. Kearney and MISK.

[2] KSA Vision 2030: Strategic Objectives and Vision Realization Programs, 61.

[3] KSA National Transformation Program Delivery Plan 2018–2020.

[4] International Labour Organization, ILOSTAT database.

[5] Building the Talent Pipeline in Saudi Arabia, City & Guilds Group.

The importance of localizing ontologies, illustrated on the education systems in Peru and Colombia

In one of our recent posts, we explained the difference between an ontology and a taxonomy. Although choosing an ontology over a taxonomy is an important step towards smart and accurate matching, it is not the only aspect to consider. Localization is another key feature, even for monolingual ontologies and all the more for multilingual ones. For high performance and satisfactory matching results, it is simply not enough for the ontology to cover your language of choice, especially if that language is spoken in several countries. The system needs to truly understand context, including regional or national variations in occupational, legal, educational and linguistic matters. For instance, certain occupations may require official certifications or authorizations in one country but not in another. And most often these certifications will have different names depending on the country they are issued in. A certain job title may be widely used in one country and completely uncommon in another, e.g., joiner in the UK, Australia and New Zealand – a type of carpenter. The term is practically unused in the US (even though the largest US trade union for carpenters is called the United Brotherhood of Carpenters and Joiners of America).

Of course, this issue is not just limited to job titles and authorizations. It is also essential to understand implicit skills, i.e., those not mentioned in a job description or candidate profile but that can be derived from other information such as education and training. And required education must be factored in as well. Suppose, for instance, you are located in a Spanish-speaking country and looking to hire someone with a bachelor’s degree, i.e., who has completed undergraduate university studies. In Peru, you may ask for a Bachiller. However, in Colombia, this term will give you matches with candidates who have the equivalent of a high school degree: a Bachiller Académico or a Bachiller Técnico. If you ask for a Licenciado, another common term in Spanish-speaking countries that often corresponds to a bachelor’s degree, your Colombian candidates will have a degree more or less equivalent to a Bachelor of Education in the US.
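The Peru/Colombia example above can be made concrete with a toy lookup table. The mapping below is a minimal, hypothetical sketch built only from the degree names mentioned in the text; it is not an excerpt from any real ontology, and the level labels are illustrative.

```python
# Toy illustration: the same degree term resolves to different
# education levels depending on the country it is used in.
EDUCATION_LEVELS = {
    # (country code, degree term) -> education level (illustrative)
    ("PE", "Bachiller"): "bachelor's degree",
    ("CO", "Bachiller Académico"): "high school diploma",
    ("CO", "Bachiller Técnico"): "high school diploma (technical)",
    ("CO", "Licenciado"): "bachelor's degree in education",
}

def education_level(country: str, term: str) -> str:
    """Resolve a degree term in its national context."""
    return EDUCATION_LEVELS.get((country, term), "unknown")

# The identical term means very different things in Peru and Colombia:
print(education_level("PE", "Bachiller"))           # bachelor's degree
print(education_level("CO", "Bachiller Académico"))  # high school diploma
```

A localized ontology does exactly this kind of disambiguation, at scale and for far more than degree names, which is why matching on the raw term alone produces misleading results.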


Localizing Ontologies


To avoid matching results ranging from unsatisfactory to outright useless, an ontology must be carefully enriched with country-specific information such as linguistic variations, localized job titles, mappings of national classification systems and details of the country’s education system, including names of degrees and diplomas and – ideally – curricula and taught skills. This may require extensive work by subject matter experts familiar with the country in question. But it is a worthwhile investment that will dramatically improve matching results and all associated services such as career counseling, education/job matching platforms, labor market analytics and more, as well as enhance the usability of interactive services and features, for instance with smart suggestions and typeaheads that actually make sense to the users.

You may have already found out the hard way that using standard taxonomies like ESCO does not really work in your country. If you haven’t, spare yourself the trouble and go straight for a well-localized ontology. At JANZZ, this is one of our key services for new country clients, and we have successfully implemented localizations of our ontology JANZZon! for countries across the globe. If you want to enhance your system with the extensive knowledge of the world’s largest multilingual job and skills ontology, or learn more about our highly performant ontology-driven products and technologies, feel free to contact us at

A resume of CV parsing. Great candidate experience vs. ATS optimization – is a trade-off truly inevitable?

In recent years, online job and candidate searches have become increasingly important, and a growing number of CVs and resumes are now available in digital form. Despite recurring announcements of their demise, and even if the delivery format and selection of information have changed, the need for details on a candidate’s professional background persists. Thus, more and more businesses are turning to automation tools to handle the increasing volume of CVs and resumes. However, because these documents are not standardized, the available recruitment automation tools face significant challenges and are known to screen out good candidates. So an essential and far from trivial question is how to process this information to produce smart, meaningful data that recruiting tools can utilize to pinpoint the best candidates for the vacancy at hand.

Even with the rapid advance of AI-based systems, most ATS parsing algorithms are outdated and unintelligent, often causing essential resume information to get distorted or lost. Candidates are thus advised to submit a standardized resume that is neither visually appealing nor shows any personality, to avoid being sorted out by the system. This is in stark contrast to the wealth of online advice on how to stand out from the crowd with an appealing modern resume. It also undermines efforts to improve the candidate experience. But there seems to be a general consensus that a trade-off is inevitable: either hirers obtain quality resume information in the first step of the recruiting process – and run the risk of disgruntling best-in-class candidates – or they improve the candidate experience by not asking too many questions and then simply make do with potentially jumbled parsed resume information, running the risk of great candidates falling through the ATS.

Here at JANZZ, we refuse to ignore these challenges and simply hand the problem off to applicants. Our mission is to help improve both the recruiting and the candidate experience by providing efficient application processes whilst ensuring the candidates’ freedom of individual expression. With our cutting-edge parser, we already have the text processing down, using strategies from deep learning models trained specifically for CV content to semantic technologies that translate the myriad variations in occupational jargon to a common vocabulary. As for visual aspects such as formatting, layout and graphical representations, we are currently very actively engaged in the R&D phase, developing pioneering technology to tackle this.
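To give a rough idea of what translating occupational jargon to a common vocabulary means, here is a deliberately minimal sketch. The variant lists and canonical concept names are invented for illustration; a real semantic parser works with an ontology of millions of terms, not a hand-written dictionary.

```python
# Minimal sketch: map job-title variants to a canonical concept.
# Variant sets and concept names are purely illustrative.
CANONICAL_TITLES = {
    "software developer": {"software developer", "software engineer",
                           "programmer", "swe"},
    "carpenter": {"carpenter", "joiner"},  # 'joiner' is UK/AU/NZ usage
}

def normalize_title(raw: str):
    """Return the canonical concept for a raw job title, or None if unknown."""
    cleaned = raw.strip().lower()
    for canonical, variants in CANONICAL_TITLES.items():
        if cleaned in variants:
            return canonical
    return None

print(normalize_title("Joiner"))                 # carpenter
print(normalize_title("  Software Engineer  "))  # software developer
```

Once titles, skills and degrees are normalized to shared concepts, downstream tools can compare candidates and vacancies on meaning rather than on surface strings.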

To find out more about the challenges and latest advances in CV and resume parsing – and to take an intriguing walk through the history of the CV – read our white paper:

A resume of CV parsing. Great candidate experience vs. ATS optimization – is a trade-off truly inevitable?

And if you want to use our state-of-the-art semantic parsing tool and benefit from the top quality in multiple languages, all services are also available via API and can be easily integrated into existing ATS or platforms. For a demo or quote, feel free to contact us at

Building the AI-ready workforce: A switch for mid-career PMETs to the tech sector

According to the World Economic Forum’s Future of Jobs Report 2020, 85 million jobs will be displaced while 97 million new jobs will be created across 26 countries by 2025 through an AI-driven shift in the division of labor. AI technology will have a profound effect on the nature of work in many jobs, and workers will require constant upskilling or reskilling to prepare for changing and new jobs. Following this trend, we are posting a series of articles on government policies and practices regarding these challenges to learn how countries are developing strategies to build their AI-ready workforce. Our first stop is Singapore.

Like many other countries, Singapore has a tech talent shortage. According to Minister Vivian Balakrishnan, who is in charge of the national Smart Nation Initiative, Singapore will require an additional 60,000 tech professionals in the next three years. Its education system produces 2,800 ICT graduates per year, which leaves a gap of 51,600 in three years’ time. As one of the most important business hubs in the Asia-Pacific region, Singapore has attracted tech giants to set up headquarters in the city-state. However, many international businesses are concerned that the tight local tech talent pool is slowing down their speed of development. [1]

In its National Artificial Intelligence Strategy, Singapore plans to establish more local talent pipelines in order to raise both the quantity and quality of its AI workforce in the long run. Among the various programs aiming to meet the demand for ICT talent is TeSA Mid-Career Advance, a program under the TechSkills Accelerator (TeSA) initiative for mid-level ICT and non-ICT professionals aged 40+ to make a switch to tech-related careers through company-led reskilling and upskilling.

This newly designed program is supported by government, industry and the National Trades Union Congress. They believe mature workers should also benefit from opportunities created in the fast-growing tech sectors and that the digital momentum must reach all segments of the economy and society. Currently, ten companies have partnered with the program, offering around 500 tech jobs. Eligible mature workers can be hired and trained in a variety of tech jobs by one of these ten partner companies for up to 24 months.

According to the Minister for Communications and Information S Iswaran, TeSA Mid-Career Advance is targeting 2,500 place-and-train opportunities over the next three years. As a start, the government will invest 70 million SGD for the initial job placements under the program. Participating companies will receive government subsidies as a contribution to the additional training costs and salaries. [2]

Besides meeting the tech talent demand, the program is also an attempt to counter the displacement of mature workers, another long-term challenge faced by Singapore. Based on the 2019 Labor Market Report from Singapore’s Ministry of Manpower, 6,790 locals were laid off that year, more than half of them professionals, managers, executives and technicians (PMETs) aged 40 and above. The 6-month re-entry rate of laid-off local workers was 65.8% for workers aged 40-49 and 52.2% for the over-50s. These rates are considerably lower than the 82.5% and 76.3% for workers aged below 30 and 30-39, respectively.

The perceived difficulty of adjusting to the tech industry at later stages of a career is a concern, as most in-demand tech-related roles are becoming more technical in nature and it can be challenging to pick up skills such as programming languages, software proficiency and data analysis when transitioning from unrelated fields. However, the tech talent shortage spans the entire spectrum of roles, so there are also many opportunities in “tech-lite” roles such as technology project managers and digital sales advisors, and in “for-tech” roles that contribute non-tech knowledge to the development of solutions (e.g., an HR professional working on an HR tech solution). In these positions, the domain knowledge and expertise brought in by experienced PMETs will contribute to creating technology applications that meet business needs. [3]

Despite various successful approaches in Singapore, Wong Wai Weng, chairman of tech trade association SGTech, believes that more proactive efforts should be made before displacement occurs and jobs are lost. At JANZZ, we also believe public employment services (PES) must act now to prepare for further shifts and turbulence in the labor market. We provide comprehensive AI-driven solutions and services tailored to the needs of PES across the globe, helping them actively match jobseekers to suitable jobs, strengthen their labor market resilience, and design and implement effective active labor market policies (ALMPs). To learn more about our solutions for PES, please visit our product site for PES or contact us at








If not now, then when? Digitalizing PES in times of COVID – and what it costs.

The current worldwide pandemic has catapulted the labor market into a state of unprecedented turbulence. According to the OECD, the impact on jobs just within the first three months has been 10 times that of the 2008 financial crisis. Entire sectors such as hospitality, civil aviation and culture have been hit hard, resulting in mass job losses and collapsing self-employed incomes. On the other hand, e-commerce and supermarkets, courier and logistics services, manufacturers of food or hygiene products, pharmaceuticals and others have thrived, creating new opportunities by dramatically increasing their workforces. Even if some of these jobs are only temporary, they can be lifesavers for those in need of income.

Amid this turbulence, Public Employment Services (PES) have faced a historic stress test, inundated with numbers of new jobseekers far beyond what their often outdated systems are typically designed for – if such systems are in place at all. With youth struggling due to the closing down of entry-level jobs and apprenticeships, and low-paid workers, women, ethnic minorities as well as self-employed and informal workers among the hardest hit by the crisis, existing vulnerabilities have been exposed and inequalities amplified. Now more than ever, PES must find new ways to best serve their people, in this crisis and in the future. These vulnerable groups need to be reconnected with good jobs as quickly as possible to avoid potentially long-lasting scarring effects. And PES need to be prepared for further shifts and turbulence in the labor market, with innovative digital solutions that help strengthen labor market resilience by ensuring efficiency and scalability as well as providing creative placement solutions and valuable, timely labor market insights for jobseekers and employers alike.

Even if a country’s PES is only just being built up, now is the right time to initiate a digital transformation. In fact, there may never be a better time – especially for countries that are just getting started. A well-designed solution does not require a perfect starting point. It does not need a lot of in-house data or even a well-organized PES. It performs well in markets with only few highly skilled professionals and supports transitions from the informal to the formal economy. In addition, choosing a mobile-first approach geared towards self-service features over a cumbersome expert system plays to the strengths of one of the most affected groups of the pandemic: young people, who are used to taking matters into their own hands, finding information and discovering their options with their devices. The digital transformation has begun and those who want to be part of it must act now.

To ensure effective digitalization in these challenging times, PES should look for solutions with:

Full profile matching based on ontologies for a wider variety of fitting placement solutions

Instead of simply comparing job titles, jobseekers can be matched based on their full profile of skills, education, previous experience and many more relevant criteria. This will particularly help those who need to seek new lines of work due to collapsing economic sectors, where matching on job title alone is ineffective. In addition, using labor market-specialized ontologies enriched with country-specific content helps identify hidden skills based on education and experience. This can substantially improve jobseeker profiles and thus broaden the search for suitable jobs and candidates, whilst significantly increasing the accuracy of matches.
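The difference between title matching and full-profile matching can be sketched in a few lines. The weighting scheme, profile fields and numbers below are entirely hypothetical; a production system would derive the comparisons from an ontology rather than from flat sets.

```python
# Illustrative full-profile match score: compare skills, education and
# experience instead of job titles alone. Weights and fields are
# hypothetical, chosen only to show the principle.
def profile_match(jobseeker: dict, vacancy: dict,
                  weights=(0.6, 0.2, 0.2)) -> float:
    w_skills, w_edu, w_exp = weights

    required = set(vacancy["skills"])
    offered = set(jobseeker["skills"])
    skill_score = len(required & offered) / len(required) if required else 1.0

    edu_score = 1.0 if jobseeker["education_level"] >= vacancy["min_education_level"] else 0.0

    if vacancy["min_years_experience"]:
        exp_score = min(jobseeker["years_experience"] / vacancy["min_years_experience"], 1.0)
    else:
        exp_score = 1.0

    return w_skills * skill_score + w_edu * edu_score + w_exp * exp_score

seeker = {"skills": {"welding", "blueprint reading", "teamwork"},
          "education_level": 3, "years_experience": 4}
vacancy = {"skills": {"welding", "blueprint reading"},
           "min_education_level": 2, "min_years_experience": 2}
print(round(profile_match(seeker, vacancy), 2))  # 1.0
```

Note that the seeker's job title never appears in the computation: a welder whose previous occupation has vanished can still score highly against any vacancy whose actual requirements they cover.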

Searchable jobseeker profiles and simple recruitment processes for enhanced visibility and virtual mobility

Providing jobseekers with a platform to present themselves with a searchable, well-structured profile enhances their visibility and gives them the opportunity to be found by potential employers. To avoid bias, the open profile should only contain professional information. Integrating the first steps of the selection and recruitment process into the system reduces the need to travel in person until a real opportunity presents itself. These features improve both visibility and virtual mobility for jobseekers, which is especially important for vulnerable groups such as low-paid or informal workers and minorities and in times of increased remote working and hiring. Easy-to-use recruitment processes also encourage smaller businesses to transition from informal candidate search, e.g., by word of mouth, to online vacancy posting, making their business and open positions visible to a wider pool of jobseekers.

Non-discriminatory, explainable matching for unbiased and transparent results

Matching processes must be explainable and auditable to ensure transparency and accountability. Moreover, solutions should be designed in a way to guarantee that by default the best candidate with the best aptitude in all individual criteria achieves the best match – regardless of gender, ethnicity, disability or other personal characteristics. This ensures that all jobseekers have equal opportunities, including youth, women and minorities.

Gap analysis for targeted job and career pathing

In times of dramatic shifts in the labor market, many jobseekers are required to identify new lines of work and change professions entirely. By determining the closest match to currently available positions and identifying missing skills, training or other relevant criteria, gap analyses can help find a path out of the employment crisis. Based on real-time labor market data and comprehensive ontologies, they can be used to counsel individual jobseekers, or even to redirect entire workforces from disappearing professions or sectors.
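At its core, a gap analysis compares what a jobseeker has with what target occupations require, ranks the occupations by closeness and lists what is missing. The toy occupation profiles below are invented for illustration; a real system would draw them from an ontology and live labor market data.

```python
# Sketch of a skills-gap analysis: rank target occupations by how close
# the jobseeker already is, and list what is still missing.
# Occupation profiles are invented for illustration only.
OCCUPATION_PROFILES = {
    "delivery driver": {"driving license", "route planning", "customer service"},
    "warehouse operator": {"forklift license", "inventory systems", "physical stamina"},
}

def gap_analysis(candidate_skills: set):
    results = []
    for occupation, required in OCCUPATION_PROFILES.items():
        missing = required - candidate_skills
        coverage = 1 - len(missing) / len(required)
        results.append((occupation, coverage, sorted(missing)))
    # Closest match first: the shortest retraining path comes on top.
    return sorted(results, key=lambda r: r[1], reverse=True)

for occ, cov, missing in gap_analysis({"driving license", "customer service"}):
    print(f"{occ}: {cov:.0%} covered, missing {missing}")
```

The "missing" lists translate directly into counseling advice or training referrals, and aggregated over many profiles they show where an entire displaced workforce could be redirected with the least retraining effort.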

Smart labor market intelligence to identify real-time shifts and labor market shocks

In a turbulent labor market environment with fast and unpredictable changes, smart labor market intelligence delivered in real time, together with well-designed intelligence management tools, can make the crucial difference. Processed with a powerful labor market ontology, these data can provide better, more accurate and timely insights – key to effective management and rapid response.

JANZZ systems provide all of the above features and more. We cannot create jobs, but we can help countries smooth out the effects of COVID-19 in labor markets and guide PES in a digital transformation to become more efficient and more sustainable. We can support the transition from guessing to knowing what it will take to give employment back to as many people as possible. For instance, PES can leverage our integrated labor market solution JANZZilms! to take action in key areas and rapidly respond with timely strategies that have real impact.


JANZZ Integrated Labor Market Solution (ILMS)



Since the beginning of March 2020, we have been able to prove how powerful and scalable our systems are. In a large European country, almost 10% of the working population was forced to register with the unemployment office due to the impact of the pandemic. At the beginning of the crisis, the system, which was designed for about 30,000 registrations per year, was flooded with almost 400,000 registrations within just a few weeks. Although nobody had anticipated a scenario like this during the planning phase, our systems processed almost ten times the volume of transactions without any problems. System performance and stability were fully maintained throughout this critical period. In addition, thanks to the intuitive and intelligent design, the national PES was able to both increase job counseling capacity and reduce the average time to labor market reintegration. In this way, we made a valuable contribution to the government’s efforts to quickly and efficiently register, counsel and reintegrate almost 400,000 jobseekers in the country.

Quick and efficient implementation – with no surprises

Our agile methods have led time and again to outstanding products – developed on time and on budget. The standard solutions can be realized in 120–180 days, or 90 days in a language we have already deployed before. This provides great value in terms of implementation, operation and maintenance. Moreover, prices are fixed over several years, ensuring financial predictability. For instance, the full bundle solution JANZZilms!, including all components such as matching and gap analysis, profiler, ontology, several languages, parsing, dashboards and much more, with up to one million active/concurrent users, costs around USD 1 per user per year after implementation. As a complete SaaS solution, no further investments in systems and hardware, regular maintenance, SLAs or updating processes etc. are necessary. For larger systems with up to 5 million users in all roles (e.g., PES counselors, jobseekers, companies and job providers, third-party providers such as education providers), the price drops to around 60–70 cents per user per year. With even larger systems, the price quickly falls below 50 cents per user per year.

Moreover, our solutions run in secure, GDPR-compliant cloud environments that are perfectly suited for any IT infrastructure including simpler setups in emerging markets. They also offer great accessibility for all users – from digital natives to tech newbies, on mobile and small screen devices and with slow internet connections.

For more information on how our services and solutions can help strengthen your labor market resilience, visit our product site for PES or contact us. If not now, then when?

Analyzing skills data. Can you see the gorilla?

This is the fourth and last in a series of posts about skills. If you haven’t already, we recommend you read the other posts first: Cutting through the BS and Sorry folks, but “Microsoft Office” is NOT a skill and The poison apple of “easy” skills data – are you ready to give up that sweet taste?

In the third post of this series, we discussed the challenges and opportunities of online job advertisement (OJA) data. Suppose we have the perfect definition of a skill and have extracted all relevant information, including occupation, required experience and training, explicit and implicit skills, etc., from clean, duplicate-free OJA data. What now?

This is the point in many related projects where the data is made accessible in the form of, say, an interactive website for interested users. Sometimes there will be a disclaimer stating that the data is not representative or that it is biased in some way. The idea of these websites is to provide information for policy makers, employers, education providers, career starters, etc., so that they can make informed decisions. This is certainly a noble idea. But a disclaimer does not change the fact that these users are still being given a distorted picture. They are enticed to make fact-driven decisions – without being given all the facts. How is a student supposed to make an informed career choice, e.g., between a trade and a profession, if information is only available for the professional route? How is a policy maker supposed to decide which training projects to allocate funds to if the data is biased towards certain industries or occupations? How is an education provider going to align curricula with market demand if there is no information on how critical a given skill is for a job?

Let us take a look at some of the challenges in analyzing this data.

An increased number of advertisements does not equal an actual expansion of demand. OJA data can only approximate gross changes in demand because the data includes both new positions resulting from growth and existing positions left vacant as a result of staff turnover. This means that the future of a given occupation cannot be predicted purely based on the growth in the number of OJAs for that job. The same is true for skills.
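A back-of-the-envelope decomposition shows why posting counts overstate new demand. All numbers below are made up for illustration, and the simple subtraction is of course a crude stand-in for a proper labor flow model.

```python
# Illustrative only: postings approximately equal replacement vacancies
# (from staff turnover) plus genuinely new positions. Numbers are invented.
def estimated_new_positions(postings: int, employed: int,
                            turnover_rate: float) -> int:
    """Subtract estimated replacement vacancies from total postings."""
    replacement = round(employed * turnover_rate)
    return max(postings - replacement, 0)

# 1,200 postings for an occupation employing 10,000 people with 10%
# annual turnover: only ~200 of them reflect actual growth in demand.
print(estimated_new_positions(1200, 10_000, 0.10))  # 200
```

Reading the raw 1,200 postings as "demand for this occupation is booming" would overstate growth sixfold in this toy scenario, which is exactly the misinterpretation the paragraph above warns against.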

Frequently mentioned requirements are not necessarily crucial requirements. Even assuming that we have been able to extract all implicit skills (for instance, occupational skills that hirers assume are implicitly obvious to prospective candidates, or skills presumably acquired in training/education), there are still challenges here: First off, there may be a tendency to demand more than necessary for vacancies that are easy to fill and less than necessary for hard-to-fill positions. Also, certain positions tend to ask for education that, per se, is not necessary. An example is requiring a STEM degree for quantitative professions: there is rarely a need for a graduate’s expert knowledge of physics or biology in consulting, but STEM students typically also acquire skills such as critical thinking, complex problem solving, quantitative reasoning, and communication and presentation skills. It is these transferable skills that such employers are interested in. So simply counting mentions of skills will not reveal the most in-demand skills that are truly relevant to employers.

Implicit skills extraction comes with its own challenges. Job and education profiles vary greatly on various levels. There is neither a standard skill set for a given occupation, nor for a given education. [1] For instance, the skill set acquired in vocational training for carpenters will depend on the duration of the training (e.g., 1.5 years in Nicaragua vs. four years in Switzerland), the choice of specialty, and many more factors. Some of the skills required of a nurse in an urban private clinic will differ from those needed by a nurse in a rural state-run hospital, even if they have the same specialty. Thus, skills demand cannot be directly extrapolated from job or education demand either.

Ignoring these and other challenges leads precisely to the common misinterpretations we have discussed in this series. One could also call this inattentional blindness, a phenomenon that was famously demonstrated in the “gorilla experiment”. In the experiment, the study participants were told to focus on a specific detail in a video of two teams passing a ball. Mid-way through the video, a gorilla walks through the game, stands in the middle, pounds its chest and exits. More than half the subjects missed the gorilla entirely – and were sure they could not have. Similarly, by focusing on “easy data” and the current publications with their quick and dirty interpretations, we run a big risk of losing sight of what is right in front of us. We think we have a shortage of skills in certain areas and miss the utterly obvious. To illustrate this, let us take a closer look at the most in-demand skill of 2020 according to CEDEFOP data: adapt to change. This also happens to be one of the current global buzz skills.

Note: We will be using several examples from CEDEFOP’s online OJA data tool Skills-OVATE over the course of this post. This does not mean that these data are in any way worse than that of other OJA data providers, we are not here to mock anyone. We simply want to provide real facts from real data in the spirit of a fact-based discussion and we decided to focus on one source for consistency.

Among the top 10 occupations that require the skill adapt to change are Athletes and Sports Players, Aircraft Mechanics and Repairers and Firefighters. Here are all skills listed in the CEDEFOP data for these occupations, ordered by count:

Analyzing skills data. Can you see the gorilla?

These lists are right in the comfort zone, containing many of the current buzz skills. But what about the crucial skills and knowledge [2] that workers in these occupations actually need? Shouldn’t aircraft mechanics have knowledge of aircraft? Or firefighters be able to tolerate stress? And using office systems cannot possibly be a key skill for athletes and sports players.

So now what?

First off, we should move away from generalizing lists and easy statements. As we have seen in this series of posts, these are clearly not conclusions we can sensibly draw from the available data. Instead, we should move towards more differentiated interpretation and communication – even if it is less sexy. In addition, we should steer clear of normalizing and summarizing skills into generic groups when communicating results. Broad terms such as sales & marketing, computer skills or teamwork abilities may be useful for statistics, but they simply do not convey any useful information in other contexts. Sales skills differ dramatically depending on whether they are sought for a position in retail, selling advertisements for magazines, machinery or an entire power plant. Considered in context, there are millions of skills, and these cannot sensibly be squeezed into, say, the just over 13,000 ESCO skills without losing critical information – even if the majority had been parsed and extracted properly, which is clearly not the case in general.

We must find ways to determine crucial skills and distinguish them from buzz skills. Expert knowledge as used to create job-specific skill profiles in taxonomies such as O*NET and ESCO tends to be inaccurate or too generalized because of the wide variety of skill profiles for a given occupation. Thus, determining crucial skills will presumably remain a huge challenge until hirers start to consistently highlight which skills are truly necessary and which are not in job advertisements. In the meantime, skills should at least be analyzed as they arise in the context of an occupation, depending on regions, industries, etc., recording must-have skills in OJA data when available and supplementing this with survey data from both employers and employees.

If the aim is to make the data available to users such as students or policy makers, we should gather and provide additional information where OJA data are insufficient and explain how to look at the data instead of (at best) adding simplified disclaimer blurb. The information must be supplemented to provide a balanced picture of in-demand occupations and skills. By solely relying on OJA data, we are actively pointing even more young people away from occupations and industries in dire need of new talent: skilled trades and construction, nurses, care workers and more – many of which are still clearly “futureproof”. And we are encouraging continued potential misallocation of billions in funding for upskilling and reskilling in the wrong areas. Then again, it may be less expensive than gathering additional data through time-consuming surveys and other costly means and paying for extensive vocational, technical or higher (re-)training… And of course, there is also the challenge of measuring skills supply, which is another key aspect for informed policy making for the labor market. That, however, goes far beyond the scope of this post.

Oh, and we should perform basic sanity checks.

According to CEDEFOP data from 2020, 2% of advertised bartender jobs in the EU require skills in conducting land surveys, aquaculture reproduction, collecting weather-related data or analyzing road traffic patterns.


Did you know that 1 in 25 musicians, singers and composers is required to know Java?


Does anyone really require freight handlers to know yoga or Bihari? Or should someone maybe go back and check the data collection processes?


Sadly, it is very easy to find examples like this; you can find many more in the data of any of your favorite OJA data providers. (Again, we are only using examples from CEDEFOP for consistency, not because their data is any worse than that from other sources.) The only conclusion we can draw from this is that these dashboards and statistics are simply not checked. Otherwise, this could not possibly go unnoticed. It does, however, also shed light on what the authors of the ESSnet Big Data report (mentioned in the previous post) meant by “the quality issues are such that it is not clear if these data could be integrated in a way that would enable them to meet the standards expected of official statistics.”
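Such sanity checks need not be elaborate. Here is a minimal sketch in Python of one possible check; the thresholds, function name and toy data are purely illustrative, not any provider's actual method. It flags occupation-skill pairs that occur so rarely within an occupation that they are more likely parsing errors than genuine requirements:

```python
from collections import Counter

def flag_suspicious_pairs(postings, min_share=0.01, min_count=3):
    """Flag occupation-skill pairs so rare within an occupation that
    they are more likely extraction errors than real requirements."""
    occ_totals = Counter(p["occupation"] for p in postings)
    pair_counts = Counter(
        (p["occupation"], s) for p in postings for s in p["skills"]
    )
    flagged = []
    for (occ, skill), n in pair_counts.items():
        share = n / occ_totals[occ]
        if n < min_count or share < min_share:
            flagged.append((occ, skill, n, round(share, 3)))
    return flagged

# Toy data: one bartender posting erroneously tagged with "conduct land surveys"
postings = (
    [{"occupation": "bartender", "skills": ["mix beverages", "serve drinks"]}] * 49
    + [{"occupation": "bartender", "skills": ["conduct land surveys"]}]
)
print(flag_suspicious_pairs(postings))
# [('bartender', 'conduct land surveys', 1, 0.02)]
```

Flagged pairs would then go to manual review or a recheck of the extraction pipeline, rather than straight into a dashboard.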

And because more and more institutions and organizations work with the same few data providers – along the lines of “if everyone works with them, their data can’t be that bad.” – the same mistakes are made over and over, multiplying faster and faster. Quoted, posted and shared everywhere by more and more people. The thing is, repeating them often enough does not make these mistakes better or truer. And now, as of January 20, 2021, the time has come to move past alternative facts. So, let’s start looking for fact-based alternatives.


[1] For more on this, take a look at this study or our whitepapers on standard skill profiles and education zones.
[2] CEDEFOP data is based on the ESCO taxonomy, which includes knowledge in its definition of skills.

The poison apple of “easy” skills data – are you ready to give up that sweet taste?

This is the third in a series of posts about skills. If you haven’t already, read the other posts first:
Cutting through the BS and Sorry folks, but “Microsoft Office” is NOT a skill.

In the second post of this series, we discussed skills and the issues around defining and specifying them. Assuming we can reach some kind of common understanding of this valuable new currency, the next step is to find a way to generate meaningful skills and job data.


Shaky data – shaky results

Big data from online job platforms or professional networking sites can yield a wealth of information with a much higher granularity than the usual data gathered by national statistics offices in surveys – especially regarding skills. One reason is that, unlike printed advertisements, employers do not have to pay by space for online job postings and thus can provide more detailed information on the knowledge and skills they require. This online data also allows for a much larger sample to be monitored in real time, which can be highly valuable for analysts and policy makers to develop a timely and more detailed understanding of conditions and trends on the labor market.

However, when working with the data that is available online, such as online job advertisements (OJA) or professional profiles (e.g., LinkedIn profiles), we need to be clear on the fact that this data is neither complete nor representative, and therefore any results must always be interpreted with caution. Not only because of the obvious fact that the results will be distorted, but more importantly because of the implications. Promoting certain skills based on distorted data can be harmful to the labor market: if workers focus on obtaining these skills – which by nature tend to be derived from data biased towards high-skilled professionals in sectors such as IT and other areas involving higher education – they are less likely to opt for career paths involving other skills that actually are in high demand, e.g., vocational careers in skilled trades, construction, healthcare, manufacturing, etc. This is despite the fact that digitalization will primarily affect better-educated workers with high wages in industrialized countries, simply because it is much easier to digitalize or automate at least some of the tasks in these jobs than those in many blue-collar and vocational occupations such as carpentry, care work, etc. The last thing any labor market policymaker would want is to accentuate the already critical skill gap in this area. Or create an even tighter labor market for certain professions, say, IT professionals [1]. Similarly, education providers seeking to align their curricula with market demand need reliable data so as not to amplify skill gaps instead of alleviating them. And yet, a growing number of public employment services (PES) are relying on this often shaky data for decision making and ALMP design.

For instance, there are several projects that aim to gather and analyze all available OJA from all possible sources in a given labor market and use these aggregated data to make recommendations including forecasts of future employability and skills demand. But the skills are typically processed and presented without any semantic context, which can be extremely misleading.

Challenges of OJA data

In 2018, the European statistical system’s ESSnet Big Data project issued a report [2] on the feasibility of using OJA data for official statistics. Their conclusion was: «the quality issues are such that it is not clear if these data could be integrated in a way that would enable them to meet the standards expected of official statistics.»

Let us take a look at some of the basic challenges of OJA data.

  1. Incomplete and biased: Not all job vacancies are advertised online. A significant proportion of positions are filled without being advertised at all (some say around 20%, others claim up to 85% of vacancies). Of those that are advertised, not all are posted online. CEDEFOP reported that in 2017 the share of vacancies published online in EU countries varied substantially, ranging from almost 100% in Estonia, Finland and Sweden down to under 50% in Denmark, Greece, and Romania. [3] In addition, some types of jobs are more likely to be advertised online than others. And large companies or those with a duty to publish vacancies are typically statistically overrepresented, while small businesses, which often prefer other channels such as print media, word of mouth, or signs in shop windows, are underrepresented. Another relevant point is that certain markets are so dried up that advertising vacancies is just not worthwhile, and specialized headhunters are used instead. In summary, this means that OJA data not only fail to capture many job vacancies, but are also not representative of the overall job market. [4]
  2. Duplicates: In most countries, there is no single source of OJA data. Each country has numerous online job portals, some of which publish only original ads, others that republish ads from other sources, hybrid versions, specialized sites for certain sectors or career levels, etc. So, to ensure adequate coverage, OJA data generally need to be obtained from multiple sources. This inevitably leads to many duplicates, which must be dealt with effectively in order to reliably measure labor market trends in the real world. For instance, in a 2016 project the UK national statistics institute (NSI) reported duplicate percentages of 8-22% depending on the portal, and an overall duplication rate of 10%. [5] In the ESSnet Big Data project, the Swedish NSI identified 4-38% duplicates per portal and 10% in the merged data set [6].
  3. Inconsistent level of detail: Certain job postings provide much more explicit information on required skills than others, for instance depending on the sector (e.g., technical/IT) or country (e.g., due to legislation or cultural habits). Moreover, implicit information is recorded only to a limited extent and is statistically underrepresented, despite its high relevance. One reason for this is that US data providers often fail to recognize how uniquely detailed OJA are in the US, thus assuming that this is true everywhere and basing their methods on this assumption. However, this is far from correct. For instance, a job description like the one below, which is fairly typical in the US, will often be condensed to «carry out all painting work in the areas of maintenance, conversions and renovations; compliance with safety and quality regulations; minimum three years of experience or apprenticeship» in European countries. Moreover, in job ads like this, many of the required skills must be derived from the listed tasks or responsibilities. This shows just how important it is to extract implicit information.


[Image: example of a detailed US job advertisement]


So, the question is, can these issues be dealt with in a way that can nonetheless generate meaningful data?

The answer: sort of. Limitations on representativeness can be addressed using various approaches. There is no one-size-fits-all solution, but depending on the available data and the specific labor market, statistically weighting the data according to the industry structure derived from labor force surveys could be promising; as could comparing findings from several data sources to perform robustness checks, or simply focusing on those segments of the market with less problematic coverage bias. [7]
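To make the weighting idea concrete, here is a minimal sketch; all shares, counts and names are invented toy values, not real survey data. Per-industry skill counts from the OJA sample are reweighted so that the industry mix matches the employment shares reported in a labor force survey:

```python
def reweight_by_industry(oja_counts, oja_industry_share, lfs_industry_share):
    """Reweight per-industry skill counts so the industry mix matches
    the labor force survey (LFS) instead of the biased OJA sample."""
    weighted = {}
    for industry, skills in oja_counts.items():
        # Upweight industries underrepresented online, downweight the rest.
        w = lfs_industry_share[industry] / oja_industry_share[industry]
        for skill, n in skills.items():
            weighted[skill] = weighted.get(skill, 0.0) + n * w
    return weighted

oja_counts = {
    "IT":           {"python": 80, "teamwork": 20},
    "construction": {"bricklaying": 5, "teamwork": 5},
}
oja_share = {"IT": 0.9, "construction": 0.1}   # OJA sample: IT dominates
lfs_share = {"IT": 0.2, "construction": 0.8}   # survey says the reverse
print(reweight_by_industry(oja_counts, oja_share, lfs_share))
```

In this toy example, bricklaying jumps from a marginal count to roughly twice the weight of Python, illustrating how strongly the raw OJA mix can distort apparent demand.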

Deduplication issues can be solved technically to a certain extent, and there is a lot of ongoing research in this area. Essentially, most methods entail matching common fields, comparing text content and then calculating a similarity metric to determine the likelihood that two job postings are duplicates. Some job search aggregators also attempt to remove duplicates themselves – with variable success. Identifying duplicates is fairly straightforward when OJAs contain backlinks to an original ad as these links will be identical. On the other hand, job ads that have been posted on multiple job boards pose more of a challenge. Thus, ideally, multiple robust quality assurance checks should be put in place, such as manual validation over smaller data sets.
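As a rough illustration of the text-similarity approach described above (the threshold, field names and shingle size are our own assumptions, not any provider's actual pipeline), two ads can be compared via the Jaccard similarity of their word shingles after a cheap exact check:

```python
def shingles(text, k=3):
    """Overlapping k-word shingles of a text, lowercased."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(max(1, len(words) - k + 1))}

def jaccard(a, b):
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb)

def is_duplicate(ad1, ad2, threshold=0.6):
    # Cheap exact check first (e.g., backlinks to the same original ad),
    # then a text-similarity fallback for reposts with minor edits.
    if ad1.get("source_url") and ad1.get("source_url") == ad2.get("source_url"):
        return True
    return jaccard(ad1["text"], ad2["text"]) >= threshold

a = {"text": "Experienced welder needed for shipyard work in Hamburg"}
b = {"text": "Experienced welder needed for shipyard work in Hamburg area"}
c = {"text": "Junior marketing assistant for our Berlin office"}
print(is_duplicate(a, b), is_duplicate(a, c))
# True False
```

Real systems would of course also compare structured fields (employer, location, posting date) and tune the threshold per portal.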

Seriously underestimated: the challenge of skills extraction

The third challenge, the level of detail, seems to be the most underestimated. OJA from the US are typically much more detailed than elsewhere. A lot of information is set out explicitly that is only implicitly available in OJA data from the UK and other countries (e.g., covered by training requirements or work experience) – or not given at all. But even within the US, this can vary greatly.


[Image: example US job advertisements with varying levels of detail]


Clearly, even if we can resolve the issues concerning representativeness and duplicates, simply recording the explicit data will still result in highly unreliable nowcasts or forecasts. Instead, both the explicit and implicit data need to be extracted – together with their context. To reduce the distortions in the collected data, we need to map them accurately and semantically. This can be done with an extensive knowledge representation that includes not only skills or jobs but also education, work experiences, certifications, and more, as well as required levels and the complex relations between the various entities. In this way, we can capture more implicit skills hidden in stipulations about education, qualifications, and experience. In addition, the higher granularity of OJA data is only truly useful if the extracted skills are not clustered or generalized too much in subsequent processing, e.g., into terms like “project management”, “digital skills” or “healthcare” (see our previous post), due to working with overly simplified classifications or taxonomies instead of leveraging comprehensive ontologies with a high level of detail.
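To make the idea of extracting implicit, contextualized skills more concrete, here is a deliberately tiny sketch using the condensed painter ad from above. The pattern list is a mock stand-in for what would in reality be an extensive semantic knowledge representation; all names are our own invention:

```python
import re

# Mock ontology fragment: task patterns mapped to (skill, context) pairs,
# including implicit skills derived from qualifications and tasks.
TASK_PATTERNS = [
    (r"\bpainting work\b", ("painting", "maintenance/renovation")),
    (r"\bsafety .* regulations\b", ("occupational safety compliance", "construction")),
    (r"\bapprenticeship\b", ("completed vocational training", "qualification")),
]

def extract_skills(ad_text):
    """Derive explicit and implicit skills, each with its context."""
    found = []
    for pattern, skill in TASK_PATTERNS:
        if re.search(pattern, ad_text.lower()):
            found.append(skill)
    return found

ad = ("Carry out all painting work in the areas of maintenance, conversions "
      "and renovations; compliance with safety and quality regulations; "
      "minimum three years of experience or apprenticeship")
for skill, context in extract_skills(ad):
    print(f"{skill} [{context}]")
```

Note how the third match turns a stipulation about qualifications into a skill-like concept, which is exactly the kind of implicit information that gets lost when only explicit skill lists are recorded.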

And then of course, there is the question of how to analyze the data. We will delve deeper into this in the next post, but for now, this much can be said: Even if we are able to set up the perfect system for extracting all relevant data from OJAs (and candidate profiles for that matter), we are still faced with the challenge of interpreting results – or even just asking the right questions. When it comes to labor market analyses, nowcasting and forecasting, e.g., of skills demand, combining OJA data with external data such as from surveys by NSI promises more robust results as the OJA data can be cross-checked and thus better calibrated, weighted and stratified. However, relevant and timely external data is extremely rare. And we might possibly be facing another issue. It is much easier and cheaper to up- or reskill jobseekers with, say, an online SEO course than with vocational or technical training in MIG/MAG welding. So maybe, just maybe, some of us are not that interested in the true skills demand…


[1] According to the 2020 Manpower Group survey, IT positions are high on the list of hardest-to-fill positions in the US, but not everywhere else. In some countries, including developed ones such as the UK and Switzerland, IT professionals are not on the top 10 list at all.
[3] The feasibility of using big data in anticipating and matching skills needs, Section 1.1, ILO, 2020, ---ed_emp/---emp_ent/documents/publication/wcms_759330.pdf
[4] The ESSnet Big Data project also investigated coverage, for the detailed results see Annexes C and G in the 2018 report.
[7] See for example Kureková et al.: Using online vacancies and web surveys to analyse the labour market: a methodological inquiry, IZA Journal of Labor Economics, 2015,

Would you buy a wheel if someone told you it was a bicycle?

After recently stumbling upon this Forbes post from 2019, and with skills ontologies entering the Gartner HCM Tech hype cycle, we decided it’s high time to discuss the difference between taxonomies and ontologies again. Although we have been developing and explaining our ontology for over 10 years, many HR and labor market professionals still let themselves be sold on the idea that a taxonomy is good enough for jobs and skills matching. Now, finally, after trying out one disappointing solution after the other, the idea is slowly catching on that they are being duped. And as ontologies start to trend, many providers are beginning to use this “more fashionable” term. But do not be fooled: their product hasn’t changed. It is still just a taxonomy. Speaking from experience, it takes years to build a true ontology. Why care? Because the difference in performance is massive.

As a reminder, a taxonomy is a hierarchical structure to classify information, i.e., the only possible relation between concepts in a taxonomy is of type “is a”. Think Yellow Pages or animal taxonomies. An ontology is a framework that describes information by establishing the properties and relationships between the concepts. Now if we want a machine to perform a task like job matching for us, we need to share our contextual knowledge with the machine. Meaning that we need to find a way to represent our knowledge in a machine-readable way. Given any specific domain, say, jobs and skills, which do you think could represent human knowledge better, a taxonomy or an ontology?

When we think about things or concepts, we automatically associate them with other things or concepts. Based on our knowledge, we make connections and set up a context. We think of a bicycle and know that it is made up of components (wheels, handlebars, seat, etc.), and is a vehicle. But we also know that some people can ride a bicycle, that this skill has to be learnt, that bicycles can go on roads or across fields and that cycling is good exercise. It’s more ecofriendly than a car, doesn’t need fuel, and so on and so forth. We can reason that in terms of use, a bicycle is similar to a tricycle for small children, but not for adults. We may know that in some countries, we are required to wear a helmet when cycling. Our knowledge about bicycles is not hierarchical, the concepts we can connect it with do not just fit into a cascade of “is a” relations, but instead satisfy relations like “has a”, “can”, “is similar to”, “requires”, etc. By definition, this knowledge cannot be represented by a taxonomy. But it can be represented in an ontology.
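The difference can be made concrete in a few lines of code; the relation names and concepts below are, of course, just a toy fragment:

```python
# A taxonomy allows only one relation type ("is a"):
taxonomy = {
    "bicycle": "vehicle",
    "tricycle": "vehicle",
}

# An ontology captures typed relations between concepts:
ontology = {
    ("bicycle", "is_a", "vehicle"),
    ("tricycle", "is_a", "vehicle"),
    ("bicycle", "has_part", "wheel"),
    ("bicycle", "has_part", "handlebars"),
    ("cycling", "requires", "riding a bicycle"),
    ("bicycle", "similar_to", "tricycle"),
}

def related(facts, concept, relation):
    """All concepts linked to `concept` via `relation`."""
    return {obj for subj, rel, obj in facts
            if subj == concept and rel == relation}

# The taxonomy can only ever answer "what is a bicycle?";
# the ontology can also answer structural and skill-related questions.
print(sorted(related(ontology, "bicycle", "has_part")))
# ['handlebars', 'wheel']
```

Every query beyond "is a" is simply unanswerable in the taxonomy, no matter how large it grows; that is the structural gap, not a matter of data volume.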

The same is true when we think about jobs and skills. Even with little knowledge of medical care, we know that ICU nurses have skills in common with psychiatric nurses, but that there are also must-have skills for ICU nurses that psychiatric nurses do not need, and vice versa. So, we can draw on common or contextual knowledge to determine that these two occupations are similar, but probably not similar enough for a good job-candidate match. We can infer other important information as well. For example, that a registered nurse requires official certification. Or that a head nurse will need additional skills like leadership.
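A minimal sketch of how must-have skills change the picture (the skill names and scoring are invented for illustration; real matching would be semantic rather than string-based):

```python
def match_score(required, candidate):
    """Share of required skills covered; must-have skills act as vetoes.
    `required` is a list of (skill, is_must_have) pairs; skill strings
    are assumed to be already normalized."""
    must = {s for s, is_must in required if is_must}
    wanted = {s for s, _ in required}
    if not must <= candidate:
        return 0.0          # missing a must-have skill: no match
    return len(wanted & candidate) / len(wanted)

icu_nurse = [("patient monitoring", True), ("ventilator management", True),
             ("patient care", False), ("documentation", False)]

# A psychiatric nurse shares several skills but lacks a must-have...
psych_nurse = {"patient care", "documentation",
               "de-escalation techniques", "patient monitoring"}
print(match_score(icu_nurse, psych_nurse))      # 0.0

# ...while a qualified ICU candidate scores on coverage.
icu_candidate = {"patient monitoring", "ventilator management", "patient care"}
print(match_score(icu_nurse, icu_candidate))    # 0.75
```

Without the must-have distinction, the psychiatric nurse would score a misleading 50% overlap with the ICU profile.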

We also know intuitively that the skills and tasks of a software test engineer have nothing to do with those of a nurse, even if the software company description states that they provide solutions that help nurses (amongst others). But without an ontology, this will show up in matching results.


[Image: example of mismatched matching results]


To truly capture the complexity of this domain of jobs and skills, together with its many interdependencies, such as similarities and differences between various specialties, inferred skills, knowledge of various computer applications, or required certifications or training, there is simply no way around an extensive ontology that describes all these various aspects of job and skills-related data. So next time someone tries to sell you an ontology, make sure you’re not just getting a souped-up taxonomy.

If you’re interested in a more in-depth explanation of the difference between taxonomies and ontologies, read the Forbes post for the general concepts, or this JANZZ post for a discussion in the context of matching. Or you can experience the difference by comparing your current matching solution against ours. Contact us at

Sorry folks, but “Microsoft Office” is NOT a skill.

One of the most prominent buzzwords around employment, employability and workforce management is skills. There is a lot of noise surrounding this concept and its fellow buzzwords like reskilling, upskilling, skills matching, skills alignment, skill gaps, skills anticipation, skills prediction, and so on. One can find myriad publications and posts explaining why skills are so important, how to analyze skills supply and demand, how to develop active labor market policies based on skills, how to manage and develop employee skills – as well as the many sites listing the “most in-demand skills” of the year. We certainly agree that skills are increasingly important, or as stated in one of the Gartner Hype Cycles 2020,

Skills are […] the new currency for talent. They are a foundational element for managing the workforce within any industry. Improved and automated skills detection and assessment allows for significantly greater organizational agility. In times of uncertainty, or when competition is fierce, organizations with better skills data can adapt more quickly […]. This improves productivity and avoids costs through improved planning cycles. [1]  

This applies not only to HCM in businesses, but also to labor market management by government institutions. Considering how globally important these concepts are, there should be a clear or at least common idea of what this valuable currency is. However, in much of the skills-related content posted online, there is a pervasive pattern of conceptual ambiguity, lack of specificity and lack of concision. So, in the last post, where we discussed a few examples of the noise surrounding jobs and skills, we called for a more fact-based discussion. In this post, we want to lay the groundwork for such a discussion.

Statistics 101

As a reminder from the last post: whenever you try to generalize, you run the risk of losing relevance. Despite all the globalism going on, the world is divided into regions. And each region has its own distinct economic landscape and its individual skills demand. Some regions are more focused on certain industries than others, and even when comparing regions with similar industries, skills demand and gaps can vary significantly, as has been shown in various studies and reports (for example here and here). So there will never be a meaningful list of top skills on a global level. Problem solving skills, blockchain, app development and other “top skills” propagated on various websites are simply not relevant for all activities across the globe. On top of this, it is extremely challenging to generate meaningful, representative data from online profiles and job postings. In general, the data collected online is biased: certain groups are underrepresented, others massively overrepresented. For instance, despite all the noise about apparently all-important, accelerating “digital skills”, most representative surveys highlight that EU and US labor markets require a generally low to moderate level of digital skills, with about 55 to 60 percent of jobs requiring no more than simple word processing, data entry and emailing. Some 10–15 percent need no ICT skills at all, and only about 10–15 percent need an advanced ICT level. [2] This alone shows that all these publications about the most important skills of the future etc. are at best very misleading.

To perform sound analyses and anticipate the skills that will be required in the future, to predict how these requirements will change (which skills will gain in importance and which will become obsolete), or to perform target-oriented skills matching, we first need to be able to correctly recognize, understand, assign, and classify today’s skills. We will discuss the challenges (and strengths!) of skills and job data in more detail in the next post. First, we need to focus on an even more basic, but absolutely crucial aspect: we need to clarify what we mean by skills. Or abilities and competencies.

Truth be told, there are so many different definitions floating around, it is quite hard to keep up, and this is one of the key reasons why most approaches and big data evaluations fail miserably. It is therefore all the more important that we agree on a common understanding of this new currency.

What exactly is a skill?

O*NET defines skills as developed capacities that facilitate learning, the more rapid acquisition of knowledge or performance of activities that occur across jobs, [3] and distinguishes skills from abilities, knowledge and technology skills and tools, referring only to directly job-related or transferable skills and knowledge. ESCO, on the other hand, defines a skill as the ability to apply knowledge and use know-how to complete tasks and solve problems. Moreover, ESCO has only two main categories, skills and competences, which – unlike O*NET – also include attitudes and values. In both classification systems, there is significant overlap between the categories. Indeed, for its part, simply summarizes all these concepts under the term skill:

Skill is a term that encompasses the knowledge, competencies and abilities to perform operational tasks. Skills are developed through life and work experiences and they can also be learned through study. [4]

Clearly, these discrepancies in the definition of a skill will cause discrepancies in data collection and analysis, which in turn will affect the robustness of any extrapolation based on these data. But, for the sake of argument, let us assume there is a universal definition of a skill. In a nutshell, we shall think of a skill as some kind of ability that is useful in a job.

Analyzing generic skills yields generic answers

Just having a written definition of a skill is far from enough. Apart from the fact that it still leaves a lot of room for interpretation, we also have many issues at the level of individual skills. One issue is granularity, which differs drastically across the various collections. For instance, the ESCO taxonomy currently includes around 13,500 skills concepts, O*NET under 9,000 (in fact, only 121 of these are not skills of the type “can use a certain tool/machine/software/technology”) and our ontology JANZZon! over 1,000,000. Of course, the desired level of detail depends on the context. But for many modern applications of skills analysis, such as skill-based job matching, career guidance, etc., a certain level of detail is crucial to achieve meaningful results. Take the list of “top 10 skills for 2025” published by the World Economic Forum [5]:

  1. Analytical thinking and innovation
  2. Active learning and learning strategies
  3. Complex problem-solving
  4. Critical thinking and analysis
  5. Creativity, originality and initiative
  6. Leadership and social influence
  7. Technology use, monitoring and control
  8. Technology design and programming
  9. Resilience, stress tolerance and flexibility
  10. Reasoning, problem-solving and ideation

Depending on the context, e.g., industry or activity, these skills are understood very differently. They are thus too generic or unspecific to be of any use in matching or for meaningful statistics. In fact, for many occupations they are barely relevant at all. Or how often do you see these skills in job postings? Other generic skills we often see in predictive top 10 lists and recommendations have similar issues, for instance:

Digital skills: What exactly are these skills? Does this include operating digital devices such as smartphones or computers or dealing with the Internet? Do we expect someone with these skills to be able to post on social media, or really know how to handle social media accounts professionally? Is there any sense in summarizing skills such as knowledge of complex building information modelling applications in real estate drafting and planning under digital skills?

Project management skills: This too is almost completely useless when taken out of context like this. A large proportion of workers have project management skills at some level, but it is very difficult to compare or categorize this knowledge across roles or industries. For example, the individual project management knowledge differs substantially between a foreperson on a large tunnel construction site, a project manager for a small-scale IT application, a campaign manager in the public sector and a process engineer or event manager. Clearly, if the event industry comes to a halt, a project manager cannot just switch to the construction industry. So, it is nonsensical to compress all these variations into a single “matchable” skill.


Think multidimensional

Being precise about skills does not just entail clearly identifying the skill and its context; the level of capability is equally relevant. The level of English required of a laborer on a construction site is certainly not the same as that required of a translator. However, constructing a robust definition of levels also poses challenges: What does “good” or “very good” knowledge mean, and what distinguishes an “expert” in a certain skill? Is it theoretically acquired knowledge, for example, or is it knowledge already applied in a real professional environment? In contrast to other areas of big data, scales and validations – if they exist – are not necessarily binding. Thus, many providers of this type of data just resort to disregarding levels entirely. In doing so, we lose a huge amount of information which would be highly relevant, not only for job matching and career guidance, but also in analyzing skills demand, say, as a basis for workforce or labor market management. Do we have a shortage of highly skilled experts or of employees with a basic working knowledge? Clearly, appropriate measures will differ strongly depending on the answer.
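A minimal sketch of what recording levels alongside context could look like (the level scale and all names are purely illustrative; real frameworks such as CEFR for languages define their own validated scales):

```python
from dataclasses import dataclass

# Illustrative level scale, not a validated standard.
LEVELS = {"basic": 1, "good": 2, "very good": 3, "expert": 4}

@dataclass(frozen=True)
class Skill:
    name: str
    context: str      # e.g., the role or industry the skill was applied in
    level: str        # one of LEVELS

def meets_requirement(candidate_skill, required):
    """Name must match and the candidate's level must reach the required
    level. (Context is recorded but, for simplicity, not compared here.)"""
    return (candidate_skill.name == required.name
            and LEVELS[candidate_skill.level] >= LEVELS[required.level])

laborer_english = Skill("English", "construction site", "basic")
translator_req  = Skill("English", "translation", "expert")
print(meets_requirement(laborer_english, translator_req))  # False
```

Dropping the `level` field, as many data providers effectively do, collapses both profiles into the same "English" token and makes this distinction impossible.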

Say what you mean

Granularity in terms of identifying the context and level of a skill is certainly important. The main issue, however, is clarity. One of the recurring top 10 skills in job postings almost anywhere on the planet is Microsoft Office, which at first glance may seem fairly specific. But what does this really mean? Technically, MS Office is a family of software, available in various packages comprising a varying selection of applications, which evolve over time. Currently, it consists of 9 applications: Word, Excel [6], PowerPoint, OneNote, Outlook, Publisher, Access, InfoPath and Skype for Business. So, if someone “has MS Office skills”, does this mean they can use all those apps? Hardly. And what does it mean to be able to use an app? According to ESCO, someone who can “use Microsoft Office” can

work with the standard programs contained in Microsoft Office at a capable level. Create a document and do basic formatting, insert page breaks, create headers or footers, and insert graphics, create automatically generated tables of contents and merge form letters from a database of addresses (usually in Excel). Create auto-calculating spreadsheets, create images, and sort and filter data tables. [7]

Many people may think they can use MS Office – until they read that definition. It seems that the less one knows about the full potential of an application, the more likely one is to identify as a capable user. This becomes even more apparent when we consider PowerPoint, which, surprisingly, is not included in ESCO’s “use Microsoft Office” skill. Instead, it falls under “use presentation software”. There are thousands of applications for creating presentations, many of which work quite differently from PowerPoint and thus require different knowledge or additional skills: Prezi, Perspective, Powtoon, Zoho Show, Apple Keynote, Slidebean, just to name a few. And yet, the skill of “using presentation software” is only vaguely described in ESCO as:

“Use software tools to create digital presentations which combine various elements, such as graphs, images, text and other multimedia.” [8]

Putting aside the fact that there are many instances of presentation software, if this is a skill in the sense of an ability that is useful in a job, then one should expect “creating presentations” to imply that the person can create usable or even good presentations. Among many other skills, this includes the ability to distill information into key points, as well as a sense of aesthetics and storytelling skills. Yet, with enough self-confidence, a person lacking these implicit skills may still think that they are capable of creating great presentations.

And apart from this, what an employer means when they ask for these skills varies substantially. Someone looking for an office assistant in an old-school micro business may have a very different idea of MS Office skills than a large corporation looking for a marketing specialist. When it comes down to it, trying to interpret the expression “Microsoft Office” as a skill results in so much guesswork that the informative value of “Microsoft Office skills” becomes comparable to that of “hammering skills”. Everyone can use a hammer, but does that mean anyone can work in any profession that involves hammers? Of course not.

[Image: JANZZsme! – occupations that involve hammers]

My math teacher used to say: if you mean something else, say something else. That could be a good place to start.

(Self-)assessment vs. reality

As mentioned above, many people’s self-image deviates from reality, resulting in under- or overestimating their skills (hammering, creating presentations or any other skill). In addition, completing a course or education that should teach a set of skills does not automatically mean that we have that skill set, i.e. that we can apply it productively in a job. Also, many unused skills have an expiration date. And yet, once we get used to listing a certain skill on our resume, we rarely take it off again, no matter how long it has been since we last used it. Simply asking ourselves the question “Can I apply this productively in my job?” could go a long way toward moving our projected image closer to reality. If we wanted to. Similarly, agreeing on a definition of a skill, standardizing skill designations and levels, or just being more specific and accurate could give us a clearer common understanding of this valuable currency. If we wanted to. And then we can turn to the challenges of generating smart data – which we will investigate in the next post.


[1] Poitevin, H., “Hype Cycle for Human Capital Management Technology, 2020”, Gartner. 2020.
[2] Thanks to Konstantinos Pouliakas at Cedefop for pointing this out.

[6] Read the previous post for our view on Excel.


Cutting through the BS

Adaptability and flexibility, digital skills, creativity and innovation, emotional intelligence… Since the pandemic went global, everyone has been talking about the top post-COVID skills employees will need. Going through numerous posts, from Forbes and Randstad to EURES, the key point they have in common seems to be that they are opaque, if not completely unfounded. Despite all the noise they generate, none of these posts give any insight into what data their claims are based on – or whether they have any data at all. Here at JANZZ, we have been analyzing over 500,000 job postings from the last few months for a project in Australia and New Zealand. In this data, just as in any of our other data from similar projects in completely different markets and regions of the world, there is no indication of increased demand for creativity and innovation or for digital skills – which, by the way, should not include the ability to participate in a video call, just as using Excel does not turn an economist into a STEM professional. The skills most in demand across all professions, from waiters to senior policy officers, were in fact ambition, self-motivation, and the ability to work under pressure, independently and in fast-changing environments.

But it is not just about skills analysis. When it comes to… well, anything related to jobs and skills, there is an unbelievable amount of BS out there. Here are just a few examples.

Future jobs. According to the WEF’s Future of Jobs Survey 2020, among the top 20 job roles in increasing and decreasing demand across industries, Mechanics and Machinery Repairers are listed as both increasing (#18) and decreasing (#9). The same is true for Business services and administration managers (up #12, down #6). This apparent contradiction is simply stated with no explanation in the text. And yet, this information is just reproduced blindly in numerous blogs and posts. [1]

LinkedIn skills reports. The same is true for all the buzz generated by LinkedIn reports on in-demand skills. Countless articles and posts simply reproduce these lists, completely disregarding the fact that they are based on data captured in LinkedIn profiles [2], which is strongly biased. For instance, blue-collar professions and industries are massively underrepresented in this data. By contrast, according to the ManpowerGroup Talent Shortage surveys, skilled trades have been the hardest roles to fill for 7 consecutive years, globally and in almost all individual countries; this year’s list also features drivers (especially truck/heavy goods, delivery/courier and construction drivers), manufacturing workers (production and machine operators), construction laborers and healthcare professionals. Shouldn’t the skills associated with these professions be in higher demand than blockchain or cloud computing?

Skills demand. A Canadian institute created a report based on data and skills taxonomies from a large labor market analytics provider. The report opens with the statement “Telling Canadians they need digital skills is not enough; we must be specific.” It then goes on to identify the top 10 digital skills by number of job postings. Among them are Microsoft Excel and Spreadsheets. There is nothing specific about these “skills”. First off, the term “Microsoft Excel” says absolutely nothing about the skills that are actually needed. Is the candidate expected to just be able to open the application and enter data? Or should they be capable of creating formulas? How complex are the formulas supposed to be? What about charts? And what exactly is the difference between the two skills “Microsoft Excel” and “Spreadsheets”?
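The overlap the report leaves unexplained is easy to illustrate. In a cleaned-up taxonomy, raw labels from job postings would first be normalized to one canonical skill before any counting takes place. The mapping below is a hypothetical sketch of ours, not the provider’s actual taxonomy:

```python
# Hypothetical normalization table: raw labels from job postings
# mapped to a canonical skill. "Microsoft Excel" and "Spreadsheets"
# collapse to the same entry, so ranking them as two separate
# top-10 skills double-counts the same underlying demand.
CANONICAL = {
    "microsoft excel": "spreadsheet software",
    "excel": "spreadsheet software",
    "spreadsheets": "spreadsheet software",
    "google sheets": "spreadsheet software",
}

def normalize(raw_label: str) -> str:
    """Map a raw posting label to its canonical skill, if known."""
    key = raw_label.strip().lower()
    return CANONICAL.get(key, key)

# Two of the report's "top 10 digital skills" turn out to be one skill:
raw_top_skills = ["Microsoft Excel", "Spreadsheets"]
assert {normalize(s) for s in raw_top_skills} == {"spreadsheet software"}
```

Even this normalization only fixes the labels; it still says nothing about the level actually required, which is the other half of the problem.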

Upskilling. Within a business, upskilling can be very useful. An individual company is fairly fixed in its position and should have a clear strategy, which will also largely determine its skill needs and thus its upskilling strategies. However, developing sustainable upskilling strategies as part of an active labor market policy (ALMP) is a very different challenge. Contrary to what many posts and tech providers claim, simply upskilling all the unemployed will not lower unemployment numbers sustainably and does not necessarily meet market demands. For instance, in a country where a lot of low-skilled work is on offer, upskilling a jobseeker who is already overqualified will not be of any use. The same is true if the training on offer is of poor quality or not aligned with market needs. This is the reality in many countries.

Job matching. A Dutch tech provider for employment services claims that its software solutions can help PES “reduce unemployment figures”. As if that could happen just by using the right job matching tool. For instance, this article (in Italian) in Italy’s renowned newspaper Corriere della Sera illustrates just a few of the issues that need to be resolved before, or at least while, implementing a software solution for the PES: there are currently 730K job vacancies in Italy, compared with 2.5M active jobseekers plus 13.5M inactive and discouraged workers. The skills of jobseekers in Italy are not aligned with labor market demand. Training, particularly for the unemployed, is inadequate, of poor quality and disconnected from market needs. The PES have insufficient and inadequately trained staff. Italy invests extremely little in ALMPs; there are plans to increase this budget, but no strategy on how to spend the additional funds. A change of direction would require a vision that does not expire after the next elections – an extremely high ask given Italy’s political landscape and history. And yet, the Dutch tech provider still argues that its job portal solutions will make the crucial difference.

Most of what is out there is basically gut feeling and creative marketing. So how about cutting through the BS and finding our way back to an honest, fact-based discussion? Well, to do that, we need to find out what the facts are. But for that, we first need to agree on the basics (for instance, define what constitutes a skill) and then generate reliable data based on these definitions. More about this in the next post…


[1] If you are interested in learning more about the issues surrounding predictive analyses based on occupational data (e.g. skills anticipation), read our CEOs talk featured in a recent ILO report.

[2] According to LinkedIn: The most in-demand skills were determined by looking at skills that are in high demand relative to their supply. Demand is measured by identifying the skills listed on the LinkedIn profiles of people who are getting hired at the highest rates.