Is AI coming for your job?

In a post-AI world, where an algorithm can draft marketing copy—or even pop songs and movie scripts—anything seems possible. Harvard Business School faculty members discuss how artificial intelligence could reshape how work gets done.

By Kristen Senz
Published: May 11, 2023 10:28:31 AM IST
Updated: May 11, 2023 11:39:36 AM IST

Image: Shutterstock

The launch of ChatGPT seems to have reignited doomsday fears about artificial intelligence (AI) replacing workers en masse. Are these fears prescient or overblown? A recent survey shows that 62 percent of Americans think AI will have a major impact on workers over the next 20 years, yet only 28 percent believe the technology will affect them personally.

Harvard Business School faculty members share their thoughts below about how AI will reshape the workforce and the skills necessary to succeed in the years ahead.

Joseph Fuller: Brace for massive workforce changes

The deep learning-based AI tools that are now being introduced will have a profound impact on the labor market, leading to the eventual elimination of many jobs and the restructuring of many others. The effect will be particularly acute among knowledge workers—those who do what has been traditionally defined as non-routine cognitive work. Many people in such roles have been insulated from automation and globalization. That is about to change.

The change is likely to follow a path similar to the one a character in Ernest Hemingway’s The Sun Also Rises used to describe his descent into bankruptcy: It occurred in “two ways … gradually and then suddenly.” Companies will move slowly to deploy generative AI technology like that embodied in OpenAI’s ChatGPT. Harnessing the immense pool of data underlying it will require companies to develop proprietary machine-learning systems and to add talent that is in short supply.

"ONCE COMPANIES LEARN HOW TO EXPLOIT GENERATIVE AI, WE CAN ANTICIPATE RAPID RESTRUCTURING AT MANY COMPANIES THAT INVOLVE SUBSTANTIAL CUTS IN WHITE-COLLAR STAFF."

Historically, companies have asked, “How can this new technology improve the efficiency of our existing processes?” That is an irrelevant construct when considering how to harness generative AI’s capabilities. Processes ranging from negotiating contracts with vendors to developing marketing messages will be redesigned from the ground up in order to exploit the full potential of this new technology.

Once companies are confident that they understand how to use cognitive AI to transform their operations, the impact on workers promises to be dramatic. White-collar workers whose job security was founded on their knowledge of complex processes and ability to integrate information from various sources quickly to make decisions will be displaced in large numbers.

Those job losses will be partially offset by job gains for machine-learning specialists and emerging roles like prompt engineers. But once companies learn how to exploit generative AI, we can anticipate rapid restructuring at many companies that involves substantial cuts in white-collar staff.

Joseph Fuller is a Professor of Management Practice in General Management and co-leads the Managing the Future of Work initiative at HBS.

Ayelet Israeli: For now, AI still needs human intervention

In the near future, AI will be better used as a supplementary tool to help experts perform their work. Yes, certain tasks can be completed correctly and completely by AI, but for those types of tasks we will see something like the Industrial Revolution, where people’s jobs changed, and they were able to use new tools to become more productive and focus on other tasks instead.

At the same time, for other tasks, AI will provide useful outputs but will need humans to optimize those outputs and complete the tasks successfully. When thinking about “knowledge work,” it is especially important to consider that many of the generative AI tools we are seeing these days (such as ChatGPT) were not meant to reveal the truth or to display correct knowledge (though we have seen attempts to use them for this purpose). Instead, they were built to generate content (in this example, text) by producing the words that are most likely to come next. We cannot expect the output to necessarily consist of true statements.
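
To make that point concrete, here is a minimal, illustrative sketch (not from the article, and far simpler than a system like ChatGPT) of what “generating the words most likely to come next” means. It is a toy bigram model in Python; the corpus, function name, and parameters are hypothetical, and the only thing the code aims for is statistical plausibility, not truth.

```python
import random
from collections import Counter, defaultdict

# Tiny hypothetical corpus; a real model is trained on vastly more text.
corpus = (
    "the report was accurate . the report was late . "
    "the model was confident . the model was wrong ."
).split()

# Count how often each word follows each other word (a bigram model).
next_word_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_word_counts[current][nxt] += 1

def continue_text(prompt_word, length=6, seed=0):
    """Extend a prompt by repeatedly sampling a likely next word."""
    rng = random.Random(seed)
    words = [prompt_word]
    for _ in range(length):
        counts = next_word_counts.get(words[-1])
        if not counts:
            break
        choices, weights = zip(*counts.items())
        # The next word is chosen in proportion to how often it followed
        # the previous word in the corpus: plausibility, not correctness.
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(continue_text("the"))
# Prints a fluent-looking continuation (the exact words depend on the
# seed); nothing in the process checks whether the statement is true.
```

Large language models replace these simple counts with a neural network conditioned on the entire preceding text, but the core behavior Israeli describes, producing likely continuations rather than verified facts, is the same.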

"THESE TOOLS ARE KNOWN TO 'MAKE THINGS UP' (OR TO 'HALLUCINATE') AND, THEREFORE, CANNOT BE USED WITHOUT AUDITS FOR CORRECTNESS."

These tools are known to “make things up” (or to “hallucinate”) and, therefore, cannot be used without audits for correctness. Moreover, users may have additional knowledge or context that the AI doesn’t (e.g., knowledge the AI hasn’t been trained on, proprietary knowledge, or a better understanding of the specific task at hand).

Another risk with these generative AI models is that, without human intervention and audits, they are likely to generate content that perpetuates existing biases. When we train these models at scale on existing data, if the underlying data included biased information, the result is also likely to include that bias unless we intervene. One example we have seen in early image-generating AI is that when we ask for images of “a man,” it is very likely to create an image of a white man, likely because of the data on which it was trained. This issue of perpetuating biases raises a lot of interesting policy questions about who should be monitoring the outcomes and what rules and values AI should represent.

Ayelet Israeli is the Marvin Bower Associate Professor in the Marketing Unit and cofounder of the Customer Intelligence Lab at the Digital, Data, and Design (D³) Institute at HBS.

Iavor Bojinov: Those who resist AI risk falling behind

Like any technological innovation, artificial intelligence (AI) has the potential to transform knowledge workers’ roles, processes, and practices. To understand AI’s potential, we must differentiate between its applications as externally facing—enhancing product offerings—and internally facing—aimed at improving operational efficiencies. External-facing AI applications present opportunities for job creation by broadening the scope and scale of a company’s product portfolio. Conversely, internal-facing applications are more likely to affect knowledge workers’ jobs.

"AUTOMATING THESE TASKS WILL ENABLE KNOWLEDGE WORKERS TO CONCENTRATE ON VALUE-ADDING ACTIVITIES WHERE HUMAN EXPERTISE IS INDISPENSABLE ..."

The emergence of generative AI, such as ChatGPT, will soon be integrated into various tools employed by knowledge workers, automating numerous routine tasks like note-taking, document summarization, and drafting personalized customer messages. Automating these tasks will enable knowledge workers to concentrate on value-adding activities where human expertise is indispensable, such as interpreting context and nuance, exercising emotional intelligence, addressing moral and ethical considerations, and fostering creativity and innovation.

Furthermore, I expect a bifurcation of the workforce in the near future: individuals who embrace AI to enhance their productivity, potentially yielding substantial gains, and those who resist AI and risk falling behind. The latter group will likely face replacement by their AI-empowered counterparts. Thus, the prudent approach for knowledge workers is to harness the potential of AI as a complementary tool, amplifying their capabilities and adapting to the evolving landscape of work.

Iavor Bojinov is an Assistant Professor of Business Administration and the Richard Hodgson Fellow at HBS.

Edward McFowland III: Skillsets will shift

As for the doomsday scenario in which AI replaces everyone’s jobs, I don’t see that playing out anytime soon. As new technologies penetrate markets, they often change how the organizations in those markets compete. At the same time, and sometimes as a consequence, certain sets of skills become more important, while others become less so. I believe the same will be true with generative AI. Programmers, for example, will not need to write as much code from scratch. Creatives will not need to be their own muse for idea generation. All of this should increase productivity. However, AI models do make mistakes, whether in logic, efficiency, or inference. The skills to test, edit, and innovate on or otherwise improve AI outputs are likely to become more valuable.

The advent of the calculator didn't make math less important, but it did change what mathematical skills became important to organizations and, importantly, how we taught math in schools. It became less important for engineers building rockets at NASA, for example, to solve complex math problems in their heads. The ability to structure a problem or goal as a set of mathematical equations that the calculator could solve became more important. The calculator became an invaluable tool for rocket science, but it did not remove the need for math, engineering, or deep problem-solving by humans. In fact, since calculators were invented, engineering and managerial “miscalculations” have still occurred during attempts to put rockets into space because human decision-making and judgment continue to play vital roles in problem-solving.

"AI TOOLS CAN CREATE TREMENDOUS VALUE. HUMANS MUST DECIDE HOW BEST TO ADAPT IN ORDER TO HARNESS THE POTENTIAL OF THESE NEW TECHNOLOGIES WHILE MINIMIZING THEIR NEGATIVE CONSEQUENCES."

If we view AI as a supportive tool, augmenting human decision-making, then it is important to teach people how to structure problems and interactions with AI “optimally” and how to recognize (subtle) errors in AI output. As an academic, my job is to read the work of my students and colleagues, assess their assumptions, logic, and conclusions, and make connections. This invaluable set of skills must be taught to all users of generative AI tools. Critical thinking has been taught at many levels of education for a very long time, but just as math education changed with the entrance of the calculator, our overall approach to education must adapt to AI technology. There might even be a sub-discipline of critical thinking, not of essays or novels, but of AI models. One can even imagine interactive learning sessions in which instructors use tools like ChatGPT to teach concepts to students. The students might observe and engage with the AI tool and learn how to actively interrogate the responses it provides. I think this could make for beautiful, interactive learning sessions – much better than someone lecturing at them from the front of the room.

AI tools can create tremendous value. Humans must decide how best to adapt in order to harness the potential of these new technologies while minimizing their negative consequences.

Edward McFowland III is an Assistant Professor in the Technology and Operations Management Unit.

Tsedal Neeley: Companies and workers should focus on upskilling

Historically, technological revolutions have created more jobs than they have destroyed. The real concern that people should have is about whether they will be replaced by those who have a digital mindset, which is the ability to see new possibilities and chart a path for the future using data, algorithms, AI, and machine learning.

AI serves to augment or improve human performance. When computational and machine-learning algorithms perform an ever-increasing number of activities within organizations, the nature of jobs changes. For example, AI has fundamentally shifted the nature of Wall Street trading. It determines credit scores for existing and potential customers, screens applicants, assists in hiring, responds in real time to queries, and suggests new courses of action.

"AI SYSTEMS HELP PRODUCE HIGHLY ACCURATE SALES FORECASTS ... GIVING SALES MANAGERS AND SALES REPRESENTATIVES TIME TO FOCUS ON BUILDING RELATIONSHIPS, MANAGING, AND SELLING."

AI systems help produce highly accurate sales forecasts, which traditionally have taken managers days and weeks to pull together, giving sales managers and sales representatives time to focus on building relationships, managing, and selling. Software engineers can use services to autogenerate programming code for basic purposes, allowing them to write code more efficiently while spending more time on other activities, like system design and aligning with user experience.

Ultimately, individuals and organizations should focus on upskilling and scaling to make the most of new technologies.

Tsedal Neeley is the Naylor Fitzhugh Professor of Business Administration and the coauthor of The Digital Mindset: What It Really Takes to Thrive in the Age of Data, Algorithms, and AI.

This article was provided with permission from Harvard Business School Working Knowledge.
