Leading Through Disruption
Soft Skills Are Back | Aleksandra Przegalinska, Vice Rector, Kozminski University
Aleksandra Przegalinska, Vice Rector at Kozminski University, Harvard CLJE Senior Researcher, and Member of the Board of Advisors at CampusAI Poland, shares her key insights on AI as the great equalizer, the nuance of the human voice, and the rise of soft skills in leadership in this “Leading Through Disruption” interview with The ExCo Group EMEA Managing Director, Anastassia Lauterbach.
Lauterbach: Could you please tell us about your path and how you became the AI Queen of Poland and an influencer in academia beyond your own country?
Przegalinska: I was always curious about different things, from biology to linguistics, coding to philosophy. My bachelor’s studies were in journalism and communication. Learning about media theories, understanding how propaganda is built, and how media influence mass audiences increased my interest in the intersection of technology and philosophy. In my master’s studies, I joined a course with Professor Sierocki. ‘Where does the mind end?’ ‘Is Artificial Intelligence (AI) going to be the dominant technology of the 21st century?’ ‘Is AI the biggest philosophical project of the 21st century?’ These were mind-blowing and highly formative questions. I attended courses on semiotics and logic, philosophy of mind, and agency, and all these subjects shaped my path into AI, especially conversational AI. Around 2000, language processing wasn’t as popular a topic as it is today. We lacked the hardware to train models and the datasets to work with. But I loved the interdisciplinarity of natural language processing. I wanted to explore the limitations of the Turing test and how we could improve the performance of chatbots, which were miserable back then.
At the end of my studies, I was given a chance to segue into diplomatic services and work for the Polish diplomatic corps in Brussels. Still, I wanted to develop my own voice. In diplomatic services, you represent a large structure. I tried to understand what I could contribute individually. I came back to science and have stayed there ever since. I did my postdoc at MIT at the Center for Collective Intelligence without knowing we would witness a boom in AI and generative AI in a few years. Large Language Models (LLMs) outperformed my wildest expectations.
Lauterbach: AI augments everything in our professional and personal lives. How can people find their own voices in the age of AI?
Przegalinska: Technology and AI have recently become the great equalizer. No-code and low-code environments have reduced barriers to entry for people regardless of their educational background and initial skill sets. Before 2023, AI might have been intimidating, as you were required to master formal programming languages, data science, and machine learning to plunge into it. You couldn’t enter the field without a very early commitment to it. Today, technology presents itself to whoever is interested. Everyone can prompt and get an output; this trend will only grow and touch every industry, business function, and educational field. Finding one’s own voice becomes a deep and highly challenging question. It is about self-expression beyond the primary use of generative AI models. Self-expression requires self-awareness, the ability to make choices in a world oversaturated with superficial ‘can-do’ offerings. Human voices will become very nuanced while AI plays with the average.
We are witnessing the beginnings of the mass adoption of technology, and every person will decide what they are using these technologies for. Are you looking for a personal assistant or a sparring partner? For a colleague to take on the burden of monotonous and repetitive tasks, or a researcher to suggest new paths in your R&D program? In the coming years, we will redefine the notions of supervision and agency in the workplace and maybe rediscover ourselves.
Lauterbach: There are many fears about the use of AI across creative professions. We witness that great singers and artists don’t mind using generative AI to reach their audiences across time zones and cultural circles. The advertising industry personalizes messaging at low costs. At the same time, we see Hollywood writers and actors going on strike as they fear AI might be after their jobs. What trends do you see in AI adoption and its influence on the future of employment?
Przegalinska: At some point in time, once a true ubiquity of AI is reached, we will stop talking about it much. We don’t ask similar questions about the internet, as it is ubiquitous. We don’t question that there is PowerPoint instead of a human graphic designer. But one trend will have massive consequences on everything we humans do. I am talking about deepfakes. Already today, we can have a conversation with an avatar of Marilyn Monroe, asking her what she would do today in one or another situation. We can all speculate about possible futures with Marilyn Monroe. Some organizations might want to bring on board that fantastic CTO they can’t afford in real life. How will its simulated advice impact what happens to the human teams?
We will see different types of search engines based on generative AI. AI will shape our interaction with cloud services. The social media landscape will change, too. We will interact with digital avatars of celebrities and cartoon figures, but to what end – I can’t tell.
Lauterbach: How will people maintain their agency while constantly surrounded by AIs?
Przegalinska: You are asking the most challenging question there is. We can easily fall into a pessimistic determinism where we say, hey, we will just be managed by technology. Technology will be the crucial decision-recommender, and we will just trust it. When it comes to trivial things, we won’t be against it. I wouldn’t be bothered if AI suggested a restaurant nearby instead of me just searching for it.
But we must keep an eye on the decision process itself. European AI regulation is trying to draw our attention to who decides things. Where do we want code to decide, and where do we rely on human agency, with all the responsibilities attached? This is something to consider in the future. We can experiment with different types of interfaces that enable more agency. I saw such projects in the healthcare sector, for instance. Besides, there will be jobs dedicated to explaining AIs. The balance between explainability and decision-making will be based on certain thresholds. When my team at Harvard builds a chatbot, it is all about defining thresholds. When does AI need to ask more questions? When will it just delegate a task back to you? Those building AIs will be more conscious about what makes us truly human and what we must decide.
Lauterbach: What do you think about the transition period from now to the world with deep penetration of AIs? For example, when I started working for McKinsey, I did a lot of analytical tasks, numerical analyses, and basic problem-solving exercises demanding the collection of benchmarking data. This training enabled my progression to further levels. How will people learn if AI masters basic skills and can substitute junior workers?
Przegalinska: AI can be an amplifier and a co-pilot. Organizations will consciously let people learn, even if technology can take over completely. We might witness an exciting redefinition of a ‘junior’ in a business. It won’t be about ‘can do’ but about psychological maturity. AIs allow a junior to grow quickly. They will act as teachers and supervisors.
Lauterbach: How should corporate HR leadership address the changing landscape of co-opetition between people and technology?
Przegalinska: I recently participated in the Shaping the Future of Work conference at MIT and listened to someone from LinkedIn. She stated that professional intelligence and employees’ education still don’t score high in their data. For example, onboarding and training are categories in jobs with slow progression. I think this is because AI is still considered a silo, not a universal equalizer and influencer. If HR professionals don’t strategize around the impact of AI on their organizations, they miss vital points and fail to prepare their companies for what will become mainstream.
Lauterbach: What leadership qualities would be essential for humans in the age of AI?
Przegalinska: Soft skills are back. Understanding team dynamics, reading the humans who work with you, collaborating, and competing are essential qualities. For instance, I feel that schools are still very competitive in Poland. I’m not sure one can nurture children in teamwork and recognize the diversity within a team while establishing a highly competitive environment. We don’t pay enough attention to human relations in general. My daughter is now at a school in Cambridge, Massachusetts. She has a class called Health, which is about physical, economic, and social health. In the age of collaboration between humans and machines, we need more insight and mindfulness about what constitutes ‘health’ for a human being. Besides, technology literacy and the ability to communicate complex concepts to a broad audience are essential.
Lauterbach: Who were your most important influencers contributing to your vision’s evolution and professional path?
Przegalinska: I have already mentioned Professor Sierocki, who put AI into a broad context of philosophical studies. In the US, I met fantastic women in academia and learned from them. For example, Rosalind Picard created the Affective Computing Research Group at the MIT Media Lab. She co-founded the startups Affectiva and Empatica. Finally, Richard Barry Freeman advised the Solidarity movement in Poland on how to form labor unions. He thinks about how AI can amplify and support jobs and daily work. This thinking is highly inspirational.
This interview on AI and soft skills is part of Leading Through Disruption, our series featuring powerful conversations with leaders navigating this era of relentless change.