This directors roundtable brought together experienced board members to answer a key question: What is the board director’s role in AI?

The Journal of People + Strategy

How Directors Should Lead on GenAI | Directors Roundtable

Tuesday, April 9, 2024

The generative artificial intelligence revolution is dramatically disrupting how work is done, forcing directors to make important decisions and ask the right questions. Dawn Zier sat down with three experienced board members to discuss a board director’s role in AI.

Participants

Saar Gillai, chairman of Liquid Instruments; director at Semtech

Anastassia Lauterbach, founder and CEO of AI Edutainment; director at Cyberion, RiskQ, and Aira Technologies; co-founder of the Austrian and German chapters of Women Corporate Directors

Carlyn Taylor, chief growth officer and global co-leader of corporate finance at FTI Consulting; director at Flowserve and The Hain Celestial Group

Moderator

Dawn Zier, former CEO of Nutrisystem; director at Hain Celestial Group, Prestige Consumer Healthcare, and Acorns

 

DAWN ZIER: Carlyn, you were in Davos earlier this year, where there was a lot of talk around generative artificial intelligence (GenAI). You also oversee AI strategy at FTI Consulting. Can you share some of your key takeaways from Davos?

CARLYN TAYLOR: There’s a lot of discussion about whether GenAI will replace humans in the workplace and in what types of roles. Right now, GenAI is just math. It may seem like it’s talking to you, but it’s really just math and statistics. It doesn’t actually think the way we do; it doesn’t have judgment.

At Davos, there was debate about whether we’re ever going to be able to create AI that has judgment. There were also conversations around the creation of artificial general intelligence (AGI) and superintelligence, the latter meaning that AI is smarter than humans. Some thought this might be 20 years away; others thought it would never happen.

Many of the AI discussions at Davos centered around regulation with a desire not to overregulate. AI is going to solve many difficult problems for the world and mankind. It’s going to be much more positive than negative. But safeguards need to be in place. Some AI experts said the EU is going to regulate AI to such an extent that they’re going to kill off AI entrepreneurship there. In the U.S., neither political party seems inclined to overregulate it.

ZIER: Anastassia, you bring a European and academic perspective along with many years of industry expertise. What are your thoughts? Saar, please bring the Silicon Valley voice to the conversation.

ANASTASSIA LAUTERBACH: I am against the European AI Act for several reasons. It doesn’t address any actual risk of AI technologies. Deep learning is mainstream in machine learning today. The machine will execute admirably and deliver reliable results as long as the problem is clearly stated and all training parameters are clear.

However, unlike humans, machines don’t understand what is in front of them. This is why we see a lot of “hallucinations” in ChatGPT and its cousins. There will also be issues with biases in existing datasets. For example, there is a gender gap in medical applications because, historically, the health care industry developed therapies on cohorts of healthy young male subjects. Also, it’s difficult to apply a one-size-fits-all approach to the human-in-the-loop problem. Every application is different. Only experts can decide what is contextually valid. Bureaucrats won’t be capable of helping.

Midsized AI companies spend 25 percent of their revenues on the cloud and 15 percent on data hygiene and pre-processing. Additional expenses for regulatory compliance might put them out of business. The EU AI Act is now under review by the legislators, but it’s dangerous to have people regulating things they don’t understand.

If implemented, the best companies will leave Europe, and development will move abroad, leaving Europe competitively disadvantaged. Depending on the degree of regulation, Europeans might have limited consumer experience if regulators start banning certain services coming into Europe. The Middle East, especially Saudi Arabia, is opening its doors and incentivizing companies to move there to develop AI. This is the one region worldwide without financial issues, so it offers fertile ground for those looking for investment dollars.

SAAR GILLAI: I come from Silicon Valley, but I also have a military intelligence background. So while I support innovation 100 percent, there needs to be balanced regulation.

We talk a lot about physical border control, but what about protecting our digital border? Before the web, I couldn’t just bring anything I wanted into the country; everything went through passport control. TikTok has no passport control. China solves its border problem with the Great Firewall, but that’s authoritarian. However, without digital passport control, we can lose our sovereignty. Balancing digital innovation and regulation, including GenAI, is a complex issue. There’s no silver bullet.

Pivoting to superintelligence, I don’t think we’re close to that. GenAI is taking information and spitting stuff out based on pattern recognition. GenAI lacks wisdom, even though it may seem to emulate it. It’s a very good actor with an authoritative, confident swagger. But it’s just the next level of data processing—a faster horse.

 

“Many of the best tools are open source and often free.” – Saar Gillai

 

ZIER: GenAI is a clear disruptor. One could argue it levels the playing field by expanding access to data and insights. Do companies need to be the first to move?

GILLAI: We’re at the very early stages of this. I have published a pyramid framework that moves from data to information to knowledge to wisdom. The first stage of this revolution is about taking information and turning it into knowledge, in the same way that the first stage of the web was about taking stuff you had lying around and making it easy to access.

Many free, quality GenAI materials are available online to help people at all levels understand and learn it. Additionally, many of the best tools are open-source and often free. Most companies don’t need to build their own language models or spend a lot of money. The rewards will come to those who are curious. Watch what some of the cutting-edge companies and start-ups are doing. You will lose a competitive edge if you’re not playing with AI and experimenting.

TAYLOR: I agree with Saar. There is a wide range of maturity within large companies worldwide, so don’t be concerned if you’re just getting started on thinking about how to respond to AI. However, the AI revolution is moving faster than the internet revolution. We are at an inflection point. Companies that don’t proactively engage will get left behind.

Disruption is already visible in the creative industries that produce electronic content. These industries will be revolutionized very quickly. There’s also a lot happening in the tech and health care fields and, to a lesser extent, finance because of how regulated financial services are today. The most dramatic changes in health care are happening in R&D for new drugs. In financial services, business leaders are taking a more measured approach due to the risks and potential biases around sensitive customer information they are responsible for protecting.

LAUTERBACH: Building on what Carlyn said, GenAI creates huge copyright issues. If GenAI creates something in the arts or literary space based on the past works of authors, artists, or musicians, is that really original content? We have to be careful about using synthetic data in the medical world to ensure we aren’t amplifying false or misleading information, and we still need to solve the larger issue that the learning models behind GenAI are flawed due to their mathematical architecture.

I do not believe that GenAI levels the playing field. Those who master the infrastructure and those who master the curve game will win. There will be costs around regulation and compliance that could erode margins. It will be hard for smaller companies and start-ups to compete.

ZIER: I think the point about input bias is very important. I was just at a conference where GenAI was being demonstrated, and it was asked to create images of a CEO, a terrorist, and a housekeeper. The inherent biases based on historical stereotypes were immediately apparent. This kind of bias in data can also present itself in talent searches, as there have been instances of women and people of color not “showing up.” We need to be careful that we don’t take an unintended step back.

ZIER: How should companies and boards begin to think about GenAI?

TAYLOR: The first thing boards should do is make sure that their companies have a strong senior technical expert who understands what AI can do and can work with the business executives to think through best use cases. Second, the company should explore the general tools available from Microsoft, OpenAI, Google, etc., to see if any are worth deploying. They should also review open-source point solution tools, which are often more valuable, and free. Don’t invest too heavily or too quickly in one or two solutions, at least not yet.

Third, get your data organized if it isn’t already. Data needs to be organized in a way that protects confidential information from getting outside your company and has permissions around who has access internally. Finally, experiment and conduct proof-of-concept tests.

GILLAI: Companies want to bring in experts, but there are no real experts for most use cases. The experts are people with some technical sense who play with it, learn, and understand. Technical people in your company should experiment and partner with cross-functional areas on thinking through use cases.

 

“Boards need to assess whether their companies need to be pushed to get started or slowed down to avoid risks.” – Carlyn Taylor

 

The safest place to start is with your company’s public data, as there’s no privacy risk. Leading-edge companies are putting all the information from their website into a language model and using it to answer product questions. This streamlines the customer service experience while simultaneously improving satisfaction.
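To make that concrete, here is a minimal sketch of the pattern Saar describes: answering customer questions grounded only in public website content. The page snippets, the keyword-overlap retrieval, and the prompt format are illustrative assumptions rather than a production system; a real deployment would typically use embedding-based search and whichever language model the company has approved.

```python
# Minimal sketch (assumed, not a production system): retrieve the most relevant
# public website snippets for a customer question and assemble a grounded prompt.
import re
from collections import Counter

# Public website content only, so there is no customer-privacy risk.
PAGES = {
    "returns": "Products can be returned within 30 days with the original receipt.",
    "warranty": "All devices carry a two-year limited warranty covering defects.",
    "shipping": "Standard shipping takes three to five business days.",
}

def tokenize(text: str) -> list[str]:
    """Lowercase the text and split it into simple word tokens."""
    return re.findall(r"[a-z0-9]+", text.lower())

def retrieve(question: str, k: int = 2) -> list[str]:
    """Rank page snippets by crude word overlap with the question."""
    q_words = Counter(tokenize(question))
    ranked = sorted(
        PAGES.values(),
        key=lambda page: sum(q_words[w] for w in tokenize(page)),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(question: str) -> str:
    """Assemble the grounded prompt that would be sent to the company's model."""
    context = "\n".join(retrieve(question))
    return (
        "Answer the customer question using only the context below.\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

if __name__ == "__main__":
    print(build_prompt("What is the return policy for products?"))
```

Because the model only sees content the company has already published, the remaining work is keeping the snippets current and having a person review answers before they reach customers.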

Content marketing and press releases are other great use cases. GenAI can do 80 percent of the work, and then you need a smart person to finalize the content. GenAI can be an expert assistant, allowing individuals to level up and do more interesting work while improving productivity.

ZIER: As directors, how should we be thinking about oversight for AI?

LAUTERBACH: When it comes to oversight for AI, directors need to understand cyber risk and have clear action plans for what to do in case of an attack. They need to consider how AI pertains to the company’s business model and how it can drive profitability. Finally, they need to make sure that the data strategy mirrors the competitive strategy.

GILLAI: The board needs to make sure the right structure and processes are in place. Ask who owns it. It should be someone in IT. How are you protecting data? What policies are in place? What contracts are in place, and how are third parties being vetted?

The board will need to provide oversight that balances embracing AI with the right structure and guardrails. Cyberattacks will become more sophisticated with the use of AI, so you will want to revisit corporate controls and make sure that you have double triggers for approvals and other things.

TAYLOR: Boards need to assess whether their companies need to be pushed to get started or slowed down to avoid risks, which depends on their maturity and speed. While the opportunities are exciting, boards should balance AI enthusiasm with regulatory compliance to avoid unintended consequences.

It’s important for leaders in regulated industries or industries with sensitive customer data to have guardrails in place and experienced individuals monitoring for bias and other risks. Here are some questions boards should be asking:

  • What are the use cases that the management team has thought through?
  • Who on the management team is accountable for the AI strategy?
  • How can we drive productivity?
  • What guidelines and governance are needed around the use of AI?
  • How is data being protected?
  • What permissions are in place?
  • What data is being used to train the AI, and how diverse and representative is it?
  • How do we attract the next generation of AI talent?

ZIER: How should directors be using GenAI in the boardroom?

 TAYLOR: Boards should not replace their own judgment and thinking, based on years of real-world experience, with the output of a new tool. AI is just one of a variety of methods boards can use to draw inspiration for finding the right questions. I use ChatGPT to summarize topics I don’t know much about, but it’s not good for answering specific questions on a specific situation. It’s important to remember that it doesn’t always answer factual questions correctly, so refrain from asking it for specific or recent facts.

GILLAI: One of the challenges for board directors is making sense of information they don’t deal with daily and recalling information across multiple meetings held months apart. GenAI could be a good tool for sorting through this.

One of the board portals could have an overlay that allows you to ask questions such as “What was said on this topic in earlier meetings?” and “How have the numbers changed?” Over time, GenAI should enable the board to ask smarter questions and make more informed decisions because the information will be at their fingertips and presented more easily.
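As a rough sketch of what such an overlay might do behind the scenes, the snippet below filters hypothetical meeting minutes for a topic and returns the excerpts in date order, ready to be summarized by a model or read directly by a director. The records, field names, and figures are invented for illustration.

```python
# Rough sketch (invented data): surface every past-meeting mention of a topic,
# in chronological order, so a director can see how the discussion evolved.
from datetime import date

MINUTES = [
    {"meeting": date(2023, 9, 12), "notes": "Gross margin at 41%. Board asked for an AI use-case inventory."},
    {"meeting": date(2024, 1, 30), "notes": "Gross margin improved to 43%. Management presented three GenAI pilots."},
    {"meeting": date(2024, 4, 9),  "notes": "Pilot results reviewed; the customer-service GenAI pilot cut response times."},
]

def topic_history(topic: str) -> list[str]:
    """Return dated excerpts from prior meetings that mention the topic."""
    return [
        f"{m['meeting'].isoformat()}: {m['notes']}"
        for m in sorted(MINUTES, key=lambda m: m["meeting"])
        if topic.lower() in m["notes"].lower()
    ]

if __name__ == "__main__":
    # "What was said on this topic in earlier meetings?"
    for excerpt in topic_history("GenAI"):
        print(excerpt)
```

Feeding these dated excerpts to a language model, rather than raw board books, keeps the question scoped to what the board has actually discussed.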

With more information readily available, we will need to maintain good governance and remind ourselves of the lines between the board and management. Importantly, GenAI does not give you wisdom or replace director judgment.

“It would be easy to view GenAI as a tool you could outsource thinking to, but I strongly caution against doing that.” – Dr. Anastassia Lauterbach

LAUTERBACH: I was involved in a huge crisis on one of my boards, and what I noticed is that directors love to outsource the thinking to lawyers or consultants. It would be easy to view GenAI as a tool you could outsource thinking to, but I strongly caution against doing that because GenAI lacks basic understanding. It’s OK to use it as a tool, but recognize that it is flawed, can be factually incorrect, and has an inherent bias. It’s a long way from replacing our human expertise and instincts.

One benefit of machine learning is that directors can have a real-time view of the company’s activities, especially financials and trends. If the company invests in a data processing engine and a real-time application environment, dashboards will become more robust.

GenAI 2029: Where Are We Headed?

We asked our roundtable of board directors to look into their crystal balls and tell us what the impact of generative artificial intelligence (GenAI) will be on the way we do business five years from now.

TAYLOR: This is one of the biggest revolutions I’ve seen in my career, similar to the internet. I think GenAI, with the proper guardrails in place, will improve productivity, help automate complex tasks, and make it much easier for people to interact with computers because you can use real-world language to ask the computer to do things. But GenAI won’t replace thinking, judgment, and creativity. It can free up more time and resources for people to focus more on important work and less on mundane tasks. It will make us quicker and more efficient.

GILLAI: I agree that GenAI will massively improve productivity because it will eliminate a lot of manual, repetitive work, in the same way Excel did when it came out. The machine will do a lot of the data work.

GenAI will level up people in the workforce. Many very smart people are not necessarily great writers, don’t create good presentations, and don’t know how to best organize their thoughts. GenAI can help them do that today. Everyone can have an executive AI assistant at their disposal.

I also think our education system must change. We will need to teach people how to think critically and gain knowledge and wisdom. They won’t be able to gain knowledge at work the way a junior person traditionally does because AI will be doing that work. It will be similar to how we teach people math even though they never have to do the calculations by hand.

LAUTERBACH: GenAI can provide the masses with broader access to education and creativity tools. It can excite children early on about the world of AI and encourage them to study neuroscience, computational science, computing, engineering, and linguistics. This will positively impact the future workforce, as we will have more diversity, which will reduce the risk of biases and optimize AI for everyone.

 

The Directors Roundtable was hosted by Dawn Zier, executive coach at The ExCo Group, the former CEO of Nutrisystem, and a current board member at Hain Celestial Group, Prestige Consumer Healthcare, and Acorns.