No technology leader can be blamed for feeling a bit overwhelmed by artificial intelligence (AI). On one hand, there’s the prevailing message that AI will not only fundamentally change business but completely transform how we live. On the other hand, CIOs and CTOs must not be blinded by the potential hype.
Those competing mindsets are causing many leaders to take a pragmatic approach to AI deployment strategy. According to the Workday C-Suite Global AI Indicator Report, nearly three-quarters of executives believe that AI will make a global impact. They cite increased operational efficiency, smarter decision-making, and enhanced competitive advantage as AI’s main benefits.
The promise of better results, while sparking excitement, also creates a deep sense of urgency for the technology pros charged with carrying out the AI-driven vision. And that urgency inherently breeds anxiety. The Workday study showed that leaders are primarily concerned about the immense pressure they face with AI- and machine learning-related decisions. Where do they apply resources? Where should they spend? Which initiatives take priority over others? What if they’re wrong?
That said, tech executives tend to be optimists, and that positive spirit extends to AI. CIOs and CTOs, after assessing the data and the forecasts, have little or no doubt about AI’s ability to boost productivity, foster greater collaboration, drive profits, and speed up the processes by which organizations find and develop talent. In fact, many recognize that as AI and machine learning evolve, so too will the capabilities of the people who actually do the work.
Another positive about AI, from the tech leader’s perspective, is that new roles will emerge while current roles fade in importance. In practice, that means an emphasis on constant learning and skills development. As such, AI will instill a new element of corporate culture in which people continuously enhance their abilities.
But as we mentioned earlier, these same optimistic leaders understand that they must be wary of hype. This is especially true of ChatGPT and the other forms of generative AI sprouting in the marketplace. Any AI model is only as effective as the data and information on which it is trained, and one can never predict whether an inaccurate ChatGPT result will lead to a mild corporate embarrassment or a total financial catastrophe.
In the Workday study, almost 60% of respondents admitted that organizational data is somewhat or completely siloed, which makes it difficult to get consistently accurate results from AI and machine learning. Leaders can’t maximize the power of AI when there’s ambiguity about the integrity of their data. And despite the pressure to implement AI initiatives as soon as possible, leaders can’t rush deployment just for the sake of shortening their to-do lists.
Indeed, data integrity is the crux of any AI effort. To that end, many tech leaders are turning to the NIST AI Risk Management Framework for guidance on the design and development of AI initiatives. This is one way that CIOs and CTOs are acknowledging the potential pitfalls of AI and machine learning while moving forward with confidence.
That ability to recognize possible danger and simultaneously take action is a trait that leaders across the organization must develop. From marketing to operations to human resources to finance to sales (and more), AI-driven tasks and strategies will be the new normal; it’s not a matter of if AI will become a primary tool but when it will be implemented.
“When” is the operative term here. The sooner an organization develops an AI culture based on data integrity, the greater the advantage it will have in the marketplace for customers, talent, sales, and reputation. And taking that first step is simply a matter of being cautious yet optimistic.