In its efforts to deploy AI tools, international services firm Wolters Kluwer has created frameworks to ensure responsible AI development with continuous human oversight.
The Dutch company has woven AI into its core products for more than a decade, products that now drive about 50% of digital revenue. Wolters Kluwer’s strategy is to create an “AI toolbox” from which it can choose which models best fit any given business task. The company has also learned a key truth about the fast-moving technology: without clean data, AI produces errors and hallucinations.
Deep integration of AI — instead of relying on add-ons — has been a core approach to rolling out the technology at the nearly 200-year-old firm. For example, in its Tax & Accounting division, Wolters Kluwer pursues a strategy called “Firm Intelligence,” which leverages AI, its own content, and embedded platform integration to anticipate internal workforce and customer needs.
The Netherlands-based company has also established what it calls Responsible AI Principles that emphasize transparency, explainability, privacy, fairness, governance, and human-centric design.
In this Q&A, Wolters Kluwer CIO Mark Sherwood explained how his company has seen efficiencies in AI-assisted code generation and a closing of skills gaps.
Mark Sherwood, Wolters Kluwer
AI-assisted code generation tools are increasingly prevalent in software engineering. How has AI-assisted development changed your software development lifecycle? “We are beginning to see improvements in our software development lifecycle leveraging AI-assisted development. We are reducing the time it takes to generate code while vastly reducing the number of errors and the time it takes to test the new code. Our current targets are a 25% reduction in both metrics, and we are seeing signs that those goals are very achievable.”
Which AI tools have provided the most value to your engineering teams so far? “We use a mixture of LLMs [large language models], automated test assistants and domain-specific AI models, and we’ve found some of the native third-party tools are very good at some specific tasks. We have not (yet) found one tool that can do it all, but that may never be the case. We have chosen to go down the route of bringing an ‘AI toolbox’ with a number of different tools and picking the one(s) that we believe are best suited for the task at hand.”
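To make the “AI toolbox” idea concrete, here is a minimal sketch of one way such a router could look: a registry of tools keyed by the task categories they handle, with a picker that selects whichever tool is suited to the job. The tool names and task labels are purely illustrative assumptions, not a description of Wolters Kluwer’s actual systems.

```python
# Hypothetical "AI toolbox" router: a registry of models keyed by task type,
# plus a picker that returns the tool judged best suited to the job.
# All tool names and task categories below are illustrative only.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    handles: set[str]          # task categories this tool is suited for
    run: Callable[[str], str]  # the call into the underlying model or service

# A mix of a general LLM, a test assistant, and a domain-specific model.
TOOLBOX = [
    Tool("general-llm", {"summarize", "draft"}, lambda p: f"[general-llm] {p}"),
    Tool("test-assistant", {"generate-tests"}, lambda p: f"[test-assistant] {p}"),
    Tool("tax-domain-model", {"tax-research"}, lambda p: f"[tax-domain-model] {p}"),
]

def pick_tool(task: str) -> Tool:
    """Return the first registered tool that declares support for the task."""
    for tool in TOOLBOX:
        if task in tool.handles:
            return tool
    raise LookupError(f"No tool in the toolbox handles task: {task}")

if __name__ == "__main__":
    tool = pick_tool("generate-tests")
    print(tool.run("Write unit tests for the depreciation calculator"))
```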
Will AI-assisted code generation tools eliminate the need for as many software developers? Have you seen that in your own organization? “We firmly believe that AI-assisted code-generation tools will change the structure of software development teams over time, with fewer people needed for repetitive coding tasks. We believe this is particularly the case for more entry-level coding work, but we also see this as an opportunity to shift more junior talent into more advanced and creative projects early on in their careers.
“While we have not eliminated any existing roles to date due to AI-assisted code generation tools, we have reduced the number of open requisitions we used to have for software developers. We do not view AI as a way to eliminate current job roles, but more to allow software developers to work on other high-value tasks.”
How are you managing code quality, testing, and security with AI-generated code? “We are using AI to help with testing both AI-generated and human-generated code. It’s still early days so we have engineers involved, but we see a day in the very near future where we’re able to have AI test all code without needing human intervention. We do have security checks in place — they are a key part of our DevSecOps strategy, which lends itself well to leveraging the advantages of what AI brings.”
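One simple way to picture that kind of DevSecOps gate is a pipeline step that only accepts an AI-generated change if the automated tests and a security scan both pass, with an engineer still signing off. The sketch below uses pytest and Bandit as stand-in open-source tools; it is an assumed illustration, not a statement of Wolters Kluwer’s actual pipeline.

```python
# Illustrative gate for AI-generated code in a DevSecOps-style pipeline: the
# change is only accepted if the test suite and a security scan both pass.
# pytest and bandit are common open-source tools used here as stand-ins.
import subprocess
import sys

def run_check(name: str, cmd: list[str]) -> bool:
    """Run one pipeline stage and report whether it passed."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(f"{name}: {'PASS' if result.returncode == 0 else 'FAIL'}")
    return result.returncode == 0

def gate_ai_generated_change(path: str) -> bool:
    checks = [
        ("unit tests", ["pytest", path, "-q"]),
        ("security scan", ["bandit", "-r", path, "-q"]),
    ]
    return all(run_check(name, cmd) for name, cmd in checks)

if __name__ == "__main__":
    target = sys.argv[1] if len(sys.argv) > 1 else "."
    # A human engineer still reviews the change; this gate only blocks obvious failures.
    sys.exit(0 if gate_ai_generated_change(target) else 1)
```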
Is AI helping you close skills gaps or reduce dependency on specific roles? “AI is helping us both close skills gaps and reduce dependency on certain roles. The amount of interest and knowledge in AI and AI tools is increasing at a rapid pace and we are building up our own internal knowledge very quickly. In the initial phases, it’s more about improving the skills of software engineers and some more technical business roles. Going forward, it will allow us to reduce dependencies in a number of areas of engineering, both external and internal facing.”
How are large corporations — especially in regulated sectors like healthcare, finance and legal — deploying AI at scale while managing risk, data security and compliance challenges? “Managing risk is one of our highest priorities and a robust data security program is a critical piece of that strategy. We have safeguards in place to make sure we are only using our own internal data, which represents nearly 200 years of proprietary information, and we go to great lengths to ensure that data is managed and protected.”
What governance policies do you have around using generative AI? “We have created an AI Center of Excellence that has members from all organizations across the company, including our product development organization and our internal information technology organizations, which are driving this.
“The focus is on the product development organization, but both Product Development and Information Technology teams are key participants. Part of the charter of the team is to create and help enforce the governance policies around AI usage, including tools, and making sure that we prioritize the work being done across the teams.”
What’s coming next, including AI agents, quantum security risks and why data quality is essential to successful digital transformation? “AI is progressing at a rapid pace. We’re already developing AI agents and are working through the implications of having AI ‘employees.’ It’s exciting to see a mindset shift moving from thinking of AI as just a tool to viewing it as an operator. These systems will take on tasks, make decisions and function independently. This will have real implications for how we design products, structure workflows and approach accountability.
“Of course, none of this works without good data. AI models are only as effective as the information they’re trained on and without a strong data strategy, effective governance, and enterprise-wide participation, companies won’t be able to fully leverage AI agents. That’s why we place such a strong emphasis on ensuring our nearly 200 years of data at Wolters Kluwer remains accurate and reliable.”