AI governance is becoming a critical business priority as artificial intelligence moves deeper into corporate operations, customer-facing platforms, and decision-making systems, Deloitte Sri Lanka has warned.
As organisations across industries increase their use of artificial intelligence, the need for stronger oversight, risk management, transparency, and accountability is becoming more important than ever. Deloitte Sri Lanka said companies can no longer treat AI as an experimental technology managed only by technical teams.
“AI adoption is accelerating across industries, but so are the associated risks. Organisations need to move beyond experimentation and focus on building structured governance around how AI systems are developed and used,” Vengadasalam Balagobi, Cyber and Technology Risk Head and Information Security Leader at Deloitte Sri Lanka & Maldives, said.
His comments come as businesses explore generative AI and other advanced AI applications while facing growing concerns over accuracy, bias, data privacy, cybersecurity, and regulatory compliance. The pace of adoption has prompted questions over whether organisations are building controls as quickly as they are taking on new AI capabilities.
ISO/IEC 42001, published jointly by the International Organization for Standardization and the International Electrotechnical Commission in December 2023, is now emerging as an important framework for organisations seeking to build trustworthy, transparent, and accountable AI systems.
“Frameworks such as ISO 42001 provide a practical starting point to strengthen oversight, manage risks effectively, and build trust with stakeholders,” Balagobi said.
As AI becomes more embedded in business functions, the risks are no longer limited to technology departments. They are increasingly becoming boardroom issues, especially as companies attempt to scale AI responsibly while protecting stakeholder confidence.
ISO 42001 provides a structured management system standard for AI governance and risk management across the AI lifecycle. It covers governance structures, accountability, risk assessment, transparency, fairness, and mechanisms to support compliance with changing legal and regulatory requirements.
For organisations seeking to adopt AI more confidently, the standard offers a practical route to strengthen internal oversight. It also helps businesses demonstrate readiness to customers, regulators, investors, and other stakeholders who are now watching how AI is designed, deployed, and monitored.
From a Deloitte perspective, the importance of ISO 42001 is not limited to certification. It represents a more disciplined and sustainable approach to managing AI-related risks while building the foundations for long-term trust.
As artificial intelligence takes on a larger role in business operations and decision-making, this level of maturity will become increasingly important. However, questions remain over how many organisations are ready to move from AI enthusiasm to responsible AI governance.
The relevance of ISO 42001 is also tied to the wider direction of global regulation. AI-related requirements are continuing to evolve across jurisdictions, and many areas covered by ISO 42001 align with broader regulatory and governance expectations.
For businesses, early alignment with such a framework can support risk management while also improving future readiness. This is especially important as governments, regulators, and customers become more alert to the consequences of poorly governed AI systems.
Another strength of ISO 42001 is that it can build on capabilities many organisations may already have in place. Existing controls and processes around data governance, information security, privacy, enterprise risk management, and internal audit can often serve as a starting point.
This gives organisations an opportunity to assess what already exists, identify gaps, and strengthen coordination across teams involved in AI development, deployment, and oversight. What happens next could be critical as businesses decide whether AI governance becomes a real operational discipline or remains only a policy document.
“Organisations should take a structured approach when strengthening AI governance. This includes assessing existing capabilities across governance, security and compliance, establishing clear ownership of AI risk management responsibilities, and ensuring there is sufficient evidence to demonstrate that AI systems are operating effectively and sustainably over time,” Balagobi said.
As conversations around AI shift from experimentation to enterprise adoption, governance will play a central role in determining how confidently organisations can scale these technologies. ISO 42001 offers a timely framework for businesses seeking to balance innovation with accountability while building trust in how AI is designed, deployed, and monitored.
Deloitte remains committed to supporting organisations as they navigate the evolving AI landscape, strengthen governance practices, and build systems that are trusted, resilient, and fit for a rapidly changing business environment.
