© All rights reserved. Powered by Techronicler 

How To Create Trustworthy AI


As AI becomes embedded into enterprise products and services, CIOs and CISOs must move beyond traditional supplier assurance and adopt a risk-led approach to evaluating how vendors design, operate, and govern AI. This is no longer just a technical concern; it is a question of trust, accountability, and organizational resilience. Agentic AI systems can take actions, trigger workflows, or make decisions with limited human input. Given this, CIOs and CISOs should insist on transparency around where AI is used, whether it is assistive or agentic, what decisions it can influence or execute, and how failure or misuse would affect the organization. If a vendor cannot clearly articulate this, it signals immature governance and elevated risk.

From a cyber security perspective, AI introduces a new and expanding attack surface. For agentic systems, the risk is amplified: compromised prompts, poisoned data, or manipulated context can result not just in incorrect answers, but in unauthorized actions. Leaders should assess whether vendors have explicitly considered threats such as prompt injection, data leakage through model outputs, model poisoning, abuse of autonomous workflows, and escalation of privilege through AI-driven automation. Mature vendors can demonstrate that AI and agentic risks are embedded into threat modelling, secure development, monitoring, and incident response.
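One concrete way to contain the escalation-of-privilege risk described above is a deny-by-default gate between an agent and the actions it can execute. The sketch below is purely illustrative, not any vendor's API; all names (`ActionGate`, the agent and action identifiers) are hypothetical assumptions for the example.

```python
# Illustrative sketch of a least-privilege action gate for an agentic system.
# Every proposed action is checked against an allowlist and the decision is
# logged, so unauthorized actions are blocked rather than merely detected.
from dataclasses import dataclass, field

@dataclass
class ActionGate:
    # Map each agent identity to the only actions it may perform.
    allowed_actions: dict[str, set[str]] = field(default_factory=dict)
    # Every decision is recorded for later assurance review.
    audit_log: list[tuple[str, str, bool]] = field(default_factory=list)

    def authorize(self, agent_id: str, action: str) -> bool:
        """Deny by default: unknown agents and unlisted actions are refused."""
        permitted = action in self.allowed_actions.get(agent_id, set())
        self.audit_log.append((agent_id, action, permitted))
        return permitted

gate = ActionGate({"support-agent": {"read_ticket", "draft_reply"}})
gate.authorize("support-agent", "read_ticket")     # permitted: in scope
gate.authorize("support-agent", "delete_account")  # denied: outside scope
```

The point of the audit log is that blocked attempts become evidence for incident response and governance forums, not just silent failures.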

Data protection remains fundamental. CIOs and CISOs must understand how customer data flows through AI systems, including whether it is used for training, fine-tuning, telemetry, or decision-making. For agentic AI, this extends to understanding what data the agent can access, what systems it can interact with, and what safeguards prevent it from acting outside intended boundaries. This is where close involvement from Data Protection Officers is essential, ensuring lawful basis, data minimization, transparency, and appropriate DPIAs for AI-enabled processing. The critical issue is not policy language, but technical and legal control.
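Data minimization of the kind a DPO would expect can often be enforced technically before any record reaches an AI vendor. A minimal sketch, assuming a hypothetical field list; the field names are illustrative, not a standard:

```python
# Hypothetical data-minimization step: strip fields an AI vendor does not
# need before a record leaves the organization's boundary.
SENSITIVE_FIELDS = {"email", "date_of_birth", "national_id"}

def minimize(record: dict) -> dict:
    """Return a copy of the record with sensitive fields removed."""
    return {k: v for k, v in record.items() if k not in SENSITIVE_FIELDS}

payload = minimize({"ticket_id": "T-42",
                    "email": "user@example.com",
                    "issue": "login failure"})
# Only ticket_id and issue are forwarded to the AI service.
```

This is the "technical and legal control" distinction in practice: a policy can claim minimization, but a filter like this makes it verifiable.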

Ethical risk is now inseparable from cyber and operational risk. CIOs, CISOs, DPOs, and governance boards should examine how vendors manage bias, unintended harm, and unsafe behavior, particularly where AI or agents influence people, access, prioritization, or outcomes. This requires evidence of testing, runtime guardrails, and clear escalation when behavior crosses acceptable boundaries. Governance boards play a vital role in setting organizational risk appetite and determining where AI use is acceptable—or not.

Business and operational stakeholders, and in some cases customers, will need representation on governance boards and assurance that the right approach is in place. This may require education and training so that members understand their new responsibilities.

A key challenge today is ongoing assurance of the LLM and agent behavior itself. Even when AI is procured safely, models and agents evolve. Leaders should expect vendors to continuously validate accuracy, detect bias drift, monitor agent decisions, and identify signs of data poisoning or adversarial manipulation. Assurance must include logging, evaluation, and testing of both outputs and actions—not one-off pre-deployment checks—and should be visible to risk and governance forums, not just IT.
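Continuous validation does not have to be elaborate to be better than one-off checks. The sketch below, with hypothetical names and thresholds chosen for illustration, tracks evaluated outputs over a rolling window and flags when accuracy drifts below an agreed floor:

```python
# Illustrative drift monitor: score each evaluated output as correct or not,
# keep a rolling window, and flag when accuracy falls below a threshold.
from collections import deque

class DriftMonitor:
    def __init__(self, window: int = 100, threshold: float = 0.9):
        self.results = deque(maxlen=window)  # rolling pass/fail history
        self.threshold = threshold

    def record(self, correct: bool) -> None:
        self.results.append(correct)

    def drifting(self) -> bool:
        """True when rolling accuracy drops below the agreed floor."""
        if not self.results:
            return False
        return sum(self.results) / len(self.results) < self.threshold

monitor = DriftMonitor(window=10, threshold=0.8)
for outcome in [True] * 7 + [False] * 3:
    monitor.record(outcome)
# 7/10 = 0.7 accuracy, below the 0.8 threshold, so drift is flagged
```

The same pattern applies to agent actions: record whether each executed action was within policy, and surface the trend to risk and governance forums rather than leaving it inside IT.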

This challenge is further complicated by AI supply-chain risk. Most vendors do not build AI end-to-end; they rely on model providers, cloud platforms, data sources, plug-ins, and agent frameworks. CIOs and CISOs must therefore assess not only the vendor, but the vendor’s dependencies: who provides the underlying model, where training data originates, how updates are propagated, and how downstream risks are managed. When visibility stops at the first tier, organizations inherit hidden risk without awareness or control.
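Mapping dependencies beyond the first tier can be sketched as a simple graph walk over a vendor's declared supply chain. The structure and names below are hypothetical, but the principle stands: visibility should not stop at the vendor you contract with.

```python
# Hypothetical sketch: flatten a vendor's declared AI dependency tree so
# that second- and third-tier providers become visible for assessment.
def all_dependencies(tree: dict, vendor: str) -> set:
    """Return every direct and transitive dependency of the given vendor."""
    seen = set()
    stack = [vendor]
    while stack:
        current = stack.pop()
        for dep in tree.get(current, []):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

tree = {"VendorA": ["ModelProviderX", "CloudY"],
        "ModelProviderX": ["DataBrokerZ"]}
deps = all_dependencies(tree, "VendorA")
# Surfaces DataBrokerZ, a second-tier dependency the contract never names
```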

However, it is essential to be clear about accountability. CIOs and CISOs are not the owners or maintainers of vendor LLMs or agentic models. Responsibility for model design, training data, agent logic, and ethical alignment sits primarily with the vendor or model provider. The role of the CIO and CISO is governance and assurance: ensuring accountability is explicit, risks are understood, and controls are enforced. DPOs provide legal and data protection oversight, while governance boards ensure decisions align to organizational values, regulatory obligations, and risk tolerance. When these boundaries are blurred, organizations accept risk without control.

From a CISO’s perspective, it is also important to be honest about current practice. Vendor assurance is rarely continuous today. For most organizations, assessments remain point-in-time activities triggered by procurement, renewal, or incidents. Only the most mature organizations reassess strategic vendors annually, and even fewer do so in response to incremental AI or agentic capability changes. AI breaks this model. The realistic response is not to pretend continuous assessment exists, but to prioritize high-impact vendors, significant AI changes, and agentic capabilities that materially alter risk, and to escalate these changes through appropriate governance forums.

To manage this complexity, leading organizations are adopting a layered vendor assurance approach: deeper scrutiny for higher-risk AI use cases, evidence-based assurance rather than policy reliance, and ongoing engagement aligned to material change. Contracts reinforce transparency, notification of AI or agent updates, and clear accountability for failures.
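The layered approach can be expressed as a simple tiering rule. The tiers and criteria below are an illustrative assumption, not a standard; each organization would calibrate them to its own risk appetite.

```python
# Hypothetical risk-tiering rule for layered vendor assurance: assurance
# depth depends on whether the vendor's AI is agentic and how material
# its impact on the organization would be.
def assurance_tier(agentic: bool, high_impact: bool) -> str:
    if agentic and high_impact:
        return "deep"      # evidence-based review, ongoing engagement
    if agentic or high_impact:
        return "enhanced"  # reassess on material AI or agent changes
    return "standard"      # point-in-time questionnaire suffices

assurance_tier(agentic=True, high_impact=True)  # returns "deep"
```

Encoding the rule, even informally, forces the governance board to state its risk appetite explicitly rather than deciding tier by tier ad hoc.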

Ultimately, trustworthy AI and agentic systems are a shared responsibility. CIOs and CISOs act as orchestrators, but success depends on collaboration between security, data protection, legal, risk, vendors, and governance boards to ensure AI-enabled services remain secure, ethical, lawful, and aligned to organizational risk appetite. In practice, trust in AI is built not on promises or frameworks alone, but on disciplined governance, realistic assurance, and clarity about who owns what, especially as systems move from answering questions to taking actions.

Richard Holland is Field CISO at Quorum Cyber. He has over 25 years’ experience across sectors including higher education, local government, housing, aerospace, defense, finance, and outsourcing. Prior to joining Quorum Cyber, Richard held several senior positions, including Assistant Director of Office CIO at Queen Mary University of London, Head of IT Systems and Cyber Security at Notting Hill Genesis, Assistant Director Technology Innovation at London Borough of Waltham Forest, and Head of Business Solutions at Genesis Housing Association.
