
Local AI vs Cloud AI: What Board Directors Need to Know
You have received your board pack. It contains 294 pages: pre-market financial results, a confidential M&A discussion, the audit committee's findings on material controls, and legally privileged advice from external counsel. You want to use AI to help you prepare — to summarise the key risks, cross-reference the financials against the strategy paper, and surface the questions you should be asking at Tuesday's meeting.
The question is not whether AI can do this. It can. The question is where your documents go when it does.
This is the distinction that matters most for board directors considering AI tools: the difference between cloud AI, which processes your documents on external servers, and local AI, which processes them entirely on your own device. For most professional tasks, the distinction is academic. For board papers, it is fundamental.
How Cloud AI Works
When you use a cloud-based AI tool — ChatGPT, Google Gemini, Microsoft Copilot, or any of the enterprise board management platforms — your documents leave your device. They are transmitted over the internet to external servers, typically located in data centres operated by the AI provider or their cloud infrastructure partner. The AI model processes your documents on those servers and sends the results back to you.
This means your board papers exist, however briefly, on infrastructure you do not control. The provider's security practices, data retention policies, and jurisdictional obligations determine what happens to your documents during and after processing.
For most business documents, this model works well. Cloud AI offers powerful models, continuous updates, and the ability to handle large volumes of data. The security measures employed by major cloud providers are substantial — encryption in transit and at rest, SOC 2 compliance, and regular security audits.
But board papers are not most business documents.
Why Board Papers Are Different
Board papers occupy a unique position in corporate life. They routinely contain information that is market-sensitive, legally privileged, commercially confidential, or all three simultaneously. A single board pack may include:
- Pre-market financial results that could move share prices if disclosed prematurely
- M&A discussions involving named targets, valuations, and negotiation strategies
- Legal advice protected by privilege, which may be waived if shared with third parties
- Executive remuneration proposals that are politically sensitive before board approval
- Risk assessments that reveal vulnerabilities a company has not yet addressed
- Strategic plans that would give competitors a material advantage
The consequences of inadvertent disclosure are not abstract. For a publicly listed company, premature release of financial results can trigger regulatory investigation. For any company, loss of legal privilege through careless handling can undermine litigation positions worth millions. And for the individual director, personal liability under the UK Corporate Governance Code and Companies Act is real.
This is why the security architecture of your AI tool matters — not as a technical curiosity, but as a governance decision.
The Data Journey: Cloud vs Local
Understanding where your data travels is essential to making an informed choice. Here is what happens in each model:
Cloud AI: Your Documents Leave Your Device
- You upload or paste your board papers into the AI tool
- Your documents are transmitted over the internet (encrypted in transit) to the provider's servers
- The AI model processes your documents on those servers
- Results are sent back to your device
- Your documents may be retained on the provider's servers according to their data retention policy
- Your prompts and inputs may be logged for service improvement, abuse prevention, or model training
At each stage, your documents exist on infrastructure managed by a third party. Even with strong encryption, the data is decrypted during processing — the AI model must read your documents in plain text to analyse them.
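To make the journey concrete, here is a minimal sketch, in Python, of what a cloud AI request looks like from the device's side. The endpoint, field names, and payload shape are hypothetical rather than any real provider's API, but the pattern is the same everywhere: the full text of the document travels over the network inside the request body.

```python
# Minimal sketch of a cloud AI request. The endpoint and payload are
# hypothetical (no real provider's API is shown); the pattern is the point.
import requests

with open("board_pack.txt", encoding="utf-8") as f:
    board_pack = f.read()

response = requests.post(
    "https://api.example-ai.com/v1/analyse",  # hypothetical provider endpoint
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={
        "prompt": "Summarise the key risks in this board pack.",
        "document": board_pack,  # the entire document leaves your device here
    },
    timeout=60,
)
print(response.json())
```

TLS protects the document in transit, but as noted above, it must be decrypted on the provider's servers before the model can read it.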
Local AI: Your Documents Stay on Your Device
- You add your board papers to the AI tool on your computer
- The AI model, running locally on your device, processes your documents
- Results are generated and displayed — all on the same device
- Nothing is transmitted over the internet
- No data is retained on external servers because no external servers are involved
- Your prompts, inputs, and outputs remain entirely under your control
The difference is not incremental. It is architectural. With local AI, there is no data journey to secure because the data never leaves.
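Here is the equivalent sketch done on-device, using the open-source llama-cpp-python library as one example of a local model runtime. The model file path is illustrative, and a real tool would split a 294-page pack to fit the model's context window; the point of the sketch is that there is no network call anywhere in the flow.

```python
# Minimal sketch of local AI analysis. The model weights live on your own
# disk and inference runs in-process: no request ever leaves the machine.
from llama_cpp import Llama

with open("board_pack.txt", encoding="utf-8") as f:
    board_pack = f.read()

llm = Llama(
    model_path="./models/local-model.gguf",  # illustrative path to local weights
    n_ctx=8192,                              # context window sized for long documents
)

result = llm(
    f"Summarise the key risks in this board pack:\n\n{board_pack}",
    max_tokens=512,
)
print(result["choices"][0]["text"])  # generated entirely on this machine
```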
The Security Comparison
| Factor | Cloud AI | Local AI |
|---|---|---|
| Data leaves your device | Yes | No |
| Processed on external servers | Yes | No |
| Subject to provider's data retention policy | Yes | No |
| Potentially used for model training | Depends on provider and settings | No |
| Visible to provider's employees (in principle) | Possible, depending on access controls | No |
| Subject to foreign jurisdiction requests | Possible, if servers are overseas | No |
| Requires internet connection | Yes | No |
| Vulnerable to cloud infrastructure breaches | Yes | No |
This is not a theoretical risk assessment. In 2025, security researchers documented over 225,000 OpenAI credentials for sale on dark web markets, harvested by infostealer malware from compromised endpoints.1 In July 2025, a flaw in ChatGPT's sharing feature resulted in over 4,500 private conversations being indexed by public search engines — including financial queries and confidential business discussions.2 And in early 2026, Microsoft confirmed a security vulnerability that allowed its Copilot AI to read and summarise emails marked as "confidential" without user permission, bypassing Data Loss Prevention labels.3
These are not failures of incompetent providers. They are the inherent risks of any architecture where sensitive data is processed on shared infrastructure.
The Scale of the Problem
The evidence on how professionals are using AI with sensitive data is sobering.
According to Metomic's research, 34.8% of employee inputs to ChatGPT now contain sensitive data — up from 11% in 2023.4 That is more than a threefold increase in two years. The LayerX Enterprise AI & SaaS Data Security Report found that 77% of employees have pasted company information into AI tools, and 82% of those used personal accounts rather than enterprise-managed tools.5
For board directors specifically, the Diligent Institute found that 46% of directors using AI for board work rely on consumer tools like ChatGPT or Gemini.6 These tools were not designed for handling market-sensitive documents. They were designed for general-purpose assistance — and their terms of service reflect that.
Meanwhile, 69% of organisations cite AI-powered data leaks as their top security concern, yet nearly half (47%) have no AI-specific security controls in place.4 The gap between the risk and the response is wide.
GDPR and Data Protection
For UK-based directors, data protection law adds another dimension. The UK GDPR requires that personal data is processed lawfully, with appropriate technical and organisational measures to protect it. When you upload board papers containing personal data — employee names, executive compensation details, customer information — to a cloud AI service, you are initiating a data processing activity that engages these obligations.
The UK Information Commissioner's Office (ICO) launched its AI and Biometrics Strategy in June 2025, signalling increased regulatory focus on how AI tools handle personal data.7 The Data Use and Access Act (DUAA), which received Royal Assent in June 2025, introduces targeted updates to the UK GDPR that are expected to come into force throughout the first half of 2026.8
Cloud AI providers typically process data under their own terms of service. Depending on the provider, your board papers may be processed on servers located outside the UK — in the United States, for example — subjecting them to different legal regimes. The US CLOUD Act, for instance, allows US authorities to compel disclosure of data held by US-based companies, regardless of where the data is physically stored.9
With local AI, these questions do not arise. If your documents never leave your device, there is no cross-border data transfer, no third-party processor to assess, and no reliance on another organisation's data protection practices.
When Cloud AI Is Appropriate
It would be misleading to suggest that cloud AI has no place in professional life. For many tasks, it is the right choice:
- General research and drafting where no confidential information is involved
- Public information analysis such as reviewing published reports or market data
- Administrative tasks like scheduling, email drafting, or document formatting
- Learning and exploration where you are working with hypothetical scenarios
Cloud AI excels at these tasks. The models are powerful, continuously updated, and increasingly capable. The key is knowing which documents are appropriate to share with an external service — and which are not.
When Local AI Is Essential
For any document where confidentiality is a governance requirement, local AI is not merely preferable — it is the only architecture that eliminates the risk of external exposure:
- Board papers containing market-sensitive information
- Legal advice where privilege must be preserved
- M&A documentation with named targets and valuations
- Executive remuneration and succession planning materials
- Risk and audit reports that reveal unresolved vulnerabilities
- Any document you would not send as an email attachment to an unknown recipient
The last test is perhaps the most practical. If you would not email the document to someone outside your organisation, you should not upload it to a cloud AI service that processes it on external servers.
The Shadow AI Problem
There is a further dimension that boards should consider. Gartner projects that 40% of enterprises will suffer a data breach attributable to "shadow AI" by 2030 — not from hacking or phishing, but from employees voluntarily submitting sensitive data to unauthorised AI tools.10 Research from IBM found that shadow AI breaches cost organisations an average of $670,000 more than traditional incidents and took longer to detect, averaging 247 days.10
For individual directors, the shadow AI risk is particularly acute. Unlike company employees, non-executive directors (NEDs) typically use their own devices and accounts. There is no IT department monitoring which tools you use to prepare for board meetings. The responsibility for protecting confidential information rests with you personally.
A local AI tool that processes documents entirely on your device eliminates the shadow AI risk by design. There is no external service to monitor, no account to audit, and no data flow to track — because the data never leaves your control.
The Confidential Computing Middle Ground
It is worth noting that the technology landscape is evolving. Major cloud providers are investing in "confidential computing" — hardware-based security that encrypts data even during processing, using trusted execution environments (TEEs). Gartner predicts that by 2029, more than 75% of processing operations on untrusted infrastructure will be secured by confidential computing.11
This is a promising development, but it is still early days. For board directors making decisions today, confidential computing in cloud environments is not yet widely available for consumer AI tools, and verifying that your specific data is protected by these measures requires technical expertise that most directors do not have — and should not need to have.
Local processing offers a simpler assurance: your documents do not leave your device. There is nothing to verify because there is no external processing to secure.
What to Look For in a Local AI Tool
If you are considering local AI for board preparation, these are the questions to ask:
Does it process documents entirely on my device? The answer should be unambiguous. "Partially local" or "data is encrypted before sending" still means your documents leave your device.
Does it require an internet connection to analyse documents? If it does, data may be transmitted externally. True local AI works offline; one practical way to verify this is sketched after this list.
Can I use it across multiple boards? Portfolio NEDs sit on two to four boards simultaneously. A tool that works across all of them — without mixing contexts or requiring separate company approvals — is significantly more practical.
Is it designed for the documents I work with? Board papers are complex, multi-section documents with financial tables, narrative reports, and cross-references. The AI needs to handle this structure, not just extract text.
What happens to my data after processing? With local AI, your data should remain on your device, under your control, with no external logging or telemetry.
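On the offline question in particular, there is a rough check that any technically minded colleague can run for you: while the tool is analysing a document, list the internet connections its process has open. A genuinely local tool should show none, or only loopback traffic that never leaves the machine. Below is a minimal sketch using the Python psutil library; the process name is a placeholder for whatever tool you are evaluating, not any real product's process name.

```python
# Rough sanity check of a "works offline" claim: list the open internet
# connections belonging to the tool's process while it analyses a document.
# A genuinely local tool should show no external remote addresses.
import psutil

TOOL_NAME = "your-ai-tool"  # placeholder: substitute the tool's actual process name

# Collect the process IDs whose name matches the tool under test.
pids = {p.pid for p in psutil.process_iter(["name"])
        if TOOL_NAME in (p.info["name"] or "").lower()}

for conn in psutil.net_connections(kind="inet"):
    if conn.pid in pids and conn.raddr:  # raddr is empty for listening sockets
        host = conn.raddr.ip
        loopback = host.startswith("127.") or host == "::1"
        verdict = "loopback, stays on this machine" if loopback else "EXTERNAL"
        print(f"pid {conn.pid} -> {host}:{conn.raddr.port} [{verdict}]")
```

On macOS and some Linux configurations this check needs elevated privileges to see another process's connections. A simpler, non-technical version of the same test: disconnect from Wi-Fi and confirm the tool still analyses your documents.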
The Governance Dimension
Provision 29 of the UK Corporate Governance Code, effective for financial years beginning on or after 1 January 2026, requires boards to declare the effectiveness of their material controls.12 Among those controls are the measures protecting confidential information.
A director who uploads board papers to a consumer AI tool is creating a data flow that sits outside the company's information security framework. If that data is exposed — through a breach, a training data leak, or a discoverable chat log — the director's personal liability is engaged. As Skadden's partners noted in their guidance for the Harvard Law School Forum on Corporate Governance: "AI chats may be discoverable by regulators or litigation adversaries."13
Local AI eliminates this risk vector entirely. Documents that never leave your device cannot be discovered on someone else's servers.
A Practical Framework
For board directors navigating this landscape, here is a simple decision framework:
Step 1: Classify the document. Is it confidential, market-sensitive, legally privileged, or commercially sensitive? If yes, proceed to step 2.
Step 2: Consider the processing architecture. Does the AI tool process documents on external servers, or entirely on your device?
Step 3: Apply the email test. Would you send this document as an attachment to an unknown recipient? If the answer is no, do not upload it to a cloud AI service.
Step 4: Choose accordingly. Use cloud AI for general tasks with non-sensitive materials. Use local AI for anything that requires confidentiality.
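For the technically inclined, the four steps condense into a few lines of logic. The sketch below is illustrative only (the function name and categories are invented for the example), but it captures the essential rule: confidential material plus external processing should always resolve to "do not upload".

```python
# The four-step framework expressed as a single decision function.
# Illustrative only: names and return strings are invented for this sketch.
def choose_ai_tool(confidential: bool,
                   processes_on_device: bool,
                   would_email_to_stranger: bool) -> str:
    """Recommend how to handle one document with one AI tool."""
    # Step 1: classification. Non-sensitive documents can go to cloud AI.
    if not confidential:
        return "Cloud AI is acceptable for this document."
    # Step 2: architecture. Confidential documents are fine on-device.
    if processes_on_device:
        return "Local AI: proceed, the document never leaves your device."
    # Step 3: the email test, for confidential material headed to external servers.
    if not would_email_to_stranger:
        return "Do not upload this document to a cloud AI service."
    # Step 4: if it somehow passes the email test, re-examine the classification.
    return "Re-check the classification before sharing externally."

# Example: pre-market results offered to a cloud-only tool fail the email test.
print(choose_ai_tool(confidential=True,
                     processes_on_device=False,
                     would_email_to_stranger=False))
```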
This is not about being anti-technology. It is about applying the same governance rigour to your AI tools that you apply to every other aspect of your board responsibilities.
Notes
meetinginsight.ai processes your board papers entirely on your device. Nothing uploaded. Nothing shared. Nothing stored elsewhere. Download a free 30-day trial at meetinginsight.ai/download.
Footnotes
1. Wald AI, "ChatGPT Data Leaks and Security Incidents (2023–2026)," February 2026. https://wald.ai/blog/chatgpt-data-leaks-and-security-incidents-20232024-a-comprehensive-overview
2. Ismail Kovvuru, "ChatGPT Privacy Leak 2025: Deep Dive, Real-World Impact, and Industry Lessons," Medium, 2025.
3. Security Boulevard, "Microsoft Patches Security Flaw That Exposed Confidential Emails to AI," February 2026. https://securityboulevard.com/2026/02/microsoft-patches-security-flaw-that-exposed-confidential-emails-to-ai/
4. Metomic, "Is ChatGPT Safe for Business in 2026?" Based on Q4 2025 research. https://www.metomic.io/resource-centre/is-chatgpt-a-security-risk-to-your-business
5. LayerX, "Enterprise AI & SaaS Data Security Report 2025," as reported in eSecurity Planet, "77% of Employees Leak Data via ChatGPT." https://www.esecurityplanet.com/news/shadow-ai-chatgpt-dlp/
6. Diligent Institute / Corporate Board Member, "As Directors Embrace GenAI Use, Robust Governance Must Follow." https://www.diligent.com/resources/blog/as-directors-embrace-genai
7. ICO, "Guidance on AI and Data Protection," 2025. https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/
8. DPO Centre, "Data Protection & AI Governance 2025–2026." https://www.dpocentre.com/data-protection-ai-governance-2025-2026/
9. Regolo / PR Newswire, "3,332 Data Breaches in the United States in 2025," citing CLOUD Act implications. https://www.prnewswire.com/news-releases/3-332-data-breaches-in-the-united-states-in-2025-regolo-powered-by-seeweb-offers-european-infrastructure-to-help-avoid-the-cloud-act-and-support-ai-act-compliance-302699464.html
10. Shadow AI breach statistics cited in Cloud Radix, "Shadow AI Is Your Biggest Data Risk in 2026," referencing Gartner projections and the IBM Cost of a Data Breach Report. https://cloudradix.com/blog/shadow-ai-data-risk/
11. Tenable, "2026 Cloud Security and AI Security Risk Report," citing Gartner research on confidential computing. https://www.tenable.com/blog/cloud-ai-research-report-2026-governance-vs-innovation
12. FRC, UK Corporate Governance Code 2024, Provision 29. https://www.frc.org.uk/library/standards-codes-policy/corporate-governance/uk-corporate-governance-code/
13. Skadden, Arps, Slate, Meagher & Flom LLP, "Do's and Don'ts of Using AI: A Director's Guide," Harvard Law School Forum on Corporate Governance, September 2025. https://corpgov.law.harvard.edu/2025/09/14/dos-and-donts-of-using-ai-a-directors-guide/