
Osome AI Governance & Data Responsibility Policy

Regions Covered: Singapore, Hong Kong, the United Kingdom (UK), and the United Arab Emirates (UAE)

1. Purpose and Scope

At Osome, we use Artificial Intelligence (AI) to help founders, finance teams, and enterprises work faster and more accurately. This policy sets out how we design, deploy, and monitor AI safely across all our markets.

It explains:

  1. How Osome uses AI in our products and operations
  2. How customer data is protected
  3. What controls and safeguards Osome applies
  4. How we align with the regulatory frameworks across Singapore, Hong Kong, the UK, and the UAE

This framework applies to:

  1. All AI-enabled features in Osome’s products
  2. All in-house models and third-party AI platforms
  3. All systems that process customer data using AI
  4. All Osome employees, contractors, and business partners who use AI tools

2. Our Objectives

Osome’s AI governance aims to:

  1. Build trust through transparency and responsible use
  2. Protect personal, financial, and confidential data
  3. Maintain compliance across all operating jurisdictions
  4. Ensure AI supports humans without replacing human judgment
  5. Provide explainable and auditable AI decisions
  6. Give customers clarity on how their data is handled

AI at Osome is assistive, not autonomous. Humans always remain accountable for decisions that affect customers.

3. Governance Structure

Osome’s AI governance is led by a cross-functional group that reviews policies, ensures compliance, and oversees safe deployment.

| Governance Layer | Responsibilities | Review Cycle |
| --- | --- | --- |
| AI Working Group (CTO, Legal, Product Intelligence, Security) | Owns AI policy, approves updates, ensures alignment with regulations | Quarterly |
| Product Intelligence | Oversees data quality, model lifecycle, drift monitoring, HITL thresholds | Continuous |
| Engineering & Security | Implements encryption, access control, logs, and incident response | Continuous |
| Legal & Compliance (per region) | Ensures compliance with PDPA, GDPR, PDPO, UAE Charter; oversees audits | Bi-annually |
| Customer Success & Comms | Communicates customer assurance, handles AI-related FAQs, and maintains a transparency log | Ongoing |

4. Osome’s Core AI Governance Principles

Below are the eight principles that guide how we build and operate AI. Each principle includes what it means for customers and how Osome ensures it.

4.1 Data Quality & Lineage

High-quality, well-managed data is foundational to safe AI.

What this means:

  1. All data used for AI must be accurate, complete, and relevant.
  2. Data sources are documented from ingestion to output.
  3. Training datasets and features are versioned.

How Osome implements this:

  1. Data quality checks run during ingestion and processing (illustrated in the sketch after this list).
  2. Models are periodically monitored for drift, anomalies, and performance degradation, with automated alerts triggering human review where required.
  3. Errors are flagged and corrected through manual and automated processes.
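
As an illustration only (not Osome’s actual pipeline), the sketch below shows how an ingestion-time quality gate and a simple drift alert could fit together. The record fields, the currency whitelist, and the tolerance value are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Record:
    # Hypothetical fields chosen for illustration; real schemas differ.
    invoice_id: str
    amount: float | None
    currency: str | None

def passes_ingestion_checks(rec: Record) -> bool:
    """Accuracy/completeness gate run before a record reaches any model."""
    return (
        bool(rec.invoice_id)
        and rec.amount is not None
        and rec.amount >= 0
        and rec.currency in {"SGD", "HKD", "GBP", "AED"}
    )

def drift_alert(live_mean: float, baseline_mean: float, tol: float = 0.15) -> bool:
    """True when a live feature statistic moves beyond tolerance from the
    versioned training baseline; such alerts trigger human review."""
    return abs(live_mean - baseline_mean) > tol * max(abs(baseline_mean), 1e-9)
```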

4.2 Human-in-the-Loop (HITL) Oversight

AI never acts alone on high-impact customer decisions.

What this means:

  1. Humans supervise and approve decisions where mistakes could cause financial or compliance harm.
  2. “Low-confidence” or ambiguous outputs are automatically routed to human review.

How Osome implements this:

  1. For any AI-assisted output that could materially impact compliance, financial reporting, or statutory filings, human review is mandatory.
  2. All HITL interactions are logged for audit purposes (a minimal routing sketch follows this list).
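
A minimal sketch of such routing, assuming a hypothetical 0.90 confidence threshold and a JSON log line per decision; real thresholds and log schemas are set per use case.

```python
import json
import logging
import time

logger = logging.getLogger("hitl_audit")
CONFIDENCE_THRESHOLD = 0.90  # hypothetical cut-off, for illustration only

def route_output(confidence: float, high_impact: bool) -> str:
    """Route low-confidence or high-impact outputs to a human reviewer,
    logging every routing decision for audit."""
    needs_review = high_impact or confidence < CONFIDENCE_THRESHOLD
    decision = "human_review" if needs_review else "auto_release"
    logger.info(json.dumps({
        "ts": time.time(),
        "decision": decision,
        "confidence": confidence,
        "high_impact": high_impact,
    }))
    return decision

# A statutory-filing output always goes to a human, regardless of confidence:
assert route_output(confidence=0.99, high_impact=True) == "human_review"
```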

4.3 Security & Access Controls

Security is embedded across all AI systems.

What this means:

  1. Customer data is encrypted in transit and at rest.
  2. Only authorised roles can access AI models and datasets.

How Osome implements this:

  1. Data in transit is protected via HTTPS/TLS 1.2 (or higher); connections that cannot negotiate TLS 1.2 or higher fail closed rather than falling back to an insecure protocol (see the sketch after this list).
  2. Data at rest, including backups, is encrypted with centrally managed keys; keys are rotated at least annually.
  3. Access to AI models, datasets, and supporting systems follows least privilege with Role-Based Access Control (RBAC); Multi-Factor Authentication (MFA) is required for privileged access to production systems; access is reviewed quarterly and removed within 24 business hours when no longer needed.
  4. Vendor-level cache retention is disabled where possible.
  5. Cloud systems are configured for minimal retention.
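
To make the “no insecure fallback” rule concrete, here is a minimal sketch using Python’s standard ssl module: the client sets TLS 1.2 as a floor and fails closed if the server cannot meet it. This is an illustration, not Osome’s production configuration.

```python
import socket
import ssl

# Require TLS 1.2 or higher; handshakes below this floor raise
# ssl.SSLError instead of silently downgrading.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

def open_secure(host: str, port: int = 443) -> ssl.SSLSocket:
    """Open a connection that fails closed unless TLS 1.2+ is negotiated."""
    sock = socket.create_connection((host, port))
    return context.wrap_socket(sock, server_hostname=host)
```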

4.4 Explainability & Transparency

Customers deserve to understand how AI supports their experience.

What this means:

  1. AI decisions must be understandable in human language.
  2. Customers can request explanations of outcomes.

How Osome implements this:

  1. Documentation for each AI feature includes its purpose, boundaries, and oversight.
  2. A Transparency Log summarises changes, incidents, and updates.

4.5 Privacy & Compliance

We comply with local regulations in each of our markets.

What this means:

  1. Customer data is never used to train general-purpose models.
  2. Data processing follows consent principles.
  3. Customers have rights of access, deletion, and correction.

Osome aligns with:

  1. Singapore: PDPA, IMDA Model AI Governance Framework, AI Verify
  2. Hong Kong: PDPO, PCPD AI Guidance
  3. UK: GDPR (retained), ICO AI Guidelines
  4. UAE: UAE AI Charter, National Data Protection Law

How Osome implements this:

  1. Privacy Impact Assessments for new AI features
  2. Region-specific retention and residency
  3. Customer rights requests are handled in accordance with the applicable Privacy Policy for each market.

4.6 Monitoring & Auditability

All AI systems must be continuously assessed and auditable.

What this means:

  1. Models are monitored for drift, anomalies, and bias.
  2. Logs are maintained for accountability and regulatory review.

How Osome implements this:

  1. Automated alerts trigger human review.
  2. Audit logs are retained according to jurisdictional requirements.

4.7 Vendor Governance

Third-party AI tools must meet Osome’s standards.

What this means:

  1. Vendors cannot use customer data to train their models.
  2. Data residency and retention must meet regulatory obligations.

How Osome implements this:

  1. NotebookLM: No training on user content
  2. Gemini/Vertex: Project-level cache controls enabled
  3. Annual vendor risk reviews

4.8 Accountability & Training

Humans are responsible for all AI-assisted decisions.

How Osome implements this:

  1. Mandatory Responsible Use training for employees
  2. Fact-checking workflow for AI-generated outputs
  3. Escalation procedures for misuse
  4. Regular team refreshers on data privacy

5. Regional Regulatory Alignment

| Region | Key Frameworks | What This Means for Customers |
| --- | --- | --- |
| Singapore | PDPA, IMDA Model AI Governance Framework, MAS FEAT | Explicit consent, transparency, fairness, and bias testing |
| Hong Kong | PDPO, PCPD AI Guidelines | Purpose limitation, accuracy, and customer rights |
| United Kingdom | GDPR (UK), ICO AI Guidance | Explainability, DPIAs, human review rights |
| UAE | UAE AI Charter, National Data Protection Law | Ethical AI, clear communication, and residency controls |

6. How Osome Uses Data in AI

6.1 We never train AI models on your books

Your accounting data is used only to deliver services — never to train general-purpose models.

6.2 Data Flow Overview

Input → Processing → Human Review (if required) → Output → Secure Storage
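
Read as pseudocode, that flow might look like the sketch below. Every function is a hypothetical stand-in for an internal stage, not a real Osome API, and the 0.90 threshold is illustrative.

```python
def ingest(raw: bytes) -> dict:
    """Input: parse and validate the incoming document."""
    return {"text": raw.decode("utf-8", errors="replace")}

def process(record: dict) -> tuple[dict, float]:
    """Processing: AI-assisted extraction, returning a draft and a confidence."""
    return {"summary": record["text"][:80]}, 0.80

def human_review(draft: dict) -> dict:
    """Human Review: a reviewer confirms or corrects the draft."""
    return draft

def handle_document(raw: bytes, threshold: float = 0.90) -> dict:
    draft, confidence = process(ingest(raw))
    if confidence < threshold:      # Human Review (if required)
        draft = human_review(draft)
    return draft                    # Output; encrypted storage happens downstream
```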

6.3 Vendor Configurations

  1. Vendors are reassessed semi-annually; third-party data processing agreements are maintained for all processors in scope.
  2. Strict boundary controls are enforced.
  3. Customer content is never reused.

6.4 Retention & Deletion

  1. Data stored according to each region’s laws
  2. Customers can request deletion
  3. Logs retained for regulatory compliance

7. Human-in-the-Loop (HITL) Controls

We maintain human oversight over all material outcomes.

Examples:

  1. Document interpretations are reviewed by humans
  2. Low-confidence outputs are flagged for human validation
  3. Underlying knowledge and reference content are created solely by humans

8. Monitoring, Testing & Auditing

Osome performs:

  1. Regular model validation
  2. Drift detection
  3. Bias assessments
  4. Adversarial (red-team) testing
  5. Logging & audit trail retention

Cadence:

  1. Continuous: automated alerting for anomalies, access issues, and system errors; events forwarded to central monitoring.
  2. Monthly: model validation sampling and KPI review; backup success reporting and restore spot-checks for critical systems.
  3. Quarterly: bias assessments; access rights reviews for user, admin, and service accounts; review of audit logging coverage and gaps.
  4. Semi-annually: recovery exercises for critical systems, including restore verification.
  5. Annually: review of encryption key management and relevant policies; vendor risk posture summary for approved AI vendors.

9. Accountability Metrics (KPIs)

We track the following indicators; a minimal computation of the first is sketched after this list:

  1. % of AI outputs requiring human review
  2. Drift incidents and remediation speed
  3. Number of data-related incidents (target: zero)
  4. Transparency Log updates
  5. Completion rate of internal Responsible AI training
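
For concreteness, the first indicator could be computed as in the sketch below; the per-output event schema (a boolean human_review flag) is a hypothetical illustration.

```python
def human_review_rate(events: list[dict]) -> float:
    """Percentage of AI outputs that were routed to human review."""
    if not events:
        return 0.0
    reviewed = sum(1 for e in events if e.get("human_review"))
    return 100.0 * reviewed / len(events)

# Two of four outputs needed review -> 50.0
print(human_review_rate([
    {"human_review": True}, {"human_review": False},
    {"human_review": True}, {"human_review": False},
]))
```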

10. Customer Rights

Customers may:

  1. Request human review of any output
  2. Ask for an explanation of how AI assisted
  3. Access, correct, or delete their data
  4. Request region-specific privacy information

11. Transparency Log

We maintain a public Transparency Log on the Osome Trust Hub that includes:

  1. Model updates
  2. Policy changes
  3. Incident summaries
  4. Data usage clarifications

Cadence: semi-annual, with out-of-cycle updates published within 30 business days of any material model update, policy change, or confirmed AI-related incident.

12. Review and Versioning

This policy is:

  1. Reviewed quarterly by the AI Working Group
  2. Approved by Osome’s CTO and Legal Head
  3. Updated publicly on the Trust Hub with version notes

Customers will be notified of materially significant updates.
