AI Governance

Responsible AI is the only AI worth building.

Luna is powerful. That power comes with responsibility. Here's how we govern Luna's AI to ensure transparency, privacy, and appropriate human oversight.

Principles

How we govern AI

Four principles that guide every AI decision in inflooens.

Transparency

Luna explains her reasoning. Users see what data she analyzed and why she reached her conclusions. No black boxes.

Human Oversight

Luna assists; humans decide. She surfaces insights and flags issues, but all actions require human approval.

Data Privacy

Customer data is processed, not stored. No PII is logged. No data is used for model training. Your data stays yours.

Fail-Safe Design

When Luna is uncertain, she says so. She won't guess at guideline answers or hide confidence levels.

Data Handling

What Luna sees, and what she doesn't keep

Transparency about how customer data flows through Luna.

Loan Data
How used: Analyzed in memory to detect anomalies and predict approval.
Storage: Results cached in Salesforce; raw data is not persisted in the AI layer.

Credit Data
How used: Parsed to identify credit factors and risk indicators.
Storage: Never logged; processed only for immediate analysis.

Guidelines
How used: Embedded and indexed for semantic search.
Storage: Agency guidelines (public documents) are stored in a vector database.

Conversations
How used: Maintained as in-session context for follow-up questions.
Storage: Conversation history is stored in Salesforce, not in the AI layer.
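
To make "processed, not stored" concrete, here is a minimal Python sketch of the pattern: loan data is analyzed entirely in memory, only derived results are written back, and nothing in the flow logs raw fields. The function and field names (analyze_loan, AnomalyFinding, cache_results_in_salesforce) are illustrative assumptions, not the actual inflooens implementation.

    from dataclasses import dataclass

    @dataclass
    class AnomalyFinding:
        field: str        # which loan field was flagged
        explanation: str  # human-readable reason for the flag

    def analyze_loan(loan: dict) -> list[AnomalyFinding]:
        """Analyze loan data entirely in memory; nothing here writes raw
        loan fields to disk or to a log."""
        findings = []
        days_left = loan.get("days_to_rate_lock_expiry")
        if days_left is not None and days_left <= 5:
            findings.append(AnomalyFinding(
                field="rate_lock_expiration",
                explanation=f"Rate lock expires in {days_left} days",
            ))
        return findings

    def cache_results_in_salesforce(findings: list[AnomalyFinding]) -> None:
        # Placeholder: a real deployment would write derived results back to
        # the Salesforce record. It only ever receives findings, never raw data.
        ...

    def handle_loan(loan: dict) -> None:
        findings = analyze_loan(loan)          # raw data lives only in this scope
        cache_results_in_salesforce(findings)  # only derived results persist

The point of the shape is simply that raw loan data never crosses a persistence boundary; only the findings do.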

Your data is never used for:

  • Model training or fine-tuning
  • Sharing with third parties
  • Building aggregate datasets
  • Marketing or profiling
  • Any purpose beyond immediate analysis

Human Oversight

AI assists. Humans decide.

Mechanisms that ensure humans stay in control.

Citation Requirements

Every guideline answer includes citations to source documents. Users can verify Luna's answers against the original text.
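
As an illustration of what "citations travel with the answer" can look like in practice, here is a small sketch; the field names below are hypothetical, not inflooens' actual schema.

    from dataclasses import dataclass

    @dataclass
    class Citation:
        document: str  # e.g. "Agency Selling Guide"
        section: str   # e.g. "B3-3.1-01"
        excerpt: str   # the passage the answer is based on

    @dataclass
    class GuidelineAnswer:
        answer: str
        citations: list[Citation]  # an answer never ships without its sources

    def render(answer: GuidelineAnswer) -> str:
        """Show the answer together with its sources so a user can verify it."""
        sources = "; ".join(f"{c.document}, {c.section}" for c in answer.citations)
        return f"{answer.answer}\n\nSources: {sources}"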

Confidence Indicators

Luna's approval predictions include confidence scores. Users see not just the prediction, but how confident Luna is.
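
A minimal sketch of how a prediction and its confidence could be surfaced together; the threshold and names are assumptions for illustration, not production values.

    from dataclasses import dataclass

    @dataclass
    class ApprovalPrediction:
        likely_approved: bool
        confidence: float  # 0.0 to 1.0, always surfaced with the prediction

    def describe(p: ApprovalPrediction) -> str:
        # Fail-safe: below an (assumed) threshold, say so instead of presenting
        # a low-confidence guess as a confident answer.
        if p.confidence < 0.6:
            return "I'm not confident enough to predict this outcome yet."
        verdict = "likely to be approved" if p.likely_approved else "at risk of denial"
        return f"This loan is {verdict} (confidence: {p.confidence:.0%})."

The same pattern carries the Fail-Safe Design principle above: uncertainty is stated, not hidden.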

Anomaly Explanations

When Luna flags an anomaly, she explains why. "Rate lock expires in 5 days" is actionable; "problem detected" is not.

Action Confirmation

Luna suggests actions but doesn't execute them automatically. Creating a task, sending a message—all require user confirmation.
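
A sketch of the confirmation gate, assuming hypothetical helpers (request_user_confirmation, execute) in place of the real in-app flow.

    from dataclasses import dataclass

    @dataclass
    class SuggestedAction:
        kind: str         # e.g. "create_task" or "send_message"
        description: str  # what Luna proposes to do, in plain language

    def request_user_confirmation(prompt: str) -> bool:
        # Placeholder for the in-app confirmation step.
        return input(f"{prompt} Proceed? (y/n) ").strip().lower() == "y"

    def execute(action: SuggestedAction) -> None:
        # Placeholder: create the task or send the message in Salesforce.
        print(f"Executing: {action.description}")

    def apply_action(action: SuggestedAction) -> None:
        # Luna only proposes; nothing runs until a person approves it.
        if request_user_confirmation(action.description):
            execute(action)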

Limitations

What Luna can't do

Being honest about AI limitations is part of responsible governance.

Not a licensed advisor

Luna provides information and analysis, not financial or legal advice. Human judgment is required for lending decisions.

Guidelines may change

Luna's guideline knowledge is updated periodically. Always verify critical requirements against current agency publications.

Analysis, not decisions

Luna's approval predictions are estimates based on historical patterns, not guarantees of underwriter decisions.

Context limitations

Luna analyzes data in Salesforce. She doesn't have access to information not captured in the system.

Technology

Built on responsible AI infrastructure

Luna uses Claude (Anthropic) for analysis and reasoning, and OpenAI for embeddings. Both providers have strong commitments to AI safety and responsible development.

Claude (Anthropic)
Constitutional AI with built-in safety constraints
OpenAI Embeddings
Text embeddings for semantic guideline search
No Customer Data in Training
Neither provider uses your data to train their models
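
To show how these pieces fit together, here is a minimal retrieve-then-answer sketch in Python: the question is embedded with OpenAI, matching guideline passages come back from the vector database, and Claude composes a cited answer from those passages only. The model names, the vector_store interface, and the prompt wording are illustrative assumptions, not inflooens' production configuration.

    from openai import OpenAI
    import anthropic

    openai_client = OpenAI()        # reads OPENAI_API_KEY from the environment
    claude = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    def answer_guideline_question(question: str, vector_store) -> str:
        # 1. Embed the question text (no customer data is sent, only the question).
        embedding = openai_client.embeddings.create(
            model="text-embedding-3-small",  # assumed embedding model
            input=question,
        ).data[0].embedding

        # 2. Retrieve the most relevant guideline passages from the vector store.
        #    The vector_store interface here is hypothetical.
        passages = vector_store.search(embedding, top_k=5)

        # 3. Ask Claude to answer from the retrieved passages, with citations.
        context = "\n\n".join(p.text for p in passages)
        message = claude.messages.create(
            model="claude-sonnet-4-20250514",  # assumed model version
            max_tokens=1024,
            messages=[{
                "role": "user",
                "content": "Answer using only these guideline excerpts, citing "
                           "the section for each claim.\n\n"
                           f"{context}\n\nQuestion: {question}",
            }],
        )
        return message.content[0].text

Because the prompt is restricted to retrieved guideline text, the citation requirement described above follows naturally from the design.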

AI Provider Commitments

Anthropic (Claude)
Constitutional AI approach, extensive safety research, commitment to beneficial AI development
OpenAI
Enterprise data policies, no training on API data, SOC 2 compliant

Improvement

AI governance evolves

Our commitment to responsible AI is ongoing, not a one-time checkbox.

Regular Audits

Periodic review of Luna's outputs for accuracy, bias, and alignment with stated capabilities.

User Feedback

Mechanisms for users to flag incorrect or concerning outputs. Feedback drives improvements.

Policy Updates

As AI governance best practices evolve, so do our policies, and we're transparent about every change.

Questions about AI governance?

We're happy to discuss our AI practices, data handling, and governance approach with your compliance or security team.

Discuss AI Governance