Responsible AI is the only AI worth building.
Luna is powerful. That power comes with responsibility. Here's how we govern Luna to ensure transparency, privacy, and appropriate human oversight.
How we govern AI
Four principles that guide every AI decision in inflooens.
Transparency
Luna explains her reasoning. Users see what data she analyzed and why she reached her conclusions. No black boxes.
Human Oversight
Luna assists, humans decide. She surfaces insights and flags issues, but all actions require human approval.
Data Privacy
Customer data is processed in memory, never retained in the AI layer. No PII is logged. No data is used for model training. Your data stays yours.
Fail-Safe Design
When Luna is uncertain, she says so. She won't guess at guideline answers or hide confidence levels.
What Luna sees, and what she doesn't keep
Transparency about how customer data flows through Luna; a short sketch after the table shows the processed-not-stored pattern in code.
| Data Type | How It's Used | Where It's Stored |
|---|---|---|
| Loan Data | Analyzed in memory to detect anomalies and predict approval likelihood | Results cached in Salesforce; raw data never persisted in the AI layer |
| Credit Data | Parsed to identify credit factors and risk indicators | Never logged; processed only for immediate analysis |
| Guidelines | Embedded and indexed for semantic search | Agency guidelines (public documents) stored in a vector database |
| Conversations | Maintained within the session to support follow-up questions | Conversation history stored in Salesforce, not in the AI layer |
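For readers who want the pattern made concrete, here's a minimal TypeScript sketch of ephemeral processing. Every name in it (`LoanRecord`, `analyzeLoan`, `cacheToSalesforce`) is illustrative, not Luna's actual code; the point is that raw data exists only in function scope, and only derived results are persisted.

```typescript
// Illustrative types only; not Luna's actual schema.
interface LoanRecord {
  loanId: string;
  borrowerName: string;   // PII: never logged, never persisted by the AI layer
  loanAmount: number;
  rateLockExpiry: string; // ISO date
}

interface AnalysisResult {
  loanId: string;
  anomalies: string[];        // human-readable explanations, no raw PII
  approvalLikelihood: number; // 0..1, from a model in the real system
}

// Raw loan data exists only inside this function's scope;
// once it returns, only the derived result survives.
function analyzeLoan(loan: LoanRecord): AnalysisResult {
  const anomalies: string[] = [];
  const msPerDay = 86_400_000;
  const daysToExpiry =
    (Date.parse(loan.rateLockExpiry) - Date.now()) / msPerDay;
  if (daysToExpiry < 7) {
    anomalies.push(`Rate lock expires in ${Math.ceil(daysToExpiry)} days`);
  }
  return {
    loanId: loan.loanId,
    anomalies,
    approvalLikelihood: 0.82, // placeholder for a real model's output
  };
}

// Only the derived result is written back to the system of record;
// the raw LoanRecord is never persisted by the AI layer.
async function processLoan(loan: LoanRecord): Promise<void> {
  const result = analyzeLoan(loan);
  await cacheToSalesforce(result); // hypothetical persistence call
}

// Stub standing in for the real Salesforce integration.
declare function cacheToSalesforce(result: AnalysisResult): Promise<void>;
```

The design choice is structural: because the AI layer never writes the raw record anywhere, there's no retention policy to get wrong.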
Your data is never used for:
- Model training or fine-tuning
- Sharing with third parties
- Building aggregate datasets
- Marketing or profiling
- Any purpose beyond immediate analysis
AI assists. Humans decide.
Mechanisms that ensure humans stay in control.
Citation Requirements
Every guideline answer includes citations to source documents. Users can verify Luna's answers against the original text.
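As an illustration, a cited answer might be shaped like the sketch below. The field names and example values are hypothetical, not Luna's actual response format:

```typescript
// Hypothetical shape of a cited guideline answer.
interface Citation {
  document: string; // the source publication
  section: string;  // where in the document the answer is grounded
  excerpt: string;  // the original text, so users can verify directly
}

interface GuidelineAnswer {
  answer: string;
  citations: Citation[]; // every answer ships with at least one
}

// Example values are invented for illustration.
const example: GuidelineAnswer = {
  answer: "Yes, gift funds may be used for the down payment when documented.",
  citations: [
    {
      document: "Agency Selling Guide",
      section: "Gift Funds",
      excerpt: "Gift funds may be used for all or part of the down payment...",
    },
  ],
};
```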
Confidence Indicators
Luna's approval predictions include confidence scores. Users see not just the prediction, but how confident Luna is.
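In the same illustrative style (hypothetical field names again), a prediction never travels without its confidence level and the factors behind it:

```typescript
// Hypothetical prediction payload: the score is never shown
// without an explicit confidence level and its driving factors.
interface ApprovalPrediction {
  approvalLikelihood: number;            // e.g. 0.82
  confidence: "low" | "medium" | "high";
  topFactors: string[];                  // what drove the prediction
}

const prediction: ApprovalPrediction = {
  approvalLikelihood: 0.82,
  confidence: "medium",
  topFactors: ["DTI near guideline ceiling", "strong credit history"],
};
```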
Anomaly Explanations
When Luna flags an anomaly, she explains why. "Rate lock expires in 5 days" is actionable; "problem detected" is not.
Action Confirmation
Luna suggests actions but doesn't execute them automatically. Creating a task, sending a message: every action requires user confirmation.
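The confirmation gate is easy to express in code. This sketch (with invented names like `requestUserConfirmation`) shows the invariant: no execution path skips the human.

```typescript
// Hypothetical suggestion flow: Luna proposes, a human disposes.
interface SuggestedAction {
  kind: "create_task" | "send_message";
  description: string; // shown to the user before anything runs
}

async function handleSuggestion(action: SuggestedAction): Promise<void> {
  // The only path to executeAction() runs through explicit user approval.
  const approved = await requestUserConfirmation(action.description);
  if (!approved) return; // declined suggestions are simply dropped
  await executeAction(action);
}

// Stubs standing in for real UI and workflow integrations.
declare function requestUserConfirmation(prompt: string): Promise<boolean>;
declare function executeAction(action: SuggestedAction): Promise<void>;
```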
What Luna can't do
Being honest about AI limitations is part of responsible governance.
Not a licensed advisor
Luna provides information and analysis, not financial or legal advice. Human judgment required for lending decisions.
Guidelines may change
Luna's guideline knowledge is updated periodically. Always verify critical requirements against current agency publications.
Analysis, not decisions
Luna's approval predictions are estimates based on historical patterns, not guarantees of underwriter decisions.
Context limitations
Luna analyzes the data in Salesforce. If information was never captured in the system, she can't see it.
Built on responsible AI infrastructure
Luna uses Claude (Anthropic) for analysis and reasoning, and OpenAI for embeddings. Both providers have strong commitments to AI safety and responsible development.
AI Provider Commitments
AI governance evolves
Our commitment to responsible AI is ongoing, not a one-time checkbox.
Regular Audits
Periodic review of Luna's outputs for accuracy, bias, and alignment with stated capabilities.
User Feedback
Mechanisms for users to flag incorrect or concerning outputs. Feedback drives improvements.
Policy Updates
As AI governance best practices evolve, so do our policies, and we communicate changes transparently.
Questions about AI governance?
We're happy to discuss our AI practices, data handling, and governance approach with your compliance or security team.
Discuss AI Governance