Proof / Reliability

Retry patterns. Idempotency. Observability.

External APIs fail. Networks flake. inflooens is built to handle failure gracefully and recover automatically.

Reliability Patterns

Built for failure

Not because we expect to fail, but because we expect external systems to.

Exponential Backoff

All external API calls implement exponential backoff with jitter. Failed calls retry automatically with increasing delays to avoid a thundering herd.

Initial delay: 1 second
Max delay: 60 seconds
Max retries: 5
Jitter: ±20%
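A minimal sketch of this policy using the parameters above (the function name and the wrapped call are illustrative, not inflooens internals):

import random
import time

# Illustrative exponential backoff with jitter, mirroring the parameters above.
INITIAL_DELAY = 1.0   # seconds
MAX_DELAY = 60.0      # seconds
MAX_RETRIES = 5
JITTER = 0.20         # ±20%

def retry_with_backoff(call, *args, **kwargs):
    """Retry a flaky external call with exponential backoff and jitter."""
    for attempt in range(MAX_RETRIES + 1):
        try:
            return call(*args, **kwargs)
        except Exception:
            if attempt == MAX_RETRIES:
                raise  # retries exhausted; surface the failure
            # Double the delay each attempt, capped at MAX_DELAY.
            delay = min(INITIAL_DELAY * (2 ** attempt), MAX_DELAY)
            # Spread retries out by ±20% so clients don't retry in lockstep.
            delay *= 1 + random.uniform(-JITTER, JITTER)
            time.sleep(delay)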

Idempotent Operations

All write operations are idempotent. Retries don't create duplicate records. Same request, same result.

Unique request IDs
Upsert over insert
Conditional updates
Duplicate detection
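A conceptual sketch of the pattern, with an in-memory store standing in for the database and hypothetical names:

# Idempotent write: the unique request ID is the key, so replaying the same
# request upserts the same record instead of inserting a duplicate.
store = {}

def upsert_loan_update(request_id: str, loan_id: str, fields: dict) -> dict:
    """Apply an update once per request ID; retries return the same result."""
    existing = store.get(request_id)
    if existing is not None:
        return existing           # duplicate detected: same request, same result
    record = {"loan_id": loan_id, **fields}
    store[request_id] = record    # upsert keyed by the unique request ID
    return record

# A retry with the same request ID does not create a second record.
first = upsert_loan_update("req-123", "loan-42", {"status": "locked"})
retry = upsert_loan_update("req-123", "loan-42", {"status": "locked"})
assert first == retry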

Circuit Breaker

External integrations implement the circuit breaker pattern. Failing services are temporarily bypassed to prevent cascading failures.

Failure threshold: 5
Recovery timeout: 30s
Half-open testing
Graceful degradation
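A minimal circuit breaker sketch using the thresholds above (state handling is simplified and the names are illustrative):

import time

FAILURE_THRESHOLD = 5
RECOVERY_TIMEOUT = 30.0  # seconds

class CircuitBreaker:
    def __init__(self):
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.time() - self.opened_at < RECOVERY_TIMEOUT:
                # Circuit is open: bypass the failing service and degrade gracefully.
                raise RuntimeError("circuit open")
            # Recovery timeout elapsed: half-open, let this one call test the service.
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= FAILURE_THRESHOLD:
                self.opened_at = time.time()  # trip the breaker
            raise
        self.failures = 0
        self.opened_at = None  # a success closes the circuit again
        return result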

Timeout Management

All callouts have explicit timeouts. No hanging requests. Fast failure enables quick recovery.

Encompass: 30s
Credit providers: 45s
AI backend: 60s
Default: 10s
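The pattern, sketched with the values from the list above (the helper name and the use of Python's requests library are assumptions for the example):

import requests

# Every callout carries an explicit timeout so a slow service fails fast.
TIMEOUTS = {
    "encompass": 30,
    "credit": 45,
    "ai_backend": 60,
}
DEFAULT_TIMEOUT = 10

def callout(service: str, url: str, payload: dict) -> requests.Response:
    """Make an outbound call that fails fast instead of hanging."""
    timeout = TIMEOUTS.get(service, DEFAULT_TIMEOUT)
    return requests.post(url, json=payload, timeout=timeout)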
Queue Architecture

Event-driven processing

Salesforce Platform Events power our async processing architecture.

Encompass Sync Queue

Handles bidirectional sync between Salesforce and Encompass

Platform Events
Ordered processing
Retry on failure
Dead letter queue
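A conceptual sketch of these queue semantics, written in Python rather than Apex (the three-attempt cap is an assumption for illustration):

from collections import deque

# Events are processed in publish order, failed events are re-enqueued for a
# later retry, and events that exhaust their retries land in a dead letter
# queue for inspection.
MAX_ATTEMPTS = 3
queue = deque()        # pending sync events, in publish order
dead_letters = []      # events that could not be processed

def drain(process):
    while queue:
        event = queue.popleft()
        try:
            process(event)                     # apply the sync
        except Exception:
            event["attempts"] = event.get("attempts", 0) + 1
            if event["attempts"] >= MAX_ATTEMPTS:
                dead_letters.append(event)     # park it for manual review
            else:
                queue.append(event)            # re-enqueue for a later retry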

Credit Pull Queue

Manages credit pull requests to multiple providers

Provider failover
Rate limiting
Priority ordering
Status tracking
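A sketch of provider failover (provider names and the pull functions are placeholders):

def pull_credit(request: dict, providers: dict, retry_queue: list):
    """Try providers in priority order; queue the request if all of them fail."""
    # The dict's insertion order encodes provider priority.
    for name, pull in providers.items():
        try:
            return pull(request)          # first healthy provider wins
        except Exception:
            continue                      # fail over to the next provider
    retry_queue.append(request)           # all providers down: queue for retry
    return None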

Message Queue

Processes SMS, email, and notification delivery

Delivery receipts
Retry logic
Template processing
Audit logging

AI Analysis Queue

Queues loan analysis requests to Luna backend

Async processing
Result caching
Load balancing
Priority lanes
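A rough sketch of priority lanes plus result caching (names and the single-process queue are assumptions for illustration):

import heapq

# Lower priority numbers are served first; a tie-breaking counter keeps
# ordering stable within a lane. Identical analyses are served from a cache.
_queue = []
_counter = 0
_cache = {}

def submit(loan_id: str, priority: int = 5):
    global _counter
    heapq.heappush(_queue, (priority, _counter, loan_id))
    _counter += 1

def next_request():
    return heapq.heappop(_queue)[2] if _queue else None

def analyze(loan_id: str, run_analysis):
    if loan_id in _cache:
        return _cache[loan_id]            # cached result: no redundant analysis
    result = run_analysis(loan_id)
    _cache[loan_id] = result
    return result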
Observability

You can't fix what you can't see

inflooens provides complete visibility into system health, queue status, integration performance, and error patterns—all within Salesforce.

Real-Time Dashboards

Salesforce dashboards showing sync status, queue depth, error rates, and processing times.

Platform Event Monitoring

Monitor event publishing and consumption. Detect backlogs before they become problems.

Error Alerting

Automated alerts on error rate spikes, integration failures, and queue backlogs.

Audit Logs

Complete audit trail of all operations for compliance and debugging.

Sample Dashboard Metrics
Encompass Sync Success Rate: 99.7%
Average Sync Latency: 1.2s
Credit Pull Queue Depth: 3
Luna Response Time (p95): 2.8s
Error Rate (24h): 0.02%
Salesforce Limits

Governor-limit safe by design

Salesforce has limits. We architect around them.

Governor Limit: Our Approach
SOQL Queries: Bulk patterns, query optimization, selective queries
DML Statements: Batch processing, collections, upsert operations
CPU Time: Async processing, efficient algorithms, caching
Heap Size: Streaming, lazy loading, memory-conscious design
Callout Time: Async callouts, timeout management, circuit breaker
API Limits: Composite API, batching, off-peak scheduling
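As a conceptual illustration of the bulk-over-row pattern (sketched in Python rather than Apex; the 200-record batch size is illustrative):

BATCH_SIZE = 200

def process_in_batches(records, handle_batch):
    """Operate on collections in fixed-size batches instead of record by record."""
    for start in range(0, len(records), BATCH_SIZE):
        batch = records[start:start + BATCH_SIZE]
        handle_batch(batch)   # e.g., one bulk upsert instead of per-record DML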
Scalability

From 10 loans to 10,000

Architecture that scales with your business.

Horizontal Scaling

AWS Lambda scales automatically. More loans, more concurrent functions. No capacity planning required.

Async Processing

Heavy operations run asynchronously. User interface stays responsive. Background jobs handle the load.

Caching Strategy

Luna results cached for quick re-access. Guideline searches cached. Less redundant processing.
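A minimal caching sketch (the one-hour TTL is an assumption for illustration, not a documented value):

import time

TTL_SECONDS = 3600
_cache = {}

def get_cached(key: str, compute):
    """Return a cached value if it is still fresh, otherwise recompute it."""
    entry = _cache.get(key)
    if entry and time.time() - entry[0] < TTL_SECONDS:
        return entry[1]                   # fresh hit: skip redundant processing
    value = compute()
    _cache[key] = (time.time(), value)
    return value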

Failure Handling

What happens when things go wrong

Planned responses for common failure scenarios.

Encompass Unavailable

If Encompass is down, users can continue working in Salesforce. Changes queue for sync when Encompass recovers. No data loss.

Recovery: Automatic retry with exponential backoff

Credit Provider Down

Multi-provider architecture means automatic failover. If Experian is down, try Equifax. If all providers are down, the request queues for retry.

Recovery: Provider failover, then queued retry

AI Backend Timeout

If Luna's AI backend times out, the user sees a graceful error with the option to retry. Basic loan data stays visible. Analysis can be re-requested.

Recovery: User-initiated retry, cached fallback

Salesforce Platform Issue

inflooens relies on Salesforce availability. We monitor Salesforce status and inherit their 99.9%+ uptime SLA.

Recovery: Follow Salesforce status page

Questions about reliability?

Our technical team can discuss SLAs, disaster recovery, and reliability architecture in detail.

Talk to Our Technical Team